$\lim_{a \to \infty}\prod_{i=0}^{a} x_i = \infty$ for $x_i > 1$? For $x_i > 1$, where $i$ is an index and each $x_i$ is a real number, is $\lim_{a \to \infty}\prod_{i=0}^{a} x_i = \infty$ always? How does one prove/disprove this?
It is relatively easy to find a counterexample showing that this is not true. Simply choose any strictly increasing bounded sequence $(p_n)$ with $p_0=1$. I.e., you have $p_n<p_{n+1}$ for each $n$ and, since the sequence is bounded, there exists a finite limit $\lim\limits_{n\to\infty} p_n=P$. Now put $$x_n = \frac{p_{n+1}}{p_n}$$ for $n\geq 0$. In this way you have $$\prod_{k=0}^n x_k = \frac{p_{n+1}}{p_0} = p_{n+1}$$ and $$\prod_{k=0}^\infty x_k = \lim\limits_{n\to\infty} \prod_{k=0}^n x_k = P.$$ Moreover, you have $x_k>1$ for each $k$, since $(p_n)$ is strictly increasing. It was already mentioned in another answer that there is a general criterion for convergence of infinite products of the form $\prod\limits_{k=0}^\infty (1+a_k)$ (where $a_k$ is a real sequence). You will probably find several posts about this if you browse (or search) the infinite-product tag a bit. For example: * *sufficiency and necessity of convergence of $\sum a_n$ wrt convergence of $\prod (1 + a_n)$ *$\prod (1+a_n)$ converges iff $ \sum a_n$ converges *Suppose $a_n>0$ for $n\in \mathbb{N}$. Prove that $\prod_{n=1}^\infty (1+a_n)$ converges if and only if $\sum_{n=1}^\infty a_n<\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1936927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is this a sufficient statistic for uniform distribution? I'm having trouble proving that given a uniform distribution $X_i \sim U(0,\theta)$ with $\theta$ unknown, the statistics $2\bar X$ and $\bar X$ are not sufficient; any ideas? Thanks for your help.
Here's a hint: $ \operatorname{E}(2\bar X) = \theta,$ so $2\bar X$ is an unbiased estimator of $\theta$, but if, for example, $(X_1,X_2,X_3) = (1,2,12)$ then the estimate of $\theta$ is actually smaller than the largest of the three observations. That shows there is more information about $\theta$ in the sample than there is in $\bar X$. Here's somewhat more than a hint: The conditional distribution of $X_1,X_2,X_3$ given that $\bar X = 1$ and $\theta = 2$ is supported in some subset of $[0,2]^3$, but the conditional distribution of $X_1,X_2,X_3$ given that $\bar X = 1$ and $\theta=20$ has the point $(0,0,3)$ in its support. (Note that $(0+0+3)/3 = \bar X = 1$.) That shows that $\bar X$ is not sufficient without helping you find any one-dimensional sufficient statistic. In fact, one exists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1937018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Radius-4 circle skewered on one line and touching another Find the equation of the circle with radius $4$ units, whose centre lies on the line $4x+13y=32$ and which touches the line $4x+3y+28=0$. My Approach: Radius $r=4$ units Let $P(h,k)$ be the centre of the circle. Then $4h+13k=32$. Please help me to move further.
The equation of a circle with center $(a,b)$ and radius $4$ is $$ \left( {x - a} \right)^{\,2} + \left( {y - b} \right)^{\,2} = r^{\,2} = 16 $$ Now you must have $$ \left\{ \begin{gathered} 4a + 13b = 32\quad \text{(center}\,\text{on}\,\text{the}\,\text{line}\,(\text{a))} \hfill \\ 4x + 3y + 28 = 0\quad \text{(point}\,\text{on}\,\text{the}\,\text{line}\;\text{(b))} \hfill \\ \left( {x - a} \right)^{\,2} + \left( {y - b} \right)^{\,2} = 16\quad \text{(point}\,\text{on}\,\text{the}\,\text{circle)} \hfill \\ \end{gathered} \right. $$ Now, from the first express e.g. $a$ as a function of $b$, and from the second express $y$ as a function of $x$, and substitute them into the third. You will get $$\left( {x - 8 + \frac{{13}} {4}b} \right)^{\,2} + \left( { - \frac{{28}} {3} - \frac{4} {3}x - b} \right)^{\,2} = 16 $$ Expand this as a quadratic equation in $x$ and require the two solutions to coincide, i.e. find $b$ such that the discriminant is $0$. You will find two values, corresponding to whether the circle lies on one side or the other of the crossing point of the two lines. Use each value of $b$ to determine the corresponding $x$, $y$, $a$.
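Alternatively (not the route taken above), one can use the fact that the distance from the centre $(h,k)$ to the tangent line must equal the radius: $|4h+3k+28|/5 = 4$. A small sketch of that approach; the variable names are mine:

```python
# Tangency condition |4h + 3k + 28| / 5 = 4 combined with the centre
# constraint 4h + 13k = 32.  |4h + 3k + 28| = 20 gives two linear cases.
centers = []
for rhs in (20, -20):            # 4h + 3k + 28 = ±20
    # Solve 4h + 13k = 32 and 4h + 3k = rhs - 28 by elimination:
    k = (32 - (rhs - 28)) / 10   # subtracting the equations gives 10k
    h = (32 - 13 * k) / 4
    centers.append((h, k))

print(centers)   # [(-5.0, 4.0), (-18.0, 8.0)]
```

So the two circles are $(x+5)^2+(y-4)^2=16$ and $(x+18)^2+(y-8)^2=16$, which also serves as a check on the discriminant method above.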
{ "language": "en", "url": "https://math.stackexchange.com/questions/1937148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
What is the value of $x$ in $222^x−111^x∗7=111^x$? Can anyone help me on this? It is for an 8th grader. What is the value of $x$ in $222^x-111^x*7=111^x$? I know the equation can be rearranged as $222^x=111^x*7-111^x=6*111^x$. Then what is next?
Actually you got the first step backwards. $222^x - 7 \cdot 111^x = 111^x$ $222^x = 8 \cdot 111^x$ $(2 \cdot 111)^x = 8 \cdot 111^x$ $2^x \cdot 111^x = 8 \cdot 111^x$ $2^x = 8$ $x = 3$
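The result is easy to confirm numerically (a throwaway check, not part of the argument):

```python
# Verify that x = 3 satisfies the original equation 222^x - 7*111^x = 111^x
# and the simplified form 222^x = 8 * 111^x.
x = 3
print(222 ** x - 7 * 111 ** x == 111 ** x)   # True
print(222 ** x == 8 * 111 ** x)              # True
```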
{ "language": "en", "url": "https://math.stackexchange.com/questions/1937258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Manifold with Ricci curvature not bounded below Could somebody please show an example (or give a reference to one) of a connected complete Riemannian manifold whose Ricci curvature is not bounded from below? I guess there are standard examples of this, somewhere...
Consider $g=dr^2+ f(r)^2 d\theta^2$ on $\mathbb{R}^2$. For a metric of this form the Gaussian curvature is $K=-f''(r)/f(r)$. So if we have suitable sequences $x_n,\ y_n$ and $f$ s.t. $$ x_n<y_n<x_{n+1},\ f(x_n)>f(y_n)<f(x_{n+1}) $$ with the dips at the $y_n$ made sharper and sharper (so that $f''/f$ is arbitrarily large there), then the Gaussian curvatures around $r=y_{n}$ go to $-\infty$. Since in dimension $2$ the Ricci curvature is just the Gaussian curvature times the metric, this gives a complete surface whose Ricci curvature is not bounded from below.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1937356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In Linear Algebra, what is a vector? I understand that a vector space is a collection of vectors that can be added and scalar multiplied and satisfies the 8 axioms; however, I do not know what a vector is. I know that in physics a vector is a geometric object that has a magnitude and a direction, and in computer science a vector is a container that holds elements and can expand or shrink, but in linear algebra the definition of a vector isn't too clear. As a result, what is a vector in Linear Algebra?
Just to help understand the change of concept from physics to linear algebra regarding vectors, without pretending to be rigorous. Consider that in (Newtonian) physics you work in a Euclidean space, so you can speak in terms of magnitude. In linear algebra we want to be able to define a vector in broader terms, in a reference system that is not necessarily orthogonal, in what is called an affine space/subspace. In fact, in affine geometry (which helps to visualize this) an oriented segment $\mathop {AB}\limits^ \to$ is an ordered pair of points, and a vector corresponds to the ordered $n$-tuple of the differences of their coordinates (the translation vector). A vector therefore is a representative of all the segments, oriented in the same direction, which are parallel and have the same "translation" (and not modulus, which is not defined, or rather is not preserved, under an affine change of coordinates).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1937464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "84", "answer_count": 9, "answer_id": 1 }
Showing that $\Bbb R[x] / \langle x^2 + 1 \rangle$ is isomorphic to $\Bbb C$ question Show that $\Bbb R[x] / \langle x^2 + 1 \rangle$ is isomorphic to $\Bbb C$. Let $\phi$ be the homomorphism from $\Bbb R[x]$ onto $\Bbb C$ given by $f(x) \rightarrow f(i)$ (that is, evaluate a polynomial in $\Bbb R[x]$ at $i$). Then $x^2 + 1 \in \operatorname{Ker} \phi$ and is clearly a polynomial of minimum degree in $\operatorname{Ker} \phi$. Thus, $\operatorname{Ker} \phi = \langle x^2 + 1 \rangle$ and $\Bbb R[x] /\langle x^2 + 1 \rangle$ is isomorphic to $\Bbb C$. My question is: How is $x^2 + 1$ a polynomial of minimum degree in $\operatorname{Ker} \phi$?
$x^2+1$ is an irreducible polynomial over $\mathbb R$, so $\langle x^2+1\rangle$ is a maximal ideal. Because of that, $\mathbb R[x]/\langle x^2+1\rangle$ is a field. Every element of $\mathbb R[x]/\langle x^2+1\rangle$ is represented by a polynomial of the form $a+bx$ (divide by $x^2+1$ and keep the remainder). Now we can easily define an isomorphism between $\mathbb C$ and $\mathbb R[x]/\langle x^2+1\rangle$ by sending $a+bx$ to $a+bi$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1937513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
How many triangles with whole number leg lengths are there such that area and the perimeter is equal? I've tried to use Heron's formula to approach the problem , but it doesn't make any sense .I also tried to guess the lengths and I got two triangles , one of them is (5,12,13) and the second is (6,8,10). So,I hope you can help me to find out "Is there efficient way to solve this problem ?"
Trying with Pythagorean triplets with sides $ (2mn, m^2-n^2,m^2+n^2),$ your condition leads to $$ 2 m^2 + 2 mn = mn ( m^2-n^2) ; \quad n (m-n) = 2; $$ or $$ n = (m + \sqrt{ m^2-8})/2 $$ For $n$ to be a positive integer, $m^2-8$ must be a perfect square, which forces $m=3$; then $n(m-n)=2$ gives exactly $$ (m,n) = (3,1),\ (3,2), $$ i.e. the two right triangles $(6,8,10)$ and $(5,12,13)$ you already found.
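The Pythagorean ansatz only finds right triangles, though. A brute-force check over all integer-sided triangles, with Heron's formula cleared of square roots, turns up the complete list (the search bound of 100 on the largest side is an assumption of this sketch):

```python
# Brute-force search for integer-sided triangles with area == perimeter.
# Heron: 16*Area^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c); the condition
# Area == a+b+c (> 0) becomes 16*(a+b+c)^2 == that product.
solutions = []
for c in range(1, 101):
    for b in range(1, c + 1):
        for a in range(1, b + 1):
            if a + b <= c:
                continue  # degenerate, not a triangle
            p = a + b + c
            prod = p * (-a + b + c) * (a - b + c) * (a + b - c)
            if prod == 16 * p * p:
                solutions.append((a, b, c))

print(solutions)
# [(6, 8, 10), (5, 12, 13), (9, 10, 17), (7, 15, 20), (6, 25, 29)]
```

Besides the two right triangles, the search also finds $(9,10,17)$, $(7,15,20)$ and $(6,25,29)$, which the restriction to Pythagorean triples misses.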
{ "language": "en", "url": "https://math.stackexchange.com/questions/1937588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Proving a graph is planar - mutually tangent circles in a plane Let $C_1,\dots ,C_n$ be circles in the plane with pairwise disjoint interiors. Define the tangency graph to have $n$ vertices such that vertices are adjacent if the corresponding circles are tangent to each other. Prove this graph is planar. It looks like I have to prove there can't be minor $K_5$'s or $K_{3,3}$'s. For $K_5$, I thought to prove there's no $K_4$ anywhere. Thing is, I don't even know how to rigorously prove we can't have four pairwise tangent circles in the plane. This doesn't even cover minors and I have no idea how to tackle $K_{3,3}$'s... How to solve this problem? How should such problems be approached?
It's obvious: Take the centers $v_i$ $(1\leq i\leq n)$ of the circles as vertices and connect two vertices by a straight segment if the corresponding circles are tangent to each other. Each such segment passes through the point of tangency and lies in the union of the two corresponding closed disks; since the interiors are pairwise disjoint, two of these segments can meet only in a common endpoint, so the drawing has no crossings. Or have I missed something?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1937741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Using L'Hopital's Rule to show limit is 0? I am trying to show for any non-negative integer $n$, $\lim \limits_{x \rightarrow 0+} \frac{e^{-{1 /x}}}{x^n}=0$. For $n=0$ this follows directly since $1/x \rightarrow +\infty$ as $x\rightarrow 0+$. For $n>0$, I notice the limit has indeterminate form $0/0$ but applying L'Hopital's Rule directly gives $ \lim \limits_{x \rightarrow 0+} \frac{e^{-{1 /x}}}{nx^{n+1}}$, which only seems to complicate the problem. Also, it seems that repeatedly applying the rule would just increase the exponent of $x$ in the denominator. Why doesn't L'Hopital's Rule work directly in this problem, and how could it be used, if at all, to evaluate the limit? Alternatively, is there another method to evaluate it?
Actually, this limit can be done through a simple substitution. As indicated, we assume $x\to0^+$. Setting $x=1/t$, the limit becomes $\lim \limits_{t \rightarrow \infty} \frac{t^n}{e^t}$. Applying L'Hospital's Rule a sufficient number of times (it is now an infinity-over-infinity situation), the polynomial numerator gets "exhausted" (after $n$ applications it is just the constant $n!$) whereas the $e^t$ in the denominator stays as is. Thus the limit is $\lim\limits_{t\to\infty} n!/e^t = 0$.
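A quick numerical look (my own sanity check, with $n=5$) shows the decay once $x$ is small enough, even though the values are not monotone at first:

```python
import math

# e^(-1/x) / x^n for small positive x, with n = 5: the exponential decay
# of e^(-1/x) eventually beats any power of 1/x.
n = 5
for x in (0.5, 0.1, 0.05, 0.01):
    print(x, math.exp(-1.0 / x) / x ** n)
```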
{ "language": "en", "url": "https://math.stackexchange.com/questions/1937837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Choosing toppings for pizza Problem: Ordering a "deluxe" pizza means you have four choices from 15 available toppings. How many combinations are possible if toppings cannot be repeated? If they can be repeated? Assume the order in which the toppings are selected does not matter. If the toppings cannot be repeated, then we have $C_4^{15}$ choices. I am having difficulty with the case where toppings can be repeated. It feels like this isn't solvable with the stars and bars method because we only care about "4" of the "15" toppings and not all 15. What I think: Use "14" bars to then have "15" categories, each corresponding to a possible topping. Have 4 stars to represent choosing a topping from that category. Then we should get $C^{18}_{4}$?
This may not be the most efficient way of doing this, but you can consider it as separate cases. * *If all toppings are distinct, then you have $C_4^{15}$ combinations. *If there are three distinct toppings, you have $3 \cdot C_3^{15}$ combinations (because we have $C_3^{15}$ choices for toppings and then $3$ choices for which of those three toppings is doubled). *If there are two distinct toppings, you have $3 \cdot C_2^{15}$ combinations (because there are $C_2^{15}$ choices for toppings and $3$ possibilities: either both toppings doubled, the first is tripled, or the second is tripled). *If there is only one distinct topping, you have $15$ possibilities. Add these up to get your total. Let's enumerate them with $5$ options instead of $15$ for illustrative purposes. Case 1: ABCD ABCE ABDE ACDE BCDE Case 2: AABC ABBC ABCC AABD ABBD ABDD AABE ABBE ABEE AACD ACCD ACDD AACE ACCE ACEE AADE ADDE ADEE BBCD BCCD BCDD BBCE BCCE BCEE BBDE BDDE BDEE CCDE CDDE CDEE Case 3: AABB AACC BBCC AADD BBDD CCDD AAEE BBEE CCEE DDEE and BBBA CCCA DDDA EEEA AAAB CCCB DDDB EEEB AAAC BBBC DDDC EEEC AAAD BBBD CCCD EEED AAAE BBBE CCCE DDDE Case 4: AAAA BBBB CCCC DDDD EEEE
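Both counts can be verified exhaustively (my own check; `itertools` generates the multisets directly):

```python
from itertools import combinations_with_replacement
from math import comb

# Multisets of size 4 drawn from 15 toppings: stars and bars says C(18, 4).
with_repeats = sum(1 for _ in combinations_with_replacement(range(15), 4))
print(with_repeats, comb(18, 4))   # 3060 3060

# The case breakdown above: 4, 3, 2 or 1 distinct toppings.
by_cases = comb(15, 4) + 3 * comb(15, 3) + 3 * comb(15, 2) + 15
print(by_cases)                    # 3060
```

So the casework and the stars-and-bars answer $C^{18}_4$ agree.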
{ "language": "en", "url": "https://math.stackexchange.com/questions/1937959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Unable to get a terminal at the start for the GNF The grammar is: S-> AA|a A-> SA|b If I substitute one rule in another, there would always remain a non-terminal at the start. How do I get the terminal at the start? GNF has productions of form: A->xB Where x is a single terminal and B can be a combination of non-terminals.
To transform a grammar into GNF you have to remove direct and indirect left recursion. Here you have indirect left recursion, so the strategy is to make it direct by replacing the first $A$ in the rule of $S$. You obtain the following equivalent grammar: $$S\to SAA|bA|a $$ $$A\to SA | b $$ You now have a direct recursion $S\to SAA$, so you add a new nonterminal $S'$ and transform the rule for $S$ into: $$ S\to bAS'|aS'$$ and $$S'\to AAS'|\epsilon $$ Replacing the new $S$ in the rule for $A$ you get $$ A\to bAS'A|aS'A|b $$ Now replacing the first $A$ in the rule of $S'$ you get $$S'\to bAS'AAS'|aS'AAS'|bAS'|\epsilon $$ Thus you get the grammar: $$ S\to bAS'|aS'$$ $$S'\to bAS'AAS'|aS'AAS'|bAS'|\epsilon $$ $$ A\to bAS'A|aS'A|b $$ Now you only need to suppress $S'\to \epsilon$: replace each rule in which $S'$ appears by all the variants in which each occurrence of $S'$ is either kept or erased. Thus you get $$ S\to bAS'|aS'|bA|a$$ $$S'\to bAS'AAS'|aS'AAS'|bAS'|bAS'AA|aS'AA|bAAAS'|aAAS'|bAAA|aAA|bA$$ $$ A\to bAS'A|aS'A|b|bAA|aA $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1938091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do there exist $m,n$ such that $6 = 2(2m+1)^2/(2n+1)^2$? Can two numbers $n$ and $m$ exist such that $$6=\frac{2(2m+1)^2}{(2n+1)^2}$$ where $\gcd(m,n) = 1$?
By the comments above, it suffices to show that $\sqrt{3}$ is not a rational number. We proceed by contradiction. So suppose that $\sqrt{3}=\frac{a}{b}$ with $a,b$ integers such that $\text{gcd}(a,b)=1$. (We may assume this without loss of generality). Then $3=\frac{a^2}{b^2}$, hence $3b^2=a^2$. It follows that $3\mid a^2$ and thus, since $3$ is prime, that $3\mid a$. Thus we can write $a=3k$ for some integer $k$. Then $3b^2=9k^2$, thus $b^2=3k^2$. In the same fashion we conclude that $3\mid b$, thus we may write $b=3l$ for some $l$. But then $\text{gcd}(a,b)\geq 3$ contrary to our assumption. Thus we conclude that $\sqrt{3}$ is irrational. Let me give you an alternative proof. Notice that the minimal polynomial of $\sqrt{3}$ over $\mathbb{Q}$ is given by $x^2-3$. Indeed, clearly $\sqrt{3}$ is a root of this polynomial and by the criterion of Eisenstein, we see that this polynomial is irreducible. It follows that the extension degree $[\mathbb{Q}(\sqrt{3}):\mathbb{Q}]=2$. Thus $\left\{1,\sqrt{3}\right\}$ forms a $\mathbb{Q}$-basis of $\mathbb{Q}(\sqrt{3})$, and it follows that $\sqrt{3}\notin \mathbb{Q}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1938203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
In a finite field product of non-square elements is a square I came across the following problem about finite fields: Let $F$ be a finite field. Show that if $a, b\in F$ are both non-squares, then $ab$ is a square. I wanted to prove it by using the idea of a biquadratic field extension, but there is no biquadratic extension over finite fields. Please, any hints for proving the above fact? Thanks.
Hint: I assume $a,b\neq 0$, otherwise it is obvious. $F^*$ is cyclic. Suppose it is generated by $x$; then you can write $a=x^n,\ b=x^m$ with $n,m$ odd (a power $x^k$ is a square exactly when $k$ is even). Then $n+m$ is even.
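A concrete check in a small prime field (my own illustration; the prime $31$ is arbitrary), using Euler's criterion: for $a \neq 0$, $a$ is a square mod $p$ iff $a^{(p-1)/2} \equiv 1 \pmod p$.

```python
# Verify in F_p that the product of any two non-squares is a square.
p = 31

def is_square(a):
    # Euler's criterion, for a != 0
    return pow(a, (p - 1) // 2, p) == 1

nonsquares = [a for a in range(1, p) if not is_square(a)]
print(len(nonsquares))                                   # (p - 1) / 2 = 15
print(all(is_square(a * b % p)
          for a in nonsquares for b in nonsquares))      # True
```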
{ "language": "en", "url": "https://math.stackexchange.com/questions/1938366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 0 }
Why are these sets equal? How can I formally see that the following sets are equal? For $X_1,\ldots,X_n,\ldots$ random variables with values in $[-\infty,\infty]$: $\{\inf_n X_n < a\} = \bigcup_n\{X_n < a\}$ and $\{\sup_n X_n>a\} = \bigcup_n \{X_n > a\}$ I also have difficulties seeing the following equality: $\limsup_n X_n = \inf_m(\sup_{n\geq m}X_n)$. Thank you for your help
Suppose $\inf_n X_n < a.$ Remember that "inf" means the largest lower bound. That means nothing larger than that can be a lower bound. Thus $a$ is not a lower bound of $\{X_n: n\}.$ To say that $a$ is not a lower bound of that set means $\exists n\ X_n<a,$ and that's the same as saying the event $\bigcup_n \{X_n<a\}$ occurs. Conversely, suppose the event $\bigcup_n \{X_n<a\}$ occurs. That means $\exists n\ X_n<a$. That means $a$ is not a lower bound of $\{X_n : n\}$. And that implies all lower bounds are $<a$, since if some lower bound were $\ge a$ then $a$ would be a lower bound. Hence the largest lower bound is $<a$, i.e. $\inf_n X_n < a.$ The argument for $\sup$ is the same with the inequalities inverted.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1938591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is there a formula for $\int {x^n e^{-x} dx}$ I saw in this question that $$ \int {x^n e^x dx} = \bigg[\sum\limits_{k = 0}^n {( - 1)^{n - k} \frac{{n!}}{{k!}}x^k } \bigg]e^x + C. $$ and I was wondering if we can get some formula like that for $$ \int {x^n e^{-x} dx} $$ when $n \in \mathbb N$. I already know that $$ \int^{\infty}_0 {x^n e^{-x} dx} = \Gamma(n+1) $$ but I'm looking for a general formula.
The general formula is simply $n\to z$ in which $z\in\mathbb{R}$: $$\int_0^{+\infty}\ t^{z-1}e^{-t}\ \text{d}t = \Gamma(z)$$ Possibly avoid the poles: $z = 0, -1, -2, \ldots$. If you are talking about a general formula for the indefinite integral, then the series expansion is what you are searching for. Just expand the exponential in series $$e^{-x} = \sum_{k = 0}^{+\infty}\frac{(-x)^k}{k!}$$ and you'll get the result: $$\int x^n e^{-x}\ \text{d}x = \sum_{k = 0}^{+\infty} \frac{(-1)^k}{k!}\int x^{n+k}\ \text{d}x$$ Which is $$\sum_{k = 0}^{+\infty}\frac{(-1)^k}{k!} \frac{1}{n+k+1}x^{n+k+1}$$
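Incidentally, substituting $x\to -x$ in the quoted formula for $\int x^n e^x\,dx$ suggests the finite closed form $\int x^n e^{-x}\,dx = -\Big[\sum_{k=0}^{n}\frac{n!}{k!}x^k\Big]e^{-x}+C$. Differentiating $-P(x)e^{-x}$ gives $e^{-x}\big(P(x)-P'(x)\big)$, so it is enough that $P-P'=x^n$ as polynomials; a sketch checking that identity exactly:

```python
from math import factorial

# Check the closed form  ∫ x^n e^(-x) dx = -P(x) e^(-x) + C
# with P(x) = sum_{k=0}^n (n!/k!) x^k, by verifying P - P' == x^n.
def check(n):
    P = [factorial(n) // factorial(k) for k in range(n + 1)]  # coeffs of P
    dP = [(k + 1) * P[k + 1] for k in range(n)] + [0]         # coeffs of P'
    diff = [p - d for p, d in zip(P, dP)]
    return diff == [0] * n + [1]                              # == x^n

print(all(check(n) for n in range(10)))   # True
```

So for natural $n$ there is a finite formula as well, alongside the series above.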
{ "language": "en", "url": "https://math.stackexchange.com/questions/1938682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that a certain set of elements is a basis of the free module $\mathbf{Z}[\xi]$ Let $\xi$ be a primitive $p$-th root of unity for $p$ a prime. It is well-known that $\mathbf{Z}[\xi]$ is a free $\mathbb Z$-module. Now I'd like to show that $1, (1-\xi)^2, ..., (1-\xi)^{p-1}$ is a basis of $\mathbf{Z}[\xi]$. Since I know that the rank of $\mathbf{Z}[\xi]$ is $p-1$, we only need to show that $1, (1-\xi)^2, ..., (1-\xi)^{p-1}$ is a set of linearly independent elements (over $\mathbf{Z}$). This is where I'm stuck and need help. However, I do not know if this is even a basis, it is only a guess. Thanks for any help!
It is not sufficient to show that a set is linearly independent to show that it is an integral basis. For instance, consider $ \mathbf Z[\sqrt{2}] $ which is a free $ \mathbf Z $-module of rank $ 2 $ - the set $ \{ 1, 2 \sqrt{2} \} $ is linearly independent over $ \mathbf Z $, but it is not an integral basis of $ \mathbf Z[\sqrt{2}] $, for instance, $ \sqrt{2} $ does not lie in its integral span. Unfortunately, your set is not an integral basis - the easiest way to see this is to show that the change of basis matrix from an integral basis we know to this basis has determinant $ \neq \pm 1 $. Clearly $ \mathbf Z[\xi] = \mathbf Z[1 - \xi] $, so the set $ \{ 1, 1 - \xi, (1 - \xi)^2, \ldots, (1 - \xi)^{p-2} \} $ is an integral basis of $ \mathbf Z[\xi] $. Write the change of basis matrix from this basis to the linearly independent set in question, and use expansion along the second row to compute its determinant. You will see that the determinant turns out to be $ p(p-1)/2 $ up to a sign, so in fact the integral span of your set is a submodule of index $ p(p-1)/2 $ in $ \mathbf Z[\xi] $.
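The determinant computation can be checked by machine for a small prime, say $p=5$, where $p(p-1)/2=10$. A sketch of my own: it builds the change-of-basis matrix from $\{1,u,\dots,u^{p-2}\}$, with $u=1-\xi$, to $\{1,u^2,\dots,u^{p-1}\}$ using the minimal polynomial of $u$.

```python
from math import comb

# Minimal polynomial of u = 1 - ξ:  sum_{i=0}^{p-1} (-1)^i C(p, i+1) u^i = 0,
# so u^{p-1} = -sum_{i=0}^{p-2} (-1)^i C(p, i+1) u^i.
p = 5
top = [-((-1) ** i) * comb(p, i + 1) for i in range(p - 1)]

rows = []
for e in [0] + list(range(2, p)):        # exponents 0, 2, 3, ..., p-1
    if e < p - 1:
        rows.append([1 if i == e else 0 for i in range(p - 1)])
    else:
        rows.append(top)                 # u^{p-1} expressed in the basis

def det(m):                              # integer determinant, cofactor expansion
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

print(det(rows))   # 10, i.e. p(p-1)/2 up to sign
```

The index $10 \neq \pm 1$ confirms that the proposed set is linearly independent but not an integral basis.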
{ "language": "en", "url": "https://math.stackexchange.com/questions/1938774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $\{m,n\} =\{k,l\}$ Let $n,m,k,l$ be nonnegative integers and $p$ an odd prime with $0 \leq n,m,k,l \leq p-1$ and $$n+m \equiv k+l \pmod{p}, \quad n^2+m^2 \equiv k^2+l^2 \pmod{p}.$$ Prove that $\{m,n\} =\{k,l\}$. We can rearrange the second condition to get $(n-l)(n+l) \equiv (k-m)(k+m) \pmod{p}$. How do we continue from here?
First, observe that $n^2 + 2mn + m^2 \equiv (n+m)^2 \equiv (k+l)^2 \equiv k^2 + 2kl + l^2$, so $kl \equiv mn$ (since $p$ is odd, 2 is a unit). If $k \equiv 0$, then $0 \equiv kl \equiv mn$. Without loss of generality we have $m \equiv 0$, and thus $n \equiv l$. The claim follows then from $0 \le m,n,k,l \le p-1$. Otherwise, let $k \not\equiv 0$. This means $k$ is a unit, and we can find $a$ in $\mathbb{F}_p$ with $m \equiv ak$ (choose e.g. $a = k^{-1} m$). It follows that $kl \equiv akn$, and thus $l \equiv an$ (by multiplying both sides with $k^{-1}$). Furthermore, $k + an \equiv k + l \equiv n + m \equiv n + ak$. If $a \equiv 1$, then we immediately have $l \equiv n$ and $m \equiv k$, and the claim follows as before from $0 \le m,n,k,l \le p-1$. Else, we get $k \equiv n$ and thus $l \equiv an \equiv ak \equiv m$. As before, $\{ k,l \} = \{ m,n \}$.
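For reassurance, a brute-force check of the statement for one small prime (p = 7, my choice):

```python
# Exhaustive check for p = 7: whenever n+m ≡ k+l and n²+m² ≡ k²+l² (mod p)
# with 0 <= n,m,k,l <= p-1, the pairs {m,n} and {k,l} coincide.
p = 7
ok = True
for n in range(p):
    for m in range(p):
        for k in range(p):
            for l in range(p):
                if (n + m - k - l) % p == 0 and (n*n + m*m - k*k - l*l) % p == 0:
                    ok = ok and ({n, m} == {k, l})
print(ok)   # True
```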
{ "language": "en", "url": "https://math.stackexchange.com/questions/1938968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Get covariance of pixels in an image? What is the best way to calculate the covariance matrix for a small square of pixels? Assume it is a $3\times 3$ square of grayscale values. I have read that in some cases, you can get the covariance matrix by inverting the Hessian. So, I was thinking of using a gradient operator to estimate the second derivatives ($J_{xx}, J_{yy}, J_{xy}$). Then, I believe the covariance would be this: $$ \frac1{J_{xx}J_{yy} - J_{xy}^2} \begin{bmatrix}J_{yy} & -J_{xy} \\-J_{xy} & J_{xx}\end{bmatrix} $$ Again, I'm not sure if that's correct. Alternatively, I was considering just trying to apply the covariance formula directly. I believe to do that, I'd have each pixel's center $(x,y)$ coordinate be a separate sample. Then calculate covariance, weighting each sample by the pixel color.
From my tests, it appears that inverting the Hessian does not give the correct covariance matrix. I'm not sure under what circumstances the inverted Hessian will give the correct answer. What I ended up doing is letting each pixel become a vector $x$ based on its pixel coordinates: $$ \begin{matrix} (-1,-1)&(0,-1)&(1,-1)\\ (-1,0)&(0,0)&(1,0)\\ (-1,1)&(0,1)&(1,1) \end{matrix} $$ Then, using the color of each pixel as the weight $w$, you can compute the matrix with (see wikipedia): $$ n = \sum_{i=0}^8 w_i\\ m = \sum_{i=0}^8 w_i^2\\ \overline x = \frac1n\sum_{i=0}^8 w_ix_i\\ Cov = \frac n{n^2-m}\sum_{i=0}^8 w_i(x_i-\overline x)^T(x_i-\overline x) $$
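A sketch of that direct computation with NumPy (the array names are mine). With uniform weights it reduces to the ordinary unbiased sample covariance of the coordinates, which makes a handy sanity check:

```python
import numpy as np

# Weighted covariance of pixel coordinates for a 3x3 patch, with the
# pixel intensities as reliability weights, following the formula above.
coords = np.array([(x, y) for y in (-1, 0, 1) for x in (-1, 0, 1)], float)
w = np.ones(9)                      # uniform weights as a sanity check

n = w.sum()
m = (w ** 2).sum()
xbar = (w[:, None] * coords).sum(axis=0) / n
d = coords - xbar
cov = (n / (n ** 2 - m)) * (w[:, None] * d).T @ d   # sum of w_i * outer products

print(cov)   # diag(0.75, 0.75) for uniform weights
```

Replacing `w` with the actual grayscale values of the patch gives the intensity-weighted covariance described above.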
{ "language": "en", "url": "https://math.stackexchange.com/questions/1939040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimizing Perimeter of a quadrilateral Let us say that we are given a quadrilateral where the diagonals are congruent and fixed at a certain length, and the angle between the two diagonals are fixed. How would you prove that the minimum perimeter is achieved when the quadrilateral is a rectangle? I know it when diagonals aren't fixed at a certain length, one can prove it is a square by considering consecutive sides. However, when the length isn't convenient, and on top of that FIXED, the conditions are different, and you cannot do the same argument. Any help on how to convince/prove to me that the rectangle minimizes the perimeter?
Call $a,b,c,d$ the vertices and view them as vectors in $\mathbb R^2$. Call $m,n$ the midpoints of the two diagonals $\overline{ac}$ and $\overline{bd}$. Finally, suppose that the barycenter $(a+b+c+d)/4$ is the origin. Then it is easy to check that $m=-n$ (because the barycenter coincides with the midpoint between $m$ and $n$). In this way you can write \begin{align} a&=m+v\\ b&=-m+w\\ c&=m-v\\ d&=-m-w \end{align} where $2v=a-c$ and $2w=b-d$ are the diagonals. Now \begin{align} &\overline{ab}=|a-b|=|2m+(v-w)|\\ &\overline{bc}=|b-c|=|2m-(v+w)|\\ &\overline{cd}=|c-d|=|2m-(v-w)|\\ &\overline{da}=|d-a|=|2m+(v+w)| \end{align} and summing the two opposite pairs we obtain by triangle inequality that $$\overline{ab}+\overline{cd}=|2m+(v-w)|+|2m-(v-w)|\geq 2|v-w|$$ and $$\overline{bc}+\overline{da}=|2m-(v+w)|+|2m+(v+w)|\geq 2|v+w|,$$ and summing $$Per(abcd)\geq 2(|v+w|+|v-w|).$$ Since the length of the diagonals is the same and the angle between them is fixed, then $2(|v-w|+|v+w|)$ is fixed and equal to the perimeter of the rectangle with those diagonals. Moreover this is basically the only case: equality can hold in both inequalities only if $2m$ is parallel to both $v-w$ and $v+w$, which forces $m=0$ unless $v-w$ and $v+w$ (equivalently $v$ and $w$) are linearly dependent; the latter corresponds to the trivial case where the angle is zero and can be dealt with separately.
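A randomized numerical check of the final inequality (entirely my own; it fixes equal diagonals $2v, 2w$ at a fixed angle and translates by random $m$):

```python
import math
import random

# Check Per(abcd) >= 2(|v+w| + |v-w|) for many random offsets m.
random.seed(0)
L, angle = 1.7, 0.9          # arbitrary diagonal half-length and angle
v = (L, 0.0)
w = (L * math.cos(angle), L * math.sin(angle))

def norm(u): return math.hypot(u[0], u[1])
def add(u, t): return (u[0] + t[0], u[1] + t[1])
def sub(u, t): return (u[0] - t[0], u[1] - t[1])

rect = 2 * (norm(sub(v, w)) + norm(add(v, w)))   # rectangle's perimeter
for _ in range(1000):
    m = (random.uniform(-2, 2), random.uniform(-2, 2))
    a, b = add(m, v), add((-m[0], -m[1]), w)     # a = m+v, b = -m+w
    c, d = sub(m, v), sub((-m[0], -m[1]), w)     # c = m-v, d = -m-w
    per = norm(sub(a, b)) + norm(sub(b, c)) + norm(sub(c, d)) + norm(sub(d, a))
    assert per >= rect - 1e-9
print("ok")
```

With $m=(0,0)$ the quadrilateral is the rectangle itself and the bound is attained.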
{ "language": "en", "url": "https://math.stackexchange.com/questions/1939164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Explanations about some Mittag-leffler partial fraction expansions Is it possible to show where the following series come from? $$\sum _{k=1}^{\infty } \left(\frac{1}{\pi ^2 k^2}-\frac{2}{(x-2 \pi k)^2}-\frac{2}{(2 \pi k+x)^2}\right)+\left(-\frac{2}{x^2}-\frac{1}{6}\right)=\frac{1}{\cos (x)-1}$$ $$\sum _{k=1}^{\infty } \left(-\frac{2 \pi ^2 k^2+3}{6 \pi ^4 k^4}+\frac{2}{3 (x-2 \pi k)^2}+\frac{4}{(x-2 \pi k)^4}+\frac{2}{3 (2 \pi k+x)^2}+\frac{4}{(2 \pi k+x)^4}\right)+\frac{4}{x^4}+\frac{2}{3 x^2}+\frac{11}{180}=\frac{1}{(\cos (x)-1)^2}$$ $$\sum _{k=1}^{\infty } \left(\frac{8 \pi ^4 k^4+15 \pi ^2 k^2+15}{60 \pi ^6 k^6}-\frac{4}{15 (x-2 \pi k)^2}-\frac{2}{(x-2 \pi k)^4}-\frac{8}{(x-2 \pi k)^6}-\frac{4}{15 (2 \pi k+x)^2}-\frac{2}{(2 \pi k+x)^4}-\frac{8}{(2 \pi k+x)^6}\right)+\left(-\frac{8}{x^6}-\frac{2}{x^4}-\frac{4}{15 x^2}-\frac{191}{7560}\right)=\frac{1}{(\cos (x)-1)^3}$$ Sorry for the inconvenience; I forgot to add a part of the formula.
As I commented earlier, I have a problem with the first expression. So, since Maple said that it is correct, I suppose I am wrong but I would like to know where. Let me consider $$S_1=\sum _{k=1}^{\infty } \frac{1}{\pi ^2 k^2}\qquad S_2=\sum _{k=1}^{\infty }\frac{1}{(x-2 \pi k)^2}\qquad S_3=\sum _{k=1}^{\infty }\frac{1}{(x+2 \pi k)^2}$$ So $$S_1=\frac 16\qquad S_2=\frac{\psi ^{(1)}\left(1-\frac{x}{2 \pi }\right)}{4 \pi ^2}\qquad S_3=\frac{\psi ^{(1)}\left(1+\frac{x}{2 \pi }\right)}{4 \pi ^2}$$ where appears the first derivative of the digamma function.$$S_1-2S_2-2S_3=\frac{1}{6}-\frac{\psi ^{(1)}\left(1-\frac{x}{2 \pi }\right)+\psi ^{(1)}\left(1+\frac{x}{2 \pi }\right)}{2 \pi ^2}=\frac{1}{6}+\frac{2}{x^2}+\frac{1}{\cos (x)-1}$$ the last simplification being obtained using a CAS. Using Taylor around $x=0$, what I find is $$S_1-2S_2-2S_3=-\frac{x^2}{120}-\frac{x^4}{3024}-\frac{x^6}{86400}+O\left(x^{8}\right)$$ while $$\frac{1}{\cos (x)-1}=-\frac{2}{x^2}-\frac{1}{6}-\frac{x^2}{120}-\frac{x^4}{3024}-\frac{x^6}{86400}+O\left(x^{8}\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1939284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the value of the nested radical $\sqrt[3]{1+2\sqrt[3]{1+3\sqrt[3]{1+4\sqrt[3]{1+\dots}}}}$? The closed-forms of the first three are well-known, $$x_1=\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\dots}}}}\tag1$$ $$x_2=\sqrt[3]{1+\sqrt[3]{1+\sqrt[3]{1+\sqrt[3]{1+\dots}}}}\tag2$$ $$x_3=\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\dots}}}}\tag3$$ $$x_4=\sqrt[3]{1+2\sqrt[3]{1+3\sqrt[3]{1+4\sqrt[3]{1+\dots}}}}=\;???\tag4$$ with $x_1$ the golden ratio, $x_2$ the plastic constant, and $x_3=3\,$ (by Ramanujan). Questions: * *Trying to generalize $x_3$, what is the value of $x_4$ to a $100$ or more decimal places? (The Inverse Symbolic Calculator may then come in handy to figure out its closed-form, if any.) *What is the Mathematica command to compute $x_4$? P.S. This other post is related but only asks for its closed-form which resulted in speculation in the comments. (A method/code to compute $x_4$, and a verifiable numerical value is more desirable.)
To answer this question we need to set up a recursion that is resilient to errors in the starting value. We can write the expression as $$y(x)=x\sqrt[3]{1+(x+1)\sqrt[3]{1+(x+2)\sqrt[3]{1+...}}}$$ from where we have $$y(x)=x\sqrt[3]{1+y(x+1)}$$ or $$y(x-1)=(x-1)\sqrt[3]{1+y(x)}$$ Now we need to estimate how this function behaves, and we can easily see that $$y(x) \sim x^{\frac{3}{2}}$$ because $$(x-1)^{\frac{3}{2}}\sim(x-1)\sqrt[3]{1+x^{\frac{3}{2}}}$$ for large $x$. From here we have an algorithm: take a large $N$, start with $y(N)=N^{\frac{3}{2}}$ and go backwards using $$y(k-1)=(k-1)\sqrt[3]{1+y(k)}$$ With, for example, $N=20$ we have $y(1)=1.70221913267155$; with $N=50$ we have $y(1)=1.70221913269546$, already fixing 14 digits, as confirmed by $N=100$, which again gives $y(1)=1.70221913269546$. It is not difficult to estimate the error terms: if we miss the initial value by $\Delta x$, the error diminishes exponentially as the recursion descends to $y(1)$. Translating this into Mathematica or any other language is rather trivial.
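The backward recursion above is only a few lines in any language; here is a Python version of it:

```python
# Backward recursion for y(1): start from the asymptotic guess
# y(N) = N^(3/2) at a large N and iterate y(k-1) = (k-1)*(1 + y(k))^(1/3).
# Errors in the starting value are damped exponentially on the way down.
def y1(N):
    y = N ** 1.5
    for k in range(N, 1, -1):
        y = (k - 1) * (1.0 + y) ** (1.0 / 3.0)
    return y

print(y1(100))   # 1.70221913269546... (the value quoted above)
```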
{ "language": "en", "url": "https://math.stackexchange.com/questions/1939394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Probability of a painted cube being reassembled into itself Suppose 27 cubes are stacked together, suspended in the air, to form a larger cube. The cube is then painted on all the exposed surfaces and dried. The smaller cubes are then randomly permuted in both spatial position and spatial orientation to form another large cube. What is the probability that new larger cube is identical to the original? This is not homework, it was passed on by a friend who found it somewhere on the internet, and they do not remember where.
There are $27$ cubies of four types: * *one body cubie (B), *six face cubies (F), *twelve edge cubies (E), and *eight vertex cubies (V). We can represent the cubie types occupying the $27$ cubie positions by a $27$-letter word using the letters B, F, E, and V with the multiplicities above. The number of distinct such words, all of which are equally likely and only one of which matches that of the original arrangement, is given by the multinomial coefficient $$ \binom{27}{1,6,12,8}=\frac{27!}{1!\cdot6!\cdot12!\cdot8!}. $$ Now for how each of the cubie types might appear. * *There is only one way the body cubie can appear (i.e. with no faces painted). *There are six ways a face cubie can appear (e.g. front face painted, top face painted, etc.) In fact these six possible appearances are precisely those realized by the six face cubies in the original arrangement—one of the six has its front face painted, one its top face painted, and so on. *Similarly, there are twelve ways an edge cubie can appear (e.g. front and top painted, front and left painted), each realized by one of the twelve edge cubies in the original arrangement. *Likewise there are eight ways a vertex cubie can appear. Putting together the type information and the appearance information for each of the cubies, there are $$ \binom{27}{1,6,12,8}\cdot1^1\cdot6^6\cdot12^{12}\cdot8^8 $$ distinct arrangements, each of which has the same probability and only one of which matches the original arrangement. Hence the probability of matching the original arrangement is the reciprocal of this number.
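Putting numbers to this: a short Python computation of the count and the resulting probability (exact integer arithmetic throughout, since the numbers are large):

```python
from math import comb, factorial

# number of distinct type-words (B, F, E, V placed in 27 positions)
multinomial = factorial(27) // (factorial(1) * factorial(6) * factorial(12) * factorial(8))

# each face cubie has 6 possible appearances, each edge cubie 12, each vertex cubie 8
arrangements = multinomial * 6**6 * 12**12 * 8**8
probability = 1 / arrangements

print(arrangements)
print(probability)   # on the order of 1e-37
```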
{ "language": "en", "url": "https://math.stackexchange.com/questions/1939512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Separable closure and normality Let $K/F$ be a normal algebraic extension and let $L = (K/F)^{sep}$ be the subfield of elements of $K$ which are separable over $F$ (this is also called the separable closure of $F$ in $K$). Is $L/F$ necessarily normal? I really do not have a clue about this question so any hints will be appreciated.
Take $a\in L$ and let $p$ be its minimal polynomial. Since $a\in K$ and $K$ is normal, $p$ is a product of linear factors : $$p=\prod_{j=1}^n(X-a_j),$$ (where $a=a_1$) and, since $a$ is separable over $F$ (which is what $a\in L$ means), all the $a_j$'s are distinct. They have the same minimal polynomial $p$, so all of them are in $L$; hence $L$ is normal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1939612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How many homomorphisms are there from $D_6$ to $D_5$? I know that: $D_6$={$e,a,a^{2},a^{3},a^{4},a^{5},b,ab,a^{2}b,a^{3}b,a^{4}b,a^{5}b$}, with $a^{6}=e$ and $ba^{k}b=a^{-k}$. $D_5$={$e,r,r^{2},r^{3},r^{4},s,rs,r^{2}s,r^{3}s,r^{4}s$}, with $r^{5}=e$ and $sr^{k}s=r^{-k}$. Let $\varphi:D_6\rightarrow D_5$; a homomorphism is a function such that: $\forall x,y\in D_6:\varphi(xy)=\varphi(x)\varphi(y)$ I am now asked to determine how many homomorphisms there are from $D_6$ to $D_5$, but the only one I can find is the trivial one, thus: $\varphi(e)=\varphi(a)=\varphi(a^{2})=\varphi(a^{3})=\varphi(a^{4})=\varphi(a^{5})=\varphi(b)=\varphi(ab)=\varphi(a^{2}b)=\varphi(a^{3}b)=\varphi(a^{4}b)=\varphi(a^{5}b)=r^{5}$ Can anyone help me with this? I'm sure there have to be more homomorphisms but I can't find them.
Hint: In light of the first isomorphism theorem, you want to find normal subgroups $N\triangleleft D_6$ and then find subgroups of $D_5$ isomorphic to $D_6/N$.
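Carrying the hint through: the proper normal subgroups of $D_6$ whose quotients embed in $D_5$ are the three index-two subgroups (each giving a $C_2$ quotient, which can map onto any of the five reflections of $D_5$), plus the trivial homomorphism, for $1+3\cdot 5=16$ in total. This can be confirmed by brute force, since a homomorphism out of $D_6=\langle a,b\mid a^6=b^2=e,\ bab^{-1}=a^{-1}\rangle$ is determined by images $x=\varphi(a)$, $y=\varphi(b)$ satisfying the same relations; the encoding of $D_5$ below is my own:

```python
# D5 elements encoded as (k, s) = r^k s^s, with r^5 = e and s r s = r^{-1}
def mul(g, h):
    k1, s1 = g
    k2, s2 = h
    return ((k1 + (k2 if s1 == 0 else -k2)) % 5, (s1 + s2) % 2)

def power(g, n):
    acc = (0, 0)
    for _ in range(n):
        acc = mul(acc, g)
    return acc

def inv(g):
    k, s = g
    return (k, 1) if s == 1 else ((-k) % 5, 0)   # reflections are involutions

D5 = [(k, s) for k in range(5) for s in range(2)]
E = (0, 0)

# a homomorphism is a pair (x, y) = (phi(a), phi(b)) satisfying the D6 relations
count = sum(1 for x in D5 for y in D5
            if power(x, 6) == E
            and power(y, 2) == E
            and mul(mul(y, x), inv(y)) == inv(x))
print(count)  # 16
```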
{ "language": "en", "url": "https://math.stackexchange.com/questions/1939704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Understanding proof that connected graph with $V=E+1$ is acyclic This is an excerpt proving that if a graph $G$ is connected and has one more vertex than edge, then it is acyclic. Suppose $|V|=n$ and that $G$ has a $k$-cycle. This cycle has $k$ vertices and edges, hence $G$ has $n-k$ additional vertices. Each of these vertices has a minimal path to the cycle. By minimality, each of these paths contains an edge not appearing in any other. Hence we have at least $n-k$ new edges, so at least $n$ in total, contradicting the assumed equality. Can someone explain how exactly minimality implies each path has an edge not on any other? It seems to me entirely possible that a minimal path is entirely contained in another - simply have one vertex adjacent to the cycle, and another one adjacent to the previous one but not directly to the cycle. The proof is $3\implies 4$ here.
Let $v$ be a vertex not belonging to the cycle $c_1c_2\cdots c_k$. Then by connectedness there exists a path of the form $v_0v_1\cdots v_r$ with $v_0=v$ and $v_r=c_i$. Among all such paths for $v$, pick one that minimizes $r$. As $v$ is not in the cycle, $r\ge 1$. Associate $v$ with the first edge $vv_1$ of one such shortest path. If $w$ is another vertex not belonging to the cycle, we likewise associate it with the first edge $ww_1$ of a shortest path $ww_1\cdots w_s$ for $w$. The claim is that $vv_1$ is not the same edge as $ww_1$. Indeed, for these to be equal, we need $v_1=w$ and $w_1=v$. But then either $v_1\cdots v_r$ is a shorter path for $w$ or $w_1\cdots w_s$ is a shorter path for $v$, contradiction. Hence the above association of vertices with edges is injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1939814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fixed-point iteration and continuity of parameters Let $X$ a compact set and $A\subseteq \mathbb{R}$. Consider a continuous function $f\colon X\times A\to X$ and construct a fixed-point iteration as follows $$ x_{k+1}=f(x_k,a),\quad x_0\in X, a \in A.\quad (\star) $$ My question: If $(\star)$ admits a unique fixed point, denoted by $\mathrm{Fix}(f_a)$, for all $a\in A$, can we conclude that $\mathrm{Fix}(f_a)$ is a continuous function of $a$? What can be said in case $(\star)$ admits a set of fixed points for all $a\in A$? Thanks for your help. Comments. This question is different from this one. Indeed, here $f$ is not assumed to be contractive.
Suppose $X$ is metric (with metric $d_X$), and consider the following parameterized family of optimization problems: $$ \max_{x \in X} \, -d_X\big(x, f(x,a)\big) \tag{$\ast$} $$ By construction, $d_X\big(x, f(x,a)\big)$ is continuous in $(x,a)$, and given $X$ is compact, it is straightforward to verify the other conditions of Berge's Theorem of the Maximum. Consider the argmax correspondence, i.e. the possibly set-valued mapping $\phi: A \rightrightarrows X$ defined via $a \mapsto \{x \in X : f(x,a) =x\}$. By Berge, we obtain that $\phi$ is upper hemicontinuous as a correspondence. Any upper hemicontinuous correspondence that additionally is singleton-valued is a continuous function. Thus, if each $f(\cdot, a)$ admits a unique fixed point, $\phi$ is a continuous function. If some $f(\cdot, a)$ admit more than one fixed point, $\phi$ will not be singleton-valued, but will instead be upper hemicontinuous as a correspondence (here, equivalent to having a closed graph in $A \times X$). Note nothing in this requires these fixed points to be attracting under the specified dynamics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1939968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Commutativity among the elements in the quotient group $G$ and the full group $H$ under homomorphism $H \overset{r}{\to} G$ Say the full group is $H$, and we pick up a normal subgroup $N$, and we define the quotient group $$G=H/N.$$ There is a group homomorphism $r$ from $H$ to $G$ $$ H \overset{r}{\to} G. $$ My question concerns whether the following relations are true in general or whether there are counterexamples: * *$ g \cdot r(h) \cdot g^{-1}=r(h)$, for $\forall h \in H$, $\forall g \in G$. *$ r(h) \cdot g \cdot r(h)^{-1}=g$, for $\forall h \in H$, $\forall g \in G$. If either of them is not true, please give the simplest (counter) example (preferably a finite group).
Since $r$ is a group homomorphism, your relations are in general not true: Try a pair $(h_1,h_2)$ of non-commuting elements of a non-abelian group $H$ such that $h_1h_2h_1^{-1}h_2^{-1}\notin N$, to provide counterexamples: $$ r(h_1h_2)=r(h_1)r(h_2)=r(h_2)r(h_1)=r(h_2h_1)\Leftrightarrow \\r(h_1)r(h_2)r(h_1)^{-1}r(h_2)^{-1}=1_G \Leftrightarrow \\ r(h_1h_2h_1^{-1}h_2^{-1})=1_G=r(N) \Leftrightarrow \\ h_1h_2h_1^{-1}h_2^{-1}\in N $$ thus, (taking the contrapositive of the above) we get: $$ h_1h_2h_1^{-1}h_2^{-1}\notin N \Leftrightarrow r(h_1)r(h_2)=r(h_1h_2)\neq r(h_2h_1)=r(h_2)r(h_1) $$ Maybe the simplest counterexample (reflecting the above argument in its most simplistic form) is one already mentioned by user Jason DeVito in his comment above: Pick any non-abelian group $H$ and let $N=\{e\}$. It is easy to see that, in that case: $$H= H/N= H/\{e\}$$ Thus, any pair of non-commuting elements of $H$ would do.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1940149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let $f:[0,1]\to\Bbb{R}$ be a continuous function which is differentiable in $(0,1)$, but not at $0$ and $1$ I need to draw the graph of a function which is differentiable on the whole of $(0,1)$ but not at $0$ and $1$. The only function I can imagine is $f(x)=\operatorname{cotg}(\frac{x}{\pi})$, but this $f$ is not well-defined on $[0,1]$, despite being non-differentiable at $0$ and $1$.
Oh, I missed the restriction that the domain of $f$ is $[0,1]$. If $f$ is continuous on $[0,1]$ and differentiable on $(0,1)$, then by the mean value theorem $$f'(0) = \lim_{h\rightarrow 0^+}\frac{f(h) - f(0)}{h} = \lim_{x\rightarrow 0^+} f'(x)$$ whenever the right-hand limit exists, so $f$ is (one-sidedly) differentiable at $x = 0$ if $\lim_{x\to 0^+} f'(x)$ exists. Likewise $$f'(1) = \lim_{h\rightarrow 0^+}\frac{f(1-h) - f(1)}{-h} = \lim_{x\rightarrow 1^-} f'(x)$$ whenever that limit exists, so $f$ is differentiable at $x=1$ if $\lim_{x\to 1^-} f'(x)$ exists. So one way to get what we want is a function whose derivative blows up at the endpoints. Graphically, a function that goes vertical at $x= 0,1$ will do this, e.g. $$f(x) = \sqrt{\tfrac 14 - (\tfrac 12 - x)^2}=\sqrt{x-x^2}$$ [here $f'(x) = \frac{1-2x}{2\sqrt{x-x^2}}$, which is defined for $0 < x < 1$ but unbounded near $x=0$ and $x=1$]. BUT if $f:\mathbb R \rightarrow \mathbb R$ is continuous and differentiable on $(0,1)$ but not differentiable at $0$ or $1$, this is certainly possible if $f$ "goes off at a sharp angle" at $0$ and $1$. Examples: $f(x) = |x(x-1)|$; or $f(x) = 0$ for $x < 0$, $f(x) = x$ for $0 \le x \le 1$, $f(x) = 1$ for $x > 1$; or $f(x) = x^2$ for $x < 0$, $f(x)=x$ for $0 \le x \le 1$, $f(x)=-x^3 + 2$ for $x > 1$, etc. In all of these, $\lim_{h\rightarrow 0^+}\frac{f(x + h) - f(x)}{h}\ne \lim_{h\rightarrow 0^-}\frac{f(x + h) - f(x)}{h}$ at $x = 0$ or $x=1$, so $f$ is not differentiable there.
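A quick numerical illustration of the circle example $f(x)=\sqrt{\frac14-(\frac12-x)^2}=\sqrt{x-x^2}$: the one-sided difference quotients blow up at the endpoints while an interior quotient stays bounded.

```python
import math

def f(x):
    return math.sqrt(0.25 - (0.5 - x) ** 2)   # = sqrt(x - x^2)

for h in (1e-2, 1e-4, 1e-6, 1e-8):
    right_at_0 = (f(h) - f(0)) / h        # ~ 1/sqrt(h), blows up
    left_at_1 = (f(1) - f(1 - h)) / h     # ~ -1/sqrt(h), blows up
    interior = (f(0.5 + h) - f(0.5)) / h  # stays bounded (f'(1/2) = 0)
    print(h, right_at_0, left_at_1, interior)
```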
{ "language": "en", "url": "https://math.stackexchange.com/questions/1940267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Prove that $\tan(x+\frac{\pi}8)>\mathrm{e}^x+\ln x$ for $x\in(0,\frac{3\pi}8)$ Prove that $$\tan\left(x+\frac{\pi}8\right)> \mathrm{e}^x+\ln x,\quad x\in\left(0,\frac{3\pi}8\right).$$ By plotting the graph I found that this is indeed true (in fact I found this inequality through plotting), but how can I prove this? There are various functions in this inequality and I don't know how to start. Any hints will be appreciated. Edit: I was suggested to post the graph (from WolframAlpha). This is the graph of $\tan(x+\frac{\pi}8)-\mathrm{e}^x-\ln x$.
To prove that $$\tan(x+\frac{\pi}8)>e^x+\ln x \qquad\ x\in(0,\frac{3\pi}8)$$ I suppose that it is sufficient to show that the function $$F(x)=\tan(x+\frac{\pi}8)-e^x-\ln x $$ is always positive in the given range. The function tends to $+\infty$ at both ends. So, let us show that $F(x)$ goes through a minimum value which is positive. Compute the derivatives $$F'(x)=\sec ^2\left(x+\frac{\pi }{8}\right)-e^x-\frac{1}{x}$$ $$F''(x)=2 \tan \left(x+\frac{\pi }{8}\right) \sec ^2\left(x+\frac{\pi }{8}\right)-e^x+\frac{1}{x^2}$$ The first derivative is $-\infty$ at the left bound and $\infty$ at the upper bound. If you plot $F'(x)$, you will notice that $F'(x)=0$ has a single root. Since no explicit solution can be obtained for such a transcendental equation, we need to use some numerical method. So, let us use Newton's method, which gives the iterates $$x_{n+1}=x_n-\frac{F'(x_n)}{F''(x_n)}$$ and start iterating at the midpoint of the interval $(x_0=\frac{3 \pi }{16})$. This will give the following iterates $$\left( \begin{array}{cc} n & x_n \\ 0 & 0.5890486225 \\ 1 & 0.6131825159 \\ 2 & 0.6121561606 \\ 3 & 0.6121539948 \end{array} \right)$$ Now, using your pocket calculator, $$F(0.6121539948)\approx 0.22053$$ $$F''(0.6121539948)\approx 11.7739$$ The second derivative test confirms that the point is a minimum.
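The whole computation is easy to reproduce; a Python sketch of the Newton iteration (same starting point as above):

```python
import math

def F(x):
    return math.tan(x + math.pi / 8) - math.exp(x) - math.log(x)

def F1(x):  # F'
    return 1 / math.cos(x + math.pi / 8) ** 2 - math.exp(x) - 1 / x

def F2(x):  # F''
    s = 1 / math.cos(x + math.pi / 8) ** 2
    return 2 * math.tan(x + math.pi / 8) * s - math.exp(x) + 1 / x ** 2

x = 3 * math.pi / 16          # midpoint of (0, 3*pi/8)
for _ in range(10):
    x -= F1(x) / F2(x)        # Newton iteration on F'

print(x, F(x))                # x ~ 0.6121539948, F(x) ~ 0.2205 > 0
```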
{ "language": "en", "url": "https://math.stackexchange.com/questions/1940384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Uniqueness of the unitization of a non-unital $C^*$-algebra I am trying to show that the unitization of a non-unital AF $C^*$-algebra, $A$, is again an AF $C^*$-algebra. In order to do so, I tried to claim the following: Let $A, B$ be $C^*$ algebras and suppose that $A$ is an ideal in $B$ and $B/A$ is isomorphic to $\Bbb{C}$. In these conditions, is it true that $B$ and $A^+$ (the unitization of $A$) are isomorphic as $C^*$ algebras? Edit: I think I should require $B$ to have a multiplicative unit.
When $A$ is not unital, we have that $A^+=A\oplus\mathbb C$ with the product $$(a_1,\lambda_1)(a_2,\lambda_2)=(a_1a_2+\lambda_2a_1+\lambda_1a_2,\lambda_1\lambda_2).$$ Consider the map $\phi:A^+\to B$ given by $$ \phi(a,\lambda)=a+\lambda I. $$ This is clearly a $*$-homomorphism. Since $A$ is non-unital, this map is one-to-one. So it remains to check it is onto. Given $b\in B$, we have $b+A=\lambda(b)+A$ for some $\lambda(b)\in\mathbb C$, so there exists $a\in A$ such that $b=\lambda I+a=\phi(a,\lambda(b))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1940472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
As $\frac1{(1-x)^2} = {1+2x+3x^2 +\dots+nx^{n-1}} + \frac{x^n}{(1-x)^2} + \frac{nx^n}{(1-x)}$, is this a function of only $x$ or of both $x$ and $n$? $$\frac1{(1-x)^2} = {1+2x+3x^2 +\dots+nx^{n-1}} + \frac{x^n}{(1-x)^2} + \frac{nx^n}{(1-x)}$$ Now, the LHS seems to be a function of only $x$, whereas the RHS seems to be a function of both $x$ and $n$. Please resolve this ambiguity for me, as I don't understand why choosing a value of $x$ on the LHS gives a value that is independent of whatever value $n$ holds.
If you rewrite as: $$\lim\limits_{n\to\infty}\sum\limits_{i=1}^n ix^{i-1}$$ the '$n$' is just an index.
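To see this concretely: the identity in the question holds for every $n$ — the $n$-dependence of the partial sum is exactly cancelled by the two remainder terms — and for $|x|<1$ the remainder vanishes as $n\to\infty$. A quick numerical check:

```python
def lhs(x):
    return 1 / (1 - x) ** 2

def rhs(x, n):
    partial = sum(i * x ** (i - 1) for i in range(1, n + 1))
    remainder = x ** n / (1 - x) ** 2 + n * x ** n / (1 - x)
    return partial + remainder

x = 0.3
for n in (1, 5, 50):
    print(n, lhs(x), rhs(x, n))   # equal for every n
```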
{ "language": "en", "url": "https://math.stackexchange.com/questions/1940583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Union of metric spaces Is the union of two metric spaces a metric space? I tried it but couldn't define a suitable metric on the union. Can somebody help me to understand it?
(1). If $(X_1,d_1), (X_2,d_2)$ are metric spaces and $X_1\cap X_2=\emptyset$ we can define a metric $d_3$ on $X_1\cup X_2$ by $d_3(x_1,x_2)=1$ when $x_1\in X_1 ,x_2\in X_2,$ and $d_3(x,y)=d_1(x,y)$ when $x,y\in X_1,$ and $d_3(x,y)=d_2(x,y)$ when $x,y\in X_2.$ (For the triangle inequality to hold across the two pieces we need $d_1,d_2\le 2$; if not, first replace each $d_i$ by the topologically equivalent bounded metric $\min(d_i,1)$.) Then the subspace topologies on $X_1$ and $X_2,$ as subspaces of $X_1\cup X_2,$ coincide with their topologies induced by $d_1$ and $d_2.$ (2). If $X_1\cap X_2 \ne \emptyset$ this may not be possible. Example: For $j\in \{1,2\}$ let $$X_j=(\mathbb Q\times \{0\})\cup ((\mathbb R \backslash \mathbb Q)\times \{j\}), $$ and let $d_j((x,u),(y,v))=|x-y|$ for $(x,u),(y,v)\in X_j.$ Note that each $X_j$ is an isometric copy of $\mathbb R.$ Suppose $T$ is a topology on $X_1\cup X_2$ such that the subspace topologies on $X_1$ and $X_2$ are generated by the metrics $d_1,d_2.$ Let $(q_n)_{n\in \mathbb N}$ be a sequence in $\mathbb Q$ with $\lim_{n\to \infty}|q_n-\sqrt 2|=0.$ Consider any $U_1, U_2\in T$ such that $(\sqrt 2,1)\in U_1$ and $ (\sqrt 2,2)\in U_2.$ For $j\in \{1,2\}$ the set $U_j\cap X_j$ is a nbhd of $(\sqrt 2,j)$ in the space $X_j,$ and $\lim_{n\to \infty}d_j((\sqrt 2,j),(q_n,0))=\lim_{n\to \infty}|\sqrt 2 -q_n|=0.$ Therefore, for $j\in \{1,2\}$ the set $\{n\in \mathbb N: (q_n,0)\not \in U_j\cap X_j\}$ is finite, so $\{n\in \mathbb N:q_n \not \in U_j\}$ is finite. So $q_n\in U_1\cap U_2$ for all but finitely many $n\in \mathbb N.$ So it is not possible that $U_1\cap U_2=\emptyset.$ So the points $(\sqrt 2,1) ,(\sqrt 2,2)$ do not have disjoint nbhds in $X_1\cup X_2.$ So the topology $T$ on $X_1\cup X_2$ is not Hausdorff, and cannot be generated by a metric. Remark: There does exist a non-Hausdorff topology $T$ on $X_1\cup X_2$ such that the subspace topologies on $X_1$ and $X_2$, as subspaces of $X_1\cup X_2,$ are generated by the metrics $d_1,d_2.$
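Part (1) can be checked mechanically. In the sketch below the two pieces carry the metric $\min(|x-y|,1)$ — capping at $1$ is a standard adjustment that keeps the triangle inequality valid across the two pieces even when the original metrics are unbounded:

```python
import itertools, random

def d3(p, q):
    """Glued metric on the disjoint union.  Points are (tag, value) pairs;
    within each piece the metric is min(|x - y|, 1)."""
    (tp, xp), (tq, xq) = p, q
    if tp == tq:
        return min(abs(xp - xq), 1.0)
    return 1.0                         # points in different pieces

random.seed(0)
pts = [(1, random.uniform(0, 10)) for _ in range(15)] + \
      [(2, random.uniform(0, 10)) for _ in range(15)]

for p, q, r in itertools.permutations(pts, 3):
    assert d3(p, q) <= d3(p, r) + d3(r, q) + 1e-12
print("triangle inequality verified on", len(pts), "sample points")
```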
{ "language": "en", "url": "https://math.stackexchange.com/questions/1940700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Non trivial solutions to $ y'' + 2y' + ay = 0$ For which values of $a$ does the following equation have non-trivial solutions $y'' + 2y' + ay = 0 , \space y(0) = y(\pi ) = 0$ The characteristic equation is: $$x^2+2x+a = 0$$ and I have found the roots to be $$x_1 = \sqrt{(1-a)}-1$$ $$x_2 = -\sqrt{(1-a)}-1$$ I have tried $a =1, a<1, a>1$. $a=1$ (repeated root), I got: $y(t) = C_1e^{-t}+C_2te^{-t}$ and $C_1=C_2=0$ $a>1$, gives complex roots and the solution is of the form: $y(t) = C_1e^{-t}\cos (\sqrt{a-1}\,t)+C_2e^{-t}\sin (\sqrt{a-1}\,t)$ and $C_1=C_2=0$ $a<1$, gives distinct roots. I don't know if I have calculated the complex roots right. I have found that all solutions give the trivial. Is that correct? The solution says that $a=1+n^2$ Where does that come from?
Hint. Consider the case when $1-a>0$, $1-a<0$, $1-a=0$. Note that when $1-a<0$, then the general solution is $$y(x)=Ae^{-x}\cos(\sqrt{a-1}x)+Be^{-x}\sin(\sqrt{a-1}x).$$ The condition $y(0)=0$ implies that $A=0$ and $y(\pi)=0$ implies $$Be^{-\pi}\sin(\sqrt{a-1}\pi)=0$$ The solution is non trivial if $B\not=0$, therefore $\sqrt{a-1}$ should be an integer.
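So nontrivial solutions exist exactly when $\sqrt{a-1}=n$ for a positive integer $n$, i.e. $a=1+n^2$, with eigenfunctions $y(x)=Be^{-x}\sin(nx)$. A finite-difference residual check for the case $n=3$:

```python
import math

def residual(a, n, x, h=1e-4):
    """Finite-difference residual of y'' + 2y' + a*y at x,
    for the candidate eigenfunction y(t) = e^{-t} sin(n t)."""
    y = lambda t: math.exp(-t) * math.sin(n * t)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return ypp + 2 * yp + a * y(x)

n = 3
a = 1 + n ** 2
worst = max(abs(residual(a, n, x)) for x in (0.5, 1.0, 2.0, 3.0))
print(worst)                                                     # tiny: the ODE is satisfied
print(math.sin(0), math.exp(-math.pi) * math.sin(n * math.pi))   # boundary values ~ 0
```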
{ "language": "en", "url": "https://math.stackexchange.com/questions/1940808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Burgers' equation with boundary conditions Consider the signaling problem $u_t + c(u)u_x= 0, t> 0, x> 0$ $u(x, 0) = u_0, x> 0,$ $u(0, t) = g(t), t> 0,$ where $c$ and $g$ are given functions and $u_0$ is a positive constant. If $c (u) > 0$, under what conditions on the signal $g$ will no shocks form? Determine the solution in this case in the domain $x > 0, t> 0$. Here's what I have: Characteristic lines are given by: $x = ut + x_o$ Solving for $du/dt$ gives: $u = k(x_o)$ The initial condition gives: $u(x_o, 0) = u_o = k(x_o) = k(ut-x)$ $u(x-ut, 0) = k(ut-x)$ The boundary condition gives: $u(0, t) = g(t)$ $u(0, \frac{x-x_o}{u}) = g(\frac{x-x_o}{u})$ $u(x,t) = g(\frac{x-x_o}{u})$ But from here I am quite confused... I don't see how I could solve for $u$ or determine conditions on $g$ where a shock will not develop?
$$u_t+C(u)u_x=0\quad\text{where }C(u)\text{ is a given function}$$ GENERAL SOLUTION : The system of characteristic differential equations is : $$\frac{dt}{1}=\frac{dx}{C(u)}=\frac{du}{0}$$ A first equation of characteristic curves comes from $du=0\quad\to\quad u=c_1$ . A second equation of characteristic curves comes from $\frac{dt}{1}=\frac{dx}{C(c_1)}\quad\to\quad x-C(c_1)t=c_2$ The general solution of the PDE is expressed in the form of an implicit equation $\Phi\left(c_1,c_2\right)=0$ where $\Phi$ is any differentiable function of two variables. $$\Phi\left(u\:,\:x-C(u)t\right)=0$$ This is one way to express an arbitrary relationship between the two variables. Equivalently, the relationship can be expressed by an arbitrary function $F$ : $$x-C(u)t=F(u)$$ where $F$ is any differentiable function. In general, this implicit equation cannot be solved for $u$ in closed form. DETERMINATION OF THE FUNCTION $F$ according to the condition $u(0,t)=g(t)$ : $0-C\left(g(t)\right)t=F\left(g(t)\right)$ Let $g(t)=X \quad\to\quad t=g^{-1}(X)\quad$ where $g^{-1}$ is the inverse function of $g$. $$-C(X)g^{-1}(X)=F(X)$$ Thus the function $F$ is now determined, given the functions $C$ and $g$. PARTICULAR SOLUTION FITTED WITH THE GIVEN CONDITION : With the particular function $F$ found above : $x-C(u)t=F(u)$ with $F(u)=-C(u)g^{-1}(u)$ $$x-C(u)t=-C(u)g^{-1}(u)$$ $$x+C(u)\left(g^{-1}(u)-t \right)=0$$ $g^{-1}(u)=t-\frac{x}{C(u)}$ $$u=g\left(t-\frac{x}{C(u)}\right)$$ The result is in the form of an implicit equation. Solving for $u$ to obtain an explicit form $u(x,t)$ is generally not possible, except for particular functions $C$ and $g$.
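As a concrete sanity check (with functions of my own choosing, not from the original problem), take $C(u)=u$ and $g(t)=t$. The implicit solution $u=g(t-x/C(u))$ becomes $u=t-x/u$, i.e. $u^2-tu+x=0$, whose branch connected to the boundary data is $u=\frac12\left(t+\sqrt{t^2-4x}\right)$. Fixed-point iteration on the implicit form recovers it:

```python
import math

def u_implicit(x, t, iters=200):
    u = t                      # start from the boundary value g(t)
    for _ in range(iters):
        u = t - x / u          # u = g(t - x/C(u)) with C(u) = u, g(t) = t
    return u

x, t = 1.0, 3.0
u = u_implicit(x, t)
u_exact = 0.5 * (t + math.sqrt(t * t - 4 * x))
print(u, u_exact)             # both ~2.618
```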
{ "language": "en", "url": "https://math.stackexchange.com/questions/1940946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why is cross product defined in the way that it is? $\mathbf{a}\times \mathbf{b}$ follows the right hand rule? Why not the left hand rule? Why is it $a b \sin (x)$ times the perpendicular vector? Why is $\sin (x)$ used with the vectors but $\cos(x)$ in the scalar product? So why is the cross product defined in the way that it is? I am mainly interested in the right-hand-rule definition too, as it seems the hardest to motivate.
This may be a bit too deep but let $V$ be a finite dimensional vector space with basis $v_1,...,v_n$. We say $(v_1,...,v_n)$ is an oriented basis for $V$. We can define an equivalence relation on ordered bases of $V$ by $[v_1,...,v_n] \sim [b_1,...,b_n] \iff [v_1,...,v_n] = A[b_1,...,b_n]$ (where $A$ is the transition matrix) and $\textbf{det}(A)>0$. Therefore, orientations are broken up into two classes, positive and negative. If we let $\textbf{e}^1,...,\textbf{e}^n$ be the standard basis for $\mathbb{R}^n$ then if we wish to check that $[b_1,...,b_n] \sim [\textbf{e}^1,...,\textbf{e}^n]$ then we simply have to look at: $$\textbf{det}\left(\begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix}\right) >0$$ Since $A = [b_i]$ is the change of basis matrix. Edit (1): Above gives a generalization of determining orientation. In your case you would have $E=[\textbf{e}^1,\textbf{e}^2, \textbf{e}^3]$ which gives positive orientation and the ''right hand rule'' should be thought of as a geometric property that one can check without knowing all the information I presented above. Edit (2): In regards to your question about why $\sin \theta, \cos \theta$ appear where they do: this is also because of geometry. The area of a parallelogram is base $\times$ height $= |\vec{a}|\,|\vec{b}| \sin \theta = |\vec{a} \times \vec{b}|$, where $\vec{a}, \vec{b}$ span the parallelogram.
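Tying the two edits together numerically: for any linearly independent $a,b\in\mathbb{R}^3$ the triple $(a,\,b,\,a\times b)$ is positively oriented, i.e. $\det\begin{bmatrix} a & b & a\times b \end{bmatrix}>0$ — the right-hand rule restated through the determinant criterion. A quick randomized check, using the scalar triple product for the determinant:

```python
import random

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def det_cols(u, v, w):          # det of the 3x3 matrix with columns u, v, w
    return dot(u, cross(v, w))

random.seed(1)
for _ in range(100):
    a = tuple(random.uniform(-1, 1) for _ in range(3))
    b = tuple(random.uniform(-1, 1) for _ in range(3))
    c = cross(a, b)
    if dot(c, c) > 1e-12:       # skip (near-)parallel pairs
        assert det_cols(a, b, c) > 0    # right-handed: positive orientation
        assert det_cols(b, a, c) < 0    # swapping two vectors flips it
print("(a, b, a x b) is always positively oriented")
```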
{ "language": "en", "url": "https://math.stackexchange.com/questions/1941044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60", "answer_count": 13, "answer_id": 2 }
Problem on rearrangement inequality I read that the rearrangement inequality deals with sorted sequences of real numbers. We have $-5>-6$ and $3>2$ , hence by rearrangement inequality we have $-15>-12$ which is obviously false. What am I missing out?
In this context the rearrangement inequality says that $$ x_1 = -6 < -5 = x_2 \text{ and } y_1 = 2 < 3 = y_2 $$ implies that $$ -28 = -18 -10 = x_1 y_2 + x_2 y_1 \le x_1 y_1 + x_2 y_2 = -12 -15 =-27, $$ which is true. It does not say what you're asserting.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1941136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
a divides bc if and only if For all integers $a,b,c$, $$\;a\mid bc \iff \frac{a}{\gcd⁡(a,b)}∣c.$$ Can anyone help me in proving above statement? I thought I'd start with trying to prove $\Leftarrow.$ $$\frac{a}{\gcd⁡(a,b)}\mid c \Rightarrow$$ I know that $\gcd(a, b) = ax+by = d$ but I am lost as to how to make use of that to proceed from here.
$ a\mid bc \iff a\mid ac,bc\iff a\mid (ac,bc)=(a,b)c\iff a/(a,b)\mid c\ $ by gcd Distributive Law
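The chain of equivalences is easy to confirm by brute force over a range of positive integers:

```python
from math import gcd

for a in range(1, 31):
    for b in range(1, 31):
        for c in range(1, 31):
            lhs = (b * c) % a == 0                 # a | bc
            rhs = c % (a // gcd(a, b)) == 0        # a/(a,b) | c
            assert lhs == rhs, (a, b, c)
print("a | bc  <=>  a/gcd(a,b) | c  verified for 1..30")
```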
{ "language": "en", "url": "https://math.stackexchange.com/questions/1941259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Graph theory problem: n-dimensional cube Let $Q_n$ be the $n-dimensional$ cube graph: Its vertices are all the $n-tuples$ of $0$ and $1$, with two vertices being adjacent if they differ in precisely one position. For example, in $Q_3$, the vertices $(1,0,0)$ and $(1,0,1)$ are adjacent because they differ only in the third position. Show that $Q_n$ is bipartite. Can anybody help me with this question please?
Hint: Think of the defining properties of a bipartite graph. In particular, what do you know about the length of cycles in a bipartite graph?
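For reference after working the hint: the usual two-colouring of $Q_n$ is by parity of the number of $1$s — flipping a single coordinate always changes the parity, so every edge joins the two colour classes (equivalently, all cycles are even). A quick check for small $n$:

```python
from itertools import product

def is_bipartite_by_parity(n):
    for v in product((0, 1), repeat=n):
        for i in range(n):                 # neighbours differ in one position
            w = v[:i] + (1 - v[i],) + v[i + 1:]
            if sum(v) % 2 == sum(w) % 2:   # same class -> not a valid 2-colouring
                return False
    return True

print(all(is_bipartite_by_parity(n) for n in range(1, 7)))  # True
```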
{ "language": "en", "url": "https://math.stackexchange.com/questions/1941390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability Unfair Coins I have two coins, one yielding heads with probability $P(H) = a$ and the other with $P(H) = b$. Is it possible for the following to be equal? If so, what should be values for $a$ and $b$? (a) $p_1$=neither is head (b) $p_2$=exactly one of the coins is a head (c) $p_3$=both are heads I am not sure where I am going with this. I am thinking that: $p_1= (1-a)(1-b)$, $p_2=a(1-b)+b(1-a)$ $p_3=ab$ Is this correct? In addition, I see the coins as definitely unfair, but should they be double headed/tailed? I found: $p_1=p_3 \iff a+b=1$ Is it safe to describe the possible values of $a$ and $b$ as complements?
To satisfy (a), (b) and (c) you need first (a)=(c), which, as you say, means $$a+b=1$$ This also means that $1-2ab=ab$ (from (b)=(c)), so $ab=\frac 13$. Substituting $a=1-b$, we get $b(1-b)=\frac 13$. However $b(1-b)$ has a maximum at $b=\frac 12$, where it equals $\frac 14<\frac 13$, meaning that no real solution is possible.
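Numerically, the obstruction is visible at once: $b(1-b)$ peaks at $\frac14<\frac13$; equivalently, the quadratic $t^2-t+\frac13$ (whose roots would be $a$ and $b$) has negative discriminant.

```python
peak = max(b * (1 - b) for b in (i / 10000 for i in range(10001)))
print(peak)                          # 0.25 < 1/3

disc = (-1) ** 2 - 4 * 1 * (1 / 3)   # discriminant of t^2 - t + 1/3
print(disc)                          # negative: no real a, b
```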
{ "language": "en", "url": "https://math.stackexchange.com/questions/1941491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Evaluation of Half Dirac Delta Function $\delta^{(1/2)}_\mu(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}|k|^{1/2}e^{ikx}e^{-\mu|k|}dk$ which provides the half-Dirac delta distribution in the limiting case of $\mu\to0$. I know the solution of $\delta^{(1/2)}_\mu=\frac{1}{2\pi}\int_{-\infty}^{\infty}|k|^{1/2}e^{ikx}e^{-\mu|k|}dk$ is $\sqrt{\frac{1}{4\pi}}(x^2+\mu^2)^{-3/4}\cos(\frac{3}{2}\tan^{-1}(\frac{x}{\mu}))$, but I would appreciate knowing how the evaluation is accomplished. I can get to the point of $\delta^{(1/2)}_\mu=2\int_0^{\infty}\cos(kx)e^{-\mu k}k^{1/2}dk$, but beyond here I have no idea. EDIT: Obvious thing that I missed on the first go-about is that this is related to the Laplace transform of $k^{1/2}\cos(k)$. Evaluating this seems "straight-forward" albeit incredibly nontrivial.
As I said you need some complex analysis for showing that $$\int_0^\infty t^{a-1} e^{-zt}dt = \Gamma(a) z^{-a}, \qquad \text{Re}(z) > 0,\ \text{Re}(a) > 0$$ The proof is that for $\text{Re}(z) > 0$ the LHS and the RHS are analytic in $z$, and for $z \in (0,\infty)$ they are equal (change of variable $u = zt$), hence by analytic continuation (or by the identity theorem, or by the Cauchy integral theorem) they are equal for every $\text{Re}(z) > 0$. Then you have (for $x \in \mathbb{R}$, $\mu > 0$) $$2\pi \delta^{(1/2)}_\mu(x) = \int_{-\infty}^\infty |k|^{1/2} e^{-\mu |k|} e^{i k x}dk = \int_0^\infty k^{1/2} (e^{-(\mu+ix) k}+e^{-(\mu-ix) k})dk$$ $$ = \Gamma(3/2) ((\mu+ix)^{-3/2}+(\mu-ix)^{-3/2}) = \Gamma(3/2)\, 2\,\text{Re}((\mu+ix)^{-3/2})$$ $$ = 2 \Gamma(3/2) \, |\mu+ix|^{-3/2} \cos(\text{arg}((\mu+ix)^{-3/2})) = \sqrt{\pi}\, (\mu^2+x^2)^{-3/4} \cos(\frac{3}{2}\arctan(x/\mu))$$ There is a plot of $\delta_\mu^{1/2}$ for $\mu = 1,\,0.7,\,0.5$.
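The closed form is easy to sanity-check against direct numerical integration of $\delta^{(1/2)}_\mu(x)=\frac1\pi\int_0^\infty \sqrt{k}\,e^{-\mu k}\cos(kx)\,dk$; the cut-off and step size below are ad-hoc choices of mine:

```python
import math

def delta_half_numeric(x, mu, kmax=40.0, n=200000):
    """(1/pi) * integral_0^inf sqrt(k) e^{-mu k} cos(k x) dk,
    approximated with a simple uniform-grid rule."""
    h = kmax / n
    total = 0.0
    for i in range(1, n):        # integrand vanishes at k = 0, ~0 at k = kmax
        k = i * h
        total += math.sqrt(k) * math.exp(-mu * k) * math.cos(k * x)
    return h * total / math.pi

def delta_half_closed(x, mu):
    return (math.sqrt(1 / (4 * math.pi))
            * (x * x + mu * mu) ** (-0.75)
            * math.cos(1.5 * math.atan2(x, mu)))

x, mu = 0.5, 1.0
print(delta_half_numeric(x, mu), delta_half_closed(x, mu))  # agree to several digits
```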
{ "language": "en", "url": "https://math.stackexchange.com/questions/1941585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What is the quotient group mod the identity I am trying to prove something using the first isomorphism theorem, which basically says if $G,H$ are groups, and $f:G\to H$ is a group homomorphism, then $G/\ker(f)\cong f(G)$. In this case, suppose $f$ is an epimorphism, meaning $f(G)=H$. I want to show that two groups are isomorphic, so one way I tried to do this was to find a homomorphism with trivial kernel. Which would then give $G/ \{e_G\} \cong H$ Does this even make sense? I think that $G/\{e_G\}$ is just $G$ but I have a feeling this doesn't make sense. Any help is greatly appreciated.
You are correct in that if $f:G\to H$ is a surjective homomorphism (epimorphism) then $G/\text{Ker} f \cong H$. It is perfectly valid to find a map $f$ with $\text{Ker}f = \{e_G\}$. The valid conclusion is then that $G /\{e_G\} \cong G \cong H$. It is worth noting however that in your search for such a map, showing that its kernel is trivial is the same as showing that it is injective, and showing that it is an epimorphism guarantees surjectivity. So really the isomorphism theorem here is overkill, as the process of verifying the hypotheses proves your claim without the actual result of the first isomorphism theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1941681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
minimum value of expression $a^2+b^2$ If $a,b$ are two non-zero real numbers and $ab(a^2-b^2) = a^2+b^2,$ then find $\min(a^2+b^2)$. $\bf{My\; Try::}$ We can write it as $$ab=\frac{a^2+b^2}{a^2-b^2}\Rightarrow a^2b^2=\frac{(a^2+b^2)^2}{(a^2+b^2)^2-4a^2b^2}$$ Now put $a^2+b^2=u$ and $a^2b^2=v$; then the expression converts into $$v=\frac{u^2}{u^2-4v}\Rightarrow 4v^2-u^2v+u^2=0$$ For real roots, $\bf{Discriminant \geq 0}$ $$u^4-16u^2\geq 0\Rightarrow u^2(u^2-16)\geq 0$$ So we get $$u^2\geq 16\Rightarrow u\geq 4\Rightarrow a^2+b^2\geq 4.$$ My question is: can we solve it any other way (without trigonometric substitution)? If yes, then please explain here. Thanks
If we let $z = a + b i$ then we get $$ab = \frac{ z^2 - \bar{z}^2}{4i}$$ $$a^2 - b^2 = \frac{ z^2 + \bar{z}^2}{2}$$ $$a^2 + b^2 = z \bar{z}$$ Thus your original equation can be rewritten as $$\frac{Im(z^4)}{4} = \frac{z^4 - \bar{z}^4}{8 i} = \frac{(z^2 - \bar{z}^2)(z^2 + \bar{z}^2)}{8i} = z \bar{z}$$ We have $\frac{\|z\|^4}{4} \geq \frac{Im(z^4)}{4} = \|z\|^2$, so $\|z\|^2 \geq 4$, as desired. This is achieved exactly when $z^4$ is purely imaginary.
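A complementary way to see the bound (my reformulation, not in the answer above): in polar coordinates $a=r\cos t$, $b=r\sin t$ the constraint reads $r^4\cdot\frac14\sin 4t=r^2$, so $r^2=4/\sin 4t\ge 4$, with equality at $t=\pi/8$ — exactly the "$z^4$ purely imaginary" case. Numerically:

```python
import math

# constraint in polar form: a = r cos t, b = r sin t gives r^2 = 4 / sin(4t)
thetas = [i * math.pi / 80000 for i in range(1, 20000)]   # t in (0, pi/4)
r2 = [4 / math.sin(4 * t) for t in thetas]
print(min(r2))   # minimum ~4, attained near t = pi/8

# the equality case
a, b = 2 * math.cos(math.pi / 8), 2 * math.sin(math.pi / 8)
print(a * b * (a * a - b * b), a * a + b * b)   # constraint holds and a^2 + b^2 = 4
```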
{ "language": "en", "url": "https://math.stackexchange.com/questions/1941844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Showing there exists no subintervals without digit. Let $A$ be the subset of $(0,1)$ containing the real numbers which have a 3 in their decimal expansion. As part of a larger result in an exercise in my text, I'm trying to show that $B = [0,1] \setminus A$ doesn't have any open interval $(x,y)$ with $0 \leq x < y \leq 1$. The prompt is to first show that for any $0 \leq x < y \leq 1$ there is a non-empty subinterval $(f,g) \subseteq (x,y)$ such that $(f,g) \subseteq A$. I'm having a hard time figuring out how to go about proving the prompt first of all. My first thought was to use the fact that between any two rationals/irrationals there is an irrational/rational and place $3$s on the end of decimal expansions to show that there must be some matching subset for that criteria, but I'm seriously doubting whether this approach is correct.
Consider the standard (i.e., we prefer things like $0.7000\ldots$ over $0.6999\ldots$) decimal digit representations $0.x_1x_2x_3\ldots$ and $0.y_1y_2y_3\ldots$ of $x$ and $y$. (The special case $y=1.0000\ldots$ is not covered by this, but it is not difficult.) As $x\ne y$, there is a first index $n$ such that $x_n\ne y_n$, and as $x<y$, we must have $x_n<y_n$ for this $n$. As we use standard representations, there is $m>n$ such that $x_m\ne 9$. Let $z$ be the number with representation $0.z_1z_2z_3\ldots$ where $$z_k=\begin{cases}x_k&k<m\\ x_m+1&k=m\\ 3&k=m+1\\ 0&k>m+1\end{cases}$$ Then * *$x<z$ because $z_m>x_m$ and $z_k=x_k$ for $k<m$, *$z<y$ because $z_n<y_n$ and $z_k=y_k$ for $k<n$, *and $z\in A$ because $z_{m+1}=3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1941985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$ \int\frac{2+\sqrt{x}}{\left(x+\sqrt{x}+1\right)^2}dx$ $$I=\int\frac{2+\sqrt{x}}{\left(x+\sqrt{x}+1\right)^2}dx$$ I can't think of a substitution to solve this problem, by parts won't work here. Can anyone tell how should I solve this problem?
Let $$I = \int\frac{2+\sqrt{x}}{(x+\sqrt{x}+1)^2}dx = \int \frac{2+\sqrt{x}}{x^2\left(1+x^{-\frac{1}{2}}+x^{-1}\right)^2}dx$$ So $$I = \int\frac{2x^{-2}+x^{-\frac{3}{2}}}{\left(1+x^{-\frac{1}{2}}+x^{-1}\right)^2}dx$$ Put $\left(1+x^{-\frac{1}{2}}+x^{-1}\right) = t\;,$ Then $\displaystyle \left(-\frac{1}{2}x^{-\frac{3}{2}}-x^{-2}\right)dx = dt\Rightarrow \left(2x^{-2}+x^{-\frac{3}{2}}\right)dx = -2dt$ So $$I = -2\int\frac{1}{t^2}dt = \frac{2}{t}+\mathcal{C} = \frac{2}{1+x^{-\frac{1}{2}}+x^{-1}}+\mathcal{C}=\frac{2x}{x+\sqrt{x}+1}+\mathcal{C}$$ So $$I = 2\left[\frac{(x+\sqrt{x}+1)-(\sqrt{x}+1)}{x+\sqrt{x}+1}\right]+\mathcal{C} = -\frac{2(\sqrt{x}+1)}{x+\sqrt{x}+1}+\mathcal{C'}$$
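As an informal check of the final antiderivative (sample points and step size are arbitrary choices), one can compare a central finite difference of $F$ against the integrand:

```python
import math

def F(x):
    # Antiderivative found above (additive constant dropped)
    return -2 * (math.sqrt(x) + 1) / (x + math.sqrt(x) + 1)

def integrand(x):
    return (2 + math.sqrt(x)) / (x + math.sqrt(x) + 1) ** 2

h = 1e-6
for x in [0.5, 1.0, 2.0, 5.0]:
    numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference ~ F'(x)
    assert math.isclose(numeric, integrand(x), rel_tol=1e-4)
```

The finite-difference derivative agrees with the integrand at every sample point, as it should.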
{ "language": "en", "url": "https://math.stackexchange.com/questions/1942038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Does the negative transpose sign mean the inverse of a transposed matrix or the transpose of an inverse matrix? I want to know the meaning of $$H^{-T}$$ Is it the same as $$(H^{-1})^T$$ or $$(H^T)^{-1}$$
$H^{-1}$ is defined such that $I=H^{-1}H=HH^{-1}$, taking the transpose of this equation yields $$I=I^T=(H^{-1}H)^T=H^T(H^{-1})^T$$ Therefore $(H^{-1})^T$ is the inverse of $H^T$, so $$(H^{-1})^T=(H^T)^{-1}$$ So yes, $H^{-T}$ it is the same as both.
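A small numerical illustration of $(H^{-1})^T=(H^T)^{-1}$ (the matrix $H$ is an arbitrary invertible example, and the $2\times 2$ inverse is hand-rolled to keep the snippet self-contained):

```python
def inv2(m):
    # Inverse of a 2x2 matrix [[a, b], [c, d]]
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(m):
    (a, b), (c, d) = m
    return [[a, c], [b, d]]

H = [[2.0, 1.0], [5.0, 3.0]]  # det = 1, so invertible

left = transpose(inv2(H))    # (H^{-1})^T
right = inv2(transpose(H))   # (H^T)^{-1}
assert all(abs(left[i][j] - right[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

Both orders of operations produce the same matrix, which is why the shorthand $H^{-T}$ is unambiguous.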
{ "language": "en", "url": "https://math.stackexchange.com/questions/1942211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Use congruences to find the remainder when $2^{50}$ and $41^{65}$ are divided by $7$ Use congruences to find the remainder when $2^{50}$ and $41^{65}$ are divided by 7 $2^{50}$ $50=(7)^2+1$ $2^{50}=2^{7\cdot7+1}$ and I'm not sure where to go from here?
Note that, $$\begin{align} & 2^3\equiv 1 \pmod7 \\ \implies & (2^3)^{16}\equiv 1^{16} \pmod7 \\ \implies & 2^{48}\equiv 1 \pmod7 \\ \implies & 2^{48}\cdot 2^2\equiv 1\cdot 2^2 \pmod7 \\ \implies & \color{blue}{2^{50}\equiv 4 \pmod7}\end{align}$$ Also note that $$\begin{align} & 41\equiv -1\pmod7 \\ \implies & 41^{65}\equiv (-1)^{65}\equiv -1\pmod7 \\ \implies & \color{blue}{41^{65}\equiv 6\pmod7}\end{align}$$
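Both congruences are easy to confirm in Python, since the built-in three-argument `pow` performs modular exponentiation:

```python
# pow(base, exp, mod) computes base**exp % mod without forming the huge power
assert pow(2, 50, 7) == 4
assert pow(41, 65, 7) == 6
```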
{ "language": "en", "url": "https://math.stackexchange.com/questions/1942297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A sum involving binomial coefficients, powers and alternating signs How to prove that $$ \sum_{k=0}^n(-1)^{n-k}{n \choose k} k^n = n! $$ using mathematical induction? Please do not use the definition of Stirling numbers or similar algebraic tricks.
Maybe these solutions don't meet the OP's specific requirements, but I think that they're worth making available anyway. $\color{red}{n!}$ is the number of bijective functions from $A=\{1,2,\ldots,n\}$ to $A$. Let us say that a function $f:A\to A$ has type $k$ if $|f(A)|=k$. The number of functions with type $\leq n$ is simply given by $n^n$, i.e. the number of all possible functions from $A$ to $A$. The number of functions with type $\leq(n-1)$ is given by $\binom{n}{n-1}$ (the number of ways for choosing $(n-1)$ elements in $A$) times $(n-1)^n$ (the number of functions from $A$ to a set with $(n-1)$ elements). By the inclusion-exclusion principle, the number of functions with type $n$ (i.e. the number of bijective functions) is given by: $$ n^n-\binom{n}{n-1}(n-1)^n+\binom{n}{n-2}(n-2)^n-\ldots = \color{red}{\sum_{k=0}^{n}(-1)^{n-k}\binom{n}{k}k^n}. $$ Alternative proof. Let $\delta$ be the operator that maps a polynomial $p(x)$ into $p(x+1)-p(x)$. The following properties are trivial to prove: * *If $\partial p$ (the degree of $p$) is $\geq 1$, the degree of $\delta p$ is $\partial p-1$; *If the leading term of $p(x)$ is $a x^n$, the leading term of $\delta p(x)$ is $an x^{n-1}$, i.e. $\delta$ acts on the leading term like the derivative $\frac{d}{dx}$. Since our sum is just $\delta^n p(0)$ with $p(x)=x^n$, by (2.) it follows that our sum equals $n(n-1)(n-2)\cdots 1 = n!$.
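The identity itself is easy to sanity-check numerically (the range of $n$ below is an arbitrary choice):

```python
from math import comb, factorial

def alternating_sum(n):
    # sum_{k=0}^{n} (-1)^(n-k) * C(n,k) * k^n
    return sum((-1) ** (n - k) * comb(n, k) * k ** n for k in range(n + 1))

for n in range(1, 10):
    assert alternating_sum(n) == factorial(n)
```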
{ "language": "en", "url": "https://math.stackexchange.com/questions/1942401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Are they the same function? $y = x^2/x$ and $y = x$ Are they the same function? $$y=\frac{x^2}{x}$$ and $$y=x$$ For the first function, if we don't divide both the numerator and the denominator by x, then the domain of it is the real line except the point x = 0, which is different from the domain of the second function.
They are not the same function, since they have different domains. For two functions to be the same, they should have the same domain and the same mapping rule.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1942489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Existence of measurable set $A\subseteq{\mathbb R}$ which is locally uncountable and so is its complement Is there a measurable set $A\subseteq{\mathbb R}$ such that $|A\cap I|$ and $|A^\complement\cap I|$ are both uncountable for any open interval $I$?
The set of real numbers whose decimal representation has a finite number of ones (let's agree a number can't end with infinite 9's, even if it doesn't really change anything.) It is measurable because you can write it as a countable union of countable intersections of intervals (it's kind of tedious to write down); it is uncountable on every interval because you can truncate the decimal representation and then put only $2$'s and $3$'s as you want; the complement is uncountable because you can truncate the decimal representation and then put $1$'s at odd positions and $2$'s or $3$'s at even positions as you want. As a side note this set is of measure zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1942586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
properties of relations I'm trying to do some chapter problems on equivalence relations. I'm stuck in the second section "properties of relations." Question: Let $A=\{a,b,c,d\}$. Give an example of a relation $R$ on $A$ that is neither reflexive, symmetric, nor transitive. What I tried doing was writing out all the pairs and then canceling out the ones that matched with the laws. I was left with $\{(d,b), (d,c)\}$. Does that mean $R=\{(d,b), (d,c)\}$ is not reflexive, transitive, or symmetric?
Well, the relation you give is indeed neither reflexive nor symmetric, but it is in fact (vacuously) transitive: $R$ contains no pair of the form $(b,x)$ or $(c,x)$, so the condition "$(x,y)\in R$ and $(y,z)\in R$ implies $(x,z)\in R$" is never even tested. To break transitivity you need a genuine two-step chain with a missing link, for example $R=\{(d,b),(d,c),(b,a)\}$: here $(d,b)$ and $(b,a)$ are in $R$ but $(d,a)$ is not, and $R$ is still neither reflexive nor symmetric. Also, "cancelling out pairs that don't match the laws" is not a well defined procedure. (Well, it is defined for reflexivity, but it does not do what you want: you would be left with $\{(a,a),(b,b),(c,c),(d,d)\}$, which is reflexive.) I think the problem would like you to find a relation that involves all four of the members of the set, which the example above does.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1942689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prime subfield is either isomorphic to $\mathbb{Q}$ or $F_p$ I'm trying to prove the following statement: Let $F$ be a field. The intersection of all subfields of $F$ is a subfield which is isomorphic to $\mathbb{Q}$ if $\operatorname{char}(F)=0$, and isomorphic to $F_p$ if $\operatorname{char}(F)=p$. I assume I need to set up an injective ring homomorphism $\varphi\colon \mathbb{Q}\to F.$ I don't really know where to go apart from that though.
Define a map $f:\Bbb{Z}\to F$ by $f(n)=n\cdot e$, where $e$ is the unit element of $F$. Then $f$ is a ring homomorphism: $$f(m+n)=(m+n)\cdot e=m\cdot e+n\cdot e=f(m)+f(n)$$ and $$f(mn)=(mn)\cdot e=(m\cdot e)(n\cdot e)=f(m)f(n).$$ By the fundamental theorem of homomorphisms, $$f(\Bbb{Z})\cong \frac{\Bbb{Z}}{\operatorname{Ker}{f}}\subseteq F,$$ where $\operatorname{Ker}{f}$ is an ideal of $\Bbb{Z}$, in fact a principal ideal, so $\operatorname{Ker}f=\langle q\rangle$ for some $q\in \Bbb{Z}$. Now consider the two cases ($q=0$ and $q=p$ prime) and you are done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1942821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Algebra transitive action (Isaacs 4.1) I'm working on the following questions and I'm unable to make any progress on it. Can anyone offer any insight? Let $G$ act transitively on a set $\Omega$ and suppose $G$ is finite. Define an action of $G$ on $\Omega \times \Omega$ by putting $(\alpha, \beta) \cdot g =(\alpha\cdot g, \beta\cdot g)$. Let $\alpha \in \Omega$. Show that $G$ has the same number of orbits on $\Omega \times \Omega$ as $G_{\alpha}$ does on $\Omega$.
With character theory this can be proved rather easily. Put $\chi(g)=\#\{\alpha \in \Omega: \alpha^ g=\alpha\}$. Then the function $\chi$ is called the permutation character of the action of $G$ on $\Omega$. It is a standard fact (basically the Burnside-Cauchy-Frobenius Lemma) that the number of orbits of $\Omega$ under the action of $G$ equals $[\chi,1_G]$. Write $G_{\alpha}$ for the subgroup of $G$ fixing $\alpha$. You need the following. Lemma Let $G$ act transitively on $\Omega$. Then the number of orbits of $G_{\alpha}$ acting on $\Omega$ equals $[\chi,\chi]$. For a proof see I.M. Isaacs, Character Theory of Finite Groups, (5.16). Now the key observation is that the permutation character of $G$ acting on $\Omega \times \Omega$ is exactly the character $\chi^2$, since if $g \in G$ has $\chi(g)$ fixed points on $\Omega$, it has $\chi(g)^2$ fixed points on $\Omega \times \Omega$ – namely one for every pair of fixed points in the action on $\Omega$. Therefore $[\chi^2,1_G]$ is the number of orbits in the action of $G$ on $\Omega \times \Omega$. But obviously $[\chi^2,1_G]=[\chi,\chi]$ and by the Lemma we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1942943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is this rewrite correct? If x mod 9 = 1, can I rewrite the equation as x = 1 mod 9? If not, what is the correct way? I know this sounds silly. But please help me.
There are two possible meanings of "mod" here: One is a binary operation: $a \operatorname{mod} n$ is a number; it is the unique integer $r$ with $0 \le r < n$ such that $a - r$ is a multiple of $n$. If interpreted this way, the two equations aren't equivalent (for example, $x = 10$ is a solution of the former but not the latter). The other is an equivalence relation: $a \equiv b \pmod{n}$ is a statement; it means that $b - a$ is a multiple of $n$. Only the second equation you wrote makes sense in this notation. Maybe you could mix notations (e.g. it is always true that $a \equiv a \operatorname{mod} n \pmod{n}$), but that rapidly gets confusing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1943018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\{ \sum_{n=0}^k z^n \}_{k=0}^\infty$ does not converge uniformly? So as the title says, can anyone prove that $\{ \sum_{n=0}^k z^n \}_{k=0}^\infty$ does not converge uniformly on the disk $D(0,1)$? I think it would converge uniformly to $1/(1-z)$ since it is a geometric series, but my professor posed the problem so I'm thinking that must not be correct. Thoughts?
Realize that $\sum_{k=0}^{\infty} x^k = 1/(1-x)$ is unbounded on $(0,1)$ (it blows up as $x \to 1^-$). Also realize that $\sum_{k=0}^n x^k$ is bounded by $n+1$ on $(0,1)$. From this, you can conclude that convergence cannot be uniform (do you see how?).
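This boundedness argument can be illustrated numerically on the real segment $(0,1)\subset D(0,1)$ (the particular $x$ and $n$ values are arbitrary): for every $n$ there are points where the error is enormous, so the sup of the error never goes to $0$:

```python
def partial_sum(x, n):
    # S_n(x) = sum_{k=0}^{n} x^k
    return sum(x ** k for k in range(n + 1))

# For each fixed n, pick x close to 1: the partial sum is at most n+1,
# while the limit 1/(1-x) is huge, so sup |error| over (0,1) stays large.
for n in [5, 50, 500]:
    x = 1 - 1e-9
    error = 1 / (1 - x) - partial_sum(x, n)
    assert error > 100
```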
{ "language": "en", "url": "https://math.stackexchange.com/questions/1943126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Question about mathematical logic with sets and the inclusion relation Let $L = \{ \subseteq \}$ and let $M$ be the $L$-structure whose universe is $P^\mathbb{N}$, the set of all subsets of $\mathbb{N}$, and where $\subseteq$ is interpreted in the usual way. 1) Show that for every integer $n$, there is a formula $\psi_n[x]$ such that $M \vDash \psi_n[a]$ holds if and only if $a$ has at least $n$ elements. I get stuck so badly on this question and really need some hints. 2) Show that for any automorphism $\sigma$ of $M$, we have $\sigma(\emptyset)=\emptyset$. Attempt: This is just a routine check of the definition of automorphism. But I suspect there are some hidden corners, since it's labeled as the 2nd question, so it should be easier than the 1st, but I couldn't do the 1st. 3) Let $\sigma$ be an automorphism of $M$ such that $\sigma( \{ n \})$ = $\{ n \}$ for all $n \in \mathbb{N}$. Show that $\sigma$ is the identity. Attempt: I did a very straightforward induction. If $n = 0$, we are done as proved in (2). If $n=1$, we're done, as it's the hypothesis. So suppose that for a given set $K$ of $k$ elements, we have $\sigma(K)=K$. We try to prove that for a set $K'$ of $k+1$ elements, we have $\sigma(K')=K'$. This will be done by the standard technique, i.e., one is a subset of the other. Am I on the right track? 4) Find all automorphisms of $M$. Attempt: The most educated guess I can think of is that the only automorphism of $M$ is the identity, i.e., for $a \in P^\mathbb{N}$, if $f$ is an automorphism of $M$, then $f(a)=a$. Could you please give me some hints on this question? Thanks!
Regarding the first question: $$\psi_n[x] \equiv\exists y_1 \dots \exists y_n \ (\bigwedge_{i=1}^n y_i \subseteq x) \wedge (\bigwedge_{1 \le i < j \le n} y_i \neq y_j)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1943209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
The definitions of finitely presented modules? Let $M$ be a module over a ring or an algebra $A$. I have seen three definitions of finitely presented modules: (1) A module $M$ is called finitely presented if there is an exact sequence of $A$-modules: $0 \rightarrow L \rightarrow F \rightarrow M \rightarrow 0$, where $F$ is a free module of finite rank, $L$ is finitely generated; (2) A module $M$ is called finitely presented if there is an exact sequence of $A$-modules: $F_2 \rightarrow F_1 \rightarrow M \rightarrow 0$, with $F_1,F_2$ free modules of finite rank; (3) A module $M$ is called finitely presented if there is an exact sequence of $A$-modules: $P_2 \rightarrow P_1 \rightarrow M \rightarrow 0$, with $P_1,P_2$ finitely generated projective modules. I can derive (2) from (1), and (3) from (2), but I can't show that all three are equivalent. So who can tell me how to prove they are equivalent?
Suppose $$P_2\to P_1\to M\to 0$$ is as in (3). Since $P_1$ is finitely generated and projective, there is a finitely generated module $Q$ such that $P_1\oplus Q=F$ is a finite rank free module. Now take the direct sum of our exact sequence and the exact sequence $$Q\stackrel{1_Q}\to Q\to 0\to 0$$ to get another exact sequence $$P_2\oplus Q\stackrel{f}\to F\stackrel{g}\to M\to 0.$$ Since $P_2$ and $Q$ are finitely generated, $\operatorname{im}(f)=\ker(g)$ is finitely generated (it is generated by the images of generators of $P_2$ and $Q$ under $f$). Let $L=\ker(g)$. We then have a short exact sequence $$0\to L\to F\stackrel{g}\to M\to 0$$ which satisfies the requirements of (1).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1943308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
$G$ is a group that is transitive on $\{1,2,\dots,n\}$, and let $H_i$ be the subgroup of $G$ that leaves $i$ fixed. Prove that $|G|=n|H_i|$. A subgroup $H$ of the group $S_n$ is called transitive on $B=\{1,2,\dots,n\}$ if for each pair $i,j$ of elements of $B$ there exists an element $h\in H$ such that $h(i)=j$. Suppose $G$ is a group that is transitive on $\{1,2,\dots,n\}$, and let $H_i$ be the subgroup of $G$ that leaves $i$ fixed: $$H_i=\{g\in G\;|\;g(i)=i\}$$ for $i=1,2,\dots,n$. Prove that $|G|=n|H_i|$. Here I observed that: (i) Each $H_i$ is of same size. (ii) ${H_i}$ does not form partition of $G$. What I have done is I define $\phi:H_i\rightarrow H_j$ by $$\phi(g)=hgh^{-1}, \text{where }h(i)=j,h\in G $$ Such $h$ exists since $G$ is transitive. So it is easy to see that this $\phi$ is bijective. This shows that each $H_i$ is of the same size.
Hint: Use the orbit-stabilizer theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1943420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is the maximum likelihood estimator $y_{\operatorname{max}}$ instead of $y_{\operatorname{min}}$ Use the method of maximum likelihood to estimate the parameter $\theta$ in the uniform pdf, $f_y(y ;\theta)=\frac{1}\theta$ where $0\leq y\leq\theta$ According to the solution manual $\theta_e=y_{\operatorname{max}}$, however the likelihood function $L(\theta)=(\frac{1}\theta)^n,$ For me, what would maximize $L(\theta)$ would be $\theta_e=y_{\operatorname{min}}$
Well if you think about it your answer does not make much sense! Maybe better to see it as follows: Let $\hat \theta$ be the MLE, then the likelihood of all points $y_i$ that would be greater than $\hat\theta$ would be zero according to that model, and the entire likelihood would be zero. Therefore, in order to maximize the likelihood, $\hat\theta = y_{\text{max}}$.
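A small simulation makes this concrete (the sample size and the "true" $\theta$ are arbitrary illustrative choices): any $\theta$ below $y_{\max}$ gives zero likelihood, and the likelihood is decreasing for $\theta\ge y_{\max}$, so $\hat\theta = y_{\max}$:

```python
import random

random.seed(1)
sample = [random.uniform(0, 3.0) for _ in range(20)]  # pretend true theta = 3
y_max = max(sample)

def likelihood(theta, ys):
    # Uniform(0, theta): density 1/theta on [0, theta], zero outside
    if any(y > theta for y in ys):
        return 0.0
    return theta ** (-len(ys))

assert likelihood(0.9 * y_max, sample) == 0.0  # theta below y_max: impossible data
assert likelihood(y_max, sample) > likelihood(1.1 * y_max, sample)  # decreasing tail
```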
{ "language": "en", "url": "https://math.stackexchange.com/questions/1943580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can you get the asymptotics of the following integral? I am interested in the big $N$ asymptotics of the following integral $$ \int_0^{\infty}dx\,e^{-2xN}\frac{1-(2x)^N}{1-2x} $$ I have considered applying Laplace's method in some way, but I cannot make it work. Can anybody do better than me?
$\dfrac{1-(2x)^N}{1-2x} = \sum_{j=0}^{N-1} (2x)^j$ so your integral is $$ \sum_{j=0}^{N-1} \int_0^\infty dx\; e^{-2xN} (2x)^j = \sum_{j=0}^{N-1} \dfrac{j!}{2N^{1+j}}$$ Write this as $\dfrac{1}{2N}\sum_{j=0}^\infty a_j(N)$, so $a_0(N) = 1$. Note that for each $j$, $a_j(N) \le a_j(j+1) = j!/(j+1)^j$, and $\sum_{j=0}^\infty j!/(j+1)^j < \infty$, so by Dominated convergence $$\lim_{N \to \infty} \sum_{j=0}^\infty a_j(N) = \sum_{j=0}^\infty \lim_{N \to \infty} a_j(N) = 1$$ and your integral is asymptotic to $1/(2N)$.
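Using the exact sum derived above, a quick numerical check (the values of $N$ are arbitrary) confirms the $1/(2N)$ asymptotics:

```python
from math import factorial

def integral(N):
    # Exact value derived above: sum_{j=0}^{N-1} j! / (2 N^{1+j})
    return sum(factorial(j) / (2 * N ** (1 + j)) for j in range(N))

for N in [50, 200, 400]:
    ratio = 2 * N * integral(N)  # should approach 1 as N grows
    assert abs(ratio - 1) < 5 / N
```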
{ "language": "en", "url": "https://math.stackexchange.com/questions/1943672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $\mathrm{span}(v_1,\dots,v_k) = \mathrm{span}(v_1,\dots,v_k,v)$ Show that $\mathrm{span}(v_1,\dots,v_k) = \mathrm{span}(v_1,\dots,v_k,v)$ if and only if $v$ is in $\mathrm{span}(v_1,\dots,v_k)$ I am thinking that if $v$ is in $\mathrm{span}(v_1,\dots,v_k$) it must be one of the elements and can be written $\mathrm{span}(v_1,\dots,v_k,v)$ but I'm not sure how to show this mathematically.
Prove the implication two ways. First prove the forward direction $\implies $: Let $V=\text{span}(v_1, \ldots, v_k)$ and $W = \text{span}(v_1, \ldots, v_k, v)$ and assume $V=W$. We will prove that $v \in V$. This is trivial since $v \in W = V$ and thus $v \in V$. Now we prove the backwards implication $\impliedby$: Assume $v \in V$. We need to prove that $V = W$. To do this, we pick any arbitrary vector $w \in W$ and prove that it is in $V$ and we pick any arbitrary vector $x \in V$ and prove it is in $W$, essentially proving that $V \subseteq W$ and $W \subseteq V$. Let $x$ be any vector in $V$. Then $x = a_1v_1 + a_2v_2 + \ldots + a_k v_k$ for some coefficients $a_i$. It is evident that $x \in W$ since we can write $x$ as a linear combination of $(v_1, \ldots, v_k, v)$ as such: $x = a_1v_1 + a_2v_2 + \ldots + a_k v_k + 0v$. Thus $V \subseteq W$. Now let $w$ be any vector in $W$. Then $w = b_1v_1 + b_2v_2 + \ldots + b_kv_k + bv$ and since $v \in V$, it can be written as a linear combination of $(v_1, \ldots, v_k)$, i.e. $ v = c_1v_1 + c_2v_2 + \ldots + c_kv_k$ for some coefficients $c_i$. We can then substitute this into the linear combination of $w$: $$ \begin{align} w &= b_1v_1 + b_2v_2 + \ldots + b_kv_k + bv \\ &= b_1v_1 + b_2v_2 + \ldots + b_kv_k + b(c_1v_1 + c_2v_2 + \ldots + c_kv_k) \\ &= (b_1 + bc_1)v_1 + (b_2 + bc_2)v_2 + \ldots + (b_k + bc_k)v_k \end{align} $$ and thus $w$ can be written as a linear combination of $(v_1, \ldots, v_k)$ and so $w \in V$. Since our choice of $w$ was arbitrary, $W \subseteq V$. Since we proved that $V \subseteq W$ and $W \subseteq V$, it follows that $V=W$ as required. $\tag*{$\blacksquare$}$ Note: I would normally use summation notation but pedagogically I've found writing the combinations out explicitly can sometimes be easier to grasp initially for students.
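A concrete numerical illustration (the vectors are arbitrary, and the rank routine is a minimal Gaussian-elimination sketch): adding a vector already in the span leaves the span's dimension unchanged, while adding one outside it does not:

```python
def rank(rows):
    # Row-reduce a small matrix of floats and count the pivots
    m = [row[:] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-9), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

v1, v2 = [1.0, 0.0, 2.0], [0.0, 1.0, 1.0]
inside = [3 * a - 2 * b for a, b in zip(v1, v2)]  # = 3*v1 - 2*v2, in the span
outside = [0.0, 0.0, 1.0]                         # not in span(v1, v2)

assert rank([v1, v2]) == rank([v1, v2, inside]) == 2  # span unchanged
assert rank([v1, v2, outside]) == 3                   # span enlarged
```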
{ "language": "en", "url": "https://math.stackexchange.com/questions/1943766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to prove that $(a\cos\alpha)^n + (b\sin\alpha)^n = p^n$ under the following conditions? How to prove that $(a\cos\alpha)^n + (b\sin\alpha)^n = p^n$ when then line $x\cos\alpha + y\sin\alpha = p$ touches the curve $$\left (\frac{x}{a} \right )^\frac{n}{n-1} + \left (\frac{y}{b} \right )^\frac{n}{n-1}=1$$ What I've tried: I've equated the derivative of the given line with the general derivative of the given curve but couldn't proceed to any meaningful step thereafter.
At the tangent point $(x, y)$, the normals of the line and the curve are parallel, so $$\{\cos \alpha, \sin \alpha \}\,//\,\left\{\frac{x^\frac{1}{n-1}}{a^\frac{n}{n-1}}, \frac{y^\frac{1}{n-1}}{b^\frac{n}{n-1}}\right\}$$ so we get the equation: $$\frac{x^\frac{1}{n-1}}{a^\frac{n}{n-1}\cos \alpha}=\frac{y^\frac{1}{n-1}}{b^\frac{n}{n-1}\sin \alpha}=k$$ Hence: $$x=k^{n-1}a^n(\cos\alpha)^{n-1}$$ $$y=k^{n-1}b^n(\sin\alpha)^{n-1}$$ Substituting $x$ and $y$ into $x\cos\alpha + y\sin\alpha = p$ we get: $$k^{n-1}\left((a\cos\alpha)^n+(b\sin\alpha)^n\right)=p$$ Substituting $x$ and $y$ into $\left (\frac{x}{a} \right )^\frac{n}{n-1} + \left (\frac{y}{b} \right )^\frac{n}{n-1}=1$ we get: $$k^n\left((a\cos\alpha)^n+(b\sin\alpha)^n\right)=1$$ Dividing these two equations gives: $$k=1/p$$ Substituting $k$ back into $k^n\left((a\cos\alpha)^n+(b\sin\alpha)^n\right)=1$ gives the result: $$(a\cos\alpha)^n + (b\sin\alpha)^n = p^n$$
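A numerical spot-check of the conclusion (the values of $a$, $b$, $n$, $\alpha$ are arbitrary): defining $p$ by the claimed identity, the tangency point from the derivation lies on both the line and the curve:

```python
import math

a, b, n, alpha = 2.0, 3.0, 4, 0.7  # arbitrary illustrative parameters

# If the identity holds, this p is the distance parameter of the tangent line
p = ((a * math.cos(alpha)) ** n + (b * math.sin(alpha)) ** n) ** (1 / n)
k = 1 / p

# Tangency point from the derivation
x = k ** (n - 1) * a ** n * math.cos(alpha) ** (n - 1)
y = k ** (n - 1) * b ** n * math.sin(alpha) ** (n - 1)

# On the line x cos(alpha) + y sin(alpha) = p ...
assert math.isclose(x * math.cos(alpha) + y * math.sin(alpha), p, rel_tol=1e-9)
# ... and on the curve (x/a)^{n/(n-1)} + (y/b)^{n/(n-1)} = 1
e = n / (n - 1)
assert math.isclose((x / a) ** e + (y / b) ** e, 1.0, rel_tol=1e-9)
```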
{ "language": "en", "url": "https://math.stackexchange.com/questions/1943891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Practice PSAT Question About Rational Functions This is taken directly from a PSAT Practice Test: $f(x)=\dfrac{2x-4}{2x^2+2x-4}$ A rational function is defined above. Which of the following is an equivalent form that displays values not included in the domain as constants or coefficients? A) $f(x)=\dfrac{x-2}{x^2+x-2}$ B) $f(x)=\dfrac{2(x-2)}{2(x+2)(x-1)}$ C) $f(x)=\dfrac{1}{x+1}$ D)$f(x)=\dfrac{1}{2x^2}$ My understanding is that this question is asking for the form that includes $-2$, and $1$ as either a coefficient or constant, and is equal to the original equation. Both A and B seem to satisfy this, both of them have $-2$ as a constant, and 1 as a coefficient. The answer key in the back says it is B, but I and a few of my teachers thought it was A, since A was a simpler version of B. What are we doing wrong here?
The answer should be (B). The values excluded from the domain are the zeros of the denominator: $2x^2+2x-4 = 2(x+2)(x-1) = 0$, i.e. $x=-2$ and $x=1$. Choice B keeps the denominator in this factored form, so it displays directly that the function is undefined when $x+2=0$ or $x-1=0$; the excluded values can be read off from the constants in those factors. Choice A is an equivalent simplified form, but its denominator $x^2+x-2$ is not factored, so the excluded values are not displayed: the $-2$ that appears there is just the constant term of the quadratic, not an excluded value, and you would still have to factor to find where the function is undefined.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1944021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is it possible to draw a regular pentagon on a regular 3D grid by only connecting the intersection points? Think of it as infinite rows of cubes side by side and infinite rows on top of each other. Like this, but as many cubes as needed Let's say the distance between the points (the edges of the cubes) are of 10 units. Is it possible to draw a pentagon by only connecting the vertices of the cubes? If so, is it possible to draw it without using millions of cubes? As an example, you can make an equilateral triangle by connecting 3 vertices of a single cube, see:
No. Suppose you have found such a regular pentagon $ABCDE$. Then, since the lattice points are, well... on a lattice, the points $A' = A + \vec{BC}, B' = B + \vec{CD}, \ldots , E' = E + \vec{AB}$ are also on the lattice. And they also form a regular pentagon, but one whose side is smaller than the side of the original pentagon by some factor $K < 1$. Iterating this procedure you can find regular pentagons with vertices at lattice points that get smaller and smaller and smaller. However, two distinct lattice points can never be closer together than the edge length of the original cubes (your $10$ units), so there can't be any regular pentagon with lattice-point vertices whose side is smaller than $10$ units, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1944129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
function spaces in linear algebra I am in upper division linear algebra, and I need help in proving a function space is a vector space. I just need help proving 2 particular vector space axioms. Axiom 1: For each pair of elements a,b in F (field), and each element x in V (vector space), ab(x) = a(bx). Axiom 2: For each element a in F and each pair of elements x,y in V, a(x+y) = ax+ay. To prove axiom 1, you would need 2 scalars, so can I just say, let a and b be members of F... So wouldn't the proof be exactly the same as the axiom? ab(s) = a(b(s))? To prove axiom 2: I really don't know where to start.... What are your tips and tricks to prove these axioms when you have function spaces? I am having a lot of trouble proving how the axioms work. Any help would be greatly appreciated...
It's generally a bad idea to try and work with a vector space when you haven't got a good idea of how it's defined. What do we mean by scalar multiplication? Addition of two vectors? The way it's usually set up is as follows: If $V$ is a set of functions from $X$ to $Y$, where $Y$ is a vector space over a field $F$, then we define the addition of two functions: $(f+g)(x) = f(x) + g(x)$ for all $f, g \in V$ and all $x \in X$. and multiplication by scalars: $(\alpha f )(x) = \alpha(f(x))$ for all $f \in V, \alpha \in F$ and $x \in X$. Once you have understood the setup, it's pretty easy to show the axioms are fulfilled.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1944240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Determining which functions are one to one So I know for sure that $x^3$ is one to one, being that the entire span of real number $y$'s can be found by some x and it passes the vertical line test. Graphically, it's pretty clear. I know that $x^2$ is also one to one though not onto, since no negative $y$'s can be obtained. I don't have the same graphical intuition for options B, D, and E. How would I go about making that determination?
The easiest way is to look at the kernel for the not-so-obvious ones. I should note that this works only for linear maps. (b) The kernel is $\{ (x,x,x) \}$, so no, not injective. (c) No, clearly. (d) The kernel of this is $\{(x,-x) \}$, so no, not injective. (e) Yes, the kernel of this is trivial (can you see why?)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1944344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove Bonferroni’s inequality I have read other solutions regarding proof of Bonferroni’s inequality. However, is my derivation correct? Suppose that $E$ and $F$ are two events of a sample space $S$. Conclude: $P(EF) \geq P(E) + P(F) - 1$. Proof: $$P(EF)= 1 - P((EF)^c) = 1 - P(E^c \cup F^c) = 1 - (P(E^c) + P(F^c))$$ $$= 1 - ((1 - P(E)) + (1 - P(F))) = 1 + (-1) + P(E) - 1 + P(F) = 1 - P(E) + P(F)$$
You seem to assume that $E^c$ and $F^c$ are disjoint in writing $$ 1 - P(E^c \cup F^c) = 1 -[P(E^c) + P(F^c)].$$ (Also, you don't write any inequalities in your proof. Though maybe you meant to use an inequality at precisely this step...) A simple proof notes that in general we have, $$P(E \cap F) = P(E) + P(F) - P(E \cup F).$$ And this implies the result because $P(E \cup F) \leq 1$.
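The inequality itself can be brute-force verified on a small equally-likely sample space (the six-point space is an arbitrary choice; every pair of events is checked):

```python
from itertools import product

omega = range(6)  # sample space {0, ..., 5}, all outcomes equally likely

def p(A):
    return len(A) / 6

# All 2^6 events, encoded as frozensets
events = [frozenset(s for s, keep in zip(omega, bits) if keep)
          for bits in product([0, 1], repeat=6)]

for E in events:
    for F in events:
        # Bonferroni: P(E n F) >= P(E) + P(F) - 1
        assert p(E & F) >= p(E) + p(F) - 1 - 1e-12
```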
{ "language": "en", "url": "https://math.stackexchange.com/questions/1944488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove Ux$\cdot$Uy = x$\cdot$y $\forall$ x,y $\in$ $\mathbb{R}^n$ where U is Isometric transformation Suppose that U is a linear transformation from $\mathbb{R}^n$ into $\mathbb{R}^m$ that is isometric, i.e., $\lVert Ux\rVert$ =$\lVert x\rVert$ for all x $\in$ $\mathbb{R}^n$ a) Prove that Ux$\cdot$Uy = x$\cdot$y $\qquad$ $\forall$x,y$\in \mathbb{R}^n$ I have an attempt that basically plays on using $\lVert U(x+y)\rVert^2$ as my starting point: Near the end through some manipulation I get: $\lVert x\rVert^2$ + 2($x\cdot$y) + $\lVert y\rVert^2$ I feel like I've gotten somewhere as clearly $\lVert x\rVert^2$ = $\lVert Ux\rVert^2$ by how isometric is defined. Similarly, $\lVert y\rVert^2$ = $\lVert Uy\rVert^2$ But I don't know if this actually lets me show: Ux$\cdot$Uy = x$\cdot$y And even if it does, I'm not sure how to make the last argument to show that. Also, sorry for leaving out some of the intermediate details. I'm still very new to using this MathJax thing (I need to get around to learning Latex!) and it took very long to type that little bit out. Let me know if I need to clarify anything. Thanks!
You're right to look at $||U(x+y)||^2$, but I suggest trying to use the polarization identity $$ x\cdot y=\frac{1}{4}\Big(||x+y||^2-||x-y||^2\Big)$$
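Both the polarization identity and the conclusion $Ux\cdot Uy = x\cdot y$ can be sanity-checked numerically (the rotation angle and the vectors are arbitrary; a plane rotation serves as the isometry $U$):

```python
import math

theta = 0.7
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]  # rotation: an isometry of R^2

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(s * t for s, t in zip(u, v))

x, y = [1.3, -0.4], [0.5, 2.0]

# Polarization identity: x.y = (||x+y||^2 - ||x-y||^2) / 4
xpy = [s + t for s, t in zip(x, y)]
xmy = [s - t for s, t in zip(x, y)]
assert math.isclose((dot(xpy, xpy) - dot(xmy, xmy)) / 4, dot(x, y))

# The isometry preserves dot products
assert math.isclose(dot(apply(U, x), apply(U, y)), dot(x, y))
```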
{ "language": "en", "url": "https://math.stackexchange.com/questions/1944599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How would you calculate the surface of the part of the paraboloid $z=x^2+y^2$ with $1 \le z \le 4$? Do you calculate it like done below? can you calculate it in another way? $z=x^2+y^2$ Let $x=\sqrt{z}\cos\theta$ $y=\sqrt{z}\sin\theta$ $z=z$ where $\theta\in[0,2\pi]$ and $z\in[1,4]$ $ \dfrac{\partial{x}}{\partial{\theta}}=-\sqrt{z}\sin\theta$ $ \dfrac{\partial{y}}{\partial{\theta}}=\sqrt{z}\cos\theta$ $ \dfrac{\partial{z}}{\partial{\theta}}=0$ $ \dfrac{\partial{x}}{\partial{z}}=\dfrac{1}{2\sqrt{z}}\cos\theta$ $ \dfrac{\partial{y}}{\partial{z}}=\dfrac{1}{2\sqrt{z}}\sin\theta$ $ \dfrac{\partial{z}}{\partial{z}}=1$ $\begin{Vmatrix}\hat{i} & \hat{j} & \hat{k} \\ -\sqrt{z}\sin\theta &\sqrt{z}\cos\theta & 0 \\ \dfrac{1}{2\sqrt{z}}\cos\theta & \dfrac{1}{2\sqrt{z}}\sin\theta & 1\end{Vmatrix}$ $=\sqrt{z}\cos\theta\hat{i}+\sqrt{z}\sin\theta\hat{j}+\dfrac{1}{2}(-\sin^2 \theta-\cos^2 \theta)\hat{k}$ $=\left<\sqrt{z}\cos\theta,\sqrt{z}\sin\theta,-\dfrac{1}{2}\right>$ And its magnitude is given by $\sqrt{z\cos^2\theta+z\sin^2\theta+\dfrac{1}{4}}=\sqrt{z+\dfrac{1}{4}}=\dfrac{1}{2}\sqrt{4z+1}$ Surface $=\displaystyle \int \int_S dS$ $=\displaystyle \int_1^4 \int_0^{2\pi} \dfrac{1}{2}\sqrt{4z+1} \,d\theta\,dz$ $=\bigg|2\pi\cdot\dfrac{1}{2}\cdot\dfrac{2}{3}(4z+1)^{\frac{3}{2}}\dfrac{1}{4}\bigg|_1^4$ $=\dfrac{\pi}{6}(17\sqrt{17}-5\sqrt{5})$
Your result is correct. By using polar coordinates, we obtain $$\iint_S dS=\int_{\rho=1}^2 \int_0^{2\pi} \sqrt{1+f_x^2+f_y^2} \,(d\theta\,\rho d\rho)=2\pi\int_{\rho=1}^2 \sqrt{1+4\rho^2} \,\rho d\rho\\ =\dfrac{\pi}{6}\left[(1+4\rho^2)^{3/2}\right]_{\rho=1}^2 =\dfrac{\pi}{6}(17\sqrt{17}-5\sqrt{5})$$ where $f(x,y)=x^2+y^2$, $x=\rho\cos\theta$ and $y=\rho\sin\theta$. P.S. We have that $f_x(x,y)=2x$, $f_y(x,y)=2y$ and therefore $$\sqrt{1+f_x^2+f_y^2}=\sqrt{1+4(x^2+y^2)}=\sqrt{1+4\rho^2}$$ For details take a look here: https://en.wikipedia.org/wiki/Surface_integral
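As a sanity check, a midpoint Riemann sum of the polar integral above reproduces the closed form (pure Python, no libraries):

```python
import math

# closed form obtained above
exact = math.pi / 6 * (17 * math.sqrt(17) - 5 * math.sqrt(5))

def integrand(rho):
    return rho * math.sqrt(1 + 4 * rho * rho)

# midpoint rule for 2*pi * integral of rho*sqrt(1+4 rho^2) from rho=1 to 2
n = 100_000
h = 1.0 / n
approx = 2 * math.pi * sum(integrand(1 + (i + 0.5) * h) for i in range(n)) * h
```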
{ "language": "en", "url": "https://math.stackexchange.com/questions/1944725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Series $\sum_{n=2}^\infty \frac{14^n}{3^{3n+4}(3n+7)}$ Convergence or Divergence Using The Ratio Test I am trying to determine if the following series converges or diverges by using the ratio test, which I believe can be summarized as the following: $$L=\lim_{n\to \infty} \left| \frac{a_{n+1}}{a_n} \right|$$ If $L < 1$ then the series converges, if $L > 1$, then it diverges, and if $L = 1$, then it is ambiguous. The series is below: $$\sum_{n=2}^\infty \frac{14^n}{3^{3n+4}(3n+7)}$$ I understand the ratio test in theory, but am not sure how to put it into practice for a series like this.
Ratio test: $$\frac{14^{n+1}}{3^{3n+7}(3n+10)}\cdot\frac{3^{3n+4}(3n+7)}{14^n}=\frac{14}{3^3}\cdot\frac{3n+7}{3n+10}\xrightarrow[n\to\infty]{}\frac{14}{27}\cdot1=\frac{14}{27}<1$$ $\;n\,-$ th root test: $$\sqrt[n]{\frac{14^n}{3^{3n+4}(3n+7)}}=\frac{14}{3^3}\cdot\frac1{\sqrt[n]{3^4(3n+7)}}\xrightarrow[n\to\infty]{}\frac{14}{27}\cdot1=\frac{14}{27}<1$$
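Here is a quick numeric look at the ratio, computed exactly with rationals so that the large powers don't overflow; it confirms that the ratio approaches $14/27 < 1$ (an illustration, not part of the proof above):

```python
from fractions import Fraction

def a(n):
    # a_n = 14^n / (3^(3n+4) * (3n+7)), kept exact
    return Fraction(14**n, 3**(3 * n + 4) * (3 * n + 7))

ratio = a(1001) / a(1000)   # a_{n+1}/a_n at n = 1000
limit = Fraction(14, 27)
```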
{ "language": "en", "url": "https://math.stackexchange.com/questions/1944859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why two solutions to a DE are contradictory? Solution to the given differential equation $$\frac{dx}{dt}=4.9-0.196x$$ is given by a) $x=25+ke^{-0.196t}\qquad$ b) $x=50+ke^{-0.196t}\qquad$ c) $x=50-ke^{-0.196t}\qquad$ d) $x=25-ke^{-0.196t}\qquad$ where $k$ is some constant. My try: Method (1): $$\frac{dx}{4.9-0.196x}=dt$$ $$\int \frac{dx}{4.9-0.196x}=t+c$$ Substituting $4.9-0.196 x=z$, $dx=-\frac{dz}{0.196}$, I get $$\int \frac{dz}{z}=-0.196(t+c)$$ $$\ln z=\ln(4.9-0.196x)=-0.196(t+c) \tag{*}$$ $$0.196x=4.9-e^{-0.196(t+c)}=4.9-e^{-0.196c}\cdot e^{-0.196t}$$ $$x=25-ke^{-0.196t}\tag 1$$ Method (2): $$\frac{dx}{dt}=4.9-0.196x$$ $$\frac{dx}{dt}+0.196x=4.9$$ Now, use the integrating factor $I.F.=e^{\int 0.196\ dt}=e^{0.196 t}$, so the solution is $$x\cdot (I.F.)=\int (I.F.)\cdot 4.9\ dt+c$$ $$x\cdot e^{0.196 t}=\int e^{0.196 t}\cdot 4.9\ dt+c$$ $$x\cdot e^{0.196 t}=\frac{4.9}{0.196}e^{0.196 t}+c=25e^{0.196 t}+c$$ $$x=25+ke^{-0.196 t}\tag 2$$ Now, you see I am getting two different solutions (1) & (2) to the same D.E., but I don't know which one is correct and why. Please explain where I am wrong, or which is the correct option and why.
They are both correct. $k$ is any real number. Observe that the set $$\{5k | k \in \mathbb R\}$$ is the same as $$\{-5k | k \in \mathbb R\}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1944930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How is $\frac{dt}{dx}$ just the reciprocal of $\frac{dx}{dt}$ ?? Came across this problem. Derivative of a parametric. $$x=2\cos(t)$$ $$y=2\sin(t)$$ .....Hence, $$\frac{dx}{dt}=-2\sin(t)$$ $$\frac{dy}{dt}=2\cos(t)$$ As we know, chain rule states: $$\frac{dy}{dx} = \frac{dy}{dt} \cdot \frac{dt}{dx}$$ Apparently, this results in $$\frac{dy}{dx} = 2\cos(t)\frac{1}{-2\sin(t)}$$ That last part seems weird. How do you get $\frac{dt}{dx} = \frac{1}{-2\sin(t)}$ ? You just take the reciprocal of $\frac{dx}{dt}$ ?? Seems like a bizarre coincidence. Inverse is not the same as reciprocal. If I wanted to get $\frac{dt}{dx}$ explicitly, start with the original: $$x=2\cos(t)$$ Then put in terms of t, and you'd have: $$t = \cos^{-1}(\frac{x}{2})$$ Then take $$\frac{dt}{dx} = - \frac{1}{\sqrt{1-\bigl(\frac{x}{2}\bigr)^2}} \cdot \frac{1}{2} = -\frac{1}{\sqrt{4-x^2}}$$ The point is this doesn't look anything like what we used for $\frac{dt}{dx}$ which was $\frac{1}{-2\sin(t)}$.
They are the same thing, but one is expressed in terms of $t$ and the other is in terms of $x$. $$\frac{dt}{dx} = -\frac{1}{\sqrt{4-x^2}} = -\frac{1}{\sqrt{4-(2\cos t)^2}} = -\frac{1}{2\sqrt{1-\cos^2 t}} = -\frac{1}{2 \sin t}$$ where the last step uses the Pythagorean identity: $\sin^2 t + \cos^2 t = 1$. For parametric equations, the chain rule is often written as $$\frac{dy}{dx} = \frac{\frac{dy}{dt}}{\frac{dx}{dt}}$$ which makes things a bit more intuitive.
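A quick numeric check that the two expressions for $\frac{dt}{dx}$ agree for $t \in (0, \pi)$, where $\sin t > 0$:

```python
import math

for t in (0.3, 1.0, 2.0, 2.8):   # sample angles in (0, pi)
    x = 2 * math.cos(t)
    in_terms_of_x = -1 / math.sqrt(4 - x * x)
    in_terms_of_t = -1 / (2 * math.sin(t))
    assert abs(in_terms_of_x - in_terms_of_t) < 1e-12
ok = True
```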
{ "language": "en", "url": "https://math.stackexchange.com/questions/1945058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Solving two curves Let two curves be:- S1: $x^2/9+y^2/4=1$ S2: $y^2=2x$ Now, on solving these two ,by substituting $y^2$ as $2x$ from S2 in S1, I get two values viz. $x=3/2$ and $x=-6$. But we can clearly see from the graph that the curves intersect at two distinct points in the first and fourth quadrant. So, should not the quadratic equation hence formed in $x$ rather be a whole square viz $(x-3/2)^2$ ?
In general, when you intersect two conics, you can have $4$ intersection points, that's why you see a ghost. Imagine if you pinched the parabola and pulled it to the left through the other side of the ellipse. Then you would get $4$ intersections, and the other solution for $x$ would be relevant.
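Here is the substitution carried out numerically: the quadratic indeed has two roots, but only $x = 3/2$ is compatible with $y^2 = 2x \ge 0$, which is why only two intersection points are visible (a small sketch):

```python
import math

# substituting y^2 = 2x into x^2/9 + y^2/4 = 1 gives 2x^2 + 9x - 18 = 0
a, b, c = 2.0, 9.0, -18.0
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted(((-b - disc) / (2 * a), (-b + disc) / (2 * a)))

# only roots with 2x >= 0 give real values of y
visible = [x for x in roots if 2 * x >= 0]
```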
{ "language": "en", "url": "https://math.stackexchange.com/questions/1945174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Finding limit of $\left(1 +\frac1x\right)^x$ without using L'Hôpital's rule. Is there a way to find this limit without using L'Hôpital's rule. Just by using some basic limit properties. $$\lim_{x\to\infty}\left(1+\frac1x\right)^x=e$$
In the comments it has been pointed out that this usually serves as a definition of e. But I think I know what you're asking. Usually in introductory calculus classes to evaluate this limit you let $$y= \lim_{x\to\infty}\left(1+\frac{1}{x}\right)^x,$$ and then take the natural log of both sides to get $$\ln(y) = \lim_{x\to\infty} x\ln\left(1+\frac{1}{x}\right)=\lim_{x\to\infty}\frac{\ln\left(1+\frac{1}{x}\right)}{\frac{1}{x}},$$ where the last limit you use L'Hôpital's rule to get that it is equal to 1. So I think maybe what you are actually asking is can, $$\lim_{x\to\infty} x\ln\left(1+\frac{1}{x}\right)$$ be evaluated without L'Hôpital's rule? Please correct me if I'm wrong. If this is indeed your question you can expand $\ln\left(1+\frac{1}{x}\right)$ as a series at $x=\infty$ which I think makes the limit trivial. I don't think I've ever used these kinds of power series so I would appreciate any input about this last suggestion.
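Numerically the convergence is easy to watch (this is only an illustration, not a proof):

```python
import math

# errors |(1 + 1/x)^x - e| for increasing x
errors = [abs((1 + 1 / x) ** x - math.e) for x in (1e1, 1e3, 1e6)]
```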
{ "language": "en", "url": "https://math.stackexchange.com/questions/1945254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How to derive the closed form solution of geometric series I have the following equation: $$g(n) = 1 + c + c^2 + \cdots + c^n$$ The closed form solution of this series is $$g(n) = \frac{c^{n+1} -1}{c-1}$$ However, I am having a difficult time seeing the pattern that leads to this. $$ n =0 : 1 $$ $$ n =1 : 1 + c $$ $$ n = 2 : 1 + c + c^2 = 1 + c(1+c)$$ $$ n =3 : 1 + c(1+ c(1 + c))$$ Can someone provide some insight here?
Let $S=1+c+c^2+...+c^n$ then $c.S=c+c^2+...c^{n+1}$, subtract them.
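The subtraction gives $S(c-1)=c^{n+1}-1$; here is a quick check of the resulting closed form against the directly computed sum:

```python
def partial_sum(c, n):
    return sum(c**k for k in range(n + 1))   # 1 + c + ... + c^n

def closed_form(c, n):
    return (c ** (n + 1) - 1) / (c - 1)      # valid for c != 1

cases = [(c, n) for c in (2.0, 3.0, 0.5) for n in range(10)]
```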
{ "language": "en", "url": "https://math.stackexchange.com/questions/1945378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
If two matrices have the same column space and null space, are they the same matrix? If two matrices have the same column space and null space, are they the same matrix? I am thinking no because if A=[1 2;2 1] and B=[2 1;1 2] then they have the same column space (I think) but they are not identical
This fails even in one dimension: $1$ and $2$ have the same column and null spaces. You can easily find other examples in higher dimensions. For example $I$ and $2I$. In fact, all invertible matrices have the same column and null spaces, yet there are many different invertible matrices.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1945518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Efficiently computing Schur complement I would like to compute the Schur complement $A-B^TC^{-1}B$, where $C = I+VV^T$ (diagonal plus low rank). The matrix $A$ has $10^3$ rows/columns, while $C$ has $10^6$ rows/columns. The Woodbury formula yields the following expression for the Schur complement: $$A-B^TB+B^TV(I+V^TV)^{-1}V^TB.$$ This expression can be evaluated without storing large matrices, but the result suffers from numerical inaccuracies. Is there a numerically more stable way of computing the Schur complement, without high memory requirements?
Let us decompose the matrix $VV^T$ into its eigenvectors. Thus we have $VV^T = \sum u_iu_i^T$, where $u_i = \sqrt{\lambda_i}\,\alpha_i$, $\lambda_i$ is an eigenvalue of $VV^T$ and $\alpha_i$ is the corresponding (unit) eigenvector. Since the matrix $VV^T$ is symmetric (and positive semidefinite), its eigenvectors can be chosen orthogonal, meaning that $u_i^Tu_j= \sqrt{\lambda_i}\,\alpha_i^T\alpha_j\sqrt{\lambda_j} = 0$ when $i \neq j$. Applying the Sherman-Morrison formula for $u_1$ we get $$(I+u_1u_1^T)^{-1} = I - \frac{u_1u_1^T}{1 + u_1^Tu_1}.$$ This expression simplifies to $(I+u_1u_1^T)^{-1} = I - \frac{u_1u_1^T}{1 + \lambda_1}$, where $\lambda_1$ is the eigenvalue corresponding to $u_1$. Now consider $$(I+ u_1u_1^T +u_2u_2^T)^{-1} = (I+u_1u_1^T)^{-1} - \frac{(I - \frac{u_1u_1^T}{1 + \lambda_1})u_2u_2^T(I - \frac{u_1u_1^T}{1 + \lambda_1})}{1 + u_2^Tu_2}.$$ Since $u_1^Tu_2 =0$, $u_2u_2^Tu_1u_1^T = 0$, the numerator simplifies to $u_2u_2^T$. Thus, $(I + u_1u_1^T + u_2u_2^T)^{-1} = I - \frac{u_1u_1^T}{1+\lambda_1} - \frac{u_2u_2^T}{1+\lambda_2}$. By induction it can be proven that $(I + \sum u_iu_i^T)^{-1} = I - \sum \frac{u_iu_i^T}{1 + \lambda_i}$. This can be used to compute the inverse of $(I + VV^T)$ with sufficient numerical stability. Regarding the final computation I think multiplication is inevitable. The time complexity of inversion by the given method is still $O(n^3)$, where $VV^T$ has dimensions $n\times n$, so this would be the time complexity of the overall operation as well. But the memory required is $O(n^2)$. Also the net inaccuracy would be proportional to $\sum \Delta \lambda_i$, where $\Delta \lambda_i$ is the error in computing the eigenvalues.
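For the rank-one building block used above, the Sherman-Morrison step can be verified directly in pure Python (a small sketch with a hypothetical vector $v$; real use would rely on a linear-algebra library):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank_one_inverse(v):
    """Inverse of I + v v^T via Sherman-Morrison: I - v v^T / (1 + v.v)."""
    denom = 1 + sum(x * x for x in v)
    n = len(v)
    return [[(1.0 if i == j else 0.0) - v[i] * v[j] / denom
             for j in range(n)] for i in range(n)]

v = [1.0, 2.0, 3.0]                        # hypothetical data
C = [[(1.0 if i == j else 0.0) + v[i] * v[j] for j in range(3)]
     for i in range(3)]                    # C = I + v v^T
product = matmul(C, rank_one_inverse(v))   # should be the 3x3 identity
```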
{ "language": "en", "url": "https://math.stackexchange.com/questions/1945581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Bounding sum of reciprocals of the square roots of the first N positive integers I am trying to derive the following inequality: $$2\sqrt{N}-1<1+\sum_{k=1}^{N}\frac{1}{\sqrt{k}}<2\sqrt{N}+1,\; N>1.$$ I understand for $N\rightarrow \infty$ the summation term diverges (being a p-series with p=1/2), which is consistent with the lower bound in this inequality being an unbounded function in $N$. With respect to deriving this inequality, it is perhaps easier to rewrite it as $$2\sqrt{N}-2<\sum_{k=1}^{N}\frac{1}{\sqrt{k}}<2\sqrt{N},\; N>1.$$ In this expression, the fact that the upper and lower bounds are so concisely expressed made me wonder whether there is a more basic inequality I can begin with of the form $$a_{k}<\frac{1}{\sqrt{k}}<b_{k},\; \forall k>1,$$ from which summing through would yield the desired inequality. That is $a_k \text{ and } b_k$ should satisfy $\sum_{k=1}^{N}a_k=2\sqrt{N}-2$ and $\sum_{k=1}^{N}b_k=2\sqrt{N}$. However, I am stuck on how to find the appropriate $a_{k}$ and $b_{k}$ to obtain the desired bounds. Alternatively, what other methods are at my disposal?
To get the lower bound, you can compare the sum to an integral. $\frac 1{\sqrt k} \gt \int_k^{k+1}\frac 1{\sqrt x}dx=2\sqrt x\big|_k^{k+1}=2(\sqrt {k+1}-\sqrt k)$, so summing for $k=1,\dots,N-1$ (the $k=N$ term only makes the sum larger), $\sum_{k=1}^N \frac 1{\sqrt k} \gt \int_1^N \frac 1{\sqrt x}dx=2\sqrt N -2$. Then to get the upper bound you use the same approach. $\frac 1{\sqrt k} \lt \int_{k-1}^{k}\frac 1{\sqrt x}dx=2\sqrt x\big|_{k-1}^{k}=2(\sqrt {k}-\sqrt {k-1})$, which holds for $k=1$ too since $1 \lt \int_0^1 \frac 1{\sqrt x}dx=2$, so summing for $k=1,\dots,N$ gives $\sum_{k=1}^N \frac 1{\sqrt k} \lt \int_0^{N} \frac 1{\sqrt x}dx=2\sqrt N$.
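The bounds are easy to confirm numerically for a range of $N$:

```python
import math

for N in (2, 10, 100, 10_000):
    s = sum(1 / math.sqrt(k) for k in range(1, N + 1))
    assert 2 * math.sqrt(N) - 2 < s < 2 * math.sqrt(N)
ok = True
```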
{ "language": "en", "url": "https://math.stackexchange.com/questions/1945663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Prove that the Dihedral group $D_n$ is isomorphic to $Z_n \rtimes_{\psi} Z_2$ I consider the following map $\psi : Z_2 \rightarrow Aut(Z_n)$ where we map the identity element 0 to the identity map and $1 \mapsto \theta : Z_n \rightarrow Z_n$ where $\theta(x) = -x$. I am not sure how to proceed further. Any help would be nice. $D_n = ⟨ r, s : r^n = e = s^2, r^js = sr^{-j} ⟩ $
Perfectly good answer was already given by Galena Rupp, but I will try to give intuition why the given isomorphism really is a natural choice. First of all, let us start with general semidirect product $N\rtimes_\psi H$. Instead of ordered pair $(n,h)$, I will simply write $nh$ (where concatenation is multiplication in $N\rtimes_\psi H$). So, how do we multiply $(n_1h_1)(n_2h_2)$? Well, $(n_1h_1)(n_2h_2) = n_1(h_1n_2h_1^{-1})h_1h_2 = (n_1\psi_{h_1}(n_2))(h_1h_2)$ which is precisely how we define multiplication in semidirect product. This tells us that $\psi$ is encoding how $H$ acts by conjugation on $N$ in the semidirect product - with this information we can completely determine the structure of semidirect product. Let us get back at $\Bbb Z_n\rtimes_\psi \Bbb Z_2$. First I will switch to multiplicative notation $C_n\rtimes_\psi C_2$, denote the generator of $C_n$ with $r$ and the generator of $C_2$ with $s$, and appropriately change $\psi_s$ to $r^j\mapsto r^{-j}$. Now, if I want to give presentation of $C_n\rtimes_\psi C_2$, I note that it is generated by $r,s$ subject to conditions $r^n = e$, $s^2 = e$ and $\psi_s(r^j) = r^{-j}$, i.e. $$C_n\rtimes_\psi C_2 = \langle r,s\,|\, r^n = s^2 = e, \psi_s(r^j) = r^{-j}\rangle$$ Now, remember that $\psi_s$ really is a conjugation, so rewrite the above as $$C_n\rtimes_\psi C_2 = \langle r,s\,|\, r^n = s^2 = e, sr^js^{-1} = r^{-j}\rangle = \langle r,s\,|\, r^n = s^2 = e, r^js = sr^{-j}\rangle$$ which is precisely the presentation of $D_n$. So, what is the isomorphism $\Bbb Z_n\rtimes_\psi \Bbb Z_2\cong D_n$? Well, from the above, we just have to rename generators to $r$ and $s$ and switch to multiplicative notation, i.e. the isomorphism is given by $(i,j)\mapsto r^is^j$ as noted in the answer by Galena Rupp.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1945768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Who discovered the following formula for finding the next consecutive square? I know I'm not the first to discover this while being bored in seventh-grade math "learning" about perfect squares. The formula I discovered is this: Sorry guys, I tried expressing it algebraically but I'm not very good with numbers, so I'll do my best to tell you in English. * *Take the square of $0$ : $0$ *Then take the square of the next whole number: $1^2=0^2+1=1$ *Then take the square of the next whole number: $2^2=1^2+3=4$ As you can see there is a pattern here. The whole number being squared tells you which odd number you add to the previous square. If this is confusing then this should help: $1$ is the $1$st odd number, $3$ is the $2$nd, $5$ is the $3$rd, ... Can someone just tell me who first discovered this? (Sorry for the lack of simple notation once again. In case it wasn't clear, my mathematical studies have not even taken me to the end of elementary prealgebra.)
I'm not sure who first discovered this, but notice this: $(n+1)^2=n^2+2n+1$. The $2n+1$ is an odd number which is being added to the square of $n$ to obtain the square of $n+1$. Just like you discovered yourself. Using the method of induction, one may prove without any doubt that $n^2=1+3+5+\cdots+(2n-3)+(2n-1)$. You might have heard of Babbage's difference engine. Polynomials are used quite often to approximate more complicated functions, and people used to go through considerable effort using them to make books full of important numbers (for instance, logarithm tables). One thing people noticed is that you can compute the values of a polynomial by doing repeated additions using the method of divided differences (if you want to see what these are: try taking a polynomial, say $n^2+n+1$, making a table of the polynomial for $n=1,2,3,4,\ldots$, then taking the difference between consecutive values, then the differences of the differences, then the differences of those differences again. Notice a pattern? You must have already done something like this for $n^2$.) The difference engine used gears to then compute the values of polynomials much faster than anyone could do by hand: just as fast as someone could write down the numbers from the display! As a bonus, here is a graphical proof that the sum of the first odd numbers is a square: http://proofsfromthebook.com/2013/12/05/sum-of-the-first-n-odd-integers-is-a-square/
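Both patterns mentioned above are easy to confirm for many values of $n$ at once: consecutive squares differ by an odd number, and $n^2$ is the sum of the first $n$ odd numbers:

```python
for n in range(1, 300):
    # n^2 = (n-1)^2 + (2n - 1): the next square adds the n-th odd number
    assert n * n == (n - 1) * (n - 1) + (2 * n - 1)
    # n^2 = 1 + 3 + 5 + ... + (2n - 1)
    assert n * n == sum(2 * k - 1 for k in range(1, n + 1))
ok = True
```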
{ "language": "en", "url": "https://math.stackexchange.com/questions/1945885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of an event happening if at least one of 2 events happen So I'm doing a probability question, the probability of event A happening is $0.4$, event B is $0.7$. What is the probability of only event A happening given that at least one of the two events happen? What I have tried so far is getting the probability of at least one of two events happening, which I got $0.82$ from adding the two and subtracting their product. From here I'm not too sure where to go, I have thought about the simple operations I could do such as multiplying the $0.82$ by the probability of the $2$, but none of them seem to be the right choice. EDIT: sorry I forgot to mention that it's given that these two events are independent
You want to find $P(A \cap \neg B|A \cup B)$. $$P(A \cap \neg B|A \cup B) = \frac{P((A \cap \neg B) \cap (A \cup B))}{P(A \cup B)} = \frac{P(A \cap \neg B)}{P(A \cup B)} = \frac{P(A) P(\neg B)}{P(A \cup B)} = \frac{P(A) P(\neg B)}{1-P(\neg A)P(\neg B)} = \frac{0.4 \times 0.3}{1-0.6 \times 0.3} = \frac{0.12}{0.82} \approx 0.146$$ I think it helps to look at these problems visually too. Since the two events are independent, you can draw the event space as a unit square, with $A$ and $\neg A$ splitting one axis and $B$ and $\neg B$ splitting the other; each probability is then the area of the corresponding rectangle.
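Because the events are independent, the whole computation can be done with exact rationals:

```python
from fractions import Fraction

p_a, p_b = Fraction(2, 5), Fraction(7, 10)   # P(A) = 0.4, P(B) = 0.7

p_only_a = p_a * (1 - p_b)                   # A happens, B does not
p_at_least_one = 1 - (1 - p_a) * (1 - p_b)   # complement of "neither"
answer = p_only_a / p_at_least_one
```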
{ "language": "en", "url": "https://math.stackexchange.com/questions/1945963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
ODE for mixing problem A jar of hot water is in a sink, and we are pouring water into the jar with a rate of $a$ gallons per minute. The water spills over the jar at a rate of $a$ gallons per minute after being thoroughly mixed. Write the ODE for $x(t)$, where $x(t)$ is the temperature of water in the jar at time $t$. I am having trouble translating the temperature into the ODE. What I know is that $x'(t)$ equals the input rate minus the output rate, and in this problem, I'm writing the output as: $$a\frac{x(t)}{N}$$ where $N$ is how many gallons of water the jar can contain. I have no idea about the input rate: should we not multiply the rate $a$ by the current temperature? Is it $x(t)$?
My attempt. Over a period of time $\Delta t$, a quantity $a \Delta t$ will flow into the jar from the source at temperature $T_{source}$. The jar has volume $L$, and for simplicity I define $n = L / (a \Delta t)$. Assuming that the updated temperature is just a (volume-)weighted average, the updated temperature $T_1$ will read $$ T_1 = \frac{n T_0 + T_{source}}{n+1}$$ where $T_0$ stands for the temperature before the quantity of fluid is added. The temperature difference $\Delta T = T_1 - T_0$ then equals $$ \Delta T = \frac{T_{source} - T_0}{n+1}$$ In the limit $\Delta t \to 0$ we have $n+1 \approx n = L/(a\Delta t)$, so $$ \Delta T = \frac{(T_{source} - T_0)\, a \Delta t}{L} $$ and, since $$ \frac{\Delta T } {\Delta t} \to \dot{T},$$ one recovers the ODE $$ \dot{T} = \frac{(T_{source} - T)a}{L}$$ The larger the jar, the slower the temperature change, as one would expect. The cooler the temperature at the inlet, the faster the change.
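A forward-Euler simulation of the recovered ODE shows the jar temperature relaxing exponentially toward the inlet temperature (all numerical values here are hypothetical):

```python
import math

T_source, T0 = 60.0, 20.0   # inlet and initial temperatures (hypothetical)
a, L = 0.5, 2.0             # flow rate and jar volume (hypothetical)

dt, steps = 0.001, 50_000   # integrate up to t = 50
T = T0
for _ in range(steps):
    T += (T_source - T) * a / L * dt

# exact solution of dT/dt = (T_source - T) a / L
t_end = dt * steps
exact = T_source + (T0 - T_source) * math.exp(-a / L * t_end)
```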
{ "language": "en", "url": "https://math.stackexchange.com/questions/1946071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Direct sums of invariant subspaces Let $A$ be a complex $n\times n$ matrix, with its Jordan canonical form $J=diag(J_1,\cdots,J_s)$. Then there exists an invertible matrix $P$ such that $P^{-1}AP=J$. It is easy to verify that $\Bbb C^n=V_1\oplus \cdots \oplus V_s$, where $V_i$ is spanned by the columns of $P$ corresponding to $J_i$. My question is: for any invariant subspace $V\subset \Bbb C^n$, can we have $$V=(V\cap V_1)\oplus\cdots\oplus (V\cap V_s)?$$ If every $J_i$ is a $1\times 1$ matrix, then it is easy. But for the general case, I do not have an idea.
This is false. Let $A$ be the $2\times 2$ identity matrix. Then $V_1 = \operatorname{span}\left\{\begin{pmatrix} 1 \\ 0 \end{pmatrix}\right\}$ and $V_2 = \operatorname{span}\left\{\begin{pmatrix} 0 \\ 1 \end{pmatrix}\right\}$ are the invariant subspaces corresponding to the Jordan blocks of $A$, and $\mathbb{C}^2 = V_1 \oplus V_2$. Let $V = \operatorname{span}\left\{\begin{pmatrix} 1 \\ 1 \end{pmatrix}\right\}$. Then $V \cap V_1 = 0 = V \cap V_2$. So $V \neq (V\cap V_1) \oplus (V \cap V_2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1946186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why are (S)pinor representations *restrictions* of Clifford algebra representations? Firstly, just to clarify my notation: Let $Cl(V,q)$ denote the Clifford Algebra of a quadratic vector space $(V,q)$ and denote by $Cl(V,q)_{0\vert 1}$ the even/odd part in the $\mathbb{Z}_2$-grading of $Cl(V,q) = Cl(V,q)_0 \oplus Cl(V,q)_1$ of the Clifford-algebra. Now for the subgroups $Pin(V,q) \subset Cl(V,q)$ and $Spin(V,q)\subset Cl(V,q)_0$ is is defined: A pinor representation is the restriction of an irreducible representation of $Cl(V,q)$ onto $Pin(V,q)$. Similary a spinor representation is the restriction of an irreducible representation of $Cl(V,q)_0$ onto $Spin(V,q)$. My question is: What is the reason in defining pinor/spinor-representations as the restrictions of Clifford algebra representations, rather then just as usual group-representations of the groups themselves? Remark: "Physical" explanations (as: 'The so defined spinor fields wouldn't behave like spinors, since...') are also very welcome.
I suspect that there is a matter of terminology involved here. When speaking about "spinors" we do not exactly mean the elements of the group $Spin(V,q)$ but rather the elements of special vector spaces upon which Clifford algebras act. In other words, the terminology spinors imply the elements of some specific (Clifford algebra, Pauli matrices etc )-module. On the other hand, you are right in the fact that these spinor representations are not the only representations of the $Spin(V,q)$ group. In fact these ones are faithful. They are the ones obtained by restriction from the action of the Clifford algebra on the spinor space (spinor module). You can have a more detailed look here for the definitions and the elementary properties implied (look mainly at the definitions and the proposition of the first page).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1946332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Prove the transitivity of the $<$ relation in the natural numbers using the classical definition and the associativity of addition In the book Notes on Mathematical Analysis the set $\mathbb{N}$ of natural numbers was defined using the classical Peano axioms. Then $<$ was defined as: if there exists $k\in \mathbb{N}$ such that $b=S^k(a)$, then we say $a<b$. Then it was asked to prove the transitivity of the $<$ relation. I proceed as follows: if $a<b$ then by definition there is a $k$ with $b=S^k(a)$. Same for $b<c$: $c=S^l(b)$. Then it follows that $c=S^{k+l}(a)$, which means $a<c$. I can't find errors in my proof, but according to the book's hint this transitivity should be proved using the associativity which was proved before. I can't find the required proof either. Any help will be greatly appreciated. ps. Moreover, in the next section another definition of $\mathbb{N}$ appears as: Given a set $N$ and a map $S:N\rightarrow N$, such that a) $1\notin S(N)$; b) $S$ is injective; c) The set $N$ is well-ordered, that is, for any non-empty subset $E$ of $N$, there exists $m\in E$ such that $\forall n\in E(m\le n)$. The definition of $<$ is the same as above. In this case it is also asked to prove the transitivity of $<$, as well as $N=\mathbb{N}$. This also confuses me since I can't tell the difference between these two definitions, and I think the proofs of statements like "among $a<b, b<a, a=b$ exactly one is correct" and "1 is the minimal number in $N$" are the same as before. Please help. Thanks in advance.
Long comment From $a < b$ and $b < c$, we have: $b=S^k(a)$ and $c=S^l(b)$. By substitution: $c=S^l[S^k(a)]$. Now we need addition; $S(a)=a+1$, from the first axiom for addition, and from the inductive hypothesis $S^n(a)=a+n$ we have: $S^{n+1}(a)=S[S^n(a)]=S[(a+n)]=(a+n)+1$. By associativity: $(a+n)+1=a+(n+1)$ and thus we conclude with: $S^{n+1}(a)=a+(n+1)$. "Cooking them" together, we have: $S(a)=a+1$ and: if $S^n(a)=a+n$, then $S^{n+1}(a)=a+(n+1)$. By induction we conclude with: $S^k(a)=a+k$, for any $k$. Going back to $c=S^l[S^k(a)]$, we have: $c=S^l[a+k]=(a+k)+l=a+(k+l)=S^{k+l}(a)$ and we have proved that $a < c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1946422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Product of Levi-Civita symbol is determinant? I am confused with how can one write product of Levi-Civita symbol as determinant? I want to prove 'epsilon-delta' identity and found this questions answers it. But I am stuck at product of Levi-Civita symbol Proof relation between Levi-Civita symbol and Kronecker deltas in Group Theory $$ \varepsilon_{ijk}\varepsilon_{lmn} = \begin{vmatrix} \delta_{il} & \delta_{im} & \delta_{in} \\ \delta_{jl} & \delta_{jm} & \delta_{jn} \\ \delta_{kl} & \delta_{km} & \delta_{kn} \\ \end{vmatrix} $$ This Wikipedia article gives the above relation. I am confused how Product of Levi-Civita symbol is a determinant. Can someone explain?
This identity relates the product of the volumes spanned by two sets of three vectors (with a minus sign if the sets are oppositely oriented) in terms of the inner products of the individual vectors. In particular, the determinant can be understood as computing the product of the volumes by projecting one set of vectors onto the other set.
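The identity can also be verified exhaustively: there are only $3^6 = 729$ index combinations (indices taken as $0,1,2$ here):

```python
from itertools import product

def eps(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2}."""
    return (j - i) * (k - i) * (k - j) // 2

def delta(a, b):
    return 1 if a == b else 0

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

for i, j, k, l, m, n in product(range(3), repeat=6):
    rhs = det3([[delta(i, l), delta(i, m), delta(i, n)],
                [delta(j, l), delta(j, m), delta(j, n)],
                [delta(k, l), delta(k, m), delta(k, n)]])
    assert eps(i, j, k) * eps(l, m, n) == rhs
ok = True
```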
{ "language": "en", "url": "https://math.stackexchange.com/questions/1946493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Power sets in set theory What is the set $\mathcal{P}(\mathcal{P}(\mathcal{P}( \emptyset )))$? Would it be $\{\{\{ \emptyset\}\}\}$? I understand that $\mathcal{P}(\{a,b,c\}) = \{ \{ \}, \{a\}, \{b\}, \{c\}, \{a,b\}, \{a,c\}, \{b,c\}, \{a,b,c\} \}$.
For every set $A$ we have $A\subseteq A$ so that $A\in\wp(A)$. So also for $\varnothing$. If $B\neq\varnothing$ then some $b$ exists with $b\in B$. So in that case $B\subseteq\varnothing$ would lead to $b\in\varnothing$, which is evidently not true. We conclude now that $\varnothing$ is the only subset of $\varnothing$. That is: $$\wp(\varnothing)=\{\varnothing\}$$ Looking at $\{\varnothing\}$ we find exactly two subsets: $\varnothing$ (which is a subset of every set) and $\{\varnothing\}$. This leads to: $$\wp(\wp(\varnothing))=\wp(\{\varnothing\})=\{\varnothing,\{\varnothing\}\}$$ This set has $4$ subsets (find them!). I leave the rest to you.
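The iterated power sets are small enough to build explicitly; a short sketch with frozensets confirms the sizes $1, 2, 4$, so $\wp(\wp(\wp(\varnothing)))$ has four elements, not one:

```python
def powerset(s):
    """Power set of a frozenset, returned as a frozenset of frozensets."""
    items = list(s)
    subsets = set()
    for mask in range(1 << len(items)):
        subsets.add(frozenset(x for i, x in enumerate(items) if mask >> i & 1))
    return frozenset(subsets)

empty = frozenset()
p1 = powerset(empty)   # { {} }
p2 = powerset(p1)      # { {}, {{}} }
p3 = powerset(p2)      # the four subsets of a two-element set
```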
{ "language": "en", "url": "https://math.stackexchange.com/questions/1946604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Help with calculating $(3+5i)^6$ I want to calculate $(3+5i)^6$ with $i = \sqrt{-1}$, but somehow I'm doing something wrong. How I calculate it: Say $z = x + yi = 3+5i$, so $z^6$ has to be calculated. $$|z| = \sqrt{3^2+5^2} = \sqrt{34}$$ $$\text{Arg}(z) = \arctan\left(\frac{y}{x}\right) = \arctan\left(\frac{5}{3}\right)$$ $$z = |z|e^{i\text{Arg}(z)} = \sqrt{34}e^{i\arctan(5/3)}$$ $$z^6 = (\sqrt{34})^6e^{6i\arctan{(5/3)}} =39304e^6e^{i\arctan(5/3)} = 39304e^6(\cos{(\arctan{(5/3)})} + i\sin{(\arctan{(5/3)})})$$ Now to calculate $\cos{(\arctan{(5/3)})}$ and $\sin{(\arctan{(5/3)})}$, I draw a right triangle with an angle $\theta$ and edges that satisfy this. From this triangle, $$\cos{(\arctan{(5/3)})} = \cos\theta = \frac{3}{\sqrt{34}}$$ and $$\sin{(\arctan{(5/3)})} = \sin\theta = \frac{5}{\sqrt{34}}$$ So $$z^6 = 39304e^6\left(\frac{3}{\sqrt{34}} + i\frac{5}{\sqrt{34}}\right) = \frac{117912e^6}{\sqrt{34}} + i\times\frac{196520e^6}{\sqrt{34}}$$ When plugged into a calculator (or in this case Julia), this is approximately julia> 117912e^6/sqrt(34) + im*196520e^6/sqrt(34) 8.158032643069409e6 + 1.3596721071782347e7im So I get $z^6 \approx 8.16\times 10^6 + 1.36\times 10^7 i$. (I'm writing this part because I want to document everything I did, since I don't know what I'm doing wrong.) However when I calculate $(3 + 5i)^6$ directly, I get julia> (3 + 5im)^6 39104 - 3960im So I get $z^6 = 39104 - 3960i$. What am I doing wrong? Thanks in advance.
The error in your work is the step $e^{6i\arctan(5/3)} = e^6e^{i\arctan(5/3)}$: the exponent $6i\arctan(5/3)$ is purely imaginary, and $e^{6i\theta} = \cos(6\theta)+i\sin(6\theta) \neq e^6e^{i\theta}$; the $6$ must stay with the angle. Keeping it there gives $z^6 = 34^3\left(\cos(6\arctan(5/3)) + i\sin(6\arctan(5/3))\right) = 39104 - 3960i$. Now generalize the problem, when $\text{z}\in\mathbb{C}$ and $\text{n}\in\mathbb{R}$: $$\text{z}^\text{n}=\left(\Re\left[\text{z}\right]+\Im\left[\text{z}\right]i\right)^\text{n}=\left(\left|\text{z}\right|e^{\left(\arg\left(\text{z}\right)+2\pi k\right)i}\right)^\text{n}=\left|\text{z}\right|^\text{n}e^{\text{n}\left(\arg\left(\text{z}\right)+2\pi k\right)i}=$$ $$\left|\text{z}\right|^\text{n}\cos\left(\text{n}\left(\arg\left(\text{z}\right)+2\pi k\right)\right)+\left|\text{z}\right|^\text{n}\sin\left(\text{n}\left(\arg\left(\text{z}\right)+2\pi k\right)\right)i$$ Where $\left|\text{z}\right|=\sqrt{\Re^2\left[\text{z}\right]+\Im^2\left[\text{z}\right]}$, $\arg\left(\text{z}\right)$ is the complex argument of $\text{z}$ and $k\in\mathbb{Z}$.
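A short check with Python's complex numbers confirms that keeping the exponent together, $r^6 e^{6i\theta}$, reproduces the direct computation:

```python
import cmath

z = 3 + 5j
r, theta = abs(z), cmath.phase(z)         # r = sqrt(34), theta = arctan(5/3)

via_polar = r**6 * cmath.exp(6j * theta)  # the 6 multiplies the whole exponent
direct = z**6
```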
{ "language": "en", "url": "https://math.stackexchange.com/questions/1946697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Proof of 1 = 0, use of ill-formed statements In his book "Analysis 1", Terence Tao writes: A logical argument should not contain any ill-formed statements, thus for instance if an argument uses a statement such as x/y = z, it needs to first ensure that y is not equal to zero. Many purported proofs of “0=1” or other false statements rely on overlooking this “statements must be well-formed” criterion. Can you give an example of such a proof of "0=1"?
Start with the assumption $$x = 0$$ Divide both sides by $x$ to get $$x/x=0/x$$ and thus $$1=0$$ That's the general scheme. Of course it generally gets more obfuscated, for example by starting with the assumption $a+b=c$ and then later dividing both sides by $c-a-b$.
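A classic obfuscated instance of this scheme starts from the assumption $a=b$: $$a = b \implies a^2 = ab \implies a^2 - b^2 = ab - b^2 \implies (a+b)(a-b) = b(a-b).$$ Dividing both sides by $a-b$ (which is $0$, since $a=b$) gives $a+b=b$; taking $a=b=1$ then yields $2=1$, i.e. $1=0$. Every step except the division is perfectly valid, which is what makes the fallacy easy to hide.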
{ "language": "en", "url": "https://math.stackexchange.com/questions/1946824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Transfinite Numbers in Set Theory Since my school doesn't teach higher-level things like set theory or transfinite numbers, I have been learning on my own. There are large gaps in my knowledge of higher-level math because of this. Where would transfinite numbers fit in this sequence? $$\mathbb S \supseteq \mathbb O \supseteq \mathbb H \supseteq \mathbb C \supseteq \mathbb R \supseteq \mathbb Q \supseteq \mathbb Z \supseteq \mathbb N$$ or alternatively: Sedenions $\supseteq$ Octonions $\supseteq$ Quaternions $\supseteq$ Complex numbers $\supseteq$ Real numbers $\supseteq$ Rational numbers $\supseteq$ Integers $\supseteq$ Natural numbers. (I know I'm not including irrational numbers, but you get the point).
They would fit nowhere. In fact, the "transfinite numbers", more commonly called ordinals, extend the natural numbers, but do not contain the negative integers. So, the closest thing we can say is that $$ \mathbb S \supseteq \mathbb O \supseteq \mathbb H \supseteq \mathbb C \supseteq \mathbb R \supseteq \mathbb Q \supseteq \mathbb Z \supseteq \mathbb N \subseteq \mathsf{On}, $$ which doesn't help you much. In fact, the ordinals are a totally different way of extending numbers than the extensions you've given. The extensions you've given are all about having different algebraic properties; the naturals you start with, the integers solve all equations of the form $a + x = b$, the rationals all equations of the form $a\cdot x = b$ (with $a \neq 0$), the reals complete the rationals (which gives you calculus), the complex numbers add solutions to $x^2 = -1$, and the final ones are all interesting because they extend the complex numbers in such a way that they are "finite-dimensional real algebras" (you'll learn eventually what this means) where multiplication acts relatively nicely. By contrast, the ordinals are an extension that has to do with ways of "well-ordering" sets. If you want something in the middle, there is the field of surreal numbers -- it extends both $\mathbb R$ and $\mathsf{On}$ at the same time, and it is a very interesting object. It is in some sense the "largest field" that is still linearly ordered -- that is, either $a < b$ or $a = b$ or $a > b$. (Note that that already doesn't make sense for the complex numbers.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1947079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does this derivative equation hold? $$\frac{d(Q/x)}{dx} = \frac{x(\frac{dQ}{dx})-Q(\frac{dx}{dx})}{x^2}$$ Assume $Q$ is a function of $x$. This equation is in my microeconomics textbook, but I don't know how we can get from the left-hand side to the right-hand side. Can someone please explain?
It is known as the quotient rule. The more general derivative is given as: $$\frac{d}{dx}\frac fg=\frac{f'g-fg'}{g^2}$$ Inputting $f=Q$ and $g=x$ gives $$\frac{d}{dx}\frac Qx=\frac{Q'x-Qx'}{x^2}=\frac{x\left(\frac{dQ}{dx}\right)-Q\require{cancel}\cancel{\frac{dx}{dx}}}{x^2}$$ $$=\frac{x\left(\frac{dQ}{dx}\right)-Q}{x^2}$$
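If you want to double-check the identity numerically, here is a minimal Python sketch; the choice $Q(x)=x^3+2x$ is arbitrary, purely for illustration:

```python
def Q(x):
    return x**3 + 2*x          # an arbitrary differentiable choice of Q

def dQ(x):
    return 3*x**2 + 2          # its derivative dQ/dx

def quotient_rule(x):
    # right-hand side of the identity: (x*Q'(x) - Q(x)) / x^2
    return (x*dQ(x) - Q(x)) / x**2

def numeric_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2*h)

x0 = 1.7
lhs = numeric_derivative(lambda t: Q(t)/t, x0)   # d(Q/x)/dx at x0
print(abs(lhs - quotient_rule(x0)) < 1e-6)       # True
```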
{ "language": "en", "url": "https://math.stackexchange.com/questions/1947145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
What is $\bigcap_{n=1}^{\infty}[\frac{1}{3n}, 1+\frac{1}{n})$ What is $\bigcap_{n=1}^{\infty}[\frac{1}{3n}, 1+\frac{1}{n})$ So, just from thinking about it logically, I got $\bigcap_{n=1}^{\infty}[\frac{1}{3n}, 1+\frac{1}{n})=[\frac{1}{3}, 1)$. However, I'm not sure about whether the bound on the right should be closed or open. Is $1$ included in this set or not?
$1\in [1/3n, 1+1/n)$ for every $n\in \mathbb N,$ so $1$ does belong to the intersection: the right endpoint is closed, and in fact $\bigcap_{n=1}^{\infty}[\frac{1}{3n}, 1+\frac{1}{n})=[\frac{1}{3}, 1]$.
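For what it's worth, a quick empirical illustration of the hint in Python (finitely many $n$ can of course only illustrate the claim, not prove it):

```python
def in_interval(x, n):
    # membership test for [1/(3n), 1 + 1/n)
    return 1/(3*n) <= x < 1 + 1/n

N = 10_000
# 1 lies in every interval, so it survives into the intersection:
print(all(in_interval(1.0, n) for n in range(1, N + 1)))    # True
# any point strictly greater than 1 eventually falls out:
print(all(in_interval(1.001, n) for n in range(1, N + 1)))  # False
```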
{ "language": "en", "url": "https://math.stackexchange.com/questions/1947314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
For a compact set K, does $K\supset f(K)\supset f^{2}(K)\supset...$ hold for function f? Let $K$ be a compact normed space and $f:K\rightarrow K$ such that $$\|f(x)-f(y)\|<\|x-y\|\quad\quad\forall\,\, x, y\in K, x\neq y.$$ Is it true that $K\supset f(K)\supset f^{2}(K)\supset...$? How to show it?
If this were true, then all points of the space would be fixed: every set of the form $\{x\}$ is compact. (and then the map would be the identity, which is impossible...)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1947427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How was this geometry problem created? This is a standard High School Olympiad problem and for an experienced problem solver a quite easy solve. But how was this problem created. To pose a problem, I believe is much harder, than to solve a posed problem. Here the problem poser had to first make the figure up and then simultaneously realise that $ND$ had the wonderful property of being equal in magnitude to the circumradius. Is there a nifty way to find out these wonderful geometric properties?
Let me show you another solution ;)... Maybe this is how they came up with this problem. Manipulating midpoints, orthogonality and parallel lines. Draw the perpendicular to edge $AC$ at point $A$ and let it intersect the line $BN$ at point $P$. Since $\angle \, CBP = 90^{\circ} = \angle \, CAP$, it follows that quadrilateral $BCAP$ is inscribed in a circle (the circumcircle of triangle $ABC$) and $PC$ is a diameter of that circle. Since line $FK \equiv NK$ is perpendicular to $AC$, it is parallel to $AP$. As $FK$ passes through the midpoint $F$ of segment $AB$ and is parallel to $AP$, its intersection point $N$ with segment $BP$ is the midpoint of $BP$ (i.e. $FN$ is the midsegment of triangle $BAP$). Consequently, in triangle $BCP$ the points $N$ and $D$ are midpoints of edges $BP$ and $BC$ respectively, making $ND$ the midsegment of triangle $BCP$ parallel to the diameter $CP$ and half of its length, i.e. $ND = \frac{1}{2} CP$ equals the radius of the circumcircle of triangle $ABC$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1947507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Find positive integers $x,y$ such that $7^{x}-3^{y}=4$ Find all positive integers $x,y$ such that $7^{x}-3^{y}=4$. I think it can be solved using the theory of congruences, but I can't make progress. Could somebody please help me? Thank you
Let us go down the rabbit hole. Assume that there is a solution with $ x, y > 1 $, and rearrange to find $$ 7(7^{x-1} - 1) = 3(3^{y-1} - 1) $$ Note that $ 7^{x-1} - 1 $ is divisible by $ 3 $ exactly once (since $ x > 1 $): the contradiction will arise from this. Reducing modulo $ 7 $ we find that $ 3^{y-1} \equiv 1 $, and since the order of $ 3 $ modulo $ 7 $ is $ 6 $, we find that $ y-1 $ is divisible by $ 6 $, hence $ 3^{y-1} - 1 $ is divisible by $ 3^6 - 1 = 2^3 \times 7 \times 13 $. Now, reducing modulo $ 13 $ we find that $ 7^{x-1} \equiv 1 $, and since the order of $ 7 $ modulo $ 13 $ is $ 12 $, we find that $ x-1 $ is divisible by $ 12 $. As above, this implies that $ 7^{x-1} - 1 $ is divisible by $ 7^{12} - 1 $, which is divisible by $ 9 $. This is the desired contradiction, hence there are no solutions with $ x, y > 1 $. For an outline of the method I used here, see Will Jagy's answers to this related question.
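A brute-force search over a small range is consistent with this: the only solution it finds is $x=y=1$, i.e. $7-3=4$. (A sketch in Python; the bounds $40$ and $80$ are arbitrary.)

```python
# search 7**x - 3**y == 4 over small positive exponents
solutions = [(x, y)
             for x in range(1, 40)
             for y in range(1, 80)
             if 7**x - 3**y == 4]
print(solutions)  # [(1, 1)]
```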
{ "language": "en", "url": "https://math.stackexchange.com/questions/1947650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Very general object-free categories? Is there a name for categories $\mathcal C$ such that $\text{Obj}(\mathcal C)$ coinside with $\text{Mor}(\mathcal C)$? In the diagram below the morphisms are $a\overset{a}{\to}a$, $a\overset{b}{\to}c$, $d\overset{c}{\to}c$, $a\overset{d}{\to}e$, $e\overset{e}{\to}f$, $c\overset{f}{\to}e$. I don't know if this is interesting mathematically. My idea is modelling. In the models all objects should be of the arrow-type in the same category. Nothing in the definition of categories exclude this. Domains and codomains can be defined as usual.
Interpreted in a certain way what you are asking is standard. A category $C$ is a collection $C_1$ together with two maps $s,t:C_1\to C_1$ and a map $m$ from $C_1\times_{\langle s,t\rangle} C_1 =\{(f,g)\,|\,s(f)=t(g)\}$ to $C_1$ satisfying: $st=t$, $ts=s$, $m(m(f,g),h))=m(f,m(g,h))$ and $m(f,s(f))=f=m(t(f),f)$. Note that normally we write $m(f,g)=f\circ g$. To obtain the usual description of category set $C_0=\{f\in C_1\,|\,s(f)=f\}$, keep the same composition, and let the domain and codomain maps $c,d :C_1\to C_0$ be defined by $c(f)=t(f)$ and $d(f)=s(f)$ respectively. Conversely given a category $C$ forget the set $C_0$, keep the same composition, and let $s$ and $t$ be the maps defined by $t(f) = 1_{c(f)}$ and $s(f)=1_{d(f)}$.
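Here is a small Python sketch of this arrows-only encoding for the category with two objects and a single non-identity arrow $f$; the names `ida`, `idb`, `f` are arbitrary, and identities are recognized by the condition $s(f)=f$, exactly as in the definition of $C_0$ above:

```python
# Morphisms only: 'ida' and 'idb' play the role of the two objects.
s = {'ida': 'ida', 'idb': 'idb', 'f': 'ida'}  # s(g) = identity at the domain of g
t = {'ida': 'ida', 'idb': 'idb', 'f': 'idb'}  # t(g) = identity at the codomain of g

def m(f1, f2):
    # composition m(f1, f2) = f1 after f2, defined only when s(f1) == t(f2)
    assert s[f1] == t[f2], "not composable"
    if s[f1] == f1:   # f1 is an identity
        return f2
    if s[f2] == f2:   # f2 is an identity
        return f1
    raise ValueError("this tiny example has no non-identity composites")

# check the axioms st = t, ts = s and the unit laws on every morphism
for g in s:
    assert s[t[g]] == t[g] and t[s[g]] == s[g]
    assert m(g, s[g]) == g and m(t[g], g) == g
print("all axioms hold")
```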
{ "language": "en", "url": "https://math.stackexchange.com/questions/1947747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
All solutions of an equation I'm looking for all the solutions of $$x'=x(x-1)(x+1)$$ but I absolutely don't know what to do! Thanks!
Let us start by noticing that $$\frac{1}{(z-1)z(z+1)} = \frac{1/2}{z-1}-\frac{1}{z}+\frac{1/2}{z+1}\tag{1}$$ by partial fraction decomposition/the residue theorem. It follows that the separable DE $$ \frac{x'}{(x-1)x(x+1)} = 1 \tag{2}$$ leads to $$ \frac{1}{2}\log(x-1)-\log(x)+\frac{1}{2}\log(x+1) = t+C \tag{3} $$ or to: $$ \log\left(1-\frac{1}{x^2}\right) = 2t+D\tag{4} $$ from which: $$ x(t)^2 = \color{red}{\frac{1}{1-K e^{2t}}}\tag{5} $$ where the constant $K$ depends on the initial conditions: $K>0$ corresponds to $x(0)^2>1$ (and such solutions blow up in finite time), $K<0$ to $0<x(0)^2<1$, while the constant solutions $x\equiv 0,\pm 1$ (excluded when dividing by $(x-1)x(x+1)$) must be added separately. In particular, $x(0)$ tells us exactly where the associated solution has a blowup.
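One can check numerically that $(5)$ really satisfies $x'=x(x-1)(x+1)$; below is a sketch in Python with the arbitrary choice $K=-1$ (a negative $K$ corresponds to $0<x(0)^2<1$), using a central difference to approximate $x'$:

```python
import math

K = -1.0  # arbitrary choice; gives x(t)^2 = 1/(1 + e^(2t)), so 0 < x < 1

def x(t):
    return 1.0 / math.sqrt(1.0 - K * math.exp(2*t))

def rhs(v):
    return v * (v - 1) * (v + 1)  # x(x-1)(x+1) = x^3 - x

h = 1e-6
for t in (-1.0, 0.0, 0.5):
    deriv = (x(t + h) - x(t - h)) / (2 * h)  # numerical x'(t)
    assert abs(deriv - rhs(x(t))) < 1e-6
print("x' = x(x-1)(x+1) holds along the solution")
```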
{ "language": "en", "url": "https://math.stackexchange.com/questions/1947843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
How can I know whether the point is a maximum or minimum without much calculation? Find the maximum and minimum of this function and state whether they are local or global: $$f: \mathbb{R} \ni x \mapsto \frac{x}{x^{2}+x+1} \in \mathbb{R}$$ \begin{align*} f'(x)&= \frac{-x^{2}+x}{\left(x^{2}+x+1\right)^{2}}\\ f'(x)&=0 \iff \frac{-x^{2}+x}{\left(x^{2}+x+1\right)^{2}}=0 \iff -x^2+x=0 \iff x(1-x)=0, \end{align*} which gives $x_{1}=0, x_{2}=1$. Here comes the disturbing part, we need to know if these are maximum or minimum and for this we usually used the second derivative. But this would be soo exhausting, I don't even want think of doing it. There must be an easier way and I remember someone here has even recommended me using monotony somehow. But how can we do this here? Please do tell me, at home I got enough time to use second derivative but surely not in the exam : /
Your expression for the derivative is wrong, but I'll let you sort that out. What is important is that: * *$f(0)=0$; *there are just two values of $x$ for which $f'(x)=0$, and one of them is positive, the other negative; *$f(x)$ tends to zero as $x$ tends to $\pm\infty$; *$f(x)$ has the same sign as $x$. These facts are enough to prove the nature of the stationary points. You should draw a graph if it helps. (You certainly don't have to calculate the second derivative.)
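To illustrate these facts numerically (a Python sketch; a grid search on the arbitrary window $[-100,100]$ suggests, rather than proves, where the global extrema sit):

```python
def f(x):
    return x / (x*x + x + 1)

xs = [k / 1000 for k in range(-100_000, 100_001)]  # grid on [-100, 100]
vals = [f(v) for v in xs]

x_max = xs[vals.index(max(vals))]
x_min = xs[vals.index(min(vals))]
print(x_max, f(x_max))  # 1.0 with f(1) = 1/3, the global maximum
print(x_min, f(x_min))  # -1.0 with f(-1) = -1, the global minimum
```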
{ "language": "en", "url": "https://math.stackexchange.com/questions/1947917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Is there a series approximation in terms of $n$ for the sum of the harmonic progression : $\sum_{k=0}^{n}\frac{1}{1+ak}$? When $a=1$ the sum is given by $ H_{n} $ and we have : $$H_{n}=log(n)+γ+\frac{1}{2n}-\frac{1}{12n^2}+\frac{1}{120n^4} \hspace{0.5cm}.\hspace{.1cm}.\hspace{.1cm}. $$ Does any representation of a similar type exist for arbitrary positive $a$ ?
As Felix Marin answered,$$\sum_{k = 1}^{n}{1 \over 1 + ak}=\frac{H_{n+\frac{1}{a}}-H_{\frac{1}{a}}}{a}$$ Now, using the asymptotics $$H_m=\gamma +\log \left({m}\right)+\frac{1}{2 m}-\frac{1}{12 m^2}+\frac{1}{120 m^4}+O\left(\frac{1}{m^5}\right)$$ $$H_{n+\frac{1}{a}}=\gamma +\log \left({n+\frac{1}{a}}\right)+\frac{1}{2 (n+\frac{1}{a})}-\frac{1}{12 (n+\frac{1}{a})^2}+\frac{1}{120(n+\frac{1}{a} )^4}+O\left(\frac{1}{n^5}\right)$$ Now, developing each term as Taylor series $$\log \left({n+\frac{1}{a}}\right)=\log \left({n}\right)+\frac{1}{a n}-\frac{1}{2 a^2 n^2}+\frac{1}{3 a^3 n^3}-\frac{1}{4 a^4 n^4}+O\left(\frac{1}{n^5}\right)$$ $$\frac{1}{ (n+\frac{1}{a})}=\frac{1}{n}-\frac{1}{a n^2}+\frac{1}{a^2 n^3}-\frac{1}{a^3 n^4}+O\left(\frac{1}{n^5}\right)$$ $$\frac{1}{ (n+\frac{1}{a})^2}=\frac{1}{n^2}-\frac{2}{a n^3}+\frac{3}{a^2 n^4}+O\left(\frac{1}{n^5}\right)$$ $$\frac{1}{ (n+\frac{1}{a})^4}=\frac{1}{n^4}+O\left(\frac{1}{n^5}\right) $$and replacing, you should get $$\sum_{k = 1}^{n}{1 \over 1 + ak}=\frac{\gamma-H_{\frac{1}{a}} }{a}+\frac{\log \left({n}\right) }{a}+\frac{a+2}{2 a^2 }\frac 1{n}-\frac{a^2+6 a+6}{12 a^3 }\frac 1{n^2}+\frac{a^2+3 a+2}{6 a^4 }\frac 1{n^3}+\frac{a^4-30 a^2-60 a-30}{120 a^5 }\frac 1{n^4}+O\left(\frac{1}{n^5}\right)$$ Take care that, for $a=1$ $$\sum_{k = 1}^{n}{1 \over 1 + k}=H_{n+1}-1=\gamma -1+\log \left({n}\right)+\frac{3}{2 n}-\frac{13}{12 n^2}+\frac{1}{n^3}-\frac{119}{120 n^4}+O\left(\frac{1}{n^5}\right)$$ and not $H_n$ (for which you gave the correct expansion) because of the shift of the index.
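As a numerical sanity check of the final expansion, here is a Python sketch with $a=\frac12$ (chosen so that $H_{1/a}=H_2=\frac32$ is elementary) and the value of $\gamma$ hardcoded:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant
a = 0.5                     # then 1/a = 2 and H_{1/a} = 1 + 1/2 = 3/2
H_inv_a = 1.5

def partial_sum(n):
    return sum(1.0 / (1.0 + a*k) for k in range(1, n + 1))

def asymptotic(n):
    c1 = (a + 2) / (2 * a**2)
    c2 = -(a**2 + 6*a + 6) / (12 * a**3)
    c3 = (a**2 + 3*a + 2) / (6 * a**4)
    c4 = (a**4 - 30*a**2 - 60*a - 30) / (120 * a**5)
    return ((GAMMA - H_inv_a) / a + math.log(n) / a
            + c1/n + c2/n**2 + c3/n**3 + c4/n**4)

n = 200
print(abs(partial_sum(n) - asymptotic(n)))  # ~1e-10, consistent with O(1/n^5)
```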
{ "language": "en", "url": "https://math.stackexchange.com/questions/1948139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Find the values of $\alpha $ satisfying the equation (determinant) Find the values of $\alpha $ satisfying the equation $$\begin{vmatrix} (1+\alpha)^2 & (1+2\alpha)^2 & (1+3\alpha)^2\\ (2+\alpha)^2& (2+2\alpha)^2 & (2+3\alpha)^2\\ (3+\alpha)^2& (3+2\alpha)^2 & (3+3\alpha)^2 \end{vmatrix}=-648\alpha $$ I used $$R_3 \rightarrow R_3- R_2 \qquad R_2 \rightarrow R_2- R_1$$ $$\begin{vmatrix} (1+\alpha)^2 & (1+2\alpha)^2 & (1+3\alpha)^2\\ 3+2\alpha& 3+4\alpha & 3+6\alpha\\ 5+2\alpha& 5+4\alpha & 5+6\alpha \end{vmatrix}$$ Then $$R_3 \rightarrow R_3- R_2$$ $$\begin{vmatrix} (1+\alpha)^2 & (1+2\alpha)^2 & (1+3\alpha)^2\\ 3+2\alpha& 3+4\alpha & 3+6\alpha\\ 2& 2 & 2 \end{vmatrix}$$ Now applying column operations will produce zeros, but the computation becomes too long. This is a contest question, so it should not be that long.
Hint: write it as a product of two determinants, after taking $\alpha,\alpha^2$ common from one of the determinants, to get $\alpha=\pm 9$ (note that $\alpha=0$ also satisfies the equation trivially, since both sides vanish); or, to continue your method, use $R_1\to R_1-R_2$.
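The hint can be confirmed by a short computation: the determinant expands to $-8\alpha^3$, so the equation $-8\alpha^3=-648\alpha$ gives $\alpha=0,\pm 9$. A Python sketch (since both sides are polynomials in $\alpha$ of degree at most $6$, agreement at the $11$ integer points below actually pins the determinant down exactly):

```python
def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def D(alpha):
    # the matrix of the problem: entry in row i, column j is (i + j*alpha)^2
    return det3([[(i + j*alpha)**2 for j in (1, 2, 3)] for i in (1, 2, 3)])

# D(alpha) agrees with -8*alpha^3 at 11 integer points:
assert all(D(a) == -8*a**3 for a in range(-5, 6))

# hence D(alpha) = -648*alpha exactly when 8*alpha*(alpha**2 - 81) = 0:
print([a for a in range(-20, 21) if D(a) == -648*a])  # [-9, 0, 9]
```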
{ "language": "en", "url": "https://math.stackexchange.com/questions/1948216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }