It is arguable that
lambda n: 2*n^2-1 # if n in ZZ and n > 0 else None
is the solution that earns full score in an exam. (But this is always a good test of the examiner's sense of humor. And of one's own courage in real life.)
Instead, we may translate the problem from the Italian as follows:
Write a function in Sage that computes the matrix of the linear map $F$, written in suitable bases,
from the space $V_A\times V_B$, where $V_A=V_A(n)$, $V_B=V_B(n)$ are two copies of $M_n(R)$
to the space $W=W(n)$ of endomorphisms of $M_n(R)$,
constructed as follows: For $A\in V_A$ and $B\in V_B$ we associate $F(A,B) = F(A,0)+F(0,B)$ which acts on a matrix $X$ as follows: $F(A,B)(X)=AX+XB$.
Then compute the rank for some small values of n.
My choice of the bases is as follows. Let $E_A(j,k)$ be the standard generators of $V_A$. And $E_B(j,k)$ be the standard generators of $V_B$. It is clear that the family of $2n^2$ endomorphisms in $W$ of the shape $F(\ E_A(j,k), 0\ )$ and $F(\ 0,E_B(j,k)\ )$ generate the image of $F$. We associate the matrix for them and the standard generators of $W$, which are the homomorphisms $E(\ (s,t)\to(u,v)\ )$ mapping all standard generators to zero, except for $E(s,t)$, which is mapped to $E(u,v)$.
It is clear now that a non-trivial linear combination leads to an equation of the shape $AX+XB\equiv 0$, an identity in $X$. ($A$ and $B$ are assembled from the coefficients of the non-trivial linear combination.) This is the case only for $A=-B$ a scalar multiple of the identity, so the dimension of the kernel is one. The following lines compute the matrix explicitly, and the rank is as expected. The programmer has no problems with the linear algebra, but rather with the reshaping of a matrix as a vector.
def FMatrix( n ):
    R = range(n)
    A = matrix( QQ, n^4, n^2+n^2 )
    count_columns = 0
    for j, k in cartesian_product( [R, R] ):
        # the elementary matrix (j,k) acts on the vector of morphisms ( (s,t) -> (u,v) )
        # first of all from the left, then from the right.
        # from the left
        count_rows = 0
        for s, t, u, v in cartesian_product( [R,R,R,R] ):
            # is (j,k) o (s,t) = (u,v) ?
            if j == u and k == s and t == v:
                A[ count_rows, count_columns ] = 1
            count_rows += 1
        count_columns += 1
        # from the right
        count_rows = 0
        for s, t, u, v in cartesian_product( [R,R,R,R] ):
            # is (s,t) o (j,k) = (u,v) ?
            if s == u and t == j and k == v:
                A[ count_rows, count_columns ] = 1
            count_rows += 1
        count_columns += 1
    return A

for n in [1..4]:
    print( "Dimension of W(%s) is: %s" % ( n, FMatrix(n).rank() ) )
And after a short thrill:
Dimension of W(1) is: 1
Dimension of W(2) is: 7
Dimension of W(3) is: 17
Dimension of W(4) is: 31
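And indeed, these ranks agree with the closed form $2n^2-1$ from the lambda at the top (a quick check, independent of Sage):

```python
# The ranks printed above match 2*n^2 - 1 for n = 1..4.
ranks = [1, 7, 17, 31]                        # output of FMatrix(n).rank()
closed_form = [2 * n**2 - 1 for n in range(1, 5)]
assert ranks == closed_form
```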
To start with, let me apologize for my ignorance as I know next to nothing about partial differential equations. My question is about the tensor product of Banach spaces but actually I do not understand those either, so let me rephrase it as a Hilbert space question. Alright! Let us consider the Lebesgue Hilbert space $L^2(X)$ and why not also $L^2(Y)$ where $X$ and $Y$ are the domains of our respective functions.
Now the tensor product $L^2(X) \otimes L^2(Y)$ is readily seen to be the closure of the linear span of functions of the form $fg$ where $f$ is in $L^2(X)$ and $g$ is in $L^2(Y)$. Perfect. This is of course (Banach spaces, so think isomorphic) also equal to $L^2(X \times Y)$. Nothing interesting here. (For an interesting related discussion check out my question Sequence of measurable functions and the excellent answer by Nate, which gives a couple of very interesting tools.)
So let me start with a nice PDE. The Poisson equation, $\Delta u = -f$, where - if you have an inclination to thinking in practical terms - you could see $f$ as a normalized charge density function, and then $u$ would be the potential this charge density gives us.
So, where does our solution live? As $u$ would be the convolution with the so-called fundamental solution of the Laplace equation, the solution $u$ is at least as smooth as $f$ itself (that is why I took this equation...). Also, if we looked at the whole of $\mathbf R^3$ we might have a problem with integrability: the maximum principle states that if the function were bounded (as when it has compact support and $f$ is continuous), then it would be a constant. That messes up the integrability. So, as I know nothing about these things, let us just consider the domain to be the unit cube $Q$. Integrable for sure!
So the solution would live in $L^2([0,1])$ tensored with itself three times. Actually, all its derivatives should be in this space as well, so we have the Sobolev space $W^{2, 2}(Q)$ if we take $f$ to be at least $C^2$. Also, as I have kinda shown here (need to restrict) and here (for $L^2$ you can also use Plancherel), this is equal to the $L^2$ space intersected with the $L^2$ functions whose Laplacian is also in $L^2$.
Now the question...
As I asked before in Non-separable linear PDE, where Willy gave an excellent answer, I was wondering about solutions in these spaces.
If we solve for instance the heat equation by separation of variables in a 1D heat bar, we find two ODEs, one for the position variable and one for the time variable. These belong to a set of simultaneous eigenvalues $\lambda_n$ and have corresponding solutions $s_n$ and $t_n$. Then we construct the solution as $$u = \sum_n \lambda_n s_n t_n.$$
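As a quick numerical sanity check (my own, not part of the question): one separated term $s_n t_n$, with $s_n(x)=\sin(n\pi x)$ and $t_n(t)=e^{-(n\pi)^2 t}$, solves the 1D heat equation $u_t = u_{xx}$. Central finite differences confirm this:

```python
import math

# One separated term u(x, t) = s_n(x) t_n(t) of the 1D heat equation,
# checked by central finite differences at a sample point.
n = 2
lam = (n * math.pi) ** 2                       # the eigenvalue lambda_n

def u(x, t):
    return math.sin(n * math.pi * x) * math.exp(-lam * t)

x0, t0, h = 0.3, 0.1, 1e-5
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
assert abs(u_t - u_xx) < 1e-4                  # u_t = u_xx up to FD error
```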
Question: This is a solution in the product space. What makes it so that one cannot write a posteriori a solution of a non-separable equation in this way? The simultaneous $\lambda_n$ would not correspond to such an equation? Any omissions and mistakes are solely due to my misunderstanding and/or misconception. I appreciate any constructive comment.

Oh blimey... For now, let me make it more abstract to find my source of misunderstanding. Let $H_1$ and $H_2$ be Hilbert spaces. A simple tensor (tensor of rank one) $x_1 \otimes x_2$ is the identification of $x_1$ with its dual element $x_1^*$. Kind of like $x_1 \times x_2$ where $x_1$ is in $H_1$ and $x_2$ is in $H_2$. Then the inner product defined on $H_1 \otimes H_2$ is given by $$\langle u_1 \otimes u_2, v_1 \otimes v_2 \rangle = \langle u_1, v_1 \rangle_{H_1} \langle u_2, v_2 \rangle_{H_2}.$$ Next, extend by linearity and complete under this inner product.
More abstractly, to each simple tensor $x_1 \otimes x_2$ associate the rank one operator $$ \begin{align} T:H_1^* &\to H_2\\ x^* &\mapsto x^*(x_1) x_2. \end{align} $$ We can use this to construct a linear mapping between $H_1 \otimes H_2$ and the space of finite rank operators from $H_1^*$ to $H_2$. These are a subspace of the Hilbert-Schmidt operators which has scalar product for an orthonormal basis $(e_n)$ of $H_1$, $$\langle \Lambda_1, \Lambda_2 \rangle = \sum_n \langle \Lambda_1 e_n^*, \Lambda_2 e_n^*\rangle.$$ Hence, we can identify the tensor product $H_1 \otimes H_2$ as the Hilbert Schmidt operators from $H_1^*$ to $H_2$.
Also to my knowledge, whenever the spaces are separable: $$L^2(X) \otimes L^2(Y) \cong L^2(X \times Y) \cong L^2(X; L^2(Y)).$$
So, if I have $f$ in $L^2$ and $g$ in $L^2$, then the set of $fg$ is in $L^2 \otimes L^2$. Furthermore, it is a dense set in $L^2(X \times Y)$.
When we are proving that a limit exists, we have a nice relation between $\epsilon$ and $\delta$: regardless of which $\epsilon$ we choose, we are able to find a suitable $\delta$ that later helps us determine how to choose $x$. But if the limit does not exist, how are $\epsilon$ and $\delta$ related? Here is one example with its solution, and I have something to ask about it.
Q: prove that $\lim _{x \to \infty} x\sin x$ doesn't exist
A: Assume the limit exists and is $L$. Fix $\epsilon = 1$. There exists $M > 0$ such that for $x > M$, $|x \sin x - L| < \epsilon = 1$. Set $P = \max \{L, M\}$ and let $x_0 = \frac{\pi}{2} + 2P\pi$. We have $x_0 > M$ and $x_0 \sin x_0 = \frac{\pi}{2} + 2P\pi$, so $|x_0 \sin x_0 - L| > 1$; hence we have a contradiction.
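(A numerical illustration of my own, not part of the quoted solution: along $x_k = \frac{\pi}{2} + 2k\pi$ the values of $x\sin x$ grow without bound, while at $x_k = k\pi$ they stay near $0$, so the values cannot all stay within $1$ of any single $L$.)

```python
import math

# values of x*sin(x) along two sequences tending to infinity
peaks = [(math.pi/2 + 2*k*math.pi) * math.sin(math.pi/2 + 2*k*math.pi)
         for k in range(1, 6)]
zeros = [k * math.pi * math.sin(k * math.pi) for k in range(1, 6)]

assert all(p > 2 for p in peaks)           # growing, unbounded values
assert all(abs(z) < 1e-8 for z in zeros)   # values near zero
```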
So, my first question: since $\epsilon$ is fixed at $1$, what is its relation with $M$ that lets us choose $x_0 = \frac{\pi}{2} + 2P\pi$ so as to satisfy $x > M$? I know that if the limit exists we can express $\epsilon$ using $\delta$, for example $\epsilon = \sqrt{\delta}$ or $\epsilon = \frac{1}{\delta}$, something like that; but how in this case?
Second, in the "set $P = \max \{L, M\}$" part, I understand choosing $P=M$, because surely $$\frac{\pi}{2} + 2M\pi > M,$$ which gives $x_0 > M$. But why is $L$ there?
I am trying to price a call option on a mutual fund.
Given the lack of market-implied data, I am going to estimate the fund's expected volatility using its historical volatility as a reference (calculated over a period comparable to the option maturity).
However, I am not sure how to estimate the fund's expected return. Some alternatives are:

1. Use an appropriate risk-free rate and deduct the annual fees of the underlying fund.
2. Use the historical return as expected return and neglect the annual fees.
(1) will be consistent with the traditional risk-neutral framework while (2) can be backed by the lack of market implied information, the potential limitations in hedging the call option, and the fact that historical returns may exhibit some statistical persistence in mutual funds.
Which alternative do you think is more appropriate? Given the fund´s historical returns and the current risk-free rates (1) and (2) produce very different option values.
EDIT: Per comments I understand that there might be no generally accepted answer for this question. Therefore I will use both risk-neutral and real-world distributions.
Regarding the real-world distribution, once I have estimated $\mu$ and $\lambda =\frac{\mu-r_f}{\sigma}$, my idea will be:
1. Estimate the real-world terminal distribution using $\mu$ instead of $r_f$, for instance using $\frac{dS}{S} = \mu\, dt + \sigma\, dW_t$.
2. Calculate the expected payoff under the real-world terminal distribution.
3. Discount this payoff using an appropriate discount rate based on $\mu$ and $\lambda$.
Do you think this approach is enough, or are there other details that I should take into account?
Finally, what would be an appropriate discount rate for the real-world payoffs? I am inclined to use a simple CAPM approach $e^{-r_d t}$ with $r_d = r_f + \beta(E(r_m) - r_f)$, but this does not make any explicit use of $\lambda$, so I am uncertain here.
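The three-step real-world approach described above can be sketched as a plain Monte Carlo (my own illustration; all parameter values are made up, and the fund is assumed to follow geometric Brownian motion under the real-world measure):

```python
import math
import random

random.seed(0)
S0, K, T = 100.0, 105.0, 1.0        # spot, strike, maturity (years)
mu, sigma = 0.06, 0.20              # estimated real-world drift and volatility
r_f, beta, E_rm = 0.02, 0.8, 0.07   # risk-free rate, fund beta, market return
r_d = r_f + beta * (E_rm - r_f)     # CAPM-style discount rate

# Step 1: simulate the real-world terminal distribution, dS/S = mu dt + sigma dW
n_paths = 100_000
payoffs = []
for _ in range(n_paths):
    z = random.gauss(0.0, 1.0)
    S_T = S0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    payoffs.append(max(S_T - K, 0.0))   # call payoff

# Step 2: expected payoff under the real-world distribution
expected_payoff = sum(payoffs) / n_paths

# Step 3: discount the expected payoff at r_d
price = math.exp(-r_d * T) * expected_payoff
```

With risk-neutral inputs ($\mu = r_f$, discount at $r_f$) the same sketch reproduces the standard Black-Scholes Monte Carlo, which makes the gap between the two alternatives easy to quantify.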
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)
@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic,
  author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło},
  title = {Topological models of arithmetic},
  journal = {ArXiv e-prints},
  year = {2018},
  note = {under review},
  keywords = {under-review},
  eprint = {1808.01270},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1LS},
}
Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.
The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.
Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?
By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic. The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the
standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models.
We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a
topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$.

Question. Which topological spaces support a topological model of arithmetic?
In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic.
Let me state the main theorem and briefly sketch the proof.
Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$.

Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output.
\begin{equation*}\small\begin{array}{rcr}
\cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*}
This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the
final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representation ends with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child's observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$.
Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired.
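The school-child's observation can be checked directly: in binary, the last $k$ digits of a sum or product depend only on the last $k$ digits of the inputs, which is exactly the continuity of arithmetic in the final-digits ($2$-adic) topology. A small check of my own:

```python
# The last k binary digits of x+y and x*y depend only on the last k
# binary digits of x and y.
def last_digits(x, k):
    return x % 2 ** k          # number formed by the final k binary digits

for x in range(40):
    for y in range(40):
        for k in range(1, 8):
            assert last_digits(x + y, k) == \
                   last_digits(last_digits(x, k) + last_digits(y, k), k)
            assert last_digits(x * y, k) == \
                   last_digits(last_digits(x, k) * last_digits(y, k), k)
```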
But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the
final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired.
Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order.
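The comparison just described can be coded directly (an illustrative sketch of my own, using the convention above that zero is represented by the empty binary sequence):

```python
def bits_final_first(n):
    # binary digits, final (lowest-value) digit first; zero is the empty string
    return bin(n)[2:][::-1] if n > 0 else ""

def fd_less(n, m):
    """The final-digits order: n comes before m."""
    a, b = bits_final_first(n), bits_final_first(m)
    for da, db in zip(a, b):
        if da != db:               # first disagreement, from the right:
            return da == '0'       # n is lower iff n has 0 there
    if len(a) == len(b):
        return False               # equal numbers
    if len(a) > len(b):            # n is the longer number:
        return a[len(b)] == '0'    # lower iff its next digit is 0
    else:                          # m is the longer number:
        return b[len(a)] == '1'    # n lower than m iff m's next digit is 1

# evens form the left half, odds the right half, with 0 directly in the middle
assert all(fd_less(e, o) for e in range(0, 20, 2) for o in range(1, 20, 2))
assert all(fd_less(e, 0) for e in range(2, 20, 2))
assert all(fd_less(0, o) for o in range(1, 20, 2))
```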
The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$.
We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$. Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$
The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$.
Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable.
The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. Even the successor function $x\mapsto x+1$ is not continuous with respect to this order.
Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order.
Go to the article to read more.
In the above diagram, $x=21$, $y=14$, $\alpha={35}^\circ$, and $\beta={75}^\circ$. Which condition would mean that the two triangles are similar?
Note: The above diagram is not drawn to scale.
...for every point $a \in N$ and for every open set $V$ of $N$ that contains $a$, $f^{-1}(V)$ is an open set of $M$.
There is no need to select a point $a \in N$; in fact, doing so is misleading. First, two definitions for clarity. A map $F: M \rightarrow N$ is
continuous if and only if for all open sets $W$ of $N$, the preimage $F^{-1}(W)$ is an open set of $M$. A neighborhood of a point $p \in M$ is an open set of $M$ which contains $p$.

Now let us see how my definition of continuity differs from yours. Let ${\cal S}$ be the collection of all neighborhoods of all points of $N$, and let ${\cal T}$ be the collection of all open sets of $N$ (the topology of $N$). Since $A \in {\cal S}$ implies $A$ is open, we have ${\cal S} \subset {\cal T}$. Now let $B \in {\cal T}$. If $B$ contains a point $b$, then $B$ is a neighborhood of some point of $N$, so $B \in {\cal S}$. However, ${\cal T}$ contains a single set, $\emptyset$, which is not a neighborhood of any point of $N$. Thus$$ {\cal S} = {\cal T} \backslash \{\emptyset\}. $$

Suppose $F$ is continuous under my definition of continuity. Then since each neighborhood in $N$ is an open set of $N$, $F$ is continuous under your definition. Now suppose $F$ is continuous under your definition. The preimages of all members of ${\cal S}$ are open, but what about the preimage of $\emptyset$? Well, the preimage of $\emptyset \subset N$ is $\emptyset \subset M$, so your definition of continuity just so happens to work. However, it works based on a kind of "side effect" of the set-theoretic properties of $\emptyset$.
On to your proof. The first problem I see is that you write $\psi \circ g \circ \phi^{-1}(V)$, though $V \subset N$, and $\phi : M \rightarrow \mathbf{R}^n$, so $\phi^{-1}(V)$ doesn't make sense. Nor would $\phi^{-1}(U)$. Thus I'm not sure how to fix this part of the proof. Next, you are right that we need to use the fact of differentiability to imply continuity. But it seems to be slicker here to stick to functions instead of dealing with open sets. My proof below will do so.
Let $F: M \rightarrow N$ be differentiable with $(U,\phi)$ a chart of $M$ and $(V,\psi)$ a chart of $N$ with $F(U) \subset V$. The map$$ \psi \circ F \circ \phi^{-1}: \phi^{-1}(U) \subset \mathbf{R}^m \rightarrow \mathbf{R}^n $$is differentiable by assumption, and hence continuous. Since $\phi$ and $\psi$ are homeomorphisms, all of $\phi$, $\phi^{-1}$, $\psi$, and $\psi^{-1}$ are continuous. Then$$ \psi^{-1} \circ (\psi \circ F \circ \phi^{-1}) \circ \phi : U \rightarrow V$$is continuous as the composition of continuous maps. Since this holds for all charts $U$, and the charts cover $M$, we know that $F$ is continuous on $M$ by the topological gluing lemma.
Overall, your proof lacks precision, and I think this is why you didn't catch your nonsensical statements. Being explicit with domains and codomains would have probably saved you some headache.
Note: as for
why a proof avoiding open sets and preimages is better, I can only speculate. The most elegant proof I have seen for differentiability implies continuity in single variable calculus uses the sequential definition of continuity, which is only equivalent to standard continuity for first countable spaces. Since almost everyone takes manifolds to be first countable, there is no problem for us. But assuming sequential continuity allows for the cleanest proof, a proof using the standard definition of continuity would need to invoke first countability at some point, defeating the purpose of proving using standard continuity.
This is an exercise from Apostol's
Calculus, Volume 1. It asks us to sketch the graph in polar coordinates and find the area of the radial set for the function:
$$f(\theta) = \theta$$
On the interval $0 \leq \theta \leq 2 \pi$. I think to find the area we should just integrate $\theta \ d\theta$ from $0$ to $2\pi$ like any other function? Is that right? Also, I'm not sure how to think about sketching a function in polar coordinates.
The problem is the book gives the answer as $4\pi^3/3$, which is not what I get if I just integrate the function.
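(A reconciling note of my own: the area of a radial set in polar coordinates uses the area element $\frac12 f(\theta)^2\,d\theta$, not $f(\theta)\,d\theta$, and with that formula one recovers the book's answer:)

$$\frac{1}{2}\int_0^{2\pi} f(\theta)^2 \, d\theta = \frac{1}{2}\int_0^{2\pi} \theta^2 \, d\theta = \frac{1}{2}\cdot\frac{(2\pi)^3}{3} = \frac{4\pi^3}{3}.$$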
In his article
De la pression ou tension dans un corps solide [On the pressure or tension in a solid body], Cauchy introduces a theory that allows one to define the Cauchy stress tensor. It looks like he makes a mistake in his formula for the "surface element" in his surface integrals, unless I am missing something. Can someone check and confirm this, please? I didn't find any errata or comments about this apparent error.
The "surface element" in his surface integrals (in Cartesian coordinates) is written as $\cos\gamma\,dy\,dx$, where $\gamma$ is the angle between the normal vector to the surface and the positive direction of the $z$-axis. On page 62, formula (4), he seems to claim that the surface area is given by the integral $$ \int\!\!\!\int\cos\gamma\,dy\,dx = s. $$ Of course this is not true: the surface area is given by the integral $$ \int\!\!\!\int\frac{dy\,dx}{\cos\gamma} = s, $$ and the "surface element" is $\frac{dy\,dx}{\cos\gamma}$.
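(For context, my own addition: for a surface given as a graph $z=g(x,y)$, the unit normal is $\frac{(-g_x,-g_y,1)}{\sqrt{1+g_x^2+g_y^2}}$, so its $z$-component is $\cos\gamma=\frac{1}{\sqrt{1+g_x^2+g_y^2}}$, and hence)

$$ dS = \sqrt{1+g_x^2+g_y^2}\,dx\,dy = \frac{dx\,dy}{\cos\gamma}. $$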
Since one should be free to use any symbol for the "surface element" as long as one does not try to interpret it, such an error would normally not lead to any other errors, so it seems plausible that it might pass unnoticed...
Notice the definition of center of mass $x_c$ for two particles rearranges to
$$x_c = \Big(\frac{m_1}{m_1+m_2}\Big)x_1+\Big(\frac{m_2}{m_1+m_2}\Big)x_2$$
Let's call the ratio $\big(\frac{m_1}{m_1+m_2}\big)$ "$W_1$". It can be interpreted as the percentage of mass that object 1 contributes to the system, generally $\big(\frac{\text{mass of object 1}}{\text{total mass}}\big)$. To convert to units of percentage, simply multiply the resulting decimal by 100%. Notice that $W_1 \leq 1$; that is, object 1 (any object) cannot contribute more than 100% of the mass of the system.
In general a weighted average (arithmetic mean) $x_{av}$ of $n$ components is
$$x_{av} = w_1x_1 + w_2x_2 + ... + w_nx_n = \sum_{i=1}^{n} w_ix_i$$
where $w_i$ is the weight (in the statistical sense) of component $x_i$. For convenience we often wish for the weight factors to be normalized, that is,
$$\sum_{i=1}^{n} W_i = 1$$
Notice here I wrote $W_i$, the normalized weight factors, instead of $w_i$, which are not necessarily normalized.
For your physics application of this math, the mass $m_i$ of object $i$ corresponds to the "crude" weight factor $w_i$. To normalize weight factors (so that none is greater than 100%), we divide the
crude factor by the total weight of all components:
$$W_i = \frac{w_i}{\sum_{i=1}^{n} w_i}$$
or, for us, the
object mass by the total mass of the system:
$$W_i = \frac{m_i}{\sum_{i=1}^{n} m_i} = \frac{m_i}{M} $$
(where "$M$" is a common abbreviation for the total system mass; that is, $M = \sum_{i=1}^{n} m_i$.) This is why our weight factors for calculating center of mass consist of the mass of the object divided by the total mass of the system. For two objects ($n=2$), the weight factor $W_i$ for object $i$ is of course
$$W_i = \frac{m_i}{\sum_{i=1}^{2} m_i} = \frac{m_i}{m_1+m_2} $$
Thus,
$$x_c = \Big(\frac{m_1}{m_1+m_2}\Big)x_1+\Big(\frac{m_2}{m_1+m_2}\Big)x_2 \\= \sum_{i=1}^{2} \Big(\frac{m_i}{m_1+m_2}\Big) x_i \\= \frac{\sum_{i=1}^{2} m_ix_i}{\sum_{i=1}^{2} m_i} \\= \frac{1}{M} \sum_{i=1}^{2}x_im_i$$
I've written a few forms of $x_c$ above so you can mull over how this expression is constructed. For the general center of mass of a system of $n$ objects, simply replace $2$ with $n$ in the final expression.
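The weighted-average computation above can be sketched numerically; the following is my own illustration (the masses and positions are made-up values):

```python
def center_of_mass(masses, positions):
    """Weighted average of position with respect to mass."""
    M = sum(masses)                       # total system mass
    weights = [m / M for m in masses]     # normalized weight factors W_i
    assert abs(sum(weights) - 1) < 1e-12  # the W_i sum to 1
    return sum(W * x for W, x in zip(weights, positions))

# two objects: the heavier one pulls the center of mass toward itself
masses = [3.0, 1.0]      # kg (made-up values)
positions = [0.0, 4.0]   # m
print(center_of_mass(masses, positions))  # → 1.0
```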
Let's explicitly juxtapose our center of mass formula with the weighted average formula to drive the point home:
$$x_{av} = \sum_{i=1}^{n} W_ix_i \\\\x_c= \sum_{i=1}^{n} \Big(\frac{m_i}{M}\Big) x_i$$
Thus,
the center of mass is the weighted average of position with respect to mass.

P.S. If the system has a continuous distribution of mass rather than $n$ discrete chunks (as for any real object that is not being approximated as a dimensionless point particle), instead of
$$x_c = \frac{1}{M} \sum_{i=1}^{n}x_im_i$$
we can use
$$x_c = \frac{1}{M} \int x\,dm$$
which arises when we take the limit as $n \to \infty$. (Note this form is not the most practical for calculation, so we usually make the substitution $dm = \rho\, dV$, where $\rho$ is the density of the object, which may vary with position.) |
We'll use this lemma which I'll leave for you to prove:
If $p=4n+1$ is prime, then $a^n\equiv 1\pmod {p}$ if and only if $a\equiv x^4\pmod p$ for some $x$ not divisible by $p.$
We'll show that $-4$ is always a fourth power, $\pmod p$.
If $n$ is
even, then $p\equiv 1\pmod 8,$ so $2$ is a square, and hence $4$ is a fourth power. Also, since $(-1)^n\equiv 1\pmod p$, $-1$ is a fourth power modulo $p$ by our lemma, and so $-4$ is a fourth power.
If $n$ is
odd, then $-1\equiv x^2$ where $x$ is not a square. This is because if $x$ were a square, then $x^4=1$ and $x^{(p-1)/2}=1$, and hence $x^2=1$ (since $\gcd\left(4,\frac{p-1}{2}\right)=2$), contradicting $x^2\equiv -1$.
Similarly, $2$ is not a square modulo $p$, so $2x$, being a product of two non-squares,
is a square $\pmod p$. So $4x^2\equiv -4\pmod p$ is a fourth power.
Knowing that $-4$ is a fourth power is all you need, since:
$$-4n\equiv 1\pmod p$$
So $n\equiv(-4)^{-1}\pmod p$ is a fourth power too (the inverse of a fourth power is a fourth power), so, again by our lemma, $n^n\equiv 1\pmod p$.
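Both claims are easy to check numerically for small primes $p=4n+1$; the sketch below is my own and not part of the proof:

```python
def is_fourth_power(a, p):
    """True if a is congruent to x^4 mod p for some x not divisible by p."""
    return any(pow(x, 4, p) == a % p for x in range(1, p))

def is_prime(p):
    return p > 1 and all(p % d for d in range(2, int(p**0.5) + 1))

for n in range(1, 100):
    p = 4 * n + 1
    if is_prime(p):
        assert is_fourth_power(-4, p)  # -4 is always a fourth power mod p
        assert pow(n, n, p) == 1       # hence n^n ≡ 1 (mod p), by the lemma
print("checked all primes p = 4n + 1 with n < 100")
```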
An alternative approach to showing that $x^4+4\equiv 0\pmod{p}$ for some $x$ is to use that:
$$x^4+4 = (x^2+2)^2-(2x)^2=(x^2-2x+2)(x^2+2x+2)=((x-1)^2+1)((x+1)^2+1)$$
Thus we have that $x^4+4\equiv 0\pmod p$ when $(x\pm 1)^2\equiv -1\pmod p.$ But since $p\equiv 1\pmod{4}$, $-1$ is a square modulo $p$, so such an $x$ exists.
Using that last approach as a jumping off point, we can start with $y$ such that $y^2\equiv -1\pmod{p}.$
Then $$\begin{align}(y+1)^4 &= y^4+4y^3+6y^2+4y+1 \\&\equiv 1+4y(-1)+6(-1)+4y+1\pmod{p}\\&=1-4y-6+4y+1\\&\equiv-4\pmod{p}\end{align}$$ |
I need some intuition for how this is possible. Imagine I have a DAG (directed acyclic graph), i.e. a graph with only directed edges and no cycles.
Now for every DAG with $d$ nodes there is (at least) one topological order $\pi:1,\dots,d$ such that for every $j$ that is an ancestor of $i$ it also holds that $j<i$, i.e. $j$ comes before $i$ in the topological order (this follows directly from the fact that the edges are directed and there are no cycles).
Now I have a model defined on the graph, namely $X_1,...,X_d$ is such that:
$$X_i=\sum\limits_{j \in \text{an}(i)}c_{ji}X_j+\varepsilon_i,$$
where $\varepsilon_i$ is some positive and continuous innovation, $c_{ji}>0$, and $\text{an}(i)$ denotes the ancestors of node $i$. Now, given a data sample $X^1,\dots,X^n \in \mathbb R^d$, I want to estimate the topological order.
Since $X_i/X_j \geq c_{ji}$ if $j$ is an ancestor of $i$, and $0 \leq X_i/X_j \leq 1/c_{ij}$ if $i$ is an ancestor of $j$, it seems reasonable to define a matrix $A$ with entries $a_{ji}=\bigwedge_{k=1}^n X_i^k/X_j^k$ (the minimum over the sample) and to look for the topological order $\pi$ attaining:
$$\max_{\pi \in \Pi}\sum\limits_{\pi(j)<\pi(i)}a_{ji}$$
Based on the same idea, we could also minimize the largest entry that is NOT consistent with our topological sort, namely:
$$\min_{\pi \in \Pi}\max_{\pi(j)>\pi(i)}a_{ji}$$
Both follow the same approach, but the sum should be more robust; however, when I test it, the sum leads to worse results, which I don't really understand.
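For concreteness, here is a brute-force version of the max-sum estimate on simulated data from a made-up chain $1\to 2\to 3$ (the coefficients, noise distribution, and sample size below are all my own choices, only feasible for small $d$):

```python
import itertools
import random

random.seed(1)
d, n = 3, 500

# simulate the chain 1 -> 2 -> 3 with positive coefficients and positive noise
data = []
for _ in range(n):
    x1 = random.uniform(0.1, 1.0)
    x2 = 0.8 * x1 + random.uniform(0.1, 1.0)
    x3 = 0.5 * x2 + random.uniform(0.1, 1.0)
    data.append((x1, x2, x3))

# a_{ji} = min over the sample of X_i / X_j
A = [[min(row[i] / row[j] for row in data) for i in range(d)] for j in range(d)]

def score(pi):
    """Sum of a_{ji} over the pairs put in order j-before-i by pi."""
    return sum(A[pi[r]][pi[s]] for r in range(d) for s in range(r + 1, d))

best = max(itertools.permutations(range(d)), key=score)
print(best)
```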
Anyone an idea? |
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T}\rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ... |
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)
@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic, author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło}, title = {Topological models of arithmetic}, journal = {ArXiv e-prints}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1808.01270}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1LS}, }
Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.
The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.
Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?
By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic. The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the
standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models.
We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a
topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$.
Question. Which topological spaces support a topological model of arithmetic?
In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic.
Let me state the main theorem and briefly sketch the proof.
Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$.
Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output.
\begin{equation*}\small\begin{array}{rcr}
\cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*}
This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the
final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representation ends with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child's observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$.
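The school-child's observation can be checked mechanically: adding multiples of $2^k$ to an input (changing only its higher digits) never changes the last $k$ digits of the sum or product. A small sketch of my own, not part of the paper:

```python
def final_digits(x, k):
    """The last k binary digits of x, i.e. x mod 2^k."""
    return x % (1 << k)

# perturbing x by a multiple of 2^k leaves the last k digits
# of both x + y and x * y unchanged
for k in range(1, 8):
    for x in range(40):
        for y in range(40):
            for dx in (1 << k, 3 << k):
                assert final_digits(x + y, k) == final_digits((x + dx) + y, k)
                assert final_digits(x * y, k) == final_digits((x + dx) * y, k)
print("sums and products depend only on final digits")
```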
Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired.
But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the
final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired.
Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order.
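The final-digits order is straightforward to implement; the following sketch (my own code, not from the paper) sorts $0,\dots,7$ and reproduces the description above: evens on the left, $0$ directly in the middle, odds on the right.

```python
from functools import cmp_to_key

def fd_less(n, m):
    """True if n precedes m in the final-digits order
    (binary digits compared right-to-left, 0 before 1)."""
    while n and m:
        dn, dm = n & 1, m & 1
        if dn != dm:
            return dn == 0           # at the first disagreement, the 0-digit is lower
        n >>= 1
        m >>= 1
    if n == m:                        # identical representations: not strictly less
        return False
    longer = n or m                   # one representation ran out of digits
    goes_left = (longer & 1) == 0     # the longer number is lower iff its next digit is 0
    return goes_left if n else not goes_left

cmp = lambda a, b: -1 if fd_less(a, b) else (1 if fd_less(b, a) else 0)
order = sorted(range(8), key=cmp_to_key(cmp))
print(order)  # → [4, 2, 6, 0, 5, 1, 3, 7]
```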
The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$.
We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$. Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$
The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$.
Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable.
The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. Even the successor function $x\mapsto x+1$ is not continuous with respect to this order.
Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order.
Go to the article to read more.
|
How would I integrate the following?
$\int \cos^3(x)\cos(2x)\,dx$
I did
$\int \cos^3(x)(1-2\sin^2(x))\,dx$
$=\int \cos^3(x)\,dx-2\int \cos^3x\sin^2x\,dx$
But I find myself stuck....
That's a good way to proceed. So our integral is $$\int \cos^3 x(1-2\sin^2 x)\,dx.$$
Rewrite as $$\int \cos x(1-\sin^2 x)(1-2\sin^2 x)\,dx$$ and let $u=\sin x$. We end up with $$\int (1-u^2)(1-2u^2)\,du.$$ Expand and integrate.
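Carrying out the expansion gives $\int(1-3u^2+2u^4)\,du = u-u^3+\tfrac{2}{5}u^5$ with $u=\sin x$; here is a small numerical sanity check of that antiderivative (my own sketch, not part of the answer):

```python
import math

def F(x):
    """Antiderivative from expanding (1 - u^2)(1 - 2u^2) = 1 - 3u^2 + 2u^4
    and integrating term by term, with u = sin x."""
    u = math.sin(x)
    return u - u**3 + 0.4 * u**5

def integrand(x):
    return math.cos(x)**3 * math.cos(2 * x)

# F'(x) should reproduce the integrand; compare with a central difference
h = 1e-6
for x in (0.1, 0.7, 1.3, 2.5):
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-8
print("antiderivative verified at sample points")
```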
HINT:
As $\cos3x=4\cos^3x-3\cos x$
$\displaystyle\cos^3x\cos2x=\frac{(\cos3x+3\cos x)\cos2x}4=\frac{2\cos3x\cos2x+3\cdot2\cos2x\cos x}8$
Use $2\cos A\cos B=\cos(A-B)+\cos(A+B)$ and $\displaystyle\int\cos mx\,dx=\frac{\sin mx}m+K$, where $K$ is an arbitrary constant
$\int \cos^3(x)\cos(2x)\, dx$
$=\int \cos^3(x)(1-2\sin^2(x))\, dx$
$=\int \cos^2(x) \cdot \cos(x) (1-2\sin^2(x))\, dx$
$=\int (1-\sin^2x)\cdot \cos(x) \cdot(1-2\sin^2(x))\, dx$
$=\int ( \cos(x)- 3 \sin^2x \cdot \cos(x) + 2 \sin^4x \cdot \cos(x) )\, dx$
now using the inverse chain rule, which is $$\int f^n(x) \cdot f'(x)\, dx= \frac{f^{n+1}(x)}{n+1} +C$$
$\sin(x)- \sin^3x +\frac{2}{5} \sin^5x + C$ |
I wanted to better understand DFAs, building on a previous question: Creating a DFA that only accepts number of a's that are multiples of 3. But I wanted to go a bit further. Is there any way we can have a DFA that accepts numbers of a's that are multiples of 3 but does NOT have the sub...
Let $X$ be a measurable space and $Y$ a topological space. I am trying to show that if $f_n : X \to Y$ is measurable for each $n$, and the pointwise limit of $\{f_n\}$ exists, then $f(x) = \lim_{n \to \infty} f_n(x)$ is a measurable function. Let $V$ be some open set in $Y$. I was able to show th...
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Consider a non-UFD that has only 2 units ($-1,1$) and in which the minimum difference between 2 elements is $1$. Also, there are only a finite number of elements of any given fixed norm. (Maybe that follows from the other 2 conditions?) I wonder about counting the irreducible elements bounded by a lower...
How would you make a regex for this? L = {w $\in$ {0, 1}* : w is 0-alternating}, where 0-alternating is either all the symbols in odd positions within w are 0's, or all the symbols in even positions within w are 0's, or both.
I want to construct an NFA from this, but I'm struggling with the regex part
this is a mystery to me: despite having changed computers several times, and despite the website rejecting the application, the very first sequence of numbers I entered into its search window which returned the same prompt to submit them for publication appears every time. I mean, I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there
my maturity levels are extremely variant in time, that's just way too much rope to give me considering its only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them
but still, the first one from well, almost a decade ago shows up as the default content in the search window
1,2,3,6,11,23,47,106,235
well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope you cant just let a lunatic like me start inventing terminology as I go
oh well "what would cotton mathers do?" the chat room unanimously ponders lol
i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway?
or at least inform the room as to whom is the big brother doing the censoring? No? just suggestions trying to improve site functionality good sir relax im calm we are all calm
A104101 is a hilarious entry as a side note, I love that Neil had to chime in for the comment section after the big promotional message in the first part to point out the sequence is totally meaningless as far as mathematics is concerned just to save face for the websites integrity after plugging a tv series with a reference
But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please
very general advice for any number of topics for someone like yourself sir
assuming gender because you should hate text based adam long ago if you were female or etc
if its false then I apologise for the statistical approach to human interaction
So after having found the polynomial $x^6-3x^4+3x^2-3$, we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos
So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used the generate the field?
(I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
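A minimal sketch showing the product does depend on the modulus polynomial $M(x)$ (my own illustration, with $\mathbb{F}_2$ polynomials encoded as bitmasks): the cube of $x$ comes out differently under the two irreducible cubics $x^3+x+1$ and $x^3+x^2+1$.

```python
def clmul(a, b):
    """Carry-less (polynomial) multiplication over F_2, bitmask encoding."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf_mul(a, b, m):
    """Multiply a and b in F_2[x]/(m(x))."""
    r = clmul(a, b)
    while r.bit_length() >= m.bit_length():
        r ^= m << (r.bit_length() - m.bit_length())
    return r

m1, m2 = 0b1011, 0b1101   # x^3 + x + 1 and x^3 + x^2 + 1, both irreducible
x = 0b10                  # the element x
cube1 = gf_mul(gf_mul(x, x, m1), x, m1)
cube2 = gf_mul(gf_mul(x, x, m2), x, m2)
print(bin(cube1), bin(cube2))  # → 0b11 0b101, i.e. x + 1 versus x^2 + 1
```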
(which is just the product of the integer and its conjugate)
Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$
You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings
(Plus I'm at work and am pretending I'm doing my job)
Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit.
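The multiplicativity of $N$ and the factorization $5=(1+2i)(1-2i)$ are easy to check in a few lines; a sketch of my own, with Gaussian integers represented as pairs $(a,b)=a+bi$:

```python
def gmul(z, w):
    """Multiply Gaussian integers represented as (a, b) = a + bi."""
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def norm(z):
    """N(a + bi) = a^2 + b^2, the product of a + bi with its conjugate."""
    a, b = z
    return a * a + b * b

# N is totally multiplicative: N(zw) = N(z)N(w)
z, w = (1, 2), (3, 1)
assert norm(gmul(z, w)) == norm(z) * norm(w)

# 5 is prime in Z but not in Z[i]: it is the norm of the non-unit 1 + 2i
assert norm((1, 2)) == 5
print(gmul((1, 2), (1, -2)))  # → (5, 0)
```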
@Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha])=[\mathcal O_K:\Bbb Z[\alpha]]^2\Delta(\mathcal O_K)$; I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go down that rabbit hole
also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$
this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$
the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$
(just as a quotient of additive groups, that quotient group is finite)
in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein wrt every prime $p$ that divides the discriminant of $\Bbb Z[\alpha]$ at least twice, then $\Bbb Z[\alpha]$ is a ring of integers
that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$
there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If the minimal polynomial of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus)
@MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$, then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively.
$\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first:
By Schur-Zassenhaus theorem, $G = P \rtimes G/P$ and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$.
The $E^2$ page is essentially zero except the bottom row since $H^*(G/P; M) = 0$ if $M$ is an $\Bbb F_p$-module by order reasons and the whole bottom row is $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$.
@Secret that's a very lazy habit you should create a chat room for every purpose you can imagine take full advantage of the websites functionality as I do and leave the general purpose room for recommending art related to mathematics
@MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists...
As a result, there does not exist a single group which lived long enough to belong to, and hence one continues to search for new groups and activities
eventually, a social heat death occurred, where no groups will generate creativity and other activity anymore
Had this kind of thought when I noticed how many forums etc. have a golden age, and then died away, and at the more personal level, all people who first knew me generate a lot of activity, and then destined to die away and distant roughly every 3 years
Well i guess the lesson you need to learn here champ is online interaction isn't something that was inbuilt into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next door neighbour
Or more likely, we will need to start recognising machines as a new species and interact with them accordingly
so covert operations AI may still exists, even as domestic AIs continue to become widespread
It seems more likely sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces
But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other
that is, until their processing power become so strong that they can outdo human thinking
But, I am not worried of that scenario, because if the next step is a sentient AI evolution, then humans would know they will have to give way
However, the major issue right now in the AI industry is not we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough given the mistakes they still make by them and their human owners
That is, we have become over-reliant on AI, and are not putting enough attention on whether they have interpreted the instructions correctly
That's an extraordinary amount of unreferenced rhetoric statements i could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction
for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise
i feel as if its an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed
Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy.
I was just genuinely curious
How does a message like this come from someone who isn't trolling:
"for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"
Anyway feel free to continue, it just seems strange @Adam
I'm genuinely curious what makes you annoyed or confused. Yes, I was joking in the line that you referenced, but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experiences vary from day to day? Mine too! So there may be particular moments that I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree?
So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or alternativity ($(xx)y=x(xy)$ and $(yx)x=y(xx)$). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave, i.e. $a(bc)=(ab)c\iff a=c$)
@RyanUnger You're the guy to ask for this sort of thing I think:
If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or is there a smarter way?
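In case it helps, here is the expansion in coordinates (my own sketch, using the convention $R(X,Y)=\nabla_X\nabla_Y-\nabla_Y\nabla_X-\nabla_{[X,Y]}$). For coordinate fields $[\partial_1,\partial_2]=0$, so

$$\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle = \langle \nabla_{\partial_1}\nabla_{\partial_2}\partial_2,\partial_1\rangle - \langle \nabla_{\partial_2}\nabla_{\partial_1}\partial_2,\partial_1\rangle,$$

and with $\nabla_{\partial_i}\partial_j = \Gamma^k_{ij}\partial_k$ each term reduces to Christoffel symbols and their derivatives:

$$\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle = \left(\partial_1\Gamma^k_{22} - \partial_2\Gamma^k_{12} + \Gamma^m_{22}\Gamma^k_{1m} - \Gamma^m_{12}\Gamma^k_{2m}\right) g_{k1}.$$

So one can skip Koszul's formula entirely once the Christoffel symbols are known.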
I realized today that for any input x to Round(x^(1/2)), we can always find an epsilon (small enough) such that x^(1/2) <> x^(1/2+epsilon) but at the same time Round(x^(1/2)) = Round(x^(1/2+epsilon)). Am I right?
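A quick numerical illustration of the claim (my own sketch; the function name is mine):

```python
def same_round_different_value(x, eps):
    # the two powers differ as real numbers, but round to the same integer
    a, b = x ** 0.5, x ** (0.5 + eps)
    return a != b and round(a) == round(b)

# for x = 10 and a small enough epsilon, the powers differ but round identically
assert same_round_different_value(10.0, 1e-9)
```

This is just continuity of $x^{1/2+\epsilon}$ in $\epsilon$: as long as $x^{1/2}$ is not exactly on a rounding boundary, a small enough $\epsilon$ cannot move it across one.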
We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method.
How can we show that the method is implicit? Do we have to try to solve $y^{n+2}$ as a function of $y^{n+1}$ ?
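To see the implicitness concretely: $y^{n+2}$ appears inside $f^{n+2}=f(t_{n+2},y^{n+2})$, so each step requires solving an equation for $y^{n+2}$, e.g. by fixed-point iteration or Newton's method. A minimal Python sketch on the model problem $y'=-y$ (my own illustration, not from the exercise):

```python
import math

def f(t, y):
    # model problem y' = -y, exact solution e^{-t}
    return -y

def simpson_step(y0, y1, t, h, tol=1e-14):
    # Solve y2 = y0 + (h/3) * (f(t+2h, y2) + 4 f(t+h, y1) + f(t, y0)) for y2
    # by fixed-point iteration; converges when (h/3) * Lip(f) < 1.
    known = y0 + h / 3 * (4 * f(t + h, y1) + f(t, y0))
    y2 = y1  # initial guess
    for _ in range(100):
        y2_new = known + h / 3 * f(t + 2 * h, y2)
        if abs(y2_new - y2) < tol:
            break
        y2 = y2_new
    return y2_new

h, N = 0.01, 100
ys = [1.0, math.exp(-h)]             # y^0, y^1 given
for n in range(N - 1):               # n = 0, ..., N-2
    ys.append(simpson_step(ys[n], ys[n + 1], n * h, h))
assert abs(ys[N] - math.exp(-1)) < 1e-6
```

For a linear $f$ the inner equation can of course be solved in closed form; the point is that for general $f$ the new value is only defined implicitly.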
@anakhro an energy function of a graph is something studied in spectral graph theory. You set up an adjacency matrix for the graph, find the corresponding eigenvalues of the matrix and then sum the absolute values of the eigenvalues. The energy function of the graph is defined for simple graphs by this summation of the absolute values of the eigenvalues |
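The definition above translates directly into a few lines (a sketch; the function name is mine):

```python
import numpy as np

def graph_energy(adj):
    """Graph energy: sum of absolute values of adjacency-matrix eigenvalues."""
    eigvals = np.linalg.eigvalsh(np.asarray(adj, dtype=float))  # adjacency matrix is symmetric
    return float(np.abs(eigvals).sum())

# single edge (K2): spectrum {+1, -1}, so energy 2
assert abs(graph_energy([[0, 1], [1, 0]]) - 2.0) < 1e-9
# triangle (K3): spectrum {2, -1, -1}, so energy 4
assert abs(graph_energy([[0, 1, 1], [1, 0, 1], [1, 1, 0]]) - 4.0) < 1e-9
```

`eigvalsh` is used rather than `eig` because the adjacency matrix of a simple graph is real symmetric, so its spectrum is real.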
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation; it calls for someone who knows how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a given 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type roughly $26^7 \approx 8\times 10^9$ characters before COVFEFE has a good chance of appearing.
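A quick check of the numbers (my own sketch). Since COVFEFE has no proper prefix that is also a suffix, the expected number of keystrokes until it first appears in a uniform i.i.d. letter stream is exactly $26^7$:

```python
p = (1 / 26) ** 7        # probability that a given 7-letter window spells COVFEFE
n_expected = 26 ** 7     # expected waiting time for a pattern with no self-overlap

assert 1.2e-10 < p < 1.3e-10
assert n_expected == 8_031_810_176   # about eight billion keystrokes
```

For patterns that do overlap themselves (e.g. ABABABA) the waiting time would be strictly larger than $26^7$, by Conway's leading-number/correlation argument.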
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and Red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is: you do the gravitational-waves thing of reducing $R_{\rho\sigma} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic? |
I read in a book that the protons in formaldehyde have the largest known positive geminal coupling constant (41 Hz). I thought only couplings between non-equivalent protons can be seen in the spectrum. Since formaldehyde has C2v symmetry and (if I remember correctly) coupling constants cannot be calculated from theory accurately, how come this coupling is apparent in the spectrum?
This is an extrapolation of an experimentally measured coupling constant. It is possible to determine the coupling constant, $J_{\ce{H-H}}$ based on the $J_{\ce{H-D}}$, where the following relationship is valid:
$$ \left |J_{\ce{H-H}} \right | = (\gamma_\ce{H}/\gamma_\ce{D}) \cdot \left |J_{\ce{H-D}}\right |$$
Shapiro, Kopchick and Ebersole reported the coupling constant for $J_{\ce{H-D}}$ in formaldehyde-d1 in 1963 (The Journal of Chemical Physics 39, 3154 (1963); doi: 10.1063/1.1734163), and calculated the $J_{\ce{H-H}}$ from this. Changes in coupling constants due to the influence of isotopically-modified electron density are apparently very modest for H/D, and so this relationship appears to hold well.
There is no experimental method to measure coupling between magnetically equivalent spins. In order to break the redundancy of the energy transitions, one must remove the equivalence.
You can actually check this isotope relationship very easily with common NMR solvents and the $\ce{C-H}$ and $\ce{C-D}$ coupling if you run carbon experiments ($\gamma_\ce{H}/\gamma_\ce{D}=6.51439$).
$$\begin{array}{ccc} \hline \text{Solvent} & J_{\ce{C-D}} & J_{\ce{C-H}}\\[1.1em] \hline \ce{CHCl3} & 32 & 208\\ \ce{DMSO} & 21 & 137 \\ \hline \end{array}$$ |
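The consistency of the table is easy to verify numerically (a sketch; the solvent values are the ones quoted above):

```python
RATIO = 6.51439  # gamma_H / gamma_D, the H/D coupling conversion factor

# (J_CD, J_CH) in Hz for the two solvents in the table
solvents = {"CHCl3": (32, 208), "DMSO": (21, 137)}

for name, (j_cd, j_ch) in solvents.items():
    predicted = RATIO * j_cd
    # the prediction lands within a couple of Hz of the measured J_CH
    assert abs(predicted - j_ch) < 3, (name, predicted)

# the formaldehyde case run backwards: J_HH = 41 Hz implies J_HD of roughly 6.3 Hz
j_hd_formaldehyde = 41 / RATIO
assert 6.0 < j_hd_formaldehyde < 6.6
```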
Consider again the coupon collector's problem:
There are $c$ different types of coupon, and each coupon obtained is equally likely to be any one of the $c$ types. Find the probability that the first $n$ coupons which you collect do not form a complete set, and deduce an expression for the mean number of coupons you will need to collect before you have a complete set.
For the probability that the first $n$ coupons do not form a complete set, it will be the complementary probability that the first $n$ coupons do form a complete set, thus: $$ \mathbb{P}(n\text{ coupons, not complete set}) = 1 - {n \choose c} \left(\frac{1}{c}\right)^c. $$ The idea for this expression is that the sequences of length $n$ that have at least the $c$ coupons are equal to the number of ways that $c$ elements can be disposed on a sequence of length $n$, thus ${n \choose c}$. The sequence of $c$ distinct coupons has probability $c^{-c}$, and as I do not care about the remaining $n-c$ spots in the sequence, these can be anything.
For the mean number of coupons needed to collect before having a complete set, I would say: $$ \sum_{m=c}^\infty m {m-1 \choose c-1} \left(\frac{1}{c}\right)^{c-1}\left(\frac{c-1}{c}\right)^{m-c-1}\frac{1}{c}, $$ as in this case for every length $m$ we need to count the number of sequences which contain all the $c$ distinct coupons, but the last coupon of the collection must be in the $m$-th place, so we need to dispose $c-1$ coupons in $m-1$ places. In the remaining $m-c$ places we can allow any coupon except for the last one we need, so the probability is $\frac{c-1}{c}$.
What do you think? If I am correct, is there a way to compute the summation? |
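One way to sanity-check any proposed closed form (my own sketch, not part of the question) is the standard inclusion-exclusion computation, whose mean must agree with the classical answer $c\,H_c = c\sum_{i=1}^c 1/i$:

```python
from math import comb

def p_complete(n, c):
    # inclusion-exclusion: probability that n draws contain all c coupon types
    return sum((-1) ** k * comb(c, k) * ((c - k) / c) ** n for k in range(c + 1))

def mean_coupons(c, n_max=5000):
    # E[T] = sum_{n >= 0} P(T > n); the tail decays geometrically, so truncation is safe
    return sum(1 - p_complete(n, c) for n in range(n_max))

c = 5
harmonic = sum(1 / i for i in range(1, c + 1))
assert abs(mean_coupons(c) - c * harmonic) < 1e-9   # c * H_c = 11.41666...
```

Any candidate formula for the completion probability or the mean can be checked term-by-term against `p_complete` and `mean_coupons`.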
Given $f\colon\mathbb{R} \to \mathbb{R}$, with $f$ differentiable on $\mathbb{R}$ and $\lim_{x \to \infty}f(x)$ not existing, show/prove formally that there exists $x_0 \in \mathbb{R}$ such that $f'(x_0)=0$.
My strategy is showing that $f$ is not a monotonic function, because all monotonic functions have limits as $x \to \infty$ ($\lim_{x \to \infty}f(x)=l$ with $l \in \mathbb{R}$ or $l=\pm\infty$). Then I can say that there exist $x_1, x_2 \in \mathbb{R}$ such that $f'(x_1)<0$ and $f'(x_2)>0$, and use the intermediate value property of derivatives (Darboux's theorem).
Can I really say that a continuous function is not monotonic just because $\lim_{x \to \infty}f(x)$ does not exist? Or is it a "one-way" statement?
Do all non-monotonic continuous functions have an $x_0 \in \mathbb{R}$ such that $f'(x_0)=0$? |
Yes, there are ways one could claim to calculate an angle between two non-bonding electron pairs.
As Mithoron points out, this Chem.SE question illustrates how quantum chemical calculations and photoelectron spectroscopy both demonstrate the non-equivalence of the lone pairs of $\ce{H2O}$, an analysis which presumably applies equally well to the analogous $\ce{H2S}$. Thus, the methods used for calculating such an angle will be controversial, and the results may or may not be of any particular practical value given that they're at odds with PES data. That being said, I'll show here one way that a putative 'non-bonding electron pair angle' can be calculated. BUT:
The calculation below is based on application of the quantum theory of atoms in molecules (QTAIM) to the quantity called the electron localization function (ELF), which is a scalar function in $\mathbb R^3$. QTAIM is useful for identifying intrinsic features of various three-dimensional fields that arise in quantum chemical calculations, with a significant focus placed on the 'critical points,' where the field gradient is zero. It was originally developed for analysis of the electron density distribution, but has been extended to the ELF and other quantities. One of my favorite papers illustrating coupled QTAIM/ELF analysis is a 2009 review by Matito and Solà (Coord Chem Rev 253: 647); it also discusses the localization and delocalization indices (see here and here), which I won't go into in any detail here.
I'm going to start out by emphasizing the distinction between electron density and electron localization. The electron density at a point is fairly straightforward: it's a measure of, on average, how many electrons (or fraction thereof) are located at that point, per unit volume. Due to its waveform nature, each electron's position is a continuous distribution function, and the electron density of the system represents the collective distribution function of all the electrons present.
The electron localization at a point can be harder to get a grasp of: it's a measure of how "spread out" the "location distribution" is, of the electrons that contribute to the density at that point. Regardless of the magnitude of the electron density, the electron localization at a point is high when the electrons contributing to the density at that point overall contribute relatively little to the electron density in other parts of the system. That is to say, these electrons associated with the point of interest don't "travel around very far" from that point, in their quantum-mechanical meanderings about the system. Conversely, the electron localization is low when the electrons associated with the point do contribute significantly ("wander") to "distant" regions of the system.
My answer here provides a more concrete example of how the electron localization can differ significantly among systems whose electron density distributions are otherwise relatively similar. It also anticipates the discussion below, noting features of the three-dimensional ELF field that suggest the locations of electron lone pairs on N, O and F atoms. It is this ELF field that I will be focusing on in the following analysis.
I performed DFT calculations of $\ce{H2O}$, $\ce{H2S}$, $\ce{H2Se}$, $\ce{H2Te}$ and $\ce{H2Po}$ for this analysis using ORCA v3.0.3, and I carried out the subsequent QTAIM/ELF analysis in Multiwfn v3.3.7. The ORCA input for $\ce{H2S}$ was
! RKS PBE0 def2-TZVP def2-TZVP/J RIJCOSX
! OPT GRID4 GRIDX5 PRINTBASIS
* xyzfile 0 1 H2S.xyz
H2S.xyz was an initial geometry file generated by Avogadro, overwritten with the optimized geometry at the end of the ORCA run. The inputs for $\ce{H2O}$ and $\ce{H2Se}$ were identical except for the identity of the central atom; I added relativistic effects for $\ce{H2Te}$ via the ! ZORA simple keyword, and for $\ce{H2Po}$ via the use of a 60-electron effective core potential by adding ! ECP{def2-TZVP,def2-TZVP/J}. (While there is an argument to be made that I perhaps should've included relativistic effects for $\ce{H2Se}$ also, I expect them to minimally affect the geometry and overall valence-electronic structure of that system, which is what is of interest here.)
I generated a MOLDEN wavefunction file for each computation, to serve as inputs to Multiwfn. Using $\ce{H2S}$ as an example, the shell command was:
$ orca_2mkl H2S -molden
I then renamed the resulting H2S.molden.input to H2S.molden, which is necessary for Multiwfn to correctly identify it as an ORCA-generated MOLDEN file.
After loading H2S.molden into Multiwfn, I used the following series of commands to locate the 3-D maxima ("attractors") of the ELF distribution:
Main menu:
[17] Basin analysis
[1] Generate basins
[9] ELF
[3] High quality grid
Multiwfn's search routine found six ELF attractors, one of which was identified as a cluster of 'degenerate attractors'. The figure below plots these attractors (via Multiwfn sub-command [0] and some MS Paint manipulation) atop two different views of the $\ce{H2S}$ molecule (click to enlarge):
As can be seen, there are two attractors located right where chemical intuition would expect lone pairs to sit, along with two more attractors located at the hydrogen atoms. The final non-degenerate attractor is right in the center of the sulfur atom, and could be interpreted as representing the $n=1$ core electron shell. The degenerate attractor is distributed around the $n=1$ attractor, and could similarly be interpreted as the $n=2$ core electron shell. Interestingly, the two $n=3$ valence electrons of the sulfur that are involved in bonding to the $\ce{H}$ atoms do not exhibit independent attractors---each shares an attractor with the electron originating from its respective hydrogen atom.
The QTAIM method provides a way to subdivide the space occupied by a molecule based solely on the properties of the ELF distribution, and associate portions of the electron density to each of these attractors. Integrating these subdivisions of the electron density then provides the number of electrons associated with each attractor:
$$\begin{array}{cccc}\hline\text{lp} & \text{H} & \text{S 1s} & \text{S 2sp} \\2.114 & 1.855 & 2.143 & 7.865 \\\hline\end{array}$$
All of these values seem pretty reasonable---the attractors corresponding to single orbitals $(\ce{lp}$, $\ce{H}$, $\ce{S 1s})$ have about $\ce{2 e-}$ associated with them, and the $\ce{S 2sp}$ attractor has about $\ce{8 e-}$. In my experience, the signs and magnitudes of these deviations from $2.0$ and $8.0$ are typical for ELF basins.
So, leaving the question of actual physical significance aside, I would argue that these ELF attractors provide a reasonable representation of the (non-)bonding structure of the molecule. Thus, to the main question: what is the structure? Conveniently, angles and distances among the attractors and atoms can be calculated via Multiwfn sub-command [-2]:
$$\begin{array}{cccc}\hline\angle\,\ce{H-S-H} & 92.3^\circ & r_\ce{S-H} & 2.535\,\mathrm{Bohr} \\\angle\,\ce{lp-S-lp} & 127.8^\circ & r_\ce{S-lp} & 1.838\,\mathrm{Bohr} \\\angle\,\ce{H-S-lp} & 108.5^\circ \\\hline\end{array}$$
(In the above, the actual nuclear positions of the atoms were used where relevant, not the positions of the associated attractors. While each such attractor generally falls close to its associated nucleus, it's usually displaced by a tiny amount, $<0.1\,\mathrm{Bohr}$. I've always attributed this to numerical precision limits of the calculation methods, but I don't know for sure if that's what causes it.)
Therefore: By this QTAIM/ELF method, the angle between the two lone pairs of $\ce{H2S}$ is calculated to be $127.8^\circ$. This is considerably greater than the $\ce{H-S-H}$ angle, consistent with the intro-chem conception of the increased 'steric bulk' of a lone pair.
For comparison, I used the same procedure to generate results for $\ce{H2O}$, $\ce{H2Se}$, $\ce{H2Te}$ and $\ce{H2Po}$:
$$\begin{array}{c|c|ccccc}\hline \text{Quantity} & \text{Units} & \ce{H2O} & \ce{H2S} & \ce{H2Se} & \ce{H2Te} & \ce{H2Po} \\ \hline \int_\text{lp}{\rho} & \ce{e-} & 2.265 & 2.114 & 2.206 & 2.265 & 2.390 \\ \int_{\ce{x}\ce{-H}}{\rho} & \ce{e-} & 1.667 & 1.855 & 1.861 & 1.807 & 1.806 \\ \int_\text{core}{\rho} & \ce{e-} & 2.129 & 10.008 & 27.273 & 44.567 & 77.609 \\ \hline r_{\ce{x}\ce{-H}} & \mathrm{Bohr} & 1.813 & 2.535 & 2.772 & 3.129 & 3.289 \\ r_{\ce{x}\ce{-lp}} & \mathrm{Bohr} & 1.103 & 1.838 & 2.526 & 3.216 & 3.399 \\ \hline \angle \ce{H-x}\ce{-H} & ^\circ & 105.1 & 92.3 & 91.2 & 90.9 & 89.6 \\ \angle \ce{lp-x}\ce{-lp} & ^\circ & 114.9 & 127.8 & 139.5 & 156.8 & 156.8 \\ \angle \ce{H-x}\ce{-lp} & ^\circ & 109.1 & 107.8 & 104.0 & 98.1 & 98.2 \\ \hline \end{array}$$
Due (presumably) to numerical precision limitations, the calculated values of the four different $\ce{H-x}\ce{-lp}$ angles for each system varied by $1^\circ$ or so; I've reported the means of the obtained values above.
Those at home who are doing their math carefully will note that there is some electron density missing in the systems with the heavier central atoms. This is because at the 'high' grid quality used, the deep core electrons are poorly captured by the numerical integration involved. (Indeed, the deep core electrons may not be accurately captured at just about any computationally reasonable grid size!) I think the $\int_\text{lp}{\rho}$ and $\int_{\ce{x}\ce{-H}}{\rho}$ values should be pretty reliable, though.
Naturally, $\ce{H2O}$ does not exhibit a degenerate attractor for the $n=2$ valence electron shell, though it does have a non-degenerate attractor for the $n=1$ core shell. The two heaviest chalcogenides examined possess degenerate attractors representing their $n\geq 2$ core electrons, as with sulfur.
In general, the $\ce{x}\ce{-H}$ and "$\ce{x}\ce{-lp}$" bond lengths increase as the central atom is varied down the group, which is unsurprising ("bigger atoms are bigger"). Interestingly, while the $\ce{x}\ce{-lp}$ distance starts out considerably shorter than the $\ce{x}\ce{-H}$ bond length for $\ce{x}=\ce{O}$, by the time one reaches $\ce{x}=\ce{Te}$ the relative magnitudes are reversed. I don't have a particularly good explanation for this trend; presumably it's due to some sort of counterbalance between the nucleus-nucleus repulsion and nucleus-electron attraction forces.
Where these results are particularly dramatic is in the clear trends in the bond angles: as the central atom is made heavier, the $\ce{lp-x}\ce{-lp}$ bond angle increases markedly while the $\ce{H-x}\ce{-H}$ and $\ce{lp-x}\ce{-H}$ angles steadily decrease. The lone pair 'steric bulk' apparently increases uniformly down the group for this series of analogous systems, at least as determined by this QTAIM/ELF approach. The answers to this question provide more detail into the physical reasons for this decreasing trend in $\ce{H-x}\ce{-H}$ bond angles. |
Can you tell me if my answer is correct:
Show that the set $P^{\omega_\omega}(\omega)$ exists.
My answer:
Let $P^0 (\omega) = \omega$, $P^{\alpha + 1}(\omega) = P(P^\alpha (\omega))$ and for a limit ordinal $\lambda$ let $P^\lambda (\omega) =\bigcup_{\beta < \lambda} P^\beta (\omega)$.
The transfinite recursion theorem tells us that if $G: V \to V$ is a class function from the class of all sets to the class of all sets then there exists a unique function $F: ON \to V$ with $F(\alpha) = G(F\mid_\alpha)$.
Hence to show the existence of $P^{\omega_\omega}(\omega)$ we need to define a $G: V \to V$ with $G(F\mid_{\omega_\omega}) = P^{\omega_\omega}(\omega) = F(\omega_\omega)$.
Also we know that for successor ordinals $\alpha$ we want $F\mid_{\beta + 1} = F(\beta) = P^{\beta}(\omega)$. So for successor ordinals $\alpha = \beta + 1$ we define $G(F\mid_{\alpha})= G(F\mid_{\beta + 1}) = G( P^{\beta}(\omega)) := P( P^{\beta}(\omega))$.
We also know that $F\mid_\varnothing = \varnothing$ so $G(F\mid_\varnothing) = G(\varnothing) := P^0 (\omega)$.
Finally, if $\lambda$ is a limit ordinal we want $G(F\mid_\lambda) := \bigcup_{\alpha < \lambda} P^\alpha (\omega)$.
For all other sets we define $G$ to map to the empty set.
Thanks for your help. |
Advances in Differential Equations, Volume 14, Number 7/8 (2009), 643-662.
Hot spots for the heat equation with a rapidly decaying negative potential
Abstract
We consider the Cauchy problem of the heat equation with a radially symmetric, negative potential $-V$ which behaves like $V(r)=O(r^{-\kappa})$ as $r\to\infty$, for some $\kappa>2$, and study the relation between the large-time behavior of hot spots of the solutions and the behavior of the potential at the space infinity. In particular, we prove that the hot spots tend to the space infinity as $t\to\infty$ and how their rates depend on whether $V(|\cdot|)\in L^1({\bf R}^N)$ or not.
Article information. Source: Adv. Differential Equations, Volume 14, Number 7/8 (2009), 643-662. First available in Project Euclid: 18 December 2012. Permanent link: https://projecteuclid.org/euclid.ade/1355867229. Mathematical Reviews (MathSciNet): MR2527688. Zentralblatt MATH: 1182.35135. Citation:
Ishige, K.; Kabeya, Y. Hot spots for the heat equation with a rapidly decaying negative potential. Adv. Differential Equations 14 (2009), no. 7/8, 643--662. https://projecteuclid.org/euclid.ade/1355867229 |
Q. An unknown metal of mass 192 g heated to a temperature of $100^{\circ}C$ was immersed into a brass calorimeter of mass 128 g containing 240 g of water at a temperature of $8.4^{\circ}C$. Calculate the specific heat of the unknown metal if the water temperature stabilizes at $21.5^{\circ}C$. (Specific heat of brass is 394 $J \; kg^{-1} \; K^{-1}$.)
Solution:
$192 \times S \times (100 - 21.5)$ $= 128 \times 394 \times (21.5 - 8.4)$ $+ 240 \times 4200 \times (21.5 - 8.4)$ $\Rightarrow \; S = 916$ |
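The heat balance can be cross-checked in a few lines (my own sketch). Note the value of water's specific heat is an assumption here: the quoted answer of 916 follows with $c_\text{water} \approx 4180 \; J \; kg^{-1} \; K^{-1}$, whereas using 4200 gives $S \approx 920$:

```python
# heat lost by the metal = heat gained by calorimeter + water (masses in kg)
m_metal, T_metal = 0.192, 100.0
m_brass, c_brass = 0.128, 394.0     # J kg^-1 K^-1
m_water, c_water = 0.240, 4180.0    # J kg^-1 K^-1 (assumed; 4200 would give S ~ 920)
T0, Tf = 8.4, 21.5                  # initial and final water temperatures, Celsius

q_gained = (m_brass * c_brass + m_water * c_water) * (Tf - T0)
S = q_gained / (m_metal * (T_metal - Tf))
assert round(S) == 916              # J kg^-1 K^-1
```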
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)
@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic,
  author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło},
  title = {Topological models of arithmetic},
  journal = {ArXiv e-prints},
  year = {2018},
  note = {under review},
  keywords = {under-review},
  eprint = {1808.01270},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1LS},
}
Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.
The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.
Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?
By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic. The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models.
We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$.
Question. Which topological spaces support a topological model of arithmetic?
In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic.
Let me state the main theorem and briefly sketch the proof.
Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$.
Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output.
\begin{equation*}\small\begin{array}{rcr}
\cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*}
This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations ends with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child's observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$.
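The school-child's observation is exactly the statement that keeping only the final $k$ binary digits, i.e. reducing mod $2^k$, commutes with addition and multiplication (it is a ring homomorphism $\N\to\Z/2^k$). A quick computational check (my own sketch, not from the paper):

```python
# the last k binary digits of x+y and x*y depend only on the
# last k binary digits of x and y: arithmetic mod 2**k
k = 6
mask = (1 << k) - 1     # bitmask keeping the final k digits
for x in range(150):
    for y in range(150):
        assert (x + y) & mask == ((x & mask) + (y & mask)) & mask
        assert (x * y) & mask == ((x & mask) * (y & mask)) & mask
```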
Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired.
But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the
final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired.
Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order.
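The comparison just described can be sketched directly in code (a small illustration, not part of the article itself): compare binary digits from the least-significant end; at the first disagreement, the number carrying $0$ there is lower; if the common final digits agree and one representation is longer, the longer number is lower exactly when its next digit is $0$. Zero is represented by the empty digit string.

```python
def final_digits_less(n, m):
    """Return True iff n comes before m in the final-digits order."""
    if n == m:
        return False
    while n > 0 and m > 0:
        dn, dm = n % 2, m % 2
        if dn != dm:
            return dn == 0           # first disagreement, from the right
        n, m = n // 2, m // 2
    # no disagreement on the common suffix; one representation is longer
    longer_digit = (n or m) % 2      # next digit of the longer number
    n_is_longer = n > 0
    # the longer number is lower iff its next digit is 0
    return n_is_longer == (longer_digit == 0)

# Even numbers fall in the left half, odd numbers in the right, 0 in between:
print(final_digits_less(2, 0), final_digits_less(0, 1))  # True True
```

One can check against the description that highly even numbers such as $4$ sort to the left of $2$, and highly odd numbers such as $3$ to the right of $1$.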
The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is precisely the same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$.
We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$. Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$
The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$.
Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable.
The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. Even the successor function $x\mapsto x+1$ is not continuous with respect to this order.
Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order.
Go to the article to read more.
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)
@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic,
  author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło},
  title = {Topological models of arithmetic},
  journal = {ArXiv e-prints},
  year = {2018},
  note = {under review},
  keywords = {under-review},
  eprint = {1808.01270},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1LS},
} |
I am trying to implement a routine to solve a differential equation in Python. Basically the kind of equation that I am interested in solving is of the form:
$\displaystyle \frac{d^2}{dx^2} \left(x y(x) \right) = 2 x ((U(x)-a) y(x)+ 2 b y(x)^3)$
where $a$ is an unknown constant and $b$ is a known constant and $U(x)$ is a function that depends on $x$ but that I only know numerically (I mean, I don't have an explicit form of $U(x)$ in terms of $x$).
I need to find the value of $a$ that fulfills my initial and boundary conditions ($y(0)=y_0$ and $y(x\rightarrow \infty)=0$, $a<1$).
For that purpose I was considering to use a shooting method (a secant method to be precise) by solving several times the above equation with a RK4 (using
scipy.integrate.ode).
The problem that I have is that I don't know how to introduce the numerical values of $U(x)$ into my equations, given that ode only asks for values at the initial conditions.
Is there a way to solve my equation with
scipy.integrate.ode or with another solver? |
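Since the solver only ever calls your right-hand-side function at points it chooses, one common approach is to wrap the tabulated $U$ values in an interpolant and call that from the right-hand side. Below is a minimal sketch using the newer `scipy.integrate.solve_ivp`; the grid, the placeholder $U(x)$, and the values of $a$, $b$, $y_0$ are all invented for illustration and should be replaced by your actual data.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import solve_ivp

# Hypothetical tabulated potential U(x); replace x_tab, U_tab with your data.
x_tab = np.linspace(1e-6, 20.0, 400)
U_tab = 1.0 / (1.0 + x_tab)
U = interp1d(x_tab, U_tab, kind="cubic", fill_value="extrapolate")

a, b, y0 = 2.0, 0.01, 1.0          # a is the shooting parameter (arbitrary here)

def rhs(x, z):
    # First-order system for w = x*y, so w'' = 2x((U - a)y + 2b y^3), y = w/x.
    w, wp = z
    y = w / x
    return [wp, 2.0 * x * ((U(x) - a) * y + 2.0 * b * y ** 3)]

x_start = 1e-6                      # start just off x = 0 to avoid dividing by 0
sol = solve_ivp(rhs, (x_start, 20.0), [x_start * y0, y0], rtol=1e-8, atol=1e-10)
print(sol.success, sol.y[0, -1] / sol.t[-1])   # y at the right endpoint
```

For the shooting step, one would wrap this integration in a secant iteration on $a$, adjusting it until $y$ at the right endpoint is sufficiently close to $0$.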
(Take ZFC as background.)
The following two statements both follow from GCH:
ICF. Injective continuum function.
The continuum function (i.e., $\kappa \mapsto 2^\kappa$) is injective.
NJA. No jumping axiom.
For all infinite cardinals $\kappa$ and $\nu$, we have: $$\kappa < 2^\nu \rightarrow 2^\kappa \leq 2^\nu.$$
To see that NJA follows from GCH, assume $\kappa < 2^\nu$. By GCH, $2^\nu = \nu^+$, hence $\kappa < \nu^+$. So $\kappa \leq \nu$. Thus $2^\kappa \leq 2^\nu$.
In fact, the converse holds too; we can actually
prove GCH from the above two axioms. To see this, notice that GCH can be interpreted as saying that from $\kappa < 2^\nu$, we can derive $\kappa \leq \nu$. So assume $\kappa < 2^\nu$. Then from NJA, we deduce $2^\kappa \leq 2^\nu$. So from ICF, it follows that $\kappa \leq \nu$, as required.
Another interesting axiom that seems to be related to NJA is:
BA. Beth axiom.
For all infinite cardinals $\kappa$, there exists an ordinal $\alpha$ such that $2^\kappa = \beth_\alpha$.
I haven't been able to puzzle out whether or not NJA and BA imply each other, so:
Question 0. Is there a relationship between NJA and BA?
Okay. My motivation for factoring GCH as ICF+NJA is that I'm interested in axioms for set theory that determine the structure of the cardinal numbers (like GCH), but which don't trivialize the cardinal characteristics of the continuum (unlike GCH).
One approach to find such axioms is to look for statements that prove the truth of NJA, but which don't prove ICF. An obvious first axiom to consider is:
DCF. Degenerate continuum function.
For all infinite cardinal numbers $\kappa$, the following hold.
If $\kappa$ is regular, then $2^\kappa = 2^{\kappa^+}$.
$2^\kappa$ is regular.
This clearly refutes ICF. For example, under DCF, we can prove that $$2^{\aleph_0} = 2^{\aleph_1} = 2^{\aleph_2} = \cdots$$
and even that $$2^{\aleph_0} = 2^{\aleph_\omega}.$$
But, I haven't been able to puzzle out whether DCF implies either and/or both of NJA or BA.
Question 1. Does DCF imply either and/or both of NJA or BA? |
Suppose you have a linear system like this:
$$\mathbf{x}[k+1] = \mathbf{D} \mathbf{x}[k]$$ where matrix $\mathbf{D}$ is diagonal. Assume its diagonal entries are real, greater than zero and less than one, but not necessarily distinct. Is it still technically correct to say that the convergence rate is determined by the largest value in $\mathbf{D}$? And what about non-diagonalizable matrices (when an eigenvalue's geometric multiplicity is less than its algebraic multiplicity)?
The diagonal case is trivial because the system decouples into one-dimensional ones.
If the matrix is not diagonalizable, and the largest block in the Jordan normal form for the largest eigenvalue $\lambda$ is $m \times m$, then you can have $\|x[n]\| \sim C n^{m-1} \lambda^n$ as $n \to \infty$. This is because if $B$ is a matrix with $(B-\lambda I)^m = 0$ and $(B - \lambda I)^{m-1} \ne 0$, $B^n = (B-\lambda I+ \lambda I)^n = \sum_{k=0}^{m-1} {n \choose k} \lambda^{n-k} (B-\lambda I)^k$, and ${n \choose k}$ is a polynomial in $n$ of degree $k$. |
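The $n^{m-1}\lambda^n$ growth is easy to see numerically. In the sketch below (values of $\lambda$ and $m$ chosen arbitrarily), the ratio $\|B^n x\| / (n^{m-1}\lambda^n)$ settles toward a constant $C$, as the expansion above predicts.

```python
import numpy as np

# Single m x m Jordan block with eigenvalue lam (arbitrary illustrative values).
lam, m = 0.9, 3
B = lam * np.eye(m) + np.diag(np.ones(m - 1), k=1)
x = np.ones(m)

ratios = []
for n in (50, 100, 200):
    norm = np.linalg.norm(np.linalg.matrix_power(B, n) @ x)
    ratios.append(norm / (n ** (m - 1) * lam ** n))
print(ratios)  # roughly constant, consistent with ||x[n]|| ~ C n^(m-1) lam^n
```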
I thought I was done writing about this topic, but it just keeps coming back. The internet just cannot seem to leave this sort of problem alone:
"oomfies solve this pic.twitter.com/0RO5zTJjKk" — em ♥︎ (@pjmdolI) July 28, 2019

I don't know what it is about expressions of the form \(a\div b(c+d)\) that fascinates us as a species, but fascinate it does. I've written about this before (as well as why "PEMDAS" is terrible), but the more I've thought about it, the more sympathy I've found with those in the minority of the debate, and as a result my position has evolved somewhat.
So I'm going to go out on a limb, and claim that the answer
should be \(1\).
Before you walk away shaking your head and saying "he's lost it, he doesn't know what he's talking about", let me assure you that I'm obviously not denying the left-to-right convention for how to do explicit multiplication and division. Nobody's arguing that.* Rather, there's something much more subtle going on here.
What we may be seeing here is evidence of a mathematical "language shift".
It's easy to forget that mathematics did not always look as it does today, but has arrived at its current form through very human processes of invention and revision. There's an excellent page by Jeff Miller that catalogues the earliest recorded uses of symbols like the operations and the equals sign -- symbols that seem timeless, symbols we take for granted every day.
People also often don't realize that this process of invention and revision still happens to this day. The modern notation for the floor function is a great example that was only developed within the last century. Even today on the internet, you occasionally see discussions in which people debate on how mathematical notation can be improved. (I'm still holding out hope that my alternative notation for logarithms will one day catch on.)
Of particular note is the evolution of grouping symbols. We usually think only of parentheses (as well as their variations like square brackets and curly braces) as denoting grouping, but an even earlier symbol used to group expressions was the
vinculum-- a horizontal bar found over or under an expression. Consider the following expression: \[3-(1+2)\] If we wrote the same expression with a vinculum, it would look like this: \[3-\overline{1+2}\] Vincula can even be stacked: \[13-\overline{\overline{1+2}\cdot 3}=4\] This may seem like a quaint way of grouping, but it does in fact survive in our notation for fractions and radicals! You can even see both uses in the quadratic formula: \[x=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}\]
Getting back to the original problem, what I think we're seeing is evidence that
concatenation-- placing symbols next to each other with no sort of explicit symbol -- has become another way to represent grouping.
"But wait", you might say, "concatenation is used to represent
multiplication, not grouping!" That's certainly true in many cases, for example in how we write polynomials. However, there are a few places in mathematics that provide evidence that there's more to it than that.
First of all, as a beautifully-written Twitter thread by EnchantressOfNumbers (@EoN_tweets) points out, we use concatenation to show a special importance of grouping when we write out certain trigonometric expressions without putting their arguments in parentheses. Consider the following identity:
\[\sin 4u=2\sin 2u\cos 2u\] When we write such an equation, we're saying that not only do \(4u\) and \(2u\) represent multiplications, but that this grouping is so tight that they constitute the entire arguments of the sine and cosine functions. In fact, the space between \(\sin 2x\) and \(\cos 2x\) can also be seen as a somewhat looser form of concatenation. Then again, so can the space between \(\sin\) and \(x\), though it represents a different thing -- the connection of a function to its argument. Perhaps this is why the popular (and amazing) online graphing calculator Desmos is only so permissive when it comes to parsing concatenation:
An even more curious case is
mixed numbers. When writing mixed numbers, concatenation actually stands for addition, not multiplication. \[3\tfrac{1}{2}=3+\tfrac{1}{2}\] In fact, concatenation actually makes addition come before multiplication when we multiply mixed numbers! \[3\tfrac{1}{2}\cdot 5\tfrac{5}{6}=(3+\tfrac{1}{2})\cdot(5+\tfrac{5}{6})=20\tfrac{5}{12}\]
Now, you may feel that this example shows how mixed numbers are an inelegance in mathematical notation (and I would agree with you). Even so, I argue that this is evidence that we
fundamentally view concatenation as a way to represent grouping. It just so happens that, since multiplication takes precedence over addition anyway in the absence of other grouping symbols, we use concatenation when we write it. This all stems from a sort of "laziness" in how we write things -- laying out precedence rules allows us to avoid writing parentheses, and once we've established those precedence rules, we don't even need to write out the multiplication at all.
So how does the internet's favorite math problem fit into all this?
The most striking feature of the expression \(8\div 2(2+2)\) is that
it's written all in one line.
Mathematical typesetting is difficult. LaTeX is powerful, but has a steep learning curve, though various other editors have made it a bit easier, such as Microsoft Word's Equation Editor (which has much improved since when I first used it!). Calculators have also recognized this difficulty, which is why TI calculators now have MathPrint templates (though its entry is quite clunky compared to Desmos's "as-you-type" formatting via MathQuill).
Even so, all of these input methods exist in very specific applications. What about when you're writing an email? Or sending a text? Or a Facebook message? (If you're wondering "who the heck writes about math in a Facebook message", the answer at least includes "students who are trying to study for a test".) The evolution of these sorts of media has led to the importance of one-line representations of mathematics with easily-accessible symbols. When you don't have the ability (or the time) to neatly typeset a fraction, you're going to find a way to use the tools you've got. And that's even more important as we realize that
everybody can (and should!) engage with mathematics, not just mathematicians or educators.
So that might explain why a physics student might type "hbar = h / 2pi", and others would know that this clearly means \(\hbar=\dfrac{h}{2\pi}\) rather than \(\hbar=\dfrac{h}{2}\pi\). Remember, mathematics is not about just answer-getting. It's about communication of those ideas. And when the medium of communication limits how those ideas can be represented, the method of communication often changes to accommodate it.
What the infamous problem points out is that while almost nobody has laid out any explicit rules for how to deal with concatenation, we seem to have developed some implicit ones, which we use without thinking about them. We just never had to deal with them until recently, as more "everyday" people communicate mathematics on more "everyday" media.
Perhaps it's time that we address this convention explicitly and admit that
concatenation really has become a way to represent grouping, just like parentheses or the vinculum. This is akin to taking a more descriptivist, rather than prescriptivist, approach to language: all we would be doing is recognizing that this is already how we do things everywhere else.
Of course, this would throw a wrench in PEMDAS, but that just means we'd need to actually talk about the mathematics behind it rather than memorizing a silly mnemonic. After all, as inane as these internet math problems can be, they've shown that (whether they admit it or not) people really
do want to get to the bottom of mathematics, to truly understand it.
I'd say that's a good thing.
* If your argument for why the answer is \(16\) starts with "Well, \(2(2+2)\) means \(2\cdot(2+2)\), so...", then you have missed the point entirely. |
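For contrast, the uncontroversial left-to-right rule for explicit operators is exactly what programming languages implement, which is why the expression really does give \(16\) once the multiplication is written out:

```python
# With the multiplication made explicit, division and multiplication share
# precedence and evaluate left to right: (8 / 2) * (2 + 2).
result = 8 / 2 * (2 + 2)
print(result)  # 16.0
```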
For
p = 2, we have,
$\begin{align}&\sum_{n=1}^\infty[\zeta(pn)-1] = \frac{3}{4}\end{align}$
It seems there is a general form for odd
p. For example, for p = 5, define $z_5 = e^{\pi i/5}$. Then,
$\begin{align} &5 \sum_{n=1}^\infty[\zeta(5n)-1] = 6+\gamma+z_5^{-1}\psi(z_5^{-1})+z_5\psi(z_5)+z_5^{-3}\psi(z_5^{-3})+z_5^{3}\psi(z_5^{3}) = 0.18976\dots \end{align}$
with the
Euler-Mascheroni constant $\gamma$ and the digamma function $\psi(z)$. Does anyone know how to prove or disprove this? Also, how do we split $\psi(e^{\pi i/p})$ into its real and imaginary parts so as to express the above purely in real terms?
More details in my blog. |
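The $p = 2$ value quoted above is easy to confirm numerically (it also follows from $\sum_{m\ge 2}\frac{1}{m^2-1}=\frac34$ by swapping the order of summation). A quick standard-library check, with arbitrary truncation limits:

```python
# Check sum_{n>=1} (zeta(2n) - 1) = 3/4 using zeta(s) - 1 = sum_{m>=2} m^(-s).
def zeta_minus_one(s, m_max=50000):
    return sum(m ** -s for m in range(2, m_max))

total = sum(zeta_minus_one(2 * n) for n in range(1, 30))
print(total)  # approximately 0.75
```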
I am trying to write down a representation of $D_n = \langle \sigma, \tau \mid \sigma^n = \tau^2 =e, \tau \sigma = \sigma^{-1} \tau \rangle$ over $\mathbb{R}^2 \cong \mathbb{C}$ (as an $\mathbb{R}$-vector space).
What I Know:
My representation has to be a group homomorphism $\rho: D_n \rightarrow \text{GL}(2, \mathbb{R})$. I know that the homomorphism will be fully specified by specifying where I send the generators of $D_n$. Intuitively I know that I should be sending $\sigma$ to the $2 \times 2 $ rotation matrix for an angle of $2 \pi /n$ and $\tau$ to a horizontal reflection.
This makes me think I should be choosing \begin{array}{c c c} \sigma^k \rightarrow \begin{pmatrix} \cos{(2\pi k /n)} & -\sin{(2\pi k /n)} \\ \sin{(2\pi k /n)} & \cos{(2\pi k /n)} \end{pmatrix} & \text{and }& \tau \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \\ \end{array}
My Problem:
I am having issues checking that this map respects the group operation. I know I want to check that: $$ \rho((\sigma^a \tau^b)(\sigma^c\tau^d)) = \rho(\sigma^a \tau^b) \rho(\sigma^c\tau^d) $$ since an arbitrary member of $D_n$ can be written as $\sigma^a \tau^b$. I think it would suffice to show that $\rho(\sigma \tau)= \rho(\sigma) \rho(\tau)$. I can easily write down the matrices on the RHS as I have just defined them above. However, I don't know how to check to see if this product matrix equals the matrix for $\rho(\sigma \tau)$, since I don't know what that is! How should I go about verifying my homomorphism is in fact a homomorphism? |
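One way to see what is going on: since every element of $D_n$ is a word in the generators, the presentation guarantees a well-defined homomorphism as soon as the proposed images of $\sigma$ and $\tau$ satisfy the three defining relations $\sigma^n = \tau^2 = e$ and $\tau\sigma = \sigma^{-1}\tau$. A quick numerical check of those relations (with $n = 7$ chosen arbitrarily) might look like this:

```python
import numpy as np

n = 7
theta = 2 * np.pi / n
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # image of sigma
T = np.array([[1.0,  0.0],
              [0.0, -1.0]])                       # image of tau

I = np.eye(2)
assert np.allclose(np.linalg.matrix_power(S, n), I)   # sigma^n = e
assert np.allclose(T @ T, I)                          # tau^2 = e
assert np.allclose(T @ S, np.linalg.inv(S) @ T)       # tau sigma = sigma^-1 tau
print("all three defining relations hold")
```

In particular, $\rho(\sigma\tau)$ is not something to look up separately: it is defined to be $\rho(\sigma)\rho(\tau)$ once the relations are verified.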
Consider the strictly convex unconstrained optimization problem $\mathcal{O} := \min_{x \in \mathbb{R}^n} f(x).$ Let $x_\text{opt}$ denote its unique minimizer and $x_0$ be a given initial approximation to $x_\text{opt}.$ We will call a vector $x$ an $\epsilon$-close solution of $\mathcal{O}$ if \begin{equation} \frac{||x - x_{\text{opt}}||_2}{||x_0 - x_\text{opt}||_2} \leq \epsilon. \end{equation}
Suppose that there exist two iterative algorithms $\mathcal{A}_1$ and $\mathcal{A}_2$ to find an $\epsilon$-close solution of $\mathcal{O}$ with the following properties:
For any $\epsilon > 0,$ the total computational effort, i.e. effort required per iteration $\times$ total number of iterations, to find an $\epsilon$-close solution is the same for both algorithms. The per-iteration effort for $\mathcal{A}_1$ is $O(n),$ say, while that of $\mathcal{A}_2$ is $O(n^2).$
Are there situations, where one would prefer one algorithm over the other? Why? |
I am new to
Mathematica and I'm trying to plot the following analytical function at a few time steps (say $t=0.1, 0.5, 1, 10$), all in one plot (similar to
hold on in MATLAB).
$$\begin{align*} p(x;t)=\frac1{\sigma\sqrt{2\pi t}}&\left[\exp\left(-\frac{(x-x_0-\mu t)^2}{2\sigma^2 t}\right)+\exp\left(\frac{-4x_0\mu t-(x+x_0-\mu t)^2}{2\sigma^2 t}\right)+\right.\\&\left.\frac{2\mu}{\sigma^2}\exp\left(\frac{2\mu x}{\sigma^2}\right)\left\{1-\Phi\left(\frac{x+x_0+\mu t}{\sigma \sqrt t}\right)\right\}\right] \end{align*}$$
This function is from page 224 (with some typos corrected) of the book:
Cox, D. R., and H. D. Miller.
The Theory of Stochastic Processes. Vol. 134. CRC Press, 1977.
Note that the function $\Phi[(x+x_0+\mu t)/(\sigma \sqrt t)]$ is the integral from $-\infty$ to $x$ as:
$$\Phi(x)=\frac1{\sqrt{2\pi}}\int_{-\infty}^x e^{-\frac12 y^2}\mathrm dy$$
I am aware of
Mathematica's error function
Erf and Normal Distribution but they integrate from $0$ to $x$ and my function is from $-\infty$ to $x$.
Here is what I have for a code as an initial one-time trial with some parameters.
ClearAll[μ, σ, t, z, z0, f1, f2, f3, f4, f5, f6, f];
z0 = 0.01; μ = 0.1; σ = 1; t = 10;
f1 := 1/(σ*Sqrt[2*π*t])
f2[z_] := Exp[-((z - z0 - μ*t)^2)/(2*t*σ^2)];
f3[z_] := Exp[(-4*z0*μ*t - (z + z0 - μ*t)^2)/(2*t*σ^2)];
f4[z_] := (2*μ)/σ^2*Exp[(2*μ*z)/σ^2];
f5[z] := Integrate[Exp[-1/2 ((u + z0 + μ*t)/(σ Sqrt[t]))^2]/Sqrt[2 π], {u, -∞, z}]
(*f6[z_]:=Simplify[f5,t>0];*)
f = f1[z]*(f2[z] + f3[z] + f4[z]*(1 - f5[z]))
(1/(10 Sqrt[2 π]))[z] (E^(1/200 (-0.4 - (-9.99 + z)^2)) + E^(- (1/200) (-10.01 + z)^2) + 0.2 E^(0.2 z) (1 - ((5. - 1.32904*10^-15 I) + (0.353553 (10.01 + 1. z) Erf[0.0707107 Sqrt[(10.01 + 1. z)^2]]) / Sqrt[(0.707814 + 0.0707107 z)^2])[z]))
Plot[{f[z]}, {z, z0, 10}]
which is not working (I use
z for
x here). The function should be real, but my answer has an imaginary part. |
I will use variances instead of standard deviations in notation for random variables.
For any two random variables $A$ and $B$ with respective means $\mu_A, \mu_B$ and respective variances $\sigma_A^2, \sigma_B^2$, what is the "best" estimate of $B$ in terms of $A$? By "best" we mean that if $g(A)$ denotes the estimate of $B$ in terms of $A$, then we seek the function $g(\cdot)$ such that $E[(B-g(A))^2]$, the mean-square error of the estimate, is as small as possible. It is well-known that the minimum mean-square error (MMSE) estimator of $B$ in terms of $A$ is the
conditional mean $E[B\mid A]$ of $B$ given $A$. This conditional mean is a random variable and it is a function of $A$, not of $B$ as it seems, that is, $E[B\mid A]$ is the $g(A)$ that we seek. It is also known that $E[g(A)] = E\big[E[B\mid A]\big]$, the mean of this particular function of $A$, happens, by a miracle of modern mathematics, to equal $E[B] = \mu_B$, the mean of $B$.
A more constrained version of the above function optimization problem asks, "What is the
linear minimum mean-square error (LMMSE) estimator of $B$ in terms of $A$?" where we seek estimators that are constrained to be of the form $\alpha A + \beta$ and we try to find $\alpha$ and $\beta$ such that $E[(B-(\alpha A + \beta))^2]$ is as small as possible. Here the well-known answer is that $$\alpha = \rho\frac{\sigma_B}{\sigma_A}, \quad \beta = \mu_B - \rho\frac{\sigma_B}{\sigma_A}\mu_A = \mu_B - \alpha\mu_A \tag{1}$$ where $\rho$ is the correlation coefficient, and it is also known that this function $\alpha A + \beta$ has mean and variance given by $$E[\alpha A + \beta] = \alpha \mu_A + \beta = \mu_B, \quad\operatorname{var}(\alpha A + \beta) = \rho^2\sigma_B^2.\tag{2}$$
As pointed out by @Ben, your assumptions that $A$ is a normal random variable $N(0, \sigma_A^2)$ and that the conditional distribution of $B$ given that $A$ has value $a$ is $N(qa,\sigma_b^2)$ (where $q$ and $\sigma_b$ are known
constants (not dependent on $a$ in any way)) is equivalent to the assumption that $A$ and $B$ are jointly normal random variables. If you don't like this bald assertion, notice that\begin{align}f_A(a) &\propto \exp\left(-\frac{a^2}{2\sigma_A^2}\right)\\f_{B\mid A}(b\mid a) &\propto \exp\left(-\frac{(b-qa)^2}{2\sigma_b^2}\right)\end{align}and so the joint density $$f_{A,B}(a,b) = f_{B\mid A}(b\mid a)\cdot f_A(a) \propto \exp(Q(a,b))$$where $Q(a,b)$ is a quadratic function of $a$ and $b$, that is, after completing the square etc, it will be found that the joint density is a jointly normal (a.k.a. bivariate normal) density.
Now, in the special case when $A$ and $B$ are
jointly normal random variables (that is, they have a bivariate normal density), the MMSE estimator $E[B\mid A]$ is a linear function of $A$, and so the MMSE estimator must coincide with the LMMSE estimator, no? For jointly normal random variables, the conditional distribution of $B$ given $A = a$ is a normal distribution with mean and variance given by $$E[B \mid A = a] = \rho\frac{\sigma_B}{\sigma_A}a + \mu_B - \rho\frac{\sigma_B}{\sigma_A}\mu_A, \quad \operatorname{var}(B \mid A = a) = \sigma_B^2(1-\rho^2) \tag{3}$$ where we are told that $\mu_A = 0$ in this instance, and that $$E[B \mid A = a] = qa, \quad \operatorname{var}(B \mid A = a) = \sigma_b^2\tag{4}$$ (note the subscript is a lower-case $b$). Comparing $(3)$ and $(4)$, we see that it must be that $\mu_B = 0$, that the known $q$ equals $\rho\frac{\sigma_B}{\sigma_A}$ (remember that we are given the value of $\sigma_A$), and that the known $\sigma_b^2$ must equal $\sigma_B^2(1-\rho^2)$. So we get that $$\rho\sigma_B = q\sigma_A, \quad \sigma_b^2 = \sigma_B^2-\rho^2\sigma_B^2= \sigma_B^2 - (q\sigma_A)^2\\\implies \sigma_B^2 = \sigma_b^2 + (q\sigma_A)^2, \quad \rho = \frac{q\sigma_A}{\sqrt{\sigma_b^2 + (q\sigma_A)^2}} \tag{5}$$
So, $A$ and $B$ are zero-mean jointly normal random variables with variances $\sigma_A^2$ and $\sigma_B^2 = \sigma_b^2 + (q\sigma_A)^2$ and correlation coefficient $\rho = \frac{q\sigma_A}{\sqrt{\sigma_b^2 + (q\sigma_A)^2}}$. It follows that the conditional distribution of $A$ given that $B = b$ is a normal distribution with mean and variance $$E[A \mid B = b] = \rho\frac{\sigma_A}{\sigma_B}b, \quad\operatorname{var}(A \mid B = b) = \sigma_A^2(1-\rho^2)$$
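A quick Monte Carlo sketch of the identities in $(5)$, with arbitrary illustrative values of $q$, $\sigma_A$, $\sigma_b$: generate $A \sim N(0,\sigma_A^2)$ and $B \mid A=a \sim N(qa, \sigma_b^2)$, then compare the sample variance of $B$ and the sample correlation with the closed-form expressions.

```python
import math
import random

random.seed(0)
q, sigma_A, sigma_b = 0.7, 2.0, 1.0   # arbitrary illustrative parameters
N = 200_000
A = [random.gauss(0.0, sigma_A) for _ in range(N)]
B = [random.gauss(q * a, sigma_b) for a in A]

mean_B = sum(B) / N
var_B = sum((b - mean_B) ** 2 for b in B) / N
cov = sum(a * (b - mean_B) for a, b in zip(A, B)) / N
rho = cov / (sigma_A * math.sqrt(var_B))

print(var_B, sigma_b ** 2 + (q * sigma_A) ** 2)               # should agree
print(rho, q * sigma_A / math.sqrt(sigma_b ** 2 + (q * sigma_A) ** 2))
```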
eQuant aims at providing a fast and easy-to-use service for assessing protein model quality. In addition, lightweight visualizations help to quickly catch the information that is of most importance to the user. If users are interested in inspecting and further processing the made assessments, raw output data is also available for download. A major advantage of eQuant over most available quality assessment programs (MQAPs) is the ability to process structures within only a few seconds. Also no processing parameters have to be provided by the user (force fields, distance cut-offs etc.). In the assessment process, all chains in a structure are considered, including given inter-molecular interactions between residues. Essentially, eQuant employs a set of features and procedures that have proven themselves to be successful in other, existing approaches, but also considers information derived from coarse-grained knowledge-based potentials [Heinke, 2012, Heinke, 2013] as well as residue-residue interaction preferences and packing statistics. These data are obtained for each residue and evaluated by means of the random subspace method [Ho, 1998], which gives a prediction of local per-residue error. For each residue, the corresponding predicted error is an estimate of its Cα deviation between its location in the submitted model and in the unknown native structure. With respect to processing time, the assessment of a normal-sized query structure (a single chain with about 120 residues) requires less than one second of computation. However, like most MQAPs, eQuant is currently limited to processing soluble proteins.
Given all predicted local errors (Cα deviations), an estimate on the global structural match to the unknown native structure can be made; or to put it differently: the deviation between structure model and native structure after superimposition is estimated. Here, the global distance test total score (GDT_TS) [Kryshtafovych, 2014] is a widely-used measure. Ranging from 0% to 100%, it quantifies the quality of the structural match. Values close to 100% indicate almost perfect structural alignments. Based on predicted per-residue deviations, the GDT can be computed and an overall quantification of the deviation to the unknown native structure estimated. Furthermore, the computed GDT score is re-scaled in order to provide a Z-score. The transformation is calculated by analyzing the population of pre-processed structures with a relative length of ± 10% to the query model structure. If a set of structures is submitted, all models are sorted according to the Z-score and reported in descending order. Thus, the model predicted as most reliable will appear first.
To derive the GDT scores two structures of the same sequence, but with different tertiary structure were superimposed. The GDT score was computed by $$GDT\_TS = \frac{1}{4} \sum_{t=1,2,4,8} \frac{f(t) \cdot 100}{N}$$ with \(f(t)\) referring to the absolute count of residues pairs less than \(t\) Å apart and \(N\) being the number of aligned atom pairs.
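The formula above amounts to averaging, over the four thresholds $t = 1, 2, 4, 8$ Å, the percentage of aligned residue pairs less than $t$ Å apart. A small sketch (the distances are invented for illustration):

```python
# GDT_TS: mean over t in {1, 2, 4, 8} of the percentage of aligned
# residue pairs whose deviation is less than t Angstroms.
def gdt_ts(distances):
    n = len(distances)
    return sum(100.0 * sum(1 for d in distances if d < t) / n
               for t in (1, 2, 4, 8)) / 4.0

print(gdt_ts([0.5, 1.5, 3.0, 9.0]))  # 56.25
```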
As alternative global quality measure the TM-score is provided. It ranges from 0 to 1, whereby values above 0.5 generally indicate the same fold for the submitted and the native structure. Scores close to 1 occur for almost identical structures [Xu, 2010].
The backbone of the utilized assessment routine is the set of per-residue predictions of Cα deviations between residue locations in the given model and the native structure. eQuant was trained on the CASP9 dataset and evaluated on CASP10 [Kryshtafovych, 2014].
As the first step in the process, the following descriptive features are determined for each residue:
These data provide a numerical description of a residue's environment with respect to observed and expected energies, and thus local stability. The random subspace method [Ho, 1998] is finally employed to predict and report Cα deviations as a measure of 'unnaturalness'. All predictions are thus made only by analyzing the submitted model; no knowledge of the native structure is required. All underlying statistics are gathered from 63 soluble, non-redundant, high-resolution protein structures obtained via the PDB-REPRDB service [Noguchi, 2001]. During training, the CASP X data set [Moult, 2014] was used to compute Cα-Cα distances, the target function. QMEAN [Benkert, 2008; Benkert, 2009; Benkert, 2011] followed a comparable approach during design and strongly influenced the development of this method.
eQuant accepts files containing structure data in RCSB Protein Data Bank format as well as archive files. Supported archive file formats are .tar, .gz, .rar, .zip and .7z. This enables you to submit multiple structures (and structure complexes) at the same time. After processing, all structure assessments are reported on one single page in descending order with respect to global quality expressed by the GDT Z-score.
The evaluation results can be exported as simple text files. The SMALL file contains the results in condensed format: only basic residue information, the actual per-residue error score, and a rudimentary interpretation are provided. Scores exceeding 3.8 Å are considered unreliable. Should you be interested in more detailed data, choose the FULL report file, which summarizes not only the final evaluation scores but also all information gathered during the quality assessment routine. Although PV [Biasini, 2014] is used for visualization of the structure, you can furthermore download a modified PDB file of the originally submitted structure with evaluation scores written to the B-factor column. Using e.g. PyMOL [DeLano, 2002] you can conveniently create appealing images. Additionally, a ZIP archive is provided which contains the result files. Last but not least, each figure on the result page can be stored locally by utilizing the browser's capabilities or Highcharts' [Highsoft, 2012] context menu in the upper right corner.
Predicting the Products of Electrolysis
Electrolysis is a way of separating a compound by passing an electric current through it; the products are the compound’s component ions.
Learning Objectives
Predict the products of an electrolysis reaction
Key Takeaways

Key Points

- The main components of an electrolytic cell are an electrolyte, DC current, and two electrodes.
- The key process of electrolysis is the interchange of atoms and ions by the removal or addition of electrons to the external circuit.
- Oxidation of ions or neutral molecules occurs at the anode, and reduction of ions or neutral molecules occurs at the cathode.

Key Terms

- electrolyte: A substance that, in solution or when molten, ionizes and conducts electricity.
- electrolysis: The chemical change produced by passing an electric current through a conducting solution or a molten salt.

What Is Electrolysis?
In order to predict the products of electrolysis, we first need to understand what electrolysis is and how it works. Electrolysis is a method of separating bonded elements and compounds by passing an electric current through them. It uses a direct electric current (DC) to drive an otherwise non-spontaneous chemical reaction. Electrolysis is very important commercially as a stage in the separation of elements from naturally occurring sources, such as ores, using an electrolytic cell.
The main components required to achieve electrolysis are:
- An electrolyte: a substance containing free ions, which are the carriers of electric current in the electrolyte. If the ions are not mobile, as in a solid salt, then electrolysis cannot occur.
- A direct current (DC) supply: provides the energy necessary to create or discharge the ions in the electrolyte. Electric current is carried by electrons in the external circuit.
- Two electrodes: electrical conductors that provide the physical interface between the electrical circuit providing the energy and the electrolyte.

The Interchange of Atoms and Ions
The key process of electrolysis is the interchange of atoms and ions by the removal or addition of electrons to the external circuit. The required products of electrolysis are in a different physical state from the electrolyte and can be removed by some physical processes.
Each electrode attracts ions that are of the opposite charge. Positively charged ions, or cations, move toward the electron-providing cathode, which is negative; negatively charged ions, or anions, move toward the positive anode. You may have noticed that this is the opposite of a galvanic cell, where the anode is negative and the cathode is positive.
At the electrodes, electrons are absorbed or released by the atoms and ions. Those atoms that gain or lose electrons become charged ions that pass into the electrolyte. Those ions that gain or lose electrons to become uncharged atoms separate from the electrolyte. The formation of uncharged atoms from ions is called discharging. The energy required to cause the ions to migrate to the electrodes, and the energy to cause the change in ionic state, is provided by the external source.

Oxidation and Reduction
Oxidation of ions or neutral molecules occurs at the anode, and reduction of ions or neutral molecules occurs at the cathode. For example, it is possible to oxidize ferrous ions to ferric ions at the anode:
[latex]\text{Fe}^{2+} (\text{aq}) \rightarrow \text{Fe}^{3+} (\text{aq}) + \text{e}^-[/latex]
It is also possible to reduce ferricyanide ions to ferrocyanide ions at the cathode:
[latex]\text{Fe}(\text{CN})^{3-}_6 + \text{e}^- \rightarrow \text{Fe}(\text{CN})^{4-}_6[/latex]
Neutral molecules can also react at either electrode. Electrolysis reactions involving H⁺ ions are fairly common in acidic solutions. In alkaline water solutions, reactions involving hydroxide ions (OH⁻) are common. The substances oxidized or reduced can also be the solvent, which is usually water, or the electrodes. It is possible to have electrolysis involving gases.

Predicting the Products of Electrolysis
Let’s look at how to predict the products. For example, what two ions will CuSO₄ break down into? The answer is Cu²⁺ and SO₄²⁻. Let’s look more closely at this reaction.
We take two copper electrodes and place them into a solution of blue copper sulfate (CuSO₄) and then turn the current on. We notice that the initial blue color of the solution remains unchanged, but it appears that copper has been deposited on one of the electrodes and dissolved from the other. This is because Cu²⁺ ions are attracted to the negatively charged cathode, and since the cathode is putting out electrons, the Cu²⁺ becomes reduced to form copper metal, which is deposited on the electrode. The reaction at this electrode is:
[latex]\text{Cu}^{2+} (\text{aq}) + 2\text{e}^- \rightarrow \text{Cu} (\text{s})[/latex]
At the positive anode, copper metal is oxidized to form Cu²⁺ ions. This is why it appears that the copper has dissolved from the electrode. The reaction at this electrode is:
[latex]\text{Cu} (\text{s}) \rightarrow \text{Cu}^{2+} (\text{aq}) + 2\text{e}^-[/latex]
We just saw electric current used to split CuSO₄ into its component ions. This is all it takes to predict the products of electrolysis; all you have to do is break down a compound into its component ions.

Electrolysis of Sodium Chloride
Two commonly used methods of electrolysis involve molten sodium chloride and aqueous sodium chloride, which give different products.
Learning Objectives
Predict the products of electrolysis of sodium chloride under molten and aqueous conditions
Key Takeaways

Key Points

- Sodium metal and chlorine gas can be obtained with the electrolysis of molten sodium chloride.
- Electrolysis of aqueous sodium chloride yields hydrogen and chlorine, with aqueous sodium hydroxide remaining in solution.
- The reason for the difference is that the reduction of Na⁺ (E° = –2.7 V) is energetically more difficult than the reduction of water (–1.23 V).

Key Terms

- anode: The electrode of an electrochemical cell at which oxidation occurs.
- cathode: The electrode of an electrochemical cell at which reduction occurs.

Electrolysis of NaCl
As we have covered, electrolysis is the passage of a direct electric current through an ionic substance that is either molten or dissolved in a suitable solvent. This results in chemical reactions at the electrodes and the separation of materials. Two commonly used methods of electrolysis involve molten sodium chloride and aqueous sodium chloride. You might think that both methods would give you the same products, but this is not the case. Let’s go through each of the methods to understand the different processes.
Electrolysis of Molten NaCl
If sodium chloride is melted (above 801 °C), two electrodes are inserted into the melt, and an electric current is passed through the molten salt, then chemical reactions take place at the electrodes.
Sodium ions migrate to the cathode, where electrons enter the melt and are reduced to sodium metal:
[latex]{\text{Na}}^{+} + {\text{e}}^{-} \rightarrow \text{Na}[/latex]
Chloride ions migrate the other way, toward the anode. They give up their electrons to the anode and are oxidized to chlorine gas:
[latex]{\text{Cl}}^{-} \rightarrow \frac{1}{2}{\text{Cl}}_{2} + {\text{e}}^{-}[/latex]
The overall reaction is the breakdown of sodium chloride into its elements:
[latex]2\text{NaCl} \rightarrow 2\text{Na}(\text{s}) + {\text{Cl}}_{2}(\text{g})[/latex]
Electrolysis of Aqueous NaCl
What happens when we have an aqueous solution of sodium chloride? Well, we can’t forget that we have to factor water into the equation. Since water can be both oxidized and reduced, it competes with the dissolved Na⁺ and Cl⁻ ions. Rather than producing sodium, hydrogen is produced.
The reaction at the cathode is:
[latex]{\text{H}}_{2}\text{O} (\text{l}) + 2 {\text{e}}^{- } \rightarrow {\text{H}}_{2}(\text{g}) + 2{ \text{OH}}^{- }[/latex]
The reaction at the anode is:
[latex]{\text{Cl}}^{- } \rightarrow \frac{1}{2} {\text{Cl}}_{2}(\text{g}) + \text{e}^-[/latex]
The overall reaction is as follows:
[latex]\text{NaCl}(\text{aq}) + {\text{H}}_{2}\text{O}(\text{l}) \rightarrow {\text{Na}}^{+}(\text{aq}) + {\text{OH}}^{-}(\text{aq}) + {\text{H}}_{2}(\text{g}) + \frac{1}{2}{\text{Cl}}_{2}(\text{g})[/latex]
Reduction of Na⁺ (E° = –2.7 V) is energetically more difficult than the reduction of water (–1.23 V), so in aqueous solution the latter will prevail.

Electrolysis of Water
Pure water cannot undergo significant electrolysis without an electrolyte, such as an acid or a base.
Learning Objectives
Recall the properties of an electrolyte that enable the electrolysis of water
Key Takeaways

Key Points

- Electrolysis of a solution of sulfuric acid or of a salt, such as NaNO₃, results in the decomposition of water at both electrodes.
- Hydrogen will appear at the cathode and oxygen will appear at the anode.
- The amount of hydrogen generated is twice the number of moles of oxygen, and both are proportional to the total electrical charge conducted by the solution.

Key Terms

- electrolysis: The chemical change produced by passing an electric current through a conducting solution or a molten salt.
Pure water cannot undergo significant electrolysis without adding an electrolyte. If the object is to produce hydrogen and oxygen, the added electrolyte must be energetically more difficult to oxidize or reduce than water itself. For example, electrolysis of a solution of sulfuric acid or of a salt, such as NaNO₃, results in the decomposition of water at both electrodes:
Cathode: [latex]{\text{H}}_{2}\text{O} + 2 {\text{e}}^{-} \rightarrow {\text{H}}_{2}(\text{g}) + 2 {\text{OH}}^{-}[/latex]
E = 0.00 V
Anode: [latex]2 {\text{H}}_{2}\text{O} \rightarrow {\text{O}}_{2}(\text{g}) + 4 {\text{H}}^{+} + 4 {\text{e}}^{-}[/latex]
E° = -1.23 V
Multiplying the cathode reaction by 2, in order to match the number of electrons transferred, results in this net equation, after OH⁻ and H⁺ ions combine to form water:
Net: [latex]2 {\text{H}}_{2}\text{O}(\text{l}) \rightarrow 2 {\text{H}}_{2}(\text{g}) + {\text{O}}_{2}(\text{g})[/latex]
E = -1.23 V
Hydrogen will appear at the cathode, the negatively charged electrode, where electrons enter the water, and oxygen will appear at the anode, the positively charged electrode. The number of moles of hydrogen generated is twice the number of moles of oxygen, and both are proportional to the total electrical charge conducted by the solution. The number of electrons pushed through the water is twice the number of generated hydrogen molecules, and four times the number of generated oxygen molecules.
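The 2:1 mole relationship can be expressed directly in terms of the charge passed. A small sketch (function and variable names are illustrative; 96,485 C/mol is the faraday discussed in the next section):

```python
F = 96485.0  # coulombs per mole of electrons (one faraday)

def water_electrolysis_moles(charge_coulombs):
    """Moles of H2 and O2 produced when a given charge passes through water."""
    moles_e = charge_coulombs / F
    # 2 electrons per H2 molecule, 4 electrons per O2 molecule
    return moles_e / 2.0, moles_e / 4.0

h2, o2 = water_electrolysis_moles(F)  # exactly one faraday of charge
```

One faraday of charge thus yields 0.5 mol H₂ and 0.25 mol O₂, twice as much hydrogen as oxygen.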
Johann Ritter, who went on to invent the first electrochemical cell, was one of the first people to discover the decomposition of water by electricity.
Electrolysis Stoichiometry
The amount of chemical change that occurs in electrolysis is stoichiometrically related to the amount of electrons that pass through the cell.
Learning Objectives
Predict how many coulombs a given electrochemical reaction will require
Key Takeaways

Key Points

- From the perspective of the voltage source and circuit outside the electrodes, the flow of electrons is generally described in terms of electrical current using the SI units of coulombs and amperes.
- It takes 96,485 coulombs to constitute a mole of electrons, a unit known as the faraday (F).
- The equivalent weight of a substance is defined as the molar mass divided by the number of electrons required to oxidize or reduce each unit of the substance.

Key Terms

- coulomb: In the International System of Units, the derived unit of electric charge; the amount of electric charge carried by a current of 1 ampere flowing for 1 second. Symbol: C.
- faraday: The quantity of electricity required to deposit or liberate 1 gram equivalent weight of a substance during electrolysis; approximately 96,485 coulombs.

Stoichiometry of an Electrolytic Cell
The extent of chemical change that occurs in an electrolytic cell is stoichiometrically related to the number of moles of electrons that pass through the cell. From the perspective of the voltage source and circuit outside the electrodes, the flow of electrons is generally described in terms of electrical current using the SI units of coulombs and amperes. It takes 96,485 coulombs to constitute a mole of electrons, a unit known as the faraday (F).
This relation was first formulated by Michael Faraday in 1832, in the form of two laws of electrolysis:
The weights of substances formed at an electrode during electrolysis are directly proportional to the quantity of electricity that passes through the electrolyte. The weights of different substances formed by the passage of the same quantity of electricity are proportional to the equivalent weight of each substance.
The equivalent weight of a substance is defined as the molar mass divided by the number of electrons required to oxidize or reduce each unit of the substance. Thus, one mole of V³⁺ corresponds to three equivalents of this species, and will require three faradays of charge to deposit it as metallic vanadium ([latex]\text{V}^{3+} + 3\text{e}^- \rightarrow \text{V}[/latex]).
Most stoichiometric problems involving electrolysis can be solved without explicit use of Faraday’s laws. The “chemistry” in these problems is usually very elementary; the major difficulties usually stem from unfamiliarity with the basic electrical units:
- current (in amperes) is the rate of charge transport: 1 amp = 1 [latex]\frac {\text{Coulombs}}{\text{second}}[/latex].
- power (in watts) is the rate of energy production or consumption: 1 W = 1 [latex]\frac {\text{Joule}}{\text{second}}[/latex]. 1 kilowatt-hour = 3.6 × 10⁶ J.

Example
A metallic object to be plated with copper is placed in a solution of CuSO₄. What mass of copper will be deposited if a current of 0.22 amp flows through the cell for 1.5 hours?
To solve, set up a dimensional analysis equation:
[latex]1.5\ \text{hours} \times \frac {3600\ \text{seconds}}{1\ \text{hour}} \times \frac {0.22\ \text{Coulombs}}{\text{second}} \times \frac {1\ \text{mole}\ \text{e}^-}{96485\ \text{Coulombs}} \times \frac {1\ \text{mole}\ \text{Cu}^{2+}}{2\ \text{mole}\ \text{e}^-} \times \frac {63.54\ \text{grams}\ \text{Cu}}{1\ \text{mole}\ \text{Cu}} =[/latex]
The answer is 0.39 g Cu.
What if this question were asked in a different fashion? How would you go about solving it?
How many amps would it take to deposit 0.39 grams of Cu in 1.5 hours?
Again, use dimensional analysis relationships to solve:
[latex]0.39\ \text{g}\ \text{Cu} \times \frac {1\ \text{mole}\ \text{Cu}}{63.54\ \text{g}\ \text{Cu}} \times \frac{2\ \text{moles}\ \text{e}^-}{1\ \text{mole}\ \text{Cu}^{2+}} \times \frac {96485\ \text{Coulombs}}{1\ \text{mole}\ \text{e}^-} = 1184\ \text{Coulombs}[/latex]
1.5 hours is the equivalent of 5400 seconds:
[latex]\frac{1184\ \text{Coulombs}}{5400\ \text{seconds}} = 0.22\ \text{Amps}[/latex]
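The dimensional-analysis chain above is easy to wrap in a reusable function. A sketch (the function and variable names are illustrative assumptions, not from the text):

```python
F = 96485.0  # coulombs per mole of electrons (one faraday)

def plated_mass(current_amps, time_seconds, molar_mass, electrons_per_ion):
    """Grams of metal deposited by a given current over a given time."""
    coulombs = current_amps * time_seconds
    moles_e = coulombs / F
    moles_metal = moles_e / electrons_per_ion
    return moles_metal * molar_mass

# The copper example: 0.22 A for 1.5 hours, Cu2+ (2 e- per ion), M = 63.54 g/mol
m = plated_mass(0.22, 1.5 * 3600, 63.54, 2)  # ~0.39 g
```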
It is known that the following is true in the Solovay model (SM) for ZFC: any countable OD (ordinal-definable) set $X$ of reals necessarily consists of OD elements. What about countable OD sets of higher rank?

Theorem. In SM, if $A\ne\emptyset$ is a countable OD set of sets of reals, then every element of $A$ (a set of reals) is itself OD.

Sketch (works only if the sets in $A$ are pairwise disjoint). As being OD is an OD property, it suffices to prove that $A$ contains at least one OD element. Suppose this is not the case. There is a countable ordinal $\xi$ and a $Coll(\omega,\xi)$-name $t\in V$ ($V$ being the ground model of ZFC whose SM-extension we are talking about) such that the union $U$ of all sets in $A$ (an OD set of reals) contains a real of the form $t[f]$, where $f\in\xi^\omega$ is $Coll(\omega,\xi)$-generic over $V$. Now define an equivalence relation $E$ on $\xi^\omega$ such that $f\,E\,g$ iff either the reals $t[f],t[g]$ belong to the same set in $A$ or both $t[f],t[g]$ do not belong to $U$. Obviously $E$ is OD and has at most countably many equivalence classes. Since the domain of $E$ plays the same role as the reals, we can conclude that all $E$-classes are OD; therefore some sets in $A$ are OD too, a contradiction.

Question. Is it true in SM that every countable OD set (with no restriction on the nature of its elements) has an OD element?
(After some attempts to solve it.) Still the problem persists, and it seems to be focused in the following conjecture.
Conjecture. In the Solovay model for ZFC (= the Levy collapse extension of $L$), let $X$ be a countable or finite non-empty OD (ordinal-definable) set of sets of reals. Then $X$ contains a proper OD subset $\emptyset\ne X'\subsetneqq X$.
The conjecture is equally open for many other generic models not especially designed to make something definable - like for instance a simple Cohen or random-Solovay extension $L[a]$ of $L$.
My first thoughts are along the lines of your second-last paragraph. Here is a quick sketch of how to formalise it. From simple Hückel theory, you can obtain the coefficients of the AOs in the MOs:
$$|\psi_i\rangle = \sum_n c_{n,i} |n\rangle$$
where $c_{n,i}$ denotes the coefficient of AO $|n\rangle$ in the MO $|\psi_i\rangle$.
In Hückel theory the main difference between benzene and pyridine is that one atomic orbital is shifted in energy, so one of the $\alpha$ terms turns into, say, $\gamma$. Without loss of generality we will label the nitrogen with the index $n = 1$. In theory, the resonance integrals $\beta$ involving the nitrogen should change as well, but to a first approximation we assume they are the same.
In this case therefore the Hamiltonian matrix, when cast in the AO basis, changes from $\mathbf{H}_0$ to $\mathbf{H}$:
$$\mathbf{H}_0 = \begin{pmatrix}\alpha & \beta & 0 & 0 & 0 & \beta \\\beta & \alpha & \beta & 0 & 0 & 0 \\0 & \beta & \alpha & \beta & 0 & 0 \\0 & 0 & \beta & \alpha & \beta & 0 \\0 & 0 & 0 & \beta & \alpha & \beta \\\beta & 0 & 0 & 0 & \beta & \alpha \end{pmatrix} \qquad \longrightarrow \qquad \mathbf{H} = \begin{pmatrix}\gamma & \beta & 0 & 0 & 0 & \beta \\\beta & \alpha & \beta & 0 & 0 & 0 \\0 & \beta & \alpha & \beta & 0 & 0 \\0 & 0 & \beta & \alpha & \beta & 0 \\0 & 0 & 0 & \beta & \alpha & \beta \\\beta & 0 & 0 & 0 & \beta & \alpha \end{pmatrix}$$
Most of the Hamiltonian is unchanged, so we could consider this within the framework of perturbation theory, where we have a perturbed Hamiltonian $\hat{H} = \hat{H}_0 + \hat{V}$. The relevant perturbation is therefore
$$\mathbf{V} = \begin{pmatrix}\gamma - \alpha & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$
or to condense this back into operator form, $\hat{V} = (\gamma - \alpha)|1\rangle\langle 1|$.
The proper mathematical analysis is slightly annoying because the (unperturbed) eigenstates that we're interested in looking at are degenerate, which then necessitates casting $\hat{V}$ as a matrix in the basis of the degenerate states of interest. Here, the two states we're interested in are $|\psi_2\rangle$ and $|\psi_3\rangle$. The application of Hückel theory to benzene gives the formulae for these unperturbed states as
$$\begin{align}|\psi_2\rangle &= \frac{\sqrt 3}{3}|1\rangle + \frac{\sqrt 3}{6}|2\rangle - \frac{\sqrt 3}{6}|3\rangle - \frac{\sqrt 3}{3}|4\rangle - \frac{\sqrt 3}{6}|5\rangle + \frac{\sqrt 3}{6}|6\rangle \\|\psi_3\rangle &= \frac{1}{2}|2\rangle + \frac{1}{2}|3\rangle - \frac{1}{2}|5\rangle - \frac{1}{2}|6\rangle\end{align}$$
Note that this choice of coefficients is not unique, as the two states are degenerate: any linear combination of these two MOs is also a permissible MO with the same energy. In any case, the relevant matrix elements of $\hat{V}$ in the new basis $\{|\psi_2\rangle,|\psi_3\rangle\}$ are given by
$$V_{i-1,j-1} = \langle \psi_i | \hat{V} | \psi_j \rangle$$
and if you assume that $\langle n | n' \rangle = \delta_{nn'}$ (i.e. the AOs have no overlap with each other, which is a key assumption of simple Hückel theory anyway), then it becomes very simple:
$$\mathbf{V} = \begin{pmatrix} \frac{\gamma-\alpha}{3} & 0 \\ 0 & 0\end{pmatrix}$$
Perturbation theory tells us that the eigenvectors of $\hat{V}$ are the "states which are stable towards the perturbation". What this means is that, since the degeneracy is lifted by the perturbation, you also remove the free choice of having any linear combinations be a permissible MO: the coefficients of AOs in the new MOs must be uniquely determined (ignoring the phase factor). Since $\hat{V}$ is diagonal in the basis $\{|\psi_2\rangle,|\psi_3\rangle\}$, it conveniently means that both $|\psi_2\rangle$ and $|\psi_3\rangle$ are the eigenvectors of $\hat{V}$.
More interesting here are the first-order corrections to the eigenvalues of $\hat{H}_0$, which are simply the eigenvalues of $\hat{V}$. In this case, since $\hat{V}$ is diagonal, the eigenvalues are just $(\gamma-\alpha)/3$ and $0$: that is to say, the MO $\psi_2$ is shifted in energy by $(\gamma-\alpha)/3$ and the MO $\psi_3$ is unshifted in energy.
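The result above is easy to verify numerically. A sketch (the parameter values are illustrative assumptions in units where $\beta = -1$; only the pattern of the matrices comes from the discussion above):

```python
import numpy as np

# Illustrative Hueckel parameters (assumptions):
alpha, beta = 0.0, -1.0
gamma = alpha + 0.5 * beta   # shifted Coulomb integral on the nitrogen site

def huckel_ring(onsite):
    """Cyclic Hueckel Hamiltonian for a ring, given per-site Coulomb integrals."""
    n = len(onsite)
    H = np.diag(np.asarray(onsite, dtype=float))
    for i in range(n):
        H[i, (i + 1) % n] = H[(i + 1) % n, i] = beta
    return H

H0 = huckel_ring([alpha] * 6)            # benzene
H1 = huckel_ring([gamma] + [alpha] * 5)  # pyridine-like ring, N at site 1

e0 = np.linalg.eigvalsh(H0)  # contains the degenerate pair at alpha + beta
e1 = np.linalg.eigvalsh(H1)  # degeneracy lifted; one level shifts by ~(gamma - alpha)/3
```

Diagonalizing shows the benzene pair at $\alpha+\beta$ split in the perturbed ring, with the splitting close to the first-order estimate $(\gamma-\alpha)/3$ (small deviations come from higher-order terms).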
Q. Two guns A and B can fire bullets at speeds 1 km/s and 2 km/s respectively. From a point on a horizontal ground, they are fired in all possible directions. The ratio of maximum areas covered by the bullets fired by the two guns, on the ground is :
Solution:
$$R = \frac{u^{2} \sin 2\theta}{g}, \qquad R_\text{max} = \frac{u^{2}}{g} \text{ (at } \theta = 45^\circ)$$

$$A = \pi R_\text{max}^{2} \implies A \propto u^{4}$$

$$\frac{A_{1}}{A_{2}} = \frac{u_{1}^{4}}{u_{2}^{4}} = \left[\frac{1}{2}\right]^{4} = \frac{1}{16}$$
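A quick numerical check of the ratio (the value of $g$ is an assumption; it cancels in the ratio anyway):

```python
import math

g = 9.8  # m/s^2; cancels in the ratio

def max_area(u):
    """Ground area reachable by bullets fired at speed u in all directions:
    a disc whose radius is the maximum projectile range u^2/g."""
    return math.pi * (u**2 / g) ** 2

ratio = max_area(1000.0) / max_area(2000.0)  # guns A (1 km/s) and B (2 km/s); ~1/16
```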
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker 3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Okay, so I'm trying to find $ \int \frac1{\cos x}\mathrm{d}x$ using the substitution $t = \tan\left(\frac{x}{2}\right)$.
I sub in the trig identity for $\sec$ as $\frac{1+t^2}{1-t^2}$ and then rearrange and substitute $\frac{\mathrm{d}t}{\mathrm{d}x} = \frac12 \left(1+ \tan^2\left(\frac{x}{2}\right)\right)$ so I am left with
$\frac2{1-t^2}$
I then used partial fractions to find
$\frac2{1-t^2} = \frac1{1+t} + \frac1{1-t}$
and therefore integrating I get
$\int \frac1{\cos x}dx = \ln(1+t) - \ln(1-t) + C$
But subbing in $t = \tan\left(\frac{x}{2}\right)$ doesn't seem to get me anywhere close to the solution that I want to find, which is:
$ \int \frac1{\cos x}\mathrm{d}x = \ln(\sec x + \tan x) + C$
Any help on this would be greatly appreciated.
Thanks!
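For completeness, the last step can be finished with the half-angle identities $\sec x = \frac{1+t^2}{1-t^2}$ and $\tan x = \frac{2t}{1-t^2}$, both already implicit in the substitution above:

```latex
\ln(1+t) - \ln(1-t)
  = \ln\frac{1+t}{1-t}
  = \ln\frac{(1+t)^2}{1-t^2}
  = \ln\frac{1+t^2 + 2t}{1-t^2}
  = \ln(\sec x + \tan x)
```

so the two answers agree up to the constant of integration.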
I'm looking for an algorithm to decide if a given first order formula over a fixed vocabulary admits a logically equivalent existential one (i.e. a formula in prenex form where all quantifiers are existentials).
Is it known whether such an algorithm exists or does not exist, at least when no function symbols are admitted? And if it does exist, what is its complexity?
I know that, given a formula, we can build an equivalent one in prenex form. However, here is an example illustrating that a formula equivalent to an existential one may also admit an equivalent prenex formula which is not existential:
Consider the vocabulary $L=\{E\}$, where $E$ is a binary relation symbol. Then the formula $$\exists x\,E(x,x)$$ which states that a directed graph admits a loop, is an existential formula and is equivalent to $$\exists x\forall y\,\,\,\,x=y\rightarrow E(x,y)$$ which is in prenex form, but not existential. So applying the algorithm to build a prenex form to the second formula would not produce an existential one.
The problem is NP-hard.
A connected vertex cover is a vertex cover where the subgraph induced by these vertices is connected. Consider the following connected vertex cover problem (CVC):

Input: a graph and an integer $k$.
Output: whether there is a connected vertex cover of size no more than $k$.
CVC is proved to be NP-hard even if the graph is planar and has maximum degree 4 [1]. I'll reduce CVC to the decision version of this problem (call it CDC for short).
Given an instance of CVC $G=(V,E)$, for each edge $e\in E$, create a vertex $v_e$. Let $V_E=\{v_e\mid e\in E\}$. Create a new bipartite graph $G'$ whose vertex set is $V\uplus V_E$, and there is an edge between $v\in V$ and $v_e\in V_E$ iff $v$ is an endpoint of $e$ in $G$. The instance of CDC is $G'$ where vertices in $V_E$ are type-A vertices and vertices in $V$ are type-B vertices.
Note that a connected vertex cover $U\subseteq V$ of $G$ makes the subgraph of $G'$ induced by $U\cup V_E$ connected, and vice versa. Therefore, the instance of CVC has a solution iff the instance of CDC has a solution.
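The construction of $G'$ is mechanical. A sketch (the adjacency-dict representation and vertex labels are illustrative assumptions, not from the text):

```python
def reduce_cvc_to_cdc(vertices, edges):
    """Build the bipartite graph G' as an adjacency dict:
    type-B vertices ('v', x) for each x in V, and
    type-A vertices ('e', u, v) for each edge uv, joined by incidence."""
    adj = {('v', x): set() for x in vertices}
    for (u, v) in edges:
        a = ('e', u, v)
        adj[a] = {('v', u), ('v', v)}
        adj[('v', u)].add(a)
        adj[('v', v)].add(a)
    return adj
```

A connected vertex cover $U$ of $G$ then corresponds exactly to choosing the type-B vertices $\{(\text{'v'}, x) \mid x \in U\}$ in $G'$.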
[1] Garey, Michael R., and David S. Johnson. "The rectilinear Steiner tree problem is NP-complete." SIAM Journal on Applied Mathematics 32.4 (1977): 826-834.
Navigating
The most natural way of navigating is by clicking wiki links that connect one page with another. The “Front page” link in the navigation bar will always take you to the Front Page of the wiki. The “All pages” link will take you to a list of all pages on the wiki (organized into folders if directories are used). Alternatively, you can search using the search box. Note that the search is set to look for whole words, so if you are looking for “gremlins”, type that and not “gremlin”. The “go” box will take you directly to the page you type.
Creating and modifying pages

Registering for an account
In order to modify pages, you’ll need to be logged in. To register for an account, just click the “register” button in the bar on top of the screen. You’ll be asked to choose a username and a password, which you can use to log in in the future by clicking the “login” button. While you are logged in, these buttons are replaced by a “logout so-and-so” button, which you should click to log out when you are finished.
Note that logins are persistent through session cookies, so if you don’t log out, you’ll still be logged in when you return to the wiki from the same browser in the future.
Editing a page
To edit a page, just click the “edit” button at the bottom right corner of the page.
You can click “Preview” at any time to see how your changes will look. Nothing is saved until you press “Save.”
Note that you must provide a description of your changes. This is to make it easier for others to see how a wiki page has been changed.
Page metadata
Pages may optionally begin with a metadata block. Here is an example:
---
format: latex+lhs
categories: haskell math
toc: no
title: Haskell and
  Category Theory
...

\section{Why Category Theory?}
The metadata block consists of a list of key-value pairs, each on a separate line. If needed, the value can be continued on one or more additional lines, which must begin with a space. (This is illustrated by the “title” example above.) The metadata block must begin with a line --- and end with a line ..., optionally followed by one or more blank lines.
Currently the following keys are supported:
format
Overrides the default page type as specified in the configuration file. Possible values are markdown, rst, latex, html, markdown+lhs, rst+lhs, latex+lhs. (Capitalization is ignored, so you can also use LaTeX, HTML, etc.) The +lhs variants indicate that the page is to be interpreted as literate Haskell. If this field is missing, the default page type will be used.
categories
A space or comma separated list of categories to which the page belongs.

toc
Overrides the default setting for table-of-contents in the configuration file. Values can be yes, no, true, or false (capitalization is ignored).

title
By default the displayed page title is the page name. This metadata element overrides that default.

Creating a new page
To create a new page, just create a wiki link that links to it, and click the link. If the page does not exist, you will be editing it immediately.
You can also type the path to and name of the file in the browser URL window. Note that in that case any new directory included in the path will be created but only if you tick the corresponding boxes (in the edit window) to confirm their creation.
Deleting a page
The “delete” button at the bottom of the page will delete a page. Note that deleted pages can be recovered, since a record of them will still be accessible via the “activity” button on the top of the page.
Markdown
This wiki’s pages are written in pandoc’s extended form of markdown. If you’re not familiar with markdown, you should start by looking at the markdown “basics” page and the markdown syntax description. Consult the pandoc User’s Guide for information about pandoc’s syntax for footnotes, tables, description lists, and other elements not present in standard markdown.
Markdown is pretty intuitive, since it is based on email conventions. Here are some examples to get you started:
*emphasized text*
emphasized text
**strong emphasis**
strong emphasis
`literal text`
literal text
\*escaped special characters\*
*escaped special characters*
[external link](http://google.com)
external link

Wikilink:
[Front Page](Front Page)
Wikilink: Front Page
H~2~O
H₂O
10^100^
10¹⁰⁰
~~strikeout~~
$x = \frac{{ - b \pm \sqrt {b^2 - 4ac} }}{{2a}}$
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
A simple footnote.^[Or is it so simple?]
A simple footnote.

> an indented paragraph,
> usually used for quotations

    #!/bin/sh -e
    # code, indented four spaces
    echo "Hello world"
* a bulleted list
* second item
    - sublist
    - and more
* back to main list

1. this item has an ordered sublist:
    a) you can also use letters
    b) another item
Fruit     Quantity
-------   --------
apples    30,200
oranges   1,998
pears     42

Table: Our fruit inventory
For headings, prefix a line with one or more # signs: one for a major heading, two for a subheading, three for a subsubheading. Be sure to leave space before and after the heading.
# Markdown

Text...

## Some examples

Text...
Wiki links
Links to other wiki pages are formed this way:
[Page Name](Page Name). (Gitit converts markdown links with empty targets into wikilinks.)
To link to a wiki page using something else as the link text:
[something else](Page Name).
Note that page names may contain spaces and some special characters. They need not be CamelCase. CamelCase words are not automatically converted to wiki links.
Wiki pages may be organized into directories. So, if you have several pages on wine, you may wish to organize them like so:
Wine/Pinot Noir
Wine/Burgundy
Wine/Cabernet Sauvignon
Note that a wiki link [Burgundy](Burgundy) that occurs inside the Wine directory will link to Wine/Burgundy, and not to Burgundy. To link to a top-level page called Burgundy, you’d have to use [Burgundy](/Burgundy).
To link to a directory listing for a subdirectory, use a trailing slash: [Wine/](Wine/) will link to a listing of the Wine subdirectory.
Editing the wiki with darcs
The wiki uses darcs as a backend and can thus be edited using a local darcs repository mirroring the web site.
To be able to push to the darcs repository, you will need write access permissions which you need to get from the site administrator (Stéphane Popinet). These rights will be granted based on your SSH public key.
Once this is done, you can get the entire content of the web site using:
darcs get wiki@basilisk.fr:wiki
You can also use Makefiles, make plots, and generate HTML pages, typically in the local copy of your sandbox/, to make sure that everything works before darcs recording and darcs pushing your changes to the web site.
Note that the (new) wiki engine is clever enough to detect markdown comments in most files, so that the .page extension is not required anymore. |
V. Gitman and J. D. Hamkins, “A model of the generic Vopěnka principle in which the ordinals are not Mahlo,” Archive for Mathematical Logic, pp. 1-21, 2018.
@ARTICLE{GitmanHamkins2018:A-model-of-the-generic-Vopenka-principle-in-which-the-ordinals-are-not-Mahlo,
  author = {Gitman, Victoria and Hamkins, Joel David},
  year = {2018},
  title = {A model of the generic Vopěnka principle in which the ordinals are not Mahlo},
  journal = {Archive for Mathematical Logic},
  issn = {0933-5846},
  doi = {10.1007/s00153-018-0632-5},
  month = {5},
  pages = {1--21},
  eprint = {1706.00843},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1xT},
  abstract = {The generic Vopěnka principle, we prove, is relatively consistent with the ordinals being non-Mahlo. Similarly, the generic Vopěnka scheme is relatively consistent with the ordinals being definably non-Mahlo. Indeed, the generic Vopěnka scheme is relatively consistent with the existence of a Δ2-definable class containing no regular cardinals. In such a model, there can be no Σ2-reflecting cardinals and hence also no remarkable cardinals. This latter fact answers negatively a question of Bagaria, Gitman and Schindler.},
}
Abstract. The generic Vopěnka principle, we prove, is relatively consistent with the ordinals being non-Mahlo. Similarly, the generic Vopěnka scheme is relatively consistent with the ordinals being definably non-Mahlo. Indeed, the generic Vopěnka scheme is relatively consistent with the existence of a $\Delta_2$-definable class containing no regular cardinals. In such a model, there can be no $\Sigma_2$-reflecting cardinals and hence also no remarkable cardinals. This latter fact answers negatively a question of Bagaria, Gitman and Schindler.
The Vopěnka principle is the assertion that for every proper class of first-order structures in a fixed language, one of the structures embeds elementarily into another. This principle can be formalized as a single second-order statement in Gödel-Bernays set theory GBC, and it has a variety of useful equivalent characterizations. For example, the Vopěnka principle holds precisely when for every class $A$, the universe has an $A$-extendible cardinal, and it is also equivalent to the assertion that for every class $A$, there is a stationary proper class of $A$-extendible cardinals (see theorem 6 in my paper The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme). In particular, the Vopěnka principle implies that ORD is Mahlo: every class club contains a regular cardinal and indeed, an extendible cardinal and more.
To define these terms, recall that a cardinal $\kappa$ is extendible if for every $\lambda>\kappa$, there is an ordinal $\theta$ and an elementary embedding $j:V_\lambda\to V_\theta$ with critical point $\kappa$. It turns out that, in light of the Kunen inconsistency, this weak form of extendibility is equivalent to a stronger form, where one insists also that $\lambda<j(\kappa)$; but there is a subtle issue about this that comes up with the virtual forms of these axioms, where the virtual weak and virtual strong forms are no longer equivalent. Relativizing to a class parameter, a cardinal $\kappa$ is $A$-extendible for a class $A$, if for every $\lambda>\kappa$, there is an elementary embedding $$j:\langle V_\lambda, \in, A\cap V_\lambda\rangle\to \langle V_\theta,\in,A\cap V_\theta\rangle$$ with critical point $\kappa$, and again one may equivalently insist also that $\lambda<j(\kappa)$. Every such $A$-extendible cardinal is therefore extendible and hence inaccessible, measurable, supercompact and more. These are amongst the largest large cardinals.
In the first-order ZFC context, set theorists commonly consider a first-order version of the Vopěnka principle, which we call the Vopěnka scheme: the scheme making the Vopěnka assertion of each definable class separately, allowing parameters. That is, the Vopěnka scheme asserts, of every formula $\varphi$, that for any parameter $p$, if $\{\,x\mid \varphi(x,p)\,\}$ is a proper class of first-order structures in a common language, then one of those structures elementarily embeds into another.
The Vopěnka scheme is naturally stratified by the assertions $\text{VP}(\Sigma_n)$, for the particular natural numbers $n$ in the meta-theory, where $\text{VP}(\Sigma_n)$ makes the Vopěnka assertion for all $\Sigma_n$-definable classes. Using the definable $\Sigma_n$-truth predicate, each assertion $\text{VP}(\Sigma_n)$ can be expressed as a single first-order statement in the language of set theory.
In my previous paper, The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme, I proved that the Vopěnka principle is not provably equivalent to the Vopěnka scheme, if consistent, although they are equiconsistent over GBC and furthermore, the Vopěnka principle is conservative over the Vopěnka scheme for first-order assertions. That is, over GBC the two versions of the Vopěnka principle have exactly the same consequences in the first-order language of set theory.
In this article, Gitman and I are concerned with the virtual forms of the Vopěnka principles. The main idea of virtualization, due to Schindler, is to weaken elementary-embedding existence assertions to the assertion that such embeddings can be found in a forcing extension of the universe. Gitman and Schindler had emphasized that the remarkable cardinals, for example, instantiate the virtualized form of supercompactness via the Magidor characterization of supercompactness. This virtualization program has now been undertaken with various large cardinals, leading to fruitful new insights.
Carrying out the virtualization idea with the Vopěnka principles, we define the generic Vopěnka principle to be the second-order assertion in GBC that for every proper class of first-order structures in a common language, one of the structures admits, in some forcing extension of the universe, an elementary embedding into another. That is, the structures themselves are in the class in the ground model, but you may have to go to the forcing extension in order to find the elementary embedding.
Similarly, the generic Vopěnka scheme, introduced by Bagaria, Gitman and Schindler, is the assertion (in ZFC or GBC) that for every first-order definable proper class of first-order structures in a common language, one of the structures admits, in some forcing extension, an elementary embedding into another.
On the basis of their work, Bagaria, Gitman and Schindler had asked the following question:
Question. If the generic Vopěnka scheme holds, then must there be a proper class of remarkable cardinals?
There seemed good reason to expect an affirmative answer, even assuming only $\text{gVP}(\Sigma_2)$, based on strong analogies with the non-generic case. Specifically, in the non-generic context Bagaria had proved that $\text{VP}(\Sigma_2)$ was equivalent to the existence of a proper class of supercompact cardinals, while in the virtual context, Bagaria, Gitman and Schindler proved that the generic form $\text{gVP}(\Sigma_2)$ was equiconsistent with a proper class of remarkable cardinals, the virtual form of supercompactness. Similarly, higher up, in the non-generic context Bagaria had proved that $\text{VP}(\Sigma_{n+2})$ is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals, while in the virtual context, Bagaria, Gitman and Schindler proved that the generic form $\text{gVP}(\Sigma_{n+2})$ is equiconsistent with a proper class of virtually $C^{(n)}$-extendible cardinals.
But further, they achieved direct implications, with an interesting bifurcation feature that specifically suggested an affirmative answer to the question above. Namely, what they showed at the $\Sigma_2$-level is that if there is a proper class of remarkable cardinals, then $\text{gVP}(\Sigma_2)$ holds, and conversely if $\text{gVP}(\Sigma_2)$ holds, then there is either a proper class of remarkable cardinals or a proper class of virtually rank-into-rank cardinals. And similarly, higher up, if there is a proper class of virtually $C^{(n)}$-extendible cardinals, then $\text{gVP}(\Sigma_{n+2})$ holds, and conversely, if $\text{gVP}(\Sigma_{n+2})$ holds, then either there is a proper class of virtually $C^{(n)}$-extendible cardinals or there is a proper class of virtually rank-into-rank cardinals. So in each case, the converse direction achieves a disjunction with the target cardinal and the virtually rank-into-rank cardinals. But since the consistency strength of the virtually rank-into-rank cardinals is strictly stronger than the generic Vopěnka principle itself, one can conclude on consistency-strength grounds that it isn’t always relevant, and for this reason, it seemed natural to inquire whether this second possibility in the bifurcation could simply be removed. That is, it seemed natural to expect an affirmative answer to the question, even assuming only $\text{gVP}(\Sigma_2)$, since such an answer would resolve the bifurcation issue and make a tighter analogy with the corresponding results in the non-generic/non-virtual case.
In this article, however, we shall answer the question negatively. The details of our argument seem to suggest that a robust analogy with the non-generic/non-virtual principles is achieved not with the virtual $C^{(n)}$-cardinals, but with a weakening of that property that drops the requirement that $\lambda<j(\kappa)$. Indeed, our results seem to offer an illuminating resolution of the bifurcation aspect of the results we mentioned from Bagaria, Gitman and Schindler, because they provide outright virtual large-cardinal equivalents of the stratified generic Vopěnka principles. Because the resulting virtual large cardinals are not necessarily remarkable, however, our main theorem shows that it is relatively consistent with even the full generic Vopěnka principle that there are no $\Sigma_2$-reflecting cardinals and therefore no remarkable cardinals.
Main Theorem. It is relatively consistent that GBC holds together with the generic Vopěnka principle, yet ORD is not Mahlo. It is relatively consistent that ZFC holds together with the generic Vopěnka scheme, yet ORD is not definably Mahlo, and not even $\Delta_2$-Mahlo. In such a model, there can be no $\Sigma_2$-reflecting cardinals and therefore also no remarkable cardinals.
For more, go to the article:
V. Gitman and J. D. Hamkins, “A model of the generic Vopěnka principle in which the ordinals are not Mahlo,” Archive for Mathematical Logic, pp. 1-21, 2018.
@ARTICLE{GitmanHamkins2018:A-model-of-the-generic-Vopenka-principle-in-which-the-ordinals-are-not-Mahlo,
  author = {Gitman, Victoria and Hamkins, Joel David},
  year = {2018},
  title = {A model of the generic Vopěnka principle in which the ordinals are not Mahlo},
  journal = {Archive for Mathematical Logic},
  issn = {0933-5846},
  doi = {10.1007/s00153-018-0632-5},
  month = {5},
  pages = {1--21},
  eprint = {1706.00843},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1xT},
  abstract = {The generic Vopěnka principle, we prove, is relatively consistent with the ordinals being non-Mahlo. Similarly, the generic Vopěnka scheme is relatively consistent with the ordinals being definably non-Mahlo. Indeed, the generic Vopěnka scheme is relatively consistent with the existence of a Δ2-definable class containing no regular cardinals. In such a model, there can be no Σ2-reflecting cardinals and hence also no remarkable cardinals. This latter fact answers negatively a question of Bagaria, Gitman and Schindler.},
} |
I've been trying to find the formula for the offset/parallel to a sine wave. Not just the parametric equation, but the y = f(x) form.
Here's what I've done so far: Read up on the parametric form and plugged in the x(t) and y(t) formulas. What I get is of course a parametric equation in terms of t.
If $$ y=\sin(x) $$
then the parameterization would be $$ x=t $$ $$ y=\sin(t) $$
Plugging in the offset formula:
$$ x_d(t) = t + \frac{d \cos (t)}{\sqrt {1 + \cos (t)^2}} $$
$$ y_d(t) = \sin (t) - \frac{d}{\sqrt {1 + \cos (t)^2}} $$
Now, that's all accurate, but it doesn't put it into a function form. According to my calculus book, the next step is to solve each of these for t and then set them equal to one another. The problem is that they are kind of a mess, with those sinusoidal functions involved.
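For reference, here is a small Python sketch of the parametric offset above (d is the offset distance); it confirms that each offset point lies at distance d from the base curve, but it does not give the closed form I'm after:

```python
import math

# Parallel (offset) curve of y = sin(x) at signed distance d,
# using the parametric formulas above.
def offset_point(t, d):
    s = math.sqrt(1 + math.cos(t) ** 2)
    x = t + d * math.cos(t) / s
    y = math.sin(t) - d / s
    return x, y

x0, y0 = offset_point(0.0, 1.0)  # at t = 0 the unit normal is (1, -1)/sqrt(2)
```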
My question is: Can Mathematica find the y = f(x) form for an offset curve of a sine wave?
A little background: I need this because I'm trying to find the point where the offsets of three sine waves, each $\pi/3$ out of phase with the others, all intersect. Basically where the green, blue and red intersect at the same time in the link below. I can find it numerically, but I'd like the exact value, because it's something of a discovery to find out how ancient people drew braids using just compass and straightedge.
I can draw it no problem in C#: The intersection point was found using trial and error and is approximately 0.63. There are two blue lines, two red lines and two green lines, because I used +0.63 offset and -0.63 offset from the sine wave.
Thank you in advance for any help. |
The other day I came across a problem in system integration with an interesting math twist: one-way, multi-thread or multi-process synchronization of records from one system to another. At the source, records are identified by sequential Globally Unique Identifiers (GUIDs) that the destination doesn't support. Instead, the destination identifies records by up to 24 alphanumeric characters (0-9, a-z, 36 symbols), of which at least the first character is reserved for a preamble.
The challenge is then how to map a sequential GUID, such as 63b0e1db-cd2f-4265-b4f1-eb4b436b6adf, onto an alphanumeric string, while supporting a multi-threaded or multi-process setup.
A GUID in its standard base 16 representation, even without dashes, is 32 characters (also referred to as digits), or 128 bits, in length. Mapping it as-is to the destination thus isn't an option. But taking advantage of the larger base of the alphanumeric alphabet, if we encode it not in base 16 but base 36, we can shorten its length.
The relationship between base \(b\), number of digits \(d\), and number of states \(s\) is defined by the following equation: $$\begin{eqnarray*} b^d = s & & \textrm{\{states from base and digits\}}\\ \log{(b^d)} = \log(s) & & \textrm{\{apply \(\log\) to both sides\}}\\ d\log(b) = \log(s) & & \textrm{\{apply rule of \(\log(x^y) = y \log(x)\)\}}\\ d = \dfrac{\log(s)}{\log(b)} & & \textrm{\{divide both sides by \(\log(b)\)\}} \end{eqnarray*}$$ With \(l\) bits available, one may represent up to \(2^l\) states \(s\). Reserving one state for zero, the \(l\) bits may represent the numbers 0 through \(2^{l} - 1\) (as in base 10, where two digits represent \(10^2\) states, the numbers 0 through 99). Since a digit is an indivisible unit, the result must be rounded upwards: $$d = \left\lceil\dfrac{\log{(2^{l})}}{\log{(b)}}\right\rceil = \left\lceil\dfrac{\log{(2^{128})}}{\log(36)}\right\rceil = \lceil{24.75...}\rceil = 25$$ As we see, base 36 encoding alone doesn't solve our problem.
Say we reserve three digits for a preamble; how many bits can we fit into the remaining 21 digits when base 36 encoded? To find the answer, we must solve for \(l\), and since the bit is an indivisible unit, the result must be rounded down: $$\begin{eqnarray*} \left\lfloor\dfrac{\log(2^l)}{\log(36)}\right\rfloor = 21 & &\\ \left\lfloor\dfrac{l\log(2)}{\log(36)}\right\rfloor = 21 & & \textrm{\{apply rule of \(\log(x^y) = y \log(x)\)\}}\\ l = \left\lfloor\dfrac{21\log(36)}{\log(2)}\right\rfloor & & \textrm{\{multiply and divide\}}\\ l = \left\lfloor108.56...\right\rfloor = 108 & & \end{eqnarray*}$$ In other words, 21 digits, base 36 encoded, correspond to 108 bits. That's 20 bits fewer than the original GUID.
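These two computations are easy to check programmatically; a small Python sketch (function names are mine):

```python
import math

# Base-b digits needed to hold l bits, and bits that fit in d base-b digits.
def digits_for_bits(l, b):
    return math.ceil(l * math.log(2) / math.log(b))

def bits_for_digits(d, b):
    return math.floor(d * math.log(b) / math.log(2))

full_guid_digits = digits_for_bits(128, 36)  # 25: one digit over our budget
usable_bits = bits_for_digits(21, 36)        # 108 bits fit in 21 digits
```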
The original GUID is a sequential GUID whose implementation details we shouldn't rely on. Worse still, the algorithm for generating sequential GUIDs may have changed over time and may change again in the future. So which 20 bits do we drop without disproportionately increasing the risk of a collision?
One way to resolve this issue is to compute a hash of the original 128 bits. With SHA1, for instance, regardless of input size, the output is always 160 bits. Now instead of dropping 20 bits from the original GUID, we drop 52 bits from the hash. We can drop any 52 bits as long as we're consistent about which to drop; the simplest approach is a bit-wise right-shift by 52 bits.
Downside is that because SHA1 is a one-way hash function, and because we're dropping bits, mapping becomes a one-way function also. It doesn't matter in this case because we can store the original GUID in a separate field inside the destination record.
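Putting the scheme together in Python might look as follows. Note that the exact byte representation fed to SHA1 (here, the UTF-8 bytes of the lowercase, dashed GUID string) is an assumption on my part, so the resulting string need not match the example output shown later:

```python
import hashlib

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def map_guid(guid: str) -> str:
    # Hash the GUID, then drop the low 52 of SHA1's 160 bits,
    # keeping the top 108 bits, and encode those in base 36.
    digest = hashlib.sha1(guid.lower().encode("utf-8")).digest()
    n = int.from_bytes(digest, "big") >> 52
    out = ""
    while n:
        n, r = divmod(n, 36)
        out = ALPHABET[r] + out
    return out or "0"
```

Because the mapping is deterministic, two workers racing to create the same record compute the same destination ID, which is exactly the property we need.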
It seems we could've settled for a simpler scheme and just generated the 108 bits at random. Unfortunately, this would introduce a race condition into the record creation process. Imagine two threads or two processes about to create a record at the destination. With random bits, each may query for the original GUID in a record and get an empty result back. Each then creates the record and succeeds. But now the destination has two records matching one at the source. With deterministic IDs only one record is created. The second creation attempt will fail, by design.
Because of hashing and subsequent bit shifting, there's a possibility that two source GUIDs could map to the same destination ID. The more records we create, the larger the chance of a collision. From a generalization of the Birthday problem, we get that if we have \(n\) records and \(s\) states, the approximate probability of a collision becomes (multiply by 100 for a percentage): $$\begin{eqnarray*} p(n,s) = 1 - e^{-n^2/s}\\ p(10^6,2^{32}) = 1\\ p(10^6,2^{48}) \sim 3.55\textrm{e-03}\\ p(10^6,2^{64}) \sim 5.42\textrm{e-08}\\ p(10^6,2^{108}) \sim 3.08\textrm{e-21} \end{eqnarray*}$$ Assuming SHA1 hashes are uniformly distributed, even after dropping bits, the probability of a collision is infinitesimal. If such a black swan event does occur, we should at least be able to detect it. Detection is easy, as the record returned would hold an original GUID different from the one used to generate its ID. Depending on the domain, either the original ID must be changed or the collision may be ignored.
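The approximation above can be evaluated directly; a Python sketch (using expm1 so the tiniest probabilities don't underflow to zero):

```python
import math

# Birthday-style collision estimate: p(n, s) ≈ 1 - exp(-n^2 / s).
# -expm1(x) computes 1 - e^x accurately even when the result is tiny.
def collision_probability(n, s):
    return -math.expm1(-n * n / s)

p48 = collision_probability(10**6, 2**48)    # about 3.55e-03
p108 = collision_probability(10**6, 2**108)  # about 3.08e-21
```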
We end up with the following equation to describe the mapping: $$\begin{eqnarray*} \textrm{map}(\textrm{guid}) = \textrm{encode\(_{36}\)}(\textrm{rshift\(_{52}\)}(\textrm{sha\(_1\)}(\textrm{guid})))\\ \textrm{map}(\textrm{63b0e1db-cd2f-4265-b4f1-eb4b436b6adf}) = \textrm{o7wt7mrm8rj4t0j24twi3} \end{eqnarray*}$$ If we so desire, we can switch mapping strategy later on. Perhaps multi-process or multi-threading is no longer a hard requirement and we can switch to the simpler random generated IDs. |
I was thinking of this today as I was looking over my complex analysis notes.
If you have some complex number $z$, then we can define it using Euler's formula as $z=a+ib=\cos\theta+i \sin\theta$. Say we have the case that $z=3+4i=25(\cos\theta+i\sin\theta)$. Then $25\cos \theta=3$, and $25\sin\theta=4$. But this would mean that
$$\theta=\cos^{-1}\left(\frac{3}{25}\right) =\sin^{-1}\left(\frac{4}{25}\right).$$
How can this be true if $\cos^{-1}\left(\frac{3}{25}\right)=83.107 \text{ degrees}$ and $\sin^{-1}\left(\frac{4}{25}\right)=9.206 \text{ degrees}$? Does this mean that we can only have certain values of $z$ in order to use Euler's formula? |
Assume $G$ is a group of order $pqr$, with $p, q, r$ distinct primes. Let $P, Q, R$ be their corresponding Sylow subgroups. In addition, assume $P\subseteq C(G)$ and $R\subseteq N(Q)$ where $C$ and $N$ denote the centralizer and the normalizer, correspondingly. Show that $G\cong P\times QR$ and that if $G$ isn't abelian, it has exactly $q$ subgroups of order $pr$.
I have shown that $G\cong P\times QR$ quite easily; we can show $QR$ is normal in $G$ and so is $P$. We can also easily show $o(G)=o(QR)o(P)$, $P\cap QR=\{1\}$, which completes the proof.
However, I'm having trouble with the second part. All I can deduce is that $QR$ is non-abelian, and playing around with the number of Sylow subgroups doesn't seem to be of much help since I can't say much more than $n_p=1, n_q\in\{1, p, r, pr\}, n_r\in\{1, p, q, pr\}$ (or at least, I can't contradict any of the options).
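As a sanity check of the claim (not a proof), one can count the subgroups of order $pr$ in a concrete example: take $G = C_5 \times (\mathbb{Z}_7 \rtimes \mathbb{Z}_3)$ with $p=5$, $q=7$, $r=3$, so the subgroups of order $pr=15$ should number $q=7$. A brute-force Python sketch (the multiplication rule encodes the action $b \mapsto 2b$ of $\mathbb{Z}_3$ on $\mathbb{Z}_7$, since 2 has order 3 mod 7):

```python
# G = C5 × (Z7 ⋊ Z3); elements are triples (a, b, c) with a in Z5, b in Z7,
# c in Z3, and (a1,b1,c1)·(a2,b2,c2) = (a1+a2, b1 + 2^c1·b2 mod 7, c1+c2 mod 3).
def mul(g, h):
    a1, b1, c1 = g
    a2, b2, c2 = h
    return ((a1 + a2) % 5, (b1 + pow(2, c1, 7) * b2) % 7, (c1 + c2) % 3)

identity = (0, 0, 0)
elements = [(a, b, c) for a in range(5) for b in range(7) for c in range(3)]

def order(g):
    n, x = 1, g
    while x != identity:
        x, n = mul(x, g), n + 1
    return n

# Every group of order 15 is cyclic, and each cyclic subgroup of order 15
# contains exactly phi(15) = 8 generators, so counting elements suffices:
subgroups_of_order_15 = sum(1 for g in elements if order(g) == 15) // 8
```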
How should I proceed? |
I have the following PDE:
$\qquad \frac{\partial P(x_1,x_2,t)}{\partial t} = -\frac{\partial}{\partial x_1}[F_1(x_1,x_2)P] - \frac{\partial}{\partial x_2}[F_2(x_1,x_2)P] + D(\frac{\partial^2 P}{\partial x_1^2} + \frac{\partial^2 P}{\partial x_2^2} )$
where
$\qquad F_1 = \frac{\epsilon^2 + x_1^2}{(1+x_1^2)(1+x_2)}-ax_1$ and $F_2 = \frac{1}{\tau_0}(b-\frac{x_2}{1+cx_1^2})$.
The values of parameters $\epsilon, a, b, c, \tau_0, D$ are 0.1, 0.1, 0.1, 100, 5.0, 0.001 respectively.
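Before the Mathematica attempts below, the equation can be cross-checked with a crude explicit finite-difference scheme in Python. This is my own sketch, not the FEM approach: upwind fluxes for the drift, explicit diffusion, zero Dirichlet boundaries, and a narrow Gaussian started at (5, 5); the grid size and time step are arbitrary choices that satisfy the CFL condition:

```python
import numpy as np

# Parameters as stated above.
eps, a, b, c, tau0, Dc = 0.1, 0.1, 0.1, 100.0, 5.0, 0.001

n = 51
x = np.linspace(0.0, 10.0, n)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")

# Drift field.
F1 = (eps**2 + X1**2) / ((1 + X1**2) * (1 + X2)) - a * X1
F2 = (b - X2 / (1 + c * X1**2)) / tau0

# Narrow Gaussian initial condition, normalized to total probability 1.
P = np.exp(-((X1 - 5.0) ** 2 + (X2 - 5.0) ** 2) / (2 * 0.01))
P /= P.sum() * dx * dx

dt, steps = 0.01, 200  # dt*max|F|/dx ≈ 0.1 and dt*Dc/dx^2 ≈ 2.5e-4: stable
for _ in range(steps):
    # Upwind (donor-cell) fluxes of F*P through cell faces, per axis.
    f1 = 0.5 * (F1[:-1, :] + F1[1:, :])
    fx = np.where(f1 > 0, f1 * P[:-1, :], f1 * P[1:, :])
    f2 = 0.5 * (F2[:, :-1] + F2[:, 1:])
    fy = np.where(f2 > 0, f2 * P[:, :-1], f2 * P[:, 1:])
    lap = (P[2:, 1:-1] + P[:-2, 1:-1] + P[1:-1, 2:] + P[1:-1, :-2]
           - 4 * P[1:-1, 1:-1]) / dx**2
    P[1:-1, 1:-1] += dt * (-(fx[1:, 1:-1] - fx[:-1, 1:-1]) / dx
                           - (fy[1:-1, 1:] - fy[1:-1, :-1]) / dx
                           + Dc * lap)
    P[0, :] = P[-1, :] = P[:, 0] = P[:, -1] = 0.0

mass = P.sum() * dx * dx  # should remain close to 1
```

With the upwind discretization the density stays non-negative under the CFL condition, which suggests the negative values seen later come from the discretization, not from the equation itself.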
Updated: I also include the code, after reading through various manuals in Mathematica. I try to implement FEM to solve the aforementioned PDE.
Updated2: Thanks to Andrew, I managed to get the code run as follows:
1. Mesh generation
Needs["NDSolve`FEM`"]
mesh = ToElementMesh[Rectangle[{0, 0}, {10, 10}], "MaxCellMeasure" -> 0.1,
   "MeshElementType" -> TriangleElement];
mesh["Wireframe"]
2. Boundary and initial condition
Γ = {DirichletCondition[u[t, x, y] == 0, x == 0 || y == 0 || x == 10 || y == 10]};
ic = u[0, x, y] == PDF[MultinormalDistribution[{5, 5}, {{1/100, 0}, {0, 1/100}}], {x, y}];
3. PDE formula
F1[x_, y_] = (ϵ^2 + x^2)/((1 + x^2) (1 + y)) - a x /. ϵ -> 0.1 /. a -> 0.1;
F2[x_, y_] = 1/τ (b - y/(1 + c x^2)) /. τ -> 5 /. c -> 100 /. b -> 0.1;
β = {F1[x, y], F2[x, y]};
(* D is Protected in Mathematica, so use d for the diffusion constant *)
d = 0.001;
eq = D[u[t, x, y], t] - d Laplacian[u[t, x, y], {x, y}] +
   D[F1[x, y] u[t, x, y], x] + D[F2[x, y] u[t, x, y], y] == 0;
4. Solve PDE
uif = NDSolveValue[{eq, Γ, ic}, u, {t, 0, 100}, {x, 0, 10}, {y, 0, 10}]
Plot3D[-Log[uif[100, x, y]], {x, 0, 10}, {y, 0, 10}, PlotRange -> All]
However, a stability issue arises because the probability becomes negative in some regions.
May I know how to resolve this issue? I've tried modifying the mesh elements, since that seems preferable to adding artificial diffusion terms. I also wanted to try artificial diffusion (following the documentation), but I didn't know how to obtain the norm of the convection term on the smallest mesh elements for my equation.
For initial condition, I tried to make it look like:
(* Initial condition is u[0,1,1] = 1 and other places are 0 *)
so that the total probability in the domain of interest is one. Other distributions may be OK.
My purpose is that I want to :
1. solve the PDE to observe the time evolution toward the steady-state solution
2. take the log of the solution and plot it |
The $over$ operator correctly implemented
According to Wikipedia, when composing $A$ $over$ $B$, the output alpha channel value $\alpha_O$ and the output color channel value $C_O$ are calculated as follows:
$\begin{cases}\alpha_O = 1 - (1 - \alpha_A) (1 - \alpha_B) \\C_O = \frac{\alpha_A C_A + (1 - \alpha_A)\alpha_B C_B}{\alpha_O}, \text{if $\alpha_O \neq 0$}\\C_O = 0, \text{if $\alpha_O = 0$}\end{cases}$
where $\alpha_A$ and $\alpha_B$ are alpha channel values of $A$ and $B$, and $C_A$ and $C_B$ – color channel values of $A$ and $B$ correspondingly.
This can be directly implemented in Mathematica 11.1 or above as follows:
imageCompose[b_Image, a_Image] :=
Module[{alphaA = AlphaChannel@a, alphaB = AlphaChannel@b, alphaO,
cA = RemoveAlphaChannel@a, cB = RemoveAlphaChannel@b},
alphaO = 1 - (1 - alphaA) (1 - alphaB);
SetAlphaChannel[(alphaA*cA + (1 - alphaA) alphaB*cB)/alphaO, alphaO]]
Let us check the associative property:
{{i0, i1, i2}} = ImagePartition[
Import["http://i.stack.imgur.com/r13gh.png"], {Scaled[1/3], Scaled[1]}]
{i0~imageCompose~(i1~imageCompose~i2), (i0~imageCompose~i1)~imageCompose~i2}
Equal @@ %
ColorSeparate /@ %%
True
It holds! So what is the problem with ImageCompose in version 10 and later?
Current implementation of ImageCompose: the diagnosis
When writing the above implementation for the first time I unintentionally made a simple mistake: I forgot to divide the output value for the color channel by $\alpha_O$. Here is what happened:
imageComposeWrong[b_Image, a_Image] :=
Module[{alphaA = AlphaChannel@a, alphaB = AlphaChannel@b, alphaO,
cA = RemoveAlphaChannel@a, cB = RemoveAlphaChannel@b},
alphaO = 1 - (1 - alphaA) (1 - alphaB);
SetAlphaChannel[alphaA*cA + (1 - alphaA) alphaB*cB, alphaO]]
{i0~imageComposeWrong~(i1~imageComposeWrong~i2),
(i0~imageComposeWrong~i1)~imageComposeWrong~i2}
Equal @@ %
ColorSeparate /@ %%
False
The output looks exactly the same as for the current ImageCompose:
{i0~ImageCompose~(i1~ImageCompose~i2), (i0~ImageCompose~i1)~ImageCompose~i2}
Equal @@ %
ColorSeparate /@ %%
False
Numerical comparison reveals tiny differences due to rounding off errors. But the final diagnosis is clear: the developer just forgot to divide the color channel by the alpha channel!
It is a great shame that in the more than three years since the release of version 10.0.0, nobody in the company noticed this! Do they themselves use this functionality, or not?!
Please do not be too lazy to report this to technical support, so that this shameful bug gets fixed as soon as possible! High priority is given to bugs that many users write about...

The remedy
From the above considerations the remedy is obvious: we must just divide the color of the ImageCompose output by the alpha channel:
icFix[img_Image] := img/AlphaChannel[img];
{i0~icFix@*ImageCompose~(i1~icFix@*ImageCompose~i2), (i0~icFix@*ImageCompose~i1)~icFix@*ImageCompose~i2}
Subtract @@ % // MinMax
ColorSeparate /@ %%
{-3.10689*10^-6, 3.08454*10^-6}
As one can see, there are still tiny differences due to rounding-off errors, but the associative property in fact is restored and the output is correct!
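For completeness, the same straight-alpha over formula and its associativity can be checked outside Mathematica. A NumPy sketch (the alphas are kept away from zero here, so the $\alpha_O = 0$ branch never triggers):

```python
import numpy as np

# Straight (non-premultiplied) alpha "over", per the formula quoted above.
# rgb arrays have shape (h, w, 3); alpha arrays have shape (h, w).
def over(rgb_a, alpha_a, rgb_b, alpha_b):
    alpha_o = 1 - (1 - alpha_a) * (1 - alpha_b)
    num = alpha_a[..., None] * rgb_a + ((1 - alpha_a) * alpha_b)[..., None] * rgb_b
    safe = np.where(alpha_o == 0, 1.0, alpha_o)  # avoid division by zero
    rgb_o = np.where(alpha_o[..., None] > 0, num / safe[..., None], 0.0)
    return rgb_o, alpha_o

# three random test layers with alphas in [0.1, 1.0]
rng = np.random.default_rng(0)
layers = [(rng.random((4, 4, 3)), rng.uniform(0.1, 1.0, (4, 4))) for _ in range(3)]
(ra, aa), (rb, ab), (rc, ac) = layers

left = over(*over(ra, aa, rb, ab), rc, ac)   # (A over B) over C
right = over(ra, aa, *over(rb, ab, rc, ac))  # A over (B over C)
```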
Original answer
Citing a comment by Rahul:
Well, that's certainly undesirable! Alpha compositing is supposed to be associative (
i0~ImageCompose~(i1~ImageCompose~i2) should equal
(i0~ImageCompose~i1)~ImageCompose~i2) and this doesn't do that. One could implement correct alpha compositing manually using
ImageApply, but let's see if someone has a better way.
Indeed, in versions 8.0.4 and 9.0.1
ImageCompose
is associative:
$Version
{{i0, i1, i2}} = ImagePartition[
Import["http://i.stack.imgur.com/r13gh.png"], {Scaled[1/3], Scaled[1]}]
{i0~ImageCompose~(i1~ImageCompose~i2), (i0~ImageCompose~i1)~ImageCompose~i2}
Equal @@ %
ColorSeparate /@ %%
"9.0 for Microsoft Windows (64-bit) (January 25, 2013)"
True
... while starting from version 10.0 it is not:
$Version
{{i0, i1, i2}} = ImagePartition[
Import["http://i.stack.imgur.com/r13gh.png"], {Scaled[1/3], Scaled[1]}]
{i0~ImageCompose~(i1~ImageCompose~i2), (i0~ImageCompose~i1)~ImageCompose~i2}
Equal @@ %
ColorSeparate /@ %%
"10.0 for Microsoft Windows (64-bit) (September 9, 2014)"
False
It is also worth noting that although
Overlay[{i0, i1}] and
Show[i0, i1] look the same as the old
ImageCompose[i0, i1], they are not the same. But they are (approximately) equal to each other and approximately satisfy the associative property:
$Version
overlayCompose[i0_, i1_] := Rasterize[Overlay[{i0, i1}], "Image", Background -> None];
showCompose[i0_, i1_] := Rasterize[Show[i0, i1], "Image", Background -> None];
{overlayCompose[i0, i1], showCompose[i0, i1], i0~ImageCompose~i1}
ColorSeparate /@ %
{i0~overlayCompose~(i1~overlayCompose~i2), (i0~overlayCompose~i1)~ overlayCompose~i2}
ColorSeparate /@ %
{i0~showCompose~(i1~showCompose~i2), (i0~showCompose~i1)~showCompose~ i2}
ColorSeparate /@ %
"9.0 for Microsoft Windows (64-bit) (January 25, 2013)"
As one can see from the above,
Overlay introduces an artifact at the top, while
Show does not. |
We all know that the minimal complexity of a comparison-based sorting algorithm is $\Omega(n \log n)$ comparisons. I'm trying to do a
blind sort, i.e. given a number $n$ output a circuit (with boolean, arithmetic and "comparison" gates) that sorts a list of $n$ items.
Precomputing all ${n \choose 2}$ comparisons and then doing arithmetic on the resulting bits gives me a $\Theta(n^3)$-size circuit; however, with some crazy "pointer arithmetic" I think I can get a $\Theta(n^2)$ version.
Is there a known lower bound for comparison-based sorting circuits along similar lines to the $n \log n$ one for comparison-based sorting algorithms? Might it even be possible to blind sort in $n \log n$ time? |
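Not an answer to the lower-bound question, but for the upper bound: sorting networks are exactly such "blind" circuits, with a comparator gate computing $(\min, \max)$ of its two wires. Batcher's odd-even mergesort uses $\Theta(n \log^2 n)$ comparators, and the AKS construction achieves $O(n \log n)$ (with an impractically large constant). A Python sketch of the standard algorithm that emits the comparator list for $n$ a power of two:

```python
def oddeven_merge(lo, hi, r):
    """Comparators merging two sorted runs interleaved in [lo, hi] (inclusive)."""
    step = r * 2
    if step < hi - lo:
        yield from oddeven_merge(lo, hi, step)
        yield from oddeven_merge(lo + r, hi, step)
        for i in range(lo + r, hi - r, step):
            yield (i, i + r)
    else:
        yield (lo, lo + r)

def oddeven_merge_sort(lo, hi):
    """Comparators sorting wires lo..hi (inclusive); hi - lo + 1 a power of two."""
    if hi - lo >= 1:
        mid = lo + (hi - lo) // 2
        yield from oddeven_merge_sort(lo, mid)
        yield from oddeven_merge_sort(mid + 1, hi)
        yield from oddeven_merge(lo, hi, 1)

def network(n):
    """The full comparator list for n wires."""
    return list(oddeven_merge_sort(0, n - 1))

def run(net, xs):
    """Apply the circuit: each comparator gate is a (min, max) pair."""
    xs = list(xs)
    for i, j in net:
        xs[i], xs[j] = min(xs[i], xs[j]), max(xs[i], xs[j])
    return xs
```

By the 0-1 principle, checking the network on all $2^n$ bit-vectors proves it sorts arbitrary inputs.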
By definition, denote the $\alpha$-dimensional Hausdorff measure by $$H^{\alpha}(A) = \lim_{\epsilon \to 0} {H_{\epsilon}^{\alpha}(A)}$$ where $$H_{\epsilon}^{\alpha}(A) = \inf\Big\{\sum_{k=1}^{\infty}{\operatorname{diam}(A_{k})^{\alpha}} \,\Big|\, A \subset \cup_k A_{k},\ \operatorname{diam}(A_{k}) < \epsilon\Big\}.$$
The outer Lebesgue measure is defined in the following way: $$ \lambda_{n}^{*}(A) = \inf \Big\{\sum_{j=1}^{\infty}{|P_{j}|}: A \subset \cup_j P_{j} \Big\}.$$
Suppose that $H^{n} = c_{n} \cdot \lambda_{n}^{*}$. How does one find $c_{n}$?
Wikipedia says that for Lebesgue measurable sets the equality $$\lambda_{d}(E) = 2^{-d} \cdot \beta_{d} H^{d}(E)$$ holds, where $$\beta_{d} = \frac{{\pi}^{\frac{d}{2}}}{\Gamma(\frac{d}{2}+1)},$$ but the existence of such an equation does not show how $c_{n}$ is obtained.
Any sort of help would be much appreciated. |
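Not a full answer, but a sketch of where the constant comes from with the unnormalized definition above (assuming $\alpha = n$; this is a heuristic outline, not a proof). The isodiametric inequality says that among all sets of a given diameter the ball has maximal volume:
$$\lambda_{n}^{*}(A_{k}) \le \beta_{n} \left(\frac{\operatorname{diam} A_{k}}{2}\right)^{n},$$
so summing over any $\epsilon$-cover of $A$ and taking the infimum gives $\lambda_{n}^{*}(A) \le 2^{-n}\beta_{n}\, H^{n}(A)$. The reverse inequality follows from a Vitali-type covering of $A$ by small balls, for which the bound is attained. Hence
$$\lambda_{n} = 2^{-n}\beta_{n}\, H^{n}, \qquad\text{i.e.}\qquad c_{n} = \frac{2^{n}}{\beta_{n}},$$
consistent with the Wikipedia formula: $2^{-n}\beta_{n}$ is exactly the volume of a ball of diameter $1$.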
Given a 4-vector, we can always define a $2\times 2$ Hermitian matrix:
$$X=x^\mu \sigma_\mu=\left(\matrix{x^0+x^3&x^1-ix^2\\x^1+ix^2&x^0-x^3} \right)$$
where $\sigma_\mu$ are the identity and the Pauli matrices. In this basis, a Lorentz transformation $\Lambda(L)$ acts as $X'=LXL^\dagger$, with $L$ in the special linear group $\mathrm{SL}(2, \mathbb C)$.
However, I'm curious about the exact expression of the $2\times 2$ matrices that represent these Lorentz transformations (they don't seem to appear in the literature).
I've read that they can be characterized by just 6 real parameters (matching the 6 parameters of the Lorentz group $\mathrm{SO}(3,1)$: three rotations and three boosts). |
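They can be written down explicitly. With the usual parametrization (stated here as a sketch; unit axis $\hat n$, angle $\theta$, rapidity $\varphi$), rotations are $L = \cos(\theta/2)\,\mathbb{1} - i \sin(\theta/2)\, \hat n \cdot \vec\sigma$ and boosts are $L = \cosh(\varphi/2)\,\mathbb{1} + \sinh(\varphi/2)\, \hat n \cdot \vec\sigma$: three angles plus three rapidities account for the six real parameters. A quick numerical check that these lie in $\mathrm{SL}(2,\mathbb C)$ and that $X' = LXL^\dagger$ preserves $\det X = (x^0)^2 - |\vec x|^2$:

```python
import numpy as np

# Pauli matrices, with sigma[0] the 2x2 identity
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def X_of(x):
    """Hermitian matrix X = x^mu sigma_mu for a 4-vector x = (x0, x1, x2, x3)."""
    return sum(xi * s for xi, s in zip(x, sigma))

def n_dot_sigma(n):
    return n[0] * sigma[1] + n[1] * sigma[2] + n[2] * sigma[3]

def rotation(theta, n):   # L = cos(theta/2) 1 - i sin(theta/2) n.sigma
    return np.cos(theta / 2) * sigma[0] - 1j * np.sin(theta / 2) * n_dot_sigma(n)

def boost(phi, n):        # L = cosh(phi/2) 1 + sinh(phi/2) n.sigma
    return np.cosh(phi / 2) * sigma[0] + np.sinh(phi / 2) * n_dot_sigma(n)

x = np.array([2.0, 0.3, -0.7, 1.1])            # an arbitrary sample 4-vector
L = rotation(0.8, (0.0, 0.0, 1.0)) @ boost(0.5, (1.0, 0.0, 0.0))
X = X_of(x)
Xp = L @ X @ L.conj().T                         # transformed matrix X'
```

Since $\det(aI + b\,\hat n\cdot\vec\sigma) = a^2 - b^2$, both families have unit determinant, and the Minkowski norm $\det X$ is preserved automatically.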
I am trying to solve a simple scalar biharmonic equation using the Bubnov-Galerkin finite element method. I am using $H^2$-conforming basis functions. I was wondering if anyone could give me some pointers on how to debug my algorithm further.
I proceeded by converting the strong form of the equation to a weak form: \begin{align} \Delta^2 u &= f \\ u &= g \;\;\; \text{on} \;\;\; \Gamma_D \\ \Gamma_N &= \emptyset \end{align}
As an example problem, I am trying to solve for: \begin{align} u &= \cos(4 \pi x) \cos(4 \pi y) \end{align} Boundary conditions are implemented using the penalty method. The weak form is as follows:
\begin{align} a(u,v) &= L(v) \\ a(u,v) &= \sum_{\Omega_K \in \mathcal T} \int_{\Omega_K} \Delta \psi_{i} \Delta \psi_{j} + \sum_{E \in \mathcal E} \gamma \int_{E} \psi_{i} \psi_{j} \\ L(v) &= \sum_{\Omega_K \in \mathcal T} \int_{\Omega_K} \psi_{i} f + \sum_{E \in \mathcal E} \gamma \int_{E} \psi_{i} g\\ \end{align}
Problem: When I solve this equation on a square domain, my L2 error barely converges (convergence rate ~ 0.3). However, if I solve $u = \sin(4 \pi x) \sin(4 \pi y)$, I get correct convergence rates. I have tried the following things to debug my code:
I solved the Poisson equation, i.e., replaced the first integrand with the stiffness integrand. I got correct convergence rates, and concluded from this that my penalty method implementation is right.
I am using a Cartesian grid, so the Jacobian lines up with expectations.
Since my shape functions are defined in parametric ($\xi,\eta$) space, I had to work out the transformation for the Laplacian. I tested this transformation on polar coordinates.
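One more cheap check worth adding to that list: verify the manufactured data numerically, and note that unlike $\sin(4\pi x)\sin(4\pi y)$, the cosine solution has nonzero Dirichlet data $g$ (and a nonzero normal derivative) on the boundary of the unit square, so the cosine case is the first one that actually exercises the boundary terms; for a fourth-order problem, a penalty on $u$ alone (with no condition on $\partial u/\partial n$) can show up as exactly this kind of stalled $L^2$ convergence. A small pure-Python sanity check (sample point chosen arbitrarily):

```python
import math

PI = math.pi

def u(x, y):
    """Manufactured solution u = cos(4 pi x) cos(4 pi y)."""
    return math.cos(4 * PI * x) * math.cos(4 * PI * y)

def lap(f, x, y, h):
    """5-point discrete Laplacian."""
    return (f(x + h, y) + f(x - h, y)
            + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

def bilap(x, y, h=1e-2):
    """Discrete biharmonic: the Laplacian applied twice."""
    return lap(lambda a, b: lap(u, a, b, h), x, y, h)

x0, y0 = 0.1, 0.2                        # hypothetical interior sample point
f_exact = 4 * (4 * PI)**4 * u(x0, y0)    # Delta^2 u = ((4 pi)^2 + (4 pi)^2)^2 u
```

The finite-difference value agrees with $f = 4(4\pi)^4 u$ to a fraction of a percent, and $u$ is visibly nonzero on the edge $x = 0$.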
Many thanks.
Apologies. Edited. |
Given a fat matrix $B \in \mathbb{C}^{n \times m}$ (where $m > n$) with full row rank, I would like to find (numerically) a full-rank matrix $A$ that minimizes the Frobenius norm of the product $A B$. Formally,
$$\underset{A \in \mathbb{C}^{n \times n}}{\text{minimize}} \quad \frac{1}{2} \|AB\|_F^2 \quad \text{subject to} \quad \det (A) \neq 0$$
The value of $m$ is typically an order of magnitude larger than $n$. The sizes ($n,m$) I am interested in may be on the order of hundreds.
I have found the following discussion, which I guess could be generalized to the above case. I wonder if there is a simpler solution in this particular scenario. |
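One caveat before generalizing that discussion: as stated, the constraint set $\det(A) \neq 0$ is open, so the infimum is $0$ and is never attained; scaling any invertible $A$ by $\epsilon$ keeps it feasible while driving $\|AB\|_F \to 0$. Some normalization (e.g. $\det A = 1$, or $\|A\|_F = 1$) is needed for a well-posed minimum. A numpy sketch of the scaling argument (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 50                            # illustrative "fat matrix" sizes
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # generically invertible

base = np.linalg.norm(A @ B, 'fro')
dets, norms = [], []
for eps in (1e-1, 1e-3, 1e-6):
    Ae = eps * A
    dets.append(abs(np.linalg.det(Ae)))           # eps^n det(A): still nonzero
    norms.append(np.linalg.norm(Ae @ B, 'fro'))   # eps * ||A B||_F: goes to 0
```

So a sensible numerical formulation has to fix the scale of $A$ first; the simplest well-posed variants then reduce to the smallest singular directions of $B$.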
I need to solve a real generalized eigenvalue problem
$Ax = \lambda Bx \qquad (*)$
$A$ and $B$ are calculated from the equations below:
$$A=\sum_{i,j=1}^{N}W_{ij}(K_{i}-K_{j})\beta\beta^{T}(K_{i}-K_{j})^{T}$$
$$B=\sum_{i=1}^{N}D_{ii}K_{i}\beta\beta^{T}K_{i}^{T}$$.
where $W$ is a real symmetric $N \times N$ matrix with diagonal entries equal to $0$ and off-diagonal entries in $(0,1)$,
$D$ is an $N \times N$ diagonal matrix with $D_{ii}=\sum_{j=1}^N W_{ij}$,
$K_i$ is an $N \times M$ matrix with all entries positive, and
$\beta>0$ is an $M$-dimensional column vector.
From the above equations, $A$ and $B$ should be symmetric positive semi-definite, and $B$ should be positive definite (I did some proof myself).
Perhaps because of some numerical losses (I am not sure), $B$ appears to have small negative eigenvalues (I do the eigenvalue decomposition using the LAPACK routine dsyev()), and $(*)$ gives complex eigenvalues.
I want to select the $P$ smallest eigenvalues of this generalized eigenvalue problem, so complex values here are really a problem. Is there any way to avoid complex eigenvalues in such a case?
By the way, I use Armadillo as the linear algebra library and solve $(*)$ directly using the LAPACK routine dggev().
Any suggestions will be appreciated. |
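One structural remedy (a sketch, with hypothetical sizes and shift value): since $B = \sum_i D_{ii} (K_i\beta)(K_i\beta)^{T}$, assemble it in factored form $B = FF^{T}$ with columns $\sqrt{D_{ii}}\,K_i\beta$, so it is positive semi-definite by construction; then solve the symmetric-definite problem via a Cholesky reduction (this is what the LAPACK routines dsygv/dsygvd do, versus the general nonsymmetric dggev), adding a tiny shift in case $B$ is numerically semi-definite. The eigenvalues then come out real automatically. In numpy:

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 8, 5                                   # sizes are illustrative
W = rng.uniform(0.05, 0.95, (N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
Dd = W.sum(axis=1)                            # D_ii = sum_j W_ij
K = rng.uniform(0.1, 1.0, (N, N, M))          # the K_i, stacked along axis 0
beta = rng.uniform(0.1, 1.0, M)

v = K @ beta                                  # v[i] = K_i beta
A = sum(W[i, j] * np.outer(v[i] - v[j], v[i] - v[j])
        for i in range(N) for j in range(N))

# assemble B in factored form B = F F^T: PSD by construction, so no
# spurious negative eigenvalues can sneak in during assembly
F = v.T * np.sqrt(Dd)                         # column i is sqrt(D_ii) * v_i
B = F @ F.T

# symmetric-definite solve with a tiny shift, reducing to a standard
# symmetric eigenproblem: B = L L^T, C = L^-1 A L^-T
shift = 1e-12 * np.linalg.norm(B, 2)
L = np.linalg.cholesky(B + shift * np.eye(N))
C = np.linalg.solve(L, np.linalg.solve(L, A.T).T)
w, Y = np.linalg.eigh((C + C.T) / 2)          # real eigenvalues, ascending
X = np.linalg.solve(L.T, Y)                   # generalized eigenvectors
```

The $P$ smallest eigenvalues are then simply `w[:P]`. In Armadillo, `eig_sym` on the reduced matrix plays the role of `eigh` here.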
As others have mentioned, the $j$ is just a dummy variable that runs through a particular index range. I'm not sure if there's a common name for this notation; it's usually just called indexing or index notation.
As rbird mentions, this is very similar to the summation notation. To remind you, this is$$\sum_{j = 1}^{n} x_{j} = x_{1} + x_{2} + \cdots + x_{n}.$$The $j = 1$ on the bottom indicates that the summation starts at the $x_{j} = x_{1}$ term, and goes through each of the integers up to and including $n$. If you have a set of elements labelled $x_{1}, x_{2}, x_{3}, \ldots, x_{n}$, then the notation $x_{i}$ or $x_{j}$ is commonly used to refer to an arbitrary element without specifying which.
Note that $j$ can be replaced with any other dummy variable and that if we change $j = 1$ to $j = 2$ (for example), this indicates that we start the summation/index at $j = 2$ instead of $j = 1$. An explicit example is$$\bigvee_{i = 7}^{9} p_{i} = p_{7}\vee p_{8}\vee p_{9}$$which is exactly the same as$$\bigvee_{\ell = 7}^{9} p_{\ell} \qquad\text{and}\qquad \bigvee_{\gamma = 7}^{9} p_{\gamma}.$$The choice of which dummy letter to use ($i, \ell, \gamma$) is up to you, though it is common to see $i, j, k$ used for indexes.
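The same idea can be seen in code, where the dummy index is just a loop variable (a small Python analogy; range(7, 10) runs over $7, 8, 9$):

```python
p = {7: False, 8: True, 9: False}

# \bigvee_{i=7}^{9} p_i = p_7 or p_8 or p_9; the name of the dummy index
# is irrelevant, only the range it runs over matters
v_i = any(p[i] for i in range(7, 10))
v_l = any(p[l] for l in range(7, 10))

# \sum_{j=1}^{n} x_j, and the same sum started at j = 2 instead of j = 1
x = {1: 10, 2: 20, 3: 30}
total_from_1 = sum(x[j] for j in range(1, 4))   # 10 + 20 + 30
total_from_2 = sum(x[j] for j in range(2, 4))   # 20 + 30
```

Renaming `i` to `l` changes nothing; changing the lower bound drops the first term, exactly as in the summation notation.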
The notation$${\textstyle \bigvee_{i = 7}^{9} p_{i}} \qquad\text{and}\qquad \bigvee_{i = 7}^{9} p_{i}$$are both correct: the left is text style, which is commonly seen written inside a body of text, while the right is display style, which is commonly seen written on its own line. They are equivalent.
You should note that
\bigvee_{i = 7}^{9} p_{i} gives the left one when you use it between single
$'s (inline maths), and that
\bigvee_{i = 7}^{9} p_{i} gives the right one when you use it between double
$$'s:
$$\begin{array}{cc} \texttt{\$\bigvee_{i = 7}^{9} p_{i}\$} & {\textstyle \bigvee_{i = 7}^{9} p_{i}}\\ & \\ \texttt{\$\$\bigvee_{i = 7}^{9} p_{i}\$\$} & {\displaystyle \bigvee_{i = 7}^{9} p_{i}}\end{array}$$ |
On very weak solutions of semi-linear elliptic equations in the framework of weighted spaces with respect to the distance to the boundary
1.
Departamento de Matemática Aplicada, Universidad Complutense de Madrid, Plaza de las Ciencias No. 3, 28040 Madrid, Spain
2.
Laboratoire de Mathématiques et Applications, Université de Poitiers, Boulevard Marie et Pierre Curie, Téléport 2, BP 30179, 86962 Futuroscope Chasseneuil Cedex, France
$(u,L\varphi)_0-(Vu,\varphi)_0+(g(\cdot,u,\nabla u),\varphi)_0=\mu(\varphi),\quad\forall\varphi\in C^2_c(\Omega).$
The potential $V \le \lambda < \lambda_1$ is assumed to be in the
weighted Lorentz space $L^{N,1}(\Omega,\delta)$, where
$\delta(x)= dist(x,\partial\Omega),\ \mu\in
M^1(\Omega,\delta)$, the set of weighted Radon measures
containing $L^1(\Omega,\delta)$, $L$ is an elliptic linear self-adjoint second-order operator, and $\lambda_1$ is the first eigenvalue of $L$ with zero Dirichlet boundary conditions.
If $\mu\in L^1(\Omega,\delta)$ we only assume that the potential $V$ is in $L^1_{\mathrm{loc}}(\Omega)$, $V \le \lambda<\lambda_1$. If $\mu\in M^1(\Omega,\delta^\alpha),\ \alpha\in[0,1[$, then we prove that the very weak solution satisfies $|\nabla u|\in L^{\frac N{N-1+\alpha},\infty}(\Omega)$, a Lorentz space. We apply those results to the existence of the so-called large solutions with a right-hand side datum in $L^1(\Omega,\delta)$. Finally, we prove some rearrangement comparison results.
Keywords: very weak solutions; semilinear elliptic equations; distance to the boundary; weighted spaces; measure; unbounded potentials.
Mathematics Subject Classification: 35J25, 35J60, 35P30, 35J6.
Citation: Jesus Idelfonso Díaz, Jean Michel Rakotoson. On very weak solutions of semi-linear elliptic equations in the framework of weighted spaces with respect to the distance to the boundary. Discrete & Continuous Dynamical Systems - A, 2010, 27 (3) : 1037-1058. doi: 10.3934/dcds.2010.27.1037
|
In general topology, it is often the case that theorems relying on Hausdorffness can be generalized to the non-Hausdorff setting by imposing regularity as a separation axiom. Here is a theorem that solves Question 2. Note also that this solves Question 1, since then necessarily $Y$ must be completely regular (pseudometrizable spaces are completely regular).
Theorem: If $X$ is compact pseudometrizable, $Y$ is regular and $f : X \to Y$ is continuous and surjective then $Y$ is pseudometrizable.
I follow the proof in [Aliprantis, Border, "Infinite Dimensional Analysis: A Hitchhiker's Guide", Corollary 3.43] for the statement "$X$ compact metrizable, $Y$ Hausdorff, $f$ continuous surjective $\Rightarrow$ $Y$ metrizable" with the obvious modifications replacing non-closed compact sets with their closures.
Proof: Since $X$ is compact and $f$ is continuous, it follows that $Y = f(X)$ is compact. Since $Y$ is compact and regular, it follows that $Y$ is also completely regular and normal. By the Urysohn metrization theorem in the non-Hausdorff setting, every regular second-countable topological space is pseudometrizable. Thus it is enough to show that $Y$ is second-countable.
If $G \subseteq X$ is open then $X \setminus G$ is closed and thus compact (as a closed subset of a compact space). Therefore, $f(X \setminus G)$ is compact, but it need not be closed in $Y$. However, its closure $\overline{f(X \setminus G)}$ in $Y$ is compact since $Y$ is regular (see here). Thus, if $G$ is open in $X$ then $Y \setminus \overline{f(X \setminus G)}$ is open in $Y$.
Since $X$ is compact and pseudometrizable, it follows that $X$ is second-countable. So let $\mathcal{B}$ be a countable base for $X$; we may assume that $\mathcal{B}$ is closed under finite unions. Define $\mathcal{C} := \{ Y \setminus \overline{f(X \setminus G)} \mid G \in \mathcal{B} \}$. Then $\mathcal{C}$ is countable, and we claim that $\mathcal{C}$ is a base for $Y$.
Let $W \subseteq Y$ be open, $W \neq \emptyset$. Take $y \in W$. Then $\{ y \}$ is compact and, again by regularity of $Y$, $W$ is also an open neighborhood of its closure $\overline{ \{ y \} }$ (see again here). Then $f^{-1}(\overline{ \{ y \} })$ is closed in $X$, thus compact, and we have $f^{-1}(\overline{ \{ y \} }) \subseteq f^{-1}(W)$. Now since $X$ is second-countable, the open set $f^{-1}(W)$ is a union of sets in $\mathcal{B}$, which then forms an open cover of $f^{-1}(\overline{ \{ y \} })$, and thus there are finitely many such sets $B_1, \dots, B_n \in \mathcal{B}$ with $f^{-1}(\overline{ \{ y \} }) \subseteq G \subseteq f^{-1}(W)$ where $G := \bigcup_{i=1}^n B_i$. Since $\mathcal{B}$ is closed under finite unions we have $G \in \mathcal{B}$. It follows that $Y \setminus \overline{f(X \setminus G)} \subseteq f(G) \subseteq W$ and we are done. |
Ordinal Multiplication is Associative
Theorem
Let $x, y, z$ be ordinals.
Let $\times$ denote ordinal multiplication.
Then: $x \times \left({y \times z}\right) = \left({x \times y}\right) \times z$
Proof
The proof shall proceed by Transfinite Induction on $z$:
Basis for the Induction
Let $0$ denote the zero ordinal.
$$\begin{aligned} x \times \left({y \times 0}\right) &= x \times 0 && \text{Ordinal Multiplication by Zero} \\ &= 0 && \text{Ordinal Multiplication by Zero} \\ &= \left({x \times y}\right) \times 0 && \text{Ordinal Multiplication by Zero} \end{aligned}$$
This proves the basis for the induction.
Induction Step
Assuming the inductive hypothesis $x \times \left({y \times z}\right) = \left({x \times y}\right) \times z$:
$$\begin{aligned} x \times \left({y \times z^+}\right) &= x \times \left({\left({y \times z}\right) + y}\right) && \text{Definition of Ordinal Multiplication} \\ &= x \times \left({y \times z}\right) + \left({x \times y}\right) && \text{Ordinal Multiplication is Left Distributive} \\ &= \left({x \times y}\right) \times z + \left({x \times y}\right) && \text{Inductive Hypothesis} \\ &= \left({x \times y}\right) \times z^+ && \text{Definition of Ordinal Multiplication} \end{aligned}$$
This proves the induction step.
Limit Case
The inductive hypothesis for the limit case states that:
$\forall w \in z: x \times \left({y \times w}\right) = \left({x \times y}\right) \times w$
where $z$ is a limit ordinal.
The proof shall proceed by cases.
Case 1
If $y = 0$, then:
$$\begin{aligned} x \times \left({y \times z}\right) &= x \times 0 && \text{Ordinal Multiplication by Zero} \\ &= 0 && \text{Ordinal Multiplication by Zero} \\ &= 0 \times z && \text{Ordinal Multiplication by Zero} \\ &= \left({x \times y}\right) \times z && \text{Ordinal Multiplication by Zero} \end{aligned}$$
Case 2
If $y \ne 0$, then $y \times z$ is a limit ordinal by Limit Ordinals Preserved Under Ordinal Multiplication.
It follows that:
$$x \times \left({y \times z}\right) = \bigcup_{u \mathop{<} y \times z} x \times u \qquad \text{Definition of Ordinal Multiplication}$$
$$\left({x \times y}\right) \times z = \bigcup_{w \mathop{<} z} \left({x \times y}\right) \times w \qquad \text{Definition of Ordinal Multiplication}$$
If $u < y \times z$, then $u < y \times w$ for some $w \in z$ by Ordinal is Less than Ordinal times Limit. Then:
$$\begin{aligned} x \times u &\le x \times \left({y \times w}\right) && \text{Membership is Left Compatible with Ordinal Multiplication} \\ &= \left({x \times y}\right) \times w && \text{Inductive Hypothesis} \\ &\le \left({x \times y}\right) \times z && \text{Subset is Right Compatible with Ordinal Multiplication} \end{aligned}$$
Generalizing, the result follows for all $u \in y \times z$.
Therefore by Supremum Inequality for Ordinals:
$x \times \left({y \times z}\right) \le \left({x \times y}\right) \times z$
Conversely, take any $w < z$. Then:
$$\begin{aligned} \left({x \times y}\right) \times w &= x \times \left({y \times w}\right) && \text{Inductive Hypothesis} \\ &\le x \times \left({y \times z}\right) && \text{Membership is Left Compatible with Ordinal Multiplication} \end{aligned}$$
It follows by Supremum Inequality for Ordinals that:
$\left({x \times y}\right) \times z \le x \times \left({y \times z}\right)$
By definition of set equality:
$x \times \left({y \times z}\right) = \left({x \times y}\right) \times z$
This proves the limit case.
$\blacksquare$ |
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be a natural thing to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms.
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, and backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $E$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. It's exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$?
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
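For what it's worth, the case analysis can be brute-force verified in a few lines: close each generating set under composition (in a finite group, closure under products already yields a subgroup) and count. A Python sketch, with permutations written 0-indexed as tuples, so that (1, 2, 0, 3) is the 3-cycle (0 1 2):

```python
def compose(p, q):
    """(p o q)(i) = p[q[i]]; permutations of {0,1,2,3} as tuples."""
    return tuple(p[i] for i in q)

def closure(gens):
    """Subgroup generated by gens: repeatedly close under composition."""
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

# one generating set per divisor of 24 = |S4|
generating_sets = {
    1: [],
    2: [(1, 0, 2, 3)],                  # a transposition
    3: [(1, 2, 0, 3)],                  # a 3-cycle
    4: [(1, 2, 3, 0)],                  # a 4-cycle
    6: [(1, 0, 2, 3), (1, 2, 0, 3)],    # S3 on {0,1,2}
    8: [(1, 2, 3, 0), (2, 1, 0, 3)],    # dihedral 2-Sylow
    12: [(1, 2, 0, 3), (1, 0, 3, 2)],   # A4
    24: [(1, 0, 2, 3), (1, 2, 3, 0)],   # all of S4
}
orders = {d: len(closure(g)) for d, g in generating_sets.items()}
```

Each closure has exactly the advertised order, confirming the case-by-case construction.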
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
Notations :
AUT$(X)$ (where $X$ is a graph) is the group of automorphisms of the graph $X$
$G=\langle A \rangle $ means group $G$ is generated by set $A$.
$G_{\{\Delta\}}$ is the set-wise stabiliser of $\Delta$
STAB
Input: $A \subseteq \text{Sym}(\Omega)$ and $\Delta \subseteq \Omega$
Find: Generators of $\langle A \rangle_{\{\Delta\}} = \{g \in \langle A \rangle \mid \Delta ^ g = \Delta\}$
ISO
Given: $X_1 = (V,E_1)$ and $X_2 = (V,E_2)$
Find: Is $X_1$ isomorphic to $X_2$?
Claim: ISO $\le_{P}$ STAB (polynomial-time reduction)
Proof : Take disjoint union $X=X_1 \cup X_2$ and note that AUT$(X) \le \text{Sym}(V) =G$
$G$ acts on the set $\binom{V}{2}$, i.e., the set of all unordered pairs of vertices. Clearly $E \subseteq \binom{V}{2}$, and AUT$(X) = G_{\{E\}}$ under the above group action.
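The reduction can be illustrated on tiny graphs by brute force (replacing the STAB oracle by a loop over all of $\text{Sym}(V)$, so this is exponential and for illustration only): $X_1 \cong X_2$ iff the setwise stabiliser of $E$ contains an element mapping the vertices of $X_1$ onto those of $X_2$:

```python
from itertools import permutations

def iso_via_stab(E1, E2, n):
    """X1 iso X2 iff some g in Sym(V1 + V2) stabilising E setwise maps
    {0..n-1} onto {n..2n-1}.  Brute force over Sym(2n): tiny n only."""
    E = ({frozenset(e) for e in E1}
         | {frozenset((u + n, v + n)) for (u, v) in E2})   # disjoint union
    for g in permutations(range(2 * n)):
        gE = {frozenset(g[w] for w in e) for e in E}
        if gE == E and all(g[v] >= n for v in range(n)):
            return True
    return False

triangle = [(0, 1), (1, 2), (0, 2)]
relabeled = [(0, 2), (2, 1), (1, 0)]   # the same triangle, edges listed differently
path = [(0, 1), (1, 2)]
```

A real use of the reduction would instead call STAB once on the disjoint union and inspect the returned generators; the loop here just makes the set-stabiliser condition concrete.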
Question: Is this a polynomial-time reduction from a decision problem to a non-decision problem? Please note that it is possible that I have misunderstood the reduction.
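The reduction can be prototyped on toy instances. Everything below (the function names, the brute-force stabiliser, and the connectedness caveat) is my own illustration; a real STAB solver would not enumerate $\text{Sym}(V)$:

```python
from itertools import permutations

def disjoint_union(E1, E2, n):
    # X1 on vertices {0..n-1}; X2 relabelled onto {n..2n-1}
    return {frozenset(e) for e in E1} | {frozenset({u + n, v + n}) for u, v in E2}

def setwise_stabiliser(E, n):
    # brute-force G_{E} = {g in Sym(2n) : E^g = E} = AUT(X)
    def act(g, edges):
        return {frozenset({g[u], g[v]}) for e in edges for u, v in [tuple(e)]}
    return [g for g in permutations(range(2 * n)) if act(g, E) == E]

def iso(E1, E2, n):
    # For CONNECTED graphs: X1 ~ X2 iff some automorphism of the
    # disjoint union maps vertex 0 into the X2 half
    E = disjoint_union(E1, E2, n)
    return any(g[0] >= n for g in setwise_stabiliser(E, n))

P  = [(0, 1), (1, 2)]            # path 0-1-2
P2 = [(2, 1), (1, 0)]            # the same path, written differently
T  = [(0, 1), (1, 2), (0, 2)]    # triangle
print(iso(P, P2, 3), iso(P, T, 3))   # True False
```

This only illustrates why a set-wise stabiliser oracle answers ISO; the polynomial-time content of the claim lies entirely in STAB, not in this brute force.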
Reference: "Poly time computation in groups" by E. M. Luks, see page 3. |
Q. The electric field of a plane polarized electromagnetic wave in free space at time $t = 0$ is given by the expression $\vec{E} (x,z) = 10 \hat{j} \cos [(6x + 8z)]$. The magnetic field $\vec{B} (x, z, t)$ is given by: ($c$ is the velocity of light)
Solution:
$\vec{E} =10\hat{j} \cos \left[\left(6\hat{i} +8\hat{k}\right) \cdot \left(x\hat{i} +z\hat{k}\right)\right] = 10\hat{j} \cos\left[\vec{K} \cdot \vec{r}\right]$

$\therefore \vec{K} = 6\hat{i} +8 \hat{k}$, the direction of wave travel, i.e., the direction of $\vec{c}$.

The direction of $\hat{B}$ will be along $\hat{C} \times \hat{E} = \frac{-4 \hat{i} +3\hat{k}}{5}$, and the magnitude of $\vec{B}$ is $\frac{E}{c} = \frac{10}{c}$.

$\therefore \vec{B} = \frac{10}{c} \left( \frac{-4\hat{i} +3\hat{k}}{5}\right) = \frac{-8\hat{i}+6\hat{k}}{c}$
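A quick numeric check of the direction algebra (pure Python; the `cross` helper and variable names are mine):

```python
import math

# Numbers from the problem: E = 10 j_hat cos(6x + 8z), so K = 6 i_hat + 8 k_hat
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

K = (6.0, 0.0, 8.0)
norm = math.sqrt(sum(c * c for c in K))   # |K| = 10
K_hat = tuple(c / norm for c in K)        # (3 i_hat + 4 k_hat)/5
E_hat = (0.0, 1.0, 0.0)                   # j_hat

B_hat = cross(K_hat, E_hat)               # B_hat = K_hat x E_hat, so E x B points along K
print(B_hat)                              # ~ (-0.8, 0.0, 0.6) = (-4 i_hat + 3 k_hat)/5

cB = tuple(10.0 * c for c in B_hat)       # |B| = E0/c with E0 = 10
print(cB)                                 # ~ (-8.0, 0.0, 6.0): B = (-8 i_hat + 6 k_hat)/c
```

This reproduces the amplitude $(-8\hat{i}+6\hat{k})/c$ obtained above.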
|
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
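The planar ($n = 2$) case of this is easy to see numerically. A small illustration, under my own parametrization of a reflection by the angle of its fixed line:

```python
import math

def reflection(alpha):
    # 2x2 matrix of the reflection across the line through 0 at angle alpha
    c, s = math.cos(2 * alpha), math.sin(2 * alpha)
    return [[c, s], [s, -c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# composing reflections across lines at angles a and b rotates by 2(b - a)
a, b = 0.3, 1.0
R = matmul(reflection(b), reflection(a))
t = 2 * (b - a)
expected = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
ok = all(abs(R[i][j] - expected[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)   # True: a proper rotation is a product of two reflections
```

Since the angles $a$, $b$ are arbitrary, every planar rotation arises this way, matching the $n = 2$ instance of the theorem.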
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on a chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack the knight of $B$ with a bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, then backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
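This can be checked concretely for vector fields whose flows are exactly integrable (the choice $X = \partial_x$, $Y = x\,\partial_y$ below is mine):

```python
import math

# On R^2 with X = d/dx and Y = x d/dy, the bracket is [X, Y] = d/dy,
# and both flows can be written in closed form.
def flow_X(p, s):
    x, y = p
    return (x + s, y)

def flow_Y(p, s):
    x, y = p          # flowing x d/dy for time s moves y by s*x
    return (x, y + s * x)

t = 0.04
r = math.sqrt(t)
p = (1.0, 2.0)

# around the "Lie square": X, then Y, then backwards X, then backwards Y
q = flow_Y(flow_X(flow_Y(flow_X(p, r), r), -r), -r)
print(q)   # ~ (1.0, 2.04): the time-t flow of [X, Y] = d/dy from p
```

For this particular pair of fields the second-order statement is even exact: the commutator of flows lands precisely at the time-$t$ flow of $[X, Y]$.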
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
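For $k = 1$ the formula can be verified symbolically. Here is a SymPy sketch with an arbitrary (my own) choice of $\omega$, $X$, $Y$ on $\Bbb R^2$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# a concrete 1-form omega = f dx + g dy and two vector fields on R^2
f, g = x**2 * y, sp.sin(x) + y
om = (f, g)
X = (y, x)          # X = y d/dx + x d/dy
Y = (1, x * y)      # Y = d/dx + xy d/dy

def apply_form(om, V):   # omega(V)
    return om[0] * V[0] + om[1] * V[1]

def vf_apply(V, h):      # V(h), derivative of the function h along V
    return V[0] * sp.diff(h, x) + V[1] * sp.diff(h, y)

def bracket(X, Y):       # [X, Y], componentwise
    return tuple(vf_apply(X, Y[i]) - vf_apply(Y, X[i]) for i in range(2))

# coordinate formula: d(omega) = (dg/dx - df/dy) dx^dy, evaluated on (X, Y)
lhs = (sp.diff(g, x) - sp.diff(f, y)) * (X[0] * Y[1] - X[1] * Y[0])

# invariant formula: d(omega)(X, Y) = X omega(Y) - Y omega(X) - omega([X, Y])
rhs = vf_apply(X, apply_form(om, Y)) - vf_apply(Y, apply_form(om, X)) \
      - apply_form(om, bracket(X, Y))

print(sp.simplify(lhs - rhs))   # 0
```

Swapping in other choices of $f$, $g$, $X$, $Y$ gives the same cancellation, which is the point of the invariant formula: the individual terms depend on first derivatives of $X$ and $Y$, but those contributions cancel.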
Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted as $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $(\nabla_X s)(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok at first, but basically think of it as currying: making a bilinear map into a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ gave a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$.)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group generated by the elements $v$ of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ generated by those $v$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name: $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$; explicitly, $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
Consider the function:
$f(x)= \begin{cases} 1/n \quad &\text{if $x= m/n$ in simplest form} \\ 0 \quad &\text{if $x \in \mathbb{R}\setminus\mathbb{Q}$} \end{cases} $
Prove that the function is continuous at every irrational point and discontinuous at every rational point. Recall that the function is continuous at a point $k$ if $\displaystyle\lim_{x \to k} f(x)=f(k)$.
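For intuition, here is a small numerical experiment (the window $1/100$ and the denominator range are my own choices): near $\sqrt{2}$, any rational inside the window is forced to have a large denominator, so $f$ is small there, which is the mechanism behind continuity at irrationals.

```python
from fractions import Fraction
from math import sqrt

def thomae(q: Fraction) -> Fraction:
    # f(m/n) = 1/n with m/n in lowest terms; Fraction reduces automatically
    return Fraction(1, q.denominator)

target = Fraction(sqrt(2))   # exact binary fraction extremely close to sqrt(2)
eps = Fraction(1, 100)

# all rationals in (0, 2) with denominator < 60 lying within eps of sqrt(2)
near = [Fraction(m, n) for n in range(1, 60) for m in range(1, 2 * n)
        if abs(Fraction(m, n) - target) < eps]

print(max(thomae(q) for q in near))   # 1/12: the best nearby fraction is 17/12
```

Shrinking `eps` pushes the minimal denominator (here 12, from the convergent $17/12$ of $\sqrt 2$) higher and higher, so $\max f \to 0$ over the window, exactly the $\varepsilon$-$\delta$ statement at an irrational point.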
I was thinking of doing an epsilon-delta proof backwards, using the fact that both $\mathbb{Q}$ and the irrationals are dense in $\mathbb{R}$. Any ideas on how to expand on this are welcome. |
Quasirandomness Introduction
Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma.
In general, one has some kind of parameter associated with a set, which in our case will be the number of combinatorial lines it contains, and one would like a
deterministic definition of the word "quasirandom" with the following key property. Every quasirandom set [math]\mathcal{A}[/math] has roughly the same value of the given parameter as a random set of the same density.
Needless to say, this is not the
only desirable property of the definition, since otherwise we could just define [math]\mathcal{A}[/math] to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this. Every set [math]\mathcal{A}[/math] that fails to be quasirandom has some other property that we can exploit.
These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem.
A possible definition of quasirandom subsets of [math][3]^n[/math]
As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function.
Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math])
As ever, it is easy to prove positivity. To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect).
Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined). |
In an equilibrium reaction $\ce{CO {(g)} + H2O {(g)} <=> CO2 {(g)} + H2 {(g)}}$, the initial concentrations of $\ce{CO}$ and $\ce{H2O}$ are equal, at 0.3 mol/dm$^3$. What is the equilibrium constant of the reaction if the equilibrium concentrations of $\ce{CO2}$ and $\ce{H2}$ are equal, at 0.1 mol/dm$^3$?
I tried to solve it this way:
$$\begin{split} K_r & = \frac{[\ce{CO2}][\ce{H2}]}{[\ce{CO}][\ce{H2O}]} \\ & = \frac{(0.1)(0.1)}{(0.3)(0.3)} \\ & = \frac{(0.1)^2}{(0.3)^2} \\ & = \frac{0.01}{0.09} \\ & \approx 0.11~\text{mol/dm}^3 \end{split}$$
But the correct answer is 0.25 mol/dm$^3$.
What did I do wrong? What is the right way of solving problems like this one? |
I'm trying to write a program to calculate fixed-point Hartree-Fock level energies of molecules (for my amusement) and everything makes sense but this. I've been agonizing over this for almost 3 hours now. I've tried pretty much every Google search I can think of (mostly returning results that were either too vague or without enough detail) and looked in almost every book/ebook I own (pretty much always too vague). Any help whatsoever would be extremely appreciated.
As far as I understand, an STO-NG contracted Gaussian basis function has the following form:
$$\phi_{\mu}^{\textrm{CGF}}(\vec{r}) = \sum_{p}^{N_{\textrm{PGF}}} d_{p\mu} (x-X_A)^{i_{p\mu}} (y-Y_A)^{j_{p\mu}} (z-Z_A)^{k_{p\mu}} e^{-\alpha_{p\mu}|\vec{r}-\vec{R_{A}}|^{2}}$$
where the contraction coefficients $d_{p\mu}$ and exponents $\alpha_{p\mu}$ are chosen such that $\phi_{\mu}$ provides the 'best fit' to a Slater-type orbital having a Slater exponent $\zeta$, $\vec{R_{A}}=(X_A,Y_A,Z_A)$ is a fixed reference centre (usually a nucleus) and $l_{p\mu} = i_{p\mu} + j_{p\mu} + k_{p\mu}$ defines the angular momentum of the primitive Gaussian function:
$$\phi_{p\mu}^{\textrm{PGF}} = (x-X_A)^{i_{p\mu}} (y-Y_A)^{j_{p\mu}} (z-Z_A)^{k_{p\mu}} e^{-\alpha_{p\mu}|\vec{r}-\vec{R_{A}}|^{2}}.$$
I'm having problems interpreting data files documenting contraction coefficients and exponents for various basis sets.
For example, please consider the following snippet from the Gaussian 94 STO-3G basis set file documenting contraction parameters for Carbon:
****
C 0
S 3 1.00
     71.6168370    0.15432897
     13.0450960    0.53532814
      3.5305122    0.44463454
SP 3 1.00
      2.9412494   -0.09996723    0.15591627
      0.6834831    0.39951283    0.60768372
      0.2222899    0.70011547    0.39195739
****
I have read in various places that there are enough coefficients here to describe 5 contracted functions, but I see only enough for 3 maximum (if the exponents are the same, but the coefficients change between 2S and 2P). I'd be eternally grateful if someone could explain for me, precisely how one would determine the following information from such a basis set data file:
1) Which EXACT basis functions appear within each contracted function?
2) How many contracted Gaussian basis functions there are?
If possible, would you be able to do the same for the following snippet, again for Carbon, Gaussian 94, but this time for the more complicated 6-31++G** basis set?:
****
C 0
S 6 1.00
   3047.5249000    0.0018347
    457.3695100    0.0140373
    103.9486900    0.0688426
     29.2101550    0.2321844
      9.2866630    0.4679413
      3.1639270    0.3623120
SP 3 1.00
      7.8682724   -0.1193324    0.0689991
      1.8812885   -0.1608542    0.3164240
      0.5442493    1.1434564    0.7443083
SP 1 1.00
      0.1687144    1.0000000    1.0000000
SP 1 1.00
      0.0438000    1.0000000    1.0000000
D 1 1.00
      0.8000000    1.0000000
**** |
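A hypothetical minimal parser for this text layout (the function and variable names are mine; production codes use far more robust readers). Each shell header is `TYPE nprim scale`, followed by `nprim` rows of one exponent plus one contraction coefficient per angular component; an `SP` block shares exponents between an S and a P shell, which is exactly where the "5 contracted functions" count for STO-3G carbon comes from:

```python
def parse_shells(text):
    lines = [l.split() for l in text.strip().splitlines() if l.split()]
    shells, i = [], 0
    while i < len(lines):
        kind, nprim = lines[i][0], int(lines[i][1])
        prims = [list(map(float, row)) for row in lines[i + 1 : i + 1 + nprim]]
        if kind == "SP":   # shared exponents: yields one S shell and one P shell
            shells.append(("S", [(e, cs) for e, cs, cp in prims]))
            shells.append(("P", [(e, cp) for e, cs, cp in prims]))
        else:
            shells.append((kind, [(e, c) for e, c in prims]))
        i += 1 + nprim
    return shells

sto3g_C = """
S 3 1.00
  71.6168370  0.15432897
  13.0450960  0.53532814
   3.5305122  0.44463454
SP 3 1.00
   2.9412494 -0.09996723  0.15591627
   0.6834831  0.39951283  0.60768372
   0.2222899  0.70011547  0.39195739
"""
shells = parse_shells(sto3g_C)
# 1 S shell (1s) + S and P from the SP block (2s; 2px, 2py, 2pz)
nbf = sum({"S": 1, "P": 3}[k] for k, _ in shells)
print(len(shells), nbf)   # 3 5
```

So for STO-3G carbon there are 3 shells but $1 + 1 + 3 = 5$ contracted basis functions, since the single P shell carries three Cartesian components with identical exponents and coefficients.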
There are actually two uses of the word "strength" in play here. A strong endofunctor $F : C \to C$ over a monoidal category $(C, \otimes, I)$ is one which comes with a natural transformation $\sigma : A \otimes F(B) \to F(A \otimes B)$, satisfying some coherence conditions with respect to the associator which I will gloss over. This condition is sometimes ...
One is internal and the other is external. A category $\mathcal{C}$ consists of objects and morphisms. When we write $f : A \to B$ we mean that $f$ is a morphism from object $A$ to object $B$. We may collect all morphisms from $A$ to $B$ into a set of morphisms $\mathrm{Hom}_{\mathcal{C}}(A,B)$, called the "hom-set". This set is not an object of $\mathcal{C}$...
Grothendieck's inequality, from his days in functional analysis, was initially proved to relate fundamental norms on tensor product spaces. Grothendieck called the inequality "the fundamental theorem of the metric theory of tensor product spaces", and published it in a now famous paper in 1958, in French, in a limited circulation Brazilian journal. The paper ...
There have been recent developments in dependent type theory which relate type systems to homotopy types. This is now a relatively small field, but there is a lot of exciting work being done right now, and potentially a lot of low-hanging fruit, most notably in porting results from algebraic topology and homological algebra and formalizing the notion of ...
As it happens, I'm writing a paper about this now. IMO, a good way to think about futures or promises is in terms of the Curry-Howard correspondence for temporal logic. Basically, the idea behind futures is that it is a data structure representing a computation that is in progress, and upon which you can synchronize. In terms of temporal logic, this is the ...
Algebraic geometry is used heavily in algebraic complexity theory and in particular in geometric complexity theory. Representation theory is also crucial for the latter, but it's even more useful when combined with algebraic geometry and homological algebra.
How do type classes fit in this model? The short answer is: they don't. Whenever you introduce coercions, type classes, or other mechanisms for ad-hoc polymorphism into a language, the main design issue you face is coherence. Basically, you need to ensure that typeclass resolution is deterministic, so that a well-typed program has a single ...
There has been a lot done applying category theory to regular languages and automata. One starting point is the recent papers: "Bialgebraic Review of Deterministic Automata, Regular Expressions and Languages" by Bart Jacobs, and "A Bialgebraic Approach to Automata and Formal Language Theory" by James Worthington. In the first of these papers, the structure of ...
Semantically, a coercion $c : A \leq B$ is just a morphism $c : A \to B$, which gets added to the interpretation of terms at the appropriate points. The basic problem this creates is the issue of coherence: are you guaranteed that a term will have a unique meaning, given that the same term can potentially have coercions hidden in many possible places in the ...
Grothendieck's impact can be felt in type theory and logic. For instance, Bart Jacobs' 700+ page volume Categorical Logic and Type Theory gives a uniform treatment of various type theories ($X$-type theory, where $X\subseteq \{ \text{simple},$ $\text{dependent},$ $\text{polymorphic},$ $\text{higher-order}\}$) based on the categorical notion of Grothendieck ...
There is not a canonical such category, for the same reason there is no canonical category of computations. However, there are large and useful algebraic structures on data structures. One of the more general such structures, which is still nevertheless useful, is the theory of combinatorial species. A species is a functor $F : B \to B$, where $B$ is the ...
First of all: "Any monad is also an applicative functor and any applicative functor is a functor." This is true in the context of Haskell, but (reading Applicative as "strong lax monoidal functor") not in general, for the rather trivial reason that you can have "applicative" functors between different monoidal categories, whereas monads (and comonads) are ...
Your knowledge of field theory would be useful in cryptography, while category theory is heavily used in the research on programming languages and typing systems, both of which are closely related to the foundations of mathematics.
As pointed out by Martin, there is some work on the categorical representation of patches, Mimram and Di Giusto's "A Categorical Theory of Patches" being the most extensive categorical approach to edit scripts as used by the UNIX diff. In their sense, you have what you want. The objects are finite sequences of words over an alphabet $L$, seen as a mapping $A : [...
Indeed, there is a different notion than isomorphism which is more useful in programming. It is called "behavioural equivalence" (sometimes called "observational equivalence") and it is established by giving a "simulation relation" between data structures rather than bijections. Algebraists came in and established an area called "algebraic data types" in ...
There have been a number of developments with regards to the use of monads in the theory of computation since Eugenio Moggi's work. I am not able to give a comprehensive account, but here are some points that I am familiar with; others can chime in with their answers. Specific examples of monads: You do not have to study super-general theory all the time. ...
If your objects are Turing machines, there are several reasonable possibilities for morphisms. For example: 1) Consider Turing machines as the automata they are, and consider the usual morphisms of automata (maps between the alphabets and the states that are consistent with one another) which also either preserve the motions of the tape head(s), or exactly ...
EDIT: Adding the caveat that Rogers' fixed-point theorem may not be a special case of Lawvere's. Here is a proof that may be "close"... It uses Rogers' fixed-point theorem instead of Lawvere's theorem. (See comment section below for further discussion.) Let $K(x)$ be the Kolmogorov complexity of string $x$. Lemma: $K$ is not computable. Proof. ...
You might be interested in Turing categories by Robin Cockett and Pieter Hofstra. From the point of view of category theory the question "what is the category of Turing machines" is less interesting than "what is the categorical structure which underlies computation". Thus, Robin and Pieter identify a general kind of category that is suitable for developing ...
Actually, I think what you're looking for is Kleene algebra. See Dexter Kozen's classic article; he gives an axiomatization of Kleene star, and I assume this is the very first step you're interested in. "A completeness theorem for Kleene algebras and the algebra of regular events", Information and Computation, 110(2):366-390, May 1994. That article does not ...
Unfortunately, there are too many things going on here, so it is easy to mix things up. The uses of "full" in "full completeness" and "full abstraction" refer to completely different ideas of fullness. But there is also some vague connection between them. So this is going to be a complicated answer. Full completeness: "Sound and complete" is a ...
Any application of $p$-adic cohomology or étale cohomology in point-counting formulas for algebraic varieties has roots in his work. I am guessing Mulmuley's vision of a generalization of the Riemann hypothesis over finite fields, coming from the Weil conjectures, can be thought of as asking questions which originally had fruitful results from Grothendieck's étale ...
I think you are asking two questions about applicability, type A and type B separately. As you note, there are many substantive applications of category theory to type B topics: semantics of programming languages (monads, cartesian closed categories), logic and provability (topoi, varieties of linear logic). However, there seems to be little substantive ...
Here is a proof that this is not a research question. It can be solved by a machine:

Welcome to Djinn version 2011-07-23.
Type :h to get help.
Djinn> f ? (a, Either b c) -> Either (a,b) (a,c)
f :: (a, Either b c) -> Either (a, b) (a, c)
f (a, b) =
  case b of
    Left c -> Left (a, c)
    Right d -> Right (a, d)
Field theory and algrebraic geometry would be useful in topics related to error correcting codes, both in the classical setting as well as in studying locally decodable codes and list decoding. I believe this goes back to work on the Reed-Solomon and Reed-Muller codes, which was then generalized to algebraic geometric codes. See for example, this book ...
You can take a look at: Peter Golbus, Robert W. McGrail, Tomasz Przytycki, Mary Sharac, and Aleksandar Chakarov. 2009. "Tricolorable torus knots are NP-complete". In Proceedings of the 47th Annual Southeast Regional Conference (ACM-SE 47). ACM, New York, NY, USA, Article 42, 6 pages. Abstract: This work presents a method for associating a class of ...
Applying higher homotopy-theoretic ideas to CS is still a very nascent field! My understanding is that it's not even that old as a mathematical field. Certainly HoTT is the central impetus for such ideas. Even there, though, there have only been few applications of category theory of "dimension" higher than 2. One nice "computer science-y" one is ...
Theoretical computer scientists do many things, one of which is mathematical modeling of various computer-sciency things. For instance, we like to provide mathematical models of programming languages, so that people can actually prove things about programs (such as proving that the program does what it's supposed to). In this sense it is always good to have ...
Yes, it's impossible to have a nondegenerate CCC with general recursion and categorical coproducts. The standard reference for this is: H. Huwig and A. Poigné, "A note on inconsistencies caused by fixpoints in a cartesian closed category", Theoretical Computer Science, 73:101-112, 1990. However, I (and most of the other people I've met) learned about it ...
Saal Hardali mentioned that he wanted a category of Turing machines to do geometry (or at least homotopy theory) on. However, there are a lot of different ways to achieve similar aims.There is a very strong analogy between computability and topology. The intuition is that termination/nontermination is like the Sierpinski space, since termination is ... |
When I was reading about GANs, the thing I didn't understand is why people often choose the input to a GAN ($z$) to be samples from a Gaussian. And are there also potential problems associated with this?
Why do people often choose the input to a GAN ($z$) to be samples from a Gaussian?
Generally, for two reasons: (1) mathematical simplicity, (2) working well enough in practice. However, as we explain, under additional assumptions the choice of Gaussian could be more justified.
Compare to the uniform distribution. The Gaussian distribution is not as simple as the uniform distribution, but it is not that far off either. It adds a "concentration around the mean" assumption to uniformity, which gives us the benefits of parameter regularization in practical problems.

The least known. Use of a Gaussian is best justified for continuous quantities that are the least known to us, e.g. noise $\epsilon$ or a latent factor $z$. "The least known" can be formalized as "the distribution that maximizes entropy for a given variance". The answer to this optimization is $N(\mu, \sigma^2)$ for arbitrary mean $\mu$. Therefore, in this sense, if we assume that a quantity is the least known to us, the best choice is a Gaussian. Of course, if we acquire more knowledge about that quantity, we can do better than the "least known" assumption, as will be illustrated in the following examples.

Central limit theorem. Another commonly used justification is that, since many observations are the result (average) of a large number of [almost] independent processes, the CLT justifies the choice of a Gaussian. This is not a good justification, because there are also many real-world phenomena that do not obey normality (e.g. power-law distributions), and since the variable is the least known to us, we cannot decide which of these real-world analogies is more applicable.
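The maximum-entropy claim can be checked against closed-form entropies: among all distributions with a fixed variance, the Gaussian has the largest differential entropy. A minimal sketch (my own illustration, not from the answer) comparing a Gaussian and a uniform distribution of the same variance:

```python
import math

def gaussian_entropy(sigma2):
    # differential entropy of N(mu, sigma2): 0.5 * ln(2*pi*e*sigma2)
    return 0.5 * math.log(2 * math.pi * math.e * sigma2)

def uniform_entropy(sigma2):
    # a uniform distribution on [a, b] has variance (b-a)^2/12 and entropy ln(b-a);
    # matching variance sigma2 gives width b - a = sqrt(12*sigma2)
    return math.log(math.sqrt(12 * sigma2))

# the Gaussian wins for every variance (the gap is a constant, 0.5*ln(2*pi*e/12))
for sigma2 in (0.1, 1.0, 25.0):
    assert gaussian_entropy(sigma2) > uniform_entropy(sigma2)
```

The gap does not depend on the variance, which is why the argument selects the Gaussian uniformly over all scales.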
Are there also potential problems associated with this?
Yes. When we assume a Gaussian, we are simplifying. If our simplification is unjustified, our model will under-perform. At that point, we should search for an alternative assumption. In practice, when we make a new assumption about the least known quantity (based on acquired knowledge or speculation), we can extract that assumption and introduce a new Gaussian one, instead of changing the Gaussian assumption itself. Here are two examples:

Example in regression (noise). Suppose we have no knowledge about observation $A$ (the least known), thus we assume $A \sim N(\mu, \sigma^2)$. After fitting the model, we may observe that the estimated variance $\hat{\sigma}^2$ is high. After some investigation, we may assume that $A$ is a linear function of measurement $B$, thus we extract this assumption as $A = \color{blue}{b_1B +c} + \epsilon_1$, where $\epsilon_1 \sim N(0, \sigma_1^2)$ is the new "least known". Later, we may find that our linearity assumption is also weak since, after fitting the model, the observed $\hat{\epsilon}_1 = A - \hat{b}_1B -\hat{c}$ also has a high $\hat{\sigma}_1^2$. Then, we may extract a new assumption as $A = b_1B + \color{blue}{b_2B^2} + c + \epsilon_2$, where $\epsilon_2 \sim N(0, \sigma_2^2)$ is the new "least known", and so on.

Example in GAN (latent factor). Upon seeing unrealistic outputs from a GAN (knowledge), we may add $\color{blue}{\text{more layers}}$ between $z$ and the output (extract the assumption), in the hope that the new network (or function) with the new $z_2 \sim N(0, \sigma_2^2)$ would lead to more realistic outputs, and so on.
Is $\mathbb{Z}[x] / (x - a) \cong \mathbb{Z}$ where $a \in \mathbb{Z}$?
I'm trying to get more comfortable with these questions, so please critique my attempt or, if you can, suggest other (better) ways of doing this. Thanks.
The two rings are isomorphic. Consider the evaluation map $\phi : \mathbb{Z}[x] \to \mathbb{Z}, x \mapsto a$. Since this is an evaluation map it is a homomorphism and it is surjective since $\phi(n) = n$ for any $n \in \mathbb{Z}$.
Now we show $\ker(\phi) = (x - a)$. If $f \in \ker(\phi)$, then $f(a) = 0$, thus $x - a \mid f$ hence $f \in (x - a)$. If $f \in (x-a)$, then $f(x) = g(x)(x-a)$ hence $f(a) = 0$, so $f \in \ker(\phi)$. Thus by the first isomorphism theorem $\mathbb{Z}[x] / (x-a) \cong \mathbb{Z}$. |
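As a numerical sanity check (not part of the proof), the remainder of $f$ upon division by $x - a$ is exactly the constant $f(a)$, which is the computational content of $\ker(\phi) = (x-a)$. A small sketch using synthetic division in plain Python:

```python
def divide_by_x_minus_a(coeffs, a):
    """Synthetic division of f (integer coefficients, highest degree first)
    by (x - a). Returns (quotient coefficients, remainder)."""
    acc, r = [], 0
    for c in coeffs:
        r = r * a + c
        acc.append(r)
    return acc[:-1], acc[-1]

def evaluate(coeffs, a):
    # direct evaluation of f(a) as a sum of powers
    n = len(coeffs) - 1
    return sum(c * a**(n - i) for i, c in enumerate(coeffs))

# remainder mod (x - a) agrees with evaluation at a, matching ker(phi) = (x - a)
for coeffs in ([1, 0, -2, 5], [7, 0, 0, 0, 1, -1], [1, -3]):
    for a in (-2, 0, 3):
        _, r = divide_by_x_minus_a(coeffs, a)
        assert r == evaluate(coeffs, a)
```

For instance, dividing $x^3 - 2x + 5$ by $x - 3$ leaves remainder $26 = f(3)$.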
I have the following equation:
$$m \frac{d^2 x}{dt^2} = -Kx - \alpha \frac{dx}{dt} + f(t) $$
Which I introduced into Mathematica as:
eqn = m D[x[t], {t, 2}] == -K x + f[t] - a D[x[t], t]
I want to take the Fourier transform of this equation to get (after simplifying terms):
$$ x(\omega) [ - m \omega^2 + i \omega \alpha + K] = f(\omega)$$
(There might be a prefactor missing because I defined the Fourier transform as the inverse Fourier transform of the one that Mathematica uses but that is not important).
In order to do that I tried:
FourierTransform[eqn, t, \[Omega]]
But the output that I get is just:
FourierTransform[ m x''[t] == -K x + f[t] - a x'[t], t, \[Omega]]
It seems that Mathematica doesn't know how to take the Fourier transform of the functions $x(t)$ and $f(t)$ without their explicit forms. Is there any way to do this directly with the built-in functions of Mathematica?
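One workaround is to apply the transform rules yourself: for suitably decaying functions, $\mathcal{F}[x'(t)] = i\omega\,x(\omega)$ and $\mathcal{F}[x''(t)] = -\omega^2 x(\omega)$, so each time derivative is replaced by a power of $i\omega$ (in Mathematica this can be done with a replacement rule). The resulting algebraic relation $x(\omega)[-m\omega^2 + i\omega\alpha + K] = f(\omega)$ can then be verified numerically; a hedged sketch in plain Python with arbitrary parameter values (not the asker's):

```python
# check that X = F / (-m w^2 + i w alpha + K) satisfies the transformed ODE
# m*(i w)^2 * X = -K*X - alpha*(i w)*X + F
m, K, alpha = 2.0, 5.0, 0.3  # arbitrary positive parameters

for w in (0.5, 1.0, 3.0, 10.0):
    F = 1.0 + 0.5j                              # some value of f(omega)
    X = F / (-m * w**2 + 1j * w * alpha + K)    # claimed solution x(omega)
    lhs = m * (1j * w) ** 2 * X                 # transform of m x''(t)
    rhs = -K * X - alpha * (1j * w) * X + F     # transform of the right-hand side
    assert abs(lhs - rhs) < 1e-12
```

This confirms term-by-term that substituting $d/dt \mapsto i\omega$ reproduces the stated algebraic equation.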
In this paper we present local distributed algorithms for constructing spanners in wireless sensor networks modeled as unit ball graphs (shortly UBGs) and quasi-unit ball graphs (shortly quasi-UBGs), in the 3-dimensional Euclidean space. Our first contribution is a local distributed algorithm that, given a UBG $U$ and a parameter $\alpha < \pi/3$, constructs a sparse spanner of $U$ with stretch factor $1/(1 - 2\sin(\alpha/2))$, improving the previous upper bound of $1/(1-\alpha)$ by Althöfer et al., which is applicable only when $\alpha < 1/(1+2\sqrt{2}) < \pi/3$. The second contribution of this paper is in presenting the first local distributed algorithm for the construction of bounded-degree lightweight spanners of UBGs and quasi-UBGs.

The simulation results we obtained show that, empirically, the weight of the spanners, the stretch factor and the locality of the algorithms are much better than the theoretical upper bounds proved in this paper.
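Since $2\sin(\alpha/2) \le \alpha$ for $\alpha \ge 0$, the new stretch factor $1/(1-2\sin(\alpha/2))$ never exceeds the old bound $1/(1-\alpha)$ where both apply, and it remains valid on the wider range $\alpha < \pi/3$. A small numerical comparison (illustrative only; not from the paper):

```python
import math

def new_bound(alpha):
    # stretch factor from this paper, valid for alpha < pi/3
    return 1.0 / (1.0 - 2.0 * math.sin(alpha / 2.0))

def old_bound(alpha):
    # Althoefer et al., applicable only for alpha < 1/(1 + 2*sqrt(2))
    return 1.0 / (1.0 - alpha)

# the new bound is never worse where both are defined
for alpha in (0.05, 0.1, 0.2, 0.3):
    assert new_bound(alpha) <= old_bound(alpha)

# and its validity range pi/3 strictly contains the old range
assert 1.0 / (1.0 + 2.0 * math.sqrt(2.0)) < math.pi / 3
```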
As it turns out, that's actually a highly non-trivial question. I presume you're aware that every Euclidean domain is a UFD. It is also useful, however, to recall the definition of an integrally closed domain. That is, an integral domain $R$ with field of fractions $K$ is considered integrally closed if for any monic polynomial $p(x) = x^n + a_{n-1}x^{n-1} + \ldots + a_0\in R[x]$, if $p$ has a root $\alpha\in K$, then $\alpha\in R$. It can be shown that any UFD is an integrally closed domain, and that $\mathbb{Z}[\sqrt{d}]$ will never be integrally closed for $d$ not square-free.
Secondly, if $d$ is square-free, then since not being integrally closed is an obstruction to being a UFD (which is necessary for being an ED), we will often extend $\mathbb{Z}[\sqrt{d}]$ into the subring of its fraction field $\mathbb{Q}(\sqrt{d})$ which contains precisely the solutions to monic polynomials in $\mathbb{Z}[\sqrt{d}]$, which is in fact a ring, and we will call this ring $\mathcal{O}_{\mathbb{Q}(\sqrt{d})}$. It is a theorem in algebraic number theory that for square-free nonzero integers $d$, $$\mathcal{O}_{\mathbb{Q}(\sqrt{d})} = \begin{cases} \mathbb{Z}[\sqrt{d}] & \mathrm{if\ } d\equiv 2,3\mod 4 \\\mathbb{Z}\left[\frac{1+\sqrt{d}}{2}\right] & \mathrm{if\ } d\equiv 1\mod 4\end{cases}$$ which tells us that for $\mathbb{Z}[\sqrt{d}]$ to be a Euclidean domain, we must have that $d\equiv 2,3\mod 4$.
Here is where we arrive at our next complication: algebraic number theory provides us with a natural norm $N(a+b\sqrt{d}) = a^2 - db^2$ which is multiplicative and takes elements of $\mathcal{O}_{\mathbb{Q}(\sqrt{d})}$ to integers, as can be checked. A ring which is Euclidean under this norm is said to be norm-Euclidean. There do exist rings which are Euclidean but not norm-Euclidean, such as $$\mathbb{Z}\left[\frac{1+\sqrt{69}}{2}\right]$$ but to my knowledge, these types of rings are not fully understood. We do, however, fully understand which quadratic rings are norm-Euclidean. In fact $\mathcal{O}_{\mathbb{Q}(\sqrt{d})}$ is norm-Euclidean if and only if $$d = -11, -7, -3, -2, -1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, \mathrm{\ or\ }73$$ and so, $\mathbb{Z}[\sqrt{d}]$ is norm-Euclidean if and only if $$d = -2, -1, 2, 3, 6, 7, 11, \mathrm{\ or\ }19.$$ I actually don't know if there are any Euclidean domains of the form $\mathbb{Z}[\sqrt{d}]$ that are not norm-Euclidean. My suspicion is that there are not, though it is really way beyond my abilities to prove this.
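For a concrete instance, the $d = -1$ case, the Gaussian integers $\mathbb{Z}[i]$, is norm-Euclidean: rounding the exact quotient $a/b$ to the nearest Gaussian integer leaves a remainder of norm at most $N(b)/2 < N(b)$. A small randomized check (my own illustrative sketch):

```python
import random

def norm(z):
    # N(a + bi) = a^2 + b^2, the d = -1 case of a^2 - d*b^2
    return z.real**2 + z.imag**2

def gauss_divmod(a, b):
    # round the exact quotient to the nearest Gaussian integer
    t = a / b
    q = complex(round(t.real), round(t.imag))
    return q, a - q * b

random.seed(0)
for _ in range(1000):
    a = complex(random.randint(-50, 50), random.randint(-50, 50))
    b = complex(random.randint(-50, 50), random.randint(-50, 50))
    if b == 0:
        continue
    q, r = gauss_divmod(a, b)
    assert norm(r) < norm(b)  # the Euclidean condition for the norm
```

Each rounded coordinate is off by at most $1/2$, so $N(r) \le \tfrac14 N(b) + \tfrac14 N(b) = \tfrac12 N(b)$, which is the standard proof that this division works.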
What's the state-of-the-art in the approximation of highly oscillatory integrals in both one dimension and higher dimensions to arbitrary precision?
I'm not entirely familiar with what's now done for cubatures (multidimensional integration), so I'll restrict myself to quadrature formulae.
There are a number of effective methods for the quadrature of oscillatory integrals. There are methods suited for finite oscillatory integrals, and there are methods for infinite oscillatory integrals.
For infinite oscillatory integrals, two of the more effective methods used are Longman's method and the modified double exponential quadrature due to Ooura and Mori. (But see also these two papers by Arieh Iserles.)
Longman's method relies on converting the oscillatory integral into an alternating series by splitting the integration interval, and then summing the alternating series with a sequence transformation method. For instance, to evaluate an oscillatory integral of the form
$$\int_0^\infty f(t)\sin t\,\mathrm{d}t$$
one converts this into the alternating sum
$$\sum_{k=0}^\infty \int_{k\pi}^{(k+1)\pi} f(t)\sin t\,\mathrm{d}t$$
The terms of this alternating sum are computed with some quadrature method like Romberg's scheme or Gaussian quadrature. Longman's original method used the Euler transformation, but modern implementations replace Euler with more powerful convergence acceleration methods like the Shanks transformation or the Levin transformation.
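A minimal sketch of Longman's method for $\int_0^\infty \frac{\sin t}{t}\,\mathrm{d}t = \pi/2$, using the original Euler transformation (implemented as repeated averaging of partial sums) and a simple composite Simpson rule for each half-period term; the specific tolerances and term counts below are my own choices:

```python
import math

def simpson(f, a, b, n=64):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def sinc(t):
    return 1.0 if t == 0.0 else math.sin(t) / t

# terms of the alternating series: integrals over successive half-periods,
# where sin(t) keeps a constant sign
terms = [simpson(sinc, k * math.pi, (k + 1) * math.pi) for k in range(20)]

# Euler transformation: repeatedly average consecutive partial sums
S = [sum(terms[:i + 1]) for i in range(len(terms))]
while len(S) > 1:
    S = [(S[i] + S[i + 1]) / 2 for i in range(len(S) - 1)]

result = S[0]  # close to pi/2, far more accurate than the raw 20-term partial sum
```

Swapping the averaging loop for a Shanks or Levin transformation, as modern implementations do, accelerates the convergence further.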
The double exponential quadrature method, on the other hand, makes a clever change of variables, and then uses the trapezoidal rule to numerically evaluate the transformed integral.
For finite oscillatory integrals, Piessens (one of the contributors to QUADPACK) and Branders, in two papers, detail a modification of Clenshaw-Curtis quadrature (that is, constructing a Chebyshev polynomial expansion of the nonoscillatory part of the integrand). Levin's method, on the other hand, uses a collocation method for the quadrature. (I am told there is now a more practical version of the old standby, Filon's method, but I've no experience with it.)
These are the methods I remember offhand; I'm sure I've forgotten other good methods for oscillatory integrals. I will edit this answer later if I remember them.
Besides "multidimensional vs. single-dimensional" and "finite range vs. infinite range", an important categorization for methods is "one specific type of oscillator (usually Fourier-type: $\sin(t)$, $\exp(it)$, etc, or Bessel-type: $J_0(t)$, etc.) vs. more general oscillator ($\exp(i g(t))$ or even more general oscillators $w(t)$)".
At first, oscillatory integration methods focused on specific oscillators. As J. M. said, prominent ones include Filon's method and the Clenshaw-Curtis method (these two are closely related) for finite range integrals, and series extrapolation based methods and the double-exponential method of Ooura and Mori for infinite range integrals.
More recently, some general methods have been found. Two examples:
Levin's collocation-based method for any $\exp(i g(t))$ (Levin 1982), or later for any oscillator $w(t)$ defined by a linear ODE (Levin 1996 as linked by J. M.). Mathematica uses Levin's method for integrals not covered by the more specialized rules.
Huybrechs and Vandewalle's method based on analytic continuation along a complex path where the integrand is non-oscillatory (Huybrechs and Vandewalle 2006).
No distinction is necessary between methods for finite and infinite range integrals for the more general methods, since a compactifying transformation can be applied to an infinite range integral, leading to a finite range oscillatory integral that can still be addressed with the general method, albeit with a different oscillator.
Levin's method can be extended to multiple dimensions by iterating over the dimensions and other ways, but as far as I know all the methods described in literature so far have sample points that are an outer product of the one-dimensional sample points or some other thing that grows exponentially with dimension, so it rapidly gets out of hand. I'm not aware of more efficient methods for high dimensions; if any could be found that sample on a sparse grid in high dimensions it would be useful in applications.
Creating automatic routines for the more general methods may be difficult in most programming languages (C, Python, Fortran, etc) in which you would normally expect to program your integrand as a function/routine and pass it to the integrator routine, because the more general methods need to know the structure of the integrand (which parts look oscillatory, what type of oscillator, etc) and can't treat it as a "black box". |
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)
@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic,
  author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło},
  title = {Topological models of arithmetic},
  journal = {ArXiv e-prints},
  year = {2018},
  note = {under review},
  keywords = {under-review},
  eprint = {1808.01270},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://wp.me/p5M0LV-1LS},
}
Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.
The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.
Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?
By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic. The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models.
We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$.

Question. Which topological spaces support a topological model of arithmetic?
In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic.
Let me state the main theorem and briefly sketch the proof.
Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$.

Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output.
\begin{equation*}\small\begin{array}{rcr}
\cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*}
This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations ends with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$.
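The school-child's observation is just arithmetic modulo $2^k$: the last $k$ binary digits of $x+y$ and $x\cdot y$ depend only on the last $k$ binary digits of $x$ and $y$. A quick randomized check:

```python
import random

random.seed(1)
for _ in range(1000):
    k = random.randint(1, 20)
    x, y = random.randrange(1 << 30), random.randrange(1 << 30)
    # change x and y arbitrarily above the k-th binary digit
    x2 = x + (random.randrange(1 << 10) << k)
    y2 = y + (random.randrange(1 << 10) << k)
    # the final k digits of the sum and the product are unchanged
    assert (x + y) % (1 << k) == (x2 + y2) % (1 << k)
    assert (x * y) % (1 << k) == (x2 * y2) % (1 << k)
```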
Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired.
But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired.
Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order.
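The final-digits comparison is easy to implement, and one can check the features just described: the evens form the left half, the odds the right half, and $0$ sits directly in the middle. A sketch (the order on $0,\dots,7$ below is computed from the definition, not copied from the paper's figure):

```python
from functools import cmp_to_key

def fd_cmp(n, m):
    # compare binary digits from the least-significant end;
    # zero is represented by the empty digit string
    a = bin(n)[2:][::-1] if n else ""
    b = bin(m)[2:][::-1] if m else ""
    for x, y in zip(a, b):
        if x != y:
            return -1 if x == '0' else 1  # a 0-digit sorts lower
    if len(a) == len(b):
        return 0
    if len(a) > len(b):
        # the longer number is lower iff its next digit is 0
        return -1 if a[len(b)] == '0' else 1
    return -1 if b[len(a)] == '1' else 1

order = sorted(range(8), key=cmp_to_key(fd_cmp))
# evens on the left, odds on the right, 0 in the middle
assert order == [4, 2, 6, 0, 5, 1, 3, 7]
```

Highly even numbers such as $8$ indeed land further left than $4$, and highly odd numbers like $7$ land furthest right, matching the description of the order.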
The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is the precisely same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$.
We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$. Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$
The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$.
Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable.
The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. Even the successor function $x\mapsto x+1$ is not continuous with respect to this order.
Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order.
Go to the article to read more.
I couldn't follow the proof of Corollary 1 on p. 16 of Algebraic Number Theory by Lang.
Corollary $1$. Let $A$ be a ring integrally closed in its quotient field $K$. Let $L$ be a finite Galois extension of $K$, and $B$ the integral closure of $A$ in $L$. Let $\mathfrak{p}$ be a maximal ideal of $A$. Let $\phi:A \rightarrow A/\mathfrak{p}$ be the canonical homomorphism, and let $\psi_1, \psi_2$ be two homomorphisms of $B$ extending $\phi$ in a given algebraic closure of $A/\mathfrak{p}$. Then there exists an automorphism $\sigma$ of $L$ over $K$ such that $$\psi_1=\psi_2 \circ \sigma.$$
I found the post Proof on p. 16 of Lang's Algebraic Number Theory but I still cannot follow the given proof.
Here is the proof in the post.
By proposition 14, $B/\mathfrak P$ is a normal extension of $A/\mathfrak p$, so the images of $\psi_1$ and $\psi_2$ inside the algebraic closure of $A/\mathfrak p$ are actually the same: $\psi_1(B) = \psi_2(B)$. That's why $\omega: \psi_1(B) \to \psi_2(B)$ is called an automorphism rather than an isomorphism. So $\omega$ is an element of $\operatorname{Gal}(B/\mathfrak P | A/\mathfrak p)$, hence it comes from an element $\sigma \in \operatorname{Gal}(L|K)$ with $\sigma \mathfrak P = \mathfrak P$. Viewing $\psi_1: B \to \psi_1(B) \cong B/\mathfrak P$ as the projection map inducing the surjection $\operatorname{Gal}(L|K)_{\mathfrak P} \to \operatorname{Gal}(B/\mathfrak P | A/\mathfrak p)$, this means $\omega \circ \psi_1 = \psi_1 \circ \sigma$.
First of all, I don't know why we can view $\psi_1:B \to \psi_1(B)\cong B/\mathfrak{P}$ as the projection map. The projection map used in the textbook is, I thought, the canonical map $B \to B/\mathfrak{P}$.
Secondly, I know $\omega \in \operatorname{Gal}(B/\mathfrak P | A/\mathfrak p)$ comes from an element $\sigma \in \operatorname{Gal}(L|K)$ by Proposition 14. Then shouldn't it be $\omega=\psi_1\circ \sigma$ (granting my first question)?
In the book 'Introduction to Set Theory' by Hrbacek and Jech, cardinal addition is defined as
$$\sum_{i \in I}{\kappa_i}=\left|\bigcup_{i \in I}{A_i} \right|$$
where $|A_i|=\kappa_i$ for all $i \in I$ and $\langle A_i\mid i \in I\rangle$ is a system of mutually disjoint sets.
The authors state that, without the Axiom of Choice, there may exist two systems $\langle A_n\mid n \in \mathbb{N}\rangle$, $\langle A^{\prime}_n\mid n \in \mathbb{N}\rangle$ of mutually disjoint sets such that each $A_n$ and each $A^{\prime}_n$ has two elements, but $\bigcup_{n=0}^\infty{A_n}$ is not equipotent to $\bigcup_{n=0}^\infty{A^{\prime}_n}$.
I have tried to construct such an example but failed. I tried the set of even natural numbers and the set of natural numbers, but they have the same cardinality after the infinite union.
I read in an old textbook that every chemical reaction is theoretically in equilibrium. If this is true, how can a reaction be one-way?
Yes, every chemical reaction can theoretically be in equilibrium. Every reaction is reversible. See my answer to chem.SE question 43258 for more details.
This includes even precipitation reactions and reactions that release gases. Equilibrium isn't just for liquids! Multiphase equilibria exist.
The only thing that stops chemical reactions from being "in equilibrium" is the lack of the proper number of molecules. For a reaction to be in equilibrium, the concentrations of reactants and products must be related by the equilibrium constant.
$$ \ce{ A <=> B} $$ $$ K = \frac{[B]}{[A]} $$
When equilibrium constants are extremely large or small, then extremely large numbers of molecules are required to satisfy this equation. If $K = 10^{30}$, then at equilibrium there will be $10^{30}$ molecules of B for every molecule of A. Another way to look at this is that for equilibrium to happen, there need to be at least $10^{30}$ molecules of B, i.e. more than one million moles of B, in the system for there to be "enough" B to guarantee an equilibrium, i.e. to guarantee that there will be a well-defined "equilibrium" concentration of A.
When this many molecules are not present, then there is no meaningful equilibrium. For very large (or very small) equilibrium constants, it will be very difficult to obtain an equilibrium. In addition to needing a megamole-sized system (or bigger), the system will have to be well-mixed, isothermal, and isobaric. That's not easy to achieve on such large scales!
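To put numbers on this, a minimal sketch using $\Delta G^{\circ} = -RT \ln K$ at 298 K: an equilibrium constant of $10^{30}$ corresponds to a modest $\Delta G^{\circ}$ of about $-171\ \mathrm{kJ/mol}$, yet already demands more than a megamole of product per molecule of reactant:

```python
import math

R = 8.314       # gas constant, J/(mol*K)
T = 298.15      # temperature, K
N_A = 6.022e23  # Avogadro's number, 1/mol

K = 1e30
dG = -R * T * math.log(K)               # standard free energy change, J/mol
moles_B_per_molecule_A = K / N_A        # moles of B needed per molecule of A

assert -1.72e5 < dG < -1.70e5           # about -171 kJ/mol
assert moles_B_per_molecule_A > 1e6     # "more than one million moles"
```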
Update Commenters suggest that "irreversible" reactions do not have an equilibrium. This is true, but tautological. In the real world, all reactions are reversible, at least to a (perhaps vanishingly small) degree. To say otherwise would violate microscopic reversibility. A reaction that was 100% irreversible would have an equilibrium constant of infinity. But if $K= \infty$, then $\Delta G^{\circ} = -RT \ln{K}$ would turn into $\Delta G^{\circ} = -\infty$. So to get infinite energy we would just have to use 100% irreversible reactions! Hopefully the problems with the idea of "irreversible" reactions are becoming apparent.
Equilibrium can only apply to a closed system.
Reactions which form insoluble precipitates or gases which escape do not exhibit the behavior of a closed system. Therefore, these reactions may not be in equilibrium. However, these claims are pragmatic rather than real.
As it turns out, in the above answer, barium sulphate has a $K_{\mathrm{sp}}$ of $1.1 \times 10^{-10}$, so formally there is some small equilibrium: the amount of barium sulphate in solution is about $1.05 \times 10^{-5}\ \mathrm{mol/L}$.

As gases are escaping from solution, they may be reabsorbed, and thus there would be some small equilibrium for processes like that as well.
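The quoted concentration is just $\sqrt{K_{\mathrm{sp}}}$, since dissolving $\ce{BaSO4}$ produces $\ce{Ba^{2+}}$ and $\ce{SO4^{2-}}$ in equal concentrations:

```python
import math

K_sp = 1.1e-10        # solubility product of BaSO4
s = math.sqrt(K_sp)   # [Ba2+] = [SO4^2-] = s at equilibrium, mol/L

assert abs(s - 1.05e-5) < 0.01e-5  # about 1.05e-5 mol/L, as quoted
```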
But pragmatically, these reactions are not at equilibrium.
Yes, every reaction is an equilibrium. A complete reaction is an equilibrium with a high equilibrium constant. If you write the expression for the equilibrium constant, you will find that a high equilibrium constant implies that the concentration of the products is very high, i.e. the reaction has reached completion.
$$ \ce{BaCl2 (aq) + H2SO4 (aq) -> BaSO4 (s) + 2 HCl (aq)} $$ Let's just think about what (aq) means; it means you have ions floating about in there which are in equilibrium with their solids.
If you start from thinking there is no precipitate, just ions dissolved, we have $\ce{Ba^{2+}}$, $\ce{H+}$, $\ce{Cl-}$, and $\ce{SO4^{2-}}$. Then you consider the $K_{\mathrm{s}}$ of the different salts, which are $\ce{BaCl2}$, $\ce{HCl}$, $\ce{BaSO4}$, and $\ce{H2SO4}$. They will all go to $K_{\mathrm{s}}$, so all the salts would be forming and be dissolved unless blocked, e.g. things can become supersaturated. $\ce{BaSO4}$ has an extremely low $K_{\mathrm{s}}$, so most of it will precipitate at the same time. $\ce{BaCl2}$ will go to $K_{\mathrm{s}}$ with the ions $\ce{Ba^{2+}}$ and $\ce{Cl-}$ which are still in solution, and $\ce{H2SO4}$ would also go to $K_{\mathrm{s}}$, which means $\ce{BaCl2}$ is being formed, and therefore there is a reverse reaction.
Note if I had $\ce{BaSO4}$ in water, which would be in equilibrium (so tiny dissolved/tiny bit of $\ce{Ba^{2+}}$ ions and $\ce{SO4^{2-}}$ ions), and I added $\ce{Cl-}$ ions, a negligible amount more $\ce{BaSO4}$ will dissolve, as $\ce{BaCl2}$ would go to equilibrium, reducing $\ce{Ba^{2+}}$ ions, leading to negligible amounts of $\ce{BaSO4}$ dissolving to remain at $K_{\mathrm{s}}$. This also shows Le Chatelier's principle.
No, every reaction isn't in equilibrium with its products. Consider the following irreversible reaction: $$\ce{BaCl2(aq) + H2SO4(aq) -> BaSO4(ppt) + 2HCl(aq)}$$.
By definition, if the reaction is irreversible then there is no equilibrium for that reaction. If there were an "equilibrium" for the reaction then the equation would be something like: $$K_{\mathrm{eq}} = \dfrac{[\ce{BaSO4}][\ce{HCl}]^2}{[\ce{BaCl2}][\ce{H2SO4}]}$$ and such an equilibrium just doesn't exist, since when the barium sulfate precipitates there could be a microgram or a kilogram as the product. Think of it another way: adding $\ce{HCl}$ (in dilute solution) or $\ce{BaSO4}$ won't shift the reaction to the left. (Adding more $\ce{HCl}$ would shift $\ce{HSO4^{-} <-> H+ + SO4^{2-}}$ in concentrated solutions, which is beside the point I'm trying to make.) There is a solubility product for barium sulfate, but the solubility product doesn't depend on the amount of barium sulfate precipitate, nor on the concentration of $\ce{HCl}$. So the solubility product isn't for the overall reaction but rather for part of the system:
$$[\ce{Ba^{2+}}][\ce{SO4^{2-}}] = K_{\mathrm{sp}}$$
(Full disclosure - Theoretically the barium sulfate solubility product wouldn't depend on the HCl concentration, but really that isn't quite true. The barium sulfate solubility product really depends on the activity of the barium and sulfate ions, so the ionic strength of the solution matters.) |
The answer is the scale. The fluid movement of a sink has a much smaller curvature radius than the grand-scale movements of a hurricane. This curvature radius plays a big role in whether your movement due to a pressure gradient will be balanced by Coriolis or centrifugal forces, as thoroughly discussed here. You can read this wikipage, but the essence is ...
This question can be answered with a scaling argument. Let us start with the momentum equation (Navier-Stokes) in a non-inertial reference frame (e.g. on the rotating earth), assuming inviscid flow (roughly true above the surface). $$\dfrac{\partial\mathbf u}{\partial t} = - \mathbf u \cdot \nabla \mathbf u -\dfrac{1}{\rho}\nabla p-2 \mathbf \Omega \...
It's partly historical, partly point-of-view, but it's not a mistake. The friction coefficient emphasises the effect of the surface on a property of the boundary layer, i.e., greater surface friction slows the near-surface wind more. Aerodynamic resistance emphasises the effect of the boundary layer on surface-atmosphere exchange, i.e., greater mixing ...
You can think about it like this: it takes one day for the earth to perform a full rotation (about 86,000 seconds); on the other hand, it takes a few seconds for your sink to drain (let's say 10 seconds). So it takes 8600 times longer for the earth to do a full rotation than it takes the water to drain down the sink. It is not too hard to imagine that the earth'...
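This timescale comparison is usually packaged as the Rossby number $\mathrm{Ro}=U/(fL)$: rotation matters only when $\mathrm{Ro}$ is of order one or smaller. A back-of-the-envelope sketch (the velocity and length scales below are illustrative guesses, not measurements):

```python
def rossby(U, L, f=1e-4):
    """Rossby number U / (f L); f ~ 1e-4 1/s is a typical mid-latitude value."""
    return U / (f * L)

sink = rossby(U=0.1, L=0.1)        # ~10 cm/s flow over a ~10 cm basin
hurricane = rossby(U=30.0, L=3e5)  # ~30 m/s winds over ~300 km

print(f"sink:      Ro ~ {sink:.0f}")       # huge -> Coriolis negligible
print(f"hurricane: Ro ~ {hurricane:.1f}")  # order 1 -> Coriolis matters
```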
This is a good question, and the answer is: aerodynamic resistance is not defined inversely. It is, rather, defined in a context that is often misinterpreted. In your question, you state that aerodynamic resistance is basically how much the roughness of the surface slows air movement down. This statement is not correct, and it seems to stem from the ...
Here are your choices with regard to modeling the atmosphere. There aren't many, and only one of them makes sense. Model the atmosphere from the perspective of an inertial frame of reference. Good luck with that! As an advisor told me decades ago, "Name one!" It's certainly not an Earth-centered frame; the Earth is orbiting the Sun. It's certainly not Sun-...
The Coriolis acceleration is only present in a rotating reference frame, as is the case with Earth. The Coriolis effect is caused by Earth's rotation and the inertia of the mass experiencing the effect. If you are in an inertial frame of reference, thus non-accelerating, there will be no Coriolis effect. Let's assume that you are capable of modeling the ...
If your question is: I have an equation with force term $F(x,t)$, and suppose that $F(x,t)$ is caused by effect A, then will the solution of the equation be the same as if the force $F(x,t)$ had been caused by effect B, then the answer is of course "yes". Same force (i.e., same magnitude, same direction) will always cause the same reaction of the system, ...
Inertial instability is similar to the centrifugal instability in that we are looking at the stability of parcels to horizontal perturbations. In the inertial case, however, the initial state is geostrophic balance rather than cyclostrophic balance. Symmetric instability is the case where a parcel is inertially stable to horizontal perturbations and ...
The flow accumulation algorithm essentially determines the upstream contributing area of every grid cell; in other words, what area or how many other cells will drain into a given cell. The flow accumulation algorithm is independent of rainfall as it simply determines which areas drain where, which will later be used to determine how much water actually ...
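As an illustration of that idea, here is a minimal sketch (hypothetical cells, not any particular GIS package's API) that counts contributing cells once each cell's flow direction is known — note that rainfall never appears, only drainage topology:

```python
def flow_accumulation(flows):
    """flows maps each cell to its downstream neighbour (None at an outlet).

    Returns the number of cells draining through each cell, itself included.
    """
    acc = {cell: 1 for cell in flows}   # each cell contributes its own area
    for cell in flows:
        down = flows[cell]
        while down is not None:         # walk this cell's flow path downstream
            acc[down] += 1
            down = flows[down]
    return acc

# Two hillslope cells 'a' and 'b' both drain into 'c', which drains to outlet 'd':
acc = flow_accumulation({"a": "c", "b": "c", "c": "d", "d": None})
print(acc)  # {'a': 1, 'b': 1, 'c': 3, 'd': 4}
```

Multiplying these counts by cell area (and later by rainfall depth) gives the contributing area actually used in runoff estimates.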
Some time ago I posted this answer about how rainbows are formed, and the Wikipedia link Trond Hansen posted mentions droplet size relative to the wavelength of light. For a rainbow to form, the droplet size has to be large enough, relative to the color with the longest wavelength of visible light, for it to be refracted before reflecting off the backside ...
No, clouds don't really have a 'surface' that could have tension like a body of water. The different looks in these two examples (left Cumulonimbus Calvus and right Cumulus Humilis) depend greatly on how they have formed and how they are evolving now. The large Cumulonimbus is still growing at a relatively rapid pace. The cloud is reaching higher ...
The Laplace equation, $\frac{\partial^2 \Psi}{\partial x^2}+\frac{\partial^2 \Psi}{\partial y^2}+\frac{\partial^2 \Psi}{\partial z^2}=0$, is just a steady-state 3D flow equation. It's a black-box conservation of hydraulic potential. Diffusion doesn't come into it. The diffusion equation (assuming homogeneous isotropic conditions) is $\frac{\partial^2 \Psi}{\partial x^2}+\frac{\partial^2 \Psi}{\partial y^2}+\frac{\partial^2 \Psi}{\partial z^2}= \frac{S_s}{K}\frac{\partial h}{\partial t}$. This discretizes the time ...
It's not clear exactly what is being modelled here, but it seems to me that there are two ways in which the concentration can 'go negative'. Firstly, the rate of change of concentration can be massive, in which case see what happens when modelling with much smaller time steps. Or, the diffusion term substantially exceeds the advection term, which is ...
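The time-step point can be made concrete with a one-dimensional explicit diffusion update (a generic sketch, not the poster's model): the scheme cannot produce negative values from non-negative input as long as $D\,\Delta t/\Delta x^2 \le 1/2$, which is exactly why shrinking the time step helps.

```python
def diffuse_step(c, D, dx, dt):
    """One explicit (FTCS) diffusion step on a 1-D concentration profile.

    Stable, and non-negativity-preserving, only when r = D*dt/dx**2 <= 0.5.
    """
    r = D * dt / dx**2
    if r > 0.5:
        raise ValueError(f"r = {r:.2f} > 0.5: reduce the time step")
    inner = [c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
             for i in range(1, len(c) - 1)]
    return [c[0]] + inner + [c[-1]]   # fixed-value boundary cells

profile = diffuse_step([0.0, 0.0, 1.0, 0.0, 0.0], D=1.0, dx=1.0, dt=0.4)
print(profile)  # spike spreads to neighbours; everything stays >= 0
```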
Recent literature points to attempts to understand the theoretical dynamics behind the MJO, as seen in these two publications: Dynamics moisture mode vs. moisture mode in MJO dynamics, and A general theoretical framework for understanding essential dynamics of Madden–Julian oscillation. As concluded in the original paper by Madden and Julian, the oscillation ...
This equation is a form of the Shallow-water equations, which are derived from Navier-Stokes in the incompressible limit with the vertical direction integrated out. One then takes the equation for the vorticity (which then has only one component) from the Shallow-water equations. The vorticity is split into planetary vorticity, as is the Coriolis term ...
If the pumping well only partially penetrates the aquifer, or there is a source above the aquifer (e.g., unconfined aquifer, leaky aquifer, under a stream), the vertical flow should be accounted for. Groundwater equations like the Theis solution do not consider vertical flow. Incorporating the vertical flow would change the form of the governing equation (P.D.E.) ...
I've a few questions in mind:
Why is $\pi$ irrational? If it is, then how can two rational quantities (circumference, diameter) produce an irrational number? How are we able to determine digits of $\pi$ accurately?
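On the last question (computing digits of $\pi$ accurately): one classical route, sketched here, is a fast-converging arctangent series evaluated in exact rational arithmetic, so that round-off never pollutes the digits; only the truncation of the series limits the accuracy.

```python
from fractions import Fraction
import math

def arctan_inv(n, terms):
    """arctan(1/n) from its Taylor series, in exact rational arithmetic."""
    x = Fraction(1, n)
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

def machin_pi(terms=10):
    """Machin's 1706 formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    return 16 * arctan_inv(5, terms) - 4 * arctan_inv(239, terms)

print(float(machin_pi()))  # agrees with math.pi to about 15 digits
```

Taking more terms (and more precision in the final conversion) yields as many digits as desired; modern record computations use faster formulas, but the principle is the same.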
The name "irrational number" has an ancient source...
It was an (implicit) assumption common to all "archaic" Greek mathematics that, given two magnitudes, e.g. two segments of length $a$ and $b$, it is always possible to find a segment of "unit length" $u$ that measures both, i.e. such that [using modern algebraic formulae which are totally foreign to Greek math]:
$a = n \times u$ and $b = m \times u$, for $n,m \in \mathbb N$.
From the above instance of the assumption, it follows that :
$a/b = (n \times u) / (m \times u) = n/m$.
The assumption amounts to saying that the ratio between two magnitudes is always a ratio between numbers (i.e., in modern terms, a rational number; but note that for Greek math the only numbers are the natural ones, and they are distinguished from magnitudes: a segment, a square, ... which are "measured" by numbers).
The discovery of the existence of irrational magnitudes, through the proof that the case where $a=1$ is the side of the unit square and $b=\sqrt 2$ is its diagonal is not expressible as a ratio of (natural) numbers, led Greek math to withdraw the "commensurability assumption" and to axiomatize geometry.
Such pairs of magnitudes were called "incommensurable" (i.e. without a common measure).
For the same reason, $\sqrt 2$ is an irrational number: precisely because the ratio "diagonal/side" is not expressible as a ratio of natural numbers.
The famous CS inequality states
$$ \left| \left< x , y \right>\right| \le \left\| x \right\| \cdot \left\| y \right\| $$
for $x,y$ in an inner product space $X$ over $\mathbb{K}$. Every proof I have found involves some kind of case distinction; namely, one may assume w.l.o.g. that $\Vert x \Vert, \Vert y \Vert > 0$, since the inequality is trivial otherwise. However, I was looking for a constructive proof (i.e. without using the law of excluded middle) of the inequality.
I will add some remarks to the question.
1) Law of excluded middle: This axiom states that $A \vee \neg A$ is true. It is not accepted as an axiom in constructive mathematics. One might interpret this as saying that indirect proofs are not allowed; however, I find that not completely accurate. Precisely: the implication $A \rightarrow \neg \neg A$ holds in constructive mathematics; the implication $\neg \neg A \rightarrow A$ does not in general.
2) The definition of an inner product is the same as in "classical" mathematics.
3) The norm $\Vert \cdot \Vert$ is given by $\Vert x \Vert = \sqrt{\langle x, x\rangle }$. |
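For what it's worth, here is a sketch (over $\mathbb{R}$, and I may be overlooking subtleties of the constructive framework) of a standard way to avoid the case split: perturb the denominator by an arbitrary $\varepsilon > 0$, which is strictly positive regardless of whether $\Vert y \Vert = 0$.

```latex
% Fix \varepsilon > 0 and abbreviate b = \langle y, y \rangle \ge 0.
% Expanding a norm square:
0 \le \bigl\Vert (b+\varepsilon)\,x - \langle x,y\rangle\, y \bigr\Vert^2
    = (b+\varepsilon)^2 \Vert x \Vert^2
      - (b + 2\varepsilon)\,\langle x,y\rangle^2 .
% Divide by b+\varepsilon > 0 and use (b+2\varepsilon)/(b+\varepsilon) \ge 1:
\langle x,y\rangle^2 \le (b+\varepsilon)\,\Vert x \Vert^2
    = \bigl(\Vert y \Vert^2 + \varepsilon\bigr)\,\Vert x \Vert^2 .
% Since this holds for every \varepsilon > 0, and
% (a \le c + \varepsilon \ \forall \varepsilon > 0) \implies a \le c
% is constructively valid for reals, we conclude
% \langle x,y\rangle^2 \le \Vert x \Vert^2 \,\Vert y \Vert^2 .
```

No decision $\Vert y \Vert = 0$ versus $\Vert y \Vert > 0$ is ever made; every division is by the strictly positive $b+\varepsilon$. The complex case should reduce to this one by multiplying $y$ by a suitable unit scalar, but I have not checked that step constructively.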
We know that $(-1)^{\frac 12}$, $(-1)^{\frac 14}$, ... $(-1)^{\frac 1{even}}$ all result in complex answers, whereas $(-1)^{\frac 13}$, $(-1)^{\frac 15}$, ... $(-1)^{\frac 1{odd}}$ all result in real answers. I believe this comes from the definitions that $(-1)^{\frac 12} = \sqrt{-1} = i$ and $(-1)^{\frac 13}=\sqrt[3]{-1}=-1$. Moreover, of the set that provides real answers, if the numerator of the exponent is even, the result then becomes $+1$ instead of $-1$. This explains why the graph $y = (-1)^x$ results in an infinite amount of discontinuous points on $y=1$ and $y=-1$, only existing where the denominator of the exponent is odd:
But the sets I provided only explain $(-1)^{rational}$; in other words, this assumes that we are only raising $-1$ to a fraction. When I plug in $(-1)^e$, $(-1)^{\pi}$, $(-1)^{\sqrt{2}}$, or any other irrational number, the answer comes back undefined/imaginary.
This cannot come from the explanation that I provided earlier because, even though it is the fundamental explanation for where imaginary results come from, it assumes the exponent is a fraction. My intuition is that there would be some other rule that would sometimes result in an imaginary answer and sometimes result in a real number - similar to how rational exponents work - but it seems to always be imaginary. What, then,
is the rule for this, and can we prove that this is the case for all irrational numbers? |
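The usual rule is to take the principal value $(-1)^x = e^{x\log(-1)} = e^{i\pi x}$, which is real exactly when $\sin(\pi x)=0$, i.e. when $x$ is an integer; the real values at odd-denominator rationals come from choosing a non-principal root instead. A quick numerical sketch (Python's `cmath` uses the principal branch):

```python
import cmath
import math

def principal_pow_minus1(x):
    """Principal value of (-1)**x, i.e. exp(i*pi*x) = cos(pi x) + i sin(pi x)."""
    return cmath.exp(1j * math.pi * x)

print(principal_pow_minus1(2))             # integer exponent -> real (~ 1+0j)
print(principal_pow_minus1(1 / 3))         # ~ 0.5 + 0.866j, not -1!
print(principal_pow_minus1(math.sqrt(2)))  # genuinely complex
```

On the principal branch even $(-1)^{1/3}$ is complex; the familiar real answer $-1$ is one of the other cube roots. For irrational $x$, $\pi x$ is never an integer multiple of $\pi$, so the principal value is never real.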
I have been beating my head against this question for quite some time, I do not know whether it has been asked before, but I can't find any information about it!
I am taking Calculus 1 course and I cannot grasp the concept of a derivative. From what I understand, a derivative is a function with the following signature:
$$(\text{derivative with respect to particular free variable}) :: (\lambda x \to (f)\; x) \to (\lambda x \to (f') x)$$
also phrased as
$$(\text{derivative with respect to particular free variable}) = ((\lambda x \to (f) x) \to (\lambda x \to (f') x))$$
e.g:
$$(\text{derivative with respect to x}) (\lambda x \to x^2) = (\lambda x\to 2\cdot x)$$
This makes sense but one thing bothers me: what does "derivative with respect to x" mean? In particular, in single variable Calculus this notation assumes $x$ is always a particular variable such as
['a'..'z'].
This works fine for basic derivatives such as: $$(\text{derivative with respect to x}) (\lambda x \to \ln x) = (\lambda x \to \tfrac{1}{x})$$
What I would like to understand is: Why does (derivative with respect to x) make sense but $(\text{derivative with respect to} (\lambda x \to 2))$ and
$(\text{derivative with respect to} (\lambda x \to \ln(x)) $ do not seem to make any sense to me.
In classical terms, I cannot do (derivative of $\ln x$ with respect to $1$) nor (derivative of $\ln x$ with respect to $\ln x$) without my head starting to hurt, because those concepts were not taught to me yet, or I have not paid enough attention to understand them.
Can somebody please explain what the following two really mean?
Derivative of $f(x)$ with respect to a constant such as $1,2,3,\ldots 9999$ Derivative of $f(x)$ with respect to a function such as $\ln(x)$, $\sin(x)$, $\cos(x)$
Thanks ahead of time, this has been bothering me for quite a few years!
$\langle$Editor's note: I've left the following in the post for archival's sake.$\rangle$
PS: I am terrible at formatting so to the great ones responsible for formatting noob's questions (I thank you much for your work)
convert \ to lambdas convert d/dx to symbolic d/dx notation (not the worded derivative ones) convert arrows to arrows used in set theory/category theory keep the "(derivative of ... with respect to ...)" as they are, as I have no idea how to express them differently, dA/dB doesn't seem to make sense to me since derivatives are taught to be polymorphic function rather than a function of two variables, and division only makes it even more confusing due to the abuse of notation. (Feel free to give me a link to study formatting, I can't find it). |
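On the last two items of the question above: "derivative of $f$ with respect to $g$" is normally read via the chain rule as $\frac{df}{dg} = \frac{df/dx}{dg/dx}$, so the derivative of $f$ with respect to $f$ itself is $1$, while "with respect to a constant" is degenerate, since then $dg/dx = 0$ and the quotient is undefined. A numerical sketch of that reading:

```python
import math

def d_wrt(f, g, x, h=1e-6):
    """Estimate df/dg at x as (df/dx)/(dg/dx) via central differences."""
    df = f(x + h) - f(x - h)
    dg = g(x + h) - g(x - h)
    return df / dg  # blows up when g is constant: dg = 0, as expected

print(d_wrt(math.log, math.log, 2.0))        # ~ 1.0: d(ln x)/d(ln x) = 1
print(d_wrt(lambda t: t**2, math.log, 2.0))  # ~ 8.0: d(x^2)/d(ln x) = 2x^2 at x=2
```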
I've only found a recursive algorithm of the extended Euclidean algorithm. I'd like to know how to use it by hand. Any idea?
Perhaps the easiest way to do it by hand is in analogy to Gaussian elimination or triangularization, except that, since the coefficient ring is not a field, one has to use the division / Euclidean algorithm to iteratively decrease the coefficients till zero. In order to compute both $\rm\,gcd(a,b)\,$
and its Bezout linear representation $\rm\,j\,a+k\,b,\,$ we keep track of such linear representations for each remainder in the Euclidean algorithm, starting with the trivial representation of the gcd arguments, e.g. $\rm\: a = 1\cdot a + 0\cdot b.\:$ In matrix terms, this is achieved by augmenting (appending) an identity matrix that accumulates the effect of the elementary row operations. Below is an example that computes the Bezout representation for $\rm\:gcd(80,62) = 2,\ $ i.e. $\ 7\cdot 80\: -\: 9\cdot 62\ =\ 2\:.\:$ See this answer for a proof and for conceptual motivation of the ideas behind the algorithm (see the Remark below if you are not familiar with row operations from linear algebra).
    For example, to solve  m x + n y = gcd(m,n)  one begins with
    two rows  [m  1  0], [n  0  1],  representing the two
    equations  m = 1m + 0n,  n = 0m + 1n.  Then one executes
    the Euclidean algorithm on the numbers in the first column,
    doing the same operations in parallel on the other columns.

    Here is an example:  d = x(80) + y(62)  proceeds as:

                        in equation form   | in row form
                      ---------------------+------------
                      80 =   1(80) + 0(62) |  80   1   0
                      62 =   0(80) + 1(62) |  62   0   1
     row1 -   row2 -> 18 =   1(80) - 1(62) |  18   1  -1
     row2 - 3 row3 ->  8 =  -3(80) + 4(62) |   8  -3   4
     row3 - 2 row4 ->  2 =   7(80) - 9(62) |   2   7  -9
     row4 - 4 row5 ->  0 = -31(80) +40(62) |   0 -31  40

    The row operations above are those resulting from applying
    the Euclidean algorithm to the numbers in the first column,

            row1  row2  row3  row4  row5
    namely:  80,   62,   18,    8,    2  = Euclidean remainder sequence

    for example  62 - 3(18) = 8,  the 2nd step in the Euclidean algorithm,
    becomes:  row2 - 3 row3 = row4  when extended to all columns.
In effect we have row-reduced the first two rows to the last two. The matrix effecting the reduction is in the bottom right corner. It starts as the identity, and is multiplied by each elementary row operation, hence it accumulates the product of all the row operations, namely:
$$ \left[ \begin{array}{ccc} 7 & -9\\ -31 & 40\end{array}\right ] \left[ \begin{array}{ccc} 80 & 1 & 0\\ 62 & 0 & 1\end{array}\right ] \ =\ \left[ \begin{array}{ccc} 2\ & \ \ \ 7\ & -9\\ 0\ & -31\ & 40\end{array}\right ] \qquad\qquad\qquad\qquad\qquad $$
Notice row 1 is the particular solution: 2 = 7(80) - 9(62). Notice row 2 is the homogeneous solution: 0 = -31(80) + 40(62), so the general solution is any linear combination of the two:

    n row1 + m row2  ->  2n = (7n-31m) 80 + (40m-9n) 62

The same row/column reduction techniques tackle arbitrary systems of linear Diophantine equations. Such techniques generalize easily to similar coefficient rings possessing a Euclidean algorithm, e.g. polynomial rings $F[x]$ over a field, Gaussian integers $\mathbb Z[i]$. There are many analogous interesting methods, e.g. search on keywords: Hermite / Smith normal form, invariant factors, lattice basis reduction, continued fractions, Farey fractions / mediants, Stern-Brocot tree / diatomic sequence.
Remark $ $ As an optimization, we can omit one of the Bezout coefficient columns (being derivable from the others). Then the calculations have a natural interpretation as modular fractions (though the "fractions" are multi-valued), e.g. follow the prior link.
Below I elaborate on the row operations to help readers unfamiliar with linear algebra.
Let $\,r_i\,$ be the Euclidean remainder sequence. Above $\, r_1,r_2,r_3\ldots = 80,62,18\ldots$ Given linear combinations $\,r_j = a_j m + b_j n\,$ for $\,r_{i-1}\,$ and $\,r_i\,$ we can calculate a linear combination for $\,r_{i+1} := r_{i-1}\bmod r_i = r_{i-1} - q_i r_i\,$ by substituting said combinations for $\,r_{i-1}\,$ and $\,r_i,\,$ i.e.
$$\begin{align} r_{i+1}\, &=\, \overbrace{a_{i-1} m + b_{i-1}n}^{\Large r_{i-1}}\, -\, q_i \overbrace{(a_i m + b_i n)}^{\Large r_i}\\[.3em] {\rm i.e.}\quad \underbrace{r_{i-1} - q_i r_i}_{\Large r_{i+1}}\, &=\, (\underbrace{a_{i-1}-q_i a_i}_{\Large a_{i+1}})\, m\, +\, (\underbrace{b_{i-1} - q_i b_i}_{\Large b_{i+1}})\, n \end{align}$$
Thus the $\,a_i,b_i\,$ satisfy the same recurrence as the remainders $\,r_i,\,$ viz. $\,f_{i+1} = f_{i-1}-q_i f_i.\,$ This implies that we can carry out the recurrence in parallel on row vectors $\,[r_i,a_i,b_i]$ representing the equation $\, r_i = a_i m + b_i n\,$ as follows
$$\begin{align} [r_{i+1},a_{i+1},b_{i+1}]\, &=\, [r_{i-1},a_{i-1},b_{i-1}] - q_i [r_i,a_i,b_i]\\ &=\, [r_{i-1},a_{i-1},b_{i-1}] - [q_i r_i,q_i a_i, q_i b_i]\\ &=\, [r_{i-1}-q_i r_i,\ a_{i-1}-q_i a_i,\ b_{i-1}-q_i b_i] \end{align}$$
which written in the tabular format employed far above becomes
$$\begin{array}{ccc} &r_{i-1}& a_{i-1} & b_{i-1}\\ &r_i& a_i &b_i\\ \rightarrow\ & \underbrace{r_{i-1}\!-q_i r_i}_{\Large r_{i+1}} &\underbrace{a_{i-1}\!-q_i a_i}_{\Large a_{i+1}}& \underbrace{b_{i-1}-q_i b_i}_{\Large b_{i+1}} \end{array}$$
Thus the
extended Euclidean step is: compute the quotient $\,q_i = \lfloor r_{i-1}/r_i\rfloor$ then multiply row $i$ by $q_i$ and subtract it from row $i\!-\!1.$ Said componentwise: in each column $\,r,a,b,\,$ multiply the $i$'th entry by $q_i$ then subtract it from the $i\!-\!1$'th entry, yielding the $i\!+\!1$'th entry. If we ignore the 2nd and 3rd columns $\,a_i,b_i$ then this is the usual Euclidean algorithm. The above extends this algorithm to simultaneously compute the representation of each remainder as a linear combination of $\,m,n,\,$ starting from the obvious initial representations $\,m = 1(m)+0(n),\,$ and $\,n = 0(m)+1(n).\,$
This is more a comment on the method explained by Bill Dubuque than a proper answer in itself, but I think there is a remark so obvious that I don't understand why it is hardly ever made in texts discussing the extended Euclidean algorithm. This is the observation that you can save yourself half of the work by computing only
one of the Bezout coefficients. In other words, instead of recording for every new remainder $r_i$ a pair of coefficients $k_i,l_i$ so that $r_i=k_ia+l_ib$, you need to record only $k_i$ such that $r_i\equiv k_ia\pmod b$. Once you will have found $d=\gcd(a,b)$ and $k$ such that $d\equiv ka\pmod b$, you can then simply put $l=(d-ka)/b$ to get the other Bezout coefficient. This simplification is possible because the relation that gives the next pair of intermediate coefficients is perfectly independent for the two coefficients: say you have$$\begin{aligned} r_i&=k_ia+l_ib\\ r_{i+1}&=k_{i+1}a+l_{i+1}b\end{aligned}$$and Euclidean division gives $r_i=qr_{i+1}+r_{i+2}$, then in order to get$$ r_{i+2}=k_{i+2}a+l_{i+2}b$$one can take $k_{i+2}=k_i-qk_{i+1}$ and $l_{i+2}=l_i-ql_{i+1}$, where the equation for $k_{i+2}$ does not need $l_i$ or $l_{i+1}$, so you can just forget about the $l$'s. In matrix form, the passage is from$$ \begin{pmatrix} r_i&k_i&l_i\\ r_{i+1}&k_{i+1}&l_{i+1}\end{pmatrix} \quad\text{to}\quad \begin{pmatrix} r_{i+2}&k_{i+2}&l_{i+2}\\ r_{i+1}&k_{i+1}&l_{i+1}\end{pmatrix}$$by subtracting the second row $q$ times from the first, and it is clear that the last two columns are independent, and one might as well just keep the $r$'s and the $k$'s, passing from$$ \begin{pmatrix} r_i&k_i\\ r_{i+1}&k_{i+1}\end{pmatrix} \quad\text{to}\quad \begin{pmatrix} r_{i+2}&k_{i+2}\\ r_{i+1}&k_{i+1}\end{pmatrix}$$instead.
A very minor drawback is that the relation $r_i=k_ia+l_ib$ that should hold for every row is maybe a wee bit easier to check by inspection than $r_i\equiv k_ia\pmod b$, so that computational errors could slip in a bit more easily. But really, I think that with some practice this method is just as safe and faster than computing both coefficients. Certainly when programming this on a computer there is no reason at all to keep track of both coefficients.
A final bonus it that in many cases where you apply the extended Euclidean algorithm you are only interested in one of the Bezout coefficients in the first place, which saves you the final step of computing the other one. One example is computing inverse modulo a prime number $p$: if you take $b=p$, and $a$ is not divisible by it, then you know beforehand that you will find $d=1$, and the coefficient $k$ such that $d\equiv ka\pmod p$ is just the inverse of $a$ modulo $p$ that you were after.
The way to do this is due to Blankinship "A New Version of the Euclidean Algorithm", AMM 70:7 (Sep 1963), 742-745. Say we want $a x + b y = \gcd(a, b)$, for simplicity with positive $a$, $b$ with $a > b$. Set up auxiliary vectors $(x_1, x_2, x_3)$, $(y_1, y_2, y_3)$ and $(t_1, t_2, t_3)$ and keep them such that we always have $x_1 a + x_2 b = x_3$, $y_1 a + y_2 b = y_3$, $t_1 a + t_2 b = t_3$ throughout. The algorithm itself is:
    (x1, x2, x3) := (1, 0, a)
    (y1, y2, y3) := (0, 1, b)
    while y3 <> 0 do
        q := floor(x3 / y3)
        (t1, t2, t3) := (x1, x2, x3) - q * (y1, y2, y3)
        (x1, x2, x3) := (y1, y2, y3)
        (y1, y2, y3) := (t1, t2, t3)
At the end, $x_1 a + x_2 b = x_3 = \gcd(a, b)$. It is seen that $x_3$, $y_3$ behave as in the classic Euclidean algorithm, and it is easily checked that the invariant mentioned is kept all the time.
One can do away with $x_2$, $y_2$, $t_2$ and recover $x_2$ at the end as $(x_3 - x_1 a) / b$.
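For reference, the whole table method sketched in the answers above fits in a few lines of Python (a sketch following the row-operation scheme, not any library's API):

```python
def egcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = x*a + y*b, by row reduction.

    Each row [r, x, y] keeps the invariant r = x*a + y*b throughout.
    """
    old, cur = [a, 1, 0], [b, 0, 1]
    while cur[0] != 0:
        q = old[0] // cur[0]  # Euclidean quotient
        old, cur = cur, [o - q * c for o, c in zip(old, cur)]
    return tuple(old)

print(egcd(80, 62))  # (2, 7, -9): indeed 7*80 - 9*62 = 2, as in the worked table
```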
Just to complement the other answers, there's an alternative form of the extended Euclidean algorithm that requires no backtracking, and which you might find easier to understand and apply. Here's how to solve your problem* using it: $\newcommand{\x}{\phantom} \newcommand{\r}{\color{red}} \newcommand{\g}{\color{green}} \newcommand{\b}{\color{blue}}$
*) …from a duplicate question that I originally wrote this answer for.
$$\begin{aligned} \g{ 0} \cdot 19 + \r{\x+1} \cdot 29 &= \b{ 29} && (1) \\ \g{ 1} \cdot 19 + \r{\x+0} \cdot 29 &= \b{ 19} && (2) \\ \g{-1} \cdot 19 + \r{\x+1} \cdot 29 &= \b{ 10} && (3) = (1) - (2) \\ \g{ 2} \cdot 19 + \r{ -1} \cdot 29 &= \b{\x09} && (4) = (2) - (3) \\ \g{-3} \cdot 19 + \r{\x+2} \cdot 29 &= \b{\x01} && (5) = (3) - (4) \end{aligned}$$
…and now you have your solution: $\g{-3} \cdot 19 \equiv \b{1} \pmod{29}$, so the inverse of $19$ modulo $29$ is $-3$ (or $29 - 3 = 26$, if you prefer a non-negative solution).
In effect, what we're doing is trying to find a solution to the linear equation $\g x \cdot 19 + \r k \cdot 29 = \b r$ with the smallest possible $\b r > 0$. We do this by starting with the two trivial solutions $(1)$ and $(2)$ above, and then generate new solutions with a smaller and smaller $\b r$ by always subtracting both sides of the last solution from the one before it
as many times as needed to get a smaller $\b r$ than we have so far. (In your example, that's only once each time, but I'll show another example below where that's not the case.)
Eventually, we'll either find a solution with $\b r = 1$, in which case the corresponding $\g x$ coefficient is the inverse we want,
or we'll end up with $\b r = 0$, in which case the previous solution's $\b r > 1$ is the greatest common divisor of the number we're trying to invert and the modulus, and thus no inverse exists. (Of course, we could also just keep going until $\b r = 0$ in any case, and then check the previous line to see if $\b r$ there equals $1$ or not, but that would be extra work we can easily avoid.)
Also, it's worth noting that we're not actually using the $\r k$ coefficients for anything, so if all you're interested in is finding the modular inverse (and/or the GCD), you don't actually have to calculate those. But showing them makes it clearer
why the algorithm works. (Also, if you do calculate $\r k$, it's easy to verify that you didn't make any mistakes just by checking that the last equation with $\b r = 1$ really holds.)
Anyway, here's a couple more worked examples to illustrate some cases that your example doesn't. To start with, let's try to find the inverse of $13$ modulo $29$:
$$\begin{aligned} \g{ 0} \cdot 13 + \r{\x+1} \cdot 29 &= \b{ 29} && (1) \\ \g{ 1} \cdot 13 + \r{\x+0} \cdot 29 &= \b{ 13} && (2) \\ \g{-2} \cdot 13 + \r{\x+1} \cdot 29 &= \b{\x03} && (3) = (1) - 2 \cdot (2) \\ \g{ 9} \cdot 13 + \r{ -4} \cdot 29 &= \b{\x01} && (4) = (2) - 4 \cdot (3) \\ \end{aligned}$$
This time, we could (and needed to) subtract solution $(2)$ from $(1)$ twice, since $\lfloor 29 \mathbin/ 13 \rfloor = 2$. And, similarly, we could subtract $(3)$ from $(2)$ four times, since $\lfloor 13 \mathbin/ 3 \rfloor = 4$. And we can verify that $\g 9$ is indeed the inverse of $13$ modulo $29$ just by checking that $9 \cdot 13 - 4 \cdot 29$ indeed equals $1$.
Now let's try an example where the inverse does
not exist, like trying to find the inverse of $15$ modulo $27$:
$$\begin{aligned} \g{ 0} \cdot 15 + \r{\x+1} \cdot 27 &= \b{ 27} && (1) \\ \g{ 1} \cdot 15 + \r{\x+0} \cdot 27 &= \b{ 15} && (2) \\ \g{-1} \cdot 15 + \r{\x+1} \cdot 27 &= \b{ 12} && (3) = (1) - (2) \\ \g{ 2} \cdot 15 + \r{ -1} \cdot 27 &= \b{\x03} && (4) = (2) - (3) \\ \g{-9} \cdot 15 + \r{\x+5} \cdot 27 &= \b{\x00} && (5) = (3) - 4 \cdot (4) \\ \end{aligned}$$
Oops. The last solution, with $\b r = 0$, is quite useless to us (except for checking that we did the arithmetic right). From the previous solution $(4)$, however, we can read that $\gcd(15, 27) = 3$, and even that $\g x = 2$ is a solution to the "generalized inverse" equation $\g x \cdot 15 \equiv \gcd(15, 27) \pmod{27}$.
Ps. See also this answer I wrote on crypto.SE last year, explaining the same algorithm from a slightly different viewpoint. |
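The subtract-and-swap scheme above translates directly into code; here is a sketch that, as suggested, tracks only the $\g x$ column:

```python
def mod_inverse(a, m):
    """Inverse of a modulo m via the extended Euclidean algorithm.

    Only the x coefficient is tracked: each row maintains r = x*a (mod m).
    """
    r0, x0 = m, 0
    r1, x1 = a % m, 1
    while r1 > 1:
        q = r0 // r1  # how many times the last row fits into the previous one
        r0, x0, r1, x1 = r1, x1, r0 - q * r1, x0 - q * x1
    if r1 != 1:
        raise ValueError("no inverse: gcd(a, m) > 1")
    return x1 % m

print(mod_inverse(19, 29))  # 26, i.e. -3 mod 29, matching the first example
print(mod_inverse(13, 29))  # 9, matching the second example
```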
"Is this really a proof?" is the exact question e-mailed to me today from an undergraduate mathematics student whom I know as a highly competent student. The one sentence question was accompanied with the following demo: I am looking for a down-to-earth, non-authoritative answer who one may gi...
The 2005 AMS article/survey on experimental mathematics [1] by Bailey/Borwein mentions many remarkable successes in the field, including new formulas for $\pi$ that were discovered via the PSLQ algorithm, as well as many other examples. However, it appears to glaringly leave out any description of t...
I'm reading the paper "The classification of algebras by dominant dimension" by Bruno J. Mueller; the link is here: http://cms.math.ca/10.4153/CJM-1968-037-9. In the proof of lemma 3 on page 402, there is a place I can't understand. Who can tell me what $E_R \oplus * \cong \oplus X_R$ and $_AHom_...
I've been really trying to prove Ramanujan Partition theory, and different sources give me different explanations. Can someone please explain how Ramanujan (and Euler) found out the following equation for the number of partitions for a given integer? Any help is appreciated thank you so much! $...
I was wondering what role non-rigorous, heuristic type arguments play in rigorous math. Are there examples of rigorous, formal proofs in which a non-rigorous reasoning still plays a central part? Here is an example of what I am thinking of. You want to prove that some formula $f(n)$ holds, and y...
Perhaps the "proofs" of ABC conjecture or newly released weak version of twin prime conjecture or alike readily come to your mind. These are not the proofs I am looking for. Indeed my question was inspired by some other posts seeking for a hint to understand a certain more or less well-establised...
I do not know exactly how to characterize the class of proofs that interests me, so let me give some examples and say why I would be interested in more. Perhaps what the examples have in common is that a powerful and unexpected technique is introduced that comes to seem very natural once you are ...
Some conjectures are disproved by a single counter-example and garner little or no further interest or study, such as (to my knowledge) Euler's conjecture in number theory that at least $n$ $n^{th}$ powers are required to sum to an $n^{th}$ power, for $n>2$ (disproved by counter-example by L. J. ...
There is a tag called proofs. This tag has empty tag-info. Without any usage guidance it is quite likely to be used incorrectly. The fact that there are many deleted questions having this tag can be considered supporting evidence of this. (According to SEDE there are 26 such questions - ...
I would like to know how you would rigorously introduce the trigonometric functions ($\sin(x)$ and relatives) to first year calculus students. Suppose they have a reasonable definition of $\mathbb{R}$ (as Cauchy closure of the rationals, or as Dedekind cuts, or whatever), but otherwise require as...
After having a solid year long undergraduate course in abstract algebra, I'm interested in learning algebra at a more advanced level, especially in the context of category theory. I've done some research, and from what I've read, it seems that using Lang as a main text and Hungerford as a supple...
Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me doing that. So after this longish introduction, h...
Usually, during lectures Turing Machines are firstly introduced from an informal point of view (for example, in this way: http://en.wikipedia.org/wiki/Turing_machine#Informal_description) and then their definition is formalized (for example, in this way: http://en.wikipedia.org/wiki/Turing_machin...
Background for the curious reader:
An ordinal $\beta$ is a transitive set in the sense that $\alpha\in\beta$ implies $\alpha\subset\beta$. Any ordinal is naturally well-ordered under $\in$ (so any subset of it has a least element), and any well-order is isomorphic to an ordinal. In fact, any class (naively, collection) of ordinals is itself well-ordered under the relation $\in$. This fact allows for the usage of transfinite induction and transfinite recursion.
We have three types of ordinals: the empty set $0=\emptyset=\{\; \}$, successor ordinals $S(\alpha)=\alpha\cup\{\alpha\}$ where $\alpha$ is an ordinal, and limit ordinals, which are all the other ones. Finite ordinals are either $0$ or successors; the set $\omega=\{0,1,2,\dots\}$ is a limit ordinal. A limit ordinal $\alpha$ has the property that $\alpha=\sup_{\beta<\alpha}\{\beta\}=\bigcup_{\beta<\alpha}\beta$.
My question
Ordinal arithmetic can be defined recursively as follows:
$\alpha+0=\alpha$, $\alpha+S(\beta)=S(\alpha+\beta)$, $\alpha+\sup_{\gamma<\beta}\{\gamma\}=\sup_{\gamma<\beta}\{\alpha+\gamma\}$; $\alpha\cdot0=0$, $\alpha\cdot S(\beta)=\alpha\cdot\beta+\alpha$, $\alpha\cdot\sup_{\gamma<\beta}\{\gamma\}=\sup_{\gamma<\beta}\{\alpha\cdot\gamma\}$; $\alpha^0=1$, $\alpha^{S(\beta)}=\alpha^\beta\cdot\alpha$, $\alpha^{\sup_{\gamma<\beta}\{\gamma\}}=\sup_{\gamma<\beta}\{\alpha^\gamma\}$.
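For finite ordinals the successor clauses alone drive the recursion. Here is a minimal Python sketch under the von Neumann coding (the helper names `succ`, `pred`, `to_int` are mine, not standard); the limit clauses of course have no finite counterpart:

```python
def zero():
    return frozenset()            # 0 = {}

def succ(a):
    return a | frozenset({a})     # S(α) = α ∪ {α}

def pred(b):
    # For a finite successor ordinal, the predecessor is its largest element.
    return max(b, key=len)

def add(a, b):
    # α + 0 = α;  α + S(β) = S(α + β)
    return a if b == zero() else succ(add(a, pred(b)))

def mul(a, b):
    # α · 0 = 0;  α · S(β) = α·β + α
    return zero() if b == zero() else add(mul(a, pred(b)), a)

def ordinal(n):
    a = zero()
    for _ in range(n):
        a = succ(a)
    return a

def to_int(a):
    return len(a)                 # in the von Neumann coding, |n| = n
```

For example, `to_int(add(ordinal(2), ordinal(3)))` gives `5` and `to_int(mul(ordinal(2), ordinal(3)))` gives `6`.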
Alternatively, one can define:
$\alpha+\beta$ is the unique ordinal isomorphic to the disjoint union $\{0\}\times\alpha\cup\{1\}\times\beta$ given the lexicographic order. $\alpha\cdot\beta$ is the unique ordinal isomorphic to the Cartesian product $\beta\times\alpha$ given the lexicographic order.
As the disjoint union and Cartesian product are simply the categorical coproduct and the categorical product, I wonder if there is some way to actually categorify these alternate definitions. Additionally, I am not aware of any non-recursive version of exponentiation, so I would be curious if a categorical formulation of addition and product of ordinals also allows for a categorical (hence non-recursive) formulation of exponentiation.
Let $X$ be Banach space and $Y$ a closed subspace of $X$. Assume that there exist a closed "subset" $Z$ of $X$ with the properties:
$Z\cap Y=\{0\}$ and every $x\in X$ can be written in a unique form as $x=y+z$ with $y\in Y$ and $z\in Z$
Can we conclude that $Y$ is complemented in $X$?
Edit: I'm not asking if $Z$ is a complement of $Y$ in $X$. Indeed, $Z$ does not need to be linear. What I am asking is whether we can find a closed linear subspace $W\subset X$ such that $W$ is a complement of $Y$ in $X$.
NB: I'm assuming from the way you worded the question that your '2D manifold' is supposed to be a surface in $xyz$-space and that the 'projection' is the projection to the $xy$-plane. You may have had a more general situation in mind, such as a surface in $\mathbb{R}^n$ for $n>2$ or a more general projection than the obvious one. In that case, the answer would change, but there is still a way of finding the answer. Let me know if you want to know about these more general situations.
You are asking a special case of an inverse problem in the calculus of variations, and there is a standard method for solving this problem. Note that, it is
not true that every 2-parameter family $\mathscr{F}$ of curves in the plane (or an open domain in the plane) that has the property that there is a unique curve in the family passing through any given point in any given direction is the set of geodesics of some Riemannian metric, so it could easily be that most of the curve families $\mathscr{F}$ you might consider aren't the projections of geodesics of a metric of the form that you describe, i.e., a metric of the form$$g = \sqrt{dx^2+dy^2 + df^2}$$ where $f = f(x,y)$ is a function on the given domain in the plane.
The reconstruction method involves computing the Euler-Lagrange equation for the above $g$ as a Lagrangian for curves in the plane and writing it in the form $$y'' = F(x,y,y')$$for a certain function $F$. Then you set this $F$ equal to the equation that defines your curve family. For example, for the family $y = c_1 + c_2 e^x$, you'd have $y'' = y'= F(x,y,y')$. This will give you an overdetermined equation for $f$ and then you apply the usual theory of overdetermined systems to determine whether or not there is a solution.
Here, more explicitly, is what this amounts to: If your curve family is described by a second order equation of the form $y'' = G(x,y,y')$, then any function $f(x,y)$ on a domain such that the above metric has the curve family as geodesics must satisfy the nonlinear equation$$\begin{align}(1{+}f_x(x,y)^2{+}f_y(x,y)^2)G(x,y,y') &= - f_y(x,y)f_{xx}(x,y) \\&\ \ \ \ - \bigl(2f_y(x,y)f_{xy}(x,y){-}f_x(x,y)f_{xx}(x,y)\bigr)y'\\&\ \ \ \ +\bigl(2f_x(x,y)f_{xy}(x,y){-}f_y(x,y)f_{yy}(x,y)\bigr)(y')^2\\&\ \ \ \ +f_x(x,y)f_{yy}(x,y)(y')^3.\end{align}$$Note that, one thing you'd know right off the bat is that if your defining equation for the curve family isn't of the form $y'' = G(x,y,y')$ where $G$ is at most cubic in $y'$, then there will be no solution. This is only a first necessary condition, though, it is not sufficient. To see more conditions, suppose that the curve family is the set of solutions of$$y''=G(x,y,y')=a_0(x,y)+a_1(x,y)\ y'+a_2(x,y)\ (y')^2+a_3(x,y)\ (y')^3.\tag{1}$$Substituting this into the above equation and equating like powers of $y'$ will yield $4$ second order equations for $f$ in terms of the functions $a_i$, namely$$\begin{align}- f_y\ f_{xx} &= a_0\ (1{+}{f_x}^2{+}{f_y}^2)\\-2f_y\ f_{xy}{+}f_x\ f_{xx} &= a_1\ (1{+}{f_x}^2{+}{f_y}^2)\\ 2f_x\ f_{xy}{-}f_y\ f_{yy} &= a_2\ (1{+}{f_x}^2{+}{f_y}^2)\\f_x\ f_{yy} &= a_3\ (1{+}{f_x}^2{+}{f_y}^2).\end{align}\tag{2}$$For most choices of $a_i$ these equations are incompatible.
Thus, for example, if $G\equiv0$, $f$ must satisfy the four second order equations$$f_y\ f_{xx} = f_x\ f_{xx}-2f_y\ f_{xy} = f_y\ f_{yy}-2f_x\ f_{xy} = f_x\ f_{yy} = 0.$$Of course, this implies that, at any point where $(f_x,f_y)$ is nonzero, one must have $f_{xx}=f_{xy}=f_{yy}=0$, so the only solutions to these equations are to have $f$ be a linear function of $x$ and $y$ (and all linear functions are solutions). In the case in which $G\equiv 2$ (the monic quadratic polynomials), the equations become$$f_y\ f_{xx} + 2(1{+}{f_x}^2{+}{f_y}^2) = f_x\ f_{xx}-2f_y\ f_{xy} = f_y\ f_{yy}-2f_x\ f_{xy} = f_x\ f_{yy} = 0,$$and it is not difficult to see that there are no solutions to this overdetermined system.In the case in which $G\equiv y'$ (your other case), the equations become$$f_y\ f_{xx} = f_x\ f_{xx}-2f_y\ f_{xy} + 2(1{+}{f_x}^2{+}{f_y}^2) = f_y\ f_{yy}-2f_x\ f_{xy} = f_x\ f_{yy} = 0,$$and it is easy to see that there are no solutions to this system either.
One useful observation for solving system (2) is that it implies the first order equation$$a_0\ {f_x}^3 + a_1\ {f_x}^2f_y + a_2\ f_x{f_y}^2 + a_3\ {f_y}^3=0,\tag{3}$$which shows that, when the functions $a_i$ are not all zero, the gradient of $f$ must point in one of three possible directions. Differentiating (3) and plugging this back into the system (2) and doing a bit more work, one can derive the necessary and sufficient conditions on the $a_i$ that there exist a solution $f$.
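Identity (3) is a purely algebraic consequence of system (2), and the cancellation can be machine-checked. A small sympy sketch (symbol names are mine):

```python
import sympy as sp

fx, fy, fxx, fxy, fyy = sp.symbols('f_x f_y f_xx f_xy f_yy')
K = 1 + fx**2 + fy**2  # the common factor 1 + f_x^2 + f_y^2

# Coefficients a_i read off from system (2)
a0 = -fy * fxx / K
a1 = (fx * fxx - 2 * fy * fxy) / K
a2 = (2 * fx * fxy - fy * fyy) / K
a3 = fx * fyy / K

# Identity (3): the cubic form annihilates the gradient of f
expr = a0 * fx**3 + a1 * fx**2 * fy + a2 * fx * fy**2 + a3 * fy**3
assert sp.simplify(expr) == 0
```

The numerator cancels term by term, which is why `simplify` returns 0.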
There are many resources online about how to implement MLP in tensorflow, and most of the samples do work :) But I am interested in a particular one, that I learned from https://www.coursera.org/learn/machine-learning. In which, it uses a
cost function defined as follow:
$ J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[ -y_k^{(i)} \log\left((h_\theta(x^{(i)}))_k\right) - (1 - y_k^{(i)}) \log\left(1 - (h_\theta(x^{(i)}))_k\right) \right] $
$h_\theta$ is the
sigmoid function.
And there's my implementation:
# one hidden layer MLP
x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])
W_h1 = tf.Variable(tf.random_normal([784, 512]))
h1 = tf.nn.sigmoid(tf.matmul(x, W_h1))
W_out = tf.Variable(tf.random_normal([512, 10]))
y_ = tf.matmul(h1, W_out)
# cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(y_, y)
cross_entropy = tf.reduce_sum(- y * tf.log(y_) - (1 - y) * tf.log(1 - y_), 1)
loss = tf.reduce_mean(cross_entropy)
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# train
with tf.Session() as s:
    s.run(tf.initialize_all_variables())
    for i in range(10000):
        batch_x, batch_y = mnist.train.next_batch(100)
        s.run(train_step, feed_dict={x: batch_x, y: batch_y})
        if i % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={x: batch_x, y: batch_y})
            print('step {0}, training accuracy {1}'.format(i, train_accuracy))
I think the definition for the layers are correct, but the problem is in the
cross_entropy. If I use the first one, the one that got commented out, the model converges quickly; but if I use the second one, which I think/hope is the translation of the previous equation, the model won't converge.
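A likely culprit: in the hand-written version, `y_` is a raw logit (it can be negative or exceed 1), so `tf.log(y_)` produces NaNs, whereas the built-in op applies the sigmoid internally using a numerically stable rewrite. A numpy sketch of that standard rewrite (not the exact TF source, just the documented formula):

```python
import numpy as np

def stable_sigmoid_xent(logits, labels):
    # Equivalent to -y*log(sigmoid(z)) - (1-y)*log(1-sigmoid(z)),
    # rewritten so no log ever sees a non-positive argument.
    z, y = logits, labels
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

def naive_xent(probs, labels):
    # Only valid when probs are already in (0, 1) -- not true of raw logits.
    return -labels * np.log(probs) - (1 - labels) * np.log(1 - probs)

z = np.array([-3.0, 0.5, 4.0])
y = np.array([0.0, 1.0, 1.0])
p = 1 / (1 + np.exp(-z))                      # sigmoid
assert np.allclose(stable_sigmoid_xent(z, y), naive_xent(p, y))
```

The stable form stays finite even for extreme logits, where the naive form overflows or takes the log of a non-positive number.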
You're confused about the definitions here, and what is given vs. what is to be proved.
You are assuming that $M \times N$ is compact. This means (by definition) that
given any open cover $\mathscr{U} = \{U_\lambda: \lambda \in \Lambda\}$ of $M \times N$, we can find a finite subcover of $\mathscr{U}$.
From this we want to show that $M$ is compact and $N$ is compact.
So we have to show that $M$ is compact. So take any open cover $\{O_i : i \in I \}$ of $M$. Then for every $i$ define $U_i = O_i \times N$. As $O_i$ is open in $M$, $U_i$ is open in $M \times N$, being a product of two open sets.
Also, $\{U_i : i \in I \}$ is a cover: if $(x,y) \in M \times N$, then $x \in O_j$ for some $j \in I$ (as the $O_i$ cover $M$), and then $(x,y) \in U_j$ for that $j$.
So we have an open cover of $M \times N$, and we can apply the assumption that $M \times N$ is compact: we can find finitely many $i_1,\ldots,i_k \in I$ such that $M \times N$ is covered by $U_{i_1},\ldots,U_{i_k}$. But then, if $x \in M$, pick any $p \in N$ (we do need that $N$ is non-empty!) and note that $(x,p) \in M \times N$, so it lies in $U_{i_j} = O_{i_j} \times N$ for some $j \in \{1,\ldots,k\}$. This means (by the definition of the Cartesian product, essentially) that $x \in O_{i_j}$ for that $j$, and so $\{O_{i_1},\ldots,O_{i_k}\}$ is a finite subcover of the $O_i$.
As the starting open cover was arbitrary, $M$ is compact.
Now you also have to show that $N$ is compact, in a similar way.
Of course, $M = \pi_1[M \times N]$ where $\pi_1$ is the continuous projection from $M \times N$ onto $M$, and continuous maps preserve compactness, so this is another way to see it. The above proof is really the special case, for the projection map, of the proof that if $X$ is compact and $f: X \rightarrow Y$ is continuous, then $f[X]$ is compact. Similarly $N = \pi_2[M \times N]$ for the projection onto the second coordinate.
That both spaces must be non-empty is needed: otherwise $\emptyset \times N = \emptyset$ is compact, but we cannot conclude anything about $N$.
I am curious about how well the following technique can produce algebraic structures and semigroups in particular.
Let $(X,\circ)$ be a semigroup. Let $Y$ be a set and let $L:X\rightarrow P(Y)$ be a function. Let $$T:\{(x,y,z)|x,y\in X,z\in L(x\circ y)\}\rightarrow Y\times\{0,1\}$$ be a function where if $T(x,y,z)=(s,0)$, then $s\in L(x)$ and if $T(x,y,z)=(s,1)$, then $s\in L(y)$.
Let $A$ be a set and define an operation $\circ^{A}_{T}$ on $\bigcup_{x\in X}\{x\}\times A^{L(x)}$ by letting $(x,f)\circ^{A}_{T}(y,g)=(x\circ y,h)$ precisely when $$h(z)= \begin{cases} f(s) & \text{if there is some $s$ with } T(x,y,z)=(s,0) \\ g(s) & \text{if there is some $s$ with } T(x,y,z)=(s,1). \end{cases}$$
We shall call the operation $T$ an inducer. If $\circ_{T}^{A}$ is associative for all sets $A$, then we shall say that the operation $T$ is an associative inducer.
For example, if $\lambda$ is a limit ordinal and $L:\lambda\rightarrow P(\lambda)$ is the function where $L(\alpha)=\alpha$, then $(\lambda,+)$ is a semigroup and if we define $$T:\{(\alpha,\beta,\gamma):\alpha,\beta<\lambda,\gamma<\alpha+\beta\}\rightarrow\lambda\times\{0,1\}$$ by letting $T(\alpha,\beta,\gamma)=(\gamma,0)$ whenever $\gamma<\alpha$ and $T(\alpha,\beta,\alpha+\gamma)=(\gamma,1)$ whenever $\gamma<\beta$, then $T$ is an associative inducer and $\circ^{A}_{T}$ is simply the transfinite concatenation operation.
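Restricted to finite ordinals, the induced operation $\circ^{A}_{T}$ of this example is just word concatenation. A quick Python sketch checking associativity on all short words over a two-letter alphabet (truncating $\lambda=\omega$ to lengths below 3; a toy check, not a proof):

```python
from itertools import product

def op(x, y):
    # An element of the induced semigroup is a pair (n, f): a length n and a
    # word f of length n over A; the inducer of the example concatenates.
    (m, f), (n, g) = x, y
    return (m + n, f + g)

A = ['a', 'b']
elems = [(n, f) for n in range(3) for f in product(A, repeat=n)]
for x in elems:
    for y in elems:
        for z in elems:
            assert op(op(x, y), z) == op(x, op(y, z))
```

The interesting content of the definition is, of course, in the transfinite case, where the inducer $T$ tells each coordinate of $x\circ y$ which factor it comes from.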
The multigenic and endomorphic Laver tables give highly non-trivial and combinatorially complex examples of associative inducers.
What are some other examples of associative inducers? Is there a good reference for this notion?
Observe by associating functions $L:X\rightarrow P(Y)$ with relations $R\subseteq X\times Y$, that the notion of an associative inducer is first order axiomatizable by universal formulas.
The notion of an inducer makes sense for other varieties besides the variety of semigroups. For example, the multigenic and endomorphic Laver tables produce self-distributive inducers. I am interested in the inducers that produce algebras in other varieties besides the variety of semigroups as well.
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag-wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(\sqrt g\, g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(\sqrt g\, g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry. Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Say that the exterior differential system (EDS) corresponding to a PDE system is:
$$df-f_x\,dx-f_y\,dy-f_w\,dw-f_z\,dz=0,\\ a_1\,f_x+a_2\,f_y=0,\tag{sys}$$
Of course we also require the independence condition, $dx\wedge dy\wedge dw\wedge dz\neq 0$.
Instead of (sys) can I simply use the following? $$ df +\dfrac{a_2}{a_1}f_y\,dx-f_y\,dy-f_w\,dw-f_z\,dz=0 \tag{sys$^\prime$}$$ I guess what I'm asking above is whether the ideal generated by $$\theta=df +\dfrac{a_2}{a_1}f_y\,dx-f_y\,dy-f_w\,dw-f_z\,dz$$ coincides with the pull-back of the ideal generated by the contact form ($i.e.$ the left-hand side of first line in sys) to the manifold in jet space given by the second line of sys?
I think the answer is yes, because the pullback to the manifold in jet space defined by $a_1\,f_x+a_2\,f_y=0$ commutes with the exterior product and with the exterior derivative, so the ideal generated by $\mathrm{sys^\prime}$ will coincide with the pullback of the ideal generated by $\mathrm{sys}$ — but I am not 100% sure.
Sorry if all of this is obvious, but I'm not a mathematician. Besides Bryant et al. I'm using "Cartan for Beginners" and "Lie's Structural Approach to PDE Systems". I would be grateful for any other useful references.
I am trying to solve a set of coupled, nonlinear ODEs. The only independent variable is a 1-dimensional spatial coordinate; let's call it $x$. For now, I've managed to approximate away some of the coupling between the equations, such that the first one doesn't depend on the second one at all. I solved it with FEM without major difficulty: boundary conditions met, residuals look good, physically meaningful result.
So now I'm left with one nonlinear, second-order ODE where the nonlinear coefficient functions are completely determined by the FEM solution, something like
$$ \begin{cases} A(x) + B(x) f(x) + C(x) f'(x) + D(x) f''(x) = 0 \\ f(0) = f(1) = 0 \end{cases} $$
where $A$, $B$, $C$, and $D$ are known, at least at the resolution of the FEM mesh.
The problem is that, using the same basic code as before, the solution appears to violate the boundary conditions entirely, blowing up as $x \rightarrow 0$ instead of approaching zero (or even any finite value, it doesn't seem to matter what I choose). Thinking this was a problem with my code, I experimented with a finer mesh and with finite differences instead of finite elements, with no success. It finds the same nonsensical result every time.
Finally, I put my numerical data for the coefficients into Mathematica and attempted to solve the above BVP. I let it run for quite a while (maybe 15-30 minutes) before giving up. It never found a solution.
So finally, my question: Is it possible that this equation simply can't be solved with these boundary conditions? For example, I tried changing $f(1) = 0 \rightarrow f'(0) = 0$ and the solution was found almost instantaneously by Mathematica, but it still violated the new boundary condition; somehow, that didn't trigger an error, but it made me suspicious that something "bigger" might be going on here.
I'm looking for some mathematical intuition behind why the numerical methods might fail in these cases and how to mitigate these problems.
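For what it's worth, here is a minimal self-contained finite-difference solver for the linear form $A(x) + B(x)f + C(x)f' + D(x)f'' = 0$ with $f(0)=f(1)=0$, useful for cross-checking a FEM code against a manufactured solution (the coefficient functions below are placeholders, not your actual data):

```python
import numpy as np

def solve_bvp(A, B, C, D, N=200):
    """Solve D f'' + C f' + B f = -A on (0,1) with f(0) = f(1) = 0,
    using second-order central differences on N interior points."""
    h = 1.0 / (N + 1)
    x = np.linspace(h, 1 - h, N)          # interior nodes only
    a, b, c, d = A(x), B(x), C(x), D(x)
    M = np.zeros((N, N))
    for i in range(N):
        M[i, i] = -2 * d[i] / h**2 + b[i]
        if i > 0:
            M[i, i - 1] = d[i] / h**2 - c[i] / (2 * h)
        if i < N - 1:
            M[i, i + 1] = d[i] / h**2 + c[i] / (2 * h)
    # Homogeneous Dirichlet data: the boundary contributions vanish.
    f = np.linalg.solve(M, -a)
    return x, f

# Manufactured test: f'' + pi^2 sin(pi x) = 0 has exact solution f = sin(pi x).
x, f = solve_bvp(A=lambda x: np.pi**2 * np.sin(np.pi * x),
                 B=lambda x: 0 * x, C=lambda x: 0 * x, D=lambda x: 1 + 0 * x)
```

If even this kind of manufactured-solution test shows jagged or blowing-up output, the suspect is the discretized operator (or how the interpolated coefficients enter it), not the equation itself.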
Update
This equation is linear (correcting here so that comments still make sense). I think that's not the root of my problem, though; I tried using a linear solver with the proper stiffness matrix and load vector to solve $-f''(x) = g(x)$, which should be EASY; here, $g(x)$ is the FEM solution to the first equation. The solution meets boundary conditions but is "jagged" (think sawtooth patterns overlaid on a smooth function). Now I think perhaps there's something wrong with substituting the FEM solution for the first equation into the second equation, but I'm not sure why.
Not giving up yet.
Update 2
I think I can safely rule out a problem with this code; I am generating the source files with SymPy, so there shouldn't be any human errors associated with changing functional forms, etc. of the equations I want to solve. As an example of what I'm seeing now, see figure below. The only intuition I have for this is that the code "tries" to meet boundary conditions (which are homogeneous Dirichlet), but the solution, for whatever reason, fundamentally wants to blow up to $-\infty$. So we end up with this "compromise" solution that straddles the two possibilities.
Is this even remotely correct? It doesn't seem to matter what I change at this point, and refining the mesh just increases the frequency of the sawtooth form.
Kakeya problem
Define a
Kakeya set to be a subset [math]A\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]a\in{\mathbb F}_3^n[/math] such that [math]a,a+d,a+2d[/math] all lie in [math]A[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math].
Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.
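The small values quoted above can be reproduced by exhaustive search. A Python sketch (only feasible for $n\le 2$, but enough to confirm $k_1=3$ and $k_2=7$):

```python
from itertools import combinations, product

def is_kakeya(S, n):
    """Does S, a subset of F_3^n, contain a line {a, a+d, a+2d} in every direction d?"""
    S, pts = set(S), list(product(range(3), repeat=n))
    return all(
        any(all(tuple((a[i] + t * d[i]) % 3 for i in range(n)) in S for t in range(3))
            for a in pts)
        for d in pts if d != (0,) * n)

def k(n):
    """Smallest size of a Kakeya set in F_3^n, by brute force over all subsets."""
    pts = list(product(range(3), repeat=n))
    for size in range(1, 3 ** n + 1):
        if any(is_kakeya(S, n) for S in combinations(pts, size)):
            return size
```

Scanning all nonzero $d$ double-counts each direction (since $d$ and $2d$ carry the same lines), but that only costs time, not correctness.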
General lower bounds
Trivially,
[math]k_n\le k_{n+1}\le 3k_n[/math].
Since the Cartesian product of two Kakeya sets is another Kakeya set, we have
[math]k_{n+m} \leq k_m k_n[/math];
this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity (by Fekete's subadditivity lemma applied to [math]\log k_n[/math]).
From a paper of Dvir, Kopparty, Saraf, and Sudan it follows that [math]k_n \geq 3^n / 2^n[/math], but this is superseded by the estimates given below.
To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence
[math]k_n\gtrsim 3^{(n+1)/2}.[/math]
One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math].
A better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus,
[math]k_n \ge 3^{6(n-1)/11}.[/math]
General upper bounds
We have
[math]k_n\le 2^{n+1}-1[/math]
since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set.
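This construction is easy to verify by computer for small [math]n[/math]; a minimal sketch (helper names are ours):

```python
from itertools import product

def missing_digit_set(n):
    """Vectors in F_3^n in which the digit 1 or the digit 2 does not occur."""
    return {v for v in product(range(3), repeat=n) if 1 not in v or 2 not in v}

def is_kakeya(pts, n):
    """Check that pts contains a line a, a+d, a+2d for every direction d."""
    for d in product(range(3), repeat=n):
        if d == (0,) * n:
            continue
        if not any(all(tuple((a[i] + t * d[i]) % 3 for i in range(n)) in pts
                       for t in range(3))
                   for a in pts):
            return False
    return True

for n in range(1, 5):
    s = missing_digit_set(n)
    assert len(s) == 2 ** (n + 1) - 1  # size matches the stated bound
    assert is_kakeya(s, n)             # and it is indeed a Kakeya set
```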
This estimate can be improved using an idea due to Ruzsa (apparently unpublished). Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all vectors with [math]n/3+O(\sqrt n)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all vectors with [math]2n/3+O(\sqrt n)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], of size just about [math](27/4)^{n/3}[/math] (as is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: indeed, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. One can then use the random rotations trick to capture the remaining directions, losing only a polynomial factor in [math]n[/math].
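The growth rate [math](27/4)^{1/3}[/math] of the construction can be sanity-checked numerically: the number of vectors with exactly [math]n/3[/math] coordinates equal to [math]1[/math] and the rest [math]0[/math] is [math]\binom{n}{n/3}[/math], whose [math]n[/math]-th root tends to [math](27/4)^{1/3}[/math]. A quick check via the log-gamma function (to avoid huge integers):

```python
import math

def log_binom(n, k):
    # log of binomial(n, k) via the log-gamma function
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

n = 30000  # divisible by 3
growth = math.exp(log_binom(n, n // 3) / n)  # |A|^(1/n) for large n
print(growth, (27 / 4) ** (1 / 3))  # both close to 1.8899
```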
Putting all this together, we seem to have
[math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math]
or
[math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math] |
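The two decimal bases can be checked directly:

```python
lower = 3 ** (6 / 11)        # base of the lower bound
upper = (27 / 4) ** (1 / 3)  # base of the upper bound
print(round(lower, 4), round(upper, 4))  # 1.8207 1.8899
```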
I am working on a multi-commodity flow problem where, for a graph $G=(V, E)$, some flows are permitted to be split and other flows must each follow a single path. I have formulated this problem as follows.
Problem Formulation
$$ \min \sum_{(i,j) \in E} \sum_{f \in F} c_{ij}^fx^f_{ij}$$
$$ \sum_{j \in V}x^f_{ij} - \sum_{j \in V}x_{ji}^f = \begin{cases} d_f, & i = s_f \\ -d_f, & i = t_f\\ 0, & \text{otherwise} \end{cases} \quad \forall i \in V,\ \forall f \in F$$
$$ x_{ij}^f \ge \begin{cases} 0, & f \in S\\ d_f, & f \in NS \end{cases} \quad \forall (i, j) \in E$$
$$ \sum_{f \in F} x_{ij}^f \le u_{ij} \quad \forall (i, j) \in E$$
Here $d_f$ is the demand of flow $f$, $S$ is the set of flows that may be split, $NS$ is the set of non-splittable flows, and $u_{ij}$ is the capacity of edge $(i, j)$.
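As a sanity check of the conservation constraint, here is a small illustrative script on a made-up single-flow instance (all names and values are hypothetical, just to show how the constraint reads):

```python
# Hypothetical single-flow instance on the path s -> a -> t.
x = {("s", "a"): 2.0, ("a", "t"): 2.0}  # candidate values x_ij^f
d_f, s_f, t_f = 2.0, "s", "t"           # demand, source, sink of flow f

def net_outflow(node):
    """Left-hand side of the conservation constraint at a node."""
    out_flow = sum(v for (i, j), v in x.items() if i == node)
    in_flow = sum(v for (i, j), v in x.items() if j == node)
    return out_flow - in_flow

for node in ("s", "a", "t"):
    rhs = d_f if node == s_f else (-d_f if node == t_f else 0.0)
    assert net_outflow(node) == rhs  # d_f at source, -d_f at sink, 0 elsewhere
print("conservation holds")
```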
I am a novice in linear programming. On paper, using two simple graphs, this formulation seems to work fine.
I would appreciate it if someone could point out if there is any problem with this formulation. |