I want to simplify the following expression:
{1/Sqrt[2], 1/Sqrt[2]}
How can I get the factor 1/Sqrt[2] in front of the parentheses, like
1/Sqrt[2] {1, 1}
?
v = {1/Sqrt[2], 1/Sqrt[2]};
c = PolynomialGCD @@ v;
c*Defer @@ {v/c}
{1, 1}/Sqrt[2]
If you just want to view it, you can try this. Here is a mixed matrix, some entries of which have Sqrt[2] in the denominator and some of which do not.
matrix = {{1/Sqrt[2], 2/Sqrt[2]}, {3, Sqrt[2]}, {4, 3/Sqrt[2]}, {2./Sqrt[2], a/Sqrt[2]}};
MatrixForm[matrix]
$$\left( \begin{array}{cc} \frac{1}{\sqrt{2}} & \sqrt{2} \\ 3 & \sqrt{2} \\ 4 & \frac{3}{\sqrt{2}} \\ 1.41421 & \frac{a}{\sqrt{2}} \\ \end{array} \right)$$
Now multiply it by Sqrt[2]:
matrix2 = Sqrt[2]*matrix;
MatrixForm[matrix2]
$$\left( \begin{array}{cc} 1 & 2 \\ 3 \sqrt{2} & 2 \\ 4 \sqrt{2} & 3 \\ 2. & a \\ \end{array} \right)$$
matrix2 is then the simplified matrix with 1/Sqrt[2] factored out.
To see it with the 1/Sqrt[2] outside, try
HoldForm[1/Sqrt[2]] HoldForm[Evaluate[matrix2]]
Now you can get a temporary peek at the matrix with
1/Sqrt[2] moved to the outside.
Given the vector
v = {1/Sqrt[2], 4/Sqrt[2], (x + 3)/Sqrt[2]};
The following code
c = PolynomialGCD @@ v;
Print[c, " ", (v/c) // MatrixForm];
outputs what you want:
$$\frac{1}{\sqrt{2}} \begin{pmatrix} 1\\ 4\\ 3 + x\\ \end{pmatrix}$$
Bases of a Topology Examples 2
Recall from the Bases of a Topology page that if $(X, \tau)$ is a topological space then a base for the topology $\tau$ is a collection $\mathcal B \subseteq \tau$ such that every $U \in \tau$ can be written as a union of elements from $\mathcal B$, i.e., for all $U \in \tau$ we have that there exists a $\mathcal B^* \subseteq \mathcal B$ such that: $$U = \bigcup_{B \in \mathcal B^*} B \quad (1)$$
We will now look at some more examples of bases for topologies.
Example 1 Consider the topological space $(\mathbb{R}, \tau)$ where $\tau$ is the usual topology of open intervals on $\mathbb{R}$. Show that $\mathcal B = \left \{ (p, q) : p, q \in \mathbb{Q}, p < q \right \}$ forms a base of $\tau$.
It suffices to show that every open interval $(a, b) \in \tau$ can be expressed as a union of base elements since every open set in $\tau$ is simply a union of open intervals. Let $(a, b) \in \tau$.
If $a, b \in \mathbb{Q}$ then $(a, b) \in \mathcal B$ and we're done.
If $a \in \mathbb{Q}$ and $b \not \in \mathbb{Q}$ then: $$(a, b) = \bigcup_{\substack{q \in \mathbb{Q} \\ a < q < b}} (a, q) \quad (2)$$
Similarly, if $a \not \in \mathbb{Q}$ and $b \in \mathbb{Q}$ then: $$(a, b) = \bigcup_{\substack{p \in \mathbb{Q} \\ a < p < b}} (p, b) \quad (3)$$
The last case occurs when $a, b \not \in \mathbb{Q}$. Then: $$(a, b) = \bigcup_{\substack{p, q \in \mathbb{Q} \\ a < p < q < b}} (p, q) \quad (4)$$
So for every open interval $(a, b)$ there exists a $\mathcal B^* \subseteq \mathcal B$ such that $(a, b) = \bigcup_{B \in \mathcal B^*} B$, so $\mathcal B$ is a base of $\tau$.
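The argument can be illustrated numerically (this is a sketch, not a proof; the choice of endpoints $\sqrt 2$ and $\pi$ and the decimal truncations are my own, not from the notes): cover an interval with irrational endpoints by rational subintervals whose endpoints approach the irrational endpoints.

```python
import math
from fractions import Fraction

# Numeric illustration of Example 1: cover (sqrt(2), pi), whose endpoints are
# irrational, by rational intervals (p_k, q_k) with p_k -> sqrt(2) from above
# and q_k -> pi from below (decimal truncations give the rational endpoints).
a, b = math.sqrt(2), math.pi
covers = [(Fraction(math.ceil(a * 10**k), 10**k),
           Fraction(math.floor(b * 10**k), 10**k)) for k in range(1, 12)]

x = (a + b) / 2                     # an interior sample point of (a, b)
assert any(p < x < q for p, q in covers)

y = a + 1e-9                        # a point very close to the left endpoint
assert any(p < y < q for p, q in covers)   # covered by a later, tighter interval
```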
Example 2 Consider the topological space $(\mathbb{R}, \tau)$ where $\tau$ is the usual topology of open intervals in $\mathbb{R}$. Show that the collection $\mathcal B = \{ (w, z) : w, z \in \mathbb{Z}, w < z\}$ does not form a base of $\tau$.
If $\mathcal B$ were a base of $\tau$ then for every set $U \in \tau$ we must be able to find a subcollection $\mathcal B^* \subseteq \mathcal B$ such that: $$U = \bigcup_{B \in \mathcal B^*} B \quad (5)$$
Consider the open interval $U = \left ( 0, \frac{1}{2} \right )$. Any set in $\mathcal B$ that contains $U$ must be such that $w \leq 0$ and $\frac{1}{2} \leq z$. The smallest integer greater than $\frac{1}{2}$ is $z = 1$, and so the smallest set in $\mathcal B$ containing $U$ is $(0, 1)$.
However, no set in $\mathcal B$ is contained in $\left (0, \frac{1}{2} \right )$, since every interval with integer endpoints has length at least $1$, and so $\mathcal B$ cannot be a base of $\tau$.
The function $f:\mathbb Q\setminus \{0\}\to \mathbb R$ defined by $f(q)=q$ is continuous at each rational number $q\neq 0$, takes positive and negative values, but is never $0$. The intermediate value theorem is valid for functions $f: I\subset \mathbb R\to\mathbb R$, where $I$ is a closed interval (i.e., a connected set in $\mathbb R$).
The example you give with $e^z:\mathbb C\to\color{red}{\mathbb C}$ actually doesn't show anything, because there is no total ordering on the complex numbers. Also you can read from Wikipedia:
The intermediate value theorem generalizes in a natural way: Suppose that $X$ is a connected topological space and $(Y, <)$ is a totally ordered set equipped with the order topology, and let $f : X → Y$ be a continuous map. If $a$ and $b$ are two points in $X$ and $u$ is a point in $Y$ lying between $f(a)$ and $f(b)$ with respect to $<$, then there exists $c$ in $X$ such that $f(c) = u$.
Edit: If $f$ is continuous, then the IVT can fail to apply either because the domain of $f$ is not connected, or because the codomain is not totally ordered:
In my example, $\mathbb R$ is totally ordered and the IVT fails to apply because $\mathbb Q\setminus \{0\}$ is not connected.
In the OP's example $e^z:\color{blue}{\mathbb C}\to\color{red}{\mathbb C}$, $\quad \color{blue}{\mathbb C}$ is connected and the IVT fails to apply because $\color{red}{\mathbb C}$ is not totally ordered.
Find the limit $$\lim_{n \rightarrow \infty} \int _0^n (1 - \frac{x}{n})^n \log (2 + \cos(\frac{x}{n})) dx$$ and justify the answer.
I think that the Dominated Convergence Theorem can be applied to this problem.
Since $$\int _0^n (1 - \frac{x}{n})^n \log (2 + \cos(\frac{x}{n})) dx = \int _0^\infty (1 - \frac{x}{n})^n \log (2 + \cos(\frac{x}{n}))1_{[0, n]}(x) dx,$$ if I can find the dominating function, I will then apply DCT.
After applying DCT, we will have $$\lim \int _0^\infty (1 - \frac{x}{n})^n \log (2 + \cos(\frac{x}{n}))1_{[0, n]}(x) dx = \int _0^\infty \lim (1 - \frac{x}{n})^n \log (2 + \cos(\frac{x}{n}))1_{[0, n]}(x) dx = \int _0^\infty e^{-x} \log (3) dx = \log(3)\left[-e^{-x}\right]_{x=0}^{x=\infty} = \log (3).$$
But I am not sure whether $$\Big| (1 - \frac{x}{n})^n \log (2 + \cos(\frac{x}{n})) \Big| \leq 3e^{-x},$$ with $3e^{-x}$ integrable on $[0, \infty)$. Can someone check whether my dominating function is all right?
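As a quick numeric sanity check (not a substitute for exhibiting a dominating function; the NumPy sketch and point counts are my own assumptions), one can evaluate the integrals for growing $n$ and watch them approach $\log 3 \approx 1.0986$:

```python
import numpy as np

# Numerically evaluate I(n) = integral_0^n (1 - x/n)^n log(2 + cos(x/n)) dx;
# the values should approach log(3), consistent with the DCT computation.
def I(n, pts=200_001):
    x = np.linspace(0.0, float(n), pts)
    f = (1 - x / n) ** n * np.log(2 + np.cos(x / n))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))  # trapezoid rule

print(I(10), I(100), I(1000), float(np.log(3)))
```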
Prove that $$\sum_{k = 1}^\infty \frac{1}{(p + k)^2} = - \int_0^1 \frac{x^p}{1-x}\log (x) dx$$ for $p>0.$ (This problem allows use of the Fundamental Theorem of Calculus.)
I suppose that I should apply the FTC together with the DCT, but I have no idea how to do this. Any hints, please?
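Not a proof, but the identity can be checked numerically for a specific $p$ (here $p = 1$, where the series is known to sum to $\pi^2/6 - 1$; the truncation level and midpoint rule are my choices, not part of the problem):

```python
import math

# Check sum_{k>=1} 1/(p+k)^2 = -integral_0^1 x^p log(x)/(1-x) dx at p = 1.
p = 1
s = sum(1.0 / (p + k) ** 2 for k in range(1, 200_001))   # truncated series

N = 200_000                                  # midpoint rule on (0, 1)
total = 0.0
for i in range(N):
    x = (i + 0.5) / N                        # midpoints avoid both endpoints
    total += x ** p * math.log(x) / (1 - x)  # integrand extends continuously to x = 1
integral = -total / N

assert abs(s - (math.pi ** 2 / 6 - 1)) < 1e-4   # known closed form at p = 1
assert abs(s - integral) < 1e-4
```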
I am trying to read bits and pieces of Ingo Blechschmidt's notes on using the internal language of toposes in algebraic geometry.
I have not studied the internal language. I only have a bare bones idea of what it is through these and some other notes. I have also never studied logic. My question is about the last two lines of table 1, which presents the Kripke-Joyal semantics of a sheaf topos:
$\begin{aligned} U\models \forall \mathcal F.\; \varphi (\mathcal F) & :\iff\text{for all open }V\subseteq U\text{ and all sheaves }\mathcal F \text{ on }V:\; V\models \varphi (\mathcal F) \\ U\models \exists \mathcal F.\; \varphi (\mathcal F)& :\iff \text{there exists an open covering }U=\bigcup_iU_i\text{ such that for all }i\text{ there exists a sheaf }\mathcal F_i\text{ on }U_i\text{ such that }U_i\models \varphi (\mathcal F_i)\end{aligned}$
These last two rules, according to remark 2.2, concern "
unbounded quantification and are not part of the classical Kripke Joyal semantics. They are part of Mike Shulman's stack semantics, a slight extension. They are needed so that we can formulate universal properties in the internal language".
First, these two lines are identical to the two preceding them except $\mathcal F$ replaces $s$. Since the structure of the statements doesn't change, I'm confused - what is meant by "unbounded quantification"? Why is quantifying sheaves suddenly a different story?
Second, why were these definitions not taken as part of the Kripke-Joyal semantics?
I understand the tempting answer is "go study logic", but I would really appreciate an explanation for the layman.
I was looking at Draper & Smith's 'Applied Regression Analysis', as did the person who asked this other question on Cross Validated.
In short - the variance-covariance matrix of the residuals in regression is given by $(I - H)\sigma^2$, where $H$ is the 'Hat Matrix'. So in general we must assume the residuals are not independent.
Yet whenever I read about the assumptions of regression it says the error terms should be independent (as well as having zero expectation $E[\epsilon_i] = 0$ and equal variance $Var(\epsilon_i) = \sigma^2 \ \forall i$).
Am I missing something, or using a loose definition? My naive, inexperienced reading of this looks contradictory. Thank you, Chris
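The distinction can be seen concretely: the model's error terms $\epsilon_i$ are assumed independent, while the fitted residuals $e = (I - H)\epsilon$ are generally correlated, because $I - H$ has nonzero off-diagonal entries. A small NumPy sketch (the toy design matrix is my own choice):

```python
import numpy as np

# Residual-maker matrix M = I - H for a toy simple-regression design.
X = np.column_stack([np.ones(6), np.arange(6.0)])   # intercept + one regressor
H = X @ np.linalg.inv(X.T @ X) @ X.T                # hat matrix
M = np.eye(6) - H                                   # Cov(residuals) = M * sigma^2

off_diagonal = M - np.diag(np.diag(M))
assert np.abs(off_diagonal).max() > 1e-8              # residuals are correlated
assert np.allclose(M, M.T) and np.allclose(M @ M, M)  # symmetric, idempotent
```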
A. Enayat, J. D. Hamkins, and B. Wcisło, “Topological models of arithmetic,” ArXiv e-prints, 2018. (under review)
@ARTICLE{EnayatHamkinsWcislo2018:Topological-models-of-arithmetic, author = {Ali Enayat and Joel David Hamkins and Bartosz Wcisło}, title = {Topological models of arithmetic}, journal = {ArXiv e-prints}, year = {2018}, volume = {}, number = {}, pages = {}, month = {}, note = {under review}, abstract = {}, keywords = {under-review}, source = {}, doi = {}, eprint = {1808.01270}, archivePrefix = {arXiv}, primaryClass = {math.LO}, url = {http://wp.me/p5M0LV-1LS}, }
Abstract. Ali Enayat had asked whether there is a nonstandard model of Peano arithmetic (PA) that can be represented as $\newcommand\Q{\mathbb{Q}}\langle\Q,\oplus,\otimes\rangle$, where $\oplus$ and $\otimes$ are continuous functions on the rationals $\Q$. We prove, affirmatively, that indeed every countable model of PA has such a continuous presentation on the rationals. More generally, we investigate the topological spaces that arise as such topological models of arithmetic. The reals $\newcommand\R{\mathbb{R}}\R$, the reals in any finite dimension $\R^n$, the long line and the Cantor space do not, and neither does any Suslin line; many other spaces do; the status of the Baire space is open.
The first author had inquired whether a nonstandard model of arithmetic could be continuously presented on the rational numbers.
Main Question. (Enayat, 2009) Are there continuous functions $\oplus$ and $\otimes$ on the rational numbers $\Q$, such that $\langle\Q,\oplus,\otimes\rangle$ is a nonstandard model of arithmetic?
By a model of arithmetic, what we mean here is a model of the first-order Peano axioms PA, although we also consider various weakenings of this theory. The theory PA asserts of a structure $\langle M,+,\cdot\rangle$ that it is the non-negative part of a discretely ordered ring, plus the induction principle for assertions in the language of arithmetic. The natural numbers $\newcommand\N{\mathbb{N}}\langle \N,+,\cdot\rangle$, for example, form what is known as the
standard model of PA, but there are also many nonstandard models, including continuum many non-isomorphic countable models.
We answer the question affirmatively, and indeed, the main theorem shows that every countable model of PA is continuously presented on $\Q$. We define generally that a
topological model of arithmetic is a topological space $X$ equipped with continuous functions $\oplus$ and $\otimes$, for which $\langle X,\oplus,\otimes\rangle$ satisfies the desired arithmetic theory. In such a case, we shall say that the underlying space $X$ continuously supports a model of arithmetic and that the model is continuously presented upon the space $X$.
Question. Which topological spaces support a topological model of arithmetic?
In the paper, we prove that the reals $\R$, the reals in any finite dimension $\R^n$, the long line and Cantor space do not support a topological model of arithmetic, and neither does any Suslin line. Meanwhile, there are many other spaces that do support topological models, including many uncountable subspaces of the plane $\R^2$. It remains an open question whether any uncountable Polish space, including the Baire space, can support a topological model of arithmetic.
Let me state the main theorem and briefly sketch the proof.
Main Theorem. Every countable model of PA has a continuous presentation on the rationals $\Q$.
Proof. We shall prove the theorem first for the standard model of arithmetic $\langle\N,+,\cdot\rangle$. Every school child knows that when computing integer sums and products by the usual algorithms, the final digits of the result $x+y$ or $x\cdot y$ are completely determined by the corresponding final digits of the inputs $x$ and $y$. Presented with only final segments of the input, the child can nevertheless proceed to compute the corresponding final segments of the output.
\begin{equation*}\small\begin{array}{rcr}
\cdots1261\quad & \qquad & \cdots1261\quad\\ \underline{+\quad\cdots 153\quad}&\qquad & \underline{\times\quad\cdots 153\quad}\\ \cdots414\quad & \qquad & \cdots3783\quad\\ & & \cdots6305\phantom{3}\quad\\ & & \cdots1261\phantom{53}\quad\\ & & \underline{\quad\cdots\cdots\phantom{253}\quad}\\ & & \cdots933\quad\\ \end{array}\end{equation*}
This phenomenon amounts exactly to the continuity of addition and multiplication with respect to what we call the
final-digits topology on $\N$, which is the topology having basic open sets $U_s$, the set of numbers whose binary representations ends with the digits $s$, for any finite binary string $s$. (One can do a similar thing with any base.) In the $U_s$ notation, we include the number that would arise by deleting initial $0$s from $s$; for example, $6\in U_{00110}$. Addition and multiplication are continuous in this topology, because if $x+y$ or $x\cdot y$ has final digits $s$, then by the school-child’s observation, this is ensured by corresponding final digits in $x$ and $y$, and so $(x,y)$ has an open neighborhood in the final-digits product space, whose image under the sum or product, respectively, is contained in $U_s$.
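The school-child observation is just compatibility of addition and multiplication with congruence mod $2^k$; a brute-force check in Python (my sketch, not from the paper):

```python
# The last k binary digits of x + y and x * y are determined by the last k
# binary digits of x and y: both operations respect congruence mod 2^k.
for k in range(1, 8):
    m = 2 ** k
    for x in range(60):
        for y in range(60):
            assert (x + y) % m == ((x % m) + (y % m)) % m
            assert (x * y) % m == ((x % m) * (y % m)) % m
```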
Let us make several elementary observations about the topology. The sets $U_s$ do indeed form the basis of a topology, because $U_s\cap U_t$ is empty, if $s$ and $t$ disagree on some digit (comparing from the right), or else it is either $U_s$ or $U_t$, depending on which sequence is longer. The topology is Hausdorff, because different numbers are distinguished by sufficiently long segments of final digits. There are no isolated points, because every basic open set $U_s$ has infinitely many elements. Every basic open set $U_s$ is clopen, since the complement of $U_s$ is the union of $U_t$, where $t$ conflicts on some digit with $s$. The topology is actually the same as the metric topology generated by the $2$-adic valuation, which assigns the distance between two numbers as $2^{-k}$, when $k$ is largest such that $2^k$ divides their difference; the set $U_s$ is an open ball in this metric, centered at the number represented by $s$. (One can also see that it is metric by the Urysohn metrization theorem, since it is a Hausdorff space with a countable clopen basis, and therefore regular.) By a theorem of Sierpinski, every countable metric space without isolated points is homeomorphic to the rational line $\Q$, and so we conclude that the final-digits topology on $\N$ is homeomorphic to $\Q$. We’ve therefore proved that the standard model of arithmetic $\N$ has a continuous presentation on $\Q$, as desired.
But let us belabor the argument somewhat, since we find it interesting to notice that the final-digits topology (or equivalently, the $2$-adic metric topology on $\N$) is precisely the order topology of a certain definable order on $\N$, what we call the
final-digits order, an endless dense linear order, which is therefore order-isomorphic and thus also homeomorphic to the rational line $\Q$, as desired.
Specifically, the final-digits order on the natural numbers, pictured in figure 1, is the order induced from the lexical order on the finite binary representations, but considering the digits from right-to-left, giving higher priority in the lexical comparison to the low-value final digits of the number. To be precise, the final-digits order $n\triangleleft m$ holds, if at the first point of disagreement (from the right) in their binary representation, $n$ has $0$ and $m$ has $1$; or if there is no disagreement, because one of them is longer, then the longer number is lower, if the next digit is $0$, and higher, if it is $1$ (this is not the same as treating missing initial digits as zero). Thus, the even numbers appear as the left half of the order, since their final digit is $0$, and the odd numbers as the right half, since their final digit is $1$, and $0$ is directly in the middle; indeed, the highly even numbers, whose representations end with a lot of zeros, appear further and further to the left, while the highly odd numbers, which end with many ones, appear further and further to the right. If one does not allow initial $0$s in the binary representation of numbers, then note that zero is represented in binary by the empty sequence. It is evident that the final-digits order is an endless dense linear order on $\N$, just as the corresponding lexical order on finite binary strings is an endless dense linear order.
The basic open set $U_s$ of numbers having final digits $s$ is an open set in this order, since any number ending with $s$ is above a number with binary form $100\cdots0s$ and below a number with binary form $11\cdots 1s$ in the final-digits order; so $U_s$ is a union of intervals in the final-digits order. Conversely, every interval in the final-digits order is open in the final-digits topology, because if $n\triangleleft x\triangleleft m$, then this is determined by some final segment of the digits of $x$ (appending initial $0$s if necessary), and so there is some $U_s$ containing $x$ and contained in the interval between $n$ and $m$. Thus, the final-digits topology is the precisely same as the order topology of the final-digits order, which is a definable endless dense linear order on $\N$. Since this order is isomorphic and hence homeomorphic to the rational line $\Q$, we conclude again that $\langle \N,+,\cdot\rangle$ admits a continuous presentation on $\Q$.
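The final-digits order described above can be sketched in a few lines of Python (an illustrative implementation under my own reading of the definition; the function names are mine):

```python
# The final-digits order on N: compare binary digits from the least-significant
# end; zero is represented by the empty digit string (no initial 0s allowed).
def digits(n):
    return bin(n)[2:][::-1] if n else ""   # final digit first

def fd_less(n, m):
    a, b = digits(n), digits(m)
    for da, db in zip(a, b):
        if da != db:                 # first disagreement from the right
            return da == "0"
    if len(a) == len(b):
        return False                 # identical numbers
    nxt = (a if len(a) > len(b) else b)[min(len(a), len(b))]
    # the longer number is lower if its next digit is 0, higher if it is 1
    return nxt == "0" if len(a) > len(b) else nxt == "1"

assert fd_less(2, 0) and fd_less(0, 1)   # evens left of 0, odds to the right
assert fd_less(4, 2) and fd_less(1, 3)   # highly even numbers sit further left
```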
We now complete the proof by considering an arbitrary countable model $M$ of PA. Let $\triangleleft^M$ be the final-digits order as defined inside $M$. Since the reasoning of the above paragraphs can be undertaken in PA, it follows that $M$ can see that its addition and multiplication are continuous with respect to the order topology of its final-digits order. Since $M$ is countable, the final-digits order of $M$ makes it a countable endless dense linear order, which by Cantor’s theorem is therefore order-isomorphic and hence homeomorphic to $\Q$. Thus, $M$ has a continuous presentation on the rational line $\Q$, as desired. $\Box$
The executive summary of the proof is: the arithmetic of the standard model $\N$ is continuous with respect to the final-digits topology, which is the same as the $2$-adic metric topology on $\N$, and this is homeomorphic to the rational line, because it is the order topology of the final-digits order, a definable endless dense linear order; applied in a nonstandard model $M$, this observation means the arithmetic of $M$ is continuous with respect to its rational line $\Q^M$, which for countable models is isomorphic to the actual rational line $\Q$, and so such an $M$ is continuously presentable upon $\Q$.
Let me mention the following order, which it seems many people expect to use instead of the final-digits order as we defined it above. With this order, one in effect takes missing initial digits of a number as $0$, which is of course quite reasonable.
The problem with this order, however, is that the order topology is not actually the final-digits topology. For example, the set of all numbers having final digits $110$ in this order has a least element, the number $6$, and so it is not open in the order topology. Worse, I claim that arithmetic is not continuous with respect to this order. For example, $1+1=2$, and $2$ has an open neighborhood consisting entirely of even numbers, but every open neighborhood of $1$ has both odd and even numbers, whose sums therefore will not all be in the selected neighborhood of $2$. Even the successor function $x\mapsto x+1$ is not continuous with respect to this order.
Finally, let me mention that a version of the main theorem also applies to the integers $\newcommand\Z{\mathbb{Z}}\Z$, using the following order.
Go to the article to read more.
We are very familiar with real valued functions, that is, functions whose output is a real number. This section introduces vector-valued functions: functions whose output is a vector.
Definition \(\PageIndex{1}\): Vector-Valued Functions
A vector-valued function is a function of the form
\[\vecs r(t) = \langle\, f(t),g(t)\,\rangle\]
or
\[\vecs r(t) = \langle \,f(t),g(t),h(t)\,\rangle,\]
where \(f\), \(g\) and \(h\) are real valued functions.
The domain of \(\vecs r\) is the set of all values of \(t\) for which \(\vecs r(t)\) is defined. The range of \(\vecs r\) is the set of all possible output vectors \(\vecs r(t)\).
Evaluating and Graphing Vector-Valued Functions
Evaluating a vector-valued function at a specific value of \(t\) is straightforward; simply evaluate each component function at that value of \(t\). For instance, if \(\vecs r(t) = \langle t^2,t^2+t-1\rangle\), then \(\vecs r(-2) = \langle 4,1\rangle\). We can sketch this vector, as is done in Figure \(\PageIndex{1a}\). Plotting lots of vectors is cumbersome, though, so generally we do not sketch the whole vector but just the terminal point. The graph of a vector-valued function is the set of all terminal points of \(\vecs r(t)\), where the initial point of each vector is always the origin. In Figure \(\PageIndex{1b}\) we sketch the graph of \(\vecs r\); we can indicate individual points on the graph with their respective vector, as shown.
Figure \(\PageIndex{1}\): Sketching the graph of a vector-valued function.
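Componentwise evaluation is easy to mirror in code; a minimal NumPy sketch of the example just given (NumPy is my choice here, the text itself is language-agnostic):

```python
import numpy as np

# Componentwise evaluation of the example r(t) = <t^2, t^2 + t - 1>.
def r(t):
    return np.array([t ** 2, t ** 2 + t - 1])

assert np.array_equal(r(-2), np.array([4, 1]))   # r(-2) = <4, 1>, as in the text
```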
Vector-valued functions are closely related to parametric equations of graphs. While in both methods we plot points \(\big(x(t), y(t)\big)\) or \(\big(x(t),y(t),z(t)\big)\) to produce a graph, in the context of vector-valued functions each such point represents a vector. The implications of this will be more fully realized in the next section as we apply calculus ideas to these functions.
Example \(\PageIndex{1}\): Graphing vector-valued functions
Graph \( \vecs r(t) = \langle t^3-t, \dfrac{1}{t^2+1}\rangle\), for \(-2\leq t\leq 2\). Sketch \(\vecs r(-1)\) and \(\vecs r(2)\).
Solution
We start by making a table of \(t\), \(x\) and \(y\) values as shown in Figure \(\PageIndex{2a}\). Plotting these points gives an indication of what the graph looks like. In Figure \(\PageIndex{2b}\), we indicate these points and sketch the full graph. We also highlight \(\vecs r(-1)\) and \(\vecs r(2)\) on the graph.
Figure \(\PageIndex{2}\): Sketching the vector-valued function of Example \(\PageIndex{1}\).
Example \(\PageIndex{2}\): Graphing vector-valued functions.
Graph \(\vecs r(t) = \langle \cos t,\sin t,t\rangle\) for \(0\leq t\leq 4\pi\).
Solution
We can again plot points, but careful consideration of this function is very revealing. Momentarily ignoring the third component, we see the \(x\) and \(y\) components trace out a circle of radius 1 centered at the origin. Noticing that the \(z\) component is \(t\), we see that as the graph winds around the \(z\)-axis, it is also increasing at a constant rate in the positive \(z\) direction, forming a spiral. This is graphed in Figure \(\PageIndex{3}\). In the graph \(\vecs r(7\pi/4)\approx (0.707,-0.707,5.498) \) is highlighted to help us understand the graph.
Figure \(\PageIndex{3}\): Viewing a vector-valued function, and its derivative at one point.
Algebra of Vector-Valued Functions
Definition \(\PageIndex{2}\): Operations on Vector-Valued Functions
Let \(\vecs r_1(t)=\langle f_1(t),g_1(t)\rangle\) and \(\vecs r_2(t)=\langle f_2(t),g_2(t)\rangle\) be vector-valued functions in \(\mathbb{R}^2\) and let \(c\) be a scalar. Then:
\(\vecs r_1(t) \pm \vecs r_2(t) = \langle\, f_1(t)\pm f_2(t),g_1(t)\pm g_2(t)\,\rangle\).
\(c\vecs r_1(t) = \langle\, cf_1(t),cg_1(t)\,\rangle\).
A similar definition holds for vector-valued functions in \(\mathbb{R}^3\).
This definition states that we add, subtract and scale vector-valued functions component-wise. Combining vector-valued functions in this way can be very useful (as well as create interesting graphs).
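The componentwise operations above translate directly to code; a short NumPy sketch (NumPy is an assumption; the functions chosen are those of the next example):

```python
import numpy as np

# Componentwise algebra of vector-valued functions, per the definition above.
r1 = lambda t: np.array([0.2 * t, 0.3 * t])
r2 = lambda t: np.array([np.cos(t), np.sin(t)])
r = lambda t: r1(t) + r2(t)          # addition is componentwise

assert np.allclose(r(0.0), [1.0, 0.0])       # r1(0) = <0,0>, r2(0) = <1,0>
assert np.allclose(5 * r(0.0), [5.0, 0.0])   # a scalar scales each component
```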
Example \(\PageIndex{3}\): Adding and scaling vector-valued functions.
Let \(\vecs r_1(t) = \langle\,0.2t,0.3t\,\rangle\), \(\vecs r_2(t) = \langle\,\cos t,\sin t\,\rangle\) and \(\vecs r(t) = \vecs r_1(t)+\vecs r_2(t)\). Graph \(\vecs r_1(t)\), \(\vecs r_2(t)\), \(\vecs r(t)\) and \(5\vecs r(t)\) on \(-10\leq t\leq10\).
Solution
We can graph \(\vecs r_1\) and \(\vecs r_2\) easily by plotting points (or just using technology). Let's think about each for a moment to better understand how vector-valued functions work.
We can rewrite \(\vecs r_1(t) = \langle\, 0.2t,0.3t\,\rangle\) as \( \vecs r_1(t) = t\langle 0.2,0.3\rangle\). That is, the function \(\vecs r_1\) scales the vector \(\langle 0.2,0.3\rangle\) by \(t\). This scaling of a vector produces a line in the direction of \(\langle 0.2,0.3\rangle\).
We are familiar with \(\vecs r_2(t) = \langle\, \cos t,\sin t\,\rangle\); it traces out a circle, centered at the origin, of radius 1. Figure \(\PageIndex{4a}\) graphs \(\vecs r_1(t)\) and \(\vecs r_2(t)\).
Adding \(\vecs r_1(t)\) to \(\vecs r_2(t)\) produces \(\vecs r(t) = \langle\,\cos t + 0.2t,\sin t+0.3t\,\rangle\), graphed in Figure \(\PageIndex{4b}\). The linear movement of the line combines with the circle to create loops that move in the direction of \(\langle 0.2,0.3\rangle\). (We encourage the reader to experiment by changing \(\vecs r_1(t)\) to \(\langle 2t,3t\rangle\), etc., and observe the effects on the loops.)
Figure \(\PageIndex{4}\): Graphing the functions in Example \(\PageIndex{3}\)
Multiplying \(\vecs r(t)\) by 5 scales the function by 5, producing \(5\vecs r(t) = \langle 5\cos t+1,5\sin t+1.5\rangle\), which is graphed in Figure \(\PageIndex{4c}\) along with \(\vecs r(t)\). The new function is "5 times bigger'' than \(\vecs r(t)\). Note how the graph of \(5\vecs r(t)\) in (c) looks identical to the graph of \(\vecs r(t)\) in \((b)\). This is due to the fact that the \(x\) and \(y\) bounds of the plot in \((c)\) are exactly 5 times larger than the bounds in (b).
Example \(\PageIndex{4}\): Adding and scaling vector-valued functions.
A cycloid is a graph traced by a point \(p\) on a rolling circle, as shown in Figure \(\PageIndex{5}\). Find an equation describing the cycloid, where the circle has radius 1.
Figure \(\PageIndex{5}\): Tracing a cycloid.
Solution
This problem is not very difficult if we approach it in a clever way. We start by letting \(\vecs p(t)\) describe the position of the point \(p\) on the circle, where the circle is centered at the origin and only rotates clockwise (i.e., it does not roll). This is relatively simple given our previous experiences with parametric equations; \(\vecs p(t) = \langle \cos t, -\sin t\rangle\).
We now want the circle to roll. We represent this by letting \(\vecs c(t)\) represent the location of the center of the circle. It should be clear that the \(y\) component of \(\vecs c(t)\) should be 1; the center of the circle is always going to be at height 1 as it rolls on a horizontal surface.
The \(x\) component of \(\vecs c(t)\) is a linear function of \(t\): \(f(t) = mt\) for some scalar \(m\). When \(t=0\), \(f(t) = 0\) (the circle starts centered on the \(y\)-axis). When \(t=2\pi\), the circle has made one complete revolution, traveling a distance equal to its circumference, which is also \(2\pi\). This gives us a point on our line \(f(t) = mt\), the point \((2\pi, 2\pi)\). It should be clear that \(m=1\) and \(f(t) = t\). So \(\vecs c(t) = \langle t, 1\rangle\).
We now combine \(\vecs p\) and \(\vecs c\) together to form the equation of the cycloid:
\[\vecs r(t) = \vecs p(t) + \vecs c(t) = \langle \cos t+ t,-\sin t+1\rangle, \nonumber\]
which is graphed in Figure \(\PageIndex{6}\).
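A quick numeric sanity check of the derived equation (a NumPy sketch of my own, not from the text): the \(y\) component \(-\sin t + 1\) of the tracing point should dip down to the ground at \(y = 0\) and rise to the top of the radius-1 circle at \(y = 2\).

```python
import numpy as np

# Check the cycloid r(t) = <cos t + t, -sin t + 1> derived above.
t = np.linspace(0, 4 * np.pi, 100_001)
y = -np.sin(t) + 1

assert abs(y.min()) < 1e-6          # touches y = 0 (at t = pi/2, 5 pi/2, ...)
assert abs(y.max() - 2.0) < 1e-6    # reaches the top of the circle, y = 2
```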
Figure \(\PageIndex{6}\): The cycloid in Example \(\PageIndex{4}\).
Displacement
A vector-valued function \(\vecs r(t)\) is often used to describe the position of a moving object at time \(t\). At \(t=t_0\), the object is at \(\vecs r(t_0)\); at \(t=t_1\), the object is at \(\vecs r(t_1)\). Knowing the locations \(\vecs r(t_0)\) and \(\vecs r(t_1)\) gives no indication of the path taken between them, but often we only care about the difference of the locations, \(\vecs r(t_1)-\vecs r(t_0)\), the displacement.
Definition \(\PageIndex{3}\): Displacement
Let \(\vecs r(t)\) be a vector-valued function and let \(t_0<t_1\) be values in the domain. The displacement of \(\vecs r(t)\) on \([t_0,t_1]\) is
\[\vecs d = \vecs r(t_1) - \vecs r(t_0).\]
When the displacement vector is drawn with initial point at \(\vecs r(t_0)\), its terminal point is \(\vecs r(t_1)\). We think of it as the vector which points from a starting position to an ending position.
Example \(\PageIndex{5}\): Finding and graphing displacement vectors
Let \(\vecs r(t) = \langle \cos (\dfrac{\pi}{2}t),\sin (\dfrac{\pi}2 t)\rangle\). Graph \(\vecs r(t)\) on \(-1\leq t\leq 1\), and find the displacement of \(\vecs r(t)\) on this interval.
Solution
The function \(\vecs r(t)\) traces out the unit circle, though at a different rate than the "usual'' \(\langle \cos t,\sin t\rangle\) parametrization. At \(t_0=-1\), we have \(\vecs r(t_0) = \langle 0,-1\rangle\); at \(t_1=1\), we have \(\vecs r(t_1) = \langle 0,1\rangle\). The displacement of \(\vecs r(t)\) on \([-1,1]\) is thus
\[\vecs d = \langle 0,1\rangle - \langle 0,-1\rangle = \langle 0,2\rangle. \nonumber\]
Figure \(\PageIndex{7}\): Graphing the displacement of a position function in Example \(\PageIndex{5}\).
A graph of \(\vecs r(t)\) on \([-1,1]\) is given in Figure \(\PageIndex{7}\), along with the displacement vector \(\vecs d\) on this interval.
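The displacement computation of this example is easy to verify numerically (a NumPy sketch, my own addition):

```python
import numpy as np

# Displacement of r(t) = <cos(pi t / 2), sin(pi t / 2)> on [-1, 1].
def r(t):
    return np.array([np.cos(np.pi * t / 2), np.sin(np.pi * t / 2)])

d = r(1) - r(-1)
assert np.allclose(d, [0.0, 2.0])   # d = <0, 2>, matching the example
```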
Measuring displacement makes us contemplate related, yet very different, concepts. Considering the semi-circular path the object in Example \(\PageIndex{5}\) took, we can quickly verify that the object ended up a distance of 2 units from its initial location. That is, we can compute \(\norm{d} = 2\). However, measuring
distance from the starting point is different from measuring distance traveled. Being a semi-circle, we can measure the distance traveled by this object as \(\pi\approx 3.14\) units. Knowing distance from the starting point allows us to compute average rate of change.
Definition \(\PageIndex{4}\): Average Rate of Change
Let \(\vecs r(t)\) be a vector-valued function, where each of its component functions is continuous on its domain, and let \(t_0<t_1\). The
average rate of change of \(\vecs r(t)\) on \([t_0,t_1]\) is
\[\text{average rate of change} = \dfrac{\vecs r(t_1) - \vecs r(t_0)}{t_1-t_0}.\]
Example \(\PageIndex{6}\): Average rate of change
Let \(\vecs r(t) = \langle \cos(\dfrac{\pi}2t),\sin(\dfrac{\pi}2t)\rangle\) as in Example \(\PageIndex{5}\). Find the average rate of change of \(\vecs r(t)\) on \([-1,1]\) and on \([-1,5]\).
Solution
We computed in Example \(\PageIndex{5}\) that the displacement of \(\vecs r(t)\) on \([-1,1]\) was \(\vecs d = \langle 0,2\rangle\). Thus the average rate of change of \(\vecs r(t)\) on \([-1,1]\) is:
\[\dfrac{\vecs r(1) -\vecs r(-1)}{1-(-1)} = \dfrac{\langle 0,2\rangle}{2} = \langle 0,1\rangle. \nonumber\]
We interpret this as follows: the object followed a semi-circular path, meaning it moved towards the right then moved back to the left, while climbing slowly, then quickly, then slowly again.
On average, however, it progressed straight up at a constant rate of \(\langle 0,1\rangle\) per unit of time.
We can quickly see that the displacement on \([-1,5]\) is the same as on \([-1,1]\), so \(\vecs d = \langle 0,2\rangle\). The average rate of change is different, though:
\[\dfrac{\vecs r(5)-\vecs r(-1)}{5-(-1)} = \dfrac{\langle 0,2\rangle}{6} = \langle 0,1/3\rangle. \nonumber\]
As it took "3 times as long'' to arrive at the same place, this average rate of change on \([-1,5]\) is \(1/3\) the average rate of change on \([-1,1]\).
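These displacement and average-rate-of-change computations can be checked numerically; here is a quick sketch in Python (the function names are ours, not part of the text):

```python
import math

def r(t):
    # position function from the example: r(t) = <cos(pi t / 2), sin(pi t / 2)>
    return (math.cos(math.pi * t / 2), math.sin(math.pi * t / 2))

def average_rate_of_change(t0, t1):
    # (r(t1) - r(t0)) / (t1 - t0), componentwise
    x0, y0 = r(t0)
    x1, y1 = r(t1)
    return ((x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0))

print(average_rate_of_change(-1, 1))   # approximately (0, 1)
print(average_rate_of_change(-1, 5))   # approximately (0, 1/3)
```

Up to floating-point round-off, the two results match \(\langle 0,1\rangle\) and \(\langle 0,1/3\rangle\) from the worked example.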
We considered average rates of change in Sections 1.1 and 2.1 as we studied limits and derivatives. The same is true here; in the following section we apply calculus concepts to vector-valued functions as we find limits, derivatives, and integrals. Understanding the average rate of change will give us an understanding of the derivative; displacement gives us one application of integration.
Contributors
Gregory Hartman (Virginia Military Institute). Contributions were made by Troy Siemers and Dimplekumar Chalishajar of VMI and Brian Heinold of Mount Saint Mary's University. This content is copyrighted by a Creative Commons Attribution - Noncommercial (BY-NC) License. http://www.apexcalculus.com/ |
In an oral exam for physical organic chemistry, one student was asked to explain the differences in the ionization potential ($IP$) of syn and anti tricyclo[$4.2.0.0^{2,5}$]octa-3,7-diene (unsaturated 3-ladderane, see Figure).
Apparently, the $IP$s were as follows:
$IP_{1,syn}=9.04$
$IP_{2,syn}=9.38$
$\Delta IP_{syn}=0.34$
and
$IP_{1,anti}=8.97$
$IP_{2,anti}=9.93$
$\Delta IP_{anti}=0.96$
Koopmans' theorem $IP=-\epsilon_{HOMO}$ might help us here. I think it leads us to an orbital diagram. How do I have to combine the $\sigma$-bonds with the $\pi$-system to be able to explain the difference in the $IP$? Maybe you know a paper for it? I checked out Joseph J. Gajewski's
Hydrocarbon Thermal Isomerizations (2004), but he only discusses mechanistic features, no theoretical aspects. |
Eigenfunctions in quantum well
Introduction
In this example we investigate an AlGaAs/GaAs quantum well from a quantum mechanical point of view. The simulation involves a band-profile calculation in the heterostructure and compares the influence of different QM solvers.
Band-structure calculation
The growth direction of the sample is the 001 direction on $Al_{0.4}Ga_{0.6}As$. The lattice mismatch between GaAs and AlAs is very low, which leaves the structure unstrained. The calculated profile including bowing parameters is depicted in figure 1.
Eigenfunction calculation
In this example we are going to consider just the electron wave-functions, but the same ideas could be used for the hole functions.
Single-band method
In figure 2 the single-band wave functions are calculated, which means the effective mass of the electron is a constant, energy-independent value in the Schrödinger equation:
\begin{equation} - \nabla \frac{\hbar^2}{2m_e(x)} \nabla \Phi + V(x) \Phi = E\Phi. \end{equation}
The $m_e(x)$ is the effective mass of the electron, while the $V(x)$ is the conduction band edge - both of them are position dependent.
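As a minimal numerical illustration of this single-band equation (not the actual AlGaAs/GaAs profile: we assume $\hbar = m_e = 1$ and $V = 0$ inside a hard-wall well of width 1), a shooting-method sketch finds the ground-state energy by integrating outward and bisecting on $E$ until the far boundary condition is met:

```python
import math

def shoot(E, n=4000):
    """Integrate phi'' = 2*m*(V - E)*phi / hbar^2 = -2*E*phi on [0, 1]
    (hbar = m = 1, V = 0 in a hard-wall well) with phi(0)=0, phi'(0)=1;
    return phi(1)."""
    h = 1.0 / n
    phi, dphi = 0.0, 1.0
    for _ in range(n):
        a_old = -2.0 * E * phi
        phi += dphi * h + 0.5 * a_old * h * h   # velocity-Verlet position step
        a_new = -2.0 * E * phi
        dphi += 0.5 * (a_old + a_new) * h       # velocity-Verlet velocity step
    return phi

def ground_state_energy(lo=4.0, hi=6.0):
    # Bisect on E until the boundary condition phi(1) = 0 is met.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if shoot(lo) * shoot(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(ground_state_energy())  # close to pi^2 / 2 ~ 4.9348
```

For the hard-wall well the exact answer is $E_1 = \pi^2\hbar^2/(2 m L^2) = \pi^2/2$ in these units, which the sketch reproduces to several digits.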
8 band k.p method
We are able to calculate the wave functions of the quasi-particles in the sample with the coupling of the hole and electron bands \citep{pryor1998eight}, which can be used for more realistic calculations, since the effective mass of the electron is then energy dependent.
The first confined state in the Quantum well is plotted in figure 3.
It has a bit different energy, which reflects in the different electron density calculated in the next section.
Lateral dispersion
In the previous section we calculated the wave functions for an electron with zero in-plane momentum ($k_{||} = 0$). If this lateral momentum is not zero, it changes the eigenenergy of the electrons, which can be described by an $E(k)$ dispersion relation as in figure 4.
As the figure shows, the problem can be treated as a constant-mass problem over some energy range. In our approach we extract a constant mass from the 8-band k.p wave function at $k_{||} = 0$.
Charge density calculation
For the constant-mass approach we can calculate the density in the sample according to the equation: \begin{equation} n(x) = \sum_{i = 0}^{N} |\Phi_i(x)|^2 \, k_bT \, \frac{1}{4 \pi} \frac{2 m_{DOS}(x)}{\hbar^2} \ln\left(1+\exp((E_f-E_i)/k_bT)\right) \end{equation}
in one dimension, with a two-dimensional $k$-space, where $E_i$ and $\Phi_i$ are the $i$-th eigenenergy and eigenfunction of the sample.
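The thermal-occupancy factor for one 2D subband can be sketched in a few lines. The constants below (GaAs-like effective mass, room temperature) are illustrative assumptions, not values from the simulation, and we take the exponent as $(E_f-E_i)/k_bT$ so that the occupancy falls off for subbands above the Fermi level:

```python
import math

KB = 1.380649e-23          # J/K
HBAR = 1.054571817e-34     # J*s
M0 = 9.1093837015e-31      # kg
eV = 1.602176634e-19       # J

def subband_sheet_density(E_i, E_f, T=300.0, m_eff=0.067 * M0):
    """Sheet density (1/m^2) contributed by one subband at energy E_i (J),
    following n_i = (2 m / (4 pi hbar^2)) * kT * ln(1 + exp((E_f - E_i)/kT))."""
    kT = KB * T
    dos_factor = 2.0 * m_eff / (4.0 * math.pi * HBAR ** 2)
    return dos_factor * kT * math.log1p(math.exp((E_f - E_i) / kT))

# A subband further above the Fermi level holds fewer electrons:
print(subband_sheet_density(0.05 * eV, 0.0) > subband_sheet_density(0.10 * eV, 0.0))  # True
```

`math.log1p` keeps the $\ln(1+\cdot)$ accurate when the argument of the exponential is very negative.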
In figure 5 we compare three different density calculations with effective mass. It shows that the eigenenergy of the electron function lies above the conduction band edge, which results in fewer electrons in the band.
We can calculate the charge carrier density without assuming a constant mass in the lateral dimension, but for this we have to calculate the wave function at each in-plane $k_{||}$ point in the sample. It ends up with nearly the same result as in figure 5. |
Construction of formula in Sagemath program
Let $P_k:= \mathbb{F}_2[x_1,x_2,\ldots ,x_k]$ be the polynomial algebra in $k$ variables with the degree of each $x_i$ being $1,$ regarded as a module over the mod-$2$ Steenrod algebra $\mathcal{A}.$ Here $\mathcal{A} = \langle Sq^{2^m}\,\,|\,\,m\geq 0\rangle.$
Being the cohomology of a space, $P_k$ is a module over the mod-2 Steenrod algebra $\mathscr{A}.$ The action of $\mathscr{A}$ on $P_k$ is explicitly given by the formula
$$Sq^m(x_j^d) = \binom{d}{m}x_j^{m+d},$$ where $ \binom{d}{m}$ is reduced mod-2 and $\binom{d}{m} = 0$ if $m > d.$
Now, I want to use the Steenrod algebra package and the multivariate polynomial ring package, together with the formula above, to implement the following formula in a Sagemath program:
$$ Sq^m(f) = \sum\limits_{2^{m_1} + 2^{m_2} + \cdots + 2^{m_k}= m}\binom{d_1}{2^{m_1}}x_1^{2^{m_1}+d_1}\binom{d_2}{2^{m_2}}x_2^{2^{m_2}+d_2}\ldots \binom{d_k}{2^{m_k}}x_k^{2^{m_k}+d_k}$$ for all $f = x_1^{d_1}x_2^{d_2}\ldots x_k^{d_k}\in P_k.$
Example: Let $k = 5, m = 2$ and $f = x_1^2x_2^3x_3^2x_4x_5\in P_5.$ We have $$ Sq^2(x_1^2x_2^3x_3^2x_4x_5) = x_1^4x_2^3x_3^2x_4x_5 + x_1^2x_2^5x_3^2x_4x_5 + x_1^2x_2^3x_3^4x_4x_5 +x_1^2x_2^3x_3^2x_4^2x_5^2 + x_1^2x_2^4x_3^2x_4x_5^2 + x_1^2x_2^4x_3^2x_4^2x_5^1.$$
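For what it's worth, here is a plain-Python sketch of the action on a monomial that reproduces the worked example above (names are mine, and it is easy to port to Sage's `PolynomialRing(GF(2), ...)`). Note it uses the standard Cartan formula, summing over all compositions $m_1+\cdots+m_k = m$, with a monomial stored as its tuple of exponents:

```python
from math import comb
from itertools import product

def sq(m, exponents):
    """Sq^m acting on the monomial x_1^{d_1}...x_k^{d_k} over F_2.

    Cartan formula: Sq^m(f) = sum over m_1+...+m_k = m of
    prod_i binom(d_i, m_i) x_i^{d_i + m_i}, coefficients reduced mod 2.
    Returns the set of exponent tuples of the surviving monomials.
    """
    k = len(exponents)
    result = set()
    for ms in product(range(m + 1), repeat=k):
        if sum(ms) != m:
            continue
        coeff = 1
        for d, mi in zip(exponents, ms):
            coeff = coeff * comb(d, mi) % 2
        if coeff:  # keep terms with odd coefficient
            result.add(tuple(d + mi for d, mi in zip(exponents, ms)))
    return result

# Sq^2(x1^2 x2^3 x3^2 x4 x5) -- six monomials, matching the example above:
print(sorted(sq(2, (2, 3, 2, 1, 1))))
```

Each surviving exponent tuple determines the composition that produced it, so no two terms can collide and cancel here; in general one would toggle membership mod 2.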
I hope that someone can help. Thanks! |
Let $A\in\mathbb{C}^{m\times n}$, then a generalized inverse matrix $B$ of $A$ satisfies the following $$ABA = A \ \text{and} \ BAB = B.$$
I am to show that $B$ is unique if $A$ is square and invertible. I also want to know if my approach is correct.
From the first criterion, we get $B=A^{-1}$. My first question is then, does this imply $A=B^{-1}$?
Second, if there exists a $C\neq B$ such that $$ACA = A \ \text{and} \ CAC = C,$$
then clearly $C=A^{-1}$, and since $A^{-1}$ is unique, then $C=B$, and so we have a contradiction.
Is this a correct way to prove the generalized inverse $B$ of $A$ is unique? And can I deduce that $A=B^{-1}$ from the fact that $B = A^{-1}$? I also feel that I am not really using the second criterion ($BAB=B$) and I feel that I should.
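For what it's worth, when $A$ is invertible the whole computation fits in two lines, and the second criterion then holds automatically rather than being an extra assumption (which may address that last concern):

```latex
ABA = A
  \;\Longrightarrow\; A^{-1}(ABA)A^{-1} = A^{-1}A A^{-1}
  \;\Longrightarrow\; B = A^{-1},
\qquad
BAB = A^{-1}A A^{-1} = A^{-1} = B.
```

And since $B = A^{-1}$ is itself invertible with inverse $A$, one does indeed get $A = B^{-1}$.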
Thanks |
Here, when I was studying almost integers, I made the following conjecture:
Let $x$ be a natural number. Then, for a sufficiently large natural number $n$, $$\Omega=(\sqrt x+\lfloor \sqrt x \rfloor)^n$$ is an almost integer. The value of $n$ needed depends on the difference between $x$ and the nearest perfect square smaller than it. Can anyone prove this conjecture? For example, $(\sqrt 5+2)^{25}=4721424167835364.0000000000000002$ and $(\sqrt{27}+5)^{15}=1338273218579200.000000000024486$.
Well, we have that $$(\sqrt{x}+\lfloor\sqrt{x}\rfloor)^{n} + (-\sqrt{x}+\lfloor\sqrt{x}\rfloor)^{n} $$ is an integer (by the Binomial Theorem), but $(-\sqrt{x}+\lfloor\sqrt{x}\rfloor)^{n}\to 0$ if $\sqrt{x}$ was not already an integer. |
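This can be checked with exact integer arithmetic. For $x=5$, $p=\lfloor\sqrt5\rfloor=2$, the sums $s_n=(2+\sqrt5)^n+(2-\sqrt5)^n$ satisfy $s_{n+1}=4s_n+s_{n-1}$ (since $p^2-x=-1$), so they are integers, while $(2-\sqrt5)^n\to0$. A quick sketch:

```python
def s(n):
    # s_n = (2 + sqrt(5))**n + (2 - sqrt(5))**n, an integer for all n >= 0:
    # it satisfies s_{n+1} = 4*s_n + s_{n-1} with s_0 = 2, s_1 = 4.
    a, b = 2, 4
    for _ in range(n):
        a, b = b, 4 * b + a
    return a

# (sqrt(5) + 2)**25 differs from the integer s(25) by |(2 - sqrt(5))**25| ~ 2.7e-16:
print(s(25))  # 4721424167835364
```

The integer agrees with the value quoted in the question, and the leftover term $(2-\sqrt5)^{25}$ accounts for the string of zeros after the decimal point.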
Vector Spaces Review
Vector Spaces Review
We will now summarize some of the material mentioned on earlier pages regarding vector spaces.
Recall that a Vector Space over the field $\mathbb{F}$ is a nonempty set with two operations of addition and scalar multiplication using the scalars from $\mathbb{F}$. If $V$ is a vector space, then for all $u, v, w \in V$ and $a, b \in \mathbb{F}$, the following ten axioms must be satisfied:
1. Commutativity of Addition: $u + v = v + u$.
2. Associativity of Addition: $u + (v + w) = (u + v) + w$.
3. Existence of a Zero Vector: $u + 0 = u = 0 + u$.
4. Existence of Additive Inverses: $u + (-u) = 0 = (-u) + u$.
5. Associativity of Scalar Multiplication: $a(bu) = (ab)u$.
6. Existence of a Multiplicative Identity: $1u = u$.
7. Distributivity over Vector Addition: $a(u + v) = au + av$.
8. Distributivity over Scalar Multiplication: $(a + b)u = au + bu$.
9. Closure under Addition: $u, v \in V$ implies $(u + v) \in V$.
10. Closure under Scalar Multiplication: $a \in \mathbb{F}$ and $u \in V$ implies $au \in V$.
Some examples of vector spaces (with the obvious definitions for addition and scalar multiplication) are: $0$ (the zero vector space, which contains only the zero vector), $\mathbb{F}^n$ (the set of all $n$-component vectors whose components are in $\mathbb{F}$), and $M_{mn}$ (the set of all $m \times n$ matrices whose entries are in $\mathbb{F}$). Some further examples of vector spaces are $\wp_n (\mathbb{F})$, the vector space of all polynomials of degree less than or equal to $n$ whose coefficients are in $\mathbb{F}$; $\mathbb{F}^{\infty}$, the set of all infinite sequences whose terms are in $\mathbb{F}$; and $F(-\infty, \infty)$, the set of all real-valued functions. The definitions of addition and scalar multiplication defined on these vector spaces are once again the obvious ones. Vector spaces satisfy many properties which can be derived from the ten axioms listed above. For example, if $V$ is a vector space then for each vector $x \in V$ there exists a unique additive inverse $-x$. To show this, suppose that two additive inverses exist, say $-x$ and $-x'$. Then we have that:
\begin{equation} -x = -x + 0 = -x + (x -x') = (-x + x) - x' = 0 - x' = x' \end{equation}
Recall that a Vector Subspace, say $U$ of the vector space $V$ is a subset $U \subseteq V$ that is itself a vector space under the addition and scalar multiplication defined on $V$. Note that the zero space $0 = \{ 0 \}$ is a subspace of every vector space, and the whole space $V$ is also a subspace of every vector space $V$. For a more complicated example - the subspaces of $\mathbb{R}^2$ are the zero vector space, the whole space $\mathbb{R}^2$, and all lines that pass through the origin. The subspaces of $\mathbb{R}^3$ are the zero vector space, the whole space $\mathbb{R}^3$, all the lines that pass through the origin, and all of the planes that pass through the origin. To check if a subset $U$ of $V$ is a subspace, we need to verify only axioms 9 and 10 above since the other axioms are inherited from the fact that $U \subseteq V$. Also, recall the important lemma from the Vector Subspaces page that if $U$ is a subset of the $\mathbb{F}$-vector space $V$, then to verify $U$ is a vector space, all we need to show is that if $x, y \in U$ and $a, b \in \mathbb{F}$ then $(ax + by) \in U$. Now also recall from the Vector Subspace Sums page that if $U_1$, $U_2$, …, $U_m$ are all vector spaces of $V$, then we can form the following Subspace Sumwhich is the defined to be the set of all vectors $v \in V$ that can be written as $v = u_1 + u_2 + ... + u_m$ where $u_i \in U_i$ for each $i = 1, 2, ..., m$:
\begin{align} \quad \sum_{i=1}^{m} U_i = U_1 + U_2 + ... + U_m \end{align}
We say that the vector space $V$ is equal to the sum of subspaces $U_1$, $U_2$, …, $U_m$ if every vector $v \in V$ can be written as this sum. Recall from the Vector Sum Theorems page that we proved that if $U_1$ and $U_2$ are subspaces of $V$ then $U_1 + U_2$ is a subspace of $V$. We also say that $U_1 + U_2$ is the smallest subspace of $V$ that contains the subspaces $U_1$ and $U_2$. Furthermore, we say that $V$ is a Direct Sum of the subspaces $U_1$, $U_2$, …, $U_m$ if $V$ is equal to the sum of these subspaces and if every vector $v \in V$ can be written uniquely as the sum $v = u_1 + u_2 + ... + u_m$ where $u_i \in U_i$ for each $i = 1, 2, ..., m$. We denote this by:
\begin{align} \quad V = \bigoplus_{i=1}^{m} U_i = U_1 \oplus U_2 \oplus ... \oplus U_m \end{align}
Recall from the Direct Sum Theorems page that we proved that if $U_1$ and $U_2$ are two subspaces of $V$, then $V = U_1 \oplus U_2$ if and only if $V = U_1 + U_2$ and $U_1 \cap U_2 = \{ 0 \}$. Note that this theorem does NOT necessarily hold when forming direct sums from more than two subspaces. |
Can someone explain to me how this equality holds? I'm taking a Calculus III course at the moment and doing Taylor and Maclaurin series. This is the last step of a problem, but I don't see how the two sides are equal (probably because I've never dealt with a sum-times-sum problem before, or at least can't recall doing one). If it at all matters, this was the original problem: $f(x) = \sin(x)\ln(1+x)$. Thanks.
Open up each set of parentheses and carry out the product, grouping together equal powers of $\;x\;$:
$$\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)!}x^{2n+1}\cdot\sum_{n=1}^\infty\frac{(-1)^{n-1}}nx^n=\left(x-\frac{x^3}{3!}+\frac{x^5}{5!}-\ldots\right)\left(x-\frac{x^2}2+\frac{x^3}3-\ldots\right)=$$
$$=x^2-\frac12x^3+\frac16x^4+\ldots$$
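The same bookkeeping can be done mechanically with exact rational arithmetic; a small sketch truncating both series at degree 5 (variable names are ours):

```python
from fractions import Fraction
from math import factorial

N = 6  # track coefficients of x^0 .. x^5

# sin(x) = sum (-1)^n x^(2n+1) / (2n+1)!
sin_c = [Fraction(0)] * N
for n in range(N // 2):
    sin_c[2 * n + 1] = Fraction((-1) ** n, factorial(2 * n + 1))

# log(1+x) = sum_{n>=1} (-1)^(n-1) x^n / n
log_c = [Fraction(0)] + [Fraction((-1) ** (n - 1), n) for n in range(1, N)]

# Cauchy product: coefficient of x^k is sum_i sin_c[i] * log_c[k-i]
prod_c = [sum(sin_c[i] * log_c[k - i] for i in range(k + 1)) for k in range(N)]
print(prod_c)  # x^2 - x^3/2 + x^4/6 - x^5/6 + ...
```

The coefficients $1, -\tfrac12, \tfrac16$ of $x^2, x^3, x^4$ match the hand computation above.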
FYI, the first series is $\;\sin x\;$, which converges for all (real or complex) $\;x\;$ , whereas the second one is $\;\log(1+x)\;$ , which converges only for $\;|x|<1\;$ |
Type of Publication: Working Paper/Technical Report URI (citable link): http://nbn-resolving.de/urn:nbn:de:bsz:352-0-264314 Author: Racke, Reinhard; Yoshikawa, Shuji Year of publication: 2014 Series: Konstanzer Schriften in Mathematik ; 334 Summary:
We study the Cauchy problem of the Ball model for an extensible beam: \[\rho \partial_t^2 u + \delta \partial_t u + \kappa \partial_x^4 u + \eta \partial_t \partial_x^4 u = \left(\alpha + \beta \int_{\R} |\partial_x u|^2 dx + \gamma \eta \int_{\R} \partial_t \partial_x u \partial_x u dx \right) \partial_x^2 u.\] The aim of this paper is to investigate singular limits as $\rho \to 0$ for this problem. In the authors' previous paper \cite{ra-yo} decay estimates of solutions $u_{\rho}$ to the equation in the case $\rho>0$ were shown. With the help of the decay estimates we describe the singular limit in the sense of the following uniform (in time) estimate: \[\| u_{\rho} - u_{0} \|_{L^{\infty}([0,\infty); H^2(\R))} \leq C \rho.\]
Subject (DDC): 510 Mathematics Link to License: Terms of use Bibliography of Konstanz: Yes
RACKE, Reinhard, Shuji YOSHIKAWA, 2014. Singular limits in the Cauchy problem for the damped extensible beam equation
@techreport{Racke2014Singu-30729, series={Konstanzer Schriften in Mathematik}, title={Singular limits in the Cauchy problem for the damped extensible beam equation}, year={2014}, number={334}, author={Racke, Reinhard and Yoshikawa, Shuji} }
|
Table of Contents
Pairs of Complex Roots for Polynomials with Real Coefficients
Suppose that $p(x) = a_0 + a_1x + ... + a_mx^m$ is a polynomial with real coefficients $a_0, a_1, ..., a_m \in \mathbb{R}$. Recall that the complex conjugate of a complex number $z = a + bi = \Re (z) + \Im (z) i$ is denoted by $\bar{z}$ and $\bar{z} = a - bi = \Re(z) - \Im (z) i$.
The following theorem will tell us that the complex roots of $p(x)$ come in pairs, that is, if $\lambda \in \mathbb{C}$ is a complex root of $p(x)$ then so is its complex conjugate, $\bar{\lambda}$. Note that all real numbers are complex numbers, and so it should be relatively obvious that if $\lambda = a + 0i = a$ is a strictly real root of $p(x)$, then $\bar{\lambda} = a - 0i = a$ is trivially also a root of $p(x)$. The important part of the following theorem concerns strictly complex roots.
Theorem 1: Let $p(x) \in \wp (\mathbb{R})$. If $\lambda \in \mathbb{C}$ is a complex root of $p(x)$, then the complex conjugate of $\lambda$, $\bar{\lambda}$, is also a root of $p(x)$. Proof: Let $p(x) \in \wp (\mathbb{R})$ such that $p(x) = a_0 + a_1x + ... + a_mx^m$ where $a_0, a_1, ..., a_m \in \mathbb{R}$, and suppose that $\lambda \in \mathbb{C}$ is a complex root of $p(x)$. Then $p(\lambda) = 0$ and so: \begin{align} a_0 + a_1 \lambda + ... + a_m \lambda^m = 0 \end{align} Now take the complex conjugate of both sides of the equation above, and thus: \begin{align} \overline{a_0 + a_1 \lambda + ... + a_m \lambda^m} = \bar{0} = 0 \end{align} Recall that the complex conjugate of a real number is itself. Also recall that for $y, z \in \mathbb{C}$ the additive property of a complex conjugate is $\overline{y + z} = \bar{y} + \bar{z}$, and the multiplicative property of a complex conjugate is $\overline{y \cdot z} = \bar{y} \cdot \bar{z}$. Applying these three properties to the lefthand side of the equation above, we get that: \begin{align} a_0 + a_1 \bar{\lambda} + ... + a_m \bar{\lambda}^m = 0 \end{align} Therefore $p(\bar{\lambda}) = 0$ and so $\bar{\lambda}$ is a root of $p(x)$. $\blacksquare$
The following corollary is also extremely useful and tells us that polynomials with real coefficients of odd degree must have at least one real root.
Corollary 1: Let $p(x) \in \wp ( \mathbb{R} )$. If $\mathrm{deg} (p)$ is odd, then $p(x)$ has at least one real root. Proof: Suppose that $p(x) \in \wp ( \mathbb{R} )$ and $\mathrm{deg} (p) = n$ is odd. By Theorem 1, the strictly complex roots of $p(x)$ come in conjugate pairs, so the number of roots $\lambda = a + bi$ with $b \neq 0$ is even, and hence there must be at least one root of the form $a + 0i$; that is, $p(x)$ has at least one real root. $\blacksquare$ Example 1: Verify Theorem 1 by finding all roots of the polynomial $p(x) = x^2 + 4x + 5$.
Using the quadratic formula we have that: $$ x = \frac{-4 \pm \sqrt{4^2 - 4 \cdot 1 \cdot 5}}{2} = \frac{-4 \pm \sqrt{-4}}{2} = \frac{-4 \pm 2i}{2} = -2 \pm i $$
So the roots of $p(x)$ are $x_1 = -2 + i$ and $x_2 = -2 - i$, which are the complex conjugates of one another. |
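A quick numerical sanity check of the conjugate pair in Example 1, using plain Python complex literals:

```python
def p(x):
    # the polynomial from Example 1
    return x**2 + 4*x + 5

# both a root and its conjugate evaluate to zero:
print(p(-2 + 1j))  # 0j
print(p(-2 - 1j))  # 0j
```

Both evaluations are exactly zero here because the arithmetic involves only small integers, so no rounding occurs.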
1) Feynman-Kac and Girsanov
First you should remember that the process $X$ is independent of the measure you are considering.
Now let's consider a change of measure from ${\mathbb{P}}$ to ${\mathbb{Q}}$. Let us assume $\mathbb{E}_t^{\mathbb{P}}[\tfrac{d{\mathbb{Q}}}{d{\mathbb{P}}}] = e^{\theta W_t^{\mathbb{P}} - \frac{1}{2}\theta^2 t}$ for some constant $\theta$. The BM $W^{\mathbb{P}}$ under ${\mathbb{P}}$ is no longer a BM under ${\mathbb{Q}}$. But Girsanov tells us that $dW^{\mathbb{Q}} = dW^{\mathbb{P}} - d\langle W^{\mathbb{P}}_t,\log \mathbb{E}_t^{\mathbb{P}}[\tfrac{d{\mathbb{Q}}}{d{\mathbb{P}}}]\rangle = dW^{\mathbb{P}} - \theta dt$ is a BM under ${\mathbb{Q}}$.
If you rewrite the SDE of $X$ in terms of this new BM, you see a drift term $d\langle X_t,\log \mathbb{E}_t^{\mathbb{P}}[\tfrac{d{\mathbb{Q}}}{d{\mathbb{P}}}]\rangle$ appear. In your case, this reads $$ dX_t = \theta dt + dW^{\mathbb{Q}}_t. $$ Now you can apply Feynman-Kac, which tells you that $$ u^{\mathbb{Q}}(t,x) := \mathbb{E}^{\mathbb{Q}}[e^{-rT}f(X_T)|X_t = x] $$ is going to be a solution of the PDE $$ v_t + \theta v_x + \frac{1}{2}v_{xx} - rv = 0. $$ This is a different function, because the expectation is taken under a different measure, and it satisfies a different PDE than your original function $$ u^{\mathbb{P}}(t,x) = \mathbb{E}^{\mathbb{P}}[e^{-rT}f(X_T)|X_t = x]. $$
2) Derivative pricing and change of numeraire
Now if you are considering $$ u(t,x)=\mathbb{E}^N_t[N(t)/N(T)f(X_T)], $$ this function does not depend on the numeraire $N$ you are using. In financial terms, the price does not depend on the currency or asset you are doing your accounting in.
In the case where $N_t = e^{\int_0^t \beta(X_u)\,du}$ for a deterministic function $\beta$, you end up with the usual function $$ u(t,x)=\mathbb{E}^N[N(t)/N(T)f(X_T)|X_t = x] $$ being a solution of $$ u_t + \frac{1}{2}u_{xx} - \beta(x)u = 0. $$ But in general, $N_t$ is not entirely determined by $X_t$ and you cannot apply FK directly. Remember that FK assumes you have a Markovian process driving everything. So you would still need some assumption, like $(X,N)$ being Markovian for example, and the conditional expectation should be taken with respect to the value of both $X$ and $N$: $$ u(t,x,n)=\mathbb{E}^N[N(t)/N(T)f(X_T)|N_t=n,X_t = x] $$ would then be a solution of a PDE given by FK.
Hope that clarifies things a bit. |
Calculating the size of a set by observing the proportionality change of its disjoint subsets Background
Instagram provides a polling feature, which allows a user to ask the audience a question with two answers. Once an audience member picks one of the two answers, the percentage of users who picked each answer is displayed.
So the questions are: how many people actually voted in the poll? How many people voted for each option?
If you make observations before and after one vote, you can directly calculate the total number of votes from the weight of that one vote.
Cosmin Gorgovan, Fri 12 Apr 2019 14:43:14 BST
I thought he made a really good point, and that this problem is actually mathematically interesting and practically useful. So I set out to solve this problem.
We can consider everyone who voted in an Instagram poll as a set, and the two options are disjoint subsets of the superset.
Formal problem statement
Set $A$ consists of disjoint subsets $a_1, a_2, \ldots, a_k$. Although we do not know the cardinality of set $A$ (denoted by $|A|$) or the cardinality of each of the subsets, we do know the proportion of each subset in terms of set $A$, that is, we know $\frac{|a_1|}{|A|}, \frac{|a_2|}{|A|}, \ldots, \frac{|a_k|}{|A|}$. We are allowed to add $n$ elements into subset $a_1$ and observe its change in proportionality. What is the cardinality of set $A$ before the elements were added to $a_1$?
Solution
This problem can be solved by modelling the process of adding $n$ elements to subset $a_1$. We denote the proportionality of $a_1$ as $\alpha_1$, that is $\alpha_1 = \frac{|a_1|}{|A|}$, and the change in proportionality of $a_1$ as $\delta_{a_1}$.
We begin by writing down the process that leads to the proportionality change. Adding $n$ elements into subset $a_1$ gives:
$$ \frac{|a_1| + n }{|A| + n} - \frac{|a_1|}{|A|} = \delta_{a_1} $$
By multiplying the equation with the denominators in the left hand side, we obtain:
$$ (|a_1| + n )|A| - |a_1|(|A| + n) = \delta_{a_1}(|A| + n)(|A|)$$
By expanding the brackets, we obtain:
$$ |a_1||A| + n|A| - |a_1||A| - |a_1|n = \delta_{a_1}|A|^2 + \delta_{a_1}n|A| $$
By simplifying and rearranging the above equation, we obtain:
$$ |A|^2\delta_{a_1} + |A|(\delta_{a_1}n - n) + |a_1|n= 0$$
The above equation has two unknowns – $|A|$ and $|a_1|$. We can remove the unknown $|a_1|$ by substituting in $|a_1| = \alpha_1|A|$. By doing so, we obtain:
$$ |A|^2\delta_{a_1} + |A|(\delta_{a_1}n - n) + \alpha_1|A|n = 0 $$
By simplifying the above equation, we obtain:
$$ |A|^2\delta_{a_1} + |A|n(\delta_{a_1} - 1 + \alpha_1)= 0 $$
By dividing the above equation by $|A|$ and rearrangement, we obtain:
$$ |A| = \frac{n(1 - \alpha_1 - \delta_{a_1})}{\delta_{a_1}} $$
You know everything in the right hand side of the equation, so solving $|A|$ is very easy. |
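As a sanity check, the final formula can be exercised on a synthetic poll whose true size we know in advance (the function name is ours):

```python
def total_votes(alpha_1, delta, n):
    """Estimate |A|, the number of votes before our n votes were cast,
    from option a_1's share alpha_1 before voting and its observed change delta,
    via |A| = n * (1 - alpha_1 - delta) / delta."""
    return n * (1 - alpha_1 - delta) / delta

# Known poll: 30 of 100 votes for option 1, then we cast n = 5 more votes for it.
before = 30 / 100
after = 35 / 105
print(total_votes(before, after - before, n=5))  # close to 100
```

Up to floating-point round-off, the estimate recovers the original 100 votes, from which $|a_1| = \alpha_1 |A| = 30$ follows as well.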
Basic Theorems Regarding Ambient Isotopies
Recall from the Ambient Isotopic Embeddings on Topological Spaces page that if $X$ and $Y$ are topological spaces and $f : X \to Y$ and $g : X \to Y$ are embeddings, then $f$ is said to be ambient isotopic to $g$ within $Y$ if there exists a continuous function $H : Y \times I \to Y$ such that:
1)$H_t : Y \to Y$ is a homeomorphism for all $t \in [0, 1]$. 2)$H_0 = \mathrm{id}_Y$. 3)$H_1 \circ f = g$.
Furthermore, if $A$ and $B$ are topological subspaces of $X$ then $A$ is said to be ambient isotopic to $B$ within $Y$ if there exists a continuous function $H : Y \times I \to Y$ such that:
1)$H_t : Y \to Y$ is a homeomorphism for all $t \in [0, 1]$. 2)$H_0 = \mathrm{id}_Y$. 3)$H_1(A) = B$
We will now state some basic theorems regarding ambient isotopies.
Theorem 1: Let $X$ and $Y$ be topological spaces and let $f, g : X \to Y$ be embeddings. If $f$ is ambient isotopic to $g$ within $Y$ then $f(X)$ is ambient isotopic to $g(X)$ within $Y$. Proof: Since $f$ and $g$ are ambient isotopic within $Y$ there exists an ambient isotopy $H : Y \times I \to Y$. Then properties (1) and (2) hold in the definition of $f(X)$ and $g(X)$ being ambient isotopic within $Y$. Lastly, observe that since $H_1 \circ f = g$, we have that $H_1(f(X)) = (H_1 \circ f)(X) = g(X)$. So property (3) holds. Hence $f(X)$ is ambient isotopic to $g(X)$ within $Y$. $\blacksquare$
Theorem 2: Let $A$ and $B$ be topological subspaces of a topological space $Y$. Then $A$ and $B$ are ambient isotopic within $Y$ if and only if $A^c$ and $B^c$ are ambient isotopic within $Y$. Proof: $\Rightarrow$ Since $A$ and $B$ are ambient isotopic within $Y$ there exists an ambient isotopy $H : Y \times I \to Y$ with $H_1(A) = B$. Then properties (1) and (2) hold for $A^c$ and $B^c$ being ambient isotopic within $Y$. Furthermore, since each $H_t : Y \to Y$ is a homeomorphism and in particular a bijection, we have that $H_1(A^c) = (H_1(A))^c$. Therefore: $H_1(A^c) = (H_1(A))^c = B^c$. So $A^c$ is ambient isotopic to $B^c$ within $Y$. $\Leftarrow$ The converse follows immediately. $\blacksquare$ |
I was wondering if the following holds:
If you have an ODE $$-y''(x) + q(x) y(x) = \lambda y(x)$$ on a finite interval $(a,b)$ and you know that this equation is limit-circle or limit-point at the end-points.
If you now add a nice smooth, bounded potential $V \in C^{\infty}(\mathbb{R})$ to your current potential, so that you end up with the ODE
$$-y''(x) + (q(x)+V(x)) y(x) = \lambda y(x),$$
is it still clear that your differential equation is limit-circle or limit-point at the endpoints?
I mean, somehow I feel that this statement should hold, as it is somehow natural to assume that a nice potential should keep the nice properties of the operator, but I could not find a reference for this. |
Critical exponents are properties of the RG fixed point that drives the phase transition. They are computed by linearising the RG flow equations close to the fixed point. The exponents are the derivatives of the beta functions evaluated
at the fixed point. They know nothing of the way you approach the fixed point, in particular whether you are flowing slightly above or below the critical temperature.
An obvious exception to this is the scaling of the order parameter,
$$ m \sim (-\tau)^{\beta} \, .$$
Above the critical temperature, $\tau > 0$, we have $m=0$ and there is no critical exponent.
Edit (15 sept 2015): I recently read this paper. They show, using Renormalisation Group (RG) methods, that critical exponents can be different above and below a phase transition if there is a dangerously irrelevant operator involved. An anisotropic term is added. This term breaks a continuous symmetry and forces the order parameter to take one of $n$ countable values. They consider $n=6$. The value of $n$ depends on the way the symmetry is broken and is not important.
Basically, below $T_c$, the RG flow becomes able to double past the previously attractive Infra-Red (IR) RG fixed point where it terminates in the symmetric case. See the figure in the paper. The closer you are to the phase transition, the closer you come to this fixed point and the larger the susceptibility gets. There is an additive contribution to the critical exponent of the susceptibility because of this. When $T>T_c$, the IR attractive fixed point is Gaussian and its attractiveness is not affected by the anisotropy. Then the critical exponent is not affected. |
Advances in Differential Equations Adv. Differential Equations Volume 9, Number 3-4 (2004), 241-265. Conditional and unconditional well-posedness for nonlinear evolution equations Abstract
Attention is given to the question of well-posedness in Hadamard's classical sense for nonlinear evolution equations of the form \begin{equation} \frac{du}{dt} +L u = N (u), \qquad u(0) =\phi. \tag*{(0.1)} \end{equation} In view are various classes of nonlinear wave equations, nonlinear Schrödinger equations, and the (generalized) KdV equations. Equations of type (0.1) are often well posed in a scale $X_s$, say, of Banach spaces, at least for $s$ large enough. Here, increasing values of $s$ correspond to more regularity; thus $X_r\subset X_s $ if $r>s$. For smaller values of $s$, some equations of the form in (0.1) are well-posed in a conditional sense that the uniqueness aspect depends upon the imposition of auxiliary conditions. In the latter context, it is natural to inquire whether or not the auxiliary conditions are essential to securing uniqueness. It is shown here that for a conditionally well-posed Cauchy problem (0.1), the auxiliary specification is removable if a certain persistence of regularity holds. As a consequence, it will transpire that a conditionally well-posed problem (0.1) is (unconditionally) well-posed if the aforementioned persistence property holds. These results are applied to study several recent conditional well-posedness results for the KdV equation, nonlinear Schrödinger equations, and nonlinear wave equations. It is demonstrated that the auxiliary conditions used to secure the uniqueness are all removable and the corresponding Cauchy problems are, in fact, unconditionally well-posed as long as their classical solutions exist globally. In addition, the well-posedness for an initial-boundary-value problem for the KdV equation posed in a quarter plane is also considered. An affirmative answer is provided for a uniqueness question left open in a recent paper of Colliander and Kenig [14].
Article information Source Adv. Differential Equations, Volume 9, Number 3-4 (2004), 241-265. Dates First available in Project Euclid: 18 December 2012 Permanent link to this document https://projecteuclid.org/euclid.ade/1355867944 Mathematical Reviews number (MathSciNet) MR2100628 Zentralblatt MATH identifier 1103.35092 Subjects Primary: 35Q55: NLS-like equations (nonlinear Schrödinger) [See also 37K10] Secondary: 34G20: Nonlinear equations [See also 47Hxx, 47Jxx] 35B30: Dependence of solutions on initial and boundary data, parameters [See also 37Cxx] Citation
Bona, Jerry L.; Sun, Shu-Ming; Zhang, Bing-Yu. Conditional and unconditional well-posedness for nonlinear evolution equations. Adv. Differential Equations 9 (2004), no. 3-4, 241--265. https://projecteuclid.org/euclid.ade/1355867944 |
Newform invariants
Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2,\beta_3\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form.
Basis of coefficient ring in terms of a root \(\nu\) of \(x^{4} - x^{3} - 7x^{2} + 10x - 1\):
\(\beta_{0}\) \(=\) \( 1 \) \(\beta_{1}\) \(=\) \( \nu \) \(\beta_{2}\) \(=\) \( \nu^{3} + \nu^{2} - 6 \nu \) \(\beta_{3}\) \(=\) \( \nu^{3} - 6 \nu + 4 \)
\(1 = \beta_0\), \(\nu = \beta_{1}\), \(\nu^{2} = -\beta_{3} + \beta_{2} + 4\), \(\nu^{3} = \beta_{3} + 6 \beta_{1} - 4\)
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below.
For more information on an embedded modular form you can click on its label.
This newform does not admit any (nontrivial) inner twists.
Atkin-Lehner signs: \(p = 2\): \(-1\); \(p = 3\): \(+1\); \(p = 19\): \(+1\); \(p = 53\): \(+1\)
This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(6042))\):
\(T_{5}^{4} - 6 T_{5}^{3} + 5 T_{5}^{2} + 15 T_{5} - 7\), \(T_{7}^{4} - 3 T_{7}^{3} - 14 T_{7}^{2} + 19 T_{7} + 22\), \(T_{11}^{4} + 2 T_{11}^{3} - 11 T_{11}^{2} + 7 T_{11} - 1\)
I recently gave a tutorial at CMU about spectral learning for NLP. This tutorial was based on a tutorial I had given last year with Michael Collins, Dean Foster, Karl Stratos and Lyle Ungar at NAACL.
One of the algorithms I explained there was the spectral learning algorithm for HMMs by Hsu, Kakade and Zhang (2009). This algorithm estimates parameters of HMMs in the “unsupervised setting” — only from sequences of observations. (Just like the Baum-Welch algorithm — expectation-maximization for HMMs — does.)
I want to repeat this explanation here, and give some intuition about this algorithm, since it seems to confuse people quite a lot. At a first glance, it looks quite mysterious why the algorithm works, though its implementation is very simple. It is one of the earlier algorithms in this area of latent-variable learning using the method of moments and spectral methods, and promoted the creation of other algorithms for latent-variable learning.
So here are the main ideas behind it, with some intuition. In my explanation of the algorithm, I am going to forget about the “spectral” part. No singular value decomposition will be involved, or any type of spectral decomposition. Just plain algebraic and matrix multiplication tricks that require understanding what marginal probabilities are and how matrix multiplication and inversion work, and nothing more. Pedagogically, I think that’s the right thing to do, since introducing the SVD step complicates the understanding of the algorithm.
Consider a hidden Markov model. The parameters are represented in matrix form \( T \), \( O \) and \( \pi \). We assume \( m \) latent states and \( n \) observation symbols. More specifically, \( T \) is an \( m \times m \) stochastic matrix such that \( T_{hh'} \) is the probability of transitioning to state \( h \) from state \( h' \) (so each column of \( T \) sums to 1). \( O \) is an \( n \times m \) matrix such that \( O_{xh} \) is the probability of emitting symbol \( x \) — an observation — from latent state \( h \). \( \pi \) is an \( m \)-length vector with \( \pi_h \) being the initial probability for state \( h \).
To completely get rid of the SVD step, and simplify things, we will have to make the assumption that \(m = n\). This means that the number of states equals the number of observations. Not a very useful HMM, perhaps, but it definitely makes the derivation more clear. The fact that \( m=n\) means that \( O \) is now a square matrix — and we will assume it is invertible. We will also assume that \(T \) is invertible, and that \( \pi \) is positive in all coordinates.
If we look at the joint distribution of \(p(X_1 = x_1,X_2 = x_2)\), the first two observations in the HMM, then it can be written as:
\( p(X_1 = x_1, X_2 = x_2) = \sum_{h_1,h_2} p(X_1 = x_1, H_1 = h_1, X_2 = x_2, H_2 = h_2) = \sum_{h_1,h_2} \pi_{h_1} O_{x_1,h_1} T_{h_2,h_1} O_{x_2,h_2} \)
Nothing special here, just marginal probability summing out the first two latent states.
It is not hard to see that this can be rewritten in matrix form, i.e. if we define \( [P_{2,1}]_{x_2,x_1} = p(X_1 = x_1, X_2= x_2) \) then:
\( P_{2,1} = O T \mathrm{diag}(\pi)O^{\top} \)
where \( \mathrm{diag}(\pi) \) is just an \( m \times m \) diagonal matrix with \( \pi_h \) on the diagonal.
Just write down this matrix multiplication step-by-step explicitly, multiplying, say, from right to left, and you will be able to verify this identity for \( P_{2,1} \). Essentially, the matrix product, which involves dot-product between rows and vectors of two matrices, eliminates and sums out the latent states (and does other things, like multiplying in the starting probabilities).
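If you want to sanity-check this identity numerically, here is a small sketch (a hypothetical 3-state HMM with made-up random parameters; numpy is assumed) comparing the matrix form of \( P_{2,1} \) against the brute-force double sum over the latent states:

```python
import numpy as np

# Hypothetical HMM with m = n = 3 (made-up random parameters).
# Conventions as in the text: T[h2, h1] = p(h2 | h1), O[x, h] = p(x | h),
# so both T and O are column-stochastic.
rng = np.random.default_rng(0)
m = 3
T = rng.random((m, m)); T /= T.sum(axis=0, keepdims=True)
O = rng.random((m, m)); O /= O.sum(axis=0, keepdims=True)
pi = rng.random(m); pi /= pi.sum()

# Matrix form: P_{2,1} = O T diag(pi) O^T
P21 = O @ T @ np.diag(pi) @ O.T

# Brute force: [P21]_{x2, x1} = sum_{h1, h2} pi_{h1} O_{x1,h1} T_{h2,h1} O_{x2,h2}
P21_brute = np.zeros((m, m))
for x1 in range(m):
    for x2 in range(m):
        P21_brute[x2, x1] = sum(
            pi[h1] * O[x1, h1] * T[h2, h1] * O[x2, h2]
            for h1 in range(m) for h2 in range(m)
        )

assert np.allclose(P21, P21_brute)
```

Since \( P_{2,1} \) is a joint distribution over pairs of observations, its entries also sum to 1, which is another cheap check.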
Alright. So far, so good.
Now, what about the joint distribution of three observations?
\( p(X_1 = x_1, X_2 = x, X_3=x_3) = \sum_{h_1,h_2,h_3} p(X_1 = x_1, H_1 = h_1, X_2 = x, H_2 = h_2, X_3=x_3, H_3 = h_3) = \sum_{h_1,h_2,h_3} \pi_{h_1} O_{x_1,h_1} T_{h_2,h_1} O_{x,h_2} T_{h_3,h_2} O_{x_3,h_3} \)
Does this have a matrix form too? Yes, not surprisingly. If we fix \( x \), the second observation, and define \( [P_{3,x,1}]_{x_3,x_1} = p(X_1 = x_1, X_2 = x, X_3 = x_3) \), (i.e. \( P_{3,x,1} \) is an \( m \times m \) matrix defined for each observation symbol \( x \)), then
\( P_{3,x,1} = OT \mathrm{diag}(O_x) T \mathrm{diag}(\pi) O^{\top} \).
Here, \( \mathrm{diag}(O_x) \) is a diagonal matrix whose diagonal entries are the \(x\)th row of \( O \).
Now define \( B_x = P_{3,x,1}P_{2,1}^{-1} \) (this is well-defined because \( P_{2,1} \) is invertible — all the conditions we had on the HMM parameters make sure that it is true), then:
\( B_x = OT \mathrm{diag}(O_x) T \mathrm{diag}(\pi) O^{\top} \times (O T\mathrm{diag}(\pi)O^{\top})^{-1} = OT\mathrm{diag}(O_x)O^{-1} \)
(just recall that \( (AB)^{-1} = B^{-1} A^{-1} \) whenever both sides are defined and \( A \) and \( B \) are square matrices.)
This part of getting \( B_x \) (and I will explain in a minute why we need it) is the hardest part in our derivation so far. We can also verify that the vector of marginal probabilities \( p(X_1 = \cdot) \) equals \( O\pi \). Let's call \( b_1 \) the vector such that \([b_1]_x = p(X_1=x)\); i.e. \( b_1 \) is exactly the vector \( P_1 \).
We can also rewrite \( P_1 \) the following way:
\( P_1^{\top} = 1^{\top} T \mathrm{diag}(\pi) O^{\top} = 1^{\top} O^{-1} \underbrace{O T \mathrm{diag}(\pi) O^{\top}}_{P_{2,1}} \)
where \( 1^{\top} \) is a \( 1 \times m \) vector with the value 1 in all coordinates. The first equality is the "surprising" one — we use \( T \) to calculate the distribution of \( p(X_1 = x_1) \) — but if you write down this matrix multiplication explicitly, you will discover that we are summing over the elements of \( T \) in such a way that it does not play a role in the sum — that's because each column of \( T \) sums to 1. (As Hsu et al. put it in their paper: this is an unusual but easily verified form to write \( P_1 \).)
The above leads to the identity \( P_1^{\top} = 1^{\top} O^{-1} P_{2,1} \).
Now, it can be easily verified from the above form of \( P_1 \) that if we define the \(m\)-length vector \( b_{\infty} = (P^{\top}_{2,1})^{-1} P_1 \), then:
\( b_{\infty}^{\top} = 1^{\top} O^{-1} \).
So what do we have so far? We managed to define the following matrices and vectors based only on the joint distribution of the first three symbols in the HMM:
\( B_x = P_{3,x,1}P_{2,1}^{-1} = OT\mathrm{diag}(O_x)O^{-1}, \)
\( b_1 = P_1 = O\pi, \)
\( b_{\infty} = (P^{\top}_{2,1})^{-1} P_1, \quad b_{\infty}^{\top} = 1^{\top} O^{-1}. \)
The matrix \( B_x \in \mathbb{R}^{m \times m} \) and vectors \( b_{\infty} \in \mathbb{R}^m \) and \( b_1 \in \mathbb{R}^m \) will now play the role of our HMM parameters. How do we use them as our parameters?
Say we just observe a single symbol in our data, i.e. the length of the sequence is 1, and that symbol is \(x\). Let's compute the product \( b^{\top}_{\infty} B_x b_1 \).
According to the above equalities, it is true that this equals:
\( b^{\top}_{\infty} B_x b_1 = (1^{\top} O^{-1}) (O T \mathrm{diag}(O_x) O^{-1}) (O \pi) = 1^{\top} T \mathrm{diag}(O_x) \pi \).
Note that this quantity is a scalar. We are multiplying a matrix by a vector from left and right. Undo this matrix multiplication, and write it the way we like in terms of sums over the latent states, and what do we get? The above just equals:
\( b^{\top}_{\infty} B_x b_1 = \sum_{h_1,h_2} T_{h_2,h_1} O_{x,h_1} \pi_{h_1} = \sum_{h_1,h_2} p(H_1 = h_1)\, p(X_1 = x \mid H_1 = h_1)\, p(H_2 = h_2 \mid H_1 = h_1) = p(X_1 = x) \).
So, this triplet-product gave us back the distribution over the first observation. That’s not very interesting, we could have done it just by using \( b_1 \) directly. But… let’s go on and compute:
\( b^{\top}_{\infty} B_{x_2} B_{x_1} b_1. \)
This can be easily verified to equal \( p(X_1 = x_1, X_2 = x_2) \).
The interesting part is that in the general case,
\( b^{\top}_{\infty} B_{x_n} B_{x_{n-1}} \cdots B_{x_1} b_1 = p(X_1 = x_1, \ldots, X_n = x_n) \) –
we can now calculate the probability of any observation sequence in the HMM only by knowing the distribution over the first three observations! (To convince yourself about the general case above, just look at Lemma 1 in the Hsu et al. paper.)
In order to turn this into an estimation algorithm, we just need to estimate from data \( P_{2,1} \) and \( P_{3,x,1} \) for each observation symbol (all observed, just “count and normalize”), and voila, you can estimate the probability of any sequence of observations (one of the basic problems with HMMs according to this old classic paper, for example).
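Putting all the pieces together, here is a sketch of the whole scheme under the \( m = n \) assumption (made-up random parameters; for simplicity the moments \( P_1 \), \( P_{2,1} \), \( P_{3,x,1} \) are computed exactly from the true parameters rather than estimated by counting). It checks that the observable-operator product matches the standard forward algorithm:

```python
import numpy as np

# Made-up random HMM parameters (m = n = 3), column-stochastic as in the text.
rng = np.random.default_rng(1)
m = 3
T = rng.random((m, m)); T /= T.sum(axis=0, keepdims=True)   # T[h2,h1] = p(h2|h1)
O = rng.random((m, m)); O /= O.sum(axis=0, keepdims=True)   # O[x,h]  = p(x|h)
pi = rng.random(m); pi /= pi.sum()

# "Observable" moments -- here computed exactly from the true parameters;
# in practice you would estimate them from data by counting and normalizing.
P1 = O @ pi
P21 = O @ T @ np.diag(pi) @ O.T
P3x1 = [O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T for x in range(m)]

# Observable operators
b1 = P1
binf = np.linalg.solve(P21.T, P1)              # b_inf = (P21^T)^{-1} P1
B = [P3x1[x] @ np.linalg.inv(P21) for x in range(m)]

def spectral_prob(seq):
    """p(x_1, ..., x_n) via b_inf^T B_{x_n} ... B_{x_1} b_1."""
    v = b1
    for x in seq:
        v = B[x] @ v
    return binf @ v

def forward_prob(seq):
    """Same probability via the standard forward algorithm."""
    alpha = pi * O[seq[0]]
    for x in seq[1:]:
        alpha = O[x] * (T @ alpha)
    return alpha.sum()

seq = [0, 2, 1, 1]
assert np.isclose(spectral_prob(seq), forward_prob(seq))
```

Note that the HMM parameters themselves are never recovered — only products of the observable operators, which is exactly the point of the method.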
But… We made a heavy assumption. We assumed that \( n = m \) — we have as many observation symbols as latent states. What do we do if that’s not true? (i.e. if \( m < n \))? That’s where the “spectral” part kicks in. Basically, what we need to do is to reduce our \( O \) matrix into an \( m \times m \) matrix using some \( U \) matrix, while ensuring that \( U^{\top}O \) is invertible (just like we assumed \( O \) was invertible before). Note that \( U \) needs to be \( n \times m \).
It turns out that a \( U \) that will be optimal in some sense, and will also make all of the above algebraic tricks work is the left singular value matrix of \( P_{2,1} \). Understanding why this is the case requires some basic knowledge of linear algebra — read the paper to understand this! |
Fractions and Decimals
Category : 6th Class
FRACTIONS AND DECIMALS
FUNDAMENTALS
Natural numbers: All counting numbers are called natural numbers.
Whole number: Natural numbers together with zero are called whole numbers.
Fraction: A number written in the form \[\frac{x}{y},\] where \[x\] and \[y\] are whole numbers and \[y\ne 0,\] is called a fraction.
Types of Fraction
Decimal fractions (denominator 10, 100, 1000, ...). Example: \[\frac{1}{10},\frac{2}{100},\frac{5}{1000}\]etc...
Proper fractions (numerator smaller than denominator). Example: \[\frac{1}{2},\frac{3}{4},\frac{7}{9}\]etc...
Improper fractions (numerator greater than or equal to denominator). Example: \[\frac{7}{2},\frac{9}{4},\frac{11}{10}\] etc...
Mixed fractions (a whole number together with a proper fraction). Example: \[1\frac{1}{2},7\frac{3}{4},2\frac{1}{2}\]etc...
Equivalent fractions (represent the same value). Example: \[\frac{1}{2}\] and \[\frac{2}{4},\,\,\frac{1}{4}\,\,\text{and}\]\[\frac{4}{16},\,\,\frac{10}{12}\,\,\text{and}\]\[\frac{5}{6}\]
Like fractions (same denominator). Example: \[\frac{4}{5}\]and \[\frac{2}{5}\]; unlike fractions (different denominators). Example: \[\frac{2}{7},\,\,\frac{3}{5},\,\,\frac{5}{7}\]etc...
Comparison of fractions: To compare two fractions \[\frac{x}{y}\] and \[\frac{z}{w}\], cross-multiply and compare \[x\times w\] with \[y\times z\]:
(i) If\[x\times w>y\times z\], then \[\frac{x}{y}>\frac{z}{w}\]
(ii) If\[x\times w<y\times z\], then \[\frac{x}{y}<\frac{z}{w}\]
(iii) If\[x\times w=y\times z\], then\[\frac{x}{y}=\frac{z}{w}\]
Example: Compare the \[\frac{2}{3}\] and \[\frac{5}{6}\]
Solution: On cross multiplication we get, \[2\times 6=12\]and \[3\times 5=15\]
\[12<15\]
\[\therefore \] \[\frac{2}{3}<\frac{5}{6}\]
Example: Arrange \[\frac{2}{5},\,\,\frac{1}{4},\,\,\frac{3}{2},\,\,\frac{9}{10}\]in ascending order.
Solution: The LCM of \[5,\,\,4,\,\,2,\,\,10=20\]
\[\frac{2}{5}=\frac{2\times 4}{5\times 4}=\frac{8}{20},\frac{1}{4}=\frac{1\times 5}{4\times 5}=\frac{5}{20},\frac{3}{2}=\frac{3\times 10}{2\times 10}=\frac{30}{20},\]
\[\frac{9}{10}=\frac{9\times 2}{10\times 2}=\frac{18}{20}\]
Now, compare the numerators of like fractions \[\frac{8}{20},\frac{5}{20},\frac{30}{20},\frac{18}{20}\]
Arrange them in ascending order, we get \[\frac{5}{20}<\frac{8}{20}<\frac{18}{20}<\frac{30}{20}\]
Hence, \[\frac{1}{4}<\frac{2}{5}<\frac{9}{10}<\frac{3}{2}\]
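The cross-multiplication rule and the ordering above can be checked with Python's fractions module (the compare helper below is just for illustration):

```python
from fractions import Fraction

def compare(x, y, z, w):
    """Compare x/y with z/w by cross-multiplication."""
    if x * w > y * z:
        return ">"
    if x * w < y * z:
        return "<"
    return "="

assert compare(2, 3, 5, 6) == "<"   # 2/3 < 5/6, since 12 < 15

# Sorting with exact fractions agrees with the common-denominator method
fracs = [Fraction(2, 5), Fraction(1, 4), Fraction(3, 2), Fraction(9, 10)]
assert sorted(fracs) == [Fraction(1, 4), Fraction(2, 5), Fraction(9, 10), Fraction(3, 2)]
```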
Finding fraction between two given fraction
Example: Find a fraction lying between \[\frac{2}{3}\] and\[\frac{5}{7}\]
Solution: We have \[\frac{2}{3}\]and \[\frac{5}{7}\]
Fraction lying between \[\frac{2}{3}\] and \[\frac{5}{7}\] is \[\frac{2+5}{3+7}=\frac{7}{10}\]
So, we have \[\frac{2}{3},\frac{7}{10},\frac{5}{7}.\]
Fundamental operations on fraction
Addition of like fractions: \[\frac{x}{y}+\frac{z}{y}=\frac{x+z}{y}\] Example: \[\frac{3}{7}+\frac{5}{7}=\frac{3+5}{7}=\frac{8}{7}\]
Note: While adding unlike terms, first convert them into like fractions and then add as like fractions.
Properties of Addition of fraction
(i) Closure property: If \[\frac{x}{y}\] and \[\frac{z}{w}\] are two fractions, then \[\frac{xw+zy}{yw}\] is also a fraction
(ii) Commutative property: If \[\frac{x}{y}\] and \[\frac{z}{w}\] are two fractions, then\[\frac{x}{y}+\frac{z}{w}=\frac{z}{w}+\frac{x}{y}\]
(iii) Associative property: If \[\frac{r}{s},\]\[\frac{t}{u}\] and \[\frac{v}{w}\] are three fractions, then \[\frac{r}{s}+\left( \frac{t}{u}+\frac{v}{w} \right)=\left( \frac{r}{s}+\frac{t}{u} \right)+\frac{v}{w}\]
Subtraction of like fractions: \[\frac{x}{y}-\frac{z}{y}=\frac{x-z}{y}\]
Example: \[\frac{4}{5}-\frac{3}{5}=\frac{4-3}{5}=\frac{1}{5}\]
Note: While subtracting unlike fraction, first convert them into like fractions and find difference as in like fractions.
Multiplication
Example: Multiply \[\frac{3}{7}\] and \[\frac{2}{5}\]
Solution: \[\frac{3}{7}\times \frac{2}{5}=\frac{3\times 2}{7\times 5}=\frac{6}{35}\]
Properties of multiplication of fractions
Associative property: \[\frac{r}{s}\times \left( \frac{t}{u}\times \frac{v}{w} \right)=\left( \frac{r}{s}\times \frac{t}{u} \right)\times \frac{v}{w}\]
Example: Find the reciprocal of\[\frac{5}{7}\].
Solution: The reciprocal of \[\frac{5}{7}\] is\[\frac{7}{5}\].
Example: Divide\[\frac{5}{9}\div \frac{25}{3}\]
Solution: \[\frac{5}{9}\div \frac{25}{3}=\frac{5}{9}\times \frac{3}{25}=\frac{1}{15}\]
Decimals are numbers written with a decimal point. Example: \[0.6,\,\,1.76,\,\,5.046\]
Look at this Example
Example: Study the expanded form of the given decimal numbers:
\[35.6\] = 3 tens + 5 units + 6 tenths
\[25.58\] = 2 tens + 5 units + 5 tenths + 8 hundredths
\[17.415\] = 1 ten + 7 units + 4 tenths + 1 hundredth + 5 thousandths
\[78.004\] = 7 tens + 8 units + 0 tenths + 0 hundredths + 4 thousandths
\[06.07\] = 0 tens + 6 units + 0 tenths + 7 hundredths
Types of Decimals
Like decimals (same number of decimal places). Example: (i) \[0.7,\,\,1.1,\,\,25.6,\,\,238.4~\] (ii) \[0.21,\,\,666.26,\,\,6.57\]
Unlike decimals (different numbers of decimal places). Example: (i) \[0.7,\,\,0.21,\,\,6.323\] (ii) \[5.17,\,\,9.2,\,\,16.276\]
Converting unlike decimals into like decimals
Example: To convert \[2.5,\,\,8.03\]and \[7.352\] into like decimals.
Solution: We have to convert \[2.5\] and \[8.03\] into equivalent decimals with three decimal places, i.e., \[2.5=2.500,\,\,8.03=8.030.\]
Now \[2.500,\,\,8.030,\,\,7.352\] are like decimals.
Operations on Decimals
1. Addition of Decimals:
Example: (i) Add \[6.3\] and \[5.75\]
Convert to like decimals: \[6.30+5.75=12.05\]
(ii) Add \[6.5,\,\,7.05\] and \[5.325\]
Convert to like decimals: \[6.500+7.050+5.325=18.875\]
2. Subtraction of Decimals:
Example: Subtract \[56.128\] from \[68.75\]
Convert to like decimals: \[68.750-56.128=12.622\]
3. Multiplication of decimals:
In multiplying the decimals,
(a) Multiply as with whole numbers ignoring the decimals.
(b) Count the number of decimal places in factors.
(c) Show the number of decimal places in the product as many as there are in the factors.
(d) While counting the digits in the product to place the decimal point, start from the right.
Example:
Step-1: Multiply as with whole numbers, ignoring the decimal points.
Step-2: The total number of decimal places in the factors is 2, so the decimal point goes two digits from the right in the product.
Case - 1: Sometimes you need to write extra zeros in the product on the left side to be able to show the correct number of decimal places.
Example: Multiply \[0.2\] by \[0.2\]
\[0.2\times 0.2=0.04\]
Here, the number of decimal places is 2; an extra zero is required to show two decimal places.
Case - 2: Multiplication by 10,100 and 1000.
Example: \[10\times 0.389=03.89\]
Example:\[100\times 0.785=78.5\]
Example: \[1000\times 1.395=1395\]
DIVISION OF DECIMAL
Case - 1: Dividing by whole number
Example: \[4.35\div 3=1.45\]
Case - 2: Division by 10, 100, 1000
1. On division by 10 the decimal point moves one decimal place to the left.
Example: (i) \[4\div 10=0.4\]
(ii) \[321.4\div 10=32.14\]
2. On division by 100 the decimal point moves two decimal places to the left.
Example: (i) \[321.5\div 100=3.215\]
(ii) \[244\div 100=2.44\]
3. On division by 1000 the decimal point moves three places to the left.
Example: (i) \[321.6\div 1000=0.3216\]
(ii) \[3.25\div 1000=0.00325\]
Note: You need to place extra zeros wherever necessary.
Case - 3: Division by a decimal fraction
When we divide a decimal fraction by another decimal fraction we have to first change the divisor into a whole number.\[0.76\div 0.4=?\]
Step - 1: Make the decimal places of the dividend and divisor equal that is \[\frac{0.76}{0.4}=\frac{0.76}{0.40}\]
Step - 2: Now remove decimal point as the decimal places are equal. \[\frac{0.76}{0.4}=\frac{0.76}{0.40}=\frac{76}{40}\]
Step - 3: Now divide 76 by 40 as usual and write the answer.
\[\therefore \]\[0.76\div 0.4=1.9\]
Note: To convert the decimal divisor into a whole number multiply it with either 10 or 100 or 1000 and so on according to the decimal places.
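The scaling trick in Steps 1-3 can be sketched with Python's decimal module (divide_decimals is an illustrative helper, not a standard function):

```python
from decimal import Decimal

def divide_decimals(a: str, b: str) -> Decimal:
    """Divide decimal fraction a by b after scaling both by 10, 100, 1000, ...
    so that the divisor becomes a whole number."""
    a, b = Decimal(a), Decimal(b)
    shift = 10 ** max(0, -b.as_tuple().exponent)  # 10^(decimal places of b)
    return (a * shift) / (b * shift)

assert divide_decimals("0.76", "0.4") == Decimal("1.9")
assert divide_decimals("4.35", "3") == Decimal("1.45")
```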
Given real numbers $x_1<x_2<\cdots<x_n$, define the Vandermonde matrix by $V=(V_{ij}) = (x^{j-1}_i)$. That is, $$V = \left(\begin{array}{ccccc} 1 & x_1 & x^2_1 & \cdots & x^{n-1}_1 \\ 1 & x_2 & x^2_2 & \cdots & x^{n-1}_2 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_{n-1}& x^2_{n-1} & \cdots & x^{n-1}_{n-1} \\ 1 & x_n & x^2_n & \cdots & x^{n-1}_n \end{array}\right).$$ Prove that $\det(V) = \prod_{1\le i<j\le n} (x_j-x_i)$ by the following inductive steps. Recall first that adding a multiple of one row to another will not change the determinant of a matrix. This is also true if you add a multiple of one column to another. Finally, remember that the determinant is linear in each row when you leave the other ones fixed.
a. Subtract $x_1$ times each column from the column to its right, starting with the last column. That is, subtract $x_1$ times the $(n-1)$-st column from the $n$-th column, $x_1$ times the $(n-2)$-nd column from the $(n-1)$-st column, etc.
b. Then subtract the first row from all of the other rows.
c. Finally observe that each row has a common factor that can be pulled out of the determinant.
d. After these three steps are done, expand the resulting determinant in cofactors across the first row.
e. You should see at this point how to apply the induction step.
What I have so far: I have been told that for the induction step I have to first show that the claim holds for an $(n-1)\times(n-1)$ matrix and then show that it holds for an $n\times n$ matrix (I believe?). I kind of think that my resulting matrix after the steps is incorrect...
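Before attempting the induction, it can help to check the claimed formula numerically for one made-up set of points (this is only a sanity check, not a proof):

```python
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 4.0, 7.0])       # any x1 < x2 < ... < xn
n = len(x)
V = np.vander(x, increasing=True)        # columns 1, x, x^2, ..., x^{n-1}

det_V = np.linalg.det(V)
prod = np.prod([x[j] - x[i] for i, j in combinations(range(n), 2)])
assert np.isclose(det_V, prod)           # here both equal 540
```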
Higgs boson pair production at colliders: status and perspectives / Di Micco, Biagio (Universita e INFN Roma Tre (IT)) ; Gouzevitch, Maxime (Centre National de la Recherche Scientifique (FR)) ; Mazzitelli, Javier (University of Zurich) ; Vernieri, Caterina (SLAC National Accelerator Laboratory (US)) ; Alison, John (Carnegie-Mellon University (US)) ; Androsov, Konstantin (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, Pisa (IT)) ; Baglio, Julien Lorenzo (CERN) ; Bagnaschi, Emanuele Angelo (Paul Scherrer Institut (CH)) ; Banerjee, Shankha (University of Durham (GB)) ; Basler, P (Karlsruhe Institute of Technology) et al. This document summarises the current theoretical and experimental status of the di-Higgs boson production searches, and of the direct and indirect constraints on the Higgs boson self-coupling, with the wish to serve as a useful guide for the next years. The document discusses the theoretical status, including state-of-the-art predictions for di-Higgs cross sections, developments on the effective field theory approach, and studies on specific new physics scenarios that can show up in the di-Higgs final state. [...] LHCHXSWG-2019-005.- Geneva : CERN, 2019 - 274.
Simplified Template Cross Sections – Stage 1.1 / Delmastro, Marco (Centre National de la Recherche Scientifique (FR)) ; Berger, Nicolas (Centre National de la Recherche Scientifique (FR)) ; Bertella, Claudia (Chinese Academy of Sciences (CN)) ; Duehrssen-Debling, Michael (CERN) ; Kivernyk, Oleh (Centre National de la Recherche Scientifique (FR)) ; Langford, Jonathon Mark (Imperial College (GB)) ; Milenovic, Predrag (University of Belgrade (RS)) ; Pandini, Carlo Enrico (CERN) ; Tackmann, Frank (Deutsches Elektronen-Synchrotron (DE)) ; Tackmann, Kerstin (Deutsches Elektronen-Synchrotron (DE)) et al. Simplified Template Cross Sections (STXS) have been adopted by the LHC experiments as a common framework for Higgs measurements. Their purpose is to reduce the theoretical uncertainties that are directly folded into the measurements as much as possible, while at the same time allowing for the combination of the measurements between different decay channels as well as between experiments. [...] arXiv:1906.02754; LHCHXSWG-2019-003; DESY-19-070.- Geneva : CERN, 2019 - 14 p. Fulltext: LHCHXSWG-2019-003 - PDF; 1906.02754 - PDF;
Recommended predictions for the boosted-Higgs cross section / Becker, Kathrin (Albert Ludwigs Universitaet Freiburg (DE)) ; Caola, Fabrizio (University of Durham (GB)) ; Massironi, Andrea (CERN) ; Mistlberger, Bernhard (Massachusetts Inst. of Technology (US)) ; Monni, Pier (CERN) ; Chen, Xuan (Zurich U.) ; Frixione, Stefano (INFN e Universita Genova (IT)) ; Gehrmann, Thomas Kurt (Universitaet Zuerich (CH)) ; Glover, Nigel (IPPP Durham) ; Hamilton, Keith Murray (University of London (GB)) et al. In this note we study the inclusive production of a Higgs boson with large transverse momentum. We provide a recommendation for the inclusive cross section based on a combination of state of the art QCD predictions for the gluon-fusion and vector-boson-fusion channels. [...] LHCHXSWG-2019-002.- Geneva : CERN, 2019 - 14. Fulltext: PDF;
Higgs boson cross sections for the high-energy and high-luminosity LHC: cross-section predictions and theoretical uncertainty projections / Calderon Tazon, Alicia (Universidad de Cantabria and CSIC (ES)) ; Caola, Fabrizio (University of Durham (GB)) ; Campbell, John (Fermilab (US)) ; Francavilla, Paolo (Universita & INFN Pisa (IT)) ; Marchiori, Giovanni (Centre National de la Recherche Scientifique (FR)) ; Becker, Kathrin (Albert Ludwigs Universitaet Freiburg (DE)) ; Bertella, Claudia (Chinese Academy of Sciences (CN)) ; Bonvini, Marco (Sapienza Universita e INFN, Roma I (IT)) ; Chen, Xuan (Zuerich University (CH)) ; Frederix, Rikkert (Technische Universität Muenchen (DE)) et al. This note summarizes the state-of-the-art predictions for the cross sections expected for Higgs boson production in the 27 TeV proton-proton collisions of a high-energy LHC, including a full theoretical uncertainty analysis. It also provides projections for the progress that may be expected on the timescale of the high-luminosity LHC and an assessment of the main limiting factors to further reduction of the remaining theoretical uncertainties. LHCHXSWG-2019-001.- Geneva : CERN, 01 - 17. Fulltext: PDF;
Analytical parametrization and shape classification of anomalous HH production in EFT approach / Carvalho Antunes De Oliveira, Alexandra (Universita e INFN, Padova (IT)) ; Dall'Osso, Martino (Universita e INFN, Padova (IT)) ; De Castro Manzano, Pablo (Universita e INFN, Padova (IT)) ; Dorigo, Tommaso (Universita e INFN, Padova (IT)) ; Goertz, Florian (CERN) ; Gouzevitch, Maxime (Universite Claude Bernard-Lyon I (FR)) ; Tosi, Mia (CERN) In this document we study the effect of anomalous Higgs boson couplings on non-resonant pair production of Higgs bosons (HH) at the LHC. We explore the space of the five parameters $\kappa_\lambda$, $\kappa_t$, $c_2$, $c_{g}$, and $c_{2g}$ in terms of the corresponding kinematics of the final state, and describe a suggested partition of the space into a limited number of regions featuring similar phenomenology in the kinematics of HH final state, along with a corresponding set of representative benchmark points. [...] LHCHXSWG-2016-001.- Geneva : CERN, 2016 Fulltext: PDF;
Benchmark scenarios for low $\tan \beta$ in the MSSM / Bagnaschi, Emanuele (DESY) ; Frensch, Felix (Karlsruhe, Inst. Technol.) ; Heinemeyer, Sven (Cantabria Inst. of Phys.) ; Lee, Gabriel (Technion) ; Liebler, Stefan Rainer (DESY) ; Muhlleitner, Milada (Karlsruhe, Inst. Technol.) ; Mc Carn, Allison Renae (Michigan U.) ; Quevillon, Jeremie (King's Coll. London) ; Rompotis, Nikolaos (Seattle U.) ; Slavich, Pietro (Paris, LPTHE) et al. The run-1 data taken at the LHC in 2011 and 2012 have led to strong constraints on the allowed parameter space of the MSSM. These are imposed by the discovery of an approximately SM-like Higgs boson with a mass of $125.09\pm0.24$~GeV and by the non-observation of SUSY particles or of additional (neutral or charged) Higgs bosons. [...] LHCHXSWG-2015-002.- Geneva : CERN, 2015 - 24. Fulltext: PDF;
Recommendations for the interpretation of LHC searches for $H_5^0$, $H_5^{\pm}$, and $H_5^{\pm\pm}$ in vector boson fusion with decays to vector boson pairs / Zaro, Marco (Paris U., IV ; Paris, LPTHE) ; Logan, Heather (Ottawa Carleton Inst. Phys.) We provide theory input for the interpretation of the LHC searches for the production of Higgs bosons $H_5^0$, $H_5^{\pm}$, and $H_5^{\pm\pm}$ that transform as a fiveplet under the custodial symmetry. We choose as a benchmark the Georgi-Machacek model, in which isospin-triplet scalars are added to the Standard Model Higgs sector in such a way as to preserve custodial SU(2) symmetry. [...] LHCHXSWG-2015-001.- Geneva : CERN, 30 - 19p. Fulltext: PDF;
What is the magnetic field due to current carrying wire at a point on the wire itself?
Complementing jim's answer in order to explicitly address the questions that possibly motivated the original post:
does the field intensity diverge at the wire? where does it point to?
The answer is that in practice we have a current density (current per unit cross-sectional area); and, as long as this density is finite, considering a position $r\to 0$ necessarily leads to the magnetic field at this position being that generated by a vanishing amount of current and, thus, having vanishing intensity: $\mathbf{B}\to 0$.
That is, instead of diverging, the magnetic field is zero and, in particular, has no defined direction.
What about $B =\frac{\mu_0 I}{2\pi r}$, then? Well, if you try to apply it for $r\to 0$, you approach the limit of the wire being a mathematical line, with zero radius and area, and that implies, if the current is finite, an infinite current density (finite $I$ going through a vanishing cross-section area). So you're assuming a diverging physical situation, it's then not surprising you get other divergences.
For an infinitely long wire of infinitesimal thickness carrying a steady current $I$ the magnetic field at a distance $r$ from the wire is $$B =\frac{\mu_0 I}{2\pi r}.$$ This result can be derived from Ampere's Law $$\int {\bf B . d s} = \mu_0 \times \text{current enclosed by path},$$ where, for infinitesimal thickness, the current enclosed by the path is always the total current $I$ flowing through the wire. For a wire of finite radius $s$ you can still make use of Ampere's Law, though for a distance $r \lt s$ the enclosed current is now only a fraction of the total current flowing through the wire. For a uniform current density this fraction is $$I \frac{r^2}{s^2}.$$ You can then determine the magnetic field for the two cases (i) $r \le s$ (current = $I \frac{r^2}{s^2}$) and (ii) $r \ge s$ (current = $I$).
For distances inside the wire you only have a fraction of the current that contributes to the magnetic field and the magnetic field has a finite value at a point on the wire itself. |
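A small sketch of the two cases (assuming uniform current density; B_field is an illustrative helper): the field vanishes on the axis, grows linearly inside the wire, and the two expressions agree at the surface $r = s$:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def B_field(r, s, I):
    """Magnitude of B at distance r from the axis of a wire of radius s
    carrying total current I, assuming uniform current density."""
    if r <= s:
        return MU0 * I * r / (2 * math.pi * s**2)  # enclosed current: I r^2/s^2
    return MU0 * I / (2 * math.pi * r)

s, I = 1e-3, 2.0
assert B_field(0.0, s, I) == 0.0                 # zero on the axis
inside = B_field(s, s, I)                        # inner formula at r = s
outside = MU0 * I / (2 * math.pi * s)            # outer formula at r = s
assert math.isclose(inside, outside)             # field is continuous at r = s
```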
Problem with understanding the proof of Sauer Lemma
I will replicate the proof here which is from the book "Learning from Data"
Sauer Lemma:
$B(N,k) \leq \sum_{i=0}^{k-1}\binom{N}{i}$
Proof:
The statement is true whenever $k = 1$ or $N = 1$ by inspection. The proof is by induction on $N$. Assume the statement is true for all $N \leq N_0$ and for all $k$. We need to prove the statement for $N = N_0 + 1$ and for all $k$. Since the statement is already true when $k = 1$ (for all values of $N$) by the initial condition, we only need to worry about $k \geq 2$. By a bound proven earlier in the book, $B(N_0 + 1, k) \leq B(N_0, k) + B(N_0, k-1)$, and applying the induction hypothesis to each term on the RHS, we get the result.
**My Concern:** From what I see, this proof only shows that the statement for $B(N, k)$ implies the statement for $B(N+1, k)$. I can't see how it shows that $B(N, k)$ implies $B(N, k+1)$. This problem arises because the $k$ in $B(N_0 + 1, k)$ and $B(N_0, k)$ is the same, so I think I need to prove the other induction too. Why is the author able to prove it this way?
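For what it's worth, one can check numerically that the right-hand side of the bound itself satisfies the recursion used in the inductive step, with $k$ quantified over all values at each $N$ (an illustrative sketch, using Pascal's rule):

```python
from math import comb

def bound(N, k):
    """Right-hand side of the Sauer bound: sum_{i=0}^{k-1} C(N, i)."""
    return sum(comb(N, i) for i in range(k))

# The bound satisfies bound(N, k) = bound(N-1, k) + bound(N-1, k-1),
# matching B(N, k) <= B(N-1, k) + B(N-1, k-1) for every k at once.
for N in range(2, 12):
    for k in range(2, N + 1):
        assert bound(N, k) == bound(N - 1, k) + bound(N - 1, k - 1)
```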
Difference between revisions of "Literature on Carbon Nanotube Research"
</math>
where alpha is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant k=sqrt(dQ/mu)/3L is given by the fiber diameter d=1nm, the fiber migration length Q (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the friction coefficient of CNTs mu=0.13 (the friction coefficient is the ratio of the maximum along-fiber force divided by the lateral force pressing the fibers together), and the fiber length L. A critical review of this formula is given [[yarnstrength|here]].
In the paper interesting transmission electron microscope (TEM)
Revision as of 16:00, 20 March 2009
I have hijacked this page to write down my views on the literature on Carbon Nanotube (CNT) growths and processing, a procedure that should give us the cable/ribbon we desire for the space elevator. I will try to put as much information as possible here. If anyone has something to add, please do not hesitate!
Contents
1 Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
2 Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
3 Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
4 Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
5 In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
6 High-Performance Carbon Nanotube Fiber
Direct mechanical measurement of the tensile strength and elastic modulus of multiwalled carbon nanotubes
B. G. Demczyk et al., Materials Science and Engineering,
A 334, 173-178, 2002 The paper by Demczyk et al. (2002) is the basic reference for the experimental determination of the tensile strengths of individual multi-wall nanotube (MWNT) fibers. The experiments are performed with a microfabricated piezo-electric device. On this device CNTs in the length range of tens of microns are mounted. The tensile measurements are observed by transmission electron microscopy (TEM) and videotaped. Measurements of the tensile strength (tension vs. strain) were performed, as well as of the Young's modulus and bending stiffness. Breaking tension is reached for the MWNT at 150 GPa and between 3.5% and 5% strain. During the measurements 'telescoping' extension of the MWNTs is observed, indicating that single-wall nanotubes (SWNT) could be even stronger. However, 150 GPa remains the value for the tensile strength that was experimentally observed for carbon nanotubes.
Direct Spinning of Carbon Nanotube Fibers from Chemical Vapour Deposition Synthesis
Y.-L. Li, I. A. Kinloch, and A. H. Windle, Science,
304, 276-278, 2004 The work described in the paper by Y.-L. Li et al. is a follow-on of the famous paper by Zhu et al. (2002), which was cited extensively in Brad's book. This article goes a little more into the details of the process. If you feed a mixture of ethene (as the source of carbon), ferrocene, and thiophene (both as catalysts, I suppose) into a furnace (1050 to 1200 deg C) using hydrogen as the carrier gas, you apparently get an 'aerogel' or 'elastic smoke' forming in the furnace cavity, which comprises the CNTs. Here's an interesting excerpt: Under these synthesis conditions, the nanotubes in the hot zone formed an aerogel, which appeared rather like "elastic smoke," because there was sufficient association between the nanotubes to give some degree of mechanical integrity. The aerogel, viewed with a mirror placed at the bottom of the furnace, appeared very soon after the introduction of the precursors (Fig. 2). It was then stretched by the gas flow into the form of a sock, elongating downwards along the furnace axis. The sock did not attach to the furnace walls in the hot zone, which accordingly remained clean throughout the process.... The aerogel could be continuously drawn from the hot zone by winding it onto a rotating rod. In this way, the material was concentrated near the furnace axis and kept clear of the cooler furnace walls,...
The elasticity of the aerogel is interpreted to come from the forces between the individual CNTs. The authors describe the procedure to extract the aerogel and start spinning a yarn from it as it is continuously drawn out of the furnace. In terms of mechanical properties of the produced yarns, the authors found a wide range from 0.05 to 0.5 GPa/g/ccm. That's still not enough for the SE, but the process appears to be interesting as it allows drawing the yarn directly from the reaction chamber without mechanical contact and secondary processing, which could affect purity and alignment. Also, a discussion of the roles of the catalysts as well as hydrogen and oxygen is given, which can be compared to the discussion in G. Zhang et al. (2005, see below).
Multifunctional Carbon Nanotube Yarns by Downsizing an Ancient Technology
M. Zhang, K. R. Atkinson, and R. H. Baughman, Science,
306, 1358-1361, 2004 In the research article by M. Zhang et al. (2004) the procedure of spinning long yarns from forests of MWNTs is described in detail. The maximum breaking strength achieved is only 0.46 GPa, based on the 30-micron-long CNTs. The initial CNT forest is grown by chemical vapour deposition (CVD) on a catalytic substrate, as usual. A very interesting formula for the tensile strength of a yarn relative to the tensile strength of the fibers (in our case the MWNTs) is given:
<math> \frac{\sigma_{\rm yarn}}{\sigma_{\rm fiber}} = \cos^2 \alpha \left(1 - \frac{k}{\sin \alpha} \right) </math>
where <math>\alpha</math> is the helix angle of the spun yarn, i.e. the fiber direction relative to the yarn axis. The constant <math>k=\sqrt{dQ/\mu}/3L</math> is given by the fiber diameter d=1nm, the fiber migration length Q (the distance along the yarn over which a fiber shifts from the yarn surface to the deep interior and back again), the friction coefficient of CNTs <math>\mu=0.13</math> (the friction coefficient is the ratio of the maximum along-fiber force divided by the lateral force pressing the fibers together), and the fiber length <math>L=30{\rm \mu m}</math>. A critical review of this formula is given here.
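As a rough numerical illustration of this formula (a Python sketch; the migration length Q is not given in the excerpt above, so the value used here is a made-up placeholder):

```python
import math

def yarn_strength_ratio(alpha_deg, d, Q, mu, L):
    """Zhang et al. (2004) yarn-to-fiber tensile strength ratio:
    sigma_yarn / sigma_fiber = cos(alpha)^2 * (1 - k / sin(alpha)),
    where k = sqrt(d * Q / mu) / (3 * L)."""
    alpha = math.radians(alpha_deg)
    k = math.sqrt(d * Q / mu) / (3.0 * L)
    return math.cos(alpha) ** 2 * (1.0 - k / math.sin(alpha))

# Parameters from the paper: d = 1 nm, mu = 0.13, L = 30 micron.
# Q is a hypothetical placeholder (1 micron) chosen only for illustration.
d, Q, mu, L = 1e-9, 1e-6, 0.13, 30e-6
for alpha in (10, 20, 30, 45):
    print(alpha, yarn_strength_ratio(alpha, d, Q, mu, L))
```

With these numbers k is tiny, so the ratio is dominated by the cos²α obliquity factor and falls from roughly 0.97 at 10° to about 0.5 at 45°.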
In the paper interesting transmission electron microscope (TEM) pictures are shown, which give insight into how the yarn is assembled from the CNT forest. The authors describe other characteristics of the yarn, like how knots can be introduced and how the yarn performs when knitted, apparently in preparation for application in the textile industry.
Ultra-high-yield growth of vertical single-walled carbon nanotubes: Hidden roles of hydrogen and oxygen
Important aspects of the production of CNTs that are suitable for the SE are the efficiency of the growth and the purity (i.e. lack of embedded amorphous carbon and imperfections in the carbon bonds in the CNT walls). In their article, G. Zhang et al. go into detail about the roles of oxygen and hydrogen during the chemical vapour deposition (CVD) growth of CNT forests from hydrocarbon sources on catalytic substrates. In earlier publications the role of oxygen was believed to be to remove amorphous carbon by oxidation into CO. The authors show, however, that, at least for this CNT growth technique, oxygen is important because it removes hydrogen from the reaction. Hydrogen apparently has a very detrimental effect on the growth of CNTs; it even destroys existing CNTs, as shown in the paper. Since hydrogen radicals are released during the dissociation of the hydrocarbon source compound, it is important to have a removal mechanism. Oxygen provides this mechanism, because its chemical affinity towards hydrogen is greater than towards carbon.
In summary, if you want to efficiently grow pure CNT forests on a catalyst substrate from a hydrocarbon CVD reaction, you need a few percent oxygen in the source gas mixture. An additional interesting piece of information in the paper is that you can design the places on the substrate on which CNTs grow by placing the catalyst only in certain areas of the substrate using lithography. In this way you can grow grids and ribbons. Figures are shown in the paper.
In the paper no information is given on the reason why the CNT growth stops at some point. The growth rate is given as 1 micron per minute. Of course, for us it would be interesting to eliminate the mechanism that stops the growth so we could grow infinitely long CNTs.
This article can be found in our archive.
In situ Observations of Catalyst Dynamics during Surface-Bound Carbon Nanotube Nucleation
The paper by S. Hofmann et al. (2007) is a key publication for understanding the microscopic processes of growing CNTs. The authors describe an experiment in which they observe in situ the growth of CNTs from chemical vapour deposition (CVD) onto metallic catalyst particles. The observations are made with time-lapse transmission electron microscopy (TEM) and x-ray photo-electron spectroscopy. Since I am not an expert on spectroscopy, I stick to the images and movies produced by the time-lapse TEM. In the observations it can be seen that the catalysts are covered by a graphite sheet, which forms the initial cap of the CNT. The formation of that cap apparently deforms the catalyst particle due to its inherent shape as it tries to form a minimum-energy configuration. Since the graphite sheet does not extend under the catalyst particle (this is prevented by the catalyst sitting on the silicon substrate), the graphite sheet cannot close itself. The deformation of the catalyst due to the cap forming leads to a restoring force exerted by the crystalline structure of the catalyst particle. As a consequence the carbon cap lifts off the catalyst particle. At the base of the catalyst particle more carbon atoms attach to the initial cap, starting the formation of the tube. The process continues to grow a CNT as long as there is enough carbon supply to the base of the catalyst particle and as long as the particle cannot be enclosed by the carbon compounds. During the growth of the CNT the catalyst particle 'breathes' and so drives the growth process mechanically.
Of course, for us in the SE community the most interesting part of this paper is the question: can we grow CNTs that are long enough so we can spin them into a yarn that would hold the 100GPa/g/ccm? In this regard the question is about the termination mechanism of the growth. The authors point to a very important player in CNT growth: the catalyst. If we can make a catalyst that does not break off from its substrate and does not wear out, the growth could be sustained as long as the catalyst/substrate interface is accessible to enough carbon from the feedstock.
If you are interested, get the paper from our archive, including the supporting material, in which you'll find the movies of the CNTs growing.
High-Performance Carbon Nanotube Fiber
K. Koziol et al., Science,
318, 1892, 2007. The paper "High-Performance Carbon Nanotube Fiber" by K. Koziol et al. is a research paper on the production of macroscopic fibers out of an aerogel (a low-density, porous, solid material) of SWNTs and MWNTs that has been formed by chemical vapour deposition. They present an analysis of the mechanical performance figures (tensile strength and stiffness) of their samples. The samples are fibers of 1, 2, and 20mm length and have been extracted from the aerogel at high winding rates (20 metres per minute). Indeed, higher winding rates appear to be desirable, but the authors have not been able to achieve higher values, as the limit of extraction speed from the aerogel was reached and higher speeds led to breakage of the aerogel.
They show in their results plot (Figure 3A) that typically the fibers split into two performance classes: low-performance fibers with a few GPa and high-performance fibers with around 6.5 GPa. It should be noted that all tensile strengths are given in the paper as GPa/SG, where SG is the specific gravity, which is the density of the material divided by the density of water. Normally SG was around 1 for most samples discussed in the paper. The two performance classes have been interpreted by the authors as the typical result of the process of producing high-strength fibers: since fibers break at the weakest point, you will find some fibers in the sample which have no weak point, and some which have one or more, provided the length of the fibers is comparable to the typical spacing between weak points. This can be seen from the fact that for the 20mm fibers there are no high-performance fibers left, as the likelihood of encountering a weak point on a 20mm-long fiber is 20 times higher than encountering one on a 1mm-long fiber.
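The weakest-link interpretation can be sketched quantitatively (a Python illustration with a made-up flaw density; not from the paper): if weak points occur randomly along a fiber with an average density λ per mm, the probability that a fiber of length L contains no weak point is exp(-λL), which drops rapidly with length.

```python
import math

def flawless_fraction(flaws_per_mm, length_mm):
    # Poisson flaw model: P(no weak point over length L) = exp(-lambda * L)
    return math.exp(-flaws_per_mm * length_mm)

# Hypothetical flaw density: on average one weak point every 10 mm
lam = 0.1
for length in (1, 2, 20):
    print(length, flawless_fraction(lam, length))
```

Under this toy model about 90% of the 1mm fibers are flaw-free but only about 14% of the 20mm fibers, qualitatively matching the disappearance of the high-performance class at 20mm.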
As a conclusion the paper is bad news for the SE, since the difficulty of producing a flawless composite with a length of 100,000km and a tensile strength better than 3GPa using the proposed method is enormous. This comes back to the ribbon design proposed on the Wiki: using just cm-long fibers and interconnecting them with load-bearing structures (perhaps also CNT threads). Now we have shifted the problem from finding a strong enough material to finding a process that produces the required interwoven ribbon. In my opinion the race to come up with a fiber better than Kevlar is still open.
Real Analysis Exchange, Volume 29, Number 1 (2003), 265-273. Algebras with inner MB-representation. Abstract
We investigate algebras of sets, and pairs $(\mathcal{A , I})$ consisting of an algebra $\mathcal{A}$ and an ideal $\mathcal{I} \subset \mathcal{A}$, that possess an inner MB-representation. We compare inner MB-representability of $(\mathcal{A , I})$ with several properties of $(\mathcal{A , I})$ considered by Baldwin. We show that $\mathcal{A}$ is inner MB-representable if and only if $\mathcal{A} =S(\mathcal{A} \setminus\mathcal{H}(\mathcal{A}))$, where $S(\cdot)$ is a Marczewski operation defined below and $\mathcal H$ consists of sets that are hereditarily in $\mathcal{A}$. We study the question of uniqueness of the ideal in that representation.
Article information Source Real Anal. Exchange, Volume 29, Number 1 (2003), 265-273. Dates First available in Project Euclid: 9 June 2006 Permanent link to this document https://projecteuclid.org/euclid.rae/1149860191 Mathematical Reviews number (MathSciNet) MR2061310 Zentralblatt MATH identifier 1065.03033 Subjects Primary: 06E25: Boolean algebras with additional operations (diagonalizable algebras, etc.) [See also 03G25, 03F45] Secondary: 28A05: Classes of sets (Borel fields, $\sigma$-rings, etc.), measurable sets, Suslin sets, analytic sets [See also 03E15, 26A21, 54H05] 54E52: Baire category, Baire spaces Citation
Balcerzak, Marek; Bartoszewicz, Artur; Ciesielski, Krzysztof. Algebras with inner MB-representation. Real Anal. Exchange 29 (2003), no. 1, 265--273. https://projecteuclid.org/euclid.rae/1149860191 |
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak1 min ago
BTW your program looks very interesting, in particular the way to enter mathematics.
One thing that seems to be missing is documentation (at least I did not find it).
This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for.
For example upon entering $\frac xy$ will it find also $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?
*******
Is it possible to save a link to a particular search query? For example in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports.
When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.
*******
If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. This means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:
I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:
One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som...
@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback; I really love it and will seriously look into those points and improve approach0. Give me just some minutes, I will answer/reply to your feedback in our chat. — Wei Zhong1 min ago
I still think that it would be useful if you added to your post where do you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, "
BTW those animations with examples of searching look really cool.
@MartinSleziak Thanks to your advice, I have appended more information on my posted answers. Will reply to you shortly in chat. — Wei Zhong29 secs ago
We are an open-source project hosted on GitHub: http://github.com/approach0. Welcome to send any feedback on our GitHub issue page!
@MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 looks into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical; however, you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it.
@MartinSleziak As for the query link, it needs more explanation. Technologically, the way you mentioned that Google is using is an HTTP GET method, but for mathematics a GET request may not be appropriate since a query has structure; usually a developer would instead use an HTTP POST request with a JSON-encoded body. This makes development much easier because JSON is richly structured and makes it easy to separate math keywords.
@MartinSleziak Right now there are two solutions for the "query link" problem you raised. The first is to use the browser back/forward buttons to navigate through the query history.
@MartinSleziak The second is to use the command line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve it later. (It just needs some extra effort though.)
@MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked top; different symbols such as "a", "b" are ranked after exact matches.
@MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them.
@MartinSleziak Yes, you can: Greek letters are tokenized to the same thing as normal alphabetic characters.
@MartinSleziak As for integral upper bounds, I think it is a problem in a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is to use the arrow keys to move the cursor to the rightmost position and hit a '^' so it goes to the upper-bound edit.
@MartinSleziak Yes, it has a threshold now, but this is easy to adjust from the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on math stackexchange. This is a very small number, but I will index more posts/pages when search engine efficiency and relevance are tuned.
@MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is to enlarge the index and publish.
@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published.
So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar2 hours ago
@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid1 hour ago
@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak57 mins ago
"What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc" while actual questions on these are still on-topic on main. Beyond that it is not clear why a question which software one uses should be a valid poll while the question which book one uses is not. — quid7 mins ago
@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question.
Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that".
Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly ok with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main (although there should not be). I guess some examples can be found here or here.
Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed.
Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.
I have seen such a poll for the first time on TeX.SE. The poll there was concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc.
Academia.SE has some questions which could be classified as "demographic" (including gender).
@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stand for Gašpar.
But that is only anecdotal.
And if I am to believe Slovak Wikipedia it should be Christus mansionem benedicat.
From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov."
My attempt to write English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let the Christ bless this house). A mistaken explanation is often given that it is G+M+B, following the names of three wise men.
As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from initial letters of the translation.
It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants and the initials also are believed to also stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House").
Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."
BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3]
A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar).
In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.
On Slovakia specifically it says there:
The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko. |
I am trying to create a sequence of functions and have it properly memoize the results. The recursive operation is simply convolution, so it is possible there is a better way to do this (obviously, if I ...
I wish to perform the following nested integral:\begin{align}I_n=\int_{-\infty}^\infty dx_n~f(x_n,x_{n+1})\int_{-\infty}^\infty dx_{n-1}~f(x_{n-1},x_n)...\int_{-\infty}^\infty dx_1~f(x_1,x_2)\int_{-\...
I was wondering how to approximate or tabulate values for this numeric approximation: It is the following: The confusing part is how to implement the subscripts in Mathematica. $y_{i+1} = (t_i - y_i)...
This is a rather specific question and I apologize for spamming you with some lengthy code. But it could be interesting for some reader and maybe you can help out, so please bear with me.I am using ... |
Eigenvalues and Eigenvectors
Recall from the Invariant Subspaces page that a subspace $U$ of $V$ is said to be invariant under the linear operator $T \in \mathcal L (V)$ if $u \in U$ implies that $T(u) \in U$. Now suppose that the vector space $V$ is the direct sum of the nontrivial subspaces $U_1$, $U_2$, …, $U_m$, that is $V = U_1 \oplus U_2 \oplus \cdots \oplus U_m$.
We can understand how a linear operator $T \in \mathcal L (V)$ behaves by looking at the linear operator $T$ over each subspace $U_j$ if these subspaces are invariant under $T$. The easiest of such subspaces to analyze are subspaces whose dimension is $1$. If $u \in V$ is a non-zero vector, then the $1$-dimensional subspace $U = \mathrm{span}(u)$ of $V$ is the set of all scalar multiples of $u$, that is $U = \{ \lambda u : \lambda \in \mathbb{F} \}$.
Suppose that the subspace $U$ defined above is invariant under $T$. Then we have that $u \in U$ implies that $T(u) \in U$, and so there must exist a scalar $\lambda \in \mathbb{F}$ such that $T(u) = \lambda u$.
These values $\lambda$ are important and we will define them as follows:
Definition: Let $T \in \mathcal L (V)$. Then $\lambda$ is called an Eigenvalue or Characteristic Value of $T$ if there exists a nonzero vector $u$ such that $T(u) = \lambda u$, and $u$ is called a corresponding Eigenvector.
We will now look at some important theorems on eigenvalues and eigenvectors.
Theorem 1: Let $V$ be a finite-dimensional vector space and let $T \in \mathcal L (V)$. The following statements are equivalent: a) $\lambda$ is an eigenvalue of $T$. b) The linear operator $(T - \lambda I)$ is not injective. c) The linear operator $(T - \lambda I)$ is not surjective. d) The linear operator $(T - \lambda I)$ is not invertible. Proof:$a \implies b$. Suppose that $\lambda$ is an eigenvalue of $T$. Then there exists a nonzero vector $v \in V$ such that $T(v) = \lambda v$, so $T(v) = (\lambda I)(v)$ and hence $(T - \lambda I)(v) = 0$. Since the nonzero vector $v$ lies in the null space of this operator, we have that $\mathrm{null} (T - \lambda I) \neq \{ 0 \}$ and so $(T - \lambda I)$ is not injective. $b \implies c$. Suppose that $(T - \lambda I)$ is not injective. From the Linear Operators page, we had a theorem (injectivity, surjectivity, and invertibility are equivalent for operators on a finite-dimensional space) which implies that then $(T - \lambda I)$ is not surjective. $c \implies d$. Suppose that $(T - \lambda I)$ is not surjective. From the same theorem mentioned above, we have that then $(T - \lambda I)$ is not invertible. $d \implies a$. Suppose that $(T - \lambda I)$ is not invertible. From the same theorem mentioned above, we have that then $(T - \lambda I)$ is not injective, and so there exists a nonzero vector $v \in V$ such that $(T - \lambda I)(v) = 0$, that is $T(v) = \lambda v$, so $\lambda$ is an eigenvalue of $T$. $\blacksquare$
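For a concrete finite-dimensional instance of the equivalence a)-d) (a numpy sketch, not part of the original page): representing $T$ by a matrix, the operator $(T - \lambda I)$ is singular exactly when $\lambda$ is an eigenvalue.

```python
import numpy as np

# T represented by a 2x2 matrix; upper-triangular, so eigenvalues are 2 and 3
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)

for lam in (2.0, 3.0):
    M = T - lam * I
    # (T - lambda I) is not invertible: zero determinant, reduced rank
    assert abs(np.linalg.det(M)) < 1e-12
    assert np.linalg.matrix_rank(M) < 2

# For a non-eigenvalue, (T - lambda I) stays invertible
assert abs(np.linalg.det(T - 1.0 * I)) > 0.5
```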
Theorem 2: Let $T \in \mathcal L (V)$. If $\lambda_1, \lambda_2, ..., \lambda_m$ are distinct eigenvalues of $T$ and $v_1, v_2, ..., v_m$ are the corresponding eigenvectors of $T$, then $\{ v_1, v_2, ..., v_m \}$ is a linearly independent set. Proof:Suppose instead that $\{ v_1, v_2, ..., v_m \}$ is actually a linearly dependent set of vectors. We will prove that this results in a contradiction. Since the set of vectors $\{ v_1, v_2, ..., v_m \}$ is linearly dependent, let $k$ be the smallest natural number for which $v_k \in \mathrm{span} (v_1, v_2, ..., v_{k-1})$; by this minimality $\{ v_1, v_2, ..., v_{k-1} \}$ is a linearly independent set, which is guaranteed by the Linear Dependence Lemma. Then there exist scalars $a_1, a_2, ..., a_{k-1}$ such that $v_k = a_1v_1 + a_2v_2 + ... + a_{k-1}v_{k-1}$. If we apply the linear operator $T$ to both sides of this equation, then we have that $\lambda_k v_k = T(v_k) = a_1 T(v_1) + ... + a_{k-1} T(v_{k-1}) = a_1 \lambda_1 v_1 + a_2 \lambda_2 v_2 + ... + a_{k-1} \lambda_{k-1} v_{k-1}$. Now since $v_k = a_1v_1 + a_2v_2 + ... + a_{k-1}v_{k-1}$, then $\lambda_k v_k = \lambda_k a_1v_1 + \lambda_k a_2v_2 + ... + \lambda_k a_{k-1}v_{k-1}$ as well. Subtracting the earlier equation for $\lambda_k v_k$, we have that $0 = a_1 (\lambda_k - \lambda_1) v_1 + a_2 (\lambda_k - \lambda_2) v_2 + ... + a_{k-1} (\lambda_k - \lambda_{k-1}) v_{k-1}$. Since $\{ v_1, v_2, ..., v_{k-1} \}$ is a linearly independent set and each of $\lambda_1, \lambda_2, ..., \lambda_k$ is distinct (so $\lambda_k - \lambda_j \neq 0$ for $j < k$), we must have that $a_1 = a_2 = ... = a_{k-1} = 0$. But then this implies that $v_k = 0$, which is a contradiction as $v_k$ is nonzero. Hence our assumption that $\{ v_1, v_2, ..., v_m \}$ was linearly dependent was false. $\blacksquare$
Corollary 1: If $T \in \mathcal L (V)$ then $T$ has at most $\mathrm{dim} (V)$ distinct eigenvalues. Proof:Suppose that $T \in \mathcal L (V)$ and that $\mathrm{dim} (V) = n$. Then $V$ has a basis of length $n$. However, no linearly independent list of vectors from $V$ has length greater than $n$. By Theorem 2, we have that if $\lambda_1, \lambda_2, ..., \lambda_m$ are distinct eigenvalues of $T$, then the set of corresponding eigenvectors $\{ v_1, v_2, ..., v_m \}$ is linearly independent. Therefore $m ≤ n = \mathrm{dim} (V)$. $\blacksquare$.
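Theorem 2 and Corollary 1 can be checked numerically on a small example (a numpy sketch, not part of the original page): a 3×3 matrix with three distinct eigenvalues has eigenvectors forming a full-rank, hence linearly independent, set.

```python
import numpy as np

# Upper-triangular, so its eigenvalues are the diagonal entries: 1, 2, 3
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Three distinct eigenvalues (at most dim V = 3, per Corollary 1) ...
assert len(set(np.round(eigenvalues, 9).tolist())) == 3
# ... and the corresponding eigenvectors (the columns) are linearly
# independent, i.e. the matrix of columns has full rank (Theorem 2)
assert np.linalg.matrix_rank(eigenvectors) == 3
```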
Let's now look at some examples of finding eigenvalues.
Example 1: The Eigenvalues and Eigenvectors of The Identity Operator
Let $V$ be a vector space and suppose that $I \in \mathcal L (V)$ is the identity operator on $V$, that is, for all $v \in V$ we have that $I(v) = v$. Suppose now that for some nonzero $v \in V$ and scalar $\lambda$ we have $I(v) = \lambda v$.
Since $I(v) = v$, we then have that $v = \lambda v$ for all $v \in V$ and so $\lambda = 1$ is an eigenvalue for the identity operator. Additionally, every nonzero vector $v \in V$ is an eigenvector of the identity operator.
More generally, if $aI \in \mathcal L (V)$ then $(aI)(v) = av$, and $(aI)(v) = \lambda v$ implies that $av = \lambda v$ and so $\lambda = a$ is an eigenvalue of $aI$, and every nonzero vector $v \in V$ is an eigenvector of $aI$.
Example 2 Suppose that $T \in \mathcal L (\mathbb{R}^2)$ is defined by $T(x, y) = (2x, y)$. Find all eigenvalues (if they exist) of $T$.
Let $(x, y) \in \mathbb{R}^2$. We want to find $\lambda$ such that $T(x, y) = \lambda (x, y)$, that is, $(2x, y) = (\lambda x, \lambda y)$.
Therefore $2x = \lambda x$ and $y = \lambda y$. If $x \neq 0$, the first equation forces $\lambda = 2$, and then the second equation forces $y = 0$. If $y \neq 0$, the second equation forces $\lambda = 1$, and then the first forces $x = 0$. Hence $T$ has exactly two eigenvalues: $\lambda = 2$, with eigenvectors $(x, 0)$ for $x \neq 0$, and $\lambda = 1$, with eigenvectors $(0, y)$ for $y \neq 0$. This should intuitively make sense. We note that $T$ takes each vector $(x, y)$ and maps it to $(2x, y)$, that is, $T$ is the linear transformation which stretches the $x$ coordinate by a factor of $2$ while the $y$ coordinate stays the same. Hence vectors along the $x$-axis are scaled by $2$, vectors along the $y$-axis are left fixed, and no other vector $(x, y) \in \mathbb{R}^2$ is mapped to a multiple $\lambda$ of itself.
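As a numerical cross-check (a numpy sketch, not part of the original page), $T(x, y) = (2x, y)$ can be written in the standard basis as the matrix $\mathrm{diag}(2, 1)$, whose eigenvalues and eigenvectors numpy computes directly:

```python
import numpy as np

# T(x, y) = (2x, y) in matrix form
T = np.array([[2.0, 0.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(T)
print(sorted(eigenvalues.tolist()))   # [1.0, 2.0]
```

numpy reports eigenvalues 2 and 1, with eigenvectors along the standard basis directions $(1, 0)$ and $(0, 1)$.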
(This post was also published here.)
[1] I am not sure whether this also applies if you use an inadmissible heuristic.
I’m working on a tutorial showing how Breadth First Search, Dijkstra’s Algorithm, Greedy Best-First Search, and A* are related. I’m focusing on keeping the code simple and easy to understand.
While tracking down a bug in my Dijkstra’s Algorithm code, I realized I had forgotten to implement the “reprioritize” case. That led me to wonder why my code seemed to work without handling that case. It turns out that case is never triggered by the graphs I’m searching over. Let me explore this …
Algorithm textbooks describe the algorithm as starting with \(\infty\) everywhere and then checking the new cost against the old cost. The old cost is always defined so it’s a single test:
if cost_so_far[current] + cost(current, next) < cost_so_far[next]:
    cost_so_far[next] = cost_so_far[current] + cost(current, next)
    came_from[next] = current
    frontier.reprioritize(next, cost_so_far[next])
In practice I don’t initialize the priority queue to have all vertices with distance set to \(\infty\). Instead, I start the priority queue with only the start element, and only insert elements when their distance is finite. Keeping the priority queue small makes it significantly faster, especially with early exit.
However, this complicates the code. I now have two cases instead of one:
1. The new cost is less than the old cost because the old cost was \(\infty\). The node wasn't already in the priority queue and has no old cost, so we need to insert it. This is a simpler operation, and it's more common, so it's worth separating out.
2. The new cost is less than the old, finite cost. The node is already in the priority queue, so we need to reprioritize it. This is rare and more complicated than an insert.
This logic can go either in the algorithm code or in the priority queue code. I had put it in the algorithm (but in the tutorial I am going to instead put it in the priority queue, because it keeps the algorithm easier to understand).
if next not in cost_so_far:
    cost_so_far[next] = cost_so_far[current] + cost(current, next)
    came_from[next] = current
    frontier.insert(next, cost_so_far[next])
elif cost_so_far[current] + cost(current, next) < cost_so_far[next]:
    cost_so_far[next] = cost_so_far[current] + cost(current, next)
    came_from[next] = current
    frontier.reprioritize(next, cost_so_far[next])
I wanted to better understand which situations lead to the reprioritize case. Let's say we're looking at edge \(b \rightarrow c\) and we previously found \(c\) via edge \(a \rightarrow c\). (Note: algorithm textbooks call this quantity \(g\), but I call it cost_so_far.)
To trigger the reprioritize case, I need the old cost to be worse than the new cost: \( g(a) + \mathit{cost}(a,c) > g(b) + \mathit{cost}(b,c) \). Dijkstra's Algorithm explores nodes in increasing order of \(g\). Since \(a\) was explored before \(b\), \( g(a) \leq g(b) \), or equivalently, \( g(b) - g(a) \geq 0 \)
Those two together can be rewritten as \( \mathit{cost}(a,c) - \mathit{cost}(b,c) > g(b) - g(a) \geq 0 \)
That means the reprioritize case can only be triggered when \(\mathit{cost}(a,c) - \mathit{cost}(b,c) > 0\), i.e. when stepping into \(c\) is cheaper from \(b\) than it was from \(a\).
In several of my game projects, the movement cost \(\mathit{cost}(x,y)\) depends only on the target node \(y\). That means \(\mathit{cost}(a,c) = \mathit{cost}(b,c)\), so the inequality can never hold, and we never need to reprioritize.
That means any bugs in my reprioritize code never mattered!
It also means the abstraction boundaries between the graph data structure, the search algorithm, and the priority queue data structure can make the code more complicated than necessary and can hurt optimization opportunities. I think this rule works, but I'm not sure:
if we have a monotonic ordering (always true with Dijkstra's Algorithm, and with A* using a consistent heuristic), and the movement cost only depends on the target node, then we never need to reprioritize. And that means we can use a simpler and faster regular priority queue instead of a reprioritizable priority queue.[1]

Testing
Let’s define a graph where we actually need to reprioritize, then try it out.
class Graph:
    def __init__(self):
        self.edges = {}
        self.weights = {}
    def neighbors(self, id):
        return self.edges.get(id, [])
    def cost(self, a, b):
        return self.weights.get((a, b), 1)
    def add(self, a, b, w):
        self.edges.setdefault(a, []).append(b)
        self.weights[(a, b)] = w

class PriorityQueue:
    def __init__(self):
        self.elements = {}
    def empty(self):
        return len(self.elements) == 0
    def put(self, item, priority):
        if item in self.elements:
            print("Reprioritizing", item, "from", self.elements[item], "to", priority)
        else:
            print("Inserting", item, "with priority", priority)
        self.elements[item] = priority
    def get(self):
        best_item, best_priority = None, None
        for item, priority in self.elements.items():
            if best_priority is None or priority < best_priority:
                best_item, best_priority = item, priority
        del self.elements[best_item]
        return best_item

def search(graph, start):
    frontier = PriorityQueue()
    frontier.put(start, 0)
    came_from = {}
    cost_so_far = {}
    came_from[start] = None
    cost_so_far[start] = 0
    while not frontier.empty():
        current = frontier.get()
        for next in graph.neighbors(current):
            new_cost = cost_so_far[current] + graph.cost(current, next)
            if next not in cost_so_far or new_cost < cost_so_far[next]:
                cost_so_far[next] = new_cost
                frontier.put(next, new_cost)
                came_from[next] = current

g = Graph()
g.add('A', 'B', 1)
g.add('B', 'C', 1)
g.add('C', 'D', 1)
g.add('D', 'E', 1)
g.add('B', 'E', 4)
search(g, 'A')

Inserting A with priority 0
Inserting B with priority 1
Inserting C with priority 2
Inserting E with priority 5
Inserting D with priority 3
Reprioritizing E from 5 to 4
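To see the rule in action (my own sketch, not part of the original tutorial), here's a compact heapq-based version of the search that merely counts how often the reprioritize case would fire. It's run on two cost functions over the same edges: the edge-dependent weights from above, and a cost that depends only on the target node. The counter fires once in the first case and never in the second:

```python
import heapq

def dijkstra(neighbors, cost, start):
    """Dijkstra's Algorithm using a plain heapq (duplicate entries instead of a
    reprioritize operation); counts how often an already-queued node's
    priority would have needed to be lowered."""
    frontier = [(0, start)]
    in_frontier = {start}
    cost_so_far = {start: 0}
    would_reprioritize = 0
    while frontier:
        _, current = heapq.heappop(frontier)
        in_frontier.discard(current)
        for nxt in neighbors.get(current, []):
            new_cost = cost_so_far[current] + cost(current, nxt)
            if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                if nxt in in_frontier:
                    would_reprioritize += 1
                cost_so_far[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt))  # may leave a stale entry behind
                in_frontier.add(nxt)
    return cost_so_far, would_reprioritize

edges = {'A': ['B'], 'B': ['C', 'E'], 'C': ['D'], 'D': ['E']}

# Edge-dependent weights (the graph from the Testing section): reprioritize fires once.
weights = {('A','B'): 1, ('B','C'): 1, ('C','D'): 1, ('D','E'): 1, ('B','E'): 4}
costs1, n1 = dijkstra(edges, lambda a, b: weights[(a, b)], 'A')
assert n1 == 1 and costs1['E'] == 4

# Cost depending only on the target node (like per-tile terrain cost): never fires.
terrain = {'B': 1, 'C': 1, 'D': 1, 'E': 4}
costs2, n2 = dijkstra(edges, lambda a, b: terrain[b], 'A')
assert n2 == 0 and costs2['E'] == 5
```

The duplicate-entry trick is itself an example of the simpler non-reprioritizable queue: a stale entry pops later with a worse priority and its relaxations simply fail.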
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Determining Whether Two Groups are Isomorphic by their Group Presentations
Definition: Let $G$ be a group and let $S = \{ X_1, X_2, ..., X_n \}$. We let $W(X_1, X_2, ..., X_n)$ denote a word over $S \cup S^{-1}$. Then we define the following set by $\{ W(X_1, X_2, ..., X_n) \}_G = \{ W(g_1, g_2, ..., g_n) : g_1, g_2, ..., g_n \in G \}$.
We now prove a very important theorem.
Theorem 1: Let $G = \langle a_1, a_2, ... : P_1, P_2, ... \rangle$ and let $H = \langle b_1, b_2, ... : Q_1, Q_2, ... \rangle$. If $G$ and $H$ are isomorphic then for every $W(X_1, X_2, ..., X_n)$ we have that $\langle a_1, a_2, ... : P_1, P_2, ..., \{ W(X_1, X_2, ..., X_n) \}_G \rangle$ and $\langle b_1, b_2, ... : Q_1, Q_2, ..., \{ W(X_1, X_2, ..., X_n) \}_H \rangle$ are isomorphic.
We now give an example of applying Theorem 1.
Example 1: Prove that $G = \langle a, b : \emptyset \rangle$ is not isomorphic to $H = \langle x, y, z : \emptyset \rangle$.
Suppose instead that $G$ and $H$ are isomorphic and let $S = \{ X, Y \}$. Let $W(X, Y) = XYX^{-1}Y^{-1}$. Then by Theorem 1 we have that:(1)
Observe that:(2)
Clearly $\mathbb{Z} \times \mathbb{Z} \not \cong \mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ (they are free abelian groups of ranks $2$ and $3$ respectively), and so the assumption that $G$ and $H$ are isomorphic was false.
Article ID 0001 February 2019
Solar flares and coronal mass ejections (CMEs) are two very important active events from the Sun. Despite several theoretical and statistical analyses, the relation between solar flares and CMEs is so far not well established, and strong opinions and arguments still continue. Statistical approaches use a lot of data available from many measurements by space and ground instruments. They try to map the measured parameters of one event to those of another event and try to establish the relation between them. Halo CMEs are a kind of special CMEs in the sense that they are directed towards Earth and hence can influence Earth’s atmosphere. For a scientist interested in Sun–Earth interactions and the effect on Earth’s atmosphere, study of Halo CMEs is extremely important. In this paper the relation between solar flares and Halo CMEs is studied. The data sets used are for the period from October 2006 to March 2017. For the first time, the Halo CMEs are categorized into four different groups based on the relative time of occurrence with respect to the flares, and the relation between the flare and Halo CME parameters is studied. It is shown that: (a) there is a good correlation between certain flare parameters (like flare flux and peak intensity) and CME parameters (like kinetic energy, linear speed, and mass), especially when the Halo CME occurs during the flare; (b) for the same set of CMEs, the correlation is poor with flare duration; and (c) for CMEs before or after the flare, the correlation is lower than for CMEs occurring during the flare.
Article ID 0002 February 2019
In this paper, the Bianchi-V universe has been applied to the transitional universe. Exact solutions of Einstein’s modified field equations in the framework of Sáez-Ballester theory are obtained with heat conduction and perfect fluid. We have applied the hybrid expansion law for the average scale factor $a = kt^{\alpha} e^{\beta t}$, (where $\alpha \geq 0$, $k$ > 0, and $\beta \geq 0$ are constants). This results in a new class of transit models from the decelerating universe to the current accelerating universe. The present work also elucidates some of the physical, geometric and kinematic properties of the universe and finds them in good agreement with recent observations.
Article ID 0003 February 2019
Lunar occultation, which occurs when the Moon crosses sight-lines to distant sources, has been studied extensively through the apparent intensity pattern resulting from Fresnel diffraction, and has been successfully used to measure angular sizes of extragalactic sources. However, such observations till date have been mainly over narrow bandwidth, or averaged over the observing band, and the associated intensity pattern in time has rarely been examined in detail as a function of frequency over a wide band. Here, we revisit the phenomenon of lunar occultation with a view to study the associated intensity pattern as a function of both time and frequency. Through analytical and simulation approaches, we examine the variation of intensity across the dynamic spectra, and look for chromatic signatures which could appear as discrete dispersed signal tracks, when the diffraction pattern is adequately smoothed by a finite source size. We particularly explore circumstances in which such diffraction pattern might closely follow the interstellar dispersion law followed by pulsars and transients, such as the Fast Radio Bursts (FRBs), which remain a mystery even after a decade of their discovery. In this paper, we describe details of this investigation, relevant to radio frequencies at which FRBs have been detected, and discuss our findings, along with their implications. We also show how a
band-averaged light curve suffers from temporal smearing, and consequent reduction in contrast of intensity variation, with increasing bandwidth. We suggest a way to recover the underlying diffraction signature, as well as the sensitivity improvement commensurate with usage of large bandwidths.
Article ID 0004 February 2019
The occurrence of a total of 113 geomagnetic storms during the declining phase of Solar Cycle 24 (2015–2017), subdivided as about 105 moderate storms (${\rm Dst} = −50$ nT to $−$100 nT), 6 intense storms (${\rm Dst} = −100$ nT to $−$200 nT) and 2 severe storms (Dst < $−$200 nT), has been diagnosed on the basis of a 5-day active window through the CACTus (Computer aided CME tracking) software. A detailed study has been carried out for the 6 intense and 2 severe storms. It is inferred that CMEs are the major source of geomagnetic storms. Out of the 6 intense and 2 severe storms, only 1 has been observed with the origin of CIR. Thus, all analyzed intense geomagnetic storms are due to coronal mass ejection at the Sun. Most of our results are in good accordance with other reported results.
Article ID 0005 February 2019
This paper examines the linear stability analysis around triangular equilibrium points of a test body in the gravitational field of a low-mass post-AGB binary system, enclosed by a circumbinary disc and radiating with effective Poynting–Robertson (P–R) drag force. The equations of motion are derived and the positions of the triangular equilibrium points are located. These points are determined by the circumbinary disc, radiation and P–R drag. In particular, for our numerical computations of triangular equilibrium points and the linear stability analysis, we have taken a pulsating star, IRAS 11472-0800, as the first primary, with a young white dwarf star, G29-38, as the second primary. We observe that the disc does not change the positions of the triangular points significantly, except on the y-axis. However, radiation, P–R drag and the mass parameter $\mu$ contribute effectively in shifting the location of the triangular points. Regarding the stability analysis, it is seen that these points, under the combined effects of radiation, P–R drag and the disc, are unstable in the linear sense due to at least one complex root having a positive real part. In order to discern the effects of the parameters on the stability outcome, we consider the range of the mass parameter to be in the region of the Routhonian critical mass (0.038520). It is seen that in the absence of radiation and the presence of the disc, when the mass parameter is less than the critical mass, all the roots are pure imaginary and the triangular point is stable. However, when $\mu = 0.038521$, the four roots are complex, but become pure imaginary quantities when the disc is present. This proves that the disc is a stabilizing force. On introducing the radiation force, all earlier purely imaginary roots became complex roots in the entire range of the mass parameter.
Hence, the component of the radiation force is strongly a destabilizing force and induces instability at the triangular points, making them unstable equilibrium points.
Article ID 0006 February 2019
In this paper, we use the distributions of luminosity ($P$) and radio size ($D$) to re-examine the consistency of the unified scheme of high-excitation radio galaxies and quasars in the recently updated 3CRR sample. Based on a standard cosmology, we derive theoretically and show from observed data, the luminosity limit above which the 3CRR objects are well-sampled. We find, on average, a quasar fraction $\sim$0.44 and galaxy-to-quasar size ratio $\approx$2. Assuming a relativistic outflow of jet materials, we find a mean angle to the line of sight in the range 35$^{\circ} \leq \phi \leq 44^{\circ}$ for the quasars. On supposition of luminosity and orientation-dependent linear size evolution, expressed in a general functional form $D_{\rm (P,z,\phi)} \approx P^{\pm q}(1+z)^{−w} \sin \phi$, we show that above the flux detection threshold of the 3CRR sample, high-excitation galaxies and quasars undergo similar evolution with $q = −0.5$; $w = −0.27$ and luminosity independent evolution parameter $x = 2.27$, when orientation effect is accounted for. The results are consistent with orientation-based unified scheme for radio galaxies and quasars.
Article ID 0007 February 2019
Relations between radio surface brightness ($\Sigma$) and diameter ($D$) of supernova remnants (SNRs) are important in astronomy. In this paper, following the work of Duric and Seaquist (ApJ 301:308, 1986) at the adiabatic phase, we carefully investigate shell-type supernova remnants at the radiative phase, and obtain a theoretical $\Sigma$-$D$ relation at the radiative phase of shell-type supernova remnants at 1 GHz. By using these theoretical $\Sigma$-$D$ relations at the adiabatic and radiative phases, we also roughly determine the phases of some supernova remnants from observational data.
Article ID 0008 February 2019
New models for a charged anisotropic star in general relativity are found. We consider a linear equation of state consistent with a strange quark star. In our models a new form of measure of anisotropy is formulated; our choice is a generalization of other pressure anisotropies found in the past by other researchers. Our results generalize quark star models obtained from the Einstein–Maxwell equations. Well-known particular charged models are also regained. We indicate that relativistic stellar masses for several stars are obtained using the general mass function found in our model.
Since January 2016, the Journal of Astrophysics and Astronomy has moved to Continuous Article Publishing (CAP) mode. This means that each accepted article is being published immediately online with DOI and article citation ID with starting page number 1. Articles are also visible in Web of Science immediately. All these have helped shorten the publication time and have improved the visibility of the articles.
The following problem arises when we try to bound the expected offline optimal value of a simple online assignment problem with random values and unit weights, by its deterministic approximation.
The Problem
Consider a sequence $\{X_i\}_{i=1}^n$ of non-negative integrable i.i.d. random variables with absolutely continuous c.d.f. $F(x)$. Let $X_{(i)}$ be the $i^{\rm th}$ order statistic, so that $X_{(1)}$ is the minimum of the sequence and $X_{(n)}$ is the maximum. Now, let $T_k$ be the average of the top $k$ order statistics, that is, $T_k = \frac 1 k \sum_{i=n-k+1}^n X_{(i)}$. We would like to show that the expected value of the average of the top $k$ order statistics is upper bounded as follows:
$$\mathbb{E} T_{k} \le \mathbb{E} \left[ X \mid F(X) \ge 1 - k/n \right],$$
where the right hand side is the conditional expectation of $X$ given that it is at least the $(1-k/n)$-percentile. In order to keep things simple, we may assume that the c.d.f. $F(\cdot)$ is strictly increasing on its domain.
Moreover, we may fix $\rho \in (0,1)$ and set $k=[\rho n]$, so that we are interested in the average of the top $\rho$ fraction of the sequence. If we scale both $n$ and $k$ to infinity while keeping the ratio $\rho$ fixed, it seems to be the case that the bound is asymptotically tight:
$$ \lim_{n \rightarrow \infty} \mathbb{E} T_{[\rho n]} = \mathbb{E} \left[ X | F(X) \ge 1 - \rho \right]. $$
Can you show if these results hold? I have done some numerical experiments with a couple of distributions (uniform, truncated normal, exponential) that confirm these results. Any help or pointer would be appreciated.
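For concreteness, here is a sketch of one such experiment (my own code, under the assumption $X \sim \text{Exponential}(1)$, for which memorylessness gives $\mathbb{E}[X \mid X \ge q] = q + 1$ with $q = -\ln(k/n)$):

```python
import math
import random

def top_k_average_mean(n, k, reps, rng):
    """Monte Carlo estimate of E[T_k], the expected average of the top k
    order statistics out of n i.i.d. Exp(1) draws."""
    total = 0.0
    for _ in range(reps):
        xs = sorted(rng.expovariate(1.0) for _ in range(n))
        total += sum(xs[n - k:]) / k  # average of the k largest values
    return total / reps

n, k = 50, 10
# For Exp(1) the (1 - k/n)-quantile is q = -ln(k/n), and by memorylessness
# E[X | F(X) >= 1 - k/n] = E[X | X >= q] = q + 1.
bound = -math.log(k / n) + 1.0
emp = top_k_average_mean(n, k, 20000, random.Random(0))
assert emp <= bound  # empirically ~2.57 versus the bound ~2.61
```

Already at $n = 50$ the estimate sits just below the conditional-expectation bound, consistent with the conjectured asymptotic tightness.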
Thanks in advance.
More motivation
This is a stripped down version of a more complicated problem. Suppose that we have an incoming inventory of $n$ different items with unknown value arriving in an online fashion. We can only keep $k$ items of the $n$ total. The decision to keep an item has to be made at the moment of arrival, and once we decide to keep an item we need to stick to this decision.
The value of the $i$-th item, denoted by $X_i$, is unknown, and revealed when it arrives (before a decision needs to be made). However, we do have a prior for the values; they are drawn independently from an identical distribution $F(\cdot)$. The objective of the problem is to design an online policy that maximizes the expected total value of the assignment.
A useful benchmark, when comparing online policies, is the offline optimal solution. Given a realization of the values $X=\{X_i\}_{i=1}^n$, the optimal value of the offline problem, denoted by $P(X)$, is
$$\begin{align} P(X) = \max_y &\; \sum_{i=1}^n y_i X_i \\ \text{s.t.} & \sum_{i=1}^n y_i = k, \\ & y_i \in \{0,1\} \end{align}$$ In this simple case, the offline optimal solution is to keep the $k$ items with the highest values. Equivalently, we have that $P(X) = \sum_{i=n-k+1}^n X_{(i)}$. We are interested in the expected value of the offline optimal solution, which is given by $\mathbb{E} P(X) = k \mathbb{E} T_k$.
A simple online policy could instruct us to keep those items with value larger than the $(1-k/n)$-percentile. Such a policy can be shown to attain an expected value of $n \mathbb{E} \left[ X \mathbf{1} \{ F(X) \ge 1 - k/n\} \right]$ when $n$ is large.
The results that we want to prove would allow us to upper bound the expected optimal value of the offline problem by a bound that could be attained, asymptotically, by an online policy. This would confirm that our policy is good.
Browse by Person
Article (54)
Aad, G, Abbott, B, Abdallah, J et al. (2883 more authors) (2016)
Addendum to ‘Measurement of the tt̄ production cross-section using eμ events with b-tagged jets in pp collisions at √s = 7 and 8 TeV with the ATLAS detector’. European Physical Journal C: Particles and Fields, 76. 642. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2860 more authors) (2016)
Measurement of the inclusive isolated prompt photon cross section in pp collisions at root s=8 TeV with the ATLAS detector. Journal of High Energy Physics (8). ARTN 005. pp. 1-42.
Aad, G, Abbott, B, Abdallah, J et al. (2866 more authors) (2016)
Measurement of event-shape observables in Z→ℓ+ℓ- events in pp collisions at √s=7 TeV with the ATLAS detector at the LHC. The European Physical Journal C - Particles and Fields, 76 (7). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2866 more authors) (2016)
Measurement of event-shape observables in Z→ℓ+ℓ- events in pp collisions at √s=7 TeV with the ATLAS detector at the LHC. The European Physical Journal C, 76. 375. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2867 more authors) (2016)
Identification of high transverse momentum top quarks in pp collisions at √s=8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016. 93. ISSN 1029-8479
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Measurement of the charged-particle multiplicity inside jets from √ s =8 TeV pp collisions with the ATLAS detector. European Physical Journal C: Particles and Fields , 76 (6). 322. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2715 more authors) (2016)
Measurements of and production in collisions at with the ATLAS detector. Physical Review D, 93 (11). ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Reconstruction of hadronic decay products of tau leptons with the ATLAS experiment. The European Physical Journal C, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2867 more authors) (2016)
Measurement of the transverse momentum and φ*_η distributions of Drell–Yan lepton pairs in proton–proton collisions at √s = 8 TeV with the ATLAS detector. The European Physical Journal C - Particles and Fields, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2862 more authors) (2016)
Measurement of the differential cross-sections of prompt and non-prompt production of J/ψ and ψ(2S) in pp collisions at √s=7 and 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (5). 283. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2866 more authors) (2016)
Search for the standard model Higgs boson produced in association with a vector boson and decaying into a tau pair in pp collisions sqrt s = 8 TeV at with the ATLAS detector. Physical Review D, 93 (9). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2860 more authors) (2016)
Measurements of production cross sections in collisions at with the ATLAS detector and limits on anomalous gauge boson self-couplings. Physical Review D, 93 (9). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2856 more authors) (2016)
Search for supersymmetry at √s = 13 TeV in final states with jets and two same-sign leptons or three leptons with the ATLAS detector. The European Physical Journal C, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Probing lepton flavour violation via neutrinoless τ⟶3μ decays with the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (5). 232. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2871 more authors) (2016)
Search for dark matter produced in association with a Higgs boson decaying to two bottom quarks in pp collisions at root s=8 TeV with the ATLAS detector. PHYSICAL REVIEW D, 93 (7). ARTN 072007. ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2863 more authors) (2016)
Search for new phenomena in events with at least three photons collected in pp collisions at √s = 8 TeV with the ATLAS detector. The European Physical Journal C, 76 (4). ISSN 1434-6044
Aad, G, Abajyan, T, Abbott, B et al. (2840 more authors) (2016)
Measurement of the centrality dependence of the charged-particle pseudorapidity distribution in proton–lead collisions at √s_NN = 5.02 TeV with the ATLAS detector. The European Physical Journal C, 76 (4). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2863 more authors) (2016)
Search for anomalous couplings in the W tb vertex from the measurement of double differential angular decay rates of single top quarks produced in the t-channel with the ATLAS detector. Journal of High Energy Physics, 2016 (4).
Aad, G, Abbott, B, Abdallah, J et al. (2844 more authors) (2016)
Search for new phenomena with photon plus jet events in proton-proton collisions at TeV with the ATLAS detector. Journal of High Energy Physics (3). 41. ISSN 1029-8479
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Search for the electroweak production of supersymmetric particles in root s=8 TeV pp collisions with the ATLAS detector. Physical Review D, 93 (5). 052002. ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2794 more authors) (2016)
Search for the electroweak production of supersymmetric particles in root s=8 TeV pp collisions with the ATLAS detector. Physical Review D, 93 (5). 052002. ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2879 more authors) (2016)
Centrality, rapidity, and transverse momentum dependence of isolated prompt photon production in lead-lead collisions at TeV measured with the ATLAS detector. Physical Review C, 93 (3). ISSN 0556-2813
Aad, G, Abbott, B, Abdallah, J et al. (2856 more authors) (2016)
Search for invisible decays of a Higgs boson using vector-boson fusion in pp collisions at √s=8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016. 172. ISSN 1126-6708
Aad, G, Abbott, B, Abdallah, J et al. (2862 more authors) (2016)
Search for a high-mass Higgs boson decaying to a W boson pair in pp collisions at √s = 8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016 (1).
Aad, G, Abbott, B, Abdallah, J et al. (2854 more authors) (2016)
Measurements of fiducial cross-sections for tt̄ production with one or two additional b-jets in pp collisions at √s = 8 TeV using the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (1). 11. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2824 more authors) (2016)
Measurements of the Higgs boson production and decay rates and coupling strengths using pp collision data at √s = 7 and 8 TeV in the ATLAS experiment. European Physical Journal C: Particles and Fields, 76. 6. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2835 more authors) (2015)
Search for Higgs boson pair production in the bb̄bb̄ final state from pp collisions at √s = 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (9). 412. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2825 more authors) (2015)
Search for heavy long-lived multi-charged particles in pp collisions at root s=8 TeV using the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (8). 362. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2819 more authors) (2015)
Constraints on the off-shell Higgs boson signal strength in the high-mass ZZ and WW final states with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (7). 335. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2015)
Search for a new resonance decaying to a W or Z boson and a Higgs boson in the ℓℓ/ℓν/νν + bb̄ final states with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (6). 263. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2823 more authors) (2015)
Determination of spin and parity of the Higgs boson in the WW* → eνμν decay channel with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 231. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2815 more authors) (2015)
Observation and measurements of the production of prompt and non-prompt J/ψ mesons in association with a Z boson in pp collisions at √s = 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 229. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2821 more authors) (2015)
Search for direct pair production of a chargino and a neutralino decaying to the 125 GeV Higgs boson in √s = 8 TeV pp collisions with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 208. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2015)
Search for W′ → tb → qqbb decays in pp collisions at √s = 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (4). 165. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2015)
Search for Higgs and Z Boson Decays to J/ψγ and ϒ(nS)γ with the ATLAS Detector. Physical Review Letters, 114 (12). 121801. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2881 more authors) (2015)
Simultaneous measurements of the tt¯, W+W−, and Z/γ∗→ττ production cross-sections in pp collisions at √s=7 TeV with the ATLAS detector. Physical Review D - Particles, Fields, Gravitation and Cosmology, 91 (5). 052005. ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2896 more authors) (2015)
Measurements of Higgs boson production and couplings in the four-lepton channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 91 (1). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2014)
Search for nonpointing and delayed photons in the diphoton and missing transverse momentum final state in 8 TeV pp collisions at the LHC using the ATLAS detector. Physical Review D, 90 (11). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2886 more authors) (2014)
Measurement of the Higgs boson mass from the H→γγ and H→ZZ∗→4ℓ channels in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 90 (5). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2878 more authors) (2014)
Search for high-mass dilepton resonances in pp collisions at √s = 8 TeV with the ATLAS detector. Physical Review D, 90. 052005. ISSN 1550-7998
Aad, G, Abajyan, T, Abbott, B et al. (2920 more authors) (2013)
Evidence for the spin-0 nature of the Higgs boson using ATLAS data. Physics Letters B, 726 (1-3). pp. 120-144. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (1825 more authors) (2012)
Search for contact interactions in dilepton events from pp collisions at root s=7 TeV with the ATLAS detector. Physics Letters B, 712 (1-2). pp. 40-58. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (2923 more authors) (2012)
Measurement of D*± meson production in jets from pp collisions at √s = 7 TeV with the ATLAS detector. Physical Review D, 85 (5). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (3057 more authors) (2012)
Search for the Standard Model Higgs Boson in the Diphoton Decay Channel with 4.9 fb−1 of pp Collision Data at √s=7 TeV with ATLAS. Physical Review Letters, 108. 111803. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2775 more authors) (2012)
Measurement of the ZZ Production Cross Section and Limits on Anomalous Neutral Triple Gauge Couplings in Proton-Proton Collisions at √s=7 TeV with the ATLAS Detector. Physical Review Letters, 108 (4). 041804. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2992 more authors) (2012)
K0s and Λ production in pp interactions at √s=0.9 and 7 TeV measured with the ATLAS detector at the LHC. Physical Review D, 85 (1). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (3022 more authors) (2011)
Search for Dilepton Resonances in pp Collisions at √s=7 TeV with the ATLAS Detector. Physical Review Letters, 107 (27). ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3028 more authors) (2011)
Measurement of the transverse momentum distribution of Z/gamma* bosons in proton-proton collisions at root s=7 TeV with the ATLAS detector. Physics Letters B, 705 (5). pp. 415-434. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (3023 more authors) (2011)
Search for a standard model Higgs boson in the H→ZZ→ℓ(+)ℓ(-)νν decay channel with the ATLAS detector. Physical Review Letters, 107 (22). 221802. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3017 more authors) (2011)
Search for new phenomena with the monojet and missing transverse momentum signature using the ATLAS detector in sqrt(s) = 7 TeV proton-proton collisions. Physics Letters B, 705 (4). pp. 294-312. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (3033 more authors) (2011)
Measurement of the W+W− Cross Section in √s=7 TeV pp Collisions with ATLAS. Physical Review Letters, 107. 041802. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3046 more authors) (2011)
Measurement of the production cross section for W-bosons in association with jets in pp collisions at √s=7 TeV with the ATLAS detector. Physics Letters B, 698 (5). pp. 325-345. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (3024 more authors) (2011)
Measurement of Dijet Azimuthal Decorrelations in pp Collisions at √s=7 TeV. Physical Review Letters, 106. 172002. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3177 more authors) (2010)
Measurement of the W -> lv and Z/gamma* -> ll production cross sections in proton-proton collisions at √s=7 TeV with the ATLAS detector. Journal of High Energy Physics. 60. ISSN 1029-8479
Difference between revisions of "Help:Formatting"
Latest revision as of 09:27, 16 September 2014
You can format your text using the formatting toolbar or wiki markup. Wiki markup can be thought of as a simplified version of HTML; it consists of ordinary characters like asterisks, single quotes or equals signs which have a special function in the wiki. For example, to format a word in
italic, you include it in two single quotes like
''this''.
Text Formatting
Description / You type / You get — character (inline) formatting, applies anywhere:

Italic text:
''italic''
→ italic

Bold text:
'''bold'''
→ bold

Bold and italic:
'''''bold & italic'''''
→ bold & italic

Ignore wiki markup:
<nowiki>no ''markup''</nowiki>
→ no ''markup''

Section formatting — only at the beginning of the line:

Preformatted text:
 preformatted text is done with a '''space''' at the ''beginning'' of the line
This way of preformatting only applies to section formatting, and character formatting markups are still effective.

Organizing Headers & Lines
Description / You type — section formatting, only at the beginning of the line (with no leading spaces):

Headings of different levels:
=level 1=
==level 2==
===level 3===
====level 4====
=====level 5=====
======level 6======
Level 1 is normally set aside for the article title. An article with 4 or more headings automatically creates a table of contents.

Horizontal rule:
----
Lists
Description / You type — section formatting, only at the beginning of the line (with no leading spaces):

Bullet list:
* one
* two
* three
** three point one
** three point two
Inserting a blank line will end the first list and start another.

Numbered list:
# one
# two<br />spanning more lines<br />doesn't break numbering
# three
## three point one
## three point two

Definition list:
;item 1
: definition 1
;item 2
: definition 2-1
: definition 2-2

Adopting definition list to indent text:
: Single indent
:: Double indent
::::: Multiple indent
This workaround may be controversial from the viewpoint of accessibility.

Mixture of different types of list:
# one
# two
#* two point one
#* two point two
# three
#; three item one
#: three def one
# four
#: four def one
#: this rather looks like the continuation of # four
#: and thus often used instead of <br />
# five
## five sub 1
### five sub 1 sub 1
## five sub 2

;item 1
:* definition 1-1
:* definition 1-2
:
;item 2
:# definition 2-1
:# definition 2-2

The usage of
For even more on lists, check out Wikipedia's List Help article.
Signatures
You should always sign your comments, though signatures can be inserted anywhere on a wiki page.
Description / You type / You get — character (inline) formatting, applies anywhere:

Signature — three tildes for just a signature:
~~~
→ Cynthia (UBC LSIT)

Signature with date and time — four tildes:
~~~~
→ Cynthia (UBC LSIT) 22:27, 26 May 2010 (UTC)

Only date and time — five tildes:
~~~~~
→ 22:27, 26 May 2010 (UTC)

Note: Once you save, the signature and date/time are automatically created. The next time someone edits, it no longer shows the tildes.

Links

Please see Help:Links for detailed information on creating hyperlinks.

Paragraphs
MediaWiki ignores single line breaks. To start a new paragraph, leave an empty line. You can force a line break within a paragraph with the HTML tag
<br />.
HTML Formatting
Some HTML tags are allowed in MediaWiki, for example
<code>,
<div>,
<span> and
<font>. These apply anywhere you insert them.
Description / You type / You get:

Underscore:
<u>underscore</u>
→ underscore

Strikethrough:
<del>Strikethrough</del> or <s>Strikethrough</s>

Fixed width text:
<tt>Fixed width text</tt> or <code>source code</code>

Blockquotes: quoted text is set off as an indented block.

Typewriter font: Puts text in a <tt>typewriter font</tt>. The same font is generally used for <code>computer code</code>.

Superscripts and subscripts:
X<sup>2</sup>, H<sub>2</sub>O

Centered text:
<center>Centered text</center>
Please note the American spelling of "center".

Comment:
<!-- This is a comment -->
Text can only be viewed in the edit window.
Completely preformatted text:
 this way, all markups are '''ignored'''.

Customized preformatted text:
 this way, all markups are '''ignored''' and formatted with a CSS text.

Mathematical formulas
MediaWiki allows you to use LaTeX to insert mathematical formulae by typing in <math>Formula here</math>. Included here are a couple of examples and commonly used functions and expressions.
What you type / What it looks like:

Superscript: <math> a^2 </math>
Subscript: <math> a_3 </math>
Grouping: <math> a_{x,y} </math>
Combination: <math> a_3^2 </math> or <math> {a_3}^2 </math>
Root: <math> \sqrt[n]{x} </math> ([n] is optional)
Fraction: <math> \frac{3}{4}=0.75 </math> or (small) <math> \tfrac{1}{3}=0.\bar{3} </math>
More complex example: <math> \sum_{n=0}^\infty \frac{x^n}{n!} </math>
See WikiMedia's Help on Displaying a Formula for a full article on using TeX to display formulae. Beginning at Section 3 (Functions, symbols, special characters) is a comprehensive list of all the symbols.
Footnotes
You can add footnotes to sentences using the
ref tag -- this is especially good for citing a source.
What you type:
There are over six billion people in the world.<ref>CIA World Factbook, 2006.</ref>
References: <references/>

What it looks like:
There are over six billion people in the world. [1]
References:
The question was about intuitionism specifically, not some variant of constructivism, nor about some particular formalization of intuitionism (I don't think an intuitionist would recognize any particular formalization as being complete or even meaningful).
Your statement is not of the form $$P \to (Q \vee R).$$ It's of the form $$(\forall n)\Big(P(n) \to \big(Q(n) \vee R(n)\big)\Big).$$ (Quantification here is over natural numbers.)
To prove this intuitionistically, we don't necessarily need a proof of $(\forall n)(P(n) \to Q(n))$ or a proof of $(\forall n)(P(n) \to R(n)).$ What we need is a constructive way of finding, for each natural number $n,$ either a proof for that specific $n$ of $P(\underline{n}) \to Q(\underline{n})$ or a proof for that specific $n$ of $P(\underline{n}) \to R(\underline{n}),$ where $\underline{n}$ is the numeral representing $n.$
In general, to prove intuitionistically that $(\forall n)S(n),$ we need a constructive way of finding, for each natural number $n,$ a proof of $S(\underline{n})$ for that specific $n.$
In your example, it's clear that one can intuitionistically determine whether $b$ is composite or prime (simply check all possible factors between $2$ and $b-1\text{)}.$ If $b$ is composite, we immediately have a proof that "$b$ is composite or $(b \mid a)\text{."}$ If $b$ is prime, then since we are given that $n\ne 1\,\wedge\,(n\mid a)\,\wedge\,(n\mid b),$ we can conclude that $n=b,$ so $b\mid a,$ and again we have a proof of $\text{"}b$ is composite or $(b \mid a)\text{."}$
So we have an intuitionistically acceptable method, given any $a, b, \text{ and } n$ such that $n\ne 1$ and $n$ divides both $a$ and $b$, of finding a proof that $\text{"}\underline{b}$ is a composite number or $\underline{b}$ divides $\underline{a}\text{"},$ which is exactly what is needed.
Now, there are ultrafinitists who might dispute the fact that each natural number $\gt 1$ is either prime or composite, but intuitionists would have no problem with that statement.
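The decision procedure described in this answer can be made completely explicit. Below is a short Python sketch (the function name and return format are my own choices, not part of the answer): given $a, b, n$ with $n \ne 1$, $n \mid a$ and $n \mid b$, it returns concrete evidence for one disjunct of "$b$ is composite or $b \mid a$".

```python
def witness(a, b, n):
    """Constructively decide 'b is composite or b divides a', given
    n != 1 with n | a and n | b, and return explicit evidence."""
    assert n != 1 and a % n == 0 and b % n == 0
    # Check all possible factors between 2 and b-1.
    for d in range(2, b):
        if b % d == 0:
            return ("composite", d)      # d is a witness factor of b
    # Otherwise b is prime; since n != 1 divides b, n must equal b,
    # and n | a then gives b | a.
    return ("divides", a // b)           # the quotient witnesses b | a
```

For example, `witness(10, 4, 2)` returns `('composite', 2)`, while `witness(10, 5, 5)` returns `('divides', 2)`.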
Lasers work by stimulated emission of atomic transitions. Stimulated emission produces two photons and, because the particle number is well-defined, projects the field into a Fock state. However, it is a known fact that lasers emit light in a coherent state. How does the field evolve from a particle-number state to a superposition of such states? Omitting normalization:
$$ | n \rangle \rightarrow \sum_{n=0}^{\infty}\frac{\alpha^n }{\sqrt{n!}}| n \rangle $$
I guess one way of looking at it is that the field shifts, in accordance with $\Delta n \Delta \phi \geq 1$, from a definite particle number to a definite phase, but that feels like a superficial answer to me. What I want to understand is the
mechanism that allows this to happen. Is it the reflection with the mirror? Is it the imposed boundaries of the resonating cavity? Pumping method?
(1) Is it correct then that geodesics in M (if M is Riemannian) are just a special case of this construction?
Yes, it is correct to say this. Given a Riemannian manifold $M$ with a metric $g_{ab}$
and an affine connection $\Gamma^a_{bc}$, one can construct various geometric invariants on $M$ such as the Riemann and Ricci tensors. Now for a point particle propagating on this static manifold, i.e without taking General Relativity into account, one could write various Lagrangians of the form:
$$ L = \sqrt{|g|} ( K_m - V_m) $$
where $|g|$ is the absolute value of the determinant of $g_{ab}$, and $K_m$ and $V_m$ are kinetic and potential terms respectively, which depend on the position and velocity of the particle or, more generically, on its energy-momentum n-vector $j^a$. A generic example of the kinetic term in this case is:
$$ K_m := \frac{1}{2} \left( \partial^a j_a \right)^2 + \frac{1}{2}(\partial^a j^b \partial_a j_b) $$
and $V_m$ is some polynomial in $j^a$.
If we wish to incorporate the dynamics of the manifold itself (i.e. move into the domain of GR) then our Lagrangians must contain "kinetic" and "potential" terms for the various geometric invariants. These quantities however lack a simple expression for a general manifold and are subsumed into the Ricci scalar:
$$ L = \sqrt{|g|}(K_m - V_m + \mathcal{R}) $$
where the Ricci scalar $\mathcal{R} = g^{ab}\mathcal{R}_{ab} = g^{ab}g^{cd} \mathcal{R}_{acdb} $ is the trace of the Ricci tensor which in turn is a partial trace of the Riemann tensor.
Now, geodesics on a Riemannian manifold $M$ with a metric $g_{ab}$
and an affine connection $\Gamma^a_{bc}$ are given by solutions (integral curves) of the geodesic equation:
$$ v^a\nabla_a v^b = 0 $$
where $v := v^a \partial_a$ is a vector field on $M$, the covariant derivative is given in terms of the affine connection by $\nabla_a v^b = \partial_a v^b + \Gamma^b_{ac} v^c$, and indices are raised and lowered with the metric, $v^a = g^{ab} v_b$. Your question amounts to asking if there exists a Lagrangian $L(v^a)$ whose equation of motion is this geodesic equation. The answer is 'yes'. The following Lagrangian
$$ L = \sqrt{g^{ab} v_a v_b} $$
which is nothing more than the proper distance along the particle's worldline, yields the geodesic equation on variation w.r.t. $\delta v_a$.
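For completeness, here is the standard variation (a textbook calculation, not part of the original answer; it assumes the Levi-Civita connection of $g_{ab}$): with $v^a = dx^a/d\lambda$ along the worldline, the Euler-Lagrange equations of $L = \sqrt{g_{bc} v^b v^c}$ read

$$ \frac{d}{d\lambda}\left(\frac{g_{ab} v^b}{L}\right) = \frac{1}{2L}\,\partial_a g_{bc}\, v^b v^c. $$

Choosing $\lambda$ to be proper distance so that $L = 1$, expanding the derivative and symmetrizing in $b,c$ gives

$$ g_{ab}\dot{v}^b + \frac{1}{2}\left(\partial_b g_{ac} + \partial_c g_{ab} - \partial_a g_{bc}\right) v^b v^c = 0 \quad\Longleftrightarrow\quad \dot{v}^a + \Gamma^a_{bc} v^b v^c = 0, $$

which is exactly $v^a \nabla_a v^b = 0$.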
(2) Homogeneous spaces, i.e. manifolds of the form G/K, where G is a Lie group and K a closed subgroup, provide interesting examples of manifolds. What would a physical system look like which has G/K as a configuration space? I remember hearing something along the lines "global G-invariance, local K-invariance" but I'm not sure.
An example of a physical system with this configuration space is MacDowell-Mansouri gravity. A beautifully written reference for this is the review paper MacDowell-Mansouri gravity and Cartan geometry by Derek Wise, a former student of John Baez.
There one starts with a five-dimensional spacetime with symmetry group $G$ which can be deSitter, Anti-deSitter or Minkowski with a lie-algebra valued antisymmetric tensor $B$ and the curvature of the gauge connection $F$. The action is that of a topological BF theory given by:
$$ S_{BF} = \int tr B \wedge F $$
We can identify $B:= B_{\mu\nu}^{IJ}$ with the wedge product of two fermion fields: $ B^{IJ} = \psi^I \wedge \psi^J $ which transform in the fundamental representation of the gauge group. Furthermore, as in the case of the BCS mechanism, let us assume that the dynamics of the system contains a four-fermion term which acquires a vev (vacuum expectation value) due to the formation of a condensate of these fermions. Such a term results in the action:
$$ S'_{BF} = \int tr B \wedge F - \frac{G\Lambda}{6} B \wedge \star B $$
where $\star B$ is the Hodge dual of $B$. $G$ and $\Lambda$ are Newton's constant and the cosmological constant respectively. The formation of the condensate can be physically described by writing the five-dimensional gauge connection in the following form:
$$ {}^5 A =
\left( \begin{array}{cc}
{}^4 A & \frac{1}{l}\{e^0,e^{i} \} \\
\frac{1}{l} \{ e^0, \epsilon e^i \} & 0
\end{array} \right) $$
where ${}^4 A$ is a four-dimensional connection and $\{e^0,e^{i}\}$ (where $i \in \{1,2,3\}$) is a vierbein (tetrad). $\epsilon = \{-1,0,1\}$ for $G = \{ SO(4,1)$ (deSitter), $ISO(3,1)$ (Minkowski), $SO(3,2)$ (Anti-deSitter) $\}$ respectively. The group $H$ in each case is $SO(3,1)$ and the resulting theory has gauge group $\{SO(4,1)/SO(3,1), ISO(3,1)/SO(3,1), SO(3,2)/SO(3,1)\}$ respectively.
The resulting theory describes general relativity in
four dimensions with the addition of topological terms such as the Nieh-Yan and Pontryagin terms in the action. For more details see Wise's excellent paper referenced above, and also the seminal paper (1) by Freidel and Starodubtsev, who first proposed this formulation of the MacDowell-Mansouri mechanism.
After having read the answers to calculating $\pi$ manually, I realised that the two fast methods (Ramanujan and Gauss–Legendre) used $\sqrt{2}$. So, I wondered how to calculate $\sqrt{2}$ manually in an accurate fashion (i.e., how to approximate its value easily).
One really easy way of approximating square roots surprisingly accurately was actually developed by the Babylonians.
First they made a guess at the square root of a number $N$--let this guess be denoted by $r_1$. Noting that $$ r_1\cdot\left(\frac{N}{r_1}\right)=N, $$ they concluded that the actual square root must be somewhere between $r_1$ and $N/r_1$. Thus, their next guess for the square root, $r_2$, was the average of these two numbers: $$ r_2 = \frac{1}{2}\left(r_1+\frac{N}{r_1}\right). $$ Continuing in this way, in general, once we have reached the $n$th approximation to the square root of $N$, we find the $(n+1)$st using $$ r_{n+1}=\frac{1}{2}\left(r_n+\frac{N}{r_n}\right). $$ All that you really need to do is make a moderately decent guess of the square root of a number and then apply this method two or three times and you should have quite a good approximation.
For $\sqrt{2}$, simply using a guess of $1$ and applying this method three times (the algebra involved is remarkably simple) yields an approximation of $$ \frac{577}{408}\approx \color{red}{1.41421}\color{blue}{568627}, $$ whereas $$ \sqrt{2}\approx \color{red}{1.41421}\color{green}{356237}. $$ That's quite a good approximation using an easy and quick manual method.
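As a quick check, here is the Babylonian iteration in Python (a sketch; the function name is mine), using exact rational arithmetic so the convergents $3/2$, $17/12$, $577/408$ can be read off directly:

```python
from fractions import Fraction

def babylonian_sqrt(N, guess, steps):
    """Heron's/Babylonian method: repeatedly replace r by the
    average of r and N/r, which converges quadratically to sqrt(N)."""
    r = Fraction(guess)
    for _ in range(steps):
        r = (r + Fraction(N) / r) / 2
    return r

# Starting from a guess of 1, three steps give 577/408 for sqrt(2):
print(babylonian_sqrt(2, 1, 3))  # 577/408
```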
I think crash's post's method is the best, but if you don't want to do a lot of long divisions, then here is an alternative method for the lazy.
Suppose you want to compute $\sqrt{2}$ to $k$ decimal places. That is, you want to find $x$ in:
$$x \cdot 10^{-k} \approx \sqrt{2}$$ $$x^2 \approx 2 \cdot 10^{2k}$$
This allows you to find $x$ using a binary search: it takes approximately $\log_2(10^k) \approx 3.3\,k$ squarings of $k$-digit numbers (plus computing the average of the upper and lower bound at each step, which is easy by hand: just add and divide by $2$).
And the accuracy of this result is guaranteed by construction. Suppose you want to calculate $\sqrt{2}$ to $8$ decimal places:
$$x^2 = 2 \cdot 10^{16}$$
$$\begin{array} {c|ccc} \text{Step} & \text{LowerBound} & \text{UpperBound} &\text{MidPoint} \\ 1 & 100000000 & 1000000000 & 550000000 \\ 2 & 100000000 & 549999999 & 324999999 \\ 3 & 100000000 & 324999998 & 212499999 \\ 4 & 100000000 & 212499998 & 156249999 \\ 5 & 100000000 & 156249998 & 128124999 \\ 6 & 128125000 & 156249998 & 142187499 \\ 7 & 128125000 & 142187498 & 135156249 \\ 8 & 135156250 & 142187498 & 138671874 \\ 9 & 138671875 & 142187498 & 140429686 \\ 10 & 140429687 & 142187498 & 141308592 \\ 11 & 141308593 & 142187498 & 141748045 \\ 12 & 141308593 & 141748044 & 141528318 \\ 13 & 141308593 & 141528317 & 141418455 \\ 14 & 141418456 & 141528317 & 141473386 \\ 15 & 141418456 & 141473385 & 141445920 \\ 16 & 141418456 & 141445919 & 141432187 \\ 17 & 141418456 & 141432186 & 141425321 \\ 18 & 141418456 & 141425320 & 141421888 \\ 19 & 141418456 & 141421887 & 141420171 \\ 20 & 141420172 & 141421887 & 141421029 \\ 21 & 141421030 & 141421887 & 141421458 \\ 22 & 141421030 & 141421457 & 141421243 \\ 23 & 141421244 & 141421457 & 141421350 \\ 24 & 141421351 & 141421457 & 141421404 \\ 25 & 141421351 & 141421403 & 141421377 \\ 26 & 141421351 & 141421376 & 141421363 \\ 27 & 141421351 & 141421362 & 141421356 \\ 28 & 141421357 & 141421362 & 141421359 \\ 29 & 141421357 & 141421358 & 141421357 \\ \end{array}$$
$29$ $8$-digit multiplications for $8$ decimal places of accuracy (and even a lot of that was redundant).
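The same bisection is easy to hand to a machine. A minimal Python sketch (the helper name is mine) that finds the largest integer $x$ with $x^2 \le n \cdot 10^{2k}$:

```python
def sqrt_digits(n, k):
    """Binary search for the largest x with x**2 <= n * 10**(2*k),
    so that x * 10**-k is sqrt(n) truncated to k decimal places."""
    target = n * 10 ** (2 * k)
    lo, hi = 0, target + 1        # invariant: lo**2 <= target < hi**2
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mid * mid <= target:
            lo = mid
        else:
            hi = mid
    return lo

print(sqrt_digits(2, 8))  # 141421356, i.e. sqrt(2) ≈ 1.41421356
```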
You can use the formula $$\frac{a_{n+1}}{b_{n+1}}=\frac{a_n^2+2b_n^2}{2a_nb_n}.$$ If we take an initial value of $\sqrt{2}$ as $\frac{3}{2}$,
then the new value becomes $$\frac{a}{b}=\frac{3^2+2\cdot 2^2}{2\cdot 2\cdot 3}=\frac{17}{12},$$
so the new values are $a=17$ and $b=12$,
and then continue.
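This recurrence is easy to run by machine as well; note that one step of it is algebraically identical to the Babylonian average applied to $a/b$, so the error roughly squares at each step. A minimal Python sketch (the helper name is mine):

```python
def next_approx(a, b):
    """One step of a/b -> (a^2 + 2*b^2) / (2*a*b), an approximation
    to sqrt(2) whose error roughly squares at each step."""
    return a * a + 2 * b * b, 2 * a * b

a, b = 3, 2                 # initial value 3/2
for _ in range(3):
    a, b = next_approx(a, b)
print(a, b)  # 665857 470832, correct to about 11 decimal places
```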
Another technique might be to use the Taylor series $$(1+x)^{1/2} = 1+ \frac 12 x - \frac 18 x^2 + \frac{1}{16} x^3 - \frac{5}{128} x^4 +\cdots.$$
The coefficients of this series are $\frac{1}{k!} \left(\frac12\right) \left(-\frac12\right)\left(-\frac32\right)\cdots\left(\frac32 - k\right)$. You can plug in $x=1$ so that the series evaluates to $\sqrt2$, but the series converges faster if you start with a rational approximation of $\sqrt2$ and use the Taylor series to compute a correction factor, for example $\sqrt 2 = 1.4 \cdot \left(1 + \frac{1}{49}\right)^{1/2}.$
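A sketch of this approach in Python (the function name and structure are mine): sum the binomial series for $(1+x)^{1/2}$ term by term, scaled by a rational prefactor chosen so that $x$ is small.

```python
def sqrt_via_binomial(base, x, terms):
    """Partial sum of (1+x)^(1/2) = sum_k C(1/2, k) x^k, scaled by
    'base'; choose base and x so that base**2 * (1+x) is the target."""
    coeff, power, total = 1.0, 1.0, 0.0
    for k in range(terms):
        total += coeff * power
        coeff *= (0.5 - k) / (k + 1)  # C(1/2, k) -> C(1/2, k+1)
        power *= x
    return base * total

# sqrt(2) = 1.4 * (1 + 1/49)**0.5; with x = 1/49 this small, six
# terms already agree with sqrt(2) to roughly machine precision.
print(sqrt_via_binomial(1.4, 1 / 49, 6))
```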
I think it'd be nice to have a whole set of possible messages, each associated with some equation which has zero on the right hand side. It'd look something like this:
"404; Did you just [insert operation that yields zero*]? Because there's nothing here! (Insert relevant equation**)"
*Examples; (**Corresponding equations):
1. symmetrize the electromagnetic field strength tensor \(\bigl(F^{(\mu\nu)}=0\bigr)\)
2. take a covariant derivative of the metric \(\bigl(\nabla_\rho g_{\mu\nu}=0\bigr)\)
3. calculate a lightlike interval \(\bigl(ds^2=0\bigr)\)
4. vary the action of the internet \(\bigl(\delta S=0\bigr)\)
5. take the d'Alembertian of a massless field \(\bigl(\square\phi=0\bigr)\)
6. check the Bianchi identities \(\bigl(\nabla_{[\mu}F_{\nu\sigma]}=0\bigr)\ \text{or}\ \bigl(\nabla_{[\mu}R_{\nu\sigma]}=0\bigr)\)
feel free to add other/better ones if you'd like
Interface Issues¶ Background jobs¶
Yes, a Sage job can be run in the background on a UNIX system. The canonical thing to do is type
$ nohup sage < command_file > output_file &
The advantage of nohup is that Sage will continue running after you log out.
Currently Sage will appear as “sage-ipython” or “python” in the output of the (unix)
top command, but in future versions of Sage it will appear as
sage.
Referencing Sage¶
To reference Sage, please add the following to your bibliography:
\bibitem[Sage]{sage}Stein, William, \emph{Sage: {O}pen {S}ource {M}athematical {S}oftware ({V}ersion 2.10.2)}, The Sage~Group, 2008, {\tt http://www.sagemath.org}.
Here is the bibtex entry:
@manual{sage,
  Key = {Sage},
  Author = {William Stein},
  Organization = {The Sage~Group},
  Title = {{Sage}: {O}pen {S}ource {M}athematical {S}oftware ({V}ersion 2.10.2)},
  Note = {{\tt http://www.sagemath.org}},
  Year = 2008
}
If you happen to use the Sage interface to PARI, GAP or Singular, you should definitely reference them as well. Likewise, if you use code that is implemented using PARI, GAP, or Singular, reference the corresponding system (you can often tell from the documentation if PARI, GAP, or Singular is used in the implementation of a function).
For PARI, you may use
@manual{PARI2,
  organization = "{The PARI~Group}",
  title = "{PARI/GP, version {\tt 2.1.5}}",
  year = 2004,
  address = "Bordeaux",
  note = "available from \url{http://pari.math.u-bordeaux.fr/}"
}
or
\bibitem{PARI2} PARI/GP, version {\tt 2.1.5}, Bordeaux, 2004,\url{http://pari.math.u-bordeaux.fr/}.
(replace the version number by the one you used).
For GAP, you may use
[GAP04] The GAP Group, GAP -- Groups, Algorithms, and Programming, Version 4.4; 2005. (http://www.gap-system.org)
or
@manual{GAP4,
  key = "GAP",
  organization = "The GAP~Group",
  title = "{GAP -- Groups, Algorithms, and Programming, Version 4.4}",
  year = 2005,
  note = "{\tt http://www.gap-system.org}",
  keywords = "groups; *; gap; manual"
}
or
\bibitem[GAP]{GAP4} The GAP~Group, \emph{GAP -- Groups, Algorithms, and Programming, Version 4.4}; 2005, {\tt http://www.gap-system.org}.
For Singular, you may use
[GPS05] G.-M. Greuel, G. Pfister, and H. Sch\"onemann. {\sc Singular} 3.0. A Computer Algebra System for Polynomial Computations. Centre for Computer Algebra, University of Kaiserslautern (2005). {\tt http://www.singular.uni-kl.de}.
or
@TechReport{GPS05,
  author = {G.-M. Greuel and G. Pfister and H. Sch\"onemann},
  title = {{\sc Singular} 3.0},
  type = {{A Computer Algebra System for Polynomial Computations}},
  institution = {Centre for Computer Algebra},
  address = {University of Kaiserslautern},
  year = {2005},
  note = {{\tt http://www.singular.uni-kl.de}},
}
or
\bibitem[GPS05]{GPS05}G.-M.~Greuel, G.~Pfister, and H.~Sch\"onemann.\newblock {{\sc Singular} 3.0}. A Computer Algebra System for Polynomial Computations.\newblock Centre for Computer Algebra, University of Kaiserslautern (2005).\newblock {\tt http://www.singular.uni-kl.de}.
Logging your Sage session¶
Yes you can log your sessions.
(a) Modify line 186 of the .ipythonrc file (or open .ipythonrc into an editor and search for “logfile”). This will only log your input lines, not the output.
(b) You can also write the output to a file, by running Sage in the background ( Background jobs ).
(c) Start in a KDE konsole (this only works in Linux). Go to
Settings \(\rightarrow\)
History ... and select unlimited. Start your session. When ready, go to
Edit \(\rightarrow\)
Save history as ....
Some interfaces (such as the interface to Singular or that to GAP) allow you to create a log file. For Singular, there is a logfile option (in
singular.py). In GAP, use the command
LogTo.
LaTeX conversion¶
Yes, you can output some of your results into LaTeX.
sage: M = MatrixSpace(RealField(),3,3)
sage: A = M([1,2,3, 4,5,6, 7,8,9])
sage: print(latex(A))
\left(\begin{array}{rrr}
1.00000000000000 & 2.00000000000000 & 3.00000000000000 \\
4.00000000000000 & 5.00000000000000 & 6.00000000000000 \\
7.00000000000000 & 8.00000000000000 & 9.00000000000000
\end{array}\right)
sage: view(A)
At this point a dvi preview should automatically be called to display in a separate window the LaTeX output produced.
LaTeX previewing for multivariate polynomials and rational functions is also available:
sage: x = PolynomialRing(QQ,3, 'x').gens()
sage: f = x[0] + x[1] - 2*x[1]*x[2]
sage: h = f /(x[1] + x[2])
sage: print(latex(h))
\frac{-2 x_{1} x_{2} + x_{0} + x_{1}}{x_{1} + x_{2}}
Sage and other computer algebra systems¶
If
foo is a Pari, GAP (without ending semicolon), Singular, or Maxima command respectively, enter
gp("foo") for Pari,
gap.eval("foo") for GAP,
singular.eval("foo") for Singular, or
maxima("foo") for Maxima. These programs merely send the command string to the external program, execute it, and read the result back into Sage. Therefore, these will not work if the external program is not installed and in your PATH.
Command-line Sage help¶
If you know only part of the name of a Sage command and want to know where it occurs in Sage, a new option for 0.10.11 has been added to make it easier to hunt it down. Just type
sage -grep <string> to find all occurrences of
<string> in the Sage source code. For example,
$ sage -grep berlekamp_massey
matrix/all.py:from berlekamp_massey import berlekamp_massey
matrix/berlekamp_massey.py:def berlekamp_massey(a):
matrix/matrix.py:import berlekamp_massey
matrix/matrix.py:        g = berlekamp_massey.berlekamp_massey(cols[i].list())
Type
help(foo) or
foo?? for help and
foo.[tab] for searching of Sage commands. Type
help() for Python commands.
For example
help(Matrix)
returns
Help on function Matrix in module sage.matrix.constructor:

Matrix(R, nrows, ncols, entries = 0, sparse = False)
    Create a matrix.

    INPUT:
        R -- ring
        nrows -- int; number of rows
        ncols -- int; number of columns
        entries -- list; entries of the matrix
        sparse -- bool (default: False); whether or not to store matrices as sparse
    OUTPUT:
        a matrix

    EXAMPLES:
        sage: Matrix(RationalField(), 2, 2, [1,2,3,4])
        [1 2]
        [3 4]
        sage: Matrix(FiniteField(5), 2, 3, range(6))
        [0 1 2]
        [3 4 0]
        sage: Matrix(IntegerRing(), 10, 10, range(100)).parent()
        Full MatrixSpace of 10 by 10 dense matrices over Integer Ring
        sage: Matrix(IntegerRing(), 10, 10, range(100), sparse = True).parent()
        Full MatrixSpace of 10 by 10 sparse matrices over Integer Ring
in a new screen. Type q to return to the Sage screen.
Reading and importing files into Sage¶
A file imported into Sage must end in
.py, e.g.,
foo.py and contain legal Python syntax. For a simple example see Permutation groups with the Rubik’s cube group example above.
Another way to read a file in is to use the
load or
attach command. Create a file called
example.sage (located in the home directory of Sage) with the following content:
print("Hello World")
print(2^3)
Read in and execute
example.sage file using the
load command.
sage: load("example.sage")
Hello World
8
You can also
attach a Sage file to a running session:
sage: attach("example.sage")
Hello World
8
Now if you change
example.sage and enter one blank line into Sage, then the contents of
example.sage will be automatically reloaded into Sage:
sage: !emacs example.sage&   # change 2^3 to 2^4
sage: # hit return
***************************************************
Reloading 'example.sage'
***************************************************
Hello World
16
Installation for the impatient¶
We shall explain the basic steps for installing the most recent version of Sage (which is the “source” version, not the “binary”).
Download
sage-*.tar (where
* denotes the version number) from the website and save it into a directory, say
HOME. Type
tar zxvf sage-*.tar in
HOME. Then cd into
sage-* (we call this directory
SAGE_ROOT) and type
make. Now be patient, because this process may take 2 hours or so.
Python language program code for Sage commands¶
Let’s say you want to know what the Python program is for the Sage command to compute the center of a permutation group. Use Sage’s help interface to find the file name:
sage: PermutationGroup.center?
Type:           instancemethod
Base Class:     <type 'instancemethod'>
String Form:    <unbound method PermutationGroup.center>
Namespace:      Interactive
File:           /home/wdj/sage/local/lib/python2.4/site-packages/sage/groups/permgroup.py
Definition:     PermutationGroup.center(self)
Now you know that the command is located in the
permgroup.py file and you know the directory to look for that Python module. You can use an editor to read the code itself.
“Special functions” in Sage¶
Sage has many special functions (see the reference manual at http://doc.sagemath.org/html/en/reference/functions/), and most of them can be manipulated symbolically. Where this is not implemented, it is possible that other symbolic packages have the functionality.
Via Maxima, some symbolic manipulation is allowed:
sage: maxima.eval("f:bessel_y (v, w)")
'bessel_y(v,w)'
sage: maxima.eval("diff(f,w)")
'(bessel_y(v-1,w)-bessel_y(v+1,w))/2'
sage: maxima.eval("diff (jacobi_sn (u, m), u)")
'jacobi_cn(u,m)*jacobi_dn(u,m)'
sage: jsn = lambda x: jacobi("sn",x,1)
sage: P = plot(jsn,0,1, plot_points=20); Q = plot(lambda x:bessel_Y( 1, x), 1/2,1)
sage: show(P)
sage: show(Q)
In addition to
maxima,
pari and
octave also have special functions (in fact, some of
pari’s special functions are wrapped in Sage).
Here’s an example using Sage’s interface (located in sage/interfaces/octave.py) with
octave(http://www.octave.org/doc/index.html).
sage: octave("atanh(1.1)")  ## optional - octave
(1.52226,-1.5708)
Here’s an example using Sage’s interface to
pari’s special functions.
sage: pari('2+I').besselk(3)
0.0455907718407551 + 0.0289192946582081*I
sage: pari('2').besselk(3)
0.0615104584717420
What is Sage?¶
Sage is a framework for number theory, algebra, and geometry computation that is initially being designed for computing with elliptic curves and modular forms. The long-term goal is to make it much more generally useful for algebra, geometry, and number theory. It is open source and freely available under the terms of the GPL. The section titles in the reference manual gives a rough idea of the topics covered in Sage.
History of Sage
Sage was started by William Stein while at Harvard University in the Fall of 2004, with version 0.1 released in January of 2005. That version included Pari, but not GAP or Singular. Version 0.2 was released in March, version 0.3 in April, version 0.4 in July. During this time, support for Cremona’s database, multivariate polynomials and large finite fields was added. Also, more documentation was written. Version 0.5 beta was released in August, version 0.6 beta in September, and version 0.7 later that month. During this time, more support for vector spaces, rings, modular symbols, and windows users was added. As of 0.8, released in October 2005, Sage contained the full distribution of GAP, though some of the GAP databases have to be added separately, and Singular. Adding Singular was not easy, due to the difficulty of compiling Singular from source. Version 0.9 was released in November. This version went through 34 releases! As of version 0.9.34 (definitely by version 0.10.0), Maxima and clisp were included with Sage. Version 0.10.0 was released January 12, 2006. The release of Sage 1.0 was made early February, 2006. As of February 2008, the latest release is 2.10.2.
Many people have contributed significant code and other expertise, such as assistance in compiling on various OS’s. Generally code authors are acknowledged in the AUTHOR section of the Python docstring of their file and the credits section of the Sage website. |
Imagine we have a population and $Y$ is a summary of that population. Then $P(Y \in (y, y + \Delta y))$ is counting the proportion of individuals that have variable $Y$ in the range $(y, y + \Delta y)$. You can consider this as a "bin" of size $\Delta y$ and we are counting how many individuals are inside that bin.
Now let us re-express those individuals in terms of another variable, $X$. Given that we know that $Y$ and $X$ are related as $Y = X^2$, the event $Y\in (y, y + \Delta y)$ is the same as the event $X^2 \in (x^2, (x + \Delta x)^2)$ which is the same as the event $ X \in (|x|, |x| + \Delta x)~ \text{or}~ X \in (- |x| -\Delta x, -|x| )$. Thus, the individuals that are in the bin $(y, y + \Delta y)$ must also be in the bins $(|x|, |x| + \Delta x)$ and $ (- |x| -\Delta x, -|x| )$. In other words, those bins must have the same proportion of individuals,
\begin{align}P(Y \in (y, y + \Delta y)) &=P\left( X \in (|x|, |x| + \Delta x) \right) + P\left( X \in (- |x| -\Delta x, -|x| )\right)\end{align}
Ok, now let's get to the density. First, we need to define what a probability density is. As the name suggests, it is the proportion of individuals per unit of bin size. That is, we count the share of individuals in that bin and divide by the size of the bin. Since we have established that the proportions of individuals are the same here, but the sizes of the bins have changed, we conclude the densities will be different. But different by how much?
As we said, the probability density is the proportion of people in the bin divided by the size of the bin, thus the density of $Y$ is given by $f_Y(y):=\frac{P(Y \in (y, y + \Delta y))}{\Delta y}$. Analogously, the probability density of $X$ is given by $f_X(x):=\frac{P(X \in (x, x + \Delta x))}{\Delta x}$.
From our previous result that the population in each bin is the same we then have that,
\begin{align}f_Y(y):=\frac{P(Y \in (y, y + \Delta y))}{\Delta y} &= \frac{P\left( X \in (|x|, |x| + \Delta x) \right) + P\left( X \in (- |x| - \Delta x, -|x| )\right)}{\Delta y} \\&= \frac{f_X(|x|)\Delta x + f_{X}(-|x|)\Delta x}{\Delta y}\\&= \frac{\Delta x}{\Delta y} \left(f_X(|x|) + f_{X}(-|x|) \right)\\&= \frac{\Delta x}{\Delta y} \left(f_X(\sqrt{y}) + f_{X}(-\sqrt{y}) \right)\end{align}
That is, the density $f_X(\sqrt{y}) + f_{X}(-\sqrt{y})$ changes by the factor $\frac{\Delta x}{\Delta y}$, which is the relative size of stretching or squeezing the bin size. In our case, since $y = x^2$ we have that $y + \Delta y = (x + \Delta x )^2 = x^2 + 2x \Delta x + \Delta x^2$. If $\Delta x$ is tiny enough we can ignore $\Delta x ^2$, which implies $\Delta y = 2x \Delta x$ and $\frac{\Delta x}{\Delta y} = \frac{1}{2x} = \frac{1}{2 \sqrt{y}}$, and that is why the factor $\frac{1}{2 \sqrt{y}}$ shows up in the transformation. |
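The factor can also be seen numerically. Below is a small Monte Carlo check of the result (my own illustration, using NumPy, with $X$ taken to be standard normal): the empirical proportion per unit bin size agrees with $\left(f_X(\sqrt{y}) + f_X(-\sqrt{y})\right)/(2\sqrt{y})$.

```python
# Monte Carlo check: for X ~ N(0,1) and Y = X^2, the empirical density of Y
# in a small bin should match (f_X(sqrt(y)) + f_X(-sqrt(y))) / (2*sqrt(y)).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x**2

# empirical proportion of individuals in the bin (1.0, 1.1), per unit bin size
dy = 0.1
empirical = np.mean((y > 1.0) & (y < 1.0 + dy)) / dy

def f_X(t):
    # standard normal density
    return np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

yy = 1.05  # midpoint of the bin
theoretical = (f_X(np.sqrt(yy)) + f_X(-np.sqrt(yy))) / (2 * np.sqrt(yy))
print(empirical, theoretical)  # the two numbers should be close
```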
We have the Laplacian matrix $G=A^TA$, which has a set of eigenvalues $\lambda_0\leq\lambda_1\leq\ldots\leq \lambda_{n-1}$ for $G\in\mathbb{R}^{n\times n}$, where we always know $\lambda_0 = 0$. Thus the Laplacian matrix is always symmetric positive semi-definite. Because the matrix $G$ is not symmetric positive definite we have to be careful when we discuss the Cholesky decomposition. The Cholesky decomposition exists for a positive semi-definite matrix but it is no longer unique. For example, the positive semi-definite matrix $$ A=\left[\!\!\begin{array}{cc} 0 & 0 \\ 0 & 1\end{array}\!\!\right],$$has infinitely many Cholesky decompositions$$ A=\left[\!\begin{array}{cc} 0 & 0 \\ 0 & 1\end{array}\!\right]= \left[\!\begin{array}{cc} 0 & 0 \\ \sin\theta & \cos\theta\end{array}\!\right] \left[\!\begin{array}{cc} 0 & \sin\theta \\ 0 & \cos\theta\end{array}\!\right]=LL^T.$$
However, because we have a matrix $G$ that is known to be a Laplacian matrix we can actually avoid the more sophisticated linear algebra tools like Cholesky decompositions or finding the square root of the positive semi-definite matrix $G$ such that we recover $A$. For example, if we have the Laplace matrix $G\in\mathbb{R}^{4\times 4}$,$$G=\left[\!\begin{array}{cccc} 3 & -1 & -1 & -1\\-1 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ -1 & 0 & 0 & 1 \\\end{array}\!\right]$$we can use graph theory to recover the desired matrix $A$. We do so by formulating the oriented incidence matrix. If we define the number of edges in the graph to be $m$ and the number of vertices to be $n$ then the oriented incidence matrix $A$ will be an $m\times n$ matrix given by $$A_{ev} = \left\{\begin{array}{lc} 1 & \textrm{if }e=(v,w)\textrm{ and }v<w \\ -1 & \textrm{if }e=(v,w)\textrm{ and }v>w \\ 0 & \textrm{otherwise},\end{array}\right.$$where $e=(v,w)$ denotes the edge which connects the vertices $v$ and $w$. If we take a graph for $G$ with four vertices and three edges,then we have the oriented incidence matrix $$A = \left[\!\begin{array}{cccc} 1 & -1 & 0 & 0\\ 1 & 0 & -1 & 0 \\ 1 & 0 & 0 & -1 \\\end{array}\!\right],$$and we can find that $G=A^TA$. For the matrix problem you describe you would construct a graph for $G$ with the same number of edges as vertices, then you should have the ability to reconstruct the matrix $A$ when you are only given the Laplacian matrix $G$.
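The incidence-matrix factorization above is easy to verify numerically (a quick NumPy check of the star-graph example, added by me):

```python
# Verify that the oriented incidence matrix A of the star graph on four
# vertices reproduces the Laplacian via G = A^T A.
import numpy as np

A = np.array([[1, -1,  0,  0],
              [1,  0, -1,  0],
              [1,  0,  0, -1]])

G = np.array([[ 3, -1, -1, -1],
              [-1,  1,  0,  0],
              [-1,  0,  1,  0],
              [-1,  0,  0,  1]])

print(np.array_equal(A.T @ A, G))  # True
```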
Update:
If we define the diagonal matrix of vertex degrees of a graph as $N$ and the adjacency matrix of the graph as $M$, then the Laplacian matrix $G$ of the graph is defined by $G=N-M$. For example, in the following graph
we find the Laplacian matrix is$$G=\left[\!\begin{array}{cccc} 3 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\\end{array}\!\right] - \left[\!\begin{array}{cccc} 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\\end{array}\!\right].$$Now we relate the $G$ to the oriented incidence matrix $A$ using the edges and nodes given in the pictured graph. Again we find the entries of $A$ from $$A_{ev} = \left\{\begin{array}{lc} 1 & \textrm{if }e=(v,w)\textrm{ and }v<w \\ -1 & \textrm{if }e=(v,w)\textrm{ and }v>w \\ 0 & \textrm{otherwise},\end{array}\right..$$ For example, edge $e_1$ connects the nodes $v_1$ and $v_2$. So to determine $A_{e_1,v_1}$ we note that the index of $v_1$ is less than the index of $v_2$ (or we have the case $v<w$ in the definition of $A_{ev}$). Thus, $A_{e_1,v_1} = 1$. Similarly by the way of comparing indices we can find $A_{e_1,v_2} = -1$. We give $A$ below in a more explicit way referencing the edges and vertices pictured.$$A = \begin{array}{c|cccc} & v_1 & v_2 & v_3 & v_4 \\ \hline e_1 & 1 & -1 & 0 & 0\\ e_2 & 1 & 0 & -1 & 0 \\ e_3 & 1 & 0 & 0 & -1 \\\end{array}.$$
Next, we generalize the concept of the Laplacian matrix to a weighted undirected graph. Let $Gr$ be an undirected finite graph defined by $V$ and $E$ its vertex and edge set respectively. To consider a weighted graph we define a weight function $$w: V\times V\rightarrow \mathbb{R}^+,$$which assigns a non-negative real weight to each edge of the graph. We will denote the weight attached to edge connecting vertices $u$ and $v$ by $w(u,v)$. In the case of a weighted graph we define the degree of each vertex $u\in V$ as the sum of all the weighted edges connected to $u$, i.e.,$$d_u = \sum_{v\in V}w(u,v).$$From the given graph $Gr$ we can define the weighted adjacency matrix $Ad(Gr)$ as an $n\times n$ with rows and columns indexed by $V$ whose entries are given by $w(u,v)$. Let $D(Gr)$ be the diagonal matrix indexed by $V$ with the vertex degrees on the diagonal then we can find the weighted Laplacian matrix $G$ just as before$$G = D(Gr) - Ad(Gr).$$
In the problem from the original post we know $$G=\left[\!\begin{array}{ccc} \tfrac{3}{4} & -\tfrac{1}{3} & -\tfrac{5}{12} \\-\tfrac{1}{3} & \tfrac{2}{3} & -\tfrac{1}{3} \\ -\tfrac{5}{12} & -\tfrac{1}{3} & \tfrac{3}{4} \\\end{array}\!\right].$$ From the comments we know we seek a factorization for $G$ where $G=A^TA$ and specify $A$ is of the form $A=I-1_nw^T$ where $w^T1_n=1$. For full generality assume the matrix $A$ has no zero entries. Thus if we formulate the weighted oriented incidence matrix to find $A$ we want the weighted adjacency matrix $Ad(Gr)$ to have no zero entries as well, i.e., the weighted graph will have loops. Actually calculating the weighted oriented incidence matrix seems difficult (although it may simply be a result of my inexperience with weighted graphs). However, we can find a factorization of the form we seek in an ad hoc way if we assume we know something about the loops in our graph. We split the weighted Laplacian matrix $G$ into the degree and adjacency matrices as follows$$G=\left[\!\begin{array}{ccc} \tfrac{5}{4} & 0 & 0 \\0 & 1 & 0 \\ 0 & 0 & \tfrac{11}{12} \\\end{array}\!\right]-\left[\!\begin{array}{ccc} \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{5}{12} \\\tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \\ \tfrac{5}{12} & \tfrac{1}{3} & \tfrac{1}{6} \\\end{array}\!\right] = D(Gr)-Ad(Gr).$$
Thus we know the loops on $v_1$, $v_2$ and $v_3$ have weights $1/2$, $1/3$, and $1/6$ respectively. If we put the weights on the loops into a vector $w$ = $[\frac{1}{2}$ $\frac{1}{3}$ $\frac{1}{6}]^T$ then we can recover the matrix $A$ we want in the desired form$$A = I-1_nw^T = \left[\!\begin{array}{ccc} \tfrac{1}{2} & -\tfrac{1}{3} & -\tfrac{1}{6} \\-\tfrac{1}{2} & \tfrac{2}{3} & -\tfrac{1}{6} \\ -\tfrac{1}{2} & -\tfrac{1}{3} & \tfrac{5}{6} \\\end{array}\!\right].$$
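The ad hoc factorization checks out numerically (my own NumPy verification of the result above):

```python
# With w = [1/2, 1/3, 1/6], check that A = I - 1_n w^T satisfies A^T A = G.
import numpy as np

w = np.array([1/2, 1/3, 1/6])
A = np.eye(3) - np.outer(np.ones(3), w)

G = np.array([[ 3/4,  -1/3, -5/12],
              [-1/3,   2/3, -1/3 ],
              [-5/12, -1/3,  3/4 ]])

print(np.allclose(A.T @ A, G))  # True
```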
It appears if we have knowledge of the loops in our weighted graph we can find the matrix $A$ in the desired form. Again, this was done in an ad hoc manner (as I am not a graph theorist) so it may be a hack that worked just for this simple problem. |
Fractions and binomial coefficients are common mathematical elements with similar characteristics - one number goes on top of another. This article explains how to typeset them in LaTeX.
Contents
Using fractions and binomial coefficients in an expression is straightforward.
The binomial coefficient is defined by the next expression: \[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \]
For these commands to work you must import the package
amsmath by adding the next line to the preamble of your file \usepackage{amsmath}
The appearance of the fraction may change depending on the context
Fractions can be used alongside the text, for example \( \frac{1}{2} \), and in a mathematical display style like the one below: \[\frac{1}{2}\]
As you may have guessed, the command
\frac{1}{2} is the one that displays the fraction. The text inside the first pair of braces is the numerator and the text inside the second pair is the denominator.
Also, the text size of the fraction changes according to the text around it. You can set this manually if you want.
When displaying fractions in-line, for example \(\frac{3x}{2}\) you can set a different display style: \( \displaystyle \frac{3x}{2} \). This is also true the other way around \[ f(x)=\frac{P(x)}{Q(x)} \ \ \textrm{and} \ \ f(x)=\textstyle\frac{P(x)}{Q(x)} \]
The command
\displaystyle will format the fraction as if it were in mathematical display mode. On the other side,
\textstyle will change the style of the fraction as if it were part of the text.
The usage of fractions is quite flexible, they can be nested to obtain more complex expressions.
The fractions can be nested \[ \frac{1+\frac{a}{b}}{1+\frac{1}{1+\frac{1}{a}}} \] Now a wild example \[ a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cdots}}} \]
The second fraction displayed in the previous example uses the command
\cfrac{}{} provided by the package
amsmath (see the introduction), this command displays nested fractions without changing the size of the font. Specially useful for continued fractions.
Binomial coefficients are common elements in mathematical expressions, the command to display them in LaTeX is very similar to the one used for fractions.
The binomial coefficient is defined by the next expression: \[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \] And of course this command can be included in the normal text flow \(\binom{n}{k}\).
As you see, the command
\binom{}{} will print the binomial coefficient using the parameters passed inside the braces.
A slightly different and more complex example of continued fractions
Final example

\newcommand*{\contfrac}[2]{%
  {%
    \rlap{$\dfrac{1}{\phantom{#1}}$}%
    \genfrac{}{}{0pt}{0}{}{#1+#2}%
  }%
}
\[
a_0 + \contfrac{a_1}{
  \contfrac{a_2}{
    \contfrac{a_3}{
      \genfrac{}{}{0pt}{0}{}{\ddots}
    }}}
\]
For more information see |
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communism, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing is favor "Communism" who distance themselves from, say the USSR and red China, and people who arguing in favor of "Capitalism" who distance themselves from, say the US and the Europe Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waiving, sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic? |
Could you please give at least a hint to prove this? I really don't know how to start.
Let $A$ be the matrix
$A= \left( \begin{array}{cc} A_{11} & A_{12} \\ A_{21} & A_{22} \end{array} \right)$,
where $A_{11}$ and $A_{22}$ are square matrices of order $k$ and $m$ respectively. Prove that for any two matrices $D$ of order $k \times k$ and $B$ of order $m \times k$ we have
$\left| \begin{array}{cc} D \cdot A_{11} & D \cdot A_{12} \\ A_{21} & A_{22} \end{array} \right| = |D|\cdot |A|$,
and
$\left| \begin{array}{cc} A_{11} & A_{12} \\ A_{21}+B \cdot A_{11} & A_{22}+B \cdot A_{12} \end{array} \right|=|A|.$
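A numerical spot-check of both identities can help confirm you are reading them correctly before attempting a proof (random matrices in NumPy; this is evidence, not a proof, and the code is my own addition):

```python
# Spot-check det([[D*A11, D*A12], [A21, A22]]) = det(D)*det(A) and the
# row-operation identity with B, using random block matrices.
import numpy as np

rng = np.random.default_rng(1)
k, m = 2, 3
A11, A12 = rng.random((k, k)), rng.random((k, m))
A21, A22 = rng.random((m, k)), rng.random((m, m))
A = np.block([[A11, A12], [A21, A22]])

D = rng.random((k, k))
B = rng.random((m, k))

left1 = np.linalg.det(np.block([[D @ A11, D @ A12], [A21, A22]]))
print(np.isclose(left1, np.linalg.det(D) * np.linalg.det(A)))  # True

left2 = np.linalg.det(np.block([[A11, A12],
                                [A21 + B @ A11, A22 + B @ A12]]))
print(np.isclose(left2, np.linalg.det(A)))  # True
```

(The first identity suggests writing the left-hand matrix as a product with $\mathrm{diag}(D, I)$; the second as a product with a block unitriangular matrix.)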
General Relativity and Quantum Cosmology

Title: Inflationary Phenomenology of Einstein Gauss-Bonnet Gravity Compatible with GW170817
(Submitted on 20 Aug 2019)
Abstract: In this work we shall study Einstein Gauss-Bonnet theories and we investigate when these can have their gravitational wave speed equal to the speed of light, which is unity in natural units, thus becoming compatible with the striking event GW170817. We demonstrate how this is possible and we show that if the scalar coupling to the Gauss-Bonnet invariant is constrained to satisfy a differential equation, the gravitational wave speed becomes equal to one. Accordingly, we investigate the inflationary phenomenology of the resulting restricted Einstein Gauss-Bonnet model, by assuming that the slow-roll conditions hold true. As we demonstrate, the compatibility with the observational data coming from the Planck 2018 collaboration, can be achieved, even for a power-law potential. We restricted ourselves to the study of the power-law potential, due to the lack of analyticity, however more realistic potentials can be used, in this case though the calculations are not easy to be performed analytically. We also pointed out that a string-corrected extension of the Einstein Gauss-Bonnet model we studied, containing terms of the form $\sim \xi(\phi) G^{ab}\partial_a\phi \partial_b \phi $ can also provide a theory with gravitational wave speed $c_T^2=1$ in natural units, if the function $\xi(\phi)$ is appropriately constrained, however in the absence of the Gauss-Bonnet term $\sim \xi(\phi) \mathcal{G}$ the gravitational wave speed can never be $c_T^2=1$. Finally, we discuss which extensions of the above models can provide interesting cosmologies, since any combination of $f(R,X,\phi)$ gravities with the above string-corrected Einstein Gauss-Bonnet models can yield $c_T^2=1$, with $X=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi $.

Submission history
From: Vasilis Oikonomou [view email]
[v1] Tue, 20 Aug 2019 18:14:01 GMT (15kb)
To those who have given NSEP, how much are you expecting?and what is the likely MAS this year?
Note by Mvs Saketh 4 years, 10 months ago
I am expecting between 110 and 120, Maharashtra. What about you? I think the cutoff should drop this year
Which cutoff do you mean , the minimum score or "u can give inpho" score?
u can give inpho score. What were your marks ?
@Rohit Shah – Oh well I am expecting 160-170, West Bengal. The potentiometer questions killed me. But I saw on the HBCSE website that the "u can give inpho" score is 80% of the average of the top ten scorers... and it is not state dependent this time, and considering that the top ten people would certainly be like 90%, shouldn't the cutoff be higher? (Though I do agree that this year they have inserted too many experiment questions, made multiple correct choice too easy and the rest of the paper lengthy, so the cutoff should fall; the pattern of the paper changed considerably.)
@Mvs Saketh – Just checked the new rules last years toppers were around 200 so say rank 10 is about 190 then avg would be 195. Then you would be above MI if you score above 156 and above MAS would be 97. But this year paper was worse so expecting MI to be 150 and MAS to be 90. What do u have to say ? How much did other level 5 ers score ? Can somebody start a post for nsec @Krishna Ar
@Rohit Shah – Well last year paper was too easy, and it is likely that even the toppers dont give so much of a shit about potentiometer and time was a critical factor.. so yes MAS should fall below 150 and around 145 maybe. I do know others score (expected) but we can draw a better comparision after knowing Ronaks score i suppose ..... @Ronak Agarwal
@Mvs Saketh – Hi Saketh.
This paper includes the syllabus of both class 11 and 12. Is there any Physics Olympiad for class 11 students (which includes only class 11 syllabus)?
@Satvik Pandey – I am sorry but there isnt, the only two olympiads are nsep and apho which have both 11 and 12 stuff... however you can have fun by solving only the 11 th problems from olympiad papers available online.
@Mvs Saketh – Thanks Saketh.
I will surely take part in this exam after two years(when I will be in class 12). :D
@Satvik Pandey – Hey Satvik, can you give me the link to the rules and selection procedure about NSEP?
@Karthik Sharma – Hi Kartik. I don't have much knowledge about NESP. I came to know about this exam from this note. So I just searched about it on google and found previous year paper of that. Other than that I don't have any information about this exam. :)
@Mvs Saketh – Sorry to comment very late but I have really messed up my paper and I am getting only 120 in this paper. @Mvs Saketh
@Mvs Saketh – What your score Mvs Saketh.
@Mvs Saketh – MAS is 89, MI is 144.
@Tushar Gopalka – i am trying to look at the results page, but it is not loading, btw, how much you got @Tushar Gopalka ? thanks for informing btw,
@Mvs Saketh – Tell me your centre code and roll number I will tell your marks.
@Ronak Agarwal – already seen, how much you got?
@Mvs Saketh – 119. I really don't like this state representation clause; there are only 17 selections from Rajasthan whereas there are 55 selections from Uttar Pradesh. I saw the result of Uttar Pradesh, they are getting too low, the cutoff is unbelievably low, and that thing really makes me unhappy. The cutoff for Rajasthan for the second stage is 128 out of 240.
@Ronak Agarwal – true that,, some states have too many selections, but this time i dont think theres an individual state cutoff,, also can you please tell me west bengals quota? and are you getting selected (means within the top 17)
Although its true that state division is unfair, we cant really blame them bro, we should have prepared better and performed an all round preparation to cross the MI , let us just consider it a lesson, whether we get selected or not, cause even if we do get selected(which is unlikely for me as i got state rank 10), it will be more because of others losing rather than us winning
@Mvs Saketh – I am blaming myself,(For not preparing for time constraints) but still it's a once in a lifetime oppurtunity and it is very sad to think that I am not getting selected even though I deserved to do so.
I have prepared for tough questions but not for doing easy questions in a given time. Just imagine a paper with sufficiently hard subjective problems, and we would have had a greater chance of getting selected. Why are children from UP getting the privilege of being selected with extremely low marks, what wrong have we done? I only think one thing and that is I deserved to be selected against people getting lower marks than me and getting selected.
I missed the rajasthan cutoff by 9 marks.
@Ronak Agarwal – i do agree bro, i wanted this oppurtunity too, i wanted INPHO too,, this oppurtunity had really great career oppurtunities in physics and i wanted it,, and now i failed , because of silly mistakes, because of not being fast enough, , . imagine the loss, and imagine the humiliation i am presently feeling,, because even if by any remote chance i qualify, it will be because of state quota and not my skill. i understand it,,, and yes if there were sufficiently hard subjective questions,, maybe i would have done better, if there were lesser problems, or harder but less about practical so much, and stuff like thermopiles and shit,,, But then there are people in AP who have really rocked the score,, they faced the same paper too, so it is my deficiency ,
as for you, we all know your skill , anyone who has seen your challenges in mechanics will know that you got extreme aptitude, and yes i understand your loss,, well i can only say that you buckle up for IITJEE in which you have a really good chance considering your maths and physics skills,, and a single exam cant determine what you are or where you gonna be,, just keep going, bro, and bigger oppurtunities will come, The world needs brains like yours, and its talent will be recognised sooner or later :)
@Mvs Saketh – Yes, you will be selected most probably due to quota, me too but due to quota..No merit..
@Ronak Agarwal – How do you know the cut-off...I mean is quota system true..that is I need to be above MAS score and be within top 19 in WEST BENGAL and I get selected?
@Tushar Gopalka – He means that , if a state has 19 as selection quota, then the 19th dudes score can be called as the cutoff
@Mvs Saketh – Then, we get selected, right. You and me from WB?
@Tushar Gopalka – maybe, if what we think is the rules is actually the rules, based on what i read in the hsbsc site, it seems so,, that they will take 19 students from WB minimum, and since only two crossed MI , so other than them,, they gonna choose 17, and since we are within the top 17, we should get selected,, but lets just wait till 30 th december to be sure
@Mvs Saketh – No HBCSE has also explained these criteria in detail, what happens is that for example state quota is 25 and there are 10 students scoring above MI then only 15 further selections are to be made , but if in a state, state quota is 25 and there are 35 people scoring above MI then all of them are going to be selected and in that case no further selections are to be made.
@Ronak Agarwal – Yes thats what i said,,19 is state quota and only 2 are above MI, so 17 selections are to be made
@Tushar Gopalka – If you score above MI score that is 144 for physics then you are automatically getting selected.
@Mvs Saketh – I have the stats (referenced from the list of students scoring above MAS) ,
There are only 44 students from UP scoring above MAS score with highest score 128 and lowest score 89(of course since it's the MAS) and if we believe the Proportional representation clause all of them are getting selected just imagine a student scoring 89 is getting selected for second stage of olympiad !!!
Also think about students in Andhra Pradesh: with students scoring exceptionally high, their state cutoff is coming out to be 150. Imagine you have to score 150!! to get selected for the second stage (9 out of the top 10 are from Andhra Pradesh). You may be wondering how I got the ranks and cutoffs.
What I actually did is to download the PDF files and converted them into excel spreadsheet and sorted them to get the rank list and cutoff list.
There is also another thing to consider: Kota has become a hub for preparation for IIT-JEE and other similar competitive entrance exams, hence people not from Rajasthan also come to Kota and give the first stage physics olympiad exam there. The state quotas are decided on the basis of population, but these things are not taken into consideration.
So what are your views on state quota for selection in physics olympiad. @Mvs Saketh
@Mvs Saketh – I don't know why it is showing 124, I will qualify as west Bengal quota is 19. How much is your score?
@Tushar Gopalka – 117, wait quota is 19? are you sure, last time only 6 students were selected right?
@Mvs Saketh – Yes, in IAPT page, you can see that West bengal quota is 19 for this year....
@Tushar Gopalka – oh thanks, so there is some hope for me right, rank 10?
@Tushar Gopalka – yes i have seen it,, i crossed MAS but still pretty behind to hope for selection,, i have lost it, best of luck if you have got selected,
I am getting 146 according to the official solutions by NSEP
Could anyone tell the expected cutoff for inpho 2015
would it cross 52?
Guys how did you solve that moment of inertia question (Q7 in Q.P. Code-P160), please help me out by posting a detailed solution.
The Question paper is available here- NSEP 2014-15 Question Paper
whos paper is that ?
I don't know.
I am expecting 150 in NSEP and belongs to Jharkhand, what's my probability
Problem Loading...
Note Loading...
Set Loading... |
Another method, not covered by the answers above, is
finite automaton transformation. As a simple example, let us show that the regular languages are closed under the shuffle operation, defined as follows:$$L_1 \mathop{S} L_2 = \{ x_1y_1 \ldots x_n y_n \in \Sigma^* : x_1 \ldots x_n \in L_1, y_1 \ldots y_n \in L_2 \}$$You can show closure under shuffle using closure properties, but you can also show it directly using DFAs. Suppose that $A_i = \langle \Sigma, Q_i, F_i, \delta_i, q_{0i} \rangle$ is a DFA that accepts $L_i$ (for $i=1,2$). We construct a new DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$ as follows: The set of states is $Q_1 \times Q_2 \times \{1,2\}$, where the third component remembers whether the next symbol is an $x_i$ (when 1) or a $y_i$ (when 2). The initial state is $q_0 = \langle q_{01}, q_{02}, 1 \rangle$. The accepting states are $F = F_1 \times F_2 \times \{1\}$. The transition function is defined by $\delta(\langle q_1, q_2, 1 \rangle, \sigma) = \langle \delta_1(q_1,\sigma), q_2, 2 \rangle$ and $\delta(\langle q_1, q_2, 2 \rangle, \sigma) = \langle q_1, \delta_2(q_2,\sigma), 1 \rangle$.
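Here is a small Python sketch of this product construction, reading each $x_i, y_i$ as a single symbol (the "perfect shuffle"); the two example DFAs, for $a^*$ and $b^*$, are my own and not part of the answer.

```python
# Product construction for the shuffle: state set Q1 x Q2 x {1, 2}, where
# the third component tracks whose turn it is to supply the next symbol.
def shuffle_dfa(d1, d2):
    """Combine two DFAs given as (states, start, accepting, delta),
    with delta a dict mapping (state, symbol) -> state."""
    Q1, q01, F1, delta1 = d1
    Q2, q02, F2, delta2 = d2
    start = (q01, q02, 1)
    accepting = {(q1, q2, 1) for q1 in F1 for q2 in F2}
    def delta(state, sym):
        q1, q2, turn = state
        if turn == 1:                       # this symbol belongs to L1's word
            return (delta1[(q1, sym)], q2, 2)
        return (q1, delta2[(q2, sym)], 1)   # this symbol belongs to L2's word
    return start, accepting, delta

def accepts(dfa, word):
    start, accepting, delta = dfa
    state = start
    for sym in word:
        state = delta(state, sym)
    return state in accepting

# A1 accepts a*, A2 accepts b* (state 1 is a sink); their shuffle is (ab)*.
A1 = ({0, 1}, 0, {0}, {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1})
A2 = ({0, 1}, 0, {0}, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1})
S = shuffle_dfa(A1, A2)
print([w for w in ["", "ab", "abab", "a", "ba", "aabb"] if accepts(S, w)])
# → ['', 'ab', 'abab']
```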
A more sophisticated version of this method involves
guessing. As an example, let us show that regular languages are closed under reversal, that is,$$ L^R = \{ w^R : w \in L \}. $$(Here $(w_1\ldots w_n)^R = w_n \ldots w_1$.) This is one of the standard closure operations, and closure under reversal easily follows from manipulation of regular expressions (which may be regarded as the counterpart of finite automaton transformation to regular expressions) – just reverse the regular expression. But you can also prove closure using NFAs. Suppose that $L$ is accepted by a DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$. We construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$, where The set of states is $Q' = Q \cup \{q'_0\}$. The initial state is $q'_0$. The unique accepting state is $q_0$. The transition function is defined as follows: $\delta'(q'_0,\epsilon) = F$, and for any state $q \in Q$ and $\sigma \in \Sigma$, $\delta'(q', \sigma) = \{ q : \delta(q,\sigma) = q' \}$.
(We can get rid of $q'_0$ if we allow multiple initial states.) The guessing component here is the final state of the word after reversal.
Guessing is often combined with verifying. One simple example is closure under
rotation:$$ R(L) = \{ yx \in \Sigma^* : xy \in L \}. $$Suppose that $L$ is accepted by the DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$. We construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$, which operates as follows. The NFA first guesses $q=\delta(q_0,x)$. It then verifies that $\delta(q,y) \in F$ and that $\delta(q_0,x) = q$, moving from $y$ to $x$ non-deterministically. This can be formalized as follows: The states are $Q' = \{q'_0\} \cup Q \times Q \times \{1,2\}$. Apart from the initial state $q'_0$, the states are $\langle q,q_{curr}, s \rangle$, where $q$ is the state that we guessed, $q_{curr}$ is the current state, and $s$ specifies whether we are at the $y$ part of the input (when 1) or at the $x$ part of the input (when 2). The final states are $F' = \{\langle q,q,2 \rangle : q \in Q\}$: we accept when $\delta(q_0,x)=q$. The transitions $\delta'(q'_0,\epsilon) = \{\langle q,q,1 \rangle : q \in Q\}$ implement guessing $q$. The transitions $\delta'(\langle q,q_{curr},s \rangle, \sigma) = \langle q,\delta(q_{curr},\sigma),s \rangle$ (for every $q,q_{curr} \in Q$ and $s \in \{1,2\}$) simulate the original DFA. The transitions $\delta'(\langle q,q_f,1 \rangle, \epsilon) = \langle q,q_0,2 \rangle$, for every $q \in Q$ and $q_f \in F$, implement moving from the $y$ part to the $x$ part. This is only allowed if we've reached a final state on the $y$ part.
Another variant of the technique incorporates bounded counters. As an example, let us consider closure under bounded edit distance:$$ E_k(L) = \{ x \in \Sigma^* : \text{ there exists $y \in L$ whose edit distance from $x$ is at most $k$} \}. $$Given a DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$ for $L$, we construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$ for $E_k(L)$ as follows: The set of states is $Q' = Q \times \{0,\ldots,k\}$, where the second component counts the number of changes made so far. The initial state is $q'_0 = \langle q_0,0 \rangle$. The accepting states are $F' = F \times \{0,\ldots,k\}$. For every $q,\sigma,i$ we have transitions $\langle \delta(q,\sigma), i \rangle \in \delta'(\langle q,i \rangle, \sigma)$. Insertions are handled by transitions $\langle q,i+1 \rangle \in \delta'(\langle q,i \rangle, \sigma)$ for all $q,\sigma,i$ such that $i < k$. Deletions are handled by transitions $\langle \delta(q,\sigma), i+1 \rangle \in \delta'(\langle q,i \rangle, \epsilon)$ for all $q,\sigma,i$ such that $i < k$. Substitutions are similarly handled by transitions $\langle \delta(q,\sigma), i+1 \rangle \in \delta'(\langle q,i \rangle, \tau)$ for all $q,\sigma,\tau,i$ such that $i < k$.
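This counter construction can be simulated by tracking configurations $(q,i)$, with the deletion $\epsilon$-moves handled by a closure step; the toy DFA below, accepting only the word aba, is an arbitrary illustration:

```python
def edit_nfa(delta, q0, F, alphabet, k):
    """Simulate the counter NFA for E_k(L): configurations (q, i) pair a
    DFA state with the number of edits spent; deletions are epsilon-moves."""
    def closure(configs):
        stack, seen = list(configs), set(configs)
        while stack:
            q, i = stack.pop()
            if i < k:                       # deletion: DFA reads a guessed symbol
                for s in alphabet:
                    c = (delta[(q, s)], i + 1)
                    if c not in seen:
                        seen.add(c)
                        stack.append(c)
        return seen
    def accepts(x):
        cur = closure({(q0, 0)})
        for t in x:
            nxt = set()
            for q, i in cur:
                nxt.add((delta[(q, t)], i))              # exact match, no edit
                if i < k:
                    nxt.add((q, i + 1))                  # insertion: skip t
                    for s in alphabet:
                        nxt.add((delta[(q, s)], i + 1))  # substitution
            cur = closure(nxt)
        return any(q in F for q, i in cur)
    return accepts

# toy DFA accepting exactly "aba" (state 4 is the dead state)
delta = {(0, 'a'): 1, (0, 'b'): 4, (1, 'a'): 4, (1, 'b'): 2,
         (2, 'a'): 3, (2, 'b'): 4, (3, 'a'): 4, (3, 'b'): 4,
         (4, 'a'): 4, (4, 'b'): 4}
exact = edit_nfa(delta, 0, {3}, 'ab', 0)
within1 = edit_nfa(delta, 0, {3}, 'ab', 1)
```

Note that the closure must also be applied after the last input symbol, since trailing deletions are $\epsilon$-moves.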
generation of certain matrices
I'd like to create a list of roughly 100-1000 $2 \times 2$ matrices $[A_1,A_2,...,A_N]$ that have the following properties:
$\det A_j = 1$
The entries of each $A_j$ are in an imaginary quadratic integer ring, such as $\mathbb{Z}[i]$, or $\mathbb{Z}[\sqrt{-2}]$
For example, the matrix $$\begin{bmatrix} 1&2i \\ 0&1 \end{bmatrix}$$ fits the above specifications when the ring is $\mathbb{Z}[i]$.
I know that I probably want to run some kind of loop over the entries of the matrix, but I'm not sure how to do this. Perhaps I want to initially treat the matrices as lists of length 4, and then run an iterative loop over the lists. Then, when the above specifications are met, that list is stored somewhere else. I think I'd also like to put a bound on the "size" of the matrix entries, but that should be easy to do afterwards.
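For what it's worth, here is the kind of brute-force loop I have in mind, sketched in Python rather than Mathematica (entries of $\mathbb{Z}[i]$ are represented as Python complex numbers with integer parts, and the bound on the size of the real and imaginary parts is an arbitrary choice):

```python
from itertools import product

def gaussian_det1(bound):
    """All 2x2 matrices over Z[i] with determinant 1 whose entries have
    real and imaginary parts bounded by `bound` in absolute value."""
    rng = range(-bound, bound + 1)
    entries = [complex(x, y) for x in rng for y in rng]
    found = []
    for a, b, c, d in product(entries, repeat=4):
        if a * d - b * c == 1:              # det = ad - bc over Z[i]
            found.append(((a, b), (c, d)))
    return found

mats = gaussian_det1(1)
```

For $\mathbb{Z}[\sqrt{-2}]$ one would instead store pairs $(x, y)$ for $x + y\sqrt{-2}$ and multiply them symbolically, since floating-point complex arithmetic no longer stays exact there.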
Thanks! |
Blue-sky catastrophe
Andrey Shilnikov and Dmitry Turaev (2007), Scholarpedia, 2(8):1889. doi:10.4249/scholarpedia.1889 revision #137318 [link to/cite this article]
This stunning name has been given to the last of the seven known main bifurcations of a periodic orbit. While the first six bifurcations have been known for almost 70 years [Andronov and Leontovich 1937, Andronov et al. 1966], the
blue-sky catastrophe (see Figure 1) has been discovered and studied quite recently [Turaev and L. Shilnikov 1995, 1996; Gavrilov and A. Shilnikov 2000; L. Shilnikov et al., 2001, A. Shilnikov et al. 2005].
Codimension-one Bifurcations
The loss of stability or disappearance of a periodic orbit corresponds to a certain bifurcation: the main stability boundaries correspond to bifurcations of codimension 1 (i.e. those that occur in one-parameter families of the general position). For systems on a plane, there are four such stability boundaries, all discovered and described by Leontovich and Andronov. These are also the
existence boundaries, i.e. the periodic orbit disappears at the bifurcation moment or immediately after it. Namely, the periodic orbit either 1. collapses into an equilibrium state through a supercritical Andronov-Hopf bifurcation; or 2. collides with an unstable periodic orbit (acquiring a multiplier equal to +1) and vanishes; or 3. becomes a homoclinic loop to a saddle equilibrium state; or 4. transforms into a homoclinic loop of a saddle-node equilibrium state.
Higher-dimensional systems add two more possibilities, where the periodic orbit no longer disappears at the bifurcation but only loses its stability via:
5. a period-doubling (flip) bifurcation, where a multiplier of the orbit decreases through -1; the stability of the original orbit is inherited by an orbit of doubled period; or 6. a secondary Andronov-Hopf bifurcation, where a pair of complex-conjugate multipliers \( e^{\pm i\phi}\ ,\) with \(\phi \neq 0, \pi/2, 2\pi/3, \pi\ ,\) of the periodic orbit crosses the unit circle outwards, and the periodic orbit, as Andronov said, "loses its skin", which becomes a two-dimensional invariant torus.
One can also classify these codimension-1 bifurcations by how the period and length of the orbit depend on the control parameter, \(\mu\ ,\) as it approaches a finite bifurcation value \(\mu^+_0\ .\)
Group I: finite period & zero length. The first group consists of a single bifurcation, the Andronov-Hopf bifurcation, at which a periodic orbit collapses into an equilibrium state with a pair of purely imaginary characteristic exponents \(\pm i \omega\ ,\) giving the estimate \(T \sim 2 \pi /\omega\) for its period.
Group II: finite period & finite length. The second group includes the local saddle-node and flip (period-doubling) bifurcations, as well as the secondary Andronov-Hopf bifurcation (numbers 2, 5 and 6 in the above list). It is worth noticing that the periodic orbit persists at \(\mu=\mu^{+}_{0}\) for the boundaries of Group II.
Group III: infinite period & finite length. These are the feature of homoclinic bifurcations of equilibria (cases 3 and 4 above). Moreover, the period of the orbit increases as \(1/\sqrt{\mu-\mu_0}\) as the orbit approaches a homoclinic orbit to a saddle-node equilibrium state (one zero exponent), or as \(-\ln(\mu-\mu_0)\) in the case of a simple saddle.
Group IV: infinite period & infinite length. This is the blue sky catastrophe.
Historical note
The question of whether a periodic orbit can remain in a bounded region of the phase space while its period and length increase without bound as it approaches its existence boundary was raised by Palis and Pugh [1974]. The problem was code-named the "blue sky catastrophe" [Abraham, 1985], as the orbit, while getting longer and longer, would virtually vanish into the sky. The first examples of such one-parameter families of periodic orbits were suggested by Medvedev [1980]. However, the Medvedev families are not in general position. As the analysis in [Afraimovich and L. Shilnikov, 1982; Turaev and L. Shilnikov, 1986; Li and Zhang, 1991] showed, the generic version of one of the Medvedev examples (on a Klein bottle) gave a new existence boundary for periodic orbits, on approaching which the orbit changes its stability infinitely many times in a sequence of forward and backward flip bifurcations.
The question of whether the periodic orbit can disappear into the blue sky without losing stability en route remained open until it was solved positively by L. Shilnikov and Turaev [1995; 2000], who found the following configuration in \(\mathbb{R}^3\) and higher dimensions: its core is the way the two-dimensional unstable manifold of a saddle-node periodic orbit returns to the orbit from the stable (node) region, where it makes infinitely many revolutions while approaching the saddle-node, as shown in Figure 1 above. The second component of this configuration is the strong transverse contraction along the homoclinic connection: it ensures that the closure becomes an arbitrarily long stable periodic orbit (of period and length both evaluated as \(1/\sqrt{\mu-\mu_0}\)) after the saddle-node orbit has vanished.
In other cases, the closure of the unstable manifold of the saddle-node periodic orbit can be a two-dimensional torus (which corresponds to the border of a synchronization zone, or Arnold tongue) or a Klein bottle (as in the Medvedev example); alternatively, the unstable manifold may come back crossing the strongly stable manifold \(W^{ss}\) of the saddle-node orbit transversely, which leads to chaotic shift dynamics [Lukyanov and L. Shilnikov, 1978; A. Shilnikov et al., 2005] (see Lukyanov-Shilnikov Bifurcation). Notably, a saddle-node bifurcation in \(\mathbb{R}^4\) can even lead to the emergence of a hyperbolic strange attractor (the Smale-Williams solenoid) under a few simple conditions on the shape of \(W^u\) as it returns to the node region [L. Shilnikov and Turaev, 2000].
Applications
The first example of the specific equations undergoing the catastrophe was given by N. Gavrilov and A. Shilnikov [Gavrilov and Shilnikov, 2000; L. Shilnikov et al., 2001]: \[\tag{1} \begin{array}{rcl} \dot x &=& x(2+\mu -10(x^{2}+y^{2})) +z^{2}+y^{2}+2y,\\ \dot y &=& -z^{3}-(1+y)(z^{2}+y^{2}+2y) -4x +\mu y,\\ \dot z &=& (1+y)z^{2}+x^{2}-\varepsilon, \end{array} \ .\]
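For a numerical look at system (1), one can integrate it with a classical fourth-order Runge-Kutta scheme. In the sketch below the parameter values, initial condition and step size are illustrative assumptions only, not values taken from the cited studies:

```python
import math

MU, EPS = 0.456, 0.0357   # illustrative parameter values (assumptions)

def f(state, mu=MU, eps=EPS):
    """Right-hand side of the Gavrilov-Shilnikov system (1)."""
    x, y, z = state
    dx = x * (2 + mu - 10 * (x * x + y * y)) + z * z + y * y + 2 * y
    dy = -z ** 3 - (1 + y) * (z * z + y * y + 2 * y) - 4 * x + mu * y
    dz = (1 + y) * z * z + x * x - eps
    return (dx, dy, dz)

def rk4(state, dt, steps):
    """Classical 4th-order Runge-Kutta integration of x' = f(x)."""
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(s + dt * (a + 2 * b + 2 * c + d) / 6
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

end = rk4((0.1, 0.1, 0.1), 0.001, 1000)   # integrate for one time unit
```

Note the sanity check at the origin: there the right-hand side reduces to \((0, 0, -\varepsilon)\), consistent with the equations above.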
The early development of the blue sky catastrophe in this system begins with a homoclinic connection to an equilibrium state with the characteristic exponents (\(0,\pm i \omega\)); this is a codimension-2 bifurcation known as the Gavrilov-Guckenheimer bifurcation, or the homoclinic fold-Hopf bifurcation.
The blue sky catastrophe has turned out to be a typical phenomenon in slow-fast systems [L. Shilnikov et al., 2001; A. Shilnikov et al., 2005]. The dynamics of such a system are known to center around the attracting segments of the slow motion manifolds, which are formed by the limit sets, such as equilibria (labeled \(M_{eq}\)) and periodic orbits (\(M_{po}\)), of its fast subsystem (see the corresponding sketch). The blue sky catastrophe occurs here when a saddle-node orbit emerges on the manifold \(M_{po}\ ,\) shutting the passage along it for the solutions of the system. The stability of the blue sky orbit is due to the contraction across the manifold \(M_{eq}\ ,\) which is composed of the stable equilibrium states of the fast subsystem.
In slow-fast Hodgkin-Huxley models of computational neuroscience the blue sky catastrophe describes a continuous and reversible transition between periodic bursting and tonic spiking activities, for example, in a reduced oscillatory heart interneuron model [A. Shilnikov and Cymbalyuk, 2005]: \[\tag{2} \mathrm{\dot V} = \mathrm{-2\,[30\, m^2_{K2} (V+0.07)+8\,(V+0.046)}+ \mathrm{200\, f^3_{\infty}(-150,\,0.0305\,,V) h_{Na}\,(V-0.045)}+0.0060]\ ,\]
\(\mathrm{\dot h_{Na}} = \mathrm{[f_{\infty}(500,\,0.0325,\,V)-h_{Na}]/0.0406}\ ,\)
\[\mathrm{\dot m_{K2}} =\mathrm{[f_{\infty}(-83,V_{\frac{1}{2}}+V_{K2}^{shift},V)-m_{K2}]/0.9}\ ,\]
where \(\mathrm{V}\) is the membrane potential, \(\mathrm{h}_{\rm Na}\) is inactivation of the fast sodium current, and \(\mathrm{m}_{\rm K2}\) is activation of the persistent potassium current; a Boltzmann function \(\mathrm{f_{\infty}(a,b,V)=1/(1+e^{a(b+V)})}\) describes the kinetics of (in)activation of the currents. The bifurcation parameter \(\mathrm{V^{shift}_{K2}}\) is a deviation from the canonical value \(\mathrm{V_{\frac{1}{2}}}=0.018\)V corresponding to \(f_{\infty}=1/2\ ,\) i.e. to the semi-activated potassium channel. The blue sky catastrophe occurs in the model near \(\mathrm{V^{shift}_{K2}}=-0.02425\) (Figure 14).
It is worth noticing that since the blue sky catastrophe is locally based on the saddle-node bifurcation, the period of the bursting orbit obeys the law of \(1/\sqrt{\mu-\mu_0}\ .\) This means that the slow component of the phase point slows down near the phantom of the vanished saddle-node, thereby allowing the bursting orbit to absorb arbitrarily many new spikes one by one (Figure 16) as the bifurcation parameter approaches the transition value.
References
A.A. Andronov, E.A. Leontovich, Some cases of dependence of limit cycles on a parameter, Uchenye zapiski Gorkovskogo Universiteta (Research notes of Gorky University) 6, 3-24, 1937.
A.A. Andronov, E.A. Leontovich, I.E. Gordon, A.G. Maier. The theory of bifurcations of dynamical systems on a plane, Wiley, New York, 1971.
J. Palis, C. Pugh, in Fifty problems in dynamical systems, Dynamical systems - Warwick, 1974, Springer Lecture Notes 468, 1975.
R.H. Abraham, Catastrophes, intermittency, and noise, in Chaos, Fractals, and Dynamics, Lect. Notes Pure Appl. Math. 98, 3-22, 1985.
V.S. Medvedev, The bifurcation of the “blue sky catastrophe” on two-dimensional manifolds, Mathematical Notes, 51(1), 76-81, 1992.
W. Li, C. Li and Z.F. Zhang, Unfolding critical homoclinic orbit of a class of degenerate equilibrium points, Symp. Special Year of ODE and Dyn. Systems in Nankai Univ. in 1990. World Sci. Publ, 99-110, 1992.
D.V. Turaev, L.P. Shilnikov, Blue sky catastrophes. Dokl. Math. 51, 404-407, 1995.
L. Shilnikov, D. Turaev, A new simple bifurcation of a periodic orbit of blue sky catastrophe type, in Methods of qualitative theory of differential equations and related topics, AMS Transl. Series II, v. 200, 165-188, 2000.
N. Gavrilov, A. Shilnikov, Example of a blue sky catastrophe, ibid, 99-105, 2000.
L. Shilnikov, A. Shilnikov, D. Turaev, L. Chua, Methods of qualitative theory in nonlinear dynamics. Part I. World Scientific, Singapore, 1998.
L. Shilnikov, A. Shilnikov, D. Turaev, L. Chua, Methods of qualitative theory in nonlinear dynamics. Parts II, World Scientific, Singapore, 2001.
A. Shilnikov, L.P. Shilnikov, D. Turaev. Blue sky catastrophe in singularly perturbed systems. Moscow Math. Journal 5(1), 205-218, 2005.
V. Lukyanov, L.P. Shilnikov, On some bifurcations of dynamical systems with homoclinic structures. Soviet Math. Dokl. 19(6), 1314-1318, 1978.
A. Shilnikov, G. Cymbalyuk, Transition between tonic-spiking and bursting in a neuron model via the blue-sky catastrophe, Phys Review Letters, 94, 048101, 2005.
Internal references
Yuri A. Kuznetsov (2006) Andronov-Hopf bifurcation. Scholarpedia, 1(10):1858.
John Guckenheimer (2007) Bifurcation. Scholarpedia, 2(6):1517.
Eugene M. Izhikevich (2006) Bursting. Scholarpedia, 1(3):1300.
James Meiss (2007) Dynamical systems. Scholarpedia, 2(2):1629.
Eugene M. Izhikevich (2007) Equilibrium. Scholarpedia, 2(10):2014.
Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
Yuri A. Kuznetsov (2006) Saddle-node bifurcation. Scholarpedia, 1(10):1859.
Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.
Arkady Pikovsky and Michael Rosenblum (2007) Synchronization. Scholarpedia, 2(12):1459.
Paul So (2007) Unstable periodic orbits. Scholarpedia, 2(2):1353.
One of the benefits of coming back to an existing wordpress blog rather than continuing to try to roll my own in github.io is that I can easily piggyback on other people writing plugins to solve the same problems. So far, the Crayon and MathJax-Latex plugins seem to fit the bill.
Testing syntax highlighting
# hello_world.py
class HelloWorld:
    """An unnecessarily complex hello world"""

    def __init__(self):
        print("Hello world")

if __name__ == "__main__":
    hello = HelloWorld()
Testing Latex / Math
$$
\begin{align} J(\theta) &= \frac{1}{m}\sum_{i=1}^{m}Cost(h_\theta(x)^{(i)},y^{(i)}) \\ Cost(h_\theta(x),y) &= -log(h_\theta(x))& y=1 \\ Cost(h_\theta(x),y) &= -log(1-h_\theta(x))& y=0 \end{align} $$
So I’ve been skipping my World of Warcraft time and reviewing calculus and differential equations so I don’t get too rusty. I almost immediately ran into an issue “reading” some of the set notation. It’s great because it’s compact and can communicate a lot of information but that doesn’t help if you can’t read it.
Take for example
{(x,f(x))|x∈A}
Doesn’t say much unless you can read it as “the set of all ordered pair x, f of x such that x is a member of set A”. Here are a few quick rules:
{ } denotes the set
( ) denotes an ordered pair
| "such that"
: also "such that", just in case things were too clear
∈ "is a member of"
∉ "is not a member of"
∪ union of two sets
∩ intersection of sets
So ℕ = {|a| : a ∈ ℤ} reads as "the set of natural numbers is equal to the set of absolute values of a such that a is a member of set ℤ". Since ℤ is generally the set of integers { … ,-2, -1, 0, 1, 2, … }, this means that ℕ = {0, 1, 2, … }.
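Conveniently, Python set comprehensions read almost exactly like this notation, which makes them a nice way to double-check yourself (using a finite window of ℤ as a stand-in for the whole set):

```python
# { |a| : a in Z }, with a finite window of Z standing in for the integers
Z = range(-5, 6)
N = {abs(a) for a in Z}          # "the set of |a| such that a is in Z"

# {(x, f(x)) | x in A} with, say, f(x) = x**2 and A = N
pairs = {(x, x ** 2) for x in N}
```

The part before `for` is the expression being collected, and the part after reads as "such that".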
So now I can mostly read set notation again. Now I just need to learn MathML. |
Suppose that $X \in \mathbb{R}^{n \times n}$, and $X$ is positive semidefinite. For convenience, define
\begin{align}\langle A, B\rangle = \sum_{k, l} A_{kl}B_{kl}\end{align}
for square matrices $A, B$ of equal size; this operation corresponds to the trace of the matrix product.
So your formulation currently looks like
\begin{align}\max \langle{C, X}\rangle \\\mathrm{s.t.}\,\, X \succeq 0,\,\,I - X \succeq 0, \end{align}
where $X \succeq 0$ means that $X$ is positive semidefinite.
Essentially, I'd create a positive semidefinite matrix of slack variables $S \in \mathbb{R}^{n \times n}$ such that $S + X = I$ and $S \succeq 0$. Let $E_{ij}$ be the matrix consisting of all zeros except for the $(i,j)$th element, which is set to 1. Also, define the matrices
\begin{align}X' = \left[\begin{array}{cc}X & 0 \\ 0 & S\end{array}\right], \\C' = \left[\begin{array}{cc}C & 0 \\ 0 & 0\end{array}\right], \\A_{ij} = \left[\begin{array}{cc} E_{ij} & 0 \\ 0 & E_{ij}\end{array}\right].\end{align}
Then, (I think,) the following reformulation works:
\begin{align}\max \langle{C', X'}\rangle \\\mathrm{s.t.}\,\, X' \succeq 0,\,\,\langle A_{ij}, X' \rangle = \delta_{ij},\,\,i,j=1,\ldots,n \end{align}
where $\delta_{ij}$ is the Kronecker delta symbol. |
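As a quick numerical sanity check (not a full SDP solve), one can verify that the equality constraints $\langle A_{ij}, X'\rangle = \delta_{ij}$ are exactly the statement $X + S = I$; the small feasible $X$ below is an arbitrary choice:

```python
def inner(A, B):
    """Trace inner product <A, B> = sum_{k,l} A_kl B_kl."""
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def block_diag(A, B):
    """[[A, 0], [0, B]] as a dense list-of-lists matrix."""
    n, m = len(A), len(B)
    return [row + [0] * m for row in A] + [[0] * n + row for row in B]

def E(i, j, n):
    """All zeros except a 1 in position (i, j)."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

n = 2
X = [[0.5, 0.1], [0.1, 0.3]]   # an arbitrary feasible X with 0 <= X <= I
S = [[(1 if r == c else 0) - X[r][c] for c in range(n)] for r in range(n)]  # S = I - X
Xp = block_diag(X, S)

# <A_ij, X'> = X_ij + S_ij, which equals delta_ij exactly when X + S = I
checks = [abs(inner(block_diag(E(i, j, n), E(i, j, n)), Xp) - (1 if i == j else 0))
          for i in range(n) for j in range(n)]
```

All entries of `checks` vanish, confirming that the block constraints encode $X + S = I$ componentwise.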
Bulletin of the American Physical Society 2017 Fall Meeting of the APS Division of Nuclear Physics Volume 62, Number 11 Wednesday–Saturday, October 25–28, 2017; Pittsburgh, Pennsylvania
Session HF: Structure Functions
Chair: Misak Sargsian, Florida International University
Room:
Salon 6
Friday, October 27, 2017
8:30AM - 8:42AM
HF.00001: Nucleon PDFs and TMDs from Continuum QCD
Kyle Bednar, Ian Cloet, Peter Tandy
The parton structure of the nucleon is investigated in an approach based upon QCD's Dyson-Schwinger equations. The method accommodates a variety of QCD's dynamical outcomes including the running mass of quark propagators and formation of non-pointlike di-quark correlations. All needed elements, including the nucleon wave function solution from a Poincaré covariant Faddeev equation, are encoded in spectral-type representations in the Nakanishi style to facilitate Feynman integral procedures and allow insight into key underlying mechanisms. Results will be presented for spin-independent PDFs and TMDs arising from a truncation to allow only scalar di-quark correlations. The influence of axial-vector di-quark correlations may be discussed if results are available.
Friday, October 27, 2017
8:42AM - 8:54AM
HF.00002: Nonperturbative Transverse Momentum Effects in p$+$p and p$+$A Collisions at PHENIX
Michael Skoby
Due to the non-Abelian nature of QCD, there is a prediction that quarks can become correlated across colliding protons in hadron production processes sensitive to nonperturbative transverse momentum effects. Measuring the evolution of nonperturbative transverse momentum widths as a function of the hard interaction scale can help distinguish these effects from other possibilities. Collins-Soper-Sterman evolution comes directly from the proof of transverse-momentum-dependent (TMD) factorization for processes such as Drell-Yan, semi-inclusive deep-inelastic scattering, and e+e- annihilation and predicts nonperturbative momentum widths to increase with hard scale. Experimental results from proton-proton and proton-nucleus collisions, in which TMD factorization is predicted to be broken, will be presented. The results show that these widths decrease with hard scale, suggesting possible effects from TMD factorization breaking.
Friday, October 27, 2017
8:54AM - 9:06AM
HF.00003: Double Polarization Asymmetry Measurement of the Electric Form Factor of the Neutron at $Q^2$=1.16 GeV$^2$ Using the Semi-Exclusive Reaction $^3\vec{\textrm{He}}(\vec{e},e'n)pp$
Richard Obrecht
The space-like electric form factor of the neutron has been extracted at $Q^2=1.16$ GeV$^2$ via a beam-target helicity asymmetry measurement using the semi-exclusive reaction $^3\vec{\textrm{He}}(\vec{e},e'n)pp$. The Jefferson Lab Hall A experiment E02-013 ran in 2006 utilizing the 6 GeV CEBAF for its high-duty, longitudinally polarized electron beam. The double-arm coincidence experiment detected the quasielastically scattered electrons in a large angular and momentum acceptance spectrometer referred to as BigBite. The recoiling nucleons were detected in a large neutron detector, built out of planes of scintillator arrays interlaced with iron and lead plates to increase the probability of inducing a hadronic shower. The polarized $^3$He target used the novel technique of hybrid spin-exchange optical pumping, resulting in a 10 atm target that could sustain polarizations greater than 50$\%$ at a beam current of 8 $\mu$A. Presented will be the current analysis and a preliminary result for $G_E^n$ at $Q^2$=1.16 GeV$^2$. [Preview Abstract]
Friday, October 27, 2017
9:06AM - 9:18AM
HF.00004: Comparison of the F2 Structure Function in Iron as Measured by Charged Lepton and Neutrino Probes
Narbe Kalantarians, Eric Christy, Cynthia Keppel
World data for the F2 structure function for Iron, as measured by multiple charged lepton and neutrino deep inelastic scattering experiments, are compared. Data obtained from charged lepton and neutrino scattering at larger values of x are in remarkably good agreement with a simple invocation of the 18/5 rule, while a discrepancy in the behavior of the data obtained from the different probes well beyond the data uncertainties is observed in the shadowing/anti-shadowing transition region where the Bjorken scaling variable x is less than 0.15. The data are compared to theoretical calculations. Details and results of the data comparison will be presented, along with future plans. [Preview Abstract]
Friday, October 27, 2017
9:18AM - 9:30AM
HF.00005: Transverse Single-Spin Asymmetries of Direct Photons from Proton-Proton Collisions at Forward Rapidity
Oleg Eyser
Transverse single-spin asymmetries in high energy collisions offer unique ways to study the nucleon structure beyond the conventional leading twist collinear picture in hard QCD processes. While transverse momentum dependent distribution and fragmentation functions require two scales (hard and soft), observables with a single hard scale can be described in a collinear framework with multiparton correlations (twist-3). Both are related when the intrinsic transverse momentum is integrated. Initial and final state effects can contribute to different probes and need to be disentangled. In 2015, the STAR experiment at RHIC has extended the forward calorimeter, 2.5<$\eta$<4.0 with a preshower detector in order to study transverse asymmetries of direct photon production in proton-proton collisions at a center of mass energy of 200 GeV. This measurement will contribute to the universality test of initial state spin-orbit correlations (sign-change between hadronic collisions and deep inelastic scattering) and serve as first input to a proper evolution of higher twist functions as function of momentum transfer. We will present the status of the analysis and discuss implications on the theoretical description. [Preview Abstract]
HF.00006: ABSTRACT WITHDRAWN
Friday, October 27, 2017
9:42AM - 9:54AM
HF.00007: Transverse single-spin asymmetries for direct photon and neutral pion production in midrapidity at PHENIX
Nicole Lewis
Large transverse single spin asymmetries for hadron production in proton-proton collisions were some of the first indicators of significant nonperturbative spin-momentum correlations in the proton. They have been found to persist up to collision energies of 510 GeV, yet their origin remains poorly understood. Measurements of different final-state particles in a wide variety of collision systems over a range of kinematics can help to identify and separate contributions from the proton versus hadronization, and from different parton flavors. Depending on the rapidity pion production can provide access to both initial- and final-state effects for a mix of parton flavors, while direct photons depend only on initial-state effects and are particularly sensitive to gluon dynamics in RHIC kinematics. The status of transverse single spin measurements for neutral pions and direct photons performed for p+p, p+Al, and p+Au collisions at PHENIX will be presented. [Preview Abstract]
Friday, October 27, 2017
9:54AM - 10:06AM
HF.00008: Generalized Parton Distributions of the nucleon from exclusive lepto- and photo-production of lepton pairs
Sho Uemura, Marie Boer
Generalized Parton Distributions (GPDs) contain the correlation between the parton's longitudinal momentum and their transverse distribution. They are accessed through hard exclusive processes such as exclusive Compton processes, where two photons are exchanged with a quark of the nucleon, and at least one of them has a high virtuality. Exclusive Compton processes are considered "golden" channels, as the only non-perturbative part of the process corresponds to the GPDs. Deeply Virtual Compton Scattering (DVCS) corresponds to the lepto-production of a real photon and has been intensively studied in the past decade. We propose to access GPDs with the two other cases of exclusive Compton processes: Timelike Compton Scattering (TCS) corresponds to the photo-production of a lepton pair, and Double Deeply Virtual Compton Scattering (DDVCS) corresponds to the lepto-production of a lepton pair. The study of these two reactions is complementary to DVCS and will bring new constraints on our understanding of the nucleon structure, in particular for a tomographic interpretation of GPDs. We will discuss the interest of TCS and DDVCS in terms of GPD studies, and present the efforts held at Jefferson Lab for new experiments aiming at measuring TCS and DDVCS. [Preview Abstract]
Let $(\Omega,\mathcal{F},\mathbb{F},\mu)$ be a filtered probability space.
Market efficiency implies that the stock price process is Markov with
$\mathbb{E}[f(X_t)|\mathbb{F}_s] = g(X_s)$ for $0 \leq s \leq t$
where $f$ and $g$ are Borel measurable functions.
It additionally implies that the discounted stock price process is a martingale w.r.t. the probability measure $\mu$ and filtration $\mathbb{F}$ with
$\mathbb{E}^{\mu}[X_t^*|\mathbb{F}_s] = X_s^*$ for $0 \leq s \leq t$
While the discounted stock price process is a martingale, the stock price process itself should be a submartingale w.r.t. the probability measure $\mu$ and filtration $\mathbb{F}$ with
$\mathbb{E}^{\mu}[X_t|\mathbb{F}_s] \geq X_s$ for $0 \leq s \leq t$
I agree with the others Markov does not imply martingale and vice versa.
There are many sources on empirical tests for these properties.
In my opinion these assumptions are not unreasonable. The Markov property only says that all past information about the stock price process (historical prices, historical volume, etc.) is incorporated in the current price, and therefore only the current price is relevant. I believe that it is logical to assume that publicly available historical information is already priced in. For instance, even the anomalies violating weak-form efficiency (e.g. the January effect) tend to disappear over time as market participants trade on the information, thereby incorporating it into the price. Assuming that the stock price process is a submartingale only says that, in expectation, the future stock price should be greater than or equal to today's price. Intuitively, investors would not participate (long positions) in the stock market if prices were expected to decline. Take the stock price process$$dX = \alpha X\,dt + \sigma X\,dW.$$The submartingale property implies $\alpha \geq 0$.
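A quick Monte Carlo check of that last claim: with $\alpha \geq 0$, simulated terminal prices average at least the starting price (the parameter values below are arbitrary):

```python
import math
import random

def terminal_prices(x0, alpha, sigma, T, n_paths, seed=0):
    """Sample X_T from dX = alpha X dt + sigma X dW via the exact
    solution X_T = x0 * exp((alpha - sigma^2/2) T + sigma sqrt(T) Z)."""
    rng = random.Random(seed)
    drift = (alpha - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    return [x0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
            for _ in range(n_paths)]

prices = terminal_prices(x0=100.0, alpha=0.05, sigma=0.2, T=1.0, n_paths=20000)
mean_price = sum(prices) / len(prices)
# submartingale with alpha >= 0: E[X_T] = x0 * exp(alpha * T) >= x0
```

The sample mean lands close to $x_0 e^{\alpha T}$, which is at least $x_0$ whenever $\alpha \geq 0$.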
For most assets I don't believe that that is an unreasonable assumption. |
Basic Electric Guitar Circuits 2: Potentiometers & Tone Capacitors Part 2: Potentiometers and Tone Capacitors What is a Potentiometer?
Potentiometers, or "pots" for short, are used for volume and tone control in electric guitars. They allow us to alter the electrical resistance in a circuit at the turn of a knob.
It is useful to know the fundamental relationship between voltage, current and resistance known as Ohm's Law when understanding how electric guitar circuits work. The guitar pickups provide the voltage and current source, while the potentiometers provide the resistance. From Ohm's Law we can see how increasing resistance decreases the flow of current through a circuit, while decreasing the resistance increases the current flow. If two circuit paths are provided from a common voltage source, more current will flow through the path of least resistance.
Ohm's Law$$V = I \times R$$
where ~V~ = voltage, ~I~ = current and ~R~ = resistance
Basic Electric Guitar Circuit
Alternative functional terminal names:
Terminal 1: "Cold"
Terminal 2: "Wiper"
Terminal 3: "Hot"
A Visual Representation of how a potentiometer works
Based on a 300 degree rotation
We can visualize the operation of a potentiometer from the drawing above. Imagine a resistive track connected from terminal 1 to 3 of the pot. Terminal 2 is connected to a wiper that sweeps along the resistive track when the potentiometer shaft is rotated from 0° to 300°. This changes the resistance from terminals 1 to 2 and 2 to 3 simultaneously, while the resistance from terminal 1 to 3 remains the same. As the resistance from terminal 1 to 2 increases, the resistance from terminal 2 to 3 decreases, and vice-versa.
Tone Control: Variable Resistors & Tone Capacitors
Tone pots are connected using only terminals 1 and 2 for use as a variable resistor whose resistance increases with a clockwise shaft rotation. The tone pot works in conjunction with the tone capacitor ("cap") to serve as an adjustable high frequency drain for the signal produced by the pickups.
The tone pot's resistance is the same for all signal frequencies; however, the capacitor has AC impedance which varies depending on both the signal frequency and the value of capacitance as shown in the equation below.$$\text{Capacitor Impedance} = Z_{\text{capacitor}} = \frac{1}{2 \pi f C}$$
where ~f~ = frequency and ~C~ = capacitance
Capacitor impedance decreases if capacitance or frequency increases. High frequencies see less impedance from the same capacitor than low frequencies. The table below shows impedance calculations for three of the most common tone cap values at a low frequency (100 Hz) and a high frequency (5 kHz).
| ~C~ (Capacitance) | ƒ (Frequency) | ~Z~ (Impedance) |
| --- | --- | --- |
| .022 μF | 100 Hz | 72.3 kΩ |
| .022 μF | 5 kHz | 1.45 kΩ |
| .047 μF | 100 Hz | 33.9 kΩ |
| .047 μF | 5 kHz | 677 Ω |
| .10 μF | 100 Hz | 15.9 kΩ |
| .10 μF | 5 kHz | 318 Ω |
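The impedance figures in the table can be reproduced directly from the formula above; a quick sketch (the helper name is mine):

```python
import math

def cap_impedance(f_hz, c_farads):
    # Z = 1 / (2*pi*f*C), the capacitor impedance formula from the text
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

# Reproduce two rows of the table: a .022 uF cap at 100 Hz and at 5 kHz
print(round(cap_impedance(100, 0.022e-6)))   # 72343 (~72.3 kOhm)
print(round(cap_impedance(5000, 0.022e-6)))  # 1447 (~1.45 kOhm)
```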
When the tone pot is set to its maximum resistance (e.g. 250kΩ), all of the frequencies (low and high) have a relatively high path of resistance to ground. As we reduce the resistance of the tone pot to 0Ω, the impedance of the capacitor has more of an impact and we gradually lose more high frequencies to ground through the tone circuit. If we use a higher value capacitor, we lose more high frequencies and get a darker, fatter sound than if we use a lower value.
Volume Control: Variable Voltage Dividers
Volume pots are connected using all three terminals in a way that provides a variable voltage divider for the signal from the pickups. The voltage produced by the pickups (input voltage) is connected between the volume pot terminals 1 and 3, while the guitar's output jack (output voltage) is connected between terminals 1 and 2.
Voltage divider equation:$$V_{\text{out}} = V_{\text{in}} \times \frac{R_2}{R_1 + R_2}$$
From the voltage divider equation we can see that if ~R_1 = 0\text{Ω}~ and ~R_2 = 250\text{kΩ}~, then the output voltage will be equal to the input voltage (full volume).$$V_{\text{out}} = V_{\text{in}} \times \frac{250\text{kΩ}}{0 + 250\text{kΩ}} = V_{\text{in}} \times \frac{250\text{kΩ}}{250\text{kΩ}}$$$$V_{\text{out}} = V_{\text{in}}$$
If ~R_1 = 250\text{kΩ}~ and ~R_2 = 0\text{Ω}~, then the output voltage will be zero (no sound).$$V_{\text{out}} = V_{\text{in}} \times \frac{0}{250\text{kΩ} + 0} = V_{\text{in}} \times \frac{0}{250\text{kΩ}}$$$$V_{\text{out}} = 0$$
Two Resistor Voltage Divider Schematic
Example:$$V_{\text{in}} = 60\text{mV} \text{, } R_1 = 125\text{kΩ} \text{, } R_2 = 125\text{kΩ}$$$$V_{\text{out}} = V_{\text{in}} \times \frac{R_2}{(R_1 + R_2)}$$$$V_{\text{out}} = 60\text{mV} \times \frac{125\text{kΩ}}{(125\text{kΩ} + 125\text{kΩ})}$$$$V_{\text{out}} = 60\text{mV} \times \frac{1}{2}$$$$V_{\text{out}} = 30\text{mV}$$
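The worked example maps directly onto the voltage divider equation; a minimal sketch:

```python
def v_out(v_in, r1, r2):
    # Voltage divider: Vout = Vin * R2 / (R1 + R2)
    return v_in * r2 / (r1 + r2)

print(v_out(60e-3, 125e3, 125e3))  # 0.03 V = 30 mV, matching the example
print(v_out(60e-3, 0, 250e3))      # full volume: Vout = Vin
print(v_out(60e-3, 250e3, 0))      # no sound: Vout = 0
```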
Potentiometer Taper
The taper of a potentiometer indicates how the output to input voltage ratio will change with respect to the shaft rotation. The two taper curves below are examples of the two most common guitar pot tapers as they would be seen on a manufacturer data sheet. The rotational travel refers to turning the potentiometer shaft clockwise from 0° to 300° as in the previous visual representation drawing.
How do you know when to use an audio or linear taper potentiometer?
The type of potentiometer you should use depends on the type of circuit you are designing. Typically, for audio circuits the audio taper potentiometer is used. This is because the audio taper potentiometer functions on a logarithmic scale, which is the scale on which the human ear perceives sound. Even though the taper chart appears to show a sudden increase in volume as the rotation increases, the perception of the sound increase will in fact occur on a gradual scale. The linear taper will actually (counterintuitively) produce a more sudden volume swell effect because of how the human ear perceives the scale. However, linear potentiometers are often used for other functions in audio circuits which do not directly affect audio output. In the end, both types of potentiometers give you the same range of output (from 0 to full), but the rate at which that range changes differs between the two.
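The claim about linear pots can be made concrete by looking at the output in decibels, which is roughly the scale the ear follows. For an idealized linear taper (output ratio equal to the rotation fraction, ignoring circuit loading — an assumption for illustration), most of the dB change happens in the first part of the travel:

```python
import math

def linear_taper_db(rotation):
    # rotation in (0, 1]; output ratio equals rotation for an ideal linear taper
    return 20 * math.log10(rotation)

for rot in (0.1, 0.25, 0.5, 1.0):
    print(f"{rot:>4}: {linear_taper_db(rot):6.1f} dB")
# 10% rotation is already -20 dB, while the last half of the travel only adds 6 dB
```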
How do you know what value of potentiometer to use?
The actual value of the pot itself does not affect the input to output voltage ratio, but it does alter the peak frequency of the pickup. If you want a brighter sound from your pickups, use a pot with a larger total resistance. If you want a darker sound, use a smaller total resistance. In general, 250kΩ pots are used with single-coil pickups and 500kΩ pots are used with humbucking pickups.
Specialized Pots
Potentiometers are used in all types of electronic products so it is a good idea to look for potentiometers specifically designed to be used in electric guitars. If you do a lot of volume swells, you will want to make sure the rotational torque of the shaft feels good to you and most pots designed specifically for guitar will have taken this into account. When you start looking for guitar specific pots, you will also find specialty pots like push-pull pots, no-load pots and blend pots which are all great for getting creative and customizing your guitar once you understand how basic electric guitar circuits work. |
Now showing items 1-10 of 26
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
I wish to find an expression for the number of solutions $x$ to $x^2\equiv 9 \pmod n$, with $x$ a natural number${}<n$, when $n$ has a factorization $n=p_1^{m_1}\cdot p_2^{m_2}\cdots p_k^{m_k}$ into distinct prime powers.
I have already found, for the different cases of $p_i$, that for $2^k$ with $k \ge 3$ there are always 4 solutions; similarly for $3^k$ with $k \ge 3$ there are 6. For other prime powers $p^k$ there are 2.
I have experimented, and it appears that for a composite $n$ I can simply multiply the solution counts for the distinct prime powers together to get the total number of solutions, but I am struggling to express this formally.
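The observed multiplicativity can at least be checked by brute force; here is a small sketch (the helper names are mine) that counts solutions modulo $n$ and compares against the product over the prime-power factors:

```python
def count_solutions(m):
    # number of x in {0, ..., m-1} with x^2 = 9 (mod m)
    return sum(1 for x in range(m) if (x * x - 9) % m == 0)

def prime_power_factors(n):
    # the prime-power factors p_i^{m_i} of n, e.g. 360 -> [8, 9, 5]
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            q = 1
            while n % p == 0:
                n //= p
                q *= p
            factors.append(q)
        p += 1
    if n > 1:
        factors.append(n)
    return factors

for n in (40, 72, 105, 360):
    prod = 1
    for q in prime_power_factors(n):
        prod *= count_solutions(q)
    assert count_solutions(n) == prod  # multiplicativity holds for these n
```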
I know that the CRT states that, for any choice of residues $b_1, \dots, b_k$ (one for each distinct prime power dividing $n$), there exists a unique $x$, $0 \le x \le n-1$, s.t.
$$x \equiv b_1 \pmod {p_1^{m_1}}$$ $$x \equiv b_2 \pmod {p_2^{m_2}}$$ .... $$x \equiv b_k \pmod {p_k^{m_k}}$$
But how can I bind all these different residues back to $9$? Because the CRT does not state this.
Any help would be greatly appreciated!! |
Given a filtered space $(\Omega, \mathcal{F}, \mathcal{F}_{t})$ with right-continuous filtration. We have a class of probability measures $P=\{P_{\theta}:\theta \in \Theta\}$ defined on the filtered space.
We assume there exists a $\sigma$-finite measure $\mu$ on $(\Omega,\mathcal{F})$, which locally dominates $P$:
\begin{align*} P_{\theta}^{t}<<\mu^{t} \end{align*}
where $P_{\theta}^{t},\mu^{t}$ are the restrictions of the measures $P_{\theta},\mu$ to $\mathcal{F}_{t}$. Then we assume the Radon-Nikodym derivatives of $P_{\theta}^{t}$ w.r.t. $\mu^{t}$ are of the form \begin{align*} \frac{dP_{\theta}^{t}}{d\mu^{t}}=e^{\langle\gamma_{t}(\theta),B_{t}\rangle+\phi_{t}(\theta)}\quad \theta \in \Theta,\ t\geq 0 \end{align*} Here $\gamma_{t}(\theta)$ is an $m$-dimensional real-valued function of $\theta$ and $t$ which is RCLL (càdlàg) with respect to $t$.
For $\phi_{t}(\theta)$ the same holds, but one-dimensional. Both are non-random.
$B_{t}$ is an $m$-dimensional real-valued $\mathcal{F}_{t}$-adapted RCLL (càdlàg) random process.
This representation of the Radon-Nikodym derivative is not unique: for example, take $\tilde{\gamma}_{t}=\frac{1}{2}\gamma_{t}$ and $\tilde{B}_{t}=2B_{t}$ as another representation.
Now it is stated:
It is always possible to find a representation of the latter form such that $B$ is a semimartingale. Why?
Additional fact to keep in mind: one can show that $\dot{\gamma}_{t}(\theta)^{T}B_{t}-\dot{\phi}_{t}(\theta)$ is a $P_{\theta}$ square-integrable martingale with càdlàg paths; thus it is a semimartingale. Could this help me obtain a semimartingale $\tilde{B}$?
At its core, the argument you're presenting only uses the fact that your prospective ladder operator $\hat \ell=\hat a$ has a specific commutation relation with the hamiltonian,$$[\hat H,\hat \ell] = -c\hat \ell\tag{$*$}$$for some constant $c$; this is enough to ensure that if $|E\rangle$ is an eigenstate of $\hat H$ with eigenvalue $E$, then $\hat \ell|E\rangle$ must obey$$\hat H\hat \ell |E\rangle=\hat \ell(\hat H-c)|E\rangle=(E-c)\hat \ell |E\rangle,\tag{$\star$}$$as in your question. The key issue here, though, is that
the relationship $\bf (*)$ is not unique to $\boldsymbol{\hat\ell=\hat a}$, and indeed if $(*)$ is true for $\hat \ell$ then it is also true for ${\hat{\ell}}^2$ by changing $c$ to $2c$, and by induction for $\hat \ell^k$ for any power $k$ with constant $kc$.
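To make the induction explicit, this uses only $(*)$ together with the commutator Leibniz rule $[\hat H,\hat A\hat B]=[\hat H,\hat A]\hat B+\hat A[\hat H,\hat B]$:

$$[\hat H,\hat \ell^{\,2}]=[\hat H,\hat \ell]\,\hat \ell+\hat \ell\,[\hat H,\hat \ell]=(-c\,\hat \ell)\hat \ell+\hat \ell(-c\,\hat \ell)=-2c\,\hat \ell^{\,2},$$

and, assuming $[\hat H,\hat \ell^{\,k}]=-kc\,\hat \ell^{\,k}$,

$$[\hat H,\hat \ell^{\,k+1}]=[\hat H,\hat \ell^{\,k}]\,\hat \ell+\hat \ell^{\,k}[\hat H,\hat \ell]=-kc\,\hat \ell^{\,k+1}-c\,\hat \ell^{\,k+1}=-(k+1)c\,\hat \ell^{\,k+1}.$$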
Thus, the question is perfectly valid: how do we know that we're not tricking ourselves, and there is in fact some operator $\hat b$ such that $\hat b^2=\hat a$, and which causes increments half as big as those caused by $a$? Indeed, for all that the algebraic method knows as of $(\star)$, there could be some other half of the basis $|\frac12\rangle,|\frac32\rangle,\ldots$, interleaved with the usual Fock states, and $\hat a$ is taking the stairs two at a time.
The answer to that is the behaviour at the edges, and more specifically at the lower bound of the spectrum of $H$. We know that the spectrum is bounded from below, which means that the descending stair in $(\star)$ on $|E\rangle, \hat \ell |E\rangle, \hat \ell^2|E\rangle, \ldots$ needs to terminate at some point, and that can only happen if at some point that descending ladder gets a zero vector, i.e. if for some state $|\psi\rangle$ we have$$\hat \ell |\psi\rangle=0.$$Now, here's the crucial bit: if $\hat \ell$ is taking on more stairs than it needs to, then the solution to this equation will have a kernel that's bigger than it needs to, and it will involve more than one eigenspace. This is the case with the square of the usual ladder operator, which has $\hat a^2|0\rangle = 0$
and $\hat a^2|1\rangle = 0$.
In contrast, the correct ladder operator $\hat a$ has a well-defined single-(effective-)dimensional kernel, in the sense that if $|\psi\rangle$ satisfies$$\hat a |\psi\rangle=0$$then this corresponds to a single eigenvalue of the hamiltonian, a fact that relies on the structure $\hat H=c\, \hat a^\dagger\hat a + \delta$ of the hamiltonian. Thus, if $\hat a |\psi\rangle=0$, then we also know that $\hat a^\dagger\hat a |\psi\rangle=0$, and therefore $\hat H |\psi\rangle=\delta|\psi\rangle$, i.e. a single eigenvalue.
This then places the completely strict constraint on the energies $E$ that can ladder-descend down to the ground state to energies of the form $E=\delta + n c$, which completes the proof that no smaller steps are possible. |
Excuse my lack of vocabulary for I have no formal training in this field, which is also why I ask this question - it may be trivial or it may be impossible.
I want to evaluate an expression in the following form down to a binary float with "correct" rounding and I'm worried that double-rounding will corrupt my result:
$$\sqrt[c]{\frac{a}{b}} \qquad a \in \mathbb N \quad b,c \in \mathbb N_{> 0}$$
By correct rounding I mean: as if the complete expression were evaluated to infinite precision and then rounded once, without any chance of "double rounding" corrupting the result.
$$t = a/b \qquad t:\mathrm{e.g.\ binary128}$$ $$y = \sqrt[c]{t} \qquad y: \mathrm{e.g.\ binary64}$$
Using MPFR I can calculate both $a/b$ and $\sqrt[c] t$ correctly rounded to any arbitrary precision, but I don't know how to do these two steps in a way that guarantees correct rounding for the complete expression!
Here is a code example of what I'm currently doing. Since it's written in C it is a bit clumsy (no overloads or default arguments) but I hope you can see the steps even if you're not fluent in this language or the libraries I use:
extern unsigned long int a, b, c;     // b and c != 0
MPFR_DECL_INIT( y, DBL_MANT_DIG );    // Fits a double exactly
MPFR_DECL_INIT( t, DBL_MANT_DIG*2 );  // Twice the precision I want
mpfr_set_ui  ( t, a, MPFR_RNDN );     // Exact conversion
mpfr_div_ui  ( t, t, b, MPFR_RNDN );  // Correctly rounded
mpfr_rootn_ui( y, t, c, MPFR_RNDN );  // N'th root correctly rounded
I'm expecting one of these to hold true:

- Twice the precision for the intermediate is not enough
- Twice the precision for the intermediate is more than needed; I only need N extra bits
- I need to do it differently, but it's still possible in finite time and space
- It's not possible to guarantee correct rounding, because of reason
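One pragmatic approach is Ziv's "compute, round, retry at higher precision" strategy: evaluate the whole expression at some working precision, round to double, and accept only once two successive precisions agree. This is a heuristic sketch, not a proof of correct rounding (hard-to-round cases could in principle exhaust the loop); here it is in Python with mpmath standing in for MPFR:

```python
import math
from mpmath import mp, mpf, root

def nth_root_of_ratio(a, b, c, max_prec=4096):
    # Evaluate (a/b)^(1/c), rounded to the nearest double, Ziv-style:
    # raise the working precision until the double rounding stops changing.
    prec, last = 113, None              # start well above double's 53 bits
    while prec <= max_prec:
        mp.prec = prec
        y = float(root(mpf(a) / mpf(b), c))  # round high-precision value to double
        if y == last:                   # two precisions agree -> accept
            return y
        last, prec = y, 2 * prec
    raise ArithmeticError("no agreement up to max_prec; a genuinely hard case")
```

Note that agreement between two precisions is strong evidence, not a guarantee; a rigorous bound would need an error analysis of the kind MPFR uses internally.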
To evaluate detection performance, we plot the miss rate $mr(c) = \frac{fn(c)}{tp(c) + fn(c)}$ against the number of false positives per image $fppi(c)=\frac{fp(c)}{\text{#img}}$ in log-log plots. Here $tp(c)$ is the number of true positives, $fp(c)$ the number of false positives, and $fn(c)$ the number of false negatives, all for a given confidence value $c$, such that only detections with a confidence value greater than or equal to $c$ are taken into account. As commonly applied in object detection evaluation, the confidence threshold $c$ is used as a control variable: by decreasing $c$, more detections are taken into account, resulting in more possible true or false positives, and possibly fewer false negatives. We define the log average miss rate (LAMR) as shown below, where the 9 $fppi$ reference points are equally spaced in log space: $\DeclareMathOperator*{\argmax}{argmax}LAMR = \exp\left(\frac{1}{9}\sum\limits_f \log\left(mr\left(\argmax\limits_{fppi\left(c\right)\leq f} fppi\left(c\right)\right)\right)\right)$

For each $fppi$ reference point the corresponding $mr$ value is used. In the absence of a miss-rate value for a given $f$, the highest existing $fppi$ value is used as the new reference point. This definition enables LAMR to be used as a single detection performance indicator at image level. At each image, the set of all detections is compared to the ground-truth annotations using a greedy matching algorithm. An object is considered detected (true positive) if the Intersection over Union (IoU) of the detection and ground-truth bounding box exceeds a pre-defined threshold. Due to the high non-rigidness of pedestrians, we follow the common choice of an IoU threshold of 0.5. Since multiple matches are not allowed for one ground-truth annotation, in the case of multiple matches the detection with the largest score is selected, and all other matching detections are considered false positives. After matching, all unmatched ground-truth annotations and detections count as false negatives and false positives, respectively.
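The LAMR computation can be sketched in a few lines. The reference-point span from $10^{-2}$ to $10^{0}$ is an assumption (a common choice); the text only fixes the number of points (9) and the log spacing:

```python
import math

def lamr(fppi, mr, refs=None):
    # Log-average miss rate over 9 FPPI reference points, equally spaced in
    # log space. fppi and mr are parallel lists describing the curve.
    if refs is None:
        refs = [10 ** (-2 + 0.25 * i) for i in range(9)]  # 10^-2 .. 10^0 (assumed span)
    vals = []
    for f in refs:
        below = [(fp, m) for fp, m in zip(fppi, mr) if fp <= f]
        if below:
            vals.append(max(below)[1])              # mr at the largest fppi <= f
        else:
            vals.append(mr[fppi.index(min(fppi))])  # fallback: smallest-fppi point
    # geometric mean of the collected miss rates (guard against log(0))
    return math.exp(sum(math.log(max(v, 1e-12)) for v in vals) / len(vals))
```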
Neighboring classes and ignore regions are used during evaluation. Neighboring classes involve entities that are semantically similar, for example bicycle and moped riders. Some applications might require their precise distinction (enforce) whereas others might not (ignore). In the latter case, correct/false detections are not credited/penalized during matching. If not stated otherwise, neighboring classes are ignored in the evaluation. In addition to ignored neighboring classes, all person annotations with the tags behind glass or sitting-lying are treated as ignore regions. Further, as mentioned in Section 3.2 of the EuroCity Persons Dataset Publication, ignore regions are used for cases where no precise bounding box annotation is possible (either because the objects are too small or because there are too many objects in close proximity, which renders instance-based labeling infeasible). Since there is no precise information about the number or the location of objects in an ignore region, all unmatched detections which share an intersection of more than $0.5$ with these regions are not considered false positives.
Note that submissions with a provided publication link and/or code will be prioritized in the list below (COMING SOON).
| Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HRNet | Hongsong Wang | 0.061 | 0.138 | 0.287 | 0.183 | ImageNet | no | no | 2019-08-05 17:11:04 |
| YOLOv3 | ECP Team | 0.097 | 0.186 | 0.401 | 0.242 | ImageNet | yes | no | 2019-04-01 17:08:05 |
| Faster R-CNN | ECP Team | 0.101 | 0.196 | 0.381 | 0.251 | ImageNet | yes | no | 2019-04-01 17:06:33 |
| SSD | ECP Team | 0.131 | 0.235 | 0.460 | 0.296 | ImageNet | yes | no | 2019-04-02 13:56:14 |
| R-FCN (with OHEM) | ECP Team | 0.163 | 0.245 | 0.507 | 0.330 | ImageNet | yes | no | 2019-04-01 17:10:03 |
| YOLOv3_640 | HUI_Tsinghua-Daim... | 0.273 | 0.564 | 0.623 | 0.456 | | no | no | 2019-05-17 04:56:27 |
| Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HRNet | Hongsong Wang | 0.079 | 0.156 | 0.265 | 0.153 | ImageNet | no | no | 2019-08-05 17:11:04 |
| FasterRCNN with M... | Qihua Cheng | 0.150 | 0.253 | 0.653 | 0.295 | ImageNet | no | no | 2019-07-08 08:48:13 |
| Faster R-CNN | ECP Team | 0.201 | 0.359 | 0.701 | 0.358 | ImageNet | yes | no | 2019-05-02 10:10:01 |
$$\sum \frac{(-1)^{n}}{n\cdot \ln n}$$
Leibniz criterion: the sequence $\frac{1}{n\cdot \ln n}$ is monotonically decreasing. Also $\lim_{n \to \infty}\frac{1}{n\cdot \ln n}=0$.

Thus, the series is convergent.
However, the terms also satisfy the conditions of the integral test.

Using the integral test: $\int \frac{1}{x\cdot \ln x}dx$. Using the formula $\int \frac{f'(x)}{f(x)}dx=\ln (f(x))$,

I get $\ln (\ln x)$.

Evaluating the latter with infinity as the upper bound gives a result of "divergent", which for some reason contradicts the Leibniz criterion, which gives "convergent".
why is that? |
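For what it's worth, a quick numerical check (an illustrative sketch) shows the two series really do behave differently: the alternating partial sums settle down, while the positive-term partial sums keep drifting upward (roughly like $\ln \ln N$):

```python
import math

def alt_partial(N):   # partial sum of sum_{n=2}^{N} (-1)^n / (n ln n)
    return sum((-1) ** n / (n * math.log(n)) for n in range(2, N + 1))

def pos_partial(N):   # partial sum of sum_{n=2}^{N} 1 / (n ln n)
    return sum(1 / (n * math.log(n)) for n in range(2, N + 1))

print(alt_partial(10_000), alt_partial(20_000))  # nearly identical
print(pos_partial(10_000), pos_partial(20_000))  # still growing
```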
Topological Methods in Nonlinear Analysis
Topol. Methods Nonlinear Anal., Volume 29, Number 2 (2007), 199-249.
Fixed point theorems and Denjoy-Wolff theorems for Hilbert's projective metric in infinite dimensions
Abstract
Let $K$ be a closed, normal cone with nonempty interior ${\rm int}(K)$ in a Banach space $X$. Let $\Sigma = \{x\in{\rm int}(K) : q(x) = 1\}$ where $q \colon {\rm int}(K)\rightarrow (0,\infty)$ is continuous and homogeneous of degree $1$, and it is usually assumed that $\Sigma$ is bounded in norm. In this framework there is a complete metric $d$, Hilbert's projective metric, defined on $\Sigma$, and a complete metric $\overline d$, Thompson's metric, defined on ${\rm int}(K)$. We study primarily maps $f\colon \Sigma\rightarrow\Sigma$ which are nonexpansive with respect to $d$, but also maps $g\colon {\rm int}(K)\rightarrow {\rm int}(K)$ which are nonexpansive with respect to $\overline{d}$. We prove, under essentially minimal compactness assumptions, fixed point theorems for $f$ and $g$. We generalize to infinite dimensions results of A. F. Beardon (see also A. Karlsson and G. Noskov) concerning the behaviour of Hilbert's projective metric near $\partial\Sigma := \overline\Sigma\setminus \Sigma$. If $x \in \Sigma$, $f \colon \Sigma\rightarrow\Sigma$ is nonexpansive with respect to Hilbert's projective metric, $f$ has no fixed points on $\Sigma$ and $f$ satisfies certain mild compactness assumptions, we prove that $\omega(x;f)$, the omega limit set of $x$ under $f$ in the norm topology, is contained in $\partial\Sigma$; and there exists $\eta\in\partial\Sigma$, $\eta$ independent of $x$, such that $(1 - t) y + t\eta \in\partial K$ for $0 \leq t \leq 1$ and all $y\in\omega (x;f)$. This generalizes results of Beardon and of Karlsson and Noskov. We give some evidence for the conjecture that $\text{\rm co}(\omega(x;f))$, the convex hull of $\omega(x;f)$, is contained in $\partial K$.
Article information
Source: Topol. Methods Nonlinear Anal., Volume 29, Number 2 (2007), 199-249.
Dates: First available in Project Euclid: 13 May 2016
Permanent link to this document: https://projecteuclid.org/euclid.tmna/1463148715
Mathematical Reviews number (MathSciNet): MR2345061
Zentralblatt MATH identifier: 1143.47037
Citation
Nussbaum, Roger D. Fixed point theorems and Denjoy-Wolff theorems for Hilbert's projective metric in infinite dimensions. Topol. Methods Nonlinear Anal. 29 (2007), no. 2, 199--249. https://projecteuclid.org/euclid.tmna/1463148715 |
Given a fat matrix $B \in \mathbb{C}^{n \times m}$ (where $m > n$) with full row rank, I would like to find (numerically) a full-rank matrix $A$ that minimizes the Frobenius norm of the product $A B$. Formally,
$$\underset{A \in \mathbb{C}^{n \times n}}{\text{minimize}} \quad \frac{1}{2} \|AB\|_F^2 \quad \text{subject to} \quad \det (A) \neq 0$$
The value of $m$ is typically an order of magnitude larger than $n$. The sizes ($n,m$) I am interested in may be on the order of hundreds.
I have found the following discussion, which I guess could be generalized to the above case. I wonder if there is a simpler solution in this particular scenario. |
We study the problem of variable selection for linear models under the high-dimensional asymptotic setting, where the number of observations $n$ grows at the same rate as the number of predictors $p$. We consider two-stage variable selection techniques (TVS) in which the first stage uses bridge estimators to obtain an estimate of the regression coefficients, and the second stage simply thresholds this estimate to select the "important" predictors. The asymptotic false discovery proportion (AFDP) and true positive proportion (ATPP) of these TVS are evaluated. We prove that for a fixed ATPP, in order to obtain a smaller AFDP, one should pick a bridge estimator with smaller asymptotic mean square error in the first stage of TVS. Based on such principled discovery, we present a sharp comparison of different TVS, via an in-depth investigation of the estimation properties of bridge estimators. Rather than "order-wise" error bounds with loose constants, our analysis focuses on precise error characterization. Various interesting signal-to-noise ratio and sparsity settings are studied. Our results offer new and thorough insights into high-dimensional variable selection. For instance, we prove that a TVS with Ridge in its first stage outperforms TVS with other bridge estimators in large noise settings; two-stage LASSO becomes inferior when the signal is rare and weak. As a by-product, we show that two-stage methods outperform some standard variable selection techniques, such as LASSO and Sure Independence Screening, under certain conditions.
The class of $L_q$-regularized least squares (LQLS) estimators is considered for estimating a $p$-dimensional vector $\beta$ from its $n$ noisy linear observations $y = X\beta + w$. The performance of these schemes is studied under the high-dimensional asymptotic setting in which $p$ grows linearly with $n$. In this asymptotic setting, phase transition diagrams (PT) are often used for comparing the performance of different estimators. Although phase transition analysis is shown to provide useful information for compressed sensing, the fact that it ignores the measurement noise not only limits its applicability in many application areas, but also may lead to misunderstandings. For instance, consider a linear regression problem in which $n > p$ and the signal is not exactly sparse. If the measurement noise is ignored in such systems, regularization techniques, such as LQLS, seem to be irrelevant since even the ordinary least squares (OLS) returns the exact solution. However, it is well-known that if $n$ is not much larger than $p$ then regularization techniques improve the performance of OLS. In response to this limitation of PT analysis, we consider the low-noise sensitivity analysis. We show that this analysis framework (i) reveals the advantage of LQLS over OLS, (ii) captures the difference between different LQLS estimators even when $n > p$, and (iii) provides a fair comparison among different estimators in high signal-to-noise ratios. As an application of this framework, we will show that under mild conditions LASSO outperforms other LQLS even when the signal is dense. Finally, by a simple transformation we connect our low-noise sensitivity framework to the classical asymptotic regime in which $n/p$ goes to infinity and characterize how and when regularization techniques offer improvements over ordinary least squares, and which regularizer gives the most improvement when the sample size is large.
In this paper, we study the popularly dubbed matrix completion problem, where the task is to "fill in" the unobserved entries of a matrix from a small subset of observed entries, under the assumption that the underlying matrix is of low rank. Our contributions herein enhance our prior work on nuclear norm regularized problems for matrix completion (Mazumder et al., 2010) by incorporating a continuum of nonconvex penalty functions between the convex nuclear norm and nonconvex rank functions. Inspired by SOFT-IMPUTE (Mazumder et al., 2010; Hastie et al., 2016), we propose NC-IMPUTE, an EM-flavored algorithmic framework for computing a family of nonconvex penalized matrix completion problems with warm-starts. We present a systematic study of the associated spectral thresholding operators, which play an important role in the overall algorithm. We study convergence properties of the algorithm. Using structured low-rank SVD computations, we demonstrate the computational scalability of our proposal for problems up to the Netflix size (approximately, a $500,000 \times 20,000$ matrix with $10^8$ observed entries). We demonstrate that on a wide range of synthetic and real data instances, our proposed nonconvex regularization framework leads to low-rank solutions with better predictive performance when compared to those obtained from nuclear norm problems. Implementations of the algorithms proposed herein, written in the R programming language, are made available on GitHub.
We consider a probit model without covariates, with the latent Gaussian variables having a compound symmetry covariance structure with a single parameter characterizing the common correlation. We study the parameter estimation problem under such one-parameter probit models. As a surprise, we demonstrate that the likelihood function does not yield consistent estimates for the correlation. We then formally prove the parameter's nonestimability by deriving a non-vanishing minimax lower bound. This counter-intuitive phenomenon provides an interesting insight: one bit of information from the latent Gaussian variables is not sufficient to consistently recover their correlation. On the other hand, we further show that trinary data generated from the Gaussian variables can consistently estimate the correlation with parametric convergence rate. Hence we reveal a phase transition phenomenon regarding the discretization of latent Gaussian variables while preserving the estimability of the correlation.
We study the problem of estimating $\beta \in \mathbb{R}^p$ from its noisy linear observations $y= X\beta+ w$, where $w \sim N(0, \sigma_w^2 I_{n\times n})$, under the following high-dimensional asymptotic regime: given a fixed number $\delta$, $p \rightarrow \infty$, while $n/p \rightarrow \delta$. We consider the popular class of $\ell_q$-regularized least squares (LQLS) estimators, a.k.a. bridge, given by the optimization problem: \begin{equation*} \hat{\beta} (\lambda, q ) \in \arg\min_\beta \frac{1}{2} \|y-X\beta\|_2^2+ \lambda \|\beta\|_q^q, \end{equation*} and characterize the almost sure limit of $\frac{1}{p} \|\hat{\beta} (\lambda, q )- \beta\|_2^2$. The expression we derive for this limit does not have an explicit form and hence is not useful in comparing different algorithms, or providing information in evaluating the effect of $\delta$ or the sparsity level of $\beta$. To simplify the expressions, researchers have considered the ideal "no-noise" regime and have characterized the values of $\delta$ for which the almost sure limit is zero. This is known as the phase transition analysis. In this paper, we first perform the phase transition analysis of LQLS. Our results reveal some of the limitations and misleading features of the phase transition analysis. To overcome these limitations, we propose the study of these algorithms under the low-noise regime. Our new analysis framework not only sheds light on the results of the phase transition analysis, but also makes an accurate comparison of different regularizers possible.
In the ultrahigh dimensional setting, independence screening has been both theoretically and empirically proved to be a useful variable selection framework with low computation cost. In this work, we propose a two-step framework using marginal information in a different perspective from independence screening. In particular, we retain significant variables rather than screening out irrelevant ones. The new method is shown to be model selection consistent in the ultrahigh dimensional linear regression model. To improve the finite sample performance, we then introduce a three-step version and characterize its asymptotic behavior. Simulations and real data analysis show advantages of our method over independence screening and its iterative variants in certain regimes.
Community detection is one of the fundamental problems in the study of network data. Most existing community detection approaches only consider edge information as inputs, and the output could be suboptimal when nodal information is available. In such cases, it is desirable to leverage nodal information for the improvement of community detection accuracy. Towards this goal, we propose a flexible network model incorporating nodal information, and develop likelihood-based inference methods. For the proposed methods, we establish favorable asymptotic properties as well as efficient algorithms for computation. Numerical experiments show the effectiveness of our methods in utilizing nodal information across a variety of simulated and real network data sets.
In many application areas we are faced with the following question: Can we recover a sparse vector $x_o \in \mathbb{R}^N$ from its undersampled set of noisy observations $y \in \mathbb{R}^n$, $y=A x_o+w$? The last decade has witnessed a surge of algorithms and theoretical results addressing this question. One of the most popular algorithms is the $\ell_p$-regularized least squares (LPLS) given by the following formulation: \[ \hat{x}(\gamma,p )\in\arg\min_x \frac{1}{2}\|y - Ax\|_2^2+\gamma\|x\|_p^p, \] where $p \in [0,1]$. Despite the non-convexity of these problems for $p<1$, they are still appealing because of the following folklores in compressed sensing: (i) $\hat{x}(\gamma,p)$ is closer to $x_o$ than $\hat{x}(\gamma,1)$. (ii) If we employ iterative methods that aim to converge to a local minimum of LPLS, then under good initialization these algorithms converge to a solution that is closer to $x_o$ than $\hat{x}(\gamma,1)$. In spite of the existence of plenty of empirical results that support these folklore theorems, the theoretical progress to establish them has been very limited. This paper aims to study the above folklore theorems and establish their scope of validity. Starting with the approximate message passing (AMP) algorithm as a heuristic method for solving LPLS, we study the impact of initialization on the performance of AMP. Then, we employ the replica analysis to show the connection between the solution of AMP and $\hat{x}(\gamma, p)$ in the asymptotic settings. This enables us to compare the accuracy of $\hat{x}(\gamma,p)$ for $p \in [0,1]$. In particular, we will characterize the phase transition and noise sensitivity of LPLS for every $0\leq p\leq 1$ accurately. Our results in the noiseless setting confirm that LPLS exhibits the same phase transition for every $0\leq p <1$ and this phase transition is much higher than that of LASSO.
One way to formalize the concept is to approximate $X(t)$ by a Markov jump process, then define the concept for this spatial approximation (where it makes perfect sense), and then take the limit. To construct this approximation, one approximates the infinitesimal generator $L_t$ of the SDE by a suitable spatial difference approximation (see Chapter 2 of this reference for more detail). Let $X^h(t)$ be the resulting Markov jump process where $h$ is a jump size parameter. Note that this process only moves by jumps, and if the approximation is stable, the number of jumps in a finite time interval is a.s. bounded. Define the
positive innovation of $X(t)$ as:$$Y(t) = \lim_{h \to 0} \sum_{s \le t} \max( \Delta X^h(s), 0 ) $$ where $\Delta X^h(s)$ denotes the jump of $X^h$ at $s$ defined as $\Delta X^h(s) = X^h(s+) - X^h(s-)$. This definition leads to a nontrivial $Y(t)$. Indeed, since $\max(a,0)=(a+|a|)/2 \ge a/2$, we have that$$ \sum_{s \le t} |\Delta X^h(s)| \ge \sum_{s \le t} \max( \Delta X^h(s), 0 ) \ge \sum_{s \le t} \frac{1}{2} \Delta X^h(s) \tag{$*$}$$ almost surely. Note that the expected values of the random variables appearing in the upper and lower bounds are well-defined in the limit as $h \to 0$. For example, the expected value of the lower bound converges to $\mathbb{E}_x \int_0^t \mu(s,X(s))/2 ds$, I think.
First, let us confirm numerically that $Y(t)$ is an integrable random variable. For this purpose, suppose $X(t)$ is an OU process with unit drift/noise coefficients; and let $\overline{Y}^h(t)$, $Y^h(t)$ and $\underline{Y}^h(t)$ denote the upper, middle, and lower bounds appearing in ($*$). The figures below plot these quantities for $t=1$ with initial condition $X(0)=1$.
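To make this numerical confirmation reproducible, here is a minimal Gillespie-style sketch (my code, not part of the answer): it simulates one path of the birth-death approximation $X^h$ of the OU process (unit drift/noise coefficients, $X(0)=1$), with up-rate $e^{-hx}/(2h^2)$, down-rate $e^{hx}/(2h^2)$ and jumps of size $\pm h$, and accumulates the three sums in ($*$).

```python
import math, random

# Sketch (my code): simulate one path of the birth-death approximation X^h
# of the OU process dX = -X dt + dW, with up-rate e^{-h x}/(2 h^2),
# down-rate e^{h x}/(2 h^2) and jumps of size +-h, accumulating the
# three sums appearing in (*).
def simulate_bounds(t_final=1.0, h=0.05, x0=1.0, seed=0):
    rng = random.Random(seed)
    x, t = x0, 0.0
    upper = middle = lower = 0.0
    while True:
        up = math.exp(-h * x) / (2 * h * h)      # rate of a +h jump
        down = math.exp(h * x) / (2 * h * h)     # rate of a -h jump
        t += rng.expovariate(up + down)          # exponential holding time
        if t > t_final:
            break
        jump = h if rng.random() < up / (up + down) else -h
        x += jump
        upper += abs(jump)                       # sum of |Delta X^h|
        middle += max(jump, 0.0)                 # sum of max(Delta X^h, 0)
        lower += 0.5 * jump                      # sum of Delta X^h / 2
    return upper, middle, lower
```

On every path the inequalities in ($*$) hold term by term, so the returned triple always satisfies `upper >= middle >= lower`.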
Why are these random variables integrable? The action of the infinitesimal generator of this particular approximation is given by:$$L^h f(x) = \frac{e^{-h x}}{2 h^2} \left( f(x+h) - f(x) \right) + \frac{e^{h x}}{2 h^2} \left( f(x-h) - f(x) \right)$$If $f \in C^4_b(\mathbb{R})$, a Taylor expansion about $h=0$ shows that $L^h f(x) = L f(x) + O(h^2)$ where $L f(x) = - x f'(x) + f''(x)/2$. Moreover, for any $h>0$, this Markov jump process approximation is right continuous with left limits and is a process of finite variation with zero continuous part. Thus, Ito's formula for this process reduces to a telescoping sum:$$f(X^h(t)) - f(X^h(0)) = \sum_{s \le t} \left( f(X^h(s)) - f(X^h(s-)) \right)$$Add to both sides of the equation $- \int_0^t L^h f(X^h(s)) ds$ to obtain$$f(X^h(t)) - f(X^h(0)) - \int_0^t L^h f(X^h(s)) ds = \sum_{s \le t} \left( f(X^h(s)) - f(X^h(s-)) \right) - \int_0^t L^h f(X^h(s)) ds$$ where the LHS is a local martingale (and for suitable functions a true martingale). Thus, we see that $- \int_0^t L^h f(X^h(s)) ds$ is the compensator for $\sum_{s \le t} \left( f(X^h(s)) - f(X^h(s-)) \right)$. Now choose $f(x) = x^2$ and $f(x)=x$ to obtain upper and lower bounds on $Y^h(t)$. |
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/$\psi$ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/$\psi$ have been measured with ALICE for Pb-Pb collisions ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Please help me to find out the fallacy.
Note by Trishit Chandra 4 years, 7 months ago
There's another definition for the gamma function. Check it out here: Gamma Function
I saw that there is another definition of the gamma function, but there is no other definition of factorial which implies the above derivation. And as the formulas I have written in the first portion are correct, the derivation should be correct, yet there is a contradiction between the definition of factorial and the formulas.
Definitions can be extended to include more cases than what you initially dealt with.
A typical example that you should be familiar with, is that in (say) 2nd grade you were told that "multiplication is just repeated addition. To find $4 \times 5$, you draw out 4 groups of 5 items, and count them to get 20". However, that "logic" quickly fails when you try and extend it to fractions / irrational numbers. Pray tell, how do you draw out $\sqrt{2}$ groups of $\pi$ items and count them?
The next example would be exponentiation, which is (similarly) often introduced as "exponentiation is just repeated multiplication". To find $4^5$, we do $4 \times 4 \times 4 \times 4 \times 4$. Pray tell, how would you write $\sqrt{2}^{\pi}$?
So, is the process of this derivation wrong ?
So you are saying that the factorial of a fraction should be possible, but this cannot be possible according to the definition of it. And hence the derivation is correct.
We can take the factorial of fractions, and/or irrational numbers. The definition of it is not $x! = x \times (x-1) \times (x-2) \times \ldots$ where the last term is between 0 and 1.
We can also extend this to negative numbers, but not to the negative integers.
The above derivation is correct, but it does not explain how it calculated $\Gamma\left(\frac{1}{2}\right)$. With that as the working assumption (ie not proven), the rest of the steps look correct.
@Calvin Lin – Okay. Thank you Calvin Lin for helping me. Now I've understood. Can you refer me to some link or book where I can find this conception of factorial, because Wikipedia says that factorial means the product of the positive integers less than or equal to the number. And you said in the last line (in brackets) that something is not proven; what is that?
See Trishit, a special form of the gamma function (the one you have mentioned in your problem) is actually an improper integral of the following form: $$\Gamma(t)=\int_{0}^{\infty}{x}^{t-1}{e}^{-x}\,dx\qquad,$$ the domain of this function being the set of all complex numbers $t$ with $\operatorname{Re}(t)>0$. The gamma function reduces to the simple factorial function defined by $$n!=\prod_{i=1}^{n}i \quad\text{when } n\ge 1$$ and $n!=1$ when $n=0$, which has as its domain the set of non-negative integers. The factorial function turns out to be a very special form of the gamma function on the set of non-negative integers. Clearly, the domain of the factorial function does not contain fractions like 2.5, and thus $(2.5)!$ is not defined. Your fallacy is that you are considering the factorial and gamma functions to be the same. Well, you cannot call two functions the same if their domains are different.
Got it. And thank you very much.
Welcome. You can learn more about functions like gamma and beta in the topic of improper integrals. There you will learn why $\Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}$. This result can be proved from Euler's reflection formula. I give a short sketch of the proof here. The reflection formula states (this can also be proved): $$\Gamma(t)\Gamma(1-t)=\frac{\pi}{\sin \pi t}$$ $$\Longrightarrow \Gamma\left(\frac{1}{2}\right)\Gamma\left(1-\frac{1}{2}\right)=\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right)=\left\{\Gamma\left(\frac{1}{2}\right)\right\}^{2}=\frac{\pi}{\sin\frac{\pi}{2}}=\pi$$ $$\Longrightarrow \Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}$$
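For anyone who wants to check these facts numerically, here is a quick sketch using Python's standard library (my code, not part of the discussion):

```python
import math

# Quick numerical check (my code): the gamma function extends the
# factorial, Gamma(n+1) = n!, and Gamma(1/2) = sqrt(pi).
for n in range(8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))
print(math.gamma(0.5))  # 1.7724538509055159 = sqrt(pi)
```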
@Kuldeep Guha Mazumder – Thanks for providing the insight!
Could you look over this Gamma Function wiki page and provide us with feedback about areas where it can be improved?
To fill the Schrödinger equation, $\hat{H}\psi=E\psi$, with a bit of life, we need to add the specifics for the system of interest, here the hydrogen-like atom. A
hydrogen-like atom is an atom consisting of a nucleus and just one electron; the nucleus can be bigger than just a single proton, though. H atoms, He$^+$ ions, Li$^{2+}$ ions etc. are hydrogen-like atoms in this context. We'll see later how we can use the exact solution for the hydrogen-like atom as an approximation for multi-electron atoms.
The potential $V$ between two charges is best described by a Coulomb term, $$V(r)=-\frac{Ze^2}{4\pi\epsilon_0r}\qquad,$$ where $Ze$ is the charge of the nucleus (
$Z=1$ being the hydrogen case, $Z=2$ helium, etc.), the other $e$ is the charge of the single electron, and $\epsilon_0$ is the permittivity of vacuum (no relative permittivity is needed as the space inside the atom is "empty").
With the system consisting of two masses, we can define the
reduced mass, i.e. the equivalent mass a point located at the centre of gravity of the system would have: $\mu=\frac{mM}{m+M}$, where $M$ is the mass of the nucleus and $m$ the mass of the electron.
Thus, the hydrogen atom's
Hamiltonian is $$\hat{H}=-\frac{\hbar^2}{2\mu}\nabla^2-\frac{Ze^2}{4\pi\epsilon_0r}\qquad.$$
The Schrödinger equation of the hydrogen atom in polar coordinates is: $$-\frac{\hbar^2}{2\mu}\left[\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right)+\frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{r^2\sin^2\theta}\frac{\partial^2\psi}{\partial\phi^2}\right]-\frac{Ze^2}{4\pi\epsilon_0r}\psi=E\psi$$ Both LHS and RHS contain a term linear in $\psi$, so combine: $$\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right)+\frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{r^2\sin^2\theta}\frac{\partial^2\psi}{\partial\phi^2}+\frac{2\mu}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\epsilon_0r}\right)\psi=0$$
Using the Separation of Variables idea, we assume a product solution of a radial and an angular function: $$\psi(r,\theta,\phi)=R(r)\cdot Y(\theta,\phi)\qquad.$$ Since $Y$ does not depend on $r$, we can move it in front of the radial derivative: $$\frac{\partial\psi}{\partial r}=\frac{\partial}{\partial r}RY=Y\frac{{\rm d}R}{{\rm d}r}\qquad,$$ and, similarly, $R$ does not depend on the angular variables. Thus replace $\psi$ and the differentials: $$\frac{Y}{r^2}\frac{\rm d}{{\rm d}r}\left(r^2\frac{{\rm d}R}{{\rm d}r}\right)+\frac{R}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right)+\frac{R}{r^2\sin^2\theta}\frac{\partial^2Y}{\partial\phi^2}+\frac{2\mu}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\epsilon_0r}\right)RY=0\qquad.$$ Multiply by $r^2$ and divide by $RY$ to separate the radial and angular terms: $$\bbox[pink]{\frac{1}{R}\frac{\rm d}{{\rm d}r}\left(r^2\frac{{\rm d}R}{{\rm d}r}\right)}+\bbox[lightblue]{\frac{1}{Y\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right)+\frac{1}{Y\sin^2\theta}\frac{\partial^2Y}{\partial\phi^2}}+\bbox[pink]{\frac{2\mu r^2}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\epsilon_0r}\right)}=0\qquad.$$ The first and fourth terms depend on $r$ only, the middle terms depend on the angles only. They can only balance each other for all points in space if the radial and angular terms are the same constant but with opposite sign.
Therefore, we can separate into a radial equation: $$\bbox[pink]{\frac{\rm d}{{\rm d}r}\left(r^2\frac{{\rm d}R}{{\rm d}r}\right)+\frac{2\mu r^2}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\epsilon_0r}\right)R}-AR=0$$ ...and an angular equation: $$\bbox[lightblue]{\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right)+\frac{1}{\sin^2\theta}\frac{\partial^2Y}{\partial\phi^2}}+AY=0\qquad,$$ where $A$ is the separation constant.
The angular part still contains terms depending on both $\theta$ and $\phi$. Another separation of variables is needed.
We'll replace $Y$ by a product of single-variable functions: $$Y(\theta,\phi)=\Theta(\theta)\cdot\Phi(\phi)\qquad.$$ Replacing $Y$ and the differentials, we have $$\frac{\Phi}{\sin\theta}\frac{\rm d}{{\rm d}\theta}\left(\sin\theta\frac{{\rm d}\Theta}{{\rm d}\theta}\right)+\frac{\Theta}{\sin^2\theta}\frac{{\rm d}^2\Phi}{{\rm d}\phi^2}+A\Phi\Theta=0\qquad.$$ Isolate variables in separate terms: $$\bbox[lightgreen]{\frac{\sin\theta}{\Theta}\frac{\rm d}{{\rm d}\theta}\left(\sin\theta\frac{{\rm d}\Theta}{{\rm d}\theta}\right)}+\bbox[yellow]{\frac{1}{\Phi}\frac{{\rm d}^2\Phi}{{\rm d}\phi^2}}+\bbox[lightgreen]{A\sin^2\theta}=0\qquad.$$ With $B$ as separation constant, we have a polar part: $$\bbox[lightgreen]{\frac{\sin\theta}{\Theta}\frac{\rm d}{{\rm d}\theta}\left(\sin\theta\frac{{\rm d}\Theta}{{\rm d}\theta}\right)+A\sin^2\theta}-B=0$$ ...and an azimuth part: $$\bbox[yellow]{\frac{1}{\Phi}\frac{{\rm d}^2\Phi}{{\rm d}\phi^2}}+B=0\qquad.$$
The azimuth equation: $$\frac{{\rm d}^2\Phi}{{\rm d}\phi^2}+B\Phi=0$$ is a 2nd order ODE with constant coefficients solved by: $$\Phi(\phi)=c_1{\rm e}^{{\rm i}m\phi}+c_2{\rm e}^{-{\rm i}m\phi}\qquad,$$ where $B=m^2$.
The angle $\phi$ is the azimuth,
i.e. if you think of the atom as a globe, then $\phi$ is the longitude of the position of the electron. As long as there is no external reason to do otherwise, we can choose the "Greenwich meridian" of the atom in a mathematically convenient way by setting $c_2=0$.
Note that $m$ must be an integer number - otherwise the value of the azimuth wave function would be different for $\phi=0^{\rm o}$ and $\phi=360^{\rm o}$. In quantum terminology, $m$ is called a
quantum number as it restricts the possible values of the wave function (and hence of observables) to integer multiples (quanta) of a base unit.
Thus, the azimuth part of the wave function is $\Phi_m(\phi)=c_1{\rm e}^{{\rm i}m\phi}$.
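A tiny numerical illustration of the quantization argument (my sketch, not part of the notes): the azimuth factor is single-valued, $\Phi_m(\phi+2\pi)=\Phi_m(\phi)$, exactly when $e^{2\pi{\rm i}m}=1$, i.e. when $m$ is an integer.

```python
import cmath

# Check (my sketch): exp(i m (phi + 2 pi)) = exp(i m phi) requires
# exp(2 pi i m) = 1, which holds only for integer m.
def single_valued(m, tol=1e-9):
    return abs(cmath.exp(2j * cmath.pi * m) - 1) < tol

assert all(single_valued(m) for m in (-2, -1, 0, 1, 2))
assert not single_valued(0.5)   # a half-integer m would be multi-valued
```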
With $B=m^2$, the polar equation is: $$\frac{\sin\theta}{\Theta}\frac{\rm d}{{\rm d}\theta}\left(\sin\theta\frac{{\rm d}\Theta}{{\rm d}\theta}\right)+A\sin^2\theta-m^2=0\qquad.$$ Rearrange: $$\frac{1}{\sin\theta}\frac{\rm d}{{\rm d}\theta}\left(\sin\theta\frac{{\rm d}\Theta}{{\rm d}\theta}\right)+\left(A-\frac{m^2}{\sin^2\theta}\right)\Theta=0\qquad.$$
We can express the wave function as depending on $\cos\theta$ rather than on $\theta$ itself.
To figure this out, think of the function $y(x)=x^4$. If you plot the function logarithmically, you are effectively plotting a new function $z(\log x)=4(\log x)$ on a linear scale, where $(\log x)$ is the independent variable. Try it yourself!
Substituting $P(\cos\theta):=\Theta(\theta)$ and $x:=\cos\theta$ and hence the differential, $$\frac{\rm d}{{\rm d}\theta}=\frac{{\rm d}x}{{\rm d}\theta}\frac{\rm d}{{\rm d}x}=-\sin\theta\frac{\rm d}{{\rm d}x}\qquad,$$ leaves us with $$\frac{1}{\sin\theta}(-\sin\theta)\frac{\rm d}{{\rm d}x}\left(\sin\theta(-\sin\theta)\frac{{\rm d}P}{{\rm d}x}\right)+\left(A-\frac{m^2}{\sin^2\theta}\right)P=0\qquad.$$ Tidy up: $$\frac{\rm d}{{\rm d}x}\left(\sin^2\theta\frac{{\rm d}P}{{\rm d}x}\right)+\left(A-\frac{m^2}{\sin^2\theta}\right)P=0\qquad.$$ Because of $\sin^2\theta+\cos^2\theta=1$, we can further substitute $\sin^2\theta=1-\cos^2\theta=1-x^2$ and get $$\frac{\rm d}{{\rm d}x}\left((1-x^2)\frac{{\rm d}P}{{\rm d}x}\right)+\left(A-\frac{m^2}{1-x^2}\right)P=0\qquad.$$ Apply the product rule to the first term: $$(1-x^2)\frac{{\rm d}^2P}{{\rm d}x^2}-2x\frac{{\rm d}P}{{\rm d}x}+\left(A-\frac{m^2}{1-x^2}\right)P=0\qquad.$$
Unfortunately, the coefficients in this ODE are not constant but depend on $x$, so the recipe for ODEs with constant coefficients doesn't really help here. However, it is a known type of differential equation (called the
associated Legendre-type DE), for which a solution is known in the maths literature. The solutions are known as associated Legendre polynomials, and they contain a power series with recursive coefficients.
Legendre's polynomials: $$P_l^m=(1-x^2)^{\frac{m}{2}}\left(a_0\sum_{n=0}^{\infty}\frac{a_{2n}}{a_0}x^{2n}+a_1\sum_{n=0}^{\infty}\frac{a_{2n+1}}{a_1}x^{2n+1}\right)$$ with the coefficients $$a_{n+2}=\frac{(n+m)(n+m+1)-A}{(n+1)(n+2)}a_n\qquad.$$
This means that there are two power series (for the even and odd terms, respectively) and that the coefficients of the higher terms can be calculated recursively if the first coefficient of each series, $a_0$ and $a_1$, is known. In the recursion formula, $A$ and $m$ are the constants we have from the previous parts of the solution strategy, while $n$ is the index variable of the two power series.
A series solution is only helpful if the series converges so that it can be truncated as soon as the solution is sufficiently accurate. For the recursion formula above, the series converges if $A=l(l+1)$ where $l$ is an integer number. The root coefficients of the two series, $a_0$ and $a_1$, are chosen depending on the particular value of $l$ to ensure only the convergent series survives. The first few Legendre functions are:
| | $l=0$ | $l=1$ | $l=2$ |
|---|---|---|---|
| $m=0$ | $P_0^0(x)=1$ | $P_1^0(x)=x$ | $P_2^0(x)=\frac{1}{2}(3x^2-1)$ |
| $m=1$ | --- | $P_1^1(x)=\sqrt{1-x^2}$ | $P_2^1(x)=3x\sqrt{1-x^2}$ |
| $m=2$ | --- | --- | $P_2^2(x)=3-3x^2$ |
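The recursion can be checked against these closed forms. Below is a minimal sketch (the function name and the choice of root coefficients are mine): it builds the terminating series with $A=l(l+1)$, zeroing the series of the other parity, and multiplies by $(1-x^2)^{m/2}$.

```python
import math

# Sketch (my code): build P_l^m from the recursion
# a_{n+2} = ((n+m)(n+m+1) - A) / ((n+1)(n+2)) * a_n,  with A = l(l+1).
# Only the series of parity (l - m) terminates; its root coefficient is
# chosen to match the conventional normalization in the table above.
def legendre_series(l, m, x, root, terms=12):
    A = l * (l + 1)
    a = [0.0] * terms
    a[(l - m) % 2] = root            # sets a_0 (even) or a_1 (odd)
    for n in range(terms - 2):
        a[n + 2] = ((n + m) * (n + m + 1) - A) / ((n + 1) * (n + 2)) * a[n]
    series = sum(c * x**k for k, c in enumerate(a))
    return (1 - x * x) ** (m / 2) * series

x = 0.3
assert abs(legendre_series(1, 0, x, root=1) - x) < 1e-12                       # P_1^0
assert abs(legendre_series(2, 0, x, root=-0.5) - 0.5 * (3*x*x - 1)) < 1e-12    # P_2^0
assert abs(legendre_series(2, 1, x, root=3) - 3*x*math.sqrt(1 - x*x)) < 1e-12  # P_2^1
assert abs(legendre_series(2, 2, x, root=3) - (3 - 3*x*x)) < 1e-12             # P_2^2
```

Note how each terminating series illustrates the convergence requirement: with $A=l(l+1)$, the coefficient $a_{l+2-m}$ of the chosen parity vanishes, so the series is in fact a polynomial.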
The value of $l$ limits the choices of $m$; $m$ must have a value between $-l$ and $l$. As far as the polar part is concerned, the $\pm m$ solutions are equivalent, but the sign of $m$ makes a difference to the azimuth part as seen above.
Remember to resubstitute $P$ and $x$ when using these polar wave functions.
In the radial equation, $$\frac{\rm d}{{\rm d}r}\left(r^2\frac{{\rm d}R}{{\rm d}r}\right)+\frac{2\mu r^2}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\epsilon_0r}\right)R-l(l+1)R=0\qquad,$$ apply the product rule to the first term: $$r^2\frac{{\rm d}^2R}{{\rm d}r^2}+2r\frac{{\rm d}R}{{\rm d}r}+\frac{2\mu r^2}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\epsilon_0r}\right)R-l(l+1)R=0\qquad,$$ and divide by $r^2$: $$\frac{{\rm d}^2R}{{\rm d}r^2}+\bbox[lightblue]{\frac{2}{r}\frac{{\rm d}R}{{\rm d}r}}+\left(\frac{2\mu}{\hbar^2}\left(E+\bbox[lightblue]{\frac{Ze^2}{4\pi\epsilon_0r}}\right)\bbox[lightblue]{-\frac{l\left(l+1\right)}{r^2}}\right)R=0\qquad.$$
We can't solve this straight away, but for very large $r$, the highlighted terms are forced to zero because they go reciprocal with $r$.
That leaves us with an asymptotic equation: $$\frac{{\rm d}^2R_{\infty}}{{\rm d}r^2}+\frac{2\mu E}{\hbar^2}R_{\infty}=0\qquad,$$ which is another ODE with constant coefficients. Solution: $$R_{\infty}=c_3\exp{\left({\rm i}\sqrt{\frac{2\mu E}{\hbar^2}}r\right)}+c_4\exp{\left(-{\rm i}\sqrt{\frac{2\mu E}{\hbar^2}}r\right)}\qquad.$$
It makes sense to use as the zero point of potential energy the energy of a free electron,
i.e. in this asymptotic case, $E\to 0$ for an electron far away from the nucleus, as it is practically free. Since the presence of the positive charge in the nucleus stabilises the atom, we must look for solutions where $E$ becomes negative as the electron comes closer to the nucleus. These two conditions are met if we choose $c_4=0$ and use the fact that $E\lt 0$ to get rid of the imaginary unit.
The asymptotic solution is then $R_{\infty}=c_3\exp\left(-\sqrt{-\frac{2\mu E}{\hbar^2}}\,r\right)$.
The detail nearer the nucleus is expanded in a power series: $R=R_{\infty}\sum_{q=0}^{\infty}b_qr^q$.
This results in a series of powers of $r$ whose coefficients must all be zero to match the RHS of the differential equation. From that, a recursion formula is derived for the $b_q$, and the requirement for the series to converge produces another quantum number, $n$.
This results in the radial solution $$R_{n,l}(r)=R_{\infty}(r)\sum_{q=0}^{\infty}b_qr^q\qquad,$$ where the series terminates (a non-terminating series would grow like $\exp\left(\frac{\mu Ze^2r}{2\pi\epsilon_0\hbar^2n}\right)$ and overwhelm $R_{\infty}$) and the coefficients $b_q$ contain the $l$-dependence.
At the same time, the solution of the radial part also fixes the possible energy levels by linking them to the quantum number $n$. The energy levels of the hydrogen-like atom are given by $$E_n=-\frac{\mu Z^2e^4}{32\pi^2\epsilon_0^2\hbar^2n^2}\qquad.$$
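As a numerical sanity check of the energy-level result for the hydrogen-like atom, $E_n=-\mu Z^2e^4/(32\pi^2\epsilon_0^2\hbar^2n^2)$, the sketch below (my code; constants are CODATA values) reproduces the familiar $-13.6\ \mathrm{eV}$ hydrogen ground state:

```python
import math

# Check (my sketch): evaluate E_n = -mu Z^2 e^4 / (32 pi^2 eps0^2 hbar^2 n^2)
# for hydrogen (Z = 1), using CODATA constants in SI units.
e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J s
m_e  = 9.1093837015e-31     # electron mass, kg
m_p  = 1.67262192369e-27    # proton mass, kg
mu   = m_e * m_p / (m_e + m_p)   # reduced mass of the electron-proton system

def E_n(n, Z=1):
    return -mu * Z**2 * e**4 / (32 * math.pi**2 * eps0**2 * hbar**2 * n**2)

print(E_n(1) / e)   # about -13.6 (eV)
print(E_n(2) / e)   # about -3.4 (eV), i.e. E_1 / 4
```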
The full solution of the Schrödinger equation of the hydrogen-like atom is, according to the separation approach taken: $$\psi_{n,l,m}(r,\theta,\phi)=R_{n,l}(r)\cdot\Theta_{l,m}(\theta)\cdot\Phi_m(\phi)\qquad.$$
In solving the Schrödinger equation of the hydrogen atom, we have encountered three quantum numbers. Two of them, $m$ and $l$, arise from the separation constants of the $R$/$Y$ and $\theta$/$\phi$ separations. The possible values of the separation constant are restricted to integer numbers by boundary conditions (the need for the azimuth wave function to return to its value after a full $360^{\rm o}$ turn of $\phi$ and the need for the power series in the Legendre polynomial to converge to produce a physically sensible solution). The third quantum number, $n$, arises, again, from the need to have a convergent series representing the non-asymptotic part of the radial function.
| $n=$ | 1 | 2 | 3 | ... |
|---|---|---|---|---|
| $l=$ | 0 | 0, 1 | 0, 1, 2 | $0\ldots(n-1)$ |
| $m=$ | 0 | $0$; $-1,0,+1$ | $0$; $-1,0,+1$; $-2,-1,0,+1,+2$ | $-l\ldots+l$ |
The quantum numbers are not independent; the choice of $n$ limits the choice of $l$, which in turn limits the choice of $m$. A fourth quantum number, $s$, does not follow directly from solving the Schrödinger equation but is to do with spin. The possible combinations of quantum numbers are given in the table.
Note that the energy of a state (
i.e. of a wave function) depends only on $n$ but not on the other quantum numbers. This degeneracy is only strictly true for the hydrogen-like atom; any approximate solutions for higher atoms cause a dependence of the energy eigenvalue of a state on all quantum numbers.
Does Smeaton's coefficient, k, have a modern value or is it dependent on the air density?
Why is the accepted value of k so high?
In various texts about the Wright brothers (see 1 and 2) one can read about Smeaton's coefficient that troubled them a lot and that they finally discovered the parameter had a much lower value reaching the conclusion $k = 0.0033 lbf/ft^2/mph^2 = 0.79 kg/m^3$ (instead of $k = 0.005$), a fact also noticed by others before them.
"the Wright brothers calculated a new average value of 0.0033. Modern aerodynamicists have confirmed this figure to be accurate within a few percent." Source: Correcting Smeaton's Coefficient
$L = k \cdot S \cdot V^2 \cdot C_L$
$L$ = lift in pounds
$k$ = coefficient of air pressure (Smeaton coefficient)
$S$ = total area of lifting surface in square feet
$V$ = velocity (headwind plus ground speed) in miles per hour
$C_L$ = coefficient of lift (varies with wing shape)"
However, knowing that the modern formula for lift is $$L = 0.5 \cdot \rho \cdot S \cdot V^2 \cdot C_L$$ Where $\rho$ = the air density.
It appears that $k = 0.5 \cdot \rho$ and so it does not have a standard average value. Also a $k = 0.0033 lbf/ft^2/mph^2 = 0.79 kg/m^3$ leads to a $\rho = k/0.5 = 1.58 kg/m^3$ that corresponds to a sea level air temperature well below -25 C, which is unusual.
If the two relations for lift are correct, Smeaton's coefficient cannot be 0.0033 but closer to 0.0025, a value corresponding to a standard air density at $20 ^\circ C$ close to $1.2 kg/m^3$.
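The unit conversion behind these numbers can be checked directly. The sketch below (my code; conversion factors from the standard definitions of lbf, ft and mph) confirms that $0.0033\ \mathrm{lbf/ft^2/mph^2}\approx 0.79\ \mathrm{kg/m^3}$, and that $k=0.0025$ implies $\rho=k/0.5\approx 1.2\ \mathrm{kg/m^3}$:

```python
# Unit-conversion check (my sketch): convert k from lbf/(ft^2 mph^2) to SI.
# In SI, L = k_SI * S * V^2 with k_SI in N/(m^2 (m/s)^2) = kg/m^3.
lbf = 4.4482216153      # newtons per pound-force
ft  = 0.3048            # metres per foot
mph = 0.44704           # (m/s) per mile-per-hour

def smeaton_si(k_imperial):
    return k_imperial * lbf / (ft**2 * mph**2)

print(smeaton_si(0.0033))        # about 0.79 kg/m^3 (the Wrights' value)
print(smeaton_si(0.0025) / 0.5)  # implied air density, about 1.2 kg/m^3
```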
Assume that the Lagrangian density
$$\tag{1} {\cal L} ~=~ {\cal L}(\phi(x), \partial \phi(x), x) $$
does not depend on higher-order derivatives $\partial^2\phi$, $\partial^3\phi$, $\partial^4\phi$, etc. Let
$$\tag{2} \pi^{\mu}_{\alpha} ~:=~ \frac{\partial {\cal L}}{ \partial (\partial_{\mu}\phi^{\alpha})} $$
denote the de Donder momenta, and let
$$\tag{3} E_{\alpha}~:=~ \frac{\partial {\cal L}}{ \partial \phi^{\alpha}} - d_{\mu} \pi^{\mu}_{\alpha} $$
denote the Euler-Lagrange equations. Let us for simplicity assume that the infinitesimal local quasi-symmetry$^1$ transformation
$$\tag{4} \delta_{\varepsilon} \phi^{\alpha}~=~ Y^{\alpha}(\varepsilon)~=~Y^{\alpha}\varepsilon + Y^{\alpha,\mu} d_{\mu}\varepsilon $$
is vertical$^2$ and that it does not depend on higher-order derivatives of the infinitesimal $x$-dependent parameter $\varepsilon$. [It is implicitly understood that the structure coefficients $Y^{\alpha}$ and $Y^{\alpha,\mu}$ are independent of the parameter $\varepsilon$. If the theory has more than one symmetry parameter $\varepsilon^a$, $a=1, \ldots m$, we are just investigating one local symmetry (and its conservation law) at a time.] The bare Noether current $j^{\mu}(\varepsilon)$ is the momenta times the symmetry generators
$$\tag{5} j^{\mu}\varepsilon + j^{\mu,\nu}d_{\nu}\varepsilon ~=~j^{\mu}(\varepsilon) ~:=~ \pi^{\mu}_{\alpha}Y^{\alpha}(\varepsilon) ,$$
$$\tag{6} j^{\mu}~:=~ \pi^{\mu}_{\alpha}Y^{\alpha}, \qquad j^{\mu,\nu}~:=~ \pi^{\mu}_{\alpha}Y^{\alpha,\nu}. $$
(Again, it is implicitly understood that the structure coefficients $j^{\mu}$ and $j^{\mu,\nu}$ are independent of the parameter $\varepsilon$, and so forth.) That the infinitesimal transformation (4) is a local quasi-symmetry$^1$ implies that the variation of the Lagrangian density ${\cal L}$ wrt. (4) is a total space-time divergence
$$ d_{\mu} f^{\mu}(\varepsilon) ~=~ \delta_{\varepsilon} {\cal L} ~\stackrel{\begin{matrix}\text{chain}\\ \text{rule}\end{matrix}}{=}~ \frac{\partial {\cal L}}{ \partial \phi^{\alpha}} Y^{\alpha}(\varepsilon) + \pi^{\mu}_{\alpha}d_{\mu}Y^{\alpha}(\varepsilon) $$$$\tag{7} ~\stackrel{\begin{matrix}\text{Leibniz'}\\ \text{rule}\end{matrix}}{=}~ E_{\alpha}Y^{\alpha}(\varepsilon) + d_{\mu} j^{\mu}(\varepsilon). $$
Here$^3$
$$ \tag{8} f^{\mu}(\varepsilon) ~=~ f^{\mu}\varepsilon + f^{\mu,\nu}d_{\nu}\varepsilon +\frac{1}{2} f^{\mu,\nu\lambda}d_{\nu}d_{\lambda}\varepsilon $$
are some functions with
$$\tag{9}f^{\mu,\nu\lambda}~=~f^{\mu,\lambda\nu}. $$
The full $\varepsilon$-dependent Noether current $J^{\mu}(\varepsilon)$ is defined as$^3$
$$\tag{10} J^{\mu}\varepsilon + J^{\mu,\nu}d_{\nu}\varepsilon +\frac{1}{2} J^{\mu,\nu\lambda}d_{\nu}d_{\lambda}\varepsilon ~=~J^{\mu}(\varepsilon) ~:=~ j^{\mu}(\varepsilon) - f^{\mu}(\varepsilon), $$
where
$$\tag{11}J^{\mu,\nu\lambda}~=~J^{\mu,\lambda\nu}. $$
Eqs. (7) and (10) imply the $\varepsilon$-dependent
off-shell Noether identity
$$ \tag{12} d_{\mu} J^{\mu}(\varepsilon) ~=~ -E_{\alpha}Y^{\alpha}(\varepsilon) . $$
The $\varepsilon$-dependent off-shell Noether identity (12) is the key identity. Decomposing it in its $\varepsilon$-independent components leads to the following set (13)-(16) of identities,
$$ \tag{13} d_{\mu}J^{\mu} ~=~-E_{\alpha} Y^{\alpha} , $$
$$ \tag{14} J^{\mu} + d_{\nu} J^{\nu,\mu}~=~-E_{\alpha} Y^{\alpha,\mu} ,$$
$$ \tag{15} J^{\nu,\lambda}+J^{\lambda,\nu}+d_{\mu}J^{\mu,\nu\lambda} ~=~0 , $$
$$ \tag{16} \sum_{{\rm cycl}.~\mu,\nu,\lambda}J^{\mu,\nu\lambda} ~=~0, $$
in accordance with Noether's second theorem. Eq. (13) is just the usual off-shell Noether identity, which can be derived from the global symmetry alone via Noether's first theorem (where $\varepsilon$ is $x$-independent). As is well-known, the eq. (13) implies an on-shell conservation law
$$ \tag{17} d_{\mu}J^{\mu}~\approx~ 0, $$
or more explicitly written as
$$ \tag{18} \frac{d Q}{dt}~\approx~ 0,\qquad Q~:=~\int_{V} \! d^3V ~J^0. $$
(Here the $\approx$ sign denotes equality modulo Euler-Lagrange equations $E_{\alpha}\approx 0$. We have assumed that the currents $J^i$, $i\in\{1,2,3\}$, vanish at the boundary $\partial V$.)
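As a concrete illustration of the off-shell Noether identity (13), consider a free real scalar field in 1+1 dimensions, ${\cal L}=\frac{1}{2}\partial_{\mu}\phi\,\partial^{\mu}\phi$, with the shift symmetry $\delta_{\varepsilon}\phi=\varepsilon$, so $Y=1$ and one may take $f^{\mu}=0$. A sketch with sympy (my code, not part of the answer):

```python
import sympy as sp

t, x = sp.symbols('t x')
phi = sp.Function('phi')(t, x)

# Free real scalar in 1+1D with metric (+,-): L = ((d_t phi)^2 - (d_x phi)^2)/2
L = sp.Rational(1, 2) * (sp.diff(phi, t)**2 - sp.diff(phi, x)**2)

# de Donder momenta pi^mu = dL/d(d_mu phi), cf. eq. (2)
pi_t = sp.diff(L, sp.diff(phi, t))
pi_x = sp.diff(L, sp.diff(phi, x))

# Euler-Lagrange expression E = dL/dphi - d_mu pi^mu, cf. eq. (3)
E = sp.diff(L, phi) - sp.diff(pi_t, t) - sp.diff(pi_x, x)

# Shift symmetry delta(phi) = epsilon: Y = 1, so j^mu = pi^mu, cf. eqs. (5)-(6)
Y = 1
j_t, j_x = pi_t * Y, pi_x * Y

# Off-shell Noether identity (13): d_mu j^mu = -E * Y, identically in phi
lhs = sp.diff(j_t, t) + sp.diff(j_x, x)
print(sp.simplify(lhs + E * Y))  # 0
```

Here the identity holds off-shell: the combination $d_{\mu}j^{\mu}+E\,Y$ vanishes as a polynomial in $\phi$ and its derivatives, not merely on solutions of $E\approx 0$.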
The remaining eqs. (14)-(16) may be repackaged as follows. Define the
second Noether current ${\cal J}^{\mu}(\varepsilon)$ as$^4$
$$ \tag{19} {\cal J}^{\mu}\varepsilon + {\cal J}^{\mu,\nu}d_{\nu}\varepsilon +\frac{1}{2} {\cal J}^{\mu,\nu\lambda}d_{\nu}d_{\lambda}\varepsilon ~=~ {\cal J}^{\mu}(\varepsilon)~:= ~ J^{\mu}(\varepsilon)+ E_{\alpha} Y^{\alpha,\mu}\varepsilon. $$
It satisfies an $\varepsilon$-dependent off-shell conservation law
$$ d_{\mu} {\cal J}^{\mu}(\varepsilon) ~\stackrel{(12)+(19)}{=}~ -E_{\alpha}Y^{\alpha}(\varepsilon)+d_{\mu}(E_{\alpha} Y^{\alpha,\mu}\varepsilon)$$$$ \tag{20}~\stackrel{(13)+(14)}{=}~ - \varepsilon d_{\mu}d_{\nu} J^{\nu,\mu}~\stackrel{(15)}{=}~\frac{\varepsilon}{2}d_{\mu}d_{\nu}d_{\lambda} J^{\lambda,\mu\nu}~\stackrel{(16)}{=}~0 . $$
One may introduce a so-called
superpotential ${\cal K}^{\mu\nu}(\varepsilon)$ as$^3$
$$ {\cal K}^{\mu\nu}\varepsilon+{\cal K}^{\mu\nu,\lambda}d_{\lambda}\varepsilon~=~{\cal K}^{\mu\nu}(\varepsilon)~=~-{\cal K}^{\nu\mu}(\varepsilon) $$$$~:=~ \left(\frac{1}{2} J^{\mu,\nu}-\frac{1}{6}d_{\lambda}J^{\mu,\nu\lambda}\right)\varepsilon+ \frac{1}{3} J^{\mu,\nu\lambda}d_{\lambda}\varepsilon-(\mu\leftrightarrow \nu)$$$$ \tag{21}~\stackrel{(14)+(16)}{=}~ \left( J^{\mu,\nu}+\frac{1}{3}d_{\lambda}(J^{\lambda,\mu\nu}-J^{\mu,\nu\lambda})\right)\varepsilon+ \frac{1}{3}\left( J^{\mu,\nu\lambda}-J^{\nu,\mu\lambda}\right)d_{\lambda}\varepsilon$$
A straightforward calculation
$$ d_{\nu}{\cal K}^{\mu\nu}(\varepsilon)~\stackrel{(15)+(21)}{=}~J^{\mu,\nu}d_{\nu}\varepsilon-\varepsilon d_{\nu}\left(J^{\nu,\mu}+d_{\lambda}J^{\lambda,\mu\nu}\right)$$$$ \tag{22}+\frac{\varepsilon}{3}d_{\nu}d_{\lambda}\left(J^{\lambda,\mu\nu}-J^{\mu,\nu\lambda}\right)+\frac{1}{3}\left( J^{\mu,\nu\lambda}-J^{\nu,\mu\lambda}\right)d_{\nu}d_{\lambda}\varepsilon ~\stackrel{(14)+(16)+(19)}{=}~{\cal J}^{\mu}(\varepsilon)$$
shows that ${\cal K}^{\mu\nu}(\varepsilon)$ is the superpotential for the second Noether current ${\cal J}^{\mu}(\varepsilon)$. The existence of the superpotential ${\cal K}^{\mu\nu}(\varepsilon)=-{\cal K}^{\nu\mu}(\varepsilon)$ makes the off-shell conservation law (20) manifest
$$ \tag{23}d_{\mu}{\cal J}^{\mu}(\varepsilon)~\stackrel{(22)}{=}~d_{\mu}d_{\nu}{\cal K}^{\mu\nu}(\varepsilon)~=~0. $$
Moreover, as a consequence of the superpotential (22), the corresponding second Noether charge ${\cal Q}(\varepsilon)$ vanishes off-shell
$$ \tag{24}{\cal Q}(\varepsilon)~:=~\int_{V} \! d^3V ~{\cal J}^0(\varepsilon)~=~\int_{V} \! d^3V ~d_i{\cal K}^{0i}(\varepsilon)~=~\int_{\partial V} \! d^2\!A_i ~{\cal K}^{0i}(\varepsilon)~=~0, $$
if we assume that the currents ${\cal J}^{\mu}(\varepsilon)$, $\mu\in\{0,1,2,3\}$, vanish at the boundary $\partial V$.
We conclude that the remaining eqs. (14)-(16) are trivially satisfied, and that the local quasi-symmetry doesn't imply additional non-trivial conservation laws besides the ones (13,17,18) already derived from the corresponding global quasi-symmetry. Note in particular, that the local quasi-symmetry does not force the conserved charge (18) to vanish.
This is e.g. the situation for gauge symmetry in electrodynamics, where the off-shell conservation law (20) of the second Noether current ${\cal J}^{\mu}=- d_{\nu}F^{\nu\mu}$ is a triviality, cf. also this and this Phys.SE posts. Electric charge conservation follows from global gauge symmetry alone, cf. this Phys.SE post. Note in particular, that there could be a nonzero surplus of total electric charge (18).
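To make the electrodynamics statement explicit (a standard one-line check, spelled out here for convenience): with ${\cal J}^{\mu}=-d_{\nu}F^{\nu\mu}$, the off-shell conservation law follows purely from the antisymmetry $F^{\nu\mu}=-F^{\mu\nu}$ of the field strength,
$$ d_{\mu}{\cal J}^{\mu}~=~-d_{\mu}d_{\nu}F^{\nu\mu}~=~0, $$
since the symmetric combination of derivatives $d_{\mu}d_{\nu}$ contracts to zero against an antisymmetric tensor.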
--
$^1$ An off-shell transformation is a
quasi-symmetry if the Lagrangian density ${\cal L}$ is preserved $\delta_{\varepsilon} {\cal L}= d_{\mu} f^{\mu}(\varepsilon)$ modulo a total space-time divergence, cf. this Phys.SE answer. If the total space-time divergence $d_{\mu} f^{\mu}(\varepsilon)$ is zero, we speak of a symmetry.
$^2$ Here we restrict for simplicity to only
vertical transformations $\delta_{\varepsilon} \phi^{\alpha}$, i.e., the horizontal transformations are assumed to vanish, $\delta_{\varepsilon} x^{\mu}=0$.
$^3$ For field theory in more than one space-time dimension, $d>1$, the higher structure functions $f^{\mu,\nu\lambda}=-J^{\mu,\nu\lambda}$ may be non-zero. However, they vanish in one space-time dimension $d=1$, i.e. in point mechanics. If they vanish, then the superpotential (21) simplifies to ${\cal K}^{\mu\nu}(\varepsilon)=J^{\mu,\nu}\varepsilon$.
$^4$ The
second Noether current is defined e.g. in M. Blagojevic and M. Vasilic, Class. Quant. Grav. 22 (2005) 3891, arXiv:hep-th/0410111, subsection IV.A and references therein. See also Philip Gibbs' answer for the case where the quasi-symmetry is a symmetry. |
I think -- and hope -- that every computer science student is confronted with this problem, which feels like a paradox. It is a very good example of the difference between computable in the TCS sense and computable in a practical sense.
My thoughts back then were: "Yea, if I knew the answer, it would obviously be computable. But how to find out?" The trick is to rid yourself of the illusion that you have to find out whether $\pi$ has this property or not. Because this, obviously (read: imho), cannot be done by a Turing machine (as long as we do not have more knowledge than we have about $\pi$).
Consider your definition of computability: we say $f$ is (Turing-)computable if and only if $\exists M \in TM : f_M = f$. That is, you only have to show existence of an appropriate Turing machine, not give one. What you -- we -- try to do there is to compute the Turing machine that computes the required function. This is a way harder problem!
The basic idea of the proof is: I give you an infinite class of functions, all of them computable (to show; trivial here). I prove then that the function you are looking for is in that class (to show; case distinction here). q.e.d. |
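The case distinction can be made concrete with a sketch (the property of $\pi$ here is the usual textbook one -- runs of consecutive 7s -- and only stands in for "a property we cannot decide"; the names are illustrative):

```python
# Target: f(n) = 1 iff pi's decimal expansion contains a run of n consecutive 7s.
# Since a run of length n contains a run of length n-1, f is monotone, so f must
# be ONE of the machines below. Each is trivially computable, hence f is
# computable -- even though we cannot point at which machine is the right one.

def f_all(n):
    # case: arbitrarily long runs of 7s occur in pi
    return 1

def make_f_threshold(k):
    # case: the longest run of 7s in pi has length exactly k
    def f(n):
        return 1 if n <= k else 0
    return f
```

The existence proof never needs to decide which case holds; that question is about $\pi$, not about computability.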
Question:
A particle of mass
$m$ is in the symmetric well $$ V(x) = \begin{cases}\infty & x < 0\\V_0 & 0 < x <a \\ 0 & a< x <2a \\ V_0 & 2a < x <3a \\ \infty & 3a < x \end{cases} $$ The ground state happens to have energy $E_1 = V_0 / 2 $.
(a) What conditions must the wave function satisfy at each of the potential steps $ x = 0, x=a, x=2a, x=3a $?
(b) Sketch the corresponding ground state wave function $\psi_1(x)$. Indicate the energy $E_2$ and sketch the wave function $\psi_2(x)$ of the first excited state.
(c) Indicate the ground state energy and sketch the ground state wave function for. 1. a particle of mass $2m$ in the same potential. 2. a particle of mass $m/2$ in the same potential.
I have a mediocre idea of the conditions the wave function must satisfy at each step, but I am not so sure if I am missing anything.
At $x=0, x=3a$ $\psi(x)$ continuous at infinite step.
At $x=a, x=2a$ $\psi(x)$ and $\psi'(x)$ continuous at finite step.
In part (b), I think I need to work out the wave function in the different regions as below, but I am not so sure what to do next.
$$ \psi(x) = \begin{cases}0 & x < 0\\Ce^{q(x-a)} + De^{-q(x-a)} & 0 < x <a \\ A\sin(kx) + B\cos(kx)& a< x <2a \\ Ce^{q(x-a)} + De^{-q(x-a)} & 2a < x <3a \\ 0& 3a < x \end{cases} $$
Any guidance or solution would be great. Thanks!
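One hedged hint for part (b), using only the given condition $E_1 = V_0/2$ (a hint, not a full solution): the oscillation and decay constants coincide, since
$$ \frac{\hbar^2 k^2}{2m} ~=~ E_1 ~=~ \frac{V_0}{2}, \qquad \frac{\hbar^2 q^2}{2m} ~=~ V_0 - E_1 ~=~ \frac{V_0}{2} \quad\Longrightarrow\quad q = k, $$
which simplifies the matching conditions at $x=a$ and $x=2a$ considerably.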
I am not very big in mathematics yet (will be, hopefully). Naive set theory has a problem with Russell's paradox; how do they defeat this sort of problem in mathematics? Is there a greater form of set theory than naive set theory that beats this problem? (Maybe something like a superposition, where it is both or neither?)
The
Zermelo-Fraenkel Axioms for set theory were developed in response.
The key axiom which obviates Russell's Paradox is the
Axiom of Specification, which, roughly, allows new sets to be built based on a predicate (condition), but only quantified over some set.
That is, for some predicate $p(x)$ and a set $A$ the set
$\{x \in A : p(x)\}$
exists by the axiom, but constructions of the form:
$\{x : p(x) \}$
(not quantified over any set) are not allowed.
Thus the contradictory set $\{x : x \not\in x\}$ is not allowed. If you consider $S = \{x \in A: x \not\in x\}$, there's no contradiction. The same logic as in Russell's paradox gives us that $S \not\in S$, but then the conclusion is simply that $S \not\in A$, instead of any contradiction.
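The restriction can be mimicked loosely in code (an analogy only, with Python frozensets standing in for pure sets; names are illustrative):

```python
def specification(A, p):
    """ZF-style restricted comprehension {x in A : p(x)} -- it can only
    filter an EXISTING collection, never build a set out of thin air."""
    return {x for x in A if p(x)}

# The Russell predicate "x not in x" is harmless once restricted to a given A.
# No frozenset here contains itself (Python frozensets cannot be self-membered):
A = {frozenset(), frozenset({frozenset()})}
S = specification(A, lambda x: x not in x)
# S is just a subset of A; the would-be paradox only shows that S is not in A.
```

The unrestricted analogue `{x : p(x)}` has no counterpart here: there is no "set of everything" to iterate over, which is exactly the point of the axiom.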
There are other problems with Russell's program which led to Gödel's work, which is also something you should check out, but ZF is where Russell's Paradox was fixed.
I'm aware of two principal approaches to solving Russell's paradox:
One is to reject $x \not\in x$ as a formula. This is what Russell himself did with his theory of types: inhabitants of the universe are indexed with natural numbers (types), and $x \in y$ is only a valid formula if the type of $y$ is one greater than the type of $x$. Basically, this slices the universe into "elements", "sets of elements", "sets of sets of elements", etc, and then there is no slice that admits a concept of membership of itself.
One drawback of this approach is that we then have, e.g. many empty sets of different types. Sometimes this is addressed by introducing type-shifting automorphisms, so that you can somehow recognise that the empty sets are all really the same. Another approach is to say that comprehension formulae are permitted only if you
could give types to the variables, but you don't actually have to do so – this is the basis of Quine's New Foundations set theory. NF even has a universal set, but still manages to avoid Russell messing things up. This actually leads to a slightly odd situation where in NF the universal set $V$ actually does satisfy $V \in V$, and does not satisfy $V \not\in V$, so we don't forbid this kind of consideration entirely, it's just not permitted when building sets by comprehension.
The other main approach is to permit $x \not\in x$, but reject the formation of the set $\{x : x\not\in x\}$. Some argued that this set is problematic because it is in some sense far too large, a substantial slice of the whole universe – if we only stuck to sensible things like $\mathbb N$ and $\mathbb R$ and $\aleph_\omega$ then everything would be fine. Zermelo–Fraenkel set theory (ZF) proposed to build sets by more concrete means: start with things you know are sets, like the empty set, and operations that you know make sets, like union and power set, and just keep applying those operations, and take all the sets you can prove to exist in this way. ZF only allows selection of
subsets of existing sets by arbitrary properties, that is $\{x \in A : p(x) \}$ instead of the more general comprehension $\{x : p(x)\}$ that ruined everything.
As a sidenote, ZF actually includes the Axiom of Foundation, which implies that $x\not\in x$ is true for all $x$. But there are also theories like ZF but without Foundation in which some $x$ do contain themselves.
It led to the development of a number of solutions in logic and set theory, as well as other insights. You can find a nice introduction to the topic in the
Stanford Encyclopedia of Philosophy section on Russell's Paradox in Contemporary Logic.
To the best of my knowledge, the Theory of Types http://en.wikipedia.org/wiki/Type_theory , was designed specifically to address this.
Basically you restrict the objects you can refer to by the choice of types.
Another way of escaping from Russell's paradox, not yet mentioned in the answers here, is Morse-Kelley set theory. In MK set theory, there are good collections (sets) and bad collections (classes) and only the good collections (sets) are allowed to be members; unless $x$ is a set, $x\notin y$ holds for all $y$.
Suppose $\Phi$ is some property. In naïve set theory, you can construct $$\{x\mid \Phi(x)\}$$ the set of all $x$ with property $\Phi$; this is the principle that causes the trouble of Russell's paradox. In ZF set theory, this unrestricted comprehension is forbidden; all you can have is $$\{x\in S\mid \Phi(x)\}$$ which is the subset of $S$ for which $\Phi$ holds. MK goes a different way. $\{x\mid \Phi(x)\}$ is allowed, but it only represents the class of all
sets $x$ with property $\Phi$. That is, it is the collection of all $x$ such that $x$ has property $\Phi$ and $x$ is a set.
Now take $\Phi(x) = “x\notin x”$ and let $S = \{x\mid x\notin x\}$. In naïve set theory we have $S\in S$ if and only if $S\notin S$, which is absurd. But in MK set theory we have $S\in S$ if and only if $S\notin S$
and $S$ is a set. There's no contradiction here; we've merely proved that $S$ is not a set. Since $S$ is not a set, it is not allowed to be a member of any collection, so $S\notin T$ is true for all $T$, and in particular $S\notin S$. Everything is fine.
In MK set theory one can also construct a universal class $V$ which contains all
sets—but $V$ is not itself a set and is not a member of itself.
The way around Russell's paradox which Georg Cantor chose (and if you read Russell's letter describing the paradox to Frege, who fell into it, so to speak--this is found on pp 124-5 of van Heijenoort's book "From Frege to Goedel: A Source Book in Mathematical Logic, 1879-1931"--you find that Russell held that his paradox showed "that under certain circumstances a definable collection [Menge--(the German word for set, my comment)] does not form a totality", and therefore falls into Cantor's means of avoiding the set-theoretical paradoxes (there are others, namely Cantor's, Burali-Forti's, and Curry's paradoxes)) is this: you can divide the notion of 'collection' into those collections that form a consistent, completed totality (these are the sets), and those which do not (these are called proper classes). You might consider reading Penelope Maddy's paper "Proper Classes" (Journal of Symbolic Logic, Volume 48, Number 1, March 1983, pp. 113-139, although I found a pdf file of it on the Web, look for it using author and title) as an introduction.
You might also want to look at my mathstackexchange question (867626) "A Question Regarding Consistent Fragments of Naive (Ideal) Set Theory". In the comments section you will find a link to a paper titled "Maximal consistent sets of naive comprehension" by Luca Incurvati and Julien Murzi. You asked "how do they defeat this sort of problem in mathematics?". Incurvati's and Murzi's paper show that even though one has maximal consistent fragments of naive Comprehension (call it COMP, the axiom that gets one in trouble with Russell's paradox) there will be incompatible maximal consistent sets of naive Comprehension, that is, there exists in COMP (in a first-order language (V,$\in$) the axiom COMP, i.e. ($\exists$y)(x)(x$\in$y iff $\phi$(x)) is an axiom schema, that is, a collection of first-order sentences with distinct first-order formulae $\phi$) two maximally consistent subcollections of COMP, $COMP_1$ and $COMP_2$ such that there exists a first-order formula $\psi$(x) such that $\psi$(x)$\in$$COMP_1$ and $\lnot$$\psi$(x)$\in$$COMP_2$ so even if you rid yourself of the problem caused by Russell's paradox, you still have for Naive Set Theory another sort of inconsistency--that caused by the independence results.
Finally, you might want to look at Cantor's letter to Dedekind (in van Heijenoort, pp. 113-117). There he discusses the distinction between "consistent multiplicities [sets]" and "inconsistent multiplicities [now called proper classes]" and also discusses Cantor's Paradox, and discovers the Burali-Forti paradox (in fact, in that letter, Cantor shows (pg. 115 of van Heijenoort) that the collection $\Omega$ of all ordinals "is an inconsistent, absolutely infinite multiplicity" and therefore not a set).
Suppose there exists a set $r$ such that, for any object $x$, $x\in r$ if and only if $x\notin x$. This leads to the obvious contradiction that $r\in r$ if and only if $r\notin r$. Therefore, no such set can exist. This is only a problem if, as in naive set theory, such a set
must exist simply because you can define it. So, a consistent set theory cannot allow such a set to exist. The widely used Zermelo-Fraenkel axioms of set theory have been particularly successful in this regard. After over a century of intensive scrutiny, no such contradictions have been shown to arise from these axioms.
They build set theory on axioms, such as ZF or ZFC, and then prove (if possible) that there is no contradiction inside the built theory (read on Godel 193?, and Cohen 1963).
Russell's paradox is based on the naive assumption that the set of all sets does exist. They defeat it with the opposite assumption, that the set of all sets does not exist. Sets are instead "built" starting from the empty set, and step by step, being careful not to build too-large sets in a single step. Certain axioms are introduced, e.g. pairs, power set, replacement, comprehension, which allow one to form a new set based on old sets, and restrict the formation of sets defined just by "a property". Thus, the property of a set being a member (or not being a member) of itself is not allowed to be the defining property for a new set. But, for any given set $A$ one may form the set $B=\{a\in A: a$ is not a member of itself$\}$. Then $B$ is a set, but $B$ need not be an element of $A$, so even if $B$ is not a member of itself, the definition of $B$ would not imply that $B$ is a member of itself, which formally resolves the paradox.
Most set theories solve this problem by giving up on unlimited comprehension. Unlimited comprehension posits the existence of "a set of all sets such that..." where you can state any clear criterion that talks about sets and their membership. Unlimited comprehension gives you the set of all sets, and also a set of all sets that aren't a member of themselves. Giving up on unlimited comprehension is a smart thing to do, because the combination of unlimited comprehension and classical two valued logic (true versus not true) gives you Russell's Paradox.
Several such approaches are well covered in excellent sister answers.
There is however also another way out - or there at least might be. Keep unlimited comprehension. Throw away the classical logic instead. Suppose we adopt a multi-valued logic where a truth value can be true ($1$), false ($0$), or anything in between, such as one half. Suppose that the negation of a statement whose truth value is $1\over 2$ is also $1\over 2$. (This isn't the only way to define negation over partial truths, but we need to pick one.) Instead of classical sets we now have fuzzy sets where membership is a matter of degree. Can we have a set of all sets that do not contain themselves? Yes, we can. The big question: Does this set contain itself? Answer: $1\over 2$.
As you can see, this isn't exactly a superposition of the big question being answered by both $0$ and $1$, but it comes close. |
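The fixed-point character of that answer can be shown in two lines (a sketch of the $[0,1]$-valued semantics described above, with negation defined as $1-v$):

```python
# The Russell sentence "R is a member of R" must equal its own negation,
# i.e. its truth value v must satisfy v = 1 - v.
def negation(v):
    return 1.0 - v

v = 0.5  # the unique solution in [0, 1]
assert v == negation(v)  # a fixed point: no contradiction arises

# In two-valued logic the same equation has no solution at all:
assert all(t != negation(t) for t in (0.0, 1.0))
```

The contradiction of classical logic becomes, in this semantics, merely an equation whose solution happens to lie strictly between true and false.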
I'm encountering a very unusual inconsistency when plotting the vector fields in a cylindrical waveguide mode. In a cylindrical waveguide operating in a given mode, the $\hat{e}_r$ and $\hat{e}_\phi$ components of the $\textbf{E}$-field vector (in a cylindrical coordinate system, with the z-axis aligned longitudinally with the direction of propagation) can be written in terms of the $\textbf{B}$-field vector components as follows: $$E_r(r,\phi,z,t) = -\frac{\omega}{k_z}B_\phi$$ $$E_\phi(r,\phi,z,t) = \frac{\omega}{k_z}B_r$$
With $B_r$ and $B_\phi$ given by, with the $'$ denoting differentiation with respect to r, and where n is a zero of the $l$th Bessel function: $$B_r(r,\phi,z=0,t=0) = \frac{inkc^2}{\omega^2-(kc)^2}J_l'(nr)e^{il\phi}$$ $$B_\phi(r,\phi,z=0,t=0) = \frac{-lkc^2}{\omega^2-(kc)^2}\frac{J_l(nr)}{r}e^{il\phi}$$
These expressions can be found in this link. For good measure, I also independently derived these expressions from scratch; they
are indeed correct.
Now, to illustrate the problem I am facing, I have written some Python code that plots the $\textbf{E}$ and $\textbf{B}$-field vectors for a given mode. In this post I shall focus my attention on the well known 'dominant' TE11 mode (this mode is also the easiest case to highlight the inconsistency I am encountering). For the TE11 mode, then, my code gives the following plot for the $\textbf{B}$-field (the waveguide region is the shaded circle; ignore the outside regions):
This plot is exactly correct; there are many images available in the literature for the fields of the TE11 mode. For example, see (here, the B-field is given by the horizontal vector field lines, and the E field the vertical 'bowed' lines):
It's clear that the $\textbf{B}$-fields match. Now comes the problem. If I plot the $\textbf{E}$-field components as given by the expressions at the start of this post, I obtain the following plot:
This clearly does
not agree with the $\textbf{E}$-field plots found in the literature (yet arguably shares some similarities). However, the code is plotting the $\textbf{E}$-fields correctly, at least according to the equations given at the start of this post (note I've just used these plots to neatly illustrate my problem; this is a physics question and not a programming related question). You get the same result if you manually hand-calculate a few values of the $\textbf{E}$-field and sketch it out.
For example, the $\textbf{E}$-field vector at the point (0.03, -0.02) on the above plot. Plugging in these position values into the expressions given at the start of this post, one obtains (the units are totally arbitrary; all I care about is the direction of the vector): $$E_r=-1198, E_\phi = -940$$
This corresponds to a vector pointing 'inwards and slightly upwards', which matches what is seen on the plot at this point. You can do this for many points; the plot always agrees with what is calculated.
$\textbf{Summary of problem}$: If the expressions at the start of this post are correct, then the (coloured) vector field plots above must be correct and the all of the plots in the literature must be wrong (unlikely). Alternatively, the expressions at the start of this post are wrong, yet calculate the $\textbf{B}$-field perfectly, whilst getting the $\textbf{E}$-field completely wrong. The dilemma is that the equations for cylindrical waveguide modes are very well documented, and so it is unlikely that they too could be wrong. |
The package
CircuiTikz provides a set of macros for naturally typesetting electrical and electronic networks. This article explains basic usage of this package.
CircuiTikz includes several nodes that can be used with standard tikz syntax.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{circuitikz}
\begin{document}
\begin{center}
\begin{circuitikz}
  \draw (0,0) to[variable cute inductor] (2,0);
\end{circuitikz}
\end{center}
\end{document}
To use the package it must be imported with
\usepackage{circuitikz} in the preamble. Then the environment circuitikz is used to typeset the diagram with tikz syntax. In the example a node called variable cute inductor is used.
As mentioned before, to draw electrical network diagrams you should use tikz syntax; the examples even work if the environment tikzpicture is used instead of circuitikz. Below, a more complex example is presented.
\begin{center}
\begin{circuitikz}[american voltages]
  \draw (0,0) to [short, *-] (6,0)
    to [V, l_=$\mathrm{j}{\omega}_m \underline{\psi}^s_R$] (6,2)
    to [R, l_=$R_R$] (6,4)
    to [short, i_=$\underline{i}^s_R$] (5,4)
  (0,0) to [open, v^>=$\underline{u}^s_s$] (0,4)
    to [short, *- ,i=$\underline{i}^s_s$] (1,4)
    to [R, l=$R_s$] (3,4)
    to [L, l=$L_{\sigma}$] (5,4)
    to [short, i_=$\underline{i}^s_M$] (5,3)
    to [L, l_=$L_M$] (5,0);
\end{circuitikz}
\end{center}
The nodes short, V, R and L are presented here, but there are a lot more. Some of them are presented in the next section.
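As a further illustration of the bipole path syntax (battery1, R, C and short are standard CircuiTikz bipoles; the labels are arbitrary), a minimal RC loop can be drawn as:

\begin{center}
\begin{circuitikz}
  \draw (0,0) to[battery1, l=$V_s$] (0,2)  % voltage source on the left
        to[R, l=$R_1$] (2,2)              % resistor on top
        to[C, l=$C_1$] (2,0)              % capacitor on the right
        to[short] (0,0);                  % wire closing the loop
\end{circuitikz}
\end{center}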
Below most of the elements provided by CircuiTikz are listed:

- Monopoles
- Bipoles
- Diodes
- Dynamical bipoles

For more information see:
The accretion of matter onto a compact object cannot take place at an unlimited rate. There is a negative feedback caused by radiation pressure.
If a source has a luminosity $L$, then there is a maximum luminosity - the Eddington luminosity - which is where the radiation pressure balances the inward gravitational forces.
The size of the Eddington luminosity depends on the opacity of the material. For pure ionised hydrogen and Thomson scattering, $$ L_{Edd} = 1.3 \times 10^{31} \frac{M}{M_{\odot}}\ W$$
Suppose that material fell onto a black hole from infinity and was spherically symmetric. If the gravitational potential energy was converted entirely into radiation just before it fell beneath the event horizon, the "accretion luminosity" would be$$L_{acc} = \frac{G M_{BH}}{R}\frac{dM}{dt},$$where $M_{BH}$ is the black hole mass, $R$ is the radius from which the radiation is emitted (must be greater than the Schwarzschild radius) and $dM/dt$ is the accretion rate.
If we say that $L_{acc} \leq L_{Edd}$ then$$ \frac{dM}{dt} \leq 1.3 \times10^{31} \frac{M_{BH}}{M_{\odot}} \frac{R}{GM_{BH}} \simeq 10^{11}\ R\ kg/s \sim 10^{-3} \frac{R}{R_{\odot}}\ M_{\odot}/yr$$
Now, not all the GPE gets radiated, some of it could fall into the black hole. Also, whilst the radiation does not have to come from near the event horizon, the radius used in the equation above cannot be too much larger than the event horizon. However, the fact is that material cannot just accrete directly into a black hole without radiating; because it has angular momentum, an accretion disc will be formed and will radiate away lots of energy (this is why we see quasars and AGN). Thus both of these effects must be small numerical factors, and there is some maximum accretion rate.
To get some numerical results we can absorb our uncertainty as to the efficiency of the process and the radius at which the luminosity is emitted into a general ignorance parameter called $\eta$, such that $$L_{acc} = \eta c^2 \frac{dM}{dt},$$ i.e. what fraction of the rest mass energy is turned into radiation. Then, equating this to the Eddington luminosity we have $$\frac{dM}{dt} = (1-\eta) \frac{1.3\times10^{31}}{\eta c^2} \frac{M}{M_{\odot}},$$ which gives $$ M = M_{0} \exp[t/\tau],$$ where $\tau = 4\times10^{8} \eta/(1-\eta)$ years (often termed the Salpeter (1964) growth timescale). The problem is that $\eta$ needs to be pretty big in order to explain the luminosities of quasars, but this also implies that they cannot grow extremely rapidly. I am not fully aware of the arguments that surround the work you quote, but depending on what you assume for the "seed" of the supermassive black hole, you may only have a few to perhaps 10 e-folding timescales to get you up to $10^{10}$ solar masses. I guess this is where the problem lies. $\eta$ needs to be very low to achieve growth rates from massive stellar black holes to supermassive black holes, but this can only be achieved in slow-spinning black holes, which are not thought to exist!
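The e-folding estimate can be sketched numerically (a back-of-the-envelope script using the $\tau$ formula above; the seed mass and $\eta$ value are illustrative assumptions, not values from the text):

```python
import math

def salpeter_tau_years(eta):
    # Salpeter e-folding time tau = 4e8 * eta / (1 - eta) years
    return 4e8 * eta / (1.0 - eta)

def growth_time_years(m_seed, m_final, eta):
    # Eddington-limited growth M = M0 * exp(t / tau), so
    # t = tau * ln(M_final / M_seed)
    return salpeter_tau_years(eta) * math.log(m_final / m_seed)

# e.g. a (hypothetical) 100 M_sun seed growing to 1e10 M_sun with eta = 0.1
# needs ln(1e8) ~ 18 e-folds, i.e. roughly 8e8 years at the Eddington limit.
t = growth_time_years(1e2, 1e10, 0.1)
```

Since the brightest high-redshift quasars must appear within the first ~1e9 years after the Big Bang, numbers of this order show how tight the timing is.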
A nice summary of the problem is given in the introduction of Volonteri, Silk & Dubus (2014). These authors also review some of the solutions that might allow Super-Eddington accretion and shorter growth timescales - there are a number of good ideas, but none has emerged as a front-runner yet. |
Specifically, I need to find the cosmological parameters $Ω_i$ from a certain data set (with error bars) of redshift $z$ and luminosity distance $d_L$ using a Fisher Information matrix. I know that my data goes accordingly to $$ d_L(z)= \begin{cases} \frac{d_H}{\sqrt{|\Omega_k|}}\sinh(\sqrt{\Omega_k}d_c(z)/d_H) & \quad \text{if } \Omega_k>0\\ d_c(z) & \quad \text{if } \Omega_k=0\\ \frac{d_H}{\sqrt{|\Omega_k|}}\sin(\sqrt{-\Omega_k}d_c(z)/d_H) & \quad \text{if } \Omega_k<0\\ \end{cases} $$
and that $$ d_c(z)=d_H\int\limits^z_0\frac{dz'}{H(z')}\ \ ;\ \ H(z)=\left[\Omega_m(1+z)^3+\Omega_k(1+z)^2+\Omega_{\Lambda}\right]^{1/2} $$
where $d_H$ is a constant. I'm a little lost on how to determine the $\Omega_i$ from there, since I have no prior information nor do I know the PDF. I'm suspecting that it should be $d_L$ normalized; is this correct?
Thanks!
PS: I wasn't sure if this should be posted here or in a statistics forum, please let me know! |
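A minimal numerical sketch of the Fisher-matrix step, assuming independent Gaussian errors $\sigma_i$ on the measured $d_L(z_i)$ (the model function and fiducial parameter point are placeholders to be replaced with the $d_L(z;\Omega_i)$ above):

```python
import numpy as np

# For a Gaussian likelihood with independent errors sigma_i, the Fisher matrix is
#   F_ab = sum_i (1/sigma_i^2) * (d d_L/d theta_a)(z_i) * (d d_L/d theta_b)(z_i),
# with derivatives taken numerically at a fiducial parameter point theta0.
def fisher_matrix(model, theta0, z, sigma, eps=1e-4):
    theta0 = np.asarray(theta0, dtype=float)
    n = len(theta0)
    derivs = []
    for a in range(n):
        dt = np.zeros(n)
        dt[a] = eps
        # central finite difference of the model w.r.t. parameter a
        derivs.append((model(z, theta0 + dt) - model(z, theta0 - dt)) / (2 * eps))
    F = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            F[a, b] = np.sum(derivs[a] * derivs[b] / sigma**2)
    return F
```

The inverse $F^{-1}$ then gives the (Cramér-Rao) lower bound on the parameter covariance; under these Gaussian assumptions no prior or explicit PDF is needed beyond the error bars themselves.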
Let $\rho$ be a group action by a compact group $G$
\begin{equation} \rho:G\times M \rightarrow M \\ \rho:(g,p) \rightarrow \rho_g(p) \end{equation}
Denote the orbit of $p\in M$ by $\mathcal{O}_p$ and the isotropy group of $p$ by $G_p$. We have a natural representation of $G_p$ on the vector space $T_p(M)/T_p\mathcal{O}_p$ given by the action of $d\rho_h(p)$ on $T_p(M)/T_p\mathcal{O}_p$ for $h\in G_p$. Consider the action of $G_p$ on the product $G \times T_p(M)/T_p\mathcal{O}_p$
\begin{equation} \sigma(h)(g,v)=(h^{-1}g,d\rho_h(p)(v)) \end{equation}
This action is free and therefore the quotient is a manifold. It is a vector bundle over $G/G_p$ (associated to the principal bundle $G \to G/G_p$), denoted $G \times_{G_p} T_p(M)/T_p\mathcal{O}_p$. The slice theorem tells us that this bundle, the $G$-action on the fiber factor being the trivial action, is equivariantly diffeomorphic to a neighbourhood of $\mathcal{O}_p$.
I tried computing some simple examples to get a feeling for this, but of course for the simple examples this vector bundle is trivial. I feel I could understand much better if I had an example where the bundle was not trivial. Is there an "easy" example of this? |
Higgs boson pair production at colliders: status and perspectives / Di Micco, Biagio (Universita e INFN Roma Tre (IT)) ; Gouzevitch, Maxime (Centre National de la Recherche Scientifique (FR)) ; Mazzitelli, Javier (University of Zurich) ; Vernieri, Caterina (SLAC National Accelerator Laboratory (US)) ; Alison, John (Carnegie-Mellon University (US)) ; Androsov, Konstantin (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, Pisa (IT)) ; Baglio, Julien Lorenzo (CERN) ; Bagnaschi, Emanuele Angelo (Paul Scherrer Institut (CH)) ; Banerjee, Shankha (University of Durham (GB)) ; Basler, P (Karlsruhe Institute of Technology) et al. This document summarises the current theoretical and experimental status of the di-Higgs boson production searches, and of the direct and indirect constraints on the Higgs boson self-coupling, with the wish to serve as a useful guide for the next years. The document discusses the theoretical status, including state-of-the-art predictions for di-Higgs cross sections, developments on the effective field theory approach, and studies on specific new physics scenarios that can show up in the di-Higgs final state. [...] LHCHXSWG-2019-005.- Geneva : CERN, 2019 - 274.
Simplified Template Cross Sections – Stage 1.1 / Delmastro, Marco (Centre National de la Recherche Scientifique (FR)) ; Berger, Nicolas (Centre National de la Recherche Scientifique (FR)) ; Bertella, Claudia (Chinese Academy of Sciences (CN)) ; Duehrssen-Debling, Michael (CERN) ; Kivernyk, Oleh (Centre National de la Recherche Scientifique (FR)) ; Langford, Jonathon Mark (Imperial College (GB)) ; Milenovic, Predrag (University of Belgrade (RS)) ; Pandini, Carlo Enrico (CERN) ; Tackmann, Frank (Deutsches Elektronen-Synchrotron (DE)) ; Tackmann, Kerstin (Deutsches Elektronen-Synchrotron (DE)) et al. Simplified Template Cross Sections (STXS) have been adopted by the LHC experiments as a common framework for Higgs measurements. Their purpose is to reduce the theoretical uncertainties that are directly folded into the measurements as much as possible, while at the same time allowing for the combination of the measurements between different decay channels as well as between experiments. [...] arXiv:1906.02754; LHCHXSWG-2019-003; DESY-19-070.- Geneva : CERN, 2019 - 14 p. Fulltext: LHCHXSWG-2019-003 - PDF; 1906.02754 - PDF;
Recommended predictions for the boosted-Higgs cross section / Becker, Kathrin (Albert Ludwigs Universitaet Freiburg (DE)) ; Caola, Fabrizio (University of Durham (GB)) ; Massironi, Andrea (CERN) ; Mistlberger, Bernhard (Massachusetts Inst. of Technology (US)) ; Monni, Pier (CERN) ; Chen, Xuan (Zurich U.) ; Frixione, Stefano (INFN e Universita Genova (IT)) ; Gehrmann, Thomas Kurt (Universitaet Zuerich (CH)) ; Glover, Nigel (IPPP Durham) ; Hamilton, Keith Murray (University of London (GB)) et al. In this note we study the inclusive production of a Higgs boson with large transverse momentum. We provide a recommendation for the inclusive cross section based on a combination of state of the art QCD predictions for the gluon-fusion and vector-boson-fusion channels. [...] LHCHXSWG-2019-002.- Geneva : CERN, 2019 - 14. Fulltext: PDF;
Higgs boson cross sections for the high-energy and high-luminosity LHC: cross-section predictions and theoretical uncertainty projections / Calderon Tazon, Alicia (Universidad de Cantabria and CSIC (ES)) ; Caola, Fabrizio (University of Durham (GB)) ; Campbell, John (Fermilab (US)) ; Francavilla, Paolo (Universita & INFN Pisa (IT)) ; Marchiori, Giovanni (Centre National de la Recherche Scientifique (FR)) ; Becker, Kathrin (Albert Ludwigs Universitaet Freiburg (DE)) ; Bertella, Claudia (Chinese Academy of Sciences (CN)) ; Bonvini, Marco (Sapienza Universita e INFN, Roma I (IT)) ; Chen, Xuan (Zuerich University (CH)) ; Frederix, Rikkert (Technische Universität Muenchen (DE)) et al. This note summarizes the state-of-the-art predictions for the cross sections expected for Higgs boson production in the 27 TeV proton-proton collisions of a high-energy LHC, including a full theoretical uncertainty analysis. It also provides projections for the progress that may be expected on the timescale of the high-luminosity LHC and an assessment of the main limiting factors to further reduction of the remaining theoretical uncertainties.. LHCHXSWG-2019-001.- Geneva : CERN, 01 - 17. Fulltext: PDF; Registre complet - Registres semblants 2016-07-15 07:28
Analytical parametrization and shape classification of anomalous HH production in EFT approach / Carvalho Antunes De Oliveira, Alexandra (Universita e INFN, Padova (IT)) ; Dall'Osso, Martino (Universita e INFN, Padova (IT)) ; De Castro Manzano, Pablo (Universita e INFN, Padova (IT)) ; Dorigo, Tommaso (Universita e INFN, Padova (IT)) ; Goertz, Florian (CERN) ; Gouzevitch, Maxime (Universite Claude Bernard-Lyon I (FR)) ; Tosi, Mia (CERN) In this document we study the effect of anomalous Higgs boson couplings on non-resonant pair production of Higgs bosons (HH) at the LHC. We explore the space of the five parameters $\kappa_\lambda$, $\kappa_t$, $c_2$, $c_{g}$, and $c_{2g}$ in terms of the corresponding kinematics of the final state, and describe a suggested partition of the space into a limited number of regions featuring similar phenomenology in the kinematics of HH final state, along with a corresponding set of representative benchmark points. [...] LHCHXSWG-2016-001.- Geneva : CERN, 2016 Fulltext: PDF; Registre complet - Registres semblants 2015-08-03 09:58
Benchmark scenarios for low $\tan \beta$ in the MSSM / Bagnaschi, Emanuele (DESY) ; Frensch, Felix (Karlsruhe, Inst. Technol.) ; Heinemeyer, Sven (Cantabria Inst. of Phys.) ; Lee, Gabriel (Technion) ; Liebler, Stefan Rainer (DESY) ; Muhlleitner, Milada (Karlsruhe, Inst. Technol.) ; Mc Carn, Allison Renae (Michigan U.) ; Quevillon, Jeremie (King's Coll. London) ; Rompotis, Nikolaos (Seattle U.) ; Slavich, Pietro (Paris, LPTHE) et al. The run-1 data taken at the LHC in 2011 and 2012 have led to strong constraints on the allowed parameter space of the MSSM. These are imposed by the discovery of an approximately SM-like Higgs boson with a mass of $125.09\pm0.24$~GeV and by the non-observation of SUSY particles or of additional (neutral or charged) Higgs bosons. [...] LHCHXSWG-2015-002.- Geneva : CERN, 2015 - 24. Fulltext: PDF; Registre complet - Registres semblants 2015-03-20 14:24
Recommendations for the interpretation of LHC searches for $H_5^0$, $H_5^{\pm}$, and $H_5^{\pm\pm}$ in vector boson fusion with decays to vector boson pairs / Zaro, Marco (Paris U., IV ; Paris, LPTHE) ; Logan, Heather (Ottawa Carleton Inst. Phys.) We provide theory input for the interpretation of the LHC searches for the production of Higgs bosons $H_5^0$, $H_5^{\pm}$, and $H_5^{\pm\pm}$ that transform as a fiveplet under the custodial symmetry. We choose as a benchmark the Georgi-Machacek model, in which isospin-triplet scalars are added to the Standard Model Higgs sector in such a way as to preserve custodial SU(2) symmetry. [...] LHCHXSWG-2015-001.- Geneva : CERN, 30 - 19p. Fulltext: PDF; Registre complet - Registres semblants |
Elements of Infinite Polynomial Rings

AUTHORS:

An Infinite Polynomial Ring has generators \(x_\ast, y_\ast, ...\), so that the variables are of the form \(x_0, x_1, x_2, ..., y_0, y_1, y_2, ...\) (see infinite_polynomial_ring). Using the generators, we can create elements as follows:

sage: X.<x,y> = InfinitePolynomialRing(QQ)
sage: a = x[3]
sage: b = y[4]
sage: a
x_3
sage: b
y_4
sage: c = a*b+a^3-2*b^4
sage: c
x_3^3 + x_3*y_4 - 2*y_4^4
Any Infinite Polynomial Ring X is equipped with a monomial ordering. We only consider monomial orderings in which X.gen(i)[m] > X.gen(j)[n] \(\iff\) i<j, or i==j and m>n.

Under this restriction, the monomial ordering can be lexicographic (default), degree lexicographic, or degree reverse lexicographic. Here, the ordering is lexicographic, and elements can be compared as usual:

sage: X._order
'lex'
sage: a > b
True
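For intuition, the comparison rule above can be mimicked in plain Python. This is only an illustrative sketch, not Sage's implementation; a variable X.gen(i)[m] is encoded as the pair (i, m).

```python
def var_key(v):
    """Sort key for a variable X.gen(i)[m], encoded as the pair (i, m).

    Sorting ascending by this key lists variables from largest to smallest:
    a smaller generator index i wins, and for equal i a larger index m wins.
    """
    i, m = v
    return (i, -m)

def greater(v, w):
    """True if variable v is larger than w in the monomial ordering."""
    return var_key(v) < var_key(w)

print(greater((0, 5), (1, 9)))  # True: x_5 > y_9, since x is generator 0
print(greater((0, 3), (0, 2)))  # True: x_3 > x_2
print(greater((1, 2), (1, 5)))  # False: y_2 < y_5
```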
Note that, when a method is called that is not directly implemented for InfinitePolynomial, an attempt is made to call this method on the underlying classical polynomial. This holds, e.g., when applying the latex function:

sage: latex(c)
x_{3}^{3} + x_{3} y_{4} - 2 y_{4}^{4}

There is a permutation action on Infinite Polynomial Rings by permuting the indices of the variables:

sage: P = Permutation(((4,5),(2,3)))
sage: c^P
x_2^3 + x_2*y_5 - 2*y_5^4
Note that P(0)==0, and thus variables of index zero are invariant under the permutation action. More generally, if P is any callable object that accepts non-negative integers as input and returns non-negative integers, then c^P means to apply P to the variable indices occurring in c.
sage.rings.polynomial.infinite_polynomial_element.InfinitePolynomial(A, p)
Create an element of a Polynomial Ring with a Countably Infinite Number of Variables.

Usually, an InfinitePolynomial is obtained by using the generators of an Infinite Polynomial Ring (see infinite_polynomial_ring) or by conversion.

INPUT:

A – an Infinite Polynomial Ring.
p – a classical polynomial that can be interpreted in A.

ASSUMPTIONS:

In the dense implementation, it must be ensured that the argument p coerces into A._P by a name preserving conversion map.

In the sparse implementation, in the direct construction of an infinite polynomial, it is not tested whether the argument p makes sense in A.
EXAMPLES:
sage: from sage.rings.polynomial.infinite_polynomial_element import InfinitePolynomial
sage: X.<alpha> = InfinitePolynomialRing(ZZ)
sage: P.<alpha_1,alpha_2> = ZZ[]

Currently, P and X._P (the underlying polynomial ring of X) both have two variables:

sage: X._P
Multivariate Polynomial Ring in alpha_1, alpha_0 over Integer Ring

By default, a coercion from P to X._P would not be name preserving. However, this is taken care of: a name preserving conversion is impossible, and by consequence an error is raised:

sage: InfinitePolynomial(X, (alpha_1+alpha_2)^2)
Traceback (most recent call last):
...
TypeError: Could not find a mapping of the passed element to this ring.

When extending the underlying polynomial ring, the construction of an infinite polynomial works:

sage: alpha[2]
alpha_2
sage: InfinitePolynomial(X, (alpha_1+alpha_2)^2)
alpha_2^2 + 2*alpha_2*alpha_1 + alpha_1^2

In the sparse implementation, it is not checked whether the polynomial really belongs to the parent:

sage: Y.<alpha,beta> = InfinitePolynomialRing(GF(2), implementation='sparse')
sage: a = (alpha_1+alpha_2)^2
sage: InfinitePolynomial(Y, a)
alpha_1^2 + 2*alpha_1*alpha_2 + alpha_2^2

However, it is checked when doing a conversion:

sage: Y(a)
alpha_2^2 + alpha_1^2

class sage.rings.polynomial.infinite_polynomial_element.InfinitePolynomial_dense(A, p)
Element of a dense Polynomial Ring with a Countably Infinite Number of Variables.
INPUT:
A – an Infinite Polynomial Ring in dense implementation
p – a classical polynomial that can be interpreted in A.

Of course, one should not directly invoke this class, but rather construct elements of A in the usual way.

This class inherits from InfinitePolynomial_sparse. See there for a description of the methods.
class sage.rings.polynomial.infinite_polynomial_element.InfinitePolynomial_sparse(A, p)
Element of a sparse Polynomial Ring with a Countably Infinite Number of Variables.
INPUT:
A – an Infinite Polynomial Ring in sparse implementation
p – a classical polynomial that can be interpreted in A.

Of course, one should not directly invoke this class, but rather construct elements of A in the usual way.
EXAMPLES:
sage: A.<a> = QQ[]
sage: B.<b,c> = InfinitePolynomialRing(A,implementation='sparse')
sage: p = a*b[100] + 1/2*c[4]
sage: p
a*b_100 + 1/2*c_4
sage: p.parent()
Infinite polynomial ring in b, c over Univariate Polynomial Ring in a over Rational Field
sage: p.polynomial().parent()
Multivariate Polynomial Ring in b_100, b_0, c_4, c_0 over Univariate Polynomial Ring in a over Rational Field
coefficient(monomial)

Returns the coefficient of a monomial in this polynomial.

INPUT:

A monomial (element of the parent of self) or a dictionary that describes a monomial (the keys are variables of the parent of self, the values are the corresponding exponents)

EXAMPLES:

We can get the coefficient in front of monomials:

sage: X.<x> = InfinitePolynomialRing(QQ)
sage: a = 2*x[0]*x[1] + x[1] + x[2]
sage: a.coefficient(x[0])
2*x_1
sage: a.coefficient(x[1])
2*x_0 + 1
sage: a.coefficient(x[2])
1
sage: a.coefficient(x[0]*x[1])
2

We can also pass in a dictionary:

sage: a.coefficient({x[0]:1, x[1]:1})
2
footprint()

Leading exponents sorted by index and generator.

OUTPUT:

D – a dictionary whose keys are the occurring variable indices. D[s] is a list [i_1,...,i_n], where i_j gives the exponent of self.parent().gen(j)[s] in the leading term of self.

EXAMPLES:

sage: X.<x,y> = InfinitePolynomialRing(QQ)
sage: p = x[30]*y[1]^3*x[1]^2+2*x[10]*y[30]
sage: sorted(p.footprint().items())
[(1, [2, 3]), (30, [1, 0])]
gcd(x)

Computes the greatest common divisor.

EXAMPLES:

sage: R.<x> = InfinitePolynomialRing(QQ)
sage: p1 = x[0]+x[1]**2
sage: gcd(p1,p1+3)
1
sage: gcd(p1,p1)==p1
True
is_nilpotent()

Return True if self is nilpotent, i.e., some power of self is 0.

EXAMPLES:

sage: R.<x> = InfinitePolynomialRing(QQbar)
sage: (x[0]+x[1]).is_nilpotent()
False
sage: R(0).is_nilpotent()
True
sage: _.<x> = InfinitePolynomialRing(Zmod(4))
sage: (2*x[0]).is_nilpotent()
True
sage: (2+x[4]*x[7]).is_nilpotent()
False
sage: _.<y> = InfinitePolynomialRing(Zmod(100))
sage: (5+2*y[0] + 10*(y[0]^2+y[1]^2)).is_nilpotent()
False
sage: (10*y[2] + 20*y[5] - 30*y[2]*y[5] + 70*(y[2]^2+y[5]^2)).is_nilpotent()
True
is_unit()

Answer whether self is a unit.

EXAMPLES:

sage: R1.<x,y> = InfinitePolynomialRing(ZZ)
sage: R2.<a,b> = InfinitePolynomialRing(QQ)
sage: (1+x[2]).is_unit()
False
sage: R1(1).is_unit()
True
sage: R1(2).is_unit()
False
sage: R2(2).is_unit()
True
sage: (1+a[2]).is_unit()
False

Check that trac ticket #22454 is fixed:

sage: _.<x> = InfinitePolynomialRing(Zmod(4))
sage: (1 + 2*x[0]).is_unit()
True
sage: (x[0]*x[1]).is_unit()
False
sage: _.<x> = InfinitePolynomialRing(Zmod(900))
sage: (7+150*x[0] + 30*x[1] + 120*x[1]*x[100]).is_unit()
True
lc()

The coefficient of the leading term of self.

EXAMPLES:

sage: X.<x,y> = InfinitePolynomialRing(QQ)
sage: p = 2*x[10]*y[30]+3*x[10]*y[1]^3*x[1]^2
sage: p.lc()
3
lm()

The leading monomial of self.

EXAMPLES:

sage: X.<x,y> = InfinitePolynomialRing(QQ)
sage: p = 2*x[10]*y[30]+x[10]*y[1]^3*x[1]^2
sage: p.lm()
x_10*x_1^2*y_1^3
lt()

The leading term (= product of coefficient and monomial) of self.

EXAMPLES:

sage: X.<x,y> = InfinitePolynomialRing(QQ)
sage: p = 2*x[10]*y[30]+3*x[10]*y[1]^3*x[1]^2
sage: p.lt()
3*x_10*x_1^2*y_1^3
max_index()

Return the maximal index of a variable occurring in self, or -1 if self is scalar.

EXAMPLES:

sage: X.<x,y> = InfinitePolynomialRing(QQ)
sage: p = x[1]^2+y[2]^2+x[1]*x[2]*y[3]+x[1]*y[4]
sage: p.max_index()
4
sage: x[0].max_index()
0
sage: X(10).max_index()
-1
polynomial()

Return the underlying polynomial.

EXAMPLES:

sage: X.<x,y> = InfinitePolynomialRing(GF(7))
sage: p = x[2]*y[1]+3*y[0]
sage: p
x_2*y_1 + 3*y_0
sage: p.polynomial()
x_2*y_1 + 3*y_0
sage: p.polynomial().parent()
Multivariate Polynomial Ring in x_2, x_1, x_0, y_2, y_1, y_0 over Finite Field of size 7
sage: p.parent()
Infinite polynomial ring in x, y over Finite Field of size 7
reduce(I, tailreduce=False, report=None)

Symmetrical reduction of self with respect to a symmetric ideal (or list of Infinite Polynomials).

INPUT:

I – a SymmetricIdeal or a list of Infinite Polynomials.
tailreduce – (bool, default False) Tail reduction is performed if this parameter is True.
report – (object, default None) If not None, some information on the progress of computation is printed, since reduction of huge polynomials may take a long time.

OUTPUT:

Symmetrical reduction of self with respect to I, possibly with tail reduction.
THEORY:
Reducing an element \(p\) of an Infinite Polynomial Ring \(X\) by some other element \(q\) means the following:
1. Let \(M\) and \(N\) be the leading terms of \(p\) and \(q\).
2. Test whether there is a permutation \(P\) that does not diminish the variable indices occurring in \(N\) and preserves their order, so that there is some term \(T\in X\) with \(TN^P = M\). If there is no such permutation, return \(p\).
3. Replace \(p\) by \(p-T q^P\) and continue with step 1.
EXAMPLES:
sage: X.<x,y> = InfinitePolynomialRing(QQ)
sage: p = y[1]^2*y[3]+y[2]*x[3]^3
sage: p.reduce([y[2]*x[1]^2])
x_3^3*y_2 + y_3*y_1^2

The preceding is correct: If a permutation turns y[2]*x[1]^2 into a factor of the leading monomial y[2]*x[3]^3 of p, then it interchanges the variable indices 1 and 2; this is not allowed in a symmetric reduction. However, reduction by y[1]*x[2]^2 works, since one can change variable index 1 into 2 and 2 into 3:

sage: p.reduce([y[1]*x[2]^2])
y_3*y_1^2

The next example shows that tail reduction is not done, unless it is explicitly advised. The input can also be a Symmetric Ideal:

sage: I = (y[3])*X
sage: p.reduce(I)
x_3^3*y_2 + y_3*y_1^2
sage: p.reduce(I, tailreduce=True)
x_3^3*y_2

Last, we demonstrate the report option:

sage: p = x[1]^2+y[2]^2+x[1]*x[2]*y[3]+x[1]*y[4]
sage: p.reduce(I, tailreduce=True, report=True)
:T[2]:>
>
x_1^2 + y_2^2
The output ‘:’ means that there was one reduction of the leading monomial. ‘T[2]’ means that a tail reduction was performed on a polynomial with two terms. At ‘>’, one round of the reduction process is finished (there could only be several non-trivial rounds if \(I\) was generated by more than one polynomial).
ring()

The ring which self belongs to.

This is the same as self.parent().

EXAMPLES:

sage: X.<x,y> = InfinitePolynomialRing(ZZ,implementation='sparse')
sage: p = x[100]*y[1]^3*x[1]^2+2*x[10]*y[30]
sage: p.ring()
Infinite polynomial ring in x, y over Integer Ring
squeezed()

Reduce the variable indices occurring in self.

OUTPUT:

Apply a permutation to self that does not change the order of the variable indices of self but squeezes them into the range 1,2,…

EXAMPLES:

sage: X.<x,y> = InfinitePolynomialRing(QQ,implementation='sparse')
sage: p = x[1]*y[100] + x[50]*y[1000]
sage: p.squeezed()
x_2*y_4 + x_1*y_3
stretch(k)

Stretch self by a given factor.

INPUT:

k – an integer.

OUTPUT:

Replace \(v_n\) with \(v_{n\cdot k}\) for all generators \(v_\ast\) occurring in self.

EXAMPLES:

sage: X.<x> = InfinitePolynomialRing(QQ)
sage: a = x[0] + x[1] + x[2]
sage: a.stretch(2)
x_4 + x_2 + x_0
sage: X.<x,y> = InfinitePolynomialRing(QQ)
sage: a = x[0] + x[1] + y[0]*y[1]; a
x_1 + x_0 + y_1*y_0
sage: a.stretch(2)
x_2 + x_0 + y_2*y_0
symmetric_cancellation_order(other)

Comparison of leading terms by Symmetric Cancellation Order, \(<_{sc}\).

INPUT:

self, other – two Infinite Polynomials

ASSUMPTION:

Both Infinite Polynomials are non-zero.

OUTPUT:

(c, sigma, w), where

c = -1, 0, 1, or None, if the leading monomial of self is smaller, equal, greater, or incomparable with respect to other in the monomial ordering of the Infinite Polynomial Ring;
sigma is a permutation witnessing self \(<_{sc}\) other (resp. self \(>_{sc}\) other) or is 1 if self.lm()==other.lm();
w is 1 or is a term so that w*self.lt()^sigma == other.lt() if \(c\le 0\), and w*other.lt()^sigma == self.lt() if \(c=1\).

THEORY:

If the Symmetric Cancellation Order is a well-quasi-ordering then computation of Groebner bases always terminates. This is the case, e.g., if the monomial order is lexicographic. For that reason, lexicographic order is our default order.

EXAMPLES:

sage: X.<x,y> = InfinitePolynomialRing(QQ)
sage: (x[2]*x[1]).symmetric_cancellation_order(x[2]^2)
(None, 1, 1)
sage: (x[2]*x[1]).symmetric_cancellation_order(x[2]*x[3]*y[1])
(-1, [2, 3, 1], y_1)
sage: (x[2]*x[1]*y[1]).symmetric_cancellation_order(x[2]*x[3]*y[1])
(None, 1, 1)
sage: (x[2]*x[1]*y[1]).symmetric_cancellation_order(x[2]*x[3]*y[2])
(-1, [2, 3, 1], 1)
tail()

The tail of self (this is self minus its leading term).

EXAMPLES:

sage: X.<x,y> = InfinitePolynomialRing(QQ)
sage: p = 2*x[10]*y[30]+3*x[10]*y[1]^3*x[1]^2
sage: p.tail()
2*x_10*y_30
variables()

Return the variables occurring in self (tuple of elements of some polynomial ring).

EXAMPLES:

sage: X.<x> = InfinitePolynomialRing(QQ)
sage: p = x[1] + x[2] - 2*x[1]*x[3]
sage: p.variables()
(x_3, x_2, x_1)
sage: x[1].variables()
(x_1,)
sage: X(1).variables()
()
This isn't an answer to conjecture 1, just an elaboration of things others mentioned.
There is every reason to think that the answer to conjecture 1 is yes, and even that for fixed $m \gt 1$ we have that for each odd integer $x \geq 3$ there are infinitely many $n$ with $p_{n+m}-p_n-p_m=x.$ We don't know that there is even one; however, we can say with good precision how the number of solutions should be expected to grow as $n \rightarrow \infty.$ Computations always (seem) to conform to this with good fidelity as far as checked. I'll explain that a bit and then point out that fixing $m=2$ would likely not be the most fruitful way to find a solution.
So with $m=3$ and $p_3=5$, solving $p_{n+3}-p_n-5=15$ amounts to finding two primes $p_N$ and $p_n$ with $p_N-p_n=20$, where $N=n+3$.

We do not know that $p_N-p_n=20$ happens infinitely often. For each even integer $g$ define $\pi_g(X)$ to be the number of pairs $(p_N,p_n)$ with $p_N-p_n=g$ and $p_N \leq X.$ Then $\pi_2(X)$ is the number of twin primes up to $X$. We don't know that this grows without bound, but we can expect that it is asymptotic (in some sense which could be made precise) to $C_2\frac{X}{\ln^2X}$, where the constant is $C_2=\prod_p(1-\frac1{(p-1)^2})$ with the product over the odd primes.
There is a similar constant $C_g$ for each even $g.$ Namely $C_g=C_2\prod_p\frac{p-1}{p-2}$ where the product is over odd primes which divide $g.$
So $p_n+20$ is expected to be prime more often than $p_n+2$, but about as often as $p_n+10$; i.e., $\pi_{20}(X) \sim \pi_{10}(X)\sim \frac{4}{3}\pi_2(X).$ That is a prediction which holds up quite well as far as checked.
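This prediction is easy to probe numerically. Here is a plain-Python sketch (the bound $10^5$ and the tolerances are arbitrary choices) that counts the pairs directly:

```python
# Count pairs (p, p+g) of primes with p+g <= X, for gaps g = 2, 10, 20.
X = 10**5
is_prime = bytearray([1]) * (X + 1)
is_prime[0:2] = b"\x00\x00"
for i in range(2, int(X**0.5) + 1):
    if is_prime[i]:
        is_prime[i*i::i] = bytearray(len(is_prime[i*i::i]))

counts = {g: sum(is_prime[p] and is_prime[p + g] for p in range(2, X - g + 1))
          for g in (2, 10, 20)}
print(counts)
# The heuristic predicts both ratios below to be near 4/3.
print(counts[10] / counts[2], counts[20] / counts[2])
```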
For an amazing exposition of this read Heuristic Reasoning In The Theory of Numbers by Polya.
But our goal had an extra condition: we want $p_n$, $p_n+6$, $p_n+14$, $p_n+20$ to all be prime, but $p_n+h$ to be composite for $h=2,4,8,10,12,16,18.$ It is possible to similarly predict how often that happens up to $X.$ Summing over a finite number of cases would give a prediction for the number of solutions of the given problem.
$m=2$ is a little easier but I wanted to use a different value.
But where is it most fruitful to look for solutions to $p_{n+m}-p_n-p_m=x$?
Here is a graph for $p_{n+m}-p_n-p_m=101$ with $n+m\lt 200;$ I find $168$ solutions.

Of the $168$ solutions, $61$ of them have $m \in\{34,35,36,37,38\}$, and the ratio $\frac{n}{m}$ ranges from $3$ to $5.2$.
Using $p_k \sim k\ln{k}$, one might be able to argue that for fixed $r=\frac{n}{m}$ (really $r$ in some small range like $[3,5]$) there is a narrow range of $m$ values that would be worth searching first. Perhaps the best $r$ (given $x$) could be estimated. I would have guessed $r \sim 1$ is best, but that seems not to be the case based on this one computation. Perhaps the optimal range is past $m+n=200.$
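A search of this kind is easy to script. The following plain-Python sketch enumerates solutions of $p_{n+m}-p_n-p_m=x$ for $x=101$ with $n+m<200$; boundary conventions may differ slightly from the computation above, so the exact count is not asserted here.

```python
def primes_up_to(limit):
    """Return the list of primes <= limit via a simple sieve."""
    s = bytearray([1]) * (limit + 1)
    s[0:2] = b"\x00\x00"
    for i in range(2, int(limit**0.5) + 1):
        if s[i]:
            s[i*i::i] = bytearray(len(s[i*i::i]))
    return [i for i in range(limit + 1) if s[i]]

p = [0] + primes_up_to(2000)  # 1-based: p[1] = 2, p[2] = 3, ...

x = 101
sols = [(n, m) for m in range(1, 200) for n in range(1, 200 - m)
        if p[n + m] - p[n] - p[m] == x]
print(len(sols), "solutions with n+m < 200")
```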
The most direct way of simulating a random variable from a distribution with cdf $F$ is to first simulate a Uniform variate $U\sim\mathcal{U}(0,1)$ and second return the inverse cdf transform $F^{-1}(U)$. When the inverse $F^{-1}$ is not available in closed form, a numerical inversion can be used. Numerical inversion may however be costly, especially in the ...
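As a minimal illustration of the inverse-cdf method (plain Python; the exponential distribution and all parameter values are arbitrary choices for this sketch):

```python
import math
import random

def sample_inverse_cdf(inv_cdf, n, rng):
    """Draw n variates by applying the inverse cdf to Uniform(0,1) draws."""
    return [inv_cdf(rng.random()) for _ in range(n)]

# Exponential(rate): F(x) = 1 - exp(-rate*x), so F^{-1}(u) = -log(1-u)/rate.
rate = 2.0
rng = random.Random(0)
xs = sample_inverse_cdf(lambda u: -math.log(1.0 - u) / rate, 50_000, rng)
print(sum(xs) / len(xs))  # close to the theoretical mean 1/rate = 0.5
```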
The confusion stems from a misunderstanding of the notation $$V \sim f_V$$ which means both (a) $V$ is a random variable with density $f_V$ and (b) $V$ is created by a PRNG algorithm that reproduces a generation of a random variable with density $f_V$. Each time a generation $V_i\sim f_V$ occurs in the algorithm from Casella and Berger, a new ...
I have puzzled over this question but never came up with a satisfying solution. One property that is of possible use is that, if a density writes $$f(x)=\frac{g(x)-\omega h(x)}{1-\omega}\qquad \omega>0$$ where $g$ is a density such that $g(x)\ge \omega h(x)$, simulating from $g$ and rejecting these simulations with probability $\omega h(x)/g(x)$ delivers ...
If a Metropolis-Hastings algorithm uses a truncated Normal as proposal$${\cal N}^+(\mu_{t-1},\sigma^2)$$the associated Metropolis-Hastings acceptance ratio is$$\dfrac{\pi(\mu')}{\pi(\mu_{t-1})}\times \dfrac{\varphi(\{\mu_{t-1}-\mu'\}/\sigma)}{\varphi(\{\mu'-\mu_{t-1}\}/\sigma)}\times\dfrac{\Phi(\mu_{t-1}/\sigma)}{\Phi(\mu'/\sigma)}$$when $\mu'\sim{\cal N}^+(...
What do you mean by "find?" I can tell you $\pi(\theta \mid x)$ is proportional to$$f(x \mid \theta) \pi(\theta) \propto \exp\left[-\frac{(\theta - x)^2}{2} - \log(1 + \theta^2) \right],$$but I don't recognize this density. You can use this fact in a number of techniques to sample from the posterior (e.g. accept-reject, importance sampling with ...
I'll construct a proof of a simpler proposition which should make it clear how the more general one is done. Let $z \sim \text{U}(0,1)$. Then the density $p(z) = 1$ and the cumulative distribution $P(z) = z$. Now let us find the conditional distribution of $z | z < c$, i.e., $z \in (0,c)$. Using the definition of conditional probability, $p(z|z<c)...
I stumbled over this via googling. Rejection sampling is not needed. Instead, it is sufficient to flip the sign if the sample would be rejected! This is because we can use that $\Phi(-ax)+\Phi(ax)=1$ and thus $(f(x)+f(-x))/2= \phi(x) \Phi(ax)+ \phi(-x) \Phi(-ax)= \phi(x)$. Therefore, we can sample a skew-normal random variable by first sampling a standard ...
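A quick numerical check of this sign-flip trick (plain Python, targeting the skew-normal density $f(x)=2\phi(x)\Phi(ax)$; the shape value $a=5$ is an arbitrary choice):

```python
import math
import random

def std_normal_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def skew_normal_samples(a, n, rng):
    """Sample the skew-normal with shape a by sign-flipping N(0,1) draws."""
    out = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        # Keep x with probability Phi(a*x); otherwise flip the sign.
        if rng.random() >= std_normal_cdf(a * x):
            x = -x
        out.append(x)
    return out

a = 5.0
xs = skew_normal_samples(a, 100_000, random.Random(1))
mean = sum(xs) / len(xs)
# Theoretical mean: sqrt(2/pi) * a/sqrt(1+a^2), about 0.782 for a = 5.
print(mean)
```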
The simplest way is to use the cumulative distribution function like in the title of your question. As pointed out by Jim B., the CDF is: $$F(x)=1-e^{-\frac{x^2}{10}}$$ The method is explained here: wikipedia or here: How does the inverse transform method work? The Acceptance-rejection method is more complex, usually slower, and should not be the first choice. ...
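For the cdf quoted above, solving $u = 1-e^{-x^2/10}$ for $x$ gives $x=\sqrt{-10\ln(1-u)}$, so the inverse-transform sampler is one line (a plain-Python sketch):

```python
import math
import random

def sample(n, rng):
    """Inverse-transform sampler for F(x) = 1 - exp(-x^2/10), x >= 0."""
    return [math.sqrt(-10.0 * math.log(1.0 - rng.random())) for _ in range(n)]

xs = sample(50_000, random.Random(42))
mean = sum(xs) / len(xs)
# This is a Rayleigh distribution with sigma^2 = 5; mean = sigma*sqrt(pi/2) ~ 2.80.
print(mean)
```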
The paper is available from ResearchGate through Google. The validation of the delayed rejection algorithm is that, when starting with a realisation of the variable $X_t\sim\pi(x)$, the outcome of the Markov move to $X_{t+1}$ still remains distributed as $X_{t+1}\sim\pi(x)$. If the first step leads to an acceptance, the validation is the same as with the ...
The answer by Taylor is excellent, and already correctly gives the posterior kernel, which gives an intractable integral. If your goal is to find the posterior density (as opposed to sampling from the posterior distribution) then you are effectively just looking for the constant of integration: $$H(x) \equiv \frac{1}{\pi} \int \limits_{-\infty}^\infty \...
When you write the Metropolis-Hastings density [wrt a dominating measure that is the sum of a measure $\text{d}\lambda$ that is absolutely continuous against the target and of a Dirac measure at $x$] as $$K(x, x') = \displaystyle \alpha(x, x')q(x \mid x')$$ it should be $$K(x, x') = \displaystyle \alpha(x, x')q(x' \mid x)+\int (1-\alpha(x, x'))q(x' \mid x)\...
I hope this answers your questions: Since your hypothesis test is being performed at the $0.01$ significance level, $K = 8,9,10$ is the set of $x$ values for which you can reject $H_{0}$ at that significance level. In other words, those three $x$ values produce a $p$-value that is lower than your pre-determined significance level. You can see that those ...
I have the draft of an idea that could work. It is not exact, but hopefully asymptotically exact. To turn it into a really rigorous method, where the approximation is controlled, or something about it can be proven, there's probably a lot of work needed. First, as mentioned by Xi'an, you can group the positive weights on one hand and the negative weights on ...
If you really want to do accept-reject I'd suggest that a $\chi^2_4$ density would be a considerably better choice as a proposal than a $\chi^2_1$. A $c$ of just under 1.5 will do (I'd just use 1.5, the difference is miniscule).[With care, a well chosen gamma density would let you push it some way below 1.5. Probably not worth the effort though; you may ... |
@mickep I'm pretty sure that malicious actors knew about this long before I checked it. My own server gets scanned by about 200 different people for vulnerabilities every day and I'm not even running anything with a lot of traffic.
@JosephWright @barbarabeeton @PauloCereda I thought we could create a golfing TeX extension, it would basically be a TeX format, just the first byte of the file would be an indicator of how to treat input and output or what to load by default. I thought of the name: Golf of TeX, shortened as GoT :-)
@PauloCereda Well, it has to be clever. You for instance need quick access to defining new cs, something like (I know this won't work, but you get the idea) \catcode`\@=13\def@{\def@##1\bgroup} so that when you use @Hello #1} it expands to \def@#1{Hello #1}
If you use the D'Alembert operator as well, you might find pretty using the symbol \bigtriangleup for your Laplace operator, in order to get a similar look as the \Box symbol that is being used for D'Alambertian. In the following, a tricky construction with \mathop and \mathbin is used to get the...
Latex exports. I am looking for a hint on this. I've tried everything I could find but no solution yet. I read equations from files generated by CAS programs. I can't edit these or modify them in any way. Some of these are too long; some are not. To make them fit in the page width, I tried resizebox. The problem is that this will resize the small equation as well as the long one to fit the page width, which is not what I want. I want to resize only the ones that are longer than the page width and keep the others as is. Is there a way in Latex to do this? Again, I do not know beforehand the size of…
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\begin{document}
\begin{equation*}
\resizebox{\textwidth}{!}{$\begin{split}
y &= \sin^2 x + \cos^2 x\\
x &= 5
\end{split}$}
\end{equation*}
\end{document}
The above will resize the small equation, which I do not want. But since I do not know beforehand how long each equation is, I resize every one. Is there a way in Latex, using some command, to find out whether an equation will fit the page width, or how long it is? If so, I can add logic to resize only when needed.
What I mean, I want to resize DOWN only if needed. And not resize UP. Also, if you think I should ask this on main board, I can. But thought to check here first.
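One common approach is to measure the equation first and shrink only when it overflows: box the math with \sbox, compare the box width to \textwidth with \ifdim, and call \resizebox only in the wide case. This is an untested sketch; the macro name \fitwide is made up.

```latex
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\newsavebox{\eqnbox}
% \fitwide{<math>}: typeset <math> at natural size, shrinking it to
% \textwidth only when it is wider than the text block.
\newcommand{\fitwide}[1]{%
  \sbox{\eqnbox}{$\displaystyle #1$}%
  \ifdim\wd\eqnbox>\textwidth
    \resizebox{\textwidth}{!}{\usebox{\eqnbox}}%
  \else
    \usebox{\eqnbox}%
  \fi}
\begin{document}
\begin{equation*}
  \fitwide{y = \sin^2 x + \cos^2 x} % short: stays at natural size
\end{equation*}
\end{document}
```

Since the wrapper can be added mechanically around each exported equation, this avoids hand-editing the CAS output itself.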
@egreg what other options do I have? Sometimes the CAS generates an equation which does not fit the page. Now it overflows the page and one can't see the rest of it at all. Since in pdf one can zoom in a little, at least one can see it if needed. It is impossible to edit or modify these by hand, as this is all done using a program.
@UlrikeFischer I do not generate unreadable equations. These are solutions of ODE's. The latex is generated by Maple. Some of them are longer than the page width. That is all. So what is your suggestion I do? Keep the long solutions flow out of the page? I can't edit these by hand. This is all generated by a program. I can add latex code around them that is all. But editing them is out of question. I tried breqn package, but that did not work. It broke many things as well.
@egreg That was just an example; that was something I added by hand to make up a long equation for illustration, not a real solution to an ODE. Again, thanks for the effort, but I can't edit the generated latex by hand at all. It would take me a year to do, and I run the program many times each day; each time, all the latex files are overwritten again anyway.
CAS providers do not generate good Latex either. That is why breqn did not work: many times they add {} around large expressions, which made breqn unable to break them. Also breqn has many other problems, so I no longer use it at all.
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
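Plugging in numbers makes the scale concrete (plain Python; the 2 eV photon and the 1 cm³ volume are arbitrary illustrative choices, not values from the discussion):

```python
# Single-photon intensity: I = u*c with energy density u = (hbar*omega)/V.
eV = 1.602176634e-19   # J per electronvolt
c = 2.99792458e8       # speed of light, m/s

E = 2.0 * eV           # photon energy ~2 eV (visible light)
V = 1e-6               # quantization volume: 1 cm^3 in m^3
u = E / V              # energy density, J/m^3
I = u * c              # intensity, W/m^2
print(I)               # ~ 9.6e-5 W/m^2
```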
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(\sqrt g\, g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(\sqrt g\, g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry. Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit. |
Board of Directors
Joined: 01 Sep 2010
Posts: 3400 Re: sqrt{ABC} = 504 . Is B divisible by 2? 1) C = 168 2) A is a [#permalink]
Show Tags 30 Sep 2012, 04:49
I hope Bunuel can shed light on this one.
Manager
Joined: 25 Jun 2012
Posts: 61
Location: India
WE: General Management (Energy and Utilities) Re: sqrt{ABC} = 504 . Is B divisible by 2? 1) C = 168 2) A is a [#permalink]
Show Tags 01 Oct 2012, 04:58
I don't know whether my approach is right or not.
sqrt{ABC} = 504. Here the LHS is a square root and the RHS is an integer, so the LHS must be an integer once the root is removed.
Statement 1: C = 168, nothing about A and B, so insufficient.
Statement 2: A is a perfect square. With C = 168, to make every prime appear to an even power, suppose B = C = 168. Then A, B and C all have even powers, so the two statements together suffice: B = 168 is divisible by 2. (The question stem does not say the three numbers are distinct, so we can assume B = C.)
Am I right in this?
Intern
Joined: 25 Nov 2012
Posts: 10 Re: DS- Perfect squares [#permalink]
Show Tags 22 Sep 2013, 12:02
anaik100 wrote:
sqrt{ABC}=504.Is B divisible by 2?
(1) C = 168 (2) A is a perfect square
sqrt{ABC} = 504, so ABC = 504^2.
1) We know C, but A and B are unknown. Hence insufficient.
2) A is a perfect square. 504 = 2*252 = 2*2*126 = 2*2*2*63 = 2*2*2*3*3*7, so A can be (2*2*3*3)^2, (2*2)^2 or (3*3)^2. B and C are still unknown. Hence insufficient.
1 and 2) C is known: AB*168 = 504^2, so AB = 3*504 = 3*2*2*2*3*3*7. Since A is a perfect square, A can be 2*2*3*3, giving B = 3*2*7, which is divisible by 2; or A can be 2*2, giving B = 3*2*7*3*3, which is still divisible by 2. Hence C.
Answer C is based on the assumption that B is an integer, and no such information is given anywhere in the question.
I believe answer C won't be valid in the case C = 168, A = 504^2, B = 1/168.
Kindly correct me if I am wrong. Thanks.
Manager
Joined: 21 Sep 2012
Posts: 208
Location: United States
Concentration: Finance, Economics
GPA: 4
WE: General Management (Consumer Products) Re: sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags 28 Aug 2014, 23:44
Let us take the factors of 504 = 2*2*2*7*3*3. The LHS has a square root sign, so the number of factors doubles on the LHS: we require six 2's, two 7's and four 3's to equate LHS to RHS.
Statement 1 is clearly insufficient.
Statement 2 is also insufficient.
1+2 combined: the factors of 168 are 2*2*2*7*3, so we still require three 2's, one 7 and three 3's to make LHS and RHS equal.
a = 2^2*3^2 and b = 2*7*3 satisfy the above condition, and b is divisible by 2.
a = 2^4*3^2 and b = (7*3)/2 = 21/2 also satisfy the above condition, but b is not divisible by 2.
Hence the statements are insufficient. Ans = E
Intern
Joined: 27 Aug 2014
Posts: 27
GMAT Date: 09-27-2014 Re: sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags 29 Aug 2014, 00:24
desaichinmay22 wrote:
Yes, you are right.
But if the question mentioned a, b, c as integers - then answer would be C.
Manager
Joined: 21 Sep 2012
Posts: 208
Location: United States
Concentration: Finance, Economics
GPA: 4
WE: General Management (Consumer Products) Re: sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags 29 Aug 2014, 01:38
bazc wrote:
Yes, you are right.
But if the question mentioned a, b, c as integers - then answer would be C.
That is right. But for this problem, the answer has to be E. It is not given that the numbers are integers.
Intern
Joined: 13 Sep 2015
Posts: 17
Location: India
GPA: 3.2 Re: sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags 16 May 2016, 22:48
why abc => a*b*c???
Math Expert
Joined: 02 Aug 2009
Posts: 7954 Re: sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags 16 May 2016, 23:14
vishnu440 wrote:
why abc => a*b*c???
Hi vishnu440,
abc is always a*b*c, but it can easily be mistaken for a 3-digit number.
A 3-digit number will always be explicitly mentioned,
say "abc is a 3-digit number where a, b are ..." etc.
Current Student
Joined: 12 Aug 2015
Posts: 2572 Is B divisible by 2? [#permalink]
Show Tags 24 Aug 2016, 04:11
If \(\sqrt{ABC}=504\) . Is B divisible by 2?
(1) C = 168
(2) A is a perfect square
Manager
Joined: 07 Jul 2016
Posts: 74
GPA: 4 sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags Updated on: 25 Aug 2016, 18:27
stonecold wrote:
If \(\sqrt{ABC}=504\) . Is B divisible by 2?
504 can be factorised as follows:
\(\frac{504}{4} = 126\), and \(1+2+6 = 9 \implies 126 = 9 \times 14\)
\(504 = 2^3 \times 3^2 \times 7\)
\(ABC = 2^6 \times 3^4 \times 7^2\)
\(\textbf{(1) } C = 168\)
We know nothing about \(A\) or \(B\), so it's only possible to prove that \(B\) is odd: \(B\) would only be odd if \(C\) were divisible by \(2^6 = 64\), and 168 is not divisible by 64.

EDIT: the above is stricken. We only know one number; even if \(C\) were divisible by 64, \(B = 2\) and \(A = \frac{1}{2}\) would still allow an even \(B\). Insufficient.

(2) A is a perfect square

There are multiple combinations of squares: if \(A = 2^6 \times 3^2\), there would be no twos left to distribute. Otherwise a two may or may not be distributed to \(B\) to make it even or odd. Insufficient.

(1+2)

From (1): \(168 = 2^3 \times 3 \times 7 \implies AB = 2^3 \times 3^3 \times 7\)

Prove by counterexample: \(A = 2^4\), \(B = 2^{-1} \times 3^3 \times 7\), which is not even. Anything without a fractional power of 2 in \(B\) would be even. Insufficient.
Originally posted by DAllison2016
on 24 Aug 2016, 05:07.
Last edited by DAllison2016
on 25 Aug 2016, 18:27, edited 1 time in total.
Retired Moderator
Joined: 26 Nov 2012
Posts: 578 Re: Is B divisible by 2? [#permalink]
Show Tags 24 Aug 2016, 05:14
stonecold wrote:
If \(\sqrt{ABC}=504\) . Is B divisible by 2?
(1) C = 168 (2) A is a perfect square
Given √ABC = 504
We write the factor of 504 - 2*2*2*7*3*3
Stat 1: C = 168 then A and B can be 3 or 1. (168*3*1 = 504)...we can't take unique value for A and B.
Stat 2: A is perfect square from the set 2*2*2*7*3*3 , we can take 4 and 9 and also 1..no unique value for A.
Stat 1 + Stat 2: if A is 1 and B = 3 then C = 168. Anyhow 1 is perfect square.
Option C is correct answer..
I doubt the OA... can you share the full official solution if you have it? Or others can try this.
Manager
Joined: 07 Jul 2016
Posts: 74
GPA: 4 sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags 24 Aug 2016, 05:28
msk0657 wrote:
OA has changed to C now the topics have been merged.
For the OA of E, the numbers do not have to be integers.
Let \(A = 144, B = \frac{21}{2}\). B is not even.
Let \(A = 4, B = 378\). B is even.
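Both cases are easy to verify with exact rational arithmetic (a quick sketch I've added; the variable names are illustrative):

```python
from fractions import Fraction

ABC = 504 ** 2   # since sqrt(ABC) = 504
C = 168          # statement (1)

# Once A and C are chosen, B is forced: B = 504^2 / (A * C).
for A in (144, 4):
    B = Fraction(ABC, A * C)
    print(A, B)   # 144 -> 21/2 (not even), 4 -> 378 (even)
```

So with C fixed and A a perfect square, B can come out either even or not an integer at all.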
Current Student
Joined: 12 Aug 2015
Posts: 2572 Re: sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags 24 Aug 2016, 08:17
Hi everyone abhimahna DAllison2016 msk0657
Here is what I think:
The answer here is given as C, but we aren't told A, B, C are integers.
If the question mentioned that A, B, C are integers, then yes, C is fine.
Otherwise E has to be true.
CC- Bunuel
Board of Directors
Status: Stepping into my 10 years long dream
Joined: 18 Jul 2015
Posts: 3584 Re: sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags 27 Aug 2016, 03:36
stonecold wrote:
Hi everyone abhimahna DAllison2016 msk0657
here is what i think =>
The ANSWER here should be C
WE aren't told A,B.C are integers.
if the Question would mention that A,B,C are integers then ye => C is fine
Else here E Gotta be true
CC- Bunuel
Dude,
504 can be written as \(7 * 3^2 * 2^3\)
so, since \(sqrt(ABC) = 504\), we can say ABC = \(7^2 * 3^4 * 2^6\)
Now, Statement 1 has made it clear that C is an integer.
Now, we also know that A*B = \(7 * 3^3 * 2^3\)
We are not sure if B is even. hence, insufficient.
Now Statement B states that A is a perfect Square. No info about B and C, hence insufficient.
Combining,
We know C and product of A and B.
Want to take the value of A as 1/2? If yes, take it. I know one thing: in order to make the product of A and B an integer, I MUST have a 2 in B.
If you want to take A = 1/7, again take it: since the product of A and B is even, and A is ODD, I can say B MUST be even.
Also, if I go with normal integral values, then since A is a perfect square and there are three 2's in the product of A and B, I must infer that B must have at least one 2 in it.
Hence, sufficient.
Do let me know if you have any doubts.
Intern
Joined: 20 Jul 2017
Posts: 4 Re: sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags 27 Aug 2017, 00:54
The question is quite simple:

ABC = 504*504, so with C = 168 we get AB = 504*3.
We don't know the value of A, so statement 1 is insufficient.
Statement 2: A is a perfect square, but this tells us nothing about B or C. Insufficient.
Combined: AB = 504*3, and we can take A to be any perfect square, for example 49, 36, 4, 9, etc.
1. 504*3/49 => not divisible by 2
2. 504*3/9 => 168 => divisible by 2
Not sufficient. E is the answer.
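If one does add the integer assumption, the combined statements can be settled by brute force: enumerate every perfect-square integer A dividing AB = 1512 and read off B (a sketch I've added, not from the thread):

```python
import math

AB = 504 ** 2 // 168       # statement (1) fixes A*B = 1512 = 2^3 * 3^3 * 7

# Every integer perfect square A that divides A*B, with the forced B.
cases = []
for A in range(1, AB + 1):
    r = math.isqrt(A)
    if r * r == A and AB % A == 0:
        cases.append((A, AB // A))

print(cases)   # [(1, 1512), (4, 378), (9, 168), (36, 42)] -- every B is even
```

So with integers the answer would be C; dropping the integer assumption admits non-integer B (e.g. A = 144 gives B = 21/2) and pushes the answer to E.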
Intern
Joined: 15 Jul 2016
Posts: 34
Location: India
Concentration: Operations, Marketing GMAT 1: 560 Q46 V21
GMAT 2: 620 Q48 V26
GPA: 4
WE: Operations (Manufacturing) Re: sqrt{ABC} = 504. Is B divisible by 2? [#permalink]
Show Tags 08 Nov 2018, 22:56
For option C:

Let \(A = 3^2 \times 7^2 \times 2^4\) and \(C = 2^3 \times 3 \times 7 = 168\). In this case \(B = \frac{3}{7 \times 2}\), which is not divisible by 2. I think it is E, not C. Correct me if I am wrong.
Pearce-Hall error learning theory
Geoffrey Hall (2008), Scholarpedia, 3(2):5274. doi:10.4249/scholarpedia.5274, revision #91640
The
Pearce-Hall model (1980) describes the circumstances in which an event comes to be established as a signal for its consequences. The model was developed in the context of classical (Pavlovian) conditioning in which the signal is referred to as a conditioned stimulus (CS) and the consequence as an unconditioned stimulus (US); it thus dealt, specifically, with the conditions necessary for the formation of an associative link between the central representations of a CS and a US. The model proposed that effectiveness of concurrent activation of the two representations in establishing or strengthening a link between them depends on the associability of (informally, the attention paid to) the CS. The associability of a CS is held to change as a result of experience, declining when a CS accurately predicts its consequences, and increasing or remaining high when a CS is followed by unpredictable consequences. These proposals capture the intuition that an animal needs to attend to and fully process an event the consequences of which are uncertain, but may deal differently with an event the consequences of which are known. The model dealt with many of the same phenomena as were addressed by the earlier Rescorla-Wagner model of conditioning, but whereas the latter explained them in terms of learned changes in US effectiveness, the Pearce-Hall model ascribed them to changes in CS effectiveness. The Pearce-Hall model was thus able to extend the analysis to a range of attentional phenomena not dealt with by its predecessor.
Background
The 1970s saw a revolution in the approach to conditioning as a result of experimental studies showing that contiguity (the concurrent presentation of CS and US) was not enough to ensure the formation of an association. A critical phenomenon was that known as
blocking - the observation that preliminary training in which one CS (X) is paired with the US will block conditioning to CS A when the AX compound is paired with the US. Stimulus A and the US co-occur, but A fails to gain associative strength. Latent inhibition provides another example. In this procedure the event to be used as the CS is repeatedly presented alone, prior to CS-US pairings. These pairings are, at least initially, ineffective in endowing the CS with associative strength. The model explains these, and related, phenomena in terms of learned changes in CS associability. The model
According to the model, the change (\(\Delta\)) in the associative strength (symbolized V) of a CS as result of a CS-US pairing is governed by the following equation\[\Delta V = S.\alpha.\lambda\] (Eq. 1)
where S is determined by the intensity of the CS and \(\lambda\) by the intensity of the US. The parameter \(\alpha\) represents the associability of the CS and is assumed to be high for a novel CS.
The associability parameter is modified by experience according to the following equation\[\alpha_n = |\lambda - \Sigma V|_{n-1}\] (Eq. 2)
where \(\Sigma V\) represents the sum of the associative strengths of all stimuli present on trial n-1. That is, the value of \(\alpha\) on trial n is set by the absolute value of the discrepancy between \(\lambda\) and summed associative strength experienced on the preceding trial.
In simple conditioning, in which a CS is reliably paired with a US, V increases trial by trial and \(\alpha\) declines, approaching zero as asymptote is reached.
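The two equations can be sketched as a simple update loop for the single-CS case, in which \(\Sigma V\) is just V (the parameter values S, \(\lambda\) and the starting associability below are illustrative choices, not prescribed by the model):

```python
# Minimal sketch of the Pearce-Hall update rules (Eqs. 1 and 2).
# S, lam and alpha0 are illustrative values, not from the article.
def pearce_hall(trials, S=0.5, lam=1.0, alpha0=1.0):
    V, alpha = 0.0, alpha0
    history = []
    for _ in range(trials):
        V += S * alpha * lam      # Eq. 1: delta-V = S * alpha * lambda
        alpha = abs(lam - V)      # Eq. 2: alpha on trial n set by trial n-1
        history.append((round(V, 4), round(alpha, 4)))
    return history

print(pearce_hall(3))   # [(0.5, 0.5), (0.75, 0.25), (0.875, 0.125)]
```

On this schedule V rises toward \(\lambda\) trial by trial while alpha declines toward zero, as described above.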
Applications
1) Blocking. In the blocking procedure the value of V for CS X rises to asymptote (\(\lambda\)) during the first phase of training. The value of \(\alpha\) for the added CS (A) is high on the first compound (AX) trial and A acquires strength on this trial (Eq. 1). But the presence of the trained CS X means that \(\alpha\) for A falls to zero (Eq. 2) as a result of this trial, and no further acquisition occurs despite continued pairing of AX with the US.
2) Latent inhibition. For a stimulus presented alone, Eq. 2 implies that its \(\alpha\)-value will fall to zero, thus precluding the acquisition of associative strength when the CS is first paired with a US. (On subsequent trials the discrepancy between \(\lambda\) and V will restore the value of \(\alpha\), and conditioning will start to occur.)
3) Latent inhibition during conditioning. Eq. 2 implies that the \(\alpha\)-value of a CS will fall to zero whenever it is accompanied by a consistent consequence (or US). It follows that latent inhibition (taken to be the decline in \(\alpha\)) will occur during the standard conditioning procedure. This unique prediction of the model has been confirmed by studies showing that a CS that has been a reliable predictor of a consequence will be learned about only slowly when it is subsequently employed as a CS signalling some other consequence (Hall & Pearce, 1979).
4) Effect of inconsistency. When the consequence of an event changes from one presentation to the next, the discrepancy between \(\lambda\) and V of Eq. 2 is maintained, and with it the value of \(\alpha\). Experimental study (e.g., Swan & Pearce, 1988) confirms that a CS treated in this way is learned about readily when it is subsequently subjected to an orthodox conditioning procedure.
Limitations and developments
1) In its original form the model made the \(\alpha\)-value of a CS on trial n entirely dependent on the state of affairs on trial n-1. In fact, changes in associability occur more gradually, and Eq. 2 can be modified to allow the value of \(\alpha\) on trial n to be determined by a weighted average of its values on a run of preceding trials. The basic principle remains as described.
2) Latent inhibition has been found to be a more complex phenomenon than envisaged by the model: evidence has accumulated showing that the retardation of learning produced by stimulus pre-exposure depends not solely on the decline in \(\alpha\) but also on associative learning that goes on during pre-exposure. The model has been extended to allow the possibility that (for example) pre-exposure allows the gradual formation of a CS-no event association that contributes to the effect. The decline in \(\alpha\) continues to play a role (being a consequence of the formation of the association).
3) The model was devised to accommodate evidence indicating that learning about the added CS occurred normally on the first trial of the blocking procedure. Subsequent research has shown that this is not the case - that some blocking occurs even on that trial. To deal with this finding it is necessary to assume that the reduced learning is a consequence of a loss of effectiveness by the US (the central proposition of the Rescorla-Wagner model). Adding this feature detracts from the purity of the original model, but it can be done without damage to the (parallel) mechanisms proposed for changing CS effectiveness (and indeed the notion that the effectiveness of a US might be subject to change by experience was an intrinsic part of the original model's account of inhibitory learning).
4) It has long been thought that attention might increase to an event that is a good predictor of its consequences (an effect known as
acquired distinctiveness). The model explicitly denies this, in that associability is supposed to decline in such circumstances, being restored only when unexpected consequences occur. Evidence taken to demonstrate the acquired distinctiveness effect must, therefore be explained in other terms. One possibility is to distinguish two forms of attention, one concerned with learning, the other with performance. The former (the associability parameter of the model) will decline when the consequences of a CS are certain and no further learning is required; but performance needs to be controlled by just such CSs. This second form of attention should increase for predictive CSs and its effects on performance could be responsible for the acquired distinctiveness phenomenon. References
Hall G. and Pearce J.M. (1979) Latent inhibition of a CS during CS-US pairings. Journal of Experimental Psychology: Animal Behavior Processes, 5:31-42.
Pearce J.M. and Hall G. (1980) A model for Pavlovian learning: Variations in the effectiveness of conditioned but not of unconditioned stimuli. Psychological Review, 87:532-552.
Swan J.A. and Pearce J.M. (1988) The orienting response as an index of stimulus associability in rats. Journal of Experimental Psychology: Animal Behavior Processes, 14:292-301.
Internal references
Peter Jonas and Gyorgy Buzsaki (2007) Neural inhibition. Scholarpedia, 2(9):3286.
Florentin Woergoetter and Bernd Porr (2008) Reinforcement learning. Scholarpedia, 3(3):1448.
Robert Rescorla (2008) Rescorla-Wagner model. Scholarpedia, 3(3):2237.
Recommended reading
Hall G. (1991) Perceptual and associative learning. Clarendon Press.
Pearce J.M and Bouton M.E. (2001) Theories of associative learning in animals. Annual Review of Psychology, 52:111-139.
Le Pelley M.E. (2004) The role of associative history in models of associative learning: A selective review and a hybrid model. Quarterly Journal of Experimental Psychology, 57B:193-243.
D&D 5E has an "advantage" concept where instead of rolling 1d20, you roll 2d20 and take the higher. Likewise, disadvantage means rolling 2d20 and taking the lower.
How does this affect the expected average outcome of the roll?
Role-playing Games Stack Exchange is a question and answer site for gamemasters and players of tabletop, paper-and-pencil role-playing games. It only takes a minute to sign up.Sign up to join this community
All this does is linearly adjust the normally flat 5% probability of each number occurring: under advantage the probability of each outcome increases linearly with the number rolled, and under disadvantage it decreases. See this AnyDice function set, which yields the following:
Since the probability of rolling any given number is a linear function, we can use linear regression (via Wolfram Alpha) on our sample data from AnyDice to solve for it:
probability of x = 0.005x - 0.0025. Multiply by 100, and 0.5x - 0.25 is your percent chance of rolling any particular number x (with advantage).
Additionally, what you're likely looking for is the probability that
at least a particular number will be rolled, using either advantage or disadvantage. AnyDice, again, is king:
Data:
Advantage          Disadvantage
 #      %           #      %
 1   100            1   100
 2    99.75         2    90.25
 3    99            3    81
 4    97.75         4    72.25
 5    96            5    64
 6    93.75         6    56.25
 7    91            7    49
 8    87.75         8    42.25
 9    84            9    36
10    79.75        10    30.25
11    75           11    25
12    69.75        12    20.25
13    64           13    16
14    57.75        14    12.25
15    51           15     9
16    43.75        16     6.25
17    36           17     4
18    27.75        18     2.25
19    19           19     1
20     9.75        20     0.25
The math is straightforward
With advantage you are looking for the best of two results. To figure out your odds you need to multiply the chances of FAILURE together to find the new chance of failure. For example, if you need 11+ to hit, rolling two dice and taking the best means that instead of a 50% chance of failing you have only a 25% chance of failing (.5 times .5).

For disadvantage, where you take the worst of the two rolls, you need to multiply the chances of SUCCESS to find the new odds. For example, if you need 11+ to hit, your chance of success drops from 50% to 25% (.5 times .5).

Advantage, 16+ to hit: goes from a 25% chance of success to a 43.75% chance of success (failure: .75 times .75 = .5625).

Disadvantage, 16+ to hit: goes from a 25% chance of success to a 6.25% chance of success (.25 times .25).

The general rule of thumb is that in the mid range of the d20 (success on 9+ to 12+), advantage grants roughly the equivalent of a +5 bonus and disadvantage a -5 penalty. The change in odds tapers off as your chance of success approaches 0 or 1. For example, with advantage on 19+, your chance of failure goes from 90% to 81%, not quite a +2 bonus on a d20.

An interesting property of the system is that there is always a chance of success and always a chance of failure, unlike modifier systems where enough modifiers can mean automatic success or automatic failure (unless you treat 20 as an automatic success and 1 as an automatic failure).

A useful application of knowing these odds is that you can convert advantage to a straight bonus when rolling for a large number of NPCs. For a bunch of goblins with advantage from surprise that need 13+ to hit the players, you can just apply a +4 (or +5 if you round up) bonus instead of rolling a second die. This is because they have a 60% chance of failure on 13+, and .6 times .6 yields .36, a drop of 24%: not quite a +5 bonus on a d20.
The mean result goes from 10.5 to 7.175 for disadvantage and to 13.825 for advantage. The odds go from a flat 5% for each of 1 through 20 to (disadvantage results shown; reverse the first column for advantage results):
 1   39   9.75%
 2   37   9.25%
 3   35   8.75%
 4   33   8.25%
 5   31   7.75%
 6   29   7.25%
 7   27   6.75%
 8   25   6.25%
 9   23   5.75%
10   21   5.25%
11   19   4.75%
12   17   4.25%
13   15   3.75%
14   13   3.25%
15   11   2.75%
16    9   2.25%
17    7   1.75%
18    5   1.25%
19    3   0.75%
20    1   0.25%
(Middle column is how many of the 400 combinations of two numbers from 1-20 yield the result given in the first column.)
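These counts and means can be reproduced by enumerating the 400 ordered pairs directly (a small sketch I've added, not from the original answer):

```python
from itertools import product

# All 400 ordered pairs of two d20 rolls.
pairs = list(product(range(1, 21), repeat=2))

adv = [max(p) for p in pairs]   # advantage keeps the higher die
dis = [min(p) for p in pairs]   # disadvantage keeps the lower die

print(sum(adv) / 400)                      # 13.825
print(sum(dis) / 400)                      # 7.175
print(dis.count(1), dis.count(1) / 400)    # 39 of 400 pairs, i.e. 9.75%
```

Reversing the disadvantage counts gives the advantage counts, which is why the two means are symmetric about 10.5.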
I just wanted to add a more generalized answer to this question that will give you a formula for computing your odd of success with advantage and disadvantage rather than looking up the value in a table. I am going to do my best to make this clear to anyone with any math background, so let me know in the comments if any of the steps don't make sense.
With advantage when you need to roll at least \$n\$ to succeed on your check (i.e. check - mod = \$n\$), you succeed if any one of your two dice rolls a value of \$n\$ or greater. Conversely, you fail when both of your two dice roll a value of \$n-1\$ or less. Since these are the only two options, you succeed or you fail, the probability of one of these two things happening is \$1\$, so we can say:
$$ P(success) + P(failure) = 1 $$
Where \$P(x)\$ indicates the probability of event \$x\$ occurring. We can rearrange this to get:
$$ P(success) = 1 - P(failure) $$
So now we know that we can find the value we want using the probability of failure, which we previously defined as:
$$ P(failure) = P(\text{both dice }\leq n-1) $$ the probability that both dice roll a value of \$n-1\$ or less. For one die, we know that there are \$n-1\$ ways to roll \$n-1\$ or less (e.g. if \$n-1 = 5\$ you could roll \$1, 2, 3, 4, \text{or }5\$, so there are \$5\$ possible ways to do it). There are \$20\$ total possible outcomes for the die, so the probability of one die rolling \$n-1\$ or less is the number of ways to roll \$n-1\$ or less divided by the total number of outcomes, or:
$$ P(\text{one die } \leq n-1) = \frac{n-1}{20} $$
Since both dice are the same, their probability of rolling \$n-1\$ or less is the same, so we know the probabilities for both dice. The two dice rolls are independent of one another, meaning that the number you roll on one die doesn't affect the number you roll on the other one. In other words, if you roll a 5 on the first die, the odds of rolling a 7 on the other one don't change. When two events are independent, we can find the probability of
both events happening by multiplying their probabilities. In other words:
$$ P(\text{both dice }\leq n-1) = P(\text{one die }\leq n-1) \times P(\text{one die }\leq n-1)\\ P(\text{both dice }\leq n-1) = \frac{n-1}{20} \times \frac{n-1}{20}\\ P(\text{both dice }\leq n-1) = \Big( \frac{n-1}{20}\Big)^2 $$
Substituting this into our original equation we get:
$$ P(success) = 1 - \Big( \frac{n-1}{20}\Big)^2 $$
Now let's define what it means to succeed with disadvantage in the same way we defined what it meant to succeed with advantage. For disadvantage, where you need to roll at least \$n\$ to succeed, both dice must roll a value of \$n\$ or greater. In other words, if we need to roll at least an \$18\$ to succeed, both dice must roll either \$18, 19, \text{or } 20\$. The total number of ways to roll at least \$n\$ on a 20-sided die is:
$$ \{\text{# of ways to roll }\geq n\} = \{\text{total # of ways to roll}\} - \{\text{# of ways to roll }\leq n-1\}\\ \{\text{# of ways to roll }\geq n\} = 20 - (n-1) = 21 - n $$
We can create a probability from this by dividing by the total number of ways to roll the die giving us:
$$ P(\text{one die }\geq n) = \frac{21 - n}{20} $$
As before, the dice rolls are independent, so the probability of both dice being greater than or equal to \$n\$ is:
$$ P(success) = \Big( \frac{21-n}{20}\Big)^2 $$
Since we worked through the math, we can also see how to easily change this formula to get new probabilities. For example, if we make a house rule of "super advantage" where you roll 3 dice instead of 2, we simply multiply our \$P(failure)\$ by one more factor of \$\frac{n-1}{20}\$, changing the \$^2\$ to \$^3\$. We can therefore generalize the formula to be:
$$ P(success) = 1 - \Big( \frac{n-1}{20}\Big)^m $$
Where \$m\$ is the number of dice. Similarly, the probabilities for "super disadvantage" would be:
$$ P(success) = \Big( \frac{21-n}{20}\Big)^m $$
Going further, we could also sub out the \$20\$ in the denominator to look at the odds for other dice. For example, you are a GM and, after character creation, one player wants to re-roll their stats. They say they rolled all 1s and 2s on their 4d6 for a stat, and they feel this is so unlikely that it will unbalance the game for their character. Let's help the GM figure out if the player is right or not. In other words, we want to know \$P(\text{all dice }\leq 2)\$. This is the same as our "failure" condition for advantage, except with 4 six-sided dice instead of 2 twenty-sided dice. So we can sub the 2 out for 4 and the 20 out for 6 and get:
$$ P(\text{all dice }\leq \text{max roll}) = \Big( \frac{\text{max roll}}{\text{# of sides}}\Big)^\text{# of dice}\\ P(\text{all dice }\leq 2) = \Big( \frac{2}{6}\Big)^4 = 0.01234 $$
So there is a 1.234% chance of this happening (i.e. 1 in 81 stats rolled up will be this low). Since characters have to roll 6 stats per game, the DM decides this isn't actually as unlikely as the player thinks and tells them to keep the stat block.
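The derived formulas translate directly into a pair of helper functions (a sketch; the function names are mine):

```python
def p_success_advantage(n, dice=2, sides=20):
    """P(best of `dice` rolls >= n): 1 - ((n-1)/sides)^dice."""
    return 1 - ((n - 1) / sides) ** dice

def p_success_disadvantage(n, dice=2, sides=20):
    """P(worst of `dice` rolls >= n): ((sides+1-n)/sides)^dice."""
    return ((sides + 1 - n) / sides) ** dice

print(p_success_advantage(11))     # 0.75
print(p_success_disadvantage(11))  # 0.25

# The 4d6 stat example: P(all four d6 rolls <= 2) is the failure term
# of "advantage" with n = 3, dice = 4, sides = 6.
print(1 - p_success_advantage(3, dice=4, sides=6))   # ~0.0123
```

The same two functions cover super advantage and super disadvantage by changing the `dice` argument, and other dice by changing `sides`.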
The answers provided effectively cover the probability for every result, 1 through 20, for advantage/disadvantage with 2d20. For completeness, the probabilities follow:
When rolling 2d20 and keeping the maximum value from each of the 400 permutations, the expected value is 13.825. By contrast, the expected value when you keep the minimum value is 7.175. The departure from the average of a single d20 (10.5) is ±3.325, and yes, the two average values sum to 21.
Unaddressed is the inherent benefit, or detriment, on the outcome expected rolling 2d20. To minimize duplication of effort, the following analysis assumes the roll is performed with advantage.
By definition, rolling with advantage is the act of rolling 2d20, and taking the higher value; the lower die, or one die if they have the same value, is disfavored in comparison to the other. The order in which the dice are rolled is immaterial. Instead, focus on the values they are capable of producing, e.g. of the 400 permutations, there are 39 opportunities to receive a 20 as the favored result. In rolling two dice, benefit for rolling a 1 and a 20, or a 20 and a 1, is still 19. The 1 is the disfavored value, and discounted by the procedure for rolling with advantage.
However, it would be a statistical error to assume that one die will always be disfavored and focus on the cases where the value of the other die is greater or equal to the value of the disfavored die. Doing so negates 190 cases where a benefit would still be gained from rolling 2d20 instead of a single die. This is because for each result where the die values aren't equal, there are 2 cases in which it can occur. In total, there are 20 cases where the values are equal, 190 where A < B, and 190 where A > B.
To correctly analyze the benefit of rolling 2d20, each of the 400 cases must be examined. For each resultant pair, the benefit demonstrated by the roll is the absolute difference between the dice; e.g. if the values of the two dice are equal, the benefit is zero. Through this, the disfavored value is presumed to be the result we would have gotten rolling 1 die, while the difference between it and the favored die is the benefit gained. The average benefit over all 400 cases is 6.650.
The PHB provides a shortcut for applying advantage via a +5 modifier to supplant the roll. Coincidentally, 6.650 - (6.650 - 3.325)/2 = 4.9875 ≈ 5.
I actually made an IPython notebook for this:
To start, I simply rolled a random d20 1000 times.
The average 1d20 result for this series was 10.
For this graph, I rolled 2d20 1000 times and threw out the lower result.
The average result from an advantaged 2d20 roll was 13.
The last graph is a 2d20 1000 times disadvantaged roll.
The average result from the disadvantaged roll was 7.
So you can see here that there is a general ±3 bias for advantaged or disadvantaged rolls.
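A Monte Carlo version of the same experiment is easy to sketch; the seed and roll count below are arbitrary choices for reproducibility, not taken from the notebook:

```python
import random

random.seed(0)
N = 100_000

def d20():
    return random.randint(1, 20)

# Keep the higher of two d20s (advantage), or the lower (disadvantage)
adv    = [max(d20(), d20()) for _ in range(N)]
disadv = [min(d20(), d20()) for _ in range(N)]

print(sum(adv) / N)      # ~13.8
print(sum(disadv) / N)   # ~7.2
```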
This was not intuitive to me at first, so I created an Excel spreadsheet to help me see how it worked with simulated rolls.
You can change the number of rolls and change the type of die (d20, d12, d33--knock yourself out), and watch how the rolls change.
Find the spreadsheet here. Enjoy!
Effectively the trick is to find the percentage chance to hit under advantage. Subtract your percentage to hit normally. Divide by 0.05 (5%). Round down. This will provide the effective bonus that advantage provides.
Why divide by 5%? Because the standard d20 has 20 outcomes and 100/20 = 5. So if you want to know how many effective die results the bonus is worth, you divide by 5. (Interesting fact: the die size would affect your results in different games, but since the standard is the d20 that's a moot point.)
In case you're wondering, I started from this article and computed the effective bonus advantage provides. As you can see, it provides more of a bonus the closer you are to the middle of the to-hit range, and less of a bonus at the extremes. In game terms, if you were really good, or really terrible, at hitting the AC/DC before, then advantage can't really help you. However, if you're just an average joe, you get the most benefit from it.
In other words, the game rewards you for having advantage. However, the effective reward is not a flat bonus; instead, it is bell-curved around your original chance of success. Disadvantage works the same way, except the reward is instead a penalty.
Roll needed | Normal | Advantage | Adv − Normal | Effective bonus
------------|--------|-----------|--------------|----------------
20          | 0.050  | 0.098     | 0.048        | +0
19          | 0.100  | 0.191     | 0.091        | +1
18          | 0.150  | 0.278     | 0.128        | +2
17          | 0.200  | 0.359     | 0.159        | +3
16          | 0.250  | 0.437     | 0.187        | +3
15          | 0.300  | 0.510     | 0.210        | +4
14          | 0.350  | 0.576     | 0.226        | +4
13          | 0.400  | 0.639     | 0.239        | +4
12          | 0.450  | 0.698     | 0.248        | +4
11          | 0.500  | 0.751     | 0.251        | +5
10          | 0.550  | 0.798     | 0.248        | +4
9           | 0.600  | 0.840     | 0.240        | +4
8           | 0.650  | 0.877     | 0.227        | +4
7           | 0.700  | 0.910     | 0.210        | +4
6           | 0.750  | 0.938     | 0.188        | +3
5           | 0.800  | 0.960     | 0.160        | +3
4           | 0.850  | 0.978     | 0.128        | +2
3           | 0.900  | 0.990     | 0.090        | +1
2           | 0.950  | 0.998     | 0.048        | +0
1           | 1.000  | 1.000     | 0.000        | +0
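The table can be regenerated from the exact probabilities. A sketch (my own code, not the article's): note the exact advantage probabilities differ in the third decimal from a few table entries (e.g. 0.190 vs 0.191), which look like rounding artifacts, but the effective bonuses come out the same.

```python
import math

def effective_bonus(need):
    """Effective flat bonus granted by advantage when you need `need` or higher on a d20."""
    normal = (21 - need) / 20                # chance to hit with a single d20
    adv = 1 - ((need - 1) / 20) ** 2         # chance that at least one of two d20s hits
    # Divide the gain by 5% and round down; 1e-9 guards against float error
    return math.floor((adv - normal) / 0.05 + 1e-9)

for need in range(20, 0, -1):
    print(need, effective_bonus(need))
```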
For any single target number, the chance of rolling it on a d20 is 1 in 20.
Rolling twice makes that roughly 2 in 20, i.e. about 1 in 10 (strictly, 1 - (19/20)^2 ≈ 9.75%), instead of 1 in 20 as with a regular roll.
This is the same for both advantage and disadvantage, the only difference being that you take the lower number instead of the higher one.
Without overcomplicating this, I'll keep it short with an example: you have a better chance of finding a Willy Wonka golden ticket if you eat 2 bars rather than 1 bar, and the players know this when they roll with advantage or disadvantage. Which is why they like advantage and don't like disadvantage.
Hope that helps clear up any confusion, without going scientific on you. |
In fact, even a snake does not have to look beyond its nose to find a nice example (I post it here so that the original question does not get too long, discouraging potential readers). Take $X$ as the complex-valued sequences with the usual vector-space structure and with norm\begin{equation}\left\|x\right\|_X^2=|x_1|^2+|x_2|^2+|x_3|^2+...\end{equation}(i.e. $X=l^2(\mathbb{N})$) and $Y$ again those sequences with the more relaxed norm\begin{equation}\left\|x\right\|_Y^2=\frac{1}{2}\left(|x_1|^2+\frac{1}{2!}|x_2|^2+\frac{1}{3!}|x_3|^2+...\right)\end{equation}For the role of closed operator $A:D(A)\subset Y \to Y$ we take the infinite diagonal matrix\begin{equation}A_{kl}=\left(k+i\sqrt{2\cdot k!-k^2}\right)\delta_{kl}.\end{equation}This seemingly "arbitrary" operator satisfies ($\forall x \in X$)\begin{equation}\left\|x\right\|_X=\left\|Ax\right\|_Y,\end{equation}which establishes that $D(A)=X$ (as a subset of $Y$, of course).
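To see why this identity holds (a worked check, not part of the original post): since $|A_{kk}|^2 = k^2 + (2\cdot k! - k^2) = 2\cdot k!$, the factorial weights in the $Y$-norm cancel exactly:

```latex
\left\|Ax\right\|_Y^2
  = \frac{1}{2}\sum_{k=1}^{\infty} \frac{1}{k!}\,|A_{kk}|^2\,|x_k|^2
  = \frac{1}{2}\sum_{k=1}^{\infty} \frac{2\cdot k!}{k!}\,|x_k|^2
  = \sum_{k=1}^{\infty} |x_k|^2
  = \left\|x\right\|_X^2 .
```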
Next, let $Z\subset X$ be the sequences with finite support. $Z$ is dense in $X$ and $Y$ for their respective norms. It is straightforward to show that the sequence\begin{equation}u=(u_1,...,u_n,0,0,...)\end{equation}equals $u(0)$ for the following solution of $u'=Au$, which is smooth in both the $X$- and $Y$-norms and unique for this initial condition:\begin{equation}u(t)=(u_1\exp(A_{11} t),...,u_n \exp(A_{nn}t),0,0,...) \quad (\in Z)\end{equation}It is not difficult to see that there is no uniform bound ($\forall u \in Y$ or $\in X$) on the quantities $\frac{\left\|u(t)\right\|_Y}{\left\|u\right\|_Y}$ or $\frac{\left\|u(t)\right\|_X}{\left\|u\right\|_X}$.
Assume however that $\left\|u\right\|_X=1$, which means in particular $|u_k|\leq 1$ $\forall k >0$. Then\begin{equation}\left\|u(t)\right\|_Y^2 = \frac{1}{2}\sum_{k=1}^{\infty} \frac{1}{k!}|u_k\exp(A_{kk}t)|^2 \leq \frac{1}{2}\sum_{k=1}^{\infty} \frac{1}{k!}\exp(2kt)=\frac{1}{2}\left(\exp(\exp(2t))-1\right)\leq\frac{1}{2}\exp(\exp(2t))\end{equation}
So far, I've produced all prerequisites stated in my question above, which I will show to be enough to produce a bilateral semigroup. That is, I've come up with a $u'=Au$-system which satisfies...
Definition Let $(Y,\left\|\cdot\right\|_Y)$ be a Banach space. A closed linear operator $A:D(A)\subset Y \to Y$ is called a "Cauchy operator" if the set $Z\subset D(A)$, where $u\in Z$ if and only if a unique solution to $u'(t)=Au(t)$ ($\forall t \geq 0$) exists with initial condition $u$, is dense in $Y$. From now on we define semigroup operators $R(t):Z \to D(A)$ through $u(t)=R(t)u$.
We call $A$ a "
regular Cauchy-operator" if there exists moreover a vectorspace $X$ with $D(A) \subset X \subset Y$ with an accompanying norm $\left\|\right\|_X$ such that $(X,\left\|\right\|_X)$ is a Banach space and such that $Z$ is also dense in $(X,\left\|\right\|_X)$ and such that $\sup_{u \in Z}\frac{\left\|R(t)u\right\|_Y}{\left\|u\right\|_X} < \infty$ $\forall t >0$. $A$ is called a " smooth Cauchy-operator" if for every $t^* \in [0,\infty)$, there exists $\epsilon>0$ such that \begin{equation}\sup_{u \in Z, t\in [t^*-\epsilon,t^*+\epsilon]}\frac{\left\|R(t)u\right\|_Y}{\left\|u\right\|_X} < \infty\end{equation}
Theorem Let $A:D(A)\subset Y \to Y$ be a Cauchy operator. Then $Z$ is a vector space and the operators $R(t):Z \to D(A)$ are linear. Their range is contained in $Z \subset D(A)$, so we may redefine them as $R(t):Z\subset Y \to Z \subset Y$, and $\forall s,t \geq 0$ we have\begin{equation}R(s+t)u=R(s)R(t)u.\end{equation} proof: exercise.
Theorem If $A:D(A)\subset Y \to Y$ is a regular Cauchy operator, then the bounded operators $R(t):Z\subset X \to Z \subset Y$ can be continuously extended to bounded operators\begin{equation}S(t):X \to Y.\end{equation}If at some point $t^*\in [0,\infty)$ and for some $\epsilon>0$ we have that $R(t)$ is uniformly bounded on $[t^*-\epsilon,t^*+\epsilon]$, then for all $u\in X$ the map\begin{equation}\varphi_u:[0,\infty)\to Y: t \mapsto S(t)u\end{equation}is continuous at $t^*$. As a consequence these maps are continuous if $A$ is a smooth Cauchy operator. In this case the operators $S(t)$ constitute a bilateral one-parameter semigroup $G=\left\{S(t)\right\}_{t\geq 0}$. Note: I've edited the definition of bilateral semigroup slightly.
proof The first part of the theorem is straightforward and left to the reader. Keep in mind that the continuity of the $\varphi_u$ maps implies, by the uniform boundedness principle, boundedness on compacts: for all compacts $C \subset [0,\infty)$ we have\begin{equation}\sup_{t\in C} \left\|S(t)\right\| < \infty.\end{equation}For the second part of the theorem, we first have to show that elements of the form $\int_a^b S(s)u\,\text{d}s=\int_0^{\infty} \chi_{[a,b]}S(s)u\,\text{d}s$ or $\int_0^{\infty} f(s)S(s)u\,\text{d}s$ (where $f:[0,\infty)\to \mathbb{C}$ is differentiable) are in $X$. Secondly, we have to show that "$S(t) \int S(s)... = \int S(s+t)...$". (Important note: all the relevant Bochner integrals are of course defined in the Banach space $Y$!) In this proof I'll only treat the second case, while the first is similar.
Take $(u_n)_n \subset Z$ such that $\left\|u_n-u\right\|_X \to 0$. By the boundedness of the operators $S(t)$, we have\begin{equation}\left\|\int_0^{\infty} f(s)S(s)u_n\,\text{d}s - \int_0^{\infty} f(s)S(s)u\,\text{d}s\right\|_Y \to 0\end{equation}(exercise). Moreover, we have \begin{equation}\begin{split}&\left\|A\int_0^{\infty} f(s)S(s)u_n\,\text{d}s + f(0)u+\int_0^{\infty} f'(s)S(s)u\,\text{d}s\right\|_Y \\ = {}&\left\|\int_0^{\infty} f(s)AS(s)u_n\,\text{d}s + f(0)u+\int_0^{\infty} f'(s)S(s)u\,\text{d}s\right\|_Y \\ = {}&\left\|\int_0^{\infty} f(s)(S(s)u_n)'\,\text{d}s + f(0)u+\int_0^{\infty} f'(s)S(s)u\,\text{d}s\right\|_Y \\ = {}&\left\|-f(0)u_n - \int_0^{\infty} f'(s)S(s)u_n\,\text{d}s+ f(0)u+\int_0^{\infty} f'(s)S(s)u\,\text{d}s\right\|_Y \to 0\end{split}\end{equation}So the closedness of $A:D(A) \subset Y \to Y$ implies that $v:=\int_0^{\infty} f(s)S(s)u\,\text{d}s \in D(A)\subset X$ and $Av =-f(0)u-\int_0^{\infty} f'(s)S(s)u\,\text{d}s$. To finish, one takes the following steps:

* $\eta:[0,\infty)\to Y: t \mapsto \int_0^{\infty} f(s)S(s+t)u_n\,\text{d}s=\int_t^{\infty} f(s-t)S(s)u_n\,\text{d}s$ is differentiable and $\eta'(t)=A\eta(t)$. Hence $\eta(t) \in Z$ for all $t\geq 0$.
* But then\begin{equation}\nu:[0,\infty)\to Y: h \mapsto S(t-h)\int_0^{\infty} f(s)S(s+h)u_n\,\text{d}s = S(t-h)\eta(h)\end{equation}is also differentiable and $\nu'(h)=0$. Hence\begin{equation}S(t)\int_0^{\infty} f(s)S(s)u_n\,\text{d}s = \nu(0)=\nu(t)=\int_0^{\infty} f(s)S(s+t)u_n\,\text{d}s.\end{equation}
This question is somehow related to my previous MO question Explicit description of a subgroup of the braid group $\mathsf{B}_2(C_2)$; for the reader's convenience, let me write down the relevant set-up again.
Let $C_2$ be a smooth curve of genus $2$ and $X:=\mathrm{Sym}^2(C_2)$ its second symmetric product. If $\delta \subset X$ is the diagonal, then the topological fundamental group $\pi_1(X-\delta)$ is isomorphic to the braid group $\mathsf{B}_2(C_2)$ on two strands on $C_2$.
Such a group is generated by five elements $a_1, \, a_2, \, b_1, \, b_2, \, \sigma$ subject to the following set of relations:
\begin{equation*} \begin{split} (R2) \quad & \sigma^{-1} a_1 \sigma^{-1} a_1= a_1 \sigma^{-1} a_1 \sigma^{-1} \\ & \sigma^{-1} a_2 \sigma^{-1} a_2= a_2 \sigma^{-1} a_2 \sigma^{-1} \\ & \sigma^{-1} b_1 \sigma^{-1} b_1 = b_1 \sigma^{-1} b_1 \sigma^{-1} \\ & \sigma^{-1} b_2 \sigma^{-1} b_2 = b_2 \sigma^{-1} b_2 \sigma^{-1}\\ & \\ (R3) \quad & \sigma^{-1} a_1 \sigma a_2 = a_2 \sigma^{-1} a_1 \sigma \\ & \sigma^{-1} b_1 \sigma b_2 = b_2 \sigma^{-1} b_1 \sigma \\ & \sigma^{-1} a_1 \sigma b_2 = b_2 \sigma^{-1} a_1 \sigma \\ & \sigma^{-1} b_1 \sigma a_2 = a_2 \sigma^{-1} b_1 \sigma \\ & \\ (R4) \quad & \sigma^{-1} a_1 \sigma^{-1} b_1 = b_1 \sigma^{-1} a_1 \sigma \\ & \sigma^{-1} a_2 \sigma^{-1} b_2 = b_2 \sigma^{-1} a_2 \sigma \\ & \\ (TR) \quad & [a_1, \, b_1^{-1}] [a_2, \, b_2^{-1}]= \sigma^2. \end{split} \end{equation*} The geometric interpretation for the above generators of $\mathsf {B}_2(C_2)$ is the following. The $a_i$ and the $b_i$ are the braids coming from the representation of the topological surface associated with $C_2$ as a polygon of $8$ sides with the standard identification of the edges, whereas $\sigma$ is the classical braid generator on the disk. In terms of the isomorphism with $\pi_1(X-\delta)$, the element $\sigma$ corresponds to the homotopy class in $\textrm{Sym}^2(C_2)-\delta$ of a topological loop that "winds once around $\delta$". For more details see P. Bellingeri's paper
On presentations of surface braid groups, Journal of Algebra 274 (2004), 543-563.
For some research problems related to algebraic surfaces, I would like to construct a group epimorphism $$\varphi \colon \mathsf {B}_2(C_2) \longrightarrow G, \quad (\ast)$$ where $G$ is a finite group. I also want the element $s :=\varphi(\sigma)$ to be different from the identity of $G$.
Making some experiments with GAP4, I discovered that, up to order $|G|=64$, if $\varphi$ exists then the order of $s$ is at most $2$. This was a surprise, since I do not see anything in the presentation of $\mathsf {B}_2(C_2)$ forcing this behaviour. So I wonder whether this happens just because $64$ is too small or whether, instead, I am missing some conceptual point here, maybe some basic result of combinatorial group theory I am not aware of.
Of course, the time required for the machine computations grows rapidly with the order of $G$, so it is not possible to systematically check all cases when that order becomes too large (already, $64$ takes a lot of time). So, let me ask the following
Q. Is it possible to construct a group epimorphism of type $(*)$ such that the order of $s:=\varphi(\sigma)$ is at least $3$? If yes, what is the minimum order of $G$ for which this happens? If not, why? Note. Here is the GAP4 script I used to define $\mathsf {B}_2(C_2)$:
F:=FreeGroup("a1", "b1", "a2", "b2", "s");
a1:=F.1; b1:=F.2; a2:=F.3; b2:=F.4; s:=F.5;
R1 := s^(-1)*a1*s^(-1)*a1*(a1*s^(-1)*a1*s^(-1))^(-1);
R2 := s^(-1)*a2*s^(-1)*a2*(a2*s^(-1)*a2*s^(-1))^(-1);
R3 := s^(-1)*b1*s^(-1)*b1*(b1*s^(-1)*b1*s^(-1))^(-1);
R4 := s^(-1)*b2*s^(-1)*b2*(b2*s^(-1)*b2*s^(-1))^(-1);
R5 := s^(-1)*a1*s*a2*(a2*s^(-1)*a1*s)^(-1);
R6 := s^(-1)*b1*s*b2*(b2*s^(-1)*b1*s)^(-1);
R7 := s^(-1)*a1*s*b2*(b2*s^(-1)*a1*s)^(-1);
R8 := s^(-1)*b1*s*a2*(a2*s^(-1)*b1*s)^(-1);
R9 := s^(-1)*a1*s^(-1)*b1*(b1*s^(-1)*a1*s)^(-1);
R10 := s^(-1)*a2*s^(-1)*b2*(b2*s^(-1)*a2*s)^(-1);
R11 := a1*b1^(-1)*a1^(-1)*b1*a2*b2^(-1)*a2^(-1)*b2*s^(-2);
Br:=F/[R1, R2, R3, R4, R5, R6, R7, R8, R9, R10, R11];
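As a sanity check that small quotients with $s \neq 1$ do exist (consistent with the order-2 observation above), here is a quick verification, independent of GAP, that sending $a_1, b_1, a_2, b_2 \mapsto \mathrm{id}$ and $\sigma \mapsto$ a transposition defines a homomorphism onto $\mathbb{Z}/2$: every relation collapses to $\sigma^2 = 1$ or to a triviality. The permutation helpers are my own illustration, not from the question.

```python
def mul(p, q):
    """Compose permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def word(*gens):
    out = (0, 1)  # identity in S_2
    for g in gens:
        out = mul(out, g)
    return out

e = (0, 1)   # identity
t = (1, 0)   # the transposition: candidate image of sigma

a1 = b1 = a2 = b2 = e
s, si = t, inv(t)

relations = [
    # (R2)-type: s^-1 x s^-1 x = x s^-1 x s^-1
    *[(word(si, x, si, x), word(x, si, x, si)) for x in (a1, a2, b1, b2)],
    # (R3)-type: s^-1 x s y = y s^-1 x s
    *[(word(si, x, s, y), word(y, si, x, s))
      for x, y in [(a1, a2), (b1, b2), (a1, b2), (b1, a2)]],
    # (R4)-type: s^-1 x s^-1 y = y s^-1 x s
    *[(word(si, x, si, y), word(y, si, x, s)) for x, y in [(a1, b1), (a2, b2)]],
    # (TR): [a1, b1^-1][a2, b2^-1] = s^2
    (word(a1, inv(b1), inv(a1), b1, a2, inv(b2), inv(a2), b2), word(s, s)),
]

assert all(lhs == rhs for lhs, rhs in relations)
print("all 11 relations hold; sigma maps to an element of order 2")
```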