I don't think it works that way unfortunately.
Let me propose a more general approach: let $\hat{x}$, $\hat{y}$, $\hat{z}$ be the principal axes of the ring (two along diameters and one along the normal, respectively), and let $\hat{x}'$, $\hat{y}'$, $\hat{z}'$ be the rotated axes. We want to compute $I_{z'}$. Now write down the change-of-coordinates matrix, which is simply a rotation about the $\hat{y}$ axis:
$$ \Lambda = \left[ \begin{matrix}\cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{matrix} \right] $$
This matrix is such that $\vec{x}' = \Lambda \vec{x}$
Now we can easily build the inertia tensor in the normal co-ordinates, because there it is diagonal:
$$ I = \left[ \begin{matrix}\frac{1}{2}MR^2 & 0 & 0 \\ 0 & \frac{1}{2}MR^2 & 0 \\ 0 & 0 & MR^2\end{matrix}\right] $$
Now since I is a tensor, it transforms as a tensor, so in the new co-ordinates (the "prime" ones) it is given by $I'_{ij} = \Lambda_{ik}\Lambda_{jl} I_{kl} \rightarrow I' = \Lambda I \Lambda^T$. A simple calculation shows:
$$ I' = \left[ \begin{matrix}MR^2\left(\frac{\cos^2\theta}{2}+\sin^2\theta\right) & 0 & -\frac{MR^2}{2}\cos\theta\sin\theta \\ 0 & \frac{MR^2}{2} & 0 \\ -\frac{MR^2}{2}\cos\theta\sin\theta & 0 & MR^2\left(\frac{\sin^2\theta}{2}+\cos^2\theta\right)\end{matrix} \right] $$
So as you can see, when $\theta=\pi/4$ you have $I_{x'x'} = I_{z'z'} = \frac{3}{4}MR^2$.
Simple rule: the trace of the inertia tensor is invariant under a change of co-ordinates. In the normal co-ordinates it is $2MR^2$ (just the sum of the three diagonal components). In the rotated co-ordinates, since at $\theta = \pi/4$ symmetry gives $I_{x'x'}=I_{z'z'}$, and $I_{y'y'}=I_{yy}=MR^2/2$ (the $y$ axis is the rotation axis, so it doesn't change), you can impose trace invariance and get the result.
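The calculation above is easy to verify numerically; here is a quick sketch in plain Python (no libraries), with $M = R = 1$ so everything is in units of $MR^2$:

```python
import math

theta = math.pi / 4
c, s = math.cos(theta), math.sin(theta)

# Rotation about the y axis, matching the Lambda written above
Lam = [[c, 0.0, -s],
       [0.0, 1.0, 0.0],
       [s, 0.0, c]]

# Diagonal inertia tensor of the ring in its principal axes (units of M*R^2)
I = [[0.5, 0.0, 0.0],
     [0.0, 0.5, 0.0],
     [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Lam_T = [list(row) for row in zip(*Lam)]
Ip = matmul(matmul(Lam, I), Lam_T)   # I' = Lambda I Lambda^T

print(Ip[0][0], Ip[2][2])            # both 0.75 at theta = pi/4
print(sum(Ip[i][i] for i in range(3)))  # trace stays 2.0, as the simple rule says
```

At $\theta=\pi/4$ this reproduces $I_{x'x'} = I_{z'z'} = \frac{3}{4}MR^2$ and the invariant trace $2MR^2$.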
Revision as of 13:51, 11 April 2016 — Siril processing tutorial: Stacking
The final step to do with Siril is to stack the images. Go to the "Stacking" tab and indicate whether you want to stack all images, only the selected images, or the best images according to the FWHM values computed previously. Siril proposes several stacking algorithms.
Sum Stacking
This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images. Because of the lack of normalisation, this method should only be used for planetary processing.
Average Stacking With Rejection
- Percentile Clipping: a one-step rejection algorithm, ideal for small sets of data (up to 6 images).
- Sigma Clipping: an iterative algorithm that rejects pixels whose distance from the median is larger than the given bounds in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]).
- Median Sigma Clipping: the same algorithm, except that the rejected pixels are replaced by the median value of the stack.
- Winsorized Sigma Clipping: very similar to Sigma Clipping, but it uses an algorithm based on Huber's work [1][2].
- Linear Fit Clipping: an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ([math]y=ax+b[/math]) to the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations.
These algorithms are very efficient to remove satellite/plane tracks.
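To illustrate the rejection idea (this is a minimal sketch, not Siril's actual implementation), here is an iterative sigma clipping of one pixel stack in Python; `sigma_low` and `sigma_high` play the role of [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math]:

```python
import statistics

def sigma_clip(stack, sigma_low=4.0, sigma_high=3.0, max_iter=10):
    """Iteratively reject pixels farther from the median than the sigma bounds."""
    kept = list(stack)
    for _ in range(max_iter):
        if len(kept) < 3:
            break
        med = statistics.median(kept)
        sigma = statistics.stdev(kept)
        if sigma == 0:
            break
        survivors = [v for v in kept
                     if -sigma_low * sigma <= v - med <= sigma_high * sigma]
        if len(survivors) == len(kept):   # converged: nothing more to reject
            break
        kept = survivors
    return kept

# One pixel position across 12 frames, with a satellite-track outlier at 5000
stack = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98, 5000, 101]
clipped = sigma_clip(stack)
mean_value = sum(clipped) / len(clipped)   # stacked value for this pixel
```

The outlier is rejected on the first pass, and the surviving 11 values are averaged.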
Median Stacking
This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math].
Pixel Maximum Stacking
This algorithm is mainly used to construct long-exposure star-trail images. Each pixel of the image is replaced by the pixel at the same coordinates if its intensity is greater.
Pixel Minimum Stacking
This algorithm is mainly used to crop a sequence by removing black borders. Each pixel of the image is replaced by the pixel at the same coordinates if its intensity is lower.
In the case of the NGC7635 sequence, we first used the "Winsorized Sigma Clipping" algorithm in the "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=3[/math]).
The output console thus gives the following result:
22:26:06: Pixel rejection in channel #0: 0.215% - 1.401%
22:26:06: Pixel rejection in channel #1: 0.185% - 1.273%
22:26:06: Pixel rejection in channel #2: 0.133% - 1.150%
22:26:06: Integration of 12 images:
22:26:06: Normalization ............. additive + scaling
22:26:06: Pixel rejection ........... Winsorized sigma clipping
22:26:06: Rejection parameters ...... low=4.000 high=3.000
22:26:09: Saving FITS: file NGC7635.fit, 3 layer(s), 4290x2856 pixels
22:26:19: Background noise value (channel: #0): 10.013 (1.528e-04)
22:26:19: Background noise value (channel: #1): 6.755 (1.031e-04)
22:26:19: Background noise value (channel: #2): 6.621 (1.010e-04)
After that, the result is saved in the file named below the buttons and is displayed in the grey and colour windows. You can adjust the levels to see it better, or use the different display modes. In our example the file is the stack of all 12 frames.
The images above picture the result in Siril using the Auto-Stretch rendering mode. Note the improvement in signal-to-noise ratio compared with the result for a single frame in the previous step (look at the sigma value). The measured increase in SNR is [math]19.7/6.4 = 3.08[/math], close to the theoretical [math]\sqrt{12} = 3.46[/math]; you can try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math].
Now the actual image processing can start: cropping, background extraction (to remove the gradient), and other operations to enhance your image. To see the processes available in Siril, please visit this page. Here is an example of what you can get with Siril:
[1] Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley.
[2] Juan Conejero, ImageIntegration, PixInsight Tutorial.
First of all, I should point out that the standard definition of a normal subgroup is: a subgroup $N \subset G$ is normal iff $g n g^{-1} \in N$ for all $n\in N$ and $g\in G$. When I say "the" standard definition, I mean that this is how working group theorists think of normal subgroups, and this is one of two basic ways to prove that a subgroup is normal....
It is possible to usefully mention "Lie groups (and Lie algebras)" in an introductory course, if one does not give formal definitions, but, rather, examples. It is not necessary (or advisable) to "define" smooth manifolds, which seems to have considerable baggage-of-abstraction of its own. Just give important examples, noting that they do seem to have a lot ...
My favorite textbook for an undergraduate course in Abstract Algebra, Ted Shifrin's Abstract Algebra: A Geometric Approach, uses a rings-first approach. The primary pro is that students are much more familiar with examples of rings (integers, polynomials) than they are with the standard examples of groups (symmetries of simple shapes, permutations). Indeed,...
- $\mathbb Z_n$, since it is very easy to compute in and you have one of order $n$ for every $n\ge 1$.
- $S_3$, since it's the smallest non-abelian group.
- $S_4$, since $S_3$ is sometimes too small.
- $A_4$, since it is the smallest group that does not contain a subgroup of every possible order dividing its order.
- $\mathbb Z_2 \times \mathbb Z_2$, since it's ...
I strongly suspect the difficulty is not with cosets specifically, but with working with equivalence relations generally, especially when combined with objects that they have only recently become acquainted with. So apologies in advance that this answer deals more generally with equivalence relations and my opinion that they often need more coverage than ...
The way I like to approach this is as follows. After discussing subgroups, the natural question as to forming the quotient $G/H$ arises. I then proceed to look at the cosets and prove that if $gH\cdot g'H=gg'H$ is a well-defined operation, then the cosets become a group, which we call the quotient group. This is a very easy proof with basically nothing to do....
Combining colored paint is an interesting example of a non-associative operation. Define $Paint_1 * Paint_2$ to be the paint obtained by mixing the two paints in a $1:1$ ratio. It is easy to see that $$(Red * Blue) * White \neq Red * (Blue * White).$$ They are different shades of purple. I haven't tried it, but it should be easy enough to make a ...
Just supplementing Benjamin Dickman's nice answer, here is $x \mapsto x^2 - x$ in $\mathbb{Z}_{18}$ in the same style. For example, the pentagon wheel reflects the fact that $$(5+3k)^2-(5+3k) = 9k^2 + 27k + 20 = 9k(k+3) + 20 = 2 \bmod 18\;.$$
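The congruence above is easy to confirm by brute force; a quick check in Python (the modulus 18 and the residue class $5+3k$ are taken from the quoted identity):

```python
# Verify that (5+3k)^2 - (5+3k) == 2 (mod 18) for every integer k:
# 9k(k+3) is divisible by 18 since k(k+3) is always even, and 20 mod 18 = 2.
for k in range(-100, 100):
    x = 5 + 3 * k
    assert (x * x - x) % 18 == 2

# The full map x -> x^2 - x on Z_18, useful for drawing the functional graph
image = {x: (x * x - x) % 18 for x in range(18)}
print(image)
```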
Preface. Birkhoff & Mac Lane's Algebra is a brilliant book. I should probably spend some time with it again, actually. Also, I apologize for such a long response. I think too much about algebra pedagogy and textbooks. The short version is I think the book can be used for either undergraduates or graduates with some success, but I think it is less than ...
When I introduce groups I first go over a very (e.g., 15 examples) long list of particular groups. I go over each example and verify the group axioms without naming it such. Then I present the problem of studying all of these examples together, motivated by simply saving labour. Instead of proving things again and again for each case, we'd like a single ...
I TA'd a first course in abstract algebra during my senior year of undergrad. The professor wanted a computational flavor to the course, so we introduced Magma right off the bat. We wanted to allow the students to experiment with permutations without needing to actually do large computations themselves. As part of my contribution to the course, I was put ...
One of the approaches taken in some areas of mathematics (e.g., in arithmetic dynamics and considerations of preperiodic points, etc) is to create these graphs by drawing discrete points and then using arrows to show which values map to which other ones.Figuring out a "canonical" way to draw these pictures might be a bit tough (this is related in some ...
I think you are correct to worry about an overload of algebraic structures, especially if they are not well-motivated. I would strongly encourage you to keep the naming of various algebraic structures to a minimum in a first course in abstract algebra. In my view, the goal of the course is for them to see the power of abstraction in a fairly concrete ...
I have taught both groups first and a rings first course.When I was a post-doc at Rutgers University, I taught their standard introduction to modern algebra course using Hungerford's undergraduate algebra text. I was kind of annoyed at the time that I would (unless I wanted to fight the textbook) have to teach using a rings first approach. By the end of ...
Symmetry groups of basic regular polygons and polyhedra. For one thing, they are noncommutative; for another, sometimes they coincide with other "known" groups; for a third, there can be physical meaning to some concepts in group theory.And they are very "hands-on" - it can be fun to make sure students understand how to generate the symmetry group for the ...
I suggest having a look at the following book by John Stillwell: Naive Lie Theory. A fast-paced week or (if you're lucky enough... or believe it's worth it as I would) two weeks could be created by pulling the big ideas from chapters 1, 2, and 4.I would argue that this shouldn't be the first time the students are seeing groups like SO, SU, etc. Those ...
Students learn linear and quadratic equations in high school algebra. And then, if they have forgotten it, re-learn it in college, in courses called "pre-calculus" or something.Unless they specialize in mathematics at the college level, they do not learn any more.Why not? Because we have computers now, so most people do not need to solve polynomial ...
It seems to me that the first thing to do is to define the signature. It is a very important morphism, and you need to have a bunch of examples of morphisms to present to your students anyway. Only then is the alternating group really relevant, and you get all the properties you want from what you will have done with the signature. Of course, I am telling you to ...
I've done it both ways, although I do rings-first now and for the foreseeable future. I think the pros and cons have a lot to do with the audience, especially if there are a lot of pre-service mathematics teachers taking the course (like there are at my place).The main "pro" is pedagogical: Rings are more familiar objects to students and, for pre service ...
I really like the example of the game Rock-Paper-Scissors, thinking of it as a binary operation $\star$ on the set {Rock, Paper, Scissors}. The rules are:

- Rock $\star$ Paper $\mapsto$ Paper
- Rock $\star$ Scissors $\mapsto$ Rock
- Paper $\star$ Scissors $\mapsto$ Scissors

The operation is certainly commutative, but it is not associative. For example, $$(Rock \...
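A sketch of this operation in Python (the dictionary encoding of the rules is mine, not from the quoted answer):

```python
# Rock-Paper-Scissors as a commutative, non-associative binary operation
BEATS = {("rock", "paper"): "paper",
         ("rock", "scissors"): "rock",
         ("paper", "scissors"): "scissors"}

def star(a, b):
    if a == b:
        return a                                  # x * x = x
    return BEATS.get((a, b)) or BEATS[(b, a)]     # same result in either order

moves = ("rock", "paper", "scissors")

# Commutative:
assert all(star(a, b) == star(b, a) for a in moves for b in moves)

# Not associative:
left = star(star("rock", "paper"), "scissors")    # paper * scissors = scissors
right = star("rock", star("paper", "scissors"))   # rock * scissors = rock
assert left != right
```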
I think a nice way to introduce groups is the Rubik's Cube non-commutative group. Also tessellation can be a way to inspire the need of a simple underlying structure to represent some complex sets. Also the dihedral group has the advantage of being visual.As dtldarek pointed out, you can praise finite fields for their applications in asymmetric ...
I think in a first abstract algebra course the goal is simply to make students aware that such things exist, give a couple of examples, and let them know that there is much, much more that can be learned in future classes. With that in mind:Start with $SO(2)$. Note that all elements of $SO(2)$ can be represented in terms of a continuously-varying ...
Describing groups by generators and relations (i.e., as quotients of free groups) is not a good way to describe them most of the time. There are some exceptions, Coxeter groups, braid groups, but mostly groups do not arise in that fashion. Further, "the word problem" is undecidable, so it's not the case that there are algorithms (much less efficient ones) ...
Why not turn $M_2(\mathbb{R})$ into a multiplication on $\mathbb{R}^4$? Here's a fun, somewhat weird, ring:$$ \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \left[ \begin{array}{cc} x & y \\ z & w \end{array}\right] = \left[\begin{array}{cc} ax+bz & ay+bw \\ cx+dz & cy+dw \end{array}\right]$$hence$$ (a,b,c,d) \star (...
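A sketch of the induced multiplication on 4-tuples, assuming the identification $(a,b,c,d) \leftrightarrow \left[\begin{smallmatrix} a & b \\ c & d\end{smallmatrix}\right]$ from the quoted formula:

```python
# 2x2 matrix multiplication transported to 4-tuples (a, b, c, d)
def star(p, q):
    a, b, c, d = p
    x, y, z, w = q
    return (a * x + b * z, a * y + b * w, c * x + d * z, c * y + d * w)

I = (1, 0, 0, 1)                  # the identity matrix as a 4-tuple
p = (1, 2, 3, 4)
q = (0, 1, 1, 0)

print(star(p, I))                 # (1, 2, 3, 4): I is a two-sided identity
print(star(p, q), star(q, p))     # different results: the ring is non-commutative
```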
(This pertains to U.S. universities.) Times may have changed, but when I was in graduate school (several places, and yes I know this is unusual, but I mention it because I'm talking about more than one data point) this was a standard topic that showed up somewhere in the standard first year (2-semester) graduate algebra sequence (e.g. Lang, Hungerford, ...
Perhaps rather than spend time establishing that trisecting an angle is impossible via Euclidean (ruler-compass) constructions,you could instead (a) Make that claim without proof,and (b) Mention that different axioms do permit angle trisection:(Figure from Geometric Folding Algorithms: ...
One of the most famous mathematics education theoretical constructs, namely APOS, offers an answer for this question, in fact, the exact same question. A quick search leads you to some papers and even one or two books. Here is a very short summary.APOS stands for Action, Process, Object, Schema. In simple terms, the difficulty relies on coming from ...
I second Adam. Keep it strong with examples if you can. There is a MathOverflow answer with a ton of group theory examples. You might want to have them practice the more "rote" exercises that often come in an intro abstract algebra book to help gather examples. Also, it should be mentioned that not all CS students have a strong background with ...
The rational root theorem, synthetic division, the remainder theorem, Descartes rule of signs, and similar lower level topics were fairly widely taught in U.S. high school algebra-2 courses before and during the 1980s, but they've slowly been de-emphasized as graphing calculators (allows for numerical equation solving) came onto the scene at the end of the ...
Mathematical Control & Related Fields, March 2015, Volume 5, Issue 1
ISSN: 2156-8472, eISSN: 2156-8499
Abstract:
We consider single-observed cascade systems of hyperbolic equations. We first consider the class of bounded operators that satisfy a non negativity property $(NNP)$. Within this class, we give a necessary and sufficient condition for observability of the cascade system by a single observation. We further show that if the coupling operator does not satisfy $(NNP)$ (contrarily to [5], or also e.g.[3,4] for symmetrically coupled systems), the usual observability inequality through a single component may still occur in a general framework, under some smallness conditions, but it may also be violated. When the coupling operator is a multiplication operator, $(NNP)$ is violated whenever the coupling coefficient changes sign in the spatial domain. We give explicit constructive examples of such coupling operators for which unique continuation may fail for an infinite dimensional set of initial data, that we characterize explicitly. We also exhibit examples of couplings and initial data for which the observability inequality holds but in weaker norms. These examples extend to parabolic systems. Finally, we show that the two-level energy method [1,2] which involves different levels of energies for the observed and unobserved component, may involve the
same levels of energies of these respective components, if the differential order of the coupling is higher (operating here through velocities instead of displacements). We further give an application to controlled systems coupled in velocities. This shows that the answer to observability and unique continuation questions for single-observed cascade systems is much more involved in the case of coupling operators that violate $(NNP)$ or of higher order coupling operators, and that the mathematical properties of the coupling operator greatly influence the dynamics of the observed system even though it operates through lower order differential terms. We indicate several extensions and future directions of research.

Abstract:
In this paper necessary and sufficient conditions of $L^\infty$-controllability and approximate $L^\infty$-controllability are obtained for the control system $ w_{tt}=\frac{1}{\rho} (k w_x)_x+\gamma w$, $w(0,t)=u(t)$, $x>0$, $t\in(0,T)$. Here $\rho$, $k$, and $\gamma$ are given functions on $[0,+\infty)$; $u\in L^\infty(0,\infty)$ is a control; $T>0$ is a constant. These problems are considered in special modified spaces of the Sobolev type introduced and studied in the paper. The growth of distributions from these spaces is associated with the equation data $\rho$ and $k$. Using a transformation operator introduced and studied in the paper, we see that this control system replicates the controllability properties of the auxiliary system $ z_{tt}=z_{xx}-q^2z$, $z(0,t)=v(t)$, $x>0$, $t\in(0,T)$, and vice versa. Here $q\ge0$ is a constant and $v\in L^\infty(0,\infty)$ is a control. Necessary and sufficient conditions of controllability for the main system are obtained from those for the auxiliary system.
Abstract:
The paper gives a characterization of the uniform robust domain of attraction for a finite non-linear controlled system subject to perturbations and state constraints. We extend the Zubov approach to characterize this domain by means of the value function of a suitable infinite horizon state-constrained control problem which at the same time is a Lyapunov function for the system. We provide associated Hamilton-Jacobi-Bellman equations and prove existence and uniqueness of the solutions of these generalized Zubov equations.
Abstract:
In this paper we study an optimal control problem (OCP) associated to a linear elliptic equation on a bounded domain $\Omega$. The matrix-valued coefficient $A$ of such systems is our control in $\Omega$ and will be taken in $L^2(\Omega;\mathbb{R}^{N\times N})$, which in particular may comprise the case of unboundedness. Concerning the boundary value problems associated to equations of this type, one may exhibit non-uniqueness of weak solutions, namely approximable solutions as well as another type of weak solutions that cannot be obtained through the $L^\infty$-approximation of the matrix $A$. Following the direct method in the calculus of variations, we show that the given OCP is well-posed and admits at least one solution. At the same time, optimal solutions to such a problem may have a singular character in the above sense. In view of this we indicate two types of optimal solutions to the above problem, the so-called variational and non-variational solutions, and show that some of these optimal solutions cannot be attained through the $L^\infty$-approximation of the original problem.
Abstract:
A linear-quadratic (LQ, for short) optimal control problem is considered for mean-field stochastic differential equations with constant coefficients in an infinite horizon. The stabilizability of the control system is studied followed by the discussion of the well-posedness of the LQ problem. The optimal control can be expressed as a linear state feedback involving the state and its mean, through the solutions of two algebraic Riccati equations. The solvability of such kind of Riccati equations is investigated by means of semi-definite programming method.
Abstract:
We construct a patchy feedback for a general control system on $\mathbb{R}^d$ which realizes practical stabilization to a target set $\Sigma$, when the dynamics is constrained to a given set of states $S$. The main result is that $S$--constrained asymptotically controllability to $\Sigma$ implies the existence of a discontinuous practically stabilizing feedback. Such a feedback can be constructed in ``patchy'' form, a particular class of piecewise constant controls which ensure the existence of local Carathéodory solutions to any Cauchy problem of the control system and which enjoy good robustness properties with respect to both measurement errors and external disturbances.
Abstract:
This paper is addressed to a quantitative internal unique continuation property for stochastic parabolic equations, i.e., we show that each of their solutions can be determined by the observation on any nonempty open subset of the whole region in which the equations evolve. The proof is based on a global Carleman estimate.
Abstract:
In this paper we establish a global Carleman estimate for the fourth order Schrödinger equation with potential posed on a $1-d$ finite domain. The Carleman estimate is used to prove the Lipschitz stability for an inverse problem consisting in recovering a stationary potential in the Schrödinger equation from boundary measurements.
This question already has an answer here:
What is the order and cofactor of a base point? Is it possible to deduce the order and cofactor given just the base point? What about the other way around, from order and cofactor to base point?
Cryptography Stack Exchange is a question and answer site for software developers, mathematicians and others interested in cryptography.
What is the order and cofactor of a base point?
Put simply, the order of the basepoint is the number of points the basepoint generates under repeated addition. If you have point $G$ and you repeatedly add $G$ to itself, you will eventually get the identity element $O$ (where $O + G = G$). The order of $G$ is the smallest $n$ where $nG = O$.
The cofactor is the order of the entire group (the number of points on the curve) divided by the order of the subgroup generated by your basepoint. Because of Lagrange's theorem, we know that the number of elements of every subgroup divides the order of the group.
Is it possible to deduce the order and cofactor given just the base point? What about the other way around, from order and cofactor to base point?
Not easily unless you know the order of the group.
There are algorithms for efficiently computing the order of an elliptic curve. After you know the order of the curve, you can factor it to find the size of its subgroups. Using the factors, you can test each one and find the order of any point.
I suggest you read up on some basic group theory (I recommend the book of Fraleigh).
Let $E$ be an elliptic curve over a (finite) field $K$, $E(K)$ the group of points of $E$, and $\left|E(K)\right|$ the order of $E(K)$ (i.e., its cardinality). It can be computed in polynomial time (by the SEA algorithm), and for the curves occurring in practice it can also be factored.

Let $P \in E(K)$ be a point of $E$. The set of multiples of $P$ (i.e., of all points $kP$) forms a subgroup of $E(K)$, which we denote $\langle P\rangle$, and we let $r = \left|\langle P\rangle\right|$. $r$ is the order of $P$, and by Lagrange's theorem $r$ is a divisor of $\left|E(K)\right|$, so we let $h = \frac{\left|E(K)\right|}{r}$. $h$ is the cofactor. $r$ can be computed efficiently (by iterating through the divisors of $\left|E(K)\right|$), and so $h$ can be computed as well.
In practice we often choose $r$ prime, so given a curve $E$ such that $\left|E(K)\right|$ is a multiple of $r$ it is easy to obtain a point of order $r$: choose a point $P$ at random until you find one such that $rP = \infty$ (and $P \ne \infty$ of course).
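As a toy illustration (the curve $y^2 = x^3 + 2x + 3$ over $\mathbb{F}_{97}$ is an arbitrary example for demonstration, not a standardized curve), we can compute $\left|E(K)\right|$ by brute force and then find the order of a point by testing the divisors of the group order, exactly as described above:

```python
# Toy curve y^2 = x^3 + a*x + b over F_p (illustrative parameters only)
p, a, b = 97, 2, 3

def add(P, Q):
    """Elliptic-curve point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mult(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# |E(K)|: the point at infinity plus every (x, y) satisfying the equation
N = 1 + sum((y * y - x**3 - a * x - b) % p == 0
            for x in range(p) for y in range(p))

def order(P):
    """Smallest divisor r of N with r*P = infinity (one exists by Lagrange)."""
    return min(d for d in range(1, N + 1) if N % d == 0 and mult(d, P) is None)

G = next((x, y) for x in range(p) for y in range(p)
         if (y * y - x**3 - a * x - b) % p == 0)   # any point will do
r = order(G)
h = N // r                                          # the cofactor
print(N, r, h)
```

For real curve sizes, the brute-force point count is replaced by SEA, but the divisor-testing step for $r$ is the same.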
Summary: The objective of the bar crawl optimization problem is to select a set of bars and determine an ordering to visit them so as to optimize the objective function subject to any constraints on the bar crawl. The objective function may include a single criterion or multiple criteria, and the constraints may include requirements to include one or more specific locations, limits on the cost of drinks, restrictions on operating hours, etc.
According to Wikipedia, a bar crawl, or a pub crawl, is the act of one or more people drinking in multiple bars or pubs in a single night, normally walking or busing to each one between drinking. The Oxford English Dictionary reports that the term pub crawl and its variants have been in use since the late 19th century.
A bar crawl can be an informal outing for a group of friends or an organized event attracting thousands of people from around the country or the world. The Guinness World Record for the most people on a pub crawl involved 4885 people in Kansas City, Missouri in June 2013 to benefit cancer research. (See Crawl for Cancer.) Wisconsin is well-known for its many pub crawls. Oshkosh hosts pub crawls in the spring and in the fall. In the greater Milwaukee area, thousands of people have attended the Wolski's Pub Crawl, the Bay View Pub Crawl, and the Zombie Pub Crawl. On a smaller scale, the concentration of bars in downtown Madison makes it a great place for college students to participate in their own bar crawls.
The rules governing a pub crawl vary from event to event, so there are different ways of formulating the pub crawl optimization problem. In the Kansas City Crawl for Cancer, each team of 10 people paid a fixed fee to receive a ticket for two pitchers of beer at each of the 10 participating bars. For teams seeking to minimize their walking distance, the optimal sequence of bars can be obtained by solving a traveling salesman problem (TSP).
In the TSP, every location needs to be visited, and consequently there is no value associated with visiting a location. However, in an informal bar crawl event among friends, it is unlikely that every location needs to be visited, and the "benefit" associated with visiting a bar may vary. Feillet et al. (2005) define a class of problems, called traveling salesman problems with profits, that are generalizations of the TSP. In the TSP with profits, a profit is associated with each location and not every location must be visited. The overall goal is the simultaneous optimization of the collected profit and the travel costs. (Note that the problems are defined assuming that the tour starts and finishes at a defined location.) Within the class, they define three variations:

- profitable tour problems: the objective is to find a circuit that minimizes the travel costs minus the collected profit.
- orienteering problems: the objective is to find a circuit that maximizes the collected profit subject to the travel costs not exceeding a specified value.
- prize-collecting traveling salesman problems: the objective is to find a circuit that minimizes travel costs subject to the collected profits not being less than a specified value.
While one could reasonably cast the bar crawl optimization problem as any one of these problems, we consider a version of the bar crawl optimization problem that is similar to the profitable tour problem. There are two main differences. First, we do not specify in advance a starting/ending location. Second, most college students are budget-constrained, so we add a budget constraint to the problem.
Given a set of nodes (bars), the pairwise distances between the nodes (bars), the benefit and the cost of visiting each node (i.e., the value of one drink and the cost of one drink at each bar), and a maximum available budget, the objective of the pub crawl optimization problem is to find a tour through a subset of the nodes so as to maximize the collected benefits minus the distance walked, subject to a maximum available budget for drinks. Since the benefits and the distances are measured in different units, we include an objective weighting parameter alpha (\(\alpha\)).

Sets
\(V\) = set of nodes
\(A\) = \(\{(i,j): i \in V, j \in V, i \neq j\}\) = set of links

Parameters
\(\alpha\) = weighting value for the objective function
\(d_{ij}\) = distance between nodes \(i\) and \(j\), \(\forall (i,j) \in A\)
\(b_j\) = "benefit" associated with a drink at node \(j\), \(\forall j \in V\)
\(c_j\) = cost of one drink at node \(j\), \(\forall j \in V\)
\(B\) = maximum available budget

Variables
\(x_{ij}\) = 1 if link \((i,j)\) is selected, 0 otherwise, \(\forall (i,j) \in A\)
\(y_j\) = 1 if node \(j\) is included in the tour, 0 otherwise, \(\forall j \in V\)

Bar Crawl Optimization Formulation

maximize \(\sum_{j \in V} b_j y_j - \alpha \sum_{(i,j) \in A} d_{ij} x_{ij}\)
subject to exiting a node on the tour
\(\sum_{j: (i,j) \in A} x_{ij} - y_i = 0, \forall i \in V\)
subject to entering a node on the tour
\(\sum_{i: (i,j) \in A} x_{ij} - y_j = 0, \forall j \in V\)
subject to drink cost budget constraint
\(\sum_{j \in V} c_j y_j \leq B\)
subject to subtour elimination constraints
\(\sum_{i \in S} \sum_{j \in S} x_{ij} \leq |S| - 1, \forall S \subseteq V, 2 \leq |S| \leq |V|-1\)
To solve instances of this integer linear programming problem, we can use one of the NEOS solvers in the Mixed Integer Linear Programming category.
Included below is an AMPL model for a small instance of the bar crawl optimization problem. Note that the model includes only a small set of the subtour elimination constraints, namely constraints to eliminate two-cycles and constraints to eliminate three-cycles (when necessary).
set V;
set LINKS := {i in V, j in V: i <> j};
param alpha >= 0;
param d{LINKS} >= 0;
param b{V} >= 0;  # benefit of visiting a bar
param c{V} >= 0;  # cost of one drink
param B default 30; # default maximum budget for drinks
var x{LINKS} binary;
var y{j in V} binary;
maximize benefit_minus_distance:
sum{j in V} b[j]*y[j] - alpha*sum{(i,j) in LINKS} d[i,j]*x[i,j];
subject to exit{i in V}:
sum{(i,j) in LINKS} x[i,j] - y[i] = 0;
subject to enter{j in V}:
sum{(i,j) in LINKS} x[i,j] - y[j] = 0;
subject to budget:
sum{j in V} c[j]*y[j] <= B;
subject to two_cycles{(i,j) in LINKS}:
x[i,j] + x[j,i] <= 1;
subject to three_cycles{(i,j) in LINKS, k in V: i <> k and j <> k}:
x[i,j] + x[j,k] + x[k,i] <= 2;
data;
set V := 1 2 3 4 5 6 7 8 9 10;
param alpha := 0.5;
param b :=
  1  550   2  600   3  450   4  650   5  700
  6  800   7 1000   8  250   9  850  10  750 ;
param c :=
  1 4   2 4   3 4   4 5   5 5
  6 4   7 6   8 4   9 5  10 5 ;
param d :    1    2    3    4    5    6    7    8    9   10 :=
  1       .  4224 3168 1056 3696  427 5280  528 1056 2112
  2     4224   .  1056 5280 6336 4224 1584 3696 4752 4224
  3     3168 1056   .  4224 5808 3168 2112 2640 4224 3696
  4     1056 5280 4224   .  2640 1584 6336 1584  141 1056
  5     3696 6336 5808 2640   .  4224 7392 4224 2640 3696
  6      427 4224 3168 1584 4224   .  5808  528 1584 1056
  7     5280 1584 2112 6336 7392 5808   .  4752 6336 5808
  8      528 3696 2640 1584 4224  528 4752   .  1584 1056
  9     1056 4752 4224  141 2640 1584 6336 1584   .  1056
 10     2112 4224 3696 1056 3696 1056 5808 1056 1056   . ;
If we submit the model to AMPL-Gurobi, we obtain the following output:
Gurobi 5.5.0: optimal solution; objective 1732
58 simplex iterations

x :=
1   6   1
4   9   1
6  10   1
9   1   1
10  4   1
;

The \(x_{ij}\) variables at value 1 in the solution indicate an ordering of the bars on the tour (loop) but not an origin. Starting at bar 1, the sequence would be 1-6-10-4-9-1, but any one of the bars is eligible to be the starting point. To compute the value of the objective function, we consider the total drink benefit (550 + 800 + 750 + 650 + 850 = 3600) minus alpha (0.5) times the total walking distance (427 + 1056 + 1056 + 141 + 1056 = 3736) to get 3600 - 0.5*3736 = 1732.
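As a sanity check on the reported solution, the arithmetic can be reproduced in a few lines of plain Python (a sketch; the tour, benefits, costs, and leg distances are copied from the data and output above):

```python
# Sanity check of the Gurobi solution for alpha = 0.5.
alpha = 0.5
benefits = {1: 550, 6: 800, 10: 750, 4: 650, 9: 850}   # bars on the tour
costs = {1: 4, 6: 4, 10: 5, 4: 5, 9: 5}                # drink cost per bar
legs = {(1, 6): 427, (6, 10): 1056, (10, 4): 1056,     # tour legs from the
        (4, 9): 141, (9, 1): 1056}                     # distance matrix

total_benefit = sum(benefits.values())
total_distance = sum(legs.values())
objective = total_benefit - alpha * total_distance

print(objective)                    # 1732.0, matching the solver output
assert sum(costs.values()) <= 30    # the $30 drink budget is respected
```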
Let's explore how the solution changes as alpha varies. We solved the AMPL model with six different values of alpha: 0, 0.25, 0.5, 0.75, 1, and 1.25. The corresponding solutions are shown in the table below.
alpha  obj value  tour            benefit  distance
0      4750       4-5-6-7-10-9-4  4750     19677
0.25   2784       1-6-8-10-9-4-1  3850     4264
0.5    1732       1-6-10-4-9-1    3600     3736
0.75   798        1-4-9-10-6-1    3600     3736
1      117        1-8-6-1         1600     1483
1.25   0          none            0        0

When alpha is zero, the objective of the problem reduces to maximizing the total benefit, regardless of the distance walked. The problem is essentially a 0-1 knapsack problem: given a set of items (drinks), each with a weight (cost) and a value (benefit), determine which items to include so that the total weight (cost) is less than or equal to the given limit (budget) and the total value (benefit) is maximized. As alpha increases, the objective of the problem puts increasing emphasis on minimizing the distance walked. As one can see in the table, as the value of alpha increases from 0.25 to 0.5, the benefit of the tour decreases from 3850 to 3600 and the distance of the tour decreases from 4264 to 3736 as fewer stops are included. At some point, alpha increases to a large enough value such that the increased emphasis on minimizing the distance walked mathematically outweighs any benefit of a tour. The result is an optimal objective function value of 0, and no tour.

Feillet, D., P. Dejax, and M. Gendreau. 2005. Traveling Salesman Problems with Profits. Transportation Science 39(2), 188-205.
Martello, S. and P. Toth. 1990. Knapsack Problems: Algorithms and Computer Implementations. J. Wiley, Chichester.
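The alpha = 0 row can be checked directly against the knapsack interpretation: with only 10 bars, a brute-force scan over all subsets (a plain-Python sketch, hardcoding the benefit and cost data from the model above) recovers the same maximum benefit of 4750 within the budget of 30.

```python
from itertools import combinations

# Benefit and drink cost per bar, from the AMPL data section.
b = {1: 550, 2: 600, 3: 450, 4: 650, 5: 700,
     6: 800, 7: 1000, 8: 250, 9: 850, 10: 750}
c = {1: 4, 2: 4, 3: 4, 4: 5, 5: 5,
     6: 4, 7: 6, 8: 4, 9: 5, 10: 5}
B = 30  # budget

best, best_set = 0, ()
for k in range(len(b) + 1):
    for S in combinations(b, k):
        if sum(c[j] for j in S) <= B and sum(b[j] for j in S) > best:
            best = sum(b[j] for j in S)
            best_set = S

print(best, sorted(best_set))  # 4750 [4, 5, 6, 7, 9, 10]
```

The selected bars match the alpha = 0 tour 4-5-6-7-10-9-4, which (since distance is ignored) simply visits the knapsack-optimal set in some order.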
1. Global, regional, and national incidence, prevalence, and years lived with disability for 301 acute and chronic diseases and injuries in 188 countries, 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013
The Lancet, ISSN 0140-6736, 08/2015, Volume 386, Issue 9995, pp. 743 - 800
Summary Background Up-to-date evidence about levels and trends in disease and injury incidence, prevalence, and years lived with disability (YLDs) is an...
Internal Medicine | POPULATION | MEDICINE, GENERAL & INTERNAL | ATTENTION-DEFICIT/HYPERACTIVITY DISORDER | OTITIS-MEDIA | CONDUCT DISORDER | WEIGHTS | RISK | IRON-DEFICIENCY | OSTEOARTHRITIS | EPIDEMIOLOGY | HEALTH OUTCOMES | Age Distribution | Prevalence | Developing Countries - statistics & numerical data | Humans | Middle Aged | Child, Preschool | Cost of Illness | Infant | Male | Wounds and Injuries - epidemiology | Developed Countries - statistics & numerical data | Incidence | Young Adult | Global Health - statistics & numerical data | Aged, 80 and over | Neglected Diseases - epidemiology | Adult | Disabled Persons - statistics & numerical data | Female | Child | Infant, Newborn | Chronic Disease - epidemiology | Residence Characteristics - statistics & numerical data | Adolescent | Sex Distribution | Acute Disease - epidemiology | Aged | Medicine, Experimental | Chronic diseases | Medical research | Analysis | Prevalence studies (Epidemiology) | Global health | Infectious diseases | Disease | Mental disorders | Comorbidity | Mortality | Respiratory diseases | Human immunodeficiency virus--HIV | Mental depression | Epidemiology | Injuries | Residence Characteristics | Neglected Diseases | Life Sciences | Acute Disease | Global Health | Developed Countries | Wounds and Injuries | Santé publique et épidémiologie | Developing Countries | Chronic Disease | Disabled Persons | Hälsovetenskaper | Folkhälsovetenskap, global hälsa, socialmedicin och epidemiologi | Medical and Health Sciences | Medicin och hälsovetenskap | Public Health, Global Health, Social Medicine and Epidemiology | Health Sciences | Nutrition and Disease | HNE Nutrition and Disease | Chair Nutrition and Disease | Voeding en Ziekte | VLAG | HNE Voeding en Ziekte
Journal Article
2. Search for nonresonant Higgs boson pair production in the b b ¯ b b ¯ $$ \mathrm{b}\overline{\mathrm{b}}\mathrm{b}\overline{\mathrm{b}} $$ final state at s $$ \sqrt{s} $$ = 13 TeV
Journal of High Energy Physics, ISSN 1029-8479, 4/2019, Volume 2019, Issue 4, pp. 1 - 49
Results of a search for nonresonant production of Higgs boson pairs, with each Higgs boson decaying to a b b ¯ $$ \mathrm{b}\overline{\mathrm{b}} $$ pair, are...
Higgs physics | Beyond Standard Model | Hadron-Hadron scattering (experiments) | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
3. Measurement of the ratio of the production cross sections times branching fractions of B c ± → J/ψπ ± and B± → J/ψK ± and ℬ B c ± → J / ψ π ± π ± π ∓ / ℬ B c ± → J / ψ π ± $$ \mathrm{\mathcal{B}}\left({\mathrm{B}}_{\mathrm{c}}^{\pm}\to \mathrm{J}/\psi {\pi}^{\pm }{\pi}^{\pm }{\pi}^{\mp}\right)/\mathrm{\mathcal{B}}\left({\mathrm{B}}_{\mathrm{c}}^{\pm}\to \mathrm{J}/\psi {\pi}^{\pm}\right) $$ in pp collisions at s = 7 $$ \sqrt{s}=7 $$ TeV
Journal of High Energy Physics, ISSN 1029-8479, 1/2015, Volume 2015, Issue 1, pp. 1 - 30
The ratio of the production cross sections times branching fractions σ B c ± ℬ B c ± → J / ψ π ± / σ B ± ℬ B ± → J / ψ K ± $$ \left(\sigma...
B physics | Branching fraction | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
4. Combined Measurement of the Higgs Boson Mass in pp Collisions at √s=7 and 8 TeV with the ATLAS and CMS Experiments
Physical Review Letters, ISSN 0031-9007, 2015, Volume 114, Issue 19
A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H arrow right...
CERN | Large Hadron Collider | Collisions | Higgs bosons | Decomposition | Solenoids | Channels | Invariants | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
The European Physical Journal C, ISSN 1434-6044, 3/2016, Volume 76, Issue 3, pp. 1 - 52
New sets of parameters (“tunes”) for the underlying-event (UE) modelling of the pythia8, pythia6 and herwig++ Monte Carlo event generators are constructed...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PHYSICS, PARTICLES & FIELDS | Nuclear energy | Distribution (Probability theory) | Collisions (Nuclear physics) | Analysis | Physics - High Energy Physics - Experiment | High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Regular - Experimental Physics
Journal Article
6. Precise determination of the mass of the Higgs boson and tests of compatibility of its couplings with the standard model predictions using proton collisions at 7 and 8 TeV
European Physical Journal C, ISSN 1434-6044, 05/2015, Volume 75, Issue 5, p. 1
Properties of the Higgs boson with mass near 125 GeV are measured in proton-proton collisions with the CMS experiment at the LHC. Comprehensive sets of...
TRANSVERSE-MOMENTUM | TOP-PAIR | RATIOS | NLO | RESUMMATION | ELECTROWEAK CORRECTIONS | BROKEN SYMMETRIES | HADRON COLLIDERS | LHC | QCD CORRECTIONS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment
Journal Article
Let $m, n \in \mathbb{N}$ and $n \le m$. Given $k$ subsets $X_1, X_2, \dots, X_k$ of $\{ 1, 2, \dots, m \}$ and $k$ nonnegative integers $a_1, a_2, \dots, a_k$, find all subsets $Y$ of $\{ 1, 2, \dots, m \}$ satisfying the following conditions:
The set $Y$ contains exactly $n$ elements: $|Y| = n$.
The set $X_i$ has exactly $a_i$ elements in common with $Y$: $\forall 1 \le i \le k: |X_i \cap Y| = a_i$.

EXAMPLE. Let $m = 5$, $n = 2$. We are given the following conditions. Set $Y$ contains...
1 element out of $\{ 2, 3, 4, 5 \}$, $\quad$ ($a_1 = 1$)
2 elements out of $\{ 1, 2, 3, 4 \}$, $\quad$ ($a_2 = 2$)
1 element out of $\{ 1, 3, 4, 5 \}$. $\quad$ ($a_3 = 1$)
$Y = \{ 1, 2 \}$ is the unique solution. It is necessary to have $1 \in Y$, else we must select two elements of $\{ 2, 3, 4 \}$ (the 2nd set without $1$), which is a subset of the 1st set. Now consider the 3rd set. If we selected $3$, $4$, or $5$ instead of $2$, then $Y$ would contain two elements from the 3rd set.
Of course, there may be many solutions, up to $\binom{m}{n}$ to be specific.
QUESTIONS. I have the following two questions:
Are there any algorithms better than trivial brute force to find all solutions to an instance of this problem? Do these algorithms meet known lower bounds?
Given that there exists a unique solution, can we use this extra knowledge to do better than in the general case in terms of time complexity?

MY APPROACHES. Obviously, it is possible to enumerate all $\binom{m}{n}$ subsets and check the conditions. At least for the last step (I know, the first step causes the lack of efficiency) I can think of some improvements. First, we may represent the sets as boolean arrays of size $m$. When iterating over the possible solutions, we may compute $X_i \cap Y$ in time $\mathcal{O}(m)$, resulting in an $\mathcal{O}\left(m\binom{m}{n}\right)$ solution. Additionally, if there is a set $X_i$ with $|X_i| = a_i$, we are lucky. In this case, we know $X_i \subseteq Y$ and may erase all elements of $X_i$ from the other subsets $X_j$, replacing $a_j$ by $a_j - |X_i \cap X_j|$. But in general, this special case does not occur with certainty.
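For reference, the brute-force baseline is short enough to write out; here is a sketch in plain Python (with the example instance above hardcoded) that enumerates all $\binom{m}{n}$ candidates and keeps those satisfying every intersection constraint:

```python
from itertools import combinations

def solve(m, n, constraints):
    """Return every n-subset Y of {1,...,m} with |X_i & Y| == a_i for each (X_i, a_i)."""
    return [set(Y) for Y in combinations(range(1, m + 1), n)
            if all(len(X & set(Y)) == a for X, a in constraints)]

# The example instance from above; its unique solution is {1, 2}.
constraints = [({2, 3, 4, 5}, 1), ({1, 2, 3, 4}, 2), ({1, 3, 4, 5}, 1)]
print(solve(5, 2, constraints))  # [{1, 2}]
```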
I would appreciate any kind of help and material.
Erratum to: Tower lemmas

Abstract: Conditions are given on a tower of spaces $\{Y_s\}$, in order that it can be used to obtain the homotopy type of the R-completion of a given space X:

(i) If $f: X \to \{Y_s\}$ is a map which induces, for every R-module M, an isomorphism
$$\mathop {\lim }\limits_ \to H_*\left( {Y_s;M} \right) \approx H_*\left( {X;M} \right),$$
then f induces a homotopy equivalence \({R_\infty }X \simeq \mathop {\lim }\limits_ \to {R_\infty }{Y_s}\).

(ii) If, in addition, each $Y_s$ is R-complete (Ch. I, 5.1), then the space \(\mathop {\lim }\limits_ \to {Y_s}\) already has the same homotopy type as $R_\infty X$.

(iii) If, in addition, each $Y_s$ satisfies the even stronger condition of being R-nilpotent (4.2), then, in a certain precise sense, the tower $\{Y_s\}$ has the same homotopy type as the tower $\{R_s X\}$.
Additional exploration is achieved by adding Dirichlet noise to the prior probabilities in the root node $s_0$, specifically $P(s, a) = (1−\varepsilon)p_a+ \varepsilon \eta_a$, where $\eta \sim \text{Dir}(0.03)$ and $\varepsilon = 0.25$; this noise ensures that all moves may be tried, but the search may still overrule bad moves.
(AlphaGo Zero)
And:
Dirichlet noise $\text{Dir}(\alpha)$ was added to the prior probabilities in the root node; this was scaled in inverse proportion to the approximate number of legal moves in a typical position, to a value of $\alpha = \{0.3, \; 0.15, \; 0.03\}$ for chess, shogi and Go respectively.
(AlphaZero)
Two things I don't understand:
$P(s, a)$ is an $n$-dimensional vector. Is $\text{Dir}(\alpha)$ shorthand for the Dirichlet distribution with $n$ parameters, each with value $\alpha$?
I've only come across Dirichlet as the conjugate prior of the multinomial distribution. Why was it picked here?
For context,
P(s, a) is just one component of the PUCT (polynomial upper confidence tree, a variant on upper confidence bounds) calculation for a given state/action. It's scaled by a constant and a metric for how many times the given action has been selected amongst its siblings during MCTS, and added to the estimated action value
Q(s, a):
PUCT(s, a) = Q(s, a) + U(s, a).
$ U(s,a) = c_{\text{puct}} P(s,a) \frac{\sqrt{\sum_b N(s,b)}}{1 + N(s,a)} $.
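For concreteness, here is a small Python sketch (variable names are mine, not from the papers) of the root-noise mixing and the $U(s,a)$ term. Since $\text{Dir}(\alpha,\dots,\alpha)$ can be sampled as independent $\text{Gamma}(\alpha,1)$ draws normalized to sum to 1, the standard library suffices:

```python
import math
import random

def dirichlet(alpha, n):
    """Sample from Dir(alpha, ..., alpha) in n dimensions via normalized Gamma draws."""
    g = [random.gammavariate(alpha, 1.0) for _ in range(n)]
    s = sum(g)
    return [x / s for x in g]

def noisy_priors(p, alpha=0.03, eps=0.25):
    """Root-node mixing: P(s,a) = (1 - eps) * p_a + eps * eta_a, eta ~ Dir(alpha)."""
    eta = dirichlet(alpha, len(p))
    return [(1 - eps) * pa + eps * na for pa, na in zip(p, eta)]

def u_term(P, N, a, c_puct=1.0):
    """U(s,a) = c_puct * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a))."""
    return c_puct * P[a] * math.sqrt(sum(N)) / (1 + N[a])

p = [0.5, 0.3, 0.2]    # network priors over 3 legal moves (illustrative)
P = noisy_priors(p)    # still a valid probability vector
N = [10, 4, 0]         # visit counts from MCTS (illustrative)
scores = [u_term(P, N, a) for a in range(3)]
```

Note that the mixture remains a probability vector (both $p$ and $\eta$ sum to 1), and every move keeps strictly positive prior mass, which is exactly why "all moves may be tried."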
A capacitor is two conductors of arbitrary shape separated by some insulating material (e.g. air). Capacitors are used to store electrical charge and electrical potential energy. In this lesson, it will be our goal to prove that and to derive equations which will allow us to calculate the total electrical potential energy stored in
any kind of capacitor.
Consider the capacitor in Figure 1. The two conductors are initially electrically neutral, with an equal number of positive and negative charged particles. Suppose that an electric field is turned on and an electric force \(\vec{F}_E\) is applied to a charged particle \(q\) in Conductor A, and that \(\vec{F}_E\) moves \(q\) over to Conductor B. What is the work done by this
force? Using the definition of work (\(W≡\int_c\vec{F}·d\vec{r}\)), we find that the work done by \(\vec{F}_E\) in moving \(q\) along a path \(c\) to Conductor B is given by
$$W=\int_C\vec{F}_E·d\vec{r}.\tag{1}$$
Equation (1) represents the work done by any electrical force \(\vec{F}_E\) moving some charge \(q\) along any arbitrary path over to Conductor B. So, Equation (1) is very general and, initially, it might not look all that practically useful. After all, how do we solve the integral \(\int_c\vec{F}_E ·d\vec{r}\) if \(\vec{F}_E\) and \(c\) can be anything? Without any simplifying assumption, this integral does indeed seem pretty hard to solve. But, using the definition of the electric field (\(\vec{E}=\frac{\vec{F}_E}{q}\)), let's substitute \(\vec{F}_E=\vec{E}q\) into Equation (1) to see how that might simplify this integral:
$$W=q\int_{r_A}^{r_B}\vec{E} ·d\vec{r}.\tag{2}$$
The question is, why was this substitution useful? The answer is that the electric potential difference \(ΔV_{AB}\) (or voltage) between Conductors A and B can now be substituted into Equation (2); unlike the integral \(\int_{r_A}^{r_B}\vec{E} ·d\vec{r}\), the voltage \(ΔV_{AB}\) is something that we can actually measure and, thus, determine the value of. The voltage \(ΔV_{AB}\) is defined as \(ΔV_{AB}≡\int_{r_A}^{r_B}\vec{E} ·d\vec{r}\); if we substitute \(ΔV_{AB}\) into Equation (2) then we have
$$W=q ΔV_{AB}.\tag{3}$$
Equations (1), (2) and (3) all represent the work done by any arbitrary electric force \(\vec{F}_E\) moving a charged particle along a path \(c\) from one location to another (in our case, from Conductor A to Conductor B); but the point is that Equation (3) is the most useful since voltage is what engineers typically measure.
Let's now think about how to calculate the work done moving infinitely many point charges (each having a charge of \(dq\)) from Conductor A to Conductor B. If we move infinitely many point charges of charge \(dq\) from Conductor A to Conductor B, then how much additional charge does Conductor B get? To answer this question, we can just add up every charge \(dq\) added to Conductor B. But we have to add up an infinite number of these charges, so we must take the infinite sum, \(∫\). Doing so, we find that the total charge added to Conductor B is \(Q=\int{dq}\). How much charge did Conductor A lose? We could invoke the law of conservation of charge if we wanted to be technical, but on a more intuitive level, here's an easy way to think about it: since we removed an amount of charge \(Q\) from Conductor A (in order to move it to Conductor B), the total change in charge of Conductor A is simply \(-Q\).
Let's now ask a slightly more difficult question: what was the total work done moving all of the charges? We can already figure out the amount of work done moving a single point charge \(dq\) using Equation (3). If the point charge being moved becomes infinitesimally small, then what's on the opposite side of the equal sign (the work) must also become infinitesimally small, and Equation (3) simplifies to
$$dW=ΔV_{AB}dq.\tag{4}$$
Out of all the infinite number of charges (each with charge \(dq\)) that we want to move from Conductor A to Conductor B, Equation (4) represents the work done in moving just one of those charges from Conductor A to Conductor B. But in order to find the total amount of work associated with moving the charge \(Q\) to Conductor B, we must add up the work done in moving every charged particle to get
$$\int{dW}=\int{ΔV_{AB}dq}$$
$$W=\int{ΔV_{AB}dq}.\tag{5}$$
Put simply, Equation (3) is used to find the work done moving a single point charge (but we'd use Equation (4) instead if that point charge is infinitesimally small) and Equation (5) is used to find the work done moving many charged particles. Since the way that a capacitor is charged in real life is by having an electric field (which applies an electric force) move many charged particles (electrons) from one conductor to another, Equation (5) will be much more useful in our analysis of capacitors.
What is work, though? Work, of course, measures the amount of energy transferred into or out of a system. Equation (5) is used to calculate how much energy is transferred into or out of the capacitor when you strip one of the conductors of that capacitor of its electrons and move them to the other conductor comprising the capacitor.
$$ΔKE+ΔPE=W+Q+T_{other}$$
I have written the law of conservation of energy above. In our reference frame, the capacitor obviously isn't moving away from us, since capacitors are held in place in electrical circuits; thus \(ΔKE=0\). There is no loss of energy in the capacitor associated with \(T_{other}\); in other words, we'll assume that the capacitor doesn't lose energy by radiating it away, nor that energy is gained by adding electrons to the system, nor any of the other energy transfer mechanisms besides work and heat. The only non-trivial idealization we'll make is that the capacitor doesn't lose any energy due to heat. In some applications, it is necessary to consider energy lost in a capacitor due to heat, but for our purposes, we'll assume that the heat loss is negligible. Making all of these assumptions, we can write \(ΔPE=W\) and simplify Equation (5) to
$$ΔPE=\int{ΔV_{AB}dq}.\tag{6}$$
In other words, as one of the conductors in a capacitor gets stripped of its electrons which get moved to the other conductor, the transfer of energy \(W\) into the capacitor is additional electrical potential energy.
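As a worked special case (taking as given, rather than deriving here, that the potential difference is proportional to the charge moved), suppose \(ΔV_{AB}=q/C\), where \(C\) is the capacitance. Then Equation (6) gives \(ΔPE=\int_0^Q (q/C)\,dq=\frac{Q^2}{2C}\). The short numerical check below, with illustrative values of my choosing, confirms that the integral and the closed form agree:

```python
# Numerically integrate dPE = (q / C) dq from 0 to Q and compare
# against the closed form Q^2 / (2 C).
C = 2.0e-6    # capacitance in farads (illustrative value)
Q = 5.0e-6    # final charge in coulombs (illustrative value)
n = 100_000   # number of integration steps

dq = Q / n
# Midpoint rule for the work integral in Equation (6)
pe_numeric = sum(((i + 0.5) * dq / C) * dq for i in range(n))
pe_closed = Q**2 / (2 * C)

print(pe_numeric, pe_closed)  # both approximately 6.25e-06 joules
```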
Well, that concludes this lesson. Hopefully this gives you a sense that charging a capacitor (moving charges from one conductor to another) adds electrical potential energy to the system.
Probability Seminar Spring 2019
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M
Title:
Stabilization of Diffusion Limited Aggregation in a Wedge.
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows one to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown
Title:
Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title:
Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems. |
This is really not that big of a deal. In essence, the dimensions of zero are ill-defined: $[0]$ is (a) not meaningfully defined, and (b) never used in practice.
Let me start by making one thing clear:
Would it be improper to drop the units of measurement of $0 \,\mathrm m$ like this in an academic paper?
No, it is not improper: dropping the units is perfectly OK, and it is standard practice.
The dimensionality map $[\,·\,]$ has multiple different conventions, but they all work in much the same way. The key fact is that physical quantities form a vector space under multiplication, with exponentiation (over the field $\mathbb Q$) taking the role of scalar multiplication. (This is the essential reason why dimensional analysis often boils down to systems of linear equations, by the way.) The different base dimensions - mass, length, time, electric charge, etc. - are assumed to be algebraically independent and to span the space, and the dimensionality map $[\,·\,]$ reads out the 'coordinates' of a given physical quantity in terms of some pre-chosen canonical basis.
This only works, however, if you exclude zero from the game. The quantity $1\,\mathrm m$ has a multiplicative inverse, but $0\,\mathrm m$ does not, so if you included it it would break the vector space axioms. This is in general OK - you're not
forced to keep those properties - but it does preclude you from using the tools built on that vector space, most notably the dimensionality map. Thus $[0]$ doesn't map to anything.
Since you explicitly asked for it, here's one way you can formalize what I said above. (For another nice analysis, see this nice blog post by Terry Tao.)
A positive
physical quantity consists of an 8-tuple $(x,m,l,t,\theta,c,q,i) \in\mathbb R^\times\times \mathbb Q^7$, where $\mathbb R^\times=(0,\infty)$ is the multiplicative group of positive reals. This is usually displayed in the form
$$ x\,\mathrm{kg}^m\mathrm{m}^l\mathrm s^t\mathrm K^\theta\mathrm A^c \mathrm{mol}^q \mathrm{cd}^i.$$ The
multiplication of two physical quantities $p=(x,m,l,t,\theta,c,q,i)$ and $p'=(x',m',l',t',\theta',c',q',i')$ is defined as
$$pp'=(xx',m+m',l+l',t+t',\theta+\theta',c+c', q+q',i+i').$$
The multiplicative identity is $1=(1,0,0,0,0,0,0,0)$, and the multiplicative inverse of $p$ is $1/p=(1/x,-m,-l,-t,-\theta,-c,-q,-i)$. The
exponentiation of a physical quantity $p$ to an exponent $r\in\mathbb Q$ is defined as $$p^r=(x^r,rm,rl,rt,r\theta,rc,rq,ri).$$
You can then easily check that these two operations satisfy the vector space axioms. The above construction is, in fact, a specific instantiation of the abstract vector space $\mathcal Q$ of physical quantities, but it suffices to take one specific example to show that this works.
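As a toy illustration, here is a minimal Python sketch of this 8-tuple construction (the class and variable names are my own, not a real library): multiplication of quantities plays the role of vector addition, and exponentiation by a rational plays the role of scalar multiplication.

```python
from fractions import Fraction

class Quantity:
    """A positive physical quantity: a value in (0, inf) together with
    seven rational exponents for (kg, m, s, K, A, mol, cd)."""
    def __init__(self, value, dims):
        if value <= 0:
            # zero is excluded: it has no multiplicative inverse
            raise ValueError("value must be positive")
        self.value = value
        self.dims = tuple(Fraction(d) for d in dims)

    def __mul__(self, other):
        # multiplication plays the role of vector addition
        return Quantity(self.value * other.value,
                        [a + b for a, b in zip(self.dims, other.dims)])

    def __pow__(self, r):
        # exponentiation by r in Q plays the role of scalar multiplication
        r = Fraction(r)
        return Quantity(self.value ** float(r), [r * d for d in self.dims])

    def inverse(self):
        return self ** -1

metre = Quantity(1, [0, 1, 0, 0, 0, 0, 0])          # 1 m
three_metres = Quantity(3, [0, 1, 0, 0, 0, 0, 0])   # 3 m
ratio = three_metres * metre.inverse()              # dimensionless 3
print(ratio.value, ratio.dims)
```

Note that `Quantity(0, ...)` raises an error by construction, which is exactly the point made above: zero cannot be a member of this multiplicative vector space.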
As an aside, the choice for $\mathbb Q$ as the scalar field is because (a) it's essential for the vector space structure, and (b) it's still somewhat reasonable to have things like $\mathrm {m}^{-3/2}$ (e.g. the units of a wavefunction). On the other hand, things like $\mathrm {m}^{\pi}$ cannot be made to make sense.
The dimensionality map $[\,·\,]$ is, first and foremost, an equivalence relation, that of commensurability. That is, we say that for $p,p'\in\mathcal Q$, $$[p]=[p']\Leftrightarrow p/p'=(x,0,0,0,0,0,0,0)\text{ for some }x\in\mathbb R^\times.$$This is in fact all you really need to do dimensional analysis, as I argued here, but it's still useful to go on for a bit.
The really useful vector space, if you want to do dimensional analysis, is the vector space of physical dimensions: the space of physical quantities once we forget about their numerical value. This is the quotient space of $\mathcal Q$ over the commensurability equivalence relation:$$\mathcal D=\mathcal Q/[\,·\,]=\{[p]\,:\,p\in\mathcal Q\}.$$(Here I've abused notation slightly to make $[p]$ the equivalence class of $p$, i.e. the set of all physical quantities commensurable to $p$.) The vector space of physical dimensions, $\mathcal D$, has the same operations as in $\mathcal Q$:
$[p][p']=[pp']$, and $[p]^r=[p^r]$.
It is easy to check that these definitions do not depend on the specific representatives $p$ and $p'$, so the operations are well-defined.
Dimensional analysis takes place in $\mathcal D$. From the definitions above, you can prove that the seven base units give rise to a basis $\{[1\,\mathrm {kg}], [1\,\mathrm m],\ldots,[1\,\mathrm{cd}]\}$ for $\mathcal D$. More physically, though,
the seven base units are algebraically independent, which means that they cannot be expressed as multiples of each other, and they are enough to capture the dimensions of all physical quantities.
These are the key physical requirements on a set of base units for the abstract space $\mathcal Q$.
After this, you're all set, really. And it should be clear that there's no way to make zero fit into this scheme at all.
I have a very similar question to the one asked below:
In particular, the setup to my question is essentially the same: In section 5.9 of Weinberg Volume I, it is stated that a massless helicity-1 field must transform like $A_{\mu} \rightarrow A_{\mu} + \partial_{\mu} \Omega$ under a Lorentz transformation. Because of this, when such a field is coupled to matter, it must be coupled to a conserved current, i.e. the action must take the form
$S = S_{photon} + S_{matter} + \int A_{\mu}J^{\mu}$,
in which $S_{photon} = \int (\partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu})^2$, $S_{matter}$ is the free part of the action for the matter fields, and $J^{\mu}$ is some conserved current which comes about from applying the Noether procedure to $S_{matter}$. Because the current is conserved, when the action is varied under a Lorentz transformation one finds
$\delta_{Lorentz}S = \int(\partial_{\mu}\Omega) J^{\mu} = -\int\Omega(\partial_{\mu} J^{\mu}) = 0$.
This gives a concrete motivation for why, at least in the abelian case, we require the action to be gauge invariant--it's needed for the action to be Lorentz invariant.
I am confused as to how (or if) this argument can be generalized to the non-abelian case. I believe the setup should go as follows: First, suppose that we have $n$ species of massless, helicity-1 particles, $A^{a}_{\mu}, a = 1, 2, ..., n$. Momentarily ignore matter which will be included later. Because we require the action to be invariant under $\delta_{Lorentz}$, one can write
$S = S_{photon}$, where now $S_{photon} = \int(\partial_{\mu}A^a_{\nu} - \partial_{\nu}A^a_{\mu})^2$. Now turn on some collection of matter fields transforming under a representation of a group $G$, and include any self-interactions between the gauge fields themselves. Again, because one requires that the total action is invariant under $\delta_{Lorentz}$, the action must take the form
$S = S_{photon} + S_{matter} + \int A^a_{\mu}J^{a\mu}[A, \phi]$, in which I have emphasized the fact that $J^a_{\mu}$ may depend on the gauge fields and the matter fields. I believe $J$ should be interpreted as the Noether current corresponding to the global symmetry of the group G on the total action. With this information, one can show that under a global transformation $\delta_G$ the current transforms in the adjoint representation. Moreover, if one assumes the coupling term itself is invariant under $\delta_G$, one can show that the gauge fields also transform in the adjoint representation. At this point, however, I am stuck. I believe the next step should be to show that somehow, the statement $\delta_{Lorentz}S = 0$ is equivalent to $\delta^{local}_G S = 0$. By $\delta^{local}_G$, I mean a local gauge transformation under the group $G$. If this is true, then I could see how requiring non-abelian gauge invariance is necessary (and sufficient) for having Lorentz invariance. The main points to my question are as follows:
1) Is the statement that $\delta_{Lorentz}S = 0$ is equivalent to $\delta^{local}_G S = 0$ even true?
2) If it is true, is there a simple way to see it from the setup above, or does anyone know of a source which fleshes out the connection? I can find many sources which talk about the connection between Lorentz invariance, matter fields transforming under representations of abelian groups, and gauge invariance. However, I cannot find any sources which generalize the argument to the case in which matter fields transform under representations of non-abelian groups.
EDIT
3) I have also not been able to show that the Yang-Mills Lagrangian without matter is even Lorentz invariant, i.e. invariant under the $A^a_{\mu} \rightarrow A^a_{\mu} + \partial_{\mu} \Omega^a.$ I thought maybe there was some $U(1)$ subgroup of $G$ so that the above transformation is a special case of the general gauge group transformation, but I haven't been able to make that work. I believe this to be my most significant point of confusion, because it seems as though the YM Lagrangian is not Lorentz Invariant. This is somewhat addressed in the answer to the question linked above, in which the author affirms that in the non-abelian case the operators do indeed acquire the extra divergence term under Lorentz transformations. However, there is no justification for why the YM Lagrangian is invariant under this transformation. If someone has any ideas for why this is true, I could be satisfied in seeing everything to do with the group $G$ as just extra structure. |
Scaling factor a(t) and Hubble's Parameter H(t)
Shortly after the precise quantitative predictions of Einstein’s general relativity concerning the precession of Mercury’s perihelion and the deflection angle of rays of light passing the Sun, Einstein moved beyond investigations of the solar system and applied general relativity to the entire Universe. He wondered what the effects of gravity would be due to all the masses in the Universe. This might seem like an impossible task, but Einstein greatly simplified matters by assuming that the distribution of all the matter in the Universe was spatially uniform. He called this assumption the
cosmological principle. This means that the distribution of all mass throughout space is homogeneous and isotropic. If the mass distribution is homogeneous, then if you draw a line in any direction which extends throughout all of space, all of the mass distribution along that line will be equally spaced; isotropy means that the distribution of mass is the same in all directions. If a distribution of mass is both homogeneous and isotropic then it is equally spaced and the same in all directions. Later observations (in particular the Cosmic Microwave Background Radiation) proved that on the scale of hundreds of millions of light-years across space, the distribution of galaxies is very nearly (up to very minuscule non-uniformities) homogeneous and isotropic; thus on this scale the cosmological principle is a reasonable idealization.
Figure 1
Imagine that we draw a line through our galaxy that extends across space for hundreds of millions of light-years. Let’s label this line with equally spaced points which have fixed coordinate values \(x^1\). Imagine that embedded and attached to those points are point-masses (each having a mass \(m\)) which we can think of as galaxies. If we stretch or contract this line, the point-masses (galaxies) will either move away from or towards one another. The coordinate value \(x^1\) of each mass does not change since as we stretch the line, the point embedded in the line and the galaxy remain “overlapping each other.” We shall, for simplicity, consider our galaxy to be located at the origin of the coordinate system at \(x^1 = 0\) although (as we will soon see) the choice of the origin is completely arbitrary. We define the distance between galaxies on this line to be \(D\equiv{a(t)∆x^1}\) where, based on this definition, the scaling factor \(a(t)\) is the distance \(D=a(t)\cdot1=a(t)\) between two galaxies separated by \(∆x^1=1\). (I repeat, the coordinate value \(x^1\) of each galaxy doesn’t change and, therefore, the “coordinate separation” \(∆x^1\) between galaxies doesn’t change.) We will assume that the masses along this line are
homogeneously distributed, which just means that all of the masses are, at all times \(t\), equally spaced. In other words, at all times \(t\), the distance between any two galaxies on the line separated by \(∆x^1=1\) is \(D=a(t)(x^1 - x^1_0) = a(t)\) (where \(∆x^1=(x^1 - x^1_0)=1\)), whatever the coordinates \(x^1\) and \(x^1_0\); this is just a mathematically precise way of saying that the distance \(D=a(t)\) between two galaxies separated by “one coordinate unit” doesn’t depend on where we are on the line (\(x^1\) and \(x^1_0\) could be anything, the distance will still be the same.)
Let’s draw another line (at a right angle to the first) through our galaxy which, also, extends for hundreds of millions of light-years across space. Let’s also label this line with equally spaced points where galaxies of mass \(m\) sit on. We will also assume that the distribution of masses along this line is
homogeneous (meaning they are all equally spaced) and that the spacing between these points is the same as the spacing between the points on the other line (which means that the total mass distribution along both lines is isotropic). Isotropic just means that the distribution of mass is the same in all directions. The equation \(D=a(t)∆x^2\) is the distance \(D\) between two galaxies on the vertical line drawn in the picture. We can find the distance \(D\) between two galaxies with coordinates \((x^1_0, x^2_0)\) and \((x^1, x^2)\) using the Pythagorean Theorem. Their separation distance \(D_{x^1}\) along the horizontal line is \(D_{x^1}=a(t)∆x^1\) and their separation distance \(D_{x^2}\) along the vertical line is \(D_{x^2}=a(t)∆x^2\). Using the Pythagorean Theorem, we see that \(D=\sqrt{(D_{x^1})^2+(D_{x^2})^2}\). To make this equation more compact, let \(∆r=\sqrt{(∆x^1)^2 +(∆x^2)^2}\) which we can think of as the “coordinate separation distance” which doesn’t change. Then we can write the distance as \(D=a(t)∆r\).
If we drew a third line going through our galaxy (at right angles to the two other lines), we could find the distance between two points in space with coordinates \(x^i_0 = (x^1_0, x^2_0, x^3_0)\) and \(x^i=(x^1, x^2, x^3)\), using the Pythagorean Theorem in three dimensions, to be
$$D=a(t)\sqrt{(∆x^1)^2 + (∆x^2)^2 + (∆x^3)^2}.\tag{1}$$
Equation (1) gives us the distance \(D\) between any two points with coordinates \(x^i_0\) and \(x^i\). Since the galaxies always have fixed coordinate values, we can simply view equation (1) as the distance between any two galaxies in space. (Later on, we will come up with a “particles in the box” model where, in general, the particles will not have fixed coordinate values and it will be more useful to think of equation as the distance between coordinate points.)
Although the coordinate separation \(∆r\) between galaxies does not change, because (in general) the space can be expanding or contracting, the scaling factor \(a(t)\) (the distance \(D\) between “neighboring galaxies” whose coordinate separation is \(∆r=1\)) will vary with time \(t\) (where \(t\) is the time measured by an ideal clock which is at rest with respect to our galaxy’s reference frame). (We shall see later on that the FRW equation determines how \(a(t)\) changes with \(t\) based on the energy density \(ρ\) at each point in space and the value of \(κ\).) Since \(a(t)\) is changing with time, it follows that the distance \(D=a(t)∆r\) between any two galaxies is also changing with time. For example, the distance \(D\) between our galaxy and other, far off galaxies is actually growing with time \(t\). The fact that the distance \(D\) between any two galaxies is changing with time according to the scaling factor \(a(t)\) means that there must be some relative velocity \(V\) between those two galaxies as their separation distance increases with time. To find the relative velocity \(V\) between any two galaxies, we take the time rate-of-change of their separation distance \(D\) to obtain \(V=dD/dt \). \(∆r\) is just a constant and the scaling factor \(a(t)\) is some function of time; thus the derivative is
$$V=\frac{dD}{dt}=\frac{d}{dt}(a(t)∆r)=∆r\frac{d}{dt}(a(t)).$$
Let’s multiply the right-hand side of the equation by \(a(t)/a(t)\) to get
$$V=a(t)∆r\frac{d/dt(a(t))}{a(t)}.$$
\(a(t)∆r\) is just the distance \(D\) between the two galaxies moving away at a relative velocity \(V\); thus,
$$V=D\frac{d/dt(a(t))}{a(t)}.$$
The term \(\frac{da(t)/dt}{a(t)}\) is called Hubble’s parameter, which is represented by \(H(t)\):
$$H(t)=\frac{da(t)/dt}{a(t)}.\tag{2}$$
Substituting Hubble's parameter for \(\frac{da(t)/dt}{a(t)}\), we get
$$V=H(t)D.\tag{3}$$
The value of Hubble’s parameter at our present time is called Hubble’s constant and is represented by \(H(\text{today})=H_0\). Thus, at our present time, the recessional velocity between any two galaxies is given by
$$V=H_0D.\tag{4}$$
and the value of Hubble’s constant has been measured to be
$$H_0≈70\text{ km/s/Mpc}≈2.3\times10^{-18}\text{ s}^{-1}.\tag{5}$$
Since \(H_0\) is a positive constant, this tells us that (at \(t=today\), not later times, because \(H(t)\) varies with time) the farther away a galaxy or object is from us (our galaxy), the faster it’s moving away. The bigger \(D\) is, the bigger \(V\) is.
By substituting Equation (5) into Equation (4) and by measuring the separation distance \(D\) between any two galaxies, we can use Equation (4) to calculate the relative, recessional speeds between those galaxies—today. To determine \(V\) as a function of time, you must compute \(a(t)\) from the FRW equation, then substitute \(a(t)\) into Equation (3); but this will be discussed later on. By substituting sufficiently big values of \(D\) (namely, values which are tens of billions of light-years) into Equation (4), one will discover that it is possible for two galaxies to recede away from one another at speeds exceeding that of light. This, however, does not violate the special theory of relativity which restricts the speeds of massive objects through space to being less than that of light. This is because it is space itself which is expanding faster than the speed of light and general relativity places no limit on how rapidly space or spacetime can expand or contract.
It might seem unintuitive, but the two coordinate points \(x^i_0\) and \(x^i\) are not actually moving through space at all. Of course, the galaxies do have some motion and velocity through space; but it is a useful idealization and approximation to assume that they are "attached" to the coordinate points and not moving through space at all. Sir Arthur Eddington’s favorite analogy for this was an expanding balloon with two points drawn on its surface. As the balloon expands, the points are indeed moving away from one another; but those points are not actually moving across the space (which in this example is the surface \(S^2\)).
Age of the universe
We can use Hubble’s Law to come up with a rough estimate of the age of the universe. If all of the galaxies are moving away from one another then that means that yesterday they must have been closer to one another—and a week ago even closer. If you keep running the clock back far enough, then at some time all of the galaxies and matter in the universe must have been on top of each other. Let’s assume that during that entire time interval (which we’ll call \(t_{\text{age of the universe}}\)) the recessional velocity \(V\) of every galaxy is exactly proportional to \(D\) (which, empirically, is very close to being true). Then it follows that the ratio \(D/V = 1/H\) is the same for every galaxy. Since \(1/H\) stays the same, it follows that \(1/H = 1/H_0\). Let’s also assume that during the entire history of the universe the velocity \(V\) of every galaxy remained constant. Then, according to kinematics, the time \(t_{\text{age of the universe}}\) that it took for every galaxy to go from being on top of one another (when \(D=0\)) to being where they are today is given by the equation \(t_{\text{age of the universe}}=D/V=1/H_0≈\text{14 billion years}\). (When this calculation was first performed it gave an estimate for the age of the universe of only about 1.8 billion years. Although Hubble correctly measured the recessional velocities of the galaxies, his distance measurements were off by about a factor of ten. Later astronomers corrected his distance measurements.) To come up with a more accurate age of the universe we have to account for the acceleration/deceleration of the galaxies. When we do this we are able to obtain the more accurate estimate which is given by \(t_{\text{age of the universe}}≈\text{13.8 billion years}\). |
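The estimate \(t\approx 1/H_0\) is easy to reproduce numerically. The sketch below assumes \(H_0 = 70\text{ km/s/Mpc}\) (a commonly quoted modern value) and converts it to inverse seconds before inverting:

```python
# Rough age of the universe as t = 1/H0, assuming H0 = 70 km/s/Mpc.
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0 = 70 / KM_PER_MPC        # Hubble constant in units of 1/s
age_years = (1 / H0) / SECONDS_PER_YEAR
print(f"1/H0 = {age_years / 1e9:.1f} billion years")  # ≈ 14.0
```

This reproduces the "about 14 billion years" figure quoted above; plugging in Hubble's original value of roughly 500 km/s/Mpc instead gives the famously too-short 2-billion-year estimate.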
1. Homework Statement
These are the exact words:
A player is ready to throw a ball with max. possible velocity of 15 m/s from a height of 1.5 m above the ground. He wants to hit the wall 16 m away from him at its highest point. If the ceiling height of the room is 8 m, find the angle of projection of the ball and the height of the point where the ball hits the wall.
2. Homework Equations
v=u+at
s=ut+0.5at^2
v^2=u^2+2as
3. The Attempt at a Solution
The attached file is the diagram.
To find α, I did this:
##15\cos(\alpha)=\frac{16}{t}##
##0=15\sin(\alpha)-9.81t##
## \therefore 15\sin(\alpha)=\frac{9.81\times16}{15\cos(\alpha)} ##
## \therefore 15^2\sin(\alpha)\cos(\alpha)=9.81\times16 ##
## \therefore \sin(2\alpha)=\frac{2\times9.81\times16}{15^2} ##
## \therefore \sin(2\alpha)=1.3952 > 1 ## <-- WHAT
Did I do something wrong?
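For what it's worth, the numbers can be checked directly. The snippet below (assuming \(g = 9.81\ \mathrm{m/s^2}\)) confirms that no real angle exists: the apex of the trajectory sits at horizontal distance \(v^2\sin(2\alpha)/(2g)\), which is at most \(v^2/(2g)\), short of the 16 m wall.

```python
import math

v, g, d = 15.0, 9.81, 16.0

# Apex of the trajectory is at horizontal distance x = v^2 sin(2a) / (2g),
# so demanding the apex land at d gives sin(2a) = 2 g d / v^2.
s = 2 * g * d / v**2
print(f"sin(2a) = {s:.4f}")             # ≈ 1.395 > 1, so no angle works

# Farthest possible apex, reached at a = 45 degrees:
x_max = v**2 / (2 * g)
print(f"farthest apex: {x_max:.2f} m")  # ≈ 11.47 m < 16 m
```

So the algebra in the post is correct: with a maximum launch speed of 15 m/s the ball simply cannot peak 16 m downrange, which is exactly why \(\sin(2\alpha) > 1\) appears.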
A simple counting argument shows most strings can't be compressed to shorter strings. But, compression is usually defined using Kolmogorov complexity. A string is compressible if its Kolmogorov complexity is less than its length, $K(s) < |s|$. The Kolmogorov complexity of a string is defined as the size of the smallest Turing machine (TM) that writes the string and halts when given a blank tape. I want to define the size of a TM as the number of states the machine has. I want to use a simple type of TM known as a Busy Beaver. BB's use two symbols, 0 and 1. A BB blank tape is initialized to all 0's and is unbounded in both directions.
Let the Kolmogorov complexity of natural number $n$, $K(n)$, be the number of states in the smallest BB that writes $n$ 1's and halts when given a blank tape. I use this definition so I can use known results about busy beavers, particularly Rado's sigma function, $\Sigma(n)$. $\Sigma(n)$ is defined as the maximum number of 1's an n-state BB can write and halt when given a blank tape. It is known $\Sigma(n)$ grows faster than any recursive function. This shows there is no recursive limit on how much a natural number can be compressed.
$\Sigma(2)=4$ so $K(4)=2$. This means 4 is a compressible number. This is the definition of the BB:
A0: 1RB / B0: 1LA
A1: 1LB / B1: 1RHalt
"A0: 1RB" means in state A on input 0 write 1, move one position right, and switch to state B. We can "extend" this machine to create new machines that compress natural numbers. To extend this machine replace the halt state with a new state. On input 0 this state writes 1 and halts. On input 1 it writes 1, moves one position right, and stays in the same state.
A0: 1RB / B0: 1LA / C0: 1RHalt
A1: 1LB / B1: 1RC / C1: 1RC
This 3 state machine writes 5 1's and halts. This proves $K(5) \leq 3$ and 5 is a compressible number. In fact, all numbers greater than 3 are compressible by at least 2 because we can write the first 4 1's using only 2 states. $\forall n(n>3 \rightarrow K(n) \leq n-2)$
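A small simulator makes these claims easy to verify. The sketch below hardcodes the transition table quoted above and counts the 1's left on the tape when the machine halts:

```python
def run_tm(table, start="A", halt="H"):
    """Simulate a two-symbol Turing machine on an all-0 tape.
    table maps (state, symbol) -> (write, move, next_state)."""
    tape, pos, state, steps = {}, 0, start, 0
    while state != halt:
        sym = tape.get(pos, 0)
        write, move, state = table[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
        assert steps < 10_000, "runaway machine"
    return sum(tape.values())  # number of 1's on the tape

# The 3-state extension from the text: B1 now goes to C instead of halting.
table = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "C"),
    ("C", 0): (1, "R", "H"), ("C", 1): (1, "R", "C"),
}
print(run_tm(table))  # 5
```

Running the original 2-state machine (with B1 halting) through the same simulator yields 4 ones, matching $\Sigma(2)=4$.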
We know natural number $n$ can be represented as a binary number with $\log_2(n)+1$ bits. Let a number be super compressible if $K(n) < \log_2(n)+1$. We see 4 is super compressible because $K(4)=2 < \log_2(4)+1 = 3$. It is known $\Sigma(5) \ge 4098$. This means $K(4098) = 5$ and 4098 is super compressible. By extending this 5-state machine we can show 4099 through 4107 are also super compressible. In general, the range $\Sigma(n)$ through $\Sigma(n)+\log_2(\Sigma(n))-n$ will be super compressible. For example, it is known $\Sigma(6) \ge 3.515 \times 10^{18267}$. This proves the range $3.515 \times 10^{18267}$ through $3.515 \times 10^{18267} + \log_2(10^{18267})$ is super compressible.
Are all large enough natural numbers super compressible? If not, why not?
Note there are lots of machines besides the ones defined by $\Sigma(n)$. There are probably lots of halting 5-state machines that write more than $2^5$ 1's and less than 4098 1's. All such machines define ranges of super compressible numbers. I previously asked a similar question without getting an adequate answer.
Speaker
Mr. Giovanni Benato (University of Zurich)
Description
The search for neutrinoless double beta decay ($0\nu\beta\beta$) has played a major role in astroparticle physics for two decades. The discovery of this process would demonstrate the violation of lepton number conservation and the presence of a Majorana term in the neutrino mass. The GERmanium Detector Array ({\sc{gerda}}) experiment, located at the Gran Sasso underground laboratory in Italy, is one of the leading experiments for the search of $0\nu\beta\beta$ decay in $^{76}$Ge. The first data taking (Phase I) took place between November 2011 and June 2013. With a $\approx 21~\rm kg \cdot yr$ exposure and a background index (BI) at {{$\text{Q}_{\beta\beta}$}}~of $1.1\cdot10^{-2}$~{{$\rm cts/(kg \cdot yr \cdot keV)$}} after pulse shape discrimination, {\sc{gerda}}~Phase I set a limit on the $0\nu\beta\beta$ decay half life of $T_{1/2}^{0\nu} > 10^{25}~\rm yr~(90\%\mathrm{C.L.})$. The setup is now being upgraded for the Phase II of the experiment. A final sensitivity on $0\nu\beta\beta$ decay half life up to $2\cdot10^{26}$~yr can be obtained with an exposure of $100~\rm kg \cdot yr$ and a BI of $10^{-3}$~{{$\rm cts/(kg \cdot yr \cdot keV)$}}. The main strategies for reaching this are the use of the newly developed Broad Energy Germanium detectors (BEGe), with enhanced energy resolution and pulse shape discrimination capabilities, and the installation of an active veto in the liquid argon surrounding the germanium crystals for the recognition of external background events. In this talk, a review of the {\sc{gerda}}~Phase I results will be given, followed by a report on the ongoing operations for the preparation of Phase II.
Primary author
Mr. Giovanni Benato (University of Zurich)
Differential Topology Lecture 1 Notes
This is the first of a series of lecture notes. I am going to try to keep this up so that it forces me to review the notes I have taken in lectures. Unfortunately, I cannot guarantee these will be any good.
Highlights of the Course
The first interesting theorem we will cover is Whitney’s Theorem: Every $n$-dimensional smooth manifold can be embedded in $\mathbb{R}^{2n+1}$.
An embedding is an injective immersion that is a homeomorphism onto its image; in particular, it has no self-intersections.
Jordan Curve Theorem: Every simple closed curve in $\mathbb{R}^2$ splits $\mathbb{R}^2$ into two connected components.
The Jordan-Brouwer Theorem is the generalization: a compact connected $(n-1)$-dimensional submanifold splits $\mathbb{R}^n$ into two components.
Euler Characteristic of a manifold $M$, $\chi(M)$ ($\chi = V - E + F$). For example $\chi(\mathbb{R}) = 1$, $\chi(S^2) = 2$, $\chi(T^2) = 0$.
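As a quick check of $\chi(T^2) = 0$, use the standard CW structure on the torus coming from the square with opposite sides identified (one vertex, two edges, one face):

```latex
% Square with opposite sides identified: the four corners glue to one
% vertex, the four sides glue into two edges, and there is one 2-cell.
\chi(T^2) = V - E + F = 1 - 2 + 1 = 0
```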
Hedgehog Theorem says $S^2$ cannot be combed. This is the same as saying there is always a still point or there is always a point with no wind on earth.
A compact manifold $M$ can be combed if and only if $\chi(M) = 0$, e.g. $T^2$ can be combed.
Fixed point problems. For $f : M \to M$ a smooth deformation of the identity, if $\chi(M) \neq 0$, then $f$ has at least one fixed point. (A circle rotated by $\theta$ has no fixed points, consistent with $\chi(S^1) = 0$.)
Topological Manifolds
Definition. An $n$-dimensional topological manifold, $M$, is a topological space such that: $M$ is Hausdorff, $M$ is second-countable, $M$ is locally euclidean.
Definition. A chart on $M$ is a pair $(U,f)$ with $U \subseteq M$ open, such that $f : U \to V \subseteq \mathbb{R}^n$ is a homeomorphism and $V$ is open.
Example. Two rays in $\mathbb{R}^2$ shooting out from the origin. Let $f$ be the projection of these two lines to $\mathbb{R}$.
Examples of spaces that are not manifolds.
Properties of Topological Manifolds
Locally euclidean $\implies$ locally path connected; hence every manifold is a disjoint union of path-connected manifolds (its path components).
Topologist’s Sine Curve is connected but not path connected.
They are locally compact.
They are paracompact.
Definition: $M$ is locally compact iff $\forall x \in M$ there exist a compact set $C \subseteq M$ and an open neighborhood $N(x)$ of $x$ with $N(x) \subseteq C$.
Definition: A cover $\mathscr{B}$ of a space $M$ is a refinement of $\mathscr{A}$ iff $\forall B \in \mathscr{B}$, $\exists A \in \mathscr{A}$ with $B \subseteq A$.
Definition: A cover $\mathscr{B}$ of $M$ is locally finite iff $\forall x \in M$ there is an open set $U \ni x$ such that only finitely many members of $\mathscr{B}$ intersect $U$.
Definition: $M$ is paracompact iff every open cover $\mathscr{A}$ has a locally finite refinement.
When using Gaussian processes, the covariance matrix $\mathbf{\Sigma}$ is often defined via a covariance function $K$ as follows $$ \mathbf{\Sigma}_{ij} = K(\underline{x}_i, \underline{x}_j) $$ where $\underline{x}_i, \underline{x}_j$ are coordinates of two points in some space of interest, and belong to some finite set of $n$ such points, resulting in a $n \times n$ covariance matrix.
Is it known whether the Gaussian (or 'squared exponential') covariance function$$K(\underline{x}_i, \underline{x}_j) = a^2 \exp{\left(-\frac{(\underline{x}_i - \underline{x}_j)^{\top}(\underline{x}_i - \underline{x}_j)}{2l^2}\right)}$$will
always produce a positive-definite $\mathbf{\Sigma}$ for any chosen set of points and values of $a, l$?
Thanks! |
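Not part of the original question, but the claim is easy to probe numerically. Below is a minimal pure-Python sketch (the point set and hyperparameters are arbitrary choices of mine) that builds $\mathbf{\Sigma}$ from the squared-exponential kernel and runs a Cholesky factorization, which succeeds exactly when the symmetric matrix is positive definite.

```python
# Sketch: build a squared-exponential covariance matrix for a few distinct
# 1-D points and confirm positive definiteness via Cholesky factorization.
import math

def se_kernel(xi, xj, a=1.0, l=0.5):
    # K(x_i, x_j) = a^2 exp(-(x_i - x_j)^2 / (2 l^2))
    return a**2 * math.exp(-((xi - xj)**2) / (2 * l**2))

def cholesky(M):
    """Return lower-triangular L with L L^T = M; raise if M is not PD."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = M[i][i] - s
                if d <= 0:
                    raise ValueError("matrix is not positive definite")
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

pts = [0.0, 0.3, 1.1, 2.5]
Sigma = [[se_kernel(xi, xj) for xj in pts] for xi in pts]
L = cholesky(Sigma)  # succeeds: Sigma is positive definite here
```

For distinct points the factorization goes through; if a point is duplicated the matrix is only positive semi-definite, and the factorization hits a zero pivot.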
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake; it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 mainly has the grad lounge
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to be becoming a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it is multiplied by a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue via the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot assume that for the field you are constructing, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
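To complement the discussion, here is a quick sketch (mine, not from the chat) that checks the stated multiplication rule associates on concrete rational triples, with an element $a+b\sqrt{\delta}$ stored as the pair $(a,b)$. This is only a spot check, not the general proof being discussed.

```python
# Sketch: spot-check associativity of the stated multiplication rule
# on Q(sqrt(delta)), with a + b*sqrt(delta) represented as (a, b).
from fractions import Fraction as F

def mul(p, q, delta):
    a, b = p
    c, d = q
    # (a + b*sqrt(d))(c + d'*sqrt(d)) = (ac + bd'*delta) + (bc + ad')*sqrt(d)
    return (a * c + b * d * delta, b * c + a * d)

delta = F(7)
alpha = (F(1), F(2))
beta = (F(-3, 2), F(5))
gamma = (F(4), F(-1, 3))

left = mul(mul(alpha, beta, delta), gamma, delta)
right = mul(alpha, mul(beta, gamma, delta), delta)
assert left == right  # associativity holds on this triple
```

Using `Fraction` keeps the arithmetic exact, so a passing check is not a floating-point accident.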
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need machinery like Lie algebra theory and morphisms to make sense of them
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest ordered field
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
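As quoted, the problem asks for "the number of each item", i.e. the unbounded variant. A standard one-dimensional dynamic-programming sketch (my addition; the example items are made up) looks like this:

```python
# Sketch: unbounded knapsack ("the number of each item" may be any integer)
# solved with a 1-D dynamic program over capacities.
def unbounded_knapsack(capacity, items):
    """items: list of (weight, value) pairs; returns max value within capacity."""
    dp = [0] * (capacity + 1)  # dp[w] = best value using total weight <= w
    for w in range(1, capacity + 1):
        for wi, vi in items:
            if wi <= w:
                dp[w] = max(dp[w], dp[w - wi] + vi)
    return dp[capacity]

best = unbounded_knapsack(10, [(4, 10), (3, 7), (5, 12)])  # two weight-5 items: 24
```

The 0/1 variant (each item usable at most once) needs the inner loop order reversed or a 2-D table, but the recurrence is the same idea.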
Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
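The partial sums in question can be computed exactly; here is a small sketch (my addition, using $b=10$, i.e. Liouville's classical constant) showing the monotone growth and the super-fast decay of the increments that the argument above leans on.

```python
# Sketch: exact partial sums S_M = sum_{k=1}^{M} 1/b^{k!} with rationals.
from fractions import Fraction
from math import factorial

def partial_sum(M, b=10):
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

sums = [partial_sum(M) for M in range(1, 5)]
# The sequence is monotonically increasing...
assert all(s < t for s, t in zip(sums, sums[1:]))
# ...and each increment is 1/b^{(M+1)!}, shrinking super-geometrically,
# which is what drives both the convergence claim and the Liouville-style
# rational approximations used in the transcendence proof.
```

Each $S_M$ is itself a rational approximation $p/q$ with $q = b^{M!}$, which is exactly the shape of approximation the Liouville criterion asks for.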
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
Q1) $\delta \mathcal L$ is zero for a global symmetry transformation, for example: assume that $\phi$ is a field that transforms under the fundamental representation of $SU(N)$, $\phi \mapsto U \phi$, then the Lagrangian for this field can look like:$$ \partial_m \phi^\dagger \partial^m \phi + m^2 \phi^\dagger \phi + \lambda (\phi^\dagger \phi)^2 + \cdots\,.$$The important thing here is that all these terms are
invariant under an $\mathrm{SU}(N)$ transformation, $\phi^\dagger \phi \mapsto \phi^\dagger U^\dagger U \phi = \phi^\dagger \phi$. This means that the Lagrangian is invariant under an $\mathrm{SU}(N)$ transformation, i.e., $\delta \mathcal{L} = 0$.
This happens because the Lagrangian, by definition, is a singlet under all global symmetry groups, and singlets don't change under a transformation.
This would not be true for space-time transformations. If $x^m \mapsto x^m + v^m$ is an infinitesimal space-time transformation then any space-time scalar, including the Lagrangian, will transform like $f(x) \mapsto f(x) - v^m \partial_m f(x)$.
Q2) Variation of the Lagrangian $\mathcal L(\phi, \partial_m \phi)$ can always be written as:$$ \delta \mathcal L = \left(\frac{\partial \mathcal L}{\partial \phi} - \partial_m \frac{\partial \mathcal L}{\partial(\partial_m \phi)} \right) \delta \phi + \partial_m \left(\frac{\partial \mathcal L}{\partial(\partial_m\phi)} \delta\phi \right) \,. \tag{*} $$Note that, the first pair of braces contain the equation of motion. And the equation of motion is, by definition, the functional derivative of the action with respect to the field (so that, extremizing action $\Rightarrow$ functional derivative with respect to field vanishes $\Rightarrow$ equation of motion satisfied). Therefore, you can write the variation of the Lagrangian as:$$ \delta \mathcal L = \frac{\delta S}{\delta \phi} \delta \phi + \partial_m \left(\frac{\partial \mathcal L}{\partial(\partial_m\phi)} \delta\phi \right) \,. $$This is what is written on page 202 of the pdf you linked, with the identification:$$ j^m = \frac{\partial \mathcal L}{\partial(\partial_m\phi)} \delta\phi \,. $$Now, it is important that this $j^m$ is required to be divergenceless
only on shell. For example, in the first question we saw that under a transformation generated by a global symmetry group $\delta\mathcal L$ is zero. The first term of the equation (*) vanishes only when the equation of motion is satisfied, so it is only when the equation of motion is satisfied that we require $\partial_m j^m = 0$. For more general symmetry transformations, the definition of symmetry is that the variation of the Lagrangian be a total derivative, $\delta \mathcal L = \partial_m F^m$ on shell. Since the first term of (*) vanishes on shell, we have to require for symmetry that, on shell, the following must be true:$$ \partial_m F^m = \partial_m j^m \Rightarrow \partial_m (j^m - F^m) = 0\,.$$So the general definition of current is indeed what you wrote:$$ J^m := j^m - F^m\,. $$But this is divergenceless only when the equation of motion is satisfied. |
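As a concrete sketch of the recipe in Q2 (my addition; the complex scalar and its $U(1)$ symmetry are an assumed example, not taken from the original answer):

```latex
% Assumed example: complex scalar with a global U(1) symmetry,
% matching the sign conventions of the Lagrangian in Q1.
\mathcal{L} = \partial_m \phi^\dagger\, \partial^m \phi + m^2 \phi^\dagger \phi,
\qquad \delta\phi = i\alpha\phi, \quad \delta\phi^\dagger = -i\alpha\phi^\dagger.
% Here \delta\mathcal{L} = 0 exactly, so F^m = 0, and the current is
j^m = \frac{\partial \mathcal{L}}{\partial(\partial_m \phi)}\,\delta\phi
    + \frac{\partial \mathcal{L}}{\partial(\partial_m \phi^\dagger)}\,\delta\phi^\dagger
    = i\alpha \left( \phi\, \partial^m \phi^\dagger - \phi^\dagger\, \partial^m \phi \right),
% which obeys \partial_m j^m = 0 once the field equations hold (on shell).
```

Stripping off the arbitrary constant $\alpha$ gives the usual conserved $U(1)$ current of a complex scalar.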
On Poly-Bernoulli polynomials of the second kind with umbral calculus viewpoint. Advances in Difference Equations, volume 2015, Article number 27 (2015)
Abstract
Poly-Bernoulli polynomials of the second kind were introduced in Kim
et al. (Adv. Differ. Equ. 2014:219, 2014) as a generalization of the Bernoulli polynomial of the second kind. Here we investigate those polynomials and derive further results about them by using umbral calculus.

Introduction
Following Kaneko [1], the poly-Bernoulli polynomials have been studied by many researchers in recent decades. Poly-Bernoulli polynomials \(B_{n}^{(k)}(x)\) were defined as \(\frac {Li_{k}(1-e^{-t})}{1-e^{-t}}e^{xt}=\sum_{n\geq0}B_{n}^{(k)}(x)\frac {t^{n}}{n!}\), where \(Li_{k}(x)=\sum_{r\geq1} \frac{x^{r}}{r^{k}}\) is the classical polylogarithm function, which satisfies \(\frac{d}{dx}Li_{k}(x)=\frac{1}{x}Li_{k-1}(x)\). The poly-Bernoulli polynomials have wide-ranging applications in mathematics and applied mathematics (see [2–4]). For \(k\in \mathbb{Z}\), the poly-Bernoulli polynomials \(b_{n}^{(k)}(x)\) of the second kind are given by the generating function
When \(x=0\), \(b_{n}^{(k)}=b_{n}^{(k)}(0)\) are called the poly-Bernoulli numbers of the second kind. When \(k=1\), \(b_{n}(x)=b_{n}^{(1)}(x)\) are called the Bernoulli polynomial of the second kind (see [5–11]). Poly-Bernoulli polynomials of the second kind were introduced as a generalization of the Bernoulli polynomial of the second kind (see [12]). The aim of this paper is to use umbral calculus to obtain several new and interesting explicit formulas, recurrence relations and identities of poly-Bernoulli polynomials of the second kind. Umbral calculus has been used in numerous problems of mathematics. Umbral techniques have been of use in different areas of physics; for example it is used in group theory and quantum mechanics by Biedenharn
et al. (see [13–15]).
Let Π be the algebra of polynomials in a single variable
x over ℂ and let \(\Pi^{*}\) be the vector space of all linear functionals on Π. We denote the action of a linear functional L on a polynomial \(p(x)\) by \(\langle L|p(x)\rangle\). Define the vector space structure on \(\Pi^{*}\) by \(\langle cL+c'L'|p(x)\rangle=c\langle L|p(x)\rangle+c'\langle L'|p(x)\rangle\), where \(c,c'\in\mathbb{C}\) (see [16–19]). We define the algebra of a formal power series in a single variable t to be
where \(\delta_{n,k}\) is the Kronecker symbol. Let \(f_{L}(t)=\sum_{n\geq 0}\langle L|x^{n}\rangle\frac{t^{n}}{n!}\). By (1.3), we have \(\langle f_{L}(t)|x^{n}\rangle=\langle L|x^{n}\rangle\). Thus, the map \(L\mapsto f_{L}(t)\) is a vector space isomorphism from \(\Pi^{*}\) onto ℋ. Therefore, ℋ is thought of as a set of both formal power series and linear functionals. We call ℋ the
umbral algebra. The umbral calculus is the study of the umbral algebra.
Let \(f(t)\) be a non-zero power series, the
order \(O(f(t))\) is the smallest integer k for which the coefficient of \(t^{k}\) does not vanish. If \(O(f(t))=1\) (respectively, \(O(f(t))=0\)), then \(f(t)\) is called a delta (respectively, an invertible) series. Suppose that \(f(t)\) is a delta series and \(g(t)\) is an invertible series, then there exists a unique sequence \(s_{n}(x)\) of polynomials such that \(\langle g(t)(f(t))^{k}|s_{n}(x)\rangle=n!\delta_{n,k}\), where \(n,k\geq0\). The sequence \(s_{n}(x)\) is called the Sheffer sequence for \((g(t),f(t))\) which is denoted by \(s_{n}(x)\sim(g(t),f(t))\) (see [18, 19]). For \(f(t)\in\mathcal{H}\) and \(p(x)\in\Pi\), we have \(\langle e^{yt}|p(x)\rangle=p(y)\), \(\langle f(t)g(t)|p(x)\rangle =\langle g(t)|f(t)p(x)\rangle\), and \(f(t)=\sum_{n\geq0}\langle f(t)|x^{n}\rangle\frac{t^{n}}{n!}\) and \(p(x)=\sum_{n\geq0}\langle t^{n}|p(x)\rangle\frac{x^{n}}{n!}\) (see [18, 19]). Thus, we obtain \(\langle t^{k}|p(x)\rangle=p^{(k)}(0)\) and \(\langle1|p^{(k)}(x)\rangle =p^{(k)}(0)\), where \(p^{(k)}(0)\) denotes the kth derivative of \(p(x)\) with respect to x at \(x=0\). Therefore, we get \(t^{k}p(x)=p^{(k)}(x)=\frac{d^{k}}{dx^{k}}p(x)\), for all \(k\geq0\) (see [18, 19]). Thus, for \(s_{n}(x)\sim(g(t),f(t))\), we have
for all \(y\in\mathbb{C}\), where \(\bar{f}(t)\) is the compositional inverse of \(f(t)\) (see [18, 19]). For \(s_{n}(x)\sim(g(t),f(t))\) and \(r_{n}(x)\sim(h(t),\ell(t))\), let \(s_{n}(x)=\sum_{k=0}^{n} c_{n,k}r_{k}(x)\), then we have
The aim of the present paper is to present several new identities for the poly-Bernoulli polynomials by the use of umbral calculus.
Explicit expressions
Before proceeding, we observe that
where \(S_{2}(n,k)\) is the Stirling number of the second kind, which is defined by the identity \(x^{n}=\sum_{k=0}^{n}S_{2}(n,k)(x)_{k}\) with \((x)_{0}=1\) and \((x)_{k}=x(x-1)\cdots(x-k+1)\). This shows
Thus,
which implies that
Now, we are ready to present several formulas for the
nth poly-Bernoulli polynomials of the second kind. Theorem 2.1 For all \(n\geq1\), Proof
Since \(x^{n}\sim(1,t)\) and \(\frac{t}{Li_{k}(1-e^{1-e^{t}})}b_{n}^{(k)}(x)\sim (1,e^{t}-1)\) (see (1.6)), we obtain
Thus, by (2.3) we have
which completes the proof. □
Let \(S_{1}(n,k)\) be the Stirling number of the first kind, which is defined by the identity \((x)_{n}=\sum_{j=0}^{n}S_{1}(n,j)x^{j}\). Now, we are ready to present our second explicit formula.
Theorem 2.2 For all \(n\geq0\), Proof
Note that \((x)_{n}=\sum_{j=0}^{n}S_{1}(n,j)x^{j}\sim(1,e^{t}-1)\). So, by (1.6) we have \(\frac{t}{Li_{k}(1-e^{1-e^{t}})}b_{n}^{(k)}(x)\sim(1,e^{t}-1)\), which implies that
For the next explicit formula, we use the conjugation representation, namely (1.5).
Theorem 2.3 For all \(n\geq0\), Proof
If \(j=0\), then \(c_{n,0}=b_{n}^{(k)}\). Thus, assume now that \(1\leq j\leq n\). So
which, by (2.1), implies that
which completes the proof. □
In order to state our next formula, we recall that \(b_{n}(x)=b_{n}^{(1)}(x)\) is the Bernoulli polynomial of the second kind, which is given by the generating function \(\frac{t}{\log(1+t)}(1+t)^{x}=\sum_{n\geq 0}b_{n}(x)\frac{t^{n}}{n!}\).
Theorem 2.4 For all \(n\geq0\), where \(B_{n}^{(k)}(x)\) is the nth poly-Bernoulli polynomial. Proof
From the definitions, we have
Since \(B_{n}^{(k)}(x)\) is the poly-Bernoulli polynomial given by the generating function \(\frac{Li_{k}(1-e^{-t})}{1-e^{-t}} e^{xt}=\sum_{n\geq0}B_{n}^{(k)}(x)\frac {t^{n}}{n!}\), we have \(\frac{Li_{k}(1-e^{-t})}{1-e^{-t}}x^{n}=B_{n}^{(k)}(x)\) and \(\frac{d}{dx}B_{n}^{(k)}(x)=nB_{n-1}^{(k)}(x)\). Thus \(b_{n}^{(k)}(y)=\sum_{\ell=0}^{n}\binom{n}{\ell} b_{\ell}(y) \langle \frac{e^{-t}-1}{-t}|B_{n-\ell}^{(k)}(x) \rangle\). By the fact that \(\langle f(at)|p(x)\rangle=\langle f(t)|p(ax)\rangle\) for constant
a (see Proposition 2.1.11 in [19]), we obtain
Note that \(\langle\frac{e^{t}-1}{t}|B_{n-\ell}^{(k)}(-x) \rangle=\int_{0}^{1}B_{n-\ell}^{(k)}(-u)\,du=\frac{1}{n+1-\ell}(B_{n+1-\ell}^{(k)}-B_{n+1-\ell}^{(k)}(-1))\), which leads to
which completes the proof. □
Theorem 2.5 For all \(n\geq0\), Proof
By using a similar argument as in the proof of Theorem 2.4, we obtain
which, by (2.2), gives
as required. □
Recurrence relations
By (1.6) we have \(b_{n}^{(k)}(x)\sim (\frac {t}{Li_{k}(1-e^{1-e^{t}})},e^{t}-1 )\) with \(P_{n}(x)=\frac {t}{Li_{k}(1-e^{1-e^{t}})}b_{n}^{(k)}(x)=(x)_{n}=x(x-1)\cdots(x-n+1)\sim (1,e^{t}-1)\). Thus,
The aim of this section is to derive recurrence relations for the poly-Bernoulli polynomials of the second kind. As first trivial recurrence, by using the fact that if \(S_{n}(x)\sim(g(t),f(t))\) then \(f(t)S_{n}(x)=nS_{n-1}(x)\), we derive that \((e^{t}-1)b_{n}^{(k)}(x)=nb_{n-1}^{(k)}(x)\), and hence \(b_{n}^{(k)}(x+1)=b_{n}^{(k)}(x)+nb_{n-1}^{(k)}(x)\). Our next results establish other types of recurrence relations.
Theorem 3.1 For all \(n\geq0\), Proof
It is well known that if \(S_{n}(x)\sim(g(t),f(t))\) then \(S_{n+1}(x)=(x-\frac{g'(t)}{g(t)})\frac{1}{f'(t)}S_{n}(x)\). Hence, by (1.6), we have
with
where \(1-\frac {te^{t}e^{1-e^{t}}Li_{k-1}(1-e^{1-e^{t}})}{(1-e^{1-e^{t}})Li_{k}(1-e^{1-e^{t}})}\) has order at least one. Thus, by (2.4), we get
where
and
Thus,
which completes the proof. □
Theorem 3.2 For all \(n\geq0\), \(\frac{d}{dx}b_{n}^{(k)}(x)=n!\sum_{\ell=0}^{n-1}\frac{(-1)^{n-1-\ell }}{\ell!(n-\ell)}b_{\ell}^{(k)}(x)\). Proof
We proceed in the proof by using the fact that if \(S_{n}(x)\sim (g(t),f(t))\) then
By (1.6), we have \(\langle\bar{f}(t)|x^{n-\ell}\rangle=\langle \log(1+t)|x^{n-\ell}\rangle\), which leads to
Thus \(\frac{d}{dx}b_{n}^{(k)}(x)=n!\sum_{\ell=0}^{n-1}\frac{(-1)^{n-1-\ell }}{\ell!(n-\ell)}b_{\ell}^{(k)}(x)\), as required. □
Theorem 3.3 For all \(n\geq1\), Proof
Let \(n\geq1\). Then (1.6), we have
The first term in (3.1) is given by
For the second term in (3.1), we note that
which has order at least zero. So, the second term in (3.1) is given by
Identities
In this section we present some identities related to poly-Bernoulli numbers of the second kind.
Theorem 4.1 For all \(n\geq0\), Proof
We compute \(A=\langle Li_{k}(1-e^{-t})|x^{n+1}\rangle\) in two different ways. On the one hand, by (1.6), it is
On the other hand, by (1.6), it is
By using similar techniques as in the proof of Theorem 4.1 with computing
in two different ways, we obtain the following result (we leave the proof as an exercise to the interested reader).
Theorem 4.2 For all \(n-1\geq m\geq1\),
which leads to the following identity.
Theorem 4.3 For all \(n\geq0\),
Let \(\mathbb{B}_{n}^{(s)}(x)\) be the
nth Bernoulli polynomial of order s. Then \(\mathbb{B}_{n}^{(s)}(x)\sim(((e^{t}-1)/t)^{s},t)\). Also, the Bernoulli numbers of the second kind of order s are given by \(\frac{t^{s}}{\log^{s}(1+t)}=\sum_{j\geq0}\mathbf{b}_{j}^{(s)}\frac{t^{j}}{j!}\) and let \(b_{n}^{(k)}(x)=\sum_{m=0}^{n} c_{n,m}\mathbb{B}_{m}^{(s)}(x)\). By (1.5) and (1.6), we obtain
which gives the following identity.
Theorem 4.4 For all \(n\geq0\),
Define \(H_{n}^{(s)}(\lambda,x)\) to be the
nth Frobenius-Euler polynomials of order s. Note that these polynomial satisfy \(H_{n}^{(s)}(\lambda,x)\sim(((e^{t}-\lambda)/(1-\lambda))^{s},t)\). Let \(b_{n}^{(k)}(x)=\sum_{m=0}^{n} c_{n,m}H_{m}^{(s)}(\lambda,x)\). By (1.5) and (1.6), we obtain
which gives the following identity.
Theorem 4.5 For all \(n\geq0\),

References

1. Kaneko, M: Poly-Bernoulli numbers. J. Théor. Nr. Bordx. 9, 221-228 (1997)
2. Kim, DS, Kim, T, Lee, S-H: Poly-Cauchy numbers and polynomials with umbral calculus viewpoint. Int. J. Math. Anal. 7(45-48), 2235-2253 (2013)
3. Kim, T: Higher-order Cauchy of the second kind and poly-Cauchy of the second kind mixed type polynomials. Ars Comb. 115, 435-451 (2014)
4. Zhao, F-Z: Some results for generalized Cauchy numbers. Util. Math. 82, 269-284 (2010)
5. Araci, S, Acikgoz, M: A note on the Frobenius-Euler numbers and polynomials associated with Bernstein polynomials. Adv. Stud. Contemp. Math. (Kyungshang) 22(3), 399-406 (2012)
6. Gzyl, H: Hamilton Flows and Evolution Semigroups. Pitman Research Notes in Mathematics Series, vol. 239 (1990)
7. Jolany, H, Sharifi, H, Alikelaye, RE: Some results for the Apostol-Genocchi polynomials of higher order. Bull. Malays. Math. Soc. 36(2), 465-479 (2013)
8. Qi, F: An integral representation, complete monotonicity, and inequalities of Cauchy numbers of the second kind. J. Number Theory 144, 244-255 (2014)
9. Nemes, G: An asymptotic expansion for the Bernoulli numbers of the second kind. J. Integer Seq. 14(4), Article 11.4.8 (2011)
10. Kim, T: Identities involving Laguerre polynomials derived from umbral calculus. Russ. J. Math. Phys. 21(1), 36-45 (2014)
11. Zachos, CK: Umbral deformations on discrete space-time. Int. J. Mod. Phys. A 23(13), 2005-2014 (2008)
12. Kim, T, Kwon, H-I, Lee, S-H, Seo, J-J: A note on poly-Bernoulli numbers and polynomials of the second kind. Adv. Differ. Equ. 2014, Article ID 219 (2014)
13. Biedenharn, LC, Gustafson, RA, Lohe, MA, Louck, JD, Milne, SC: Special functions and group theory in theoretical physics. In: Special Functions: Group Theoretical Aspects and Applications. Math. Appl., pp. 129-162. Reidel, Dordrecht (1984)
14. Dattoli, G, Levi, D, Winternitz, P: Heisenberg algebra, umbral calculus and orthogonal polynomials. J. Math. Phys. 49(5), 053509 (2008)
15. Blasiak, P, Dattoli, G, Horzela, A, Penson, KA: Representations of monomiality principle with Sheffer-type polynomials and boson normal ordering. Phys. Lett. A 352, 7-12 (2006)
16. Kim, DS, Kim, T: Applications of umbral calculus associated with p-adic invariant integrals on \(\mathbb{Z}_{p}\). Abstr. Appl. Anal. 2012, Article ID 865721 (2012)
17. Kim, DS, Kim, T: Some identities of Frobenius-Euler polynomials arising from umbral calculus. Adv. Differ. Equ. 2012, Article ID 196 (2012)
18. Roman, S: More on the umbral calculus, with emphasis on the q-umbral calculus. J. Math. Anal. Appl. 107, 222-254 (1985)
19. Roman, S: The Umbral Calculus. Dover, New York (2005)
Acknowledgements
This work was partially supported by Kwangwoon University in 2014.
Additional information

Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to this work. All authors read and approved the final manuscript. |
I'm trying to solve this limit *without* using L'Hôpital's Rule or Taylor series. Any help is appreciated!
$$\lim\limits_{x\rightarrow 0^+}{\dfrac{e^x-\sin x-1}{x^2}}$$
One possible way is to shoot linear functions at the limit - not very elegant, but it works. Let:
$$f(x)=x^3-\frac{x^2}{2}+e^x-\sin x-1,\;\;x\geq 0$$
Computing the first few derivatives of $f:$
$$f'(x)=3x^2-x+e^x-\cos x$$ $$f''(x)=6x-1+e^x+\sin x$$
$f''$ is clearly increasing and since $f''(0)=0$ we have $f''(x)>0$ for $x\in (0,a)$ for some $a$. This in turn implies that $f'$ is strictly increasing and since $f'(0)=0$ we again have $f'(x)>0$ for $x\in (0,a)$. Finally, this means $f$ is also increasing on this interval, and since $f(0)=0$ we have:
$$0\leq x\leq a:\quad f(x)\geq 0$$
$$\Rightarrow \;\;\frac{e^x-\sin x-1}{x^2}\geq \frac{1}{2}-x$$
Similarly by considering $h(x)=-x^3-\dfrac{x^2}{2}+e^x-\sin x-1$ it is very easy to show that:
$$0\leq x\leq b: \quad h(x)\leq 0$$
$$\Rightarrow \;\;\frac{1}{2}+x\geq\frac{e^x-\sin x-1}{x^2}$$
Hence for small positive $x$ we have:
$$\frac{1}{2}-x\leq\frac{e^x-\sin x-1}{x^2}\leq \frac{1}{2}+x$$
$$\lim_{x\to 0^+}\frac{e^x-\sin x-1}{x^2}=\frac{1}{2}$$
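A quick numerical sanity check of the squeeze above (my addition, not part of the answer):

```python
# Sketch: check 1/2 - x <= (e^x - sin x - 1)/x^2 <= 1/2 + x for small x > 0.
import math

def f(x):
    return (math.exp(x) - math.sin(x) - 1) / x**2

for x in [0.1, 0.01, 0.001]:
    assert 0.5 - x <= f(x) <= 0.5 + x
# The ratio approaches 1/2 as x -> 0+, consistent with the squeeze.
```

For very small $x$ the numerator suffers catastrophic cancellation in floating point, so this check is only meaningful down to moderately small $x$; the analytic squeeze is what actually proves the limit.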
I have an idea that might be fruitful although I haven't been very careful and completely ignore the $x\rightarrow 0^+$! Define the complex function
$\displaystyle F(z)=\frac{e^z-\sin z-1}{z^2}$.
Now let $z=it$.
$\displaystyle F(it)=\frac{e^{it}-\sin(it)-1}{(it)^2}=-\frac{\cos t+i\sin t-\sin(it)-1}{t^2}$
$\displaystyle \Rightarrow F(it)=\frac{1-\cos t}{t^2}-i\,\frac{\sin t-\sinh t}{t^2}$, using $\sin(it)=i\sinh t$.
Now take the limit as $t\rightarrow0$. It is not difficult to show that the first limit is one half (multiply above and below by $\cos t+1$). For the second, $\sin t-\sinh t=-\frac{t^3}{3}+O(t^5)$, so the imaginary part does tend to zero.
Now use the fact that
$\displaystyle \lim_{x\rightarrow 0}\,F(x)=\lim_{z\rightarrow 0}\,F(z)=\lim_{t\rightarrow 0}\,F(it)$
I have assumed that the middle limit exists which is troublesome.
$$\lim_{x \to 0^+}\frac{e^x-\sin x -1}{x^2}=\lim_{x \to 0^+}\frac{e^x- x -1}{x^2}+\lim_{x \to 0^+}\frac{x-\sin x}{x^2}$$ The first limit is the standard result $\lim_{x\to 0}\frac{e^x-x-1}{x^2}=\frac{1}{2}$. For the second, use $0< x-\sin x<\frac{x^3}{6}$ for $x>0$, so that $$\left|\frac{x -\sin x}{x^2}-0\right| < \frac{x}{6}< \epsilon \quad \text{whenever } 0<x<\delta(\epsilon)=6\epsilon.$$ So from the definition of the limit we get $$\lim_{x \to 0^+}\frac{x-\sin x}{x^2} =0$$
Perhaps the problem says $x\to 0^+$ because some inequalities wich relate $\sin x$ and $e^x$ with polynomials are only valid for $x>0$ sufficiently close to $0$. So, a strategy is to find functions $f(x)$ and $g(x)$ such that
$$f(x)\leq \dfrac{e^x-\sin x-1}{x^2}\leq g(x)\:\;\mbox{ for }x>0 \mbox{ close to } 0$$
and $\displaystyle\lim_{x\to 0^+}f(x)=\lim_{x\to 0^+}g(x)=\frac{1}{2}$ |
Mathematics > Algebraic Geometry
Title: Generic Initial Ideals of Singular Curves in Graded Lexicographic Order
(Submitted on 30 Jul 2011)
Abstract: In this paper, we are interested in the generic initial ideals of *singular* projective curves with respect to the graded lexicographic order. Let $C$ be a *singular* irreducible projective curve of degree $d\geq 5$ with arithmetic genus $\rho_a(C)$ in $\mathbb{P}^r$ where $r\ge 3$. If $M(I_C)$ is the regularity of the lexicographic generic initial ideal of $I_C$ in a polynomial ring $k[x_0,\ldots, x_r]$, then we prove that $M(I_C)$ is $1+\binom{d-1}{2}-\rho_a(C)$, which is obtained from the monomial $$ x_{r-3}\, x_{r-1}^{\binom{d-1}{2}-\rho_a(C)}, $$ provided that $\dim\operatorname{Tan}_p(C)=2$ for every singular point $p \in C$. This number is equal to one plus the number of non-isomorphic points under a generic projection of $C$ into $\mathbb{P}^2$. Our result generalizes the work of J. Ahn for *smooth* projective curves and that of A. Conca and J. Sidman [CS] for *smooth* complete intersection curves in $\mathbb{P}^3$. The case of singular curves was motivated by [CS, Example 4.3] due to A. Conca and J. Sidman. We also provide some illuminating examples of our results via calculations done with Macaulay 2 and Singular [DGPS, GS].
Submission history: From: Jea Man Ahn [v1] Sat, 30 Jul 2011 06:43:14 GMT (13kb)
Consider two adjacent terms in the product, and write $x=x_j,y=x_{j+1}$. The nondecreasing condition gives the constraint $x \le y$, and with $a=j$ the contribution to the product from the $j$ and $j+1$ terms is$$(1+a^2x^2)(1+(a+1)^2y^2). \tag{1}$$Now if initially $x+y=k$ we only decrease (or keep the same) this two-term product by replacing $x,y$ each by $k/2$, and note that since initially $x \le y$ this replacement has increased (or kept the same) $x$ while decreasing (or keeping the same) $y$, so the replacement is consistent with the nondecreasing condition among the $x_j$'s. It is also clear that, if $x\neq y$ were true initially, the replaced contribution would be strictly less than the initial product $(1).$
This argument shows that (as Greg Martin suggested) the minimal value of the expression occurs when all the $x_j$ are $1/n.$ If $p(n)=(2n^2+9n+1)/(6n)$ and $q(n)$ is the value of the product when each $x_j=1/n,$ then $p(1)=q(1)=2,$ so the lower bound $p(n)$ happens to be exact at $n=1$. For larger $n$ it is less than $q(n)$, and this has to be checked by hand for $2 \le n \le 4$, for which $q(2)-p(2)=1/4,$ $q(3)-p(3)=53/81,$ and finally $q(4)-p(4)=655/512.$ [There may be a more clever way to see in general that $q(n)>p(n)$ but I don't see it.]
As soon as $n \ge 5$ we can use the fact that $1/n$ times the logarithm of the product is a right-endpoint sum for the integral $$\int_0^1 \log(1+x^2)\ dx = c= \log 2+(\pi-4)/2.$$Then from $(1/n)\cdot \log q(n) \ge c$ we get $q(n) \ge e^{cn}.$ Though I didn't do this last part carefully, it seems that for $n \ge 5$ we do have $e^{cn} >p(n)$ as required. The left side here should certainly overtake the right since it's exponential with $e^c \approx 1.302.$ I guess the reason the integral bound isn't tight for $n<5$ is because too much error occurs in replacing the sum by the integral.
EDIT (at suggestion by the OP BarbuDorel in a comment). Once it is agreed the minimum of the product occurs when all the $x_j$ are $1/n$ the product becomes$$(1+(1/n)^2)(1+(2/n)^2)\cdots (1+(n/n)^2).$$This is clearly at least $1$ plus the sum of the squared terms which occur as second terms in the factors. But$$1+(1/n)^2+(2/n)^2+\cdots+(n/n)^2=\frac{2n^2+9n+1}{6n},$$on applying the formula for the sum of the first $n$ squares. So there is no need to consider integrals as I did above. [There still remains some work to justify replacing the product of two adjacent factors $(1)$ by the product obtained on replacing adjacent $x_j$ by their average. I'll add that if I can get a simple argument.]
A simpler approach. Note that the given product is bounded below by the sum$$S=1+x_1^2+2^2x_2^2+\cdots + n^2x_n^2.$$ As noted above, if we can show this sum is minimal when all the $x_j$ are equal, the inequality follows. The argument for this goes similarly to before, looking at two adjacent terms, but the math is easier. Suppose the adjacent terms contribute $ax^2+by^2$ [where $0<a<b$] to the sum, and that initially $x<y$. Then if initially $x+y=k$ we have $x<k/2$. Now consider $$ax^2+b(k-x)^2=(a+b)x^2 -2bkx+bk^2.$$The graph of this is a parabola opening upward, with its vertex at $x=bk/(a+b)>k/2,$ where the latter follows from $0<a<b.$ So since our initial $x$ satisfies $x<k/2$ we see that the value of $ax^2+by^2$ is strictly decreased when we replace each of $x,y$ by their average $(x+y)/2.$
Now since it is clear that the restrictions on the $x_k$ define a compact bounded region in $n$ space, the function $S$ must have a minimum, and the above argument shows this minimum cannot occur at any point at which any two of the $x_j$ are not equal. Conclusion: at the minimal $S$ all the $x_j$ are equal, and the overall inequality follows from that as just outlined. |
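A quick numerical check of this minimization claim; a sketch under my reading of the problem (nonnegative $x_1\le\dots\le x_n$ summing to $1$, product $\prod_j(1+j^2x_j^2)$, lower bound $p(n)$ as in the answer):

```python
import random
from math import prod  # Python 3.8+

def product(xs):
    # prod_j (1 + j^2 * x_j^2), with j starting at 1
    return prod(1 + (j + 1) ** 2 * x ** 2 for j, x in enumerate(xs))

random.seed(0)
for n in range(1, 8):
    bound = (2 * n * n + 9 * n + 1) / (6 * n)   # p(n) from the answer
    # conjectured minimizer: all x_j = 1/n
    assert product([1.0 / n] * n) >= bound - 1e-12
    # random nondecreasing points summing to 1
    for _ in range(200):
        xs = sorted(random.random() for _ in range(n))
        total = sum(xs)
        assert product([x / total for x in xs]) >= bound - 1e-12
```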
In the following circuit:
I know the expression for the collector current is $$I_C =\frac{V_{CC}-V_C}{R_L}$$ Say I have a potentiometer for \$R_L\$ and I decide to set it to 0 Ω. What would \$I_C\$ be in this case?
Let me show the voltages and collector current (\$I_C\$).
As you can see, \$V_B = V_{BE} + V_E = V_{BE} + I_E \cdot R_E\$.
Since \$I_C \approx I_E\$, the equality above turns into \$V_B \approx V_{BE} + I_C \cdot R_E\$. Finally, $$I_C = \frac{V_B - V_{BE}}{R_E}$$
(I will not show what \$V_B\$ is since it can be found with a simple voltage divider rule.)
See? The collector current is independent of the collector load!
NOTE: Placing a resistor at the emitter creates a constant current source. It also guarantees thermal stability.
Let's assume Beta = infinity, and assume Ic is near 1 mA, so Vbe is approximately 0.6 volts (0.5 volts near 10 µA, 0.4 volts near 100 nA).
Ic = (Vbase - 0.6) / Re
Where Vbase = VCC * the voltage divider ratio,
thus Vbase = VCC * R2 / (R1 + R2)
and now, finally,
Ic = { [VCC * R2/(R1 + R2)] - 0.6} / RE
Again, assuming beta = infinity, Vbe = 0.6, and Vearly = infinity
With all 3 resistors being 1KOhm, and VCC = 3 volts, we have
Ratio = 0.5,
Vbase = 3*0.5 = 1.5 volts,
Vemitter = Vbase - 0.6 = 1.5 - 0.6 = 0.9 volts
Ic = 0.9 / 1K = 0.9 milliAmps |
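The arithmetic above can be written out as a short script; a sketch using the same idealized assumptions (beta = infinity, Vbe = 0.6 V) and the example values from the answer:

```python
VCC = 3.0            # supply voltage [V]
R1 = R2 = RE = 1e3   # divider and emitter resistors [ohm]
VBE = 0.6            # assumed base-emitter drop [V]

v_base = VCC * R2 / (R1 + R2)   # voltage divider: 1.5 V
v_emitter = v_base - VBE        # about 0.9 V
i_c = v_emitter / RE            # Ic ~= Ie for very large beta

assert abs(i_c - 0.9e-3) < 1e-9  # about 0.9 mA
```

Note that the collector resistor never enters the calculation, which is the point of the answer: setting \$R_L\$ to 0 Ω does not change \$I_C\$; it only pulls \$V_C\$ up to VCC.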
Let $E$ be the subspace of $\mathbb{R}^4$ consisting of the vectors that satisfy the following system:
$$\begin{cases} x - 2y + z + t = 0 \\2x+y-2z-t = 0 \end{cases}$$
So, I've used Gaussian elimination so I've found the following basis for $E$: $$ B = \{(3,4,5,0), (1,3,0,5)\}. $$
Well, now I defined $W = \{ w \in \mathbb{R}^4 \mid \langle v, w \rangle = 0 \text{ for all } v \in E \}$. What do I have to do next? I used the following system:
$$ \begin{cases}3x_1 + 4x_2 + 5x_3 = 0 \\ 1x_1+3x_2+5x_4 = 0 \end{cases} $$
by taking the inner product with each basis vector, where $x_1,x_2,x_3,x_4$ are the components of an arbitrary vector in $W$. Is this approach correct? If I now find a basis of $W$ from this system, will I have found a basis of the orthogonal complement of $E$?
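Yes: the orthogonal complement of $E$ is exactly the null space of the matrix whose rows are the basis vectors, which is what that system describes. A quick numerical check of the approach (a sketch using NumPy's SVD; the variable names are mine):

```python
import numpy as np

# Rows are the basis of E found in the question.
B = np.array([[3, 4, 5, 0],
              [1, 3, 0, 5]], dtype=float)

# E-perp = { x : B @ x = 0 }, i.e. the solution set of the system above.
# The trailing right-singular vectors of B span this null space.
_, s, vt = np.linalg.svd(B)
rank = int(np.sum(s > 1e-10))
null_basis = vt[rank:]            # orthonormal basis of the complement

assert null_basis.shape == (2, 4)          # dim E + dim E-perp = 4
assert np.allclose(B @ null_basis.T, 0.0)  # every basis vector is orthogonal to E
```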
I've read all the existing answers long ago but still feel that none have gotten to the heart of the issue. We obtain mathematical results through a process of reasoning.
That reasoning must be logical and enough to convince anyone that our results are correct given our initial assumptions. That is the actual purpose of a proof. It does not matter what form the reasoning takes, whether using only words or only mathematical symbols or only a diagram. The requirement is simply to convince the other person. If we cannot do so, then our reasoning is insufficient or incorrect.
This proper attitude must start right from the basics. For example $\frac{1+2}{1+3} \ne \frac{\not{1}+2}{\not{1}+3}$.
Explaining to the student that one cannot do that is almost useless. Instead, the student should be asked: "Why do you cancel?" and then "Why does cancelling keep the value the same?".
The problem is that if this is not done from the beginning of arithmetic, it simply causes students to create for themselves a deep quagmire of guesswork in order to heuristically
write down things which they believe will get them their grades. If you have seen students who try to mimic their teachers' phrasing but clearly without understanding of the meaning, or students who care only about how to get the answer and not why the method is correct, you know what I mean.
As a result, very few students have a full grasp of even the fundamentals, namely the field of rationals. What I mean by this is that few are able to state all the field axioms correctly and prove results like the uniqueness of inverses (when they exist) and that $0 \times x = 0$ and that $-x \times -y = x \times y$. (Out of these, fewer still can give any explanation as to the rationale for the axioms, but that is another topic.)
It is obvious that with a proper foundation as I briefly described above, no student would ever write $(a+b)^2 = a^2+b^2$. Why? Because they know that "$x^2$" is
defined as "$x \times x$" and "$()$" are used to denote what to do first, so $(a+b)^2 = (a+b) \times (a+b)$. Moreover, they also would know the distributivity field axiom that gives first $(a+b) \times (a+b) = a \times (a+b) + b \times (a+b)$ and then after 2 more applications the full expansion, using the commutativity and associativity axioms. Likewise none of the other mistakes that you mentioned would occur.
Furthermore, if students cannot handle the field axioms correctly, one might as well throw the induction axiom out of the window. The way it is taught in most textbooks and curricula is seriously lacking, precisely because it is not based on sufficiently formal reasoning. A simple example that most students who were brought up with textbook induction fail to solve is:
Given a function $f:\mathbb{Z}\to\mathbb{R}$ such that $f(0) = 0$ and $f(1) = 1$ and $f(x+1) + 6 f(x-1) = 5 f(x)$ for any $x \in \mathbb{Z}$, prove that $f(x) = 3^x - 2^x$ for any $x \in \mathbb{Z}$.
It is not hard at all, but only those who understand the logical structure of induction would be able to give a correct proof. In case anyone is wondering what I mean by textbook induction, two examples that I would consider seriously lacking are:
Finally, proper reasoning naturally requires sufficient precision, because one cannot reason logically about statements whose meaning is undefined or unclear.
Vagueness in mathematics is one great recipe for confusion. This must start with the teacher. A teacher who is sloppy with mathematical statements or steps in reasoning is simply telling the students that it is alright to be sloppy and by extension it is alright if they do not know what they are doing as long as they get the answer!
One terrible example of sloppiness in most high-school curricula is solving differential equations by "separating variables". Try giving the following to any student:
Solve for $y$ as a function of a real variable $x$ given that the differential equation $\frac{dy}{dx} = 2\sqrt{y}$ holds.
You know what answer to expect, and I hope you know the correct answer. Even Wolfram Alpha gets it wrong. Now for students who give the wrong answer, tell them that it is wrong but do not tell them the correct answer, and ask if they can identify the mistake and fix it. Most will fail to identify the mistake, and fixing the mistake will require the foundation in logic that most students do not have.
Here are the solution sketches for the problems I've given above. I strongly encourage one to thoroughly check one's own work to verify whether each step follows completely logically from the preceding deductions, and merely look at these solutions to confirm.
Problem
Given a function $f:\mathbb{Z}\to\mathbb{R}$ such that $f(0) = 0$ and $f(1) = 1$ and $f(x+1) + 6 f(x-1) = 5 f(x)$ for any $x \in \mathbb{Z}$, prove that $f(x) = 3^x - 2^x$ for any $x \in \mathbb{Z}$.
Hints
Induction only allows you to derive something about the natural numbers. The desired theorem is about integers. Also, if you cannot prove the implication needed for the induction, a key technique that often works is to strengthen the induction hypothesis to include enough information so that you can prove the implication step. Of course that also means that the implication you need to prove has changed!
Solution sketch
First notice that the theorem to be proven is that $f(x) = 3^x - 2^x$ for all
integers $x$, and so induction in one direction is not enough! Also, notice that it is impossible to prove that $f(x) = 3^x - 2^x$ implies $f(x+1) = 3^{x+1} - 2^{x+1}$, and hence the induction hypothesis must contain information about at least two 'data points' for $f$. The easiest choice would be to let $P(x)$ be "$f(x) = 3^x - 2^x$ and $f(x-1) = 3^{x-1} - 2^{x-1}$". Then one must prove $P(x+1)$, which expands to "$f(x+1) = 3^{x+1} - 2^{x+1}$ and $f(x) = 3^x - 2^x$". I would not accept it if the student does not fully prove $P(x+1)$. This would handle the natural numbers, and a similar induction would handle the negative integers. It is of course possible to combine both inductions into one, and this could be explored, although in general it is good to keep a proof as modular as possible.
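The claimed closed form is easy to test mechanically before writing the induction; a sketch using exact rational arithmetic (negative integer powers of 3 and 2 are fractions):

```python
from fractions import Fraction

def f(x: int) -> Fraction:
    return Fraction(3) ** x - Fraction(2) ** x

# Initial conditions and the recurrence f(x+1) + 6 f(x-1) = 5 f(x),
# checked over a window of integers (both positive and negative).
assert f(0) == 0 and f(1) == 1
for x in range(-20, 21):
    assert f(x + 1) + 6 * f(x - 1) == 5 * f(x)
```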
Problem
Solve for $y$ as a function of a real variable $x$ given that the differential equation $\frac{dy}{dx} = 2\sqrt{y}$ holds.
Hint
The answer is not $y = (x+a)^2$, which you would get by the method of separating variables. What went wrong? Note that the error would still be there if you used the theorem that allows change of variables in an integral. Look carefully at each deduction step. One step cannot be justified based on any axiom. Think basic arithmetic. After you get that, you need to consider cases and use the completeness axiom for reals to extend the open intervals on which the standard solution works.
Solution sketch
The field axioms only give you a multiplicative inverse when it is not zero. Now how to solve the problem? Split into cases. Note that you need to work on intervals since having isolated points where $y$ is nonzero is useless. First prove that for any point where $y \ne 0$, there is an open interval around $x$ for which $y \ne 0$. Then we can use the completeness axiom for reals to extend the interval in both directions as far as $y \ne 0$. Now we can use any method to solve for $y$ on that interval. Note that the method of separating variables is formally invalid, so we should use the change of variables substitution. But the prerequisite for that is that $\frac{dy}{dx}$ is continuous, so we need to prove that! Well, $y$ is differentiable and hence continuous, so $2\sqrt{y}$ is continuous. So we get the solution on the extended interval, and it shows that $y$ becomes zero in exactly one direction in this example. Hence after some checking you will get either $y = 0$ or $y = \cases{ 0 & \text{if } x \le a \\ (x-a)^2 & \text{if } x > a }$ for some real $a$.
Alternative subproof
In fact, the substitution theorem can be completely avoided as follows. On any interval $I$ where $y \ne 0$, we have $y'^2 = 4y$, where "${}'$" denotes the derivative with respect to $x$. Thus $(y'^2)' = (4y)'$, which gives $2y'y'' = 4y'$, and hence $y'' = 2$ since $y' = 2\sqrt{y} \ne 0$. Thus $y' = 2x+c$ on $I$ for some real $c$, and hence $y = x^2+cx+d$ on $I$ for some real $d$. Note that most of the above steps are not reversible and hence we need to check all the solutions we finally obtain with the original differential equation. We would get $c^2 = 4d$. After simple manipulation we obtain the same result for $y$ on $I$ as in the other solution. The other parts of the solution still need to be there. |
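The piecewise solution can also be checked numerically; a sketch with $a=0$, verifying $y'=2\sqrt{y}$ by central differences at a few sample points on both sides of the gluing point:

```python
import math

a = 0.0

def y(x: float) -> float:
    # the piecewise solution from the sketch: 0 for x <= a, (x-a)^2 after
    return 0.0 if x <= a else (x - a) ** 2

h = 1e-6
for x in [-2.0, -0.5, 0.5, 1.0, 3.0]:
    dy = (y(x + h) - y(x - h)) / (2 * h)     # numerical derivative
    assert abs(dy - 2.0 * math.sqrt(y(x))) < 1e-4
```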
There is a very simple way to do that: using "gregorio" in LuaLaTeX:

% !TEX TS-program = lualatex
\documentclass{article}
\usepackage[bitstream-charter]{mathdesign}% I like this font, but you can use another font.
\usepackage{gregoriotex}
\begin{document}
\Vbar\Rbar
\end{document}
For the question of typing: The Vedic accents are not available on the standard keyboard layouts, it seems.Which leaves: (a) direct input individual selection of characters from a character map (e.g., BabelMap); (b) almost-direct input via text editors where the unicode value can be keyed in and pressing Alt-X converts it to the glyph (e.g., LibreOffice ...
You can use the cmtex10 font:\documentclass{article}\usepackage{newunicodechar}\DeclareFontFamily{OT1}{cmtex}{}\DeclareFontShape{OT1}{cmtex}{m}{n}{<-> cmtex10}{}\DeclareTextFontCommand{\textttex}{\usefont{OT1}{cmtex}{m}{n}}\newunicodechar{≤}{\ifmmode\le\else\textttex{\symbol{"1C}}\fi}\newunicodechar{≥}{\ifmmode\ge\else\textttex{\symbol{"1D}}\...
I’ll repost my answer using Unicode. It’s not as terse as requested, because it uses expl3, but neither is David Carslisle’s fine answer.Here is a solution using fontspec and expl3. It’s not very compact, but in my opinion, it’s a lot more readable than the legacy solution.\documentclass{article}\RequirePackage{expl3}\usepackage{fontspec}% These ...
If you're feeling hacky, or are working on an online platform based on KaTeX, this is a mediocre, but perhaps passable ASCII-inspired solution:\underline{/\overline{ \vphantom{/} \hphantom{a}}\,}\!\overline{/}}Hopefully you don't need to use this, though :)
Here is how you can define custom commands using special characters for the \stackrel approach.\catcode`\!=11\newcommand{\eq!}{\stackrel{?}{=}}% ...$f(x) \eq! 4$It is also nice to define \leq?, \mid? etc.This solution is significantly better than writing \stackrel{?}{=} and friends frequently.
In the Computer Modern math font family, the shapes of \setminus and \backslash are identical. (Aside: the vertical size of \backslash can be modified by \left and \right directives; that's not the case for \setminus.) That does not mean, though, that the symbols get typeset identically. This is because \backslash has status mathord ("math ordinary"), ...
by default \setminus is\DeclareMathSymbol{\setminus}{\mathbin}{symbols}{"6E}That is a smallish \, set as a binary operator with the same spacing as -but \backslash is\DeclareMathDelimiter{\backslash}{\mathord}{symbols}{"6E}{largesymbols}{"0F}That is a large \ that can grow with \left right but otherwise set as an ordinary symbol, so acting like |...
This is a letter invented by Dr. Seuss for his children's book OnBeyond Zebra! The author presents several letters which extend the known26-letter alphabet of English.In the book the letter is not assigned a name, but somecall it ABCDEFGHIJKLMNOPQRSTUVWXYZ. Seeen.wikipedia.org/wiki/On_Beyond_Zebra! and the link to the registeredUnicode extension on ...
I had to guess some of the unicode mappings, just looking athttps://www.unicode.org/charts/PDF/U13000.pdf\documentclass{article}\usepackage{fontspec}\newfontfamily\hg{Segoe UI Historic}\def\hgunits#1{\ifcase#1\relax\or^^^^^^0133fa\or^^^^^^0133fb\or^^^^^^0133fc\or^^^^^^0133fd\or^^^^^^0133fe\or^^^^^^0133ff\or^^^^^^013400\or^^^^^^013401\or^^^^^^...
See if this works for you:

\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[line width=5pt]
\foreach \i in {0, 90, 180, 270}
  \draw[rotate=\i] (0,0) -- ++ (45:1) arc (45:90:1);
\end{tikzpicture}
\end{document}
It very much looks like the issue you've come across is a (bad) side-effect of using the polyglossia package, which seems to have a few issues when used with the brazil language option.Unless you absolutely must use the polyglossia package, an easy remedy consists of (a) getting rid of (or, at least, commenting out) the instructions\usepackage{...
The test file\documentclass{article}\usepackage{catchfilebetweentags}\begin{document}\ExecuteMetaData ∗ [filename]{tag}\end{document}produces the error shown! Argument of \UTFviii@three@octets has an extra }.<inserted text>\parl.6 \ExecuteMetaData ��� [filename]{tag}?as the LaTex *form of a ...
The solution to the problem is to load the letltxmacro package and do:\LetLtxMacro\iso\cong\LetLtxMacro\cong\equivA little explanation: your reasoning that "TeX read commands from top to bottom" is correct. After you do \newcommand{\iso}{\cong}, the command \iso will make a ≅ as you expect, but not how you expect it to. After the \newcommand above (...
unicode-math does many things \AtBeginDocument, you can delay your declarations:\documentclass{article}\usepackage{unicode-math}\setmathfont{Garamond-Math.otf}\DeclareSymbolFont{gs}{OML}{cmm}{m}{it}\newcommand*\RedeclareMathSymbol[4]{%\let#1\relax\DeclareMathSymbol{#1}{#2}{#3}{#4}%}\AtBeginDocument{%\RedeclareMathSymbol{\alpha}{\mathalpha}{...
It doesn't work because unicode-math sets the font tables at begin document.You should use the range feature instead:\documentclass{article}\usepackage{amsmath}\usepackage{amsthm}\usepackage[math-style=ISO, bold-style=ISO]{unicode-math}\setmathfont{Garamond-Math.otf}\setmathfont{latinmodern-math.otf}[Scale=MatchLowercase,range=it/{greek,Greek}...
You can rotate the \approx symbol and set it as a relation:\documentclass{article}\usepackage{graphicx}\makeatletter\newcommand{\@curveslash}[2]{\rotatebox[origin=c]{70}{$#1#2$}}\newcommand{\curveslash}{\mathrel{\mathpalette\@curveslash\approx}}\makeatother\begin{document}\ldots enveloping quotient $H \curveslash H$.\[H \curveslash H_{H \...
The ≙ symbol is U+2259 in Unicode. It has the name \wedgeq in several newer packages, including unicode-math and stix. It is \hateq in two older ones, mnsymbol and fdsymbol. One completely obsolete one, boisik, has \corresponds.Edit: Sorry, just noticed that Barbara Beeton said that in a comment first. Props!
It seems like a variation of the congruence symbol. Denis's answer is aligned with the =. To align it with \cong you need more trickery:\documentclass{article}\usepackage{amsmath} % necessary for correct scaling of \widehat\makeatletter\newcommand*{\varcong}{\mathrel{\mathpalette\@varcong\relax}}\newcommand*{\@varcong}[2]{\vcenter{\hbox{\m@th$#1\...
What you want could be obtained using

\documentclass[12pt]{article}
\usepackage{amsmath}
\newcommand{\circumeq}{\mathrel{\widehat{=}}}
\begin{document}
\begin{equation*}
a \circumeq b
\end{equation*}
\end{document}
I would definitely not use a symbol here and would use the word times. If you really want the symbol then the second one, or the fourth but corrected to not lose the space, so

107 \texttimes\ that

(you always need \ after a command name in text). But

107 times that

is much better. Note that you may or may not want to use $107$ to use numbers in text (with the ...
Here is a (humble) possibility using tikz. It won't resist scaling though\documentclass{standalone}\usepackage{tikz,amstext}\newlength{\tempheight}\newcommand{\Let}[0]{%\mathbin{\text{\settoheight{\tempheight}{\mathstrut}\raisebox{0.5\pgflinewidth}{%\tikz[baseline,line cap=round,line join=round] \draw (0,0) --++ (0.4em,0) --++ (0,1.5ex) --++ (-0.4em,0)...
\colon and : are defined in fontmath.ltx using:\DeclareMathSymbol{\colon}{\mathpunct}{operators}{"3A}\DeclareMathSymbol{:}{\mathrel}{operators}{"3A}which makes \colon a punctuation and : a relation symbol, as you said yourself.You can swap the definitions:\DeclareMathSymbol{:}{\mathpunct}{operators}{"3A}\DeclareMathSymbol{\colon}{\mathrel}{...
Thanks for all the great leads guys. I liked @unbonpetit's response with the long lines (admittedly longer than em-dash like I initially wanted), so I decided to modify the code a bit and got a really nice result:What's nice is you can play around with the positioning and I was able to get it slightly (vertically) off-center so that it lies closer to the ...
The second π looks like the ones you would find in fonts sold by Bitstream.Unless you have copies of these commercial fonts, there is no way to change \pi into your desired glyph (unless you want to draw one yourself).By the way, this letterform does not go well with Times at all. I would not recommend changing the already well-designed MathTime Pro....
Is this close to what you want?

\documentclass{article}
\usepackage{amsmath, amssymb}
\usepackage{graphicx}
\usepackage{stackengine}
\newcommand{\mybond}{\mathrel{\scalebox{1.5}[0.84]{$\stackMath\stackinset{c}{-1.4pt}{c}{4.3pt}{=}{\equiv}$}}}
\begin{document}
\[\mathrm{N}\mybond \text{---}\]%
\end{document}
A simple solution is to declare another “math alphabet” with a different name:% My standard header for TeX.SX answers:\documentclass[a4paper]{article} % To avoid confusion, let us explicitly% declare the paper format.\usepackage[T1]{fontenc} % Not always necessary, but recommended.% End of standard header. ...
Another possible solution, with the \bigovoid symbol from mathabx and \stackinset from stackengine to insert a smaller \bigovoid inside a larger one:\documentclass[12pt]{article}\usepackage[utf8]{inputenc}\usepackage{graphicx}\usepackage{stackengine}\DeclareFontFamily{U}{mathx}{\hyphenchar\font45}\DeclareFontShape{U}{mathx}{m}{n}{%<-6> mathx5&...
A circle with variable line thickness:\documentclass{article}\usepackage{amsmath,pict2e}\makeatletter\newcommand{\bigcomp}{%\DOTSB\mathop{\vphantom{\sum}\mathpalette\bigcomp@\relax}%\slimits@}\newcommand{\bigcomp@}[2]{%\begingroup\m@th\sbox\z@{$#1\sum$}%\setlength{\unitlength}{0.9\dimexpr\ht\z@+\dp\z@}%\vcenter{\hbox{%\begin{...
\documentclass{article}\newcommand\ustexttt[1]{\bgroup\ttfamily\ustextttaux#1\relax\relax}\def\ustextttaux#1#2\relax{\ifx_#1\_\else#1\fi\allowbreak\ifx\relax#2\relax\def\next{\egroup}\else\def\next{\ustextttaux#2\relax}\fi\next}\begin{document}Here is an example\ustexttt{multiple_words_long_with_underscores_representing_spaces}of converted ...
Consider using listings' \lstinline (experimental):

\documentclass{article}
\usepackage{listings}
\lstset{basicstyle = \ttfamily}
\let\code\lstinline
\begin{document}
Some regular text.
Some \code{code} inline.
Then some \code{multiple_words} code word.
\end{document}
The stix package gives you not only \parallelogram and \parallelogramblack, but also \fltns.

\documentclass{article}
\usepackage{stix}
\begin{document}
$\parallelogram \parallelogramblack \fltns$
\end{document}
Here is a possibility with stackengine and the \bigovoid symbol from mathabx (without replacing the default maths fonts with the mathabx fonts):\documentclass{article}\usepackage{amsmath}\DeclareFontFamily{U}{mathx}{\hyphenchar\font45}\DeclareFontShape{U}{mathx}{m}{n}{%<-6> mathx5<6-7> mathx6<7-8> mathx7<8-9> mathx8<9-...
You can use a scaled up version of \bigcirc and \ooalign:\documentclass{article}\usepackage{amsmath,graphicx}\makeatletter\newcommand{\makecircled}[2][\mathord]{#1{\mathpalette\make@circled{#2}}}\newcommand{\make@circled}[2]{%\begingroup\m@th\vphantom{\biggercirc{#1}}%\ooalign{$#1\biggercirc{#1}$\cr\hidewidth$#1#2$\hidewidth\cr}%\endgroup}...
You can use TikZ to draw a circle node with a # inside. Using \DeclareMathOperator from amsmath improves the spacing. The character should be a bit smaller than the current font, which you can do using \smaller from the relsize package, to make sure it works in different fontsizes.MWE:\documentclass{article}\usepackage{amsmath}\usepackage{tikz}\...
I have created your symbol with a combination of packages. Excuse me for the complicated code. The symbol has the name \disj. It is a variable name that you can change. Here is my MWE:\documentclass{article}\usepackage{amsmath,amssymb}\usepackage{MnSymbol,scalerel}\newcommand{\disj}{\mathrel{{\bigcircle}\mkern-4mu\raise.3ex\llap{$\scaleobj{.6}{\#}$}}}\...
\stretchrel from the scalerel package allows you to do such things.\documentclass[border=1mm]{standalone}\usepackage{scalerel}\begin{document}\Huge $\cup \ \cap \ \stretchrel*{\mid}{\cap}$\end{document} |
Interfaces:
WeightFunction
Weight function interface that allows the use of various distance-based weight functions.
Classes:
ConstantWeight
Constant weight function; the result is always 1.0.
ErfcStddevWeight
Gaussian Error Function Weight function, scaled using stddev.
ErfcWeight
Gaussian Error Function weight function, scaled such that the result is 0.1 at distance == max: erfc(1.1630871536766736 * distance / max). The value 1.1630871536766736 is erfcinv(0.1), to achieve the intended scaling.
ExponentialStddevWeight
Exponential weight function, scaled such that the result is 0.1 at distance == max: stddev * exp(-.5 * distance/stddev). This is similar to the Gaussian weight function, except distance/stddev is not squared.
ExponentialWeight
Exponential weight function, scaled such that the result is 0.1 at distance == max: exp(-2.3025850929940455 * distance/max). This is similar to the Gaussian weight function, except distance/max is not squared.
GaussStddevWeight
Gaussian weight function, scaled using the standard deviation: \( \frac{1}{\sqrt{2\pi}} \exp(-\frac{\text{dist}^2}{2\sigma^2}) \)
GaussWeight
Gaussian weight function, scaled such that the result is 0.1 at distance == max, using \( \exp(-2.3025850929940455 \frac{\text{dist}^2}{\max^2}) \).
InverseLinearWeight
Inverse Linear Weight Function.
InverseProportionalStddevWeight
Inverse proportional weight function, scaled using the standard deviation. 1 / (1 + distance/stddev)
InverseProportionalWeight
Inverse proportional weight function, scaled using the maximum. 1 / (1 + distance/max)
LinearWeight
Linear weight function, scaled using the maximum such that it goes from 1.0 to 0.1 1 - 0.9 * (distance/max)
QuadraticStddevWeight
Quadratic weight function, scaled using the standard deviation.
QuadraticWeight
Quadratic weight function, scaled using the maximum to reach 0.1 at that point.
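As an illustration of the scaling convention several of these classes share, here is a sketch (in Python, not the actual ELKI Java implementation) of the GaussWeight formula; the constant 2.3025850929940455 quoted above is ln(10), which forces the weight to exactly 0.1 at distance == max:

```python
import math

LN10 = 2.3025850929940455   # ln(10), as quoted in the Javadoc

def gauss_weight(distance: float, max_dist: float) -> float:
    # exp(-ln(10) * (d/max)^2): 1.0 at d = 0, 0.1 at d = max
    return math.exp(-LN10 * (distance / max_dist) ** 2)

assert gauss_weight(0.0, 5.0) == 1.0
assert abs(gauss_weight(5.0, 5.0) - 0.1) < 1e-12
```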
Copyright © 2019 ELKI Development Team. License information. |
Math Online's 3000th Page - 5:30 PM CST on August 6th, 2018 The Order of a Finite Group is the Product of the Orders of its Composition Factors
Theorem 1: Let $G$ be a finite group and let $\{ e \} = G_0 \leq G_1 \leq ... \leq G_n = G$ be a composition series of $G$. Then $\displaystyle{|G| = \prod_{i=0}^{n-1} |G_{i+1}/G_i|}$.

On The Jordan-Hölder Theorem page we proved that every finite group has a composition series.

Proof: We carry this proof out by induction on the length of the composition series. Let $P(n)$ be the statement that if $G$ is a finite group with a composition series of length $n$, then $|G|$ is the product of the orders of the composition factors of that composition series.

Base Step: Consider $P(1)$. Let $G$ be a group with a composition series of length $1$, say $\{ e \} = G_0 \leq G_1 = G$. Then $G_1/G_0 = G/\{ e \} \cong G$. So indeed:
\begin{align} \quad |G| = \prod_{i=0}^{0} |G_{i+1}/G_i| = |G_1/G_0| = |G/\{e\}| = |G| \end{align}
Induction Step: Suppose that for some $n > 1$ the statements $P(1)$, $P(2)$, …, $P(n-1)$ are true. Let $G$ be a finite group with a composition series of length $n$, say:
\begin{align} \quad \{ e \} = G_0 \leq G_1 \leq ... \leq G_n = G \end{align}
Without loss of generality we can assume that $G_{i+1} \neq G_i$ for all $0 \leq i \leq n-1$, since if $G_{i+1} = G_i$ then $|G_{i+1}/G_i| = 1$ and the term contributes nothing to the product. Under this assumption, notice that $G_{n-1}$ has composition series:
\begin{align} \quad \{ e \} = G_0 \leq G_1 \leq ... \leq G_{n-1} \end{align}
Since $G_{n-1} \leq G_n$ we have that $G_{n-1}$ is a finite group with the above composition series, and so by the induction hypothesis we have that:
\begin{align} \quad |G_{n-1}| = \prod_{i=0}^{n-2} |G_{i+1}/G_i| \end{align}
But then by Lagrange's Theorem, since $G_{n-1}$ is a subgroup of $G_n = G$ we have that:
\begin{align} \quad |G| = |G_{n-1}|[G:G_{n-1}] = \prod_{i=0}^{n-2}|G_{i+1}/G_i| \cdot \frac{|G_n|}{|G_{n-1}|} = \prod_{i=0}^{n-2} |G_{i+1}/G_i| \cdot |G_n/G_{n-1}| = \prod_{i=0}^{n-1} |G_{i+1}/G_i| \end{align}
So $P(n)$ is true. Conclusion:By the principle of mathematical induction we have that $P(n)$ is true for all $n \in \mathbb{N}$. $\blacksquare$ |
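Since $|G_{i+1}| = |G_{i+1}/G_i| \cdot |G_i|$, the product of factor orders telescopes. A quick numeric sketch of this telescoping, using the known subgroup orders of a composition series $\{e\} < C_2 < V_4 < A_4 < S_4$ of $S_4$ (chosen here purely for illustration):

```python
# Subgroup orders |G_0|, ..., |G_n| along a composition series of S_4.
orders = [1, 2, 4, 12, 24]
# Composition factor orders |G_{i+1}/G_i| = |G_{i+1}| / |G_i|.
factors = [orders[i + 1] // orders[i] for i in range(len(orders) - 1)]
product = 1
for f in factors:
    product *= f
print(factors, product)  # [2, 2, 3, 2] 24
```

The product recovers $|S_4| = 24$, as the theorem asserts.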
I have completed week 1 of Andrew Ng's course. I understand that the cost function for linear regression is defined as $J(\theta_0, \theta_1) = \frac{1}{2m}\sum (h(x)-y)^2$, and $h$ is defined as $h(x) = \theta_0 + \theta_1 x$. But I don't understand what $\theta_0$ and $\theta_1$ represent in the equation. Is someone able to explain this?
In the neural-network view, linear regression can be seen as a single-layer network: the weights between layers are theta0 and theta1. These weights and the input features undergo a dot-product operation, which is then the input to the activation function of the next layer's node or nodes. An apparently different but equivalent use of theta0 and theta1 is as coefficients of one or more terms which are themselves combinations of the input vectors. Broadly, theta-n denotes a weight, i.e. how much preference you want to give to a feature.
As said above, they are weights in your hypothesis function that are changed during training to minimize your error function. You can think of them like the slope and y-intercept in basic algebra. However, a linear regression hypothesis function can be parameterized by many more weight terms than just theta_0 and theta_1.
I detail this process more in this post: How does an activation function's derivative measure error rate in a neural network?
The prediction made by linear regression can simply be thought of as a vector dot product.
$$\overrightarrow{x}^T \cdot \overrightarrow{y}$$
One of those two vectors is the "data" for one case (like a row in your data matrix), the other is a vector of the model's parameters, which is usually called $\overrightarrow{\theta}$ or $\overrightarrow{\beta}$.
So in the case shown by yourself, we have:
$$h(x) = \theta_0 + \theta_1 \cdot x$$
Often we add a column of ones to the data matrix (a leading 1 in each input vector); that way we are consistent in the sense that $\theta_0 = 1 \cdot \theta_0$
This way we arrive at:
$$h(x) = \overrightarrow{\theta}^T \cdot \overrightarrow{x}$$
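As a concrete sketch (all parameter values made up for illustration), the hypothesis becomes a single dot product once a 1 is prepended to the input:

```python
import numpy as np

# theta0 is the intercept, theta1 the slope; prepending a 1 to x
# turns h(x) = theta0 + theta1*x into h(x) = theta^T x.
theta = np.array([2.0, 0.5])        # [theta0, theta1] — example values
x = np.concatenate(([1.0], [4.0]))  # input feature x = 4.0, with leading 1
h = theta @ x                       # h(x) = 2.0 + 0.5 * 4.0
print(h)  # 4.0
```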
The aim of reinforcement learning is to build an agent which plays games. The agent uses weights in its policy to adapt its decisions to the game. Learning means that the agent changes its weights, and the cost function defines how well the agent has adapted the weights to the problem. Linear regression is one method for determining the weights inside the policy; instead of linear regression, any other algorithm such as hill climbing, simulated annealing or brute force can be used. The general idea is that the agent adjusts its weights until it has won the game.
Andrew Ng's equation shouldn't be read as a tutorial on how to construct the agent; it is more a self-referential symbol in a proof workflow. That is, the formula is used together with many others to build a system of reasoning which has little to do with agent programming itself. |
Usually, for conditional MLE estimation, as you said, we compute the residuals starting from index lag+1 (p+1 for an AR(p) model). In this case we obtain the
Conditional MLE estimates:
$\hat{\theta} = \text{arg max} \sum_{t=p+1}^{T} \ln f(Y_{t} \mid Y_{t-1},...,Y_{t-p};\theta)$
where $f(Y_{t} \mid Y_{t-1},...,Y_{t-p};\theta)$ is the density of observation $Y_{t}$ conditional on its $p$ lags, and $\ln$ is employed to maximize the log-likelihood. The first $p$ residuals are then fixed as missing values, and we get a smaller number of usable residuals than observations (in this case you start the recursion algorithm from index p+1).
However, it is also possible to estimate your model with
Exact MLE estimates:
$\hat{\theta} = \text{arg max} \sum_{t=p+1}^{T} \ln f(Y_{t} \mid Y_{t-1},...,Y_{t-p};\theta) + \ln f(Y_{p},...,Y_{1};\theta)$
This requires fixing some pre-sample values (the $Y_{0},...,Y_{-p+1}$) in order to be able to run the model. Given that the AR model is stationary, you can fix these values to their sample mean or unconditional theoretical mean. In this case the first residuals are not missing values and you obtain the same number of usable residuals as observations (in this case you start the recursion algorithm from index 1).
Regarding standardized residuals $res_{std}$: they are simply the residuals from the model divided by the conditional standard deviation, $res_{std}= res /\sigma_{t}$; this requires estimating $\sigma_{t}$ via, for instance, a GARCH model. If you don't model the conditional variance, you use $res_{std}= res /\sigma$, where $\sigma$ is the unconditional volatility of the residuals obtained during estimation.
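A minimal sketch of the conditional approach for an AR(1), with assumed (not estimated) parameters c and phi, showing why T observations yield T-1 usable residuals:

```python
import numpy as np

# Example parameters and data — illustrative values only.
c, phi = 0.5, 0.8
y = np.array([1.0, 1.2, 1.5, 1.6, 1.9])

# Conditional residuals: the recursion starts at index p+1 = 2nd observation.
res = y[1:] - (c + phi * y[:-1])

# Unconditional standardization (no GARCH): divide by the residuals' volatility.
res_std = res / res.std()
print(len(y), len(res))  # 5 4
```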
See this nice paper for details.
EDIT
I found my answer not documented well enough, so this is an update. First, the MLE method for a simple AR(1) model is very well explained in the following passage (from: Hamilton, J. D. (1994). Time Series Analysis (1st edition). Princeton University Press, page 122):
[...]
So, as we see, the exact log-likelihood [equation 5.2.9] differs from the conditional log-likelihood [equation 5.2.27] in the sense that in exact MLE we maximize the full likelihood (that is why we call it exact), whereas in the conditional version we truncate the likelihood by dropping the marginal term: $-0.5 \log(2\pi) -0.5\log(\sigma^{2}/(1-\phi^{2}))- \frac{(y_{1}-c/(1-\phi))^2}{2\sigma^{2}/(1-\phi^2)}$
This marginal term replaces the conditional specification $Y_{1} = c+\phi Y_{0} + \epsilon_1$ by the unconditional mean $E(Y_{1}) = \frac{c}{1-\phi}$ |
I have the following question from Hull, problem 6.17:
On August 1 a portfolio manager has a bond portfolio worth $10 million. The duration of the portfolio in October will be 7.1 years. The December Treasury bond futures price is currently 91-12 and the cheapest-to-deliver bond will have a duration of 8.8 years at maturity. How should the portfolio manager immunize the portfolio against changes in interest rates over the next two months?
The solution is as follows:
The treasurer should short Treasury bond futures contracts. If bond prices go down, this futures position will provide offsetting gains. The number of contracts that should be shorted is:
$$\begin{align} Number\ of\ Contracts& =\frac{$\ 10\,000\,000}{\bbox[yellow, 5px,border:2px solid red]{$\ 91\,375}}\times\frac{7.1\ \text{years}}{8.8\ \text{years}} \newline & = 88.30 \ \text{contracts}\newline \therefore Number\ of\ Contracts &\approx 88\ \text{contracts}\end{align}$$
Question:
How was the $91,375 calculated? |
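For what it's worth, the figure likely comes from the quoting convention for Treasury bond futures: prices are quoted in points and 32nds of a point, as a percentage of face value. A sketch, assuming the standard $100,000 face value per contract:

```python
# "91-12" means 91 points plus 12/32 of a point, i.e. 91.375% of face value.
quote_points, quote_32nds = 91, 12
price_pct = quote_points + quote_32nds / 32     # 91.375
contract_price = price_pct / 100 * 100_000      # $91,375 per contract

# Duration-based hedge ratio from the problem statement.
n_contracts = 10_000_000 / contract_price * (7.1 / 8.8)
print(contract_price, round(n_contracts, 2))  # 91375.0 88.3
```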
A first remark
This same phenomenon of 'control' qubits changing states in some circumstances also occurs with controlled-NOT gates; in fact, this is the entire basis of eigenvalue estimation. So not only is it possible, it is an important fact about quantum computation that it is possible. It even has a name: a "phase kick", in which the control qubits (or more generally, a control register) incurs relative phases as a result of acting through some operation on some target register.$\def\ket#1{\lvert#1\rangle}$
The reason why this happens
Why should this be the case? Basically it comes down to the fact that the standard basis is not actually as important as we sometimes describe it as being.
Short version. Only the standard basis states on the control qubits are unaffected. If the control qubit is in a state which is not a standard basis state, it can in principle be changed.
Longer version —
Consider the Bloch sphere. It is, in the end, a sphere — perfectly symmetric, with no one point being more special than any other, and no one
axis more special than any other. In particular, the standard basis is not particularly special.
The CNOT operation is in principle a physical operation. To describe it, we often
express it in terms of how it affects the standard basis, using the vector representations$$ \ket{00} \to {\scriptstyle \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}}\,,\quad\ket{01} \to {\scriptstyle \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}}\,,\quad\ket{10} \to {\scriptstyle \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}}\,,\quad\ket{11} \to {\scriptstyle \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}}$$— but this is just a representation. This leads to a specific representation of the CNOT transformation:$$\mathrm{CNOT}\to{\scriptstyle \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}}\,.$$and for the sake of brevity we say that those column vectors are the standard basis states on two qubits, and that this matrix is a CNOT matrix.
Did you ever do an early university mathematics class, or read a textbook, where it started to emphasise the difference between a linear transformation and matrices — where it was said, for example, that a matrix could
represent a linear transformation, but was not the same as a linear transformation? The situation with CNOT in quantum computation is one example of how this distinction is meaningful. The CNOT is a transformation of a physical system, not of column vectors; the standard basis states are just one basis of a physical system, which we conventionally represent by $\{0,1\}$ column vectors.
What if we were to choose to represent a different basis — say, the X eigenbasis — by $\{0,1\}$ column vectors, instead? Suppose that we wish to represent $$\begin{aligned}\ket{++} \to{}& [\, 1 \;\; 0 \;\; 0 \;\; 0 \,]^\dagger\,,\\\ket{+-} \to{}& [\, 0 \;\; 1 \;\; 0 \;\; 0 \,]^\dagger\,,\\\ket{-+} \to{}& [\, 0 \;\; 0 \;\; 1 \;\; 0 \,]^\dagger\,,\\\ket{--} \to{}& [\, 0 \;\; 0 \;\; 0 \;\; 1 \,]^\dagger \,.\end{aligned}$$This is a perfectly legitimate choice mathematically, and because it is only a notational choice, it doesn't affect the physics — it only affects the way that we would write the physics. It is not uncommon in the literature to do analysis in a way equivalent to this (though it is rare to explicitly write a different convention for column vectors as I have done here). We would have to represent the standard basis vectors by:$$ \ket{00} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}}\,,\quad\ket{01} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \end{bmatrix}}\,,\quad\ket{10} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ 1 \\ -1 \\ -1 \end{bmatrix}}\,,\quad\ket{11} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix}}\,.$$Again, we're using the column vectors on the right
only to represent the states on the left. But this change in representation will affect how we want to represent the CNOT gate.
A sharp-eyed reader may notice that the vectors which I have written on the right just above are the columns of the usual matrix representation of $H \otimes H$. There is a good reason for this: what this change of representation amounts to is a change of reference frame in which to describe the states of the two qubits. In order to describe $\ket{++} = [\, 1 \;\; 0 \;\; 0 \;\; 0 \,]^\dagger$, $\ket{+-} = [\, 0 \;\; 1 \;\; 0 \;\; 0 \,]^\dagger$, and so forth, we have changed our frame of reference for each qubit by a rotation which is the same as the usual matrix representation of the Hadamard operator — because that same operator interchanges the $X$ and $Z$ observables, by conjugation.
This same frame of reference will apply to how we represent the CNOT operation, so in this shifted representation, we would have$$\begin{aligned}\mathrm{CNOT} \to \tfrac{1}{4}{}\,{\scriptstyle\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}\,\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\,\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}}\,=\,{\scriptstyle\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}}\end{aligned}$$which — remembering that the columns now represent $X$ eigenstates — means that the CNOT performs the transformation$$ \begin{aligned}\mathrm{CNOT}\,\ket{++} &= \ket{++} , \\\mathrm{CNOT}\,\ket{+-} &= \ket{--}, \\\mathrm{CNOT}\,\ket{-+} &= \ket{-+} , \\\mathrm{CNOT}\,\ket{--} &= \ket{+-} .\end{aligned} $$Notice here that it is
only the first, 'control' qubits whose state changes; the target is left unchanged.
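This claim is easy to check numerically: conjugating the usual CNOT matrix by $H\otimes H$ (i.e. expressing it in the X eigenbasis) gives a CNOT with control and target exchanged. A small numpy sketch:

```python
import numpy as np

# Hadamard and the standard-basis CNOT matrix.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
HH = np.kron(H, H)

# CNOT as represented in the X eigenbasis.
CNOT_x = HH @ CNOT @ HH

# CNOT with the roles of control and target reversed.
CNOT_reversed = np.array([[1, 0, 0, 0],
                          [0, 0, 0, 1],
                          [0, 0, 1, 0],
                          [0, 1, 0, 0]])
print(np.allclose(CNOT_x, CNOT_reversed))  # True
```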
Now, I could have shown this same fact a lot more quickly without all of this talk about changes in reference frame. In introductory courses on quantum computation in computer science, a similar phenomenon might be described without ever mentioning the words 'reference frame'. But I wanted to give you more than a mere calculation. I wanted to draw attention to the fact that a CNOT is in principle not just a matrix; that the standard basis is not a special basis; and that when you strip these things away, it becomes clear that the operation realised by the CNOT clearly has the potential to affect the state of the control qubit, even if the CNOT is the only thing you are doing to your qubits.
The very idea that there is a 'control' qubit is one centered on the standard basis, and embeds a prejudice about the states of the qubits that invites us to think of the operation as one-sided. But as a physicist, you should be deeply suspicious of one-sided operations.
For every action there is an equal and opposite reaction; and here the apparent one-sidedness of the CNOT on standard basis states is belied by the fact that, for X eigenbasis states, it is the 'target' which unilaterally determines a possible change of state of the 'control'.
You wondered whether there was something at play which was only a mathematical convenience, involving a choice of notation. In fact, there is: the way in which we write our states with an emphasis on the standard basis, which may lead you to develop a
non-mathematical intuition of the operation only in terms of the standard basis. But change the representation, and that non-mathematical intuition goes away.
The same thing which I have sketched for the effect of CNOT on X-eigenbasis states, is also going on in phase estimation, only with a different transformation than CNOT. The 'phase' stored in the 'target' qubit is kicked up to the 'control' qubit, because the target is in an eigenstate of an operation which is being coherently controlled by the first qubit. On the computer science side of quantum computation, it is one of the most celebrated phenomena in the field. It forces us to confront the fact that the standard basis is only special in that it is the one we prefer to describe our data with — but not in how the physics itself behaves. |
Separable Topological Spaces
Recall from the Dense and Nowhere Dense Sets in a Topological Space page that if $(X, \tau)$ is a topological space then a set $A \subseteq X$ is said to be dense if the intersection of $A$ with every open set (excluding the empty set) is nonempty, that is, for all $U \in \tau \setminus \{ \emptyset \}$ we have that:
\begin{align} \quad A \cap U \neq \emptyset \end{align}
Furthermore, we said that $A$ is said to be nowhere dense if the interior of the closure of $A$ is empty, that is:
\begin{align} \quad \mathrm{int} \left( \bar{A} \right) = \emptyset \end{align}
We will now classify a special type of topological space which contains a countable and dense subset.
Definition: Let $(X, \tau)$ be a topological space. Then $X$ is said to be Separable if $X$ contains a countable and dense subset.
Consider the topological space $(\mathbb{R}, \tau)$ where $\tau$ is the usual topology of open intervals on $\mathbb{R}$. Recall that the set of rational numbers $\mathbb{Q}$ is dense in $\mathbb{R}$. This is because any open set of $\mathbb{R}$ is either the empty set or the union of an arbitrary collection of open intervals, and every open interval contains a rational number, so the intersection of $\mathbb{Q}$ with any open interval is nonempty. Therefore, the intersection of $\mathbb{Q}$ with any nonempty open set $U \in \tau$ is nonempty, and so for all $U \in \tau \setminus \{ \emptyset \}$ we have that:
\begin{align} \quad \mathbb{Q} \cap U \neq \emptyset \end{align}
Furthermore the set of rational numbers is countable, so the subset $\mathbb{Q}$ of $\mathbb{R}$ is a countable and dense subset of $\mathbb{R}$ with respect to the topology $\tau$. Hence, $(\mathbb{R}, \tau)$ is a separable topological space. |
Part a) of this is fine, but I'm really stuck on part b) and I have a test on this in an hour's time. Does anyone have any hints?
I think the best place to start is to first give things names, e.g.
$$(a,b,c) \ast (\lambda, \mu, \nu) =: (x,y,z) $$
And then simplify the RHS of the expression
$$ F(x,y,z) = F(a,b,c) \cdot F(\lambda, \mu, \nu)$$
to get an expression for $x$, $y$ and $z$. You'll use part a) for this.
EDIT: $(\alpha, \beta, \gamma)$ were not the best choice of letters!
Good luck!
$(a\alpha^2+b\alpha+c)(d\alpha^2+e\alpha+f)=ad(\alpha^4)+(bd+ae)\alpha^3+(af+be+cd)\alpha^2+(bf+ce)\alpha+cf$
Then use the relation $\alpha^3=-3\alpha+24/5$ to reduce this to something like
$ad(-3\alpha^2+24/5\alpha)+(bd+ae)(-3\alpha+24/5)+(af+be+cd)\alpha^2+(bf+ce)\alpha+cf =(-3ad+af+be+cd)\alpha^2+(24ad/5-3bd-3ae+bf+ce)\alpha+(24bd/5+24ae/5+cf)$
which will give us a way to compose $(a,b,c)*(d,e,f)=(-3ad+af+be+cd,24ad/5-3bd-3ae+bf+ce,24bd/5+24ae/5+cf)$ |
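A quick numeric sanity check of this composition rule (a sketch: alpha is taken as the real root of $x^3+3x-24/5=0$, and the test triples are arbitrary):

```python
import numpy as np

# The real root of x^3 + 3x - 24/5 = 0 (the cubic has exactly one real root).
roots = np.roots([1, 0, 3, -24 / 5])
alpha = roots[np.isclose(roots.imag, 0)].real[0]

def F(p):
    # F(a, b, c) = a*alpha^2 + b*alpha + c
    a, b, c = p
    return a * alpha**2 + b * alpha + c

def star(p, q):
    # The composition rule (a,b,c)*(d,e,f) derived in the answer.
    a, b, c = p
    d, e, f = q
    return (-3*a*d + a*f + b*e + c*d,
            24*a*d/5 - 3*b*d - 3*a*e + b*f + c*e,
            24*b*d/5 + 24*a*e/5 + c*f)

p, q = (1.0, 2.0, 3.0), (-2.0, 0.5, 4.0)
print(np.isclose(F(p) * F(q), F(star(p, q))))  # True
```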
The reason behind the choice of $\lvert 0^{\otimes n}\rangle$ as reference state, found in many basic treatments of Grover's algorithm, is best understood by considering the technique that generalizes it: the so-called quantum amplitude amplification.
The goal of amplitude amplification is a very generic one: given some initial state $\lvert\psi_{in}\rangle$, we want to transform it into a state that belongs to a given target subspace $\mathcal H_{target}$. The initial state $\lvert\psi_{in}\rangle$ is assumed to be known, but $\mathcal H_{target}$ need not be (and can indeed be seen as the goal of the algorithm).
This is consistent with what you have in the special case of Grover's algorithm, with $\lvert\psi_{in}\rangle=\lvert+,\cdots,+\rangle=H^{\otimes n}\lvert 0,\cdots, 0\rangle$ and $\mathcal H_{target}=\mathbb C\lvert \psi_{target}\rangle$ one-dimensional, encoding the state that we are trying to find and given only indirectly via oracular access to a function $f$ such that $f(x)=1$ iff $\lvert x\rangle=\lvert\psi_{target}\rangle$, and $f(x)=0$ otherwise.
In the general amplitude amplification scheme, the way we get from the initial to the target space is via repeated application of a pair of reflections, $-S_i S_t$, where$$S_i\equiv 2\lvert \psi_{in}\rangle\!\langle \psi_{in}\rvert-I, \\S_t\equiv 2\lvert \psi_{t}\rangle\!\langle \psi_{t}\rvert-I,$$and we used the notation $\lvert\psi_t\rangle\simeq\sum_{x\in\mathcal H_{target}}\lvert x\rangle$.
As it turns out, the product of two reflections amounts to a rotation in state space:$$\newcommand{\ketbra}[1]{\lvert #1\rangle\!\langle #1\rvert}R\equiv-S_i S_t=-4\langle \psi_{in}\rvert\psi_{t}\rangle\,\lvert\psi_{in}\rangle\!\langle \psi_t\rvert+2(\ketbra{\psi_{in}}+\ketbra{\psi_t})-I,$$which brings the initial state $\lvert\psi_{in}\rangle$ closer to the target, as long as the initial overlap is not too big to begin with:$$R\lvert\psi_{in}\rangle=(1-4\lvert\langle\psi_{in}\rvert\psi_{t}\rangle\rvert^2 )\lvert\psi_{in}\rangle+2\langle\psi_t\rvert\psi_{in}\rangle\lvert\psi_t\rangle,$$$$\langle\psi_{in}\rvert R\lvert\psi_{in}\rangle=1-2\lvert\langle\psi_{in}\rvert\psi_{t}\rangle\rvert^2.$$You can then verify how repeated applications of $R$ get you closer and closer to the target.
This shows how there is nothing special about the choice of $\lvert0\rangle$ often used: what is needed is a pair of reflections, one with respect to the initial state and the other with respect to the projector over the target state/space.
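A small numeric sketch of the scheme (reflect about the target, then about the initial state, with an overall sign) on three qubits, with a uniform initial state and a single target basis state:

```python
import numpy as np

n, target = 3, 5                     # 3 qubits, target basis state |101>
N = 2 ** n
psi_in = np.ones(N) / np.sqrt(N)     # |+...+>
psi_t = np.zeros(N)
psi_t[target] = 1.0

# The two reflections and the resulting rotation.
S_i = 2 * np.outer(psi_in, psi_in) - np.eye(N)
S_t = 2 * np.outer(psi_t, psi_t) - np.eye(N)
R = -S_i @ S_t

state = psi_in
for _ in range(2):                   # ≈ (pi/4)·sqrt(N) iterations for N = 8
    state = R @ state
print(abs(state[target]) ** 2 > 0.9)  # True: amplitude concentrated on target
```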
Why does the operator corresponding to the phase shift in Grover's algorithm correspond to $2\ketbra0-I$?
Let $\lvert\psi\rangle$ be any state, and define $S\equiv 2\ketbra\psi-I$. Then,$$S\lvert\psi\rangle=2\lvert\psi\rangle-\lvert\psi\rangle=\lvert\psi\rangle,$$while for any $\lvert\phi\rangle$ such that $\langle\phi\rvert\psi\rangle=0$,$$S\lvert\phi\rangle=-\lvert\phi\rangle.$$ |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
I want to calculate risk-premiums in order to assess how much risk-averse customers would be willing to pay for an insurance against an uncertain loss modeled by a random variable $X$. How would a risk-premium be calculated if $X$ does not follow a normal distribution? My reading of the literature is as follows:
1) Expected utility theory: Assumes CARA (CRRA) like utility functions with well behaved properties that are more or less agreed to model the behavior of rational risk-averse decision makers by satisfying certain axioms. The risk-aversion of decision makers can be modelled by Arrow-Pratt with $\alpha=R(x)=-\frac{u''(x)}{u'(x)}$. Using Arrow-Pratt, one can calculate a risk-premium $\pi$ if $X$ is normally distributed such that $\pi(X)=\frac{\alpha}{2}\sigma^2(X)$, which derives from the formula of the certainty equivalent.
2) Downside risk measures from finance: Since losses (or returns) may not be normally distributed, finance has developed other measures to capture the concept of risk. One of these measures is the semi-variance as a special case of lower partial moments, which is very similar to the idea of mean-variance principle as outlined above: $SV=E((max[0,E(X)-X])^2)$.
Now the problem is as follows:
Aiming to calculate a risk-premium, I can follow expected utility theory: if $X$ is normally distributed we get a well-contained formula to calculate the risk-premium $\pi(X)$. If not, then what? Would the certainty equivalent be calculated based on numerical integration? If so, how would this look for an arbitrary distribution of $X$?
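One sketch of the numerical route (all names and parameter values here are illustrative assumptions): compute the certainty equivalent by Monte Carlo for an arbitrary, non-normal loss distribution under a CARA utility $u(x) = -e^{-ax}$:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.5                                   # Arrow-Pratt absolute risk aversion
wealth = 10.0
# A non-normal loss distribution — lognormal, chosen for illustration.
loss = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)

u = lambda x: -np.exp(-a * x)             # CARA utility
u_inv = lambda v: -np.log(-v) / a         # its inverse

eu = u(wealth - loss).mean()              # expected utility facing the risk
ce = u_inv(eu)                            # certain wealth with equal utility
risk_premium = (wealth - loss.mean()) - ce
print(risk_premium > 0)                   # True: a risk-averse agent pays a premium
```

The same recipe works for any distribution you can sample from (or integrate against numerically); only the closed-form shortcut $\pi = \frac{\alpha}{2}\sigma^2$ is specific to the normal case.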
On the other hand, I could follow a downside risk measure approach: given that I use semi-variance as a concept to measure risk, how could one account for the risk-aversion of customers in order to derive a risk-premium? If I just weighted the semi-variance with a factor, $\tau \cdot SV$ (with $\tau$ being some kind of proxy for risk-aversion), this wouldn't do the trick, would it? Essentially it would probably not satisfy the expected utility axioms the way CARA or CRRA utilities do, correct?
How to approach this problem then? Are there any alternative strategies that I have overlooked?
**Addition after update**
Now I have also found this post on the matter, which would give a solution to my problem - as far as I understand - since it allows $X$ to have a non-zero expected value. This is due to the fact that the Taylor approximation is - as it says - only an approximation of the risk premium with regard to various moments of the given distribution of $X$. This also explains why the risk-premium can be correctly calculated for the normal distribution, which is completely defined through its first and second moments.
This brings me to my last question: considering the risk premium for non-zero expected values, would this, according to the post listed above, also be approximated by the formula $\pi \approx \frac{1}{2}R(w)\sigma^{2}_X$? And does the approximation error for non-normal distributions then depend on how well the first and second moments capture the true nature of the distribution? |
MercuryDPM Trunk
Having explained in the previous section how to run a Mercury driver code, we next explain the form of the data output and describe how relevant information may be extracted from this data. Mercury produces data regarding a wide range of system parameters and, as such, there exist a variety of manners in which this data may be obtained and processed. This page is divided into two parts:
Each MercuryDPM executable produces three main output files, with the extensions ‘.data’, ‘.fstat’ and ‘.ene’.
For instance, building the source file example.cpp will create an executable named example. The output file name is set using DPMBase::getName; the MercuryDPM convention is that the output file names should be equal to the name of the source file. Thus, execution will create output files named ‘example.data’, ‘example.fstat’ and ‘example.ene’ (other files such as ‘example.restart’ and ‘example.stat’ might be created, which will be discussed in later sections).
The simplest of the three file types is the ‘.ene’ file, which allows us to interpret the time evolution of the various forms of energy possessed by the system. Data is written at predefined time steps, with the system’s total gravitational (‘ene_gra’) and elastic (‘ene_ela’) potential energies and translational (‘ene_kin’) and rotational (‘ene_rot’) kinetic energies being shown alongside the system’s centre of mass position in the x, y and z directions (‘X_COM’, ‘Y_COM’ and ‘Z_COM’, respectively).
At each time step, the data is output as follows:
time ene_gra ene_kin ene_rot ene_ela X_COM Y_COM Z_COM
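A minimal sketch of reading such a line in Python with the column order above (the sample values are made up; a real ‘.ene’ file would be read the same way by passing its path to numpy.loadtxt):

```python
import io
import numpy as np

cols = ["time", "ene_gra", "ene_kin", "ene_rot", "ene_ela",
        "X_COM", "Y_COM", "Z_COM"]
sample = "0.01 -10.0 5.0 0.5 0.5 0.5 0.5 0.2\n"   # illustrative values
data = np.loadtxt(io.StringIO(sample), ndmin=2)
row = dict(zip(cols, data[0]))

# Total energy as the sum of the four energy columns.
total_energy = row["ene_gra"] + row["ene_kin"] + row["ene_rot"] + row["ene_ela"]
print(total_energy)  # -4.0
```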
The data file is perhaps the most useful and versatile of the three, as it provides full information regarding the positions and velocities of all particles within the system at each given time step.
The files are formatted as follows: at each time step, a single line first states the number of particles in the system (N), the time at which the output was written, and the extent of the simulation domain. This first line is structured as below:
N, time, xmin, ymin, zmin, xmax, ymax, zmax
This output is then followed by a series of N subsequent lines, each providing information for one particle within the system. These parameters are output in the following order:
rx, ry, rz, vx, vy, vz, rad, alpha, beta, gamma, omex, omey, omez, info
For each particle, we are given its position, velocity, radius, orientation and angular velocity, in the order listed above; info represents an additional variable which can be specified by the user using DPMBase::getInfo.
The sequence of output lines described above is then repeated for each time step.
It should be noted that the above is the standard output required for three-dimensional data; for two-dimensional data, only six items of information are given in the initial line of each time step:
N, time, xmin, zmin, xmax, zmax
and eight in the subsequent N lines:
x, z, vx, vz, rad, qz, omez, xi
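A hedged sketch of parsing one 3D ‘.data’ time step in the layout described above (the sample lines are made up; a real file repeats this block once per time step):

```python
def read_data_timestep(lines):
    """Parse one time-step block: a header line followed by N particle lines."""
    header = lines[0].split()
    n = int(header[0])                      # N: number of particles
    time = float(header[1])
    box = [float(v) for v in header[2:8]]   # xmin, ymin, zmin, xmax, ymax, zmax
    particles = [[float(v) for v in line.split()] for line in lines[1:1 + n]]
    return n, time, box, particles

sample = [
    "2 0.05 0 0 0 1 1 1",
    "0.1 0.2 0.3 0 0 0 0.05 0 0 0 0 0 0 0",
    "0.6 0.7 0.8 0 0 0 0.05 0 0 0 0 0 0 0",
]
n, time, box, particles = read_data_timestep(sample)
print(n, len(particles))  # 2 2
```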
The ‘.fstat’ file is predominantly used to calculate stresses.
The .fstat output files follow a similar structure to the .data files; for each time step, three lines are initially output, each preceded by a ‘hash’ symbol (#). These lines are designated as follows:
# time, fstatVersion # xmin ymin zmin xmax ymax zmax #
where time is the current time step and fstatVersion the version number of the output format (currently one, as it has been changed once). The values provided in the second and third lines ensure backward compatibility with earlier versions of Mercury, but are not used for analysis.
This initial information is followed by a series of N_c lines, one for each of the contacts, with the following entries:
time, i, j, cx, cy, cz, delta, deltat, fn, ft, nx, ny, nz, tx, ty, tz
Here i indicates the number used to identify a given particle and j similarly identifies its contact partner.
delta represents the overlap between the two particles and deltat the length of the tangential spring. The force on particle i is split into a normal and a tangential component, \(\vec{f}={\tt fn}\cdot\hat{n}+{\tt ft}\cdot\hat{t}\), where \(\hat{n}=({\tt nx}, {\tt ny}, {\tt nz})\) and \(\hat{t}=({\tt tx}, {\tt ty}, {\tt tz})\) denote unit vectors normal and tangential to the contact plane, respectively.
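A small sketch of reconstructing the contact force vector from one ‘.fstat’ contact line, with made-up numbers for fn, ft and the two unit vectors:

```python
import numpy as np

# f = fn * n_hat + ft * t_hat — illustrative values only.
fn, ft = 2.0, 0.5
n_hat = np.array([0.0, 0.0, 1.0])   # (nx, ny, nz): contact normal
t_hat = np.array([1.0, 0.0, 0.0])   # (tx, ty, tz): contact tangent
f = fn * n_hat + ft * t_hat
print(f)  # [0.5 0.  2. ]
```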
We begin by discussing the manner in which Mercury data can simply be ‘visualised’ - i.e. a direct, visual representation of the motion of all particles within the system produced.
ParaView may be downloaded from http://www.paraview.org/download/ and installed by following the relevant instructions for your operating system. On Ubuntu, it can simply be installed by typing
sudo apt-get install paraview
In order to visualise the data using ParaView, the data2pvd tool can be used to convert the ‘.data’ files output by Mercury into a ‘.pvd’ ParaView datafile and several VTK (.vtu) files. We will now work through an example, using data2pvd to visualise a simple data set produced using the example code ChuteDemo. From your build directory, go to the ChuteDemos directory:
and run the ChuteDemo code:
Note: if the code does not run, it may be necessary to first build the code by typing:
Once the code has run, you will have a series of files; for now, however, we are only interested in the '.data' files.
data2pvd
Since data2pvd creates numerous files, it is advisable to output these to a different directory. First, we will create a directory called chute_pvd:
We will then tell data2pvd to create the files in the directory:
In the above, the first of the three terms should give the path to the directory in which the
data2pvd program is found (for a standard installation of Mercury, the path will be exactly as given above); the second is the name of the data file (in the current directory) which you want to visualise; the third gives the name of the directory into which the new files will be output (‘chute_pvd’) and the name of the files to be created ('chute').
Once the files have been successfully created, we now start ParaView by simply typing:
Which should load a screen similar to the one provided below:
Note: for Mac users, ParaView can be opened by clicking 'Go', selecting 'Applications' and opening the file manually.
The next step is to open the file by pressing the folder icon circled in the above image and navigating to the relevant directory using the panel shown below.
Here, you can choose to open either the `.pvd' file, which loads the entire simulation, or the '.vtu' file, which allows the selection of a single timestep.
For this tutorial, we will select the full file - ’chute.pvd'.
On the left side of the ParaView window, we can now see chute.pvd below the ‘builtin’ item in the Pipeline Browser.
Click ‘Apply' to load the file into the pipeline.
Now we want to actually draw our particles. To do so, open the 'filters' menu at the top of the ParaView window (or, for Mac users, at the top of the screen) and then, from the drop-down list, select the 'common' menu and click 'Glyph'.
In the current case, we want to draw all our particles, with the correct size and shape. In the left-hand menu, select 'Sphere' for the ‘Glyph Type’, 'scalar' as the Scale Mode (under the ‘Scaling’ heading) and enter a Scale Factor of 2.0 (Mercury uses radii, while ParaView uses diameters).
Select 'All Points' for the ‘Glyph Mode’ under the ‘Masking’ heading to make sure all of our particles are actually rendered. Finally press 'Apply' once again to apply the changes.
In order to focus on our system of particles, click the 'Zoom to data' button circled in the image above.
The particles can then be coloured according to various properties; for the current tutorial, we will colour our particles according to their velocities. To do this, with the 'Glyph1' stage selected, scroll down in the properties menu until you find 'Colouring' and select the 'Velocities' option.
The colouring can be rescaled for optimal clarity by pressing the ‘Rescale' button in the left hand menu under the ‘Colouring’ heading.
We are now ready to press the 'play' button near the top of the screen and see the system in motion!
The ParaView program has endless possibilities and goes way beyond the scope of this document. Please consult the ParaView documentation for further information.
In the MercuryCG folder (MercuryDPM/MercuryBuild/Drivers/MercuryCG), type 'make fstatistics' to compile the 'fstatistics' analysis package.
For information on how to operate fstatistics, type './fstatistics -help'.
The Mercury analysis packages are due to be upgraded in the upcoming Version 1.1, at which point full online documentation and usage instructions will be uploaded.
If you experience problems in the meantime, please do not hesitate to contact a member of the Mercury team. |
We assume an RSA signature scheme with appendix where the signature of message $M$ is $S=\left(\operatorname{MD5}(M)\right)^d\bmod N$, and the verification procedure checks that $0\le S<N$ and $\left(S^e\bmod N\right)=\operatorname{MD5}(M)$, with $e=3$ (or other relatively small odd $e\ge3$). Eve somewhat got $k$ rightful signatures $S_i$ and perhaps the corresponding messages $M_i$ (which Eve could not influence). Eve wants to construct another $M$, and matching signature $S$.
Eve will make a multiplicative forgery: she'll find a message $M$ and a matching set of $k$ non-negative integers $e_i$, such that $\operatorname{MD5}(M)\cdot\prod\left(\operatorname{MD5}(M_i)\right)^{e_i}$ is an $e$th power, then compute the signature of $M$ as
$$S=\left(\sqrt[e]{\operatorname{MD5}(M)\cdot\prod\left(\operatorname{MD5}(M_i)\right)^{e_i}}\right)\cdot\left(\prod S_i^{e_i}\right)^{-1}\bmod N$$
which verifies $\left(S^e\bmod N\right)=\operatorname{MD5}(M)$.
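To make the algebra concrete, here is a toy instance of the forgery in Python, with a tiny made-up RSA key ($N=55$, $e=3$) and small integers standing in for the MD5 hashes; every number here is an illustrative assumption, not a real parameter:

```python
# Toy multiplicative forgery (hypothetical tiny numbers; a real attack uses
# 128-bit hash values and a full-size modulus N).

# Tiny RSA key: N = 5*11, e = 3, d = 27 (since 3*27 = 81 ≡ 1 mod phi = 40).
N, e, d = 55, 3, 27

# Stand-ins for MD5(M_1), MD5(M_2): H1 = 2^2 * 3, H2 = 2 * 3.
H1, H2 = 12, 6
S1, S2 = pow(H1, d, N), pow(H2, d, N)   # legitimate signatures

# Eve targets H = MD5(M) = 3 = 2^0 * 3^1.  With e1 = e2 = 1 the prime
# multiplicities satisfy (0+2+1, 1+1+1) ≡ (0, 0) mod e, so
# H * H1 * H2 = 216 = 6^3 is a perfect e-th power.
H = 3
root = 6                                # integer cube root of H * H1 * H2
assert root ** e == H * H1 * H2

# Forged signature: e-th root times the inverse of the known signatures
# (three-argument pow with exponent -1 is the modular inverse, Python 3.8+).
S = root * pow(S1 * S2, -1, N) % N
assert pow(S, e, N) == H                # verification passes
print(S)                                # → 42
```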
Define $m_{i,j}$ as the multiplicity of prime $p_j$ in the factorization of $\operatorname{MD5}(M_i)$, and define $m_j$ as the multiplicity of prime $p_j$ in the factorization of $\operatorname{MD5}(M)$. The goal of Eve is that $\forall j,\; m_j+\sum_i m_{i,j}\cdot e_i\equiv0\pmod e$. That linear system of equations with unknowns $e_i$ is equivalent to $\operatorname{MD5}(M)\cdot\prod\left(\operatorname{MD5}(M_i)\right)^{e_i}$ being an $e$th power.
Eve computes the hashes $H_i=\operatorname{MD5}(M_i)$, directly or as $S_i^e\bmod N$. She factors the $H_i$ at least partially (with $H_i<2^{128}$, even complete factorization is feasible). She can ignore any $H_i$ with a prime factor $p_j$ not appearing in the other $H_i$ [and $m_{i,j}\not\equiv0\pmod e$, but that is likely for $k$ large enough to carry the attack]; in particular she can, without losing much, ignore those $H_i$ with a prime factor larger than about $k^3/\log(k)$, which are unlikely to be of any help.
Outline of the rest: Eve repeatedly
- selects a message $M$ of her choice [that she did not previously select, and distinct from the $M_i$ if these are given]
- computes $\operatorname{MD5}(M)$ and factors it at least partially
- if that factorization consists entirely of primes occurring in the factorization of at least one of the $H_i$ kept [in that screening Eve could exclude primes with multiplicity $m_j\equiv0\pmod e$ in the factorization of $\operatorname{MD5}(M)$, and occurrences with multiplicity $m_{i,j}\equiv0\pmod e$ in the $H_i$, but that won't make much of a difference for $k$ large enough to carry the attack], attempts to solve the linear system
- if that works, computes $S$, noting that the $e$th root extraction reduces to dividing the exponents by $e$ in the known factorization of $\operatorname{MD5}(M)\cdot\prod\left(\operatorname{MD5}(M_i)\right)^{e_i}$
- outputs $M$ and $S$.
It will help to have preprocessed the system of linear equations. For larger $k$, solving the linear system will succeed for a large proportion of $M$ having passed the screening; or/and it will be possible to put an upper bound on the $p_j$ early on, making the factorization easier and the linear system smaller, thus easier to manage.
A small $e$ helps the attack, but with a large-enough $k$ it can be carried for any $e$. The size of the public modulus $N$ of the RSA key is essentially immaterial; what matters most is the width of the hash, which at 128-bit is grossly insufficient.
A slightly simpler variant of the problem (where all the messages are chosen, which is moot for a hash without collision resistance as $\operatorname{MD5}$ is nowadays) is discussed by Jean-Sébastien Coron, David Naccache and Julien P. Stern in section 2 of:
On the Security of RSA Padding (in proceedings of Crypto 1999); or, when we set $\mu$ to $\operatorname{MD5}$, by Don Coppersmith, Jean-Sébastien Coron, François Grieu, Shai Halevi, Charanjit Jutla, David Naccache, and Julien P. Stern in section 3 of: Cryptanalysis of ISO/IEC 9796-1 (in Journal of Cryptology, 2008). The idea of a building coefficients by solving a linear system based on prime multiplicity was introduced by Yvo Desmedt and Andrew M. Odlyzko in A chosen text attack on the RSA cryptosystem and some discrete logarithm schemes (in proceedings of Crypto 1985). |
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly:
Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints.
Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints.
Today we'll conclude our discussion of Chapter 1 with two more bombshells:
Joins are left adjoints, and meets are right adjoints.
Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down.
This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world!
Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders.
In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets.
Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes.
Suppose \(A\) is a poset with all binary joins. Then we get a function
$$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows:
$$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that
$$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal
$$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\).
Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact:
$$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression on the left here, and applying \( \Delta \) to \( b \) in the expression on the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \).
Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \). Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \).
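If you like to check such things by computer, here is a small Python sketch of the second fact in Puzzle 45 on a concrete poset of my own choosing (not from the lecture): the divisors of 60 ordered by divisibility, where the join \(a \vee b\) is \(\mathrm{lcm}(a,b)\):

```python
from math import gcd

# Poset: divisors of 60, with a ≤ b meaning "a divides b"; join = lcm.
divisors = [d for d in range(1, 61) if 60 % d == 0]
leq = lambda a, b: b % a == 0
join = lambda a, b: a * b // gcd(a, b)

# Adjunction condition:  a ∨ b ≤ c  if and only if  a ≤ c and b ≤ c.
for a in divisors:
    for b in divisors:
        for c in divisors:
            assert leq(join(a, b), c) == (leq(a, c) and leq(b, c))
print("join ⊣ diagonal verified on the divisors of 60")
```

Of course an exhaustive check on one finite poset is not a proof, but it is a pleasant way to see the adjunction \( \vee \dashv \Delta \) in action.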
A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function
$$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying
$$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check.
Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset, possibly an infinite number.
All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on.
Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by
$$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short.
I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason to.
Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset.

Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again.

Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \).

Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \).

Puzzle 51. Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}} \).
So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs.
Once you start looking you can find duality everywhere, from ancient Chinese philosophy:
to modern computers:
But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality!
This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises. |
$\mathbf{Question:}$ $(H,+)$ is a subgroup of $(\mathbb{R},+)$ such that $H \cap [-1,1]$ is finite and contains elements other than $0$. Show that $(H,+)$ must be cyclic.
$\mathbf{Attempt:}$ Since $\{0\}\cup T = H \cap [-1,1]$ is finite (with $0 \not\in T$ and $T \neq \emptyset$), we can completely enumerate $T$. Let $T =\{a_1, a_2, \ldots, a_m\}$ be the complete list of elements. Let $Q= \{a_i \in T: a_i>0 \}$; note $Q \neq \emptyset$, since if $a \in T$ then $-a \in T$, so $T$ contains a positive element. Let $a_t$ be the minimal element of $Q$.
Claim: $H=\langle a_t \rangle $.
Suppose the claim is false. Then there is some $h \in H$ with $h \notin \langle a_t \rangle$; we may assume $h > 0$ (otherwise replace $h$ by $-h$).
So we can find a positive integer $p$ such that $p a_t < h < (p+1)a_t$ (the inequalities are strict because $h \notin \langle a_t \rangle$), which gives $0 < h - p a_t < a_t \le 1$. But $h - p a_t \in H \cap (0,1] \subseteq Q$, which contradicts the fact that $a_t$ is the minimal element of $Q$.
Is this Proof valid? Kindly verify. |
Every Group of Order pq is Solvable
Proposition 1: Let $G$ be a group of order $pq$ where $p$ and $q$ are primes. Then $G$ is solvable.

Proof: Let $G$ be a group of order $pq$ for some primes $p$ and $q$.

Case 1: Suppose that $p = q$. Then $G$ is a group of order $p^2$. From the result on the Every Group of Order p^2 is Abelian page we have that $G$ is an abelian group. But every abelian group is a solvable group, and so $G$ is solvable.

Case 2: Suppose that $p \neq q$. Without loss of generality, assume that $p > q$. Let $n_p$ denote the number of Sylow $p$-subgroups of $G$ and write $n = |G| = pq$. Since $G$ is a finite group with $p \mid n = pq$ and $p \nmid q$ (as $p$ and $q$ are distinct primes with $p > q$), we have by The Third Sylow Theorem on The Sylow Theorems page that:
\begin{align} \quad n_p &\equiv 1 \pmod p \\ \quad n_p &\mid q \end{align}
Since $n_p \mid q$ and since $q$ is prime, we have that either $n_p = 1$ or $n_p = q$. If $n_p = q$ then from the above congruence we have that $q \equiv 1 \pmod p$, i.e., $p \mid (q - 1)$. So $p \leq q - 1$, a contradiction since $p > q$. Thus we must have that $n_p = 1$.

So $G$ has only one Sylow $p$-subgroup. Let $G_1$ denote this Sylow $p$-subgroup. By Lagrange's Theorem, the only possible orders of subgroups of $G$ are $1$, $p$, $q$, and $pq$. So the Sylow $p$-subgroup $G_1$ must have order $p$. Since $G$ is a finite group and $G_1$ is the only subgroup of $G$ with order $p$, we have by the result on the Subgroups of Finite Groups with Unique Order are Normal Subgroups page that $G_1$ is a normal subgroup of $G$. Let:
\begin{align} \quad \{ e \} = G_0 \leq G_1 \leq G_2 = G \quad (*) \end{align}
Then $G_0$ is trivially a normal subgroup in $G_1$ and $G_1$ is a normal subgroup of $G$ from the remarks made above. Furthermore, the composition factors of the above chain are $G_1/G_0 = G_1/\{ e \} \cong G_1$, which is abelian since $G_1$ is a group of prime order $p$ (and is thus cyclic $\Rightarrow$ abelian). On the other hand, we see that $|G_2/G_1| = |G/G_1| = |G|/|G_1| = pq/p = q$. So $G_2/G_1$ is a group of prime order $q$ and is thus also cyclic and hence abelian. So the chain at $(*)$ is such that $G_i$ is a normal subgroup of $G_{i+1}$ for all $0 \leq i \leq 1$ and $G_{i+1}/G_i$ is abelian for all $0 \leq i \leq 1$. Thus $G$ is a solvable group. $\blacksquare$
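As a sanity check (not part of the proof), the Sylow counting step can be verified numerically in Python; `sylow_p_counts` below is a hypothetical helper listing the divisors of $q$ that satisfy the congruence:

```python
# Possible values of n_p for |G| = pq: n_p must divide q and be ≡ 1 (mod p).
def sylow_p_counts(p, q):
    divisors_of_q = [n for n in range(1, q + 1) if q % n == 0]
    return [n for n in divisors_of_q if n % p == 1]

# With p > q the congruence forces n_p = 1, exactly as in Case 2 above:
assert sylow_p_counts(5, 3) == [1]    # |G| = 15
assert sylow_p_counts(7, 5) == [1]    # |G| = 35

# For contrast, with p < q the count is not forced: for |G| = 6,
# n_2 may be 1 or 3 (and S_3 indeed has three Sylow 2-subgroups).
assert sylow_p_counts(2, 3) == [1, 3]
```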
Update: Added discussion on launch angle at the end of the post. Edit: The final numbers in this post went through a few rounds of revision. What is the world coming to, when you have to track down missing factors of 2 in your blog posts?!
This week, I’m looking at the strategies and mechanisms by which different animals solve the problem of getting around. I started off by writing about how birds and aquatic animals conserve energy on-the-go. This post is another spinoff on the theme of locomotion.
Here’s a clip from one of my favorite documentaries, David Attenborough’s
Life of Mammals. It shows the incredible sifaka lemur of Madagascar, a primate that has a really remarkable way of getting around. (If the embed doesn’t work, you can watch it here)
As they launch out from the trees, they almost look like they’re defying gravity. And so, taking inspiration from Dot Physics, I thought it might be interesting to put physics to use and analyze the flight of the sifaka.
I loaded the above video into Tracker, a handy open source video analysis software. I can then use Tracker to plot the motion of the sifaka. I chose to analyze the jump at about 21 seconds in. I like this shot because it isn’t in slow motion (that messes up the physics), the camera is perfectly still (we expect no less from Attenborough’s crew), and the lemur is leaping in the plane of the camera (there are no skewed perspective issues that would be a pain to deal with). The whole jump lasts under a second, but at 30 frames per second, there should be plenty of data points.
This is what it looks like when you track the sifaka’s motion:
The red dots are the position of the sifaka at every frame. That’s the data. In order to analyze it, we need to set a scale on the video. I drew this yellow line as a reference for 1 unit of size (call it 1 sifaka long). And how big is that?
If we believe this picture that I found on the National Geographic website, then a sifaka is about half the size of this folded arms dude.
Now, to the physics…
While the sifaka flies through the air, the only force acting on it is gravity, which points downwards. So the acceleration of the lemur should also be downwards. (I’m ignoring air resistance. We’ll find out if this is a good idea.)
If we plot its horizontal motion, it should be moving at a fixed speed, with no acceleration. But its vertical motion will give away its acceleration.
This is what we get if we plot the horizontal position of all the points with respect to time.
$latex x = x_0 + v_x t$
I was amazed by how well they agree, since I expected air resistance to matter a little more. I guess ignoring air resistance is a pretty good approximation.
We find that there’s a straight line relationship between position and time, which implies that the sifaka moves at a constant speed in the horizontal direction. The slope of this line ($latex v_x$) has units of meters/second (or in our case sifaka/second) and is the speed of the sifaka.
What about the vertical direction? Well, it certainly can’t be a straight line relationship with time, because at some point the sifaka turns and comes back down. Here is what the plot looks like:
The small squares are the vertical positions of the dots plotted versus time, and the red curve is the plot of an equation for a parabola
$latex y = y_0 + v_y t + \frac{1}{2} a t^2$
Here $latex v_y$ is the vertical launch speed, $latex a$ is acceleration, and $latex t$ is time.
So, over time, the vertical position traces out a parabola, which is a characteristic shape for motion under a fixed acceleration (in this case, the earth is accelerating the lemur downwards). The nice thing about analyzing motion is that we can analyze the horizontal and vertical motion independently of each other.
The fit to the parabola is not great, but it's not too shabby either. I suspect the main reason for the discrepancy is that it's hard to track the center of mass of the sifaka, and if you choose any other place on the sifaka, you'll also be tracking the spin of the sifaka about its center of mass.
By solving for the values of $latex a$, $latex v_y$ and $latex v_x$ that best match the data, we get the launch speed and acceleration of the lemur.
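As a sketch of what this fitting amounts to, here is a little Python snippet, with synthetic, noise-free data standing in for the Tracker points (the 30 fps frame rate and the fitted launch values are taken from the post; the frame count is my own choice), showing how the acceleration pops out of the second differences of the vertical positions:

```python
# Recovering the acceleration from tracked positions (synthetic data;
# units are sifaka and seconds, as in the post).
fps = 30.0
dt = 1.0 / fps
v_y, a = 4.84, -16.92                        # the fitted launch values

# Noise-free vertical positions y(t) = v_y*t + (1/2)*a*t^2 at each frame.
y = [v_y * (i * dt) + 0.5 * a * (i * dt) ** 2 for i in range(25)]

# For exact parabolic data, the second difference recovers a*dt^2 exactly:
# y[i+1] - 2*y[i] + y[i-1] = a*dt^2, since the linear terms cancel.
a_est = [(y[i + 1] - 2 * y[i] + y[i - 1]) / dt ** 2 for i in range(1, 24)]
assert all(abs(ai - a) < 1e-9 for ai in a_est)
```

Real tracked points are noisy, of course, which is why a least-squares fit to the whole parabola (as Tracker does) beats frame-by-frame differencing in practice.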
To be a little more empirical about things, I did this analysis twice, and averaged the results. Here’s what I got:
**Horizontal launch speed:** $latex v_x = 6.97 \textrm{ sifaka}/\textrm{second}$
**Vertical launch speed:** $latex v_y = 4.84 \textrm{ sifaka}/\textrm{second}$
**Vertical acceleration:** $latex a = - 16.92 \textrm{ sifaka}/\textrm{second}^2$
The negative sign on the acceleration indicates that gravity is pulling the sifaka downwards (in the negative y direction). So far things look good qualitatively, but do the numbers work out?
Well, according to National Geographic, the tail of a sifaka is 46 cm, whereas according to Wikipedia it is 50 to 60 cm. Let's go with 50 cm on average. The length scale I drew in Tracker is about the length of the sifaka's tail. So we can set
1 sifaka = 0.5 meters.
That gives us a value of $latex -8.46 \textrm{ m}/\textrm{s}^2$ for the acceleration caused by gravity, which is within 16% of the known result of $latex -9.8 \textrm{ m}/\textrm{s}^2$. I think that’s pretty darn good for a first stab at video analysis, especially as the sifaka was a blur in each frame and often obscured by trees.
Next, we can use Pythagoras’ theorem in the above velocity triangle to solve for the total launch speed
$latex v^2 = v_x^2 + v_y^2$
where $latex v_x = 3.49 \textrm{ m/s}$ and $latex v_y = 2.42 \textrm{ m/s}$ are the horizontal and vertical components of velocity.
**This gives a launch speed of 4.25 meters per second, or 9.5 miles per hour (15.3 km/h).** This speed sounds reasonable to me, as it's about how fast your typical bicycle moves. If we include a fudge factor that fixes our acceleration to the known result, then the launch speed is actually faster by 16%.
Update: added discussion on launch angle.
We can also solve for the launch angle of the sifaka, by using some high-school trigonometry on the triangle:
$latex \tan \theta = v_y/v_x$
Solving for the angle $latex \theta$ gives 34.7 degrees.
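In Python, the speed and angle calculations look like this (using the per-run component values quoted above):

```python
from math import atan2, degrees, hypot

v_x, v_y = 3.49, 2.42                 # m/s, from the fits above
v = hypot(v_x, v_y)                   # Pythagoras: total launch speed
theta = degrees(atan2(v_y, v_x))      # launch angle from tan(theta) = v_y/v_x

print(round(v, 2), round(theta, 1))   # → 4.25 34.7
```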
Is this angle correct? Fortunately, Tracker has a handy built in protractor, so we can check it. Marking out the initial leap for both runs, I get an average launch angle of 34.5 degrees.
Which agrees to within half a percent of our result inferred from the physics!! Eerily accurate..
It’s a bit of a coincidence that the result is as close as it is, given the many possible sources of error. However, one reason why this result is so accurate is that the angle comes from a ratio $latex v_y/v_x$, and so common sources of error (such as error in estimating the length of a sifaka) end up cancelling out. This is also why physicists prefer to measure ratios, rather than numbers that have units (they call such quantities dimensionless).
And there you have it folks, SCIENCE being put to use to answer the burning questions that keep you up at night.
If you want to read more about how the sifakas glide, Darren Naish has a detailed post describing research on the physics of this. |
Integer programming algorithms minimize or maximize a linear function subject to equality, inequality, and integer constraints. Integer constraints restrict some or all of the variables in the optimization problem to take on only integer values. This enables accurate modeling of problems involving discrete quantities (such as shares of a stock) or yes-or-no decisions. When there are integer constraints on only some of the variables, the problem is called a mixed-integer linear program. Example integer programming problems include portfolio optimization in finance, optimal dispatch of generating units (unit commitment) in energy production, and scheduling and routing in operations research.
Integer programming is the mathematical problem of finding a vector \(x\) that minimizes the function:
\[\min_{x} \left\{f^{\mathsf{T}}x\right\}\]
Subject to the constraints:
\[\begin{eqnarray}Ax \leq b & \quad & \text{(inequality constraint)} \\A_{eq}x = b_{eq} & \quad & \text{(equality constraint)} \\lb \leq x \leq ub & \quad & \text{(bound constraint)} \\ x_i \in \mathbb{Z} & \quad & \text{(integer constraint)} \end{eqnarray}\]
Solving such problems typically requires using a combination of techniques to narrow the solution space, find integer-feasible solutions, and discard portions of the solution space that do not contain better integer-feasible solutions. Common techniques include:
Cutting planes: Add additional constraints to the problem that reduce the search space.

Heuristics: Search for integer-feasible solutions.

Branch and bound: Systematically search for the optimal solution. The algorithm solves linear programming relaxations with restricted ranges of possible values of the integer variables.
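As a minimal sketch of the problem statement itself (not of a real solver; actual MILP codes use relaxations, cutting planes, and branch and bound rather than enumeration), here is a tiny integer program with made-up coefficients, solved by brute force in Python:

```python
from itertools import product

# Toy integer program:
#   minimize   f^T x = -3*x0 - 2*x1
#   subject to x0 + x1 <= 4          (inequality constraint, Ax <= b)
#              0 <= x0, x1 <= 3      (bound constraint)
#              x0, x1 integer        (integer constraint)
f = [-3, -2]
best_x, best_val = None, float("inf")
for x in product(range(4), repeat=2):          # enumerate the integer box
    if x[0] + x[1] <= 4:                       # feasibility check
        val = sum(fi * xi for fi, xi in zip(f, x))
        if val < best_val:
            best_x, best_val = x, val
print(best_x, best_val)                        # → (3, 1) -11
```

Enumeration only works because the feasible set here has 16 points; the techniques listed above exist precisely because real problems have astronomically many integer combinations.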
For more information on integer programming, see Optimization Toolbox™. |
Difference between revisions of "Polarization Mixing Due to Feed Rotation"
== AzEl Axis Offset ==
[[File:3C84_cos-el-rotation.png|right|thumb|300px|'''Fig. 9:''' Observations and simulation of amplitude and phase on 3C84 for baseline Ant 1-14, where a phase shift proportional to <math>{2\pi\over\lambda} \cos E</math> is applied. The agreement is reasonably good except for some curvature, which could be residual baseline error.]]
Revision as of 14:09, 9 July 2017

Explanation of Polarization Mixing
The newer 2.1-m antennas [Ants 1-8 and 12] have AzEl (azimuth-elevation) mounts (also referred to as AltAz; the terms Altitude and Elevation are used synonymously), which means that their crossed linear feeds have a constant angle relative to the horizon (the axis of rotation being at the zenith). The older 2.1-m antennas [Ants 9-11 and 13], and the 27-m antenna [Ant 14], have Equatorial mounts, which means that their crossed linear feeds have a constant angle with respect to the celestial equator, the axis of rotation being at the north celestial pole. Thus, the celestial coordinate system is tilted by the local co-latitude (complement of the latitude). This tilt results in a relative feed rotation between the 27-m antenna and the AzEl mounts, but not between the 27-m and the older equatorial mounts. This angle is called the "parallactic angle," and is given by:
where is the site latitude, is the Azimuth angle [0 north], and is the Elevation angle [0 on horizon]. This function obviously changes with position on the sky, and as we follow a celestial source (e.g. the Sun) across the sky this rotation angle is continuously changing in a surprisingly complex manner as shown in
Figure 1. Note that at zero hour angle for declinations less than the local latitude (37.233 degrees at OVRO), but is at higher declinations.
The crossed linear dipole feeds on all antennas are oriented with the X-feed as shown in
Figure 2, at 45-degrees from the horizontal, when the antenna is pointed at 0 hour angle. This is the view as seen looking down at the feed from the dish side, although since the feeds are at the prime focus this is the same as the view projected onto the sky. At other positions, the feeds on the AzEl antennas experience a rotation by angle relative to the equatorial antennas.
Because of this rotation, the normal polarization products XX, XY, YX and YY on baselines with dissimilar antennas (one AzEl and the other equatorial) become mixed. The effect of this admixture can be written by the use of Jones matrices (see Hamaker, Bregman & Sault (1996) for a complete description). Consider antenna A whose feed orientation is rotated by , cross-correlated with antenna B with unrotated feed. The corresponding Jones matrices, acting on signal vector are:
and the cross-correlation is found by taking the outer product, i.e.
which relates the output polarization products to the input as
where we have dropped the subscripts and complex conjugate notation for brevity. Of course, there are other effects such as unequal gains and cross-talk between feeds that are also at play, but for now we ignore those and focus only on the effect of this polarization mixing due to the parallactic angle.
Absolute vs. Relative Angle of Rotation
However, the above description fails when we consider a rotation on both antennas, so that
In this case, performing the outer product gives:
whereas intuitively we want something like:
which becomes the identity matrix when , i.e. when the feeds on two antennas of a baseline are parallel. The difference seems to be that the earlier expression evaluates to components of X and Y in an absolute coordinate frame, whereas we are interested only in the difference in angle of the feeds in a relative coordinate frame. This choice no doubt has implications for measuring Stokes Q and U, but for solar data we are not concerned with linear polarization.
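A quick pure-Python check of this point (an illustrative sketch, not production interferometry code): the outer product of two equal absolute rotations does not reduce to the identity, while the relative-angle version does when the feeds are parallel:

```python
from math import cos, sin, isclose

# R(t): 2x2 feed-rotation Jones matrix; kron(A, B): 4x4 outer product
# acting on the polarization products (XX, XY, YX, YY).
def R(t):
    return [[cos(t), -sin(t)], [sin(t), cos(t)]]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def is_identity(M):
    return all(isclose(M[i][j], float(i == j), abs_tol=1e-12)
               for i in range(4) for j in range(4))

t = 0.7   # equal rotation on both antennas of the baseline
# Absolute frame: kron(R(t), R(t)) still mixes the products even though
# the two feeds are parallel.
assert not is_identity(kron(R(t), R(t)))
# Relative frame: rotating by only the difference angle (here t - t = 0)
# gives the identity, matching the intuition in the text.
assert is_identity(kron(R(t - t), R(0.0)))
```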
One way to achieve this in the framework of Jones matrices is to form Mueller matrices from the outer-product of the rotation times the gain matrix:
and
then form an overall matrix
where .
Effect of an X - Y Delay
Regardless of how the math is done, we expect that the result should be dependent on the difference in angle, , so as a practical solution let us simply replace with and proceed as in section 1.
and the cross-correlation is found by taking the outer product, i.e.
which relates the output polarization products to the input as
Now consider that there is a "multi-band" delay on both antennas, and . Then (2) becomes:
The result agrees with our intuition:
This approach was implemented, to see how well it does in correcting for the effects of differential feed rotation, but the results were not good. The problem turns out not to be the approach, but the assumption that the X-Y delay is a constant with frequency. The next section describes the actual case, where the X-Y delay is considered in terms of a measured "delay phase."
Another Look at X-Y Delays
Prior to doing the feed rotation correction, it is essential that any X-Y delays be measured and corrected. We have devised a calibration procedure in which we take data on a strong calibrator with the feeds parallel, then rotate the 27-m (antenna 14) feed so that they are perpendicular. For an unpolarized source, this results in signal on the XX and YY polarization channels in the first case, and on the XY and YX polarization channels in the second case. As a practical matter, this can be done on all antennas at once if a strong source is observed near 0 HA, ideally timed to start 20 min before 0 HA and completing 20 min after 0 HA. The source 2253+161 works well, as does 1229+006 (3C273). Two observations are needed:
* one with the 27-m feed unrotated (gives parallel-feed data for all dishes, if done near 0 HA). Gives strong signal in XX and YY channels. Example
* one with the 27-m feed rotated to -90 degrees (gives crossed-feed data for all dishes, if done near 0 HA). Gives strong signal in XY and YX channels. Example
Note that the feed should be rotated by -90, not 90, in order for the signs in the expressions below to be correct.
Background
Consider antenna-based phases on X polarization as and on Y polarization as , i.e. the Y phases are nominally the same as for X, except for a 90-degree rotation and a possible X-Y delay difference , here written as delay phase . We are finding that this delay is a complicated function of frequency, so it is just as well to keep it in terms of phase. On a baseline , then, the four polarization terms become:
We then examine the channel differences on baselines with antenna 14, i.e.
where . Consequently, we can solve redundantly in two ways for the antenna-based delay phases:
where we specifically use to emphasize that this quantity for all antennas should be the same value, because the measurements are all baselines with antenna 14. In practice, we can average the two measurements for each antenna for , and the 26 measurements for antenna 14 for , although care must be taken to do an appropriate average to take care of the phase ambiguity. One way to do this is form unit vectors and sum them, then find the phase of the summed vector.
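The unit-vector averaging step can be sketched in a few lines of Python (illustrative phase values only):

```python
import cmath, math

# Ambiguity-safe phase average, as described above: turn each measured phase
# into a unit vector, sum the vectors, and take the phase of the sum.
def circular_mean(phases_rad):
    return cmath.phase(sum(cmath.exp(1j * p) for p in phases_rad))

# Phases clustered around ±180 deg: a naive arithmetic mean wraps badly,
# while the circular mean stays near 180 deg as it should.
phases = [math.radians(d) for d in (178.0, -178.0, 179.0)]
naive = math.degrees(sum(phases) / len(phases))      # ≈ 59.7 deg — wrong
mean = math.degrees(circular_mean(phases))           # ≈ 179.7 deg
assert abs(abs(mean) - 179.7) < 0.1
```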
Figure 1x shows the results for a measurement on 2017-07-02.
Applying the Measurements
Once we have these, we can apply corrections to each of the polarization channels, and then do the feed rotation correction. The corrections are done to data taken in a normal way, without rotating the 27-m feed. The application of the correction is found by removing the effects of the delays and rotations in equation (5):
where the third term has the opposite sign of because this term should be flipped for negative parallactic angle, i.e. should be . With these corrections, we have
I tried applying the feed rotation correction for data taken on 2017-07-02, and it does seem to work.
Figures 2x and 3x show the amplitude and phase on all baselines with Ant 14, with light green for data before correction and black for after correction. For ants 1-8 and 12, the XX and YY amplitudes have increased a bit, while the XY and YX amplitudes are much reduced. The corresponding phases are slightly improved in XX and YY, and noise-like for XY and YX (less so on YX for some antennas). For the other antennas, no correction was made since those feeds are already parallel to Ant 14.
The proof of this scheme will be seen when we observe a calibrator for many hours while the parallactic angle changes over HA, and then see that the amplitude time profiles become steady and well behaved.
Ultimately, the X-Y delays will need to be measured periodically (especially if the correlator is rebooted or X and Y delays are changed for other reasons), and then stored as a new calibration type in the SQL database.
Comparison with the Mathematical Description in the Earlier Section
We now want to see how this compares with the mathematical development in the previous section. It turns out that they are the same, as long as we rewrite the expression . To see this, first write the corrections in equation (6) in matrix form:
where now this is the correction applied to the measured () data to convert it to the expected () data, and hence is the inverse of the matrix (2) in the previous section. Expanding the matrix product, this is
and converting back to , it becomes:
which is precisely the inverse of (3).
AzEl Antenna Axis Offset
Dr. Avinash Deshpande (Raman Research Institute, Bangalore -- thanks to Dr. Ananthakrishnan for contacting him) confirms that no phase rotation is expected for the parallactic correction, aside from the 180-degree phase jump at the meridian crossing. He suggests that a non-intersecting axis is more likely, and notes that my plots claiming no evidence of a delay are too hasty: the range of frequencies in Figure 8 may be too small to reveal a frequency dependence that is nevertheless there. He notes that the effect of non-intersecting axes is a phase rotation of
where is the elevation angle, and is the offset distance. As a test, I applied this function, using cm (based on the apparent phase variation in the observed phases), and obtained the results in
Figures 9 and 10. Although the observed phases show a bit more curvature than the simulation, this can be due to residual baseline errors, so I think it is fair to say this is a promising result. We can prove this very shortly, since the feed rotator on the 27-m antenna is soon to be working (I hope). The prediction is that rotating the 27-m feed to keep it parallel to the 2.1-m feeds on these antennas will correct the amplitudes, but the phases will still show the same behavior (since they are due to a different cause), and also that using a wider range of frequencies (which we can do, especially now that the high-frequency receiver is available) will show a frequency dependence in the amount of phase variation. --Dgary (talk) 04:55, 8 November 2016 (UTC)
Further update
On 2016 Nov 13, new observations of 3C84 were taken, and the correction for the axis offset (d = 15.2 cm) was applied, as shown in
Figure 11 (at left). It appears that this correction works well, and that there is a residual baseline error on each of the antennas due to the fact that they were originally determined without the axis-offset correction. --Dgary (talk) 14:20, 15 November 2016 (UTC) |
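The elevation-dependent phase produced by a non-intersecting axis pair can be sketched numerically. The exact formula is elided above, so the d*cos(el) path-length model used here is an assumption, as are the function name and the test frequency:

```python
import numpy as np

def axis_offset_phase(elev_rad, d_cm, freq_ghz):
    """Phase (radians) induced by an azimuth-elevation axis offset d,
    assuming the extra path length is d*cos(el) (an assumed model)."""
    c_cm_per_s = 2.99792458e10            # speed of light in cm/s
    lam_cm = c_cm_per_s / (freq_ghz * 1e9)
    return 2 * np.pi * (d_cm / lam_cm) * np.cos(elev_rad)

# d = 15.2 cm (value quoted in the text) at an assumed 3 GHz, elevation 45 deg:
phi = axis_offset_phase(np.radians(45), 15.2, 3.0)
```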
This is a countable family of first-order statements, so it holds for every real-closed field, since it holds over $\mathbb R$.
From a $2\times 2$ matrix, we immediately derive that such a field must satisfy the property that the sum of two perfect squares is a perfect square. Indeed, the matrix:
$ \left(\begin{array}{cc} a & b \\ b & -a \end{array}\right)$
has characteristic polynomial $x^2-a^2-b^2$, so it is diagonalizable precisely when $a^2+b^2$ is a perfect square.
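Explicitly, the characteristic polynomial computation is:

```latex
\det\begin{pmatrix} x-a & -b \\ -b & x+a \end{pmatrix}
  = (x-a)(x+a) - b^2 = x^2 - a^2 - b^2 ,
```

so the eigenvalues are $\pm\sqrt{a^2+b^2}$, which lie in the field exactly when $a^2+b^2$ is a perfect square.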
Moreover, $-1$ is not a perfect square, or else the matrix:
$ \left(\begin{array}{cc} i & 1 \\ 1 & -i \end{array}\right)$
would be diagonalizable, thus zero, an obvious contradiction.
So the semigroup generated by the perfect squares consists of just the perfect squares, which are not all the elements of the field, so the field can be ordered.
However, the field need not be real-closed. Consider the field $\mathbb R((x))$. Take a symmetric matrix over that field. Without loss of generality, we can take it to be a matrix over $\mathbb R[[x]]$. Looking at it mod $x$, it is a symmetric matrix over $\mathbb R$, so we can diagonalize it using an orthogonal matrix. If its eigenvalues mod $x$ are all distinct, we are done, because we can find roots of its characteristic polynomial in $\mathbb R[[x]]$ by Hensel's lemma. If they are all the same, say $\lambda$, we can reduce: subtract $\lambda I$, divide by $x$, and diagonalize again. The only remaining case is if some are the same and some are distinct. If we can handle that case, then we can diagonalize any matrix.
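The distinct-eigenvalue case can be illustrated concretely. Here is a hedged sketch (the matrix and names are my own) computing the eigenvalue power series that Hensel's lemma guarantees:

```python
import sympy as sp

x, lam = sp.symbols('x lam')

# A symmetric matrix over R[[x]]; mod x it is diag(1, 2), whose eigenvalues
# are distinct, so each eigenvalue lifts to a power series by Hensel's lemma.
M = sp.Matrix([[1 + x, x], [x, 2]])
charpoly = (M - lam * sp.eye(2)).det()

# Solve the quadratic exactly, then expand each root as a series in x.
roots = sp.solve(charpoly, lam)
expansions = [sp.expand(sp.series(r, x, 0, 3).removeO()) for r in roots]
# expansions contains 1 + x - x**2 and 2 + x**2 (to order x^2), in some order.
```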
Lemma: Let $M$ be a symmetric matrix over $\mathbb R[[x]]$ such that some eigenvalues are distinct mod $x$. There exists an orthogonal matrix $A$ such that $AMA^{-1}$ is block diagonal, with the blocks symmetric.
Proof: Consider the scheme of such orthogonal matrices. Each connected component of this scheme corresponds to a partition of the eigenvalues into blocks. Choose one. Since we can diagonalize the matrix with an orthogonal matrix mod $x$, there is certainly a mod $x$ point on this component. We want to lift this to a point on the whole ring. We can do this if the scheme is smooth over $\mathbb R[[x]]$.
Assuming the blocks have distinct eigenvalues, the variety of ways to do this looks, over an algebraically closed field, like $O(n_1) \times O(n_2) \times \dots \times O(n_k)$ where $n_1,\dots,n_k$ are the sizes of the blocks. This is because the only way to keep a diagonal matrix block diagonal is to hit it with one of those. So as long as the blocks are chosen such that the eigenvalues in different blocks are distinct and remain so on reduction mod $x$, the variety is smooth over $\mathbb R((x))$ and smooth over $\mathbb R$, and has the same dimension over both, so is smooth over $\mathbb R[[x]]$. (This bit might not be entirely correct.) Thus there is a lift and the matrix can be put in this form.
Then we do an induction on dimension. The only way we would be unable to put a matrix in a form where two of its eigenvalues are distinct mod $x$ is if its eigenvalues are all the same, in which case, since $\mathbb R((x))$ is contained in a real closed field, it's a scalar matrix and we're done.
The Annals of Statistics Ann. Statist. Volume 46, Number 6A (2018), 3130-3150. Optimal adaptive estimation of linear functionals under sparsity Abstract
We consider the problem of estimation of a linear functional in the Gaussian sequence model where the unknown vector $\theta \in\mathbb{R}^{d}$ belongs to a class of $s$-sparse vectors with unknown $s$. We suggest an adaptive estimator achieving a nonasymptotic rate of convergence that differs from the minimax rate at most by a logarithmic factor. We also show that this optimal adaptive rate cannot be improved when $s$ is unknown. Furthermore, we address the issue of simultaneous adaptation to $s$ and to the variance $\sigma^{2}$ of the noise. We suggest an estimator that achieves the optimal adaptive rate when both $s$ and $\sigma^{2}$ are unknown.
Article information Source Ann. Statist., Volume 46, Number 6A (2018), 3130-3150. Dates Received: November 2016 Revised: October 2017 First available in Project Euclid: 7 September 2018 Permanent link to this document https://projecteuclid.org/euclid.aos/1536307245 Digital Object Identifier doi:10.1214/17-AOS1653 Mathematical Reviews number (MathSciNet) MR3851767 Zentralblatt MATH identifier 06968611 Citation
Collier, Olivier; Comminges, Laëtitia; Tsybakov, Alexandre B.; Verzelen, Nicolas. Optimal adaptive estimation of linear functionals under sparsity. Ann. Statist. 46 (2018), no. 6A, 3130--3150. doi:10.1214/17-AOS1653. https://projecteuclid.org/euclid.aos/1536307245 |
Algebraic and Topological Complements of Linear Subspaces Review
Table of Contents
Algebraic and Topological Complements of Linear Subspaces Review
We will now review some of the recent material regarding algebraic and topological complements of linear subspaces.
On the Projection/Idempotent Linear Operators page we said that if $X$ is a linear space then a linear operator $P : X \to X$ is a Projection or Idempotent if $P^2 = P$, that is, for every $x \in X$ we have that:
\begin{align} \quad P(P(x)) = P(x) \end{align}
We then proved a useful theorem which says that if $P$ is a projection then:
\begin{align} \quad \ker P = (I - P)(X) \end{align}
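As a concrete finite-dimensional illustration of this identity (the matrix is my own example, not from the page):

```python
import numpy as np

# A projection of R^2 onto the x-axis along the direction (1, 1):
# P maps (x, y) to (x - y, 0).
P = np.array([[1.0, -1.0],
              [0.0,  0.0]])

assert np.allclose(P @ P, P)           # idempotent: P^2 = P

# ker P = (I - P)(X): every vector of the form (I - P)w is killed by P.
I = np.eye(2)
w = np.array([3.0, 5.0])
v = (I - P) @ w                        # v = (5, 5), a multiple of (1, 1)
assert np.allclose(P @ v, 0)
```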
On the Algebraic Complements of Linear Subspaces page we said that if $X$ is a linear space and $M \subset X$ is a subspace of $X$ then an Algebraic Complement of $M$ is another linear subspace $M' \subset X$ such that:
\begin{align} \quad M \cap M' &= \{ 0 \} \\ \quad X &= M + M' \end{align}
If $M'$ is an algebraic complement of $M$ we write:
\begin{align} \quad X = M \oplus M' \end{align}
We said that $M$ is Finite Co-Dimensional if $M$ has a finite-dimensional algebraic complement. We then proved that every linear subspace $M$ of a linear space $X$ has an algebraic complement. On the Topological Complements of Normed Linear Subspaces page we said that if $X$ is a normed linear space and $M \subseteq X$ is a subspace of $X$ then a Topological Complement of $M$ is an algebraic complement of $M$ that is also closed. We then proved an important criterion for the existence of a topological complement. We proved that a linear subspace $M \subseteq X$ has a topological complement if and only if there exists a projection linear operator $P : X \to X$ whose range is $M$, that is:
\begin{align} \quad P(X) = M \end{align}
On the Topological Complement Criterion for the Range of a BLO to be Closed when X and Y are Banach Spaces page we then proved that if $X$ and $Y$ are Banach spaces and $T : X \to Y$ is a bounded linear operator then if $T(X)$ has a topological complement we must have that $T(X)$ is closed. Lastly, on the T(X) Finite Co-Dimensional Criterion for the Range of a BLO to be Closed when X and Y are Banach Spaces page we proved that if $X$ and $Y$ are Banach spaces and $T : X \to Y$ is a bounded linear operator then if $T(X)$ is finite co-dimensional then we must have that $T(X)$ is closed. |
The Annals of Statistics Ann. Statist. Volume 46, Number 3 (2018), 1077-1108. On the systematic and idiosyncratic volatility with large panel high-frequency data Abstract
In this paper, we separate the integrated (spot) volatility of an individual Itô process into integrated (spot) systematic and idiosyncratic volatilities, and estimate them by aggregation of local factor analysis (localization) with large-dimensional high-frequency data. We show that, when both the sampling frequency $n$ and the dimensionality $p$ go to infinity and $p\geq C\sqrt{n}$ for some constant $C$, our estimators of the integrated systematic and idiosyncratic volatilities are $\sqrt{n}$-consistent ($n^{1/4}$ for spot estimates), the best rate achieved in estimating the integrated (spot) volatility, which is readily identified even with univariate high-frequency data. However, when $Cn^{1/4}\leq p<C\sqrt{n}$, aggregation of $n^{1/4}$-consistent local estimates of systematic and idiosyncratic volatilities results in $p$-consistent (not $\sqrt{n}$-consistent) estimates of integrated systematic and idiosyncratic volatilities. Even more interesting, when $p<Cn^{1/4}$, the integrated estimate has the same convergence rate as the spot estimate, both being $p$-consistent. This reveals a distinctive feature from aggregating local estimates in the low-dimensional high-frequency data setting. We also present estimators of the integrated (spot) idiosyncratic volatility matrices as well as their inverse matrices under some sparsity assumption. We finally present a factor-based estimator of the inverse of the spot volatility matrix. Numerical studies including the Monte Carlo experiments and real data analysis justify the performance of our estimators.
Article information Source Ann. Statist., Volume 46, Number 3 (2018), 1077-1108. Dates Received: March 2016 Revised: January 2017 First available in Project Euclid: 3 May 2018 Permanent link to this document https://projecteuclid.org/euclid.aos/1525313076 Digital Object Identifier doi:10.1214/17-AOS1578 Mathematical Reviews number (MathSciNet) MR3797997 Zentralblatt MATH identifier 06897923 Citation
Kong, Xin-Bing. On the systematic and idiosyncratic volatility with large panel high-frequency data. Ann. Statist. 46 (2018), no. 3, 1077--1108. doi:10.1214/17-AOS1578. https://projecteuclid.org/euclid.aos/1525313076
Supplemental materials Supplement to “On the integrated systematic and idiosyncratic volatility with large panel high-frequency data”. This supplement contains the technical proof of Lemmas 3–5, which is crucial in proving Theorem 1 and Theorem 2. |
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that? |
With no warranty of any kind!
\documentclass{article}
\usepackage{color}
\makeatletter
\def\colorizemath #1#2{%
\expandafter\mathchardef\csname orig:math:#1\endcsname\mathcode`#1
\mathcode`#1="8000
\toks@\expandafter{\csname orig:math:#1\endcsname}%
\begingroup
\lccode`~=`#1
\lowercase{%
\endgroup
\edef~{{\noexpand\color{#2}\the\toks@}}}%
}
\@for\@tempa:=a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z\do{%
\expandafter\colorizemath\@tempa{green}}
\@for\@tempa:=A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z\do{%
\expandafter\colorizemath\@tempa{green}}
\@for\@tempa:=0,1,2,3,4,5,6,7,8,9\do{%
\expandafter\colorizemath\@tempa{red}}
\makeatother
\everymath{\color{blue}}
\everydisplay{\color{blue}}
\begin{document}\thispagestyle{empty}
Hello $world$. Do you know that $E=mc^2$?
\[ \widehat f(\omega) = \int_{-\infty}^\infty f(x) e^{-2\pi i \omega x}\,dx\]
\[ (I - M)^{-1} = \sum_{k=0}^\infty M^k\]
\end{document}
Let me add, with regards to \everymath and \everydisplay, that it would have been better to do:
\everymath\expandafter{\the\everymath \color{blue}}
\everydisplay\expandafter{\the\everydisplay \color{blue}}
This preserves, rather than erases, the previously stored data in these token lists. (I just checked, and Lamport's book does not have a single mention of token list, and even the word token is not to be found (it seems) in the entire book...). Admittedly, packages that put things in them should do that At Begin Document, so even the brutal way used in my initial code, as long as it is in the preamble, is maybe not that destructive. People interested in token lists can learn about them in, for example, TeX by Topic by Victor Eijkhout (texdoc topic).
Quantum teleportation is a process by which quantum information (e.g. the exact state of an atom or photon) can be transmitted (exactly, in principle) from one location to another, with the help of classical communication and previously shared quantum entanglement between the sending and receiving location. Because it depends on classical communication, which can proceed no faster than the speed of light, it cannot be used for superluminal transport or communication of classical bits. It also cannot be used to make copies of a system, as this violates the no-cloning theorem. Although the name is inspired by the teleportation commonly used in fiction, current technology provides no possibility of anything resembling the fictional form of teleportation. While it is possible to teleport one or more qubits of information between two (entangled) atoms,[1][2][3] this has not yet been achieved between molecules or anything larger. One may think of teleportation either as a kind of transportation, or as a kind of communication; it provides a way of transporting a qubit from one location to another, without having to move a physical particle along with it.
The seminal paper[4] first expounding the idea was published by C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres and W. K. Wootters in 1993.[5] Since then, quantum teleportation has been realized in various physical systems. Presently, the record distance for quantum teleportation is 143 km (89 mi) with photons,[6] and 21 m with material systems.[7] In August 2013, the achievement of "fully deterministic" quantum teleportation, using a hybrid technique, was reported.[8] On 29 May 2014, scientists announced a reliable way of transferring data by quantum teleportation. Quantum teleportation of data had been done before but with highly unreliable methods.[9][10]
In axiomatizations of quantum mechanics (such as categorical quantum mechanics), information comes in two fundamentally different kinds: bits and qubits.[11][12] Bits are units of information, and are commonly represented using zero or one, true or false. These bits are sometimes called "classical" bits, to distinguish them from quantum bits, or qubits. Qubits also encode a type of information, called quantum information, which differs sharply from "classical" information: the state of a qubit cannot be fully determined or conveyed by any finite number of classical bits, and conversely classical bits cannot be used to encode qubits. The two are quite distinct, and not inter-convertible. Qubits differ from classical bits in dramatic ways: they cannot be copied (the no-cloning theorem) and they cannot be destroyed (the no-deleting theorem).
Quantum teleportation provides a mechanism of moving a qubit from one location to another, without having to physically transport the underlying particle that a qubit is normally attached to. Much like the invention of the telegraph allowed classical bits to be transported at high speed across continents, quantum teleportation holds the promise that one day, qubits could be moved likewise. However, as of 2013, only photons and single atoms have been teleported; molecules have not, nor does this even seem likely in the upcoming years, as the technology remains daunting. Specific distance and quantity records are stated below.
The movement of qubits does require the movement of "things"; in particular, the actual teleportation protocol requires that an entangled quantum state or Bell state be created, and its two parts shared between two locations (the source and destination, or Alice and Bob). In essence, a certain kind of "quantum channel" between two sites must be established first, before a qubit can be moved. Teleportation also requires a classical information link to be established, as two classical bits must be transmitted to accompany each qubit. The need for such links may, at first, seem disappointing; however, this is not unlike ordinary communications, which requires wires, radios or lasers. What's more, Bell states are most easily shared using photons from lasers, and so teleportation could be done, in principle, through open space.
The quantum states of single atoms have been teleported.[1][2][3] An atom consists of several parts: the qubits in the electronic state or electron shells surrounding the atomic nucleus, the qubits in the nucleus itself, and, finally, the electrons, protons and neutrons making up the atom. Physicists have teleported the qubits encoded in the electronic state of atoms; they have not teleported the nuclear state, nor the nucleus itself. It is therefore false to say "an atom has been teleported". It has not. The quantum state of an atom has. Thus, performing this kind of teleportation requires a stock of atoms at the receiving site, available for having qubits imprinted on them. The importance of teleporting nuclear state is unclear: nuclear state does affect the atom, e.g. in hyperfine splitting, but whether such state would need to be teleported in some futuristic "practical" application is debatable.
The quantum world is strange and unusual; so, aside from no-cloning and no-deleting, there are other oddities. For example, quantum correlations arising from Bell states seem to be instantaneous (the Alain Aspect experiments), whereas classical bits can only be transmitted slower than the speed of light (quantum correlations cannot be used to transmit classical bits; again, this is the no-communication theorem). Thus, teleportation, as a whole, can never be superluminal, as a qubit cannot be reconstructed until the accompanying classical bits arrive.
The proper description of quantum teleportation requires a basic mathematical toolset, which, although complex, is not out of reach of advanced high-school students, and indeed becomes accessible to college students with a good grounding in finite-dimensional linear algebra. In particular, the theory of Hilbert spaces and projection matrices is heavily used. A qubit is described using a two-dimensional complex vector space (a Hilbert space); the formal manipulations given below do not make use of anything much more than that. Strictly speaking, a working knowledge of quantum mechanics is not required to understand the mathematics of quantum teleportation, although without such acquaintance, the deeper meaning of the equations may remain quite mysterious.
The prerequisites for quantum teleportation are a qubit that is to be teleported, a conventional communication channel capable of transmitting two classical bits (i.e., one of four states), and means of generating an entangled EPR pair of qubits, transporting each of these to two different locations, A and B, performing a Bell measurement on one of the EPR pair qubits, and manipulating the quantum state of the other of the pair. The protocol is then as follows:
Work in 1998 verified the initial predictions,[13] and the distance of teleportation was increased in August 2004 to 600 meters, using optical fiber.[14] The longest distance yet claimed to be achieved for quantum teleportation is 143 km (89 mi), performed in May 2012, between the two Canary Islands of La Palma and Tenerife off the Atlantic coast of north Africa.[15] In April 2011, experimenters reported that they had demonstrated teleportation of wave packets of light up to a bandwidth of 10 MHz while preserving strongly nonclassical superposition states.[16][17]
Researchers at the Niels Bohr Institute successfully used quantum teleportation to transmit information between clouds of gas atoms, notable because the clouds of gas are macroscopic atomic ensembles.[18][19]
There are a variety of ways in which the teleportation protocol can be written mathematically. Some are very compact but abstract, and some are verbose but straightforward and concrete. The presentation below is of the latter form: verbose, but has the benefit of showing each quantum state simply and directly. Later sections review more compact notations.
The teleportation protocol begins with a quantum state or qubit |\psi\rangle, in Alice's possession, that she wants to convey to Bob. This qubit can be written generally, in bra–ket notation, as:

$$|\psi\rangle_C = \alpha |0\rangle_C + \beta |1\rangle_C$$

where \alpha and \beta are complex numbers satisfying |\alpha|^2 + |\beta|^2 = 1.
The subscript C above is used only to distinguish this state from A and B, below. The protocol requires that Alice and Bob share a maximally entangled state beforehand, chosen by mutual agreement between them, and it will be one of the four Bell states

$$|\Phi^\pm\rangle_{AB} = \frac{1}{\sqrt{2}}\left(|0\rangle_A \otimes |0\rangle_B \pm |1\rangle_A \otimes |1\rangle_B\right), \qquad |\Psi^\pm\rangle_{AB} = \frac{1}{\sqrt{2}}\left(|0\rangle_A \otimes |1\rangle_B \pm |1\rangle_A \otimes |0\rangle_B\right)$$
Alice obtains one of the qubits in the pair, with the other going to Bob. The subscripts A and B in the entangled state refer to Alice's or Bob's particle. In the following, assume that Alice and Bob shared the entangled state |\Phi^+\rangle_{AB}.
At this point, Alice has two particles (C, the one she wants to teleport, and A, one of the entangled pair), and Bob has one particle, B. In the total system, the state of these three particles is given by

$$|\psi\rangle_C \otimes |\Phi^+\rangle_{AB} = \frac{1}{\sqrt{2}}\left(\alpha |0\rangle_C + \beta |1\rangle_C\right) \otimes \left(|0\rangle_A \otimes |0\rangle_B + |1\rangle_A \otimes |1\rangle_B\right)$$
Alice will then make a partial measurement in the Bell basis on the two qubits in her possession. To make the result of her measurement clear, it is best to write the state of Alice's two qubits as superpositions of the Bell basis. This is done by using the following general identities, which are easily verified:

$$|0\rangle \otimes |0\rangle = \frac{1}{\sqrt{2}}\left(|\Phi^+\rangle + |\Phi^-\rangle\right), \qquad |0\rangle \otimes |1\rangle = \frac{1}{\sqrt{2}}\left(|\Psi^+\rangle + |\Psi^-\rangle\right)$$

and

$$|1\rangle \otimes |0\rangle = \frac{1}{\sqrt{2}}\left(|\Psi^+\rangle - |\Psi^-\rangle\right), \qquad |1\rangle \otimes |1\rangle = \frac{1}{\sqrt{2}}\left(|\Phi^+\rangle - |\Phi^-\rangle\right)$$
The total three particle state, of A, B and C together, thus becomes the following four-term superposition:

$$\frac{1}{2}\Big[\,|\Phi^+\rangle_{CA} \otimes \left(\alpha|0\rangle_B + \beta|1\rangle_B\right) + |\Phi^-\rangle_{CA} \otimes \left(\alpha|0\rangle_B - \beta|1\rangle_B\right) + |\Psi^+\rangle_{CA} \otimes \left(\beta|0\rangle_B + \alpha|1\rangle_B\right) + |\Psi^-\rangle_{CA} \otimes \left(-\beta|0\rangle_B + \alpha|1\rangle_B\right)\Big]$$
The above is just a change of basis on Alice's part of the system. No operation has been performed and the three particles are still in the same total state. The actual teleportation occurs when Alice measures her two qubits in the Bell basis. Experimentally, this measurement may be achieved via a series of laser pulses directed at the two particles. Given the above expression, evidently the result of Alice's (local) measurement is that the three-particle state would collapse to one of the following four states (with equal probability of obtaining each):

$$|\Phi^+\rangle_{CA} \otimes \left(\alpha|0\rangle_B + \beta|1\rangle_B\right), \quad |\Phi^-\rangle_{CA} \otimes \left(\alpha|0\rangle_B - \beta|1\rangle_B\right), \quad |\Psi^+\rangle_{CA} \otimes \left(\beta|0\rangle_B + \alpha|1\rangle_B\right), \quad |\Psi^-\rangle_{CA} \otimes \left(-\beta|0\rangle_B + \alpha|1\rangle_B\right)$$
Alice's two particles are now entangled with each other, in one of the four Bell states, and the entanglement originally shared between Alice's and Bob's particles is now broken. Bob's particle takes on one of the four superposition states shown above. Note how Bob's qubit is now in a state that resembles the state to be teleported. The four possible states for Bob's qubit are unitary images of the state to be teleported.
The result of Alice's Bell measurement tells her which of the above four states the system is in. She can now send her result to Bob through a classical channel. Two classical bits can communicate which of the four results she obtained.
After Bob receives the message from Alice, he will know which of the four states his particle is in. Using this information, he performs a unitary operation on his particle to transform it to the desired state \alpha |0\rangle_B + \beta|1\rangle_B:

- If Alice's result was |\Phi^+\rangle, Bob's qubit is already in the desired state and no operation is needed.
- If the result was |\Phi^-\rangle, Bob applies the phase-flip gate Z to recover the state.
- If the result was |\Psi^+\rangle, Bob applies the bit-flip gate X to his qubit.
- If the result was |\Psi^-\rangle, Bob applies first the gate X and then the gate Z to his qubit.
Teleportation is thus achieved. The above-mentioned three gates correspond to rotations of π radians (180°) about appropriate axes (X, Y and Z).
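The whole protocol can be checked numerically. The following sketch (my own illustration, not part of the original text) simulates teleportation with NumPy state vectors, assuming the shared pair is |\Phi^+\rangle and using the corrections I, Z, X, ZX for Alice's outcomes \Phi^+, \Phi^-, \Psi^+, \Psi^- respectively:

```python
import numpy as np

# Computational basis states and single-qubit gates
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The four Bell states, ordered Phi+, Phi-, Psi+, Psi-
bell = [
    (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2),
    (np.kron(zero, zero) - np.kron(one, one)) / np.sqrt(2),
    (np.kron(zero, one) + np.kron(one, zero)) / np.sqrt(2),
    (np.kron(zero, one) - np.kron(one, zero)) / np.sqrt(2),
]
# Bob's corrective gate for each of Alice's Bell-measurement outcomes
corrections = [I2, Z, X, Z @ X]

def teleport_fidelities(alpha, beta):
    """Teleport alpha|0> + beta|1> and return, for each of Alice's four
    possible Bell-measurement outcomes, the fidelity of Bob's corrected
    qubit with the original state."""
    psi = alpha * zero + beta * one
    total = np.kron(psi, bell[0])   # qubit C tensored with the shared |Phi+>_AB
    M = total.reshape(4, 2)         # rows: joint (C,A) index; columns: B index
    fids = []
    for b, U in zip(bell, corrections):
        bob = b.conj() @ M          # unnormalised state of B after projecting CA onto b
        bob = U @ (bob / np.linalg.norm(bob))
        fids.append(abs(np.vdot(psi, bob)) ** 2)
    return fids

for f in teleport_fidelities(0.6, 0.8):
    assert abs(f - 1.0) < 1e-9   # Bob's corrected qubit matches |psi> for every outcome
```

For every outcome the fidelity is 1, reflecting that each of the four post-measurement states is a unitary image of the teleported state.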
Some remarks:
There are a variety of different notations in use that describe the teleportation protocol. One common one is by using the notation of quantum gates. In the above derivation, the unitary transformation that is the change of basis (from the standard product basis into the Bell basis) can be written using quantum gates. Direct calculation shows that this gate is given by

$$G = (H \otimes I)\,C_N$$
where H is the one qubit Walsh-Hadamard gate and C_N is the Controlled NOT gate.
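As an illustrative check (my own sketch, not from the text): with the CNOT acting on the first qubit as control, the composite gate (H \otimes I)\,C_N maps each Bell state to a distinct computational basis state, which is exactly how a Bell measurement is reduced to a standard measurement:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)   # first qubit controls the second

# Change of basis from the Bell basis to the computational basis:
# apply the CNOT first, then a Hadamard on the control qubit.
G = np.kron(H, I2) @ CNOT

# Columns are the Bell states Phi+, Phi-, Psi+, Psi-
bell = np.array([[1, 0, 0, 1],
                 [1, 0, 0, -1],
                 [0, 1, 1, 0],
                 [0, 1, -1, 0]], dtype=complex).T / np.sqrt(2)

out = G @ bell
# Each Bell state lands on exactly one computational basis state
# (Phi+ -> |00>, Psi+ -> |01>, Phi- -> |10>, Psi- -> |11>)
assert np.allclose((np.abs(out) > 0.5).sum(axis=0), 1)
assert np.allclose(G @ G.conj().T, np.eye(4))   # G is unitary
```

Measuring in the computational basis after applying G is therefore equivalent to the Bell measurement used in the protocol.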
Teleportation can be applied not just to pure states, but also mixed states, that can be regarded as the state of a single subsystem of an entangled pair. The so-called entanglement swapping is a simple and illustrative example.
If Alice has a particle which is entangled with a particle owned by Bob, and Bob teleports it to Carol, then afterwards, Alice's particle is entangled with Carol's.
A more symmetric way to describe the situation is the following: Alice has one particle, Bob two, and Carol one. Alice's particle and Bob's first particle are entangled, and so are Bob's second and Carol's particle:
                 ___
                /   \
Alice-:-:-:-:-:-Bob1 -:- Bob2-:-:-:-:-:-Carol
                \___/
Now, if Bob performs a projective measurement on his two particles in the Bell state basis and communicates the results to Carol, as per the teleportation scheme described above, the state of Bob's first particle can be teleported to Carol's. Although Alice and Carol never interacted with each other, their particles are now entangled.
A detailed diagrammatic derivation of entanglement swapping has been given by Bob Coecke,[21] presented in terms of categorical quantum mechanics.
One can imagine how the teleportation scheme given above might be extended to N-state particles, i.e. particles whose states lie in the N dimensional Hilbert space. The combined system of the three particles now has an N^3 dimensional state space. To teleport, Alice makes a partial measurement on the two particles in her possession in some entangled basis on the N^2 dimensional subsystem. This measurement has N^2 equally probable outcomes, which are then communicated to Bob classically. Bob recovers the desired state by sending his particle through an appropriate unitary gate.
In general, mixed states ρ may be transported, and a linear transformation ω applied during teleportation, thus allowing data processing of quantum information. This is one of the foundational building blocks of quantum information processing. This is demonstrated below.
A general teleportation scheme can be described as follows. Three quantum systems are involved. System 1 is the (unknown) state ρ to be teleported by Alice. Systems 2 and 3 are in a maximally entangled state ω that is distributed to Alice and Bob, respectively. The total system is then in the state
A successful teleportation process is a LOCC quantum channel Φ that satisfies

$$\operatorname{Tr}_{12}\,\Phi(\rho \otimes \omega) = \rho$$
where Tr12 is the partial trace operation with respect to systems 1 and 2, and \circ denotes the composition of maps. This describes the channel in the Schrödinger picture.
Taking adjoint maps in the Heisenberg picture, the success condition becomes
for all observable O on Bob's system. The tensor factor in I \otimes O is 12 \otimes 3 while that of \rho \otimes \omega is 1 \otimes 23.
The proposed channel Φ can be described more explicitly. To begin teleportation, Alice performs a local measurement on the two subsystems (1 and 2) in her possession. Assume the local measurement has effects
If the measurement registers the i-th outcome, the overall state collapses to
The tensor factor in (M_i \otimes I) is 12 \otimes 3 while that of \rho \otimes \omega is 1 \otimes 23. Bob then applies a corresponding local operation Ψi on system 3. On the combined system, this is described by
where Id is the identity map on the composite system 1 \otimes 2.
Therefore the channel Φ is defined by
Notice Φ satisfies the definition of LOCC. As stated above, the teleportation is said to be successful if, for all observable O on Bob's system, the equality
holds. The left hand side of the equation is:
where Ψi* is the adjoint of Ψi in the Heisenberg picture. Assuming all objects are finite dimensional, this becomes
The success criterion for teleportation has the expression
A local explanation of quantum teleportation is put forward by David Deutsch and Patrick Hayden, with respect to the many-worlds interpretation of Quantum mechanics. Their paper asserts that the two bits that Alice sends Bob contain "locally inaccessible information" resulting in the teleportation of the quantum state. "The ability of quantum information to flow through a classical channel ..., surviving decoherence, is ... the basis of quantum teleportation."[22]
Or simply:
Why do we call equivalent martingale measures risk-neutral measures?
In utility theory or game theory, when we consider a person's preferences over certain outcomes, we often work with utility functions. For example, if we consider an investor with a utility function $U$ whose return on a portfolio $\Pi$ is $x_\Pi$, we assume that he chooses a portfolio that maximizes his expected utility $$ \Pi^* \quad\text{ such that }\quad\mathsf E U(x_{\Pi^*}) = \sup_{\Pi}\mathsf E U(x_\Pi). $$ In particular, we say that an investor is risk-averse (risk-seeking) whenever $U$ is concave (convex). We say that an investor is risk-neutral when $U(x) = ax + b$ is an affine function.
The risk-neutral valuation - taking expectations w.r.t. martingale measures equivalent to the real-world ones - is used in quant finance a lot for the pricing purposes. I do understand the theory behind this method, and the relation with non-arbitrage arguments. I wonder though, whether there is any relation with the risk-neutrality as in the paragraph above.
I thought of the following idea: let us think of a fair price for a contract (when we write it) as the highest one at which the agent will buy it. The agent $A$ with utility $U_A$ and expectation of prices $\mathsf E_A$ has to make a choice between the zero utility (when he does not buy the contract) and $$ \mathsf E_AU_A(\mathrm e^{-rT}C_T - C_0) $$ where $T$ is the maturity of the contract, $r$ is the rate used to compute the present value of future cashflows, $C_T$ is the payoff of the contract, and $C_0$ is the price of the contract. Hence, we need to solve the equation $$ \mathsf E_AU_A(\mathrm e^{-rT}C_T - C_0) = 0 $$ with unknown $C_0$. Assuming that the agent is risk-neutral, we obtain $$ C_0 = \mathrm e^{-rT}\cdot\mathsf E_A(C_T). $$ At the same time, pricing using $\Delta$-hedging in the Black–Scholes framework gives us $$ C_0 = \mathrm e^{-rT}\cdot\mathsf E_Q(C_T) $$ where $Q$ is a risk-neutral measure. Hence, if we assume that our agent is risk-neutral, then his expectations (at least at any given time $T$) have to be given exactly by the measure $Q$.
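To make the comparison concrete, here is a small numerical sketch (my own illustration, not part of the original question; all parameter values are arbitrary). Under the risk-neutral measure $Q$ the Black–Scholes stock has drift $r$ rather than its real-world drift, so simulating $S_T$ with drift $r$ and discounting the expected payoff should reproduce the Black–Scholes call price:

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def mc_call(S0, K, r, sigma, T, n=400_000, seed=0):
    """e^{-rT} E_Q[(S_T - K)^+]: under Q the stock drifts at the risk-free rate r."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
    return exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

mc = mc_call(100.0, 100.0, 0.05, 0.2, 1.0)
bs = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
assert abs(mc - bs) < 0.2   # Monte Carlo agrees with the closed form
```

The agreement (within Monte Carlo error) is exactly the statement $C_0 = \mathrm e^{-rT}\,\mathsf E_Q(C_T)$.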
Table of Contents
The Hausdorff Property under Homeomorphisms on Topological Spaces
Recall from the Hausdorff Topological Spaces page that a topological space $(X, \tau)$ is said to be Hausdorff if for every distinct pair of points $x, y \in X$ there exist open neighbourhoods $U$ of $x$ and $V$ of $y$ such that:(1)
$$U \cap V = \emptyset$$
We will now see that the Hausdorff property is preserved under homeomorphisms.
Theorem 1: Let $X$ and $Y$ be topological spaces and let $f : X \to Y$ be a homeomorphism. If $X$ is a Hausdorff space then $Y$ is Hausdorff. Proof: Let $X$ be a Hausdorff space and let $f : X \to Y$ be a homeomorphism between $X$ and $Y$. Let $x, y \in Y$ be distinct. Since $f$ is a homeomorphism, $f$ is bijective, and so $f^{-1}(x)$ and $f^{-1}(y)$ are distinct points in $X$. Since $X$ is Hausdorff, there exist open neighbourhoods $U$ of $f^{-1}(x)$ and $V$ of $f^{-1}(y)$ such that $U \cap V = \emptyset$. Since $f$ is an open map, $f(U)$ is an open neighbourhood of $x$ and $f(V)$ is an open neighbourhood of $y$. Furthermore, we see that $f(U) \cap f(V) = \emptyset$. If not, i.e., if there existed a point $p \in f(U) \cap f(V)$, then there would exist $z \in U$ and $w \in V$ such that $f(z) = p = f(w)$; since $U \cap V = \emptyset$ we have $z \neq w$, which contradicts $f$ being injective. So, for all distinct points $x, y \in Y$ there exist open neighbourhoods $f(U)$ of $x$ and $f(V)$ of $y$ such that $f(U) \cap f(V) = \emptyset$. Therefore, $Y$ is a Hausdorff space. $\blacksquare$
The Method of Variation of Parameters for Higher Order Nonhomogeneous Differential Equations
Recall from The Method of Variation of Parameters page, we were able to solve many different types of second order linear nonhomogeneous differential equations $y'' + p(t) y' + q(t) y = g(t)$ with constant coefficients by first solving for the solution to the corresponding linear homogeneous differential equation for $y_h(t) = Cy_1(t) + Dy_2(t)$ (where $y_1(t)$ and $y_2(t)$ form a fundamental set of solutions), and then replacing the constants $C$ and $D$ with unknown functions $u_1(t)$ and $u_2(t)$. We then assumed that a particular solution for the nonhomogeneous differential equation was of the form $Y(t) = u_1(t)y_1(t) + u_2(t)y_2(t)$, and we solved the following system of linear equations for $u_1'(t)$ and $u_2'(t)$:(1)
$$\begin{aligned} y_1(t) u_1'(t) + y_2(t) u_2'(t) &= 0 \\ y_1'(t) u_1'(t) + y_2'(t) u_2'(t) &= g(t) \end{aligned}$$
After we obtained the functions $u_1'(t)$ and $u_2'(t)$, we integrated the results to obtain $u_1(t)$ and $u_2(t)$, and then plugged them into the formula $Y(t) = u_1(t)y_1(t) + u_2(t)y_2(t)$ for a particular solution (the existence and uniqueness of a solution to the system is guaranteed by $y_1(t)$ and $y_2(t)$ forming a fundamental set of solutions, so that their Wronskian is nonzero). The general solution to our second order nonhomogeneous differential equation was then:(2)
$$y(t) = Cy_1(t) + Dy_2(t) + u_1(t)y_1(t) + u_2(t)y_2(t)$$
As we will now see, the method of variation of parameters can also be applied to higher order differential equations. The process can be derived similarly. Suppose that we have a higher order differential equation of the following form:(3)
$$y^{(n)} + p_{n-1}(t) y^{(n-1)} + \cdots + p_1(t) y' + p_0(t) y = g(t)$$
We first solve the corresponding homogeneous differential equation to get $y_h(t) = C_1 y_1(t) + C_2y_2(t) + ... + C_ny_n(t)$. The functions $y_1(t)$, $y_2(t)$, …, $y_n(t)$ form a fundamental set. Assume a particular solution to the nonhomogeneous differential equation is of the form:(4)
$$Y(t) = u_1(t)y_1(t) + u_2(t)y_2(t) + \cdots + u_n(t)y_n(t)$$
We then solve the following system of equations for the functions $u_1'(t)$, $u_2'(t)$, …, $u_n'(t)$:(5)
$$\begin{aligned} y_1 u_1' + y_2 u_2' + \cdots + y_n u_n' &= 0 \\ y_1' u_1' + y_2' u_2' + \cdots + y_n' u_n' &= 0 \\ &\;\;\vdots \\ y_1^{(n-1)} u_1' + y_2^{(n-1)} u_2' + \cdots + y_n^{(n-1)} u_n' &= g(t) \end{aligned}$$
Once again, a unique solution is guaranteed since $y_1(t)$, $y_2(t)$, …, $y_n(t)$ forming a fundamental set of solutions implies the Wronskian $W(y_1, y_2, ..., y_n)$ is nonzero. Furthermore, it should be noted that the system above can be solved by row reduction (if the process is simple) or, more commonly, by applying Cramer's rule once again.
We then integrate each of the functions $u_1'(t)$, $u_2'(t)$, …, $u_n'(t)$ to obtain $u_1(t)$, $u_2(t)$, …, $u_n(t)$. Lastly, we obtain that $Y(t) = u_1(t)y_1(t) + u_2(t)y_2(t) + ... + u_n(t)y_n(t)$ as our particular solution.
We will now look at an example of using the method of variation of parameters for higher order nonhomogeneous differential equations.
Example 1 Solve the third order linear nonhomogeneous differential equation $y''' + y' = \tan t$ using the method of variation of parameters.
For the corresponding homogeneous differential equation $y''' + y' = 0$ we have that the characteristic equation is $r^3 + r = 0$ which can be factored as $r(r^2 + 1) = 0$ and so $r = 0$ or $r = \pm i$. The solution to the corresponding homogeneous differential equation is therefore:(6)
$$y_h(t) = C_1 + C_2 \cos t + C_3 \sin t$$
Furthermore, $y_1 = e^{0t} = 1$, $y_2 = e^{0t} \cos t= \cos t$, and $y_3 = e^{0t} \sin t = \sin t$.
Now let's try to find a particular solution to this differential equation. We have that:(7)
$$Y(t) = u_1(t) \cdot 1 + u_2(t) \cos t + u_3(t) \sin t$$
Thus we want to solve the following system of equations:(8)
$$\begin{aligned} u_1' + u_2' \cos t + u_3' \sin t &= 0 \\ -u_2' \sin t + u_3' \cos t &= 0 \\ -u_2' \cos t - u_3' \sin t &= \tan t \end{aligned}$$
We will use Cramer's rule in order to solve for $u_1'$, $u_2'$, and $u_3'$. We first find the corresponding Wronskian:(9)
$$W(1, \cos t, \sin t) = \begin{vmatrix} 1 & \cos t & \sin t \\ 0 & -\sin t & \cos t \\ 0 & -\cos t & -\sin t \end{vmatrix} = \sin^2 t + \cos^2 t = 1$$
Now we have that:(10)
$$u_1' = \tan t, \qquad u_2' = -\sin t, \qquad u_3' = -\sin t \tan t = \cos t - \sec t$$
We will now integrate $u_1'$, $u_2'$, and $u_3'$ to get:(13)
$$u_1 = \ln |\sec t|, \qquad u_2 = \cos t, \qquad u_3 = \sin t - \ln |\sec t + \tan t|$$
Thus we have that:(16)
$$Y(t) = \ln |\sec t| + \cos^2 t + \sin^2 t - \sin t \ln |\sec t + \tan t| = 1 + \ln |\sec t| - \sin t \ln |\sec t + \tan t|$$
Absorbing the constant $1$ into $C_1$, the general solution is $y(t) = C_1 + C_2 \cos t + C_3 \sin t + \ln |\sec t| - \sin t \ln |\sec t + \tan t|$.
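As a quick sanity check (my own addition), one can verify with SymPy that the particular solution $Y(t) = \ln|\sec t| - \sin t \ln|\sec t + \tan t|$ obtained by variation of parameters (with the additive constant absorbed into $C_1$) really solves $y''' + y' = \tan t$; the residual is checked numerically at a few sample points in $(0, \pi/2)$, since full symbolic simplification to zero is not guaranteed:

```python
import sympy as sp

t = sp.symbols('t')
# Particular solution from the worked example above
Y = sp.log(sp.sec(t)) - sp.sin(t) * sp.log(sp.sec(t) + sp.tan(t))

# Residual of the ODE y''' + y' = tan(t); should vanish identically
residual = sp.diff(Y, t, 3) + sp.diff(Y, t) - sp.tan(t)

# Numerical check at a few points where sec and tan are well defined
for val in [0.3, 0.7, 1.1]:
    assert abs(float(residual.subs(t, val).evalf())) < 1e-8
```

Every sampled residual is numerically zero, confirming the particular solution computed by Cramer's rule.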
Parentheses and brackets are very common in mathematical formulas. You can easily control the size and style of brackets in LaTeX; this article explains how.
Contents
Here's how to type some common math braces and parentheses in LaTeX:
Type | LaTeX markup | Renders as
Parentheses; round brackets | (x+y) | \((x+y)\)
Brackets; square brackets | [x+y] | \([x+y]\)
Braces; curly brackets | \{ x+y \} | \(\{ x+y \}\)
Angle brackets | \langle x+y \rangle | \(\langle x+y\rangle\)
Pipes; vertical bars | |x+y| | \(\displaystyle| x+y |\)
Double pipes | \|x+y\| | \(\| x+y \|\)
The size of brackets and parentheses can be manually set, or they can be resized dynamically in your document, as shown in the next example:
\[ F = G \left( \frac{m_1 m_2}{r^2} \right) \] Notice that to insert the parentheses or brackets, the
\left and
\right commands are used. Even if you are using only one bracket,
both commands are mandatory.
\left and
\right can dynamically adjust the size, as shown by the next example:
\[ \left[ \frac{ N } { \left( \frac{L}{p} \right) - (m+n) } \right] \] When writing multi-line equations with the
align,
align* or
aligned environments, the
\left and
\right commands must be balanced
on each line and on the same side of &. Therefore the following code snippet will fail with errors: \[ y = 1 + & \left( \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \ldots \\ & \quad + \frac{1}{x^{n-1}} + \frac{1}{x^n} \right) \] The solution is to use "invisible" brackets to balance things out, i.e. adding a
\right. at the end of the first line, and a
\left. at the start of the second line after
&:
\[ y = 1 + & \left( \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \ldots \right. \\ & \quad \left. + \frac{1}{x^{n-1}} + \frac{1}{x^n} \right) \]
The size of the brackets can also be controlled explicitly. The commands \big, \Big, \bigg and \Bigg establish increasingly large sizes for the delimiters, and they can be applied to any of the delimiters listed above. For a complete list of parentheses and sizes see the reference guide.
LaTeX markup | Renders as
\big( \Big( \bigg( \Bigg( | \(\big( \; \Big( \; \bigg( \; \Bigg(\)
\big] \Big] \bigg] \Bigg] | \(\big] \; \Big] \; \bigg] \; \Bigg]\)
\big\{ \Big\{ \bigg\{ \Bigg\{ | \(\big\{ \; \Big\{ \; \bigg\{ \; \Bigg\{\)
\big \langle \Big \langle \bigg \langle \Bigg \langle | \(\big\langle \; \Big\langle \; \bigg\langle \; \Bigg\langle\)
\big \rangle \Big \rangle \bigg \rangle \Bigg \rangle | \(\big\rangle \; \Big\rangle \; \bigg\rangle \; \Bigg\rangle\)
\big| \Big| \bigg| \Bigg| | \(\displaystyle\big| \; \Big| \; \bigg| \; \Bigg|\)
\big\| \Big\| \bigg\| \Bigg\| | \(\displaystyle\big\| \; \Big\| \; \bigg\| \; \Bigg\|\)
I wanted to break this up into parts, talking about one question at a time, and hopefully get others to talk with me about this. It would be nice to talk to others about this stuff. So here's perhaps the first question which we would want to ask, when considering the type of 'stochastic mechanics' we've been talking about on Azimuth.
When is the reverse of a continuous time stochastic process also a valid continuous time stochastic process?
For every directed graph, we have an adjacency matrix $A$. This is defined by fixing a labelling of the nodes of the graph, and fixing an ordering of this labelling. Permuting the ordering lifts to a permutation of $A$.
To define a stochastic process on a directed graph (with non-negative edge weights), we need to consider the combinatorial Laplacian. We will consider 'walks' on this graph.
This is defined in terms of the generator
$$ L = A - D $$ where
$$ D_{jj} := \sum_i A_{ij} $$ and so the generator satisfies
$$ L_{i\neq j} \geq 0 $$ and $\forall j$
$$ \sum_i L_{ij} = 0 $$ The stochastic time-evolution matrix is then defined for all times $t\in \mathbb{R}^+$ as
$$ U_{ij} = \exp(t L)_{ij} $$and for $\psi(0)$ an initial vector of probabilities of finding a walker on a node at time $t=0$, we want to determine the probability of finding a walker at a node $k$ at a later time $t$. Here we mean that $\psi_j(0)$ is the jth entry of $\psi(0)$, which is the probability of finding a walker at the jth node. We have that $\sum_j \psi_j(0) = 1$.
The probability of finding a walker at node $k$ is given by the kth component of $\psi(t)$ as
$$ \psi(t) = \exp(t L)\psi(0) $$ Now, we want to reverse all of the arrows in our graph, which is represented by taking the transpose of $A$. So the question is, ''is the time-reversed walk, generated by $L^\top$, still a valid stochastic process?''
To understand this, we should return to the defining properties of what it means to be a stochastic generator.
We have that
$$ L^\top = A^\top - D $$ Here, since $D$ is diagonal, the transpose of course does nothing to it. So the question becomes whether $L^\top$ still satisfies the definitions, making it a valid stochastic generator. In general, this is not the case. However, there are some cases when it still works, and these are interesting. The elementary case is of course given by $A=A^\top$, but we can actually say a bit more than that. It turns out to be enough if the following is true, for all $j$
$$ \sum_i A_{ij} = \sum_i A_{ji} $$ There are a few interesting cases of this. Here's one we talked a bit about before.
these graphs are called balanced.
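A small NumPy sketch (my own, using made-up 3-node examples) of this criterion: build $L = A - D$ from the column sums, check the two generator conditions, and confirm that $L^\top$ is again a valid generator exactly when the graph is balanced:

```python
import numpy as np

def generator(A):
    """L = A - D, where D holds the column sums of A, so every column of L sums to 0."""
    return A - np.diag(A.sum(axis=0))

def is_stochastic_generator(L, tol=1e-12):
    """Check the defining properties: L_{i!=j} >= 0 and zero column sums."""
    off_diag = L - np.diag(np.diag(L))
    return bool(np.all(off_diag >= -tol) and np.allclose(L.sum(axis=0), 0.0, atol=tol))

def evolve(L, p0, t, terms=40):
    """psi(t) = exp(tL) psi(0), via a truncated Taylor series of the matrix exponential."""
    out = np.zeros_like(p0)
    term = p0.copy()
    for k in range(terms):
        out = out + term
        term = (t / (k + 1)) * (L @ term)
    return out

# Balanced (but non-symmetric) example: a directed 3-cycle,
# where in-degree equals out-degree at every node.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
L = generator(A)
assert is_stochastic_generator(L)
assert is_stochastic_generator(L.T)     # the reversed walk is still stochastic

# Probability is conserved under the evolution
p = evolve(L, np.array([1., 0., 0.]), 0.7)
assert np.isclose(p.sum(), 1.0)

# Unbalanced example: one extra edge breaks the balance condition,
# and the transposed generator is no longer valid.
B = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [1., 0., 0.]])
assert is_stochastic_generator(generator(B))
assert not is_stochastic_generator(generator(B).T)
```

The transposed generator fails only through its column sums, which is precisely the balance condition $\sum_i A_{ij} = \sum_i A_{ji}$ stated above.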
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that? |
1. Observation of a peaking structure in the J/psi phi mass spectrum from B-+/- -> J/psi phi K-+/- decays
PHYSICS LETTERS B, ISSN 0370-2693, 06/2014, Volume 734, pp. 261 - 281
A peaking structure in the J/psi phi mass spectrum near threshold is observed in B-+/- -> J/psi phi K-+/- decays, produced in pp collisions at root s = 7 TeV...
PHYSICS, NUCLEAR | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | scattering [p p] | J/psi --> muon+ muon | experimental results | Particle Physics - Experiment | Nuclear and High Energy Physics | Phi --> K+ K | vertex [track data analysis] | CERN LHC Coll | B+ --> J/psi Phi K | Peaking structure | hadronic decay [B] | Integrated luminosity | Data sample | final state [dimuon] | mass enhancement | width [resonance] | (J/psi Phi) [mass spectrum] | Breit-Wigner [resonance] | 7000 GeV-cms | leptonic decay [J/psi] | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
2. Measurement of the ratio of the production cross sections times branching fractions of B_c^± → J/ψπ^± and B^± → J/ψK^±, and ℬ(B_c^± → J/ψπ^±π^±π^∓)/ℬ(B_c^± → J/ψπ^±), in pp collisions at √s = 7 TeV
Journal of High Energy Physics, ISSN 1029-8479, 1/2015, Volume 2015, Issue 1, pp. 1 - 30
The ratio of the production cross sections times branching fractions σ(B_c^±)ℬ(B_c^± → J/ψπ^±)/σ(B^±)ℬ(B^± → J/ψK^±)...
B physics | Branching fraction | Hadron-Hadron Scattering | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
3. Precise Measurement of the e+e- →π+π-J /ψ Cross Section at Center-of-Mass Energies from 3.77 to 4.60 GeV
Physical Review Letters, ISSN 0031-9007, 03/2017, Volume 118, Issue 9, p. 092001
The cross section for the process e^{+}e^{-}→π^{+}π^{-}J/ψ is measured precisely at center-of-mass energies from 3.77 to 4.60 GeV using 9 fb^{-1} of data...
Journal Article
Physical Review Letters, ISSN 0031-9007, 06/2019, Volume 122, Issue 23, p. 232002
Journal Article
Physics Letters B, ISSN 0370-2693, 05/2016, Volume 756, Issue C, pp. 84 - 102
A measurement of the ratio of the branching fractions of the B_s^0 meson to J/ψφ and to J/ψf_0(980) is presented. The J/ψ, φ, and f_0(980) are observed through their decays to μ^+μ^-, K^+K^-, and π^+π^-,...
scattering [p p] | pair production [pi] | statistical | Physics, Nuclear | 114 Physical sciences | Phi --> K+ K | Astronomy & Astrophysics | LHC, CMS, B physics, Nuclear and High Energy Physics | f0 --> pi+ pi | High Energy Physics - Experiment | Compact Muon Solenoid | pair production [K] | Science & Technology | mass spectrum [K+ K-] | Ratio B | Large Hadron Collider (LHC) | Nuclear & Particles Physics | 7000 GeV-cms | leptonic decay [J/psi] | J/psi --> muon+ muon | experimental results | Nuclear and High Energy Physics | Physics and Astronomy | branching ratio [B/s0] | CERN LHC Coll | B/s0 --> J/psi Phi | CMS collaboration ; proton-proton collisions ; CMS ; B physics | Physics | Física | Physical Sciences | hadronic decay [f0] | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | Physics, Particles & Fields | 0202 Atomic, Molecular, Nuclear, Particle And Plasma Physics | colliding beams [p p] | hadronic decay [Phi] | mass spectrum [pi+ pi-] | B/s0 --> J/psi f0
Journal Article
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 06/2019, Volume 122, Issue 23
Journal Article
CHINESE PHYSICS C, ISSN 1674-1137, 01/2017, Volume 41, Issue 1
A measurement of the number of J/psi events collected with the BESIII detector in 2009 and 2012 is performed using inclusive decays of the J/psi. The number of...
BESIII detector | PHYSICS, NUCLEAR | inclusive J/psi events | number of J/psi events | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | Fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Subatomär fysik | Natural Sciences
Journal Article
8. Observation of a Charged Charmoniumlike Structure in e(+)e(-) -> pi(+)pi(-) J/psi at root s=4.26 GeV
Physical Review Letters, ISSN 0031-9007, 06/2013, Volume 110, Issue 25
We study the process e(+)e(-) -> pi(+)pi(-) J/psi at a center-of-mass energy of 4.260 GeV using a 525 pb(-1) data sample collected with the BESIII detector...
PHYSICS, MULTIDISCIPLINARY
Journal Article
9. Observation of a near-threshold omega J/psi mass enhancement in exclusive B -> K omega J/psi decays
PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 05/2005, Volume 94, Issue 18
We report the observation of a near-threshold enhancement in the omega J/psi invariant mass distribution for exclusive B -> K omega J/psi decays. The results...
BELLE | BREAKING | STATES | PHYSICS, MULTIDISCIPLINARY | SEARCH | ANNIHILATION | EXOTIC MESONS | CHARMONIUM | Physics - High Energy Physics - Experiment
Journal Article
Physical Review Letters, ISSN 0031-9007, 06/2013, Volume 110, Issue 25
Journal Article
Journal of High Energy Physics, ISSN 1126-6708, 2012, Volume 2012, Issue 5
Journal Article
12. Suppression of non-prompt J/psi, prompt J/psi, and Upsilon(1S) in PbPb collisions at root s(NN)=2.76 TeV
JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 05/2012, Issue 5
Yields of prompt and non-prompt J/psi, as well as Upsilon(1S) mesons, are measured by the CMS experiment via their mu(+)mu(-) decays in PbPb and pp collisions...
P(P)OVER-BAR COLLISIONS | CROSS-SECTIONS | PERSPECTIVE | MOMENTUM | ROOT-S=7 TEV | LHC | COLLABORATION | QUARK-GLUON PLASMA | PP COLLISIONS | NUCLEUS-NUCLEUS COLLISIONS | Heavy Ions | PHYSICS, PARTICLES & FIELDS
Journal Article
Topological Subspaces Examples 1
Recall from the Topological Subspaces page that if $(X, \tau)$ is a topological space and $A \subseteq X$ then the subspace topology on $A$ is defined to be $\tau_A = \{ A \cap U : U \in \tau \}$.
We verified that $\tau_A$ is indeed a topology for any subset $A$ of $X$.
We will now look at some examples of subspace topologies.
Example 1 Consider the topological space $(\mathbb{R}^2, \tau)$ where $\tau$ is the usual topology of open disks in $\mathbb{R}^2$. Determine what the subspace topology is for the subset $A = \{ (x, 0) \in \mathbb{R}^2 : x \in \mathbb{R} \} \subseteq \mathbb{R}^2$.
Note that the set $A = \{ (x, 0) \in \mathbb{R}^2 : x \in \mathbb{R} \}$ is simply the real line $\mathbb{R}$. Geometrically we can see that the subspace topology $\tau_A$ will simply be the usual topology on $\mathbb{R}$.
To see this, consider any open set in $\mathbb{R}$ with respect to the usual topology of open intervals in $\mathbb{R}$. Then any open interval $(a, b)$ can be obtained by taking an open disk in $\mathbb{R}^2$ whose intersection with the line $y = 0$ is the open segment from $(a, 0)$ to $(b, 0)$.
Since every open set in $\mathbb{R}$ is a union of these open intervals, we see that the subspace topology on $\mathbb{R}$ is simply the usual topology on $\mathbb{R}$.
Example 2 Consider the topological space $(\mathbb{R}, \tau)$ where $\tau$ is the usual topology of open intervals in $\mathbb{R}$. Verify that the subspace topology on $\mathbb{Z} \subseteq \mathbb{R}$ is the discrete topology on $\mathbb{Z}$.
Let $x \in \mathbb{Z}$. Then $\left ( x - \frac{1}{2}, x + \frac{1}{2} \right) \cap \mathbb{Z} = \{ x \}$. Hence every singleton set $\{ x \}$ is contained in the subspace topology on $\mathbb{Z}$. But this implies that $\tau_{\mathbb{Z}}$ is the discrete topology on $\mathbb{Z}$.
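A tiny numeric illustration of this argument (a sketch; the window of integers checked is an arbitrary choice):

```python
from fractions import Fraction

# For each integer x, the open interval (x - 1/2, x + 1/2) meets Z in {x},
# so each singleton is open in the subspace topology on Z.
window = range(-5, 6)
for x in window:
    lo = Fraction(2 * x - 1, 2)      # x - 1/2
    hi = Fraction(2 * x + 1, 2)      # x + 1/2
    assert {n for n in window if lo < n < hi} == {x}
print("each singleton {x} is open in the subspace topology on Z")
```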
I'm going to suggest an approach to transitive closures which will yield the usual definition, in the special case of a 'simple' digraph having an adjacency matrix with entries in {0,1}. Namely: the transitive closure can be regarded as a maximum value over weights of walks, where we regard multi-arcs as arcs with greater weight. We can then formulate two different definitions according to how you would like to define the weight of a walk.
Throughout,
A(D) denotes the adjacency matrix of a (multi-)digraph D.
A. 'Max-min' approach
If you would like the weight of a walk to be the minimum weight of any edge (according to the principle of a chain being only as strong as the weakest link), you would then define
$$A(T)_{a,b} \;=\; \max_{\ell \in \mathbb N} \; \max_{\substack{v \in V(D)^{\ell+2} \\ (v_0,\, v_{\ell+1}) = (a,b)}} \; \min_{0 \le j \le \ell} \Bigl[ A(D)_{v_j,v_{j+1}} \Bigr]\,,$$
fairly straightforwardly.
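As a sketch of how this max-min closure can be computed, a Floyd–Warshall-style relaxation over intermediate vertices works (assuming the weights are stored in a dense matrix; `maxmin_closure` is an illustrative name, not a library function):

```python
def maxmin_closure(A):
    """Max-min transitive closure: T[a][b] is the best (maximum) over all
    walks from a to b of the weakest (minimum) arc weight on the walk.
    Floyd-Warshall-style relaxation over intermediate vertices k."""
    n = len(A)
    T = [row[:] for row in A]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                T[i][j] = max(T[i][j], min(T[i][k], T[k][j]))
    return T

# With a {0,1} adjacency matrix this reduces to the usual transitive closure:
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
print(maxmin_closure(A))   # entry [0][2] becomes 1: a walk 0 -> 1 -> 2 exists
```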
B. 'Combinatoric' approach
If you would rather conceive of walk-weights as a product of the constituent arc-weights, as happens in sum-over-paths descriptions of probabilistic processes, you should rather define
$$A(T)_{a,b} \;=\; \sup_{\ell \in \mathbb N} \; \max_{\substack{v \in V(D)^{\ell+2} \\ (v_0,\, v_{\ell+1}) = (a,b)}} \; \prod_{j=0}^\ell \Bigl[ A(D)_{v_j,v_{j+1}} \Bigr]\,,$$
where the supremum may be replaced by a maximum in the case where (as with probabilistic mixing) all arc-weights are between −1 and 1, if the network does not contain any directed cycles, or similar conditions.
(If your digraph is not acyclic, and some vertex-pairs in some strongly-connected component have multiple arcs between them, and you don't like the idea of digraphs with countably infinitely many arcs between vertices, you may wish to replace the maximum over tuples with a maximum over non-repeating sequences, in which case the supremum also becomes a maximum. This corresponds to taking maximum weights of paths, rather than walks.)
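For the 'combinatoric' definition, when all arc weights lie in $[0,1]$ the supremum over walks is attained on a simple path (going around a cycle only shrinks the product), so a Floyd–Warshall-style relaxation again applies. A sketch under that assumption (`max_product_closure` is an illustrative name):

```python
def max_product_closure(A):
    """Best-walk weight, where a walk's weight is the product of its arc
    weights. Valid as written when all weights are in [0, 1], since then
    the supremum over walks is attained on a simple path."""
    n = len(A)
    T = [row[:] for row in A]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                T[i][j] = max(T[i][j], T[i][k] * T[k][j])
    return T

# A directed 3-cycle with sub-unit weights: traversing the whole cycle only
# shrinks the product, so the closure picks the direct two-arc path.
A = [[0.0, 0.5, 0.0],
     [0.0, 0.0, 0.8],
     [0.9, 0.0, 0.0]]
T = max_product_closure(A)
print(T[0][2])   # 0.5 * 0.8 = 0.4
```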
Search
Now showing items 1-5 of 5
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(Elsevier, 2016-02)
The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ...
Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2013-11)
We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
Topological Subspaces Examples 2
Recall from the Topological Subspaces page that if $(X, \tau)$ is a topological space and $A \subseteq X$ then the subspace topology on $A$ is defined to be $\tau_A = \{ A \cap U : U \in \tau \}$.
We verified that $\tau_A$ is indeed a topology for any subset $A$ of $X$.
We will now look at some examples of subspace topologies.
Example 1 Let $X$ be a set with the discrete topology $\tau$, and let $A \subseteq X$. Prove that the subspace $(A, \tau_A)$ also has the discrete topology (on $A$).
Let $A \subseteq X$ and let $U \subseteq A$ be any subset. To show that $(A, \tau_A)$ has the discrete topology, we need only show that $U$ is open in $A$, i.e., that $U \in \tau_A$.
Since $\tau$ is the discrete topology on $X$ and $U \subseteq A \subseteq X$, we have that $U$ is open in $X$. Moreover, $A \cap U = U$, so $U \in \tau_A$. Hence $\tau_A$ is the discrete topology on $A$.
Example 2 Let $(X, \tau)$ be a topological space and let $A \subseteq X$. Prove that $\tau_A \subseteq \tau$ if and only if $A \in \tau$.
$\Rightarrow$ Suppose that $\tau_A \subseteq \tau$. Since $A$ is open in $A$, this means that $A \in \tau_A$, so $A \in \tau$.
$\Leftarrow$ Suppose that $A \in \tau$. Let $U \in \tau_A$. Then $U$ is open in $A$, so there exists an open set $V \in \tau$ such that $U = A \cap V$. But $A$ is open in $X$ since $A \in \tau$, so $A \cap V = U$ is open in $X$. Therefore $U \in \tau$. This shows that $\tau_A \subseteq \tau$.
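The equivalence can be brute-force checked on a small finite example (a sketch; the particular topology on $X = \{1,2,3\}$ below is an arbitrary choice):

```python
from itertools import combinations

X = frozenset({1, 2, 3})
# A sample topology on X: contains the empty set and X, and is closed
# under unions and intersections.
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(sorted(X), r)]

for A in subsets:
    tau_A = {A & U for U in tau}        # the subspace topology on A
    # tau_A is contained in tau exactly when A itself is open in X:
    assert (tau_A <= tau) == (A in tau)
print("verified: tau_A is a subset of tau iff A is in tau, for every A")
```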
Search
Now showing items 1-10 of 33
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
I am not quite sure if I exactly get what you are looking for, but I'll give it a try.
This answer refers to the original question before the edit
I'm looking for some kind of crypto-based data structure that will allow me to produce a signature over a set of hashes such that I can verify that any of the hashes is in the set at a later point in time without having to distribute all of the hashes.
One possible approach to produce a signature for a sequence of messages $(m_1,\ldots,m_n)$ (or hash values) is to build a Merkle tree from this sequence.
The idea of a Merkle tree is to assign the messages $m_i$ to the leaves of a binary tree from left to right, and do the hashing recursively upwards, starting from the lowest level in the tree.
More formally, a Merkle tree is a complete binary tree, together with a cryptographic hash function $H:\{0,1\}^*\rightarrow \{0,1\}^{\ell}$ and an assignment $\phi:N\rightarrow \{0,1\}^{\ell}$, where $N$ is the set of nodes of the tree. The assignment $\phi$ for the label of the nodes is recursively defined, where $v_P$ is the parent node and $v_L$ and $v_R$ the left and right child respectively. Furthermore, $x$ is a string that is assigned to a leaf.
\begin{equation} \phi(v_P):= \begin{cases} H(\phi(v_L)||\phi(v_R)) & \text{if $v_P$ has two children};\\ H(\phi(v_L)) & \text{if $v_P$ has one child};\\ H(x) & \text{if $v_P$ is a leaf.} \end{cases} \end{equation}
Additionally, define the authentication path $A_v=\{a_i \mid 0<i\leq h\}$ of a leaf $v$ as the set containing all values $a_i$, where the value $a_i$ at height $i$ is defined to be the label of the sibling of the node at height $i$ on the unique path from $v$ to the root.
Note that if your values $(m_1,\ldots,m_n)$ are already hashes, then you can directly assign these values to the leaves (without hashing them again). Subsequently, the $m_i$ are denoted as messages (irrespective if they are messages or hash values).
Put differently, a Merkle tree can be used to release a message $m_i$ and the corresponding authentication path $A_{m_i}$ such that everybody is able to verify that $m_i$ "is in the tree", but does not require access to all the remaining messages.
Now take a secure signature scheme and sign the root-hash of the Merkle tree. Then given the signature, $m_i$ and $A_{m_i}$ it can be checked that $m_i$ is in the signed sequence $(m_1,\ldots,m_n)$ without requiring all the other messages $m_j$, $j\neq i$. Basically, from $m_i$ and $A_{m_i}$ one recomputes the root hash and checks the digital signature.
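A minimal sketch of this construction in Python, assuming SHA-256 as the hash $H$ and the hash-single-child rule from the definition above (`build_tree`, `auth_path`, and `verify` are illustrative names, not a standard API); signing the root with a real signature scheme is omitted:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return the list of levels: level 0 holds the leaf hashes H(m_i),
    the last level holds only the root label."""
    levels = [[h(m) for m in leaves]]
    while len(levels[-1]) > 1:
        cur, nxt = levels[-1], []
        for i in range(0, len(cur), 2):
            if i + 1 < len(cur):
                nxt.append(h(cur[i] + cur[i + 1]))  # two children
            else:
                nxt.append(h(cur[i]))               # one child
        levels.append(nxt)
    return levels

def auth_path(levels, index):
    """Sibling labels on the path from leaf `index` up to the root."""
    path = []
    for level in levels[:-1]:
        sib = index ^ 1
        if sib < len(level):
            path.append(("L" if sib < index else "R", level[sib]))
        else:
            path.append(("none", b""))              # node has no sibling
        index //= 2
    return path

def verify(message, path, root):
    """Recompute the root from a message and its authentication path."""
    node = h(message)
    for side, sib in path:
        if side == "L":
            node = h(sib + node)
        elif side == "R":
            node = h(node + sib)
        else:
            node = h(node)
    return node == root

msgs = [b"m0", b"m1", b"m2", b"m3"]
levels = build_tree(msgs)
root = levels[-1][0]
assert verify(b"m2", auth_path(levels, 2), root)      # m2 is in the tree
assert not verify(b"mX", auth_path(levels, 2), root)  # a forged value fails
```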
However, note that this does not ensure a real hiding of the remaining messages $m_j$. If the remaining messages have a short length and/or are known to come from a small set, one can mount a brute-force attack by simply hashing all possible values and check whether the result matches the root hash.
This can be prevented if one does not use simple hashes for the leaf nodes, but uses commitments instead. This approach is for instance applied in redactable signatures (RS) or content extraction signatures (CES).
How does this relate to what you seek
You do not have to distribute all hashes, but for every value $m_i$ you have to distribute $(m_i,A_{m_i})$ as a "proof" that the value is in the set (the Merkle tree). The advantage is that if you use the commitment-based approach as applied in RS or CES, nobody can test against your signature to check whether a given value is in the set, unless you explicitly allow that by providing $(m_i,A_{m_i})$.
EDIT (to answer the edited question)
Any approach that will give you cryptographic security guarantees that you will not encounter false positives (such as cryptographic accumulators, Merkle trees, vector commitments, zero-knowledge sets, etc.) will require to provide a "witness" for every value to be able to check against the structure.
If you tolerate a larger false-positive probability you can use for instance Bloom filters to represent your set and allow set membership queries without any witnesses.
dc.contributor.author Area, Iván
dc.contributor.author Godoy, Eduardo
dc.contributor.author Marcellán Español, Francisco José
dc.contributor.author Moreno Balcázar, Juan José
dc.date.accessioned 2009-12-17T10:29:58Z
dc.date.available 2009-12-17T10:29:58Z
dc.date.issued 2000-06
dc.identifier.bibliographicCitation Journal of Computational and Applied Mathematics, 2000, vol. 118, n. 1-2, p. 1-22
dc.identifier.issn 0377-0427
dc.identifier.uri http://hdl.handle.net/10016/6141
dc.description 22 pages, no figures.-- MSC codes: Primary 33C25; Secondary 33D45.-- Issue title: "Higher transcendental functions and their applications".
dc.description MR#: MR1765938 (2001d:33018)
dc.description Zbl#: Zbl 0957.33008
dc.description.abstract In this paper, polynomials which are orthogonal with respect to the inner product $$\multline\langle p,r\rangle_S=\sum^\infty_{k=0}p(q^k)r(q^k) {(aq)^k(aq;q)_\infty\over(q;q)_k}\\ +\lambda\sum^\infty_{k=0} (D_qp)(q^k)(D_qr)(q^k){(aq)^k(aq;q)_\infty\over(q;q)_k},\endmultline$$ where $D_q$ is the $q$-difference operator, $\lambda\geq0,\ 0<q<1$ and $0<aq<1$, are studied. For these polynomials, algebraic properties and $q$-difference equations are obtained as well as their relation with the monic little $q$-Laguerre polynomials. Some properties of the zeros of these polynomials are also deduced. Finally, the relative asymptotics $\{Q_n(x)/p_n(x;a )\}_n$ on compact subsets of ${\bf C}\setminus[0,1]$ is given, where $Q_n(x)$ is the $n$th degree monic orthogonal polynomial with respect to the above inner product and $p_n(x;a )$ denotes the monic little $q$-Laguerre polynomial of degree $n$.
dc.description.sponsorship E.G. wishes to acknowledge partial financial support by Dirección General de Enseñanza Superior (DGES) of Spain under Grant PB-96-0952. The research of F.M. was partially supported by DGES of Spain under Grant PB96-0120-C03-01 and INTAS Project 93-0219 Ext. J.J.M.B. also wishes to acknowledge partial financial support by Junta de Andalucía, Grupo de Investigación FQM 0229.
dc.format.mimetype application/pdf
dc.language.iso eng
dc.publisher Elsevier
dc.rights © Elsevier
dc.subject.other Orthogonal polynomials
dc.subject.other Sobolev orthogonal polynomials
dc.subject.other Little q-Laguerre polynomials
dc.title Inner products involving q-differences: the little q-Laguerre-Sobolev polynomials
dc.type article
dc.type.review PeerReviewed
dc.description.status Publicado
dc.relation.publisherversion http://dx.doi.org/10.1016/S0377-0427(00)00278-8
dc.subject.eciencia Matemáticas
dc.identifier.doi 10.1016/S0377-0427(00)00278-8
dc.rights.accessRights openAccess
Because you want a similar temperature and climate to Earth's, your new planet would have to be at approximately the same distance from its sun as Earth is, assuming both solar systems have similar-sized suns.
For Earth's mean temperature of $15^{\circ}\mathrm C$, the planet would have to be about 1 AU from its sun.
For your planet to possess seasons and be similar in climate to Earth, it would need the same axial tilt, which is responsible for Earth's seasons. Earth's axial tilt is about $23.45$ degrees, your planet's would have to be similar for similar seasons.
Your planet would also need the same day-length so that each part of it will receive the same warming from the sun per day. If the day were longer, your planet might become a bit desert-like,
very hot in the day, and freezing at night. That doesn't sound promising for life.
However, for a massive planet to possess a 24 hour day, the surface would have to be moving much,
much faster than Earth's:
Earth's diameter is $12,756$ km, so it has a radius of $6,378$ km.
Your new planet's radius is 10x larger, so would be $63,780$ km. That means your planet has a circumference of $2 \cdot 63,780\pi$ km, approximately equal to $400,742$ km.
For your planet to have a $24$ hour day, the surface would have to spin at $\frac{400742}{24}$ km/h, about $16,698$ km/h, which is pretty fast. (Earth's spin is only $1,673$ km/h.)
Your planet would be rotating at about
$$16,698~\mathrm{km/h}$$
which is pretty darn fast!
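The rotation arithmetic above can be checked directly (a sketch using the radius and 24-hour day assumed in this answer):

```python
import math

r_km = 63_780           # assumed radius: 10x Earth's 6,378 km
day_h = 24              # assumed day length, hours

circumference = 2 * math.pi * r_km          # equatorial circumference, km
speed = circumference / day_h               # equatorial surface speed, km/h

print(f"circumference: {circumference:,.0f} km")   # ~400,742
print(f"surface speed: {speed:,.0f} km/h")         # ~16,698
```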
Honestly, I was half-hoping that speed of this planet's rotation might be nearer Jupiter's $45,061$ km/h, so that I could talk about extreme weather patterns and phenomena such as the Great Red Spot.
Apparently your planet will not be subject to anything near as powerful as Jupiter's hurricanes, but your planet's spin is definitely fast enough to increase the strength of - and therefore devastation caused by - any of its storms.
Also, your planet's spin is still slow enough to lead to similar weather systems as on Earth, where winds are constrained to a hemisphere, and Jet Streams will become possible, which will help regulate your planet's climate and keep it more consistent with Earth's.
However, we have yet to face the biggest problem imposed by a massive planet: keeping gravity somewhat similar to Earth's.
This is nigh impossible, as we will see after we calculate what density our planet would need:
Earth's density is $5,540$ kg/m$^3$, and its volume $1.08321×10^{21}$ m$^3$.
Your planet's radius is 10x larger, therefore its volume must be $10^3$x larger. This means that your planet's volume is $1.08321×10^{24}$ m$^3$.
Usually, for two planets to have the same surface gravity, their masses would need to be the same; but the inverse square law states that gravitational attraction is inversely proportional to the square of the distance between two objects.
Because your planet is 10x larger, anyone on the surface will be 10x further away from its center than they would be from Earth's center on its surface. Using this equality, we can calculate the necessary mass of your planet:
$$
\mathrm g = \frac{G\cdot M}{r^2}
$$
where $\mathrm g$ represents the acceleration due to gravity ($\mathrm{m/s^2}$), $G$ the gravitational constant ($6.673×10^{-11}$ N·(m/kg)$^2$), $M$ the mass of our planet and $r$ the radius of our planet.
We can now rearrange and solve for $M$:
$$
M = \frac{\mathrm g r^2}{G} \\~\\
M = \frac{9.8 \times 63780000^2}{6.673 \times 10^{-11}}\\~\\
M \approx 5.974 \times 10^{26}
$$
So, our planet's mass would have to be approximately $5.974 \times 10^{26}$ kg, which looks about right; Earth's mass is about $5.972 \times 10^{24}$ kg, and, for the same gravity, we'd expect our planet to be 100x heavier, which it is! Of course, we are suffering rounding inaccuracies, but so far, so good.
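The mass computation can be verified numerically (a sketch with the same constants used above):

```python
G = 6.673e-11        # gravitational constant used in the answer, N·(m/kg)^2
g = 9.8              # target surface gravity, m/s^2
r = 63_780_000.0     # planet radius, m (10x Earth's)

M = g * r**2 / G
print(f"required mass: {M:.3e} kg")   # ~5.974e26, about 100x Earth's mass
```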
Now, we can calculate our planet's target density, using the equality that
$$
p = \mathrm{M/V}
$$
where $p$ represents density, $\mathrm M$ mass and $\mathrm V$ volume, we see that:
$$
p = \frac{5.974×10^{26}}{1.08321×10^{24}}
$$
so therefore the planet's density must be $551.5$ kg/m$^3$ if you want the same attraction under gravity.
Saturn is the least-dense planet in our solar system, with a density of $687$ kg/m$^3$. However, Saturn is a gas giant, composed mainly of Hydrogen and Helium: good luck mining for minerals in a cloud of Hydrogen!
Your planet would need a density of $551.5$ kg/m$^3$, about $135$ kg/m$^3$ less than Saturn's!
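The density figure can be double-checked by dividing the required mass by the volume computed earlier (a sketch; values as stated in the answer):

```python
M = 5.974e26          # mass required for Earth-like surface gravity, kg
V = 1.08321e24        # volume of the 10x-radius planet, m^3

rho = M / V
print(f"required density: {rho:.1f} kg/m^3")   # ~551.5
```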
So, it should be fairly obvious that you will never be able to get exactly the same surface gravity as you do on Earth, but fauna would be able to exist under different gravitational conditions, and, what's more, higher gravity means more pressure, which means minerals will form more readily!
There could be some special minerals which only form within the crust of this planet because of its strong gravity, making the planet more valuable!
However, to enable humans to land on this planet, it will be almost impossible to get the gravity weak enough (have you thought of using exoskeletons for manned missions on this planet?); the planet would have to be as sparse as possible.
If your planet was composed almost entirely of some really porous Vesicular rock, almost like a sponge, it would definitely keep down the density.
The same process which forms such rock could also create massive air-pockets, creating a cavernous planet with many underground tunnels and caves on the same scale as Erebor!
A cavernous planet, made of porous rock, would remain reasonably sparse.
The crust of the planet should be as thick as possible, because a molten mantle and solid core would be more dense than this spacious, cavernous crust. This crust would also need constant renewal, so that collapses don't bring up the density, and to remain, again, more similar to earth.
However, tectonic activity would have to be minimal, to prevent metamorphic and denser igneous rocks being formed, so volcanoes would be necessary to keep rejuvenating the surface. Underground lava-flows might be commonplace, where it is still hot enough so that the lava cools slowly into some porous rock. Think about how awesome that could be; a massive, underground cave system where magma seeps up through the floors forming puddles of fiery lava!
The one thing we can't change is the presence of a solid, dense core; this is because a nickel/iron core is necessary for a magnetic field, which would protect the planet as the Earth is.
The geomagnetic field would protect the planet from solar winds, thus retaining the Atmosphere and ozone layer, protecting inhabitants from radiation which would otherwise be harmful. Also, an atmosphere is generally a handy thing to have if you want life to live on a planet!
A magnetic field is necessary to protect the planet like earth, so a dense core is also necessary.
We can actually calculate the density, and therefore acceleration due to gravity, of our planet:
Of course, our planet, possessing a molten mantle, liquid seas, and solid core, would likely be slightly more dense than Pumice (a Vesicular rock), but let's just say that Pumice is the most common rock on our planet, and everything else has a similar density.
The density of Pumice is $641 \mathrm{kg/m^3}$, so our planet's density would also be about $641 \mathrm{kg/m^3}$.
The volume of our planet is $\frac43 \pi r^3$, about $1.08321×10^{24}$ m$^3$.
Now, using our planet's assumed density and volume, we can plug the values into our density equation:
$$
p = \mathrm{M/V} \\~\\
641 = \frac{\mathrm M}{1.08321×10^{24}}
$$
Rearranging, we get
$$
\mathrm M = 641 \times 1.08321×10^{24}
$$
which is approximately equal to $6.94 \times 10^{26}$ kg. Our planet is pretty heavy!
Using this equation (same as before)
$$
\mathrm g = \frac{G\cdot M}{r^2}
$$
we can solve for $\mathrm g$, the planet's gravitational acceleration:
$$
\mathrm g = \frac{(6.673 \times 10^{-11}) \times (6.94 \times 10^{26})}{63780000^2}
$$
which is approximately $11.38 \mathrm{m/s^2}$.
Wait, only $11.38$? That's just $1.16$ Earths! I'm in luck!
Well, no, not really, not unless you make some other changes as well: the actual density of your planet would be much greater, as a large proportion of the planet would probably be magma (density: $3,100$ kg/m$^3$), and, if the planet is like Earth, a lot of the surface would have to be water (density: $1,000$ kg/m$^3$); the planet's mean density would obviously be above Pumice's $641$ kg/m$^3$.
However, if you discover some way of limiting the density of your planet's magma, this would no longer be an obstacle to your 1 Earth goal!
Maybe the magma is filled with air, think of carbonated water, a similar process could have trapped and compressed air within the magma of your planet.
This idea also makes the Vesicular-rock-only surface more likely, as when the lava is released through fissures in the crust (think Volcanoes, etc.), the trapped air within the lava will expand, creating air bubbles. This is like getting the bends when re-surfacing from a deep dive: as pressure is released, compressed nitrogen within one's blood quickly expands into bubbles.
And my magma density figures were based on basalt; vesicular lava would have a lower density.
Earth-like gravity still not too far-fetched an idea.
Anyway, that exoskeleton idea still seems pretty cool to me.
Piecewise Smooth Curves in the Complex Plane
We will soon look at integrals of complex functions along piecewise smooth curves in the complex plane but we will first need to properly define what we mean by "curve" and "piecewise smooth curves".
Definition: A continuous function $\gamma : [a, b] \to \mathbb{C}$ is called a Curve in the complex plane. The point $\gamma (a)$ is called the Initial Point of the curve, and the point $\gamma (b)$ is called the Terminal Point of the curve.
For example, consider the following curve $\gamma : [0, 2\pi] \to \mathbb{C}$ defined by $\gamma (t) = e^{it} = \cos t + i \sin t$.
Then $\gamma$ traverses the circle centered at the origin with radius $1$ whose initial and terminal point is $\gamma (0) = \gamma (2\pi) = 1 + 0i$, in a counterclockwise direction.
For another example, consider the curve $\gamma : [0, 1] \to \mathbb{C}$ defined by $\gamma (t) = t + i$.
Then $\gamma$ traverses the horizontal line segment with initial point $0 + i$ and terminal point $1 + i$.
Of course, we can also consider vertical line segments as curves. For example, consider the curve $\gamma : [0, 1] \to \mathbb{C}$ defined by $\gamma (t) = 1 + it$.
Then $\gamma$ traverses the vertical line segment with initial point $1 + 0i$ and terminal point $1 + i$.
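The three examples can be checked numerically, assuming the standard parametrizations $\gamma(t) = e^{it}$, $\gamma(t) = t + i$, and $\gamma(t) = 1 + it$ consistent with the stated initial and terminal points:

```python
import cmath
import math

circle = lambda t: cmath.exp(1j * t)   # t in [0, 2*pi]: unit circle
horiz  = lambda t: t + 1j              # t in [0, 1]: horizontal segment
vert   = lambda t: 1 + 1j * t          # t in [0, 1]: vertical segment

# Initial and terminal points match the ones stated above:
assert abs(circle(0) - (1 + 0j)) < 1e-12
assert abs(circle(2 * math.pi) - (1 + 0j)) < 1e-12
assert horiz(0) == 1j and horiz(1) == 1 + 1j
assert vert(0) == 1 and vert(1) == 1 + 1j
print("all initial and terminal points check out")
```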
We will be wanting to look at particular types of curves in the succeeding sections, so we will now make some important definitions which classify curves in the complex plane.
Definition: A curve $\gamma : [a, b] \to \mathbb{C}$ in the complex plane is said to be a Piecewise Smooth Curve if there exist finitely many points $a = a_0 < a_1 < ... < a_n = b$, called a Partition of $[a, b]$, for which: 1) $\gamma$ is infinitely differentiable on each open subinterval $(a_{k}, a_{k+1})$; 2) the derivative of $\gamma$ is continuous on each closed subinterval $[a_k, a_{k+1}]$. Conditions (1) and (2) simply state that a piecewise smooth curve is a curve made up of finitely many smooth pieces.
Pretty much all of the curves that we will encounter will be piecewise smooth curves and in general, it is not hard to distinguish between a curve that is piecewise smooth and one that is not if we have an idea of what the graphs of these curves look like. |
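The three example curves above can be written down concretely in code. The parameterizations below are the standard ones and are assumed to match the page's numbered formulas; this is an illustrative sketch, not part of the page.

```python
import cmath

# Curve (1): gamma : [0, 2*pi] -> C, the unit circle traversed counterclockwise
def circle(t):
    return cmath.exp(1j * t)

# Curve (2): gamma : [0, 1] -> C, horizontal segment from 0+i to 1+i
def horizontal(t):
    return t + 1j

# Curve (3): gamma : [0, 1] -> C, vertical segment from 1+0i to 1+i
def vertical(t):
    return 1 + 1j * t
```

Evaluating each curve at the endpoints of its parameter interval recovers the initial and terminal points stated above, e.g. $\gamma(0) = \gamma(2\pi) = 1 + 0i$ for the circle.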
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Positive periodic solutions for nonlinear first-order delayed differential equations at resonance. Boundary Value Problems, volume 2018, Article number: 187 (2018)
Abstract
This paper studies the existence of positive periodic solutions of the following delayed differential equation:

$$u'(t) = -a(t)u(t) + f\bigl(t, u(t-\tau(t))\bigr),$$
where \(a, \tau \in C(\mathbb{R},\mathbb{R})\) are
ω-periodic functions with \(\int_{0}^{\omega }a(t)\,dt=0\), \(f:\mathbb{R}\times [0, \infty)\to \mathbb{R}\) is continuous and ω-periodic with respect to t. By means of the fixed point theorem in cones, several new existence theorems are established. Our main results enrich and complement those available in the literature.

Introduction
In the past few years, there has been considerable interest in the existence of positive periodic solutions for the first-order equation
where \(a, b\in C(\mathbb{R},[0,\infty))\) are
ω-periodic functions with
and
τ is a continuous ω-periodic function. Note that when \(\lambda =0\), equation (1.1) reduces to \(u'=-a(t)u\), which is well known in Malthusian population models. In real world applications, (1.1) has also been viewed as a model for a variety of physiological processes and conditions, including the production of blood cells, respiration, and cardiac arrhythmias. See, for instance, [1,2,3,4,5,6,7,8,9,10] for some research works on this topic. Meanwhile, many researchers have also paid attention to the differential systems corresponding to (1.1), namely,
where \(a_{i}, b_{i}\in C(\mathbb{R},[0,\infty))\) are
ω-periodic functions satisfying
Obviously, the basic assumption \(\int_{0}^{\omega }a(t)\,dt>0\) or \(\int_{0}^{\omega }a_{i}(t)\,dt>0\) (\(i=1,2,\dots,n\)), usually employed to ensure the linear boundary value problem
is non-resonant, has played a key role in the arguments of the above mentioned papers. In fact, this assumption ensures that a number of tools, such as fixed point theory, bifurcation theory and so on, can be employed to study the corresponding problems and establish the desired existence results. Here the linear problem (1.2) is called non-resonant if its unique solution is the trivial one. It is well known that if (1.2) is non-resonant then, provided
h is an \(L^{1}\)-function, the Fredholm’s alternative theorem implies that the nonhomogeneous problem
always admits a unique solution which, moreover, can be written as
Compared with non-resonant problems, research on resonant problems has proceeded very slowly and the related results are few. A natural and interesting question is whether or not the corresponding nonlinear equation possesses a positive periodic solution, provided that
which means
a may change its sign on \(\mathbb{R}\) and the studied problem is resonant. In the present paper, we shall give a positive answer to the above question. More concretely, several new existence and multiplicity results will be established for the resonant equation

$$u'(t) = -a(t)u(t) + f\bigl(t, u(t-\tau(t))\bigr). \tag{1.3}$$
To the best of our knowledge, the above problem has not been studied so far, and our results shall fill this gap.
For simplicity, we say a function \(q\gg 0\) provided that \(q: \mathbb{R}\to (0,\infty)\) is
ω-periodic and continuous. If \(q:\mathbb{R}\to [0,\infty)\) is ω-periodic and continuous with \(\int_{0}^{\omega }q(t)\,dt>0\), then we write \(q\succ 0\). Thus, if we choose a function \(\chi \gg 0\) such that \(p:=a+\chi \succ 0\), then the linear differential operator \(Lu:=u'+p(t)u\) is invertible, since \(\int_{0}^{\omega }p(t)\,dt=\int_{0}^{\omega }\chi (t)\,dt>0\).
Moreover, it is not difficult to see that
u is a positive periodic solution of (1.3) if and only if it is a positive periodic solution of
Therefore, we shall focus on (1.4).
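As a quick sanity check of the non-resonant mechanism (a toy example, not part of the argument): for constant \(p>0\) and \(h(t)=\cos t\) with \(\omega = 2\pi\), the equation \(u'+pu=h\) has the unique \(2\pi\)-periodic solution \(u(t)=(p\cos t+\sin t)/(p^{2}+1)\), which the sketch below verifies numerically.

```python
import math

def u(t, p):
    # Claimed unique 2*pi-periodic solution of u' + p*u = cos(t), for p > 0
    return (p * math.cos(t) + math.sin(t)) / (p * p + 1)

def residual(t, p, h=1e-6):
    # u'(t) + p*u(t) - cos(t), with u' approximated by a central difference;
    # should be ~0 at every t if the formula is correct
    du = (u(t + h, p) - u(t - h, p)) / (2 * h)
    return du + p * u(t, p) - math.cos(t)
```

Both the ODE residual and the periodicity \(u(0)=u(2\pi)\) check out for several values of \(p\), illustrating that invertibility of \(L\) produces exactly one periodic response to a periodic forcing.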
Throughout the paper, we make the following assumptions:
(H1)
\(a, \tau \in C(\mathbb{R},\mathbb{R})\) are
ω-periodic functions with \(\int_{0}^{\omega }a(t)\,dt=0\); (H2)
There exists \(\chi \gg 0\) such that \(p:=a+\chi \succ 0\);
(H3)
\(f\in C(\mathbb{R}\times [0,\infty),\mathbb{R})\) is
ω-periodic with respect to t and \(f(t,u)\geq -\chi (t)u\).

Remark 1.1
Obviously, since
a and f are sign-changing, equation (1.3) is more general than corresponding ones studied in the existing literature. For other existence results on nonlinear differential equations at resonance, we refer the readers to [14,15,16,17] and the references listed therein.
The rest of the paper is arranged as follows. In Sect. 2, we introduce some preliminaries. Finally, in Sect. 3, we shall state and prove our main results. In addition, several remarks will be given to demonstrate the feasibility of our main results.
Preliminaries
Let us consider the linear boundary value problem
where
p is defined as in (H2). If we denote by \(K(t,s)\) the Green's function of (2.1), then a simple calculation gives the following.

Lemma 2.1 Suppose (H1) and (H2) hold. Let \(\delta =e^{- \int_{0}^{\omega }p(t)\,dt}\). Then

Moreover,
Let
E be the Banach space of continuous ω-periodic functions equipped with the norm \(\|u\|=\max_{t\in [0,\omega ]}|u(t)|\). For \(h\in E\), define
Then we have
Lemma 2.2 Suppose (H1) and (H2) hold. Then \(A:E\to E\) is a completely continuous linear operator. Moreover, if \(h\succ 0\), then \((Ah)(t)>0\) on \([0,\omega ]\).

Proof:
By a standard argument, we can easily show that
A is a linear completely continuous operator. In addition, Lemma 2.1 yields \(K(t,s)>0\) for any \((t,s)\), which, together with \(h\succ 0\), implies \((Ah)(t)>0\) on \([0,\omega ]\). □
Let
and
Then \(\mathcal{P}\) is a positive cone in
E. Moreover, it is not difficult to check that (1.4) can be rewritten as the equivalent operator equation \(u=Tu\), where \((Tu)(t):=\int_{0}^{\omega }K(t,s)\bigl[\chi (s)u(s)+f\bigl(s,u(s-\tau (s))\bigr)\bigr]\,ds\).

Lemma 2.3 Suppose (H1), (H2) and (H3) hold. Then \(T(\mathcal{P})\subseteq \mathcal{P}\) and \(T:\mathcal{P}\to \mathcal{P}\) is completely continuous.

Proof:
Assume \(u\in \mathcal{P}\), then \(u(t)\geq \sigma \|u\|\). It follows from (H3) that \(\chi (s)u(s)+f(s,u(s-\tau (s)))\) is nonnegative, and therefore
Applying (H3) again, we get
This, together with (2.3), yields \(T(\mathcal{P})\subseteq \mathcal{P}\). Finally, by Lemma 2.2 and an argument similar to that of [12, Lemmas 2.2, 2.3] with obvious changes, we can prove that
T is a completely continuous operator. □
The following lemma is crucial to prove our main results.
Lemma 2.4
(Guo–Krasnoselskii’s fixed point theorem [18])
Let E be a Banach space, and let \(\mathcal{P}\subseteq E\) be a cone. Assume \(\varOmega_{1}\), \(\varOmega_{2}\) are two open bounded subsets of E with \(0\in \varOmega_{1}\), \(\bar{\varOmega }_{1}\subseteq \varOmega_{2}\), and let \(T:\mathcal{P}\cap (\bar{\varOmega }_{2}\setminus \varOmega_{1})\to \mathcal{P}\) be a completely continuous operator such that either

(i) \(\|Tu\|\leq \|u\|\) for \(u\in \mathcal{P}\cap \partial \varOmega _{1}\), and \(\|Tu\|\geq \|u\|\) for \(u\in \mathcal{P}\cap \partial \varOmega _{2}\); or

(ii) \(\|Tu\|\geq \|u\|\) for \(u\in \mathcal{P}\cap \partial \varOmega _{1}\), and \(\|Tu\|\leq \|u\|\) for \(u\in \mathcal{P}\cap \partial \varOmega _{2}\).

Then T has a fixed point in \(\mathcal{P}\cap (\bar{\varOmega }_{2}\setminus \varOmega_{1})\).

Main results
In this section, we state and prove our main findings.
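The proofs below lean repeatedly on Lemma 2.4. In one dimension (\(E=\mathbb{R}\), \(\mathcal{P}=[0,\infty)\), norm = absolute value) the lemma reduces to the intermediate value theorem; the following toy sketch (my own illustration, not from the paper) checks the boundary inequalities of condition (ii) for a concrete cone-preserving map and locates its guaranteed fixed point by iteration.

```python
import math

def T(x):
    # A cone-preserving map on [0, infinity); its fixed point is x = 2,
    # since sqrt(2 + 2) = 2
    return math.sqrt(x + 2)

r, R = 0.5, 10.0
assert T(r) >= r   # expansion on the inner boundary (partial Omega_1)
assert T(R) <= R   # compression on the outer boundary (partial Omega_2)

# Iterate; Lemma 2.4(ii) guarantees a fixed point in [r, R], here x = 2.
x = 1.0
for _ in range(60):
    x = T(x)
```

The iteration converges quickly because \(|T'(2)| = 1/4 < 1\); the lemma itself, of course, gives existence without any contractivity assumption.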
Theorem 3.1 Let (H1)–(H3) hold. If (3.1) and (3.2) hold, then (1.3) admits at least one positive ω-periodic solution.

Proof:
For \(0< r< R<\infty \), setting
we have \(0\in \varOmega_{1}\), \(\bar{\varOmega }_{1}\subseteq \varOmega_{2}\).
It follows from (3.1) that there exists \(r>0\) so that for any \(0< u\leq r\),
where
c is a positive constant satisfying \(cM\omega <1\). Therefore, for \(u\in \mathcal{P}\) with \(\|u\|=r\),
Moreover, \(0<\sigma \|u\|\leq u(t)\leq \|u\|=r\). Thus,
which implies \(\|Tu\|\leq \|u\|\), \(\forall u\in \mathcal{P}\cap \partial \varOmega_{1}\).
On the other hand, (3.2) yields the existence of \(\tilde{R}>0\) such that for any \(u\geq \tilde{R}\),
where \(\eta >0\) is a constant large enough such that \(\sigma m\omega (\eta +\underline{\chi })>1\) and \(\underline{\chi }=\min_{t\in [0,\omega ]}\chi (t)\). Fixing \(R>\max \{r, \frac{ \tilde{R}}{\sigma }\}\) and letting \(u\in \mathcal{P}\) with \(\|u\|=R\), we get \(u(t)\geq \sigma \|u\|=\sigma R>\tilde{R}\), and therefore
Consequently, for \(u\in \mathcal{P}\) with \(\|u\|=R\), we can conclude
Hence \(\|Tu\|\geq \|u\|\), \(\forall u\in \mathcal{P}\cap \partial \varOmega _{2}\). By Lemma 2.4(i), T has a fixed point in \(\mathcal{P}\cap (\bar{\varOmega }_{2}\setminus \varOmega_{1})\), which is a positive ω-periodic solution of (1.3). □
Theorem 3.2 Let (H1)–(H3) hold. If (3.3) and (3.4) hold, then (1.3) admits at least one positive ω-periodic solution.

Proof:
We follow the same strategy and notations as in the proof of Theorem 3.1. Firstly, we show that for \(r>0\) sufficiently small,
From (3.3) it follows that there exists \(\tilde{r}>0\) such that \(f(t,u)\geq \beta u\) for any \(0< u\leq \tilde{r}\), where \(\beta >0\) is a constant large enough such that \(\sigma m\omega (\beta +\underline{ \chi })>1\). Therefore, for \(0< r\leq \tilde{r}\), if \(u\in \mathcal{P}\) and \(\|u\|=r\), then
Furthermore, we obtain
Thus, (3.5) is true.
Next we show for \(R>0\) sufficiently large,
It follows from (3.4) that there exists \(\tilde{R}>0\) so that for any \(u\geq \tilde{R}\),
where \(\mu >0\) satisfies \(\mu M\omega <1\). Let \(R>\max \{\tilde{r}, \frac{ \tilde{R}}{\sigma }\}\), then if \(u\in \mathcal{P}\) and \(\|u\|=R\), we can obtain
and therefore,
Thus for \(u\in \mathcal{P}\) with \(\|u\|=R\), we have
which means that (3.6) is also true. Lemma 2.4(ii) then yields a fixed point of T in \(\mathcal{P}\cap (\bar{\varOmega }_{2}\setminus \varOmega_{1})\), i.e., a positive ω-periodic solution of (1.3). □
In the rest of this section, we shall investigate the multiplicity of positive
ω-periodic solutions of (1.3). To this end, we assume: (H4)
\(\lim_{u\to 0+}\frac{f(t,u)}{u}=\infty \) and \(\lim_{u\to +\infty }\frac{f(t,u)}{u}=\infty \) uniformly for \(t\in [0,\omega ]\). In addition, there is a constant \(\alpha >0\) such that$$ \max \bigl\{ f(t,u): \sigma \alpha \leq u\leq \alpha, t\in [0,\omega ]\bigr\} \leq \bigl(\epsilon -\chi (t)\bigr)\alpha, $$(3.7)
where \(\epsilon >0\) satisfies \(\epsilon M\omega <1\).
Theorem 3.3 Assume that (H1)–(H4) hold. Then (1.3) admits at least two positive ω-periodic solutions.

Proof:
Define
Let \(\varOmega_{1}\) and \(\varOmega_{2}\) be as in the proof of Theorems 3.1 and 3.2. Then for \(0< r<\alpha <R\), we have \(\bar{\varOmega }_{1}\subseteq \varOmega_{3}\), \(\bar{\varOmega }_{3}\subseteq \varOmega_{2}\).
Since \(\lim_{u\to 0+}\frac{f(t,u)}{u}=\infty \) uniformly for \(t\in [0,\omega ]\), by an argument similar to the proof of Theorem 3.2, we can obtain
Similarly, we can show by (H4) that
Clearly, the proof is completed if we prove
Suppose \(u\in \mathcal{P}\) and \(\|u\|=\alpha \), then \(\sigma \alpha \leq \sigma \|u\|\leq u(t)\leq \|u\|=\alpha\), and from (3.7) it follows
Thus, we get
and so (3.8) is satisfied.
Consequently, Lemma 2.4 implies that
T has at least two fixed points \(u_{1}\) and \(u_{2}\), located in \(\mathcal{P}\cap (\bar{\varOmega }_{3} \setminus \varOmega_{1})\) and \(\mathcal{P}\cap (\bar{\varOmega }_{2}\setminus \varOmega_{3})\), respectively. And accordingly, (1.3) admits at least two positive ω-periodic solutions. □
If we replace (H4) with
(H4)′:
\(\lim_{u\to 0+}\frac{f(t,u)}{u}=-\chi (t)\), \(\lim_{u\to +\infty }\frac{f(t,u)}{u}=-\chi (t)\), and there exists a constant \(\alpha >0\) such that$$ \min \bigl\{ f(t,u): \sigma \alpha \leq u\leq \alpha, t\in [0,\omega ]\bigr\} \geq \bigl(\mu -\sigma \chi (t)\bigr)\alpha, $$(3.9)
then we can obtain the following:
Theorem 3.4 Let (H1)–(H3) and (H4)′ hold. Then (1.3) admits at least two positive ω-periodic solutions.

Proof:
For \(0< r<\alpha <R\), let \(\varOmega_{i}\) (\(i=1,2,3\)) be as in the proof of Theorems 3.1 and 3.3. Then \(\bar{\varOmega }_{1}\subseteq \varOmega_{3}\), \(\bar{\varOmega }_{3}\subseteq \varOmega_{2}\). We shall follow the same strategy as in the proof of Theorem 3.3.
Now, to apply Lemma 2.4, we only need to show
Let \(u\in \mathcal{P}\) with \(\|u\|=\alpha \), then \(\sigma \alpha \leq \sigma \|u\|\leq u(t)\leq \|u\|=\alpha\), by (3.9) we get
and then
and therefore (3.10) is true. Using Lemma 2.4 again, we know
T has two fixed points \(u_{1}\) and \(u_{2}\), located in \(\mathcal{P}\cap (\bar{ \varOmega }_{3}\setminus \varOmega_{1})\) and \(\mathcal{P}\cap (\bar{\varOmega }_{2}\setminus \varOmega_{3})\), respectively. Consequently, (1.3) admits at least two positive ω-periodic solutions. □

Remark 3.1

Remark 3.2

Conclusion
By applying the fixed point theorem in cones, some new existence theorems are established for a class of first-order delayed differential equations. Our main results enrich and complement those available in the literature.
References

1. Chow, S.N.: Existence of periodic solutions of autonomous functional differential equations. J. Differ. Equ. 15, 350–378 (1974)
2. Wazewska-Czyzewska, M., Lasota, A.: Mathematical problems of the dynamics of a system of red blood cells. Mat. Stosow. 6, 23–40 (1976) (in Polish)
3. Gurney, W.S., Blythe, S.P., Nisbet, R.N.: Nicholson's blowflies revisited. Nature 287, 17–21 (1980)
4. Freedman, H.I., Wu, J.: Periodic solutions of single-species models with periodic delay. SIAM J. Math. Anal. 23, 689–701 (1992)
5. Kuang, Y.: Delay Differential Equations with Applications in Population Dynamics. Academic Press, New York (1993)
6. Mackey, M.C., Glass, L.: Oscillations and chaos in physiological control systems. Science 197, 287–289 (1997)
7. Jin, Z.L., Wang, H.Y.: A note on positive periodic solutions of delayed differential equations. Appl. Math. Lett. 23(5), 581–584 (2010)
8. Graef, J., Kong, L.J.: Existence of multiple periodic solutions for first order functional differential equations. Math. Comput. Model. 54, 2962–2968 (2011)
9. Ma, R.Y., Chen, R.P., Chen, T.L.: Existence of positive periodic solutions of nonlinear first-order delayed differential equations. J. Math. Anal. Appl. 384, 527–535 (2011)
10. Ma, R.Y., Lu, Y.Q.: One-signed periodic solutions of first-order functional differential equations with a parameter. Abstr. Appl. Anal. 2011, 1 (2011)
11. Wang, H.Y.: Positive periodic solutions of functional differential systems. J. Differ. Equ. 202, 354–366 (2004)
12. Wang, H.Y.: Positive periodic solutions of singular systems of first order ordinary differential equations. Appl. Math. Comput. 218, 1605–1610 (2011)
13. Chen, R.P., Ma, R.Y., He, Z.Q.: Positive periodic solutions of first-order singular systems. Appl. Math. Comput. 218, 11421–11428 (2012)
14. Han, X.L.: Positive solutions for a three-point boundary value problem at resonance. J. Math. Anal. Appl. 336, 556–568 (2007)
15. Ma, R.Y.: Existence results of an m-point boundary value problem at resonance. J. Math. Anal. Appl. 294, 147–157 (2004)
16. Gao, C.H., Ma, R.Y., Zhang, F.: Spectrum of discrete left definite Sturm–Liouville problems with eigenparameter-dependent boundary conditions. Linear Multilinear Algebra 65, 1–9 (2017)
17. Gao, C.H., Li, X.L., Ma, R.Y.: Eigenvalues of a linear fourth-order differential operator with squared spectral parameter in a boundary condition. Mediterr. J. Math. 15, Article ID 107 (2018)
18. Guo, D.J., Lakshmikantham, V.: Nonlinear Problems in Abstract Cones. Academic Press, New York (1988)
Acknowledgements
The authors are very grateful to the referees for their valuable suggestions. The authors also thank the editor for the help provided.
Availability of data and materials
Not applicable.
Funding
The first author is supported by the National Natural Science Foundation of China (No. 61761002, No. 11761004), the Scientific Research Funds of North Minzu University (No. 2018XYZSX03) and the Key Project of North Minzu University (No. ZDZX201804).
Ethics declarations Competing interests
The authors declare that they have no competing interests.
Additional information Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
How is the closed form solution to linear regression derived using matrix derivatives, as opposed to the trace method that Andrew Ng uses in his machine learning lectures? Specifically, I am trying to understand how Nando de Freitas does it here.
We want to find the value of $ \theta $ that minimizes $ J(\theta)=(X\theta-Y)^{T}(X\theta-Y) $, where $\theta \in \mathbb{R}^{N \times 1}, X \in \mathbb{R}^{M \times N}$, and $Y \in \mathbb{R}^{M \times 1}$
$\nabla_{\theta}J(\theta) = \nabla_{\theta} (X\theta-Y)^{T}(X\theta-Y)$
$ = \nabla_{\theta} (\theta^{T} X^{T}-Y^{T})(X\theta-Y)$
$ = \nabla_{\theta} (\theta^{T} X^{T}X\theta-\theta^{T} X^{T}Y - Y^{T}X\theta + Y^{T}Y) $
Note that $\theta^{T} X^{T}Y$ is a scalar, so $\theta^{T} X^{T}Y = (\theta^{T} X^{T}Y)^{T} = Y^{T} X \theta$
$\nabla_{\theta}J(\theta) = \nabla_{\theta}(\theta^{T} X^{T}X\theta-Y^{T} X \theta - Y^{T}X\theta + Y^{T}Y)$
$ = \nabla_{\theta}(\theta^{T} X^{T}X\theta- 2 Y^{T} X \theta + Y^{T}Y)$
$ = \nabla_{\theta} \theta^{T} X^{T}X\theta - \nabla_{\theta} 2 Y^{T} X \theta + \nabla_{\theta} Y^{T}Y$
$ = \nabla_{\theta} \theta^{T} X^{T}X\theta - \nabla_{\theta} 2 Y^{T} X \theta $
How do I apply the matrix derivatives described in that video to solve this? He skips steps.
Edit: Below is the suggested strategy of removing theta by differentiating, then taking the inverse of both sides. So looking at one term at a time, we have
$ \nabla_{\theta} \theta^{T} X^{T}X\theta = ? $ How do I differentiate this? This is like differentiating $x\alpha_{1} \alpha_{2} x$ w.r.t. x in the scalar case. I need to combine those $\theta$ terms to hit them with the derivative. Transposing seems to result in the same expression: $$ (\nabla_{\theta} \theta^{T} X^{T}X\theta)^{T} = \nabla_{\theta} \theta^{T} X^{T}X\theta$$
Looking at the second term, we have
$ \nabla_{\theta} 2 Y^{T} X \theta = 2 X^{T} Y$.
Putting this together and setting the gradient to zero, we have: $$\nabla_{\theta} \theta^{T} X^{T}X\theta = 2 X^{T} Y$$
Knowing the solution is $\theta = (X^{T}X)^{-1}X^{T}Y$ we can reverse engineer the problem, but I am just not seeing it. And how do we get rid of that 2 factor? |
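For what it's worth, the closed form can at least be sanity-checked numerically. The sketch below (pure Python, with made-up toy data) computes $\theta = (X^{T}X)^{-1}X^{T}Y$ for a two-parameter model and confirms that the gradient $2X^{T}X\theta - 2X^{T}Y$ vanishes there.

```python
def matmul(A, B):
    # Matrix product of A (m x k) and B (k x n) as nested lists
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(A):
    # Inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Toy data: y = 1 + x exactly, so theta should come out as [1, 1]
X = [[1, 0], [1, 1], [1, 2]]
Y = [[1], [2], [3]]

Xt = transpose(X)
theta = matmul(inv2(matmul(Xt, X)), matmul(Xt, Y))

# Gradient of J at theta: 2 X^T X theta - 2 X^T Y (should be the zero vector)
g = matmul(matmul(Xt, X), theta)
grad = [[2 * g[i][0] - 2 * matmul(Xt, Y)[i][0]] for i in range(2)]
```

This also shows why the factor of 2 is harmless: it multiplies both terms of the gradient, so it cancels when the gradient is set to zero.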
A quadratic form is a polynomial $p(x_1,\dots,x_n)$ of the form $$ p(x_1,\dots,x_n)=\sum_{i \leq j}a_{ij}x_ix_j. $$For example, $p_1(x,y,z,w)=x^2+y^2+z^2+w^2$ and $3x^2-5y^2$ are quadratic forms. I'm not sure this is relevant here, but for any such $p$ there is a symmetric matrix $A$ such that $p(\vec x)= (\vec x)^T A(\vec x)$, where we think of $\vec x$ as a column vector.
$p$ is
positive definite iff $p(\vec x)>0$ whenever $\vec x\ne 0$.
The form is
integer valued iff $p(\vec a)\in{\mathbb Z}$ whenever $\vec a\in {\mathbb Z}^n$.
For short, let's say "form" instead of "positive definite integer valued quadratic form".A form is
universal iff it represents every positive integer, i.e., for all $m>0$ there is $\vec a\in{\mathbb Z}^n$ with $p(\vec a)=m$. For example, $p_1$ above is universal.
I read that no form in 3 or fewer variables is universal, and I was hoping for a proof (or at least a reference if the proof is too long/convoluted for it to be written here). I was also wondering whether the requirement of positive-definiteness is relevant here, and whether this is only for forms $p$ where the matrix $A$ above has integer entries. |
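A quick brute-force experiment (my own, using only the definition of "represents") is consistent with both claims for small integers: the four-square form $p_1$ represents every $m$ up to 200, while the three-variable analogue $x^2+y^2+z^2$ already misses $m=7$.

```python
def represented(m, nvars):
    # Does x_1^2 + ... + x_nvars^2 = m have a solution in nonnegative
    # integers? (Signs don't matter for squares.)
    def rec(m, k):
        if k == 0:
            return m == 0
        x = 0
        while x * x <= m:
            if rec(m - x * x, k - 1):
                return True
            x += 1
        return False
    return rec(m, nvars)

four_ok = all(represented(m, 4) for m in range(1, 201))
three_missed = [m for m in range(1, 50) if not represented(m, 3)]
```

The list of misses for three squares starts 7, 15, 23, 28, ... — the numbers of the form $4^a(8b+7)$, matching the classical three-square theorem; of course this is only evidence, not the proof I'm asking for.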
I'm working through a problem set, and am stuck on the following problem:
a) What can go wrong in Shor’s algorithm if Q (the dimension of the Quantum Fourier Transform) is not taken to be sufficiently large? Illustrate with an example.
b) What can go wrong if the function, f, satisfies $f(p) = f(q)$ if the period $s$ divides $p − q$, but it’s not an “if and only if” (i.e., we could have $f(p) = f(q)$ even when $s$ doesn’t divide $p − q$)? Note that this does not actually happen for the function in Shor’s algorithm, but it could happen when attempting period finding on an arbitrary function. Illustrate with an example.
c) What can go wrong in Shor’s algorithm if the integer $N$ to be factored is even (that is, one of the prime factors, $p$ and $q$, is equal to 2)? Illustrate with an example.
d) Prove that there can be at most one rational $\frac{a}{b}$, with $a$ and $b$ positive integers, that’s at most $\epsilon$ away from a real number $x$ and that satisfies $b < \frac{1}{\sqrt{2\epsilon}}$. Explain how this relates to the choice, in Shor’s algorithm, to choose Q to be quadratically larger than the integer $N$ that we’re trying to factor.
I've been wrestling with it for a while, and my attempt so far is:
a) When $s$ (the period) does not divide $Q$, a sufficiently large $Q$ ensures that the rational approximation to $\frac{k Q}{s}$ where $k$ is an integer is sufficiently close to determine a unique $s$.
b) There might be more than one period $s$ associated with the function (something like an acoustic beat), so it would be much more difficult to solve for one period individually.
c) Completely lost....
d) I supposed that there existed two different rationals such that $\mid{\frac{a_1}{b_1} - \frac{a_2}{b_2}}\mid> 0$ and tried to force a contradiction using the constraints $\mid\frac{a_\mu}{b_\mu}-x\mid \leq \epsilon$ and $b_\mu < \frac{1}{\sqrt{2\epsilon}}$, but couldn't get it to come out. Either I am making a stupid mistake and/or missing something simple (?).
I am really struggling to gain intuition for Shor's algorithm and its specific steps, so I'm very unconfident when trying to address parts (a) - (c). I'm stumped by (c) especially; isn't Shor's algorithm robust in the sense that it does not matter if $N$ is even or odd? If anyone could point me in the right direction, it would be appreciated. Thanks! |
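For part (d), here is a small numerical experiment that helped me build intuition (the values of $Q$, $s$, $k$ are made up): after "measuring" $m \approx kQ/s$, enumerate every fraction $a/b$ with $b$ below the $1/\sqrt{2\epsilon}$ bound and count how many lie within $\epsilon$ of $x = m/Q$ — the claim is that at most one does.

```python
import math

Q, s, k = 2048, 10, 3           # made-up period-finding instance
m = round(k * Q / s)            # "measured" value, within 1/2 of k*Q/s
x = m / Q
eps = 1 / (2 * Q)               # so |x - k/s| <= eps
B = 1 / math.sqrt(2 * eps)      # denominator bound from part (d)

close = set()                   # distinct rationals a/b within eps of x
for b in range(1, int(B) + 1):
    for a in range(0, b + 1):
        if abs(a / b - x) <= eps:
            g = math.gcd(a, b)
            close.add((a // g, b // g))
```

In this instance the only surviving fraction is $3/10 = k/s$, which is exactly why taking $Q$ quadratically larger than $N$ (so that $\epsilon \approx 1/2Q$ while denominators stay below $N < 1/\sqrt{2\epsilon}$) lets the continued-fraction step recover $s$ unambiguously.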
In this MO post, I ran into the following family of polynomials: $$f_n(x)=\sum_{m=0}^{n}\prod_{k=0}^{m-1}\frac{x^n-x^k}{x^m-x^k}.$$ In the context of the post, $x$ was a prime number, and $f_n(x)$ counted the number of subspaces of an $n$-dimensional vector space over $GF(x)$ (which I was using to determine the number of subgroups of an elementary abelian group $E_{x^n}$).
Anyway, while I was investigating asymptotic behavior of $f_n(x)$ in Mathematica, I got sidetracked and (just for fun) looked at the set of complex roots when I set $f_n(x)=0$. For $n=24$, the plot looked like this: (The real and imaginary axes are from $-1$ to $1$.)
Surprised by the unusual symmetry of the solutions, I made the same plot for a few more values of $n$. Note the clearly defined "tails" (on the left when even, top and bottom when odd) and "cusps" (both sides).
You can see that after $n=60$-ish, the "circle" of solutions started to expand into a band of solutions with a defined outline. To fully absorb the weirdness of this, I animated the solutions from $n=2$ to $n=112$. The following is the result.
Pretty weird right!? Anyhow, here are my questions:
First, has anybody ever seen anything at all like this before? What's up with those "tails?" They seem to occur only on even $n$, and they are surely distinguishable from the rest of the solutions. Look how the "enclosed" solutions rotate as $n$ increases. Why does this happen? [Explained in edits.] Anybody have any idea what happens to the solution set as $n\rightarrow \infty$?Thanks to @WillSawin, we now know that all the roots are contained in an annulus that converges to the unit circle, which is fantastic. So, the final step in understanding the limit of the solution sets is figuring out what happens onthe unit circle. We can see from the animation that there are many gaps, particularly around certain roots of unity; however, they do appear to be closing. The natural question is, which points on the unit circle "are roots in the limit"? In other words, what are the accumulation points of $\{z\left|z\right|^{-1}:z\in\mathbb{C}\text{ and }f_n(z)=0\}$? Is the set of accumulation points dense? @NoahSnyder's heuristic of considering these as a random family of polynomials suggests it should be- at least, almost surely. These are polynomials in $\mathbb{Z}[x]$. Can anybody think of a way to rewrite the formula (perhaps recursively?) for the simplified polynomial, with no denominator? If so, we could use the new formula to prove the series converges to a function on the unit disc, as well as cut computation time in half. [See edits for progress.] Does anybody know a numerical method specifically for finding roots of high degree polynomials? Or any other way to efficiently compute solution sets for high $n$? [Thanks @Hooked!]
Thanks everyone. This may not turn out to be particularly mathematically profound, but it sure is
neat.

EDIT: Thanks to suggestions in the comments, I cranked up the working precision to maximum and recalculated the animation. As Hurkyl and mercio suspected, the rotation was indeed a software artifact, and in fact evidently so was the thickening of the solution set. The new animation looks like this:
So, that solves one mystery: the rotation and inflation were caused by tiny roundoff errors in the computation. With the image clearer, however, I see the behavior of the cusps more clearly. Is there an explanation for the gradual accumulation of "cusps" around the roots of unity? (Especially 1.)
EDIT: Here is an animation $Arg(f_n)$ up to $n=30$. I think we can see from this that $f_n$ should converge to some function on the unit disk as $n\rightarrow \infty$. I'd love to include higher $n$, but this was already rather computationally exhausting.
Now, I've been tinkering and I may be onto something with respect to point $5$ (i.e. seeking a better formula for $f_n(x)$). The following claims aren't proven yet, but I've checked each up to $n=100$, and they seem inductively consistent. Here denote $\displaystyle f_n(x)=\sum_{m}a_{n,m}x^m$, so that $a_{n,m}\in \mathbb{Z}$ are the coefficients in the simplified expansion of $f_n(x)$.
First, I found $\text{deg}(f_n)=\text{deg}(f_{n-1})+\lfloor \frac{n}{2} \rfloor$. The solution to this recurrence relation is $$\text{deg}(f_n)=\frac{1}{2}\left({\left\lceil\frac{1-n}{2}\right\rceil}^2 -\left\lceil\frac{1-n}{2}\right\rceil+{\left\lfloor \frac{n}{2} \right\rfloor}^2 + \left\lfloor \frac{n}{2} \right\rfloor\right)=\left\lfloor\frac{n^2}{4}\right\rfloor.$$
If $f_n(x)$ has $r$ more coefficients than $f_{n-1}(x)$, the leading $r$ coefficients are the same as the leading $r$ coefficients of $f_{n-2}(x)$, pairwise.
When $n>m$, $a_{n,m}=a_{n-1,m}+\rho(m)$, where $\rho(m)$ is the number of integer partitions of $m$. (This comes from observation, but I bet an actual proof could follow from some of the formulas here.) For $n\leq m$ the $\rho(m)$ formula first fails at $n=m=6$, and not before for some reason. There is probably a simple correction term I'm not seeing - and whatever that term is, I bet it's what's causing those cusps.
Anyhow, with this, we can almost make a recursive relation for $a_{n,m}$,$$a_{n,m}= \left\{ \begin{array}{ll} a_{n-2,m+\left\lceil\frac{n-2}{2}\right\rceil^2-\left\lceil\frac{n}{2}\right\rceil^2} & : \text{deg}(f_{n-1}) < m \leq \text{deg}(f_n)\\ a_{n-1,m}+\rho(m) & : m \leq \text{deg}(f_{n-1}) \text{ and } n > m \\ ? & : m \leq \text{deg}(f_{n-1}) \text{ and } n \leq m \end{array} \right.$$but I can't figure out the last part yet.

EDIT: Someone pointed out to me that if we write $\lim_{n\rightarrow\infty}f_n(x)=\sum_{m=0}^\infty b_{m} x^m$, then it appears that $f_n(x)=\sum_{m=0}^n b_m x^m + O(x^{n+1})$. The $b_m$ there seem to me to be relatively well approximated by the $\rho(m)$ formula, considering the correction term only applies for a finite number of recursions.
So, if we have the coefficients up to an order of $O(x^{n+1})$, we can at least prove the polynomials converge on the open unit disk, which the $Arg$ animation suggests is true. (To be precise, it looks like $f_{2n}$ and $f_{2n+1}$ may have different limit functions, but I suspect the coefficients of both sequences will come from the same recursive formula.) With this in mind, I put a bounty up for the correction term, since from that all the behavior will probably be explained.
EDIT: The limit function proposed by Gottfriend and Aleks has the formal expression $$\lim_{n\rightarrow \infty}f_n(x)=1+\prod_{m=1}^\infty \frac{1}{1-x^m}.$$I made an $Arg$ plot of $1+\prod_{m=1}^r \frac{1}{1-x^m}$ for up to $r=24$ to see if I could figure out what that ought to ultimately end up looking like, and came up with this:
Purely based off the plots, it seems not entirely unlikely that $f_n(x)$ is going to the same place this is, at least inside the unit disc. Now the question is, how do we determine the solution set at the limit? I speculate that the unit circle may become a dense combination of zeroes and singularities, with fractal-like concentric "circles of singularity" around the roots of unity... :) |
Path Connectivity of Countable Unions of Connected Sets
Suppose that we have a countable collection $\{ A_i \}_{i=1}^{\infty}$ of path connected sets. Then $\displaystyle{\bigcup_{i=1}^{\infty} A_i}$ need not be path connected, as the union itself may not be connected. However, if $A_i \cap A_{i+1} \neq \emptyset$ for all $i \in I$ then it is reasonable to suspect that $\displaystyle{\bigcup_{i=1}^{\infty} A_i}$ is path connected. To prove this we will first need to get some notation out of the way.
Definition: Let $\alpha, \beta : [0, 1] \to X$ be paths such that $\alpha(1) = \beta(0)$. Then a new path denoted $\alpha * \beta : I \to X$ can be defined for all $x \in [0, 1]$ by $\alpha * \beta(x) = \left\{\begin{matrix} \alpha (2x) & \mathrm{if} \: 0 \leq x \leq \frac{1}{2} \\ \beta(2x - 1) & \mathrm{if} \: \frac{1}{2} \leq x \leq 1 \end{matrix}\right.$
We now prove the result stated above.
Theorem 1: Let $X$ be a topological space and let $\{ A_i \}_{i = 1}^{\infty}$ be a countable collection of path connected subsets of $X$. If $A_i \cap A_{i+1} \neq \emptyset$ for all $i \in \{1, 2, … \}$ then $\displaystyle{\bigcup_{i=1}^{\infty} A_i}$ is also path connected. Proof: Let $\{ A_i \}_{i =1}^{\infty}$ be a countable collection of path connected subsets of $X$. For each $i \in \{ 1, 2, … \}$ take $a_i \in A_i \cap A_{i+1}$, which is possible since $A_i \cap A_{i+1} \neq \emptyset$ for all $i \in \{ 1, 2, … \}$. Now since $A_{i+1}$ is path connected for each $i \in \{ 1, 2, … \}$ and $a_i, a_{i+1} \in A_{i+1}$, there exist paths $\alpha_i : I \to X$ such that $\alpha_i(0) = a_i$ and $\alpha_i(1) = a_{i+1}$. Now, let $\displaystyle{x, y \in \bigcup_{i=1}^{\infty} A_i}$ be distinct points. Then there exist $j, k \in \{ 1, 2, … \}$ such that $x \in A_j$ and $y \in A_k$. Assume without loss of generality that $j < k$. Then there exist paths $\beta_1, \beta_2 : I \to X$ such that $\beta_1(0) = x$ and $\beta_1(1) = a_j$, while $\beta_2(0) = a_k$ and $\beta_2(1) = y$. Consider the following path: $$\beta_1 * \alpha_j * \alpha_{j+1} * \cdots * \alpha_{k-1} * \beta_2$$ Then this is a path from $x$ to $y$. This shows that $\bigcup_{i=1}^{\infty} A_i$ is path connected. |
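The concatenation $\alpha * \beta$ from the definition translates directly into code; the sketch below (an illustration, with two line-segment paths of my own choosing) joins two paths and checks the endpoint and midpoint conditions — the reparameterization by $2x$ and $2x-1$ is exactly the definition above.

```python
def concat(alpha, beta):
    # alpha * beta: run alpha on [0, 1/2] and beta on [1/2, 1].
    # Well-defined when alpha(1) == beta(0).
    def path(x):
        return alpha(2 * x) if x <= 0.5 else beta(2 * x - 1)
    return path

# Two segments in C: first 0 -> 1, then 1 -> 1 + i
alpha = lambda t: complex(t, 0)
beta = lambda t: complex(1, t)
gamma = concat(alpha, beta)
```

At $x = 1/2$ both branches agree (both give $1$), which is why the matching condition $\alpha(1) = \beta(0)$ is needed for $\alpha * \beta$ to be continuous.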
For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence has several additional interpretations. First, it is often referred to as the
average sensitivity of $f$ because of the following proposition:

Proposition 27 For $f : \{-1,1\}^n \to \{-1,1\}$ \[ \mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})], \] where $\mathrm{sens}_f(x)$ is the sensitivity of $f$ at $x$, defined to be the number of pivotal coordinates for $f$ on input $x$.

Proof: \begin{multline*} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf Pr}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})] \\ = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}\right] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})]. \quad \Box \end{multline*}
The total influence of $f : \{-1,1\}^n \to \{-1,1\}$ is also closely related to the size of its edge boundary; from Fact 14 we deduce: Examples 29 (Recall Examples 15.) For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence ranges between $0$ and $n$. It is minimized by the constant functions $\pm 1$ which have total influence $0$. It is maximized by the parity function $\chi_{[n]}$ and its negation, which have total influence $n$; every coordinate is pivotal on every input for these functions. The dictator functions (and their negations) have total influence $1$. The total influence of $\mathrm{OR}_n$ and $\mathrm{AND}_n$ is very small: $n2^{1-n}$. On the other hand, the total influence of $\mathrm{Maj}_n$ is fairly large: roughly $\sqrt{2/\pi}\sqrt{n}$ for large $n$.
By virtue of Proposition 20 we have another interpretation for the total influence of monotone functions:
This sum of the degree-$1$ Fourier coefficients has a natural interpretation in social choice:
Proposition 31. Let $f : \{-1,1\}^n \to \{-1,1\}$ be a voting rule for a $2$-candidate election. Given votes ${\boldsymbol{x}} = ({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_n)$, let $\boldsymbol{w}$ be the number of votes which agree with the outcome of the election, $f({\boldsymbol{x}})$. Then \[ \mathop{\bf E}[\boldsymbol{w}] = \frac{n}{2} + \frac12 \sum_{i=1}^n \widehat{f}(i). \] Proof: By the formula for Fourier coefficients, \begin{equation} \label{eqn:deg-1-sum} \sum_{i=1}^n \widehat{f}(i) = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) {\boldsymbol{x}}_i] = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)]. \end{equation} Now ${\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n$ equals the difference between the number of votes for candidate $1$ and the number of votes for candidate $-1$. Hence $f({\boldsymbol{x}})({\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n)$ equals the difference between the number of votes for the winner and the number of votes for the loser; i.e., $\boldsymbol{w} - (n-\boldsymbol{w}) = 2\boldsymbol{w} - n$. The result follows. $\Box$
Rousseau [Rou62] suggested that the ideal voting rule is one which maximizes the number of votes which agree with the outcome. Here we show that the majority rule has this property (at least when $n$ is odd): Theorem 32. The unique maximizers of $\sum_{i=1}^n \widehat{f}(i)$ among all $f : \{-1,1\}^n \to \{-1,1\}$ are the majority functions. In particular, $\mathbf{I}[f] \leq \mathbf{I}[\mathrm{Maj}_n] = \sqrt{2/\pi}\sqrt{n} + O(n^{-1/2})$ for all monotone $f$. Proof: From \eqref{eqn:deg-1-sum}, \[ \sum_{i=1}^n \widehat{f}(i) = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)] \leq \mathop{\bf E}_{{\boldsymbol{x}}}[|{\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n|], \] since $f({\boldsymbol{x}}) \in \{-1,1\}$ always. Equality holds if and only if $f(x) = \mathrm{sgn}(x_1 + \cdots + x_n)$ whenever $x_1 + \cdots + x_n \neq 0$. The second statement of the theorem follows from Proposition 30 and Exercise 18 in this chapter. $\Box$
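The first statement can be verified exhaustively for $n = 3$ (a sketch of my own, not from the text): enumerating all $2^8$ truth tables on $\{-1,1\}^3$ shows that $\mathrm{Maj}_3$ is the unique maximizer of $\sum_i \widehat{f}(i)$, with maximum value $\mathop{\bf E}[|{\boldsymbol{x}}_1+{\boldsymbol{x}}_2+{\boldsymbol{x}}_3|] = 3/2$:

```python
from itertools import product

inputs = list(product((-1, 1), repeat=3))

def deg1_sum(fvals):
    # sum_i fhat(i) = E_x[f(x)(x_1 + x_2 + x_3)], as in the proof above
    return sum(f * sum(x) for x, f in zip(inputs, fvals)) / len(inputs)

all_fns = list(product((-1, 1), repeat=len(inputs)))   # all 256 truth tables
best = max(deg1_sum(fv) for fv in all_fns)
maximizers = [fv for fv in all_fns if deg1_sum(fv) == best]

maj3 = tuple(1 if sum(x) > 0 else -1 for x in inputs)
assert best == 1.5                     # = E[|x_1 + x_2 + x_3|]
assert maximizers == [maj3]            # majority is the unique maximizer
```

All the quantities involved are integer multiples of $1/8$, so the floating-point equality tests here are exact.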
Let’s now take a look at more analytic expressions for the total influence. By definition, if $f : \{-1,1\}^n \to {\mathbb R}$ then \begin{equation} \label{eqn:tinf-gradient} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{D}_i f({\boldsymbol{x}})^2] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \mathrm{D}_i f({\boldsymbol{x}})^2\right]. \end{equation} This motivates the following definition:
Definition 33. The (discrete) gradient operator $\nabla$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\nabla f : \{-1,1\}^n \to {\mathbb R}^n$ defined by \[ \nabla f(x) = (\mathrm{D}_1 f(x), \mathrm{D}_2 f(x), \dots, \mathrm{D}_n f(x)). \]
Note that for $f : \{-1,1\}^n \to \{-1,1\}$ we have $\|\nabla f(x)\|_2^2 = \mathrm{sens}_f(x)$, where $\| \cdot \|_2$ is the usual Euclidean norm in ${\mathbb R}^n$. In general, from \eqref{eqn:tinf-gradient} we deduce:
An alternative analytic definition involves introducing the Laplacian: Definition 35. The Laplacian operator $\mathrm{L}$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{L} = \sum_{i=1}^n \mathrm{L}_i$.
In the exercises you are asked to verify the following:
$\displaystyle \mathrm{L} f (x) = (n/2)\bigl(f(x) - \mathop{\mathrm{avg}}_{i \in [n]} \{f(x^{\oplus i})\}\bigr)$, $\displaystyle \mathrm{L} f (x) = f(x) \cdot \mathrm{sens}_f(x) \quad$ if $f : \{-1,1\}^n \to \{-1,1\}$, $\displaystyle \mathrm{L} f = \sum_{S \subseteq [n]} |S|\,\widehat{f}(S)\,\chi_S$, $\displaystyle \langle f, \mathrm{L} f \rangle = \mathbf{I}[f]$.
We can obtain a Fourier formula for the total influence of a function using Theorem 19; when we sum that theorem over all $i \in [n]$ the Fourier weight $\widehat{f}(S)^2$ is counted exactly $|S|$ times. Hence:
Theorem 37. For $f : \{-1,1\}^n \to {\mathbb R}$, \begin{equation} \label{eqn:total-influence-formula} \mathbf{I}[f] = \sum_{S \subseteq [n]} |S| \widehat{f}(S)^2 = \sum_{k=0}^n k \cdot \mathbf{W}^{k}[f]. \end{equation} For $f : \{-1,1\}^n \to \{-1,1\}$ we can express this using the spectral sample: \[ \mathbf{I}[f] = \mathop{\bf E}_{\boldsymbol{S} \sim \mathscr{S}_{f}}[|\boldsymbol{S}|]. \]
Thus the total influence of $f : \{-1,1\}^n \to \{-1,1\}$ also measures the average “height” or degree of its Fourier weights.
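As a numerical check of this Fourier formula (my own sketch, not from the text), one can compute $\mathbf{I}[\mathrm{Maj}_3]$ both as average sensitivity and from the Fourier side $\sum_S |S|\widehat{f}(S)^2$; both give $3/2$:

```python
from itertools import product, combinations
from math import prod

n = 3
inputs = list(product((-1, 1), repeat=n))
f = {x: 1 if sum(x) > 0 else -1 for x in inputs}   # Maj_3

def fhat(S):
    # fhat(S) = E_x[f(x) chi_S(x)], computed by brute force
    return sum(f[x] * prod(x[i] for i in S) for x in inputs) / 2**n

fourier_side = sum(len(S) * fhat(S)**2
                   for k in range(n + 1) for S in combinations(range(n), k))

avg_sens = sum(1 for x in inputs for i in range(n)
               if f[x] != f[x[:i] + (-x[i],) + x[i+1:]]) / 2**n

assert avg_sens == 1.5
assert abs(fourier_side - avg_sens) < 1e-12
```

Here the only nonzero coefficients are $\widehat{f}(\{i\}) = 1/2$ and $\widehat{f}([3]) = -1/2$, so the weighted sum is $3 \cdot \tfrac14 + 3 \cdot \tfrac14 = \tfrac32$.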
Finally, from Proposition 1.13 we have $\mathop{\bf Var}[f] = \sum_{k > 0} \mathbf{W}^{k}[f]$; comparing this with \eqref{eqn:total-influence-formula} we immediately deduce a simple but important fact called the Poincaré inequality. Poincaré Inequality. For any $f : \{-1,1\}^n \to {\mathbb R}$, $\mathop{\bf Var}[f] \leq \mathbf{I}[f]$.
Equality holds in the Poincaré inequality if and only if all of $f$’s Fourier weight is at degrees $0$ and $1$; i.e., $\mathbf{W}^{\leq 1}[f] = \mathop{\bf E}[f^2]$. For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, Exercise 1.19 tells us this can only occur if $f = \pm 1$ or $f = \pm \chi_i$ for some $i$.
For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, the Poincaré inequality can be viewed as an (edge-)isoperimetric inequality, or (edge-)expansion bound, for the Hamming cube. If we think of $f$ as the indicator function for a set $A \subseteq \{-1,1\}^n$ of “measure” $\alpha = |A|/2^n$, then $\mathop{\bf Var}[f] = 4\alpha(1-\alpha)$ (Fact 1.14) whereas $\mathbf{I}[f]$ is $n$ times the (fractional) size of $A$’s edge boundary. In particular, the Poincaré inequality says that subsets $A \subseteq \{-1,1\}^n$ of measure $\alpha = 1/2$ must have edge boundary at least as large as those of the dictator sets.
For $\alpha \notin \{0, 1/2, 1\}$ the Poincaré inequality is not sharp as an edge-isoperimetric inequality for the Hamming cube; for small $\alpha$ even the asymptotic dependence is not optimal. Precisely optimal edge-isoperimetric results (and also vertex-isoperimetric results) are known for the Hamming cube. The following simplified theorem is optimal for $\alpha$ of the form $2^{-i}$:
This result illustrates an important recurring concept in the analysis of boolean functions: the Hamming cube is a “small-set expander”. Roughly speaking, this is the idea that “small” subsets $A \subseteq \{-1,1\}^n$ have unusually large “boundary size”.
Given the following question:
Let $G$ be a context-free grammar, $G=(V, \Sigma, R, S)$, that has start variable $S$, set of variables $V = \{S\}$, set of terminals $\Sigma = \{0, 1\}$, and set of rules $R = \{S \rightarrow \epsilon , S \rightarrow 0 , S \rightarrow 1 , S \rightarrow 0S0 , S \rightarrow 1S1\}$. Describe $L(G)$, the language of the grammar $G$.
I know that $\epsilon, 0, 1, 00, 11, 010, 101, 01010, 10101, 10001, 01110$ all fit the grammar, but I can't find a formal way to express what I want to say. Perhaps I don't know the symbols or something, but I would express it as
$L(G) = \{(0\ or\ 1)^n(0\ or\ 1)^{(0\ or\ 1)}(0\ or\ 1)^n,\ n \ge 0\}$
Is this right? Is there a more formal way to express this?
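For what it's worth, $L(G)$ is exactly the set of palindromes over $\{0,1\}$: each rule wraps a palindrome in matching outer symbols, and the middle is $\epsilon$, $0$ or $1$. A small Python sketch of mine enumerates derivations and confirms this for short strings:

```python
from collections import deque
from itertools import product

RHS = ["", "0", "1", "0S0", "1S1"]    # right-hand sides of the rules for S

def language_up_to(max_len):
    """Terminal strings derivable from S of length <= max_len (BFS on sentential forms)."""
    out, queue, seen = set(), deque(["S"]), {"S"}
    while queue:
        s = queue.popleft()
        if "S" not in s:
            out.add(s)
            continue
        for rhs in RHS:
            t = s.replace("S", rhs, 1)
            # prune once the terminal part alone already exceeds max_len
            if len(t.replace("S", "")) <= max_len and t not in seen:
                seen.add(t)
                queue.append(t)
    return out

lang = language_up_to(5)
palindromes = {"".join(w) for l in range(6) for w in product("01", repeat=l)
               if "".join(w) == "".join(w)[::-1]}
assert lang == palindromes
```

So a formal answer would be $L(G) = \{w \in \{0,1\}^* : w = w^{R}\}$, where $w^{R}$ denotes the reversal of $w$.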
I have implemented the Libor Market Model in Matlab. When I generate a number of paths, I notice that some of them explode. Does anybody have an idea what could cause this?
I already tried solving the problem by decreasing the timestep (down to dt=0.001) in order to reduce the error, and also by simulating with the log-Euler scheme instead of the 'normal' Euler. In both cases it did not resolve the problem, since some of the Libor rate paths are still diverging.
Specifics:
I simulate the forward Libor rates under the spot measure, whose dynamics are given by: $$dL_n\left(t\right)=\sigma_n\left(t\right)L_n\left(t\right)\sum_{j=q\left(t\right)}^n \frac{\tau_j \rho_{j,n} \sigma_j\left(t\right)L_j\left(t\right)}{1+\tau_j L_j\left(t\right)}dt + \sigma_n\left(t\right)L_n\left(t\right)dW\left(t\right)$$ where $$L_n\left(t\right):=L\left(t;T_n,T_{n+1}\right),$$ $$\tau_n = T_{n+1}-T_n,$$ $$\sigma_n\left(t\right) = k_n \left[\left(a+b\left(T_n-t\right)\right)e^{-c\left(T_n-t\right)}+d\right],$$ index function $q\left(t\right)$ is defined by $$T_{q\left(t\right)-1}\leq t < T_{q\left(t\right)},$$ $W$ is a Brownian Motion under the spot measure.
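To make the setup reproducible, here is a minimal log-Euler sketch of the spot-measure dynamics above — my own illustrative code, with simplifying assumptions not taken from the question (unit accruals $\tau_j = 1$, flat volatility $\sigma_n(t)\equiv\sigma$, full correlation $\rho_{j,n}\equiv 1$ with a single Brownian motion, tenor dates $T_n = n$). It is a starting point for hunting the explosion, not a fix:

```python
import numpy as np

def simulate_lmm(L0, sigma, dt, T, n_paths, seed=0):
    """Log-Euler simulation of forward Libor rates under the spot measure.
    Assumptions (mine): tau_j = 1, sigma_n(t) = sigma, rho = 1, T_n = n."""
    rng = np.random.default_rng(seed)
    L = np.tile(np.asarray(L0, dtype=float), (n_paths, 1))
    n_rates = L.shape[1]
    for step in range(int(round(T / dt))):
        t = step * dt
        q = int(t) + 1                    # T_{q-1} <= t < T_q with T_n = n
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        L_old = L.copy()                  # freeze the state at the step start
        for n in range(q - 1, n_rates):   # 0-based column j holds L_{j+1}
            drift = sum(sigma * L_old[:, j] / (1.0 + L_old[:, j])
                        for j in range(q - 1, n + 1))
            # log-Euler: exact for the lognormal part, drift frozen over the step
            L[:, n] = L_old[:, n] * np.exp((sigma * drift - 0.5 * sigma**2) * dt
                                           + sigma * dW)
    return L

paths = simulate_lmm(L0=[0.03] * 5, sigma=0.2, dt=0.01, T=1.0, n_paths=200)
```

Note that the log-Euler step keeps every rate strictly positive by construction, so any divergence must come from the drift term feeding back on itself; freezing the drift at the step start (as above) rather than updating it rate-by-rate within a step is one common source of discrepancy to check.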
Let $\Omega \subset \mathbb{R}^{2}$ be a bounded open set, $p \in (1,+\infty)$ an exponent and $f \in L^{p^{\prime}}(\Omega)$ a function, with $p^{\prime}=p/(p-1)$. We consider the problem
\[
(\mathcal{P}) \,\ \,\ \min\{C_{p}(\Sigma) + \lambda \mathcal{H}^{1}(\Sigma) : \Sigma \in \mathcal{K}(\Omega)\},
\]
where $\mathcal{K}(\Omega)$ is the class of compact connected subsets of the closure of $\Omega$, $\lambda>0$ is a fixed constant and $C_{p}$ is the $p$-compliance functional which for a given $\Sigma \in \mathcal{K}(\Omega)$ is defined as the maximum value of the functional
\[
u \mapsto \int_{\Omega} fu \, dx - \frac 1p \int_{\Omega \backslash \Sigma} |\nabla u|^{p} \, dx
\] on the Sobolev space $W^{1,p}_{0}(\Omega \backslash \Sigma)$.
In this talk I will present a partial $C^{1,\alpha}$ regularity result, extending some of the results obtained by A. Chambolle, J. Lamboley, A. Lemenant and E. Stepanov (2017). I will prove that every solution of problem $(\mathcal{P})$ has no loops (i.e. homeomorphic images of $S^{1}$), is Ahlfors-regular, and $C^{1,\alpha}$-regular at $\mathcal{H}^{1}$-a.e. point for every $p \in (1,+\infty)$.
I was wondering if anybody could help me generate the following state. It would be preferable to use only Hadamard, CNOT and T gates, on $\lceil\log_2(M+1)\rceil$ qubits: $$|\psi\rangle = \frac{1}{\sqrt{2}}\biggl(|0\rangle + \frac{1}{\sqrt{M}}\sum_{j=1}^M|j\rangle\biggr)$$ Assume $M$ is a power of $2$.
As you used the tag Qiskit I assume that you want a method to implement this state with Qiskit. Moreover, you did not mention any performance goal, so here is a general method that can be used for any quantum state:
# Import the Qiskit SDK
import qiskit
# Import the initializer
import qiskit.extensions.quantum_initializer._initializer as initializer
# Import numpy
import numpy

def generate_amplitudes(M: int) -> numpy.ndarray:
    qubit_number = int(numpy.ceil(numpy.log2(M + 1)))
    amplitudes = numpy.zeros((2**qubit_number, ), dtype=complex)
    amplitudes[0] = 1 / numpy.sqrt(2)
    amplitudes[1:M+1] = 1 / numpy.sqrt(2*M)
    return amplitudes

M = 10
N = int(numpy.ceil(numpy.log2(M + 1)))
# Create a Quantum Register with N qubits.
q = qiskit.QuantumRegister(N)
# Create a Quantum Circuit
qc = qiskit.QuantumCircuit(q)
# Initialise the state with the gate set of IBM (not H+T+CX)
qc.initialize(params=generate_amplitudes(M), qubits=q)
Note that this method is not efficient, in the sense that the generated circuit may not be optimal at all. As your state is quite simple, there is probably a cleverer algorithm to construct it more efficiently.
If you are restricted to a gate set of H, T and CX then you will need to use an external¹ tool to translate the non-{H, T} gates into sequences of H and T. This can be done efficiently with the Solovay-Kitaev algorithm or (more?) efficiently with the algorithm described in https://arxiv.org/abs/1212.6253.
¹ I tried to play with Qiskit's compiler and unroller, but I could not make them work properly to perform the translation to the gate set H, T and CX. Maybe someone else has an idea on how to do this translation with Qiskit?
Get a qubit $c$ into the $|+\rangle$ state, then do controlled uniform superposition preparation onto a register $i$ conditioned on $c$, then increment $i$ conditioned on $c$, then toggle $c$ if $i>0$. $i$ now holds the state you wanted.
The asymptotic T cost is $O(\lg M - \lg \epsilon)$ where $\epsilon$ is the absolute error tolerance. The circuit uses two more qubits than what you wanted in order to achieve that time cost (one for the control, one to hold the temporary comparison).
Here's the controlled uniform superposition preparation circuit (the figure is from https://arxiv.org/abs/1805.03662 . I don't know the original paper that discovered this technique):
There are standard constructions for exactly converting the comparisons, increments, multi-controlled-NOTs, and controlled-Hs into the H/CNOT/T gate set. You have to be careful to use single-dirty-ancilla constructions for the arithmetic ( https://arxiv.org/abs/1706.07884 ) because otherwise the qubit count will secretly double. And there are standard constructions for approximating the arbitrary-angle Z gates.
Disclaimer: this post is pretty much a comprehensive solution to exercise 1.19 from Structure and Interpretation of Computer Programs by Harold Abelson and Gerald Jay Sussman (the solution is written in Haskell instead of Lisp though).
Let us quickly recall the definition of Fibonacci numbers as well as a few well-known ways to calculate them.
Fibonacci numbers are defined recursively, with the \(n\)-th number denoted \(F_n\):
\( F_n = \left\{ \begin{array}{l l} 0 & \quad \text{if $n = 0$} \\ 1 & \quad \text{if $n = 1$} \\ F_{n-2} + F_{n-1} & \quad \text{otherwise.} \end{array} \right. \)
This definition can be translated to a Haskell program in a straightforward manner:
fib1 :: Int -> Integer
fib1 0 = 0
fib1 1 = 1
fib1 n = fib1 (n-2) + fib1 (n-1)
However, computation of \(n\)-th Fibonacci number with the above function requires \(O(log_2(F_n)n)\) space and \(O(log_2(F_n)2^n)\) time: recursive calls create a binary tree of depth \(n\) (hence the space requirement as you need to keep track of \(n\) intermediate values, each taking up to \(O(log_2(F_n))\) bits), which contains at most \(2^n-1\) elements, where each node takes \(O(log_2(F_n))\) time to compute as we’re using arbitrary precision arithmetic. It is worth noting that \(O(log_2(F_n))\) is in fact equal to \(O(n)\) as the growth of Fibonacci sequence is exponential, but the former will be used as it’s more clear.
The other, significantly more performant way is to use the iterative approach, i.e. start with \(F_0\) and \(F_1\) and by appropriately adding them, compute your way up to \(F_n\):
fib2 :: Int -> Integer
fib2 n = go n 0 1
  where
    go k a b
      | k == 0    = a
      | otherwise = go (k - 1) b (a + b)
The initial state is \((0, 1) = (F_0, F_1)\) and each iteration we replace a state \((F_{n-k}, F_{n-k+1})\) with \((F_{n-k+1}, F_{n-k+2})\), eventually ending up with \((F_n, F_{n+1})\) and taking the first element of a pair as a result.
The performance of this variation is much better than its predecessor as its space complexity is \(O(log_2(F_n))\) and time complexity is \(O(log_2(F_n)n)\).
Can we do better than that? As a matter of fact, we can. First of all, observe that in the last solution, each iteration we replaced a pair of two numbers with another pair of two numbers using elementary arithmetic operations, which is basically a transformation in two dimensional vector space (let us assume \(\mathbb{R}^2\)). In fact, this transformation is linear and its matrix representation is \(T = \begin{pmatrix} 0 & 1 \\ 1 & 1\end{pmatrix}\), because \(T \cdot \begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix} = \begin{pmatrix} F_{n+1} \\ F_n + F_{n+1} \end{pmatrix} = \begin{pmatrix} F_{n+1} \\ F_{n+2} \end{pmatrix}\).
With this knowledge the formula for Fibonacci numbers can be written in a compact way:
\(F_n = \pi_1\left( T^n \cdot \begin{pmatrix} F_0 \\ F_1 \end{pmatrix} \right)\), where \(\pi_1 : \mathbb{R}^2 \rightarrow \mathbb{R}\), \(\pi_1\left(\begin{pmatrix}v_1 \\ v_2\end{pmatrix}\right) = v_1\).
There are two routes to choose from at this point: we can either use an algorithm for fast exponentiation of matrices (such as square and multiply) to calculate \(T^n\) or try to apply more optimizations. We will go with the latter, because matrix multiplication is still pretty heavy operation and we can improve the situation by exploiting the structure of \(T\).
First, close observation of the powers of \(T\) allows us to notice the following:
Fact. \(T^n = \begin{pmatrix} F_{n-1} & F_n \\ F_n & F_{n+1} \end{pmatrix}\) for \(n \in \mathbb{N_+}\).
Proof. For \(n = 1\) it is easy to see that \(T = \begin{pmatrix}F_0 & F_1 \\ F_1 & F_2\end{pmatrix}\). Now assume that the fact is true for some \(n \in \mathbb{N_+}\). Then \(T^{n+1} = T^n \cdot T = \begin{pmatrix} F_{n-1} & F_n \\ F_n & F_{n+1} \end{pmatrix} \cdot \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} F_n & F_{n-1} + F_n \\ F_{n+1} & F_n + F_{n+1} \end{pmatrix} = \begin{pmatrix} F_n & F_{n+1} \\ F_{n+1} & F_{n+2} \end{pmatrix}\).
Corollary. For \(n \in \mathbb{N_+}\) there exist \(p\) and \(q\) such that \(T^n = \begin{pmatrix} p & q \\ q & p+q \end{pmatrix}\).
Equipped with this knowledge we can not only represent \(T^k\) for some \(k \in \mathbb{N_+}\) using only two numbers instead of four, but also derive a transformation of these numbers that corresponds to the computation of \(T^{2k}\).
Fact. Let \(n \in \mathbb{N_+}\), \(p, q \in \mathbb{N}\). Then \(\begin{pmatrix} p & q \\ q & p+q \end{pmatrix}^2 = \begin{pmatrix} p' & q' \\ q' & p'+q' \end{pmatrix}\), where \(p' = p^2 + q^2\) and \(q' = (2p + q)q\).
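A quick sanity check of this squaring identity (a small Python sketch of my own, using Fibonacci pairs \((p, q) = (F_{n-1}, F_n)\) as test values):

```python
def square_pq(p, q):
    # (p, q) represents the matrix [[p, q], [q, p+q]]; squaring it gives (p', q')
    return p*p + q*q, (2*p + q)*q

def matmul2(A, B):
    # plain 2x2 matrix multiplication
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for p, q in [(0, 1), (1, 1), (3, 5), (13, 21)]:
    M = [[p, q], [q, p + q]]
    p2, q2 = square_pq(p, q)
    assert matmul2(M, M) == [[p2, q2], [q2, p2 + q2]]
```

For instance \((p, q) = (0, 1)\), i.e. \(T\) itself, squares to \((1, 1)\), which is \(T^2 = \begin{pmatrix}1 & 1 \\ 1 & 2\end{pmatrix}\).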
Now, we can put all of these together and construct the final solution:
fib3 :: Int -> Integer
fib3 n = go n 0 1 0 1
  where
    go k a b p q
      | k == 0    = a
      | odd k     = go (k - 1) (p*a + q*b) (q*a + (p+q)*b) p q
      | otherwise = go (k `div` 2) a b (p*p + q*q) ((2*p + q)*q)
Let us denote \(i\)-th bit of \(n\) by \(n_i \in \{0,1\}\), where \(i \in \{0, \dots, \lfloor log_2(n) \rfloor \}\). We start with \(\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} F_0 \\ F_1 \end{pmatrix}\) and \(\begin{pmatrix} p \\ q \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \cong T\). Then we traverse the bits of \(n\) and return \(a\). Note that while iterating through \(n_i\):
\(\begin{pmatrix} p \\ q \end{pmatrix} \cong T^{2^i}\). \(\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} F_m \\ F_{m+1} \end{pmatrix}\) for \(m = \displaystyle\sum_{j = 0}^{i-1} n_j \cdot 2^j\). If \(n_i = 1\), \(\begin{pmatrix} F_m \\ F_{m+1} \end{pmatrix}\) is replaced with \(T^{2^i} \cdot \begin{pmatrix} F_m \\ F_{m+1} \end{pmatrix} = \begin{pmatrix} F_{m+2^i} \\ F_{m+2^i+1} \end{pmatrix}\).
Hence, in the end \(\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix}\), so the function correctly computes \(n\)-th Fibonacci number.
The space complexity of this solution is \(O(log_2(F_n))\), whereas its time complexity is \(O(log_2(F_n)(log_2(n)+H(n)))\) with \(H(n)\) being Hamming weight of \(n\).
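A direct Python port of fib3 (my own transliteration of the Haskell above, not part of the original post) can be checked against the simple iterative version:

```python
def fib2(n):
    # iterative version, mirroring the Haskell fib2
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib3(n):
    # fast version, mirroring the Haskell fib3: (p, q) represents T^(2^i)
    a, b, p, q = 0, 1, 0, 1
    while n > 0:
        if n % 2 == 1:
            a, b = p*a + q*b, q*a + (p + q)*b
            n -= 1
        else:
            p, q = p*p + q*q, (2*p + q)*q
            n //= 2
    return a

assert all(fib3(k) == fib2(k) for k in range(300))
```

Python integers are arbitrary precision, so the comparison is exact for any \(n\).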
Now, let us put all of the implementations together and measure their performance using criterion library.
{-# OPTIONS_GHC -Wall #-}
{-# LANGUAGE BangPatterns #-}
module Main where

import Criterion.Main
import Criterion.Types

fib1 :: Int -> Integer
fib1 0 = 0
fib1 1 = 1
fib1 n = fib1 (n - 2) + fib1 (n - 1)

fib2 :: Int -> Integer
fib2 n = go n 0 1
  where
    go !k !a b
      | k == 0    = a
      | otherwise = go (k - 1) b (a + b)

fib3 :: Int -> Integer
fib3 n = go n 0 1 0 1
  where
    go !k !a b !p !q
      | k == 0    = a
      | odd k     = go (k - 1) (p*a + q*b) (q*a + (p+q)*b) p q
      | otherwise = go (k `div` 2) a b (p*p + q*q) ((2*p + q)*q)

main :: IO ()
main = defaultMainWith (defaultConfig { timeLimit = 2 })
  [ bgroup "fib1" $ map (benchmark fib1) $ [10, 20] ++ [30..42]
  , bgroup "fib2" $ map (benchmark fib2) $ 10000 : map (100000*) [1..10]
  , bgroup "fib3" $ map (benchmark fib3) $ 1000000 : map (10000000*) ([1..10] ++ [20])
  ]
  where
    benchmark fib n = bench (show n) $ whnf fib n
The above program was compiled with GHC 7.10.2 and run on Intel Core i7 3770. HTML report generated by it is available here.
In particular, after substituting the main function with:
main :: IO ()
main = defaultMainWith (defaultConfig { timeLimit = 2 })
  [ bgroup "fib3" [benchmark fib3 1000000000] ]
  where
    benchmark fib n = bench (show n) $ whnf fib n
we can see that the final implementation is able to calculate billionth Fibonacci number in a very reasonable time:
benchmarking fib3/1000000000
time                 30.82 s    (29.86 s .. 31.97 s)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 30.34 s    (29.96 s .. 30.56 s)
std dev              345.1 ms   (0.0 s .. 387.0 ms)
variance introduced by outliers: 19% (moderately inflated)
The Vector Subspace of 2 x 2 Matrices
Recall that the set of all $m \times n$ matrices denoted $M_{mn}$ forms a vector space, as verified on The Vector Space of m x n Matrices page. We will now begin to show that $M_{22}$, the set of all $2 \times 2$ matrices is a subspace of $M_{mn}$.
Recall that we only need to verify that the closure under addition and closure under scalar multiplication axioms hold, because the other axioms are inherited, since clearly $M_{22} \subset M_{mn}$.
Let $u, v \in M_{22}$ such that $u = \begin{bmatrix} u_{11} & u_{12}\\ u_{21} & u_{22} \end{bmatrix}$ and $v = \begin{bmatrix} v_{11} & v_{12}\\ v_{21} & v_{22} \end{bmatrix}$, and let $a \in \mathbb{F}$.
9. $u + v = \begin{bmatrix} u_{11} & u_{12}\\ u_{21} & u_{22} \end{bmatrix} + \begin{bmatrix} v_{11} & v_{12}\\ v_{21} & v_{22} \end{bmatrix} = \begin{bmatrix} u_{11} + v_{11} & u_{12} + v_{12} \\ u_{21} + v_{21} & u_{22} + v_{22} \end{bmatrix}$. This is still a $2 \times 2$ matrix, and so $(u + v) \in M_{22}$ and thus $M_{22}$ is closed under matrix addition. 10. $au = a \begin{bmatrix} u_{11} & u_{12}\\ u_{21} & u_{22} \end{bmatrix} = \begin{bmatrix} au_{11} & au_{12}\\ au_{21} & au_{22} \end{bmatrix}$ which is still a $2 \times 2$ matrix, and so $(au) \in M_{22}$ and $M_{22}$ is closed under scalar multiplication.
Therefore since axioms 9 and 10 hold, the subset $M_{22} \subset M_{mn}$ is a vector space, namely a vector subspace of the vector space $M_{mn}$.
The Vector Subspace of 2 x 2 Diagonal Matrices
Let $W$ be the set of $2 \times 2$ diagonal matrices. We note that $W \subset M_{22} \subset M_{mn}$, and we will now verify that $W$ is a subspace of $M_{22}$ and $M_{mn}$.
Let $u, v \in W$ such that $u = \begin{bmatrix} u_{11} & 0 \\ 0 & u_{22} \end{bmatrix}$ and $v = \begin{bmatrix} v_{11} & 0 \\ 0 & v_{22} \end{bmatrix}$, and let $a \in \mathbb{F}$.
9. $u + v = \begin{bmatrix} u_{11} & 0 \\ 0 & u_{22} \end{bmatrix} + \begin{bmatrix} v_{11} & 0 \\ 0 & v_{22} \end{bmatrix} = \begin{bmatrix} u_{11} + v_{11} & 0 \\ 0 & u_{22} + v_{22} \end{bmatrix}$ which is still a diagonal matrix, so $(u + v) \in W$, and so $W$ is closed under addition. 10. $au = a \begin{bmatrix} u_{11} & 0 \\ 0 & u_{22} \end{bmatrix} = \begin{bmatrix} au_{11} & 0 \\ 0 & au_{22} \end{bmatrix}$ which is still a diagonal matrix, so $(au) \in W$, and so $W$ is closed under scalar multiplication.
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same.
For example, any two of these squares look the same after you rotate and/or reflect them:
An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second.
As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:
Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that
$$ g \circ f = 1_x $$ and $$ f \circ g = 1_y. $$
I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).
Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse.
Now we're ready for isomorphisms!
Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\).
Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like!
What's an isomorphism in the category \(\mathbf{3}\)? Remember, this is a free category on a graph:
The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2:
$$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1:
$$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms:
$$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism!
In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.
We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\).
Puzzle 144 says that in a poset, the only isomorphisms are identities.
Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.
Puzzle 145. Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\).
So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them.
One more example:
Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism.
This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'.
But what are they like?
Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism
$$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes:
Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism
$$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that
$$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means
$$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\).
In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\).
But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147. Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism.
Doing this will help you understand natural isomorphisms. But you also need examples!
Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal!
We should talk about this.
Assume that you have a portfolio for which you have estimated a parametric model to the underlying instruments, but the distribution of the portfolio as a whole is too complicated to compute explicitly. Now you want to determine the expected shortfall by Monte Carlo simulations.
We know that for our r.v. $Y$ the empirical cdf can be estimated by $$\hat{F}_Y(y)=\frac{1}{n}\sum\limits_{i=1}^n I(Y_i \leq y)$$ and the quantiles can be estimated by $$\hat{y}_q=\inf\{y:\hat{F}_Y(y)\ge q\} =\Upsilon_{[nq]+1}$$ where $\Upsilon_i$ is the $i$:th order statistic. Thus the ES can be estimated by $$\widehat{ES}_p(Y) = \frac{1}{p}\left(\sum\limits_{i=1}^{[np]}\frac{\Upsilon_i}{n}+\left(p-\frac{[np]}{n}\right)\Upsilon_{[np]+1}\right)$$
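The estimator is straightforward to implement; here is a small Python sketch of my own (with $\Upsilon_1 \le \dots \le \Upsilon_n$ the ascending order statistics, as in the formula):

```python
import math

def empirical_es(sample, p):
    """Empirical expected shortfall at level p:
    ES_p = (1/p) * ( sum_{i <= [np]} Y_(i)/n + (p - [np]/n) * Y_([np]+1) )."""
    y = sorted(sample)              # ascending order statistics
    n = len(y)
    k = math.floor(n * p)           # [np]
    tail = sum(y[i] for i in range(k)) / n
    return (tail + (p - k / n) * y[k]) / p

# when n*p is an integer this is just the mean of the worst [np] outcomes:
es = empirical_es([i / 100 for i in range(1, 101)], p=0.05)
```

With the sample $0.01, 0.02, \dots, 1.00$ and $p = 0.05$ the estimator returns the mean of the five smallest values, $0.03$, which is a convenient deterministic check before running Monte Carlo samples through it.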
However, as we will see, this numerical approximation converges very slowly with increasing sample size $N$! This is illustrated with an example where the random variable $Y$ is standard normal (the x axis is $N/100$).
Maybe you could naively repeat the simulation for fixed $N$ (sufficiently large, e.g. ~200*100) and then take the mean. But aren't there any other techniques that deal with this problem (especially in the case of heavy tails)? I've managed to find several different methods, for example using control variates, importance sampling, the delta-gamma approximation etc. But none of these seems to apply to the case of the empirical ES.
All comments, including references to articles, are welcome!
Let $X$ be an abelian variety over a field $k$, $L$ a line bundle on $X$.
Let $\varphi_L : X \to X^t$ be the morphism obtained by considering the Mumford line bundle $\Lambda (L) = m^*L \otimes p_1 ^* L^{-1} \otimes p_2 ^* L^{-1}$ on $X \times X$ as a family of line bundles on the first coordinate parameterized by the second coordinate.
There is a claim that
Direct computation yields $\varphi_{(-1)^* L} (x) = - \varphi _L(x)$ for every $L$ and $x$.
From the formula for $\varphi _L$ and $\Lambda (L)$, I see no reason why this should be true. Moreover, there seems to be an easy counterexample: namely, take $L$ to be a symmetric line bundle, i.e. one such that $(-1)^* L \cong L$; then this would imply that $\varphi_L$ is constant zero.
Any 1-qubit special gate can be decomposed into a sequence of rotation gates ($R_z$, $R_y$ and $R_z$). This allows us to have the general 1-qubit special gate in matrix form:
$$ U\left(\theta,\phi,\lambda\right)=\left(\begin{array}{cc} e^{-i(\phi+\lambda)}\cos\left(\frac{\theta}{2}\right) & -e^{-i(\phi-\lambda)}\sin\left(\frac{\theta}{2}\right)\\ e^{i(\phi-\lambda)}\sin\left(\frac{\theta}{2}\right) & e^{i(\phi+\lambda)}\cos\left(\frac{\theta}{2}\right) \end{array}\right) $$
Given $U\left(\theta,\phi,\lambda\right)$, how do I decompose it into an arbitrary set of gates such as rotation gates, Pauli gates, etc.?
To make the question more concrete, here is my current situation: for my project, I am giving users the ability to apply $U\left(\theta,\phi,\lambda\right)$ for specific values of $\theta$, $\phi$ and $\lambda$ to qubits.
But I am targeting real machines that only offer specific gates. For instance, the Rigetti Agave quantum processor only offers $R_z$, $R_x\left(k\pi/2\right)$ and $CZ$ as primitive gates. One can think of any $U\left(\theta,\phi,\lambda\right)$ as an intermediate form that needs to be transformed into a sequence of whatever is the native gateset of the target quantum processor.
Now, in that spirit, how do I transform any $U\left(\theta,\phi,\lambda\right)$ into say $R_z$ and $R_x\left(k\pi/2\right)$? Let us ignore $CZ$ since it is a 2-qubit gate.
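For the specific target set $\{R_z, R_x(k\pi/2)\}$, one possible route (a sketch under my own angle conventions, which may differ from a given hardware's by signs and global phases): the matrix above factors as $R_z(2\phi)\,R_y(\theta)\,R_z(2\lambda)$, and $R_y(\theta) = R_x(-\pi/2)\,R_z(\theta)\,R_x(\pi/2)$, so only $R_z$ rotations and two fixed $R_x(\pm\pi/2)$ pulses are needed. A NumPy check of this identity:

```python
import numpy as np

def Rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def Rx(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def U(theta, phi, lam):
    # the matrix from the question
    return np.array([
        [np.exp(-1j * (phi + lam)) * np.cos(theta / 2),
         -np.exp(-1j * (phi - lam)) * np.sin(theta / 2)],
        [np.exp(1j * (phi - lam)) * np.sin(theta / 2),
         np.exp(1j * (phi + lam)) * np.cos(theta / 2)]])

theta, phi, lam = 0.7, 0.3, 1.1
seq = Rz(2 * phi) @ Rx(-np.pi / 2) @ Rz(theta) @ Rx(np.pi / 2) @ Rz(2 * lam)
print(np.allclose(seq, U(theta, phi, lam)))   # True
```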
Note: I'm writing a compiler, so an algorithm and reference papers or book chapters that solve this exact problem are more than welcome! |
Consider a rotor rotating around a fixed axis. Because of wear and damage, the mass distribution is not homogeneous. This leads to dangerous vibrations in the rotation. A prototypical example can be a wind turbine, affected by misalignment of the blades and/or mass imbalance of the hub and blades [2].
In order to compensate for the imbalance, two balancing heads are mounted at the endpoints of the axle, as in figure 1. Each balancing head is made of two masses that are free to rotate.
Our goal is to determine the optimal movement of the balancing masses to minimize the vibrations. Control theoretical techniques are employed. For further details, see [6].
We consider two planes
The heads are fixed to the rotor and rotate with it. In particular, $\alpha_i$ and $\gamma_i$ are defined with respect to the rotor-fixed reference frame $(O;(x,y,z))$.
Each head is made of a pair of balancing masses, which are free to rotate orthogonally to the rotation axis $z$.
Namely, we have
- two mass-points $(m_1,P_{1,1})$ and $(m_1,P_{1,2})$ lying on $\pi_1$ at distance $r_1$ from the axis $z$, i.e., in the reference frame $(O;(x,y,z))$
- two mass-points $(m_2,P_{2,1})$ and $(m_2,P_{2,2})$ lying on $\pi_2$ at distance $r_2$ from the axis $z$, namely, in the reference frame $(O;(x,y,z))$
For any $i=1,2$, let $b_i$ be the bisector of the angle generated by $\overset{\longrightarrow}{OP_{i,1}}$ and $\overset{\longrightarrow}{OP_{i,2}}$ (see figure 3). The
intermediate angle $\alpha_i$ is the angle between the $x$-axis and the bisector $b_i$, while the gap angle $\gamma_i$ is the angle between $\overset{\longrightarrow}{OP_{i,1}}$ and the bisector $b_i$.
The imbalance is modelled by a resulting force $F$ and a momentum $N$ orthogonal to the rotation axis. In the rotor-fixed reference frame $(O;(x,y,z))$, set $P_1 := (0,0,-a)$, $P_2 := (0,0,b)$, $F := (F_x,F_y,0)$ and $N := (N_x,N_y,0)$. By imposing the equilibrium condition on forces and momenta, the force $F$ and the momentum $N$ can be decomposed into a force $F_1$ exerted at $P_1$ contained in plane $\pi_1$ and a force $F_2$ exerted at $P_2$ contained in $\pi_2$.
In each plane, we generate a force to balance the system, by moving the balancing masses:
- in the plane $\pi_1$, we compensate $F_1$ by the centrifugal force:
- in the plane $\pi_2$, we compensate $F_2$ by the centrifugal force:
The overall imbalance of the system is then given by the resulting forces in $\pi_1$ and $\pi_2$,
and
respectively.
The overall imbalance on the system made of rotor and balancing device is measured by the imbalance indicator
Our task is to find a control strategy such that
- the balancing masses move from their initial configuration $\Phi_0$ to a final configuration $\overline{\Phi}$, where the imbalance is compensated;
- the imbalance and velocities should be kept small during the correction process.
We address the minimization problem
where $Q(\Phi) := G(\Phi)-\inf G$ and
with .
The theoretical analysis of the above problem is in [6]. The existence of the optimum is proved, and the stabilization of the optimal trajectories towards steady optima is proved under general conditions.
Simulation
In order to perform the numerical simulations, we first discretize our cost functional and then run AMPL-IPOpt to minimize the resulting discretized functional.
For the purpose of the numerical simulations, it is convenient to rewrite the cost functional as
subject to the state equation
Discretization
Choose $T$ sufficiently large and $N_t \in \mathbb{N} \setminus \{0,1\}$. Set
The discretized state is , whereas the discretized control (velocity) is $(\psi_{i})_{i=0, \dots ,N_t-2}$. The discretized functional then reads as
subject to the state equation
Execution
The discretized minimization problem is
We address the above minimization problem by employing the interior-point optimization routine IPOpt (see [3,4]) coupled with AMPL [1], which serves as modelling language and performs the automatic differentiation. The interested reader is referred to [8, Chapter 9] and [7] for a survey on existing numerical methods to solve an optimal control problem.
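Since the functional and state equation are not reproduced here, the following toy problem (all numbers and names are mine) illustrates the direct-transcription idea: discretize the dynamics with forward Euler and hand the resulting finite-dimensional problem to a solver. For this linear-quadratic toy, a least-squares solve stands in for IPOpt:

```python
import numpy as np

# Toy direct transcription: minimize J = dt * sum_k (x_k^2 + u_k^2)
# subject to forward-Euler dynamics x_{k+1} = x_k + dt*u_k, x_0 = 1.
# Linear dynamics + quadratic cost => a linear least-squares problem.
Nt, T = 50, 5.0
dt = T / Nt
x0 = 1.0

L = np.tril(np.ones((Nt, Nt)), k=-1)          # x = x0 + dt * L @ u
A = np.vstack([np.sqrt(dt) * dt * L,          # state residuals
               np.sqrt(dt) * np.eye(Nt)])     # control residuals
b = np.concatenate([-np.sqrt(dt) * x0 * np.ones(Nt), np.zeros(Nt)])
u, *_ = np.linalg.lstsq(A, b, rcond=None)

x = x0 + dt * (L @ u)
J = dt * np.sum(x ** 2 + u ** 2)
J_uncontrolled = dt * Nt * x0 ** 2            # cost of doing nothing (u = 0)
print(J, J_uncontrolled)                      # controlled cost is far lower
```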
In figures 4, 5, 6 and 7, we plot the computed optimal trajectory for \eqref{functional}, with initial datum $\Phi_0=\left(\alpha_{0,1},\gamma_{0,1};\alpha_{0,2},\gamma_{0,2}\right) := \left(2.6,0.6, 2.5,1.5\right)$. We choose $F$, $N$ and $m_i$. The exponential stabilization proved in [6] emerges.
In figure 8, we depict the imbalance indicator versus time along the computed trajectories. As expected, it decays to zero exponentially.
References:
[1] Robert Fourer, David M. Gay, and Brian W. Kernighan. A modeling language for mathematical programming. Management Science, 36(5):519–554, 1990.
[2] Mike Jeffrey, Michael Melsheimer, and Jan Liersch. Method and system for determining an imbalance of a wind turbine rotor, September 11 2012. US Patent 8,261,599.
[3] Andreas Waechter, Carl Laird, F. Margot, and Y. Kawajir. Introduction to Ipopt: A tutorial for downloading, installing, and using Ipopt. Revision, 2009.
[4] Andreas Waechter and Lorenz T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25–57, 2006.
[6] Matteo Gnuffi, Dario Pighin and Noboru Sakamoto. Rotors imbalance suppression by optimal control. Preprint.
[7] Noboru Sakamoto and Arjan J. van der Schaft. Analytical approximation methods for the stabilizing solution of the Hamilton-Jacobi equation. IEEE Transactions on Automatic Control, 53(10):2335–2350, 2008.
[8] Emmanuel Trélat. Contrôle optimal : théorie et applications. Mathématiques Concrètes. Vuibert, Paris, 2005. Available online: https://www.ljll.math.upmc.fr/trelat/chiers/livreopt2.pdf. |
Convergence of measures
A concept in measure theory, determined by a certain topology in a space of measures that are defined on a certain $\sigma$-algebra $\mathcal{B}$ of subsets of a space $X$ or, more generally, in a space $\mathcal{M} (X, \mathcal{B})$ of charges, i.e. countably-additive real (resp. complex) functions $\mu: \mathcal{B}\to \mathbb R$ (resp. $\mathbb C$), often also called $\mathbb R$ (resp. $\mathbb C$) valued or signed measures. The total variation measure of a $\mathbb C$-valued measure is defined on $\mathcal{B}$ as: \[ \abs{\mu}(B) :=\sup\left\{ \sum \abs{\mu(B_i)}: \text{$\{B_i\}$ is a countable partition of $B$}\right\}. \] In the real-valued case the above definition simplifies as \[ \abs{\mu}(B) = \sup_{A\in \mathcal{B}, A\subset B} \left(\abs{\mu (A)} + \abs{\mu (B\setminus A)}\right). \] The total variation of $\mu$ is then defined as $\left\|\mu\right\|_v := \abs{\mu}(X)$. The space $\mathcal{M}^b (X, \mathcal{B})$ of $\mathbb R$ (resp. $\mathbb C$) valued measures with finite total variation is a Banach space and the following are the most commonly used topologies.
1) The norm or strong topology: $\mu_n\to \mu$ if and only if $\left\|\mu_n-\mu\right\|_v\to 0$.
2) The weak topology: a sequence of measures $\mu_n \rightharpoonup \mu$ if and only if $F (\mu_n)\to F(\mu)$ for every bounded linear functional $F$ on $\mathcal{M}^b$.
3) When $X$ is a topological space and $\mathcal{B}$ the corresponding $\sigma$-algebra of Borel sets, we can introduce on $\mathcal{M}^b$ the narrow topology. In this case $\mu_n$ converges to $\mu$ if and only if \begin{equation}\label{e:narrow} \int f\, \mathrm{d}\mu_n \to \int f\, \mathrm{d}\mu \end{equation} for every bounded continuous function $f:X\to \mathbb R$ (resp. $\mathbb C$). This topology is also sometimes called the weak topology, however such notation is inconsistent with the Banach space theory, see below. The following is an important consequence of the narrow convergence: if $\mu_n$ converges narrowly to $\mu$, then $\mu_n (A)\to \mu (A)$ for any Borel set $A$ such that $\abs{\mu}(\partial A) = 0$.
4) When $X$ is a locally compact topological space and $\mathcal{B}$ the $\sigma$-algebra of Borel sets yet another topology can be introduced, the so-called wide topology, or sometimes referred to as weak$^\star$ topology. A sequence $\mu_n\rightharpoonup^\star \mu$ if and only if \eqref{e:narrow} holds for continuous functions which are compactly supported.
This topology is in general weaker than the narrow topology. If $X$ is compact and Hausdorff the Riesz representation theorem shows that $\mathcal{M}^b$ is the dual of the space $C(X)$ of continuous functions. Under this assumption the narrow and weak$^\star$ topologies coincide with the usual weak$^\star$ topology of the Banach space theory. Since in general $C(X)$ is not a reflexive space, it turns out that the narrow topology is in general weaker than the weak topology.
A topology analogous to the weak$^\star$ topology is defined in the more general space $\mathcal{M}^b_{loc}$ of locally bounded measures, i.e. those measures $\mu$ such that for any point $x\in X$ there is a neighborhood $U$ with $\abs{\mu}(U)<\infty$.
How to Cite This Entry:
Convergence of measures.
Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Convergence_of_measures&oldid=27239 |
When running my Finite Difference code, I observe something odd.
Although implementing a classical (non-reverting) SABR model, I initialized the variables such that it should be equal to Black-Scholes.
Boundary conditions for

- the lower bound on price,
- the lower bound on volatility,
- the upper bound on volatility

are Dirichlet-style (just set equal to a value).
For the upper bound on forward price ($F$), I want to set the Neumann condition of $\Gamma=0$, since I believe this is true for all (put and call) options for large underlying price. This condition yields:
$\dfrac{\partial^2 V}{\partial x^2}(\dfrac{\partial x}{\partial F})^2+\dfrac{\partial V}{\partial x}\dfrac{\partial^2 x}{\partial F^2}=\Gamma=0$
Note that the extra terms with $x$ and $F$ are due to my variable transformation from $F$ to $x$.
Edit June 19: @Yian_Pap I agree, let's forget about the smoothing condition and focus on getting the Neumann condition working. Let me be specific regarding my steps applying your approach :) This is how I would currently implement it, but once again I'm not very familiar with FD (this is my first use case):
First of all, it is not immediately clear to me which PDE to start with (the pricing equation? or just the $\Gamma=0$ equation?). If we start from $\Gamma=0$, why would we worry about the cross-derivative term being 0 or not?
If I start with $\Gamma=0$, no transformations in F, then
$\dfrac{\partial^2 V}{\partial F^2} = 0$
discretize using 2nd order central FD approx,
$\dfrac{1}{\Delta F^2} (V_{N-1,j} - 2V_{N,j} + V_{N+1,j}) = 0$,
$V_{N,j}=\dfrac{1}{2}(V_{N-1,j}+V_{N+1,j})$ (1),
which contains the outer grid point $V_{N+1,j}$, which I assume you mean in your answer?
Then, as I understand your comment that the first derivative at $F \to \infty$ is known and constant, you mean that the delta is 1 (0) for a call (put)?
$\dfrac{\partial V}{\partial F} = \Delta = 1$,
discretize using a 2nd order central FD approximation,
$\dfrac{1}{2\Delta F} (V_{N+1,j} - V_{N-1,j}) = 1$, so $V_{N+1,j} = V_{N-1,j} + 2\Delta F$,
use this to substitute $V_{N+1,j}$ in (1), which gives
$V_{N,j} = \Delta F + V_{N-1,j}$,
with $\Delta F$ the step size in $F$. Then I can incorporate this in the coefficient matrix as follows. (Side question: why not use the $\Delta=1$ condition directly?)
Say I have the implicit scheme $V_{i,j,k} = z_1V_{i-1,j,k+1} + z_2V_{i,j,k+1} + z_3V_{i+1,j,k+1}$. For $i = N-1$, the term $V_{i+1,j}$ lies on the boundary, hence its matrix entry is set to $0$. Incorporate the boundary by adding $z_3$ to $z_2$ at every place in the matrix where $i=N-1$, and additionally add $z_3\Delta F$ to the right-hand side. Mathematically,
$V_{N-1,j,k} = z_1V_{N-2,j,k+1} + (z_2+z_3)V_{N-1,j,k+1} + z_3\Delta F$.
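A small NumPy sketch of this bookkeeping (the coefficients $z_i$, grid size and current time level are made up; the point is only the fold-in of the eliminated boundary node into the last row and the right-hand side):

```python
import numpy as np

# implicit step: V_i^k = z1*V_{i-1}^{k+1} + z2*V_i^{k+1} + z3*V_{i+1}^{k+1}
# upper Neumann condition eliminates node N via V_N = V_{N-1} + dF
N, dF = 6, 0.5
z1, z2, z3 = -0.2, 1.4, -0.2            # made-up implicit coefficients

A = np.zeros((N, N))                    # unknowns V_0 .. V_{N-1}
rhs_shift = np.zeros(N)
for i in range(N):
    A[i, i] = z2
    if i > 0:
        A[i, i - 1] = z1
    if i < N - 1:
        A[i, i + 1] = z3
A[0, :] = 0.0
A[0, 0] = 1.0                           # lower Dirichlet: pin V_0
A[N - 1, N - 1] = z2 + z3               # absorb eliminated node: add z3 to z2
rhs_shift[N - 1] = z3 * dF              # constant z3*dF moved to the RHS

Vk = np.linspace(0.0, 2.5, N)           # current time level, V_0 = 0
Vk1 = np.linalg.solve(A, Vk - rhs_shift)
print(Vk1)
```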
Would this be a correct understanding of how the upper bound F Neumann condition should be implemented? |
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
|
If $M$ is a complete Riemannian manifold, it possesses a
unique self-adjoint positive operator $-\Delta$ on $L^2(M)$. If $M$ is not complete, though, it is known that the Laplace-Beltrami operator $-\Delta$ has several self-adjoint extensions in $L^2(M)$. Nevertheless, the heat kernel is uniquely defined (by the minimality property), regardless of whether $M$ is complete or not.
If $M$ is not complete, what is the self-adjoint extension of $-\Delta$ that corresponds to the heat kernel (i.e. what is the self-adjoint extension that generates the heat semigroup)? Is it the Friedrichs extension? If so, where can I find a proof of this?
I would be satisfied with an answer to the following, simpler question: if $M$ is complete and $U \subset M$ is an open subset with negligible complement, and $h_U$ and $h_M$ are their corresponding heat kernels, what is the relationship between $h_U$ and $h_M \big| _U$? By minimality, $h_U \le h_M \big| _U$; I expect them to be equal, though, at least as integral operators on $L^2 (U) = L^2 (M)$ - but I do not know whether this is true or how to prove it, because I do not know which extension of $-\Delta_U$ corresponds to $h_U$. |
A Basis for Arbitrary Topological Products of Topological Spaces
Recall from the Arbitrary Topological Products of Topological Spaces page that if $\{ X_i \}_{i \in I}$ is an arbitrary collection of topological spaces and $\displaystyle{\prod_{i \in I} X_i}$ is the Cartesian product of these topological spaces whose elements are sequences $(x_i)_{i \in I}$ where $x_i$ is the $i^{\mathrm{th}}$ component of $(x_i)_{i \in I}$, then the product topology $\tau$ on $\displaystyle{\prod_{i \in I} X_i}$ is the initial topology induced by the projection maps $\displaystyle{p_i : \prod_{i \in I} X_i \to X_i}$ and the resulting topological product is the topological space $\left ( \prod_{i \in I} X_i, \tau \right )$.
We also noted that a subbasis for the product topology is:

(1) $\displaystyle{\mathcal S = \left \{ p_i^{-1}(V) : i \in I, \: V \subseteq X_i \: \mathrm{is \: open \: in \:} X_i \right \}}$
In other words, the elements of the subbasis $\mathcal S$ are the subsets $\displaystyle{U \subseteq \prod_{i \in I} X_i}$ that are the inverse image under some projection $p_i$ of an open set $V$ in $X_i$.
We will now give an explicit form for a basis of the product topology on an arbitrary topological product.
Theorem 1: Let $\{ X_i \}_{i \in I}$ be an arbitrary collection of topological spaces and let $\displaystyle{\prod_{i \in I} X_i}$ be the Cartesian product of these spaces. Then a basis for the product topology is $\displaystyle{\mathcal B = \left \{ U = \prod_{i \in I} U_i : U_i \subseteq X_i \: \mathrm{is \: open \: in \:} X_i \: \mathrm{and} \: U_i = X_i \: \mathrm{for \: all \: but \: finitely \: many} \: i \right \}}$. Proof: Take $\mathcal S$ as above as a subbasis for the product topology $\tau$ on $\displaystyle{\prod_{i \in I} X_i}$. Then the collection of all finite intersections of elements from $\mathcal S$ forms a basis of $\tau$. Consider an arbitrary finite intersection of elements from $\mathcal S$, say $U = p_{i_1}^{-1}(V_1) \cap p_{i_2}^{-1}(V_2) \cap ... \cap p_{i_n}^{-1}(V_n)$ where each $V_k$ is open in $X_{i_k}$. Now $p_{i_k}^{-1}(V_k) = \prod_{i \in I} W_i$ where $W_{i_k} = V_k$ and $W_i = X_i$ for $i \neq i_k$. Intersecting these products coordinatewise, we get $\displaystyle{U = \prod_{i \in I} U_i}$ where each $U_i$ is open in $X_i$ (being either $X_i$ or a finite intersection of some of the $V_k$) and $U_i = X_i$ for all $i \notin \{ i_1, i_2, ..., i_n \}$, i.e., for all but finitely many $i$. Conversely, every such product is a finite intersection of elements of $\mathcal S$. This shows that $\mathcal B$ is the basis obtained from $\mathcal S$. $\blacksquare$ |
Let $f_1,f_2, \ldots ,f_n$ be polynomials in any number of variables with algebraic coefficients. Is there algorithm to determine whether the ring $\mathbb{Z}[f_1,f_2,\ldots ,f_n]$ contains a non-constant polynomial with integer coefficients?
To illustrate what the game is here, I'll give some examples:
$\mathbb{Z}[\sqrt{2}x-y]\cap \mathbb{Z}[x,y]=\mathbb{Z}$. Proof-sketch: Suppose that $H\in\mathbb{Z}[u]$ and $H(\sqrt{2}x-y)\in\mathbb{Z}[x,y]$. Choose a sequence of distinct points $(x_n,y_n)\in\mathbb{Z}^2$ such that $\sqrt{2}x_n-y_n$ converges. Then the values $H(\sqrt{2}x_n-y_n)$ are integers that converge by continuity, hence are eventually equal to an integer constant $c$. Thus the equation $H(u)=c$ has infinitely many solutions. It follows that $H$ is identically $c$.
$\mathbb{Z}[\sqrt{2}x-y, 2\sqrt{2}xy-z]$ contains the polynomial $2x^2+y^2-z$. (Take the square of the first polynomial plus the second.) Note that the generators are algebraically independent.
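A quick floating-point spot-check of this identity at random points (not a proof, just arithmetic):

```python
import math
import random

# check: (sqrt(2)x - y)^2 + (2*sqrt(2)xy - z) = 2x^2 + y^2 - z
s = math.sqrt(2)
random.seed(1)
for _ in range(100):
    x, y, z = (random.uniform(-5, 5) for _ in range(3))
    lhs = (s * x - y) ** 2 + (2 * s * x * y - z)
    rhs = 2 * x ** 2 + y ** 2 - z
    assert abs(lhs - rhs) < 1e-9
print("identity holds at 100 random points")
```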
$\mathbb{Z}[\sqrt{2}x-y, \sqrt{3}xy-z]\cap \mathbb{Z}[x,y,z]=\mathbb{Z}$. (I know only a rather lengthy and ad hoc proof) |
#machine learning
Probability theory itself is intuitive, but can be confusing when it is overly formalized. For now, this section provides some basic terms and formulas.
The best interactive exploration I know is Seeing Theory.
| Term | Description | Formalization |
| --- | --- | --- |
| Experiment | We toss a fair coin twice | / |
| Outcome | Head or tail | $\{h, t \}$ |
| Sample Space | aka event space / state space | $\Omega = \{tt, ht, th, hh \}$ |
| Event of Interest | Let's say: getting heads exactly once | $E= \{ ht, th\}$ |
| Random Variable | Worst naming ever: it is neither random nor a variable! It's a function (lookup table) that maps the sample space $\Omega$ to $\mathcal{T}$ | $X( (h, h) )= 2$, $X( (t, h) )=1$, $X( (h,t) )= 1$, $X( ( t,t ) )=0$ |
| Target Space | Number of times we get $h$ when flipping the coin | $\mathcal{T} = \{0, 1, 2 \}$ |
| Probability Notation | $P(X=1)$ means the probability of all events where $X(\text{event}) = 1$ | $P(X=1) = P(\{(h,t)\} \cup \{(t,h)\}) = P(h) \cdot P(t) + P(t) \cdot P(h) = 0.5$ |
| Assertion (English) | Formalization |
| --- | --- |
| Nothing is more than 100% probable | $P(\Omega) = 1$ and $\forall S \subseteq \mathcal{T}$, we know $P(X \in S) \in [0, 1]$ |

Code
Often Code is more tangible than symbols. If you made a snippet for your favorite language, I'd be happy to add it here.
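In that spirit, a small Python snippet for the running example: simulate the two-coin experiment and estimate $P(X=1)$ (names are mine):

```python
import random

# toss a fair coin twice; X = number of heads; estimate P(X = 1)
random.seed(0)
n = 100_000
hits = 0
for _ in range(n):
    outcome = (random.choice("ht"), random.choice("ht"))
    X = outcome.count("h")
    if X == 1:
        hits += 1
print(hits / n)   # close to 0.5
```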
[...more to come...] |
By definition, the work done by a force is $W = F\cdot d$, so how can static friction do work?
Can this force move the body a distance of $75~\text{m}$?
I think you are confused about what $d$ is supposed to mean in the equation $W=F\cdot d.$ You seem to be under the impression that $d$ is the distance that the object being acted on moves relative to the object providing the force. But this is not the correct meaning of $d$ in the equation.
Imagine if the crate were in front of the truck, and the truck were pushing the crate. Then I think you would have no problem saying that the truck is doing work on the crate even though there is no change in the relative distance between the truck and the crate.
Now the situation in your question is basically the same as this one except the force acts on the bottom of the crate instead of the side, and the force is due to friction instead of a normal force. But neither of these differences ought to change the amount of work being done.
That being said, you would have a valid point if the problem were asking for the work done in the frame of the car. In that frame, the box does not move (assuming the coefficient of static friction is sufficiently large), so that $d$ really is zero. Thus no work is done in this frame.
Without friction between the crate and the truck bed, the crate would remain at rest in the frame of reference of the road, as the truck accelerates away down the road.
The crate moves in the frame of reference of the road, because of the force of friction acting on it.
So the work done on the crate, in the frame of reference of the road, is the friction force times the distance the crate has traveled in the frame of reference of the road (which could be less than the distance the truck has traveled).
By definition, the work done by a force is $W = F\cdot d$, so how can static friction do work?
Can this force move the body a distance of $75~\text{m}$?
Friction does negative work on the truck, slowing it down, and does not move it forward.
What does positive work on the truck, accelerating it and making it translate $75~\text{m}$, is the engine of the truck.
The 80 kg crate does negative work on the truck, because it opposes the truck, which is trying to push it forward, and this force acts over a distance of $75~\text{m}$.
Knowing that over $75~\text{m}$ it reaches the velocity of $72~\text{km/h}$, you can find the acceleration of the crate/truck. Using the weight of the crate and the coefficients of friction, you can find out the negative work done by the crate on the truck, subtracting energy and slowing it down.
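As a quick check of these numbers (80 kg crate, rest to 72 km/h over 75 m): the work done by friction on the crate equals its kinetic-energy gain.

```python
# 80 kg crate accelerated from rest to 72 km/h over 75 m
m = 80.0            # kg
v = 72 / 3.6        # 72 km/h = 20 m/s
d = 75.0            # m

a = v ** 2 / (2 * d)    # constant acceleration: v^2 = 2*a*d
F = m * a               # friction force on the crate
W = F * d               # work done by friction on the crate
print(a, F, W)          # W equals (1/2)*m*v^2 = 16000 J
```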
Static friction locks the crate to the truck and prevents it from slipping back and off the truck's bed.
yes, this is the work done by the friction force on the truck, not on the crate. I need to understand the work done by the friction force on the crate. – mech.eng
As I said, friction just locks the crate; it is an interface, like the clutch on a motor-car. The amount of positive work $+W$ on the crate equals the amount of negative work $-W$ on the truck. But the work on the crate is actually done by the engine of the truck.
The black arrow to the right shows the truck pulling the crate (thanks to friction) speeding it up to $v=20m/s$ and therefore giving it $W = E =16,000J$. I hope it is clear now.
It doesn't. The work is done by the active force (e.g. a human trying to pull a bull). This work is converted into frictional energy (e.g. heat generated between surfaces).
Static friction does not produce or consume work in most cases. For example, for a solid body that rolls without sliding, the velocity of the contact point $A$ is $\vec v_A = \vec v_{cm} + \vec v_{tangential} \Rightarrow v_A = v_{cm} - \omega R = \omega R - \omega R = 0$, which implies that $x_A = 0$. The static friction is a force that acts on $A$, so $W_T = T x_A = 0$.
But when the object slides, the friction force is constant and equal to $T = \mu N$ and it is always opposite to the velocity of the body. So then $W_T = - Ts$, where $s$ is the total distance traveled by the body. |