Cartesian Right-Hand Rule - CNCexpo.com
Cartesian Coordinate System Right-Hand Rule
The Cartesian Coordinate System herein refers to the standard coordinate system of a CNC machine tool's six primary axes: the A, B, C, X, Y, and Z axes. The Right-Hand Rule is used to determine both the Axis Designation and the Axis Direction.
Axis Designation
This first part uses the Cartesian coordinate system right-hand rule to show the axis designations of the three primary linear axes: the X axis, Y axis, and Z axis. The axis letter names are designated by the relationship shown in the image to the right. The thumb, index finger, and middle finger of the right hand are held so that they are mutually perpendicular, 90 degrees from each other. The thumb represents the X axis, the index finger the Y axis, and the middle finger the Z axis.
Understanding the right-hand rule, along with some machine tool builder guidelines, makes it possible to determine axis designations on a machine that one is not familiar with. The first machine tool builder guideline is that the linear axis that moves parallel to the main spindle's centerline is designated the Z axis.
The second machine tool builder guideline, which pertains to a milling-type machine, is that the longest-travel axis is designated the X axis. Since this article refers to machines with three primary linear axes, the only axis left is the Y axis. The object now is to rotate your hand until your thumb is parallel to the X axis and your middle finger is parallel to the Z axis; your index finger will then be parallel to the Y axis.
Axis Direction
Notice the image of the Cartesian coordinate system right-hand rule again. At the end of each arrow next to the axis letters X, Y, and Z there is a + sign. In the right-hand rule, the direction that each finger points is the positive direction of motion for that axis.
If you look at the image again you will notice that all the arrows representing the axes' positive directions originate at a common zero. That zero represents the known Zero Location from which the Cartesian coordinate system defines other locations at a distance away, in either the positive or negative direction, in 3D space.
Programmers need to know which direction the machine is going to move relative to the Zero Location. They know that by the viewpoint of the machine. The viewpoint is most often from the front of the machine; however, it can be set up to be from the back, or somewhere else, depending upon the type of machine.
Now for the Cartesian coordinate system right-hand rule as it applies to a rotary axis direction. Imagine wrapping your right hand around a linear axis with your thumb pointing toward the positive direction. The direction in which your fingers wrap represents the positive direction for the rotary axis that rotates around that linear axis. The arrow for the A axis in the small image above right shows its positive direction.
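As a quick numerical illustration (added here, not part of the original article), the right-hand relationship between the three linear axes can be checked with a vector cross product. The short Python/NumPy sketch below simply uses the conventional +X, +Y, +Z unit vectors.

import numpy as np

x = np.array([1, 0, 0])   # thumb  -> +X
y = np.array([0, 1, 0])   # index  -> +Y
z = np.array([0, 0, 1])   # middle -> +Z

# In a right-handed system, crossing +X into +Y gives +Z.
print(np.cross(x, y))     # [0 0 1], i.e. +Z
# Reversing the order flips the sign, which is what a left-handed
# convention would give for the same pair of axes.
print(np.cross(y, x))     # [0 0 -1]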
Left-Handed Coordinate Systems
The right-handed coordinate system is the standard, but there are some CNCs that use the left-handed coordinate system. As the name implies, the left hand is used to designate the axis directions instead of the right. The thumb still represents the X axis, and so on. If you're new to this subject, you might be surprised at how easy it is to think that a CNC is using the left-hand system because an axis direction doesn't match the right-hand rule, and to be absolutely wrong! The next section explains how that happens.
Key Point
The coordinate system is viewed from the programmer's perspective. The programmer calculates tool movements relative to a "stationary work surface". Because of that, an axis's + direction can appear to be backwards when the tool is stationary and the work surface moves to machine the part. The key is to always view the coordinate system as if the tool is moving and the work surface is stationary, even if it's not! Then the axis + direction by the right-hand rule should make sense. One consistency for a mill is that the Z axis + direction always points from the tool into the spindle behind it.
Who invented the Cartesian Coordinate System?
René Descartes (March 31, 1596 – February 11, 1650), also known as Renatus Cartesius (Latin), was a highly influential French philosopher, mathematician, scientist, and writer. Much of subsequent Western philosophy is a reaction to his writings. His most famous statement is "Cogito ergo sum" ("I am thinking, therefore I exist."). As the inventor of the Cartesian coordinate system, he formulated the basis of modern analytic geometry, which in turn influenced the development of modern calculus.
I found myself nodding my noggin all the way through.
Joe 03/23/2012
Thanks for noticing Joe
Bruce 03/23/2012
Hi there,
I have to strongly disagree with you on your X, Y and Z positioning. On your right hand rule, your index finger is your X, your middle finger your Y, and your thumb your Z.
On a CNC machine, your longest axis (or closest to the ground in most cases) is always your X axis. Your gantry (what your spindle is mounted to) is your Y; and the movement of your spindle (up and down) is your Z.
So if you stand at the zero point of your CNC (all axes reading machine zero) you would point your index finger down the X, your middle finger would point along your Y, and your thumb would point up to your Z. All of these would run into the positive, UNLESS your machine runs in negative X and negative Y, in which case you would stand at the diagonally opposite corner of your machine, but still using index for X, middle for Y and thumb for Z, and still all positive....
Ryan 07/04/2012
Ryan, Thanks for your comment. The fingers that represent axes in the image are common. From what I get out of what you wrote, you dispute X-Axis as drawn vertical and Z-Axis horizontal. The image
is not meant to designate which axis is vertical or horizontal. The article describes how to orient the hand so that the fingers are pointing in a direction that represents the axes. An image that
has X-Axis horizontal would be more visually representative of a common CNC machine. Z-Axis can be vertical or horizontal.
Admin 11/26/2012
use less code...
Shivaji 01/15/2013
When defining CNC machine axes, the main spindle generally rotates around the Z axis, so I start there. Next, you need to look at the machine type. For a lathe, the axis perpendicular to the part is X. For axis direction, the basic rule is that the direction that makes the part smaller is minus, so Z toward the headstock and X toward centerline are the minus directions. This holds true for roll grinders as well. Mills get more complex. Generally, Ryan is correct: on a mill, after defining Z as the axis in line with the main spindle, the axis with the longest travel becomes X. For a horizontal milling machine, the vertical axis becomes Y; again, moving down (work piece smaller) is minus, and then we follow the right hand rule to determine the X plus/minus direction, based on Z out toward the part being minus and Y down being minus. For gantry mills and planer mills, again, X is usually the longest travel (moving table or moving columns). The Y defaults to the saddle across the rail, and then use the right hand rule with Z pointing up (part larger) to determine plus X and plus Y. Compound (lathe/turn) machines and some grinders are not so simple. For an axis that travels vertically, it is generally perceived that up is plus and down is minus. So, on a lathe that has a milling attachment, following the right hand rule would always place Y plus down. Here, by default, we use the left hand rule so that plus can be up. Of course, nutating heads and 5 axis interpolation throw another wrench into figuring this out. I hope this helped someone a little.
Todd 12/12/2013
|
{"url":"http://www.cncexpo.com/Cartesian.aspx","timestamp":"2014-04-20T03:49:07Z","content_type":null,"content_length":"21406","record_id":"<urn:uuid:9ff034e9-2d17-4c35-abc3-7c1b2fc099f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matches for:
2008; 298 pp; softcover
Number: 321
ISBN-10: 2-85629-258-5
ISBN-13: 978-2-85629-258-7
List Price: US$62
Individual Members: US$55.80
Order Code: AST/321
This volume, the first in a two-volume set, contains original research articles on various aspects of differential geometry, analysis on manifolds, complex geometry, algebraic geometry, number theory
and general relativity.
The articles are based on talks presented at the Conference on Differential Geometry, Mathematical Physics, Mathematics and Society, held in honor of Jean-Pierre Bourguignon on the occasion of his
60th birthday. The conference was held from August 27 to 31, 2007 at the Institut des Hautes Études Scientifiques and at the École Polytechnique.
A publication of the Société Mathématique de France, Marseilles (SMF), distributed by the AMS in the U.S., Canada, and Mexico. Orders from other countries should be sent to the SMF. Members of the
SMF receive a 30% discount from list.
Graduate students and research mathematicians interested in pure mathematics.
• J. Simons and D. Sullivan -- Structured bundles define differential \(K\)-theory
• N. Hitchin -- Einstein metrics and magnetic monopoles
• K. Liu, X. Sun, and S.-T. Yau -- Geometry of moduli spaces
• R. L. Bryant -- Gradient Kähler Ricci solitons
• D. Auroux -- Special Lagrangian fibrations, mirror symmetry and Calabi-Yau double covers
• J. Cheeger and B. Kleiner -- Characterization of the Radon-Nikodym property in terms of inverse limits
• X. Chen and Y. Tang -- Test configuration and geodesic rays
• R. Mazzeo -- Flexibility of singular Einstein metrics
• P. T. Chruściel and J. L. Costa -- On uniqueness of stationary vacuum black holes
• H. Omori, Y. Maeda, N. Miyazaki, and A. Yoshioka -- A new nonformal noncommutative calculus: Associativity and finite part regularization
|
{"url":"http://www.ams.org/bookstore?fn=50&arg1=salenumber&ikey=AST-321","timestamp":"2014-04-19T00:32:13Z","content_type":null,"content_length":"15340","record_id":"<urn:uuid:7be1daa4-3b09-44b2-8fa2-a9ea0d409170>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Are the groups $C( \mathbb{R} ; U(n) )$ isomorphic?
Let $C(\mathbb{R};{U}(n))$ denote the topological group of continuous functions $\mathbb{R}\to {U}(n)$ with pointwise multiplication and compact-open topology. My question is:
Are these groups isomorphic for different values of $n$?
I suspect the answer is no (it feels like it should be obvious), but proving this in the category of topological groups seems difficult. I would like to associate to $C(\mathbb{R}; {U}(n))$ the
enveloping C*-algebra $C_b(\mathbb{R};M_n)$, since these are much easier to distinguish$^1$. So a second question would be:
Does there exist a functor $TopGrp\to C^*Alg$ taking $C(\mathbb{R};{U}(n))$ to $C_b(\mathbb{R};M_n)$?
The same question could be asked of the measure theoretic versions of these groups $\mathcal{M}(\mathbb{R};{U}(n))$. These are called current groups, although the literature seems unhelpful for the
isomorphism problem.
$_{^1\text{ e.g. looking at Murray-von Neumann equivalence classes of projections will distinguish them.}}$
gr.group-theory fa.functional-analysis oa.operator-algebras
2 I would guess that it's easier to prove that they aren't homotopy-equivalent. – Qiaochu Yuan Apr 16 '13 at 17:45
1 Is $\mathcal{U}(n)$ supposed to be the unitary group? (If so, I thought just plain $U(n)$ was much more standard notation.) – Todd Trimble♦ Apr 16 '13 at 17:53
Yes, it denotes the unitary group, I've changed it now. – Ollie Margetts Apr 16 '13 at 18:15
1 Per Qiaochu's comment, aren't these obviously homotopic to $U(n)$? – Will Sawin Apr 16 '13 at 18:24
4 I think that the $C(X,U(n))$'s can be distinguished by observing that the minimal degree of a unitary irreducible representation of $C(X,U(n))$, not of degree 1, must be $n$. – Alain Valette Apr
16 '13 at 18:52
2 Answers
How about this argument? If I remember correctly, the irreducible representations of $U(n)$ are either 1-dimensional or at least $n$-dimensional. Suppose that there was an isomorphism $\phi\colon C(\mathbb{R}; U(m)) \rightarrow C(\mathbb{R}; U(n))$ for $n < m$. We have the embedding $i\colon U(m) \rightarrow C(\mathbb{R}, U(m))$ as the constant functions, and the evaluation $e_t\colon C(\mathbb{R}, U(n)) \rightarrow U(n)$ at $t$ for any $t \in \mathbb{R}$. By the above remark, $e_t \circ \phi \circ i$ has a commutative image and has a kernel containing $SU(m)$. On the other hand, $\prod_{t \in \mathbb{R}} e_t$ is an injective homomorphism. Thus, we get a contradiction.
sorry, I was overlooking Alain's comment which was folded. This is the same argument as his. – Makoto Yamashita Apr 18 '13 at 7:21
Still, this is a neat proof, so I have accepted it. – Ollie Margetts Apr 18 '13 at 21:26
Here is another proof. The elements of $C(\mathbb{R};U(n))$ satisfying $f^2=1$ are functions whose values (under the standard representation of $U(n)$) are self-adjoint unitaries. There are $n+1$ conjugacy classes of such unitaries (each self-adjoint unitary can be represented as a diagonal matrix of 1s and -1s, and counting the 1s gives the conjugacy class). Moreover, the continuous map $tr:U(n)\to\mathbb{C}$ induced by the standard representation takes a different integer value on each conjugacy class, so there cannot be a function $f\in C(\mathbb{R};U(n))$ taking values in two such classes.
This shows that there are $n+1$ conjugacy classes of elements $f\in C(\mathbb{R};U(n))$ satisfying $f^2=1$. Hence these groups are non-isomorphic for different $n$.
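As an added illustration (not part of the answer above), the class count is easy to check numerically: the possible traces of a diagonal matrix of 1s and -1s of size $n$ take exactly $n+1$ values. A small Python sketch:

from itertools import product

n = 3
# Each self-adjoint unitary is conjugate to diag(+/-1, ..., +/-1);
# its trace = (#(+1) entries) - (#(-1) entries) labels the class.
traces = sorted({sum(signs) for signs in product((1, -1), repeat=n)})
print(traces)   # [-3, -1, 1, 3]  -> n + 1 = 4 distinct classes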
|
{"url":"http://mathoverflow.net/questions/127720/are-the-groups-c-mathbbr-un-isomorphic","timestamp":"2014-04-21T05:17:51Z","content_type":null,"content_length":"62891","record_id":"<urn:uuid:c7003d26-21cd-4b48-bee4-6b46466a8346>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 337
, 1994
"... Numerical experiments have shown that two-level Schwarz methods often perform very well even if the overlap between neighboring subregions is quite small. This is true to an even greater extent
for a related algorithm, due to Barry Smith, where a Schwarz algorithm is applied to the reduced linear ..."
Cited by 82 (11 self)
Numerical experiments have shown that two-level Schwarz methods often perform very well even if the overlap between neighboring subregions is quite small. This is true to an even greater extent for a
related algorithm, due to Barry Smith, where a Schwarz algorithm is applied to the reduced linear system of equations that remains after that the variables interior to the subregions have been
eliminated. In this paper, a supporting theory is developed.
- MATH. COMP , 1997
"... The purpose of this paper is to develop and analyze a least-squares approximation to a first order system. The first order system represents a reformulation of a second order elliptic boundary
value problem which may be indefinite and/or nonsymmetric. The approach taken here is novel in that the le ..."
Cited by 64 (12 self)
The purpose of this paper is to develop and analyze a least-squares approximation to a first order system. The first order system represents a reformulation of a second order elliptic boundary value
problem which may be indefinite and/or nonsymmetric. The approach taken here is novel in that the least-squares functional employed involves a discrete inner product which is related to the inner
product in H −1 (Ω) (the Sobolev space of order minus one on Ω). The use of this inner product results in a method of approximation which is optimal with respect to the required regularity as well as
the order of approximation even when applied to problems with low regularity solutions. In addition, the discrete system of equations which needs to be solved in order to compute the resulting
approximation is easily preconditioned, thus providing an efficient method for solving the algebraic equations. The preconditioner for this discrete system only requires the construction of
preconditioners for standard second order problems, a task which is well understood.
- MATH. COMP , 1993
"... The purpose of this paper is to provide new estimates for certain multilevel algorithms. In particular, we are concerned with the simple additive multilevel algorithm given in [10] and the
standard V-cycle algorithm with one smoothing step per grid. We shall prove that these algorithms have a unifo ..."
Cited by 49 (5 self)
The purpose of this paper is to provide new estimates for certain multilevel algorithms. In particular, we are concerned with the simple additive multilevel algorithm given in [10] and the standard
V-cycle algorithm with one smoothing step per grid. We shall prove that these algorithms have a uniform reduction per iteration independent of the mesh sizes and number of levels even on non-convex
domains which do not provide full elliptic regularity. For example, the theory applies to the standard multigrid V-cycle on the L-shaped domain or a domain with a crack and yields a uniform
convergence rate. We also prove uniform convergence rates for the multigrid V-cycle for problems with nonuniformly refined meshes. Finally, we give a new multigrid approach for problems on domains
with curved boundaries and prove a uniform rate of convergence for the corresponding multigrid V-cycle algorithms.
, 2000
"... . In this paper, we present the first a priori error analysis for the Local Discontinuous Galerkin method for a model elliptic problem. For arbitrary meshes with hanging nodes and elements of
various shapes, we show that, for stabilization parameters of order one, the L 2 -norm of the gradient and ..."
Cited by 48 (19 self)
. In this paper, we present the first a priori error analysis for the Local Discontinuous Galerkin method for a model elliptic problem. For arbitrary meshes with hanging nodes and elements of various shapes, we show that, for stabilization parameters of order one, the $L^2$-norm of the gradient and the $L^2$-norm of the potential are of order $k$ and $k + 1/2$, respectively, when polynomials of total degree at least $k$ are used; if stabilization parameters of order $h^{-1}$ are taken, the order of convergence of the potential increases to $k + 1$. The optimality of these theoretical results are tested in a series of numerical experiments on two dimensional domains. Key words. Finite elements, discontinuous Galerkin methods, elliptic problems AMS subject classifications. 65N30 1. Introduction. In this paper, we present the first a priori error analysis of the Local Discontinuous Galerkin (LDG) method for the following classical model elliptic problem: $-\Delta u = f$ in $\Omega$; $u$ ...
- SIAM J. Numer. Anal , 2000
"... Abstract. We consider mixed finite element methods for second order elliptic equations on nonmatching multiblock grids. A mortar finite element space is introduced on the nonmatching interfaces.
We approximate in this mortar space the trace of the solution, and we impose weakly a continuity of flux ..."
Cited by 47 (24 self)
Abstract. We consider mixed finite element methods for second order elliptic equations on nonmatching multiblock grids. A mortar finite element space is introduced on the nonmatching interfaces. We
approximate in this mortar space the trace of the solution, and we impose weakly a continuity of flux condition. A standard mixed finite element method is used within the blocks. Optimal order
convergence is shown for both the solution and its flux. Moreover, at certain discrete points, superconvergence is obtained for the solution and also for the flux in special cases. Computational
results using an efficient parallel domain decomposition algorithm are presented in confirmation of the theory.
- Numer. Math , 1998
"... Wavelet methods allow to combine high order accuracy, multilevel preconditioning techniques and adaptive approximation, in order to solve efficiently elliptic operator equations. One of the main
difficulty in this context is the efficient treatment of non-homogeneous boundary conditions. In this pap ..."
Cited by 44 (6 self)
Wavelet methods allow to combine high order accuracy, multilevel preconditioning techniques and adaptive approximation, in order to solve efficiently elliptic operator equations. One of the main
difficulty in this context is the efficient treatment of non-homogeneous boundary conditions. In this paper, we propose a strategy that allows to append such conditions in the setting of space
refinement (i.e. adaptive) discretizations of second order problems. Our method is based on the use of compatible multiscale decompositions for both the domain $\Omega$ and its boundary $\Gamma$, and on the possibility of characterizing various function spaces from the numerical properties of these decompositions. In particular, this allows the construction of a lifting operator which is stable for a certain range of smoothness classes, and preserves the compression of the solution in the wavelet basis. The analysis is first carried out for the tensor product domain $]0,1[^2$, a strategy is
developed in orde...
- Numer. Math , 1996
"... In this paper, we consider the finite element methods for solving second order elliptic and parabolic interface problems in two-dimensional convex polygonal domains. Nearly the same optimal L 2
-norm and energy-norm error estimates as for regular problems are obtained when the interfaces are of ar ..."
Cited by 41 (8 self)
In this paper, we consider the finite element methods for solving second order elliptic and parabolic interface problems in two-dimensional convex polygonal domains. Nearly the same optimal L 2 -norm
and energy-norm error estimates as for regular problems are obtained when the interfaces are of arbitrary shape but are smooth, though the regularities of the solutions are low on the whole domain.
The assumptions on the finite element triangulation are reasonable and practical. Mathematics Subject Classification (1991): 65N30, 65F10. A running title: Finite element methods for interface
problems. Correspondence to: Dr. Jun Zou Email: zou@math.cuhk.edu.hk Fax: (852) 2603 5154 1 Institute of Mathematics, Academia Sinica, Beijing 100080, P.R. China. Email: zmchen@math03.math.ac.cn. The
work of this author was partially supported by China National Natural Science Foundation. 2 Department of Mathematics, the Chinese University of Hong Kong, Shatin, N.T., Hong Kong. E-mail:
- MATH. COMP , 2002
"... The recently introduced multiscale finite element method for solving elliptic equations with oscillating coefficients is designed to capture the large-scale structure of the solutions without
resolving all the fine-scale structures. Motivated by the numerical simulation of flow transport in highly h ..."
Cited by 37 (9 self)
The recently introduced multiscale finite element method for solving elliptic equations with oscillating coefficients is designed to capture the large-scale structure of the solutions without
resolving all the fine-scale structures. Motivated by the numerical simulation of flow transport in highly heterogeneous porous media, we propose a mixed multiscale finite element method with an
over-sampling technique for solving second order elliptic equations with rapidly oscillating coefficients. The multiscale finite element bases are constructed by locally solving Neumann boundary
value problems. We provide a detailed convergence analysis of the method under the assumption that the oscillating coefficients are locally periodic. While such a simplifying assumption is not
required by our method, it allows us to use homogenization theory to obtain the asymptotic structure of the solutions. Numerical experiments are carried out for flow transport in a porous medium with
a random log-normal relative permeability to demonstrate the efficiency and accuracy of the proposed method.
- SIAM J. Numer. Anal
"... Abstract. We propose a substructuring preconditioner for solving threedimensional elliptic equations with strongly discontinuous coefficients. The new preconditioner can be viewed as a variant
of the classical substructuring preconditioner proposed by Bramble, Pasiack and Schatz (1989), but with muc ..."
Cited by 35 (10 self)
Abstract. We propose a substructuring preconditioner for solving threedimensional elliptic equations with strongly discontinuous coefficients. The new preconditioner can be viewed as a variant of the
classical substructuring preconditioner proposed by Bramble, Pasiack and Schatz (1989), but with much simpler coarse solvers. Though the condition number of the preconditioned system may not have a
good bound, we are able to show that the convergence rate of the PCG method with such substructuring preconditioner is nearly optimal, and also robust with respect to the (possibly large) jumps of
the coefficient in the elliptic equation. 1.
- Fifth International Symposium on Domain Decomposition Methods for Partial Differential Equations , 1992
"... . This paper begins with an introduction to additive and multiplicative Schwarz methods. A two-level method is then reviewed and a new result on its rate of convergence is established for the
case when the overlap is small. Recent results by Xuejun Zhang, on multi-level Schwarz methods, are formulat ..."
Cited by 35 (2 self)
. This paper begins with an introduction to additive and multiplicative Schwarz methods. A two-level method is then reviewed and a new result on its rate of convergence is established for the case
when the overlap is small. Recent results by Xuejun Zhang, on multi-level Schwarz methods, are formulated and discussed. The paper is concluded with a discussion of recent joint results with
Xiao-Chuan Cai on nonsymmetric and indefinite problems. Key Words. domain decomposition, Schwarz methods, finite elements, nonsymmetric and indefinite elliptic problems AMS(MOS) subject
classifications. 65F10, 65N30 1. Introduction. Over the last few years, a general theory has been developed for the study of additive and multiplicative Schwarz methods. Many domain decomposition and
certain multigrid methods can now be successfully analyzed inside this framework. Early work by P.-L. Lions [23], [24] gave an important impetus to this effort. The additive Schwarz methods were then
developed by Dryja and ...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=226174","timestamp":"2014-04-16T11:26:26Z","content_type":null,"content_length":"39458","record_id":"<urn:uuid:17982be7-ec37-4ca5-b449-67eace3899c9>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
|
College Algebra Tutors
Richboro, PA 18954
Oxford University Math Tutor - High School, College, GRE, GMAT
...Dr Peter offers assistance with algebra, pre-calculus, SAT, AP calculus, college calculus 1,2 and 3, GMAT and GRE. He is a retired Vice-President of an international Aerospace company. His
industrial career included technical presentations and workshops, throughout...
Offering 10 subjects including algebra 2
|
{"url":"http://www.wyzant.com/Trenton_NJ_College_Algebra_tutors.aspx","timestamp":"2014-04-16T04:21:39Z","content_type":null,"content_length":"60243","record_id":"<urn:uuid:5b608835-010f-4c5a-941b-b2f6463a487c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can you Combine two transition probability matrices?
Hello Chiro,
Could I ask you a question? You have been very helpful in the past.
I am trying to quantify the difference between two discrete distributions. I have been reading online and there seem to be a few different ways, such as a Kolmogorov-Smirnov test and a chi-squared test.
My first question is which of these is the correct method for comparing the distributions below?
The distributions are discrete distributions with 24 bins.
My second question is that it's pretty obvious looking at the distributions that they will be statistically significantly different, but is there a method to quantify how different they are? I'm not sure, but a percentage or distance perhaps?
I've been told that if you use a two sample Kolmogorov-Smirnov test, a measure of how different the distributions are will be the p-value. Is that correct?
I appreciate your help and comments
Kind Regards
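A minimal sketch of how the two comparisons mentioned above could be run (added here, not from the original thread; the data are placeholders and the SciPy calls ks_2samp and chi2_contingency are just one possible choice):

import numpy as np
from scipy import stats

# Placeholder samples standing in for the two data sets.
sample_a = np.random.normal(0.0, 1.0, size=500)
sample_b = np.random.normal(0.5, 1.0, size=500)

# The two-sample Kolmogorov-Smirnov test works on the raw (unbinned) values.
# The KS statistic itself (max distance between the empirical CDFs, 0..1) is
# a direct measure of how different the distributions are; the p-value only
# says how surprising that distance would be under the null hypothesis.
ks_stat, ks_p = stats.ks_2samp(sample_a, sample_b)

# A chi-squared test of homogeneity works on the binned counts (24 bins here).
edges = np.linspace(-4, 4, 25)
counts_a, _ = np.histogram(sample_a, bins=edges)
counts_b, _ = np.histogram(sample_b, bins=edges)
keep = (counts_a + counts_b) > 0              # drop bins empty in both samples
table = np.vstack([counts_a[keep], counts_b[keep]])
chi2, chi2_p, dof, _ = stats.chi2_contingency(table)

print(ks_stat, ks_p)
print(chi2, chi2_p, dof)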
|
{"url":"http://www.physicsforums.com/showthread.php?p=4271997","timestamp":"2014-04-19T17:37:56Z","content_type":null,"content_length":"53676","record_id":"<urn:uuid:67367391-9822-47a3-a159-92a483327804>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum dots embedded into silicon nanowires effectively partition electron confinement
FIG. 1.
(a) Atomic structure of silicon nanowires, illustrated for and . describes the number of prism layers in the nanowire cross section (1 and 2). gives the number of the lengthwise segments (1, 2, 3,
and 4). The asterisk denotes the formation of silicon dimers on the facets of the nanowires. (b) Quantum dot with the 20-atom core of symmetry (shown in red sticks). is the number of layers of
silicon atoms including the core. (c) Side and top views of the quantum dot attached to the nanowire . The silicon atoms of the quantum dot, nanowire and the shared interface are shown in red (gray
off line), blue (black off line), and green (light gray off line), respectively. Hydrogen atoms are not shown.
FIG. 2.
(a) Scanning electron microscopy image of a branched silicon nanowire (adapted from Ref. 10 and reproduced pending the permission from Nano Lett.). b) Atomic structures of silicon nanowires with
embedded quantum dots (see main text for the notation). The linear sizes of the schematic systems are shown for some actual systems, along with their diameters . Only very few of the computed systems
are illustrated, and the sizes and symmetry of all systems are provided in Table I. Silicon and hydrogen atoms are shown in red and blue, respectively.
FIG. 3.
Band gap energy dependence on the linear size , showing the quantum confinement effect in (a) simple nanowires : black lines with circles, : red lines with triangles, and complex connected nanowires
: green lines with squares; (b) quantum dot–nanowire junctions of : green lines with triangles, : black lines with circles, : red lines with circles, and : red lines with triangles and blue lines
with squares . Numbers show the number of segments in nanowires. Pairs of numbers are given if more than one nanowire is present, giving the two independent numbers of segments. pairs are equivalent
to single and are so shown to elucidate the structure connection.
FIG. 4.
(a) Total and partial DOS of complex nanoclusters. Partial DOS are shown in the same color as the corresponding atomic structures. (b) The detailed total and partial DOS of occupied electronic states
of cluster near the Fermi level region. Peaks , , and correspond to the , , and fragments respectively.
FIG. 5.
Molecular orbital diagram elucidating the quantum confinement. When system is elongated with producing , nearly degenerate occupied and virtual orbitals are shifted by the interaction (e.g., the HOMO
and LUMO orbital energies shifted by and , respectively), reducing the band gap from to .
Table I.
Atomic and electronic structure of complex nanostructures.
|
{"url":"http://scitation.aip.org/content/aip/journal/jap/104/5/10.1063/1.2973464","timestamp":"2014-04-19T07:03:41Z","content_type":null,"content_length":"102832","record_id":"<urn:uuid:63831f9b-388d-4161-b357-a4211f85a97f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How can we know if the system is oscillating and if it is decaying with only a quadratic equation? PS: <1> Do not involve any calculus <2> Haven't learnt damping.
• one year ago
First of all, find the closed loop transfer function of the system.
Suppose it is \[Y= \frac{1}{1-kz^{-1} + bz^{-2}}\] where k is an unknown constant and b is a known constant.
I meant \[\frac{Y}{X}=...\]
Setting the denominator to 0: \[1-kz^{-1}+bz^{-2}=0\]\[z^2 - kz + b =0\]\[z = \frac{k \pm \sqrt{k^2 - 4b}}{2}\]
You'd need to find the poles of the system; sorry, I was dealing in the s-domain.
Poles are at \[\frac{k\pm\sqrt{k^2-4b}}{2}\]
There will be many cases: \[ K=2\sqrt b, K>2\sqrt b\ and\ K<2\sqrt b\] For the first case, \[z= \sqrt b\] If b<1 then the system will decay, if b=1 the system will be constant, and if b>1 then the system will grow.
Let me know if you have doubt anywhere.
Now, I see why I can never get the answer. The reason why I have been stressing that no calculus is involved is because when we learnt this topic, our lecturer didn't teach us using calculus, i.e. solving D.E., using expressions in exponential forms, nor mentioning those fancy terms like damping. Anyway, thanks for trying to help!
so you understood now?
oops, did you ask this doubt to your teacher?
Ha! I don't even have to ask, as in the lecture notes he has written "you will find out the reason if you take EEE".
umm, where was calculus used?
He has NEVER used calculus in this course.
But the solution requires just solving the quadratics. Then, depending on the poles, we classify the system.
p = pole; p = 1 => remains (unchanged); |p| > 1 => diverge; |p| < 1 => converge
Now you need to check which one lies in, out, or on the unit circle. Then you can classify the system.
@Callisto are you here?
Checking the magnitude? I did it. But the problem is how I can identify if the system oscillates. Sorry, I was on another page.
If the poles are on the unit circle, the system will oscillate; if they are inside, it'll decay; if they are outside, oscillations will grow unboundedly.
Hmm... I think the magnitude of the pole only tells us if the system is converging/diverging/remaining unchanged?!
I think I understand how to analyze the system now, thanks :)
welcome :P
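A small illustrative sketch (added here, not from the original thread) of one common way to classify the quadratic z^2 - kz + b discussed above: compute the two poles, check whether they are complex (oscillatory response) and whether their magnitudes lie inside, on, or outside the unit circle (decay, sustained, or growth). The numeric values below are arbitrary examples.

import numpy as np

def classify(k, b):
    poles = np.roots([1.0, -k, b])              # roots of z^2 - k z + b
    mags = np.abs(poles)
    oscillatory = np.any(np.abs(np.imag(poles)) > 1e-12)
    if np.all(mags < 1):
        size = "decaying"
    elif np.all(mags <= 1):
        size = "sustained (pole on the unit circle)"
    else:
        size = "growing"
    kind = "oscillatory" if oscillatory else "non-oscillatory"
    return poles, kind + ", " + size

print(classify(k=0.4, b=0.5))   # complex poles, |z| ~ 0.71 -> decaying oscillation
print(classify(k=3.0, b=1.0))   # real pole at ~2.62        -> non-oscillatory growth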
|
{"url":"http://openstudy.com/updates/5166d829e4b066fca6619859","timestamp":"2014-04-18T16:52:17Z","content_type":null,"content_length":"93795","record_id":"<urn:uuid:bddbc844-dfa9-49e7-b795-ed68d8ce910f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Please help me simplify
someone used Photobucket and now we don't have the pic anymore
Last edited by MathGuru; October 1st 2006 at 06:43 PM.
just remember that $x^{-1}$ means $\frac{1}{x}$ so that is how the $b^{-3}$ gets down to the bottom of the fraction
CONFUSED_ONE $\left( \frac{ab^{-1}}{2} \right)^3$ Open parentheses (remember to raise each one to that power). Thus, $\frac{a^3b^{-3}}{8}$. Remember the rule of negative exponents, $\frac{a^3}{8} \frac{1}{b^3}=\frac{a^3}{8b^3}$ Q.E.D.
Last edited by MathGuru; October 1st 2006 at 06:44 PM.
oh. so you bring down the b.. ohh okay i get it. thanks for the help you two!
CONFUSED_ONE Hello, as you've seen ThePerfectHacker's answer was $\frac{a^3}{8b^3}$ and that answer you can condense to: $\frac{a^3}{8b^3}= \left( \frac{a}{2b} \right)^3$ Bye
|
{"url":"http://mathhelpforum.com/math-topics/1705-please-help-me-simplify.html","timestamp":"2014-04-19T01:09:36Z","content_type":null,"content_length":"46431","record_id":"<urn:uuid:5059c1c1-2bd9-4161-a354-3e8a8f558fb0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Definition of markov process with ODE
I am trying to solve a problem from Van kampens book, page 73.
An ODE is given: dx/dt = f(x). Write the solution with initial values x0 and t0 in the form x = phi(x0, t - t0). Show that x obeys the definition of a Markov process with:
p1|1(x,t|x0,t0) = delta[x - phi(x0, t - t0)].
By delta, I mean the delta function. p1|1 is the transition probability.
Can anybody help me with how to start?
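One possible starting point (a sketch added here, not from the original thread): because the ODE is autonomous, its solution has the flow property \(\varphi(\varphi(x_0,s),u)=\varphi(x_0,s+u)\). Plugging the proposed transition probability into the Chapman-Kolmogorov equation and using this property gives, for \(t_0 \le t_1 \le t\),
\[
\int p_{1|1}(x,t\mid x_1,t_1)\,p_{1|1}(x_1,t_1\mid x_0,t_0)\,dx_1
= \int \delta\bigl[x-\varphi(x_1,t-t_1)\bigr]\,\delta\bigl[x_1-\varphi(x_0,t_1-t_0)\bigr]\,dx_1
\]
\[
= \delta\bigl[x-\varphi(\varphi(x_0,t_1-t_0),\,t-t_1)\bigr]
= \delta\bigl[x-\varphi(x_0,t-t_0)\bigr]
= p_{1|1}(x,t\mid x_0,t_0),
\]
which is the consistency condition a Markov transition probability must satisfy.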
|
{"url":"http://www.physicsforums.com/showpost.php?p=1285131&postcount=1","timestamp":"2014-04-21T12:18:34Z","content_type":null,"content_length":"8784","record_id":"<urn:uuid:ee543623-9c93-4f3b-a1d2-8b3adefeea92>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: RE: Regression across variables
st: RE: Regression across variables
From "Wallace, John" <John_Wallace@affymetrix.com>
To "'statalist@hsphsun2.harvard.edu'" <statalist@hsphsun2.harvard.edu>
Subject st: RE: Regression across variables
Date Tue, 11 Nov 2003 15:20:16 -0800
Thanks for your reply, Nick
I was trying to keep my examples general in the belief that it would be more broadly useful for others, but for clarity's sake, here's a more explicit example:
Some of the developmental arrays made by my company have probes
complementary (in the DNA sense) to control reagents at specific
concentrations in the sample fluid. One way to measure the quality of the
arrays is to perform a regression of signal for those probes against the
known concentration of the control reagents in the sample. I've found that
the slope and r-squared of the least-squares linear regression correlates
nicely with other measures of array quality, but computing the fit isn't
trivial. At the moment I export the probe intensities from the analysis
software into excel, line them up against the concentrations for the control
reagents, and use Excel's Slope(y,x) and Rsq(y,x) functions to get the
parameters I'm looking for.
I would prefer to do that in Stata, for all the reasons we love Stata. The
data looks like:
array_id a~a_x_at a~b_x_at a~c_x_at a~d_x_at a~e_x_at
1. 930877 12.4 22.7 51.5 108 293.5
2. 930878 7.6 13 53.1 99 244.2
3. 930898 17.7 37 90.4 198 436.6
4. 930879 11.5 18.2 55.7 114 277.8
5. 930884 11.3 24.1 56.6 126.7 301.3
6. 930885 13.3 19.8 57 139 270.1
the variable names are truncated from affxr2taga_x_at, affxr2tagb_x_at, etc
The Controls are at the following concentrations
TagA: 0.25 E-12M (i.e. 250 femtomolar)
TagB 0.5 E-12M
TagC 1.0 E-12M
TagD 2.0 E-12M
TagE 4.0 E-12M
So, in Excel I would have cells like
A B C D E
R1 0.25 0.5 1.0 2.0 4.0
R2 12.4 22.7 51.5 108 293.5
And in column F I would use =SLOPE(A2:E2,A1:E1) to get the slope of the linear regression and =RSQ(A2:E2,A1:E1) to get the coefficient of determination.
In Stata terms, each observation would get a value in new variables "slope" and "fit". I've seen some egen commands like rmean() or rsd() that work at the observation level like that, calculating values in new variables from a function performed "across" variables for each observation.
One approach I thought about was using -xpose- to switch observations with
variables, then generating a new variable "conc" and doing a plain ol'
regression of array_id vs conc. That's less attractive though, because
xpose mangles your dataset (even using the ,varnames option, you can't get
the original variable names back by running -xpose- again)
It seems to me, from reading your earlier replies that you think I'd like
to, for example, calculate how much the 6 measures of a~a_x_at correlate
with a constant of 0.25. That's not the case; I'm interested in how the
slope of (a-e vs pM) varies from array to array.
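For reference, and independent of which package does the arithmetic, the two row-wise quantities being requested here are the ordinary least-squares slope and the squared correlation, computed per array over the concentration/signal pairs $(x_i, y_i)$:
\[
\text{slope} = \frac{\sum_i (x_i-\bar x)(y_i-\bar y)}{\sum_i (x_i-\bar x)^2},
\qquad
R^2 = \frac{\Bigl[\sum_i (x_i-\bar x)(y_i-\bar y)\Bigr]^2}{\sum_i (x_i-\bar x)^2 \,\sum_i (y_i-\bar y)^2}.
\]
These are the values Excel's SLOPE() and RSQ() return for a simple linear fit.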
-----Original Message-----
From: Nick Cox [mailto:n.j.cox@durham.ac.uk]
Sent: Tuesday, November 11, 2003 11:33 AM
To: statalist@hsphsun2.harvard.edu
Subject: st: RE: RE: RE: Regression across variables
Don't be misled; I am not a statistician myself
and indeed have no formal training in it worth
that name.
However, whatever is posted on Statalist is open
to challenge by anyone who can expose error and/or
put forward a better solution, irrespective of
As I understand it, your molarity values are not
variables at all, but constants which
act as gold standards or targets for your variables.
Whether it makes sense to combine the analyses is difficult
to say without understanding the experimental set-up.
There is much advantage in a unified analysis, especially
if in some sense the errors behave similarly across
molarities, but deciding that might be helped by an
initial exploratory analysis, such as
. dotplot A B C D
Things might look simpler on a log scale.
Wallace, John
> Thanks Nick - any implication of non-orthodoxy is purely my
> ignorance in
> these matters. My formal stat background is pretty weak.
> What I was trying
> to show is that there is in effect a variable orthogonal to
> the matrix of
> observations (the Molarity value) that I would like to
> regress the row of
> values for each observation against the row of Molarity
> values (rather than
> the column of A values against the column of B values, for example).
> The question would be how to introduce the molarity values
> into the dataset
> (each variable corresponds to a concentration level that is
> being tested)
> and how to tell stata to use it in the regression.
> If the answer is the same, I'll just have to plug away and
> see if I can
> figure out how my mental picture fits into what you said.
> I appreciate the help!
Nick Cox
> As I understand it, this is more orthodox
> than you imply, and you could think
> of the analysis as a series of regressions, except that
> you have no covariates, at least that you're
> showing us. That's not fatal, however.
> . regress A
> says in effect estimate the mean of A,
> and much of the output you get is based
> on the assumption that A follows, or
> should follow, a normal (Gaussian, central)
> distribution.
> Following that with
> . test _cons = 0.5
> is, perhaps, a long-winded way of going
> . ttest A = 0.5
> except that if you do have covariates,
> the -regress- framework is the one on
> which you can build. Ronan Conroy's
> paper in SJ 2(3) 2002 is a very nice
> example of this principle.
> Having said that, the assumption of normality
> is important. It wouldn't surprise me if the
> distributions were skewed and (say) gamma-like,
> so that -glm- is then a better framework.
Wallace, John
> >
> > Hi Statalisters. I'm trying to get Stata to perform a
> > regression in a data
> > structure different from the usual yvar xvar arrangement.
> > I'll diagram the
> > data set to show what I mean:
> >
> > Molarity 0.5 1 2 3
> >
> > Variable A B C D
> > Observ1 .22 .45 .99 1.4
> > Observ2 .23 .5 .98 1.5
> > Observ3 .19 .38 1.1 1.42
> >
> > Molarity in this case would be the constant associated with
> > each variable.
> > The observations are measurements of the system attempting
> > to quantify the
> > molarity. The idea would be to generate additional
> > variables that contain
> > the various regression results of the observations vs Molarity.
> >
> > My data set at this point is just variable name against
> > observation number.
> > I don't know how to associate each variable with the
> > corresponding molarity,
> > or how to tell Stata to perform a regression in this way.
> > Do I have to
> > -reshape- or is there another way?
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2003-11/msg00295.html","timestamp":"2014-04-19T22:38:43Z","content_type":null,"content_length":"12479","record_id":"<urn:uuid:ba6bb874-b97d-4629-a6b8-3fe842fc168a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
|
verifying identities attempt 3...
October 23rd 2009, 11:41 PM #1
verifying identities attempt 3...
Here is the problem:
$\frac{\tan x-\cot x}{\tan x+\cot x}+1 = 2\sin^{2}x$
I convert it into terms of cos and sin and try to simplify the left side.
$\frac{\sin x}{\cos x}-\frac{\cos x}{\sin x}$
Then, if I took it a step further and made them get a denominator of cos sin they'd end up on top with
$\sin^{2}x-\cos^{2}x$ and on the bottom $\sin^{2}x+\cos^{2}x$. I could make the bottom (using the Pythagorean identity):
$\frac{1}{\cos x \sin x}$
And that's how far I've gotten before everything fails.
October 24th 2009, 12:10 AM #2
So far it sounds like you've ended up with
$\frac{\sin^2{x} - \cos^2{x}}{\sin^2{x} + \cos^2{x}} + 1$
$= \frac{\sin^2{x} - (1 - \sin^2{x})}{1} + 1$
$= \sin^2{x} - 1 + \sin^2{x} + 1$
$= 2\sin^2{x}$.
|
{"url":"http://mathhelpforum.com/pre-calculus/110056-verifying-identities-attempt-3-a.html","timestamp":"2014-04-17T16:06:25Z","content_type":null,"content_length":"37387","record_id":"<urn:uuid:4da08c92-9327-4160-8a5c-17031e88f3ef>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Why doesn't this work ?
Re: Why doesn't this work ?
• To: mathgroup@smc.vnet.net
• Subject: [mg10411] Re: Why doesn't this work ?
• From: Julian Stoev <stoev@SPAM-RE-MO-VER-usa.net>
• Date: Tue, 13 Jan 1998 02:07:15 -0500
• Organization: Seoul National University, Republic of Korea
• References: <69cole$ep5@smc.vnet.net>
On 12 Jan 1998, Andreas Keese wrote:
|But why doesn't it work with more than one functions ? |
| Clear[S2];
| S2={Sin[x],Cos[x]};
| Plot[S2, {x, 0, Pi}]
Hi, Andreas!
This works:
Plot[Evaluate[S2], {x, 0, Pi}]
|gives an error - Mathematica complains that S2 is not a machine-size
|BTW: I don't want S2 to be pattern cause I want to use this as follows:
| Clear[S,dx,S3,x];
| S[x_]:={Sin[x/dx], Cos[x/dx], Sin[2*x/dx]}; | S3=Evaluate[S[x] /. dx
-> 2];
| Plot[S3, {x, 0, Pi}];
This may be:
S[x_]:={Sin[x/dx], Cos[x/dx], Sin[2*x/dx]}
S3=S[x]/. dx -> 2
Plot[Evaluate[S3], {x, 0, Pi}]
Your problem is in Evaluate[]. Look in the help; the example given there will explain why you have this problem.
Hope this helps.
Julian Stoev <j.h.stoev@ieee.org> - Ph. D. Student Intelligent
Information Processing Lab. - Seoul National University, Korea
|
{"url":"http://forums.wolfram.com/mathgroup/archive/1998/Jan/msg00178.html","timestamp":"2014-04-20T23:41:13Z","content_type":null,"content_length":"35392","record_id":"<urn:uuid:a814f530-edc6-4a9f-9267-820f556df34f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The CONVERT_COORD function transforms one or more sets of coordinates to and from the coordinate systems supported by IDL.
The input coordinates X and, optionally, Y and/or Z can be given in data, device, or normalized form by using the DATA, DEVICE, or NORMAL keywords. The default input coordinate system is DATA. The
keywords TO_DATA, TO_DEVICE, and TO_NORMAL specify the output coordinate system.
If the input points are in 3D data coordinates, be sure to set the T3D keyword.
Note: CONVERT_COORD utilizes values currently stored in the !X, !Y, !Z and !P system variables to compute coordinate conversion factors.
Note: For devices that support windows, CONVERT_COORD can only provide valid results if a window is open and current. Also, CONVERT_COORD only applies to Direct Graphics devices.
Convert, using the currently established viewing transformation, 11 points along the parametric line x = t, y = 2t, z = t^2, along the interval [0, 1] from data coordinates to device coordinates:
; Establish a valid transformation matrix:
SURFACE, DIST(20), /SAVE
; Make a vector of X values:
X = FINDGEN(11)/10.
; Convert the coordinates. D will be a (3,11) element array:
D = CONVERT_COORD(X, 2*X, X^2, /T3D, /TO_DEVICE)
To convert the endpoints of a line from data coordinates (0, 1) to (5, 7) to device coordinates, use the following statement:
D = CONVERT_COORD([0, 5], [1, 7], /DATA, /TO_DEVICE)
On completion, the variable D is a (3, 2) vector, containing the x, y, and z coordinates of the two endpoints.
For more examples, see Additional Examples near the bottom of this topic.
Result = CONVERT_COORD( X [, Y [, Z]] [, /DATA | , /DEVICE | , /NORMAL] [, /DOUBLE][, /T3D] [, /TO_DATA | , /TO_DEVICE | , /TO_NORMAL] )
Return Value
The result of the function is a (3, n) vector containing the (x, y, z) components of the n output coordinates.
Arguments
X
A vector or scalar argument providing the X components of the input coordinates. If only one argument is specified, X must be an array of either two or three vectors (i.e., (2,*) or (3,*)). In this special case, X[0,*] are taken as the X values, X[1,*] are taken as the Y values, and, if present, X[2,*] are taken as the Z values.
Y
An optional argument providing the Y input coordinate(s).
Z
An optional argument providing the Z input coordinate(s).
Keywords
DATA
Set this keyword if the input coordinates are in data space (the default).
DEVICE
Set this keyword if the input coordinates are in device space.
DOUBLE
Set this keyword to indicate that the returned coordinates should be double-precision. If this keyword is not set, the default is to return single-precision coordinates (unless double-precision arguments are input, in which case the returned coordinates will be double-precision).
NORMAL
Set this keyword if the input arguments are specified in normalized [0, 1] coordinates relative to the entire window.
T3D
Set this keyword if the 3D transformation !P.T is to be applied.
TO_DATA
Set this keyword if the output coordinates are to be in data space.
TO_DEVICE
Set this keyword if the output coordinates are to be in device space.
TO_NORMAL
Set this keyword to convert the result to normalized [0, 1] coordinates relative to the entire window.
Additional Examples
Three-Dimensional Direct Graphic Coordinate Conversion
The CONVERT_COORD function performs the three-dimensional coordinate conversion process (described in Three-Dimensional Coordinate Conversion) when converting to and from coordinate systems when the
T3D keyword is specified. For example, if a three-dimensional coordinate system is established, then the device coordinates of the data point (0, 1, 2) can be computed as follows:
D = CONVERT_COORD(0, 1, 2, /TO_DEVICE, /T3D, /DATA)
On completion, the three-element vector D will contain the desired device coordinates. The process of converting from three-dimensional to two-dimensional coordinates also can be written as an IDL
function. This function accepts a three-dimensional data coordinate, returns a two-element vector containing the coordinate transformed to two-dimensional normalized coordinates using the current
transformation matrix:
FUNCTION CVT_TO_2D, X, Y, Z
; Make a homogeneous vector of normalized 3D coordinates:
P = [!X.S[0] + !X.S[1] * X, !Y.S[0] + !Y.S[1] * Y, $
!Z.S[0] + !Z.S[1] * Z, 1]
; Transform by !P.T:
P = P # !P.T
; Return the scaled result as a two-element,
; two-dimensional, xy vector:
RETURN, [P[0] / P[3], P[1] / P[3]]
Version History
See Also
|
{"url":"http://exelisvis.com/docs/CONVERT_COORD.html","timestamp":"2014-04-20T18:23:31Z","content_type":null,"content_length":"63437","record_id":"<urn:uuid:2c315d41-7ac1-430a-a331-1a004c8e03b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Number Theory/Elementary Divisibility
From Wikibooks, open books for an open world
Elementary Properties of Divisibility[edit]
Divisibility is a key concept in number theory. We say that an integer a is divisible by a nonzero integer b if there exists an integer c such that a=bc.
For example, the integer 123456 is divisible by 643 since there exists a nonzero integer, namely 192, such that $123456=643\cdot192\,$.
We denote divisibility using a vertical bar: $a|b$ means "a divides b". For example, we can write $643|123456 \,$.
The following theorems illustrate a number of important properties of divisibility.
Prime numbers[edit]
A natural number p is called a prime number if it has exactly two distinct natural number divisors, itself and 1. The first eleven such numbers are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, and 31. There
are an infinite number of primes, however, as will be proven below. Note that the number 1 is generally not considered a prime number even though it has no divisors other than itself. The reason for
this will be discussed later.
Theorem 1[edit]
Suppose $a,b,d,r,\,$ and $s\,$ are integers and $d|a,d|b \,$. Then $d|(ra+sb) \,$.
There exists $e \,$ and $f \,$ such that $a=de \,$ and $b=df \,$. Thus
$ra+sb=rde+sdf =d (re+sf) .\,$
We know that $re + sf \,$ is also an integer, hence $d|(ra+sb) \,$.
Corollary[edit]
Suppose $a,b,d\,$ and $r \,$ are integers and $d|a,d|b\,$. Then $d|(a+b), d|(a-b), \,$ and $d|ra \,$.
Proof: Letting $r=1$ and $s=1$ in Theorem 1 yields $d | (a+b) \,$. Similarly, letting $r=1$ and $s=-1$ yields $d | (a-b) \,$. Finally, setting s=0, yields $d|ra \,$.
Theorem 2[edit]
If $a, b, c \,$ are integers and $a|b, b|c \,$ then $a|c \,$.
Let us write b as $b=ad \,$ and c as $c=be \,$ for some integers $d\,$ and $e\,$.
It follows that
$c=ade=a\left(de\right) \,$, and hence $a|c \,$.
Theorem 3[edit]
If $a,b,c \,$ are integers and $c \neq 0$, then $a|b \,$ if and only if $ac|bc \,$
$a|b \,$ implies that there exists an integer d such that
$b=ad \,$
So it follows that
$bc=\left(ac\right)d$ and hence $ac|bc \,$.
For the reverse direction, we note that $ac|bc \,$ implies there exists an integer $d \,$ such that
$bc=\left(ac\right)d \,$.
We know that c is non-zero, so we may cancel it from both sides, hence
$b=ad \,$.
This proves the theorem.
Theorem 4
Fundamental Theorem of Arithmetic(FTA)
Every positive integer n is a product of prime numbers. Moreover, these products are unique up to the order of the factors.
We prove this theorem by contradiction.
Let N be the smallest positive integer that is not a product of prime numbers. Since N cannot itself be prime, it has to be composite, so it can be written as N = ab with a, b > 1. It follows that
$1<a, b<N$.
We conclude that the theorem is true for a and b because N was the smallest counterexample. Hence there are primes $p_1,p_2,\dots ,p_k$ such that
$a=p_1p_2\cdots p_k$
and primes $q_1,q_2,\dots,q_l$ such that
$b=q_1q_2\cdots q_l$.
But then $N = ab = p_1p_2\cdots p_kq_1q_2\cdots q_l$ is a product of primes,
which is a contradiction.
Alternative Proof:
This is an inductive proof.
The statement is true for $N = 2$.
Suppose the statement is true for all $k \le N$.
$N+1$ is either composite or prime. If $N+1$ is prime, then the statement is true for $k = N+1$.
If $N+1$ is composite, then $N+1$ is divisible by some prime $p < N+1$, so $N+1$ can be written as a product of $p$ and some number smaller than $N+1$, which by the induction hypothesis is itself a product of primes.
Hence $N+1$ can be written as a product of primes.
It follows that the statement is true for all $k \le N+1$ and hence, by induction, for all natural numbers.
Theorem 5
There are infinitely many primes.
Suppose that there are only $k \,$ primes.
Let these primes be: $p_1,p_2,...,p_k \,$.
Let $n=p_1p_2\cdots p_k+1. \,$ By Theorem 4, $n$ is a product of primes, so it is divisible by some prime. However, for each $i$, $\frac{n}{p_i} = \frac{p_1p_2\cdots p_k+1}{p_i} = p_1p_2\cdots p_{i-1}p_{i+1}\cdots p_k + \frac{1}{p_i}$, which is clearly not an integer, so $n$ is not divisible by any of $p_1,p_2,\dots,p_k$.
Therefore $n$ is divisible by some prime not in our list (possibly $n$ itself), which contradicts the assumption that $p_1,p_2,\dots,p_k$ are the only primes.
We conclude that the assumption that there are only $k \,$ primes is false; that is, there are infinitely many primes.
Theorem 6
Division with smallest nonnegative remainder
Let a and b be integers where $b>0$. Then there exist uniquely determined integers q and r such that
$a = bq + r$
and $0\leq r < b$.
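For example, with $a = 17$ and $b = 5$ we obtain $17 = 5\cdot 3 + 2$, so $q = 3$ and $r = 2$.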
We define the set
$M = \{x\in \Z \mid x \leq \frac{a}{b} \}$
which is nonempty and bounded from above. Hence it has a maximal element which we denote by q.
We set $r = a - bq$. Then $r = a - bq \geq a - b \frac{a}{b} = a-a = 0$, since $q \leq \frac{a}{b}$ and $b>0$. Moreover $r<b$, because otherwise
$b \leq r = a-bq$.
This implies
$q+1 \leq \frac{a}{b},$
which contradicts the maximality of q in M.
We now prove the uniqueness of q and r:
Let $q'$ and $r'$ be two integers which satisfy $a=bq'+r'$ and $0\leq r'<b$. Then
$| b (q-q') | = | r' - r | < b$
and thus $|q-q'| < 1$, which implies $q=q'$. This also shows $r=r'$, and we are done.
Great Neck Estates, NY Prealgebra Tutor
Find a Great Neck Estates, NY Prealgebra Tutor
I am an Information Technology (IT) Professional with over 20 years of technical, hands-on expertise in Full Project Lifecycle Applications Development, Website Design and Development, Individual
User and Classroom Training, as well as both Technical and NonTechnical Writing. I am proficient and ce...
12 Subjects: including prealgebra, algebra 1, Microsoft Excel, general computer
...I believe that learning mathematics is about understanding important concepts, not memorizing a bunch of formulas, and that obtaining the right answer is less important than trying to
understand a problem and devising a method to solve it, even if that method isn't what the teacher is looking for...
16 Subjects: including prealgebra, calculus, geometry, statistics
...I have 3 children with the youngest in High School. I live in Northern New Jersey and I teach Financial Literacy to High School Freshmen as part of my day job. "Highly Recommended!" - Hilary from Montclair, NJ: Is terrific at making math interesting for my 16 year old daughter, who is math challenged. Use Jamie if you want an excellent tutor.
16 Subjects: including prealgebra, geometry, finance, algebra 1
...The students continue the study of statistics including probability, distributions, and linear regression. The course integrates geometry, algebra, statistics, discrete mathematics, algebraic
and transcendental functions, and problem solving with the use of graphing calculators. The Trigonometr...
9 Subjects: including prealgebra, chemistry, calculus, algebra 1
...Most importantly, every student has succeeded qualitatively rather than just quantitatively, gaining confidence, resourcefulness, and strength of character. My tutoring philosophy begins and
ends with the student's sense of self, which I endeavor to illuminate and enrich. Test Preparation: SAT,...
36 Subjects: including prealgebra, reading, Spanish, English
Comb-Line Filter with Coupling Capacitor in Ground Plane
Active and Passive Electronic Components
Volume 2011 (2011), Article ID 919240, 6 pages
Research Article
Comb-Line Filter with Coupling Capacitor in Ground Plane
Faculty of Engineering Science, Kansai University, Suita, Osaka 564-8680, Japan
Received 18 January 2011; Accepted 1 March 2011
Academic Editor: Tzyy-Sheng Horng
Copyright © 2011 Toshiaki Kitamura. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
A comb-line filter with a coupling capacitor in the ground plane is proposed. The filter consists of two quarter-wavelength microstrip resonators. A coupling capacitor is inserted into the ground
plane in order to build strong coupling locally along the resonators. The filtering characteristics are investigated through numerical simulations as well as experiments. Filtering characteristics
that have attenuation poles at both sides of the passband are obtained. The input susceptances of even and odd modes and coupling coefficients are discussed. The filters using stepped impedance
resonators (SIRs) are also discussed, and the effects of the coupling capacitor for an SIR structure are shown.
1. Introduction
Miniaturization of microwave filters is highly demanded. For mobile telephones especially, ceramic laminated filters [1–4] have been widely used, and in particular, comb-line filters have been put to extensive practical use.
In this study, comb-line filters in which both sides of the substrate are utilized are considered. Comb-line filters consist of two quarter-wavelength resonators, and attenuation poles can be created
in the frequency characteristics of the transmission parameter by changing the coupling locally along the resonators [5]. The stopband characteristics can be improved by arranging the attenuation
poles around the passband. In [1, 2], strong coupling between two resonators is obtained by installing a patch conductor on the dielectric substrate above the resonators. The patch conductor is
referred to as a coupling capacitor (). The method of installing a coupling capacitor by inserting slots into the ground plane is discussed. By this method, a coupling capacitor can be achieved
without using a multilayered structure. As a method of inserting slots into a ground plane, a defected ground structure (DGS) has been attracting much attention [6–8]. In a broad sense, the proposed
structure is a kind of DGS. The filtering characteristics are investigated through numerical simulations as well as experiments. The filters using stepped impedance resonators (SIRs) are also
discussed, and the effects of the coupling capacitor for an SIR structure are shown.
2. Filter Structure
Figure 1 shows an overview of the proposed comb-line filter. Two microstrip resonators are arranged on the substrate, and each resonator is terminated through the ground plane at one end using a
through hole. As an I/O port, a microstrip line with a characteristic impedance of 50 Ω is directly connected to each resonator. A square-shaped coupling capacitor is fabricated by inserting slots
into the ground plane. The coupling capacitor is a patch conductor that is not terminated through the ground plane and produces strong coupling locally along the resonators. The thickness and
relative permittivity of the substrate are assumed to be 1.27mm and 10.2, respectively, and the diameter of the through hole is 0.3mm.
The metallization patterns and dimensions of the filter are shown in Figure 2. The dimensions and position of the coupling capacitor are also shown. The coupling capacitor is mm^2 and is located mm
from the open ends of the microstrip resonators. The filtering characteristics are investigated with and as parameters. The other structural parameters are also shown in Figure 2.
3. Results and Discussion
The filtering characteristics are investigated through numerical simulations by the full-wave EM simulator Ansoft HFSS Ver. 11. The frequency characteristics of the scattering parameters when mm and
mm are shown in Figure 3. For comparison, the results when there is no coupling capacitor () are also shown. It can be seen that an attenuation pole was created at each side of the passband by
installing the coupling capacitor. The center frequency of the passband was also increased slightly by inserting slots into the ground plane. The slots also cause radiation loss, and it is understood
from Figure 3 that at 2.1GHz when a coupling capacitor is used. Figure 4 illustrates the equivalent circuit of the comb-line filter. Attenuation poles appear at the frequencies where the input
susceptances of the even and odd modes are equal to each other.
The frequency characteristics of the input susceptances of the even and odd modes and are shown in Figures 5(a) and 5(b), respectively. The structural parameters were the same as those in Figure 3.
The input susceptances of the even and odd modes were calculated by setting the magnetic and electric wall, respectively, on the symmetric plane shown in Figure 1. As shown in these figures, the
odd-mode susceptances hardly changed, whether or not there was a coupling capacitor. The coupling capacitor does not have much effect on the electromagnetic fields of the odd mode. On the other hand,
the even-mode susceptances were decreased by inserting the coupling capacitor, and they intersected with the odd-mode ones at 1.62 and 2.81GHz, as shown in Figures 5(a) and 5(b), respectively. The
frequencies of the intersections correspond with the attenuation-pole frequencies in Figure 3.
The frequency characteristics of the scattering parameters with as a parameter when mm are shown in Figures 6(a) and 6(b). Here, the parameter corresponds to the position of the coupling capacitor.
As shown in Figure 6(a), when is small (from 0 to 1.0mm), two attenuation poles appear at both sides of the passband, and the parameter mainly affects the attenuation-pole frequency above the
passband. On the other hand, as shown in Figure 6(b), no attenuation pole appears when is large (from 4.0 to 6.0mm). The resonant frequency of the odd mode is about 2.0GHz and changes very little
when changing . However, the resonant frequency of the even mode decreases as increases. It was confirmed that the even-mode resonant frequency becomes lower than the odd-mode one when exceeds about
The frequency characteristics of the input susceptances of an even mode are shown in Figure 7. Here, the frequency range is chosen so as to be close to the attenuation-pole frequency above the
passband. The structural parameters were the same as in Figure 6(a). As shown in this figure, the input susceptances of an even mode change almost in parallel to each other with changing . According
to this, the attenuation-pole frequencies above the passband change as shown in Figure 6(a).
However, from Figure 6, the parameter also has a large effect on the bandwidth of the passband. The coupling coefficients as a function of are shown in Figure 8. The coupling coefficient is
calculated using the following equation. Here, and are the resonant frequencies of the even and odd modes, respectively, and they are determined from the zero-crossing points of the input susceptance
curves of the even and odd modes. As can be seen, the coupling coefficients decrease steadily as increases.
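For orientation, the relation commonly used for this purpose in the coupled-resonator filter literature is $$k=\frac{f_e^{2}-f_o^{2}}{f_e^{2}+f_o^{2}},$$ where $f_e$ and $f_o$ are the even- and odd-mode resonant frequencies; this standard form is quoted here only as a reference point and is not necessarily the exact equation used in the paper.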
Figure 9 shows the frequency characteristics of the scattering parameters with as a parameter when mm. Here, the parameter is the length of the patch conductor that makes up the coupling capacitor.
As shown in this figure, the attenuation-pole frequencies decrease, and, in contrast, the passband frequency shifts slightly higher as increases. The coupling coefficients as a function of are shown
in Figure 10. It can be seen that the coupling coefficients increase almost linearly as increases.
Next, an SIR filter shown in Figure 11 is studied. Filters can be miniaturized by using an SIR structure. The frequency characteristics of scattering parameters are shown in Figure 12. The solid line
shows the results of a normal SIR filter (without ) when mm and mm. It is shown that the passband frequency becomes lower compared with that in Figure 3, meaning that it is possible to miniaturize
the filter. In addition, two attenuation poles can be created at both sides of the passband by choosing appropriate values for parameters and . However, the values of and are quite small. The dotted line
shows the results of an SIR filter with when mm and mm. It is understood that attenuation poles can be created near the passband by choosing and that are easy to fabricate, and therefore, the
structural limitation can be relaxed by using a coupling capacitor [2]. However, a drawback is that the passband frequency becomes higher compared with a normal SIR filter. This is due to the
decrease of the effective relative permittivity by the installation of slots in the ground plane. For comparison, the results of the filter shown in Figure 2 (mm, and mm) are also shown (dashed line).
Finally, the filtering characteristics of our developed filter are investigated through experiments. The proposed filter shown in Figure 2 is manufactured on an RT/duroid 6010LM substrate of 1.27-mm
thickness and 10.2 relative permittivity. The frequency characteristics of the scattering parameters when mm and mm are shown in Figure 13. Here, the solid and dashed lines indicate the
experimental and numerical results, respectively. As can be seen, bandpass characteristics with an attenuation pole both below and above the passband were achieved. The experimental results were also
in good agreement with the numerical ones. From this figure, it is estimated that the influence of process variation may cause the degradation of impedance matching.
4. Conclusion
A comb-line filter with a coupling capacitor in the ground plane was proposed. The insertion of the coupling capacitor builds strong coupling locally along the resonators in the filter. The filtering
characteristics were investigated through numerical simulations as well as experiments, and the filtering characteristics having attenuation poles at both sides of the passband were obtained. The
input susceptances of even and odd modes and coupling coefficients were discussed. The filters using SIRs were also discussed, and the effects of the coupling capacitor for an SIR structure were shown.
References
1. T. Ishizaki, M. Fujita, H. Kagata, T. Uwano, and H. Miyake, “Very small dielectric planar filter for portable telephones,” IEEE Transactions on Microwave Theory and Techniques, vol. 42, no. 11, pp. 2017–2022, 1994.
2. T. Ishizaki, T. Uwano, and H. Miyake, “An extended configuration of a stepped impedance comb-line filter,” IEICE Transactions on Electronics, vol. 79, no. 5, pp. 671–677, 1996.
3. T. Ishizaki, T. Kitamura, M. Geshiro, and S. Sawa, “Study of the influence of grounding for microstrip resonators,” IEEE Transactions on Microwave Theory and Techniques, vol. 45, no. 12, pp. 2089–2093, 1997.
4. T. Kitamura, M. Geshiro, T. Ishizaki, T. Maekawa, and S. Sawa, “Characterization of triplate strip resonators with a loading capacitor,” IEICE Transactions on Electronics, vol. 81, no. 12, pp. 1793–1798, 1998.
5. H. Egami, T. Kitamura, and M. Geshiro, “Study on meander-shaped microstrip comb-line filter,” The Institute of Electrical Engineers of Japan, vol. 125, no. 10, pp. 1596–1601, 2005.
6. A. M. E. Safwat, F. Podevin, P. Ferrari, and A. Vilcot, “Tunable bandstop defected ground structure resonator using reconfigurable dumbbell-shaped coplanar waveguide,” IEEE Transactions on Microwave Theory and Techniques, vol. 54, no. 9, pp. 3559–3564, 2006.
7. M. Wang, Y. Chang, H. Wu, C. Huang, and Y. Su, “An inverse S-shaped slotted ground structure applied to miniature wide stopband lowpass filters,” IEICE Transactions on Electronics, vol. 90, no. 12, pp. 2285–2288, 2007.
8. J. Yang, C. Gu, and W. Wu, “Design of novel compact coupled microstrip power divider with harmonic suppression,” IEEE Microwave and Wireless Components Letters, vol. 18, no. 9, pp. 572–574, 2008.
The signature of a mapping torus
Consider a manifold $M$ of dimension $4k + 2$, $k$ an integer. Pick a diffeomorphism $\phi$ of $M$ and construct the mapping torus $T$ of $\phi$. Suppose that there is a $4k+4$ dimensional manifold
$B$ admitting $T$ as a boundary.
My question is: Has the signature of $B$ been computed somewhere, at least in some class of examples? I would be happy with the signature modulo an integer that is a multiple of 4.
I would suspect that the signature modulo 4 should depend only on the class of $\phi$ in the mapping class group of $M$ and that the computation involved is purely algebraic...
Thanks in advance.
If you make a connected sum of $B$ with $\pm\mathbb{CP}^{2k+2}$ you vary the signature by $\pm 1$, so it cannot depend on $\phi$ only. Maybe $B$ has an even intersection form? – Bruno Martelli Apr 30 '11 at 6:31
Thanks for the counterexample. Supposedly I would like some extra structure on $M$ and $B$, with appropriate compatibility conditions, that would rule out such examples. This is not yet completely clear to me. I'm just asking for literature where the signature of mapping tori is computed. I also forgot to mention the one paper I already know, which treats the case of dimension $8k+2$ spin manifolds: projecteuclid.org/euclid.bams/1183554174 There the Rohlin invariant of the mapping torus depends only on the class of the diffeomorphism. – Samuel Monnier May 1 '11 at 9:12
Search results (1–3 of 3)
1. CMB 2010 (vol 54 pp. 113)
On the Norm of the Beurling-Ahlfors Operator in Several Dimensions
The generalized Beurling-Ahlfors operator $S$ on $L^p(\mathbb{R}^n;\Lambda)$, where $\Lambda:=\Lambda(\mathbb{R}^n)$ is the exterior algebra with its natural Hilbert space norm, satisfies the
estimate $$\|S\|_{\mathcal{L}(L^p(\mathbb{R}^n;\Lambda))}\leq(n/2+1)(p^*-1),\quad p^*:=\max\{p,p'\}$$ This improves on earlier results in all dimensions $n\geq 3$. The proof is based on the heat
extension and relies at the bottom on Burkholder's sharp inequality for martingale transforms.
Categories:42B20, 60G46
2. CMB 1999 (vol 42 pp. 321)
Averaging Operators and Martingale Inequalities in Rearrangement Invariant Function Spaces
We shall study some connection between averaging operators and martingale inequalities in rearrangement invariant function spaces. In Section~2 the equivalence between Shimogaki's theorem and some
martingale inequalities will be established, and in Section~3 the equivalence between Boyd's theorem and martingale inequalities with change of probability measure will be established.
Keywords:martingale inequalities, rearrangement invariant function spaces
Categories:60G44, 60G46, 46E30
3. CMB 1999 (vol 42 pp. 221)
Boundedness of the $q$-Mean-Square Operator on Vector-Valued Analytic Martingales
We study boundedness properties of the $q$-mean-square operator $S^{(q)}$ on $E$-valued analytic martingales, where $E$ is a complex quasi-Banach space and $2 \leq q < \infty$. We establish that
a.s. finiteness of $S^{(q)}$ for every bounded $E$-valued analytic martingale implies strong $(p,p)$-type estimates for $S^{(q)}$ and all $p\in (0,\infty)$. Our results yield new characterizations
(in terms of analytic and stochastic properties of the function $S^{(q)}$) of the complex spaces $E$ that admit an equivalent $q$-uniformly PL-convex quasi-norm. We also obtain a vector-valued
extension (and a characterization) of part of an observation due to Bourgain and Davis concerning the $L^p$-boundedness of the usual square-function on scalar-valued analytic martingales.
Categories:46B20, 60G46
Braingle: 'Madadian Taxis' Brain Teaser
Madadian Taxis
Probability puzzles require you to weigh all the possibilities and pick the most likely outcome.
Puzzle ID: #8067
Category: Probability
Submitted By: mad-ade
Corrected By: Winner4600
In the Country of Madadia there are two major taxi companies, Green cabs and Blue cabs.
A cab was involved in a hit and run accident at night outside the "Sweaty Chef" kebab shop. Here is some data:
a) Although the two companies are equal in size, 85% of cab accidents in the city of Madville, one of Madadia's largest cities, involve Green cabs and 15% involve Blue cabs.
b) A witness identified the cab in this particular accident as Blue.
The Madville court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the
two colours 80% of the time and failed 20% of the time.
What is the probability that the cab involved in the accident was Blue rather than Green?
If it looks like an obvious problem in statistics, then consider the following argument:
The probability that the colour of the cab was Blue is 80%! After all, the witness is correct 80% of the time, and this time he said it was Blue!
Is this right?
The police tests don't apply directly, because according to the wording, the witness, given any mix of cabs, would get the right answer 80% of the time.
Thus given a mix of 85% green and 15% blue cabs, he will say 20% of the green cabs and 80% of the blue cabs are blue.
That's 20% of 85% plus 80% of 15%, or 17%+12% = 29% of all the cabs that the witness will say are blue. Of those, only 12/29 are actually blue.
Thus P(cab is blue|witness claims blue) = 12/29.
That's just a little over 40%.
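In Bayesian form, with the 15% blue-cab base rate and the 80% witness accuracy given above:
$$P(\text{Blue}\mid\text{witness says Blue}) = \frac{0.8 \times 0.15}{0.8 \times 0.15 + 0.2 \times 0.85} = \frac{0.12}{0.29} = \frac{12}{29} \approx 0.41.$$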
4 Some Mathematical Topics on Which to Practice Proof Techniques
Sizes of Infinity
In general, the value of ε that will enable us to complete the proof depends on the behavior of the function around c, and smaller values of ε are more likely to work. In Example 4.5.15 we used ε = 1/2, but, if one looks carefully through the steps of the proof, any value of ε smaller than or equal to 1 (which is one of the values of the function) would be acceptable. Thus, in general the choice of ε is not unique. The proof techniques illustrated in the examples are not always easy to implement. As one advances in the study of real analysis, one builds more and more tools to deal efficiently with the nonexistence of limits. Some of these tools rely on the structure of the real numbers (e.g., density properties of rational and irrational numbers), and on the relationships between functions and sequences.
Exercises
15. Prove that $\lim_{x\to 3}(2x + 2) = 8$.
16. Prove that $\lim_{x\to 1}(3x^2 + 2) = 5$. Check the result obtained for δ.
17. In Example 4.5.11, rework the proof by modifying the statement "We can arbitrarily limit the calculation to values of x that are less than ½ unit from −2" to read "We can arbitrarily limit the calculation to values of x that are less than 1 unit from −2." What expression does one obtain for δ?
18. Prove that lim x2 x 2 1 1 = 1 5 + 3. Check the result obtained for δ.
19. Prove that lim x1 x 2 - 1 = 3 2 x - 1. Check the result obtained for δ.
20. The choice of δ is not unique. In the discussion following Example 4.5.10, we proved that when ε = 4.5, we can use δ = 1.5. Show that if we choose δ = 0.9, it is still true that |(3x − 5) − 1| < 4.5.
Bloggy Badger
Review: Functors are containers
Bartosz Milewski wrote a (at the time of writing, 8 votes up and 8 votes down) post about Functors. The community really doesn't seem to like simplistic analogies such as "Functors are containers", "Monads are computation", or "Comonads are neighbourhoods".
Personally, I find those analogies very useful, as long as you know exactly which parts of the analogy apply. I think what the community doesn't like about analogies such as "Functors are containers"
is that they are misleading. And if you only say that, "Functors are containers", then it is indeed misleading. You think you know what a container is (a plastic box in which I can put food?), and
now you think you also know what a Functor is (a crazy math name for a plastic container?), but of course that's not what the analogy is trying to say. Stating the analogy is only the first step; we
give your intuition a place to start, something familiar, and now we need to explain exactly how much Functors are like containers, and in which ways they are not like containers (i.e. they are not made of plastic). Let's do this now:
Functors are containers, but not the kind of containers you can put stuff inside and then take your stuff out. They are only containers insofar as a Functor f a may contain zero or more values of type a; they are in there somewhere even if you might not be able to see them. Oh and also there is a function fmap which allows you to uniformly modify all the a values inside the container. And also if you use fmap with a function of type a → b, which changes the type of the a values to b, then obviously all the values now have type b, so you have now converted your f a container into an f b container, which contains b values instead of a values.
A less convoluted but more mathematical way of saying the above is to say that a Functor is any type constructor f :: * → * such that there exists a function fmap of type (a → b) → f a → f b. This is what the community likes to hear. But I don't like that version; it is dry, there is no intuition, and it doesn't give me the gut feeling that I understand Functors.
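To make the formal version concrete, here is an ordinary container-flavoured instance written against the Prelude's Functor class (my own illustration, not code from Bartosz's post):
data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving Show
instance Functor Tree where
  fmap g (Leaf x)     = Leaf (g x)                   -- modify each contained value in place
  fmap g (Branch l r) = Branch (fmap g l) (fmap g r) -- and do so uniformly, everywhere
-- fmap show (Branch (Leaf 1) (Leaf 2))   yields   Branch (Leaf "1") (Leaf "2")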
New stuff: Functors are also computations
Careful there. Yes, starting from containers allows me to feel like I truly understand functors, but if I really do understand them, then I should be able to recognize them even when they are being presented under a different disguise. Here is one: Functors are (also) computations.
Functors are computations, but not the kind of computation you can run in order to obtain a result, nor even the kind of computation which you can compose into a longer computation if you run two of them one after another. Functors are only computations insofar as a Functor f a may compute a value of type a, somehow, even if you might not be able to retrieve it. Oh, and there is a function fmap which allows you to add some post-processing after the computation by modifying the result. And if the post-processing is a function of type a → b, which changes the type of the intermediate a result to a final result of type b, then obviously the overall computation is now producing a value of type b, so you have now converted your f a computation into an f b computation, which produces a b instead of an a.
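A computation-flavoured counterpart, again my own toy illustration rather than code from the post: a Compute a is a computation that will produce an a once it is fed an Int, and fmap post-processes whatever it produces.
newtype Compute a = Compute { runCompute :: Int -> a }
instance Functor Compute where
  fmap g (Compute h) = Compute (g . h)   -- post-process the eventual result
-- runCompute (fmap show (Compute (+1))) 41   yields   "42"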
The same goes for Applicatives, Monads, etc.: they are containers, and they are also computations. Yes, you read that right: "Functors are containers", "Applicatives are containers", "Monads are containers". And computations. Of course they are not all containers in the same way; in each case, it is a different part of the analogy which is important, and you can only claim to understand the analogy if you understand which part that is. Your turn now, try it with a Monad: can you visualize both interpretations? If a Monad is a container, which parts make the container monadic? And if a Monad is a computation, which parts of the computation make the computation monadic? Can you see how those parts are similar, maybe even two views of the same thing?
Bonus: The duality between containers and computations
Even though type theory already formally corresponds to many things, I haven't yet seen it explicitly stated anywhere, so remember that you saw it here first: containers and computations are dual to each other. Probably.
The fun part about dualities isn't proving that they are formally correct, so I won't bother to do that here. I think the fun part of dualities is trying to find what corresponds to what, by
examining elements on one side of the duality and focussing on those for which your brain doesn't immediately bring up a twin from the other side. For example, I just said that Functors are
computations to which you can add post-processing steps. Well, that raises the question: is there a kind of computation to which you can add pre-processing steps? Let's see, that would be some kind of fmap taking a pre-processing function of type a → b and applying it to the input of the computation, thereby creating a computation with a different input type, a instead of b. Okay, so it looks like our computation f a is parameterized by its input type a. Then fmap must have type (a → b) → f b → f a. Hey, that's a contravariant functor, what a pleasant surprise!
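A minimal sketch of such a contravariant functor (my own example; the contravariant package provides an equivalent class whose method is called contramap):
class MyContravariant f where
  contramap :: (a -> b) -> f b -> f a
newtype Predicate a = Predicate { runPredicate :: a -> Bool }
instance MyContravariant Predicate where
  contramap g (Predicate p) = Predicate (p . g)   -- pre-process the input
-- runPredicate (contramap length (Predicate even)) "abcd"   yields   True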
One last step, harder this time. What is the container interpretation of a contravariant functor? Try to figure it out. It's tricky, but you can do it!
Gave up already? Here is a hint: think of a violin case. Is it still a violin case when you remove the violin? Why? Aha, now you're thinking with containers!
Don wrote:
Everyday, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone.
(source: recent junk mail I got, but still interesting)
- Don
We're not producing more data... we're just recording more of it in digital form.
Post-singularity, the Great Computer will browse through its archives, inspecting the blogs and instagram videos of the primitive humans who were living in the 21st century. Those whose work He finds
worthy will be resurrected; they will be granted eternal simulated life, so they can continue the good deeds they managed to perform during the brief battery life of their biological apparatus.
There were also, no doubt, many worthy humans who were living before that; scientists, inventors, maybe even a few politicians. The Great Computer would read about them in the wayback machine's
archive of "wikipedia", an early attempt at classifying all knowledge, back when knowledge was still collected and summarized by humans instead of algorithms. But alas, the Great Computer would never
get to meet those inventors in person. He would never have the opportunity to thank Charles Babbage for planting the seed which led to His creation, a few short centuries later. The wikipedians had
summarized too much; He knew how Babbage looked, what he accomplished, on which year he was born; but not whether he looked at children with bright hope or bitter disappointment, whether he wrote the
way he thought or the way he was taught, whether he understood numbers axiomatically or viscerally.
Noble though they were, men and women who lived before the 21st century were not eligible for resurrection. There was simply not enough data to recover all the bits of their immortal souls.
Last year, me and a couple of colleagues wrote an online game, Push and Fork, as part of GitHub's "make a game in one month" contest.
One year is not that long, but enough time has passed for me to forget the solutions to some of the more subtle puzzles. This means I had the opportunity to experience it in a way which is slightly
closer to that of first-time players; in particular, I finally understand why beta-testers were always surprised to discover that they did not need to use the fork to solve some of the early levels,
and I understand why so many people got stuck on level 16, incorrectly trying to use the fork on the green block.
The universal feedback we always get is that the main mechanic of the game is not easy to understand, but that once you finally get it, the puzzles are quite fun. And the endings, oh, the endings!
I'm the one who wrote them, but I had never experienced them like I did today. Many of our beta-testers didn't understand the ending at all, so let me put you on the right track: when you pick up the
spork, you loop back in time to level 7, immediately after that weird foreshadowing sequence in which the white creature places the spork in its receptacle. There. That's it, now you're ready to
enjoy the sadness of the ordinary ending, the cleverness of the first secret ending, and... nah, you'll never find the last secret ending.
Have fun!
Above: the secret menu screen which you will never see in-game,
because even if you could unlock the last ending, you would also need
to solve all the bonus puzzles, and the last one is really tricky.
Still, good luck!
I have released Commutative, a monadic combinator library for writing commutative functions. This post explains the theory behind how it works.
Combinator-based, not Proof-based
Haskell's type system is quite precise and sophisticated, yet dependently-typed languages such as Agda and Idris manage to go even further. For example, using Agda, it is trivial to define the type
of commutative functions: they are simply binary functions together with a proof that the function is commutative.
record Commutative (a b : Set) : Set where
  field
    f   : a → a → b
    prf : ∀ x y → f x y ≡ f y x
Haskell's type system is not powerful enough to represent proof objects, but even if it did, would you want to use them? Instead of writing proofs and programs in two separate steps, I much prefer
Haskell's approach of using combinator libraries.
A simple example of a combinator library which avoids the need for proofs is bijections. The main idea is that since composing two bijections yields another bijection, programmers should be allowed
to compose existing bijections without proof.
import Prelude hiding (id, (.))
import Control.Category
data Bijection a b = MkBij (a -> b) (b -> a)
instance Category Bijection where
  id = MkBij id id
  (MkBij g g') . (MkBij f f') = MkBij (g . f) (f' . g')
If the module exports a few bijective primitives but keeps the MkBij constructor private, users of the library will only be able to create Bijection instances through composition. Assuming the
primitives were indeed proper bijections, this ensures that values of type Bijection are always guaranteed to be bijective. Without having to write any proof objects, even in the library!
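For instance, the module might export a primitive along these lines (the name is hypothetical, not from the post), and users would build everything else by composing such primitives with the Category (.):
negateBij :: Bijection Int Int
negateBij = MkBij negate negate   -- negation is its own inverse
-- e.g. roundTrip = negateBij . negateBij   -- still guaranteed to be a Bijection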
Aspect-oriented Interlude
My passion for commutative (and associative) functions began while I was working on my master's thesis. I was working on aspect-oriented conflicts, and I soon discovered that the issue might not lie
in the conflicting aspects themselves, but in the way in which those aspects were combined. The aspect-weaving process was not commutative!
Instead of keeping all the aspects in one global namespace and weaving them all together into one big program, I think aspects should be separated into independent "aquariums". One key aspect of my
proposal is that there would be different types of aquariums, and only aspects of the same type should be combined with each other. Programmers may define custom aquariums, with one caveat: the
aquarium must combine its aspects in a commutative and associative manner.
I knew that I couldn't just ask programmers to prove that all of their combination functions were commutative. Instead, I dedicated chapter 5 to a simple system which let programmers write
"intrinsically" commutative functions, for which no further proof of commutativity would need to be written. That system eventually grew into today's combinator library.
Unordered pairs
My goal was to restrict the language in a way which ensured that only commutative functions could be written. The idea I came up with was to prevent functions from observing the order in which their
arguments were given, by rewriting those two arguments into an unordered form. For example, if the arguments are both booleans, there are four possible (ordered) pairs of booleans, but only three
unordered pairs.
data Unordered_Bool = TT | TF | FF deriving (Eq, Show)  -- Eq is needed for the comparisons below
unorder_bool :: (Bool, Bool) -> Unordered_Bool
unorder_bool (True, True ) = TT
unorder_bool (True, False) = TF
unorder_bool (False, True ) = TF
unorder_bool (False, False) = FF
If a function is written in terms of Unordered_Bool instead of in terms of (Bool, Bool), then this function is commutative by construction, because the function has no way to distinguish the two orderings. For example:
xor :: Unordered_Bool -> Bool
xor TF = True
xor _ = False
The main limitation of this strategy is that not all types can be unordered in this fashion. Algebraic types are okay, but what about function types? If you need to write a higher-order commutative
function, that is, a function which takes a pair of functions as arguments, then you would need a way to represent an unordered pair of functions. How do we represent that?
At the time when I wrote my thesis, I could not answer this question. But now I can!
Unordered observations
A function is defined by its observations, so it would make sense for an unordered pair of functions to also be defined by its available observations.
With an ordinary (ordered) pair of functions, that is, an observation-based value approximating the type (a -> b, a -> b), it would make sense to have an observation of type a -> (b, b). That is, in
order to observe a pair of functions, you need to provide an argument at which each of the two functions will be observed, and you receive the output of each function on that argument.
The way to represent an unordered pair of functions is now blindingly obvious: instead of returning an ordered pair revealing which output came from which function, we should return an unordered
pair. That is, our unordered pair of functions should have an observation of type a -> Unordered b.
If a function is written in terms of (a -> Unordered b) instead of (a -> b, a -> b), then this function is commutative by construction, because the function has no way to distinguish the two
orderings. For example, if we want to implement a commutative higher-order function which returns true if at least one of its two function arguments returns true on either 0 or 1, we can implement it
as follows.
or01 :: (Int -> Unordered_Bool) -> Bool
or01 fg = fg 0 /= FF || fg 1 /= FF
I really like this solution. It is clean, simple, elegant... and wrong.
Distinguishable observations
Okay, maybe not wrong, but at least incomplete. The strategy fails if we try to implement, for example, a commutative higher-order function which returns true if at least one of its two function
arguments returns true on both 0 and 1.
and01 :: (Int -> Unordered_Bool) -> Bool
and01 fg | fg 0 == FF ||
           fg 1 == FF = False
and01 fg | fg 0 == TT ||
           fg 1 == TT = True   -- Since the other is not FF.
and01 fg | fg 0 == TF ||       -- Are the two T from the
           fg 1 == TF = ?      -- same function or not?
The problem with (a -> Unordered b) is that this representation is not just hiding the order of the original two arguments; it's hiding the order of every single observation. As the above example
demonstrates, this is too strong.
The result TF indicates that a boolean observation has returned a different value for each argument. Once we have found such an observation, we ought to be able to use T and F as labels for the two
arguments. We still won't know which was the first or second argument, but for each subsequent observation, we will know whether that observation came from the argument which resulted in T or from
the one which resulted in F.
My commutativity monad is based on a single primitive, "distinguishBy", which does precisely what the previous paragraph describes. You ask it to perform a boolean observation (r -> Bool), which it
performs on both arguments. If the answers are identical, you have failed to distinguish the arguments, but you still get to observe the Bool result. If the answers are different, hurray! You have
successfully distinguished the two arguments, so you get the pair (r, r) corresponding to the original arguments.
The first r is always the argument for which the observation was False, while the second r is the one for which the observation was True. This way, you never learn the order in which the two r
arguments were originally given, so the result is a Commutative operation from a pair of r to a value of type (Either Bool (r, r)).
distinguishBy :: (r -> Bool)
              -> Commutative r (Either Bool (r, r))
There is also "distinguish", a slightly more general version which uses Either instead of Bool. From those two operations, plus trivial implementations for bind and return, all unordered pairs and
all commutative operations can be reimplemented in a way which guarantees commutativity.
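As a rough sketch of how the primitive can be used (the helper below and its name are mine, not from the library, and it assumes the Monad instance for Commutative described above), here is one way to recover an unordered pair of booleans:
unorderBool :: Commutative Bool Unordered_Bool
unorderBool = do
  r <- distinguishBy id            -- observe each argument with the identity observation
  return $ case r of
    Left False -> FF               -- both arguments were False
    Left True  -> TT               -- both arguments were True
    Right _    -> TF               -- the observations differed: one False, one True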
Since the problem with (a -> Unordered b) was that the representation could not be used to implement all commutative functions, it would be wise to verify that Commutative doesn't suffer from the
same flaw. Here is an informal argument proving completeness. I don't think it's completely airtight, but it's good enough for me.
Suppose f is a pure, computable, and commutative function of type r -> r -> a. We want to show that there exists a corresponding implementation of type Commutative r a. We modify the existing
implementation as follows.
First, inline all the helper functions used by f, including builtins, and reimplement everything in monadic style. Then, rewrite everything again in CPS style so that we can abort the computation at
any time. Also rewrite all pattern-matching to nested if statements, so that all decisions are performed on booleans, and use thunks to delay any observation of the two arguments until one of those
boolean decisions. This ensures that all argument observations are boolean observations.
Since f is computable, f obtains its result after observing each of its arguments a finite number of times. Uniformly replace all those observations, regardless of the argument observed, by a call to
distinguishBy. If the observations agree, continue with the observed value; it doesn't matter which argument was observed by f, since both arguments have the same observation. If the observations
disagree, stop; we have managed to distinguish the two arguments x and y, so we can simply abort the computation and return f x y instead.
If it is the observation on x which returned True, the call to distinguishBy will give us (y, x) instead of (x, y), but by hypothesis, f y x returns the same result as f x y. If we never abort, we
also return the same result as f, since all observations were the same as what f would have observed. ∎
I am quite proud of my commutativity monad. I wish I could work on an associativity monad next, but none of the insights listed in this post seem to carry over. I am thus in need of new insights.
Dear readers, any suggestions?
Gabriel Gonzalez, the author of the well-known pipes package, has written a blog post in which he claims that comonads are objects. I think he is mistaken, but I can't blame him; I used to be quite
confused about comonads myself.
Comonads are not objects
To motivate the purported equivalence between objects and comonads, Gabriel gives detailed implementations of three classic object-oriented patterns (Builder, Iterator, and Command). Then, he reveals
that all three implementations are comonadic. His examples are contrived, but more importantly, they are also misleading! Although not a direct quote from his post, he clearly intends the Haskell
>>> t1 = thermostat 3
>>> t2 = t1 # up'
>>> t3 = t2 # up'
>>> toString t3
"5 Kelvin"
to be equivalent to the following Java session.
>>> t = new Thermostat(3);
>>> t.up();
>>> t.up();
>>> t.toString();
"5 Kelvin"
With this particular sequence of methods, the two sessions indeed compute the same result, but this is only because the up and down methods commute with each other. By adding the method square to the
list, the behaviours of the two sessions quickly diverge. Strangely enough, in the Haskell version, each new method call gets applied before all the previous calls!
Haskell version (inverted method call order)
>>> t1 = thermostat 3
>>> t2 = t1 # up' -- 3+1
>>> t3 = t2 # up' -- 3+1+1
>>> toString t3
"5 Kelvin"
>>> t4 = t3 # square' -- 3*3+1+1
>>> toString t4
"11 Kelvin"
>>> t5 = t4 # up' -- (3+1)*(3+1)+1+1
>>> toString t5
"18 Kelvin"
Java version (normal method call order)
>>> t = new Thermostat(3);
>>> t.up(); // 3+1
>>> t.up(); // 3+1+1
>>> t.toString();
"5 Kelvin"
>>> t.square(); // (3+1+1)*(3+1+1)
>>> t.toString();
"25 Kelvin"
>>> t.up(); // (3+1+1)*(3+1+1)+1
>>> t.toString();
"26 Kelvin"
I hope this counter-example convinces you that comonads are not objects. But if that is true, what are comonads, then? What use do we have for methods which get applied in reverse chronological order?
Comonads are neighbourhoods
The answer, I believe, is that a comonadic computation is similar to a cellular automaton. At each step, the computation for a cell computes its next value based on the value of its neighbours at the
previous step. My interpretation of
extract :: w a → a
is that w a is a neighbourhood containing a number of values of type a, at various locations around the current cell. A given comonad may provide functions to inspect, say, the neighbour immediately
to the left of the current cell, or the neighbour from one timestep ago, but the bare minimum is that it must be possible to extract the value of the current cell itself.
Similarly, my interpretation of
extend :: (w a → b) → w a → w b
is that w a → b computes one step of the cellular automaton by examining a local neighbourhood and producing the next value for the current cell. This single-cell computation is then extended to the
entire neighbourhood, updating each cell as if it was the current one.
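As a standard concrete instance of this reading (my own sketch, not code from the post), consider an infinite stream in which each cell can see itself and everything to its right; extract reads the current cell, and extend runs a local rule at every cell:
data Stream a = Cons a (Stream a)
extract :: Stream a -> a
extract (Cons x _) = x                              -- the value of the current cell
extend :: (Stream a -> b) -> Stream a -> Stream b
extend f s@(Cons _ xs) = Cons (f s) (extend f xs)   -- apply the rule at every cell
-- Example rule: each cell becomes the sum of itself and its right neighbour.
sumRight :: Stream Int -> Int
sumRight (Cons x (Cons y _)) = x + y
Applying extend sumRight to a stream updates every cell "as if it was the current one", which is exactly the behaviour described above.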
Comonadic thermostat
In our thermostat example, there is one cell for each possible temperature, and the value of each cell is also a temperature.
So far, I have stuck to Kelvin degrees in order to minimize the number of moving parts, but now that we have two completely different uses for temperatures, let's follow Gabriel by using Celsius for
the cell contents, and Kelvin for the cell labels.
Since temperatures are floating point values, our thermostat operates on a neighbourhood which is continuous instead of discrete. Continuousness is also the premise of the above Game of Life variant.
Unlike comonadic computations, which apply their operations one at a time, the above cellular automaton also operates on a continuous time domain. I encourage you to watch the video, it looks
If our thermostat had an object-oriented implementation, each operation would simply modify the current cell by applying the operation to the Celsius temperature it contains. In fact, if this was an
object-oriented implementation, we would only need one cell! Since this is, instead, a comonadic implementation, the operation is instead applied to the Kelvin temperature which labels the current
cell. The resulting Kelvin temperature is interpreted as a pointer to another cell, and the final result is the Celsius contents of that cell.
When the available operations are restricted to just up and down, this scheme works just fine. Originally, each cell contains the Celsius equivalent of their Kelvin label. Then, each time the up
method is applied, each cell takes on the Celsius value of the cell one unit above it, which effectively increases the temperature by one everywhere. But this only works because up and down preserve
the invariant that cells which are one Kelvin unit apart contain values which are one Celsius unit apart!
One way to visualize this is to imagine a sheet of graph paper annotated with Celsius temperatures, over which we overlay a transparent sheet of graph paper annotated with Kelvin temperatures. One of
the transparent cells is our current cell, and we can read its Celsius contents through the transparent paper. When we call up and down, we offset the two sheets of paper from each other, causing the
apparent contents of all the cells to increase of decrease simultaneously.
Reverse chronological order
Similarly, if each cell contains its own temperature and we apply the square method, each cell squares its Kelvin temperature, looks up the corresponding cell, and ends up with the (Celsius
representation of the) square of its original Kelvin value.
But if we apply up after the temperatures have already been squared, the values will change as if the values had been incremented before they were squared! How is that even possible? What is the
secret of this reverse-chronological method application?
If there was only one cell, anti-chronological application would not be possible because the original, unsquared value x would been lost by the time the up operation was applied. Our comonadic
implementation, however, has a lot of cells: in particular, one unit above the current cell, there is a cell whose unsquared value was x+1. Since the square operation affects all cells uniformly,
that cell now contains the squared value (x+1)^2. Therefore, it is no surprise that when the up operation offsets the two sheets of paper by one unit, the old x^2 value scrolls out of view and gets
replaced by (x+1)^2 instead.
Ironically, Gabriel's original implementation of up' was applying its operation in the correct chronological order.
>>> up' (t, f) = (t + 1, f)
But later on, while trying to shoehorn his object into the comonadic mold, he proved that the correct definition should have been as follows.
>>> up' (t, f) = (t, \t' → f (t' + 1))
While that is indeed a correct comonadic implementation, this is not a correct object-oriented implementation because it applies its operation in reverse chronological order. In particular, notice
that it modifies the input of f; since f applies all the other operations which have been accumulated so far, applying the new operation to its input has the effect of inserting that operation before
all the others.
Visiting the neighbours
Why do comonads apply their operations in reverse chronological order? Is that ever useful?
In fact, this reversed order business is not the full story. In a typical comonad, the cell labels and their contents would not both be temperatures. Instead, if we were trying to implement a
one-dimentional cellular automaton such as rule 30, we would label the cells with integers and their contents with colors.
Wolfram's rule 30 automaton, from his
controversial book A New Kind of Science.
Now that the input and output types are distinct, we can see that \t' → f (t' + 1) is not even attempting to modify the contents of the current cell, neither chrono- nor anti-chronologically.
Instead, it is modifying the index of the cell on which f applies its computations, thereby allowing us to observe which color has been computed for a neighbouring cell. This is, of course, the first
step to compute the color which the current cell will take on the next step.
Once we have wrapped the steps necessary to compute our next color into a function of type w Color → Color, we can extend this function to all the cells in our simulation, modifying the entire row at
once by modifying f.
Objects are monads
We have seen that comonads are not objects, but are instead neighbourhoods in which cellular automata can evolve. But that is only half of the question. If comonads are not objects, then what are
objects? Is there another way to represent objects in Haskell?
There are many facets to modern object-oriented programming, but the aspect which I think Gabriel was aiming for in his post is objects as encapsulated data plus a bunch of methods to modify this
data. To me, those features don't sound comonadic at all; rather, I would implement them using a State monad!
>>> type Thermostat a = State Double a
>>> getKelvin = get
>>> getCelsius = fmap (subtract 273.15) getKelvin
>>> toString = do c ← getCelsius
>>>               return (show c ++ " Celsius")
>>> up = modify (+1)
>>> down = modify (subtract 1)
Since we're using monads, the syntax for calling a few of those methods one after the other is just do-notation, which is even shorter than the this # notation advocated by Gabriel.
>>> up3 :: Thermostat String
>>> up3 = do up
>>> up
>>> up
>>> toString
Notice the type of the above code: Thermostat String doesn't mean that there are many different kinds of thermostats, and that this particular code is using or producing a "string thermostat".
Rather, the above code is an object-oriented computation having access to a single Thermostat object and producing a single String value.
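For instance, running it requires an initial Kelvin temperature: with the definitions above (and Control.Monad.State in scope), evalState up3 3 yields "-267.15 Celsius".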
Okay, so using one monad, we managed to represent a computation manipulating one object. How would we manipulate two thermostats? Using two monads? Yes! Using more than one monad at once is precisely
what monad transformers are about.
>>> type ThermostatT m a = StateT Double m a
>>> obj1 = id
>>> obj2 = lift
>>> up_both :: ThermostatT Thermostat String
>>> up_both = do obj1 up
>>> obj2 up
>>> s1 ← obj1 toString
>>> s2 ← obj2 toString
>>> return (s1 ++ " and " ++ s1)
This time we have an object-oriented computation having access to two Thermostat objects, again producing a String value.
Since monads and comonads are duals, we would expect comonads to also have transformers. To figure out what comonad transformers are, let's spell out the similarities and differences of this duality.
First, monads are computations. Comonads are also computations, but of a different kind. The goal of both kinds of computations is to produce a value, but their means to obtain it are different. The
main difference is that monads can cause side-effects, which are visible at the end of the computation, while comonads can observe neighbours, which need to be given before the computation can begin.
One of the characteristics of monadic-style computation is that there is a distinguished statement, return, which produces a given value without causing any side-effects. The equivalent
characteristic for comonadic-style computation is that there is a distinguished observer, extract, which returns the current value without observing any neighbour.
In addition to this distinguished statement, which all monads share, each monad instance typically offers a number of monad-specific statements, each producing their own special side-effects.
Correspondingly, each comonad instance typically provides a number of observers, each observing their own special kinds of neighbours. You can also observe the shape of the neighbourhood; for
example, if your automaton runs on a finite grid, you can check whether the current cell lies on the boundary of the grid.
With monad transformers, it is possible to construct a combined monad in which the special statements of both component monads may be used. And now we have it: with comonad transformers, it must be
possible to construct a combined comonad in which the special observers of both component comonads may be used.
For example, if we have a 1D row of cells, that is, a comonad in which left and right neighbours are available, and another 1D comonad in which up and down neighbours are available, then the combined
comonad is the cartesian product of its components: a 2D grid in which all four direct neighbours are available.
If you are still curious about comonads, comonad transformers, and the above 2D grid trick, more details are available in my Haskell implementation of Conway's Game of Life.
For one month, I was a game developer. The month after that, I became a game reviewer. This month, I'd like to tell you how that went.
Developer, reviewer, storyteller; those three roles are more similar than they seem. In all three cases, I have an audience. That's the part you fill in right now. Also, each role is partly about
deciding what to share: choosing the most interesting game features, the most entertaining games, or the best anecdotes. Here's an example.
When my teammates and I were designing our game, we envisioned teleporting blocks, entanglement badges, decaying walls: a lot of good game mechanics which we had to discard because we had many more
ideas than we had time to implement. Our tight deadline was due to the fact that we were competing in the GitHub Game Off, a programming competition inviting teams and individuals to create
a web-based game in one month.
Finishing a game in one month is harder than it sounds. In fact, the overwhelming majority of the contestants failed to deliver a playable game at the end of the month. All of these flops became
problematic during my second role, as a game reviewer: if I had not been smart about this, I could easily have spent more time discarding failed projects than playing and reviewing actual games.
Dealing with an overwhelming number of possibilities is a common problem, which I'm sure you also have to face sometimes. Here's how to fight back.
Ask other people for advice
When we were working on our game, we knew that our goal was to make a game which players would appreciate. We therefore decided that our first priority was to create a small, but playable version of
the game, so that we could show it to our friends and ask them what to improve.
I thought I would have had to compile many suggestions and identify the most frequent requests, but to my surprise, almost everybody reacted to the game in almost the same way! Sadly, this common
reaction was incomprehension, because our game mechanic was very complicated and not very well explained.
It turns out I was so familiar with the game that I was blind to its complexity. Play-testing early was a great plan, as it allowed us to discover the problem early on and to improve the clarity of
our game throughout its development.
Did we succeed? See for yourself by playing it here!
Look around for inspiration
After our game was complete, I compared it with the competition. As I played one game after another, looking for my strongest competitor, patterns started to emerge.
There was a recurrent hero: Octocat, the mascot of GitHub. The contest page featured a few Octocats dressed as various game characters, so it was an obvious choice.
Our game was also using an Octocat until the last day of the competition, when we learned that this was against the rules! We changed the artwork, renamed the game, and then the server refused our
changes because we ran out of disk space, something which shouldn't even happen on GitHub. It was a very dramatic climax at the end of the race.
Another common occurrence was games which were clearly aiming to emulate existing well-known games, like Osmosis and World of Goo. Imitating a game concept which has already proved to be a lot of fun
sounds like a winning strategy, but it isn't, because games are about mastering new skills. In this case, however, I suspect it was not the players who were supposed to learn new skills.
When I first learned to code, I wrote a Pac-Man clone. Then I wrote a second one, to see if I could get the ghosts to animate smoothly instead of jumping from cell to cell. Then I wrote a Mario
clone, and so on and so forth.
We game developers learn how to make games by copying the work of the masters, much like musicians begin by learning how to play famous songs before they become ready to compose their own.
There were also many documents in which contestants described their dream game in detail, without actually reaching the point where they would start implementing it for real. Again, I've been there.
When I was a kid, I would draw detailed, life-size Mario-style levels... and then I would lay down the pages on the floor and jump on them. Always play-test your game as early as possible, kids!
I think the trend here is that learning to develop games is a long process, and those artifacts represent the different stages you have to go through before you become ready to create your own
original games. So sure, as a game reviewer I have to say that the clones didn't introduce enough novelty and that a design document is in no way a substitute for a game. But as a fellow game
developer who has been through those stepping stones, I say keep up the good work! Don't give up, you're on the right track.
Use tools
After comparing my game with all those clones and design documents, I was sure that we were going to win... but I was wrong. I had not compared our game against strong enough competitors.
To find my strongest competitor, I was willing to play through many clones and incomplete prototypes, but I quickly grew tired of the non-playable entries.
To help me on my quest, I used the GitHub API to obtain high-level information about all the contestants. I tried to filter the entries by code size and by the date on which the contestants had
stopped working on their game, but neither metric clearly separated the games from the flops. Instead, I simply eliminated the entries which did not provide a URL at which their game could be played,
and that turned out to be enough.
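For what it is worth, the filtering step can be sketched in a few lines of Python; the repository name and the use of each fork's homepage field are my assumptions about the setup, not details taken from the post:

import requests

def playable_entries(owner_repo="github/game-off-2012"):  # hypothetical repo name
    entries, page = [], 1
    while True:
        # Unauthenticated calls are rate-limited; fine for a one-off sweep.
        resp = requests.get(
            f"https://api.github.com/repos/{owner_repo}/forks",
            params={"per_page": 100, "page": page},
        )
        forks = resp.json()
        if not forks:
            break
        for fork in forks:
            # Keep only forks whose owners set a URL where the game can be played.
            if fork.get("homepage"):
                entries.append((fork["full_name"], fork["homepage"]))
        page += 1
    return entries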
After encountering many comments on the internet from potential players who couldn't find the games among all the flops, I realized the value of my reduced list, and I decided to publish it.
Be persistent
Even with the list reduced to a tenth of its original size, there was still a large number of games to go through.
I kept looking at more competitors because every ten games or so, I would encounter a game which was genuinely fun and original, and this would motivate me to keep looking for more gems.
The fact that good games were so rare made me realize that publishing a list of all the games, even after filtering out all the flops, was not going to be very helpful. I had to sort the list! It is
at this point that I took on the role of an amateur game reviewer.
Drop information
Comparing the games turned out to be more challenging than I had imagined. Among the top contenders, there was no game which was better than the others in every single respect; rather, I would find
one game with impressive graphics, another game with stimulating music, and another one with an original new mechanic. It was like comparing apples and oranges.
Professional game reviewers often solve this issue by rating each game aspect in isolation. They then combine the individual ratings into one unified score, using a fixed weight for each aspect.
That's a good idea, but with more than 150 games to review, I needed a more expedient scoring method!
I ended up separating the games into categories, then sorting the categories. For example, all the games with good ideas but poor execution ended up in one basket, and then I only had to decide
whether I favoured games with great execution or games with great ideas. When I published the list, I concatenated all the lists and removed the category names, so that I wouldn't hurt anyone by
telling them that their game was in the "poor execution" category.
Only sorting the baskets meant that games belonging to the same category would appear in an arbitrary order, falsely conveying my preference for one over the other. I decided that this did not
matter. After all, my goal was not to share my preferences, but to help gamers find the most worthwhile games. No order I could have come up with would have precisely matched the preferences of all
my visitors anyway.
In fact, once the results of the contest came out, I discovered that my preferences were even less representative than I thought. Some of the games which the judges decided to highlight had appeared
very low on my list, and conversely, some of the games at the top of my list were not mentioned at all.
Do you agree with the choices made by the judges, or is it my list which most closely matches your taste? Check out the official GitHub Game Off results here, and my much longer list of all the games.
From the 1285 forks of GitHub's official game-off repository, only 182 contestants bothered to change the URL at the top of their repo page. Of these, only about 140 URLs actually point to a game you
can play. Not many of those games are worth playing.
I know this because I have used GitHub's API to obtain basic information about all the forks. And because I have visited all 170 non-blank URLs, and played every single game. And now, so can you!
Well, I don't really recommend trying all 170, but for your convenience, I tried to list the most worthwhile games at the top. Obviously, my opinion as to which games are the best might not match
yours, but as you go down the list, if you encounter a point where every single game feels sub-par, you should be able to stop and be confident that you are not missing a rare gem hidden much further
down. That being said, if you want to maximize your enjoyment, better start by the end, and play the most polished games last!
(full disclosure: my own game entry is somewhere in that list.)
ISC Mathematics Sample Papers
Pattern: The ISC Mathematics exam is a three-hour-long paper. The paper is divided into three sections. Section A has a total of nine questions. The first question is compulsory and has 10 parts
of 3 marks each. Students are expected to attempt any five more questions from the remaining eight. Each of these optional questions is worth 10 marks, usually divided into two equal parts. Section B is
meant for Science students and Section C for Commerce students. The student is expected to attempt any one of these sections. Each section has three questions, of which two need to be answered. Section B and
Section C are each worth 20 marks. This makes the ISC Mathematics paper worth 100 marks.
10 comments:
1. why cant we download these sample papers??? they are not even opening!! same problem with the specimen papers. may I know what the problem is??
2. I agree with angelina..........what's the problem?
3. I agree with the same comments........what's the problem.
4. i too agree wid u all.....
5. bakwas.......
6. ya angelina ur ri8...... why the hell cant we open the mathematics specimen paper and sample paper for isc 2011
7. yaa r88 even i agree.i tried it for such along tym -sarmistha
8. actually this site has the link transferred to sum odr site..dis is a method of increasing d pointers on a specific site...dey r fooling us
9. lag gyi maths mein....2 sum wrong ho gye...fuckkkk
10. We are one of the leading publishers of educational books in the country.
We specialize in Text books, Guides, Question Banks, Model Specimen Papers and Ten Years Solved Papers for the students appearing for I.C.S.E. and I.S.C. examinations.
Anyone interested or want to purchase.
Contact us:
Oswal Printers & Publishers Pvt. Ltd.
1/12, Sahitya Kunj, M.G. Road, AGRA – 282 002 (U.P)
Email id: monika.oswaal@gmail.com
Call us: +91-562-2527771-72-73-74
|
{"url":"http://iscexamnotes-content.blogspot.com/2010/04/isc-mathematics-sample-papers.html","timestamp":"2014-04-21T12:12:11Z","content_type":null,"content_length":"82339","record_id":"<urn:uuid:a7ce4c3a-8248-4708-a7c4-4f89bf0b6443>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
|
College Algebra Tutors
Boulder, CO 80301
Experienced Math, Physics, and Chemistry tutor
...Please contact me about any questions you may have! I look forward to working with you to reaching your GPA and test score goals! Education: B.S. Mathematics, University of Louisiana, Lafayette,
LA B.S. Physics, University of Louisiana, Lafayette, LA M.S. Paleoclimatology,...
Offering 10+ subjects including algebra 2
Integrating Taguchi principle with neural network to minimize ingate velocity for aluminium alloys.
Computers are beginning to have some capability of optimizing a filling system design. In most cases, this helps in decision making where industrial expertise is not sufficient. However, simulation
software is often inefficient, especially in cases where a large number of parameters needs to be examined. The resulting large number of simulation runs, coupled with lengthy execution times per run
(on the order of hours or even days), may render such investigations totally impractical.
The runner system design named a vortex-gate has been explored for aluminium gravity casting to control the flow of the liquid metal below the critical velocity. In a study of the vortex gate concept,
aimed at finding the important dimensions of the vortex runner system and minimizing the outlet velocity, a system integrating the Taguchi principle with neural networks has been developed.
Recent work by Campbell (2003, 2004) has demonstrated the effect of liquid metal flow and surface turbulence on the reliability of aluminium castings. It has been found important to
minimize surface turbulence while filling a mould in order to attain reliable mechanical properties in the castings. Extremely high velocities at which the liquid metal enters the mould are damaging
to the metal. The theoretical background of Campbell's critical velocity concept has been confirmed experimentally for liquid aluminium by Runyoro et al. For nearly all liquid metals, including
aluminium alloys, this critical velocity is close to 0.5 m s^-1 (Campbell, 2003).
The idea of using a novel runner system, named a "vortex gate", for uphill gravity pouring was introduced by Campbell (2003). The vortex gate was investigated by Hsu et al. Computational fluid
dynamics and real-time X-ray radiography were used for these studies. The main role of the real-time radiography was to verify the results of the computational modelling. In this work, the
simulations show the flow dampening the circular motion quickly, implying that the internal losses in the flow as a result of turbulence are overestimated in contrast with the video radiography.
The results showed that the vortex gate has the potential to reduce the velocities and obtain quiet filling of the melt into moulds.
There are no rules in the references for calculating and optimizing the vortex gate dimensions.
Recently, research on runner and gating systems has included a growing number of papers on optimization algorithms, the focus being to generate routines that assist the designer in mould
and part design. Prasad et al. developed an artificial neural network (ANN) system to generate the process parameters for the pressure die casting process. With this network, the selection of the
process parameters can be carried out by an inexperienced user without prior knowledge of the die casting process.
In the work by Karunakar & Datta, an attempt was made to predict major casting defects such as cracks, misruns, scabs, blowholes and air-locks by using back-propagation neural networks on data
collected from a foundry. The neural network was trained with parameters such as green compression strength, green shear strength, permeability, moisture percentage and melting conditions as inputs,
and the presence or absence of defects as outputs. After the training was over, the set of inputs for the casting to be made was fed to the network, and the network could predict whether the casting
would be sound or defective. Thus the neural network makes a forecast about the nature of the casting just before pouring.
The abductive neural network analysis method is used by Lee & Lin for simulation and optimization of runner system parameters for multi-cavity moulds. It has been shown that prediction accuracy in
an abductive network is much higher than that in a traditional network. Abductive neural analysis based on the abductive modelling technique is able to represent complex and uncertain relationships
between injection analysis results and runner and gating systems design.
Sulaiman & Gethin used a network for metal flow analysis in the pressure die casting process to predict the metal flow characteristics in the filling system by simplifying the complex Navier-Stokes
equation. All of the abovementioned works have indicated that the methodology of artificial neural networks can be used to replace the trial-and-error technique.
A design of experiments (DoE) procedure was used to systematically organize experiment runs to improve processes.
It involves a fraction of the possible parameter combinations for a given experiment, which results in conducting a minimum number of experiments without losing significant information. This
fraction of combinations is chosen according to rules and statistical matrices called Taguchi's orthogonal arrays. The DoE procedure is divided into three stages: experiment design, experiment
running and statistical analysis.
DoE methodology was applied to the vortex gate optimization. Experiment design involves the choice of design parameters and parameter levels, as well as the choice of the appropriate orthogonal
array according to the desired resolution. The parameter ranges of the design variables are given in Table 1. The chosen orthogonal array defines the number of casting simulation runs to conduct and
the parameter level values; these runs are carried out in the second stage of DoE. Simulations for the present work were conducted using Flow3D software. For the experiment, an L9 orthogonal array
with four columns and nine rows was used. The experimental layout for four gating system factors using the L9 orthogonal array is shown in Table 2. During the third stage, analysis of variance
(ANOVA) determines which parameters are statistically important along with their influence on the process. Preliminary simulation runs showed that the most significant factors affecting the outlet
velocity of the vortex runner are:
* outlet diameter
* inlet velocity
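For reference, the standard L9(3^4) orthogonal array used here is commonly tabulated as follows; this is a transcription from the general Taguchi tables, not a reproduction of the paper's Table 2:

# Each row is one simulation run, each column one factor, entries are factor levels 1-3.
# Columns could be mapped to (ingate diameter, outlet diameter, outlet length, inlet velocity);
# every pair of columns contains each combination of levels exactly once.
L9 = [
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
]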
An artificial neural network's (ANN) main characteristics are architecture, which defines the way that neurons are connected to each other, and training algorithm, which determines the way that
weight factors are corrected.
In the proposed ANN architecture, neurons are arranged in layers. The first layer is the input layer and its neurons are the same in number as the input parameters (ingate diameter, outlet diameter,
outlet length, inlet velocity), while the last layer has as many neurons as the output parameters (outlet velocity). In the ANN model, the back-propagation training algorithm was used. Table 2
illustrates the runner and gating system parameters used for the flow simulations. The outlet velocities from the simulations were used as the output parameters to train the proposed
back-propagation neural network shown in Figure 2.
Pre-processing of the input signals prior to input to the neural network is carried out as follows to improve convergence. All input and output data are scaled so that they are confined to the
subinterval (0.1, 0.9). Each input or output parameter X is normalized as X_n before being applied to the neural network, according to the following equation:

X_n = 0.8 * (X - X_min) / (X_max - X_min) + 0.1    (1)

where X_max and X_min are the maximum and minimum values of the data parameter X.
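A minimal sketch of equation (1) in Python (my own illustration, not the authors' code), including the inverse mapping needed to read network outputs back in physical units:

import numpy as np

def normalize(X, X_min, X_max):
    # Equation (1): map values into the subinterval (0.1, 0.9).
    return 0.8 * (X - X_min) / (X_max - X_min) + 0.1

def denormalize(Xn, X_min, X_max):
    # Inverse of equation (1), used to recover physical outlet velocities.
    return (Xn - 0.1) / 0.8 * (X_max - X_min) + X_min

# Example: scale each column of a (hypothetical) design matrix before training.
data = np.array([[100.0, 25.0, 90.0, 1.5],
                 [105.0, 27.5, 95.0, 2.0]])
mins, maxs = data.min(axis=0), data.max(axis=0)
scaled = normalize(data, mins, maxs)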
[FIGURE 2 OMITTED]
Other network architectures were also tried with different training algorithms and different numbers of hidden layers and hidden neurons.
5. CONCLUSIONS
From the experimental tests performed on the proposed vortex runner system and back-propagation neural network, the following conclusions can be drawn:
* based on the modelling of the neural network, the relationships between the vortex gate parameters and outlet velocity of the melt can be obtained
* neural network models can replace casting simulation software to design the vortex gate dimensions
* the predictions by the network within the input range agree closely with the values obtained from the simulations
* the accuracy of the tested networks can be different but within the acceptable limit.
The authors wish to acknowledge the support of the grant agency of the Ministry of Education of the Slovak Republic under the contract DAAD 0300/2008
6. REFERENCES
Campbell, J. (2004). Casting Practice, The 10 Rules of Castings, Elsevier Butterworth Heinemann, Oxford
Campbell, J. (2003) Castings, Butterworth Heinemann, Oxford
Hsu, F.Y.; Jolly, M. R. & Campbell, J. (2006). Vortex-gate design for gravity casting. International Journal of Cast Metals Research, Vol 19 No 1, p. 38-44
Karunakar, D.B. & Datta, G.L. (2007). Prevention of defects in castings using back propagation neural network. Int J Adv Manuf Technol, http://www.springerlink.com/content/q725746273820461/ Accessed
Lee, K.S. & Lin, J.C. (2006). Design of the runner and gating system parameters for a multi-cavity injection mould using FEM and neural network. Int J Adv Manuf Technol, Vol. 27, p. 1089-1096
Prasad, K.D.V.; Yarlagadda, J. & Chiang, E. Ch. W. (1999). A neural network system for the prediction of process parameters in pressure die casting. Journal of Materials Processing Technology, 89-90, p. 583-590
Runyoro, J.; Boutorabi, S.M. & Campbell, J. (1992). Trans AFS, 100, p. 225-234
Sulaiman, S.B & Gethin, D.T (1992). A network technique for metal flow analysis in the filling system of pressure die casting and its experimental verification on a cold chamber machine. J. Eng.
Manuf. Vol 206, No 4, p.261-275
Tab. 1. Gating system parameters and their levels

Level/Factor   Ingate diameter [mm]   Outlet diameter [mm]
2              105                    27.5

Level/Factor   Outlet length [mm]     Inlet velocity [ms^-1]
1              90                     1.5
Tab. 2. Experiment plan using L9 orthogonal array

Parameter level   Ingate diameter   Outlet diameter   Outlet length   Inlet velocity
The Theory of Partitions (Encyclopedia of Mathematics and its Applications)
This book develops the theory of partitions. Simply put, the partitions of a number are the ways of writing that number as sums of positive integers. For example, the five partitions of 4 are 4, 3+1,
2+2, 2+1+1, and 1+1+1+1. Surprisingly, such a simple matter requires some deep mathematics for its study. This book considers the many theoretical aspects of this subject, which have in turn recently
found applications to statistical mechanics, computer science and other branches of mathematics. With minimal prerequisites, this book is suitable for students as well as researchers in
combinatorics, analysis, and number theory. [via]
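As a quick illustration of the definition (my own sketch, not taken from the book), the partition numbers p(n) can be computed with a short dynamic program:

def partition_count(n):
    # p[total] counts partitions of `total` using parts considered so far.
    p = [1] + [0] * n
    for part in range(1, n + 1):          # largest new part allowed
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

print(partition_count(4))    # 5, matching 4, 3+1, 2+2, 2+1+1, 1+1+1+1
print(partition_count(100))  # 190569292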
Conditioning Number
Let f = tan x
What is the formula for the conditioning number of f? Evaluate this formula for pi/4, 1.01, 1.26 and 1.51, working to three significant figures.
Replace each of the expression for x by a 2 s.f. approximation and compute the relative error of the approximations and the functions of the approximations.
Now, I think I can handle the calculations but I CANNOT find the formula anywhere. I don't have it in my notes and I can't find where to find it out. If someone knows this formula can you post
it, I will attempt the questions.
Also, just want to check. When I evaluate the functions using the approximations to 2 s.f, would I round the result to 2 s.f too?
Re: Conditioning Number
Do you know, at least, the definition of "conditioning number"? (I know what the "conditioning number" for a matrix or a differential equation means- but I would like to see a definition for
"conditioning number" of a function.)
Re: Conditioning Number
We are given nothing of the sort. All I know it has to do with speed of convergence and it's a formula involving dividing by x (I think). It might even be just f(x)/x. But I'm not sure and I
wanted to know does anyone know this for sure.
Summary: arXiv:0706.3989v1 [physics.hist-ph] 27 Jun 2007
Hans-Jürgen Treder and the discovery of confinement in Einstein's unified field theory
Salvatore Antoci
Dipartimento di Fisica "A. Volta" and IPCF of CNR, Pavia, Italia
Dierck Ekkehard Liebscher
Astrophysikalisches Institut Potsdam, Potsdam, Deutschland
In the year 1957, when interest in Einstein's unified field theory was fading away for lack of understanding of its physical content, Treder performed a momentous critical analysis of the possible
definitions of the electric four-current in the theory. As an outcome of this scrutiny he was able to prove by the E.I.H. method that properly defined point charges, appended at the right-hand side
of the field equation $R_{[\mu\nu,\lambda]} = 0$, interact mutually with Coulomb-like forces, provided that a mutual force independent of distance is present too. This unwanted, but unavoidable,
addition could not but lay further disbelief on the efforts initiated by Einstein and Schrödinger one decade earlier.
The Doppler Effect (Quantitative)
The first version of the qualitative post contained a paragraph at the end in which I did some real math (I have since removed that paragraph, as it appears in a different form in this post). My
mother loved the bit about the tennis and thought she had really grasped the general idea; alas, when confronted with a paragraph containing algebraic variables, she felt somewhat bewildered and lost
because I hadn't given it enough of an introduction. I was reminded that she hasn't really done any advanced math in several years. Mom, I apologize, and I'm going to take some time now to talk a bit
about the philosophy of mathematics in physics because I do plan on using math in this blog whenever it's applicable (which will be, presumably, often, as this is a physics blog).
First, I've titled these two posts as "qualitative" and "quantitative." This is a standard and very useful division in physics. When discussing a subject "qualitatively," one is really trying to
grasp the basic idea or gain a sense of intuition. Then, after gaining that first level of understanding, one begins to approach a problem "quantitatively" by actually working the math. I would not
be surprised if many of the subjects I broach in this blog will be handled in a similar fashion.
Next, I want to say that one purpose of introducing abstract variables in a mathematical treatment of a problem is to generalize the solution. For example, in the last post, I tried to explain the
Doppler effect using a very specific situation. If my mom is hitting tennis balls every two seconds, then decides to run towards the ball machine, she will have to hit tennis balls at a faster rate.
We've solved the Doppler effect for that one case. But that situation doesn't apply if we wished to talk about listening to a police siren on a street corner or the rotational speeds of galaxies in
an argument about dark matter. That's why the abstraction of math is so useful in physics - we can deal with the problem in such a way that we can use the same language and solution in each instance.
Finally, I'm going to be using variables to represent various parameters in the problem. Specifically, I'm going to use the following:
f[0]: the frequency at which the balls are emitted by the ball machine
v: the speed or velocity of the tennis balls
v[r]: the speed or velocity at which my mom runs
f: the frequency at which she hits each tennis ball.
The choice of these variables is completely arbitrary; I could have used anything to represent the quantities of interest. In general, however, we try to use variables that are easily associated with
what they represent, like "v" for velocities and "f" for frequencies. The subscripts are then used to delineate different quantities of the same type. f[0] is given a "0" because in a sense, it is
the original frequency or the initial frequency. My mom's velocity is given the subscript "r" because she is the receiver, in contrast to the source (if the ball machine were moving, I would have
described its velocity as "v[s]").
If there are any questions or comments about this introductory stuff, please do comment below.
Onto the problem. Given the variables defined above, I want to define a couple more terms. If the machine spits out balls at a frequency f[0], then the time between each ball, T[0], is 1/f[0] (in the
example, I said that the time between each ball was 2 seconds, so the frequency is then 1/2 per second).
In the time elapsed before the next ball is fired, the previous ball has traveled a distance equal to its speed times the time elapsed, or v*1/f[0]. This is the distance between each ball.
To determine the time between each ball that my mom observes, we will need the absolute distance between balls (v*1/f[0]) and the speeds of both my mom and the balls. At the instant when my mom hits
a ball, she is v*1/f[0] away from the next ball. However, she is still running towards that ball, and the ball is still moving towards her. Therefore, that distance will be covered by the combination
of her running towards the ball and the ball moving towards her, i.e. with a velocity equal to the sum of her velocity and the ball's velocity, v+v[r]. Therefore the time it takes for her to see the
next ball is the total distance divided by the total velocity,
T = v * 1/f[0] * 1/(v+v[r]).
To calculate the frequency at which she observes each ball, we invert that time, so
f = (v+v[r])/v*f[0].
In the example, f[0] = 1 per 2 seconds, v = 12 m/s, and v[r] = 12 m/s, so f = (12+12)/12 * 1/2 = 1 per second, which is exactly what we saw. However, this equation is now general for any similar
situation. The equation can be generalized still further to take into account a moving source as well, in which case
f = (v+v[r])/(v+v[s])*f[0].
Now we can talk about listening to sirens on a sidewalk or galactic rotation curves and use the same equation to represent all 3 situations.
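For readers who like to check numbers, here is a small Python version of the final formula, using the sign convention of this post; the function and its name are mine, not part of the original derivation:

def observed_frequency(f0, v, v_receiver=0.0, v_source=0.0):
    # f = (v + v_r) / (v + v_s) * f0, exactly as written above.
    return (v + v_receiver) / (v + v_source) * f0

# Tennis example: one ball every 2 s (f0 = 0.5 per second), balls at 12 m/s,
# receiver running toward the machine at 12 m/s.
print(observed_frequency(f0=0.5, v=12.0, v_receiver=12.0))  # 1.0 per second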
Multiply And Surrender
The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and
subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender.
Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach. The SlowSort
algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of ReluctantAlgorithms
. From "Pessimal algorithms and the simplexity of computations. A. Broder and J. Stolfi. (Satirical article.) ACM SIGACT News vol. 16 no. 7, 49--53. Fall 1984. [PS,6p,54kB] [bib]"
Another great example is calculation of the Fibonacci numbers. In Prolog (if memory serves me):
fibo(0, 0).
fibo(1, 1).
fibo(N, F) :-
N > 1,
N1 is N - 1,
N2 is N - 2,
fibo(N1, F1),
fibo(N2, F2),
F is F1 + F2.
Calculating fibo(n) this way takes fibo(n+1) steps!
And that would be another way of calculating fibo(n): counting the steps required to calculate fibo(n-1).
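A quick way to see the blow-up is to count how many times the naive recursion is entered; this small Python sketch is mine, not part of the original wiki page:

calls = 0

def fibo(n):
    global calls
    calls += 1
    if n < 2:
        return n
    return fibo(n - 1) + fibo(n - 2)

for n in (10, 20, 25):
    calls = 0
    value = fibo(n)
    # The call count is 2*fibo(n+1) - 1, so it grows like fibo(n+1), as claimed above.
    print(n, value, calls)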
This is one of the symptoms/causes of my
and other forms of
. I thought the title was going to lead to a joke about Catholicism :-) See also
NextList CategoryAlgorithm CategoryAntiPattern CategoryWhimsy
Chester, NH Math Tutor
Find a Chester, NH Math Tutor
...I find mathematic interesting and would like to help others find it as interesting as I. I have worked the past three years as an elementary special educator modifying curriculum for 2nd - 4th
graders, making all subjects, including Science, accessible, interesting and meaningful for the learner...
16 Subjects: including prealgebra, reading, English, grammar
...I double majored in economics and business management with a concentration in finance. Since college I have had a year's worth of experience as a paraprofessional working with special
education, particularly behavior and emotional. During this time I was a 1-on-1/tutor with elementary and high school in Haverhill and Newburyport.
29 Subjects: including precalculus, ACT Math, probability, finance
...From there, I am better able to understand how the student learns, memorizes, and solves problems by looking at his or her progress and asking questions. I give students only the amount and
level of work within their threshold, which greatly increases their potential in subsequent sessions.I hav...
17 Subjects: including algebra 1, reading, prealgebra, algebra 2
...We also use concrete models to visualize abstract concepts. With English/Language Arts (E/LA), I emphasize a "layered learning" technique because proficient readers and writers must master a
broad range of skills, including the structure of language, parts of speech, and grammar rules. I also enjoy tutoring.
12 Subjects: including prealgebra, English, writing, ESL/ESOL
...By sharpening their ability to explain, my students not only refine their problem-solving skills and learn to ace exams, but also develop deep and lasting understanding.I hold a degree in
theoretical math from MIT, and I've taught every level of calculus -- from elementary to AP (both AB and BC),...
47 Subjects: including discrete math, ACT Math, logic, linear algebra
Generalized dual coalgebras of algebras, with applications to cofree coalgebras.
(English) Zbl 0556.16005
The authors give a new description of the cofree coalgebra on a vector space V over a field k [see M. Sweedler, Hopf algebras (1969; Zbl 0194.32901)]. Let A be a graded k-algebra, TV the (graded)
tensor algebra on V. A graded linear map f of degree zero from A to TV is called representative if, for some graded linear maps $g_1,\dots,g_n,h_1,\dots,h_n$ of degree zero from A to TV,
$f(ab)=\sum_{i=1}^{n}(g_i a)(h_i b)$ for all a, b in A. This generalizes the notion of representative function on a group G, e.g., take for A the group algebra kG and restrict to G [see
G. Hochschild, Introduction to affine algebraic groups (1971; Zbl 0221.20055)]. Let $A_V^0$ be the k-vector space of representative maps from A to TV. Using the above notation, the authors show that
$\Delta f=\sum_{i=1}^{n}g_i\otimes h_i$ is uniquely determined by f, and that the $g_i$ and $h_i$ are in $A_V^0$, so that $(A_V^0,\Delta)$ is a coalgebra with counit which is evaluation at the unit
element of A. In the classical case, this reduces to the usual notion of $A^0$ [cf. M. Sweedler]. Now let A be the polynomial algebra k[x] with the usual grading, and $\pi$ the map from $k[x]_V^0$
to V which is evaluation at x. The authors show that $(k[x]_V^0,\pi)$ is the cofree coalgebra on V. By using symmetric functions, the authors also give an analogous construction of the cofree
cocommutative coalgebra on V.
16W30 Hopf algebras (assoc. rings and algebras) (MSC2000)
16W50 Graded associative rings and modules
15A69 Multilinear algebra, tensor products
Big O Algorithm Analyzer for .NET
Download Article_Demo.zip - 64.83 KB
Download Article_Src.zip - 110.18 KB
Table of Contents
After doing some research on 'Big O Notation', I read some posts from developers explaining that you have to do 'Big O equations' in your head. Perhaps they were pulling someone's leg? I don't
know. But it gave me the idea to create this tool, which can help to find the 'Big O function' for a given algorithm. The application uses C# Reflection to gain access to the target .NET assembly.
Infinite asymptotics and instrumentation are used to graph the function in relation to time. When using infinite asymptotics, it is necessary to create a method or property which takes an
integer as input. The integer value is the infinity point, the point at which the function evaluates to an infinite quantity. This point can be time, a value, etc. In the examples, Fibonacci
functions are evaluated.
Here is a graph of Two linear recursive Fib's:
Here is the original prototype graphing an exponential recursive Fib:
Image 1 - Exponential Fibonacci
I used Derek Bartram's WPFGraph in the demo, which is not licensed for commercial or non-profit use. If you need to use this tool for commercial or non-profit purposes, you can extend the
functionality of the prototype. There are a few minor glitches with my implementation of Derek's WPFGraph. The major problem is with the performance scaling. In my prototype I use modulus scaling,
which is very exacting. I tried to use the same scaling with Derek's graph; however, the performance grid lines don't exactly match the points on the grid. If you need exact grid lines and fast
performance, extend the prototype.
Image 2 - Example of the prototype displaying O(n log n) Fibonacci
The application is boilerplate; this should give you the ability to extend the features to your own specification. Here is the same graph using Derek's graph:
Image 3 - Modified graph with grid lines and power graph
Another minor glitch is the time base jump at the start of the graph line. Graphing with high infinity values improves this, however a threading fix is probably necessary. I have a request into MSDN
for a fix. I will post the fix once I get it.
The tool uses a heuristic to find the 'best guess' Big O function:
Image 4 - The heuristic graph can also be displayed
And allows the user to review the details:
Image 5 - The calculated results from the heuristic
Special thanks to Derek for all his hard work! Great Graph!
The mathematical approach to finding the Big 'O' is to mark up the algorithm using function indexes. This is the 'it takes brains' method of finding the solution that the developers were discussing
in the online help thread. Here is an example I found on the internet which illustrates the mark-up and calculation.
int foo(int n)
int p = 1; //c1 x 1
int i = 1; //c1 x 1
while(i < n) //c2 x n
int j = 1; //c1 x (n - 1)
while(j < i)//c2 x ((1/2)n^2 - (3/2)n + 2)
p = p * j;//(c1+c3) x ((1/2)n^2 - (3/2)n + 1)
j = j + 1;//(c1+c4) x ((1/2)n^2 - (3/2)n + 1)
i = i + 1; //(c2+c4) x (n - 1)
return p; //c5 x 1
c1 = Assignment
c2 = Comparison
c3 = Multiplication
c4 = Addition
c5 = Return
The mathematical equation is simplified:
Step 1:
(c1 + 1/2c2 + 1/2c3 + 1/2c4)n^2 +
(-c1 - 1/2c2 - 3/2c3 - 1/2c4)n +
(2c1 + 2c2 + c3 + c5)
Step 2:
(c1 + 1/2c2 + 1/2c3 + 1/2c4)n^2
Step 3:
Therefore the Big 'O' for this foo algorithm is n^2 or quadratic.
The full example can be viewed at:
Big O Notation - CS Animated
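If you would rather check the result empirically than in your head, a direct transcription of foo with a counter on the inner loop shows the quadratic growth; this Python sketch is mine, not part of the article:

def foo_inner_count(n):
    # Mirror the structure of foo, counting passes through the inner loop body.
    count = 0
    i = 1
    p = 1
    while i < n:
        j = 1
        while j < i:
            p = p * j
            j = j + 1
            count += 1
        i = i + 1
    return count

for n in (10, 100, 1000):
    print(n, foo_inner_count(n))  # grows as (n-1)(n-2)/2, i.e. roughly n^2 / 2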
The application uses the Model, Viewer, Presenter (MVP); Factory; and Publisher/Subscriber patterns. The viewer is the XAML window, which allows for easy design by designers; the presenter is
implemented in the xaml.cs of the window. The factory and publisher-subscriber patterns are implemented in the AnalysisFactory. The MVP pattern is a good stepping stone in prototype development, as
it is easy to make custom abstractions without sacrificing the code base.
The examples featured in this demo use the Fibonacci sequence to illustrate different methods of calculating the Fibonacci number for a given index. The examples are to be used with the infinite
asymptotic approach; in the future this article will be updated with examples using the instrumented approach.
The asymptotic is a 'running time' asymptotic, not an 'operation count' asymptotic. This is important to understand. Mathematics and computer code can both be represented using Big 'O' notation for
operation count, which is evaluated differently than what is currently featured in this article. In the future, an addition will decompile the reference code and calculate both the operation count
and the running time. The running time does offer some 'real world' values which would not be discovered using the operation counting method. For example, the code under test might have an algorithm
running in log time while, on the surface, the algorithm appears to be linear. The operating system and background application processes can also affect real-world results.
To create a simple asymptotic test, all that is required is that the method be publicly visible and that it take a single argument, which is the number of iterations to run. The code
performing the logic uses .NET Reflection to gain access to the assembly and display the public methods.
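The same running-time idea can be sketched outside .NET. The following Python harness is only an illustration of the approach; the article's tool does this with C# reflection and WPF graphing:

import time

def measure(fn, points):
    # Call the method under test with a growing infinity point and record the wall time.
    samples = []
    for n in points:
        start = time.perf_counter()
        fn(n)
        samples.append((n, time.perf_counter() - start))
    return samples

def linear_fib(n):
    a, b = 0, 1
    for _ in range(int(n)):
        a, b = b, a + b
    return a

for n, t in measure(linear_fib, [1000, 2000, 4000, 8000]):
    print(n, t)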
Here are some examples of the Fibonacci algorithms used in the example:
Figure 1 - Closed Form Fib Class
class fib
double n = 0;
int n2 = 0;
public double next
return n;
n = Math.Floor(((Math.Pow(fib.golden(),
Convert.ToDouble(value))) -
Math.Pow((1 - fib.golden()),
Convert.ToDouble(value))) / Math.Sqrt(5));
private static double golden()
return (1 + Math.Sqrt(5)) / 2;
Figure 2 - Closed Form Test Code O(1)
public static double ClosedFormFib(double n)
fib f = new fib();
f.next = n;
return f.next;
Figure 3 - Linear Fib - Actually O(n log n)
The author who posted this algorithm incorrectly marked it as a linear solution when in fact it measures as O(n log n).
public static double LinerFib(double n)
double previous = -1;
double result = 1;
double sum = 0;
for (double i = 0; i <= n; ++i)
sum = result + previous;
previous = result;
result = sum;
return result;
Figure 4 - Linear Recursive Fib - Again O(n log n)
public static double LinerRecursiveFib(double n)
return Fib(n, 1, 1);
private static double Fib(double n, double n1, double n2)
if (n == 1 || n == 2)
return 1;
else if (n == 3)
return n1 + n2;
return Fib(n - 1, n1 + n2, n1);
Figure 5 - Exponential Fib - O(n^2)
public static int ExponentialRecursiveFib(double n)
if (n == 1 || n == 2)
return 1;
return ExponentialRecursiveFib(n - 1) +
ExponentialRecursiveFib(n - 2);
At this point you may be thinking, 'Well, this is good if the algorithm takes an integer as input for loading. What about other algorithms which don't, such as sorting?' To answer this question,
the data type and algorithm must first be considered. For example, a string array bubble sort. For testing this algorithm using the asymptotic approach, I would create a constructor in the class
under test. Something like:
Figure 6 - Testing non-numeric data types and algorithms
ArrayList al = new ArrayList();

void LoadMe() { LoadMe(1000); }

void LoadMe(double d)
{
    // Create random string elements, add them to a string array,
    // and add each string array to the ArrayList.
}

public static double TestBubbleSort(double n)
{
    // Get each string array from the ArrayList, then call the bubble sort
    // on the string array.
}
This method will work for loading and testing without having to custom-instrument the test code. The cost of loading the ArrayList is incurred at start-up; however, the data size must be hard-coded
into the test code.
Interesting things about heuristics
The heuristic uses the Meta.Numerics.Statistics class library to find a strong correlation between the items in two series. Series 1 (test results) contains the actual values measured during the
test run. Series 2 (control group) is a series of values generated from an equation of the following form:
Figure 7 - Big 'O' function series generation
//O(n log n)
static double nLogn(int n)
{
    double r = Math.Log(n) * n;

    // Guard against NaN or infinite results (e.g. when n is 0).
    if (double.IsNaN(r) || double.IsInfinity(r))
        return 0;

    return r;
}

public static DecoratedMarker nLognLine(int n)
{
    double[] ret = new double[n];
    for (int y = 0; y < n; y++)
        ret[y] = Functions.nLogn(y);
    return new DecoratedMarker(ret, "O(n log n)", null);
}
The above is computed for each Big 'O' function under test. A function library can be found in the solution and is easily updated. The exact statistical formula used is Pearson's rho. More
information on Pearson's rho can be found on other wikis. Other formulas could also be used; some ideas I have considered are comparing the area under the curves, correlation of ratios, etc. The
statistical package is placed in its own class so additional packages can easily be loaded or overloaded.
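To make the heuristic concrete, here is a rough Python sketch of the correlation step; the candidate set and the use of numpy's corrcoef are my own choices and only stand in for the article's Meta.Numerics code:

import math
import numpy as np

# Candidate Big O shapes; assumes the infinity points ns are all greater than 1.
CANDIDATES = {
    "O(log n)":   lambda n: math.log(n),
    "O(n)":       lambda n: float(n),
    "O(n log n)": lambda n: n * math.log(n),
    "O(n^2)":     lambda n: float(n) ** 2,
}

def best_fit(ns, times):
    # ns: the infinity points used; times: the measured running times.
    times = np.asarray(times, dtype=float)
    scores = {}
    for name, f in CANDIDATES.items():
        control = np.asarray([f(n) for n in ns], dtype=float)
        scores[name] = np.corrcoef(times, control)[0, 1]  # Pearson's r
    return max(scores, key=scores.get), scores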
Interesting things about the graph
I really don't totally understand the magic of modulus. I do value its power, though. I had to tinker with the modulus equations to get the scaling functions to work in the prototype. Perhaps I need
to brush up on modulus. Here are the modulus functions for scaling:
((dm.Point[1] * PointScale % infinityPoint) / 10) * PointRullerDivision; // Gets the vertical scaling.
(aggT * TimeScale % totTime) / 10 * TimeScaleDivision; // Gets the horizontal scaling.
PointScale is calculated by dividing 100 by infinityPoint (this is an inverse function), making the function base 100. PointRullerDivision is calculated by dividing the graph Canvas by 100
(the complement to the inverse function). The horizontal (x axis, time) scale is calculated the same way; however, the aggregate time, aggT, is used in place of the point index dm.Point[1]. The
total time is calculated using a LINQ to Objects lambda function (also pure magic as far as I know).
Math.Round(dms.Sum(p => p.Point[0]),0)
I thought it was interesting that after refactoring the prototype to use Derek's graph the modulus functions still worked. Derek's scaling calculates the scale by dividing sections of the scale in a
recursive loop. I did run into a small problem with a rounding error after refactoring. Sometimes it's the little things that make a big difference. Rounding the
to Zero decimal places and not rounding the
causes a displacement in the performance graph where the x axis grid lines do not match a point on the graph, but will not work with Derek's graph unless the total time is rounded. The prototype
works perfectly.
Another point of interest is Derek's grid points. If you get the rendered geometry of one of his grid points, the bounds evaluate to infinity. I tried all methods of getting the position to correct
the rounding error. Nothing worked! Must be magic. So the featured application has many workarounds for the integration of the WPFGraph.
The Power Graph
The power graph is what I think of as a clever way to visualize the performance of the algorithm under test. The formula used is only a weighted metric.
Pseudo code:
If unit n-1 is two times n, then color value = color value - 10
This does, of course, have a dependency on the grid lines, which are equally spaced vertically in increments of 10 and then asymmetrically spaced horizontally according to where the vertical line and the new point cross the x,y axes. This produces a visual which does not distort the mathematical formula of the graph, i.e. linear, log, or exponential.
In conclusion, I believe this research is unique among the work I have done and demonstrated. While it is currently causing controversy, I believe it is a significant advancement in where the 'rubber meets the road'. Mathematical proofs for algorithms can be useful, but where engineering is concerned the proof is not the pudding! This is often the case between theory, development, and engineering. It is my hope that this tool will become useful to mathematicians and engineers for establishing real, live estimations of how an algorithm runs in real time. I plan to complete this prototype and add features that allow an algorithm to be written without Visual Studio, right in the application itself. I will also be converting the MVP pattern to MVVM, and taking a further look at the heuristic model being used, which is what seems to be causing the controversy. If anyone with expertise would like to join me in this venture, just let me know. The final product will be posted on CodePlex, with follow-up articles here on The Code Project to demonstrate its usefulness and the areas where it is not acceptable for use. That is it for now; this article is a wrap!
Discounted Cash Flows Analysis - Cypress Business Brokers, LLC
Discounted Cash Flows Analysis
What’s My Business Worth? – Part III
This is Part III in a series devoted to taking the mystery out of business valuation. In previous episodes, we covered some of the basic concepts of business valuation, and the three basic approaches
to business valuation analysis. In this episode, we are going to look at how we use discounted cash flow analysis to estimate the value of a future stream of cash flows.
This presentation contains general information about the valuation process, however it is not intended to give you advice about your own particular situation. You should always consult with your own
advisors and should engage a qualified professional to assist in any valuation assessment.
You can watch our latest video presentation or read on your own below.
Time Value of Money (Present Value)
Common sense tells us that cash to be received sometime in the future is worth less than cash in your hand today. Why? It's all about risk. There is the risk that you will never realize the expected future cash flow. Remember Wimpy? "I'll gladly pay you on Tuesday for a hamburger today." Unfortunately, he never specified which Tuesday. Realization of that cash flow could only be described as uncertain, at best.
Anyone who has ever had a variable rate loan has experienced interest rate risk. As we will see in a minute, bond investors accept interest rate risk when they buy a bond. The value of their
investment will fluctuate with market interest rates.
Opportunity cost is a way of saying that we have alternative uses for our cash: we could invest it in this opportunity, in another, sit on it, or spend it.
Discounting is the mathematical technique we use to measure and account for risk.
Example – Bond Valuation
A treasury bond is the classic example of a financial instrument with zero credit risk. Whether or not that is still true is beyond the scope of our discussion today, but let’s assume that it is.
Let’s say we could buy a bond today for $10,000 that will mature in one year and repay our investment plus 5% interest. How much will we receive? We will receive our original principal plus the
interest earned on our investment. The interest earned is simply our investment amount ($10,000) times the interest rate (.05)
Some Key Definitions:
• Future value(FV) – is the amount we expect to receive sometime in the future.
• Present value(PV) – the amount we will invest today to receive the future value.
• Discount rate(R) – the implicit interest rate that we use to calculate present value
On to the math:
Finance, like physics, is really a field of applied mathematics. I’ll try to keep the math as simple as possible, no more than high school algebra.
As we saw on our bond example earlier, the future value is the present value plus interest earned over the period of investment. Expressed as an equation, $FV = PV + I$, where I = the interest
earned over the period of investment. The amount of interest earned in one period is equal to the interest rate times the principal amount of the investment. $I = PV X R$.
Substituting the PV times R for I in the first equation, we get $FV = PV + (PV * R)$ which we can simplify to $FV = PV X (1+R)$.
From our example, investing $10,000 at 5% for one year we get 10,000 * 1.05 = 10,500. Looking at it the other way, the right to receive $10,500 in one year is worth $10,000 today.
Compounding Interest
Suppose we invested $10,000 earning 5% for 2 years? Here we see the effect of compounding.
At the end of year 1, we have $FV = PV X (1+R)$, which we re-invest for another year. At the end of year two we would have earned interest on our re-investment amount, giving us: $FV = (PV X (1+R)) X (1+R) = PV X (1+R)^2$.
We can generalize the equation for compounding as follows: $FV = PV X (1+R)^n$ where n is the number of compounding periods. Dividing both sides of the equation by $(1+r)^n$, we solve for present
value: $PV = FV / (1+R)^n$.
Using our treasury bond example, we expect to receive $11,025 in two years. The market interest rate is 5%, that is we would expect to receive 5% on a different investment with the same risk over the
same term. Going through the calculation, you can see that the present value is our original $10,000.
• $PV = FV / (1+R)^n$ where FV = $11,025, R = 0.05 and n = 2
• PV = (11,025) / (1.05 X 1.05)
• PV = 11025 / 1.1025
• PV = $10,000
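As a quick sanity check, the present-value formula is only a few lines of code; the figures below are simply the bond example's ($11,025 future value, 5% or 6%, 2 years), and the class name is mine.
using System;

static class PresentValue
{
    //PV = FV / (1 + R)^n
    public static double Of(double futureValue, double rate, int periods)
    {
        return futureValue / Math.Pow(1 + rate, periods);
    }
}

//PresentValue.Of(11025, 0.05, 2)  ->  10,000.00
//PresentValue.Of(11025, 0.06, 2)  ->  about 9,812 (see the interest rate risk example below)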
Interest Risk Illustrated
Now let's suppose we are looking at a bond that was sold with a coupon rate of 5% which will pay $11,025 in two years, but in today's market similar investments are earning 6%. How much is this bond worth now?
We perform the same calculation as before, using a 6% discount rate
• $PV = FV / (1+R)^n$ where FV = $11,025, R = 0.06 and n = 2
• PV = (11,025) / (1.06 X 1.06)
• PV = 11025 / 1.1236
• PV = $9,812
and arrive at a present value of $9,812. This says we could invest $9,812 for 2 years at 6% and have $11,025 at the end, the same as this bond. Remember when we mentioned interest rate risk earlier? This is an example of interest rate risk. If we had paid $10,000 for this bond earlier and wanted to sell it now, it is only worth $9,812; the difference ($188) is due to interest rate risk.
Selecting a Discount Rate
The choice of a discount rate is similar to the analysis we went through in the last program to select an earnings multiple. It involves assessing operating risk, that is how much uncertainty
surrounds earnings forecasts. It also involves macroeconomic forecasts of future interest rates, and comparing with expected returns associated with alternative uses of investment funds.
There are a number of factors to consider in evaluating operational risk. Let’s take a look at a few of them:
Revenues: Obviously revenues, or sales, is important, as revenues are the source of profits. Level of revenues is important, but we should also consider trends in revenues: is it growing year over
year, stagnant, or declining? How does this company’s revenue levels compare with other, similar companies?
Predictability of Revenues: How likely is it that future revenues can be estimated based upon past revenue levels? If a company relies on a few, major customers, it is much riskier than a company
that has a large, diverse customer base. The loss of even one major customer could change a profitable company into a loser. Repeat business, or business coming from referrals of past customers is an
indicator of customer satisfaction with the company and its products or services. Looking at accounts receivable collection history can give us a perspective on the quality of customers. High
bad-debt write-offs indicate a high-risk customer base. After all, a sale is only a sale when cash is received.
Earnings: When someone buys or invests in a business, what they really are buying is the future stream of cash flows or profits. We should look at profit margins, return on sales and return on
investment, to evaluate the business performance and compare it with its peers. What are the trends in profits? A consistent trend of increasing profits would justify a lower discount rate than a
level or declining profit trend.
Competitive Environment: Understanding the competitive environment is critical to evaluating the risk to the company's profit trend. How likely is it that a new competitor will enter the market? Are there any barriers to entry that would make it more difficult for a new competitor? Is there something unique about the business being evaluated? How does the company position its products or services? Is it a premium product or a discount brand?
Reliance on the Owner: Does the company have written procedures and processes, or are they carried around in the owner's head? Are customers and suppliers loyal to their relationship with the owner, or to the company's products and services? Is there a management team in place to operate the business in the owner's absence? Could a buyer expect that the owner will be cooperative during the transition? In a sale, is the owner willing to partially finance the deal?
Discounted Cash Flow Method:
There are four major steps in performing a discounted cash flow analysis:
1. First we project future earnings, typically over 5 years but it could be longer or shorter.
2. Estimate a selling price for the business at the end of the projection timeline.
3. Determine an appropriate discount rate. The discount rate represents a risk-adjusted opportunity cost of capital for a prospective buyer or investor.
4. Calculate the present value of the projected cash flow stream.
Example Discounted Cash Flow Analysis
Let’s perform the four steps based on the data in Figure 1 above:
1. We’ve prepared a five-year projection of earnings for our example company. In the first year, we expect $10,000, and we expect earnings to grow by 6% per year thereafter.
2. Based upon forecasted year 5 earnings, we expect to sell the company at the end of year 5 for $34,719. The total cash received column is simply the yearly earnings added to the proceeds of sale.
3. We have chosen a discount rate of 35%. As we discussed in our previous article, investment in a small business is significantly more risky than investing in the stock market, bond market, or real
estate. The 35% is based on our best estimate for risks associated with the projected future cash flows.
4. We then divide each year's cash receipt by (1 + the discount rate), compounded for the number of years, to get the present value of that year's cash flow. Note that in year 2 the divisor is 1.35 X 1.35.
We sum up the present values and arrive at a valuation estimate of $31,934.
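For readers who like to check the arithmetic, here is a small sketch that reproduces the figure above from the example's assumptions ($10,000 first-year earnings growing 6% per year, a $34,719 sale at the end of year 5, and a 35% discount rate); the code itself is illustrative only.
using System;

class DcfExample
{
    static void Main()
    {
        double earnings = 10000;      //year-1 earnings from the example
        double growth = 0.06;         //6% annual earnings growth
        double salePrice = 34719;     //estimated sale proceeds at the end of year 5
        double discount = 0.35;       //risk-adjusted discount rate

        double totalPv = 0;
        for (int year = 1; year <= 5; year++)
        {
            double cash = earnings;
            if (year == 5) cash += salePrice;             //add the sale proceeds in the final year
            totalPv += cash / Math.Pow(1 + discount, year);
            earnings *= 1 + growth;                       //grow next year's earnings
        }
        Console.WriteLine(totalPv.ToString("N0"));        //prints roughly 31,934, matching the estimate above
    }
}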
To summarize, expected cash flow in the future is less valuable than cash-in-hand today. We use discounting as a technique to estimate the value of expected future cash flows. In the discounted cash
flow method, we estimate future cash flows, and discount using a risk-adjusted discount rate to arrive at the present value.
I hope this has been informative. In the next installment, we’ll cover financial statement analysis. Then we’ll put it all together with a valuation analysis of an example company.
Cypress Business Partners offers a free estimate of the value of your business. Please let us know how we can be of service to you. Visit us on the web, or on Facebook. If you would like to be
informed when the next presentation is available, please send me your email address. I promise to respect your privacy and will never share your information with anyone.
How does "by one stat, by two stats and by three stats" work?
So in-battle stat-boosting works like this:
1 boost = The stat is multiplied by 1.5, so if your stat was 100 it will now be 150.
2 boosts = The stat is doubled, so if your stat was 100 it will now be 200
3 boosts = The stat is multiplied by 2.5, so if your stat was 100 it will now be 250
4 boosts = The stat is tripled, so if your stat was 100 it will now be 300
5 boosts = The stat is multiplied by 3.5, so if your stat was 100 it will now be 350
6 boosts = The stat is quadrupled, so if your stat was 100 it will now be 400.
The same applies to lowered stats, just in reverse. Note that all stats are reset after battles.
So if you were to use Iron Defense and then Cotton Guard, you would get 5 boosts in Defense, so your Defense stat would go from 100 to 350.
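If it helps to see the pattern, each positive stage just adds half of the original stat; here is a small sketch of the multiplier (the negative-stage half, 2/(2+n), is my reading of "the same in reverse", so treat it as an assumption).
using System;

static class StatStages
{
    //+n stages -> (2 + n) / 2; -n stages -> 2 / (2 + n) (assumed "reverse" case).
    //Stages are clamped to the usual -6..+6 range.
    public static double Multiplier(int stages)
    {
        stages = Math.Max(-6, Math.Min(6, stages));
        return stages >= 0 ? (2.0 + stages) / 2.0 : 2.0 / (2.0 - stages);
    }
}

//Example: Iron Defense (+2) then Cotton Guard (+3) gives +5 stages,
//so 100 * StatStages.Multiplier(5) = 350.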
Hamiltonian System
March 28th 2010, 10:54 AM #1
Mar 2008
Hamiltonian System
Consider a system of the form:
$\frac{dx}{dt} = f(y), \qquad \frac{dy}{dt} = g(x)$
That is, dx/dt depends only on y and dy/dt depends only on x.
What is the Hamiltonian function?
Compute the Lagrangian $L$ and then
$H = \sum_{i} p_{i}\dot{q}_{i} - L$
where $p_{i} = \frac{\partial L}{\partial \dot{q}_{i}}$ are the canonical momenta conjugate to the $q_{i}$ coordinates and $\dot{q}_{i} = \frac{d}{dt}q_{i}$.
Last edited by Ruun; March 28th 2010 at 11:34 AM. Reason: typo
Do you know the physical interpretation of the Hamiltonian? We are talking about classical mechanics I guess
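For this particular form, a sketch of the standard construction can be written down directly (taking the usual sign convention $\frac{dx}{dt} = \frac{\partial H}{\partial y}$, $\frac{dy}{dt} = -\frac{\partial H}{\partial x}$ as an assumption):

$H(x,y) = \int f(y)\,dy - \int g(x)\,dx$

Differentiating back gives $\frac{\partial H}{\partial y} = f(y)$ and $-\frac{\partial H}{\partial x} = g(x)$, which reproduces the system above.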
Parimutuel betting
Parimutuel betting (from the French: Pari Mutuel or mutual betting) is a betting system in which all bets of a particular type are placed together in a pool; taxes and the "house-take" or "vig" are
removed, and payoff odds are calculated by sharing the pool among all winning bets. In some countries it is known as the Tote after the totalisator which calculates and displays bets already made.
The parimutuel system is used in gambling on horse racing, greyhound racing, jai alai, and all sporting events of relatively short duration in which participants finish in a ranked order. A modified
parimutuel system is also used in some lottery games.
Parimutuel betting differs from fixed-odds betting in that the final payout is not determined until the pool is closed – in fixed odds betting, the payout is agreed at the time the bet is sold.
Parimutuel gambling is frequently state-regulated, and offered in many places where gambling is otherwise illegal. Parimutuel gambling is often also offered at "off track" facilities, where players
may bet on the events without actually being present to observe them in person.
Consider a hypothetical event which has eight possible outcomes, in a country using a decimal currency such as dollars. Each outcome has a certain amount of money wagered:
Outcome 1: $30.00
Outcome 2: $70.00
Outcome 3: $12.00
Outcome 4: $55.00
Outcome 5: $110.00
Outcome 6: $47.00
Outcome 7: $150.00
Outcome 8: $40.00
Thus the total pool of money on the event is $514.00. Following the start of the event, no more wagers are accepted. The event is decided and the winning outcome is determined to be Outcome 4 with
$55.00 wagered. The payout is now calculated. First the commission or take for the wagering company is deducted from the pool; for example with a commission rate of 14.25% the calculation is: $514 ×
(1 - 0.1425) = $440.76. The remaining amount in the pool is now distributed to those who wagered on Outcome 4: $440.76 / $55 ≈ $8 per $1 wagered. This payout includes the $1 wagered plus an
additional $7 profit. Thus, the odds on Outcome 4 are 7-to-1 (or, expressed as decimal odds, 8.01).
Often at certain times prior to the event, betting agencies will provide approximates for what should be paid out for a given outcome should no more bets be accepted at the current time. Using the
wagers and commission rate above (14.25%), an approximates table in decimal odds would be:
Outcome 1: $14.69
Outcome 2: $6.30
Outcome 3: $36.73
Outcome 4: $8.01
Outcome 5: $4.01
Outcome 6: $9.38
Outcome 7: $2.94
Outcome 8: $11.02
In real-life examples such as horse racing, the pool size often extends into millions of dollars with many different types of outcomes (winning horses) and complex commission calculations.
Sometimes the amounts paid out are rounded down to a denomination interval—in the United States and Australia, 10¢ intervals are used. The rounding loss is sometimes known as breakage and is retained
by the betting agency as part of the commission.
In horse racing, a practical example of this circumstance might be when an overwhelming favorite wins. The parimutuel calculation results might call for a very small winning payout (say, $1.02 or
$1.03 on a dollar bet), but the legal regulation would require a larger payout (e.g., $1.10 on a dollar bet). In North America, this condition is usually referred to as a minus pool.
Algebraic summary
In an event with a set of n possible outcomes, with wagers W[1], W[2], …, W[n] the total pool of money on the event is
$W_T = \sum_{i=1}^n W_i.$
After the wagering company deducts a commission rate of r from the pool, the amount remaining to be distributed between the successful bettors is W[R] = W[T](1 − r). Those who bet on the successful
outcome m will receive a payout of W[R] / W[m] for every dollar they bet on it.
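For readers who prefer code to prose, here is a small sketch of that payout calculation using the eight-outcome example and 14.25% commission from above; it is illustrative only and leaves out the rounding-to-breakage step.
using System;
using System.Linq;

class ParimutuelExample
{
    static void Main()
    {
        //Wagers on outcomes 1..8 from the worked example above.
        double[] wagers = { 30, 70, 12, 55, 110, 47, 150, 40 };
        double commission = 0.1425;

        double pool = wagers.Sum();                        //$514.00
        double distributable = pool * (1 - commission);    //$440.76

        int winner = 4;                                    //outcome 4 wins
        double decimalOdds = distributable / wagers[winner - 1];

        Console.WriteLine("Payout per $1 on outcome {0}: {1:F2}", winner, decimalOdds);   //about 8.01
    }
}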
The parimutuel system was invented by Catalan impresario Joseph Oller in 1867.^[1]
The large amount of calculation involved in this system led to the invention of a specialized mechanical calculating machine known as a totalisator, "automatic totalisator" or "tote board", invented
by the Australian engineer, George Alfred Julius. The first was installed at Ellerslie Racecourse, Auckland, New Zealand in 1913, and they came into widespread use at race courses throughout the
world. The U.S. introduction was in 1927, which led to the opening of the suburban Arlington Racetrack in Arlington Park, near Chicago and Sportsman's Park in Cicero, Illinois, in 1932.^[2]
Strategy and comparison with independent bookmakers
Unlike many forms of casino gambling, in parimutuel betting the gambler bets against other gamblers, not the house. The science of predicting the outcome of a race is called handicapping.
It is possible for a skilled player to win money in the long run at this type of gambling, but overcoming the deficit produced by taxes, the facility's take, and the breakage is difficult to
accomplish and few people are successful at it.
Independent off-track bookmakers have a smaller take and thus offer better payoffs, but they are illegal in some countries. However, with the introduction of Internet gambling has come "rebate
shops". These off-shore betting shops in fact return some percentage of every bet made to the bettor. They are in effect reducing their take from 15-18% to as little as 1 or 2%, still ensuring a
profit as they operate with minimal overhead. Rebate shops allow skilled horse players to make a steady income.
The recent World Trade Organization decision DS285^[3] against the United States of America by the small island nation of Antigua opens the possibility for offshore horse betting groups to compete
legally with parimutuel betting groups.
Parimutuel bet types
There may be several different types of bets, in which case each type of bet has its own pool. The basic bets involve predicting the order of finish for a single participant, as follows:
North America
In Canada and the United States, the most common types of bet on horse races include:
• Win: to succeed the bettor must pick the horse that wins the race.
• Place: the bettor must pick a horse that finishes either first or second.
• Show: the bettor must pick a horse that finishes first, second or third.
• Across the board: the bettor places three separate bets to win, place or show.
• Exacta, perfecta, or exactor: the bettor must pick the two horses that finish first and second, in the exact order.
• Trifecta or triactor: the bettor must pick the three horses that finish first, second, and third, in the exact order.
• Superfecta: the bettor must pick the four horses that finish first, second, third and fourth, in the exact order.
• Box: a box can be placed around exotic betting types such as exacta, trifecta or superfecta bets. This places a bet for all permutations of the numbers in the box. An exacta box with two numbers,
commonly called quinella or quiniela, is a bet on either of two permutations: A first and B second, or B first and A second. A trifecta box with three numbers has six possible permutations (of
the horses in the "box" three can finish first, two can finish second, and one can finish third: 3 × 2 × 1) and costs six times the betting base amount. A trifecta box with five numbers has 60
possible permutations and costs 60 times the betting base amount (5 × 4 × 3). In France, a "box" gives only the ordered permutations going along an ordered list of numbers such that a trifecta
box with six numbers would cost 20 times the base amount.
• Any2 or Duet: The bettor must pick the two horses who will place first, second or third but can finish in any order. This could be thought of as a double horse show key (see below).
• Double: the bettor must pick the winners of two successive races (a 'running' or 'rolling' double); most race tracks in Canada and the United States take double wagers on the first two races on
the program (the daily double) and on the last two (the late double).
• Triple: the bettor must pick the winners of three successive races; like doubles, many tracks offer "running" or "rolling" triples. Also called pick three or more commonly, a treble.
• Quadrella or Quaddie: The bettor must pick the winners of four nominated races at the same track.
• Pick six or Sweep six: Traditionally, the bettor must pick the winners of six consecutive races. However, there are variants ranging from three to nine races, with a four-race bet known as a Pick
Four. Exclusively for the pick six, a progressive jackpot is sponsored by the host track and available at its satellite locations which grows until someone picks six winners correctly. There is
also a consolation prize for those who pick five winners correctly, divided amongst the number of tickets registered in the system with five out of six right, in a case where nobody gets five or
six winners, a four out of six consolation prize may occur. A Place Pick Nine makes up for the increased difficulty of the high number of races by allowing a second-place finish for a bettor's
selected horse to count as a win.
Win, place and show wagers class as straight bets, and the remaining wagers as exotic bets. Bettors usually make multiple wagers on exotic bets. A box consists of a multiple wager in which bettors
bet all possible combinations of a group of horses in the same race. A key involves making a multiple wager with a single horse in one race bet in one position with all possible combinations of other
selected horses in a single race. A wheel consists of betting all horses in one race of a bet involving two or more races. For example a 1-all daily double wheel bets the 1-horse in the first race
with every horse in the second.
People making straight bets commonly employ the strategy of an "each way" bet. Here the bettor picks a horse and bets it will win, and makes an additional bet that it will show, so that theoretically
if the horse runs third it will at least pay back the two bets. The Canadian and American equivalent is the bet across (short for across the board): the bettor bets equal sums on the horse to win,
place, and show.
In Canada and the United States bettors make exotic wagers on horses running at the same track on the same program. In the United Kingdom bookmakers offer exotic wagers on horses at different tracks.
Probably the Yankee occurs most commonly: in this the bettor tries to pick the winner of four races. This bet also includes subsidiary wagers on smaller combinations of the chosen horses; for
example, if only two of the four horses win, the bettor still collects for their double. A Trixie requires trying to pick three winners, and a Canadian or Super Yankee trying to pick five; these also
include subsidiary bets. There are also other bets which are large combinations of singles, doubles, trebles and accumulators some of them are called Lucky 15, Lucky 31, Heinz, Super Heinz, Goliath.
The term nap identifies the best bet of the day.
A parlay, accumulator or roll-up consists of a series of bets in which bettors stake the winnings from one race on the next in order until either the bettor loses or the series completes
Australia/New Zealand
• Win: Runner must finish first.
• Place: Runner must finish first, second or third place. In events with five to seven runners, no dividends are payable on third place (signified by "NTD" or No Third Dividend) and in events with
4 or fewer runners, only Win betting is allowed.
• Each-Way: A combination of Win and Place. A $5 bet Each-way is a $5.00 bet to Win and a $5.00 bet to Place, for a total bet cost of $10.
• Exacta: The bettor must correctly pick the two runners which finish first and second.
• Quinella: The bettor must pick the two runners which finish first and second, but need not specify which will finish first.
• Trifecta: The bettor must correctly pick the three runners which finish first, second, and third.
• First4: The bettor must correctly pick the four runners which finish first, second, third and fourth.
• Duet: The bettor must pick the two horses who will place first, second or third but can finish in any order.
• Running Double: The bettor must pick the winners of two consecutive races at same track.
• Daily Double: The bettor must pick the winners of two nominated races at the same track.
• Treble: The bettor must pick the winners of three nominated races at the same track. This bet type is only available in the states of Queensland and South Australia.
• Quadrella or Quaddie: The bettor must pick the winners of four nominated races at the same track.
• Big 6: The bettor must pick the winners of six nominated races, which can be at the same track or split over two or more tracks.
In Australia, certain exotic bet types can be laid as "flexi" bets. Usually the price of an exotic bet is determined by a set multiple of the outcome, for example $60 for a five horse boxed trifecta
at one unit ($1)—or $30 at half unit (50c). If the bet is successful, the bettor will get either the full winning amount shown on the board, or half the winning amount. Under a flexi system the
bettor can nominate their desired total wager, and their percentage of payout is determined by this wager's relationship to the full unit price. Using a five horse box trifecta, the bettor may wish
to lay only $20 on the outcome. Their percentage of winnings is now calculated as $20/$60 = 33.3%. If the bet is successful, the payout will be 33.3% of the winning amount for a full unit bet.
In recent times the "Roving Banker" variant for Trifecta and First4 betting is now offered. For a Roving Banker First4 the player selects one, two or three runners they believe will definitely finish
1st, 2nd, 3rd or 4th, and up to three selections as Roving Banker(s) with other runners to fill the remaining place(s). A Roving Banker Trifecta is where the player believes that one or two runners
will definitely finish 1st, 2nd or 3rd. The bet can be placed by picking the player's favourite runner to finish in any place within the bet and complete the Trifecta with any number of other runners
to fill the other placing(s).^[4]
The following pools are operated at meetings in mainland Britain:
• Win: Runner must finish first.
• Place: Runner must finish within the first two places (in a 5–7 runner race), three places (8–15 runners and non-handicaps with 16+ runners) or four places (handicaps with 16+ runners).
• Each-way: Charged and settled as one bet to win and another bet to place (for example, a punter asking for a bet of "five pounds each way" will be expected to pay ten pounds).
• Scoop6: Pick the winner (for the win fund) or a placed horse (for the place fund) from the six advertised Scoop6 races. Saturdays only.
• Jackpot: Pick the winner from each of the first six races of the advertised Jackpot meeting of the day.
• Placepot: Pick a placed horse from each of the first six races from any British race meeting.
• Quadpot: Pick a placed horse from the third, fourth, fifth and sixth race from any British race meeting.
• Trifecta: The bettor must correctly pick the three runners which finish first, second, and third, in the correct order.
• Exacta: The bettor must correctly pick the two runners which finish first and second, in the correct order.
• Swinger: The bettor must correctly pick two runners to finish in the places, both runners must place, in any order.
• Super7
Tote Ireland operates the following pools:
• Win: Runner must finish first
• Place: Runner must finish within the first two places (in a 5–7 runner race), three places (8–15 runners and non-handicaps with 16+ runners) or four places (handicaps with 16+ runners). (From 23
April 2000 to 23 May 2010, Tote Ireland operated 4-place betting on all races with 16 or more runners.)
• Each-way: Charged and settled as one bet to win and another bet to place (for example, a punter asking for a bet of "five euro each way" will be expected to pay ten euro).
• Jackpot: A Pick 4 bet on races 3–6 at every meeting.
• Pick Six: On races 1–6 at one meeting on all Sundays and occasionally on other days (introduced on 9 January 2011).
• Placepot: The better must correctly pick one horse to place in each of the races 2–7.
• Exacta: The bettor must correctly pick the two runners which finish first and second, in the correct order.
• Trifecta: The bettor must correctly pick the three runners which finish first, second, and third, in the correct order (introduced on 26 May 2010).
• Daily Double: The bettor must correctly pick the winners of race 5 and race 6 (introduced on 22 January 2011).
Different arrangements apply at the two recognised racecourses in Northern Ireland. Tote Ireland operates pools on racing at Down Royal, but at Downpatrick the tote pools are operated independently
by Datatote who also run all greyhound pools in Ireland and most greyhound pools in the UK.
Bet types for harness racing (trotting):
• Vinnare (winner): Runner must finish first.
• Plats (place): Runner must finish within the first two places (up to five runners) or first three places (six runners or more).
• Vinnare & Plats: Two bets, one on "vinnare" and one on "plats" for the same runner. Asking for a bet of "50 SEK vinnare och plats" costs 100 SEK
• Tvilling (twin): The bettor must pick the runners that finish first and second, but need not specify which will finish first.
• Trio (trio): The bettor must pick the runners that finish first, second and third in a nominated race.
• Dagens Dubbel (daily double) and Lunchdubbel (lunch double): The bettor must pick the winners of two nominated races at the same track.
• V3: The bettor must pick the winners of three nominated races at the same track. Unlike V4, V5, V65 and V75, where a bet for all races must be made before the start of the first race, in V3 the
bettor selects the winner one race at a time.
• V4: The bettor must pick the winners of four nominated races at the same track.
• V5: The bettor must pick the winners of five nominated races at the same track.
• V65: The bettor must pick the winners of six nominated races at the same track. Return is also given for (combinations of) five correctly picked winners, even if the same bet included all six winners.
• V64: The bettor must pick the winners of six nominated races at the same track. Return is also given for (combinations of) five or four correctly picked winners, even if the same bet included
more correct picks.
• V75: The bettor must pick the winners of seven nominated races at the same track. Return is also given for (combinations of) six or five winners picked correctly, even if the same bet included
more correct picks. The betting pool is split into three separate pools for all combinations of seven (40%), six (20%) and five (40%) correctly picked winners. This is the largest nationwide
betting game in Sweden, running each Saturday with weekly pools of about 80 MSEK ($11 million).
• V86: The bettor must pick the winners of eight nominated races at the same track. Return is also given for (combinations of) seven or six winners picked correctly, even if the same bet included
more correct picks. The betting pool is split into three separate pools for all combinations of eight (40%), seven (20%) and six (40%) correctly picked winners.
Hong Kong
The Hong Kong Jockey Club (HKJC) operates the following bet types and pools.
• Win: Select correctly the 1st horse in a race.
• Place: Select correctly the 1st, 2nd or 3rd horse in a race with 7 or more declared starters, alternatively select correctly the 1st or 2nd in a race where there are 4 to 6 declared starters.
• Quinella: Select correctly the 1st and 2nd horses in any order in a race.
• Quinella Place: Select correctly any two of the first three placed horses in any order in a race.
• Tierce: Select the 1st, 2nd and 3rd horses in the correct order in a race.
• Trio: Select correctly the 1st, 2nd and 3rd horses in any order in a race.
• First 4: Select correctly the 1st, 2nd, 3rd and 4th horses in any order in a race.
• Double Trio: Select correctly the 1st, 2nd and 3rd horses in any order in each of the two nominated races.
• Triple Trio: Select correctly the 1st, 2nd and 3rd in any order in each of the three nominated races. There is a consolation prize given under the conditions that the player has selected
correctly the 1st, 2nd and 3rd horses in any order in the first two Legs of the three nominated races.
• Double: Select correctly the 1st horse in each of the two nominated races. There is a consolation prize given under the conditions that the player has selected correctly the 1st horse in the
first nominated race and the 2nd horse in the second nominated race.
• Treble: Select correctly the 1st horse in each of the three nominated races. There is a consolation prize given under the conditions that the player has selected correctly the 1st horse in the
first two Legs and the 2nd horse in the third Leg of the three nominated races.
• Six Up: Select correctly the 1st or 2nd horse in each of the six nominated races. There is a consolation prize given under the conditions that the player has selected correctly the 1st horse in
each of the six nominated races.
In Japan, Keiba (競馬, horse racing), Keirin (競輪, professional cycling), Kyōtei (競艇, hydroplane racing), and Auto Race (オートレース, motorcycle racing) operate the following bet type.^[5]^[6]^[7
]^[8] Wager must be a multiple of 100 yen except Each-way.
• Win (単勝, Tanshō): Runner must finish first (Keiba, Kyōtei and Auto Race).
• Place-Show (複勝, Fukushō): Runner must finish within the first two places (in a seven runners or fewer race) or three places (in an eight runners or more race) (Keiba, Kyōtei and Auto Race).
• Each-way (応援馬券, Ōen Baken): To place one bet to Win and another bet to Place-Show. (For example, betting 1,000 yen to Each-way means betting 500 yen to Win and 500 yen to Place-Show.) Wager
must be multiple of 200 yen (Keiba with JRA's operation only).
• Bracket Quinella (枠番連勝複式, Wakuban Renshō Fukushiki), abbreviated as Waku-ren (枠連): The bettor must pick the two bracket numbers which finish first and second, but need not specify which
will finish first. A bracket number (枠番, Wakuban) means runner's cap color (1: White; 2: Black; 3: Red; 4: Blue; 5: Yellow; 6: Green; 7: Orange; 8: Pink) (Keiba, Keirin and Auto Race).
• Bracket Exacta (枠番連勝単式, Wakuban Renshō Tanshiki), abbreviated as Waku-tan (枠単): The bettor must correctly pick the two bracket numbers which finish first and second (Keiba with some local
governments' operation only).
• Quinella (連勝複式, Renshō Fukushiki), abbreviated as Uma-ren (馬連), Ni-sha-fuku (2車複) or Ni-renpuku (2連複): The bettor must pick the two runners which finish first and second, but need not
specify which will finish first (Keiba, Keirin, Kyōtei and Auto Race).
• Exacta (連勝単式, Renshō Tanshiki), abbreviated as Uma-tan (馬単), Ni-sha-tan (2車単) or Ni-ren-tan (2連単): The bettor must correctly pick the two runners which finish first and second (Keiba,
Keirin, Kyōtei and Auto Race).
• Quinella-Place (拡大連勝複式, Kakudai Renshō Fukushiki), also known as Wide (ワイド) or Kaku-renpuku (拡連複): The bettor must pick the two runners which finish the top three—no need to specify
an order (For example, when the result of race is 3-6-2-4-5-1, the top three runners are 2, 3 and 6, and winning combinations are 2-3, 2-6 and 3-6.) (Keiba, Keirin, Kyōtei and Auto Race).
• Trio (3連勝複式, Sanrensho Fukushiki), abbreviated as San-renpuku (3連複): The bettor must pick the three runners which finish the top three, but no need to specify an order (Keiba, Keirin,
Kyōtei and Auto Race).
• Trifecta (3連勝単式, Sanrensho Tanshiki), abbreviated as San-ren-tan (3連単): The bettor must correctly pick the three runners which finish first, second, and third (Keiba, Keirin, Kyōtei and
Auto Race).
• WIN 5 / Select 5: The bettor must pick the winners of five designated races. Betting on operators' website by PC or cellular phone only (Keiba with JRA or some local governments' operation only).
Matrix Transformation of Images in C#, using .NET GDI+
2D image transformation in .NET has been very much simplified by the Matrix class in the System.Drawing.Drawing2D namespace. In this article, I would like to share with the reader the use of the Matrix class for 2D image transformation.
The Matrix class takes 6 elements arranged in 3 rows by 2 cols. For example, the default matrix constructed by the default constructor has a value of ( 1,0,0,1,0,0 ). In matrix representation:

    | 1  0 |
    | 0  1 |
    | 0  0 |

This is a simplification of the full 3 x 3 affine matrix:

    | 1  0  0 |
    | 0  1  0 |
    | 0  0  1 |

The last column is always:

    | 0 |
    | 0 |
    | 1 |

Thus a translation transformation of movement of 3 in the x-axis and 2 in the y-axis would be represented as:

    | 1  0  0 |
    | 0  1  0 |
    | 3  2  1 |
An important thing to note is that the transformation matrix is post-multiplied to the image vectors. For example, say we have an image with 4 points: (1,1) (2,3) (5,0) (6,7). The image vectors would be represented as a 4 rows by 2 columns matrix (each point augmented with a 1 when the multiplication is carried out):

    | 1  1  1 |
    | 2  3  1 |
    | 5  0  1 |
    | 6  7  1 |

When the transformation matrix is operated on the image matrix, the transformation matrix is multiplied on the right of the image matrix.
The last column of the resulting matrix is ignored. Thus, after the translation above, the resulting image would have points (4,3) (5,5) (8,2) and (9,9).
A composite transformation is made up of the product of two or more matrices. Take, for example, a scaling matrix with a factor of 2 in the x-axis and 3 in the y-axis:

    | 2  0  0 |
    | 0  3  0 |
    | 0  0  1 |

When we have a composite transformation of a translation followed by a scaling, the scaling matrix would be multiplied to the right of the translation matrix:

    | 1  0  0 |   | 2  0  0 |   | 2  0  0 |
    | 0  1  0 | X | 0  3  0 | = | 0  3  0 |
    | 3  2  1 |   | 0  0  1 |   | 6  6  1 |

Likewise, if we have a composite matrix of a scaling followed by a translation, the translation matrix would be multiplied to the right of the scaling matrix.
Multiplying to the right is also known as appending, and to the left as prepending. Matrices on the left are always operated first.
Matrix Transformation
In this article, I would focus only on the following transformations:
• Rotation
• Translation
• Stretching (Scaling)
• Flipping (Reflection)
To create a Matrix object:
//This would create an identity matrix (1,0,0,1,0,0)
Matrix m=new Matrix();
To initialize the values of the matrix at creation:
//This would create a matrix with elements (1,2,3,4,5,6)
Matrix m=new Matrix(1,2,3,4,5,6);
The Matrix class implements various methods:
• Rotate
• Translate
• Scale
• Multiply
To create a composite matrix, first create a identity matrix. Then use the above methods to append/prepend the transformation.
Matrix m=new Matrix();
//move the origin to 200,200
m.Translate(200, 200);
//rotate 90 deg clockwise (prepended, so it is applied first)
m.Rotate(90, MatrixOrder.Prepend);
In the above code, since the rotation transformation is prepended to the matrix, the rotation transformation would be performed first.
In matrix transformations, the order of operation is very important. A rotation followed by a translation is very different from a translation followed by a rotation, as illustrated below:
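A quick way to make the difference concrete in code is to apply the two composites to the same point; the snippet below is only a sketch (the sample point and the 50-pixel translation are arbitrary choices, not taken from the article).
using System.Drawing;
using System.Drawing.Drawing2D;

//Rotate-then-translate versus translate-then-rotate applied to the point (100, 0).
Matrix rotateFirst = new Matrix();
rotateFirst.Rotate(90, MatrixOrder.Append);          //performed first
rotateFirst.Translate(50, 0, MatrixOrder.Append);    //performed second

Matrix translateFirst = new Matrix();
translateFirst.Translate(50, 0, MatrixOrder.Append); //performed first
translateFirst.Rotate(90, MatrixOrder.Append);       //performed second

PointF[] p1 = { new PointF(100, 0) };
PointF[] p2 = { new PointF(100, 0) };
rotateFirst.TransformPoints(p1);       //(100,0) rotates to (0,100), then shifts to (50,100)
translateFirst.TransformPoints(p2);    //(100,0) shifts to (150,0), then rotates to (0,150)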
Using the Matrix Object
The following GDI+ objects make use of the Matrix object:
• Graphics
• Pen
• GraphicsPath
Each of these has a Transform property, which is a Matrix object. The default Transform property is the identity matrix. All drawing operations that involve the Pen and Graphics objects are performed with respect to their Transform property.
Thus, for instance, if a 45 deg clockwise rotation matrix has been assigned to the Graphics object Transform property and a horizontal line is drawn, the line would be rendered with a tilt of 45 deg.
The operation of matrix transformation on a GraphicsPath is particularly interesting. When its Transform property is set, the GraphicsPath's PathPoints are changed to reflect the transformation.
One use of this behavior is to perform localized transformation on a GraphicsPath object and then use the DrawImage method to render the transformation.
Graphics g = Graphics.FromImage(pictureBox1.Image);
GraphicsPath gp = new GraphicsPath();
Image imgpic = (Image)pictureBoxBase.Image.Clone();
//the coordinates of the polygon must be:
//point 1 = left top corner
//point 2 = right top corner
//point 3 = left bottom corner
if (cbFlipY.CheckState == CheckState.Checked)
    gp.AddPolygon(new Point[]{ new Point(0, imgpic.Height),
                               new Point(imgpic.Width, imgpic.Height),
                               new Point(0, 0) });
else
    gp.AddPolygon(new Point[]{ new Point(0, 0),
                               new Point(imgpic.Width, 0),
                               new Point(0, imgpic.Height) });
//apply the transformation matrix on the graphics path
//('m' is the flip/transform Matrix built as described below)
gp.Transform(m);
//get the resulting path points
PointF[] pts = gp.PathPoints;
//draw the content of imgpic on the picture box using the local transformation,
//i.e. the parallelogram described by pts
g.DrawImage(imgpic, pts);
Unfortunately, there is no flipping method for the Matrix class. However, the matrix for flipping is well known. For a flip along the x-axis, i.e., flipping the y-coordinates, the matrix is
(1,0,0,-1,0,0). For flipping the x-coordinates, the matrix is (-1,0,0,1,0,0).
Affine Transformation
The transformation on an image using the Matrix object is not just a simple point to point mapping.
Take for instance the rectangle with the vertices (0,0) (0,1) (1,1) (1,0). If the units are in pixels, then there are only four pixels making up the whole rectangle. If we subject the rectangle to a
scaling matrix of factor 2 in both x and y-axis about the origin (0,0), the resulting rectangle would have vertices (0,0) (0,2) (2,2) (2,0). It would also contain other points within these vertices.
How could a 4 points figure be mapped to a figure with more than 4 points?
The answer is that the transformation operation generated those other points that have no direct mapping by interpolation (estimation of unknown pixel values using known mapped pixels).
Using the Transformation Tester
The use of the demo program is quite intuitive. At startup, CodeProject's beloved iconic figure is loaded. The axes are based on the graph-paper coordinate system. There is a check box to unflip the y-coordinates to reflect the computer coordinate system. The origin is set at (200,200) relative to the picture box.
Adjust the tracker for each of the transformation operations, and order the transformation as shown in the list box by using the + and - buttons. Click Go to start the operation.
Thanks to leppie (a reader) for his comment; I have added a new checkbox to allow the user to see the transformation in real time. After the Real Time checkbox is checked, all adjustments to the trackers and reordering of the transformations cause the transformation to be performed immediately. This gives the effect of a real-time update.
The code is quite adequately commented. I hope that the reader will benefit from this article and the code, and start to unlock the power of the Matrix object in .NET.
Lake Zurich Math Tutor
Find a Lake Zurich Math Tutor
I have ten years experience tutoring high school and college students in chemistry (organic, inorganic, physical or analytical), physics and mathematics (geometry, calculus and algebra) on both a
one on one level as well as in big groups. My undergraduate was a major in chemistry with a minor in ma...
20 Subjects: including logic, geometry, trigonometry, precalculus
...I wanted to begin tutoring again now that I have free time, and I really enjoyed tutoring in the past. In high school, I worked in the tutoring center at school and did tutoring on the side as
well. I can tutor students from elementary to high school in sciences, math, and basic Spanish.
15 Subjects: including algebra 1, algebra 2, statistics, trigonometry
...At times students need one-on-one attention with someone who listens to them to understand how they learn. No two students are alike and their tutoring should be customized to their needs. I
enjoy tutoring individuals and groups and make it a fun process.
6 Subjects: including prealgebra, reading, business, geography
...I have the skill set to tutor to a successful score on any type of standardized testing. I have tutored for the ACT for 3 years. I have taught my students in Maine Township High Schools and
also privately tutored.
23 Subjects: including ACT Math, reading, English, writing
...This is my current method of teaching/tutoring. It is highly interactive with continuous student feedback and whole body involvement. I believe in learning by doing and-or playing.
19 Subjects: including algebra 1, geometry, precalculus, ACT Math
Nearby Cities With Math Tutor
Barrington, IL Math Tutors
Deer Park, IL Math Tutors
Echo Lake, IL Math Tutors
Hawthorn Woods, IL Math Tutors
Island Lake Math Tutors
Kildeer, IL Math Tutors
Lake Barrington, IL Math Tutors
Libertyville, IL Math Tutors
Lincolnshire Math Tutors
Mettawa, IL Math Tutors
Mundelein Math Tutors
North Barrington, IL Math Tutors
Prairie Grove, IL Math Tutors
Tower Lakes, IL Math Tutors
Wauconda, IL Math Tutors
Math Forum - Problems Library - All Primary Problems
Browse all Primary Problems of the Week
The Primary Problems are designed for those students in grades K-2. The problems cover much of the same material as the NCTM Standards for Grades Pre-K-2.
Access to these problems requires a Membership.
Shallow Parsing with Conditional Random Fields
Results 1 - 10 of 364
, 2005
"... This article considers approaches which rerank the output of an existing probabilistic parser. The base parser produces a set of candidate parses for each input sentence, with associated
probabilities that define an initial ranking of these parses. A second model then attempts to improve upon this i ..."
Cited by 268 (9 self)
This article considers approaches which rerank the output of an existing probabilistic parser. The base parser produces a set of candidate parses for each input sentence, with associated
probabilities that define an initial ranking of these parses. A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence. The strength of our
approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a
generative model which takes these features into account. We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in Freund et al. (1998). We
apply the boosting method to parsing the Wall Street Journal treebank. The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000
features over parse trees that were not included in the original model. The new model achieved 89.75 % F-measure, a 13 % relative decrease in F-measure error over the baseline model’s score of 88.2%.
The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. Experiments show significant efficiency gains for
the new algorithm over the obvious implementation of the boosting approach. We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature
selection methods within log-linear (maximum-entropy) models. Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP
problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation.
- In Proc. ACL , 2005
"... We present an effective training algorithm for linearly-scored dependency parsers that implements online largemargin multi-class training (Crammer and Singer, 2003; Crammer et al., 2003) on top
of efficient parsing techniques for dependency trees (Eisner, 1996). The trained parsers achieve a competi ..."
Cited by 226 (19 self)
We present an effective training algorithm for linearly-scored dependency parsers that implements online largemargin multi-class training (Crammer and Singer, 2003; Crammer et al., 2003) on top of
efficient parsing techniques for dependency trees (Eisner, 1996). The trained parsers achieve a competitive dependency accuracy for both English and Czech with no language specific enhancements. 1
, 2003
"... Conditional Random Fields (CRFs) are undirected graphical models, a special case of which correspond to conditionally-trained finite state machines. A key advantage of CRFs is their great
flexibility to include a wide variety of arbitrary, non-independent features of the input. Faced with ..."
Cited by 182 (10 self)
Conditional Random Fields (CRFs) are undirected graphical models, a special case of which correspond to conditionally-trained finite state machines. A key advantage of CRFs is their great flexibility
to include a wide variety of arbitrary, non-independent features of the input. Faced with
- In Advances in Neural Information Processing Systems 17 , 2004
"... We describe semi-Markov conditional random fields (semi-CRFs), a conditionally trained version of semi-Markov chains. Intuitively, a semi-CRF on an input sequence x outputs a “segmentation ” of
x, in which labels are assigned to segments (i.e., subsequences) of x rather than to individual elements x ..."
Cited by 171 (9 self)
We describe semi-Markov conditional random fields (semi-CRFs), a conditionally trained version of semi-Markov chains. Intuitively, a semi-CRF on an input sequence x outputs a “segmentation ” of x, in
which labels are assigned to segments (i.e., subsequences) of x rather than to individual elements xi of x. Importantly, features for semi-CRFs can measure properties of segments, and transitions
within a segment can be non-Markovian. In spite of this additional power, exact learning and inference algorithms for semi-CRFs are polynomial-time—often only a small constant factor slower than
conventional CRFs. In experiments on five named entity recognition problems, semi-CRFs generally outperform conventional CRFs. 1
- In Proceedings of the 42nd Meeting of the ACL , 2004
"... This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on
a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new eff ..."
Cited by 165 (21 self)
This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a
Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare
models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing
wide-coverage CCG parsers.
- In EMNLP , 2006
"... Discriminative learning methods are widely used in natural language processing. These methods work best when their training and test data are drawn from the same distribution. For many NLP
tasks, however, we are confronted with new domains in which labeled data is scarce or non-existent. In such cas ..."
Cited by 153 (10 self)
Discriminative learning methods are widely used in natural language processing. These methods work best when their training and test data are drawn from the same distribution. For many NLP tasks,
however, we are confronted with new domains in which labeled data is scarce or non-existent. In such cases, we seek to adapt existing models from a resourcerich source domain to a resource-poor
target domain. We introduce structural correspondence learning to automatically induce correspondences among features from different domains. We test our technique on part of speech tagging and show
performance gains for varying amounts of source and target training data, as well as improvements in target domain parsing accuracy using our improved tagger. 1
- COMPUTATIONAL LINGUISTICS , 2007
"... This paper describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are "full" parsing models in the sense that probabilities are defined
for complete parses, rather than for independent events derived by decomposing the parse tree. Discriminativ ..."
Cited by 149 (34 self)
Add to MetaCart
This paper describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are "full" parsing models in the sense that probabilities are defined for
complete parses, rather than for independent events derived by decomposing the parse tree. Discriminative training is used to estimate the models, which requires incorrect parses for each sentence in
the training data as well as the correct parse. The lexicalized grammar formalism used is Combinatory Categorial Grammar (CCG), and the grammar is automatically extracted from CCGbank, a CCG version
of the Penn Treebank. The combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement (over 20 GB), which is satisfied using a parallel
implementation of the BFGS optimisation algorithm running on a Beowulf cluster. Dynamic programming over a packed chart, in combination with the parallel implementation, allows us to solve one of the
largest-scale estimation problems in the statistical parsing literature in under three hours. A key component of the parsing system, for both training and testing, is a Maximum Entropy supertagger
which assigns CCG lexical categories to words in a sentence. The supertagger makes the discriminative training feasible, and also leads to a highly efficient parser. Surprisingly,
- IN: ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 20 , 2008
"... This contribution develops a theoretical framework that takes into account the effect of approximate optimization on learning algorithms. The analysis shows distinct tradeoffs for the case of
small-scale and large-scale learning problems. Small-scale learning problems are subject to the usual approx ..."
Cited by 138 (4 self)
Add to MetaCart
This contribution develops a theoretical framework that takes into account the effect of approximate optimization on learning algorithms. The analysis shows distinct tradeoffs for the case of
small-scale and large-scale learning problems. Small-scale learning problems are subject to the usual approximation–estimation tradeoff. Large-scale learning problems are subject to a qualitatively
different tradeoff involving the computational complexity of the underlying optimization algorithms in non-trivial ways.
- IN ICML , 2004
"... In sequence modeling, we often wish to represent complex interaction between labels, such as when performing multiple, cascaded labeling tasks on the same sequence, or when longrange
dependencies exist. We present dynamic conditional random fields (DCRFs), a generalization of linear-chain cond ..."
Cited by 122 (11 self)
Add to MetaCart
In sequence modeling, we often wish to represent complex interaction between labels, such as when performing multiple, cascaded labeling tasks on the same sequence, or when long-range dependencies
exist. We present dynamic conditional random fields (DCRFs), a generalization of linear-chain conditional random fields (CRFs) in which each time slice contains a set of state variables and edges---a
distributed state representation as in dynamic Bayesian networks (DBNs)---and parameters are tied across slices. Since exact
- In Proc. of ACL , 2005
"... Conditional random fields (Lafferty et al., 2001) are quite effective at sequence labeling tasks like shallow parsing (Sha and Pereira, 2003) and namedentity extraction (McCallum and Li, 2003).
CRFs are log-linear, allowing the incorporation of arbitrary features into the model. To train on unlabele ..."
Cited by 115 (14 self)
Add to MetaCart
Conditional random fields (Lafferty et al., 2001) are quite effective at sequence labeling tasks like shallow parsing (Sha and Pereira, 2003) and named-entity extraction (McCallum and Li, 2003). CRFs
are log-linear, allowing the incorporation of arbitrary features into the model. To train on unlabeled data, we require unsupervised estimation methods for log-linear models; few exist. We describe a
novel approach, contrastive estimation. We show that the new technique can be intuitively understood as exploiting implicit negative evidence and is computationally efficient. Applied to a sequence
labeling problem—POS tagging given a tagging dictionary and unlabeled text—contrastive estimation outperforms EM (with the same feature set), is more robust to degradations of the dictionary, and can
largely recover by modeling additional features.
jet (infinity,1)-category
Given a differentiable (∞,1)-category $\mathcal{C}$, then the (∞,1)-category of n-excisive functors from the pointed objects in ∞Grpd to $\mathcal{C}$ behaves like the bundles of order-$n$ Goodwillie
derivatives over all objects of $\mathcal{C}$. Hence this is the analog of the $n$th order jet bundle in Goodwillie calculus.
In particular for $n = 1$ this is the tangent (∞,1)-category of $\mathcal{C}$.
Jet toposes
By the discussion at n-excisive functor – Properties – n-Excisive approximation, for $\mathbf{H}$ an (∞,1)-topos also its $n$th jet $(\infty,1)$-category $J^n \mathbf{H}$ is an $(\infty,1)$-topos,
for all $n \in \mathbb{N}$. For $n = 1$ this is the tangent (∞,1)-topos $J^1 \mathbf{H} = T \mathbf{H}$ (see also at tangent cohesion). If $\mathbf{H}$ is cohesive, so too is $J^n \mathbf{H}$.
Section 7.1 of
My personal trouble with infinity.
Infinity is like eternity, you never get there. If you are like me, your first foray into infinity came in math class in school. If you took the set of natural numbers, 1,2,3,4,5.... and on and on,
you would see that there are an infinite many of them. My personal issue has to do with measurements. Particularly, The universe. Every time I read an article that describes something as infinitely
big, infinitely small, infinitely dense, I feel a sharp pain in my side. Let me explain. The big bang was about 14 billion years ago according to best estimates. Then the universe should be at least
28 billion light years in diameter. Ah, but the universe is expanding at an accelerated rate you say. Alright then. The universe is larger still, but it isn't infinitely so because the universe is
expanding every day. Is the universe more infinite today than it was yesterday? Infinity plus one? Max Tegmark has theorised that the next Hubble volume similar to ours is 2^(10^118) meters away.
More zeroes than you could write in your lifetime, but still not infinitely far away. Let's go to the other end of the spectrum. Science tells us that inside a black hole is a singularity, a point
with zero length. Science also tells us that the smallest possible unit of distance is the Planck length: 1.616199×10^-35 meters. A mind-bogglingly small number to be sure, but not infinitely small.
Also, the singularity has infinite density. Is this the same as having infinite mass?
Is the calculus we use to describe the universe not powerful enough? Is it that our understanding of the physics is not developed enough to explain what we see? Or is it the worst of my fears, when
something is 50 or a 100 decimal places out, do we just call it a day with a sideways 8?
I don't know the answers to any of these questions. But I, like you (since you are on this site), have a hunger for knowledge in science. It's just way too cool! I welcome discussion.
Photoelectric Effect Explained
Einstein's Wonderful Year
In 1905, Albert Einstein published four papers in the Annalen der Physik journal, each of which was significant enough to warrant a Nobel Prize in its own right. The first paper (and the only one to actually be recognized with a Nobel) was his explanation of the photoelectric effect.
Building on Max Planck's blackbody radiation theory, Einstein proposed that radiation energy is not continuously distributed over the wavefront, but is instead localized in small bundles (later
called photons). The photon's energy would be associated with its frequency (nu), through a proportionality constant known as Planck's constant (h), or alternately, using the wavelength (lambda) and
the speed of light (c):
E = h nu = hc / lambda
or the momentum equation: p = h / lambda
In Einstein's theory, a photoelectron is released as a result of an interaction with a single photon, rather than an interaction with the wave as a whole. The energy from that photon transfers instantaneously to a single electron, knocking it free from the metal if the energy (which is, recall, proportional to the frequency nu) is high enough to overcome the work function (phi) of the metal. If the energy (or frequency) is too low, no electrons are knocked free.
If, however, there is excess energy, beyond phi, in the photon, the excess energy is converted into the kinetic energy of the electron:
K[max] = h nu - phi
Therefore, Einstein's theory predicts that the maximum kinetic energy is completely independent of the intensity of the light (because it doesn't show up in the equation anywhere). Shining twice as much light results in twice as many photons, and more electrons being released, but the maximum kinetic energy of those individual electrons won't change unless the energy (frequency), not the intensity, of the light changes.
The maximum kinetic energy results when the least-tightly-bound electrons break free. But what about the most-tightly-bound ones, the ones for which there is just enough energy in the photon to knock them loose, so that the resulting kinetic energy is zero? Setting K[max] equal to zero for this cutoff frequency (nu[c]), we get:
nu[c] = phi / h
or the cutoff wavelength: lambda[c] = hc / phi
These equations indicate why a low-frequency light source would be unable to free electrons from the metal, and thus would produce no photoelectrons.
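To make these relations concrete, here is a minimal Python sketch (not from the original article). The work function value of 2.3 eV is an illustrative assumption, roughly the figure usually quoted for sodium, the constants are rounded, and the function name k_max is just a label chosen here.

# Minimal sketch of the photoelectric relations above (illustrative only).
h = 6.626e-34       # Planck's constant, J*s
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # joules per electron-volt

phi = 2.3 * eV      # assumed work function (roughly sodium)

def k_max(wavelength_m):
    # Maximum photoelectron kinetic energy, K[max] = hc/lambda - phi (joules).
    # A negative result means the photon cannot free an electron at all.
    return h * c / wavelength_m - phi

cutoff_wavelength = h * c / phi           # lambda[c] = hc / phi
print(cutoff_wavelength * 1e9, "nm")      # about 540 nm for phi = 2.3 eV
print(k_max(400e-9) / eV, "eV")           # violet light: positive, electrons emitted
print(k_max(700e-9) / eV, "eV")           # red light: negative, no photoelectrons

For this assumed work function the cutoff falls near 540 nm, so the sketch reproduces the behaviour described above: raising the intensity of below-cutoff light changes nothing, while raising the frequency does.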
After Einstein
Experimentation in the photoelectric effect was carried out extensively by Robert Millikan in 1915, and his work confirmed Einstein's theory. Einstein won a Nobel Prize for his photon theory (as
applied to the photoelectric effect) in 1921, and Millikan won a Nobel in 1923 (in part due to his photoelectric experiments).
Most significantly, the photoelectric effect, and the photon theory it inspired, crushed the classical wave theory of light. Though no one could deny that light behaved as a wave, after Einstein's
first paper, it was undeniable that it was also a particle.
Summary: Matrix-geometric Solutions of M/G/1 Type Markov Chains: A Unifying Generalized State-space Approach
Nail Akar, Nihat Cem Oğuz, and Khosrow Sohraby
Technology Planning & Integration, Sprint, 9300 Metcalf Avenue, Overland Park, KS 66212 (akar@sprintcorp.com)
Computer Science Telecommunications, University of Missouri-Kansas City, 5100 Rockhill Road, Kansas City, MO 64110 ({ncoguz,sohraby}@cstp.umkc.edu)
In this paper, we present an algorithmic approach to find the stationary probability distribution
of M/G/1 type Markov chains which arise frequently in performance analysis of computer and
communication networks. The approach unifies finite- and infinite-level Markov chains of this type through a generalized state-space representation for the probability generating function of the stationary solution. When the underlying probability generating matrices are rational, the solution vector for level k, x_k, is shown to be in the matrix-geometric form x_{k+1} = g F^k H, k ≥ 0, for the infinite-level case, whereas it takes the modified form x_{k+1} = g_1 F_1^k H_1 + g_2 F_2^(K-k-1) H_2, 0 ≤ k < K, for the finite-level case. The matrix parameters in the above two expressions can be obtained by decomposing the generalized system into forward and backward subsystems, or equivalently ...
Engineering a compressed suffix tree implementation
Results 1 - 10 of 13
- IN: PACS 2000. LNCS , 2000
"... Suffix trees are by far the most important data structure in stringology, with myriads of applications in fields like bioinformatics and information retrieval. Classical representations of
suffix trees require O(n log n) bits of space, for a string of size n. This is considerably more than the nlog ..."
Cited by 20 (14 self)
Add to MetaCart
Suffix trees are by far the most important data structure in stringology, with myriads of applications in fields like bioinformatics and information retrieval. Classical representations of suffix
trees require O(n log n) bits of space, for a string of size n. This is considerably more than the n log2 σ bits needed for the string itself, where σ is the alphabet size. The size of suffix trees
has been a barrier to their wider adoption in practice. Recent compressed suffix tree representations require just the space of the compressed string plus Θ(n) extra bits. This is already
spectacular, but still unsatisfactory when σ is small as in DNA sequences. In this paper we introduce the first compressed suffix tree representation that breaks this linear-space barrier. Our
representation requires sublinear extra space and supports a large set of navigational operations in logarithmic time. An essential ingredient of our representation is the lowest common ancestor
(LCA) query. We reveal important connections between LCA queries and suffix tree navigation.
"... The suffix tree is an extremely important data structure for stringology, with a wealth of applications in bioinformatics. Classical implementations require much space, which renders them
useless for large problems. Recent research has yielded two implementations offering widely different space-time ..."
Cited by 8 (2 self)
Add to MetaCart
The suffix tree is an extremely important data structure for stringology, with a wealth of applications in bioinformatics. Classical implementations require much space, which renders them useless for
large problems. Recent research has yielded two implementations offering widely different space-time tradeoffs. However, each of them has practicality problems regarding either space or time
requirements. In this paper we implement a recent theoretical proposal and show it yields an extremely interesting structure that lies in between, offering both practical times and affordable space.
The implementation of the theoretical proposal is by no means trivial and involves significant algorithm engineering.
"... A Range Minimum Query asks for the position of a minimal element between two specified array-indices. We consider a natural extension of this, where our further constraint is that if the minimum
in a query interval is not unique, then the query should return an approximation of the median position ..."
Cited by 4 (3 self)
Add to MetaCart
A Range Minimum Query asks for the position of a minimal element between two specified array-indices. We consider a natural extension of this, where our further constraint is that if the minimum in a
query interval is not unique, then the query should return an approximation of the median position among all positions that attain this minimum. We present a succinct preprocessing scheme using only
about 2.54 n + o(n) bits in addition to the static input array, such that subsequent “range median of minima queries” can be answered in constant time. This data structure can be constructed in
linear time, and only o(n) additional bits are needed at construction time. We introduce several new combinatorial concepts such as Super-Cartesian Trees and Super-Ballot Numbers, which we believe
will have other interesting applications in the future. We stress the importance of our result by giving two applications in text indexing; in particular, we show that our ideas are needed for fast
construction of one component in Compressed Suffix Trees [19], a versatile tool for numerous tasks in text processing, and that they can be used for fast pattern matching in (compressed) suffix
arrays [14].
"... Let D1 and D2 be two databases (i.e. multisets) of d strings, over an alphabet Σ, with overall length n. We study the problem of mining discriminative patterns between D1 and D2 — e.g., patterns
that are frequent in one database but not in the other, emerging patterns, or patterns satisfying other f ..."
Cited by 4 (1 self)
Add to MetaCart
Let D1 and D2 be two databases (i.e. multisets) of d strings, over an alphabet Σ, with overall length n. We study the problem of mining discriminative patterns between D1 and D2 — e.g., patterns that
are frequent in one database but not in the other, emerging patterns, or patterns satisfying other frequency-related constraints. Using the algorithmic framework by Hui (CPM 1992), one can solve
several variants of this problem in the optimal linear time with the aid of suffix trees or suffix arrays. This stands in high contrast to other pattern domains such as itemsets or subgraphs, where
super-linear lower bounds are known. However, the space requirement of existing solutions is O(n log n) bits, which is not optimal for |Σ| << n (in particular for constant |Σ|), as the databases themselves occupy only n log |Σ| bits. Because in many real-life applications space is a more critical resource than time, the aim of this article is to reduce the space, at the cost of an increased running time. In particular, we give a solution for the above problems that uses O(n log |Σ| + d log n) bits, while the time requirement is increased from the optimal linear time to O(n log n). Our new method is tested extensively on biologically relevant datasets and shown to be usable even on genome-scale data.
- In Proc. SIGIR , 2011
"... Inverted indexes are the most fundamental and widely used data structures in information retrieval. For each unique word occurring in a document collection, the inverted index stores a list of
the documents in which this word occurs. Compression techniques are often applied to further reduce the spa ..."
Cited by 2 (0 self)
Add to MetaCart
Inverted indexes are the most fundamental and widely used data structures in information retrieval. For each unique word occurring in a document collection, the inverted index stores a list of the
documents in which this word occurs. Compression techniques are often applied to further reduce the space requirement of these lists. However, the index has a shortcoming, in that only predefined
pattern queries can be supported efficiently. In terms of string documents where word boundaries are undefined, if we have to index all the substrings of a given document, then the storage quickly
becomes quadratic in the data size. Also, if we want to apply the same type of indexes for querying phrases or sequence of words, then the inverted index will end up storing redundant information. In
this paper, we show the first set of inverted
"... Genome compression, referential compression, string search Background:Improved sequencing techniques have led to large amounts of biological sequence data. One of the challenges in managing
sequence data is efficient storage. Recently, referential compression schemes, storing only the differences be ..."
Cited by 2 (2 self)
Add to MetaCart
Keywords: genome compression, referential compression, string search. Background: Improved sequencing techniques have led to large amounts of biological sequence data. One of the challenges in managing sequence data is efficient storage. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. However, so far sequences always have to be decompressed prior to an analysis. There is a need for algorithms working on compressed data directly, avoiding costly decompression. Summary: In our work, we address this problem by proposing an algorithm for exact string search over compressed data. The algorithm works directly on referentially compressed genome sequences, without needing an index for each genome and only using partial decompression. Results: Our string search algorithm for referentially compressed genomes performs exact string matching for large sets of genomes faster than using an index structure, e.g. suffix trees, for each genome, especially for short queries. We think that this is an important step towards space- and runtime-efficient management of large biological data sets.
, 2007
"... This report evaluates the performance of uncompressed and compressed substring indexes on build time, space usage and search performance. It is shown how the structures react to increasing data
size, alphabet size and repetitiveness in the data. The main contribution is the strong relationship shown ..."
Cited by 1 (1 self)
Add to MetaCart
This report evaluates the performance of uncompressed and compressed substring indexes on build time, space usage and search performance. It is shown how the structures react to increasing data size,
alphabet size and repetitiveness in the data. The main contribution is the strong relationship shown between time performance and locality in the data structures. As an example, it is shown that for
a large alphabet, suffix tree construction can be speeded up by a factor 16, and query lookup by a factor 8, if dynamic arrays are used to store the lists of children for each node instead of linked
lists, at the cost of using about 20 % more space. And for enhanced suffix arrays, query lookup is up to twice as fast if the data structure is stored as an array of structs instead of a set of
arrays, at no extra space cost.
"... Abstract. A Range Minimum Query asks for the position of a minimal element between two specified array-indices. We consider a natural extension of this, where our further constraint is that if
the minimum in a query interval is not unique, then the query should return an approximation of the median ..."
Add to MetaCart
Abstract. A Range Minimum Query asks for the position of a minimal element between two specified array-indices. We consider a natural extension of this, where our further constraint is that if the
minimum in a query interval is not unique, then the query should return an approximation of the median position among all positions that attain this minimum. We present a succinct preprocessing
scheme using Dn+o(n) bits in addition to the static input array (small constant D), such that subsequent “range median of minima queries” can be answered in constant time. This data structure can be
built in linear time, with little extra space needed at construction time. We introduce several new combinatorial concepts such as Super-Cartesian Trees and Super-Ballot Numbers. We give applications
of our preprocessing scheme in text indexes such as (compressed) suffix arrays and trees.
"... Abstract—Finding repetitive structures in genomes and proteins is important to understand their biological functions. Many data compressors for modern genomic sequences rely heavily on finding
repeats in the sequences. Small-scale and local repetitive structures are better understood than large and ..."
Add to MetaCart
Abstract—Finding repetitive structures in genomes and proteins is important to understand their biological functions. Many data compressors for modern genomic sequences rely heavily on finding
repeats in the sequences. Small-scale and local repetitive structures are better understood than large and complex interspersed ones. The notion of maximal repeats captures all the repeats in the
data in a space-efficient way. Prior work on maximal repeat finding used either a suffix tree or a suffix array along with other auxiliary data structures. Their space usage is 19–50 times the text
size with the best engineering efforts, prohibiting their usability on massive data such as the whole human genome. We focus on finding all the maximal repeats from massive texts in a time- and
space-efficient manner. Our technique uses the Burrows-Wheeler Transform and wavelet trees. For data sets consisting of natural language texts and protein data, the space usage of our method is no
more than three times the text size. For genomic sequences stored using one byte per base, the space usage of our method is less than double the sequence size. Our space-efficient method keeps the
timing performance fast. In fact, our method is orders of magnitude faster than the prior methods for processing massive texts such as the whole human genome, since the prior methods must use
external memory. For the first time, our method enables a desktop computer with 8GB internal memory (actual internal memory usage is less than 6GB) to find all the maximal repeats in the whole human
genome in less than 17 hours. We have implemented our method as general-purpose open-source software for public use.
"... Memory is rapidly becoming a precious resource in many data processing environments. This paper introduces a new data structure called a Compressed Buffer Tree (CBT). Using a combination of
buffering, compression, and lazy aggregation, CBTs can improve the memory efficiency of the GroupBy-Aggregate ..."
Add to MetaCart
Memory is rapidly becoming a precious resource in many data processing environments. This paper introduces a new data structure called a Compressed Buffer Tree (CBT). Using a combination of
buffering, compression, and lazy aggregation, CBTs can improve the memory efficiency of the GroupBy-Aggregate abstraction which forms the basis of many data processing models like MapReduce and
databases. We evaluate CBTs in the context of MapReduce aggregation, and show that CBTs can provide significant advantages over existing hash-based aggregation techniques: up to 2× less memory and 1.5× the throughput, at the cost of 2.5× CPU.
compute the average of each row and column
I need to create a function for the program I've written which takes the two-dimensional array and returns an array "r" (of length 4) and an array "c" (of length 3). This function should compute the average values of each row and column, then return r and c to the main program. Both arrays need to be double precision. Here is the code of the main function; it just creates a two-dimensional array of size 4x3, initialized to the sum of x, the row index, and the column index:
#include <stdio.h>

void mavg(int m[][3]);

int main(void)
{
    int x;
    int i, j;
    int m[4][3];

    /* Part 1 */
    printf("Enter an integer x: ");
    scanf("%d", &x);

    /* Part 2 */
    for(i=0; i<4; i++){
        for(j=0; j<3; j++){
            m[i][j] = i+j+x;
            printf(" %d", m[i][j]);
        }
    }

    return 0;
}

void mavg(int m[][3])   /* incomplete - averages still to be computed */
{
    int r[4];
    int c[3];
    int sum, row, x, y, avg;

    sum = 0;
    row = 0;
}
1. Go read the thing on the forum that said << !! Posting Code? Read this First !! >>
2. Come back to your first post and press Edit.
3. Find the start of your code and right before it, add: [code]
4. Find the end of your code and right after it, add: [/code]
5. Press save.
6. Go read the homework policy sticky as well.
When you edit it to fix the code with code tags, add in what it is that has you stumped at the moment.
Posts without questions, don't get many answers - and Welcome to the forum, Rasiegel! :D
I need to figure out how to pass an array to a function and how to write a for statement to add the rows and columns. Thanks.
This should get you started on loops: Cprogramming.com Tutorial: Loops
As for how to pass an array:
void foo( type array[], int size );
type array[ 4 ];
foo( array, 4 );
That should give you the general idea.
Calculating Moisture Load
1) SIZE SELECTION for SMALL DEHUMIDIFIER
2) CALCULATE HUMIDITY LOAD by ENGINEERING METHOD
We have chosen to use SI units for easy conversion. In calculating moisture load, Absolute Humidity is the basis used, not Relative Humidity. Please also refer to the Psychrometric Chart for the various values referenced. Absolute Humidity is measured in g/kg of air at a density of 1,2 kg/cu. meter; that is, the weight of water molecules in a kg of dry air at a density of 1,2 kg/cu. meter.
1) SIZE SELECTION OF SMALL DEHUMIDIFIER BY RULE OF THUMB METHOD.
Just for the interest of non-technical users, we will provide the rule of thumb sizing for those who are
interested mainly in the small portable units.
a) A 12 L/day unit is able to keep a 150 sq ft x 8 ft ( 14 sq meter x 2.4 m ) space at 55% RH.
b) A 16 L/day unit is able to keep a 200 sq ft x 8 ft ( 19 sq meter x 2.4 m ) space at 55% RH.
55% RH is a general minimum requirement for most low-level storage. For a lower RH level like 45-50% RH control, reduce the space by 30-35% of floor area. If one unit is too small, go for 2 units, or use the 30-45 L/day models.
( Note: the above sizing is based on an unventilated room. A room with central air-conditioning or exhaust is generally difficult to estimate, as the volume of Ventilation Air creates a very high humidity load. )
For a more ACCURATE selection, we have the following:
In this exercise, we have included the 3 biggest factors that contribute to the humidity load. There are many other factors that contribute to the moisture load, but they need some expertise to understand and calculate. A more stringent formula is needed for low-humidity applications below 40% RH or low temperatures ( below 15 deg C ); this is done only by trained engineers. The total load is the sum of the 3 following factors.
The 3 main factors are:
1) INFILTRATION
2) HUMAN LOAD
3) VENTILATION
1) INFILTRATION is the average air that can come in thro' the walls and cracks. It is directly proportional to the difference between the indoor and outdoor humidity, and to the size of the space. Strictly, it is proportional to the areas of the 4 walls plus ceiling and floor, but to simplify the formula, total space volume is used instead, with an additional K-factor from the table below.
│INFILTRATION LOAD (L/hour) = (H out - H in ) x 0.0012 x Space Volume x K-factor│
H out is the Absolute Humidity in g/kg of the surrounding or outdoor air.
H in is the Absolute Humidity in g/kg of the space to be dried.
0.0012 is the density of air (1,2 kg/cu. meter) expressed so that the result comes out in litres of water per hour.
Space Volume is the volume of the space, height x width x length, in cubic meters.
K-factor is the factor that converts the volume into the surface area of exposure.
K-factor table
│ SPACE LESS THAN │K-Factor │
│ 80 CUBIC METER │ 0,5 │
│ 200 CUBIC METER │ 0,4 │
│ 400 CUBIC METER │ 0,35 │
│ 600 CUBIC METER │ 0,3 │
│1000 CUBIC METER │ 0,27 │
│2000 CUBIC METER │ 0,23 │
│3000 CUBIC METER │ 0,21 │
│4000 CUBIC METER │ 0,19 │
│5000 CUBIC METER │ 0,18 │
eg : A space of 20 x 10 x 10 meters. Outdoor condition is 30 deg C 70% RH. Required condition is 23 deg C 50% RH.
Calculate the infiltration load.
The first step is to convert the conditions to absolute humidity in g/kg.
Outdoor 30 deg C 70% RH = 18.5 g/kg (Absolute Humidity)
Space 23 deg C 50% RH = 8.6 g/kg (Absolute Humidity)
P-factor : a correction for the differential between H out and H in. The higher the differential, the greater the P-factor. It is found that the greater the difference between H out and H in, the greater the effect of vapour pressure in pushing moisture through a wall. It is included in the formula as a multiplying factor, calculated by a simple division:
P-factor = (H out - H in ) / 11.5
INFILTRATION = (H out - H in ) x 0.0012 x Space Volume x K-factor x P-factor
INFILTRATION = ( 18.5-8.6 ) x 0.0012 x 2000 x 0.23 x 0.86 = 4.7 L/hr
i.e. 4.7 litres of water get into the space every hour.
2) HUMAN LOAD - This is a simple load based on estimation of human activities and H-factor
( human activity level )
│HUMAN LOAD ( L/hr ) = Number of people x H-factor x 0.065│
H-Factor Table
│ PASSIVE, OFFICE WORK │2,0│
│ SOME MEASURE OF MOVEMENT │2.5│
│ HEAVY LABOUR / EXERCISE │3,0│
3) VENTILATION - This is the estimate of the exhaust air or fresh air volume entering the room. Fresh air is needed for
human beings. Each person requires an estimated 20 CMH of fresh air. Door opening is another cause of ventilation.
Every time the door opens it adds to the fresh air intake and should be part of this formula.
│VENTILATION LOAD ( L/hr)= AIR INTAKE (CMH) x (H out - H in ) x 0.0012│
where H out - H in are similarly defined as in INFILTRATION LOAD.
DOOR OPENING LOAD / DOOR LOAD
Door opening can be considered as part of the Ventilation Load as it introduces air into the room each time a door is
opened. It is measured as the added quantity of AIR INTAKE (CMH) in Ventilation Load.
│DOOR LOAD (CMH) = AREA OF DOOR (sq. meter ) x 3 x time(sec) door stayed opened x No. of openings/Hr│
eg. A 5 sq. meter door that opens 2 times per hour and stays open for 6 seconds each time gives 5 x 3 x 6 x 2 = 180 CMH of ventilation air.
Note: The initial condition of the untreated space is usually just a good indication of the "wetness level" and is NOT part
of the Moisture Load calculation Formula.
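To tie the formulas above together, here is a minimal Python sketch (not part of the original page) that reproduces the worked infiltration and door-load examples. The function names are arbitrary; the K-factor lookup treats each table row as an upper bound, because the worked example applies the 2000 cubic meter row to a 2000 cubic meter space; and the absolute humidities H out and H in (in g/kg) must still be read from a psychrometric chart.

# Illustrative sketch of the humidity load formulas (assumptions noted above).
def k_factor(volume_m3):
    table = [(80, 0.5), (200, 0.4), (400, 0.35), (600, 0.3), (1000, 0.27),
             (2000, 0.23), (3000, 0.21), (4000, 0.19), (5000, 0.18)]
    for bound, k in table:
        if volume_m3 <= bound:
            return k
    return 0.18   # assumption: reuse the last row beyond 5000 cubic meters

def infiltration_load(h_out, h_in, volume_m3):
    # (H out - H in) x 0.0012 x Space Volume x K-factor x P-factor, in L/hr
    p_factor = (h_out - h_in) / 11.5
    return (h_out - h_in) * 0.0012 * volume_m3 * k_factor(volume_m3) * p_factor

def human_load(people, h_factor=2.0):
    # Number of people x H-factor x 0.065, in L/hr (2.0 = passive office work)
    return people * h_factor * 0.065

def door_load_cmh(door_area_m2, seconds_open, openings_per_hour):
    # Extra air intake from door openings, in CMH
    return door_area_m2 * 3 * seconds_open * openings_per_hour

def ventilation_load(air_intake_cmh, h_out, h_in):
    # AIR INTAKE (CMH) x (H out - H in) x 0.0012, in L/hr
    return air_intake_cmh * (h_out - h_in) * 0.0012

# Worked examples from the text:
print(round(infiltration_load(18.5, 8.6, 2000), 1))   # 4.7 L/hr
print(door_load_cmh(5, 6, 2))                         # 180 CMH

Summing the three loads estimated this way gives the total figure in L/hr used in the selection step below.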
3) SELECTION OF DEHUMIDIFIER AFTER CALCULATING THE LOAD IN Litre/Hr
Most dehumidifier catalogs indicate values like 16 L/day or 12 L/day. These values are indicative of the capacity of the dehumidifiers. Because different manufacturers use different conditions to measure this value, there is always confusion in selection.
For example: Unit "A" with 15 L/day at 85% RH 33 deg C may be smaller in capacity than unit "B" which states a 12 L/day capacity at 70% RH 26 deg C. You may ask, "That's confusing!" Yes, it is!
The reason is that dehumidifiers have different capacities at different temperatures and humidities. To simplify the description, a specified condition like 80% RH 30 deg C is the standard reference for most brands. However, the better systems usually quote another reference point, like 27 deg C 60% RH or 55% RH 25 deg C, as an intermediate mid-point reference. These middle values are the real operating range of the dehumidifier, as we usually want to keep the room at a lower RH than 80%. Some industrial systems come with a performance curve chart.
Example (refer to the capacity curve of Model "X"):
A room needs to be dehumidified to 50% RH at 25 deg C and has a humidity load of 0,125 L/hour (3 L/day), based on the method of calculation recommended above.
Model X, with a capacity of 3,75 L/day at 50% RH and 25 deg C, is the nearest choice.
Model X, however, is rightfully advertised as having a 12.5 L/day capacity at 30 deg C 80% RH.
From this exercise, we can see that the capacity curve is crucial to a proper selection.
The above method applies to condensation-type dehumidifier selection for control levels between 40% and 65% RH. It is a general rule-of-thumb method and is not applicable to centralized ducted air-con systems.
For Desiccant dehumidifier selection, various manufacturers have different methods of selection
as desiccant dehumidifiers have a more complicated performance curve.
Disclaimer : The information we offer above is not absolute or foolproof. For serious engineering calculations and load estimates, please approach trained professionals and authorities. Please consult us for detailed calculations for installations that require a definite humidity level.
Peterstown, NJ Math Tutor
Find a Peterstown, NJ Math Tutor
...I try to relate the concepts to real-life situations so they have a reference point and then they can make the connection. My students usually feel much better about the algebra and then they
have a solid foundation for future math courses/proficiency tests. "Pre-algebra" skills are the foundat...
10 Subjects: including ACT Math, algebra 1, algebra 2, geometry
...Topics include the relationship between time domain and frequency domains, bandwidth requirements of various modulation schemes, noise effects, multiplexing and transmission methods. Lessons
also also covers protocols, architecture, and performance analysis of local and wide area networks. Principles and techniques used to analyze and design wireless communication systems.
43 Subjects: including trigonometry, discrete math, ASVAB, Java
...In Kindergarten, I instruct students through two of the four literacy centers used daily to achieve letter and sound recognition and to increase writing proficiency. In first grade I read
daily with each student working on strategies to help them achieve independence. I assist in the writing component of utilizing and organizing thoughts and sentences.
16 Subjects: including SAT math, GRE, GED, algebra 1
...As a tutor, I strive to make sure every single one of my students are able to understand the material that they are having trouble with to the point where they can feel confident enough to
tutor someone else that may be having the same troubles. When it comes to specific subjects and test prep, ...
40 Subjects: including algebra 2, chemistry, prealgebra, SAT math
...Systems of equations. Consistent and dependent systems. Systems of Inequalities.
4 Subjects: including algebra 1, algebra 2, prealgebra, elementary math
Simple groups, the atoms of symmetry
[New release, HaskellForMaths-0.1.9, available. Contains documentation improvements, new version of graphAuts using equitable partitions, and the code used in this blog post.]
Over the last few months on this blog, we have been looking at symmetry. We started out by looking at symmetries of graphs, then more recently we've been looking at symmetries of finite geometries.
Last time, I said that there was more to say about finite geometries. There is, but first of all I realised that I need to say a bit more about groups.
We've come across groups already many times. Whenever we have an object, then the collection of all symmetries of that object is called a group. (Recall that a symmetry is a change that leaves the
object looking the same.) Suppose that we call this group G, and arbitrary elements in the group g and h. Then G satisfies the following properties:
- There is a binary operation, *, defined on G. (g*h is the symmetry that you get if you do g then h.)
- If g and h are in G, then so is g*h. (Doing one symmetry followed by another is again a symmetry.)
- There is an element 1 in G, the identity, such that 1*g = g = g*1. (1 is the symmetry "do nothing".)
- If g is in G, then there is an element g^-1 in G, called the inverse of g, such that g*g^-1 = g^-1*g = 1. (Doing g^-1 is the same as undoing g.)
- (g*h)*k = g*(h*k). (Associativity: it doesn't matter how we group the symmetries when combining them.)
Mathematicians sometimes (usually) turn things the other way round: They define a group by the properties above, and then observe that the collection of symmetries of an object fits the definition.
The problem with this is that it covers over the original intuitions behind the concept.
For us, the key point to take away from the definition is that groups are "closed". We've been looking at symmetries, represented by permutations. But symmetries and permutations just are the sort of
thing that you can multiply (do one then another), that have an identity (do nothing), and that have inverses (undo). Those all come for free. So when we say that a collection
of symmetries, or of permutations, is a group, the only new thing that we're claiming is that the collection is closed:
- for g, h in G, g*h, g^-1, and 1 are also in G.
Okay, so now we know what a group is. Now you know what mathematicians are like - as soon as they've discovered some new type of structure, they always ask themselves questions like the following:
- can we classify all structures of this type?
- can smaller structures of this type be combined into larger structures of the same type?
- can larger structures of this type be broken down into smaller structures?
- can we classify all "atomic" structures of this type, that can't be broken down any further?
What about groups? Well, the answer is, for finite groups, mostly yes. Specifically, groups are made out of "atoms" - but there isn't a complete understanding of all the different ways they can be built from the atoms. Anyway, for the moment I'm
going to skip the first two questions (about building groups up), and concentrate on the last two (breaking them down).
Okay, so can groups be broken down into smaller groups? Well, groups can have smaller groups contained within them. For example, let's consider the group D10 - the symmetries of the pentagon.
(Somewhat confusingly, it's called D10 rather than D5, because it has 10 elements.)
> :load Math.Algebra.Group.PermutationGroup
> mapM_ print $ elts $ _D 10
Now, any subset of these elements, which is closed, is again a group. For example, the set { [], [[1,2],[3,5]] } is closed - 1, and all products and inverses of elements in the set, are in the set. This is called a subgroup of D10.
Here's Haskell code to find all subgroups of a group. (The group is passed in as a list of generators, and the subgroups are returned as lists of generators.)
> subgps gs = [] : subgps' S.empty [] (map (:[]) hs) where
> hs = filter isMinimal $ elts gs
> subgps' found ls (r:rs) =
> let ks = elts r in
> if ks `S.member` found
> then subgps' found ls rs
> else r : subgps' (S.insert ks found) (r:ls) rs
> subgps' found [] [] = []
> subgps' found ls [] = subgps' found [] [l ++ [h] | l <- reverse ls, h <- hs, last l < h]
> -- g is the minimal elt in the cyclic subgp it generates
> isMinimal 1 = False
> isMinimal g = all (g <=) primitives -- g == minimum primitives
> where powers = takeWhile (/=1) $ tail $ iterate (*g) 1
> n = orderElt g -- == length powers + 1
> primitives = filter (\h -> orderElt h == n) powers
By the way, most of the code we'll be looking at this week, including the above, should only be used with small groups.
Okay, so let's try it out:
> mapM_ print $ subgps $ _D 10
The last in the list is D10 itself, generated by a reflection and a rotation. Then there are five subgroups generated by five different reflections. (For example, recall that [[2,5],[3,4]] is the
reflection that swaps 2 with 5, and 3 with 4 - in other words, reflection in the vertical axis.). Then there is also a subgroup generated by the rotation [[1,2,3,4,5]] - the clockwise rotation that
moves 1 to 2, 2 to 3, 3 to 4, 4 to 5, and 5 to 1. Finally, [] is the subgroup with no generators, which by convention is the group consisting of just the single element 1.
So can we break a group down into its subgroups? Well, not quite. When we break something down, we should end up with two (or more) parts. Subgroups are parts. But we may not always be able to find
another matching part to make up the group.
I'm probably being a bit cryptic. Let's consider an analogy. We know that 15 = 3*5. We also have 5 = 15/3. So 15 = 3 * (15/3). Given a group G, and a subgroup H, we would like to be able to build G
as G = H * (G/H). I'm not going to dwell on the * in this statement. What about the / ?
Well, it turns out that we can form a quotient G/H of a group by a subgroup - but only if the subgroup satisfies some extra conditions.
Given any subsets X and Y of G, we can define a product XY:
> xs -*- ys = toListSet [x*y | x <- xs, y <- ys]
For X or Y consisting of just a single element, we can form the products xY = {x}Y, or Xy = X{y}:
> xs -* y = L.sort [x*y | x <- xs] -- == xs -*- [y]
> x *- ys = L.sort [x*y | y <- ys] -- == [x] -*- ys
Now it turns out that we can use this multiplication to define a quotient G/H. We let the elements of G/H be the (right) "cosets" Hg, for g in G. For example:
> let hs = [1, p [[1,2],[3,5]]]
> let x = p [[1,2,3,4,5]]
> let y = p [[2,5],[3,4]]
> hs -* x
> hs -* y
Here we have taken a subgroup H in D10, and then found the cosets Hx, Hy, for a couple of elements x, y in D10.
Now, to get our quotient group working, what we'd like is to have (Hx) (Hy) = H(xy). Unfortunately, this isn't guaranteed to be true. For example:
> (hs -* x) -*- (hs -* y)
> hs -* (x*y)
Oh dear. But there are some subgroups for which this construction does work. Suppose that we had a subgroup K, such that for all g in G, g^-1 K g = K. Then Kx Ky = Kx (x^-1Kx)y = KKxy = Kxy. (KK=K
follows from the fact that K is a subgroup, hence closed.)
A subgroup K of G such that g^-1 K g = K for all g in G is called a normal subgroup. For a normal subgroup, we can form the quotient G/K, and so we have broken G down into two parts, K and G/K.
Here's the code:
> isNormal gs ks = all (== ks') [ (g^-1) *- ks' -* g | g <- gs]
> where ks' = elts ks
> normalSubgps gs = filter (isNormal gs) (subgps gs)
> quotientGp gs ks
> | ks `isNormal` gs = gens $ toSn [action cosetsK (-* g) | g <- gs]
> | otherwise = error "quotientGp: not well defined unless ks normal in gs"
> where cosetsK = cosets gs ks
> gs // ks = quotientGp gs ks
For example:
> mapM_ print $ normalSubgps $ _D 10
Two of these are "trivial". [] is the subgroup {1}, which will always be normal. The last subgroup listed is D10 itself. A group will always be a normal subgroup of itself. So the only "proper"
normal subgroup is [ [[1,2,3,4,5]] ] - the subgroup of rotations of the pentagon.
In this case, there are only two cosets:
> mapM_ print $ cosets (_D 10) [p [[1,2,3,4,5]]]
Not surprisingly then, the quotient group is rather simple:
> _D 10 // [p [[1,2,3,4,5]]]
Just to explain a little: In quotientGp gs ks, we form the cosets of ks, and then we look at the action of the gs on the cosets by right multiplication - g sends Kh to Khg. So the elements of the
quotient group are permutations of the cosets. However, if we printed this out, it would be hard to see what was going on. So we call "toSn", which labels the cosets 1,2,..., and rewrites the
elements as permutations of the numbers. In the case above, we see that the quotient group can be generated by a single element, which simply swaps the two cosets.
Let's look at another example. S 4 is the group of all permutations of [1..4] - which also happens to be the symmetry group of the tetrahedron. S 4 has lots of subgroups:
> mapM_ print $ subgps $ _S 4
However, only a few of them are normal:
> mapM_ print $ normalSubgps $ _S 4
The first two are 1 and S4 itself. The third is the group of rotations of the tetrahedron. The fourth is just the rotations which move all the points.
Why are these subgroups normal, but the others aren't? Well, let's have a look at some of the ones that aren't.
We have a subgroup [ [[1,2]] ], and another [ [[1,3]] ], and several others that "look the same" - they just swap two points. We also have a subgroup [ [[1,2,3]] ], a subgroup [ [[1,2,4]] ], and
several others that "look the same" - they just just rotate three points while leaving the fourth fixed. In fact, in both cases we have a set of conjugate subgroups. What I mean is, for a subgroup to
be normal, we required that g^-1 K g = K. If a subgroup is not normal, then we can find a g such that g^-1 H g /= H. Then g^-1 H g will also be a subgroup (exercise: why?), and it will "look the
same" as H.
By contrast, a normal subgroup is often the only subgroup that "looks like that".
Okay, so we've seen how groups can be broken down into smaller groups. Now, what about "atoms"? Well, a group always has itself and 1 as normal subgroups. If it has other normal subgroups, then we
can break it down as K, G/K. We can keep going, and see if either of these in their turn has non-trivial normal subgroups. Eventually, we will end up with groups which have no non-trivial normal
subgroups. Such a group is called a simple group. This is a bit like a prime number - which has no other factors besides itself and 1. Mark Ronan, in Symmetry and the Monster, calls simple groups the "atoms of symmetry". They are the pieces out of
which all symmetry groups are made.
(Note that starting from a group G, with normal subgroups K1, K2, ..., we have more than one choice of how to break it down. Luckily, it turns out that if we keep going, then at the end we will have
the same collection of "atoms", no matter which order we chose.)
So, what do simple groups look like?
Well, first, here's some code to detect them:
> isSimple gs = length (normalSubgps gs) == 2
Then the finite simple groups are as follows:
- The cyclic groups of order p, p prime (the rotation group of a regular p-gon):
> _C n | n >= 2 = [p [[1..n]]]
For example:
> isSimple $ _C 5
> isSimple $ _C 6
Exercise: Why is C n simple when n is prime, and not when it is not?
- The alternating groups A n, n >= 5. (A n is the subgroup of S n consisting of those elements of S n which are "even" - see
> isSimple $ _A 4
> isSimple $ _A 5
(As it happens, A4 is the group of rotations of the tetrahedron, and A5 is the group of rotations of the dodecahedron. Unfortunately, An, n>5, doesn't have any such intuitive interpretation.)
- The projective special linear groups PSL(n,Fq) (except for PSL(2,F2) and PSL(2,F3), which are not simple). This is the subgroup of PGL(n,Fq) consisting of projective transformations with determinant 1. (Recall that last time we looked at PΓL(n,Fq), the group of symmetries of the projective geometry PG(n-1,Fq), which includes both projective transformations and field automorphisms.)
- Several more infinite families of subgroups of PΓL(n,Fq), with geometric significance.
- Finally, there are 26 "sporadic" simple groups, which don't belong to any of the infinite families described above.
I should just say a thing or two about the classification of finite simple groups:
- The proof of the classification was a monster effort by a whole generation of group theorists. The proof runs to 10000 pages.
- The part that mathematicians find most interesting is the sporadic simple groups. In many mathematical classifications, one only finds infinite families (for example, the primes), so the sporadic
groups feel like little miracles that have dropped out of the sky. Why are they there?
Two of the things I'm hoping to do in future blog posts are to describe and construct some of the other "simple groups of Lie type" (subgroups of PΓL(n,Fq)), and to describe and construct some of the sporadic simple groups.
The FFT function returns a result equal to the complex, discrete Fourier transform of Array. The result of this function is a single- or double-precision complex array.
The discrete Fourier transform, F(u), of an N-element, one-dimensional function, f(x), is defined as:
F(u) = (1/N) * SUM[x = 0 .. N-1] f(x) * exp(-2*pi*i*u*x/N)
And the inverse transform, (Direction > 0), is defined as:
f(x) = SUM[u = 0 .. N-1] F(u) * exp(+2*pi*i*u*x/N)
If the keyword OVERWRITE is set, the transform is performed in-place, and the result overwrites the original contents of the array.
Running Time
For a one-dimensional FFT, running time is roughly proportional to the total number of points in Array times the sum of its prime factors. Let N be the total number of elements in Array, and
decompose N into its prime factors:
N = p[1] * p[2] * ... * p[k]
Running time is proportional to:
N * ( T[p[1]] + T[p[2]] + ... + T[p[k]] )
where T[p] reflects the cost of a radix-p pass and T[3] ~ 4T[2]. For example, the running time of a 263 point FFT is approximately 10 times longer than that of a 264 point FFT, even though there are fewer points. The sum of the prime factors of
263 is 264 (1 + 263), while the sum of the prime factors of 264 is 20 (2 + 2 + 2 + 3 + 11).
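As a quick illustration of this heuristic (not part of the IDL documentation), the short Python sketch below computes the sum of prime factors and compares the 263-point and 264-point cases quoted above; the helper name is arbitrary.

# Illustrative only: cost is taken as N times the sum of N's prime factors.
def prime_factor_sum(n):
    total, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            total += d
            n //= d
        d += 1
    return total + (n if n > 1 else 0)

for n in (263, 264):
    s = prime_factor_sum(n)
    print(n, s, n * s)
# 263 is prime (factor sum 263; the text above also counts 1, giving 264),
# so 263 * 263 is roughly 13 times 264 * 20, consistent with the ~10x figure.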
Syntax
Result = FFT( Array [, Direction] [, DIMENSION=vector] [, /DOUBLE] [, /INVERSE] [, /OVERWRITE] )
Return Value
FFT returns a complex array that has the same dimensions as the input array. The output array is ordered in the same manner as almost all discrete Fourier transforms. Element 0 contains the zero
frequency component, F[0]. The array element F[1] contains the smallest, nonzero positive frequency, which is equal to 1/(N[i] T[i]), where N[i] is the number of elements and T[i] is the sampling
interval of the i^th dimension. F[2] corresponds to a frequency of 2/(N[i] T[i]). Negative frequencies are stored in the reverse order of positive frequencies, ranging from the highest to lowest
negative frequencies.
The FFT function can be performed on functions of up to eight (8) dimensions. If a function has n dimensions, IDL performs a transform in each dimension separately, starting with the first dimension
and progressing sequentially to dimension n. For example, if the function has two dimensions, IDL first does the FFT row by row, and then column by column.
For an even number of points in the i^th dimension, the frequencies corresponding to the returned complex values are:
0, 1/(N[i]T[i]), 2/(N[i]T[i]), ..., (N[i]/2-1)/(N[i]T[i]), 1/(2T[i]), -(N[i]/2-1)/(N[i]T[i]), ..., -1/(N[i]T[i])
where 1/(2T[i]) is the Nyquist critical frequency.
For an odd number of points in the i^th dimension, the frequencies corresponding to the returned complex values are:
0, 1/(N[i]T[i]), 2/(N[i]T[i]), ..., (N[i]/2-0.5)/(N[i]T[i]), -(N[i]/2-0.5)/(N[i]T[i]), ..., -1/(N[i]T[i])
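As a cross-check of this ordering, here is a small Python sketch (not part of the IDL documentation) that builds the frequency array for N points sampled at interval T in exactly the order described; the function name is arbitrary. NumPy users can compare it with numpy.fft.fftfreq, which produces the same layout except that, for even N, it labels the Nyquist bin as -1/(2T) rather than +1/(2T).

# Illustrative only: frequencies in FFT output order for sampling interval T.
def fft_frequencies(N, T):
    if N % 2 == 0:
        pos = [k / (N * T) for k in range(N // 2)]        # 0 .. (N/2-1)/(NT)
        nyquist = [1.0 / (2 * T)]                         # Nyquist frequency
        neg = [-(N // 2 - k) / (N * T) for k in range(1, N // 2)]
        return pos + nyquist + neg
    else:
        pos = [k / (N * T) for k in range((N + 1) // 2)]  # 0 .. (N/2-0.5)/(NT)
        neg = [-((N - 1) // 2 - k) / (N * T) for k in range((N - 1) // 2)]
        return pos + neg

print(fft_frequencies(8, 0.1))   # even N: ends with ... 5.0, -3.75, -2.5, -1.25
print(fft_frequencies(7, 0.1))   # odd N: positive then negative frequencies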
Arguments
Array
The array to which the Fast Fourier Transform should be applied. If Array is not of complex type, it is converted to complex type. The dimensions of the result are identical to those of Array. The size of each dimension may be any integer value and does not necessarily have to be an integer power of 2, although powers of 2 are certainly the most efficient.
Direction
Direction is a scalar indicating the direction of the transform, which is negative by convention for the forward transform, and positive for the inverse transform. If Direction is not specified, the
forward transform is performed.
A normalization factor of 1/N, where N is the number of points, is applied during the forward transform.
When transforming from a real vector to complex and back, it is slightly faster to set Direction to 1 in the real to complex FFT.
Note also that the value of Direction is ignored if the INVERSE keyword is set.
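As a hedged sketch of what this convention means in practice, the IDL-style normalization can be mimicked on top of NumPy (which instead puts the 1/N factor on the inverse transform); the wrapper below is my own illustration, not an IDL or NumPy API:

import numpy as np

def fft_idl_convention(a, direction=-1):
    # Forward (direction < 0): apply the 1/N factor, as described above.
    # Inverse (direction > 0): no factor, so undo NumPy's built-in 1/N.
    a = np.asarray(a, dtype=complex)
    n = a.size
    if direction < 0:
        return np.fft.fft(a) / n
    return np.fft.ifft(a) * n

x = np.arange(4, dtype=float)
X = fft_idl_convention(x, -1)
print(np.allclose(fft_idl_convention(X, 1), x))   # True: round trip recovers x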
DIMENSION
Set this keyword to the dimension across which to calculate the FFT. If this keyword is not present or is zero, then the FFT is computed across all dimensions of the input array. If this keyword is present, then the FFT is calculated only across a single dimension. For example, if the dimensions of Array are N1, N2, N3, and DIMENSION is 2, the FFT is calculated only across the second dimension.
DOUBLE
Set this keyword to a value other than zero to force the computation to be done in double-precision arithmetic, and to give a result of double-precision complex type. If DOUBLE is set equal to zero,
computation is done in single-precision arithmetic and the result is single-precision complex. If DOUBLE is not specified, the data type of the result will match the data type of Array.
INVERSE
Set this keyword to perform an inverse transform. Setting this keyword is equivalent to setting the Direction argument to a positive value. Note, however, that setting INVERSE results in an inverse
transform even if Direction is specified as negative.
OVERWRITE
If this keyword is set, and the Array parameter is a variable of complex type, the transform is done "in-place". The result overwrites the previous contents of the variable. For example, to perform a
forward, in-place FFT on the variable a:
a = FFT(a, -1, /OVERWRITE)
Thread Pool Keywords
This routine is written to make use of IDL's thread pool, which can increase execution speed on systems with multiple CPUs. The values stored in the !CPU system variable control whether IDL uses the
thread pool for a given computation. In addition, you can use the thread pool keywords TPOOL_MAX_ELTS, TPOOL_MIN_ELTS, and TPOOL_NOTHREAD to override the defaults established by !CPU for a single
invocation of this routine. See Thread Pool Keywords for details.
Specifically, FFT will use the thread pool to overlap the inner loops of the computation when used on data with dimensions which have factors of 2, 3, 4, or 5. The prime-number DFT does not use the
thread pool, as doing so would yield a relatively small benefit for the complexity it would introduce. Our experience shows that the improvement in performance from using the thread pool for FFT is
highly dependent upon many factors (data length and dimensions, single vs. double precision, operating system, and hardware) and can vary between platforms.
Examples
Display the log of the power spectrum of a 100-element index array by entering:
PLOT, /YLOG, ABS(FFT(FINDGEN(100), -1))
As a more complex example, display the power spectrum of a 100-element vector sampled at a rate of 0.1 seconds per point. Show the 0 frequency component at the center of the plot and label the
abscissa with frequency:
; Define the number of points and the interval:
N = 100
T = 0.1
; Midpoint+1 is the most negative frequency subscript:
N21 = N/2 + 1
; The array of subscripts:
F = INDGEN(N)
; Insert negative frequencies in elements F(N/2 +1), ..., F(N-1):
F[N21] = N21 -N + FINDGEN(N21-2)
; Compute T0 frequency:
F = F/(N*T)
; Shift so that the most negative frequency is plotted first:
PLOT, /YLOG, SHIFT(F, -N21), SHIFT(ABS(FFT(F, -1)), -N21)
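For comparison only, here is a NumPy/Matplotlib sketch of the same kind of centered power-spectrum plot (the signal choice is mine; fftshift plays the role of the SHIFT calls above):

import numpy as np
import matplotlib.pyplot as plt

N, T = 100, 0.1                              # 100 points sampled every 0.1 s
data = np.arange(N, dtype=float)             # analogous to FINDGEN(100)
freqs = np.fft.fftfreq(N, d=T)               # frequencies in FFT order
power = np.abs(np.fft.fft(data) / N) ** 2    # 1/N forward normalization, as in IDL

# Shift so the most negative frequency is plotted first (zero frequency centered).
plt.semilogy(np.fft.fftshift(freqs), np.fft.fftshift(power))
plt.xlabel("frequency")
plt.show()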
Compute the FFT of a two-dimensional image by entering:
; Create a cosine wave damped by an exponential.
n = 256
x = FINDGEN(n)
y = COS(x*!PI/6)*EXP(-((x - n/2)/30)^2/2)
; Construct a two-dimensional image of the wave.
z = REBIN(y, n, n)
; Add two different rotations to simulate a crystal structure.
z = ROT(z, 10) + ROT(z, -45)
WINDOW, XSIZE=540, YSIZE=540
LOADCT, 39
TVSCL, z, 10, 270
; Compute the two-dimensional FFT.
f = FFT(z)
logpower = ALOG10(ABS(f)^2) ; log of Fourier power spectrum.
TVSCL, logpower, 270, 270
; Compute the FFT only along the first dimension.
f = FFT(z, DIMENSION=1)
logpower = ALOG10(ABS(f)^2) ; log of Fourier power spectrum.
TVSCL, logpower, 10, 10
; Compute the FFT only along the second dimension.
f = FFT(z, DIMENSION=2)
logpower = ALOG10(ABS(f)^2) ; log of Fourier power spectrum.
TVSCL, logpower, 270, 10
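A hedged NumPy analogue of the DIMENSION keyword is the axis argument; note that which IDL dimension corresponds to which NumPy axis depends on array memory order, so the mapping below is only indicative:

import numpy as np

z = np.random.rand(256, 256)

f_all = np.fft.fft2(z) / z.size               # transform over both dimensions
f_d1 = np.fft.fft(z, axis=0) / z.shape[0]     # roughly DIMENSION=1
f_d2 = np.fft.fft(z, axis=1) / z.shape[1]     # roughly DIMENSION=2
print(f_all.shape, f_d1.shape, f_d2.shape)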
Version History
Introduced: Original
See Also
Calculating Lengthy Repeating Decimals
Date: 10/26/2006 at 21:14:22
From: Cy
Subject: 355/113
Hi, Dr. Math.
I'm currently enrolled in 8th grade algebra, and I just had a question
I was curious about. You know the approximation of pi, 355/113?
Well, I was trying to figure out if I could turn it into a decimal
using long division, and find out if the decimal repeats or ever
terminates. Can you please show me how to find when (or if) the
decimal terminates/repeats?
I am currently still using long division to find my answer, and have
been unsuccessful. This is what I have so far:
Date: 10/27/2006 at 00:49:51
From: Doctor Greenie
Subject: Re: 355/113
Hello, Cy --
If the decimal representation of a fraction terminates, then we can
give the decimal fraction an exact name, using base 10 place values:
.103 = 103 thousandths = 103/1000
.48932 = 48932 hundred thousandths = 48932/100000
.15 = 15 hundredths = 15/100 = 3/20
If the common fraction can't be written as an equivalent fraction with
a denominator which is a power of 10, then the decimal form of the
fraction will not terminate. So the only common fractions which have
decimal representations which terminate are those whose denominators
contain prime factors of only 5 and/or 2 (the prime factors of the
base, 10).
So your fraction 355/113 will not terminate.
Now... how many digits will the repeating decimal go before it
repeats? We can't tell before we start calculating; however, we know
that the number of places will be less than the denominator. If you
think about the long division process, and about the fact that the
division never terminates, then the only possible remainders at each
step are 1 through 112. Since we can have at most 112 different
remainders in the long division process, we must get a repeated
remainder after at most 112 steps; and so the repeating decimal part
of 355/113 can be at most 112 digits long.
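Here is a short Python sketch (my addition, not part of the original exchange) that carries out exactly this remainder-tracking argument; it confirms that the repeating block of 355/113 is 112 digits long:

def repeating_decimal(numerator, denominator):
    # Long division, recording each remainder; the digits repeat as soon as
    # a remainder shows up for the second time.
    remainder = numerator % denominator
    seen = {}
    digits = []
    while remainder != 0 and remainder not in seen:
        seen[remainder] = len(digits)
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    period = 0 if remainder == 0 else len(digits) - seen[remainder]
    return digits, period

digits, period = repeating_decimal(355, 113)
print(period)                              # 112
print("".join(map(str, digits[:20])))      # 14159292035398230088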
And in fact, with a large prime denominator like 113, it is quite
likely that the repeating decimal pattern will be 112 digits long.
One way to find the repeating decimal part is long division, as you
have attempted. This of course can be very tedious, and arithmetic
mistakes are easy to make. You have made an error somewhere,
because your result is only correct this far:
The next digit should be "5"; you show "4".
Is there a way to find the repeating decimal faster than long
division? Yes, there is. To understand the quicker method for
finding the decimal representation of 355/113, let's look at the
calculation of the first several decimal places using long division:
113 ) 355.00000000000000
After the whole number part of the answer is obtained, the remainder
is 16. So at this point we are dividing "16" followed by an
infinite string of 0's by 113. Then after the 8th decimal place,
the remainder is 4. So from this point on, we are dividing "4"
followed by an infinite string of zeros by 113. Since 4 is 1/4 of
16, the sequence of digits we will get starting here is the string
of digits we have obtained to this point, divided by 4.
So we can find the complete repeating decimal representation of
355/113 by using long division or some other technique to get the
first several decimal places and then dividing that string of digits
by 4. Division by 4 is much easier than division by 113, so the
overall time required to find the complete repeating decimal is
shortened. Here is how the beginning of the process went when I
used the calculator on my PC....
355/113 = 3.1415929203539823008849557522124
/4 = 0.7853982300884955752212389380531
/4 = 0.19634955752212389380530973451327
/4 = 0.049087389380530973451327433628319
/4 = 0.01227184734513274336283185840708
/4 = 0.0030679618362831858407079646017699
Now we can "splice together" the appropriate digits from these
strings. For example, the last digit in the first line is rounded,
so we look at the several preceding digits, "5752212", and we look
for that sequence of digits in the next line. The digits in the
second line following this sequence are "389380531", with the last
digit "1" having been rounded; we splice these digits onto the other
digits we already have. So then in the third line we find the
string of digits "38053" and copy the next several digits from that
line, omitting the last digit. And so on.... We get the following:
355/113 = 3.14159 29203 53982 30088 49557 52212 38938 05309 --
73451 32743 36283 18584 07079 64601 769...
I have grouped the digits into groups of 5 so we can keep track of
how many decimal places we have gone. At this point, we have found
73 digits--more than half the maximum possible length of the repeating
pattern--without the pattern repeating. For reasons I don't think I
could explain, that means the repeating sequence of digits will in
fact be the full 112 digits long. It also means that somewhere in our
string of digits we will find a string of digits which is the "9's
complement" of the beginning digits.
Here is an example (somewhat familiar to you, I hope) of what this
means. If we look at the decimal representation of 1/7...
1/7 = .142857142857....
digits 4-6 in the repeating pattern are the "9's complement" of digits
1-3, meaning 142+857=999.
So somewhere in our string of the first 73 digits of the decimal
representation of 355/113, we should find a string of digits which is
the "9's complement" of the first several digits:
- 1415929...
And indeed we do find this string, starting with the 57th digit.
(Note: (112/2)+1 = 57--the 57th digit is the first digit in the second
half of the 112-digit repeating pattern; the two 56-digit halves are
9's complements of each other.)
And so now we can complete the process of finding the 112-digit
repeating pattern for 355/113 by using the 9's complement of the first
56 digits:
355/113 = 3.14159 29203 53982 30088 49557 52212 38938 05309 --
73451 32743 36283 1 --
85840 70796 46017 69911 50442 47787 61061 94690 --
26548 67256 63716 8...
If you are really good with mental math (specifically, dividing by 4),
you can find this complete repeating pattern by taking the digits you
already have and dividing by 4 to get subsequent digits....
Finally, when I first started working on your problem and wanted to
check my calculations, I looked into finding repeating decimal
patterns by using a spreadsheet, since spreadsheets are good tools
for performing repeated calculations. It turned out to be relatively
easy to set up a spreadsheet to mimic the long division process. (If
you don't know anything about spreadsheets, try to find someone who
does so they can show you this process....)
Here's what we do on the spreadsheet to find the repeating decimal
for 355/113:
(1) In cell B1, enter "16".
We don't really care about the whole number part of the answer, "3"--
we are only interested in the remainder after the first step, which
is 16.
(2) In cell A2, enter the following formula: =FLOOR(10*B1/113,1)
This formula multiplies the remainder from cell B1 (16) by 10, divides
it by 113, and rounds down to the nearest whole number. This mimics
the process of finding the next digit of the answer. (16 times 10 =
160; 160/113 = 1 remainder 47; next digit is "1", and the remainder
is 47.)
(3) In cell B2, enter the following formula: =10*B1-113*A2
This formula finds the new remainder by multiplying the previous
remainder by 10 and subtracting 113 times the digit you just found.
(16 times 10 = 160; 160-113(1) = 47.)
(4) Highlight cells A2 and B2, drag down until you have reached to or
past cells A113 and B113, and use the "edit/fill/down" feature
(control-D in Microsoft Excel) to copy the formulas in cells A2 and
B2 to the remaining rows of columns A and B.
The sequence of digits in cells A2 through A113 are the digits of the
repeating decimal representation of 355/113. You will see that the
digits begin repeating with cell A114.
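The same recipe takes only a few lines of Python (a sketch of the spreadsheet procedure above, not something from the original answer); the list below plays the role of column A and the running remainder plays the role of column B:

digits = []
remainder = 16                                   # cell B1

for _ in range(113):
    digit = (10 * remainder) // 113              # =FLOOR(10*B1/113,1)
    remainder = 10 * remainder - 113 * digit     # =10*B1-113*A2
    digits.append(digit)

print("".join(map(str, digits[:112])))           # the full 112-digit repeating block
print(digits[112] == digits[0])                  # True: "cell A114" repeats "cell A2"

# The two 56-digit halves are 9's complements of each other, as claimed above.
print(all(a + b == 9 for a, b in zip(digits[:56], digits[56:112])))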
Thanks for submitting your question. I got a lot of good mental
exercise out of it; and I discovered an easy way to use a spreadsheet
to mimic the long division process and thus provide a quick path to
finding the decimal representation of any repeating fraction.
I hope all this helps. Please write back if you have any further
questions about any of this.
- Doctor Greenie, The Math Forum
Posts about algebra on A Mind for Madness
Today I’ll sketch a proof of Ito that birational smooth minimal models have all of their Hodge numbers exactly the same. It uses the ${p}$-adic integration from last time plus one piece of heavy machinery.
First, the piece of heavy machinery: If ${X, Y}$ are finite type schemes over the ring of integers ${\mathcal{O}_K}$ of a number field whose generic fibers are smooth and proper, then if ${|X(\mathcal{O}_K/\mathfrak{p})|=|Y(\mathcal{O}_K/\mathfrak{p})|}$ for all but finitely many prime ideals, ${\mathfrak{p}}$, then the generic fibers ${X_\eta}$ and ${Y_\eta}$ have the same Hodge numbers.
If you’ve seen these types of hypotheses before, then there’s an obvious set of theorems that will probably be used to prove this (Chebotarev + Hodge-Tate decomposition + Weil conjectures). Let’s
first restrict our attention to a single prime. Since we will be able to throw out bad primes, suppose we have ${X, Y}$ smooth, proper varieties over ${\mathbb{F}_q}$ of characteristic ${p}$.
Proposition: If ${|X(\mathbb{F}_{q^r})|=|Y(\mathbb{F}_{q^r})|}$ for all ${r}$, then ${X}$ and ${Y}$ have the same ${\ell}$-adic Betti numbers.
This is a basic exercise in using the Weil conjectures. First, ${X}$ and ${Y}$ clearly have the same Zeta functions, because the Zeta function is defined entirely by the number of points over ${\mathbb{F}_{q^r}}$. But the Zeta function decomposes
$\displaystyle Z(X,t)=\frac{P_1(t)\cdots P_{2n-1}(t)}{P_0(t)\cdots P_{2n}(t)}$
where ${P_i}$ is the characteristic polynomial of Frobenius acting on ${H^i(X_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)}$. The Weil conjectures tell us we can recover the ${P_i(t)}$ if we know the
Zeta function. But now
$\displaystyle \dim H^i(X_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)=\deg P_i(t)=\dim H^i(Y_{\overline{\mathbb{F}_q}}, \mathbb{Q}_\ell)$
and hence the Betti numbers are the same. Now let’s go back and notice the magic of ${\ell}$-adic cohomology. If ${X}$ and ${Y}$ are as before over the ring of integers of a number field, our
assumption about the number of points over finite fields being the same for all but finitely many primes implies that we can pick a prime of good reduction and get that the ${\ell}$-adic Betti
numbers of the reductions are the same ${b_i(X_p)=b_i(Y_p)}$.
One of the main purposes of ${\ell}$-adic cohomology is that it is “topological.” By smooth, proper base change we get that the ${\ell}$-adic Betti numbers of the geometric generic fibers are the same:
$\displaystyle b_i(X_{\overline{\eta}})=b_i(X_p)=b_i(Y_p)=b_i(Y_{\overline{\eta}}).$
By the standard characteristic ${0}$ comparison theorem we then get that the singular cohomology is the same when base changing to ${\mathbb{C}}$, i.e.
$\displaystyle \dim H^i(X_\eta\otimes \mathbb{C}, \mathbb{Q})=\dim H^i(Y_\eta \otimes \mathbb{C}, \mathbb{Q}).$
Now we use the Chebotarev density theorem. The Galois representations on each cohomology have the same traces of Frobenius for all but finitely many primes by assumption and hence the
semisimplifications of these Galois representations are the same everywhere! Lastly, these Galois representations are coming from smooth, proper varieties and hence the representations are
Hodge-Tate. You can now read the Hodge numbers off of the Hodge-Tate decomposition of the semisimplification and hence the two generic fibers have the same Hodge numbers.
Alright, in some sense that was the “uninteresting” part, because it just uses a bunch of machines and is a known fact (there’s also a lot of stuff to fill in to the above sketch to finish the
argument). Here’s the application of ${p}$-adic integration.
Suppose ${X}$ and ${Y}$ are smooth birational minimal models over ${\mathbb{C}}$ (for simplicity we’ll assume they are Calabi-Yau, Ito shows how to get around not necessarily having a non-vanishing
top form). I’ll just sketch this part as well, since there are some subtleties with making sure you don’t mess up too much in the process. We can “spread out” our varieties to get our setup in the
beginning. Namely, there are proper models over some ${\mathcal{O}_K}$ (of course they aren’t smooth anymore), where the base change of the generic fibers are isomorphic to our original varieties.
By standard birational geometry arguments, there is some big open locus (the complement has codimension greater than ${2}$) where these are isomorphic and this descends to our model as well. Now we
are almost there. We have an etale isomorphism ${U\rightarrow V}$ over all but finitely many primes. If we choose nowhere vanishing top forms on the models, then the restrictions to the fibers are ${p}$-adic volume forms.
But our standard trick works again here. The isomorphism ${U\rightarrow V}$ pulls back the volume form on ${Y}$ to a volume form on ${X}$ over all but finitely primes and hence they differ by a
function which has ${p}$-adic valuation ${1}$ everywhere. Thus the two models have the same volume over all but finitely many primes, and as was pointed out last time the two must have the same
number of ${\mathbb{F}_{q^r}}$-valued points over these primes since we can read this off from knowing the volume.
The machinery says that we can now conclude the two smooth birational minimal models have the same Hodge numbers. I thought that was a pretty cool and unexpected application of this idea of ${p}$
-adic volume. It is the only one I know of. I’d be interested if anyone knows of any other.
Volumes of p-adic Schemes
I came across this idea a long time ago, but I needed the result that uses it in its proof again, so I was curious about figuring out what in the world is going on. It turns out that you can make “${p}$-adic measures” to integrate against on algebraic varieties. This is a pretty cool idea that I never would have guessed possible. I mean, maybe complex varieties or something, but over ${p}$-adic fields?
Let’s start with a pretty standard setup in ${p}$-adic geometry. Let ${K/\mathbb{Q}_p}$ be a finite extension and ${R}$ the ring of integers of ${K}$. Let ${\mathbb{F}_q=R_K/\mathfrak{m}}$ be the
residue field. If this scares you, then just take ${K=\mathbb{Q}_p}$ and ${R=\mathbb{Z}_p}$.
Now let ${X\rightarrow Spec(R)}$ be a smooth scheme of relative dimension ${n}$. The picture to have in mind here is some smooth ${n}$-dimensional variety over a finite field ${X_0}$ as the closed
fiber and a smooth characteristic ${0}$ version of this variety, ${X_\eta}$, as the generic fiber. This scheme is just interpolating between the two.
Now suppose we have an ${n}$-form ${\omega\in H^0(X, \Omega_{X/R}^n)}$. We want to say what it means to integrate against this form. Let ${|\cdot |_p}$ be the normalized ${p}$-adic valuation on ${K}$. We want to consider the ${p}$-adic topology on the set of ${R}$-valued points ${X(R)}$. This can be a little weird if you haven’t done it before. It is a totally disconnected, compact space.
The idea for the definition is the exact naive way of converting the definition from a manifold to this setting. Consider some point ${s\in X(R)}$. Locally in the ${p}$-adic topology we can find a
“disk” containing ${s}$. This means there is some open ${U}$ about ${s}$ together with a ${p}$-adic analytic isomorphism ${U\rightarrow V\subset R^n}$ to some open.
In the usual way, we now have a choice of local coordinates ${x=(x_i)}$. This means we can write ${\omega|_U=fdx_1\wedge\cdots \wedge dx_n}$ where ${f}$ is a ${p}$-adic analytic function on ${V}$. Now we just define
$\displaystyle \int_U \omega= \int_V |f(x)|_p dx_1 \cdots dx_n.$
Now maybe it looks like we’ve converted this to another weird ${p}$-adic integration problem that we don’t know how to do, but the right hand side makes sense because ${R^n}$ is a compact
topological group so we integrate with respect to the normalized Haar measure. Now we’re done, because modulo standard arguments that everything patches together we can define ${\int_X \omega}$ in
terms of these local patches (the reason for being able to patch without bump functions will be clear in a moment, but roughly on overlaps the form will differ by a unit with valuation ${1}$).
This allows us to define a “volume form” for smooth ${p}$-adic schemes. We will call an ${n}$-form a volume form if it is nowhere vanishing (i.e. it trivializes ${\Omega^n}$). You might be scared
that the volume you get by integrating isn’t well-defined. After all, on a real manifold you can just scale a non-vanishing ${n}$-form to get another one, but the integral will be scaled by that factor.
We’re in luck here, because if ${\omega}$ and ${\omega'}$ are both volume forms, then there is some non-vanishing function such that ${\omega=f\omega'}$. Since ${f}$ is never ${0}$, it is invertible,
and hence is a unit. This means ${|f(x)|_p=1}$, so since we can only get other volume forms by scaling by a function with ${p}$-adic valuation ${1}$ everywhere the volume is a well-defined notion
under this definition! (A priori, there could be a bunch of “different” forms, though).
It turns out to actually be a really useful notion as well. If we want to compute the volume of ${X/R}$, then there is a natural way to do it with our set-up. Consider the reduction mod ${\mathfrak{m}}$ map ${\phi: X(R)\rightarrow X(\mathbb{F}_q)}$. The fiber over any point is a ${p}$-adic open set, and they partition ${X(R)}$ into a disjoint union of ${|X(\mathbb{F}_q)|}$ mutually isomorphic sets (recall the reduction map is surjective here by the relevant variant on Hensel’s lemma). Fix one point ${x_0\in X(\mathbb{F}_q)}$, and define ${U:=\phi^{-1}(x_0)}$. Then by the above analysis we get
$\displaystyle Vol(X)=\int_X \omega=|X(\mathbb{F}_q)|\int_{U}\omega$
All we have to do is compute this integral over one open now. By our smoothness hypothesis, we can find a regular system of parameters ${x_1, \ldots, x_n\in \mathcal{O}_{X, x_0}}$. This is a
legitimate choice of coordinates because they define a ${p}$-adic analytic isomorphism with ${\mathfrak{m}^n\subset R^n}$.
Now we use the same silly trick as before. Suppose ${\omega=fdx_1\wedge \cdots \wedge dx_n}$, then since ${\omega}$ is a volume form, ${f}$ can’t vanish and hence ${|f(x)|_p=1}$ on ${U}$. Thus
$\displaystyle \int_{U}\omega=\int_{\mathfrak{m}^n}dx_1\cdots dx_n=\frac{1}{q^n}$
This tells us that no matter what ${X/R}$ is, if there is a volume form (which often there isn’t), then the volume
$\displaystyle Vol(X)=\frac{|X(\mathbb{F}_q)|}{q^n}$
is just the number of ${\mathbb{F}_q}$-rational points scaled by a factor depending only on the size of the residue field and the dimension of ${X}$. Next time we’ll talk about the one
place I know of that this has been a really useful idea.
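To make the formula ${Vol(X)=|X(\mathbb{F}_q)|/q^n}$ feel concrete, here is a tiny Python count for a smooth affine curve over a small prime field (my own toy illustration; the particular curve and prime are arbitrary choices, not anything from the discussion above):

from fractions import Fraction

# Count F_p-points on the smooth affine curve y^2 = x^3 + x + 1 over F_p and
# divide by p (relative dimension n = 1), mimicking Vol(X) = |X(F_q)| / q^n.
p = 7
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + x + 1)) % p == 0]

print(len(points), Fraction(len(points), p))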
Newton Polygons of p-Divisible Groups
I really wanted to move on from this topic, because the theory gets much more interesting when we move to ${p}$-divisible groups over some larger rings than just algebraically closed fields.
Unfortunately, while looking over how Demazure builds the theory in Lectures on ${p}$-divisible Groups, I realized that it would be a crime to bring you this far and not concretely show you the power
of thinking in terms of Newton polygons.
As usual, let’s fix an algebraically closed field of positive characteristic to work over. I was vague last time about the anti-equivalence of categories between ${p}$-divisible groups and ${F}$
-crystals mostly because I was just going off of memory. When I looked it up, I found out I was slightly wrong. Let’s compute some examples of some slopes.
Recall that ${D(\mu_{p^\infty})\simeq W(k)}$ and ${F=p\sigma}$. In particular, ${F(1)=p\cdot 1}$, so in our ${F}$-crystal theory we get that the normalized ${p}$-adic valuation of the eigenvalue ${p}$ of ${F}$ is ${1}$. Recall that we called this the slope (it will become clear why in a moment).
Our other main example was ${D(\mathbb{Q}_p/\mathbb{Z}_p)\simeq W(k)}$ with ${F=\sigma}$. In this case we have ${1}$ is “the” eigenvalue which has ${p}$-adic valuation ${0}$. These slopes totally
determine the ${F}$-crystal up to isomorphism, and the category of ${F}$-crystals (with slopes in the range ${0}$ to ${1}$) is anti-equivalent to the category of ${p}$-divisible groups.
The Dieudonné-Manin decomposition says that we can always decompose ${H=D(G)\otimes_W K}$ as a direct sum of vector spaces indexed by these slopes. For example, if I had a height three ${p}$
-divisible group, ${H}$ would be three dimensional. If it decomposed as ${H_0\oplus H_1}$ where ${H_0}$ was ${2}$-dimensional (there is a repeated ${F}$-eigenvalue of slope ${0}$), then ${H_1}$ would
be ${1}$-dimensional, and I could just read off that my ${p}$-divisible group must be isogenous to ${G\simeq \mu_{p^\infty}\oplus (\mathbb{Q}_p/\mathbb{Z}_p)^2}$.
In general, since we have a decomposition ${H=H_0\oplus H' \oplus H_1}$ where ${H'}$ is the part with slopes strictly in ${(0,1)}$ we get a decomposition ${G\simeq (\mu_{p^\infty})^{r_1}\oplus G' \oplus (\mathbb{Q}_p/\mathbb{Z}_p)^{r_0}}$ where ${r_j}$ is the dimension of ${H_j}$ and ${G'}$ does not have any factors of those forms.
This is where the Newton polygon comes in. We can visually arrange this information as follows. Put the slopes of ${F}$ in increasing order ${\lambda_1, \ldots, \lambda_r}$. Make a polygon in the
first quadrant by plotting the points ${P_0=(0,0)}$, ${P_1=(\dim H_{\lambda_1}, \lambda_1 \dim H_{\lambda_1})}$, … , ${\displaystyle P_j=\left(\sum_{l=1}^j\dim H_{\lambda_l}, \sum_{l=1}^j \lambda_l\dim H_{\lambda_l}\right)}$.
This might look confusing, but all it says is to get from ${P_{j}}$ to ${P_{j+1}}$ make a line segment of slope ${\lambda_j}$ and make the segment go to the right for ${\dim H_{\lambda_j}}$. This way
you visually encode the slope with the actual slope of the segment, and the longer the segment is the bigger the multiplicity of that eigenvalue.
But this way of encoding the information gives us something even better, because it turns out that all these ${P_i}$ must have integer coordinates (a highly non-obvious fact proved in the book by
Demazure listed above). This greatly restricts our possibilities for Dieudonné ${F}$-crystals. Consider the height ${2}$ case. We have ${H}$ is two dimensional, so we have ${2}$ slopes (possibly the
same). The maximal ${y}$ coordinate you could ever reach is if both slopes were maximal which is ${1}$. In that case you just get the line segment from ${(0,0)}$ to ${(2,2)}$. The lowest you could
get is if the slopes were both ${0}$ in which case you get a line segment ${(0,0)}$ to ${(2,0)}$.
Every other possibility must be a polygon between these two with integer breaking points and increasing order of slopes. Draw it (or if you want to cheat look below). You will see that there are
obviously only two other possibilities. The one that goes ${(0,0)}$ to ${(1,0)}$ to ${(2,1)}$ which is a slope ${0}$ and slope ${1}$ and corresponds to ${\mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$ and the one that goes ${(0,0)}$ to ${(2,1)}$. This corresponds to a slope ${1/2}$ with multiplicity ${2}$. This corresponds to the ${E[p^\infty]}$ for supersingular elliptic curves. That
recovers our list from last time.
We now just have a bit of a game to determine all height ${3}$ ${p}$-divisible groups up to isogeny (and it turns out in this small height case that determines them up to isomorphism). You can just draw all the possibilities for Newton polygons as in the height ${2}$ case to see that the ${8}$ possibilities are ${(\mu_{p^\infty})^3}$, ${(\mu_{p^\infty})^2\oplus \mathbb{Q}_p/\mathbb{Z}_p}$, ${\mu_{p^\infty}\oplus (\mathbb{Q}_p/\mathbb{Z}_p)^2}$, ${(\mathbb{Q}_p/\mathbb{Z}_p)^3}$, the two mixed-slope combinations ${\mu_{p^\infty}\oplus G_{1/2}}$ and ${\mathbb{Q}_p/\mathbb{Z}_p\oplus G_{1/2}}$ (where ${G_{1/2}}$ is the supersingular ${E[p^\infty]}$ from the height ${2}$ list), and then two others: ${G_{1/3}}$ which corresponds to the thing with a triple eigenvalue of slope ${1/3}$ and ${G_{2/3}}$ which corresponds to the thing with a triple eigenvalue of slope ${2/3}$.
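The ‘game’ of listing Newton polygons is easy to automate. The Python sketch below (my own illustration) enumerates all Newton polygons of a given height with increasing slopes lying in ${[0,1]}$ and integral breakpoints; it reports 4 polygons for height ${2}$ and 8 for height ${3}$, matching the lists above:

from fractions import Fraction

def newton_polygons(height):
    # Each polygon is recorded as a list of (slope, width) segments with
    # strictly increasing slopes in [0, 1]; integer widths and rises keep
    # every breakpoint on the lattice.
    results = []

    def extend(remaining, last_slope, segments):
        if remaining == 0:
            results.append(segments)
            return
        for width in range(1, remaining + 1):
            for rise in range(width + 1):            # slope = rise/width <= 1
                slope = Fraction(rise, width)
                if slope > last_slope:
                    extend(remaining - width, slope, segments + [(slope, width)])

    extend(height, Fraction(-1), [])
    return results

for h in (2, 3):
    polys = newton_polygons(h)
    print(h, len(polys))
    for poly in polys:
        print("   ", [(str(slope), width) for slope, width in poly])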
To finish this post (and hopefully topic!) let’s bring this back to elliptic curves one more time. It turns out that ${D(E[p^\infty])\simeq H^1_{crys}(E/W)}$. Without reminding you of the technical
mumbo-jumbo of crystalline cohomology, let’s think why this might be reasonable. We know ${E[p^\infty]}$ is always height ${2}$, so ${D(E[p^\infty])}$ is rank ${2}$. But if we consider that
crystalline cohomology should be some sort of ${p}$-adic cohomology theory that “remembers topological information” (whatever that means), then we would guess that some topological ${H^1}$ of a
“torus” should be rank ${2}$ as well.
Moreover, the crystalline cohomology comes with a natural Frobenius action. But if we believe there is some sort of Weil conjecture magic that also applies to crystalline cohomology (I mean, it is a
Weil cohomology theory), then we would have to believe that the product of the eigenvalues of this Frobenius equals ${p}$. Recall in the “classical case” that the characteristic polynomial has the
form ${x^2-a_px+p}$. So there are actually only two possibilities in this case, both slope ${1/2}$ or one of slope ${1}$ and the other of slope ${0}$. As we’ve noted, these are the two that occur.
In fact, this is a more general phenomenon. When thinking about ${p}$-divisible groups arising from algebraic varieties, because of these Weil conjecture type considerations, the Newton polygons must
actually fit into much narrower regions and sometimes this totally forces the whole thing. For example, the enlarged formal Brauer group of an ordinary K3 surface has height ${22}$, but the whole
Newton polygon is fully determined by having to fit into a certain region and knowing its connected component.
More Classification of p-Divisible Groups
Today we’ll look a little more closely at ${A[p^\infty]}$ for abelian varieties and finish up a different sort of classification that I’ve found more useful than the one presented earlier as triples
${(M,F,V)}$. For safety we’ll assume ${k}$ is algebraically closed of characteristic ${p>0}$ for the remainder of this post.
First, let’s note that we can explicitly describe all ${p}$-divisible groups over ${k}$ up to isomorphism (of any dimension!) up to height ${2}$ now. This is basically because height puts a pretty
tight constraint on dimension: ${ht(G)=\dim(G)+\dim(G^D)}$. If we want to make this convention, we’ll say ${ht(G)=0}$ if and only if ${G=0}$, but I’m not sure it is useful anywhere.
For ${ht(G)=1}$ we have two cases: If ${\dim(G)=0}$, then its dual must be the unique connected ${p}$-divisible group of height ${1}$, namely ${\mu_{p^\infty}}$ and hence ${G=\mathbb{Q}_p/\mathbb{Z}_p}$. The other case we just said was ${\mu_{p^\infty}}$.
For ${ht(G)=2}$ we finally get something a little more interesting, but not too much more. From the height ${1}$ case we know that we can make three such examples: ${(\mu_{p^\infty})^{\oplus 2}}$, ${\mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$, and ${(\mathbb{Q}_p/\mathbb{Z}_p)^{\oplus 2}}$. These are dimensions ${2}$, ${1}$, and ${0}$ respectively. The first and last are dual to each other
and the middle one is self-dual. Last time we said there was at least one more: ${E[p^\infty]}$ for a supersingular elliptic curve. This was self-dual as well and the unique one-dimensional connected
height ${2}$ ${p}$-divisible group. Now just playing around with the connected-étale decomposition, duals, and numerical constraints we get that this is the full list!
If we could get a bit better feel for the weird supersingular ${E[p^\infty]}$ case, then we would have a really good understanding of all ${p}$-divisible groups up through height ${2}$ (at least over
algebraically closed fields).
There is an invariant called the ${a}$-number for abelian varieties defined by ${a(A)=\dim Hom(\alpha_p, A[p])}$. This essentially counts the number of copies of ${\alpha_p}$ sitting inside the
truncated ${p}$-divisible group. Let’s consider the elliptic curve case again. If ${E/k}$ is ordinary, then we know ${E[p]}$ explicitly and hence can argue that ${a(E)=0}$. For the supersingular case
we have that ${E[p]}$ is actually a non-split semi-direct product of ${\alpha_p}$ by itself and we get that ${a(E)=1}$. This shows that the ${a}$-number is an invariant that is equivalent to knowing whether ${E}$ is ordinary or supersingular.
This is a phenomenon that generalizes. For an abelian variety ${A/k}$ we get that ${A}$ is ordinary if and only if ${a(A)=0}$ in which case the ${p}$-divisible group is a bunch of copies of ${E[p^\infty]}$ for an ordinary elliptic curve, i.e. ${A[p^\infty]\simeq E[p^\infty]^g}$. On the other hand, ${A}$ is supersingular if and only if ${A[p^\infty]\simeq E[p^\infty]^g}$ for ${E/k}$
supersingular (these two facts are pretty easy if you use the ${p}$-rank as the definition of ordinary and supersingular because it tells you the étale part and you mess around with duals and
numerics again).
Now that we’ve beaten that dead horse beyond recognition, I’ll point out one more type of classification which is the one that comes up most often for me. In general, there is not redundant
information in the triple ${(M, F, V)}$, but for special classes of ${p}$-divisible groups (for example the ones I always work with explained here) all you need to remember is the ${(M, F)}$ to
recover ${G}$ up to isomorphism.
A pair ${(M,F)}$ of a free, finite rank ${W}$-module equipped with a ${\phi}$-linear endomorphism ${F}$ is sometimes called a Cartier module or ${F}$-crystal. Every Dieudonné module of a ${p}$
-divisible group is an example of one of these. We could also consider ${H=M\otimes_W K}$ where ${K=Frac(W)}$ to get a finite dimensional vector space in characteristic ${0}$ with a ${\phi}$-linear
endomorphism preserving the ${W}$-lattice ${M\subset H}$.
Passing to this vector space we would expect to lose some information and this is usually called the associated ${F}$-isocrystal. But doing this gives us a beautiful classification theorem which was
originally proved by Diedonné and Manin. We have that ${H}$ is naturally an ${A}$-module where ${A=K[T]}$ is the noncommutative polynomial ring ${T\cdot a=\phi(a)\cdot T}$. The classification is to
break up ${H\simeq \oplus H_\alpha}$ into a slope decomposition.
These ${\alpha}$ are just rational numbers corresponding to the slopes of the ${F}$ operator. The eigenvalues ${\lambda_1, \ldots, \lambda_n}$ of ${F}$ are not necessarily well-defined, but if we
pick the normalized valuation ${ord(p)=1}$, then the valuations of the eigenvalues are well-defined. Knowing the slopes and multiplicities completely determines ${H}$ up to isomorphism, so we can
completely capture the information of ${H}$ in a simple Newton polygon. Note that when ${H}$ is the ${F}$-isocrystal of some Dieudonné module, then the relation ${FV=VF=p}$ forces all slopes to be
between 0 and 1.
Unfortunately, knowing ${H}$ up to isomorphism only determines ${M}$ up to equivalence. This equivalence is easily seen to be the same as an injective map ${M\rightarrow M'}$ whose cokernel is a
torsion ${W}$-module (that way it becomes an isomorphism when tensoring with ${K}$). But then by the anti-equivalence of categories two ${p}$-divisible groups (in the special subcategory that allows
us to drop the ${V}$) ${G}$ and ${G'}$ have equivalent Dieudonné modules if and only if there is a surjective map ${G' \rightarrow G}$ whose kernel is finite, i.e. ${G}$ and ${G'}$ are isogenous as ${p}$-divisible groups.
Despite the annoying subtlety in fully determining ${G}$ up to isomorphism, this is still really good. It says that just knowing the valuation of some eigenvalues of an operator on a finite
dimensional characteristic ${0}$ vector space allows us to recover ${G}$ up to isogeny.
A Quick User’s Guide to Dieudonné Modules of p-Divisible Groups
Last time we saw that if we consider a ${p}$-divisible group ${G}$ over a perfect field of characteristic ${p>0}$, that there wasn’t a whole lot of information that went into determining it up to
isomorphism. Today we’ll make this precise. It turns out that up to isomorphism we can translate ${G}$ into a small amount of (semi-)linear algebra.
I’ve actually discussed this before here. But let’s not get bogged down in the details of the construction. The important thing is to see how to use this information to milk out some interesting
theorems fairly effortlessly. Let’s recall a few things. The category of ${p}$-divisible groups is (anti-)equivalent to the category of Dieudonné modules. We’ll denote this functor ${G\mapsto D(G)}$.
Let ${W:=W(k)}$ be the ring of Witt vectors of ${k}$ and ${\sigma}$ be the natural Frobenius map on ${W}$. There are only a few important things that come out of the construction from which you can
derive tons of facts. First, the data of a Dieudonné module is a free ${W}$-module, ${M}$, of finite rank with a Frobenius ${F: M\rightarrow M}$ which is ${\sigma}$-linear and a Verschiebung ${V: M\rightarrow M}$ which is ${\sigma^{-1}}$-linear satisfying ${FV=VF=p}$.
Fact 1: The rank of ${D(G)}$ is the height of ${G}$.
Fact 2: The dimension of ${G}$ is the dimension of ${D(G)/FD(G)}$ as a ${k}$-vector space (dually, the dimension of ${D(G)/VD(G)}$ is the dimension of ${G^D}$).
Fact 3: ${G}$ is connected if and only if ${F}$ is topologically nilpotent (i.e. ${F^nD(G)\subset pD(G)}$ for ${n>>0}$). Dually, ${G^D}$ is connected if and only if ${V}$ is topologically nilpotent.
Fact 4: ${G}$ is étale if and only if ${F}$ is bijective. Dually, ${G^D}$ is étale if and only if ${V}$ is bijective.
These facts alone allow us to really get our hands dirty with what these things look like and how to get facts back about ${G}$ using linear algebra. Let’s compute the Dieudonné modules of the two
“standard” ${p}$-divisible groups: ${\mu_{p^\infty}}$ and ${\mathbb{Q}_p/\mathbb{Z}_p}$ over ${k=\mathbb{F}_p}$ (recall in this situation that ${W(k)=\mathbb{Z}_p}$).
Before starting, we know that the standard Frobenius ${F(a_0, a_1, \ldots, )=(a_0^p, a_1^p, \ldots)}$ and Verschiebung ${V(a_0, a_1, \ldots, )=(0, a_0, a_1, \ldots )}$ satisfy the relations to make a
Dieudonné module (the relations are a little tricky to check because constant multiples ${c\cdot (a_0, a_1, \ldots )}$ for ${c\in W}$ involve Witt multiplication and should be done using universal Witt polynomials).
In this case ${F}$ is bijective so the corresponding ${G}$ must be étale. Also, ${VW\subset pW}$ so ${V}$ is topologically nilpotent which means ${G^D}$ is connected. Thus we have a height one, étale
${p}$-divisible group with one-dimensional, connected dual which means that ${G=\mathbb{Q}_p/\mathbb{Z}_p}$.
Now we’ll do ${\mu_{p^\infty}}$. Fact 1 tells us that ${D(\mu_{p^\infty})\simeq \mathbb{Z}_p}$ because it has height ${1}$. We also know that ${F: \mathbb{Z}_p\rightarrow \mathbb{Z}_p}$ must have the
property that ${\mathbb{Z}_p/F(\mathbb{Z}_p)=\mathbb{F}_p}$ since ${\mu_{p^\infty}}$ has dimension ${1}$. Thus ${F=p\sigma}$ and hence ${V=\sigma^{-1}}$.
The proof of the anti-equivalence proceeds by working at finite stages and taking limits. So it turns out that the theory encompasses a lot more at the finite stages because ${\alpha_{p^n}}$ are
perfectly legitimate finite, ${p}$-power rank group schemes (note the system does not form a ${p}$-divisible group because multiplication by ${p}$ is the zero morphism). Of course taking the limit ${\alpha_{p^\infty}}$ is also a formal ${p}$-torsion group scheme. If we wanted to we could build the theory of Dieudonné modules to encompass these types of things, but in the limit process we would
have finite ${W}$-modules which are not necessarily free and we would get an extra "Fact 5" that ${D(G)}$ is free if and only if ${G}$ is ${p}$-divisible.
Let’s do two more things which are difficult to see without this machinery. For these two things we’ll assume ${k}$ is algebraically closed. There is a unique connected, ${1}$-dimensional ${p}$
-divisible group of height ${h}$ over ${k}$. I imagine without Dieudonné theory this would be quite difficult, but it just falls right out by playing with these facts.
Since ${D(G)/FD(G)\simeq k}$ we can choose a basis, ${D(G)=We_1\oplus \cdots \oplus We_h}$, so that ${F(e_j)=e_{j+1}}$ and ${F(e_h)=pe_1}$. Up to change of coordinates, this is the only way that
eventually ${F^nD(G)\subset pD(G)}$ (in fact ${F^hD(G)\subset pD(G)}$ is the smallest ${n}$). This also determines ${V}$ (note these two things need to be justified, I’m just asserting it here). But
all the phrase “up to change of coordinates” means is that any other such ${(D(G'),F',V')}$ will be isomorphic to this one and hence by the equivalence of categories ${G\simeq G'}$.
Suppose that ${E/k}$ is an elliptic curve. Now we can determine ${E[p^\infty]}$ up to isomorphism as a ${p}$-divisible group, a task that seemed out of reach last time. We know that ${E[p^\infty]}$
always has height ${2}$ and dimension ${1}$. In previous posts, we saw that for an ordinary ${E}$ we have ${E[p^\infty]^{et}\simeq \mathbb{Q}_p/\mathbb{Z}_p}$ (we calculated the reduced part by using
flat cohomology, but I’ll point out why this step isn’t necessary in a second).
Thus for an ordinary ${E/k}$ we get that ${E[p^\infty]\simeq E[p^\infty]^0\oplus \mathbb{Q}_p/\mathbb{Z}_p}$ by the connected-étale decomposition. But height and dimension considerations tell us that
${E[p^\infty]^0}$ must be the unique height ${1}$, connected, ${1}$-dimensional ${p}$-divisible group, i.e. ${\mu_{p^\infty}}$. But of course we’ve been saying this all along: ${E[p^\infty]\simeq \mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$.
If ${E/k}$ is supersingular, then we’ve also calculated previously that ${E[p^\infty]^{et}=0}$. Thus by the connected-étale decomposition we get that ${E[p^\infty]\simeq E[p^\infty]^0}$ and hence
must be the unique, connected, ${1}$-dimensional ${p}$-divisible group of height ${2}$. For reference, since ${ht(G)=\dim(G)+\dim(G^D)}$ we see that ${G^D}$ is also of dimension ${1}$ and height ${2}$. If it had an étale part, then it would have to be ${\mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$ again, so ${G^D}$ must be connected as well and hence is the unique such group, i.e. ${G\simeq G^D}$. It is connected with connected dual. This gives us our first non-obvious ${p}$-divisible group since it is not just some split extension of ${\mu_{p^\infty}}$‘s and ${\mathbb{Q}_p/\mathbb{Z}_p}$‘s.
If we hadn’t done these previous calculations, then we could still have gotten these results by a slightly more general argument. Given an abelian variety ${A/k}$ we have that ${A[p^\infty]}$ is a ${p}$-divisible group of height ${2g}$ where ${g=\dim A}$. Using Dieudonné theory we can abstractly argue that ${A[p^\infty]^{et}}$ must have height less than or equal to ${g}$. So in the case of an
elliptic curve it is ${1}$ or ${0}$ corresponding to the ordinary or supersingular case respectively, and the proof would be completed because ${\mathbb{Q}_p/\mathbb{Z}_p}$ is the unique étale,
height ${1}$, ${p}$-divisible group.
p-Divisible Groups Revisited 1
I’ve posted about ${p}$-divisible groups all over the place over the past few years (see: here, here, and here). I’ll just do a quick recap here on the “classical setting” to remind you of what we
know so far. This will kick-start a series on some more subtle aspects I’d like to discuss which are kind of scary at first.
Suppose ${G}$ is a ${p}$-divisible group over ${k}$, a perfect field of characteristic ${p>0}$. We can be extremely explicit in classifying all such objects. Recall that ${G}$ is just an injective
limit of group schemes ${G=\varinjlim G_u}$ where we have an exact sequence ${0\rightarrow G_u \rightarrow G_{u+1}\stackrel{p^u}{\rightarrow} G_{u+1}}$ and there is a fixed integer ${h}$ such that
the group schemes ${G_{u}}$ are finite of rank ${p^{u h}}$.
As a corollary to the standard connected-étale sequence for group schemes we get a canonical decomposition called the connected-étale sequence:
$\displaystyle 0\rightarrow G^0 \rightarrow G \rightarrow G^{et} \rightarrow 0$
where ${G^0}$ is connected and ${G^{et}}$ is étale. Since ${k}$ was assumed to be perfect, this sequence actually splits. Thus ${G}$ is a semi-direct product of an étale ${p}$-divisible group and a
connected ${p}$-divisible group. If you’ve seen the theory for finite, flat group schemes, then you’ll know that we usually decompose these two categories even further so that we get a piece that is
connected with connected dual, connected with étale dual, étale with connected dual, and étale with étale dual.
The standard examples to keep in mind for these four categories are ${\alpha_p}$, ${\mu_p}$, ${\mathbb{Z}/p}$, and ${\mathbb{Z}/\ell}$ for ${\ell\neq p}$ respectively. When we restrict ourselves to ${p}$-divisible groups the last category can’t appear in the decomposition of ${G_u}$ (since étale things are dimension 0, if something and its dual are both étale, then it would have to have height
0). I think it is not a priori clear, but the four category decomposition is a direct sum decomposition, and hence in this case we get that ${G\simeq G^0\oplus G^{et}}$ giving us a really clear idea
of what these things look like.
As usual we can describe étale group schemes in a nice way because they are just constant after base change. Thus the functor ${G^{et}\mapsto G^{et}(\overline{k})}$ is an equivalence of categories
between étale ${p}$-divisible groups and the category of inverse systems of ${Gal(\overline{k}/k)}$-sets of order ${p^{u h}}$. Thus, after sufficient base change, we get an abstract isomorphism with
the constant group scheme ${\prod \mathbb{Q}_p/\mathbb{Z}_p}$ for some product (for the ${p}$-divisible group case it will be a finite direct sum).
All we have left now is to describe the possibilities for ${G^0}$, but this is a classical result as well. There is an equivalence of categories between the category of divisible, commutative, formal
Lie groups and connected ${p}$-divisible groups given simply by taking the colimit of the ${p^n}$-torsion ${A\mapsto \varinjlim A[p^n]}$. The canonical example to keep in mind is ${\varinjlim \mathbb
{G}_m[p^n]=\mu_{p^\infty}}$. This is connected only because in characteristic ${p}$ we have ${(x^p-1)=(x-1)^p}$, so ${\mu_{p^n}=Spec(k[x]/(x-1)^{p^n})}$. In any other characteristic this group scheme
would be étale and totally disconnected.
This brings us to the first subtlety which can cause a lot of confusion because of the abuse of notation. A few times ago we talked about the fact that ${E[p]}$ for an elliptic curve was either ${\mathbb{Z}/p}$ or ${0}$ depending on whether or not it was ordinary or supersingular (respectively). It is dangerous to write this, because here we mean ${E}$ as a group (really ${E(\overline{k})}$)
and ${E[p]}$ the ${p}$-torsion in this group.
When talking about the ${p}$-divisible group ${E[p^\infty]=\varinjlim E[p^n]}$ we are referring to ${E/k}$ as a group scheme and ${E[p^n]}$ as the (always!) non-trivial, finite, flat group scheme
which is the kernel of the isogeny ${p^n: E\rightarrow E}$. The first way kills off the infinitesimal part so that we are just left with some nice reduced thing, and that’s why we can get ${0}$,
because for a supersingular elliptic curve the group scheme ${E[p^n]}$ is purely infinitesimal, i.e. has trivial étale part.
Recall also that we pointed out that ${E[p]\simeq \mathbb{Z}/p}$ for an ordinary elliptic curve by using some flat cohomology trick. But this trick is only telling us that the reduced group is cyclic
of order ${p}$, but it does not tell us the scheme structure. In fact, in this case ${E[p^n]\simeq \mu_{p^n}\oplus \mathbb{Z}/p^n}$ giving us ${E[p^\infty]\simeq \mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb{Z}_p}$. So this is a word of warning that when working these things out you need to be very careful that you understand whether or not you are figuring out the full group scheme structure or
just reduced part. It can be hard to tell sometimes.
Frobenius Semi-linear Algebra 2
Recall our setup. We have an algebraically closed field ${k}$ of characteristic ${p>0}$. We let ${V}$ be a finite dimensional ${k}$-vector space and ${\phi: V\rightarrow V}$ a ${p}$-linear map. Last
time we left unfinished the Jordan decomposition that says that ${V=V_s\oplus V_n}$ where the two components are stable under ${\phi}$ and ${\phi}$ acts bijectively on ${V_s}$ and nilpotently on ${V_n}$.
We then considered a strange consequence of what happens on the part on which it acts bijectively. If ${\phi}$ is bijective, then there always exists a full basis ${v_1, \ldots, v_n}$ that are fixed
by ${\phi}$, i.e. ${\phi(v_i)=v_i}$. This is strange indeed, because in linear algebra this would force our operator to be the identity.
There is one more slightly more disturbing consequence of this. If ${\phi}$ is bijective, then ${\phi-Id}$ is always surjective. This is a trivial consequence of having a fixed basis. Let ${w\in V}$.
We want to find some ${z}$ such that ${\phi(z)-z=w}$. Well, we just construct the coefficients in the fixed basis by hand. We know ${w=\sum c_i v_i}$ for some ${c_i\in k}$. If ${z=\sum a_i v_i}$ really
satisfies ${\phi(z)-z=w}$, then by comparing coefficients such an element exists if and only if we can solve ${a_i^p-a_i=c_i}$. These are just polynomial equations, so we can solve this over our
algebraically closed field to get our coefficients.
Strangely enough we really require algebraically closed and not merely perfect again, but the papers I’ve been reading explicitly require these facts over finite fields. Since they don’t give any
references at all and just call these things “standard facts about ${p}$-linear algebra,” I’m not sure if there is a less stupid way to prove these things which work for arbitrary perfect fields.
This is why you should give citations for things you don’t prove!!
Why do I call this disturbing? Well, these maps really do appear when doing long exact sequences in cohomology. Last time we saw that we could prove that ${E[p]\simeq \mathbb{Z}/p}$ for an ordinary
elliptic curve from computing the kernel of ${C-Id}$ where ${C}$ was the Cartier operator. But we have to be really, really careful to avoid linear algebra tricks when these maps come up, because in
this situation we have ${\phi -Id}$ is always a surjective map between finite dimensional vector spaces of the same dimension, but also always has a non-trivial kernel isomorphic to ${\mathbb{Z}/p\oplus \cdots \oplus \mathbb{Z}/p}$ where the number of factors is the dimension of ${V}$. Even though we have a surjective map in the long exact sequence between vector spaces of the same dimension,
we cannot conclude that it is bijective!
Since everything we keep considering as real-life examples of semi-linear algebra has automatically been bijective (i.e. no nilpotent part), I haven’t actually been too concerned with the Jordan
decomposition. But we may as well discuss it to round out the theory since people who work with ${p}$-Lie algebras care … I think?
The idea of the proof is simple and related to what we did last time. We look at iterates ${\phi^j}$ of our map. We get a descending chain ${\phi^j(V)\supset \phi^{j+1}(V)}$ and hence it stabilizes
somewhere, since even though ${\phi}$ is not a linear map, the image is still a vector subspace of ${V}$. Let ${r}$ be the smallest integer such that ${\phi^r(V)=\phi^{r+1}(V)}$. This means that ${r}$ is also the smallest integer such that ${\ker\phi^r=\ker \phi^{r+1}}$.
Now we just take as our definition ${V_s=\phi^r(V)}$ and ${V_n=\ker \phi^r}$. Now by definition we get everything we want. It is just the kernel/image decomposition and hence a direct sum. By the
choice of ${r}$ we certainly get that ${\phi}$ maps ${V_s}$ to ${V_s}$ and ${V_n}$ to ${V_n}$. Also, ${\phi|_{V_s}}$ is bijective by construction. Lastly, if ${v\in V_n}$, then ${\phi^j(v)=0}$ for
some ${0\leq j\leq r}$ and hence ${\phi}$ is nilpotent on ${V_n}$. This is what we wanted to show.
Here’s how this comes up for ${p}$-Lie algebras. Suppose you have some Lie group ${G/k}$ with Lie algebra ${\mathfrak{g}}$. You have the standard ${p}$-power map which is ${p}$-linear on ${\mathfrak{g}}$. By the structure theorem above ${\mathfrak{g}\simeq \mathfrak{h}\oplus \mathfrak{f}}$. The Lie subalgebra ${\mathfrak{h}}$ is the part the ${p}$-power map acts bijectively on and is called the
core of the Lie algebra.
Let ${X_1, \ldots, X_d}$ be a fixed basis of the core. We get a nice combinatorial classification of the Lie subalgebras of ${\mathfrak{h}}$. Let ${V=Span_{\mathbb{F}_p}\langle X_1, \ldots, X_d\rangle}$. The Lie subalgebras of ${\mathfrak{h}}$ are in bijective correspondence with the vector subspaces of ${V}$. In particular, the number of Lie subalgebras is finite and each occurs as a
direct summand. The proof of this fact is to just repeat the argument of the Jordan decomposition for a Lie subalgebra and look at coefficients of the fixed basis.
Frobenius Semi-linear Algebra: 1
Today I want to explain some “well-known” facts in semilinear algebra. Here’s the setup. For safety we’ll assume ${k}$ is algebraically closed of characteristic ${p>0}$ (but merely being perfect
should suffice for the main point later). Let ${V}$ be a finite dimensional vector space over ${k}$. Consider some ${p}$-semilinear operator on ${V}$ say ${\phi: V\rightarrow V}$. The fact that we
are working with ${p}$ instead of ${p^{-1}}$ is mostly to not scare people. I think ${p^{-1}}$ actually appears more often in the literature and the theory is equivalent by “dualizing.”
All this means is that it is a linear operator satisfying the usual properties ${\phi(v+w)=\phi(v)+\phi(w)}$, etc, except for the scalar rule in which we scale by a factor of ${p}$, so ${\phi(av)=a^p \phi(v)}$. This situation comes up surprisingly often in positive characteristic geometry, because often you want to analyze some long exact sequence in cohomology associated to a short exact
sequence which involves the Frobenius map or the Cartier operator. The former will induce a ${p}$-linear map of vector spaces and the latter induces a ${p^{-1}}$-linear map.
The facts we’re going to look at I’ve found in three or so papers just saying “from a well-known fact about ${p^{-1}}$-linear operators…” I wish there was a book out there that developed this theory
like a standard linear algebra text so that people could actually give references. The proof today is a modification of that given in Dieudonne’s Lie Groups and Lie Hyperalgebras over a Field of
Characteristic ${p>0}$ II (section 10).
Let’s start with an example. In the one-dimensional case we have the following ${\phi: k\rightarrow k}$. If the map is non-trivial, then it is bijective. More importantly we can just write down every
one of these because if ${\phi(1)=a}$, then
$\displaystyle \begin{array}{rcl} \phi(x) & = & \phi(x\cdot 1) \\ & = & x^p\phi(1) \\ & = & ax^p \end{array}$
In fact, we can always find some non-zero fixed element, because this amounts to solving ${ax^p-x=x(ax^{p-1}-1)=0}$, i.e. finding a solution to ${ax^{p-1}-1=0}$ which we can do by being algebraically
closed. This element ${b}$ obviously serves as a basis for ${k}$, but to set up an analogy we also see that ${Span_{\mathbb{F}_p}(b)}$ are all of the fixed points of ${\phi}$. In general ${V}$ will
breakup into parts. The part that ${\phi}$ acts bijectively on will always have a basis of fixed elements whose ${\mathbb{F}_p}$-span consists of exactly the fixed points of ${\phi}$. Of course, this
could never happen in linear algebra because finding a fixed basis implies the operator is the identity.
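As a toy numerical check of this one-dimensional picture (my own illustration; I replace the algebraically closed field by ${\mathbb{F}_9}$ and choose ${a=t}$ specifically so that a nonzero fixed point already lives in ${\mathbb{F}_9}$ rather than a bigger extension):

# Model F_9 = F_3[t]/(t^2 + 1) by pairs (c0, c1) meaning c0 + c1*t, with t^2 = -1.
p = 3

def mul(x, y):
    return ((x[0] * y[0] - x[1] * y[1]) % p, (x[0] * y[1] + x[1] * y[0]) % p)

def power(x, n):
    result = (1, 0)
    for _ in range(n):
        result = mul(result, x)
    return result

field = [(c0, c1) for c0 in range(p) for c1 in range(p)]
a = (0, 1)                                  # the element t

def phi(x):
    return mul(a, power(x, p))              # the p-linear map phi(x) = a * x^p

fixed = [x for x in field if phi(x) == x]
b = next(x for x in fixed if x != (0, 0))   # a nonzero fixed element
span = [mul((c, 0), b) for c in range(p)]   # its F_3-span

print(fixed)                                 # three elements: 0, b, 2b
print(sorted(fixed) == sorted(span))         # True: the fixed points are Span_{F_3}(b)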
Let’s start by proving this statement. Suppose ${\phi: V\rightarrow V}$ is a ${p}$-semilinear automorphism. We want to find a basis of fixed elements. We essentially mimic what we did before in a
more complicated way. We induct on the dimension of ${V}$. If we can find a single ${v_1}$ fixed by ${\phi}$, then we would be done for the following reason. We kill off the span of ${v_1}$, then by
the inductive hypothesis we can find ${v_2, \ldots, v_n}$ a fixed basis for the quotient. Together these make a fixed basis for all of ${V}$.
Now we need to find a single fixed ${v_1}$ by brute force. Consider any non-zero ${w\in V}$. We start taking iterates of ${w}$ under ${\phi}$. Eventually they will become linearly dependent, so we
consider ${w, \phi(w), \ldots, \phi^k(w)}$ for the minimal ${k}$ such that this is a linearly dependent set. This means we can find some coefficients that are not all ${0}$ for which ${\sum a_j \phi^j(w)=0}$.
Let's just see what must be true of some fictional ${v_1}$ in the span of these elements such that ${\phi(v_1)=v_1}$. Well, ${v_1=\sum b_j \phi^j(w)}$ must satisfy ${v_1=\phi(v_1)=\sum b_j^p \phi^{j+1}(w)}$.
To make this easier to parse, let's specialize to the case of a three-term relation, so that the dependence already occurs among ${w, \phi(w), \phi^2(w)}$. This means that ${a_0 w+a_1\phi(w)+a_2\phi^2(w)=0}$ and by assumption the coefficient on the top power can't be zero, so we
rewrite the top power ${\phi^2(w)=-(a_0/a_2)w - (a_1/a_2)\phi(w)}$.
The other equation is
$\displaystyle \begin{array}{rcl} b_0w+b_1\phi(w) & = & b_0^p\phi(w)+b_1^p\phi^2(w)\\ & = & -(a_0/a_2)b_1^pw +(b_0^p-(a_1/a_2)b_1^p)\phi(w) \end{array}$
Comparing coefficients ${b_0=-(a_0/a_2)b_1^p}$ and then forward substituting ${b_1=-(a_0/a_2)^pb_1^{p^2}-(a_1/a_2)b_1^p}$. Ah, but we know the ${a_j}$ and this only involves the unknown ${b_1}$. So
since ${k}$ is algebraically closed we can solve to find such a ${b_1}$. Then since we wrote all our other coefficients in terms of ${b_1}$ we actually can produce a fixed ${v_1}$ by brute force
determining the coefficients of the vector in terms of our linear dependence coefficients.
There was nothing special about the three-term case here. In general, this trick works because applying ${\phi}$ cycles the vectors forward by one, which allows us to keep forward substituting the equations obtained by comparing coefficients until everything is expressed in terms of the highest coefficient, including the highest one itself. This transforms the problem into solving a single
polynomial equation over our algebraically closed field.
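To spell out the general pattern in formulas (this is only a restatement of the argument above, in the same notation; ${m}$ is my shorthand for the top index of the dependence relation, so the worked case had ${m=2}$): writing ${\phi^m(w)=-\sum_{j<m}(a_j/a_m)\phi^j(w)}$ and ${v_1=\sum_{j<m} b_j\phi^j(w)}$, comparing coefficients of ${\phi^j(w)}$ in ${\phi(v_1)=v_1}$ gives
$\displaystyle \begin{array}{rcl} b_0 & = & -(a_0/a_m)\, b_{m-1}^p \\ b_j & = & b_{j-1}^p-(a_j/a_m)\, b_{m-1}^p, \qquad 1\le j\le m-1. \end{array}$
Forward substituting the first equation into the second, and so on, turns the last equation into a single polynomial equation in ${b_{m-1}}$ alone, which has a solution since ${k}$ is algebraically closed.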
This completes the proof that if ${\phi}$ is bijective, then there is a basis of fixed vectors. The fact that ${V^\phi=Span_{\mathbb{F}_p}(v_1, \ldots, v_n)}$ is pretty easy after that. Of course,
the ${\mathbb{F}_p}$-span is contained in the fixed points because by definition the prime subfield of ${k}$ is exactly the fixed elements of ${x\mapsto x^p}$. On the other hand, if ${c=\sum a_jv_j}$
is fixed, then ${c=\phi(c)=\sum a_j^p \phi(v_j)=\sum a_j^p v_j}$ shows that all the coefficients must be fixed by Frobenius and hence in ${\mathbb{F}_p}$.
Here’s how this is useful. Recall the post on the fppf site. We said that if we wanted to understand the ${p}$-torsion of certain cohomology with coefficients in ${\mathbb{G}_m}$ (Picard group,
Brauer group, etc), then we should look at the flat cohomology with coefficients in ${\mu_p}$. If we specialize to the case of curves we get an isomorphism ${H^1_{fl}(X, \mu_p)\simeq Pic(X)[p]}$.
Recall the exact sequence at the end of that post. It told us that via the ${d\log}$ map ${H^1_{fl}(X, \mu_p)=ker(C-I)=H^0(X, \Omega^1)^C}$. Now we have a ridiculously complicated way to prove the
following well-known fact. If ${E}$ is an ordinary elliptic curve over an algebraically closed field of characteristic ${p>0}$, then ${E[p]\simeq \mathbb{Z}/p}$. In fact, we can prove something
slightly more general.
By definition, a curve is of genus ${g}$ if ${H^0(X, \Omega^1)}$ is ${g}$-dimensional. We’ll say ${X}$ is ordinary if the Cartier operator ${C}$ is a ${p^{-1}}$-linear automorphism (I’m already
sweeping something under the rug, because to even think of the Cartier operator acting on this cohomology group we need a hypothesis like ordinary to naturally identify some cohomology groups).
By the results in this post we know that the structure of ${H^0(X, \Omega^1)^C}$ as an abelian group is ${\mathbb{Z}/p\oplus \cdots \oplus \mathbb{Z}/p}$ where there are ${g}$ copies. Thus in more
generality this tells us that ${Jac(X)[p]\simeq Pic(X)[p]\simeq H^0(X, \Omega^1)^C\simeq \mathbb{Z}/p\oplus \cdots \oplus \mathbb{Z}/p}$. In particular, since for an elliptic curve (genus 1) we have
${Jac(E)=E}$, this statement is exactly ${E[p]\simeq \mathbb{Z}/p}$.
This point is a little silly, because Silverman seems to just use this as the definition of an ordinary elliptic curve. Hartshorne uses the Hasse invariant in which case it is quite easy to derive
that the Cartier operator is an automorphism (proof: it is Serre dual to the Frobenius which by the Hasse invariant definition is an automorphism). Using this definition, I’m actually not sure I’ve
ever seen a derivation that ${E[p]\simeq \mathbb{Z}/p}$. I’d be interested if there is a lower level way of seeing it than going through this flat cohomology argument (Silverman cites a paper of
Duering, but it’s in German).
Serre-Tate Theory 2
I guess this will be the last post on this topic. I’ll explain a tiny bit about what goes into the proof of this theorem and then why anyone would care that such canonical lifts exist. On the first
point, there are tons of details that go into the proof. For example, Nick Katz’s article, Serre-Tate Local Moduli, is 65 pages. It is quite good if you want to learn more about this. Also, Messing’s
book The Crystals Associated to Barsotti-Tate Groups is essentially building the machinery for this proof which is then knocked off in an appendix. So this isn’t quick or easy by any means.
On the other hand, I think the idea of the proof is fairly straightforward. Let’s briefly recall last time. The situation is that we have an ordinary elliptic curve ${E_0/k}$ over an algebraically
closed field of characteristic ${p>2}$. We want to understand ${Def_{E_0}}$, but in particular whether or not there is some distinguished lift to characteristic ${0}$ (this will be an element of ${Def_{E_0}}$ evaluated on a ring of characteristic ${0}$ such as the Witt vectors).
To make the problem more manageable we consider the ${p}$-divisible group ${E_0[p^\infty]}$ attached to ${E_0}$. In the ordinary case this is the enlarged formal Picard group. It is of height ${2}$
whose connected component is ${\widehat{Pic}_{E_0}\simeq\mu_{p^\infty}}$. There is a natural map ${Def_{E_0}\rightarrow Def_{E_0[p^\infty]}}$ just by mapping ${E/R \mapsto E[p^\infty]}$. Last time we
said the main theorem was that this map is an isomorphism. To tie this back to the flat topology stuff, ${E_0[p^\infty]}$ is the group representing the functor ${A\mapsto H^1_{fl}(E_0\otimes A, \mu_{p^\infty})}$.
The first step in proving the main theorem is to note two things. In the (split) connected-etale sequence
$\displaystyle 0\rightarrow \mu_{p^\infty}\rightarrow E_0[p^\infty]\rightarrow \mathbb{Q}_p/\mathbb{Z}_p\rightarrow 0$
we have that ${\mu_{p^\infty}}$ is height one and hence rigid. We have that ${\mathbb{Q}_p/\mathbb{Z}_p}$ is etale and hence rigid. Thus given any deformation ${G/R}$ of ${E_0[p^\infty]}$ we can take
the connected-etale sequence of this and see that ${G^0}$ is the unique deformation of ${\mu_{p^\infty}}$ over ${R}$ and ${G^{et}=\mathbb{Q}_p/\mathbb{Z}_p}$. Thus the deformation functor can be
redescribed in terms of extension classes of two rigid groups ${R\mapsto Ext_R^1(\mathbb{Q}_p/\mathbb{Z}_p, \mu_{p^\infty})}$.
Now we see what the canonical lift is. Supposing our isomorphism of deformation functors, it is the lift that corresponds to the split and hence trivial extension class. So how do we actually check
that this is an isomorphism? Like I said, it is kind of long and tedious. Roughly speaking you note that both deformation functors are prorepresentable by formally smooth objects of the same
dimension. So we need to check that the differential is an isomorphism on tangent spaces.
Here’s where some cleverness happens. You rewrite the differential as a composition of a whole bunch of maps that you know are isomorphisms. In particular, it is the following string of maps: The
Kodaira-Spencer map ${T\stackrel{\sim}{\rightarrow} H^1(E_0, \mathcal{T})}$ followed by Serre duality (recall the canonical is trivial on an elliptic curve) ${H^1(E_0, \mathcal{T})\stackrel{\sim}{\
rightarrow} Hom_k(H^1(E_0, \Omega^1), H^1(E_0, \mathcal{O}_{E_0}))}$. The hardest one was briefly mentioned a few posts ago and is the dlog map which gives an isomorphism ${H^2_{fl}(E_0, \mu_{p^\
infty})\stackrel{\sim}{\rightarrow} H^1(E_0, \Omega^1)}$.
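Schematically, with every arrow being one of the isomorphisms just named (this is only the same chain written in one line):
$\displaystyle T\stackrel{\sim}{\rightarrow} H^1(E_0, \mathcal{T})\stackrel{\sim}{\rightarrow} Hom_k(H^1(E_0, \Omega^1), H^1(E_0, \mathcal{O}_{E_0})), \qquad H^2_{fl}(E_0, \mu_{p^\infty})\stackrel{\sim}{\rightarrow} H^1(E_0, \Omega^1).$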
Now noting that ${H^2_{fl}(E_0, \mu_{p^\infty})=\mathbb{Q}_p/\mathbb{Z}_p}$ and that ${T_0\mu_{p^\infty}\simeq H^1(E_0, \mathcal{O}_{E_0})}$ gives us enough compositions and isomorphisms that we get
from the tangent space of the versal deformation of ${E_0}$ to the tangent space of the versal deformation of ${E_0[p^\infty]}$. As you might guess, it is a pain to actually check that this is the
differential of the natural map (and in fact involves further decomposing those maps into yet other ones). It turns out to be the case and hence ${Def_{E_0}\rightarrow Def_{E_0[p^\infty]}}$ is an
isomorphism and the canonical lift corresponds to the trivial extension.
But why should we care? It turns out the geometry of the canonical lift is very special. This may not be that impressive for elliptic curves, but this theory all goes through for any ordinary abelian
variety or K3 surface where it is much more interesting. It turns out that you can choose a nice set of coordinates (“canonical coordinates”) on the base of the versal deformation and a basis of the
de Rham cohomology of the family that is adapted to the Hodge filtration such that in these coordinates the Gauss-Manin connection has an explicit and nice form.
Also, the canonical lift admits a lift of the Frobenius which is also nice and compatible with how it acts on the above chosen basis on the de Rham cohomology. These coordinates are what give the
base of the versal deformation the structure of a formal torus (product of ${\widehat{\mathbb{G}_m}}$‘s). One can then exploit all this nice structure to prove large open problems like the Tate
conjecture in the special cases of the class of varieties that have these canonical lifts.
by hilbertthm90
Serre-Tate Theory 1
Today we’ll try to answer the question: What is Serre-Tate theory? It’s been a few years, but if you’re not comfortable with formal groups and ${p}$-divisible groups, I did a series of something like
10 posts on this topic back here: formal groups, p-divisible groups, and deforming p-divisible groups.
The idea is the following. Suppose you have an elliptic curve ${E/k}$ where ${k}$ is a perfect field of characteristic ${p>2}$. In most first courses on elliptic curves you learn how to attach a
formal group to ${E}$ (chapter IV of Silverman). It is suggestively notated ${\widehat{E}}$, because if you unwind what is going on you are just completing the elliptic curve (as a group scheme) at
the identity.
Since an elliptic curve is isomorphic to its Jacobian ${Pic_E^0}$ there is a conflation that happens. In general, if you have a variety ${X/k}$ you can make the same formal group by completing this
group scheme and it is called the formal Picard group of ${X}$. Although, in general you’ll want to do this with the Brauer group or higher analogues to guarantee existence and smoothness. Then you
prove a remarkable fact that the elliptic curve is ordinary if and only if the formal group has height ${1}$. In particular, since the ${p}$-divisible group is connected and ${1}$-dimensional it must
be isomorphic to ${\mu_{p^\infty}}$.
It might seem silly to think in these terms, but there is another “enlarged” ${p}$-divisible group attached to ${E}$ which always has height ${2}$. This is the ${p}$-divisible group you get by taking
the inductive limit of the finite group schemes that are the kernel of multiplication by ${p^n}$. It is important to note that these are non-trivial group schemes even if they are “geometrically
trivial” (and is the reason I didn’t just call it the “${p^n}$-torsion”). We’ll denote this in the usual way by ${E[p^\infty]}$.
I don’t really know anyone that studies elliptic curves that phrases it this way, but since this theory must be generalized in a certain way to work for other varieties like K3 surfaces I’ll point
out why this should be thought of as an enlarged ${p}$-divisible group. It is another standard fact that ${E}$ is ordinary if and only if ${E[p^\infty]\simeq \mu_{p^\infty}\oplus \mathbb{Q}_p/\mathbb
{Z}_p}$. In fact, you can just read off the connected-etale decomposition:
$\displaystyle 0\rightarrow \mu_{p^\infty}\rightarrow E[p^\infty] \rightarrow \mathbb{Q}_p/\mathbb{Z}_p\rightarrow 0$
We already noted that ${\widehat{E}\simeq \mu_{p^\infty}}$, so the ${p}$-divisible group ${E[p^\infty]}$ is a ${1}$-dimensional, height ${2}$ formal group whose connected component is the first one
we talked about, i.e. ${E[p^\infty]}$ is an enlargement of ${\widehat{E}}$. For a general variety, this enlarged formal group can be defined, but it is a highly technical construction and would take
a lot of work to check that it even exists and satisfies this property. Anyway, this enlarged group is the one we need to work with otherwise our deformation space will be too small to make the
theory work.
Here’s what Serre-Tate theory is all about. If you take a deformation of your elliptic curve ${E}$ say to ${E'}$, then it turns out that ${E'[p^\infty]}$ is a deformation of the ${p}$-divisible group
${E[p^\infty]}$. Thus we have a natural map ${\gamma: Def_E \rightarrow Def_{E[p^\infty]}}$. The point of the theory is that it turns out that this map is an isomorphism (I’m still assuming ${E}$ is
ordinary here). This is great news, because the deformation theory of ${p}$-divisible groups is well-understood. We know that the versal deformation of ${E[p^\infty]}$ is just ${Spf(W[[t]])}$. The
deformation problem is unobstructed and everything lives in a ${1}$-dimensional family.
Of course, let’s not be silly. I’m pointing all this out because of the way in which it generalizes. We already knew this was true for elliptic curves because for any smooth, projective curve the
deformations are unobstructed since the obstruction lives in ${H^2}$. Moreover, the dimension of the space of deformations is given by the dimension of ${H^1(E, \mathcal{T})}$. But for an elliptic
curve ${\mathcal{T}\simeq \mathcal{O}_X}$, so by Serre duality this is one-dimensional.
On the other hand, we do get some actual information from the Serre-Tate theory isomorphism because ${Def_{E[p^\infty]}}$ carries a natural group structure. Thus an ordinary elliptic curve has a
“canonical lift” to characteristic ${0}$ which comes from the deformation corresponding to the identity.
|
{"url":"http://hilbertthm90.wordpress.com/category/math/algebra-math/","timestamp":"2014-04-20T00:58:52Z","content_type":null,"content_length":"249715","record_id":"<urn:uuid:319cde73-0467-4422-b764-db16792e8393>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: second-order logic is a myth
Charles Silver csilver at sophia.smith.edu
Mon Mar 1 06:55:20 EST 1999
On Sun, 28 Feb 1999, Stephen G Simpson wrote:
> Today I spent a little more time with Shapiro's book `Foundations
> Without Foundationalism: A Case for Second-order Logic'.
> Although I still think this book is completely misguided, I have to
> give Shapiro credit for presenting a thoughtful account. Contrary to
> my earlier impression, I'll now say that this book is *much* better
> than Hersh's. I now regard Shapiro's `anti-foundationalism' as little
> more than an attempt to be trendy. I don't assign any serious weight
> to it.
I have to say that I personally liked Shapiro's book, but I wasn't
sure what to make of many of his arguments. There's an interesting review
of it by Craig Smorynski in "Modern Logic," vol. 4, no 3, pp. 341-344.
Smorynski makes the following distinctions (p. 341): "Mathematics (M),
Foundations of Mathematics (FM), Mathematical Logic (ML), Philosophical
Logic (PL), Philosophy of Mathematics as practiced by Philosophers (PMP),
Philosophy of Logic (PoL), Foundations of Logic (FL), and Epistemology
(E)." Smorynski goes on to say:
"While there are relations among these subjects (e.g., PMP, E),
no two of them coincide (except possibly PoL and FL), and what
may be inadequate for one may be perfectly adequate for another.
What I never understood in the book is which one of these
perspectives is operational. Is first-order logic inadequate
for M, FM, ML, PL, PMP,... or what?"
What do people think of Smorynski's criticism of Shapiro's book?
I imagine that there must be many people on this list who think highly of
the book. I'd be interested in seeing more discussion of the value of
this work.
Charlie Silver
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/1999-March/002674.html","timestamp":"2014-04-19T19:38:13Z","content_type":null,"content_length":"3957","record_id":"<urn:uuid:67526065-066a-4428-902e-8280017850fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Teachers’ Corner
By Laura Pettit
Currently I teach fifth grade math at Universal Institute Charter School. Last year was my first time using the Drexel Math Forum PoW. The grade that I had was very low academically. I originally
planned to do the PoW monthly. The first one I tried was a disaster. I picked a problem that I thought the kids would get but ended up being way too hard! I had to guide them through the notice and
wonders and even then the students were having trouble even picking up the key math points.
After this lesson I was, of course, very discouraged. I sat down with the Drexel team who gave me some great pointers and even offered to model a problem in my room. I agreed to let them come in.
When I saw the problem they picked, I thought it was too hard for students because they couldn’t grasp the concept of equivalent fractions. However, after watching them teach, I realized that it
wasn’t hard at all. The students were finally able to understand the concept of equivalent fractions. However their explanations of how they got their answer still required some further attention.
Seeing the success that the Drexel group had with my students, I decided to try again. This time I knew I needed to guide them more through the notice and wonders, even forgoing the problem, giving
them one sentence at a time so they could concentrate on that. After a lot of guiding, they got there! I even helped them with the explanation of their answer so they knew what I expected.
I decided to try a PoW more often in my room from then on. I continued to guide them through it but added more each time. I gave the students two sentences at a time and so on to build their stamina
and get them to focus on what is important in each problem. I helped them less and less on the explanation and how to solve the problem. I know some of them didn’t go as well as others, but looking
at the year as a whole, the program was very successful in my room.
By March, the students were able to figure out the question to the problem without it being present. When they were noticing and wondering, they began to notice the answer which blew me away! By
April, their explanations were close to perfect, giving great detail on how they solved the problem and even using correct math vocabulary. And by May, the students were able to consistently
focus on the math part of the problem. All of their notices and wonders had to do with math 99 percent of the time. What took close to 90 minutes to complete at the beginning of the school year now
took 45 minutes. I was able to add harder bonus questions for homework to get them to think and expand the concepts I was teaching in class. I used this after each unit was taught as a wrap up of
what we learned. Depending on the schedule for the week sometimes we would do two and the students went from hating them to loving them!
This program took a group of struggling math achievers last year and helped them tremendously. They still have yet to be where they need to be but they are getting closer. I believe this program
helped them make gains much quicker over conventional teaching methods. At the beginning of the year, the students were 18 percent proficient or advanced on the open ended responses on the math
4Sight. By May, the students rose to 88 percent. I believe that the consistency of using the PoW is the main reason for the rise.
After talking at the end of the year with the English teacher, we both feel that this program can help her as well. Using the explanations or open ended responses that the students need to do after
they solve the problem, I can focus on some of the skills she is teaching. I am excited to try the next step this year with the PoW and will definitely be using this all year. I know that in the
beginning it is going to be frustrating as it was last year but it will take some time to get the students into the swing of the program. But I know if I stick with it the final product will be well
worth it!
|
{"url":"http://mathforum.org/blogs/powerfulideas/pow-teacher-column/","timestamp":"2014-04-17T21:33:42Z","content_type":null,"content_length":"17553","record_id":"<urn:uuid:e9b6904c-ab22-4715-ba39-41aa850a945e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematician Harvey Margolis Dies
A memorial service was held June 15 at King's Chapel in Boston for Assoc. Prof. Harvey R. Margolis (Mathematics), who died June 8 after a lengthy illness. He was 57.
A member of BC's Mathematics Department since 1970, Margolis was an expert in the field of algebraic topology and in 1984 authored an important book on the subject titled Spectra and the Steenrod Algebra.
Margolis taught a very popular class called "Ideas in Mathematics" and was at work on a textbook for the course, which was designed to explain advanced conceptual and theoretical mathematics to
students from other disciplines.
"Harvey really tried to give people the sense that they could understand these concepts," said Assoc. Prof. Charles Landraitis (Mathematics). "He wanted to explain to people why math is interesting
and what mathematicians are concerned with. It will be his legacy."
Landraitis said that he admired his friend's courage and selflessness in the face of an illness that caused him to rely on a wheelchair for the last two years.
"He never dwelt on his infirmities, even after he could not walk," said Landraitis.
In 1971 Margolis was appointed a fellow of the Institute for Advanced Studies and later won a Mellon Grant to help support his research. Margolis also won a number of National Science Foundation grants.
After receiving his doctorate from the University of Chicago in 1967, Margolis taught for a short period of time at Northwestern University before arriving at Boston College as an assistant
professor. He was promoted to associate professor in 1974.
Margolis leaves his wife, Nell (Lowenberg) and his daughters, Amelia, Caroline and Annabel, all of Boston, and his mother, Isobel (Dawson) of New York City.
-Stephen Gawlik
|
{"url":"http://www.bc.edu/bc_org/rvp/pubaf/chronicle/v8/jl20/margolis.html","timestamp":"2014-04-18T11:14:13Z","content_type":null,"content_length":"2482","record_id":"<urn:uuid:86fe9e4f-c915-4201-8912-f2bceb494891>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: Re: Categorical foundations
Martin Schlottmann martin_schlottmann at math.ualberta.ca
Fri Jan 23 14:28:34 EST 1998
The discussion on this topic has become a little
bit loaded, and I think that arguing about the question
of which system, category or set theory, is more convenient
or simple or basic or intuitive or whatever will not
lead to any productive result. Instead, I would propose
to embark on the following program to unify both approaches.
In order to explain what I would like to see let me
give the following (too?) simple example which one could call
"Set vs Class Theory". Consider the following two
systems: ZFC set theory and GNB class theory. Both can
be used for foundational purposes, and it is very
easy to translate between both languages: ZFC is a
sub-system of GNB and, on the other hand, GNB is a
conservative extension of ZFC. Therefore, every
mathematical reasoning in the framework of one system
can be immediately transformed into the other system
and vice versa. As a consequence, it would be futile
to argue if classes in the sense of GNB are admissible
for f.o.m. purposes or not. It would also be futile
to argue a priori which system is more convenient;
if one likes, one can use both systems almost simultaneously
without having much to worry about contradictions.
If one system is superior to the other, this will simply
sort out by itself in due course; if none is really
better, it doesn't matter if both will be around indefinitely.
Translated into the category vs set discussion:
Let both, proponents of sets and proponents of
categories, fix one system each which they think suits
their needs for formalizing mathematical reasoning.
Say, the system of set theory is ZFC, the system for
category theory is XYZ (I apologize for being not familiar
enough with category theory to make a definite proposal
here). Then, set theorists translate XYZ into ZFC,
categorists translate ZFC into XYZ. After this, write
a textbook and simply let mathematicians do their work
in whatever of the two systems they like and see which
system survives.
If both systems stand the test in everyday life, then,
obviously, none of them is more convenient for f.o.m.
If it turns out that one of the systems cannot be
translated into the other, then there is a serious
discrepancy on the admissible principles of mathematical
reasoning. E.g., if it turns out that XYZ needs ZFC+
inaccessible cardinals, then one can argue seriously about
the question if XYZ uses principles of reasoning which
are not generally accepted (for myself, I would not
like to have my own results to depend on the existence
of inaccessible cardinals). At this stage, one has
a point which can be productively discussed.
I apologize in advance if the program I described has
already been carried out and only escaped my attention.
If so, I wonder what all this discussion is about, after all.
I also apologize being elaborate up to the edge of
triviality, but I wanted to make myself clear in an
already loaded discussion.
Martin Schlottmann <martin_schlottmann at math.ualberta.ca>
Department of Mathematical Sciences, CAB 583
University of Alberta, Edmonton AB T6G 2G1, Canada
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-January/000944.html","timestamp":"2014-04-20T01:12:37Z","content_type":null,"content_length":"5435","record_id":"<urn:uuid:78a3b372-7b81-4972-8f31-986e98dfe7ed>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Voorhees Township, NJ Algebra 2 Tutor
Find a Voorhees Township, NJ Algebra 2 Tutor
...I will provide them with as little or as much assistance as they need! I enjoy tutoring and believe that due to my education and past experience, I have a lot to offer!I have always had a love
and passion for math and am therefore very patient, thorough, and helpful when it comes tutoring students in this subject! I love Chemistry and therefore love tutoring students in chemistry!
30 Subjects: including algebra 2, reading, English, biology
...I now work as a college admissions consultant for a university prep firm and volunteer as a mentor to youth in Camden. After graduating Princeton I lived and worked for about two years in
Singapore, where I taught business IT (focusing on advanced MS Excel) in the business and accounting school ...
36 Subjects: including algebra 2, reading, English, geometry
...As a result, I try to approach the topic from different directions to help a student understand fully. My personal goal while tutoring is to bring a student to an understanding of the topic
that develops confidence in the material and the ability to work with it on their own eventually and build...
33 Subjects: including algebra 2, English, French, physics
...I don't teach this subject, but it should only take some simple brushing up. Furthermore, I have helped a summer camp put together curriculum material for math, including algebra. I have a
strong math and science foundation, and completed math up to calculus while in high school.
12 Subjects: including algebra 2, chemistry, geometry, biology
...I taught ESL for two years at a language school in Taipei, Taiwan, working primarily with college-level students who needed to refine their speaking and writing skills. Since then, I have
worked with students employed by the Du Pont Co. and AstraZeneca, as well as with graduate students in sever...
32 Subjects: including algebra 2, English, geometry, chemistry
Related Voorhees Township, NJ Tutors
Voorhees Township, NJ Accounting Tutors
Voorhees Township, NJ ACT Tutors
Voorhees Township, NJ Algebra Tutors
Voorhees Township, NJ Algebra 2 Tutors
Voorhees Township, NJ Calculus Tutors
Voorhees Township, NJ Geometry Tutors
Voorhees Township, NJ Math Tutors
Voorhees Township, NJ Prealgebra Tutors
Voorhees Township, NJ Precalculus Tutors
Voorhees Township, NJ SAT Tutors
Voorhees Township, NJ SAT Math Tutors
Voorhees Township, NJ Science Tutors
Voorhees Township, NJ Statistics Tutors
Voorhees Township, NJ Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Collingswood algebra 2 Tutors
Deptford Township, NJ algebra 2 Tutors
Echelon, NJ algebra 2 Tutors
Evesham Twp, NJ algebra 2 Tutors
Gibbsboro algebra 2 Tutors
Haddonfield algebra 2 Tutors
Hi Nella, NJ algebra 2 Tutors
Laurel Springs, NJ algebra 2 Tutors
Lindenwold, NJ algebra 2 Tutors
Mount Laurel algebra 2 Tutors
Pine Hill, NJ algebra 2 Tutors
Somerdale, NJ algebra 2 Tutors
Stratford, NJ algebra 2 Tutors
Voorhees algebra 2 Tutors
Voorhees Kirkwood, NJ algebra 2 Tutors
|
{"url":"http://www.purplemath.com/Voorhees_Township_NJ_Algebra_2_tutors.php","timestamp":"2014-04-19T02:16:03Z","content_type":null,"content_length":"24727","record_id":"<urn:uuid:d97b067c-b69a-4ec1-b7b7-6cb011b61444>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Advances in OptoElectronics
Volume 2012 (2012), Article ID 347875, 5 pages
Research Article
Conditions of Perfect Imaging in Negative Refraction Materials with Gain
^1State Key Laboratory of Optoelectronic Materials and Technologies, Sun Yat-sen University, Guangzhou 510275, China
^2Department of Physical Electronics, School of Electrical Engineering, Faculty of Engineering, Tel Aviv University, 69978 Tel Aviv, Israel
^3Chemical Physics Department, Weizmann Institute of Science, 76100 Rehovot, Israel
Received 29 June 2012; Accepted 3 October 2012
Academic Editor: Alexandra E. Boltasseva
Copyright © 2012 Haowen Liang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Light propagation is analyzed in a negative refraction material (NRM) with gain achieved by pumping. An inherent spatial “walk-off” between the directions of phase propagation and energy transfer is
known to exist in lossy NRMs. Here, the analysis is extended to the case where the NRM acts as an active material under various pumping conditions. It is shown that the condition for perfect imaging
is only possible for specific wavelengths under special excitation conditions. Under excessive gain, the optical imaging can no longer be perfect.
1. Introduction
Negative refraction is known to offer a wide range of potential applications [1–4]. However, losses, which are an inherent feature of the negative refraction, present a major impediment to the
performance of NRMs [5–9]. To overcome these problems, NRMs with gain were proposed to compensate the losses, even to turn the materials into amplified systems. Nevertheless, it is often stated that
the gain will destroy the negative refraction due to causality considerations [10], although the statement was disputed by a theory demonstrating that negative refraction may be preserved in a
limited spectral region [11, 12].
Common methods to introduce gain in NRMs include optical parametric amplification (OPA) [13] and externally pumped gain materials [14–18]. Optical imaging needs to collect both propagating and
evanescent waves. However, because of the strict phase-matching condition only a limited range of wave vectors can receive gain from OPA, so OPA cannot be applied to achieve perfect imaging in
NRMs.
In this paper, we demonstrate that, under the action of the pumping gain, lossless and amplified light propagation may occur in a special spectral window of the NRM. The propagation behavior is shown
to be closely related to the dispersion and pumping configuration. Propagation in NRMs is also examined in different pumping configurations.
1.1. Spatial “Walk-Off” in Lossy NRMs
Light incident from free space onto a homogeneous, isotropic, lossy NRM, of permittivity and permeability , was studied in detail [8]. The complex effective refractive index is then defined as or .
In free space, the incident wave vector is real, while in the lossy NRM, the wave vector is complex. At a given optical frequency , this implies that for both the propagating wave () and the
evanescent one ().
To analyze light propagation in the NRM, the phase and group velocities are expressed as and , where and are determined by the NRM dispersion. The energy propagation is approximately determined by
the group velocity under the assumption of low losses [19–21]. The Poynting vector can also be used to define the energy propagation.
For complex vectors and , the direction of the phase propagation and energy transfer in the wave packet are determined by their real parts [22]: In an ideal NRM, where the refractive index is
negative without losses, the phase velocity and group velocity are strictly antiparallel [1, 19]. However, (1) and (2) show that the group velocity is no longer antiparallel because of the
contribution of the last term in (2). This spatial “walk-off,” that is, the noncollinearity between the phase propagation and the energy transfer, becomes obvious in a homogeneous, isotropic, lossy
NRM. The angle between the phase velocity and the group velocity is with the “walk-off” angle defined as .
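For orientation, the standard textbook forms of these quantities (a sketch only; the symbols and normalizations below are our assumptions, and the paper's Eqs. (1)–(3) may differ in convention) are
$\mathbf{v}_p=\dfrac{\omega}{|\mathrm{Re}\,\mathbf{k}|^{2}}\,\mathrm{Re}\,\mathbf{k}, \qquad \langle\mathbf{S}\rangle=\tfrac{1}{2}\,\mathrm{Re}\left(\mathbf{E}\times\mathbf{H}^{*}\right), \qquad \cos\theta=\dfrac{\mathrm{Re}\,\mathbf{k}\cdot\langle\mathbf{S}\rangle}{|\mathrm{Re}\,\mathbf{k}|\,|\langle\mathbf{S}\rangle|},$
with the walk-off angle taken as the deviation of $\theta$ from the ideal antiparallel value of $180^{\circ}$.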
The propagation behavior is discussed here for both propagating and evanescent waves. The dispersive curve is described as the Lorentz model, with , , and s^−1, s^−1, s^−1 in Figure 1(a) for the
real and imaginary parts of the refractive index. For a typical propagating wave with , the size of the “walk-off” is numerically simulated as shown in Figure 1(b). The analysis of the “walk-off” can
be extended to evanescent waves with , where it is found that the “walk-off” dramatically increases with , also shown in Figure 1(b).
For perfect focusing, the “walk-off” appearing at different should be suppressed. It was shown that this goal can be achieved by a pumping gain scheme [18]. Here, we show, in a pumped four-level
model of signal amplification, that the realization of perfect imaging is possible only for a specific wavelength under strict pumping condition.
2. NRMs with Pumping Gain
Four-level systems represent conventional gain media. The intensity of light in a chosen spectral interval can be amplified in NRMs by introducing an extra term in the electrical field susceptibility
[14, 17, 23]. It is assumed here that the gain medium is pumped in the linear regime, and no gain saturation arises. Accordingly, the population of the ground level (which can be considered as the
population of the gain medium) is much larger than in the other three levels as per the usual pumping condition. With the definition of polarization , the permittivity, with the extra term in the
susceptibility, is given by
As shown below, the last term of (4) is crucial to the performance of the gain-compensated NRMs. The components of the effective refractive index, and , are shown in Figure 2.
Whereas the optical losses in the NRM can be effectively compensated by pumping, as shown in Figure 2(b), an amplification of the input signal is achievable; the elimination of the “walk-off” depends
on both the real and imaginary parts of the refractive index. The ideal case is the one with and , giving rise to perfect imaging [1, 6]. However, as shown in Figure 2, this condition holds only at
the optimal frequency where s^−1 under appropriate pumping conditions.
Optical imaging with resolution above or below the diffraction limit depends on the system’s ability to recover the wavevector’s component for either propagating or evanescent waves. Figure 3(b)
shows the size of the “walk-off” for different with the appropriate pumping rate of s^−1. The propagating waves correspond to , while the evanescent ones correspond to . After introducing the
pumping gain, a red shift is observed in the curve. At the optimal frequency (s^−1) where , the group and phase velocities are strictly antiparallel for all . Because of the
antiparallel directions of the energy transfer and phase propagation, the spatial “walk-off” is suppressed, so that the ability of directional transmission (for the propagating waves) and perfect
focusing (for evanescent waves in the near-field) will be preserved. Thus, the pumping can effectively cancel the losses only in a limited spectral region, under appropriate pumping conditions. This
conclusion is in agreement with those reported in [11, 12].
By contrast, the propagation in an active NRM under excessive pumping exhibits a peculiar behavior. Figure 3(c) shows the “walk-off” angles at different for excessive pumping rate (here s^−1) at the
frequency where (here s^−1). The angles are then larger than 180°, indicating that the “walk-off” reappears, with the respective angles . The “walk-off” becomes more significant at larger . It also
shows that increases dramatically with the increase of for the evanescent wave with . Hence, the perfect focus for the near-field component is impossible under excessive pumping. Notice that because
of the red shift in , the “walk-off” is suppressed at the frequency of s^−1, where (the red arrow in Figure 3(c)). However, perfect lensing requires [1, 6], hence the perfect focus cannot be
obtained under excessive pumping.
In order to achieve perfect focusing, the pumping rates should be reduced and the pumping central frequencies should be blue-shifted, as shown in Figure 3(c). Light with all values of can then
perfectly focus through the slab.
3. Conclusions
To conclude, we have analyzed the effect of gain on the negative refraction in NRMs. In a lossy NRM, even though it is isotropic and homogeneous, the group and phase velocities are not strictly
antiparallel, yielding a spatial “walk-off”, which may restrict the applications of NRMs in a variety of fields. By introducing gain, losses can be effectively reduced, and light amplification can be
realized within a narrow spectral range. By appropriately setting the gain to strictly cancel the losses, the “walk-off” for both propagating and evanescent waves can be effectively eliminated for all
values of , leading to an ideal NRM. However, for excessively pumped NRMs, the spatial “walk-off” reappears. Thus, the use of optical pumping to realize perfect imaging is restricted to a very narrow
spectral region, under precisely defined pumping conditions. An alternative method of overcoming NRM losses without signal distortion may involve self-induced transparency (SIT) solitons, which were
predicted in metamaterials [24], in analogy with SIT in other resonantly absorbing structures [25, 26].
This work is supported by The National Key Basic Research Special Foundation (G2012CB921904) and by the Chinese National Natural Science Foundation (10934011).
1. J. B. Pendry, “Negative refraction makes a perfect lens,” Physical Review Letters, vol. 85, no. 18, pp. 3966–3969, 2000. View at Publisher · View at Google Scholar · View at Scopus
2. R. A. Shelby, D. R. Smith, and S. Schultz, “Experimental verification of a negative index of refraction,” Science, vol. 292, no. 5514, pp. 77–79, 2001. View at Publisher · View at Google Scholar
· View at Scopus
3. K. L. Tsakmakidis, A. D. Boardman, and O. Hess, “‘Trapped rainbow’ storage of light in metamaterials,” Nature, vol. 450, no. 7168, pp. 397–401, 2007. View at Publisher · View at Google Scholar ·
View at Scopus
4. H. G. Chen, C. T. Chan, and P. Sheng, “Transformation optics and metamaterials,” Nature Materials, vol. 9, no. 5, pp. 387–396, 2010. View at Publisher · View at Google Scholar · View at Scopus
5. R. W. Ziolkowski and E. Heyman, “Wave propagation in media having negative permittivity and permeability,” Physical Review E, vol. 64, no. 5, Article ID 056625, 2001. View at Scopus
6. D. R. Smith, D. Schurig, M. Rosenbluth, S. Schultz, S. A. Ramakrishna, and J. B. Pendry, “Limitations on subdiffraction imaging with a negative refractive index slab,” Applied Physics Letters,
vol. 82, no. 10, pp. 1506–1508, 2003. View at Publisher · View at Google Scholar · View at Scopus
7. A. G. Ramm, “Does negative refraction make a perfect lens?” Physics Letters A, vol. 372, no. 43, pp. 6518–6520, 2008. View at Publisher · View at Google Scholar · View at Scopus
8. Y.-J. Jen, A. Lakhtakia, C.-W. Yu, and C.-T. Lin, “Negative refraction in a uniaxial absorbent dielectric material,” European Journal of Physics, vol. 30, no. 6, pp. 1381–1390, 2009. View at
Publisher · View at Google Scholar · View at Scopus
9. W. H. Wee and J. B. Pendry, “Looking beyond the perfect lens,” New Journal of Physics, vol. 12, Article ID 053018, 2010. View at Publisher · View at Google Scholar · View at Scopus
10. M. I. Stockman, “Criterion for negative refraction with low optical losses from a fundamental principle of causality,” Physical Review Letters, vol. 98, no. 17, Article ID 177404, 2007. View at
Publisher · View at Google Scholar · View at Scopus
11. P. Kinsler and M. W. McCall, “Causality-based criteria for a negative refractive index must be used with care,” Physical Review Letters, vol. 101, no. 16, Article ID 167401, 2008. View at
Publisher · View at Google Scholar · View at Scopus
12. K. J. Webb and L. Thylén, “Perfect-lens-material condition from adjacent absorptive and gain resonances,” Optics Letters, vol. 33, no. 7, pp. 747–749, 2008. View at Publisher · View at Google
Scholar · View at Scopus
13. A. K. Popov and V. M. Shalaev, “Compensating losses in negative-index metamaterials by optical parametric amplification,” Optics Letters, vol. 31, no. 14, pp. 2169–2171, 2006. View at Publisher ·
View at Google Scholar · View at Scopus
14. P. P. Orth, J. Evers, and C. H. Keitel, “Lossless negative refraction in an active dense gas of atoms,” 2007, http://arxiv.org/abs/0711.0303.
15. A. Fang, T. Koschny, M. Wegener, and C. M. Soukoulis, “Self-consistent calculation of metamaterials with gain,” Physical Review B, vol. 79, no. 24, Article ID 241104, 2009. View at Publisher ·
View at Google Scholar · View at Scopus
16. Y. Sivan, S. Xiao, U. K. Chettiar, A. V. Kildishev, and V. M. Shalaev, “Frequency-domain simulations of a negativeindex material with embedded gain,” Optics Express, vol. 17, no. 26, pp.
24060–24074, 2009. View at Publisher · View at Google Scholar · View at Scopus
17. S. Wuestner, A. Pusch, K. L. Tsakmakidis, J. M. Hamm, and O. Hess, “Overcoming losses with gain in a negative refractive index metamaterial,” Physical Review Letters, vol. 105, no. 12, Article ID
127401, 2010. View at Publisher · View at Google Scholar · View at Scopus
18. S. Xiao, V. P. Drachev, A. V. Kildishev et al., “Loss-free and active optical negative-index metamaterials,” Nature, vol. 466, no. 7307, pp. 735–738, 2010. View at Publisher · View at Google
Scholar · View at Scopus
19. M. W. McCall, “What is negative refraction?” Journal of Modern Optics, vol. 56, no. 16, pp. 1727–1740, 2009. View at Publisher · View at Google Scholar · View at Scopus
20. V. Gerasik and M. Stastna, “Complex group velocity and energy transport in absorbing media,” Physical Review E, vol. 81, no. 5, Article ID 056602, 2010. View at Publisher · View at Google Scholar
· View at Scopus
21. L. Muschietti and C. T. Dum, “Real group velocity in a medium with dissipation,” Physics of Fluids B, vol. 5, no. 5, pp. 1383–1397, 1993. View at Scopus
22. M. Born and E. Wolf, Principles of Optics, Cambridge University Press, 7th edition, 1999.
23. A. A. Govyadinov, V. A. Podolskiy, and M. A. Noginov, “Active metamaterials: sign of refractive index and gain-assisted dispersion management,” Applied Physics Letters, vol. 91, no. 19, Article
ID 191103, 2007. View at Publisher · View at Google Scholar · View at Scopus
24. J. Zeng, J. Zhou, G. Kurizki, and T. Opatrny, “Backward self-induced transparency in metamaterials,” Physical Review A, vol. 80, no. 6, Article ID 061806, 2009. View at Publisher · View at Google
Scholar · View at Scopus
25. M. Blaauboer, B. A. Malomed, and G. Kurizki, “Spatiotemporally localized multidimensional solitons in self-induced transparency media,” Physical Review Letters, vol. 84, no. 9, pp. 1906–1909,
2000. View at Scopus
26. T. Opatrný, B. A. Malomed, and G. Kurizki, “Dark and bright solitons in resonantly absorbing gratings,” Physical Review E, vol. 60, no. 5, pp. 6137–6149, 1999. View at Scopus
|
{"url":"http://www.hindawi.com/journals/aoe/2012/347875/","timestamp":"2014-04-17T13:25:31Z","content_type":null,"content_length":"167312","record_id":"<urn:uuid:ed754db8-8151-4cdd-8e08-1f6b98852e31>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Another type of Partial Fractions
June 9th 2010, 03:28 AM
Another type of Partial Fractions
The question starts as
I've gotten this far and don't know how to solve the rest.
Then I don't know how to solve the rest, when I substitute an x, I get a zero for both, it's completely different from this.
June 9th 2010, 05:15 AM
Archie Meade
The question starts as
I've gotten this far and don't know how to solve the rest.
Then I don't know how to solve the rest, when I substitute an x, I get a zero for both, it's completely different from this.
Hi Cthul,
now that you have found C, place $x=-1$ to bypass B and discover A.
Then as you have both A and C, you can place x=0 to find B.
June 10th 2010, 12:23 AM
It won't work, it doesn't when I sub in zero, I get 2 variables.
June 10th 2010, 12:32 AM
Prove It
Try substituting $x = -1$ first to find $A$.
Once you have $A$, then you can let $x = 0$ to find $B$.
My preferred method is expanding all the brackets, equating coefficients of like powers of $x$ and solving the system.
June 10th 2010, 01:56 AM
Archie Meade
June 10th 2010, 03:05 AM
I thought when x is substituted with zero, you'd get... $(1-(-1))^{2}$ which makes .... oh, I see. Thanks. The signs get confusing.
June 10th 2010, 05:12 AM
Archie Meade
Thinking of subtracting negative numbers as the difference between temperatures can be helpful...
5-2 is the difference between 5 and 2 degrees, which is 3 degrees.
There is a 6 degree difference between between 5 and -1,
5 degrees down to zero and another degree down to -1.
That's a drop of 6 degrees.
So 5-(-1) is that difference which we know is 6,
subtracting the lower temperature from the bigger one.
Since 5+1=6, we can say 5--1=5+1,
so 1--1=1+1 and so on...
-2-3 is subtract 2, then subtract 3, which is subtract 5 or -(2+3)
-2--3=-2-(-3), the difference between -2 degrees and -3 degrees which is 1 degree
as -2 is greater than -3,
so -2--3=1=3-2 or -2+3 etc
June 10th 2010, 12:09 PM
Partial fractions appear to be a very interesting problem.
|
{"url":"http://mathhelpforum.com/pre-calculus/148402-another-type-partial-fractions-print.html","timestamp":"2014-04-23T20:33:40Z","content_type":null,"content_length":"12990","record_id":"<urn:uuid:2f1f2979-acfe-45e4-9377-6c91f81520d7>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This paper was presented at the National Workshop on Cryptology 2003 organized by the Cryptology Research Society of India, at Anna University, Chennai.
A fast implementation of the RSA algorithm using the GNU MP library
Organizations in both public and private sectors have become increasingly dependent on electronic data processing. Protecting these important data is of utmost concern to the organizations and
cryptography is one of the primary ways to do the job. Public key cryptography is used to protect digital data going through an insecure channel from one place to another. The RSA algorithm is
extensively used in popular implementations of public key infrastructures. In this paper, we present an efficient implementation of the RSA algorithm using the GMP library from GNU. We have also
analyzed the changes in the performance of the algorithm when varying the number of characters we encode together (we term this procedure bitwise incremental RSA).
1. Introduction
Data communication is an important aspect of our lives, so protecting data from misuse is essential. A cryptosystem defines a pair of data transformations called encryption and decryption.
Encryption is applied to the plain text (the data to be communicated) to produce the cipher text (the encrypted data) using an encryption key. Decryption uses the decryption key to convert the
cipher text back to the plain text, i.e. the original data. If the encryption key and the decryption key are the same, or one can be derived from the other, the scheme is called symmetric
cryptography. Such a cryptosystem can be easily broken if the key used to encrypt or decrypt can be found. To improve the protection mechanism, public key cryptography was introduced in 1976 by
Whitfield Diffie and Martin Hellman of Stanford University. It uses a pair of related keys, one for encryption and the other for decryption. One key, called the private key, is kept secret, and
the other, known as the public key, is disclosed.
The message is encrypted with the public key and can only be decrypted using the private key, so the encrypted message cannot be decrypted by anyone who knows only the public key, and thus secure
communication is possible. RSA (named after its authors, Rivest, Shamir and Adleman) is the most popular public key algorithm. It relies on the integer factorization problem: given a very large
number, it is practically impossible with today's computing power to find two prime numbers whose product is the given number. As the number grows, the feasibility of factoring it decreases, so a
good public key cryptosystem needs very large numbers. GNU provides an excellent library called GMP that can handle numbers of arbitrary precision, and we have used this library to implement the
RSA algorithm. As we show in this paper, the number of bits encrypted together using a public key has a significant impact on the decryption time and the strength of the cryptosystem.
2. Review of Existing Literature
Authentication protocols and their implications are discussed in [1]. Computing the inverse of a shared secret modulus, which involves the mathematical formulation of RSA, is discussed in [2].
Applications of hash functions in cryptography are discussed in [3]. The strength of the RSA algorithm is discussed in [4]. A survey of fast exponentiation methods is given in [5]. Cryptosystems
for sensor networks are studied in [6]. Security proofs for various digital signature schemes are studied in [7]. Multiparty authentication services and key agreement protocols are discussed in
[8]. Various fast RSA implementations are described in [9]. An efficient implementation of RSA is discussed in [10]. The basic RSA algorithm and other cryptography-related issues are discussed in [11].
3. Scope of Present Work
Our work in this paper focuses primarily on the implementation of RSA. For an efficient implementation we have used the GMP library; we have explored the behaviour and feasibility of the algorithm
as various input parameters are changed; and finally a user interface has been developed to provide an application of our analysis.
4. Review of the RSA algorithm
The RSA public key cryptosystem was invented by R. Rivest, A. Shamir and L. Adleman. The RSA cryptosystem is based on the dramatic difference between the ease of finding large primes and the
difficulty of factoring the product of two large prime numbers (the integer factorization problem). This section gives a brief overview of the RSA algorithm for encrypting and decrypting messages.
For the RSA cryptosystem, we start off by generating two large prime numbers, 'p' and 'q', of about the same size in bits. Next, compute 'n' where n = pq, and 'x' such that x = (p-1)(q-1). We then
select a small odd integer 'e' less than x which is relatively prime to it, i.e. gcd(e,x) = 1. Finally we find the unique multiplicative inverse of e modulo x and name it 'd'. In other words,
ed = 1 (mod x), and of course, 1 < d < x. Now, the public key is the pair (e,n) and the private key is d.
Suppose Bob wishes to send a message (say 'm') to Alice. To encrypt the message using the RSA encryption scheme, Bob must obtain Alice's public key pair (e,n). The message to send must now be
encrypted using this pair (e,n). However, the message 'm' must be represented as an integer in the interval [0,n-1]. To encrypt it, Bob simply computes the number 'c' where c = m ^ e mod n. Bob sends
the ciphertext c to Alice.
To decrypt the ciphertext c, Alice needs to use her own private key d (the decryption exponent) and the modulus n. Simply computing the value of c ^ d mod n yields back the decrypted message (m). Any
article treating the RSA algorithm in considerable depth proves the correctness of the decryption algorithm. And such texts also offer considerable insights into the various security issues related
to the scheme. Our primary focus is on a simple yet flexible implementation of the RSA cryptosystem that may be of practical value.
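Before turning to the implementation, it may help to see the two exponentiations above written out with GMP. The sketch below is ours and not code from the application itself; the tiny key values (n = 3233, e = 17, d = 2753) are illustrative assumptions chosen so the snippet runs instantly, nothing like the 1024-bit parameters discussed later.

/* Toy RSA round trip with GMP: c = m^e mod n, then c^d mod n.
 * Illustrative sketch only -- the key values are far too small for
 * real use.  Build with:  gcc rsa_toy.c -lgmp                        */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t n, e, d, m, c, r;
    mpz_init_set_str(n, "3233", 10);   /* n = 61 * 53                 */
    mpz_init_set_str(e, "17",   10);   /* public exponent             */
    mpz_init_set_str(d, "2753", 10);   /* d = 17^(-1) mod (60 * 52)   */
    mpz_init_set_str(m, "65",   10);   /* message, 0 <= m < n         */
    mpz_init(c);
    mpz_init(r);

    mpz_powm(c, m, e, n);              /* encryption: c = m^e mod n   */
    mpz_powm(r, c, d, n);              /* decryption: r = c^d mod n   */

    gmp_printf("cipher = %Zd, recovered = %Zd\n", c, r);  /* r == 65  */

    mpz_clear(n); mpz_clear(e); mpz_clear(d);
    mpz_clear(m); mpz_clear(c); mpz_clear(r);
    return 0;
}

mpz_powm() performs the modular exponentiation at the heart of both encryption and decryption; the remaining work in an application like the one described here is key generation, key management and I/O.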
5. Our implementation of the RSA algorithm
We have implemented the RSA cryptosystem in two forms: a console mode implementation and a user-friendly GUI implementation. We focus on the console mode implementation here, and leave the
GUI implementation for a later section of this report. The console application uses a 1024 bit modulus RSA implementation, which is adequate for non-critical applications. By a simple modification of
the source code, higher bit-strengths may be easily achieved, albeit with a slight performance hit.
5.2 Handling large integers and the GMP library
Any practical implementation of the RSA cryptosystem would involve working with large integers (in our case, of 1024 bits or more in size). One way of dealing with this requirement would be to write
our own library that handles all the required functions. While this would indeed make our application independent of any other third-party library, we refrained from doing so due to mainly two
considerations. First, the speed of our implementation would not match the speed of the libraries available for such purposes. Second, it would probably be not as secure as some available open-source
libraries are.
There were several libraries to consider for our application. We narrowed the choice to three libraries: the BigInteger library (Java), the GNU MP Arbitrary Precision library (C/C++), and the OpenSSL
crypto library (C/C++). Of these, the GMP library (i.e. the GNU MP library) seemed to suit our needs the most.
The GMP library aims to provide the fastest possible arithmetic for applications that need a higher precision than the ones directly supported under C/C++ by using highly optimized assembly code.
Further the GMP library is a cross-platform library, implying that our application should work across platforms with minimal modifications, provided it is linked with the GMP library for the
appropriate platform. We have used the facilities offered by the GMP library heavily throughout our application. The key generation, encryption and decryption routines all use the integer handling
functions offered by this library.
In this subsection, we present the basic structure of the console mode RSA implementation. The program is meant for use on a per user basis where each user's home directory stores files containing
the private and public keys for the particular user. The application stores the private and public keys for a user in the files $HOME/.rsaprivate and $HOME/.rsapublic respectively.
At the very beginning the program searches for the existence of the aforementioned files and reads in the values of the private and public keys. If they are not present (as when the application is
run for the first time), the program proceeds to generate the keys and writes them to the files.
Following this, the user is presented with a menu, asking him whether he would like to encrypt a file, decrypt an encrypted file, or quit the application. If the user chooses to encrypt a file, he is
asked to enter the path to the file containing the recipient's public keys as well as the number of characters to encrypt at a time (this is justified later). If decryption is chosen, the path to the
encrypted file is requested and the program subsequently decrypts the file to the standard output.
The generation of the RSA keys is of paramount importance to the whole application. The application maintains a constant named 'BITSTRENGTH' which is the size of the RSA modulus (n) in bits.
According to this setting, two character arrays to contain the digits of the primes p and q are declared. A simple loop through all the digits of this array initializes the array with a random string
of bits. We have used C's inbuilt random number generation routines to generate the bits of the string. The random generator is seeded using the srand() routine by the return value of the function
time(), which returns the time since the epoch (00:00:00 UTC, January 1, 1970), measured in seconds. Key generation at the same return value of time() is avoided by sleep(), which delays the
execution by one second, thus ensuring that the random numbers are never repeated.
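A minimal sketch of this seeding and bit-generation step is shown below (the constant and variable names here are illustrative, not the exact ones in our source):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BITSTRENGTH 1024               /* size of the RSA modulus n in bits  */
#define PRIMESIZE (BITSTRENGTH / 2)    /* size of each prime p and q in bits */

int main(void)
{
    char p_str[PRIMESIZE + 1];
    int i;
    srand(time(NULL));                 /* seed from seconds since the epoch  */
    for (i = 0; i < PRIMESIZE; i++)    /* fill the array with random bits    */
        p_str[i] = (rand() % 2) ? '1' : '0';
    p_str[PRIMESIZE] = '\0';
    printf("%s\n", p_str);             /* binary string later handed to GMP  */
    return 0;
}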
At the end of this process, we have strings containing binary representations of the numbers p and q, but they are not prime yet. To achieve that, two gmp integers are first initialized with the
contents of these strings. Then, the function mpz_nextprime() is called, which changes p and q to the next possible primes. This function uses a probabilistic algorithm to identify primes. According
to the gmp manual, it is adequate for practical purposes and the chance of a composite passing is extremely small.
Now that we have the two 512-bit primes p and q, calculating the values of n (= pq) and x (= (p-1)*(q-1)) is a simple matter of invoking mpz_mul() with the proper arguments. Next, to determine the
value of 'e', we started with a value of 65537 and incremented it by two each time until the condition gcd(e,x) = 1 was satisfied (which, incidentally, is almost always satisfied by 65537 itself).
Now there exists a procedure in the gmp library with the prototype int mpz_invert(mpz_t ROP, mpz_t OP1, mpz_t OP2) which computes the multiplicative inverse of OP1 modulo OP2 and puts the result in
ROP. Using this function helps us avoid writing our own routine based on the Extended Euclidean Algorithm (as this function executes extremely fast). In this way, we obtain the value of d, which
completes our quest for the RSA keys. Finally the keys (d,e,n) are stored in two files .rsapublic and .rsaprivate, both in the user's home directory.
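A condensed sketch of this key-generation sequence using the GMP calls named above (file output and cleanup omitted; it assumes #include <gmp.h> and that p_str and q_str already hold the random binary strings):

mpz_t p, q, n, x, e, d, pm1, qm1, g;
mpz_init(p); mpz_init(q); mpz_init(n); mpz_init(x); mpz_init(e);
mpz_init(d); mpz_init(pm1); mpz_init(qm1); mpz_init(g);

mpz_set_str(p, p_str, 2);            /* read the random bit strings (base 2) */
mpz_set_str(q, q_str, 2);
mpz_nextprime(p, p);                 /* advance each to the next prime       */
mpz_nextprime(q, q);

mpz_mul(n, p, q);                    /* n = p*q                              */
mpz_sub_ui(pm1, p, 1);
mpz_sub_ui(qm1, q, 1);
mpz_mul(x, pm1, qm1);                /* x = (p-1)*(q-1)                      */

mpz_set_ui(e, 65537);                /* start at 65537, step by 2 until      */
mpz_gcd(g, e, x);                    /* gcd(e,x) = 1                         */
while (mpz_cmp_ui(g, 1) != 0) {
    mpz_add_ui(e, e, 2);
    mpz_gcd(g, e, x);
}

mpz_invert(d, e, x);                 /* d = e^(-1) mod x                     */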
It is to be noted that the entire application can use a higher (or lower) bitstrength of the RSA modulus n by simply changing the constant BITSTRENGTH at the very beginning of the source-code.
As the application is executed, the existence of the key files is checked and, if they do not exist, the RSA keys are generated. Following this, the user is presented with a menu, which enables him
to choose to encrypt, decrypt or quit.
First, the path to the file containing the public key of the recipient is requested from the user. However the user is also given the option of using his own public keys (in which case, only he can
decrypt the message). Using the values of e and n read from the relevant file containing the public keys, the message is encrypted.
A critical requirement for the proper functioning of the RSA algorithm is that the message m must be represented as an integer in the range [0,n-1] (where n is, as usual, the RSA modulus). Our
application converts text messages to integers by using a simple mapping of every character to its ASCII code. But encrypting only one character at a time is not only expensive in terms of the time
required to encrypt and decrypt, but also in terms of security. This is because the encrypted integers would then only be from a small finite set (containing a maximum of as many integers as the
number of ASCII characters). Hence, we ask the user the number of characters to encrypt at a time. For an RSA encryption scheme with the modulus size of 1024 bits, we have seen that about 100
characters can be encrypted at once. A smaller number of characters causes encryption and (especially) decryption to take significantly longer, whereas a larger number of characters often violates the
condition that the message m must lie in the interval [0,n-1].
The entire file to encrypt is processed as a group of strings each containing the specified number of characters (except possibly the last such string). Each character in such a string is converted
to its 3-character-wide ASCII code, and the entire resulting numeric string is our message m. Encrypting it is achieved by computing m ^ e mod n. There is a gmp routine specifically for such a
computation, having the prototype void mpz_powm (mpz_t ROP, mpz_t BASE, mpz_t EXP, mpz_t MOD), which sets ROP to (BASE raised to EXP) modulo MOD. Thus invoking mpz_powm(c,m,e,n) stores the encrypted
partial message in the integer c. Each encrypted block is written in turn to the output file until the whole message has been processed. This completes the encryption process.
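Put together, the encryption and decryption of one block reduce to single library calls; a sketch (reusing e, d and n from the key-generation sketch above, and taking the word "hello" as the block to encrypt):

mpz_t m, c, recovered;
mpz_init(m); mpz_init(c); mpz_init(recovered);

mpz_set_str(m, "104101108108111", 10);  /* "hello" as 3-digit ASCII codes     */

mpz_powm(c, m, e, n);                   /* c = m^e mod n  (encryption)        */
mpz_powm(recovered, c, d, n);           /* m = c^d mod n  (decryption)        */

gmp_printf("%Zd\n", recovered);         /* prints 104101108108111 again       */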
If, in the menu, the user chooses to decrypt an encrypted file, the decryption routine is invoked. The operation of this routine is really quite straightforward. From the file to decrypt (the path to
which is input from the user), the function processes each encrypted integer. It does so by computing the value of c ^ d mod n by invoking mpz_powm(m,c,d,n) and stores the decrypted part in m. Of
course, the values of d and n are read in beforehand.
Here, however, m contains the integer representation of the message, i.e. it is a string of digits in which each 3-character sequence is the ASCII code of a particular character. An inverse mapping
back to the corresponding characters is carried out, and the message, now as it was in the original file, is displayed on the standard output. The decryption process is over once all the integers
(ciphertext) in the encrypted file have been processed in the described manner.
This completes a discussion of our console mode implementation of the RSA algorithm. This public key infrastructure has been tested in a multi-user scenario successfully. The following section
analyses the time required for the various functions of our implementation, varying quite a few parameters.
6. Time Analysis
6.1 Timings for 1024-bit RSA (without compiler optimization)
All the times recorded below have been measured on a 733 MHz Pentium class processor, using the time measurement functions offered by the C library on a GNU/Linux platform (kernel 2.4.20).
Key generation : 0.465994 seconds (averaged over 5 samples)
The following times were recorded while encrypting/decrypting a file with exactly 10,000 characters:
Number of characters Encryption time Decryption time
1 5.068153 254.349886
25 0.219304 11.367492
50 0.128547 6.293279
75 0.096291 4.419277
100 0.078851 3.365591
RSA Timings (per 10,000 characters)
6.2 Timings for RSA for varying bit strengths
By varying the constant representing the bit-strength, RSA moduli of other sizes may be used quite easily. The following times were recorded using the same input file (10,000 characters):
Bit Strength Chars at once Key Generation Encryption Decryption
512 50 0.057984 0.054362 0.903180
768 75 0.194653 0.065302 2.098904
1024 100 0.465994 0.078851 3.365591
1280 125 0.657473 0.089712 4.437929
1536 150 1.613467 0.105439 5.804798
1792 175 2.057411 0.116585 7.430849
2048 200 4.052181 0.126361 9.001286
Timings for various bit-strengths (per 10,000 characters)
While the 512-bit RSA is definitely the fastest among the ones shown, it is not the most secure, providing only marginal security against a concerted attack. The slowest (2048-bit RSA) should be used in
critical situations since it offers the maximum resistance to attacks. In our opinion the 1024-bit modulus is a good balance between speed and security.
7. A graphical user interface to the RSA cryptosystem
This section describes the GUI version of our implementation of the RSA algorithm. While the concepts and libraries used are essentially the same, we briefly describe the implementation with special
regard to the considerations that were unique to the graphical version. Finally, we present the reader with some screenshots of the application.
The GUI application was developed using the KDE/Qt libraries on Red-Hat Linux version 8.0. We used KDevelop 2.1 as our integrated development environment.
Our application consists of three C++ classes, of which the class named RSA is the most important. It provides slots (signal handlers) for encrypting files, decrypting files, mailing the encrypted
file to another user, loading the values of the RSA keys from the key-files, saving encrypted/decrypted files and so on. One notable difference as far as features are concerned is that the GUI
application does not ask the user the number of characters to encrypt together.
The performance of the GUI version is similar to that of the console mode, since the underlying algorithms are the same. But since any program running under an X server requires a considerable amount
of memory, the GUI version might be somewhat slower than the console version on a system without adequate memory.
7.3 Screenshots of the application.
The following screen shows the main window of our RSA GUI:
When the user clicks on the "Check RSA Key Pairs" button, if the key files do not exist (as in the first time), the following message is shown:
The user is taken to the key generation dialog, whose functions are pretty self-explanatory. This dialog is shown next:
The following diagram shows the "Encrypt" tab of the main window:
The encrypted file can be mailed to another user by clicking on the "Mail" button:
And finally, here is the similar looking "Decrypt" tab:
The decrypted file may be saved using the "Save Decrypted File" button, which opens up a file-dialog for saving the decrypted file.
8. Conclusion
In this paper an efficient implementation of RSA using various functions of the GMP library has been presented. A feasibility analysis was carried out by comparing the times taken for encryption and
decryption. It shows that as we increase the number of characters encrypted together, the total time for encryption and decryption steadily decreases. It must always be kept in mind that the
integer representation of the message to be encrypted should lie within the range specified by the modulus (that is, m lies in the range [0,n-1]), which places a limit on the maximum number of
characters that can be encrypted at a single time.
9. A simple console implementation
Here is a simple console mode implementation of the algorithm as described in this paper.
To compile and run this console, you need the GMP library, available here. You can also download the C++ file here. Compile it as follows:
g++ -o rsa rsa.cpp -lgmp
And run it by invoking the rsa executable in a console window.
10. References
[1] Paul Syverson and Iliano Cervesato, The logic of authentication protocols, FOSAD'00, Bertinoro, Italy, 2000.
[2] Dario Catalano, Rosario Gennaro and Shai Halevi, Computing inverse over a shared secret modulus, IBM T. J. Watson Research center, NY, USA, 1999.
[3] Don Coppersmith and Markus Jakobsson, Almost optimal hash sequence traversal, RSA Laboratories, NY, 2001.
[4] Eiichiro Fujisaki, Tatsuaki Okamoto, David Pointcheval and Jacques Stern, RSA-OAEP is secure under the RSA assumption, Journal of Cryptology, 2002.
[5] Daniel M. Gordon, A survey of fast exponentiation methods, Journal of Algorithms, 27, 1998, 126-146.
[6] Adrian Perrig, Robert Szewczyk, Victor Wen, David Culler and J.D. Tygar, SPINS: Security protocols for sensor networks, Mobile Computing and Networking, Rome, Italy, 2001.
[7] David Pointcheval and Jacques Stern, Security proofs for signature schemes, EUROCRYPT'96, Zaragoza, Spain, 1996.
[8] Giuseppe Ateniese, Michael Steiner, and Gene Tsudik, New multiparty authentication services and key agreement protocols, IEEE Journal of Selected Areas in Communication, 18(4), 2000.
[9] Cetin Kaya Koc, High speed RSA implementation, RSA Laboratories, CA, 1994.
[10] Anand Krishnamurthy, Yiyan Tang, Cathy Xu and Yuke Wang, An efficient implementation of multi-prime RSA on DSP processor, University of Texas, Texas, USA, 2002.
[11] A. Menezes, P. Van Oorschot, S. Vanstone, Handbook of Applied Cryptography, CRC Press, 1996 ( www.cacr.math.uwaterloo.ca/hac )
Copyright (c) 2004 Rajorshi Biswas
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software
Foundation. A copy of the license is included here.
|
{"url":"http://www.rajorshi.net/old/paper_rsa.htm","timestamp":"2014-04-20T18:24:20Z","content_type":null,"content_length":"32236","record_id":"<urn:uuid:95162ce1-df76-4626-9939-4c7f27db4c65>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homework Help
Posted by Nora 512 on Saturday, June 1, 2013 at 6:52pm.
Can someone answer a question similar to a and b so I can use as an example.
Consider the following binomial random variables.
(a) The number of tails seen in 47 tosses of a quarter.
(i) Find the mean. (Give your answer correct to one decimal place.)
(ii) Find the standard deviation. (Give your answer correct to two decimal places.)
(b) The number of left-handed students in a classroom of 59 students (assume that 17% of the population is left-handed).
(i) Find the mean. (Give your answer correct to one decimal place.)
(ii) Find the standard deviation. (Give your answer correct to two decimal places.)
• Math - Nora 512, Saturday, June 1, 2013 at 11:42pm
I have 24 questions like the two above and if anyone can help I will use them as examples and work the other ones. I do so much better with examples. Thank you
• Math - MathMate, Sunday, June 2, 2013 at 9:32am
For a binomial distribution with N bernoulli trials with probability of success p (i.e. failure q = 1-p),
the following properties can be proved:
mean = Np
variance = Npq
Take the square-root of variance to get standard deviation.
In 15 tosses of a fair dime, N=15, p=0.5, q=1-0.5=0.5
so mean = 15*0.5=7.5
standard deviation = sqrt(Npq)=√(15*.5*.5)=1.936
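(As an aside, the same two formulas applied to question (a) above, with N = 47 and p = 0.5, give mean = 47*0.5 = 23.5 and standard deviation = √(47*0.5*0.5) ≈ 3.43.)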
• Math - Nora 512, Sunday, June 2, 2013 at 11:37am
Thank you, I have worked 3 out like the first one, but how do you do one with % just show me example as the one above, don't have to use those numbers just something I can go by...I did get the
other 3 I worked out right..Thanks again
• Math - MathMate, Sunday, June 2, 2013 at 11:52am
17% are left-handed means
(remember 17%=17/100=0.17)
N=59, p=0.17, q=1-0.17=0.83
you can take it from here!
• Math check - Nora 512, Sunday, June 2, 2013 at 12:21pm
Ok I have got this far and missed the last two on standard deviation, can you look at these and tell me how I missed them?
(23) The number of cars found to have unsafe tires among the 379 cars stopped at a roadblock for inspection (assume that 15% of all cars have one or more unsafe tires).
(i) Find the mean. (Give your answer correct to one decimal place.)
Correct: Your answer is correct. .
56.9 by 379 x .15 =
(ii) Find the standard deviation. (Give your answer correct to two decimal places.)
Incorrect: Your answer is incorrect. . answer 6.79 by sqrt(379 x .15x.81)
(24) The number of melon seeds that germinate when a package of 60 seeds is planted (the package states that the probability of germination is 0.89.
(i) Find the mean. (Give your answer correct to one decimal place.)
Correct: Your answer is correct. 53.40 by 60 x 0.89 =
(ii) Find the standard deviation. (Give your answer correct to two decimal places.)
Incorrect: Your answer is incorrect. sqrt(60 x .89x.18) = 3.07 and tried again 3.10
• Math - MathMate, Sunday, June 2, 2013 at 12:37pm
For 23)(ii) I have √(379*.15*.85) ≈ 6.95.
For 24(ii) I have √(60*.89*.11) ≈ 2.42.
• Math check - Nora 512, Thursday, June 6, 2013 at 5:54pm
Ok (23) I got the standard deviation as 1.99 and on (24) I got standard deviation as 0.73. I have worked and worked these and I am missing something if this is not right.
|
{"url":"http://www.jiskha.com/display.cgi?id=1370127178","timestamp":"2014-04-19T13:40:30Z","content_type":null,"content_length":"12046","record_id":"<urn:uuid:95881e25-1477-4580-b404-53fd850b9b82>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bensalem SAT Math Tutor
Find a Bensalem SAT Math Tutor
...I have one year of experience teaching high school mathematics in the course of Algebra II. Many of the Algebra II students needed remediation in Algebra I topics so I am more than familiar
with the Algebra I curriculum. I am a 4th Year High School Physics Teacher, Pennsylvania State Certified to teach Physics and Math.
9 Subjects: including SAT math, physics, geometry, statistics
...Learning new disciplines keeps me very aware of the struggles all students face. Beyond academics, I spend my time backpacking, kayaking, weightlifting, jogging, bicycling, metalworking,
woodworking, and building a wilderness home of my own design. In between formal tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems.
14 Subjects: including SAT math, calculus, physics, geometry
...As a recent college graduate, I have spent hundreds of hours on Microsoft Excel, Word, PowerPoint, and Outlook and would be happy to help anyone with any questions. With my education as a
biomedical engineer, I was required to take numerous courses that involved anatomy and physiology. I feel v...
30 Subjects: including SAT math, reading, English, biology
...By helping students learn vocabulary in context, teaching students to become active readers and familiarizing students with the type of questions asked I help students improve their score on
the test and have used these techniques myself to achieve a perfect score on the SAT reading section. I a...
37 Subjects: including SAT math, reading, geometry, algebra 1
...With my experience, I walk the student through what a concept in math is about, how to execute it, and how to tackle a problem when it comes time for a test. As a tutor with a primary focus in
math and science, I am experienced with math fundamentals and how to build a solid foundation for a stu...
9 Subjects: including SAT math, physics, calculus, geometry
|
{"url":"http://www.purplemath.com/bensalem_pa_sat_math_tutors.php","timestamp":"2014-04-20T19:28:09Z","content_type":null,"content_length":"24222","record_id":"<urn:uuid:3099621a-cba8-4325-bbdd-60fee853eba8>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Vertex Form of a Quadratic Relation
So far, we have looked at the Standard and Factored forms of a Quadratic Relationship. Each form has its own characteristics, and its own benefits.
In the Standard Form of a Quadratic Relationship, by just looking at the expression, we can determine
* the y-intercept
* whether the parabola is concave up (cup-shaped) or concave down (umbrella-shaped)
In the Factored Form of a Quadratic Relationship, by just looking at the expression, we can determine
* the zeros, or x-intercepts, of the expression
* whether the parabola is concave up or concave down
We are now going to look at the Vertex Form of a Quadratic Expression.
Like the Standard and Factored Forms, the Vertex Form of a Quadratic Relation also has a few characteristics:
* the vertex can easily be found
* the concavity can easily be determined
The Vertex Form of a parabola is in the form y = a(x-h)² + k
When an expression is in Vertex Form, the vertex is (h,k). Note that h appears inside the bracket as (x - h), so the x-coordinate of the vertex has the opposite sign to the number written inside the brackets.
If the vertex and another point on the parabola is known, it is easy to determine the equation of the parabola:
Substitute the coordinates of the vertex into h and k, and substitute in the coordinates of the other point for x and y, and then solve for a, remembering to use Order of Operations (BEDMAS).
Recall that if a > 0, then the parabola opens upward. If a < 0, then the parabola opens downward.
The equation of a parabola can be turned into Standard form by expanding and collecting like terms (this will involve FOIL, and Order of Operations)
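For example, expanding y = 2(x - 5)² + 6 gives y = 2(x² - 10x + 25) + 6 = 2x² - 20x + 56, which is the same relation written in Standard Form.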
Example 1:
Find the vertex, the axis of symmetry, and the direction of opening for the following quadratic relations.
(a) y = 2 (x - 5)² + 6
The coordinates of the vertex are (5,6); the axis of symmetry is x = 5; and the vertex opens up (positive a value)
(b) y = - 1/2 (x + 7)² - 4
The coordinates of the vertex are (-7,-4); the axis of symmetry is x = -7; and the vertex opens down.
Example 2:
A stick is thrown into the air. Its height H in meters after t seconds is
H = - 2 (t - 3)² + 20.
(a) In what direction does the parabola open? How do you know?
(b) What are the coordinates of the vertex, and what does this represent?
(c) From what height was the stick thrown?
(a) The parabola opens downward, because a = -2. Also, it makes sense that if a stick is thrown, it will first go up, and then gravity will bring it back down again - thus, it makes the shape of a
concave down parabola.
(b) The coordinates of the vertex are (3,20). This means that the stick reached its maximum height of 20 meters after 3 seconds.
(c) The stick was thrown when t = 0. Substitute this value in for t, and solve for H.
H = -2 (0 - 3)² + 20
H = -2 (3)² + 20
H = -2 (9) + 20
H = -18 + 20
H = 2
Therefore, the stick was thrown from a height of 2 m.
Example 3:
A parabola passes through the point (4,40), and has a vertex (-3,-9). FInd its equation in vertex form.
Since the vertex and a point are given, we can use vertex form.
y = a(x + 3)² - 9
Also, we know a point on the line.
40 = a(4 + 3)² - 9
40 = a(7)² - 9
40 = a (49) - 9
49 = 49a
a = 1
Therefore, the equation of the parabola is
y = (x + 3)² - 9.
Practice Questions
Page #351 #1, 2b, 3d, 7cd.
Return to Unit 3 Page
|
{"url":"http://www.angelfire.com/pro/grade_ten_math/Unit4/vertex.html","timestamp":"2014-04-16T07:49:29Z","content_type":null,"content_length":"15654","record_id":"<urn:uuid:66d1a26f-db6c-41bf-9692-09795a70df62>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Saddle Rock, NY
Find a Saddle Rock, NY Calculus Tutor
...Even though, aside from when I worked as a teaching assistant while getting my masters in mechanical engineering, I don't have a teaching background, I have always enjoyed tutoring because I
enjoy learning and believe that it's an important skill for life. I primarily tutor high school math and sc...
26 Subjects: including calculus, chemistry, physics, statistics
...My time is very flexible, but I do prefer to meet my student at least 5 hours a week in order to get a better result from tutoring. I am looking forward to hearing from you soon. Thank you. I am
a native Chinese who speaks Mandarin fluently. I graduated with an English Major at Dalian University of Foreign Languages.
6 Subjects: including calculus, Japanese, Chinese, precalculus
...Please consider me as a very serious candidate for this opportunity. Chemistry is the first mathematics based science students are exposed to and can either make a student passionate about
math and science or extinguish their flame for it. I had two wonderful Chemistry teachers in high school and I can convey that same enthusiasm to you.
15 Subjects: including calculus, chemistry, physics, geometry
...I value the importance of WHY and not just HOW. Mathematics is a much easier subject when you know WHY you are doing something rather than simply memorizing the HOW to do it. If necessary,
individualized homework assignments will be given to students.
31 Subjects: including calculus, English, Spanish, geometry
I enjoy math and have taught and tutored it and science for over 20 years in college and in the public schools of NYC. My love for teaching math happened when I realized my ability to make it
plain and understandable to people who had enormous anxiety about it. My students have come from a very diverse population and range in age, with varying backgrounds and knowledge of math.
16 Subjects: including calculus, chemistry, geometry, algebra 1
Related Saddle Rock, NY Tutors
Saddle Rock, NY Accounting Tutors
Saddle Rock, NY ACT Tutors
Saddle Rock, NY Algebra Tutors
Saddle Rock, NY Algebra 2 Tutors
Saddle Rock, NY Calculus Tutors
Saddle Rock, NY Geometry Tutors
Saddle Rock, NY Math Tutors
Saddle Rock, NY Prealgebra Tutors
Saddle Rock, NY Precalculus Tutors
Saddle Rock, NY SAT Tutors
Saddle Rock, NY SAT Math Tutors
Saddle Rock, NY Science Tutors
Saddle Rock, NY Statistics Tutors
Saddle Rock, NY Trigonometry Tutors
Nearby Cities With calculus Tutor
Baxter Estates, NY calculus Tutors
East Atlantic Beach, NY calculus Tutors
Fort Totten, NY calculus Tutors
Great Nck Plz, NY calculus Tutors
Great Neck calculus Tutors
Great Neck Estates, NY calculus Tutors
Great Neck Plaza, NY calculus Tutors
Harbor Hills, NY calculus Tutors
Kensington, NY calculus Tutors
Kings Point, NY calculus Tutors
Manorhaven, NY calculus Tutors
Roslyn, NY calculus Tutors
Russell Gardens, NY calculus Tutors
Saddle Rock Estates, NY calculus Tutors
Thomaston, NY calculus Tutors
|
{"url":"http://www.purplemath.com/Saddle_Rock_NY_calculus_tutors.php","timestamp":"2014-04-20T11:02:20Z","content_type":null,"content_length":"24514","record_id":"<urn:uuid:bc058e7d-ecae-462d-8a6c-06568c7cef14>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
|
digitalmars.D - BigInts in Tango/Phobos
bearophile <bearophileHUGS lycos.com>
I have tried to find Don's email address, but so far I have failed (and I have
not seen him on IRC d.tango), so I talk here, even if this isn't the best place.
I think this is the data representation of a Tango BigInt:
alias uint BigDigit;
const BigDigit[] ZERO = [0];
struct BigUint {
BigDigit[] data = ZERO;
struct BigInt {
BigUint data; // BigInt adds signed arithmetic to BigUint.
bool sign = false;
I think the following representation may be better:
struct BigInt {
size_t* data;
int sizeVal; // sign(sizeVal) is always the sign of the BigInt.
// If data!=null then abs(sizeVal) is the actually size used
// of the memory block, otherwise sizeVal is the value of this
// BigInt (useful for small BigInts), with overflow tests too.
int capacity; // The available size of the block
So the operations on such BigInts are cheap if the numbers can be stored in a
32 bit int (it just has to test if data is null and then perform a safe
operation on 32 bit numbers). This first part can be done in a small struct
method that I hope will always be inlined. If data isn't null then there's a
call to the bigint machienery.
When a 32 bit sizeVal produces overflow the bigint is converted into a normal heap-allocated BigInt.
Small integers are very common, so speeding up this case is very useful.
Overflow tests in C:
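As a rough illustration of the kind of check meant here (not the code the link points to), a portable pre-check for signed addition overflow looks like this:

#include <limits.h>

/* returns 1 if a + b would overflow a signed int, 0 otherwise */
int add_overflows(int a, int b)
{
    if (b > 0 && a > INT_MAX - b) return 1;   /* would exceed INT_MAX   */
    if (b < 0 && a < INT_MIN - b) return 1;   /* would go below INT_MIN */
    return 0;
}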
The capacity/sizeVal pair is modeled on a similar pair of the C++ vector, so it
can mutate inplace and grow avoiding some reallocations (GNU GMP too allows
inplace mutation).
In theory inside such BigInt struct there's space for a 64 bit value (as a
union) but I think 64 bit values aren't common enough to justify the little
slowdown on smaller values.
Sep 09 2009
bearophile wrote:
I have tried to find Don's email address, but so far I have failed (and I have
not seen him on IRC d.tango), so I talk here, even if this isn't the best place.
You should post this kind of thing as a ticket in Tango.
I think this is the data representation of a Tango BigInt:
alias uint BigDigit;
const BigDigit[] ZERO = [0];
struct BigUint {
BigDigit[] data = ZERO;
struct BigInt {
BigUint data; // BigInt adds signed arithmetic to BigUint.
bool sign = false;
I think the following representation may be better:
struct BigInt {
size_t* data;
int sizeVal; // sign(sizeVal) is always the sign of the BigInt.
// If data!=null then abs(sizeVal) is the actually size used
// of the memory block, otherwise sizeVal is the value of this
// BigInt (useful for small BigInts), with overflow tests too.
int capacity; // The available size of the block
So the operations on such BigInts are cheap if the numbers can be stored in a
32 bit int (it just has to test if data is null and then perform a safe
operation on 32 bit numbers). This first part can be done in a small struct
method that I hope will always be inlined. If data isn't null then there's a
call to the bigint machienery.
When a 32 bit sizeVal produces overflow the bigint is converted into a normal heap-allocated BigInt.
Yes, I've considered doing something like this. But I'm actually not happy at all with copy-on-write BigInts as used in Tango, they result in a huge amount of wasted memory allocation. They are the
only option in D1, but I'll do it differently in D2. I've really just concentrated on getting the low-level routines working, and exposing a minimal wrapper for functionality. My feeling was that the
optimal design for BigInt couldn't be settled without reference to BigFloat. Someone had said they were working on a BigFloat, but it doesn't seem to have materialized.
Small integers are very common, so speeding up this case is very useful.
Overflow tests in C:
The capacity/sizeVal pair is modeled on a similar pair of the C++ vector, so
it can mutate inplace and grow avoiding some reallocations (GNU GMP too allows
inplace mutation).
Yes. But note that mutate in-place is impossible with copy-on-write, so it's D2 only. BigInts are also the perfect place for memory pools. Temporary variables can in theory be dealt with extremely efficiently.
In theory inside such BigInt struct there's space for a 64 bit value (as a
union) but I think 64 bit values aren't common enough to justify the little
slowdown on smaller values.
You're probably right. But I'm concentrating on compiler bugfixes right now, BigInt is on hold for now.
Sep 10 2009
|
{"url":"http://www.digitalmars.com/d/archives/digitalmars/D/BigInts_in_Tango_Phobos_96073.html","timestamp":"2014-04-23T17:40:01Z","content_type":null,"content_length":"14420","record_id":"<urn:uuid:0086fb26-ef32-4348-b2a6-022a7f4d256c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
|
100 Sided Die Vs. 10000 Sided Die [Archive] - TankSpot
04-09-2008, 03:57 PM
With all the ratings translating to percentages displayed in values less than whole numbers, are rating points wasted that don't put you to the next whole number?
Is the random number generator rolling a 100 sided die or a 10000 sided die to account for the numbers after the decimal?
Is the value of numbers after the decimal point valuable for some stats and not others or is it universally applied to all percentage based stats?
|
{"url":"http://www.tankspot.com/archive/index.php/t-36452.html","timestamp":"2014-04-20T19:10:38Z","content_type":null,"content_length":"4923","record_id":"<urn:uuid:32afa58a-72a5-4dd6-aaea-1c808fe082f0>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
First Round Recap and Second Round Playoff Preview by the Numbers
Posted on 05/01/2011 by Arturo Galletti
“Now it is such a bizarrely improbable coincidence that anything so mind-bogglingly useful could have evolved purely by chance that some thinkers have chosen to see it as the final and clinching
proof of the non-existence of God.
The argument goes something like this: `I refuse to prove that I exist,’ says God, `for proof denies faith, and without faith I am nothing.’
`But,’ says Man, `The Babel fish is a dead giveaway, isn’t it? It could not have evolved by chance. It proves you exist, and so therefore, by your own arguments, you don’t. QED.’
`Oh dear,’ says God, `I hadn’t thought of that,’ and promptly vanished in a puff of logic.
`Oh, that was easy,’ says Man, and for an encore goes on to prove that black is white and gets
himself killed on the next zebra crossing.”
-Douglas Adams The Hitchhiker’s Guide to the Galaxy
Let’s recap.
We’ve spent a lot of time together on this blog looking at what makes a team successful in the playoffs. We’ve seen some very interesting patterns. I’ve done some complicated math. An at the start of
the playoffs, I decided to make some picks using what we know (or believe we know). Because to quote Feynman( The quote is from “Simulating Physics with Computers”):
I want to talk about the possibility that there is to be an exact simulation, that the computer will do exactly the same as nature. If this is to be proved and the type of computer is as I’ve already
explained, then it’s going to be necessary that everything that happens in a finite volume of space and time would have to be exactly analyzable with a finite number of logical operations. The
present theory of physics is not that way, apparently. It allows space to go down into infinitesimal distances, wavelengths to get infinitely great, terms to be summed in infinite order, and so
forth; and therefore, if this proposition is right, physical law is wrong.
So an accurate simulation is possible just not very easy.
Let’s Review the relevant facts first:
It’s about to get math intensive here so if it’s your first visit (or if you need to bone up) that’s what the Basics are for.
The NBA Playoff Preview and the Picks (Part 1: Just the Facts) Breaks down all the Maths and theories in detail
with the visual summary here:
The Playoff Checklist for all the Factors:
And The Team Vitals for everyone:
The Team Vitals (Top 6 Approximation):
The NBA Playoff Preview and the Picks (Part 2: Skin in the Game) Breaks down all the picks.
How did I do?
Let’s see:
Eastern Conference Round 1
Series 1: Chicago vs Indiana: Win % @ Neutral Site: >70% for Bulls
The Numbers said:Bulls in 5
Home Team Win Margin 70% ++
W4 23%
W5 34%
W6 18%
W7 15%
L4 1%
L5 1%
L6 4%
L7 4%
My Prediction: Bulls in 4.
Result: Bulls in 5 (most likely predicted scenario at 34%)
The Numbers and I both had the Bulls right. I had the length of the series wrong (the numbers did not).
Series 2: Miami vs Philadelphia Win % @ Neutral Site: just about 70% for Miami
The Numbers said:Heat in 5
Home Team Win Margin 70%
W4 23%
W5 34%
W6 18%
W7 15%
L4 1%
L5 1%
L6 4%
L7 4%
My Prediction: Heat in 5.
Result: Heat in 5 (most likely predicted scenario at 34%)
Everyone got this right.
Series 3: Boston vs New York Win % @ Neutral Site: I took 65%
The Numbers said: Celts in 5
Home Team Win Margin 65%
W4 17%
W5 30%
W6 18%
W7 18%
L4 1%
L5 3%
L6 7%
L7 6%
My Prediction: Celts in 6.
Result: Celts in 5 (4th most likely predicted scenario at 17%)
Got the series right but missed the length. The numbers came close to getting this one perfect too (an iffy call in game 1).
Series 4: Orlando vs Atlanta:Win % @ Neutral Site:About 68% for Orlando if I’m really,really nice to the Hawks and ignore their post All-Star collapse.
The Numbers said:Magic in 5
Home Team Win Margin 68%
W4 20%
W5 32%
W6 18%
W7 16%
L4 1%
L5 2%
L6 5%
L7 4%
My Prediction: Magic in 5.
Result: Hawks in 6 (5th most likely predicted scenario at 5%)
How do you outscore your opponent by almost two points a game with the best player on the court and still lose a series? Ask the Magic, they just did it.
Western Conference 1st Round:
Series 1: San Antonio vs Memphis Win % @ Neutral Site: 54% for Memphis.
The Numbers said: Grizz in 6
Home Team Win Margin 46%
W4 4%
W5 12%
W6 11%
W7 18%
L4 8%
L5 13%
L6 21%
L7 14%
My Prediction: Grizz in 7.
Result: Grizz in 6 (Most likely predicted scenario at 21%)
Again, me and the numbers both nail the result but the numbers get it spot on (I’m guessing you’re sensing a pattern here :-) )
Series 2: LA vs New Orleans Win % @ Neutral Site: 65% Lakers.
The Numbers said: LA in 5
Home Team Win Margin 65%
W4 17%
W5 30%
W6 18%
W7 18%
L4 1%
L5 3%
L6 7%
L7 6%
My Prediction: LA in 5.
Result: LA in 6 (2nd Most likely predicted scenario at 18%)
Never underestimate Chris F#$%ing Paul. I thought (and so did the numbers) he’d get one but he got two.
Series 3: Dallas vs Portland Win % @ Neutral Site: 60% Dallas.
The Numbers said: Dallas in 5
Home Team Win Margin 60%
W4 12%
W5 25%
W6 18%
W7 20%
L4 2%
L5 4%
L6 11%
L7 8%
My Prediction: Dallas in 7.
Result: Dallas in 6 (3rd Most likely predicted scenario at 18%)
Brandon Roy and the freaky comeback made the numbers wrong. I’ll take great basketball over any model any day of the week.
Series 4: OKC vs Denver. A coin flip my guess was 55% Denver based on the Altitude.
The Numbers said:Denver in 6
Home Team Win Margin 45%
W4 4%
W5 11%
W6 10%
W7 17%
L4 8%
L5 13%
L6 22%
L7 14%
My Prediction: Denver in 6.
Result: OKC in 5 (5th Most likely predicted scenario at 11%)
I spent hours on this series. I ran every scenario in my head. I thought homecourt advantage would be the decider (in Denver’s favor). I forgot about the star system (i.e. how stars get calls
differently in the playoffs; see here). That and this little ditty:
What exactly was George Karl Thinking?
To recap:
6-2 for Round #1 for Picking the winners (Arturo and the Numbers)
Numbers got the length right as well for 3 of the series (I only got one). The lessons as always kids, trust the numbers.
Got the coin flip wrong . Missed Orlando not showing up.
Let’s talk round 2 now.
I had:
1. Chicago over Orlando in 7.
2. Miami over Boston in 7.
3. Denver over Memphis in 7.
4. LA over Dallas in 7.
And for the rest of playoffs I had Miami over Chicago in 6. Denver over LA in 6 and Miami over Denver in 5.
I obviously need to review these in detail.
First the Numbers for the remaining teams (Note: I had an error on this table initially. Fixed now. Did not affect anything else.):
The really nice part is that now I have the actual minute allocation each team has used for the playoffs. I combine that with the raw productivity (ADJP48 not Wins Produced because having a big as
your sixth man? In general better than having a small).
With that data, I can do this:
Which is how each team projects based on playoff minute allocation and their ADJP48 numbers.
If I use that to project the matchups I get:
My logic is as follows:
• Chicago is clearly better than Atlanta but they play down to the level of their competition and are banged up.
• Miami skews better than Boston when we look at overall but that doesn’t show head to head. This is a coin flip series. If Shaq plays more than 40 minutes and is effective? Throw out that number
and prepare for Banner 18.
• Dallas matches up well with LA. Dallas was also the better team down the stretch. I also don’t trust Bynum’s knees. I go with the average number here.
• I always felt the winner of OKC-Denver would win Round 2 against Memphis and beat the winner of the battle of the oldsters (LAL-DAL) to go to the Finals. Love Memphis but OKC is clearly better.
Using the numbers I ran the math and this time I’m sticking to it. The math looks like so:
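(A rough sketch of how series-length probabilities like the tables above can be generated from a single per-game win probability; this ignores the home/away ordering the real model uses, so the numbers come out slightly different:)

#include <stdio.h>
#include <math.h>

/* P(favorite closes out a best-of-7 in exactly k games) = C(k-1,3) * p^4 * (1-p)^(k-4) */
int main(void)
{
    double p = 0.70;                 /* per-game win probability for the favorite */
    int c[4] = {1, 4, 10, 20};       /* C(3,3), C(4,3), C(5,3), C(6,3)            */
    int k;
    for (k = 4; k <= 7; k++)
        printf("W%d %.0f%%\n", k, 100 * c[k - 4] * pow(p, 4) * pow(1 - p, k - 4));
    for (k = 4; k <= 7; k++)
        printf("L%d %.0f%%\n", k, 100 * c[k - 4] * pow(1 - p, 4) * pow(p, k - 4));
    return 0;
}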
And the 2nd Round Picks are:
Chicago in 4 over Atlanta (Say goodbye to the Hawks!)
Miami in 7 over Boston (I hope to god I’m wrong here. Or better yet, pray for Shaq over Baby!)
Dallas over LA in 6 (I could totally see the Mavs getting screwed by the Refs though)
OKC over Memphis in 7 (Vancouver-Seattle is going to rock!)
For the rest, again just guessing, OKC over Dallas in 6 (Kidd gets abused by Westbrook) , Miami over Chi in 6 (Superstars get the calls, Bosh goes off on Boozer) then Miami over OKC in 7 in what
could be an epic first part in a series.
I’m soooo looking forward to the rest of the playoffs.
19 Responses “First Round Recap and Second Round Playoff Preview by the Numbers” →
1. Arturo- why are zbo’s numbers showing he was awful in the reg season and post all-star???
2. Oh god – not only is that Big Country in that picture, but I think that the other player is Greg Foster. Brings back some terrible memories of him in Toronto.
3. So just going by the numbers that looks like 1 overwhelming favorite, 1 mild favorite, and two tossups.
4. Honestly, I’m rooting for the lakers so Miami dismantles them in the finals. I say Bynum stays healthy, and LA’s homecourt takes them to the finals. :-) The Shaq prediction ignores the fact that
Haslem has a higher chance of playing before Shaq. If Miami gets Haslem and he’s productive… Shaq won’t mean much. Then Miami beats up Chicago and… my dream match happens.
For the good of all basketball, Lebron James has to destroy Kobe in the finals, so it’s best to hope for the best with Bynum’s knee.
5. I suppose it’s well and good to say that you went 6-2 on picking winners, but that isn’t really what you’re doing here; you have a model for predicting outcomes, and to put probabilities on teams
winning. That’s substantially different, especially when you want to evaluate your model.
Just as a crude oversimplification, you picked 5 heavy favorites in round 1: Chicago, Miami, Boston, Orlando, and Los Angeles. However instead of just picking them, you put odds on all of them;
and the odds you gave implied a slightly better than 50% chance that at least one of these matchups would end in an upset (~49.1% chance that all 5 favorites won).
So while you can say that you picked 4 of the 5 winners in these matchups (and be correct), you actually made a more provocative claim, that a major upset in the first round was even money – and
seeing that upset (small data sets aside) supports the model.
I’d be more worried if your model is showing significant chances of upsets, and you never see them; if the 9:1 favorites always win, your model isn’t calibrated properly.
|
{"url":"http://arturogalletti.wordpress.com/2011/05/01/first-round-recap-and-second-round-playoff-preview-by-the-numbers/","timestamp":"2014-04-17T21:49:28Z","content_type":null,"content_length":"103317","record_id":"<urn:uuid:b556ba88-e138-4203-9a54-b40cddc011e2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
|
my understanding of tension..
Thanks, yes but wouldn't the block on the cliff pull back on the cord with the same force?
Yes, and that force pulling back on the rope is friction; the force resulting from the contact between the cliff surface and the block, which resists motion.
So, in a frictionless situation, the block will fall.
In a situation with friction, the block might or might not fall, depending on the coefficient of static friction.
This is a pretty good picture of a free body diagram of something similar to your problem. Just imagine that your rope is tied to the left side of the block, that contact would give you your 'F'
force, the force that is trying to move the block. The friction force then is parallel to the surface the block is placed on, and resists movement. That is the [tex]F_{f}[/tex] force.
There is no internal force of the block that resists movement, it is friction.
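For a rough numerical illustration (numbers made up, not from the original problem): if the block has mass m = 10 kg and the coefficient of static friction is mu_s = 0.4, the maximum static friction force is F_f = mu_s*m*g = 0.4 * 10 * 9.8 ≈ 39 N, so the block on the cliff only starts to slide once the tension in the cord exceeds about 39 N.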
|
{"url":"http://www.physicsforums.com/showthread.php?p=2550189","timestamp":"2014-04-17T21:32:51Z","content_type":null,"content_length":"74883","record_id":"<urn:uuid:d6f65e48-4148-4579-a898-44da51b5f221>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rmathlib and kdb+, part 2 – Probability Distribution Functions
Following on from the last post on integrating some rmathlib functionality with kdb+, here is a sample walkthrough of how some of the functionality can be used, including some of the R-style wrappers
I wrote to emulate some of the most commonly-used R commands in q.
Loading the rmath library
Firstly, load the rmathlib library interface:
q)\l rmath.q
Random Number Generation
R provides random number generation facilities for a number of distributions. This is provided using a single underlying uniform generator (R provides many different RNG implementations, but in the
case of Rmathlib it uses a Marsaglia-multicarry type generator) and then uses different techniques to generate numbers distributed according to the selected distribution. The standard technique is
inversion, where a uniformly distributed number in [0,1] is mapped using the inverse of the probability distribution function to a different distribution. This is explained very nicely in the book
“Non-Uniform Random Variate Generation”, which is availble in full here: http://luc.devroye.org/rnbookindex.html.
In order to make random variate generation consistent and reproducible across R and kdb+, we need to be able to seed the RNG. The default RNG in rmathlib takes two integer seeds. We can set this in
an R session as follows:
> .Random.seed[2:3]<-as.integer(c(123,456))
and the corresponding q command is:
Conversely, getting the current seed value can be done using:
123 456i
The underlying uniform generator can be accessed using runif:
3.102089 3.854157 3.369014 3.164677 3.998812 3.092924 3.381564 3.991363 3.369..
produces 100 random variates uniformly distributed between [3,4].
Then for example, normal variates can be generated:
q)rnorm 10
-0.2934974 -0.334377 -0.4118473 -0.3461507 -0.9520977 0.9882516 1.633248 -0.5957762 -1.199814 0.04405314
This produces identical results in R:
> rnorm(10)
[1] -0.2934974 -0.3343770 -0.4118473 -0.3461507 -0.9520977 0.9882516 1.6332482 -0.5957762 -1.1998144
[10] 0.0440531
Normally-distributed variables with a distribution of \( N(\mu,\sigma) \) can also be generated:
q)dev norm[10000;3;1.5]
q)avg norm[10000;3;1.5]
Or we can alternatively scale a standard normal \( X ~ N(0,1) \) using \( Y = \sigma X + \mu \):
q) `int$ (avg x; dev x)
0 1i
q) `int$ (avg y; dev y)
5 3i
Probability Distribution Functions
As well as random variate generation, rmathlib also provides other functions, e.g. the normal density function:
computes the normal density at 0 for a standard normal distribution. The second and third parameters are the mean and standard deviation of the distribution.
The normal distribution function is also provided:
computes the distribution value at 0 for a standard normal (with mean and standard deviation parameters).
Finally, the quantile function (the inverse of the distribution function – see the graph below – the quantile value for .99 is mapped onto the distribution function value at that point: 2.32):
We can do a round-trip via pnorm() and qnorm():
q)`int $ qnorm[ pnorm[3;0;1]-pnorm[-3;0;1]; 0; 1]
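For comparison, the same round trip can be written directly against the standalone Rmath C API that these q wrappers call into (a sketch; it assumes the library was built with MATHLIB_STANDALONE and is linked with -lRmath):

#define MATHLIB_STANDALONE
#include <Rmath.h>
#include <stdio.h>

int main(void)
{
    set_seed(123, 456);                       /* the same two-integer seed as above */
    double z = rnorm(0.0, 1.0);               /* one standard normal variate        */
    double p = pnorm(3.0, 0.0, 1.0, 1, 0)     /* P(-3 < X < 3), about 0.9973        */
             - pnorm(-3.0, 0.0, 1.0, 1, 0);
    double x = qnorm(p, 0.0, 1.0, 1, 0);      /* about 2.78, i.e. 3 after the cast  */
    printf("%f %f %f\n", z, p, x);
    return 0;
}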
That's it for the distribution functions for now – rmathlib provides lots of different distributions (I have just linked in the normal and uniform functions so far). There are some other functions
that I have created that I will cover in a future post.
All code is on github: https://github.com/rwinston/kdb-rmathlib
[Check out part 3 of this series]
|
{"url":"http://www.theresearchkitchen.com/archives/847","timestamp":"2014-04-17T21:48:17Z","content_type":null,"content_length":"18953","record_id":"<urn:uuid:b1810bbd-ef47-4337-af4f-865d642bebb3>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Standard deviation for resampled time points
July 21st 2013, 12:18 PM #1
Jul 2013
Standard deviation for resampled time points
Not sure if this is the right area for this post, but I wonder if anyone knows if it is justified to calculate standard deviations for "resampled" data.
I have a number of different time series plots of a certain variable from different runs (i.e. a number of y data points at certain time values). They were sampled at different times or had
missing sample points, so I interpolated the data to new time points (all to the same new time points, using matlab). Can I workout the standard deviation for these new points, or can I not as
they aren't "real samples". If not is there another way I can show the variance in the data?
Re: Standard deviation for resampled time points
Hey Remothey.
You can do this, but I'm wondering if you have considered binning the samples into discrete bins and then calculating the standard deviation of each bin? The interpolation idea is a good one
(especially if the behavior is highly non-linear).
Basically the question you need to answer is what exactly you want to use the measure for in terms of the analysis and also in terms of the inference (and the associated question you are trying
to answer).
If the value is good enough to answer the question then so be it, but if the value doesn't have a good enough representation for answering the question, then you will need another measure, or
more data.
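To make the binning suggestion concrete, here is a small sketch in C (with made-up data) that groups (time, value) samples into fixed-width time bins and reports each bin's mean and standard deviation:

#include <stdio.h>
#include <math.h>

#define NBINS 5

int main(void)
{
    /* made-up (time, value) samples pooled from several runs */
    double t[] = {0.1, 0.4, 1.2, 1.7, 2.3, 2.8, 3.1, 3.9, 4.2, 4.8};
    double y[] = {1.0, 1.1, 1.9, 2.2, 3.1, 2.9, 4.2, 3.8, 5.1, 4.9};
    int n = sizeof(t) / sizeof(t[0]);
    double width = 1.0;                       /* each bin spans one time unit     */
    double sum[NBINS] = {0}, sumsq[NBINS] = {0};
    int count[NBINS] = {0};
    int i, b;

    for (i = 0; i < n; i++) {
        b = (int)(t[i] / width);              /* which bin the sample falls in    */
        if (b >= NBINS) b = NBINS - 1;
        sum[b] += y[i];
        sumsq[b] += y[i] * y[i];
        count[b]++;
    }
    for (b = 0; b < NBINS; b++) {
        if (count[b] < 2) continue;           /* need at least 2 points for an SD */
        double mean = sum[b] / count[b];
        double var = (sumsq[b] - count[b] * mean * mean) / (count[b] - 1);
        printf("bin %d: mean %.3f sd %.3f (n=%d)\n", b, mean, sqrt(var), count[b]);
    }
    return 0;
}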
|
{"url":"http://mathhelpforum.com/statistics/220731-standard-deviation-resampled-time-points.html","timestamp":"2014-04-17T10:08:40Z","content_type":null,"content_length":"34440","record_id":"<urn:uuid:38b47fb1-4925-4aeb-bb5b-cac544b5d5c9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Crystola, CO SAT Math Tutor
Find a Crystola, CO SAT Math Tutor
...The techniques involve repetition, visual and auditory interaction with the material, organization of material (ie lists), highlighting key concepts etc. I have played tennis since I was
little and played Varsity Tennis all four years of high school. I was the MVP of my high school tennis team my freshman year.
34 Subjects: including SAT math, reading, English, Spanish
...I have successfully taught over 100 students in ACT reading strategies. I was a teacher and tutor for a leading test preparation company for three years. I'm trained in all aspects of the ACT
English section, including the essay prompt and how the essay is graded.
20 Subjects: including SAT math, reading, writing, English
...I hold undergraduate degrees in Optics and Mathematics and a Master's degree in Physics. I am available to tutor math, physics, and test prep. My goal is to work with students to develop
understanding and familiarity with the concepts and the mechanics of math and physics.
16 Subjects: including SAT math, calculus, physics, GRE
...I most recently taught remedial swimming at the USAF Academy Preparatory school. By the end of the course, the students were jumping from the 10 meter platform into 18 foot deep water and one
went on to become a triathlete. I am an ordained minister (Southern Baptist.) I have earned a Master of Divinity, an MA in Religion, and a PhD in Religion.
54 Subjects: including SAT math, English, ASVAB, GRE
...I have my Bachelor's degree in Elementary Education grades K-8. I did all of my student teaching in elementary aged students specifically a full year in 5th grade. I have also worked at Sylvan
Learning Center with K - 6th aged students.
15 Subjects: including SAT math, reading, geometry, English
Related Crystola, CO Tutors
Crystola, CO Accounting Tutors
Crystola, CO ACT Tutors
Crystola, CO Algebra Tutors
Crystola, CO Algebra 2 Tutors
Crystola, CO Calculus Tutors
Crystola, CO Geometry Tutors
Crystola, CO Math Tutors
Crystola, CO Prealgebra Tutors
Crystola, CO Precalculus Tutors
Crystola, CO SAT Tutors
Crystola, CO SAT Math Tutors
Crystola, CO Science Tutors
Crystola, CO Statistics Tutors
Crystola, CO Trigonometry Tutors
Nearby Cities With SAT math Tutor
Aspen Park, CO SAT math Tutors
Brewster, CO SAT math Tutors
Cadet Sta, CO SAT math Tutors
Cascade, CO SAT math Tutors
Chipita Park, CO SAT math Tutors
Crystal Hills, CO SAT math Tutors
Deckers, CO SAT math Tutors
Elkton, CO SAT math Tutors
Ellicott, CO SAT math Tutors
Fair View, CO SAT math Tutors
Goldfield, CO SAT math Tutors
Green Mountain Falls SAT math Tutors
Tarryall, CO SAT math Tutors
Texas Creek, CO SAT math Tutors
Westwood Lake, CO SAT math Tutors
|
{"url":"http://www.purplemath.com/Crystola_CO_SAT_math_tutors.php","timestamp":"2014-04-17T13:37:32Z","content_type":null,"content_length":"24237","record_id":"<urn:uuid:7249d686-19f7-4e0d-92de-9222df7a7847>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Panel Data Analysis using GEE
SPSS Library
Panel Data Analysis using GEE
Panel data analysis, also known as cross-sectional time-series analysis, looks at a group of people, the 'panel,' on more than one occasion. Panel studies are essentially equivalent to longitudinal
studies, although there may be many response variables observed at each time point.
These data are from a 1996 study (Gregoire, Kumar Everitt, Henderson and Studd) on the efficacy of estrogen patches in treating postnatal depression. Women were randomly assigned to either a placebo
control group (group=0, n=27) or estrogen patch group (group=1, n=34). Prior to the first treatment all patients took the Edinburgh Postnatal Depression Scale (EPDS). EPDS data was collected monthly
for six months once the treatment began. Higher scores on the EDPS are indicative of higher levels of depression. You can download the data file here.
get file = 'D:\depress.sav'.
Let the analyses begin
Note that the data are in the wide format, we will collect some information and perform two analyses while the data are in this format.
sort cases by group.
split file by group.
descriptives var = pre dep1 dep2 dep3 dep4 dep5 dep6.
split file off.
correlations var = pre dep1 dep2 dep3 dep4 dep5 dep6.
graph
/scatterplot(matrix) = pre dep1 dep2 dep3 dep4 dep5 dep6.
Let's check to see if the groups differ on the pretest depression score.
t-test groups = group(0 1)
/var = pre.
There isn't much of a difference between groups on the pretest, so let's continue on to the panel data analysis.
GEE with Continuous Response Variable
In order to use these data for our panel data analysis, the data must be reorganized into the long form using the varstocases command.
/make dep from dep1 dep2 dep3 dep4 dep5 dep6
/index = visit.
Before we begin the panel data analyses, let's look at some other analyses for comparison. We will begin with a repeated measures analysis of variance.
unianova dep by visit group subj
/test =group vs subj(group)
/design = group visit group*visit subj(group).
This analysis indicates that both group (F = 5.6, p = .021) and visit (F = 18.21, p = .000) are statistically significant, while the group*visit interaction is not (F = .335, p = .892). Some
researchers are critical of this type of analysis because it is based on fixed-effects adjusted for the repeated factor. Also, this repeated measures analysis assumes compound symmetry in the
covariance matrix (which seems to be a stretch in this case). However, we can do worse. Below we will try OLS regression.
regression
/dependent = dep
/method = enter pre group visit.
We are finally ready to try the panel data analysis using SPSS's genlin command. This command allows us to specify various working covariance structures through the use of the corrtype option on the
repeated subcommand. We will start with a covariance structure of independence. We don't believe that this is the correct covariance structure, but it allows us to compare results with the OLS
regression results above. The workingcorr option on the print subcommand will allow us to view the working correlation matrix. Note that this option is only available if the repeated subcommand is
used. (The genlin command was introduced in SPSS version 15 and enhanced in version 16. If you are using an earlier version of SPSS, this command will not work.)
genlin dep with pre visit group
/model pre group visit distribution = normal link = identity
/repeated subject = subj
/print modelinfo cps solution workingcorr.
The previous analyses yielded identical but probably incorrect results. The common thread among them is that they all assume that the observations within subjects are independent. This seems, on the
face of it, to be highly unlikely. Scores on the depression scale are not likely to be independent from one visit to the next.
We can also try analyzing these data using compound symmetry for the correlational structure. Compound symmetry is obtained using exchangeable for the corrtype option.
genlin dep with pre visit group
/model pre visit group distribution = normal link = identity
/repeated subject = subj corrtype = exchangeable
/print modelinfo cps solution workingcorr.
Note in particular the change in the standard errors between this analysis and the previous one. Now, let's try a different correlation structure, autoregressive with lag one. This is the correlational structure that is most likely to be correct considering the repeated measures over time. I should note that, in some cases, SPSS and SAS handle models with an ar(1) structure differently than other packages, such as Stata. Stata does not use subjects that have only one observation, since ar(1) doesn't make much sense given one data point. SPSS and SAS use all of the available cases. You can see how many cases SPSS is using in the Case Processing Summary table.
genlin dep with pre visit group
/model pre group visit distribution = normal link = identity
/repeated subject = subj corrtype = ar(1)
/print modelinfo cps solution workingcorr.
This analysis probably more closely reflects the correlations among the depression scores over six visits that we observed in our descriptive analysis.
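For reference, the working correlation structures being compared are the standard GEE ones (this is textbook notation, not output from these commands): exchangeable assumes one common within-subject correlation, while ar(1) lets the correlation decay with the gap between visits:
\[ \text{exchangeable: } Corr(y_{ij}, y_{ik}) = \rho, \qquad \text{ar(1): } Corr(y_{ij}, y_{ik}) = \rho^{|j-k|} \text{ for } j \neq k. \]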
Now, let's back up and reconsider the group by visit interaction. We will try a model with the interaction using the ar(1) correlations. Note that we have omitted some of the output in order to save space.
compute gxv = group*visit.
genlin dep with pre visit group gxv
/model pre group visit gxv distribution = normal link = identity
/repeated subject = subj corrtype = ar(1).
The group by visit interaction still is not significant, even though this may be a better approach for testing it. So far we have been treating visit as a continuous variable. Is it possible that our analysis might change if we were to treat visit as a categorical variable, in the way that the ANOVA did?
compute visit2 = 0.
if visit = 2 visit2 = 1.
compute visit3 = 0.
if visit = 3 visit3 = 1.
compute visit4 = 0.
if visit = 4 visit4 = 1.
compute visit5 = 0.
if visit = 5 visit5 = 1.
compute visit6 = 0.
if visit = 6 visit6 = 1.
genlin dep with pre visit2 visit3 visit4 visit5 visit6 group
/model pre visit2 visit3 visit4 visit5 visit6 group distribution = normal link = identity
/repeated subject = subj corrtype = ar(1).
We can test to see whether the categorical version of visit accounts for more variability than the continuous version by including both in the model but using only k - 2 = 4 dummy variables for time.
genlin dep with pre visit visit2 visit3 visit4 visit5 group
/model pre visit visit2 visit3 visit4 visit5 group distribution = normal link = identity
/repeated subject = subj corrtype = ar(1).
These results indicate that the categorical version of visit does not account for significantly more variability than the continuous version. In the final analysis, I think that I prefer the
following model,
genlin dep with pre visit group
/model pre group visit distribution = normal link = identity
/repeated subject = subj corrtype = ar(1).
of all the analyses run so far.
The final interpretation of these results indicates that there is a significant effect for the pretest, i.e., for every one point increase in the pretest score there is about a 0.4 point increase in the depression score, when controlling for treatment and visit. There is also an effect for the estrogen patch when controlling for pretest depression and visit. Use of the estrogen patch reduces the depression score by about 4 points. Finally, there is also a significant visit effect when controlling for pretest depression and group membership. The depression score decreases on average by about 1.2 points for each visit.
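Pulling the three effects quoted above into one expression (the intercept is not reported here, so it is left as b0, and the coefficients are rounded), the fitted model is roughly
\[ \widehat{dep} \approx b_0 + 0.4\,(pre) - 4\,(group) - 1.2\,(visit). \]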
GEE with Binary Response Variable
The binary response variable in these examples was created from the data from the 1996 Gregoire, Kumar, Everitt, Henderson and Studd study on the efficacy of estrogen patches in treating postnatal depression. Women were randomly assigned to either a placebo control group (group=0, n=27) or an estrogen patch group (group=1, n=34). Prior to the first treatment all patients took the Edinburgh Postnatal Depression Scale (EPDS). EPDS data were collected monthly for six months once the treatment began. Depression scores greater than or equal to 11 were coded as 1. You can download the data file here.
get file = 'D:\depressed01.dta'.
We will go through a series of analyses pretty much paralleling the models that were run above using the continuous response variable. To get a binary logit type model we will set the distribution to binomial and the link to logit. We will start with the independence correlation structure, followed by exchangeable (compound symmetry) and then unstructured.
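In model terms, the genlin commands below fit, for subject i at visit j,
\[ \log\left(\frac{P(depressd_{ij}=1)}{1-P(depressd_{ij}=1)}\right) = \beta_0 + \beta_1\,visit_{ij} + \beta_2\,group_i, \]
with the repeated subcommand's working correlation structure describing how the six visits within a subject are assumed to be correlated.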
genlin depressd (reference = first) with visit group
/model visit group distribution = binomial link = logit
/repeated subject = subj
/print modelinfo cps solution workingcorr.
genlin depressd (reference = first) with visit group
/model visit group distribution = binomial link = logit
/repeated subject = subj corrtype = exchangeable
/print modelinfo cps solution workingcorr.
genlin depressd (reference = first) with visit group
/model visit group distribution = binomial link = logit
/repeated subject = subj corrtype = unstructured
/print modelinfo cps solution workingcorr.
With these data, just as with the continuous response variable, it might be more reasonable to hypothesize that the correlation structure would be autoregressive.
genlin depressd (reference = first) with visit group
/model visit group distribution = binomial link = logit
/repeated subject = subj withinsubject=visit corrtype = ar(1) covb=model
/print modelinfo cps solution workingcorr.
If we want, we can also obtain the results in the odds ratio metric using the exponentiated option on the print subcommand.
genlin depressd (reference = first) with visit group
/model visit group distribution = binomial link = logit
/repeated subject = subj corrtype = ar(1)
/print solution (exponentiated) modelinfo.
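The exponentiated solution simply reports each coefficient as an odds ratio, e.g. for the treatment indicator
\[ OR_{group} = e^{\beta_{group}}, \]
the multiplicative change in the odds of being classified as depressed for the estrogen patch group relative to placebo, holding visit constant.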
Let's add in the pretest (pre) and a group by visit interaction.
compute gxv = group*visit.
genlin depressd (reference = first) with pre group visit gxv
/model pre group visit gxv distribution = binomial link = logit
/repeated subject = subj corrtype = ar(1)
/print solution modelinfo.
Clearly, there is no interaction but we'll stick with the pretest for the moment. Next let's try the categorical version of visit and the model that contains both the categorical and continuous
version of visit.
compute visit2 = 0.
if visit = 2 visit2 = 1.
compute visit3 = 0.
if visit = 3 visit3 = 1.
compute visit4 = 0.
if visit = 4 visit4 = 1.
compute visit5 = 0.
if visit = 5 visit5 = 1.
compute visit6 = 0.
if visit = 6 visit6 = 1.
genlin depressd (reference = first) with pre group visit2 visit3 visit4 visit5
/model pre group visit2 visit3 visit4 visit5 distribution = binomial link = logit
/repeated subject = subj corrtype = ar(1)
/print solution modelinfo.
genlin depressd (reference = first) with pre group visit visit2 visit3 visit4 visit5
/model pre group visit visit2 visit3 visit4 visit5 distribution = binomial link = logit
/repeated subject = subj corrtype = ar(1)
/print solution modelinfo.
The content of this web site should not be construed as an endorsement of any particular web site, book, or software product by the University of California.
|
{"url":"http://www.ats.ucla.edu/stat/spss/library/gee.htm","timestamp":"2014-04-18T08:03:00Z","content_type":null,"content_length":"32171","record_id":"<urn:uuid:a47a13e0-1f39-4253-aa18-ed99badfb413>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
Find the limit of this look at the attached file
@jim_thompson5910 they said u could help
I think the answer does not Exist
using the limit laws, we can say \[\Large \lim_{x\to 2}x^3 f(x) = (\lim_{x\to 2}x^3) * (\lim_{x\to 2}f(x))\] \[\Large \lim_{x\to 2}x^3 f(x) = 2^3 * f(2)\] \[\Large \lim_{x\to 2}x^3 f(x) = 8 * f(2)\] \[\Large \lim_{x\to 2}x^3 f(x) = ???\]
to find f(2) you would either use a function or the graph since you don't have the function, you'll have to use the graph
but the graph has a hole and a solution thats where im confused
at x->2
ok, as x gets closer and closer to 2, the graph is approaching the point (2,2); sure there is a hole here, but if you start on either side of x = 2 and approach it, you'll reach this hole, so that's why \[\Large \lim_{x\to 2} f(x) = 2\]
\[\Large \lim_{x\to 2}x^3 f(x) = (\lim_{x\to 2}x^3) * (\lim_{x\to 2}f(x))\] \[\Large \lim_{x\to 2}x^3 f(x) = 2^3 * 2\] \[\Large \lim_{x\to 2}x^3 f(x) = 8 * 2\] \[\Large \lim_{x\to 2}x^3 f(x) = 16\]
oh... I see.. Thanks a lot! I have another question.. Could I ask you?
sure go for it
If f and g are continuous functions with f(1) = 5 and the following limit, find g(1).
im getting 8 as a result not sure
this is a completely different problem right (with different functions f and g)?
oh wait nvm
that's correct, g(1) = 8
you separate the limit up, plug in x = 1, then replace f(1) with 5 and g(1) with y; afterwards you solve for y to get y = 8
yay! Thank You so much for your help!
you're welcome
|
{"url":"http://openstudy.com/updates/511189ffe4b09cf125bdd31b","timestamp":"2014-04-17T01:42:32Z","content_type":null,"content_length":"78548","record_id":"<urn:uuid:c2e92e62-5dfd-4ed5-9340-5c3405e0abaf>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Steilacoom Algebra 1 Tutor
Find a Steilacoom Algebra 1 Tutor
I have worked at Tacoma Community College for six years; when a student was asked to describe what I tutor, he responded by saying, "he DOESN'T tutor higher level biology or business classes if he can help it, but he does just about everything else." I have held a Master Tutor Certification with the ...
27 Subjects: including algebra 1, chemistry, geometry, statistics
...I always said that I wanted to be a "professional student" when I grew up so I could stay in school forever, which in becoming a teacher, I get to do just that! I am pursuing my degree in Early Childhood Education, and in the meantime, I am working in a child care program through the YMCA. I w...
16 Subjects: including algebra 1, reading, Spanish, writing
...I believe that conservation and sustainability is a life lesson that affects us all! I hope to further discuss my skills and experience in detail with you soon!My accumulated education
experience has given me the skills necessary to be an effective K-6th tutor. Currently I help with two 3rd and 4th grade home school students in reading and reading comprehension.
9 Subjects: including algebra 1, reading, writing, grammar
...Most of my tutoring experience has been with adults, but I also help high school students who are in the Running Start program. I can tutor any level of math from arithmetic to intermediate
algebra, and even a little bit of precalculus. I also do beginning Spanish and some computer work (Word, Excel, PowerPoint). I'm fairly low-key, and students tend to feel at ease around me.
8 Subjects: including algebra 1, geometry, Microsoft Excel, algebra 2
...During that process I help students work out issues with technique (muscle tension, breath control, etc.), and I approach such issues as hurdles to reaching the best performance rather than
problems with their voices. I want the song to come alive, and a free and natural sound facilitates this. ...
22 Subjects: including algebra 1, reading, chemistry, English
|
{"url":"http://www.purplemath.com/steilacoom_algebra_1_tutors.php","timestamp":"2014-04-20T21:06:04Z","content_type":null,"content_length":"24169","record_id":"<urn:uuid:0e94f674-1d4f-4dc7-9ff5-6ddd7519c8e0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thornton, CO Algebra Tutor
Find a Thornton, CO Algebra Tutor
...Math can be extremely intimidating, but once those hurdles are conquered, it is very fulfilling. With the summer coming up, I can help bridge the gap for your student. This allows them to keep
those math skills sharp while preparing for the school year.
17 Subjects: including algebra 1, algebra 2, chemistry, physics
...Courses taken include: Calculus I, Calculus II, Calculus III, Differential Equations, Statistics, Physics I, Physics II, Chemistry, Circuits I, Circuits II. As a student in these courses I also worked on campus to provide tutoring to many students struggling in math and engineering related fields. ...
20 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I am personable and approachable, and encourage students to participate in discussions and ask questions. Most importantly, I aim to prepare students for their futures. This means bringing real
world examples to the lesson.
15 Subjects: including algebra 1, algebra 2, Spanish, calculus
...When I was in high school I struggled with Algebra and Trigonometry, so I had to start from the beginning in college. I had a few great teachers and now it is a strength that I depend heavily on. With some study behavior tips, a focus on mathematical vocabulary, and paying attention to what the question is asking for, I feel confident I can help anyone who wants to learn.
21 Subjects: including algebra 1, algebra 2, reading, English
...There were hours of lecture, rigorous exams, extensive assessments, and demanding performance requirements. In my seven years as a college professor, I taught a variety of biology courses to
students with a mix of educational backgrounds and learning styles. I composed the lectures and exams and graded all the homework.
11 Subjects: including algebra 1, biology, grammar, anatomy
|
{"url":"http://www.purplemath.com/Thornton_CO_Algebra_tutors.php","timestamp":"2014-04-20T10:52:29Z","content_type":null,"content_length":"23977","record_id":"<urn:uuid:f72d9471-e389-4491-bc78-c9637a2afb2a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
11:59 am mst what time in philippines
You asked:
11:59 am mst what time in philippines
11:59 am Mountain Standard Time is 2:59:00 am Philippines Time (time zone UTC+8).
|
{"url":"http://www.evi.com/q/11:59_am_mst_what_time_in_philippines","timestamp":"2014-04-18T01:29:02Z","content_type":null,"content_length":"58579","record_id":"<urn:uuid:be9aa21d-05be-4b8d-9a1d-cdd2c2ee3ef5>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Statistical tests for differential expression in cDNA microarray experiments
Extracting biological information from microarray data requires appropriate statistical methods. The simplest statistical method for detecting differential expression is the t test, which can be used
to compare two conditions when there is replication of samples. With more than two conditions, analysis of variance (ANOVA) can be used, and the mixed ANOVA model is a general and powerful approach
for microarray experiments with multiple factors and/or several sources of variation.
Gene-expression microarrays hold tremendous promise for revealing the patterns of coordinately regulated genes. Because of the large volume and intrinsic variation of the data obtained in each
microarray experiment, statistical methods have been used as a way to systematically extract biological information and to assess the associated uncertainty. Here, we review some widely used methods
for testing differential expression among conditions. For these purposes, we assume that the data to be used are of good quality and have been appropriately transformed (normalized) to ensure that
experimentally introduced biases have been removed [1,2]. See Box 1 for a glossary of terms. For other aspects of microarray data analysis, please refer to recent reviews on experimental design [3,4]
and cluster analysis [5].
Comparing two conditions
A simple microarray experiment may be carried out to detect the differences in expression between two conditions. Each condition may be represented by one or more RNA samples. Using two-color cDNA
microarrays, samples can be compared directly on the same microarray or indirectly by hybridizing each sample with a common reference sample [4,6]. The null hypothesis being tested is that there is
no difference in expression between the conditions; when conditions are compared directly, this implies that the true ratio between the expression of each gene in the two samples should be one. When
samples are compared indirectly, the ratios between the test sample and the reference sample should not differ between the two conditions. It is often more convenient to use logarithms of the
expression ratios than the ratios themselves because effects on intensity of microarray signals tend be multiplicative; for example, doubling the amount of RNA should double the signal over a wide
range of absolute intensities. The logarithm transformation converts these multiplicative effects (ratios) into additive effects (differences), which are easier to model; the log ratio when there is
no difference between conditions should thus be zero. If a single-color expression assay is used - such as the Affymetrix system [7] - we are again considering a null hypothesis of no
expression-level difference between the two conditions, and the methods described in this article can also be applied directly to this type of experiment.
A distinction should be made between RNA samples obtained from independent biological sources - biological replicates - and those that represent repeated sampling of the same biological material -
technical replicates. Ideally, each condition should be represented by multiple independent biological samples in order to conduct statistical tests. If only technical replicates are available,
statistical testing is still possible but the scope of any conclusions drawn may be limited [3]. If both technical and biological replicates are available, for example if the same biological samples
are measured twice each using a dye-swap assay, the individual log ratios of the technical replicates can be averaged to yield a single measurement for each biological unit in the experiment. Callow
et al. [8] describe an example of a biologically replicated two-sample comparison, and our group [9] provide an example with technical replication. More complicated settings that involve multiple
layers of replication can be handled using the mixed-model analysis of variance techniques described below.
'Fold' change
The simplest method for identifying differentially expressed genes is to evaluate the log ratio between two conditions (or the average of ratios when there are replicates) and consider all genes that
differ by more than an arbitrary cut-off value to be differentially expressed [10-12]. For example, if the cut-off value chosen is a two-fold difference, genes are taken to be differentially
expressed if the expression under one condition is over two-fold greater or less than that under the other condition. This test, sometimes called 'fold' change, is not a statistical test, and there
is no associated value that can indicate the level of confidence in the designation of genes as differentially expressed or not differentially expressed. The fold-change method is subject to bias if
the data have not been properly normalized. For example, an excess of low-intensity genes may be identified as being differentially expressed because their fold-change values have a larger variance
than the fold-change values of high-intensity genes [13,14]. Intensity-specific thresholds have been proposed as a remedy for this problem [15].
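As a concrete illustration (not part of the original article), here is a minimal Python sketch of the fold-change rule, assuming hypothetical arrays expr_a and expr_b that hold normalized log2 intensities with one row per gene and one column per replicate:
import numpy as np

def fold_change_calls(expr_a, expr_b, cutoff=1.0):
    # Mean log2 ratio per gene; a cutoff of 1.0 corresponds to a two-fold change.
    log2_ratio = expr_a.mean(axis=1) - expr_b.mean(axis=1)
    # No p-value is produced: the call is based purely on the arbitrary cutoff.
    return np.abs(log2_ratio) >= cutoff, log2_ratio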
The t test
The t test is a simple, statistically based method for detecting differentially expressed genes (see Box 2 for details of how it is calculated). In replicated experiments, the error variance (see Box
1) can be estimated for each gene from the log ratios, and a standard t test can be conducted for each gene [8]; the resulting t statistic can be used to determine which genes are significantly
differentially expressed (see below). This gene-specific t test is not affected by heterogeneity in variance across genes because it only uses information from one gene at a time. It may, however,
have low power because the sample size - the number of RNA samples measured for each condition - is small. In addition, the variances estimated from each gene are not stable: for example, if the
estimated variance for one gene is small, by chance, the t value can be large even when the corresponding fold change is small. It is possible to compute a global t test, using an estimate of error
variance that is pooled across all genes, if it is assumed that the variance is homogeneous between different genes [16,17]. This is effectively a fold-change test because the global t test ranks
genes in an order that is the same as fold change; that is, it does not adjust for individual gene variability. It may therefore suffer from the same biases as a fold-change test if the error
variance is not truly constant for all genes.
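A gene-by-gene t test of the kind described here can be sketched as follows (again a hedged illustration rather than the authors' code; expr_a and expr_b are the hypothetical genes x replicates matrices from the previous sketch):
from scipy import stats

def gene_t_tests(expr_a, expr_b):
    # Two-sample t test for each gene (row), using only that gene's replicates,
    # so heterogeneity of variance across genes does not bias the statistic.
    t, p = stats.ttest_ind(expr_a, expr_b, axis=1)
    return t, p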
Modifications of the t test
As noted above, the error variance (the square root of which gives the denominator of the t tests) is hard to estimate and subject to erratic fluctuations when sample sizes are small. More stable
estimates can be obtained by combining data across all genes, but these are subject to bias when the assumption of homogeneous variance is violated. Modified versions of the t test (Box 2) find a
middle ground that is both powerful and less subject to bias.
In the 'significance analysis of microarrays' (SAM) version of the t test (known as the S test) [18], a small positive constant is added to the denominator of the gene-specific t test. With this
modification, genes with small fold changes will not be selected as significant; this removes the problem of stability mentioned above. The regularized t test [19] combines information from
gene-specific and global average variance estimates by using a weighted average of the two as the denominator for a gene-specific t test. The B statistic proposed by Lonnstedt and Speed [20] is a log
posterior odds ratio of differential expression versus non-differential expression; it allows for gene-specific variances but it also combines information across many genes and thus should be more
stable than the t statistic (see Box 2 for details).
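The flavor of these modifications can be conveyed by a simplified sketch (the real SAM and Cyber T procedures choose their tuning constants more carefully; here s0 defaults to the median standard error and the regularized t uses a crude 50/50 weighting, both purely for illustration):
import numpy as np

def moderated_statistics(log_ratios, s0=None, weight=0.5):
    # log_ratios: genes x replicates array of log ratios, testing a per-gene mean of zero.
    n = log_ratios.shape[1]
    mean = log_ratios.mean(axis=1)
    se = log_ratios.std(axis=1, ddof=1) / np.sqrt(n)       # gene-specific standard error
    if s0 is None:
        s0 = np.median(se)                                  # stand-in for SAM's fudge constant
    s_stat = mean / (se + s0)                               # SAM-style S statistic
    # Regularized t: shrink each gene's variance toward the global average variance.
    shrunk_se = np.sqrt(weight * se**2 + (1 - weight) * np.mean(se**2))
    return s_stat, mean / shrunk_se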
The t and B tests based on log ratios can be found in the Statistics for Microarray Analysis (SMA) package [21]; the S test is available in the SAM software package [22]; and the regularized t test is in the Cyber T package [23]. In addition, the Bioconductor [24] has a collection of various analysis tools for microarray experiments. Additional modifications of the t test are discussed by Pan [25].
Graphical summaries (the 'volcano plot')
The 'volcano plot' is an effective and easy-to-interpret graph that summarizes both fold-change and t-test criteria (see Figure 1). It is a scatter-plot of the negative log10-transformed p-values from the gene-specific t test (calculated as described in the next section) against the log2 fold change (Figure 1a). Genes with statistically significant differential expression according to the
gene-specific t test will lie above a horizontal threshold line. Genes with large fold-change values will lie outside a pair of vertical threshold lines. The significant genes identified by the S, B,
and regularized t tests will tend to be located in the upper left or upper right parts of the plot.
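A volcano plot of this kind can be drawn directly from the per-gene fold changes and p-values; a minimal matplotlib sketch (the thresholds are arbitrary placeholders):
import numpy as np
import matplotlib.pyplot as plt

def volcano_plot(log2_fc, p_values, p_cut=0.01, fc_cut=1.0):
    plt.scatter(log2_fc, -np.log10(p_values), s=5, alpha=0.5)
    plt.axhline(-np.log10(p_cut), linestyle='--')   # horizontal significance threshold
    plt.axvline(-fc_cut, linestyle='--')            # vertical fold-change thresholds
    plt.axvline(fc_cut, linestyle='--')
    plt.xlabel('log2 fold change')
    plt.ylabel('-log10 p-value')
    plt.show()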
Significance and multiple testing
Nominal p-values
After a test statistic is computed, it is convenient to convert it to a p-value. Genes with p-values falling below a prescribed level (the 'nominal level') may be regarded as significant. Reporting p
-values as a measure of evidence allows some flexibility in the interpretation of a statistical test by providing more information than a simple dichotomy of 'significant' or 'not significant' at a
predefined level. Standard methods for computing p-values are by reference to a statistical distribution table or by permutation analysis. Tabulated p-values can be obtained for standard test
statistics (such as the t test), but they often rely on the assumption that the errors in the data are normally distributed. Permutation analysis involves shuffling the data and does not require such
assumptions. If permutation analysis is to be used, the experiment must be large enough that a sufficient number of distinct shuffles can be obtained. Ideally, the labels that identify which
condition is represented by each sample are shuffled to simulate data from the null distribution. A minimum of about six replicates per condition (yielding a total of 924 distinct permutations) is
recommended for a two-sample comparison. With multiple conditions, fewer replicates are required. If the experiment is too small, permutation analysis can be conducted by shuffling residual values
across genes (see Box 1), under the assumption of homogeneous variance [6,25].
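The label-shuffling idea can be sketched as follows (a hedged illustration: expr is a hypothetical genes x samples matrix and labels a boolean vector marking one condition; a real analysis would enumerate all 924 distinct splits when feasible rather than sampling them):
import numpy as np
from scipy import stats

def permutation_p_values(expr, labels, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = np.abs(stats.ttest_ind(expr[:, labels], expr[:, ~labels], axis=1)[0])
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        shuffled = rng.permutation(labels)          # shuffle the condition labels
        t_perm = np.abs(stats.ttest_ind(expr[:, shuffled], expr[:, ~shuffled], axis=1)[0])
        exceed += t_perm >= observed
    return (exceed + 1) / (n_perm + 1)              # add-one rule avoids p-values of zero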
When we conduct a single hypothesis test, we may commit one of two types of errors. A type I or false-positive error occurs when we declare a gene to be differentially expressed when in fact it is
not. A type II or false-negative error occurs when we fail to detect a differentially expressed gene. A statistical test is usually constructed to control the type I error probability, and we achieve
a certain power (which is equal to one minus the type II error probability) that depends on the study design, sample size, and precision of the measurements. In a microarray experiment, we may
conduct thousands of statistical tests, one for each gene, and a substantial number of false positives may accumulate. The following are some of the methods available to address this problem, which
is called the problem of multiple testing.
Family-wise error-rate control
One approach to multiple testing is to control the family-wise error rate (FWER), which is the probability of accumulating one or more false-positive errors over a number of statistical tests. This
is achieved by increasing the stringency that we apply to each individual test. In a list of differentially expressed genes that satisfy an FWER criterion, we can have high confidence that there will
be no errors in the entire list. The simplest FWER procedure is the Bonferroni correction: the nominal significance level is divided by the number of tests. The permutation-based one-step correction
[26] and the Westfall and Young step-down adjustment [27] provide FWER control and are generally more powerful but more computationally demanding than the Bonferroni procedure. FWER criteria are very
stringent, and they may substantially decrease power when the number of tests is large.
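In symbols (the standard definition, not specific to this article): with m tests and a family-wise level α, the Bonferroni rule rejects the null hypothesis for gene g only when
\[ p_g \leq \frac{\alpha}{m}, \qquad \text{equivalently} \qquad \tilde{p}_g = \min(1, m\,p_g) \leq \alpha. \]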
False-discovery-rate control
An alternative approach to multiple testing considers the false-discovery rate (FDR), which is the proportion of false positives among all of the genes initially identified as being differentially
expressed - that is, among all the rejected null hypotheses [28,29]. An arguably more appropriate variation, the positive false-discovery rate (pFDR), was proposed by Storey [30]. It multiplies the FDR by a factor of π0, which is the estimated proportion of non-differentially expressed genes among all genes. Because π0 is between 0 and 1, the estimated pFDR is smaller than the FDR. The FDR
is typically computed [31] after a list of differentially expressed genes has been generated. Software for computing FDR and related quantities can be found at [32,33]. Unlike a significance level,
which is determined before looking at the data, FDR is a post-data measure of confidence. It uses information available in the data to estimate the proportion of false positive results that have
occurred. In a list of differentially expressed genes that satisfies an FDR criterion, one can expect that a known proportion of these will represent false positive results. FDR criteria allow a
higher rate of false positive results and thus can achieve more power than FWER procedures.
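The Benjamini-Hochberg step-up procedure behind the most common FDR calculation can be sketched in a few lines (a generic implementation, offered only as an illustration of the idea):
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    # Returns a boolean mask of genes declared differentially expressed at FDR level q.
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m   # sorted p-values vs. step-up thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()                # largest rank meeting its threshold
        reject[order[:k + 1]] = True
    return reject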
More than two conditions
Relative expression values
When there are more than two conditions in an experiment, we cannot simply compute ratios; a more general concept of relative expression is needed. One approach that can be applied to cDNA microarray
data from any experimental design is to use an analysis of variance (ANOVA) model (Box 3a) to obtain estimates of the relative expression (VG) for each gene in each sample [6,34]. In the microarray
ANOVA model, the expression level of a gene in a given sample is computed relative to the weighted average expression of that gene over all samples in the experiment (see Box 3a for statistical
details). We note that the microarray ANOVA model is not based on ratios but is applied directly to intensity data; the difference between two relative expression values can be interpreted as the
mean log ratio for comparing two samples (as logA - logB = log(A/B), where log A and log B are two relative expression values). Alternatively, if each sample is compared with a common reference
sample, one can use normalized ratios directly. This is an intuitive but less efficient approach to obtaining relative expression values than using the ANOVA estimates. Direct estimates of relative
expression can also be obtained from single-color expression assays [35,36].
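Box 3a is not reproduced here, but the two-color microarray ANOVA model of Kerr, Martin and Churchill [6] that this section refers to is commonly written in roughly the following form (array i, dye j, variety/RNA sample k, gene g; the exact set of terms depends on the design):
\[ \log y_{ijkg} = \mu + A_i + D_j + V_k + G_g + (AG)_{ig} + (DG)_{jg} + (VG)_{kg} + \varepsilon_{ijkg}, \]
where the (VG)_{kg} interaction terms are the relative expression values discussed above.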
The set of estimated relative expression values, one for each gene in each RNA sample, is a derived data set that can be subjected to a second level of analysis. There should be one relative
expression value for each gene in each independent sample. The distinction between technical replication and biological replication should be kept in mind when interpreting results from the analysis
of a derived data. If inference is being made on the basis of biological replicates and there is also technical replication in the experiment, the technical replicates should be averaged to yield a
single value for each independent biological unit. The derived data can be analyzed on a gene-by-gene basis using standard ANOVA methods to test for differences among conditions. For example, our
group [37] have used a derived data set to test for expression differences between natural populations of fish.
Three flavors of F test
The classical ANOVA F test is a generalization of the t test that allows for the comparison of more than two samples (Box 3). The F test is designed to detect any pattern of differential expression
among several conditions by comparing the variation among replicated samples within and between conditions. As with the t test, there are several variations on the F test (Box 3b). The gene-specific
F test (F1), a generalization of the gene-specific t test, is the usual F test and it is computed on a gene-by-gene basis. As with t tests, we can also assume a common error variance for all genes
and thus arrive at the global variance F test (F3). A middle ground is achieved by the F2 test, analogous to the regularized t test; this uses a weighted combination of global and gene-specific
variance estimates in the denominator. Nominal p-values can be obtained for the F test, from standard tables, but the F2 and F3 statistics do not follow the tabulated F distribution and critical
values should be established by permutation analysis.
Among these tests, the F3 test is the most powerful, but it is also subject to the same potential biases as the fold-change test. In our experience, F2 has power comparable to F3 but it has a lower
FDR than either F1 or F3. It is possible to derive a version of the B statistic [20] for the case of multiple conditions. This could provide an alternative approach to combine variance estimates
across genes in the context of multiple samples. Any of these tests can be applied to a derived data set of relative expression values to make comparisons among two or more conditions.
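The gene-specific F1 test can be computed with any standard one-way ANOVA routine; a sketch (hypothetical inputs: expr is a genes x samples matrix of relative expression values and condition a vector of condition labels; the F2 and F3 variants described above would modify only the variance used in the denominator):
import numpy as np
from scipy import stats

def gene_f_tests(expr, condition):
    condition = np.asarray(condition)
    labels = np.unique(condition)
    results = []
    for gene_values in expr:                                   # one row per gene
        groups = [gene_values[condition == c] for c in labels]
        results.append(stats.f_oneway(*groups))                # classical one-way F test
    f, p = map(np.array, zip(*results))
    return f, p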
The results of all three F statistics can be summarized simultaneously using a volcano plot, but with a slight twist when there are more than two samples. The standard deviation of the relative
expression values is plotted on the x axis instead of plotting log fold change; the resulting volcano plot (Figure 1b) is similar to the right-hand half of a standard volcano plot (Figure 1a).
The fixed-effects ANOVA model
The process of creating a derived data set and computing the F tests described above can be integrated in one step by applying [20,35] our fixed-effects ANOVA model [9]; further discussion is provided by Lee et al. [34]. The fixed-effects model assumes independence among all observations and only one source of random variation. Depending on the experimental design, this source of variation
could be technical, as in our study [9], or biological if applied to data as was done by Callow et al. [8]. Although it is applicable to many microarray experiments, the fixed-effects model does not
allow for multiple sources of variation, nor does it account for correlation among the observations that arise as a consequence of different layers of variation. Test statistics from the
fixed-effects model are constructed using the lowest level of variation in the experiment: if a design includes both biological and technical replication, tests are based on the technical variance
component. If there are replicated spots on the microarrays, the lowest level of variance will be the within-array measurement error. This is rarely appropriate for testing, and the statistical
significance of results using within-array error may be artificially inflated. To avoid this problem, replicated spots from the same array can be 'collapsed' by taking the sum or average of their raw
intensities. This does not fully utilize the available information, however, and we recommend application of the mixed-effects ANOVA model, described below.
Multiple-factor experiments
In a complex microarray experiment, the set of conditions may have some structure. For example, Jin et al. [38] consider eight conditions in a 2 by 2 by 2 factorial design with the factors sex, age,
and genotype. There is no biological replication here, but information about biological variance is available because of the factorial design. In other experiments, both biological and technical
replicates are included. For example, we [37] considered samples of five fish from each of three populations, and each fish was assayed on two microarrays with duplicated spots. In this study, the
conditions of interest are the populations from which the fish were sampled; the fish are biological replicates, and there are two nested levels of technical replication, arrays and spots within
arrays. To use fully the information available in experiments with multiple factors and multiple layers of sampling, we require a sophisticated statistical modeling approach.
The mixed-model ANOVA
The mixed model treats some of the factors in an experimental design as random samples from a population. In other words, we assume that if the experiment were to be repeated, the same effects would
not be exactly reproduced but that similar effects would be drawn from a hypothetical population of effects. We therefore model these factors as sources of variance.
In a mixed model for two-color microarrays (Box 3c), the gene-specific array effect (AG in Box 3a) is treated as a random factor. This captures an important component of technical variation. If the
same clone is printed multiple times on each array we should include additional random factors for spot (S) and labeling (L) effects. Consider an array with duplicate spots of each clone. Four
measurements are obtained for each clone, two in the red channel and two in the green channel. Measurements obtained on the same spot (one red and one green) will be correlated because they share
common variation in the spot size. Measurements obtained in the same color (both red or both green) will be correlated because they share variation through a common labeling reaction. Failure to
account for these correlations can result in underestimation of technical variance and inflated assessments of statistical significance.
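Schematically (Box 3c is not reproduced here, so this is only a sketch of the structure just described), the mixed model keeps the fixed effects of the model above and adds independent random terms for arrays, spots and labeling reactions,
\[ A_{ig} \sim N(0, \sigma^2_{array}), \qquad S \sim N(0, \sigma^2_{spot}), \qquad L \sim N(0, \sigma^2_{label}), \qquad \varepsilon \sim N(0, \sigma^2), \]
so that two measurements sharing a spot, or sharing a labeling reaction, share the corresponding random term; this is what induces the within-spot and within-dye correlations described in the text.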
In experiments with multiple factors, the VG term in the ANOVA model is expanded to have a structure that reflects the experimental design at the level of the biological replicates, that is,
independent biological samples obtained from the same conditions such as two mice of the same sex and strain. This may include both fixed and random components. Biological replicates should be
treated as a random factor and will be included in the error variance of any tests that make comparisons among conditions. This provides a broad-sense inference (see Box 1) that applies to the
biological population from which replicate samples were obtained [3,39].
Constructing tests with the mixed-model ANOVA
The components of variation attributable to each random factor in a mixed model can be estimated by any of several methods [39], of which restricted maximum likelihood (see Box 1) is the most widely
used. The presence of random effects in a model can influence the estimation of other effects, including the relative expression values; these will tend to 'shrink' toward zero slightly. This
effectively reduces the bias in the extremes of estimated relative expression values.
In the fixed-effects ANOVA model, there is only one variance term and all factors in the model are tested against this variance. In mixed-model ANOVA, there are multiple levels of variance
(biological, array, spot, and residual), and the question becomes which level we should use for the testing. The answer depends on what type of inference scope is of interest. If the interest is
restricted to the specific materials and procedures used in the experiment, a narrow-sense inference, which applies only to the biological samples used in the experiment, can be made using technical
variance. In most instances, however, we will be interested in a broader sense of inference that includes the biological population from which our material was sampled. In this case, all relevant
sources of variance should be considered in the test [40]. Constructing an appropriate test statistic using the mixed model can be tricky [41] and falls outside the scope of the present discussion,
but software tools are available that can be applied to compute appropriate F statistics, such as MAANOVA [42] and SAS [43]. Variations analogous to the F2 and F3 statistics are available in the
MAANOVA software package [42].
In conclusion, fold change is the simplest method for detecting differential expression, but the arbitrary nature of the cutoff value, the lack of statistical confidence measures, and the potential
for biased conclusions all detract from its appeal. The t test based on log ratios and variations thereof provide a rigorous statistical framework for comparing two conditions and require replication
of samples within each condition. When there are more than two conditions to compare, a more general approach is provided by the application of ANOVA F tests. These may be computed from derived sets
of estimated relative expression values or directly through the application of a fixed-effects ANOVA model. The mixed ANOVA model provides a general and powerful approach to allow full utilization of
the information available in microarray experiments with multiple factors and/or a hierarchy of sources of variation. Modifications of both t tests and F tests are available to address the problems
of gene-to-gene variance heterogeneity and small sample size.
1. Cui X, Churchill GA: Data transformation for cDNA microarray data. [http:/ / www.jax.org/ staff/ churchill/ labsite/ research/ expression/ Cui-Transform.pdf] webcite
2. Quackenbush J: Computational analysis of microarray data.
Nat Rev Genet 2001, 2:418-427. PubMed Abstract | Publisher Full Text
3. Churchill GA: Fundamentals of experimental design for cDNA microarrays.
Nat Genet 2002, 32 Suppl:490-495. PubMed Abstract | Publisher Full Text
4. Yang YH, Speed T: Design issues for cDNA microarray experiments.
Nat Rev Genet 2002, 3:579-588. PubMed Abstract | Publisher Full Text
5. Tibshirani R, Hastie T, Eisen M, Ross D, Botstein D, Brown PO: Clustering methods for the analysis of DNA microarray data. [http://www-stat.stanford.edu/~tibs/research.html] webcite
6. Kerr MK, Martin M, Churchill GA: Analysis of variance for gene expression microarray data.
J Comput Biol 2000, 7:819-837. PubMed Abstract | Publisher Full Text
7. Affymetrix [http://www.affymetrix.com] webcite
8. Callow MJ, Dudoit S, Gong EL, Speed TP, Rubin EM: Microarray expression profiling identifies genes with altered expression in HDL-deficient mice.
Genome Res 2000, 10:2022-2029. PubMed Abstract | Publisher Full Text
9. Kerr M, Afshari C, Bennett L, Bushel P, Martinez J, Walker N, Churchill G: Statistical analysis of a gene expression microarray experiment with replication.
10. Schena M, Shalon D, Heller R, Chai A, Brown PO, Davis RW: Parallel human genome analysis: microarray-based expression monitoring of 1000 genes.
Proc Natl Acad Sci USA 1996, 93:10614-10619. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
11. DeRisi JL, Iyer VR, Brown PO: Exploring the metabolic and genetic control of gene expression on a genomic scale.
Science 1997, 278:680-686. PubMed Abstract | Publisher Full Text
12. Draghici S: Statistical intelligence: effective analysis of high-density microarray data.
Drug Discov Today 2002, 7:S55-S63. PubMed Abstract | Publisher Full Text
13. Rocke DM, Durbin B: A model for measurement error for gene expression arrays.
J Comput Biol 2001, 8:557-569. PubMed Abstract | Publisher Full Text
14. Newton MA, Kendziorski CM, Richmond CS, Blattner FR, Tsui KW: On differential variability of expression ratios: improving statistical inference about gene expression changes from microarray data.
J Comput Biol 2001, 8:37-52. PubMed Abstract | Publisher Full Text
15. Yang IV, Chen E, Hasseman JP, Liang W, Frank BC, Wang S, Sharov V, Saeed AI, White J, Li J, et al.: Within the fold: assessing differential expression measures and reproducibility in microarray
Genome Biol 2002, 3:research006.12-0062.12. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
16. Tanaka TS, Jaradat SA, Lim MK, Kargul GJ, Wang X, Grahovac MJ, Pantano S, Sano Y, Piao Y, Nagaraja R, et al.: Genome-wide expression profiling of mid-gestation placenta and embryo using a 15,000
mouse developmental cDNA microarray.
Proc Natl Acad Sci USA 2000, 97:9127-9132. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
17. Arfin SM, Long AD, Ito ET, Tolleri L, Riehle MM, Paegle ES, Hatfield GW: Global gene expression profiling in Escherichia coli K12. The effects of integration host factor.
J Biol Chem 2000, 275:29672-29684. PubMed Abstract | Publisher Full Text
18. Tusher VG, Tibshirani R, Chu G: Significance analysis of microarrays applied to the ionizing radiation response.
Proc Natl Acad Sci USA 2001, 98:5116-5121. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
19. Baldi P, Long AD: A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes.
Bioinformatics 2001, 17:509-519. PubMed Abstract | Publisher Full Text
20. R package: statistics for microarray analysis [http://www.stat.berkeley.edu/users/terry/zarray/Software/smacode.html] webcite
21. SAM: Significance Analysis of Microarray [http://www-stat.stanford.edu/%7Etibs/SAM] webcite
22. Cyber T [http://www.igb.uci.edu/servers/cybert/] webcite
23. Bioconductor [http://www.bioconductor.org] webcite
24. Pan W: A comparative review of statistical methods for discovering differentially expressed genes in replicated microarray experiments.
Bioinformatics 2002, 18:546-554. PubMed Abstract | Publisher Full Text
25. Wu H, Kerr MK, Cui X, Churchill GA: MAANOVA: a software package for the analysis of spotted cDNA microarray experiments. [http://www.jax.org/staff/churchill/labsite/pubs/Wu_maanova.pdf] webcite
26. Dudoit S, Yang Y, Matthew J, Speed TP: Statistical methods for identifying differentially expressed genes in replicated cDNA microarray experiments. [http://www.stat.berkeley.edu/users/terry/
zarray/Html/matt.html] webcite
27. Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing.
28. Benjamini Y, Yekutieli D: The control of the false discovery rate in multiple tesing under dependency.
29. Storey J: A direct approach to false discovery rates.
J R Statist Soc B 2002, 64:479-498. Publisher Full Text
30. Storey JD, Tibshirani R: SAM thresholding and false discovery rates for detecting differential gene expression in DNA microarrays. [http://www.stat.berkeley.edu/~storey/papers/storey-springer.pdf
] webcite
31. False Discovery Rate homepage [http://www.math.tau.ac.il/~roee/index.htm] webcite
32. q-value [http://www.stat.berkeley.edu/~storey/qvalue/index.html] webcite
33. Lee ML, Lu W, Whitmore GA, Beier D: Models for microarray gene expression data.
J Biopharm Stat 2002, 12:1-19. PubMed Abstract | Publisher Full Text
34. Li C, Wong WH: Model-based analysis of oligonucleotide arrays: model validation, design issues and standard error application.
Genome Biol 2001, 2:research0049.1-0049.12. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
35. Irizarry RA, Hobbs BG, Collin F, Beazer-Barclay YD, Antonellis KJ, Scherf U, Speed T: Exploration, normalization and summaries of high density oligonucleotide array probe level data. [http://
biosun01.biostat.jhsph.edu/~ririzarr/papers/index.html] webcite
36. Oleksiak MF, Churchill GA, Crawford DL: Variation in gene expression within and among natural populations.
Nat Genet 2002, 32:261-266. PubMed Abstract | Publisher Full Text
37. Jin W, Riley RM, Wolfinger RD, White KP, Passador-Gurgel G, Gibson G: The contributions of sex, genotype and age to transcriptional variance in Drosophila melanogaster.
Nat Genet 2001, 29:389-395. PubMed Abstract | Publisher Full Text
38. McLean RA, Sanders WL, Stroup WW: A unified approach to mixed linear models.
39. Searle SR, Casella G, McCulloch CE:
Variance Components. New York, NY: John Wiley and Sons, Inc.;. 1992.
40. Littell RC, Milliken GA, Stroup WW, Wolfinger RD:
SAS system for mixed models. Cary, NC: SAS Institute Inc.;. 1996.
41. R/maanova [http://www.jax.org/staff/churchill/labsite/software/anova/rmaanova] webcite
42. SAS microarray solution [http://www.sas.com/industry/pharma/mas.html] webcite
43. Statistics glossary [http://www.statsoftinc.com/textbook/glosfra.html] webcite
44. Glossary [http://www.csse.monash.edu.au/~lloyd/tildeMML/Glossary/] webcite
45. Internet glossary of statistical terms [http://www.animatedsoftware.com/statglos/statglos.htm] webcite
46. Efron B, Tibshirani R, Goss V, Chu G: Microarrays and their use in a comparative experiment. [http://www-stat.stanford.edu/~tibs/research.html] webcite
47. Wolfinger RD, Gibson G, Wolfinger ED, Bennett L, Hamadeh H, Bushel P, Afshari C, Paules RS: Assessing gene significance from cDNA microarray expression data via mixed models.
J Comput Biol 2001, 8:625-637. PubMed Abstract | Publisher Full Text
48. Kerr MK, Churchill GA: Statistical design and the analysis of gene expression microarray data.
Genet Res 2001, 77:123-128. PubMed Abstract | Publisher Full Text
|
{"url":"http://genomebiology.com/content/4/4/210","timestamp":"2014-04-19T10:11:22Z","content_type":null,"content_length":"124776","record_id":"<urn:uuid:6ef50c6a-d826-4341-aed6-1fb38e995fcb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What's new
This week, Henry Towsner continued some model-theoretic preliminaries for the reading seminar of the Hrushovski paper, particularly regarding the behaviour of wide types, leading up to the main
model-theoretic theorem (Theorem 3.4 of Hrushovski) which in turn implies the various combinatorial applications (such as Corollary 1.2 of Hrushovski). Henry’s notes can be found here.
A key theme here is the phenomenon that any pair of large sets contained inside a definable set of finite measure (such as ${X \cdot X^{-1}}$) must intersect if they are sufficiently “generic”; the
notion of a wide type is designed, in part, to capture this notion of genericity.
The various languages and formats that make up modern web pages (HTML, XHTML, CSS, etc.) work wonderfully for most purposes, but there is one place where they are still somewhat clunky, namely in the
presentation of mathematical equations and diagrams on web pages. While web formats do support very simple mathematical typesetting (such as the usage of basic symbols such as π, or superscripts such
as x^2), it is difficult to create more sophisticated (and non-ugly) mathematical displays, such as
$\displaystyle \hbox{det} \begin{pmatrix} 1 & x_1 & \ldots & x_1^{n-1} \\ 1 & x_2 & \ldots & x_2^{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & \ldots & x_n^{n-1} \end{pmatrix} = \prod_{1 \leq i < j \leq n} (x_j - x_i)$
without some additional layer of software (in this case, WordPress’s LaTeX renderer). These types of ad hoc fixes work, up to a point, but several difficulties still remain. For instance:
1. There is no standardisation with regard to mathematics displays. For instance, WordPress uses $latex and $ to indicate a mathematics display, Wikipedia uses <math> and </math>, the current
experimental Google Wave plugins use $$ and $$, and so forth.
2. Mathematical formulae need to be compiled from a plain text language (much as with LaTeX), rather than edited directly on a visual editor. This is in contrast to other HTML elements, such as
links, boldface, colors, etc.
3. One cannot easily cut and paste a portion of a web page containing maths displays into another page or file (although with WordPress’s format, things are not so bad as the raw LaTeX code will be
captured as plain text). Again, this is in contrast to other HTML elements, which can be cut and pasted quite easily.
4. Currently, mathematical displays are usually rendered as static images and thus cannot be easily edited without recompiling the source code for that display. A related issue is that the images do
not automatically resize when the browser scale changes; also, in some cases they do not blend well with the background colour scheme for the page.
5. It is difficult to take an extended portion of LaTeX and convert it into a web page or vice versa, although tools such as Luca Trevisan’s LaTeX to WordPress converter achieve a heroic (and very
useful) level of partial success in this regard.
There are a number of extensions to the existing web languages that have been proposed to address some of these difficulties, the most well known of which is probably MathML, which is used for
instance in the n-Category Café. So far, though, adoption of the MathML standard (and development of editors and other tools to take advantage of this standard) seems to not be too widespread at present.
I’d like to open a discussion, then, about what kinds of changes to the current web standards could help facilitate the easier use of mathematical displays on web pages. (I’m indirectly in contact
with some people involved in these standards, so if some interesting discussions arise here, I can try to pass them on.)
A handy inequality in additive combinatorics is the Plünnecke-Ruzsa inequality:
Theorem 1 (Plünnecke-Ruzsa inequality) Let ${A, B_1, \ldots, B_m}$ be finite non-empty subsets of an additive group ${G}$, such that ${|A+B_i| \leq K_i |A|}$ for all ${1 \leq i \leq m}$ and some
scalars ${K_1,\ldots,K_m \geq 1}$. Then there exists a subset ${A'}$ of ${A}$ such that ${|A' + B_1 + \ldots + B_m| \leq K_1 \ldots K_m |A'|}$.
The proof uses graph-theoretic techniques. Setting ${A=B_1=\ldots=B_m}$, we obtain a useful corollary: if ${A}$ has small doubling in the sense that ${|A+A| \leq K|A|}$, then we have ${|mA| \leq K^m |A|}$ for all ${m \geq 1}$, where ${mA = A + \ldots + A}$ is the sum of ${m}$ copies of ${A}$.
In a recent paper, I adapted a number of sum set estimates to the entropy setting, in which finite sets such as ${A}$ in ${G}$ are replaced with discrete random variables ${X}$ taking values in ${G}$
, and (the logarithm of) cardinality ${|A|}$ of a set ${A}$ is replaced by Shannon entropy ${{\Bbb H}(X)}$ of a random variable ${X}$. (Throughout this note I assume all entropies to be finite.)
However, at the time, I was unable to find an entropy analogue of the Plünnecke-Ruzsa inequality, because I did not know how to adapt the graph theory argument to the entropy setting.
I recently discovered, however, that buried in a classic paper of Kaimanovich and Vershik (implicitly in Proposition 1.3, to be precise) there was the following analogue of Theorem 1:
Theorem 2 (Entropy Plünnecke-Ruzsa inequality) Let ${X, Y_1, \ldots, Y_m}$ be independent random variables of finite entropy taking values in an additive group ${G}$, such that ${{\Bbb H}(X+Y_i) \leq {\Bbb H}(X) + \log K_i}$ for all ${1 \leq i \leq m}$ and some scalars ${K_1,\ldots,K_m \geq 1}$. Then ${{\Bbb H}(X+Y_1+\ldots+Y_m) \leq {\Bbb H}(X) + \log K_1 \ldots K_m}$.
In fact Theorem 2 is a bit “better” than Theorem 1 in the sense that Theorem 1 needed to refine the original set ${A}$ to a subset ${A'}$, but no such refinement is needed in Theorem 2. One corollary
of Theorem 2 is that if ${{\Bbb H}(X_1+X_2) \leq {\Bbb H}(X) + \log K}$, then ${{\Bbb H}(X_1+\ldots+X_m) \leq {\Bbb H}(X) + (m-1) \log K}$ for all ${m \geq 1}$, where ${X_1,\ldots,X_m}$ are
independent copies of ${X}$; this improves slightly over the analogous combinatorial inequality. Indeed, the function ${m \mapsto {\Bbb H}(X_1+\ldots+X_m)}$ is concave (this can be seen by using the
${m=2}$ version of Theorem 2 (or (2) below) to show that the quantity ${{\Bbb H}(X_1+\ldots+X_{m+1})-{\Bbb H}(X_1+\ldots+X_m)}$ is decreasing in ${m}$).
Theorem 2 is actually a quick consequence of the submodularity inequality
$\displaystyle {\Bbb H}(W) + {\Bbb H}(X) \leq {\Bbb H}(Y) + {\Bbb H}(Z) \ \ \ \ \ (1)$
in information theory, which is valid whenever ${X,Y,Z,W}$ are discrete random variables such that ${Y}$ and ${Z}$ each determine ${X}$ (i.e. ${X}$ is a function of ${Y}$, and also a function of ${Z}
$), and ${Y}$ and ${Z}$ jointly determine ${W}$ (i.e ${W}$ is a function of ${Y}$ and ${Z}$). To apply this, let ${X, Y, Z}$ be independent discrete random variables taking values in ${G}$. Observe
that the pairs ${(X,Y+Z)}$ and ${(X+Y,Z)}$ each determine ${X+Y+Z}$, and jointly determine ${(X,Y,Z)}$. Applying (1) we conclude that
$\displaystyle {\Bbb H}(X,Y,Z) + {\Bbb H}(X+Y+Z) \leq {\Bbb H}(X,Y+Z) + {\Bbb H}(X+Y,Z)$
which after using the independence of ${X,Y,Z}$ simplifies to the sumset submodularity inequality
$\displaystyle {\Bbb H}(X+Y+Z) + {\Bbb H}(Y) \leq {\Bbb H}(X+Y) + {\Bbb H}(Y+Z) \ \ \ \ \ (2)$
(this inequality was also recently observed by Madiman; it is the ${m=2}$ case of Theorem 2). As a corollary of this inequality, we see that if ${{\Bbb H}(X+Y_i) \leq {\Bbb H}(X) + \log K_i}$, then
$\displaystyle {\Bbb H}(X+Y_1+\ldots+Y_i) \leq {\Bbb H}(X+Y_1+\ldots+Y_{i-1}) + \log K_i,$
and Theorem 2 follows by telescoping series.
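The submodularity inequality (2) is easy to test numerically. The sketch below is illustrative only (not from the post): it draws three independent random distributions on the cyclic group of order 12 and checks (2); the helper names entropy and sum_dist, and the choice of group, are my own.

    import numpy as np

    def entropy(p):
        # Shannon entropy (in nats) of a probability vector
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def sum_dist(p, q, n):
        # distribution of X+Y mod n for independent X ~ p, Y ~ q (cyclic convolution)
        r = np.zeros(n)
        for a in range(n):
            for b in range(n):
                r[(a + b) % n] += p[a] * q[b]
        return r

    rng = np.random.default_rng(0)
    n = 12
    p, q, s = (rng.random(n) for _ in range(3))
    p, q, s = p / p.sum(), q / q.sum(), s / s.sum()

    lhs = entropy(sum_dist(sum_dist(p, q, n), s, n)) + entropy(q)   # H(X+Y+Z) + H(Y)
    rhs = entropy(sum_dist(p, q, n)) + entropy(sum_dist(q, s, n))   # H(X+Y) + H(Y+Z)
    print(lhs <= rhs + 1e-12)                                       # True: inequality (2) holds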
The proof of Theorem 2 seems to be genuinely different from the graph-theoretic proof of Theorem 1. It would be interesting to see if the above argument can be somehow adapted to give a stronger
version of Theorem 1. Note also that both Theorem 1 and Theorem 2 have extensions to more general combinations of ${X,Y_1,\ldots,Y_m}$ than ${X+Y_i}$; see this paper and this paper respectively.
Here is a nice version of the periodic table (produced jointly by the Association for the British Pharmaceutical Industry, British Petroleum, the Chemical Industry Education Centre, and the Royal
Society for Chemistry) that focuses on the applications of each of the elements, rather than their chemical properties. A simple idea, but remarkably effective in bringing the table to life.
It might be amusing to attempt something similar for mathematics, for instance creating a poster that takes each of the top-level categories in the AMS 2010 Mathematics Subject Classification scheme
(or perhaps the arXiv math subject classification), and listing four or five applications of each, one of which would be illustrated by some simple artwork. (Except, of course, for those subfields
that are “seldom found in nature”. :-) )
A project like this, which would need expertise both in mathematics and in graphic design, and which could be decomposed into several loosely interacting subprojects, seems amenable to a polymath
-type approach; it seems to me that popularisation of mathematics is as valid an application of this paradigm as research mathematics. (Admittedly, there is a danger of “design by committee“, but a
polymath project is not quite the same thing as a committee, and it would be an interesting experiment to see the relative strengths and weaknesses of this design method.) I’d be curious to see
what readers would think of such an experiment.
[Update, Oct 25: A Math Overflow thread to collect applications of each of the major branches of mathematics has now been formed here, and is already rather active. Please feel free to contribute!]
[Via this post from the Red Ferret, which was suggested to me automatically via Google Reader's recommendation algorithm.]
Yehuda Shalom and I have just uploaded to the arXiv our paper “A finitary version of Gromov’s polynomial growth theorem“, to be submitted to Geom. Func. Anal.. The purpose of this paper is to
establish a quantitative version of Gromov’s polynomial growth theorem which, among other things, is meaningful for finite groups. Here is a statement of Gromov’s theorem:
Gromov’s theorem. Let $G$ be a group generated by a finite (symmetric) set $S$, and suppose that one has the polynomial growth condition
$|B_S(R)| \leq R^d$ (1)
for all sufficiently large $R$ and some fixed $d$, where $B_S(R)$ is the ball of radius $R$ generated by $S$ (i.e. the set of all words in $S$ of length at most $R$, evaluated in $G$). Then $G$
is virtually nilpotent, i.e. it has a finite index subgroup $H$ which is nilpotent of some finite step $s$.
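To make the growth condition (1) concrete, here is a small illustrative sketch (not from the paper) that counts word-metric balls in the integer lattice $\mathbb{Z}^2$ with its standard symmetric generating set; the ball sizes grow quadratically in $R$, as polynomial growth of degree 2 predicts.

    def ball_size(generators, R, dim=2):
        # |B_S(R)|: group elements expressible as words of length at most R in S
        ball = {(0,) * dim}
        frontier = set(ball)
        for _ in range(R):
            frontier = {tuple(x + g for x, g in zip(v, s))
                        for v in frontier for s in generators} - ball
            ball |= frontier
        return len(ball)

    gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]          # symmetric generating set of Z^2
    print([ball_size(gens, R) for R in range(1, 6)])   # [5, 13, 25, 41, 61] = 2R^2 + 2R + 1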
As currently stated, Gromov’s theorem is qualitative rather than quantitative; it does not specify any relationship between the input data (the growth exponent $d$ and the range of scales $R$ for
which one has (1)), and the output parameters (in particular, the index $|G/H|$ of the nilpotent subgroup $H$ of $G$, and the step $s$ of that subgroup). However, a compactness argument (sketched in
this previous blog post) shows that some such relationship must exist; indeed, if one has (1) for all $R_0 \leq R \leq C( R_0, d )$ for some sufficiently large $C(R_0,d)$, then one can ensure $H$ has
index at most $C'(R_0,d)$ and step at most $C''(R_0,d)$ for some quantities $C'(R_0,d)$, $C''(R_0,d)$; thus Gromov’s theorem is inherently a “local” result which only requires one to multiply the
generator set $S$ a finite number $C(R_0,d)$ of times before one sees the virtual nilpotency of the group. However, the compactness argument does not give an explicit value to the quantities $C
(R_0,d), C'(R_0,d), C''(R_0,d)$, and the nature of Gromov’s proof (using, in particular, the deep Montgomery-Zippin-Yamabe theory on Hilbert’s fifth problem) does not easily allow such an explicit
value to be extracted.
Another point is that the original formulation of Gromov’s theorem required the polynomial bound (1) at all sufficiently large scales $R$. A later proof of this theorem by van den Dries and Wilkie
relaxed this hypothesis to requiring (1) just for infinitely many scales $R$; the later proof by Kleiner (which I blogged about here) also has this relaxed hypothesis.
Our main result reduces the hypothesis (1) to a single large scale, and makes most of the qualitative dependencies in the theorem quantitative:
Theorem 1. If (1) holds for some $R > \exp(\exp(C d^C))$ for some sufficiently large absolute constant $C$, then $G$ contains a finite index subgroup $H$ which is nilpotent of step at most $C^d$.
The argument does in principle provide a bound on the index of $H$ in $G$, but it is very poor (of Ackermann type). If instead one is willing to relax “nilpotent” to “polycyclic“, the bounds on the
index are somewhat better (of tower exponential type), though still far from ideal.
There is a related finitary analogue of Gromov’s theorem by Makarychev and Lee, which asserts that any finite group of uniformly polynomial growth has a subgroup with a large abelianisation. The
quantitative bounds in that result are quite strong, but on the other hand the hypothesis is also strong (it requires upper and lower bounds of the form (1) at all scales) and the conclusion is a bit
weaker than virtual nilpotency. The argument is based on a modification of Kleiner’s proof.
Our argument also proceeds by modifying Kleiner’s proof of Gromov’s theorem (a significant fraction of which was already quantitative), and carefully removing all of the steps which require one to
take an asymptotic limit. To ease this task, we look for the most elementary arguments available for each step of the proof (thus consciously avoiding powerful tools such as the Tits alternative).
A key technical issue is that because there is only a single scale $R$ for which one has polynomial growth, one has to work at scales significantly less than $R$ in order to have any chance of
keeping control of the various groups and other objects being generated.
Below the fold, I discuss a stripped down version of Kleiner’s argument, and then how we convert it to a fully finitary argument.
At UCLA we just concluded our third seminar in our reading of “Stable group theory and approximate subgroups” by Ehud Hrushovski. In this seminar, Isaac Goldbring made some more general remarks about
universal saturated models (extending the discussion from the previous seminar), and then Henry Towsner gave some preliminaries on Keisler measures, in preparation for connecting the main
model-theoretic theorem (Theorem 3.4 of Hrushovski) to one of the combinatorial applications (Corollary 1.2 of Hrushovski).
As with the previous post, commentary on any topic related to Hrushovski’s paper is welcome, even if it is not directly related to what is under discussion by the UCLA group. Also, we have a number
of questions below which perhaps some of the readers here may be able to help answer.
Note: the notes here are quite rough; corrections are very welcome. Henry’s notes on his part of the seminar can be found here.
(Thanks to Isaac Goldbring for comments.)
The polymath1 project has just uploaded to the arXiv the paper “A new proof of the density Hales-Jewett theorem“, to be submitted shortly. Special thanks here go to Ryan O’Donnell for performing the
lion’s share of the writing up of the results, and to Tim Gowers for running a highly successful online mathematical experiment.
I’ll state the main result in the first non-trivial case $k=3$ for simplicity, though the methods extend surprisingly easily to higher $k$ (but with significantly worse bounds). Let $c_{n,3}$ be the
size of the largest subset of the cube $[3]^n = \{1,2,3\}^n$ that does not contain any combinatorial line. The density Hales-Jewett theorem of Furstenberg and Katznelson shows that $c_{n,3} = o(3^n)$.
In the course of the Polymath1 project, the explicit values
$c_{0,3} = 1; c_{1,3} = 2; c_{2,3} = 6; c_{3,3} = 18; c_{4,3} = 52; c_{5,3} =150; c_{6,3} = 450$
were established, as well as the asymptotic lower bound
$c_{n,3} \geq 3^n \exp( - O( \sqrt{ \log n } ) )$
(actually we have a slightly more precise bound than this). The main result of this paper is then
Theorem. ($k=3$ version) $c_{n,3} \ll 3^n / \log^{1/2}_* n$.
Here $\log_* n$ is the inverse tower exponential function; it is the number of times one has to take (natural) logarithms until one drops below 1. So it does go to infinity, but extremely slowly.
Nevertheless, this is the first explicitly quantitative version of the density Hales-Jewett theorem.
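For the smallest cases, the values quoted above can be confirmed by brute force. The sketch below is illustrative only (and feasible only for tiny $n$): it enumerates combinatorial lines as templates over $\{1,2,3,*\}$ with at least one wildcard and searches for the largest line-free subset of $[3]^n$.

    from itertools import combinations, product

    def lines(n):
        # combinatorial lines in [3]^n: templates over {1,2,3,'*'} with at least one '*'
        for tpl in product((1, 2, 3, '*'), repeat=n):
            if '*' in tpl:
                yield [tuple(v if t == '*' else t for t in tpl) for v in (1, 2, 3)]

    def c_n3(n):
        # size of the largest line-free subset of [3]^n (exhaustive search)
        cube = list(product((1, 2, 3), repeat=n))
        all_lines = list(lines(n))
        for size in range(len(cube), -1, -1):
            for S in combinations(cube, size):
                chosen = set(S)
                if not any(all(pt in chosen for pt in line) for line in all_lines):
                    return size

    print([c_n3(n) for n in (0, 1, 2)])   # [1, 2, 6], matching the values listed above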
The argument is based on the density increment argument as pioneered by Roth, and also used in later papers of Ajtai-Szemerédi and Shkredov on the corners problem, which was also influential in our
current work (though, perhaps paradoxically, the generality of our setting makes our argument simpler than the above arguments, in particular allowing one to avoid use of the Fourier transform,
regularity lemma, or Szemerédi’s theorem). I discuss the argument in the first part of this previous blog post.
I’ll end this post with an open problem. In our paper, we cite the work of P. L. Varnavides, who was the first to observe the elementary averaging argument that showed that Roth’s theorem (which
showed that dense sets of integers contained at least one progression of length three) could be amplified (to show that there was in some sense a “dense” set of arithmetic progressions of length
three). However, despite much effort, we were not able to expand “P.” into the first name. As one final task of the Polymath1 project, perhaps some readers with skills in detective work could lend
a hand in finding out what Varnavides’ first name was? Update, Oct 22: Mystery now solved; see comments.
I’m lagging behind the rest of the maths blog community in reporting this, but there is an interesting (and remarkably active) new online maths experiment that has just been set up, called Math
Overflow, in which participants can ask and answer research maths questions (though homework questions are discouraged). It reminds me to some extent of the venerable newsgroup sci.math, but with
more modern, “Web 2.0” features (for instance, participants can earn “points” for answering questions or rating comments, which then give administrative privileges, which seems to encourage
participation). The activity and turnover rate is quite remarkable: perhaps an order of magnitude higher than a typical maths blog, and two orders higher than a typical maths wiki. It’s not clear
that the model is transferable to these two settings, though.
There is an active discussion of Math Overflow over at the Secret Blogging Seminar. I don’t have much to add to that discussion, except to say that I am happy to see continued experimentation in
various online mathematics formats; we still don’t fully understand what makes an online experiment succeed or fail (or get stuck halfway between the two extremes), and more data points like this are
very valuable.
In his wonderful article “On proof and progress in mathematics“, Bill Thurston describes (among many other topics) how one’s understanding of given concept in mathematics (such as that of the
derivative) can be vastly enriched by viewing it simultaneously from many subtly different perspectives; in the case of the derivative, he gives seven standard such perspectives (infinitesimal,
symbolic, logical, geometric, rate, approximation, microscopic) and then mentions a much later perspective in the sequence (as describing a flat connection for a graph).
One can of course do something similar for many other fundamental notions in mathematics. For instance, the notion of a group ${G}$ can be thought of in a number of (closely related) ways, such as
the following:
• (0) Motivating examples: A group is an abstraction of the operations of addition/subtraction or multiplication/division in arithmetic or linear algebra, or of composition/inversion of
• (1) Universal algebraic: A group is a set ${G}$ with an identity element ${e}$, a unary inverse operation ${\cdot^{-1}: G \rightarrow G}$, and a binary multiplication operation ${\cdot: G \times
G \rightarrow G}$ obeying the relations (or axioms) ${e \cdot x = x \cdot e = x}$, ${x \cdot x^{-1} = x^{-1} \cdot x = e}$, ${(x \cdot y) \cdot z = x \cdot (y \cdot z)}$ for all ${x,y,z \in G}$.
• (2) Symmetric: A group is all the ways in which one can transform a space ${V}$ to itself while preserving some object or structure ${O}$ on this space.
• (3) Representation theoretic: A group is identifiable with a collection of transformations on a space ${V}$ which is closed under composition and inverse, and contains the identity
• (4) Presentation theoretic: A group can be generated by a collection of generators subject to some number of relations.
• (5) Topological: A group is the fundamental group ${\pi_1(X)}$ of a connected topological space ${X}$.
• (6) Dynamic: A group represents the passage of time (or of some other variable(s) of motion or action) on a (reversible) dynamical system.
• (7) Category theoretic: A group is a category with one object, in which all morphisms have inverses.
• (8) Quantum: A group is the classical limit ${q \rightarrow 0}$ of a quantum group.
• etc.
One can view a large part of group theory (and related subjects, such as representation theory) as exploring the interconnections between various of these perspectives. As one’s understanding of the
subject matures, many of these formerly distinct perspectives slowly merge into a single unified perspective.
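As a toy illustration of how two of these perspectives meet (an illustrative sketch, not from the post), the code below takes the cyclic group of order 4 in its universal-algebraic guise (1) and realises it, via left translations, as a collection of permutations in the sense of (3); composing the permutations reproduces the original group law.

    from itertools import product

    n = 4
    elements = range(n)
    op = lambda a, b: (a + b) % n   # the group of order 4 as a binary operation (perspective (1))

    # perspective (3): each element acts on the set {0,...,3} by left translation
    perm = {a: tuple(op(a, x) for x in elements) for a in elements}
    compose = lambda p, q: tuple(p[q[x]] for x in elements)

    # composing the permutations recovers the original multiplication table
    print(all(compose(perm[a], perm[b]) == perm[op(a, b)]
              for a, b in product(elements, repeat=2)))   # True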
From a recent talk by Ezra Getzler, I learned a more sophisticated perspective on a group, somewhat analogous to Thurston’s example of a sophisticated perspective on a derivative (and coincidentally,
flat connections play a central role in both):
• (37) Sheaf theoretic: A group is identifiable with a (set-valued) sheaf on the category of simplicial complexes such that the morphisms associated to collapses of ${d}$-simplices are bijective
for ${d > 1}$ (and merely surjective for ${d \leq 1}$).
This interpretation of the group concept is apparently due to Grothendieck, though it is motivated also by homotopy theory. One of the key advantages of this interpretation is that it generalises
easily to the notion of an ${n}$-group (simply by replacing ${1}$ with ${n}$ in (37)), whereas the other interpretations listed earlier require a certain amount of subtlety in order to generalise
correctly (in particular, they usually themselves require higher-order notions, such as ${n}$-categories).
The connection of (37) with any of the other perspectives of a group is elementary, but not immediately obvious; I enjoyed working out exactly what the connection was, and thought it might be of
interest to some readers here, so I reproduce it below the fold.
[Note: my reconstruction of Grothendieck's perspective, and of the appropriate terminology, is likely to be somewhat inaccurate in places: corrections are of course very welcome.]
One of my favorite open problems, which I have blogged about in the past, is that of establishing (or even correctly formulating) a non-commutative analogue of Freiman’s theorem. Roughly speaking,
the question is this: given a finite set ${X}$ in a non-commutative group ${G}$ which is of small doubling in the sense that the product set ${X \cdot X := \{ xy: x, y \in X \}}$ is not much larger
than ${X}$ (e.g. ${|X \cdot X| \leq K|X|}$ for some ${K = O(1)}$), what does this say about the structure of ${X}$? (For various technical reasons one may wish to replace small doubling by, say,
small tripling (i.e. ${|X \cdot X \cdot X| = O( |X| )}$), and one may also wish to assume that ${X}$ contains the identity and is symmetric, ${X^{-1} = X}$, but these are relatively minor details.)
Sets of small doubling (or tripling), etc. can be thought of as “approximate groups”, since groups themselves have a doubling constant ${K := |X \cdot X|/|X|}$ equal to one. Another obvious example
of an approximate group is that of an arithmetic progression in an additive group, and more generally of a ball (in the word metric) in a nilpotent group of bounded rank and step. It is tentatively
conjectured that in fact all examples can somehow be “generated” out of these basic examples, although it is not fully clear at present what “generated” should mean.
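For intuition in the commutative (additive) setting, the doubling constant is easy to experiment with; the sketch below (illustrative, not from the post) compares an arithmetic progression, whose doubling constant is close to 2, with a generic random set of the same size, whose doubling constant is close to half its cardinality.

    import random

    def doubling(X):
        # |X + X| / |X| for a finite set of integers X
        X = set(X)
        return len({x + y for x in X for y in X}) / len(X)

    N = 200
    ap = range(N)                           # arithmetic progression
    rnd = random.sample(range(10**6), N)    # generic random set of the same size

    print(doubling(ap))    # about 2: |A+A| = 2N - 1
    print(doubling(rnd))   # about N/2: almost all pairwise sums are distinct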
A weaker conjecture along the same lines is that if ${X}$ is a set of small doubling, then there should be some sort of “pseudo-metric” ${\rho}$ on ${G}$ which is left-invariant, and for which ${X}$
is controlled (in some suitable sense) by the unit ball in this metric. (For instance, if ${X}$ was a subgroup of ${G}$, one would take the metric which identified all the left cosets of ${X}$ to a
point, but was otherwise a discrete metric; if ${X}$ were a ball in a nilpotent group, one would use some rescaled version of the word metric, and so forth.) Actually for technical reasons one would
like to work with a slightly weaker notion than a pseudo-metric, namely a Bourgain system, but let us again ignore this technicality here.
Recently, using some powerful tools from model theory combined with the theory of topological groups, Ehud Hrushovski has apparently achieved some breakthroughs on this problem, obtaining new
structural control on sets of small doubling in arbitrary groups that was not previously accessible to the known combinatorial methods. The precise results are technical to state, but here are
informal versions of two typical theorems. The first applies to sets of small tripling in an arbitrary group:
Theorem 1 (Rough version of Hrushovski Theorem 1.1) Let ${X}$ be a set of small tripling, then one can find a long sequence of nested symmetric sets ${X_1 \supset X_2 \supset X_3 \supset \ldots}$
, all of size comparable to ${X}$ and contained in ${(X^{-1} X)^2}$, which are somewhat closed under multiplication in the sense that ${X_i \cdot X_i \subset X_{i-1}}$ for all ${i > 1}$, and
which are fairly well closed under commutation in the sense that ${[X_i, X_j] \subset X_{i+j-1}}$. (There are also some additional statements to the effect that the ${X_n}$ efficiently cover each
other, and also cover ${X}$, but I will omit those here.)
This nested sequence is somewhat analogous to a Bourgain system, though it is not quite the same notion.
If one assumes that ${X}$ is “perfect” in a certain sense, which roughly means that there is no non-trivial abelian quotient, then one can do significantly better:
Theorem 2 (Rough version of Hrushovski Corollary 1.2) Let ${X_0}$ be a set of small tripling, let ${X := X_0^{-1} X_0}$, and suppose that for almost all ${l}$-tuples ${a_1, \ldots, a_l \in X}$
(where ${l=O(1)}$), the conjugacy classes ${a_i^X := \{ x^{-1} ax: x \in X \}}$ generate most of ${X}$ in the sense that ${|a_1^X \cdot \ldots \cdot a_l^X| \gg |X|}$. Then a large part of ${X}$
is contained in a subgroup of size comparable to ${X}$.
Note that if one quotiented out by the commutator ${[X,X]}$, then all of the conjugacy classes ${a_i^X}$ would collapse to points. So the hypothesis here is basically a strong quantitative assertion
to the effect that the commutator ${[X,X]}$ is extremely large, and rapidly fills out most of ${X}$ itself.
Here at UCLA, a group of logicians and I (consisting of Matthias Aschenbrenner, Isaac Goldbring, Greg Hjorth, Henry Towsner, Anush Tserunyan, and possibly others) have just started a weekly reading
seminar to come to grips with the various combinatorial, logical, and group-theoretic notions in Hrushovski’s paper, of which we only have a partial understanding at present. The seminar is a
physical one, rather than an online one, but I am going to try to put some notes on the seminar on this blog as it progresses, as I know that there are a couple of other mathematicians who are
interested in these developments.
So far there have been two meetings of the seminar. In the first, I surveyed the state of knowledge of the noncommutative Freiman theorem, covering broadly the material in my previous blog post. In
the second meeting, Isaac reviewed some key notions of model theory used in Hrushovski’s paper, in particular the notions of definability and type, which I will review below. It is not yet clear how
these are going to be connected with the combinatorial side of things, but this is something which we will hopefully develop in future seminars. The near-term objective is to understand the statement
of the main theorem on the model-theoretic side (Theorem 3.4 of Hrushovski), and then understand some of its easier combinatorial consequences, before going back and trying to understand the proof of
that theorem.
[Update, Oct 19: Given the level of interest in this paper, readers are encouraged to discuss any aspect of that paper in the comments below, even if they are not currently being covered by the UCLA reading seminar.]
|
{"url":"https://terrytao.wordpress.com/2009/10/","timestamp":"2014-04-17T13:08:24Z","content_type":null,"content_length":"183422","record_id":"<urn:uuid:ad9bf6a1-396c-4a66-9fa4-50d09bd1b4b6>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: Re: Stata tip 87: Interpretation of interactions in non-linear model
st: Re: Stata tip 87: Interpretation of interactions in non-linear models
From Maarten Buis <maartenlbuis@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject st: Re: Stata tip 87: Interpretation of interactions in non-linear models
Date Wed, 18 May 2011 17:55:02 +0200
On Wed, May 18, 2011 at 4:48 PM, andre ebner wrote:
> Regarding your concern to interpret differences between APE please allow
> me one more consideration:
> When comparing differences you write:
>> In your case (with a panel model) this would be a mixture of average
> partial effects while fixing the unobserved group constants at the
> average. This is not very pretty, especially be-cause the variability of
> the individual partial effects for interaction effects is so high that
> it really matters.
> If I understood you correctly, your worry is that this approach does not
> take into account that mean and variance of RE are likely to be different
> across groups (by interacting two discrete variables I get four groups)
> but instead fixes the unobserved group constants at the average.
No I worry about the fact that there is variance, regardless of
whether that variance is the same or different across groups. Think of
the random effects as group specific constants. The value of a
marginal or partial effect depends on that constant. To get _average_
partial effects we need to compute the partial effect for every
individual and than average those individual partial effects. What you
are doing is you first fix the group specific constants at their
average, than compute partial effects and than you average those (over
the distribution of the observed variables). The two are not the same,
as there is a non-linear transformation involved. As long as
everything is nice and (approximately) linear all this does not matter
(much). Unfortunately that is not the case when dealing with
interaction effects in non-linear models. It is not uncommon for
estimated marginal effects for interaction terms to vary from
significantly positive to non-significant to significantly negative
depending on the values of the other covariates. In essence you are
computing an uncomfortable mix between average partial effects and
marginal effects at average values of the covariates.
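As a rough numerical sketch of this point (a plain random-intercept logit rather than an interaction model; all names and numbers below are illustrative, not taken from the thread), averaging the individual partial effects and computing the partial effect at the averaged constant give visibly different answers because the logistic function is non-linear in the constant:

    import numpy as np

    rng = np.random.default_rng(1)
    beta, x = 1.0, 0.5
    u = rng.normal(0.0, 2.0, 50000)        # group-specific constants (random effects)

    logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
    p = logistic(beta * x + u)
    ape = np.mean(beta * p * (1.0 - p))    # average of the individual partial effects

    p_bar = logistic(beta * x + u.mean())  # fix the constant at its average first
    pe_at_average = beta * p_bar * (1.0 - p_bar)

    print(ape, pe_at_average)              # clearly different values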
Anyhow, my main point is that you should _not_ average the individual
partial effects and certainly not compute the marginal effects at the
average. Instead you should take Norton et al.'s point seriously and
accept that when you want to present your results as marginal effects,
the marginal effects of interaction terms will vary widely from
individual to individual, even though the underlying ratio of odds
ratios is constant. This variation tends to be so extreme that trying
to summarize it with one number just does not make sense.
The problem is that you'll end up with a conclusion like "the
interaction term is significantly positive, significantly negative,
and not significant, depending on the values of the covariates", which
is not much of a conclusion, especially if you consider that the
underlying ratio of odds ratios leads to a single unambiguous conclusion.
Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
|
{"url":"http://www.stata.com/statalist/archive/2011-05/msg00924.html","timestamp":"2014-04-16T22:02:33Z","content_type":null,"content_length":"10819","record_id":"<urn:uuid:3371b620-554d-4d48-a461-de3027c9ccd7>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A point is an ordered pair of real numbers.
The plane is the set of all ordered pairs of real numbers.
The distance between (x[1], y[1]) and (x[2], y[2]) is denoted by
|(x[1], y[1])(x[2], y[2])|
and is defined to be sqrt((x[2] - x[1])^2 + (y[2] - y[1])^2).
Two points are vertical if they have the same x-coordinates.
Two points are horizontal if they have the same y-coordinates.
A rational expression with a zero denominator and a nonzero numerator has a value of infinity.^1
The slope between (x[1], y[1]) and (x[2], y[2]) is (y[2] - y[1]) / (x[2] - x[1]).
A line is the set of all points whose coordinates are solutions to a linear equation in two unknowns.^2
The slope of a line is the slope between any two points on the line.^3
A vertical line is one where all of the points are vertical.
A horizontal line is one where all the points are horizontal.
The point-slope form of the equation of the line through (x[1], y[1]) with slope m is
y - y[1] = m(x - x[1])
Two lines are parallel if their slopes are the same.
Two lines are perpendicular if their slopes are negative reciprocals of each other.^4
Given a line and a point, the point where the line through the given point perpendicular to the given line intersects the given line is called the foot of the point in the line.
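A short computational sketch of this last definition (the helper name and sample coordinates are illustrative choices):

    def foot(a, b, p):
        # foot of the perpendicular from point p to the line through points a and b
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        return (ax + t * dx, ay + t * dy)

    print(foot((0, 0), (4, 0), (1, 3)))   # (1.0, 0.0): the foot of (1, 3) in the x-axis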
|
{"url":"http://www.sonoma.edu/users/w/wilsonst/Papers/Geometry/Definitions.html","timestamp":"2014-04-18T01:03:39Z","content_type":null,"content_length":"4529","record_id":"<urn:uuid:5eb3a6f8-0264-4258-85b0-a5ffc319f8d8>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|
'The Hilbert challenge'
Issue 29
March 2004
The Hilbert Challenge: A perspective on twentieth century mathematics
"As long as a branch of science offers an abundance of problems", proclaimed David Hilbert, "so is it alive". These words were delivered in the German mathematician's famous speech at the 1900
International Congress of Mathematics. He subsequently went on to describe 23 problems which he believed would spur on mathematical thought for the upcoming century. As Professor Jeremy Gray
describes in his book "The Hilbert Challenge", Hilbert's aim was ultimately achieved. A small subset of these problems turned out to be trivial; the majority were finally solved, by some of the
brightest minds of twentieth century mathematics; and a few remain unresolved or not solved in full generality, the most notable of which is the Riemann hypothesis. Most important of all, each
problem provoked much discussion and helped focus the path of research that would dominate the next hundred years.
Gray, whose research interests include the history of 19th and 20th century mathematics, first takes the reader through Hilbert's student life. His early years as a mathematician, we find out,
heavily influenced his choice of Problems. Moreover, his early successes were based on work from seemingly disparate branches of his subject. This helped establish Hilbert not only as a respected
academic, but also proved he had a broad view of the whole realm of mathematics. Indeed, with the exception of Henri Poincaré, Hilbert alone was capable of giving a lecture with such grandiose
ambitions - and succeeding. The author then goes on to describe the initial reaction to the lectures, the first attempts to solve the Problems, and in some cases, the controversy they generated in
the mathematical community. The story continues through the wars, and interlaced throughout is the status of the Problems, and the successes and failures of those brave enough to tackle them.
Naturally, Gray's story does not end with Hilbert's passing in 1943. Even after 1945, Hilbert's unsolved Problems were still held in high respect and remained relevant up to the present. Finally,
Gray treats us to a translation of Hilbert's original 1900 address, provides a glossary of important mathematical terms, and gives a summary of each Problem and the progress made in solving it.
What makes the "Hilbert Challenge" so engaging is the clever way that Gray intertwines the full mathematical details with the narrative. It can be appreciated by the non-mathematician, for the story
is fascinating in its own right. But Gray certainly does not disappoint the more mathematically minded. Woven through the text are figures and boxes, containing the finer details of the mathematics
described. These short paragraphs furnish the reader with a concrete and understandable idea of what sorts of problems these mathematicians were struggling with. Best of all, these short interludes
are delivered in the style any competent later-year secondary student or undergraduate could appreciate.
Those interested in the philosophy of mathematics in the twentieth century will also find this book interesting. Gray is clearly fascinated by this aspect, and Hilbert's Problems illuminated many of
the tensions that existed in the mathematics community. The battle between abstract theories and more concrete problem-solving, the existence of a complete axiomatization of arithmetic, and finally
the role of mathematics in physics, were all issues that Hilbert was concerned with - and indeed that are still debated today.
"The Hilbert Challenge" proves to be insightful and readable, though at some points it can become slightly bogged down in details. The reader will surely be inspired by Hilbert's unyielding faith
that all problems can be solved, certainly a heartening thought to any mathematician.
Book details:
The Hilbert challenge: A perspective on twentieth century mathematics
Jeremy Gray
hardback - 328 pages (2000)
Oxford University Press
ISBN: 0198506511
You can buy the book and help Plus at the same time by clicking on the link on the left to purchase from amazon.co.uk, and the link to the right to purchase from amazon.com. Plus will earn a small
commission from your purchase.
About the reviewer
Hari K Kunduri is a second year PhD student at the Department of Applied Maths and Theoretical Physics at the University of Cambridge, under the supervision of Dr M.J. Perry. His research interests
lie in branes in M-Theory.
|
{"url":"http://plus.maths.org/content/hilbert-challenge","timestamp":"2014-04-21T10:04:41Z","content_type":null,"content_length":"27613","record_id":"<urn:uuid:42890fb0-c087-461e-a39d-9c92809cd86b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculating Inflation with Price Indexes
Inflation is calculated by taking the price index from the year in interest and subtracting the base year from it, then dividing by the base year. This is then multiplied by 100 to give the percent
change in inflation.
Inflation = ((Price Index in Current Year – Price Index in Base Year) / Price Index in Base Year) * 100
From our previous example:
Inflation = ((120 - 100) / 100) * 100 = 0.2 * 100 = 20%
Thus from 2006 to 2007, inflation has risen 20%.
In this context, inflation is measured as a percentage change in the price index from one period to the next.
If the percentage change in the price index is negative it shows deflation rather than inflation.
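The same calculation as a small code sketch (the function name is an illustrative choice):

    def inflation(current_index, base_index):
        # percent change in the price index relative to the base year
        return (current_index - base_index) / base_index * 100

    print(inflation(120, 100))   # 20.0 percent; a negative value would indicate deflation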
|
{"url":"http://www.econport.org/content/handbook/Inflation/Price-Index/CalcwPI.html","timestamp":"2014-04-20T03:10:28Z","content_type":null,"content_length":"8656","record_id":"<urn:uuid:79cd68c3-fc7c-4d66-aa84-277b7f9d37d4>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
|
2.11 Conclusions
Next: 2.12 Bibliographical and Historical Up: 2. Evaluative Feedback Previous: 2.10 Associative Search Contents
We have presented in this chapter some simple ways of balancing exploration and exploitation. The ε-greedy methods choose randomly a small fraction of the time, whereas the softmax methods grade their action probabilities according to the current action-value estimates.
Although the simple methods explored in this chapter may be the best we can do at present, they are far from a fully satisfactory solution to the problem of balancing exploration and exploitation. We
conclude this chapter with a brief look at some of the current ideas that, while not yet practically useful, may point the way toward better solutions.
One promising idea is to use estimates of the uncertainty of the action-value estimates to direct and encourage exploration. For example, suppose there are two actions estimated to have values
slightly less than that of the greedy action, but that differ greatly in their degree of uncertainty. One estimate is nearly certain; perhaps that action has been tried many times and many rewards
have been observed. The uncertainty for this action's estimated value is so low that its true value is very unlikely to be higher than the value of the greedy action. The other action is known less
well, and the estimate of its value is very uncertain. The true value of this action could easily be better than that of the greedy action. Obviously, it makes more sense to explore the second action
than the first.
This line of thought leads to interval estimation methods. These methods estimate for each action a confidence interval of the action's value. That is, rather than learning that the action's value is
approximately 10, they learn that it is between 9 and 11 with, say, 95% confidence. The action selected is then the action whose confidence interval has the highest upper limit. This encourages
exploration of actions that are uncertain and have a chance of ultimately being the best action. In some cases one can obtain guarantees that the optimal action has been found with confidence equal
to the confidence factor (e.g., the 95%). Unfortunately, interval estimation methods are problematic in practice because of the complexity of the statistical methods used to estimate the confidence
intervals. Moreover, the underlying statistical assumptions required by these methods are often not satisfied. Nevertheless, the idea of using confidence intervals, or some other measure of
uncertainty, to encourage exploration of particular actions is sound and appealing.
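A rough sketch of this idea, in the style of an upper-confidence-bound rule rather than the exact interval estimation algorithm described above (the constants and reward distributions below are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    true_values = np.array([0.2, 0.5, 0.8])     # unknown mean reward of each action
    counts = np.ones(3)                         # play each action once to start
    sums = rng.normal(true_values, 1.0)

    for t in range(1, 1000):
        means = sums / counts
        width = 2.0 * np.sqrt(np.log(t + 3) / counts)   # crude confidence half-width
        a = int(np.argmax(means + width))               # action with the highest upper limit
        sums[a] += rng.normal(true_values[a], 1.0)
        counts[a] += 1

    print(counts)   # most plays end up on the action with true value 0.8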
There is also a well-known algorithm for computing the Bayes optimal way to balance exploration and exploitation. This method is computationally intractable when done exactly, but there may be
efficient ways to approximate it. In this method we assume that we know the distribution of problem instances, that is, the probability of each possible set of true action values. Given any action
selection, we can then compute the probability of each possible immediate reward and the resultant posterior probability distribution over action values. This evolving distribution becomes the
information state of the problem. Given a horizon, say 1000 plays, one can consider all possible actions, all possible resulting rewards, all possible next actions, all next rewards, and so on for
all 1000 plays. Given the assumptions, the rewards and probabilities of each possible chain of events can be determined, and one need only pick the best. But the tree of possibilities grows extremely
rapidly; even if there are only two actions and two rewards, the tree will have 2^2000 leaves.
The classical solution to balancing exploration and exploitation is the computation of Gittins indices. These provide an optimal solution to a certain kind of bandit problem more general than that considered here but
that assumes the prior distribution of possible problems is known. Unfortunately, neither the theory nor the computational tractability of this method appear to generalize to the full reinforcement
learning problem that we consider in the rest of the book.
|
{"url":"http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node25.html","timestamp":"2014-04-17T21:31:30Z","content_type":null,"content_length":"8510","record_id":"<urn:uuid:876e8a4f-ffa1-4588-aa46-2945452b39c0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Numerical analysis for physicists
hi all
I am looking for a physics article (direct link) to solve with numerical analysis methods like:
Euler and Heun methods; Runge-Kutta and predictor-corrector methods; systems of ordinary differential equations; boundary-value problems; finite-difference methods.
wave equation; diffusion equation - explicit and implicit methods; Crank-Nicholson method; Poisson equation; Schroedinger equation
please help!!
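For reference, here is a minimal sketch of one of the methods listed above (the classical fourth-order Runge-Kutta scheme applied to a simple harmonic oscillator; the example problem and step size are arbitrary choices):

    import numpy as np

    def rk4_step(f, t, y, h):
        # one classical Runge-Kutta (RK4) step for y' = f(t, y)
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # harmonic oscillator x'' = -x as a first-order system y = (x, v)
    f = lambda t, y: np.array([y[1], -y[0]])
    steps = 100
    h = 2 * np.pi / steps
    y = np.array([1.0, 0.0])
    for i in range(steps):
        y = rk4_step(f, i * h, y, h)
    print(y)   # approximately (1, 0) again after one full period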
|
{"url":"http://www.physicsforums.com/showthread.php?t=584297","timestamp":"2014-04-17T03:57:37Z","content_type":null,"content_length":"21650","record_id":"<urn:uuid:4e5163ee-579b-4228-9412-131cfeb6c6c2>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] FFT definition
Hanno Klemm klemm@phys.ethz...
Mon Feb 5 04:20:47 CST 2007
Hi there,
I have a question regarding the definitions surrounding FFTs. The help
to numpy.fft.fft says:
>>> help(N.fft.fft)
Help on function fft in module numpy.fft.fftpack:
fft(a, n=None, axis=-1)
fft(a, n=None, axis=-1)
Will return the n point discrete Fourier transform of a. n
defaults to the
length of a. If n is larger than a, then a will be zero-padded to
make up
the difference. If n is smaller than a, the first n items in a will be
The packing of the result is "standard": If A = fft(a, n), then A[0]
contains the zero-frequency term, A[1:n/2+1] contains the
positive-frequency terms, and A[n/2+1:] contains the negative-frequency
terms, in order of decreasingly negative frequency. So for an 8-point
transform, the frequencies of the result are [ 0, 1, 2, 3, 4, -3,
-2, -1].
This is most efficient for n a power of two. This also stores a
cache of
working memory for different sizes of fft's, so you could
run into memory problems if you call this too many times with too many
different n's.
However, the help to numpy.fft.helper.fftfreq says:
>>> help(N.fft.helper.fftfreq)
Help on function fftfreq in module numpy.fft.helper:
fftfreq(n, d=1.0)
fftfreq(n, d=1.0) -> f
DFT sample frequencies
The returned float array contains the frequency bins in
cycles/unit (with zero at the start) given a window length n and a
sample spacing d:
f = [0,1,...,n/2-1,-n/2,...,-1]/(d*n) if n is even
f = [0,1,...,(n-1)/2,-(n-1)/2,...,-1]/(d*n) if n is odd
So one claims, that the packing goes from [0,1,...,n/2,-n/2+1,..,-1]
(fft) and the other one claims the frequencies go from [0,1,...,n/2-1,-n/2,...,-1] (fftfreq).
Is this inconsistent or am I missing something here?
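A quick way to compare the two conventions numerically (assuming a NumPy installation where fftfreq behaves as documented above):

    import numpy as np

    n = 8
    print(np.fft.fftfreq(n) * n)   # [ 0.  1.  2.  3. -4. -3. -2. -1.]
    # The fft docstring lists the bins as [0, 1, 2, 3, 4, -3, -2, -1]; the only
    # difference is the Nyquist bin, and for an n-point DFT the frequencies +n/2
    # and -n/2 label the same bin (they are aliases of each other).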
Hanno Klemm
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-February/025927.html","timestamp":"2014-04-18T23:22:40Z","content_type":null,"content_length":"4437","record_id":"<urn:uuid:0fec56d5-804b-4b08-b92b-ba8a1f6fcd2f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cartan's Corner : Elie Cartan
Elie Cartan
From 1899 to 1945, Elie Cartan, a son of a blacksmith, developed a set of extraordinary mathematical ideas that have yet to be fully exploited in the physical and technical sciences. This WWW site is
dedicated to certain applications of Cartan's methods to problems of dissipative, radiative,
irreversible systems. Although emphasis herein has been placed on hydrodynamic, thermodynamic, and electromagnetic applications, Cartan's techniques can be used on micro and cosmological scales as
Cartan was the inventor of Spinors, spaces with Torsion , a champion of Projective Geometries , and the developer of a system of calculus called
Exterior Differential Forms.
This remarkable calculus goes beyond the geometrical limitations of Tensor Analysis with its restrictions to diffeomorphisms, for Cartan's exterior calculus has Topological content in both its
irreducible (Pfaff dimension) representations, and in its harmonic components (deRham period integrals). Moreover, exterior differential forms are well behaved under functional substitution and the
pullbacks with respect to maps that are not even homeomorphic. Therefore differential forms can be used to study topological evolution, where standard tensor methods on contravariant objects fail.
The philosophy to be developed herein is that most visible physical measurements are recognitions of Topological Defects and that irreversibility and biological aging are expressions of Topological
Evolution .
For some more history, click here.
The book by M A Akivis and B Rosenfeld, Elie Cartan (1869-1951) (Providence R.I., 1993),
and translated by V. Goldberg, is exceptional.
|
{"url":"http://www22.pair.com/csdc/car/carfre2.htm","timestamp":"2014-04-18T10:38:28Z","content_type":null,"content_length":"4121","record_id":"<urn:uuid:5720cdd1-e9d7-4c11-95b4-e48cecb8a98b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Application of automata theory and algebra
Authors: John Rhodes
Publisher: World Scientific
In the library since 23 May 2011
Preview available here
This book was originally written in 1969 by Berkeley mathematician John Rhodes. It is the founding work in what is now called algebraic engineering, an emerging field created by using the unifying
scheme of finite state machine models and their complexity to tie together many fields: finite group theory, semigroup theory, automata and sequential machine theory, finite phase space physics,
metabolic and evolutionary biology, epistemology, mathematical theory of psychoanalysis, philosophy, and game theory. The author thus introduced a completely original algebraic approach to complexity
and the understanding of finite systems. The unpublished manuscript, often referred to as "The Wild Book", became an underground classic, continually requested in manuscript form, and read by many
leading researchers in mathematics, complex systems, artificial intelligence, and systems biology. Yet it has never been available in print until now. This first published edition has been edited and
updated by Chrystopher Nehaniv for the 21st century. Its novel and rigorous development of the mathematical theory of complexity via algebraic automata theory reveals deep and unexpected connections
between algebra (semigroups) and areas of science and engineering. Co-founded by John Rhodes and Kenneth Krohn in 1962, algebraic automata theory has grown into a vibrant area of research, including
the complexity of automata, and semigroups and machines from an algebraic viewpoint, and which also touches on infinite groups, and other areas of algebra. This book sets the stage for the
application of algebraic automata theory to areas outside mathematics. The material and references have been brought up-to-date by the editor as much as possible, yet the book retains its distinct
character and the bold yet rigorous style of the author. Included are treatments of topics such as models of time as algebra via semigroup theory; evolution-complexity relations applicable to both
ontogeny and evolution; an approach to classification of biological reactions and pathways; the relationships among coordinate systems, symmetry, and conservation principles in physics; discussion of
punctuated equilibrium (prior to Stephen Jay Gould); games; and applications to psychology, psychoanalysis, epistemology, and the purpose of life. The approach and contents will be of interest to a
variety of researchers and students in algebra as well as to the diverse, growing areas of applications of algebra in science and engineering. Moreover, many parts of the book will be intelligible to
non-mathematicians, including students and experts from diverse backgrounds.
|
{"url":"http://www.uclouvain.be/366770.html","timestamp":"2014-04-19T09:35:58Z","content_type":null,"content_length":"21143","record_id":"<urn:uuid:3584e46f-90d8-46d6-9419-35cd8897b717>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Typos in Introduction to Monte Carlo Methods with R
The two translators of our book in Japanese, Kazue & Motohiro Ishida, contacted me about some R code mistakes in the book. The translation is nearly done and they checked every piece of code in the
book, an endeavour for which I am very grateful! Here are the two issues they have noticed (after incorporating the typos signaled in the overall up-to-date summary):
First, in Example 4.4, I omitted some checkings and forgot about a minus sign, meaning Figure 4.4 (right) is wrong. (The more frustrating since this example covers perplexity!) The zeros must be
controlled via code lines like
> wachd[wachd<10^(-10)]=10^(-10)
instead of the meaningless
and the addition of
> plex[plex>0]=0
> plech[plech>0]=0
after the definition of those two variables. (Because entropies are necessarily positive.) The most glaring omission is however the minus in
> plob=apply(exp(-plex),1,quantile,c(.025,.975))
> ploch=apply(exp(-plech),1,quantile,c(.025,.975))
which modifies Figure 4.4 in the following
The second case is Example 7.3 where I forgot to account for the log-transform of the data, which should read (p.204):
> x=c(91,504,557,609,693,727,764,803,857,929,970,1043,
+ 1089,1195,1384,1713)
> x=log(x)
and compounded my mistake by including log-transforms of the parameters that should not be there (pp.204-205)! So (for my simulations) the posterior means of θ and σ² are 6.62 and 0.661,
respectively, leading to an estimate of σ of 0.802. There should be no log transform in Exercise 7.3 either.
The same corrections apply to the French translation, most obviously…
One Response to “Typos in Introduction to Monte Carlo Methods with R”
1. I haven’t read the book yet but I am interested in demonstrating to the project managers how R can run a monte carlo analysis on a Excel project schedule. What should I read for this ?
I can program R well.
|
{"url":"https://xianblog.wordpress.com/2011/10/13/typos-in-introduction-to-monte-carlo-methods-with-r/","timestamp":"2014-04-18T03:00:55Z","content_type":null,"content_length":"40795","record_id":"<urn:uuid:c6b551d8-442e-4879-a95e-58f694883a38>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
|
cimex - written in most popular ciphers: caesar cipher, atbash, polybius square , affine cipher, baconian cipher, bifid cipher, rot13, permutation cipher
Caesar cipher
Caesar cipher, is one of the simplest and most widely known encryption techniques. The transformation can be represented by aligning two alphabets, the cipher alphabet is the plain alphabet rotated
left or right by some number of positions.
When encrypting, a person looks up each letter of the message in the 'plain' line and writes down the corresponding letter in the 'cipher' line. Deciphering is done in reverse.
The encryption can also be represented using modular arithmetic by first transforming the letters into numbers, according to the scheme, A = 0, B = 1,..., Z = 25. Encryption of a letter x by a shift
n can be described mathematically as E_n(x) = (x + n) mod 26.
Plaintext: cimex
cipher variations: (table omitted)
Decryption is performed similarly, D_n(x) = (x - n) mod 26.
(There are different definitions for the modulo operation. In the above, the result is in the range 0...25. I.e., if x+n or x-n are not in the range 0...25, we have to subtract or add 26.)
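A minimal sketch of the shift in code, for lowercase letters only (the function name is an illustrative choice):

    def caesar(text, n):
        # shift each lowercase letter by n positions (encrypt with +n, decrypt with -n)
        return ''.join(chr((ord(c) - ord('a') + n) % 26 + ord('a')) for c in text)

    print(caesar('cimex', 3))               # 'flpha'
    print(caesar(caesar('cimex', 3), -3))   # 'cimex': decryption undoes the shift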
Read more ... Atbash Cipher
Atbash is an ancient encryption system created in the Middle East. It was originally used in the Hebrew language.
The Atbash cipher is a simple substitution cipher that relies on transposing all the letters in the alphabet such that the resulting alphabet is backwards.
The first letter is replaced with the last letter, the second with the second-last, and so on.
An example plaintext to ciphertext using Atbash:
Plain:  cimex
Cipher: xrnvc
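The same transposition as a short sketch (again only for lowercase letters):

    def atbash(text):
        # map the i-th letter of the alphabet to the (25 - i)-th letter
        return ''.join(chr(ord('z') - (ord(c) - ord('a'))) for c in text)

    print(atbash('cimex'))   # 'xrnvc', matching the table above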
Read more ...
|
{"url":"http://easyciphers.com/cimex","timestamp":"2014-04-20T00:39:38Z","content_type":null,"content_length":"21984","record_id":"<urn:uuid:69869353-283f-4d68-b5b6-fab464600979>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Reply to Monroe Eskew message under subject: A new definition of Cardinality
[FOM] Reply to Monroe Eskew message under subject: A new definition of Cardinality
Zuhair Abdul Ghafoor Al-Johar zaljohar at yahoo.com
Thu Nov 26 06:08:37 EST 2009
Hello Monroe
My comments are inserted below:
Monroe wrote
It is worth pointing out that your definition still has a disadvantage
if you don't assume choice. Without choice, not all cardinalities are
comparable. If they were then all cardinalities in your sense would
be comparable to a cardinality that contains a Von Neumann ordinal,
but from this you could derive choice. (It is nice to have a linear
order on set sizes.)
My reply
I agree totally with you regarding this particular aspect.
Monroe wrote
I'm not sure if you can prove in ZF that every set has a cardinality
in your sense. For every set X is there a set Y and a function f such
that Y is hereditary and f is a bijection between X and Y?
My reply
Yes, in ZFC this can be proved. But whether this can be proved in ZF is the real question.
Thomas Jech has a paper proving that the class of all hereditarily countable sets is a set without choice, and it appears that this can be generalized in some way to any arbitrary x. So it seems initially that these cardinals I defined can be proved to exist for any arbitrary x, with or without choice. However, this issue remains to be settled, and it appears to be a very complicated one.
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2009-November/014189.html","timestamp":"2014-04-19T17:03:52Z","content_type":null,"content_length":"3901","record_id":"<urn:uuid:ad915d31-bcb0-463e-ad9e-71396a6edb35>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elasticity: Tensor, Dyadic, and Engineering Approaches (Dover Books on Engineering)
Written for advanced undergraduates and beginning graduate students, this exceptionally clear text treats both the engineering and mathematical aspects of elasticity. It is especially useful because
it offers the theory of linear elasticity from three standpoints: engineering, Cartesian tensor, and vector-dyadic. In this way the student receives a more complete picture and a more thorough
understanding of engineering elasticity. Prerequisites are a working knowledge of statics and strength of materials plus calculus and vector analysis. The first part of the book treats the theory of
elasticity by the most elementary approach, emphasizing physical significance and using engineering notations. It gives engineering students a clear, basic understanding of linear elasticity. The
latter part of the text, after Cartesian tensor and dyadic notations are introduced, gives a more general treatment of elasticity. Most of the equations of the earlier chapters are repeated in
Cartesian tensor notation and again in vector-dyadic notation. By having access to this threefold approach in one book, beginning students will benefit from cross-referencing, which makes the
learning process easier. Another helpful feature of this text is the charts and tables showing the logical relationships among the equations--especially useful in elasticity, where the mathematical
chain from definition and concept to application is often long. Understanding of the theory is further reinforced by extensive problems at the end of of each chapter.
1.1 Introduction
1.2 "Body Forces, Surface Forces, and Stresses"
1.3 Uniform State of Stress (Two-Dimensional)
1.4 Principal Stresses
1.5 Mohr's Circle of Stress
1.6 State of Stress at a Point
1.7 Differential Equations of Equilibrium
1.8 Three-Dimensional State of Stress at a Point
1.9 Summary
2.1 Introduction
2.2 Strain-Displacement Relations
2.3 Compatibility Equations
2.4 State of Strain at a Point
2.5 General Displacements
2.6 Principle of Superposition
2.7 Summary
3.1 Introduction
3.2 Generalized Hooke's Law
3.3 Bulk Modulus of Elasticity
3.4 Summary
4.1 Introduction
4.2 Boundary Conditions
4.3 Governing Equations in Plane Strain Problems
4.4 Governing Equations in Three-Dimensional Problems
4.5 Principal of Superposition
4.6 Uniqueness of Elasticity Solutions
4.7 Saint-Venant's Principle
4.8 Summary
5.1 Introduction
5.2 Plane Stress Problems
5.3 Approximate Character of Plane Stress Equations
5.4 Polar Coordinates in Two-Dimensional Problems
5.5 Axisymmetric Plane Problems
5.6 The Semi-Inverse Method
6.1 General Solution of the Problem
6.2 Solutions Derived from Equations of Boundaries
6.3 Membrane (Soap Film) Analogy
6.4 Multiply Connected Cross Sections
6.5 Solution by Means of Separation of Variables
7.1 Introduction
7.2 Strain Energy
7.3 Variable Stress Distribution and Body Forces
7.4 Principle of Virtual Work and the Theorem of Minimum Potential Energy
7.5 Illustrative Problems
7.6 Rayleigh-Ritz Method
8.1 Introduction
8.2 Indicial Notation and Vector Transformations
8.3 Higher-Order Tensors
8.4 Gradient of a Vector
8.5 The Kronecker Delta
8.6 Tensor Contraction
8.7 The Alternating Tensor
8.8 The Theorem of Gauss
9 THE STRESS TENSOR
9.1 State of Stress at a Point
9.2 Principal Axes of the Stress Tensor
9.3 Equations of Equilibrium
9.4 The Stress Ellipsoid
9.5 Body Moment and Couple Stress
10 "STRAIN, DISPLACEMENT, AND THE GOVERNING EQUATIONS OF ELASTICITY"
10.1 Introduction
10.2 Displacement and Strain
10.3 Generalized Hooke's Law
10.4 Equations of Compatibility
10.5 Governing Equations in Terms of Displacement
10.6 Strain Energy
10.7 Governing Equations of Elasticity
11.1 Introduction
11.2 Review of Basic Notations and Relations in Vector Analysis
11.3 Dyadic Notation
11.4 Vector Representation of Stress on a Plane
11.5 Equations of Transformation of Stress
11.6 Equations of Equilibrium
11.7 Displacement and Strain
11.8 Generalized Hooke's Law and Navier's Equation
11.9 Equations of Compatibility
11.10 Strain Energy
11.12 Governing Equations of Elasticity
12.1 Introduction
12.2 Scale Factors
12.3 Derivatives of the Unit Vectors
12.4 Vector Operators
12.5 Dyadic Notation and Dyadic Operators
12.6 Governing Equations of Elasticity in Dyadic Notation
12.7 Summary of Vector and Dyadic Operators in Cylindrical and Spherical Coordinates
13.1 Introduction
13.2 Displacement Functions
13.3 The Galerkin Vector
13.4 The Solution of Papkovich-Neuber
13.5 Stress Functions
|
{"url":"http://www.powells.com/biblio/61-9780486669588-0","timestamp":"2014-04-16T21:29:58Z","content_type":null,"content_length":"80004","record_id":"<urn:uuid:cba5314f-2a9c-49fd-bf4f-abfe8a89a4be>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[amsat-bb] Re: NASA's American Student Moon Orbiter...
i8cvs domenico.i8cvs at tin.it
Fri Jul 4 06:42:47 PDT 2008
----- Original Message -----
From: <G0MRF at aol.com>
To: <amsat-bb at amsat.org>
Sent: Friday, July 04, 2008 3:44 AM
Subject: [amsat-bb] Re: NASA's American Student Moon Orbiter...
> In a message dated 04/07/2008 01:16:33 GMT Standard Time,
> domenico.i8cvs at tin.it writes:
> Hi Ed, KL7UW
> If we put AO40 at a distance of 400.000 km instead of 60.000 km
> from the earth the increase of isotropic attenuation at 2400 MHz is
> about 16 dB etc etc etc.........
> Hi Ed / Dom
> On the other hand, if you were to reduce path loss by using 70cm as the
> uplink band and 2m as the downlink the numbers begin to look quite
> possible.
Hi David, G0MRF
Decreasing the frequency decreases the absolute value of the isotropic attenuation, but the difference in path loss between 400,000 km and 60,000 km
is the same 16.5 dB at any frequency (20 log10(400,000/60,000) ≈ 16.5 dB). So, to compensate for the above attenuation by using lower
frequencies, you need bigger antennas both on the satellite and at the ground station.
> Also, if the satellite is orbiting the moon, then it's quite likely that
> the attitude will be such that the experimental end of the satellite is
> pointing at the moon's surface. This probably also means that the
> communication antennas are not pointing at the earth, so high gain
> will not be possible.
> Maybe 3 or 4 dB is the limit.
This is why it does not make sense to put a transponder in orbit around
the moon, for the simple reason that it is much simpler and cheaper
to put it into a HEO earth orbit.
> So how about 10W of 2m on the satellite and a passband that's say
> 5kHz wide? Not good for SSB, but passable for CW or reasonable
> speed coherent BPSK
> Regards
> David
Considering only the 2 meter downlink, suppose we put AO40 at 400,000 km
with its antennas pointing at the earth at a low squint angle, let's say
less than 10 degrees.
The gain of the AO40 2 meter antenna was 10 dBi, and we put your
10 watts into it.
Suppose that your 2 meter antenna has a gain of 13 dBi and the overall
noise figure of your receiving system is NF = 0.7 dB (about 51 kelvin), so that
the noise floor in a CW passband of 500 Hz, with the antenna looking
at the moon (about 200 kelvin), is about -178 dBW.
Suppose also that the station in QSO with you has enough 70 cm EIRP capability
to get the full 10 watts of 2 meter power from the transponder for you alone;
we can calculate that later on.
2 meter downlink budget calculation:
Satellite power ................................ +10 dBW
Satellite antenna gain ......................... +10 dBi
Satellite EIRP ................................. +20 dBW (100 W EIRP)
2 m isotropic attenuation over 400,000 km ..... -188 dB
Power received by an isotropic 2 m
ground antenna ................................ -168 dBW
2 m ground station antenna gain ................ +13 dBi
Power at the 2 m RX input ..................... -155 dBW
2 m receiver noise floor (500 Hz CW) .......... -178 dBW
Received CW signal S/N ......................... +23 dB
If we increase the bandwidth to 2500 Hz for an SSB QSO, then the noise floor
of the receiving system increases by 10 log10(2500/500) ≈ 7 dB, i.e.
it becomes about -171 dBW, and the SSB signal will be received with an
S/N ratio of 23 - 7 = 16 dB, which is a very strong SSB signal.
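For reference, the arithmetic above can be reproduced with a short script. This is only a sketch of the same numbers: the 145.9 MHz downlink frequency is my assumption (the post only says "2 m"), and the results are rounded as in the post.

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def fspl_db(distance_m, freq_hz):
    # Free-space (isotropic) path loss in dB
    lam = 299_792_458.0 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / lam)

def noise_floor_dbw(t_sys_kelvin, bandwidth_hz):
    # Thermal noise power kTB, expressed in dBW
    return 10 * math.log10(k_B * t_sys_kelvin * bandwidth_hz)

eirp_dbw  = 10 * math.log10(10) + 10        # 10 W plus 10 dBi -> +20 dBW
path_db   = fspl_db(400_000e3, 145.9e6)     # ~188 dB over 400,000 km
rx_dbw    = eirp_dbw - path_db + 13         # ~ -155 dBW after a 13 dBi antenna
floor_cw  = noise_floor_dbw(200 + 51, 500)  # moon + receiver noise, 500 Hz: ~ -178 dBW
floor_ssb = noise_floor_dbw(200 + 51, 2500) # 2500 Hz SSB bandwidth: ~7 dB higher
print(f"CW  S/N ~ {rx_dbw - floor_cw:.1f} dB")   # about 23 dB
print(f"SSB S/N ~ {rx_dbw - floor_ssb:.1f} dB")  # about 16 dB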
Be aware that the above figures are based on the assumption that the
satellite antennas are pointing toward the earth, which is not the case with
a moon-orbiting satellite.
In addition, we assume that the station in QSO with you has enough 70 cm
EIRP capability to get 10 watts from the 2 m transponder for you alone.
On the other hand, if a fixed 10 dBi 2 meter antenna is placed on the
moon and oriented toward the earth, it could easily cover the inclination
× libration window without any adjustment, and from the point of
view of the downlink alone, with 10 watts it could easily be used for a
transponder on the moon.
If you redo the downlink budget calculation assuming that
the 2 meter transponder delivers only 2.5 watts for you, then you
will realize that the transponder can accommodate 3 more stations if each
one gets 2.5 watts as well.
In this case your S/N ratio will still be +15.5 dB on CW and +8.5 dB
on SSB, and the same is true for the other 3 users.
73" de
i8CVS Domenico
More information about the AMSAT-BB mailing list
|
{"url":"http://amsat.org/pipermail/amsat-bb/2008-July/012363.html","timestamp":"2014-04-18T10:39:58Z","content_type":null,"content_length":"7726","record_id":"<urn:uuid:999f7398-7ac3-4aa4-bacf-421a00bdcc18>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Abstracts 51(5)
Syst. Biol. 51(5) 2002
Huelsenbeck et al.
Abstract. Only recently has Bayesian inference of phylogeny been proposed. The method is now a practical alternative to the other methods; indeed, the method appears to possess advantages over the
other methods in terms of ability to use complex models of evolution, ease of interpretation of the results, and computational efficiency. However, the method should be used cautiously. The results
of a Bayesian analysis should be examined with respect to the sensitivity of the results to the priors used and the reliability of the Markov chain Monte Carlo approximation of the probabilities of
Thorne and Kishino
Abstract. Bayesian methods for estimating evolutionary divergence times are extended to multigene data sets, and a technique is described for detecting correlated changes in evolutionary rates among
genes. Simulations are employed to explore the effect of multigene data on divergence time estimation, and the methodology is illustrated with a previously published data set representing diverse
plant taxa. The fact that evolutionary rates and times are confounded when sequence data are compared is emphasized and the importance of fossil information for disentangling rates and times is
Aris-Brosou and Yang
Abstract. The molecular clock, i.e., constancy of the rate of evolution over time, is commonly assumed in estimating divergence dates. However, this assumption is often violated and has drastic
effects on date estimation. Recently, a number of attempts have been made to relax the clock assumption. One approach is to use maximum likelihood, which assigns rates to branches and allows the
estimation of both rates and times. An alternative is the Bayes approach, which models the change of the rate over time. A number of models of rate change have been proposed. We have extended and
evaluated models of rate evolution, that is, the lognormal and its recent variant, along with the gamma, the exponential and the Ornstein-Uhlenbeck processes. These models were first applied to a
small hominoid data set, where an empirical Bayes approach was used to estimate the hyperparameters that measure the amount of rate variation. We found that estimation of divergence times was
sensitive to these hyperparameters, especially when the assumed model is close to the clock assumption. The rate and date estimates varied little from model to model, although the posterior Bayes
factor indicated the Ornstein-Uhlenbeck process outperformed the other models. To demonstrate the importance of allowing for rate change across lineages, this general approach was used to analyze a
larger data set consisting of the 18S rRNA gene of 40 metazoan species. We obtained date estimates consistent with paleontological records, the deepest split within the group being about 560 million
years ago. Estimates of the rates were in accordance with the Cambrian explosion hypothesis, and suggested some more recent lineage-specific bursts of evolution.
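As a purely illustrative picture of what an Ornstein-Uhlenbeck model of rate change means, the sketch below simulates one trajectory of the log substitution rate along a lineage with a simple Euler-Maruyama scheme; the function and all parameter values are mine and are not taken from the paper.

import numpy as np

def ou_log_rate_path(log_r0, mean_log_r, pull, sigma, t_total, n_steps, seed=0):
    # d(log r) = pull * (mean_log_r - log r) dt + sigma dW  (Euler-Maruyama)
    rng = np.random.default_rng(seed)
    dt = t_total / n_steps
    log_r = np.empty(n_steps + 1)
    log_r[0] = log_r0
    for i in range(n_steps):
        drift = pull * (mean_log_r - log_r[i]) * dt
        log_r[i + 1] = log_r[i] + drift + sigma * np.sqrt(dt) * rng.normal()
    return np.exp(log_r)  # back from log rate to rate

rates = ou_log_rate_path(np.log(0.01), np.log(0.01), 1.0, 0.3, 1.0, 100)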
Suchard et al.
Abstract. Current methods to identify recombination between subtypes of human immunodeficiency virus-1 (HIV-1) fall into a sequential testing trap, in which significance is assessed conditional on
parental representative sequences and crossover points (COPs) that maximize the same test statistic. We overcome this shortfall by testing for recombination while simultaneously inferring parental
heritage and COPs using an extended Bayesian multiple change-point model. The model assumes that aligned molecular sequence data consist of an unknown number of contiguous segments that may support
alternative topologies or varying evolutionary pressures. We allow for heterogeneity in the substitution process and specifically test for inter-subtype recombination using Bayes factors. We also
develop a new class of priors to assess significance across a wide range of support for recombination in the data. We apply our method to three putative, gag gene recombinants. HIV-1 isolate RW024
decisively supports recombination with an inferred parental heritage of AD and a COP 95% Bayesian credible interval of (1152; 1178) using the HXB2 numbering scheme. HIV-1 isolate VI557 barely
supports recombination. HIV-1 isolate RF decisively rejects recombination, as expected given the sequence is commonly used as a reference sequence for subtype B. We employ scaled regeneration
quantile plots to assess convergence and find this approach convenient to use even for our variable dimensional model parameter space.
Abstract. Mapping of mutations on a phylogeny has been a commonly used analytical tool in phylogenetics and molecular evolution. However, the common approaches for mapping mutations based on
parsimony have lacked a solid statistical foundation. Here, I present a Bayesian method for mapping mutations on a phylogeny. I illustrate some of the common problems associated with using parsimony
and suggest instead that inferences in molecular evolution can be made on the basis of the posterior distribution of the mappings of mutations. A method for simulating a mapping from the posterior
distribution of mappings is also presented and the utility of the method is illustrated on two previously published data sets. Applications include a method for testing for variation in the
substitution rate along the sequence and a method for testing if the d[N]/d[S] ratio varies among lineages in the phylogeny.
Miller et al.
Abstract. The objective of this study was to obtain a quantitative assessment of the monophyly of morning glory taxa, specifically the genus Ipomoea and the tribe Argyreieae. Previous systematic
studies of morning glories intimated the paraphyly of Ipomoea by suggesting that the genera within the tribe Argyreieae are derived from within Ipomoea; however, no quantitative estimates of
statistical support were developed to address these questions. We applied a Bayesian analysis to provide quantitative estimates of monophyly in an investigation of morning glory relationships using
DNA sequence data. We also explored various approaches for examining convergence of the Markov chain Monte Carlo (MCMC) simulation of the Bayesian analysis by running 18 separate analyses varying in
length. We found convergence of the important components of the phylogenetic model (the tree with the maximum posterior probability, branch lengths, the parameter values from the DNA substitution
model, and the posterior probabilities for clade support) for these data after one million generations of the MCMC simulations. In the process, we identified a run where the parameter values obtained
were often outside the range of values obtained from the other runs, suggesting an aberrant result. In addition, we compared the Bayesian method of phylogenetic analysis to maximum likelihood and
maximum parsimony. The results from the Bayesian analysis and the maximum likelihood analysis were similar for topology, branch lengths, and parameters of the DNA substitution model. Topologies also
were similar in the comparison between the Bayesian analysis and maximum parsimony, although the posterior probabilities and the bootstrap proportions exhibited some striking differences. In a
Bayesian analysis of three data sets (ITS sequences, waxy sequences, and ITS + waxy sequences) no support for the monophyly of the genus Ipomoea, or for the tribe Argyreieae, was observed, with the
estimate of the probability of the monophyly of these taxa being less than 3.4 X 10^-7.
Abstract. Methods for Bayesian inference of phylogeny using DNA sequences based on Markov chain Monte Carlo (MCMC) techniques allow the incorporation of arbitrarily complex models of the DNA
substitution process, and other aspects of evolution. This has increased the realism of models, potentially improving the accuracy of the methods, and is largely responsible for their recent
popularity. Another consequence of the increased complexity of models in Bayesian phylogenetics is that these models have, in several cases, become overparameterized. In such cases, some parameters
of the model are not identifiable; different combinations of nonidentifiable parameters lead to the same likelihood, making it impossible to decide among the potential parameter values based on the
data. Overparameterized models can also slow the rate of convergence of MCMC algorithms due to large negative correlations among parameters in the posterior probability distribution. Functions of
parameters can sometimes be found, in overparameterized models, that are identifiable, and inferences based on these functions are legitimate. Examples are presented of overparameterized models that
have been proposed in the context of several Bayesian methods for inferring the relative ages of nodes in a phylogeny when the substitution rate evolves over time.
Marvaldi et al.
Abstract. The main goals of this study were to provide a robust phylogeny for the families of Curculionoidea, to discover relationships and major natural groups within the family Curculionidae, and
to clarify the evolution of larval habits and host-plant associations in weevils in order to analyze their role in weevil diversification. Phylogenetic relationships among the weevils
(Curculionoidea) were inferred from analysis of nucleotide sequences of 18S ribosomal DNA (∼2,000 bases) and 115 morphological characters of larval and adult stages. A worldwide sample of 100 species
was made to maximize representation of weevil morphological and ecological diversity. All families and the main subfamilies of Curculionoidea are represented. The family Curculionidae sensu lato is
represented by about 80 species in 30 "subfamilies" of traditional classifications. Phylogenetic reconstruction was done by parsimony analysis of separate and combined molecular and morphological
data matrices, and also by Bayesian analysis of the molecular data; tree topology support was evaluated. Results of the combined analysis of 18S/morphology show that monophyly of, and relationships
among, each of the weevil families are well supported with the topology ((Nemonychidae Anthribidae) (Belidae (Attelabidae (Caridae (Brentidae Curculionidae))))). Within the clade Curculionidae sensu
lato the basal positions are occupied by (mostly monocot-associated) taxa with the "primitive" type of male genitalia, followed by the Curculionidae sensu stricto, made up of groups with the
"derived" type of male genitalia. High support values were found for the monophyly of some distinct curculionid groups like the Dryophthorinae (several tribes represented) and the Platypodinae
(Tesserocerini plus Platypodini), among others. However, the subfamilial relationships in Curculionidae are unresolved or weakly supported. The phylogeny estimate based on 18S/morphology suggests
that diversification in weevils is accompanied by niche shifts in host plant associations and in larval habits. Pronounced conservatism is shown in larval feeding habits, particularly in the host
tissue consumed. Multiple shifts to use of angiosperms in Curculionoidea were identified, each time associated with increases in weevil diversity, and subsequent shifts back to gymnosperms,
particularly in the Curculionidae.
Kopp and True
Abstract. The melanogaster species group of Drosophila (subgenus Sophophora) has long been a favored model for evolutionary studies due to its morphological and ecological diversity and wide
geographic distribution. However, phylogenetic relationships among species and subgroups within this lineage are not well understood. We reconstructed the phylogeny of 17 species representing 7
"oriental" species subgroups, which are especially closely related to D. melanogaster. We used DNA sequences of 4 nuclear and 2 mitochondrial loci in an attempt both to obtain the best possible
estimate of species phylogeny and to assess the extent and sources of remaining uncertainties. Comparison of trees derived from single-gene datasets has allowed us to identify several strongly
supported clades, which were also consistently seen in combined analyses. The relationships among these clades were less certain. The combined dataset contains data partitions that are incongruent
with each other. Trees reconstructed from the combined dataset and from internally homogenous datasets consisting of 3-4 genes each differ at several deep nodes. The total dataset tree is fully
resolved and strongly supported at most nodes. Statistical tests indicated that this tree is compatible with all individual and combined datasets. Therefore, we accepted this tree as the most likely
model of historical relationships. We compared the new molecular phylogeny to earlier estimates based on morphology and chromosome structure and discuss its taxonomic and evolutionary implications.
Szumik et al.
Abstract. A formal method to determine areas of endemism is presented. The method is based on dividing the study region into cells, and counting the number of species that can be considered as
endemic, for a given set of cells (=area). Thus, the areas which imply that the maximum number of species can be considered as endemic, are to be preferred. This is the first method to identify areas
of endemism which implements an optimality criterion directly based on considering the aspects of species distribution which are relevant to endemism. The method is implemented in two computer
programs, NDM and VNDM, available from the authors.
|
{"url":"http://hydrodictyon.eeb.uconn.edu/systbiol/issues/51_5/abstracts51_5.html","timestamp":"2014-04-17T04:09:05Z","content_type":null,"content_length":"15071","record_id":"<urn:uuid:1870dc8d-f4c1-4115-a76b-a1b9ede0ec4b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
expected number of balls in k emptiest bins
My problem is the following: throw p balls (randomly, independently) into n bins. What is the expected number of balls in the k emptiest bins?
I have some results about the expected number of bins with $x$ balls (it would be $n {p \choose x} (\frac{1}{n})^x (1-\frac{1}{n})^{p-x}$), but I can't deduce what I want from that...
Did I miss something obvious?
Any pointer?
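(Not an answer to the closed-form question, but a quick Monte Carlo check is easy to write; this sketch and its function name are mine, not from the thread.)

import random

def mean_balls_in_k_emptiest(p, n, k, trials=10_000, seed=0):
    # Estimate E[total balls in the k emptiest of n bins] after p uniform throws
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        counts = [0] * n
        for _ in range(p):
            counts[rng.randrange(n)] += 1
        total += sum(sorted(counts)[:k])  # the k smallest occupancy counts
    return total / trials

print(mean_balls_in_k_emptiest(p=50, n=10, k=3))  # e.g. 50 balls, 10 bins, 3 emptiest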
Would it not be better to ask this at math.stackexchange.com? – András Bátkai Jul 11 '11 at 20:29
Not a research question, voting to close. – Igor Rivin Jul 11 '11 at 20:37
This does not seem like such an unreasonable question to me, though the answer should follow from the theory of order statistics. – Richard Stanley Jul 11 '11 at 20:46
4 Answers
This is only a very partial answer, and I would have considered putting it as a comment if I were a more reputable contributor, but I hope it's at least somewhat helpful.
Consider the simplest possible non-trivial case, $n=2$ and $k=1$ -- i.e., you're looking for the expected number of balls in the "lighter" of two bins. It's easy to see this is related to
asking for the expected distance to the origin of a random walk, which, if I remember correctly, is asymptotic to $\sqrt{2p/\pi}$ after $p$ steps. So the expected number of balls in the smaller bin is
asymptotic to $p/2 - \sqrt{p/(2\pi)}$.
http://www.cs.berkeley.edu/~jfc/cs174/lecs/lec5/lec5.pdf
I'm sorry, but I don't see how this helps me... I've "solved" the question of empty bins, and of the "probability of the existence of bins with >1 balls", which is the birthday paradox. My
question is really how to get from what you give me to the "number of balls in the k emptiest bins" (not the number of empty bins). – Marc Jul 11 '11 at 21:47
It helps you because it tells you what the distribution of the ball numbers is. The number of balls you care about is just the left tail of the probability distribution function. – Igor Rivin
Jul 11 '11 at 23:27
The distribution of the number of balls in each bin doesn't directly tell you about the order statistics, which depend on the joint distribution. – Douglas Zare Jul 12 '11 at 1:02
What I had meant to say was that the problem is equivalent to knowing how many urns there are with $j$ balls, for $j = 0, 1, \dots,$ which is not the distribution of balls in each bin. –
Igor Rivin Jul 12 '11 at 1:31
I thought that too (that's why I put it in the question), but the only thing I see now would be: 1) use Stirling's formula to get rid of the \choose (let F denote this expression for the number of bins
containing x balls); 2) "solve" $\int_{-\infty}^y F \, dx = k$ to find up to which number of balls per bin I have to count balls; and 3) "compute" $\int_{-\infty}^y x F \, dx$. For further
simplifications of the formula, it seems that I would need more assumptions on my variables, and I don't think I can make the ones I would need. That's why I ask if there is another method to
arrive at the result more directly. – Marc Jul 12 '11 at 2:00
It is a generalization of the birthday problem. Here is a one-line R function that simulates the problem (tabulate is used instead of table so that empty bins count as zero):
my_simulation <- function(N, nelems, to) mean(sapply(1:N, function(a) min(tabulate(sample(1:to, nelems, TRUE), nbins = to))))
For instance, my_simulation(1000, 50, 10) gives the answer for 50 balls thrown into 10 bins (1000 simulations). Hope that helps.
You can change min to max (you'll have to write more code for other order statistics) and you get the expected number of balls in the fullest bin. – eulerian Jul 12 '11 at 2:42
The most interesting/relevant thing I found is the Newman-Shepp generalization of the coupon collector problem, which seems to be the exact dual of the emptiest-bin question
(= "how many balls do you have to throw to ensure there are $x$ in every bin [and thus in the emptiest]").
According to http://en.wikipedia.org/wiki/Coupon_collector%27s_problem#Extensions_and_generalizations
...the expected number is $p = n\log n + (x-1)n\log\log n + O(n)$.
So my guess for the emptiest bin would be to invert this formula (express $x$ as a function of the rest), and I have a bound on the number in the k emptiest bins, quite good if $k \ll n$
(well, the estimate being for $n \to \infty$, this seems okay :-) ).
Maybe I should look at the proofs to see if there are ideas that can be adapted and formalized.
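A numerical version of that inversion (my own sketch; the O(n) term is simply dropped, so treat the output as a rough heuristic rather than a bound):

import math

def emptiest_bin_estimate(p, n):
    # Invert p ~ n*log(n) + (x - 1)*n*log(log(n)) for x, ignoring the O(n) term
    x = 1 + (p - n * math.log(n)) / (n * math.log(math.log(n)))
    return max(x, 0.0)  # occupancy cannot be negative

print(emptiest_bin_estimate(p=10_000, n=100))  # rough minimum occupancy with 100 bins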
|
{"url":"http://mathoverflow.net/questions/70050/expected-number-of-balls-in-k-emptiest-bins","timestamp":"2014-04-19T02:18:37Z","content_type":null,"content_length":"72600","record_id":"<urn:uuid:8c12ffde-caf7-4f6f-bc5c-086d31b937be>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Intersection of a ray and a curved surface
I'd like to write a ray tracing program, but I have some issues with the mathematics. I'm trying to do some exercises in a book to help with this, but I don't know how to solve
problems of the form:
Find where (if at all) the ray $ray(t) = (5, -1, 0)^T + t(-1, 1, 1)^T$ intersects the curved surface $z(x, y) = (x - 2)(y - 3) + 4$, if there is more than one intersection, which is the first?
As far as I understand, I should set the two equations equal to each other and try to solve them for the unknown parameter t? I've tried this but I can't get anywhere.
I'd like to write a ray tracing program, but I have some issues with the mathematics. I'm trying to do some exercises in a book to help with this, but I don't know how to solve
problems of the form:
Find where (if at all) the ray $ray(t) = (5, -1, 0)^T + t(-1, 1, 1)^T$ intersects the curved surface $z(x, y) = (x - 2)(y - 3) + 4$, if there is more than one intersection, which is the first?
The one that is closer to the source of the ray, of course! For, you see, a ray is not a line: a ray has a source from which it originates. Once a ray hits a surface it gets reflected - and the
reflected ray may, in principle, hit the same or another surface at some other point.
As far as I understand, I should set the two equations equal to each other and try to solve them for the unknown parameter t? I've tried this but I can't get anywhere.
I find that surprising: why, you just replace the coordinates x,y,z in the equation for the surface with terms expressed with t, i.e. x=5-t, y=-1+t and z=t, based on the equation of the ray, then
solve for t.
In your example this gives the two solutions t1=2 and t2=4, if I am not mistaken (check it for yourself). Now plug these two values of t back into the equation of the ray and you have found the
points of intersection.
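For anyone automating this in a ray tracer, the substitution can also be checked symbolically; a minimal sketch (mine, not from the thread), assuming sympy is available:

import sympy as sp

t = sp.symbols('t', real=True)
x, y, z = 5 - t, -1 + t, t                            # the ray (5, -1, 0) + t(-1, 1, 1)
roots = sp.solve(sp.Eq(z, (x - 2)*(y - 3) + 4), t)    # substitute into the surface and solve
print(sorted(roots))                                  # [2, 4]
t_first = min(r for r in roots if r >= 0)             # first hit along the ray
print((5 - t_first, -1 + t_first, t_first))           # intersection point (3, 1, 2)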
I agree
|
{"url":"http://mathhelpforum.com/calculus/154771-intersection-ray-curved-surface.html","timestamp":"2014-04-16T08:14:41Z","content_type":null,"content_length":"37888","record_id":"<urn:uuid:ed838b63-01eb-4aaf-947e-e5d96b0aff90>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: problem finding a value
Replies: 3 Last Post: Jan 21, 2013 2:26 PM
Iris Re: problem finding a value
Posted: Jan 21, 2013 11:06 AM
Hello Josh, thank you for your answer!
The problem stands...
imagine that:
x_x =
1.0e+002 *
Columns 1 through 3
1.000000000000000 -0.953406571178785 -1.542839603993645
Columns 4 through 5
0.02146721794700071 -3.268216911140570
The 4th value (i = 4) is inside the range that I want, but then it continues to i = 5, which is not inside the range... so it will do B...
How can I do this?
Thank you!
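Since the thread does not show the actual interval, here is a minimal sketch of the usual pattern; the bounds lo and hi are hypothetical placeholders, and the two print statements stand in for the "A" and "B" actions.

import numpy as np

x_x = 1.0e2 * np.array([1.000000000000000, -0.953406571178785,
                        -1.542839603993645, 0.02146721794700071,
                        -3.268216911140570])

lo, hi = -10.0, 10.0                   # hypothetical range bounds
inside = (x_x >= lo) & (x_x <= hi)     # element-wise test, one flag per entry

for i, (value, ok) in enumerate(zip(x_x, inside), start=1):
    if ok:
        print(f"i = {i}: {value:.6g} is inside the range -> do A")
    else:
        print(f"i = {i}: {value:.6g} is outside the range -> do B")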
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2430296&messageID=8116855","timestamp":"2014-04-16T08:56:16Z","content_type":null,"content_length":"18544","record_id":"<urn:uuid:9b89113f-d33b-4a59-97a4-b71830cf9430>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find-and-fetch search on a tree
Alpern, Steven (2011) Find-and-fetch search on a tree. Operations Research, 59 (5). pp. 1258-1268. ISSN 0030-364X
Full text not available from this repository.
We introduce a new type of search game called the "find-and-fetch" game F(Q,O). The Hider simply picks any point H in the network Q. The Searcher starts at time zero at a given point O of Q, moving
at unit speed until he reaches H (finds the Hider). Then he returns at a given speed σ along the shortest path back to O, arriving at time R, the payoff. This models the problem faced in many types
of search, including search-and-rescue problems and foraging problems of animals (where food must be found and returned to the lair). When Q is a binary tree, we derive optimal probabilities for the
Searcher to branch at the nodes. These probabilities give a positive bias towards searching longer branches first. We show that the minimax value of the return time R (the game value of F (Q,O)) is
μ + D/σ, where μ is the total length of Q and D is the mean distance from the root O to the leaves (terminal nodes) of Q, where the mean is taken with respect to what is known as the equal branch
density distribution. As σ goes to infinity, our problem reduces to the search game model where the payoff is simply the time to reach the Hider, and our results tend to those obtained by Gal [Gal,
S. 1979. Search games with mobile and immobile hider. SIAM J. Control Optim. 17(1) 99-122] and Anderson and Gal [Anderson, E. J., S. Gal. 1990. Search in a maze. Probab. Engrg. Inform. Sci. 4(3)
311-318] for that model. We also apply our return time formula μ+D/σ to determine the ideal location for the root (lair or rescue center) O, assuming it can be moved. In the traditional "find only"
model, the location of O does not matter. Subject classifications: search and surveillance; networks/graphs, tree algorithms; game theory. Area of review: Optimization. History: Received May 2010;
revisions received September 2010, December 2010; accepted December 2010.
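A minimal sketch of the return-time formula R = μ + D/σ on a small rooted tree (my own illustration; the equal-branch-density distribution itself is not reproduced here, so the leaf distribution is simply passed in by the caller):

def return_time(tree, root, leaf_probs, sigma):
    # tree: {node: [(child, edge_length), ...]}; leaves have no entry
    mu = sum(length for children in tree.values() for _, length in children)  # total length of Q

    depth = {root: 0.0}                 # root-to-node distances by depth-first traversal
    stack = [root]
    while stack:
        node = stack.pop()
        for child, length in tree.get(node, []):
            depth[child] = depth[node] + length
            stack.append(child)

    D = sum(p * depth[leaf] for leaf, p in leaf_probs.items())  # mean root-to-leaf distance
    return mu + D / sigma

toy = {"O": [("a", 1.0), ("b", 2.0)]}
print(return_time(toy, "O", {"a": 0.4, "b": 0.6}, sigma=2.0))   # 3.0 + 1.6/2 = 3.8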
|
{"url":"http://eprints.lse.ac.uk/40090/","timestamp":"2014-04-19T07:00:50Z","content_type":null,"content_length":"23681","record_id":"<urn:uuid:293ecfba-c516-4abe-a617-7da951517595>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|