[SciPy-user] Conditionally adding items to a list
Anne Archibald peridot.faceted@gmail....
Tue Aug 26 11:37:48 CDT 2008
2008/8/26 Roger Herikstad <roger.herikstad@gmail.com>:
> I have a problem that I was wondering if anyone could come up with a
> good solution for. I basically have two lists of numbers and I want to
> add elements from one list to the other as long as the difference
> between the added element and all elements already in that list
> exceeds a certain threshold. The code I came up with is
> map(times1.append,ifilter(lambda(x):
> numpy.abs(x-numpy.array(times1)).min()>1000, times2))
> but it quickly slows down if times2 becomes sufficiently long. I need
> to be able to do this with lists of 100,000++ elements. Does anyone
> know of a quicker way of doing this?
As others have said, you should think carefully about whether this is
what you actually want: the result you get will depend on the order of
the incoming items:
[500,1000,1500] -> [500,1500]
[1000,500,1500] -> [1000]
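The order dependence is easy to demonstrate; greedy_trim below is a hypothetical name for a minimal sketch of the naive O(n**2) greedy filter the original poster described:

```python
def greedy_trim(seq, spacing=1000):
    # Keep an element only if it is at least `spacing` away from
    # every element kept so far -- quadratic in the worst case.
    kept = []
    for x in seq:
        if all(abs(x - k) >= spacing for k in kept):
            kept.append(x)
    return kept

print(greedy_trim([500, 1000, 1500]))  # [500, 1500]
print(greedy_trim([1000, 500, 1500]))  # [1000]
```

Reordering the input changes which elements survive, which is why it is worth deciding whether this is really the intended behaviour.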
But if it is what you want, I would worry more about the fact that
your algorithm is O(n**2) than about the fact that list operations are
(supposedly) slow. Here's an O(n) way to do what you want:
def trim(input,spacing=1000):
    r = {}
    for n in input:
        i = math.floor(n/float(spacing))
        if i in r:
            continue
        if i-1 in r and n-r[i-1]<spacing:
            continue
        if i+1 in r and r[i+1]-n<spacing:
            continue
        r[i] = n
    return [r[i] for i in r]
If you want to make an array without going through the list generated
on return, you could also write "return np.fromiter((r[i] for i in r),
dtype=float)" (np.fromiter requires an explicit dtype), provided your
python understands generator expressions.
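For concreteness, here is the bucket idea as a self-contained snippet with a quick check; the control flow inside the loop is a plausible reconstruction of what the archived post garbled, not verbatim code:

```python
import math

def trim(input, spacing=1000):
    r = {}
    for n in input:
        i = math.floor(n / float(spacing))
        # A kept element in the same bucket is automatically within
        # `spacing`, since each bucket is exactly `spacing` wide.
        if i in r:
            continue
        if i - 1 in r and n - r[i - 1] < spacing:
            continue
        if i + 1 in r and r[i + 1] - n < spacing:
            continue
        r[i] = n
    return [r[i] for i in r]

print(sorted(trim([500, 1000, 1500])))  # [500, 1500]
print(trim([1000, 500, 1500]))          # [1000]
```

Elements two or more buckets apart always differ by at least `spacing`, so checking the element's own bucket and its two neighbours suffices; each element is examined once, hence O(n).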
You could also sort the input list; then you only need to compare each
element to the previous one. This isn't even order-dependent anymore,
but it doesn't produce the result you asked for:
def trim2(input, spacing=1000):
    input = sorted(input)
    r = [input[0]]
    for i in input[1:]:
        if i-r[-1]>=spacing:
            r.append(i)
    return r
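A self-contained restatement of trim2 (the archived post lost the append line), with a quick check that the result no longer depends on input order:

```python
def trim2(input, spacing=1000):
    input = sorted(input)
    r = [input[0]]
    for i in input[1:]:
        # After sorting, comparing against the last kept element
        # is enough to guarantee pairwise spacing.
        if i - r[-1] >= spacing:
            r.append(i)
    return r

print(trim2([500, 1000, 1500]))  # [500, 1500]
print(trim2([1000, 500, 1500]))  # [500, 1500]
```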
If there are many elements to be discarded, this may be faster:
def trim3_gen(input, spacing=1000):
    input = np.sort(input)
    i = input[0]
    try:
        while True:
            yield i
            i = input[np.searchsorted(input,i+spacing)]
    except IndexError:
        return
(note that unlike the others it's a generator, for no really compelling reason.)
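As a self-contained sketch of the generator variant (a plausible reconstruction, including the try/except that the archive mangled):

```python
import numpy as np

def trim3_gen(input, spacing=1000):
    input = np.sort(input)
    i = input[0]
    try:
        while True:
            yield i
            # Jump straight to the first element at least `spacing`
            # above the last yielded one; indexing past the end of
            # the array raises IndexError and terminates the loop.
            i = input[np.searchsorted(input, i + spacing)]
    except IndexError:
        return

print([int(x) for x in trim3_gen([1500, 500, 1000])])  # [500, 1500]
```

Because searchsorted is a binary search, each kept element costs O(log n), so the fewer elements survive, the less work is done.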
Good luck,
More information about the SciPy-user mailing list
perplexus.info :: Probability : Competitive number guessing
Two players are going to take turns trying to guess a number selected randomly from 1 to 15. After each incorrect guess, both players will be told whether the actual number is higher or lower. A player must make a reasonable guess among the possible numbers remaining on his turn.
What is sought in each of the following cases is the best strategy for each player and the chance they will win.
Case 1: Unlimited (up to 15 if needed) guesses. The winner gets $30.
Case 2: Only 4 guesses allowed total. If the number is guessed the winner gets $20 and the loser gets $10. If the number is not guessed neither gets anything.
NA-Digest index for 1995
Digest for Sunday, December 31, 1995
In this digest:
NA Digest Calendar
Need Help With a PDE
Mathematical Theory of Networks and Systems
MPI Developers Conference and Users Group Meeting
Positions at Numerical Semiconductor
Contents, Linear Algebra and its Applications
Digest for Sunday, December 24, 1995
In this digest:
PARA96 & Merry Christmas
Konrad Zuse
Draft of New Book Available Via FTP
New Textbook in High-Performance Scientific Computing
Change of Address for Michele Benzi
Reorganisation of Swiss Computing Centers
C Code for Gear Algorithm
Satellite Trajectory Code Sought
Workshop in Croatia on Numerical Linear Algebra
Object-Oriented Numerics Conference
Congress on Computational and Applied Mathematics
Supercomputing on IBM Systems
Conference in Russia on Simulation of Devices
Meeting in Portugal on Vector and Parallel Processing
Position at University of Texas, Austin
Positions at University of Minnesota Research Center
Postdoctoral Position at Rutherford Appleton Laboratory
Postdoctoral Position at CERFACS
Contents, IEEE Computational Science & Engineering
Contents, J. Approximation Theory
Digest for Sunday, December 17, 1995
In this digest:
New Book, Wavelets and Filter Banks
Change of Address for Rob Schreiber
Change of Address for Alfred Inselberg
Pseudospectra of Linear Operators
New SQP Optimization Software Available
Seeking Multigrid Code
Semi-infinite Optimization
Deadline for SIAM Annual Meeting
European Conference on Numerical Mathematics
Bath-Bristol NA Day
Midwest Numerical Analysis Day
CFD Careers Symposium at Oxford
ATLAST Linear Algebra Workshops
Grid Adaptation in Computational PDEs
Special Issue on Wavelets
Position at University of South Australia
Position at University of Arkansas, Fayetteville
Position at Sandia National Laboratories
Position at Northern Illinois University
Positions at Rice University
Position at University of Utah
Postdoctoral Position at Utrecht University
Postdoctoral Position at the University of Illinois
Contents, SIAM Applied Mathematics
Digest for Sunday, December 10, 1995
In this digest:
SIAM Election Results
Query on Block Tridiagonal Systems
BLAS Routines in C
New Version of UMFPACK for Sparse Linear Systems
Aztec: A Parallel Iterative Package
Temporary Address Change for Jesse Barlow
Report on Dynamic Load Balancing Meeting
Cambridge Approximation Day
Supercomputing Program for Undergraduate Research
Scientific Computing and Differential Equations
Current and Future Directions in Applied Mathematics
Department Head Position at Worcester Polytechnic Institute
Positions at Florida State University
Positions at University of Maryland
Postdoctoral Position at University of Delaware
Postdoctoral Position at Oxford University
Postdoctoral Positions at UC Santa Barbara
Contents, Advances in Computational Mathematics
Contents, Numerical Algorithms
Contents, Linear Algebra and its Applications
Contents, SIAM Discrete Mathematics
Digest for Sunday, December 3, 1995
In this digest:
NA Digest Calendar
Householder Meeting Deadline Approaches
BLAS Birds-of-a-feather Session at SC'95
Singular Values of Diagonal plus Circulant
Isolated Problem Approach in Boundary Element Method
Software for Unstructured Grids
Web Site: Updates in Global Optimization
Reid Prize: Call for Nominations
New Book: Recent Advances in Nonsmooth Optimization
New Book: The Science of Computer Benchmarking
New Book: Air Pollution Modelling
New Book: Stochastic Finite Elements
Netherlands Mathematical Research Institute
SIAM Conference on Sparse Matrices
Mathematics in Signal Processing
Website for ILAS Conference
Workshop on Interval Arithmetic in Brazil
Discrete Mathematics Day at Carleton University
Workshop on Linear Algebra in Optimization
Deadlines for SIAM Conferences
Conference on Applied and Computational Mathematics
Numerical Methods and Computational Mechanics in Hungary
Conference on Fortran Futures
Position at Eastern Connecticut State University
Chair Position at Kent State University
Positions at the University of Surrey
Postdoc Position at Oak Ridge Nat'l Lab
Report on the CERFACS Workshop on Eigenvalues
Digest for Sunday, November 26, 1995
In this digest:
Householder Fellow in Scientific Computing
Change of Address for Kathy Brenan
A Query about Block Matrices
Eigenvalues of An Integral Operator
Multisplitting Algorithm Needed
New Version of ARPACK Available for Large Eigenproblems
Past and Forthcoming Workshops at CERFACS
European Conference on Parallel Processing
Time-Frequency and Time-Scale Methods for Economics and Finance
Proceedings of AMCA-95 Conference
Graduate Fellowships in CAM at UT-Austin
Position at Weidlinger Associates
Digest for Sunday, November 19, 1995
In this digest:
Jean-Francois Carpraux
Why are They Called Singular Values?
Mittag-Leffler Function
Searching for Optimized FFT Routine
Defending CS Computer Methods Course
Best Paper Award, J. of Complexity
SIAM Student Travel Awards
NA Meeting at University of Liverpool
PVM User Group Meeting
South Eastern Linear Algebra Meeting
Postdoctoral Position at University of Tennessee
Contents, SIAM Scientific Computing
Contents, SIAM Matrix Analysis
Contents, SIAM Control and Optimization
Digest for Monday, November 13, 1995
In this digest:
Albert Booten
Forsythe/Golub Reprint Files
Mesh Generation for Volcano Model
DASSL and High Index DAE's
Interpolating Scattered Data
Seeking Optimization Software
Change of Address for John Hench
New Parser/Solver for Semidefinite Programming Problems
SYISDA Parallel Eigensolver Package
IMANA Newsletter
One Hundred Years of Runge-Kutta Methods
Conference Honoring Mike Powell
Conference Honoring Ivo Babuska
Conference on Monte Carlo and Quasi-Monte Carlo Methods
Basic Parallel Strategies for Scientific Computing
CFD Short Course
Workshop in Bulgaria
Positions at Arizona State University
Position at Numerical Algorithms Group
Contents, Linear Algebra and its Applications
Contents, Transactions on Mathematical Software
Contents, Mathematics of Control, Signals, and Systems
Contents, SIAM Review
Contents, SIAM Numerical Analysis
Contents, BIT Numerical Mathematics
Digest for Sunday, November 5, 1995
In this digest:
Computational and Applied Mathematics at UT-Austin
Help with Matrix Algebra Proof
Fourth French-Latin American Congress on Applied Mathematics
Call for Papers for Irregular 96
Meeting Announcement for Dynamic Load Balancing on MPP Systems
Conference in Honour of John Pollard
URL for Proceedings of The Workshop on Iterative Methods
Conference on Finite Element Methods
Call for Papers for Summer Seminar 96 of Canadian Mathematical Society
Position at University of Nevada - Reno
Position at University of Liverpool
Position at Heriot-Watt University
Positions at Hitachi Dublin Laboratory
Position at Imperial College
Positions at the University of Iowa
Positions at the University of Leeds
Positions at Rice University
Position at Simon Fraser University
Position at CERFACS
Position at Los Alamos National Laboratory
Positions at the University of the Witwatersrand
Positions at Marquette University
Position at Stony Brook
Position at the University of Alaska Fairbanks
Contents, Journal of Complexity, December 1995
Contents, SINUM 32-6 December 1995
Digest for Saturday, October 28, 1995
In this digest:
NA Digest Calendar
News from the Geneva NA Group
Challenges in Matrix Theory
Interval Computations Help in Solving Long-Standing Problem
Student Award for Reliable Computing journal
Report from Conference on Mathematical Tools in Metrology
Session on Linear Algebra and Scientific Computing
Symposium on Computer-aided Control System Design
Conference on Interval Methods
Conference in Honour of Jean Meinguet
Position at RWTH, Aachen
Position at University of Strathclyde
Position at UNI-C, Denmark
Position at University of Kentucky
Position at University of Delaware
Position at Northwestern University
Position at Colorado School of Mines
Position at Northern Illinois University
Position at College of Charleston
Contents, Journal of Global Optimization
Digest for Saturday, October 21, 1995
In this digest:
New Code for Linear DAEs with Variable Coefficients Available
New Text, Selected Topics in Approximation and Computation
Short Course in Optimization
Volterra Centennial
SIAM Conference on Optimization
Workshop Course on Wavelets and Filter Banks
International Symposium on Symbolic and Algebraic Computation
Conference on Algebraic Multilevel Iteration Methods
New Ph.D program in Scientific Computing at Carnegie Mellon
Position at Virginia Tech
Postdoctoral Position at SINTEF Applied Mathematics, Oslo
Contents, Numerical Linear Algebra with Applications
Contents, Linear Algebra and its Applications
Contents, SIAM Discrete Mathematics
Digest for Sunday, October 15, 1995
In this digest:
Olga Taussky Todd
Need Optimization Digest
Old Journals
Applied Numerical Linear Algebra Book
Methods and Programs for Mathematical Functions
New Address for Bill Dold
Change of Address for Jane Cullum
AMS-SIAM Summer Seminars
New Network Optimization Package Available
Alpha Test Release of the BLACS for MPI
Surveys on Mathematics for Industry
York Conference on Numerical Analysis
Conference on Images, Wavelets and PDE'S
GAMM-Seminar Kiel 1996
Cornell Theory Center Virtual Workshop
Summer School at Leicester University
Northern England Numerical Analysis Colloquium
Conference on Computational Mechanics in Hungary
One Day Courses on MATLAB and Mathematica
Parallelising CFD & Structures
Position at Purdue University
Position at Norwegian Institute of Technology
Positions at University of Colorado at Denver
Positions at University of Toronto
Position at UC Santa Barbara
Postdoctoral Position at University of Bologna
Postdoctoral Position at IBM Research
Contents, Annals of Numerical Mathematics
Contents, Linear Algebra and its Applications
Contents, Surveys on Mathematics for Industry
Contents, Computational and Applied Mathematics
Digest for Sunday, October 8, 1995
In this digest:
Nonsymmetric Iterative Solvers
Boundary Value ODE Book Republished
Simulation Language, SIL, Available via FTP
Zetta Tsaltas
Workshop Innovative Time Integrators
URL for Copper Mountain Conference
Workshop on Modeling and Computation in Structural Mechanics
Conference on Dynamical Numerical Analysis
Dynamic Load Balancing on MPP Systems
Grid Adaptation in Computational PDEs
Position at University of Sussex
Position at University of Tennessee
Position at Iowa State University
Position at University of Maryland Baltimore County
Position at UMass Dartmouth
Contents, IEEE Computational Science & Engineering
Contents, IEEE Computational Science & Engineering
Contents, Linear Algebra and its Applications
Contents, Annals of Numerical Mathematics
Digest for Sunday, October 1, 1995
In this digest:
NA Digest Calendar
List of Graphics Libraries
New Book on Dynamics of Chemical Reactors
Solving Sparse Linear Equations on the T3D
Inverse of Normal Distribution
The Electronic Journal of Linear Algebra
The NEOS (Network-Enabled Optimization System) Server
Mathematics of Control, Signals and Systems
Position at National Donghwa University
Position at University of Wales, Aberystwyth
Position at Louisiana Tech University
Contents, SIAM Applied Mathematics
Contents, Approximation Theory
Digest for Sunday, September 24, 1995
In this digest:
Math-Net Links to the Mathematical World
Seeking Simple Graphics Library for X-windows
Front-ends for Numerical Simulation Packages
Errata List of TLS Book by S. Van Huffel and J. Vandewalle
Workshop on Total Least Squares
Winter School on Iterative Methods in Hong Kong
Workshop on Applied Parallel Computing in Lyngby, Denmark
Position at BEAM Technologies
Position at CERFACS
Position at Emory University
Position at University of North Carolina
Position at the Australian National University
Postdoctoral Position at University of Minnesota
New Journal, Computer Mathematics and its Applications
Contents, Transactions on Mathematical Software
Contents, SIAM Control and Optimization
Digest for Sunday, September 17, 1995
In this digest:
MATLAB Conference
Change of Address for Zhongxiao Jia
Quadrature for Optimization and Search
N-dimensional Rotation Matrices
New Journal, Monte Carlo Methods and Applications
Software for Mathews Numerical Methods Book
UMFPACK Version 2.0: General Unsymmetric Sparse Matrix Solver
Workshop on Numerical Fluid Flow In Spherical Geometry
SIAM Conference on Numerical Combustion
Parallel Supercomputing Conference in Japan
Report on the Second Workshop on Applied Parallel Computing
Position at University of Bradford, England
Postdoctoral Position at University of Nottingham
Postdoctoral Position at Johannes Kepler University
Position at UNICAMP, Brazil
Contents, Linear Algebra and its Applications
Contents, Computers and Mathematics
Digest for Saturday, September 9, 1995
In this digest:
Feng Kang Prize Awarded
Gregory and Karney Eigenvalue Problem
New Code for Stiff ODEs Available
New High Performance Sparse Linear System Solver Available
Chaco-2.0: Graph Partitioning Software
Journal of Statistical Software, A Proposal
Temporary Change of Address for Jinchao Xu
Change of Address for Rob Stevenson
New Book on Symbolic Analysis for Parallelizing Compilers
Conference on Hyperbolic Problems in Hong Kong
Workshop on Quasi-Monte Carlo Methods
Postdoctoral Position at Rice University
Contents: Reliable Computing
Contents, IMA Numerical Analysis
Contents, Approximation Theory
Digest for Sunday, September 3, 1995
In this digest:
NA Digest Calendar
Address Change for Chunyang He
Errata list, Fundamentals of Matrix Computations
New Book, Advanced Scientific Fortran
Parallel Tridiagonal Matrix Solver
Inviting Feedback on Parallel BLAS
WWW-page on Mesh Generation
METIS, Unstructured Graph Partitioning Software
International Linear Algebra Society Prize
1996 Copper Mountain Conference
Southeastern Conference on Theoretical and Applied Mechanics
Workshop on Eigenvalues and Stability at CERFACS
CTC Symposium: Protein Structure and Folding
Symposium on Matrix Analysis & Applications
Workshop in Bulgaria
ASME Fluids Engineering Division Summer Meeting
Computer Simulation of Aircraft and Engine Icing
Contents, Numerical Algorithms
Contents, SIAM Optimization
Contents, Numerische Mathematik
Contents, Numerische Mathematik
Digest for Sunday, August 27, 1995
In this digest:
Need Algorithm to Solve Matrix Equation
Slides for Numerical Analysis Course
Discrete Inequalities
Fortran Program Performance Tools
Change of Address for Sanzheng Qiao
Change of Address for Floyd Hanson
Temporary Change of Address for Sven Hammarling
Cornell Theory Center Workshop
Parallel CFD '96 Conference in Italy
Conference on Numerical Combustion
Position at Catholic University of Louvain
Position at University of Michigan
Position at Pacific Gas and Electric
Contents, Constructive Approximation
Digest for Sunday, August 20, 1995
In this digest:
Automatic Differentiation Tools
ADIFOR 2.0 is Now Available
Decision Tree for Optimization Software
Matrix Re-ordering for ILU-CG
Change of Address for Roger Ghanem
New Books on Nonconvex Optimization and its Applications
Scottish Computational Maths Symposium
Postdoctoral Position at University of Minnesota
Digest for Monday, August 14, 1995
In this digest:
Announcement of the Netlib Conferences Database
Address Change for Jens Lorenz
Website for Test Problems
New Book, Matrices of Sign-solvable Linear Systems
Multigrid and Molecular Dynamics
Eigenvalue Problems and Applications
Report on the Dundee Conference
Position at Stanford University
Position at UC Santa Barbara
Position at Cornell University
Contents, J. Approximation Theory
Contents, Numerical Algorithms
Contents, SIAM Review
Digest for Monday, August 7, 1995
In this digest:
Report and Photos from LAA95, Univ. Manchester
Fitting Lines and Planes
Seeking Path Finding Software
Decision Tree for Optimization Software
Householder Meeting Announcement
Conference on Dynamical Numerical Analysis
Workshop on Innovative Time Integrators
PARA95, Last Call for Registration
Call for Papers, Graphics Interface '96
Postdoctoral Position at Virginia Tech
Postdoctoral Position at University of New South Wales
Position at New York University
New Electronic Journal on Molecular Modeling
Contents, SIAM Applied Mathematics
Contents, Linear Algebra and its Applications
Digest for Sunday, July 30, 1995
In this digest:
NA Digest Calendar
SVD for a Special Matrix
Conference on Numerical Combustion
Conference on Applied Computational Fluid Dynamics
Direct Methods Workshop at CERFACS
Positions at TransQuest Information Solutions
Position at Massey University, New Zealand
Position at University of Canterbury, New Zealand
Digest for Monday, July 24, 1995
In this digest:
UMIST .ne. University of Manchester
PDE with Interior Boundary Condition
New Home Page for Baltzer Science Publishers
Software for Mathews' Numerical Methods Textbook
New Book on ODEs
C Code for Shortest Path Problem
LIPSOL, MATLAB Linear Programming Package
New Software Package for Sparse Linear Systems Available in WWW
Richard Lehoucq is the 1995 J. H. Wilkinson Fellow
Sven Hammarling is Visiting UNI*C, Denmark
Newest Release of DSSLIB Available for SPARC
Southeastern Atlantic Conference on Differential Equations
SIAM Conference on Discrete Mathematics
SIAM Workshop on Computational Differentiation
Conference in Honor of Lax & Nirenberg
Position at University of Wales, Aberystwyth
Digest for Sunday, July 16, 1995
In this digest:
Wilkinson Prize to Chris Bischof and Alan Carle
Continuing Fatunla's work in Nigeria
Graduate and Professional Resources on the WWW
E-mail List and FTP Site for Fractional Transforms
Conference on Domain Decomposition Methods
Position at Simon Fraser University
Position at University of Strathclyde
Positions at University of Manchester
PhD Studentship at University of Manchester
Contents, Numerische Mathematik
Contents, Surveys on Mathematics for Industry
Contents, BIT
Contents, SIAM Control and Optimization
Digest for Sunday, July 9, 1995
In this digest:
Possible NA Conference with STOC 97
Surveys on Mathematics for Industry on the Web
Mathematics of Control, Signals, and Systems (MCSS)
Rocky Mountain Numerical Analysis Conference
Multigrid Tutorial at Weizmann Institute
Matrix Computations and QCD Workshop
Symposium on the Mathematical Theory of Networks and Systems
Symposium, One Hundred Years of Runge-Kutta Methods
Position at University of Bristol
Position at University of Salford
Position at University of Akron
Contents, Advances in Computational Mathematics
Contents, Linear Algebra and its Applications
Digest for Saturday, July 1, 1995
In this digest:
Nonlinear Generalized Eigenvalues
Release 2.0 of PIM
ADI Relaxation Parameters
METIS: Unstructured Graph Partitioning Software
Interval Computations Homepage
Report from Workshop on Interval Computations
Interval Computations Abstracts
Workshop in Hungary on Global Optimization
SIAM Conference on Numerical Combustion
Summer Seminar on Plates and Shells
Mitrinovic Memorial Conference
SIAM Conference on Optimization
Conference in France on Real Numbers and Computers
Position at University of Newcastle upon Tyne
Postdoc Position at North Carolina State University
Graduate Assistantships at Marquette University
Contents, J. Approximation Theory
Contents, SIAM Optimization
Contents, SIAM Matrix Analysis
Contents, Computational and Applied Mathematics
Contents, Advances in Computational Mathematics
Contents, SIAM Numerical Analysis
Digest for Wednesday, June 21, 1995
In this digest:
Programs in Scientific Computing
Roger Apery
An Electronic Abstract Server?
New Journal on Computation and Computational Mathematics
Para++ Bindings for Message Passing Libraries
Book by Datta on Numerical Linear Algebra
ACM-SIAM Symposium on Discrete Algorithms
Lothar Collatz Memorial Lecture and Dinner
Update on Dundee NA Conference
Postdoc Position at University of Greenwich, London
Postdoc Position at Universite de La Reunion, France
Postdoc and Research Positions at Oxford and Johns Hopkins
PhD Studentship at Chester College, UK
Position at University Dortmund
Position at Silvaco in Santa Clara, CA
Position at Rice University
Position at Hitachi Dublin Laboratory
Contents, Mathematics of Control, Signals, and Systems
Digest for Monday, June 12, 1995
In this digest:
Accidental Posting to the NA Digest Mailing List
Computing Refinable Integrals
Looking for Bordered Block Diagonal Ordering Software
Generalized Eigenvalue Problems
Broadcast of Parallel Linear Algebra Conference
News from ILAS Information Center
Integral Methods in Science and Engineering
Potential ISSAC in Hawaii, 1997
Position at Drexel University
Position at the Australian National University
Contents, SIAM Computing
Contents, SIAM Applied Mathematics
Contents, Global Optimization
Digest for Sunday, June 5, 1995
In this digest:
NA Digest Calendar
Change of Position for Jim Greenberg
Change of Address for Anne Greenbaum
Seeking Block Toeplitz Examples
New Optimization Code
Global Error Estimation Software
Information on Geo-Digest
Journal of Approximation Theory on the Web
WWW Online Multigrid Tutorial
LAA95 Conference at Manchester
SIAM Student Paper Prize
New Book on Numerical Computing
Computing in High Energy Physics
Joint Conference, "ECCOMAS", in Paris
Conference in Moscow on Ill-Posed Problems
ARITH 12 Advance Program
PARA95, ScaLAPACK & PVM NAG Tutorial
Iterative Linear Algebra IMACS Symposium Program
Contents, IMA Journal Numerical Analysis
Contents, Acta Numerica 1996
Contents, Linear Algebra and its Applications
Contents, Constructive Approximation
Digest for Sunday, May 28, 1995
In this digest:
Simeon Fatunla
A Continuous Choice of an Eigenvalue
Spatial Cluster Analysis
Tests for Kalman Filter
PICL and PSTSWM software updates
Report on TICAM Symposium
Positions at Bell Labs
Postdoctoral Position at Argonne Laboratory
Postdoctoral Fellowship in Bologna, Italy
Research Studentship at RMCS, Shrivenham
Contents, Approximation Theory
Contents, Global Optimization
Digest for Sunday, May 21, 1995
In this digest:
Honorary Doctorate for Gene Golub at Umea University
Scientific Computing Web Pages
How to Find the Biggest Eigenvalues of Large Matrices
Eigenvalues and -vectors of a Real 3x3 Matrix
Seeking Copy of Ascher, Mattheij and Russell Book
Software for Orthogonal Polynomials
Problems with Email for Book on Iterative Methods
New Book on Conjugate Gradients
Software for 3-D Fluid Interfaces Available
FTP, WWW, Gopher from University of Regensburg
Nominations for George Polya Prize
Conference on Computer Methods and Water Resources
Workshop on Neural Networks and Neurocontrol
Conference in Hungary on Computational Mechanics
Workshop on Computational Electromagnetics
Conference to Honor Thomas Kailath
Fellowship at University of Bologna
Postdoctoral Position at Rice University
Research Assistantship at the University of Salford
Postdoctoral Positions at University of Groningen
Contents, Linear Algebra and its Applications
Contents, International Journal of Computational Fluid Dynamics
Digest for Sunday, May 14, 1995
In this digest:
Cray Fortran Bit Manipulation Routines
Nonlinear Degenerate Diffusion-convection Equation
Piecewise Constant Contours
Out of Core Methods
New Finite Element Mesh Generation Software
New Book on Iterative Methods
New Book/ Study Guide in Numerical Analysis
Industrial Mathematics Digest
Nominations for SIAM Optimization Prize
Student Symposium On Interval Computations
Conference on Control and Information in Hong Kong
Symposium on Matrix Analysis
GAMM Seminar on Boundary Elements
ASME Meeting on Semiconductor Device and Process Modeling
Boole Conference
One Hundred Years of Runge-Kutta Methods
Dundee Conference - Last Call for Papers
Scottish Computational Maths Symposium
GAANN Fellowships in HPCC at Old Dominion
M.Sc. in NA and Computing at Univ. Manchester
Postdoctoral Position at Louisiana State
Contents, Reliable Computing
Contents, SIAM Control and Optimization
Digest for Sunday, May 7, 1995
In this digest:
Happy News
Seeking 1956 De Vogelaere Report on Hamiltonian Systems
C Program for Gauss Kronrod Quadrature
Parallel BiMMeR Matrix Multiplication Routines
Change of Address of Rossana Vermiglio
New Book on Markov Chains
IFIP Conference on Quality of Numerical Software
Conference on Spectral and High Order Methods
Symposium on Geophysical Inverse Problems
Meshing Roundtable
Workshop on Innovative Time Integrators
Position at Cray Research
Contents, SIAM Mathematical Analysis
Digest for Sunday, April 30, 1995
In this digest:
NA Digest Calendar
National Academy of Sciences
Richard Sinkhorn
E-mail Address of IMACS Symposium in Bulgaria
Workshop on Inertial Manifolds in China
European Conference on Numerical Mathematics
Inversion Conference in Denmark
South-Central Student Conference
Nonlinear Programming Conference in Beijing
South East German Colloquium on Numerical Mathematics
Graphics Interface Conference
Symposium in Honor of Herb Keller
Meeting of the DSS/2 User Group
Search for ICASE Director
Postdoctoral Position at Imperial College
Position at Convex Computer Corporation
Position with NEC in Texas
Position at Cray Research
Contents, SIAM Review
Contents, Electronic Transactions on Numerical Analysis
Digest for Sunday, April 23, 1995
In this digest:
Intro. to Scientific, Symbolic, and Graphical Computation
Fortran 90 Tutorial
Software for Hankel Transform Sought
Educational Finite Element Software Sought
Multiresponse Nonlinear Least Square Fitting
IMANA Newsletter
WWW Server for BIT
Band Systems Survey
Colloquium on Systems, Control and Computation
Dundee Conference 1995
IMACS Iterative Linear Algebra Symposium in Bulgaria
Method of Lines Workshop
PhD Studentship at University of Salford
Postdoctoral Position at MIT
Postdoctoral Position at University of Durham
Contents, Advances in Computational Mathematics
Contents, BIT Numerical Mathematics
Contents, SIAM Scientific Computing
Contents, SIAM Numerical Analysis
Digest for Sunday, April 16, 1995
In this digest:
First Dahlquist Prize
Stamatis Cambanis
Updating Lawson and Hanson Least Squares Book
FTP Site for Blaise Pascal University
Change of URL for Multigrid Algorithm Library
Testing Random Number Generators
Seeking Nonlinear Parameter Estimation Software
Special Issue on Computing with Real Numbers
General Program for Bayesian Inference
MATLAB Software for Mathews Numerical Methods Text
Parallel Coordinates Software
Report on SciCADE95
Southern Ontario Numerical Analysis Day
Midwest NA Day
SCAN-95, Final Call for Papers
Message Passing Interface Developers Conference
Nonlinear Optimization Conference on Capri
Positions at King Saud University
Position at University of Sao Paulo
Contents, Mathematics of Control, Signals, and Systems
Digest for Sunday, April 9, 1995
In this digest:
NA-Net White Pages
Oved Shisha Injury
Svata Poljak
Bau & Trefethen Linear Algebra Book
Datta Numerical Linear Algebra Book
Change of Address for Andy Cleary
Sign Pattern in a Matrix
IMSL C Numerical Libraries
Pacific NorthWest Numerical Analysis Seminar
Nonlinear Optimization Conference in Capri
Cerfacs International Linear Algebra Year
Combinatorial Optimization Day at Rutgers
Abstract Deadline for PVM'95
Applied Parallel Computing Workshop
Position in Namur, Belgium
Research Assistantships at University of Huddersfield
Contents, Linear Algebra and its Applications
Contents, SIAM Applied Mathematics
Digest for Sunday, April 2, 1995
In this digest:
NA Digest Calendar
Global Optimization Software
Looking for Spherical Bessel Function Evaluator
Generalized Eigenvalues, Maximum Real Part
SIAM Meeting Dates
Director Position at GSF, Bavaria
Postdoc Position at University of New South Wales
Position at Caltech
Contents, Journal of Complexity
Contents, Linear Algebra and its Applications
Contents, Intl. J. Computational Fluid Dynamics
Digest for Sunday, March 26, 1995
In this digest:
History of Flops and Errors
Software Sought for Large-scale, Unconstrained Optimization
Software for Non-linear Parabolic Equations
Inverse of Special Matrix
Root Finding with Complex Function
New Boundary Value Code in Netlib
Suggestion for Recent Contents Publication
New Book on Singular Perturbations
New Book on Numerical Analysis
Contents of Journal of Approximation Theory Available via WWW
Sup'Prize for IBM Parallel Platforms
Optimization Course at University of Hertfordshire
Fortran 90 Course
Postdoctoral Position at Rice University
Postdoctoral Positions at Stony Brook
Postdoctoral Position at Claremont Graduate School
Research Positions at ETH, Zurich
Contents, Journal of Global Optimization
Digest for Sunday, March 19, 1995
In this digest:
Nonlinear Schrodinger Equation
Iterative Solvers
Iterative Methods for Cache Base System
VECFEM, VECtorized Finite Element Method
News from ILAS Information Center
1995 MATLAB Conference
Summer School on Optimization and Identification in Prague
PARA95, Workshop on Applied Parallel Computing
IBM High Performance Computing Conference
Industrial Mathematics Modeling Workshop
Method of Lines Workshop
Postdoc Position at Argonne National Laboratory
Contents, Computational Fluid Dynamics
Contents, Interval Computations
Contents, Surveys on Mathematics for Industry
Digest for Sunday, March 12, 1995
In this digest:
Spanish Science and Technology Prize to J.M. Sanz-Serna
Quad Versus 80-bit Arithmetic
Looking for Brezis's Operateurs Maximaux Monotones
Survey in Sensitivity Analysis
3-D Finite Element Programs
Fast Algorithm Sought for (Eigen) Matrix-vector Product
InterCom Release R2.0.0 for Paragon
Linear Hyperbolic Systems of PDE's
Numerical Linear Algebra on Parallel Processors
JOSTLE - Unstructured Mesh Partitioning Software
Special Issue on Parallelism and Irregularly Structured Problems
The Richard C. DiPrima Prize
Algebraic Multilevel Iteration Methods
Siberian School on Computational Mathematics
Summer School in Finland
Summer Conference in Utah
SIAM Conference Deadlines
Optimization Conference in Sicily
Manchester Linear Algebra Conference
Postdoctoral Position at Imperial College, London
Contents, Global Optimization
Contents, SIAM Computing
Contents, Computational and Applied Mathematics
Contents, Linear Algebra and its Applications
Contents, Linear Algebra and its Applications
Digest for Sunday, March 5, 1995
In this digest:
NA Digest Calendar
Inversion of Elliptic Integral
Change of Address for Magolu monga-Made
Change of Address for I. Kaporin
New Versions of SparseLib++ and IML++ Available
Availability of ScaLAPACK, the PBLAS, and the BLACS
Nominations for George Polya Prize
Results of ILAS Elections
Computational and Applied Mathematics at UT-Austin
Conference on Real Numbers and Computers
Numerical Methods and Computational Mechanics
Workshop on Global Optimization
Symposium on Discrete Algorithms
East Coast Computer Algebra Day
Computational Techniques and Applications Conference
Summer Conference on Conjugate Gradient Methods
Postdoc Position at University of Maryland
Postdoc Position at University of Texas, Austin
Positions at University of Sussex
Position at Clemson University
Contents, SIAM Scientific Computing
Contents, SIAM Numerical Analysis
Contents, IMA Numerical Analysis
Contents, SIAM Review
Contents, BIT
Contents, Transactions on Mathematical Software
Digest for Sunday, February 26, 1995
In this digest:
Travel Support for ICIAM 95
Computational Science Course Material
Sabbatical Address for Hans Stetter
RCM Ordering Algorithm
Subroutine for Weber Function
Hyperbolic PDE Solver Wanted
Looking for Out-of-print Books
Memorial Dedicated to Gabor Szego
Advances and Trends in Computational and Applied Mathematics
Workshop on Matrix Methods for Statistics
Meeting of the DSS/2 User Group
Winter School on Iterative Methods
Optimization Conference in Capri
Biomathematical Conference in Bulgaria
17th SPEEDUP Workshop
Chair in Applied Mathematics at UMIST
Position at University of Surrey
Position at University of Notre Dame
Postdoc Position at University of Tennessee
Position at Bristol University
Postdoctoral Positions at Argonne National Laboratory
Position at University of Edinburgh
Position at University of British Columbia
Contents, Math of Control, Signals, and Systems
Contents, Linear Algebra and its Applications
Contents, Journal of Approximation Theory
Contents, International Journal of Computational Fluid Dynamics
Digest for Sunday, February 19, 1995
In this digest:
Lanczos Book on Linear Differential Equations Sought
Sideways Heat Equation
Integer Solutions to Linear Systems
Quadratic Programming Code
Partial Difference Equations with Delays
Sidney Fernbach Award
Parallel Scientific Computing Session at IFIP95
Report on HPCC Initiative
Formal Orthogonal Polynomials
Wiley Index on the Web
ILAS Web Links
Conference in Russia on Cubature Formulae
Supercomputing Program for Undergraduate Research
Analysis Day at Carleton University
Discrete Math Day at Carleton University
Conference in China on Parallel Algorithms
Southeastern-Atlantic Conference on Differential Equations
Workshop in Taiwan on Scientific Computation
Conference on Numerical Methods for Fluid Dynamics
Summer School on Computing Techniques in Physics
Conference in Naples on Nonlinear Optimization
Graduate Assistantships at Carnegie Mellon
Position at NYU Courant Institute
Position at Minnesota's Geometry Center
Digest for Sunday, February 12, 1995
In this digest:
Memorial Service for John Rollett
Eigenvalue Problems for Tridiagonal Systems
Change of Address for Floyd Hanson
IMACS European Simulation Meeting
Linear Algebra Society Conference
Copper Mountain Conference on Multigrid Methods
Extended Abstracts for Interval Computations Workshop
Postdoctoral Position at Argonne
Postdoctoral Fellowship in Bologna, Italy
Postdoctoral Position at Xerox PARC
Digest for Sunday, February 5, 1995
In this digest:
NA Digest Calendar
Header of the NA Digest
Happy Birthday, Bill
Numerical Computation in CS Departments
Introducing the MATLAB ODE Suite
International Linear Algebra Society Information Center
New Telephone Numbers for K.U.L.-SISTA
Total Least Squares Book by Van Huffel and Vandewalle
Adaptive Simulated Annealing Research Project
Revision of CLAWPACK
IMANA Newsletter
South-Central Computational Science Consortium
6th Stockholm Optimization Days
Conference on Boundary Element Technology
Conference at University of York on Numerical Analysis
SIAM Southeastern-Atlantic Section
Conference in Bulgaria on Differential Equations
Symposium in Memory of John A. Gregory
Summer Internship in Parallel Processing
Position at University of British Columbia
Position at Oxford University
Position at Chester College
Fellowship at Sandia Labs
Digest for Sunday, January 29, 1995
In this digest:
Notes on Numerical Computing
Promised "New Estimates for Ritz Vectors" by FTP
Conference on Complementarity Problems:
Comments on Fiacco-McCormick Book
Haifa Matrix Theory Conference
Changes in ILAS Information Center
Computing the Jordan Normal Form
Release 2.0 of CUTE
Report Distribution from University of Toronto
LIPSOL beta-2.1 Release
Survey and Bibliography on ABS Methods
Workshop in Poland in Homogenization Theories
Benelux Meeting on Systems and Control
Position Available at MIT
Contents, SIAM Optimization
Error Bounds for Numerical Algorithms
Discussion and Linear Algebra Net
Parallel Programming on the IBM SP2
Computing with Real Numbers
Symposium in Beijing on Operations Research
SIAM Annual Meeting
Postdoctoral Position at Argonne National Laboratory
Announcement of Midwest NA Day
Digest for Sunday, January 22, 1995
In this digest:
Jerry Keiper
Reports on ODE Stepsize Control
Advances in Computational Mathematics - Correction
SIAM Undergraduate World Wide Web Pages
Temporary Change of Address for Avram Sidi
Two Special Swedish Birthdays
Who are Seidel, Hessenberg and Jordan?
Diffpack, C++ software for PDEs
Bypassing the Pentium's Bug
Need an Algebraic ODE Solver
Looking for 3rd order PDE Software.
Interactive FE Mesh Generation Sought
Mesh Generator and Contaminant Transport
Monroe Martin Prize
Leslie Fox Prize
SCAN-95, Computer Arithmetic
Diffraction Seminar in St. Petersburg, Russia
Workshop on Inertial Manifolds in China
Symposium in Honor of Herbert B. Keller
Conference on ABS Methods
Boundary Element Conference
Conference on Geometric Design
17th SPEEDUP Workshop
Urgent Position in Nice, France
Postdoc Position at Lawrence Berkeley Laboratory
Postdoc Position at Ames DOE Laboratory
Postdoc Positions at University of Greenwich, London
Position at Royal Military College of Science
Contents, Linear Algebra and its Applications
Contents, Numerical Algorithms
Contents, Selecta Statistica Canadiana
Contents, Journal of Computing and Information
Digest for Sunday, January 15, 1995
In this digest:
Midwest NA Day
Announcement of the GSCI Digest
Change of Address for I.Kaporin
Collecting Information about Simulation Codes
Some Bibliographies in Interval Computation
UniCalc Solver
YSMP and Sparse Solvers
Test Problems for ODEs and DAEs
SIAM Student Paper Prizes
EPA Graduate Student Fellowships
Conference on Computational and Applied Mathematics
Multigrid Course 1995
I. Babuska Prize Awarded
Prague Mathematical Conference 1996
New Journal, Yugoslav Journal of Operations Research
11th GAMM Seminar Kiel
Templates Workshop
Position at Bettis Atomic Power Laboratory
Position at NC State University
Position at Oxford University
Positions at Daresbury Laboratory, UK
Position at Stanford University
Graduate Study in Scientific Computing at Huddersfield, UK
Contents, Advances in Computational Mathematics
Digest for Sunday, January 8, 1995
In this digest:
John Rollett
New Book, Scientific Computing with PCs
Out-of-core Symmetric Banded Solver
New Address for Jos van Dorsselaer
Positions at AspenTech in Cambridge, UK
Contents, SIAM Control and Optimization
Digest for Sunday, January 1, 1995
In this digest:
NA Digest Calendar
WWW Server for Journal of Approximation Theory
Re: Charta of Free Electronic Access to Publications
Change of Address for Magolu monga-Made
SIAM Southeastern-Atlantic Section
Seminar on Simulation of Devices and Technologies
Conference on Modeling in Mechanics of Continuous Media
Electronic Transactions on Numerical Analysis
Contents, Constructive Approximation | {"url":"http://www.netlib.org/na-digest-html/95/index.html","timestamp":"2014-04-16T18:58:44Z","content_type":null,"content_length":"43068","record_id":"<urn:uuid:b5963ebc-48b2-49f2-adb0-72994c626184>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00262-ip-10-147-4-33.ec2.internal.warc.gz"} |
A 50 Kg Pole Vaulter Running At 13 M/s Vaults Over ... | Chegg.com
A 50 kg pole vaulter running at 13 m/s vaults over the bar. Her speed when she is above the bar is 1.3 m/s. Neglect air resistance, as well as any energy absorbed by the pole, and determine her
altitude as she crosses the bar.
(answer in m)
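A quick sketch of the standard energy-conservation approach (not an official posted solution; g = 9.8 m/s² is an assumed value):

```python
# Conservation of energy, neglecting air resistance and pole losses:
#   (1/2) m v0^2 = (1/2) m v^2 + m g h   =>   h = (v0^2 - v^2) / (2 g)
g = 9.8            # m/s^2 (assumed; some texts use 9.81)
v0, v = 13.0, 1.3  # run-up speed and speed above the bar, m/s

h = (v0**2 - v**2) / (2 * g)
print(round(h, 2))  # -> 8.54 m; note the mass cancels out
```

The 50 kg figure never enters the arithmetic, which is the point of setting the problem up via energy per unit mass.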
A skier of mass 66 kg is pulled up a slope by a motor-driven cable.
a) How much work is required to pull him 45 m up a 30° slope (assumed frictionless) at a constant speed of 2.1 m/s? (answer in J)
b) What power must a motor have to perform this task? (answer in hp)
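A sketch of both parts, assuming g = 9.8 m/s² and the usual 745.7 W per horsepower:

```python
import math

m, d, theta, v = 66.0, 45.0, math.radians(30), 2.1
g = 9.8  # m/s^2, assumed

# Constant speed on a frictionless slope: the cable force just balances
# gravity along the incline, F = m g sin(theta).
F = m * g * math.sin(theta)
W = F * d   # work over the 45 m pull
P = F * v   # power = force * speed at constant velocity
print(round(W), round(P / 745.7, 3))  # -> 14553 J and about 0.911 hp
```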
When a 2.70-kg object is hung vertically on a certain light spring described by Hooke's law, the spring stretches 3.06 cm.
a) What is the force constant of the spring? (in N/m)
b) If the 2.70-kg object is removed, how far will the spring stretch if a 1.35-kg block is hung on it? (in cm)
c) How much work must an external agent do to stretch the same spring 6.40 cm from its unstretched position? (in J) | {"url":"http://www.chegg.com/homework-help/questions-and-answers/50-kg-pole-vaulter-running-13-m-s-vaults-bar-speed-bar-13-m-s-neglect-air-resistance-well--q969136","timestamp":"2014-04-18T11:42:48Z","content_type":null,"content_length":"19610","record_id":"<urn:uuid:514122a8-2456-43b9-aabe-0a4370a8e859>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
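A Hooke's-law sketch for all three parts of the spring question (g = 9.8 m/s² assumed):

```python
m1, x1 = 2.70, 0.0306  # kg, m (the 3.06 cm stretch)
g = 9.8                # m/s^2, assumed

k = m1 * g / x1               # (a) Hooke's law: k = F / x
x2 = 1.35 * g / k * 100       # (b) stretch for half the mass, in cm
W = 0.5 * k * 0.0640**2       # (c) work stored stretching 6.40 cm from rest
print(round(k, 1), round(x2, 2), round(W, 2))  # -> 864.7 N/m, 1.53 cm, 1.77 J
```

Part (b) is half the original stretch because the hanging force is linear in the mass.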
Lecture notes on quantum cohomology of the flag manifold
Sergey Fomin
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139
Abstract: This is an exposition of some recent developments related to the object in the title, particularly the computation of the Gromov-Witten invariants of the flag manifold [5] and the quadratic
algebra approach [6]. The notes are largely based on the papers [5] and [6], authored jointly with S. Gelfand, A. N. Kirillov, and A. Postnikov. This is by no means an exhaustive survey of the
subject, but rather a casual introduction to its combinatorial aspects.
Classification (MSC2000): 14N35; 05E15
Full text of the article:
Electronic fulltext finalized on: 1 Nov 2001. This page was last modified: 7 Dec 2001.
© 2001 Mathematical Institute of the Serbian Academy of Science and Arts
© 2001 ELibM for the EMIS Electronic Edition | {"url":"http://www.emis.de/journals/PIMB/080/5.html","timestamp":"2014-04-16T04:19:01Z","content_type":null,"content_length":"3446","record_id":"<urn:uuid:e9cd0988-5919-4d0c-8475-f542dfafc922>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00003-ip-10-147-4-33.ec2.internal.warc.gz"} |
An International Journal on the Teaching and Learning of Statistics
JSE Volume 14, Number 1 Abstracts
ProbLab is a probability-and-statistics unit developed at the Center for Connected Learning and Computer-Based Modeling, Northwestern University. Students analyze the combinatorial space of the
9-block, a 3-by-3 grid of squares, in which each square can be either green or blue. All 512 possible 9-blocks are constructed and assembled in a “bar chart” poster according to the number of green
squares in each, resulting in a narrow and very tall display. This combinations tower is the same shape as the normal distribution received when 9-blocks are generated randomly in computer-based
simulated probability experiments. The resemblance between the display and the distribution is key to student insight into relations between theoretical and empirical probability and between
determinism and randomness. The 9-block also functions as a sampling format in a computer-based statistics activity, where students sample from a “population” of squares and then input and pool their
guesses as to the greenness of the population. We report on an implementation of the design in two Grade 6 classrooms, focusing on student inventions and learning as well as emergent classroom
socio-mathematical behaviors in the combinations-tower activity. We propose an application of the 9-block framework that affords insight into the Central Limit Theorem in science.
Key Words: Computers; Education; Mathematics; Sample; Statistics.
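The combinatorics behind the 512 blocks and the tower's shape can be checked in a few lines; this sketch is an illustration only, not part of the ProbLab materials:

```python
from math import comb

# Each of the 9 squares is green or blue, so there are 2**9 = 512 blocks.
# Grouping them by the number k of green squares gives C(9, k) blocks per
# column, which produces the symmetric bell shape the abstract describes.
tower = [comb(9, k) for k in range(10)]
print(tower)       # -> [1, 9, 36, 84, 126, 126, 84, 36, 9, 1]
print(sum(tower))  # -> 512
```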
Lecture is a common presentation style that gives instructors a lot of control over topics and time allocation, but can limit active student participation and learning. This article presents some
ideas to increase the level of student involvement in lecture. The examples and suggestions are based on the author’s experience as a senior lecturer for four years observing and mentoring graduate
student instructors. The ideas can be used to modify or augment current plans and preparations to increase student participation. The ideas and examples will be useful as enhancements to current
efforts to teach probability and statistics. Most suggestions will not take much class time and can be integrated smoothly into current preparations.
Key Words: Active learning; Contrasts; Problem Solving; Statistical Reasoning; Student Participation; Teaching Methods.
In the Fall 2001 semester, we taught a “Web-enhanced” version of the undergraduate course “Statistical Methods” (STAT 2000) at Utah State University. The course used the electronic textbook
CyberStats in addition to “face-to-face” teaching. This paper gives insight into our experiences in teaching this course. We describe the main features of CyberStats, the course content and the
teaching techniques used in class, students' reactions and performance, and some specific problems encountered during the course. We compare this Web-enhanced course with other similar textbook-based
courses and report instructors' and students' opinions. We finish with a general discussion of advantages and disadvantages of a Web-enhanced statistics course.
Key Words: Computer; Interactivity; Statistical Concepts; Undergraduate Course; Web-enhanced Course.
The Statistical Reasoning Assessment or SRA is one of the first objective instruments developed to assess students’ statistical reasoning. Published in 1998 (Garfield, 1998a), it became widely
available after the Garfield (2003) publication. Empirical studies applying the SRA by Garfield and co-authors brought forward two intriguing puzzles: the ‘gender puzzle’, and the puzzle of
‘non-existing relations with course performances’. Moreover, those studies find a, much less puzzling, country-effect. The present study aims to address those three empirical findings. Findings in
this study suggest that both puzzles may be at least partly understood in terms of differences in effort students invest in studying: students with strong effort-based learning approaches tend to
have lower correct reasoning scores, and higher misconception scores, than students with different learning approaches. In distinction with earlier studies, we administered the SRA at the start of
our course. Therefore measured reasoning abilities, correct as well as incorrect, are to be interpreted unequivocally as preconceptions independent of any instruction in our course. Implications of
the empirical findings for statistics education are discussed.
Key Words: Assessment; Attitudes toward statistics; Learning approaches; Statistical reasoning assessment.
In a very large Introductory Statistics class, i.e. in a class of more than 300 students, instructors may hesitate to apply active learning techniques, discouraged by the volume of extra work. In
this paper two such activities are presented that evoke student involvement in the learning process. The first is group peer teaching and the second is an in-class simulation of random sampling from
the discrete Uniform Distribution to demonstrate the Central Limit Theorem. They are both easy to implement in a very large class and improve learning.
Key Words: In-class simulation; Peer teaching; Sampling distribution.
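The in-class simulation can be sketched generically; this is not the authors' classroom procedure, just an illustration of sampling means from a discrete uniform distribution (a fair die), with a fixed seed so the run is reproducible:

```python
import random
import statistics

random.seed(42)  # fixed seed: illustrative, reproducible run

# Sample means of n fair-die rolls: the population mean is 3.5, and by the
# Central Limit Theorem the means cluster tightly around it as n grows.
n, reps = 30, 1000
means = [statistics.mean(random.randint(1, 6) for _ in range(n))
         for _ in range(reps)]

grand_mean = statistics.mean(means)
print(round(grand_mean, 2))  # close to 3.5
```

Plotting `means` as a histogram would show the approximately normal shape the activity is designed to reveal.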
Datasets and Stories
From a very young age, shoes for boys tend to be wider than shoes for girls. Is this because boys have wider feet, or because it is assumed that girls are willing to sacrifice comfort for fashion,
even in elementary school? To assess the former, a statistician measures kids’ feet.
Key Words:
I selected a simple random sample of 100 movies from the Movie and Video Guide (1996), by Leonard Maltin. My intent was to obtain some basic information on the population of roughly 19,000 movies
through a small sample. In exploring the data, I discovered that it exhibited two paradoxes about a three-variable relationship: (1) A non-transitivity paradox for positive correlation, and (2)
Simpson’s paradox. Giving concrete examples of these two paradoxes in an introductory course gives to students a sense of the nuances involved in describing associations in observational studies.
Key Words: Controlling for a variable; Non-transitivity of positive correlation; Simpson’s paradox. | {"url":"http://www.amstat.org/publications/jse/v14n1/abstracts.html","timestamp":"2014-04-19T01:55:17Z","content_type":null,"content_length":"12887","record_id":"<urn:uuid:d9ecbe60-7dcc-44e5-92a9-bd4433440f6b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00333-ip-10-147-4-33.ec2.internal.warc.gz"} |
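Simpson's paradox can be made concrete with the classic kidney-stone treatment counts (an illustration only, unrelated to the movie sample in the abstract):

```python
# (successes, patients) for treatments A and B, split by stone size.
# These are the well-known Charig et al. counts, used purely to illustrate.
a_small, a_large = (81, 87), (192, 263)
b_small, b_large = (234, 270), (55, 80)

def rate(s):
    return s[0] / s[1]

# A has the higher success rate within each subgroup...
print(rate(a_small) > rate(b_small))  # True
print(rate(a_large) > rate(b_large))  # True

# ...yet B wins in the aggregate: Simpson's paradox.
a_all = (81 + 192, 87 + 263)
b_all = (234 + 55, 270 + 80)
print(rate(a_all) < rate(b_all))      # True
```

The reversal happens because treatment A was given mostly to the harder (large-stone) cases, exactly the "controlling for a variable" issue the key words name.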
Journal of the Optical Society of America
A sampling theorem applicable to that class of linear systems characterized by sufficiently slowly varying linespread functions is developed. For band-limited inputs such systems can be exactly
characterized with knowledge of the sampled system line-spread function and the corresponding sampled input. The desired sampling rate is shown to be determined by both the system and the input. The
corresponding output is shown to be band limited. A discrete matrix representation of the specific system class is also presented. Applications to digital processing and coherent space-variant system
representation are suggested.
© 1976 by the Optical Society of America
Robert J. Marks II, John F. Walkup, and Marion O. Hagler, "A sampling theorem for space-variant systems," J. Opt. Soc. Am. 66, 918-921 (1976)
1. T. Kailath, "Channel Characterization: Time-Variant Dispersive Channels," in Lectures on Communications System Theory, edited by E. J. Baghdady (McGraw-Hill, New York, 1960), pp. 95–124.
2. N. Liskov, "Analytical Techniques for Linear Time-Varying Systems," Ph.D. dissertation (Electrical Engineering Research Laboratory, Cornell University, Ithaca, N. Y. 1964) (unpublished), pp.
3. T. S. Huang, "Digital Computer Analysis of Linear Shift-Variant Systems," in Proc. NASA/ERA Seminar December, 1969 (unpublished), pp. 83–87.
4. A. W. Lohmann and D. P. Paris, "Space-Variant Image Formation," J. Opt. Soc. Am. 55, 1007–1013 (1965).
5. Here, and in the material to follow, "band limited" refers specifically to that case where the spectrum is nonzero only over a single interval centered about zero frequency. It appears, however,
that the results can be extended to any spectrum with finite support by application of corresponding sampling theorems. For example, see D. A. Linden, "A Discussion of Sampling Theorems," Proc.
IRE 47, 1219–1226 (1959).
6. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1968).
7. L. M. Deen, J. F. Walkup, and M. O. Hagler, "Representations of Space-Variant Optical Systems Using Volume Holograms," Appl. Opt. 14, 2438–2446 (1975).
8. L. M. Deen, "Holographic Representations of Optical Systems," M. S. thesis (Department of Electrical Engineering, Texas Tech University, Lubbock, Tex., 1975) (unpublished), pp. 37–60.
9. R. J. Marks II and T. F. Krile, "Holographic Representation of Space-Variant Systems; System Theory," to appear in Appl. Opt.
10. R. J. Marks II, "Holographic Recording of Optical Space-Variant Systems," M. S. thesis (Rose-Hulman Institute of Technology, Terre Haute. Ind., 1973) (unpublished), pp. 74–93.
11. R. J. Collier, C. B. Burckhardt, and L. H. Lin, Optical Holography (Academic, New York/London, 1971), pp. 466–467.
12. D. Slepian, "On Bandwidth," Proc. IEEE 64, 292 (1976).
13. K. Yao and J. B. Thomas, "On Truncation Error Bounds for Sampling Representations of Band-Limited Signals," IEEE Trans. Aerospace Electron. Syst. 2, 640–647 (1966).
{"url":"http://www.opticsinfobase.org/josa/abstract.cfm?uri=josa-66-9-918","timestamp":"2014-04-17T16:39:45Z","content_type":null,"content_length":"76166","record_id":"<urn:uuid:f87b5010-40db-431f-a8c0-02e9b3e714fe>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
Robert Murphy: Mathematician and Physicist - Conclusion - Appendix
Murphy’s years of alcohol abuse took a toll on his health. In 1843, he contracted tuberculosis of the lungs [Barry 1999] and he died soon after, on March 12, 1843. It was shortly after Murphy’s death
that De Morgan made the claim about his genius with which we opened this biography: “He had a true genius for mathematical invention” [Venn 2009]. Murphy was buried in Kensal Green Cemetery, London,
where “[t]he grave has no headstone nor landing stone nor surround. It is totally unmarked” [Barry 1999, p. 173].
We end this biographical journey with Murphy’s obituary, which appeared in The Gentleman’s Magazine:
March 12. The Rev. Robert Murphy, M.A. Fellow of Gonville and Caius college, Cambridge, and Examiner in Mathematics and Natural Philosophy at University College, London. He took his degree of B.A. in
1829; and was the author of “Elementary Principles of the Theories of Electricity, Heat, and Molecular Actions” [Urban 1843, p. 545].
Figure 7. Likeness of Robert Murphy (ca. 1829) (Source: Permission granted by the Master and Fellows of Gonville and Caius College, Cambridge)
As noted above, all but the first of Murphy's known works are readily available from Google Books or JSTOR. This first work,
Refutation of a Pamphlet Written by the Rev. John Mackey Entitled “A Method of Making a Cube a Double of a Cube, Founded on the Principles of Elementary Geometry,” wherein His Principles Are Proved
Erroneous and the Required Solution Not Yet Obtained [1824],
was itself published in pamphlet form and was noticed by at least one well-known mathematician, Augustus De Morgan, in 1864 or earlier. The authors have provided a transcription of Murphy's
Refutation, with commentary, as an appendix available here.
The authors would like to thank Dr. Patricia R. Allaire for providing us with foundational material on Robert Murphy. We are also grateful to the Gonville and Caius College libraries, and
particularly to Ms. Kate McQuillian, for locating many of the sources and pictures for us.
The authors are extremely grateful to an anonymous referee for his/her many helpful suggestions and corrections. Finally, we are thankful to Dr. Patricia R. Allaire who read the revision of this
paper and made additional helpful suggestions.
About the Authors
Anthony J. Del Latto has a B.S. in mathematics from Adelphi University, where he served as a tutor for the Department of Mathematics and Computer Science from 2009 to 2012 and teacher’s assistant for
the course MTH 457: Abstract Algebra during his senior year. He is currently pursuing an M.A. in mathematics education with initial certification for grades 7-12 from Teachers College, Columbia
Salvatore J. Petrilli, Jr. is an assistant professor at Adelphi University. He has a B.S. in mathematics from Adelphi University and an M.A. in mathematics from Hofstra University. He received an
Ed.D. in mathematics education from Teachers College, Columbia University, where his advisor was J. Philip Smith. His research interests include history of mathematics and mathematics education. | {"url":"http://www.maa.org/publications/periodicals/convergence/robert-murphy-mathematician-and-physicist-conclusion-appendix?device=mobile","timestamp":"2014-04-16T09:09:41Z","content_type":null,"content_length":"24791","record_id":"<urn:uuid:1dc5071e-8fd0-4e38-9a45-638373c41bf4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00097-ip-10-147-4-33.ec2.internal.warc.gz"} |
Roselle Park Algebra 2 Tutor
...I know the SAT and ACT math tests inside and out. I know the structure and style of questions and can definitely help you improve your score. I have a BA from Rutgers University and an MSc from
the London School of Economics, both degrees in mathematics.
28 Subjects: including algebra 2, physics, calculus, statistics
...I have experience working with adults, high school students, and especially freshmen and sophomores. I do at times tutor middle school students. To help my students succeed, I sometimes
mentor them, most of the time to help them boost their self-confidence and release the genius in them.
23 Subjects: including algebra 2, calculus, French, German
...I played Division 1 for Lehigh University. Then, I played semi-pro in England. I also ran a coaching session while living in England.
16 Subjects: including algebra 2, statistics, geometry, precalculus
...I would be starting a Pure Mathematics PhD program at the University of Oklahoma this fall. In short, I love mathematics. My motivation is to make you love it as well.
12 Subjects: including algebra 2, calculus, statistics, geometry
...I currently play in a band and have been involved with one sometime or another consistently since a teen. Although I cannot provide help relative to my strength in science, I can provide
awesome beginner instruction, and great intermediate skills. Thanks for your time and I sincerely hope to wo...
28 Subjects: including algebra 2, chemistry, writing, geometry | {"url":"http://www.purplemath.com/roselle_park_nj_algebra_2_tutors.php","timestamp":"2014-04-18T18:54:24Z","content_type":null,"content_length":"23827","record_id":"<urn:uuid:e3e72ed5-bc9d-47b7-8512-cd43713e262a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00362-ip-10-147-4-33.ec2.internal.warc.gz"} |
Real Analysis and Probability
Results 1 - 10 of 413
, 2003
"... Kernel based algorithms such as support vector machines have achieved considerable success in various problems in the batch setting where all of the training data is available in advance.
Support vector machines combine the so-called kernel trick with the large margin idea. There has been little u ..."
Cited by 2029 (128 self)
Kernel based algorithms such as support vector machines have achieved considerable success in various problems in the batch setting where all of the training data is available in advance. Support
vector machines combine the so-called kernel trick with the large margin idea. There has been little use of these methods in an online setting suitable for real-time applications. In this paper we
consider online learning in a Reproducing Kernel Hilbert Space. By considering classical stochastic gradient descent within a feature space, and the use of some straightforward tricks, we develop
simple and computationally efficient algorithms for a wide range of problems such as classification, regression, and novelty detection. In addition to allowing the exploitation of the kernel trick in
an online setting, we examine the value of large margins for classification in the online setting with a drifting target. We derive worst case loss bounds and moreover we show the convergence of the
hypothesis to the minimiser of the regularised risk functional. We present some experimental results that support the theory as well as illustrating the power of the new algorithms for online novelty
detection. In addition
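The kind of algorithm the abstract describes, stochastic gradient descent in a reproducing kernel Hilbert space, can be sketched as follows. This is a generic NORMA-style illustration under assumed parameter values (`eta`, `lam`, `gamma` are all invented here), not the authors' exact method:

```python
import math

def rbf(x, z, gamma=1.0):
    """Gaussian kernel; its RKHS is the space the online updates live in."""
    return math.exp(-gamma * (x - z) ** 2)

class OnlineKernelRegressor:
    """SGD on regularized squared loss in an RKHS.

    The hypothesis is f(x) = sum_i alpha_i k(x_i, x); each update shrinks
    old coefficients (the regularization term) and appends a new center,
    which is the 'kernel trick in an online setting' idea.
    """

    def __init__(self, eta=0.2, lam=0.01, gamma=1.0):
        self.eta, self.lam, self.gamma = eta, lam, gamma
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * rbf(c, x, self.gamma)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, y):
        err = self.predict(x) - y
        self.alphas = [(1 - self.eta * self.lam) * a for a in self.alphas]
        self.centers.append(x)
        self.alphas.append(-self.eta * err)
        return abs(err)

# Stream a noise-free sin() target for a few passes: online error shrinks.
model = OnlineKernelRegressor()
xs = [i * 0.5 for i in range(13)]  # grid on [0, 6]
passes = [sum(model.update(x, math.sin(x)) for x in xs) / len(xs)
          for _ in range(5)]
print(passes[0], passes[-1])  # mean error on the last pass is lower
```

A practical implementation would also truncate or merge centers so the expansion does not grow without bound, a concern the online-kernel literature addresses directly.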
, 1999
"... We introduce a new method of constructing kernels on sets whose elements are discrete structures like strings, trees and graphs. The method can be applied iteratively to build a kernel on an
infinite set from kernels involving generators of the set. The family of kernels generated generalizes the fa ..."
Cited by 368 (0 self)
We introduce a new method of constructing kernels on sets whose elements are discrete structures like strings, trees and graphs. The method can be applied iteratively to build a kernel on an infinite
set from kernels involving generators of the set. The family of kernels generated generalizes the family of radial basis kernels. It can also be used to define kernels in the form of joint Gibbs
probability distributions. Kernels can be built from hidden Markov random fields, generalized regular expressions, pair-HMMs, or ANOVA decompositions. Uses of the method lead to open problems involving
the theory of infinitely divisible positive definite functions. Fundamentals of this theory and the theory of reproducing kernel Hilbert spaces are reviewed and applied in establishing the validity
of the method.
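A concrete example of a kernel on strings in this spirit is the spectrum kernel, which counts shared substrings; this sketch illustrates the general idea of kernels on discrete structures, not the paper's own construction:

```python
from collections import Counter

def spectrum_kernel(s, t, k=2):
    """Count shared length-k substrings, with multiplicity.

    This equals a dot product of k-mer count vectors, so it is a valid
    positive-definite kernel on strings.
    """
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[w] * ct[w] for w in cs)

print(spectrum_kernel("abab", "abba"))  # -> 3: "ab" matches 2*1, "ba" 1*1
print(spectrum_kernel("abab", "abab"))  # -> 5: squared norm of the counts
```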
- IEEE Transactions on Information Theory, 2002
"... We consider a user communicating over a fading channel with perfect channel state information. Data is assumed to arrive from some higher layer application and is stored in a buffer until it is
transmitted. We study adapting the user's transmission rate and power based on the channel state informati ..."
Cited by 170 (7 self)
We consider a user communicating over a fading channel with perfect channel state information. Data is assumed to arrive from some higher layer application and is stored in a buffer until it is
transmitted. We study adapting the user's transmission rate and power based on the channel state information as well as the buffer occupancy; the objectives are to regulate both the long-term average
transmission power and the average buffer delay incurred by the traffic. Two models for this situation are discussed; one corresponding to fixed-length/variable-rate codewords and one corresponding
to variable-length codewords. The trade-off between the average delay and the average transmission power required for reliable communication is analyzed. A dynamic programming formulation is given to
find all Pareto optimal power/delay operating points. We then quantify the behavior of this tradeoff in the regime of asymptotically large delay. In this regime we characterize simple buffer control
policies which exhibit optimal characteristics. Connections to the delay-limited capacity and the expected capacity of fading channels are also discussed.
- Journal of Machine Learning Research, 2001
Cited by 158 (20 self)
In this article we study the generalization abilities of several classifiers of support vector machine (SVM) type using a certain class of kernels that we call universal. It is shown that the soft
margin algorithms with universal kernels are consistent for a large class of classification problems including some kind of noisy tasks provided that the regularization parameter is chosen well. In
particular we derive a simple sufficient condition for this parameter in the case of Gaussian RBF kernels. On the one hand our considerations are based on an investigation of an approximation
property---the so-called universality---of the used kernels that ensures that all continuous functions can be approximated by certain kernel expressions. This approximation property also gives a new
insight into the role of kernels in these and other algorithms. On the other hand the results are achieved by a precise study of the underlying optimization problems of the classifiers. Furthermore,
we show consistency for the maximal margin classifier as well as for the soft margin SVM's in the presence of large margins. In this case it turns out that also constant regularization parameters
ensure consistency for the soft margin SVM's. Finally we prove that even for simple, noise free classification problems SVM's with polynomial kernels can behave arbitrarily badly.
- Journal of Artificial Intelligence Research, 2001
Cited by 153 (5 self)
Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems
associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in
Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's
chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0, 1) (which has a natural interpretation in terms of bias-variance trade-off),
and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly
describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple agents, higher-order derivatives, and a version for training stochastic policies
with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and
a conjugate-gradient procedure to find local optima of the average reward.
- Journal of Artificial Intelligence Research, 2000
Cited by 143 (0 self)
A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small
enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for
automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the
learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set
of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks
in the same environment.
- Econometrica, 1999
Cited by 137 (13 self)
This paper develops a regression limit theory for nonstationary panel data with large numbers of cross section (n) and time series (T) observations. The limit theory allows for both sequential
limits, wherein T → ∞ followed by n → ∞, and joint limits where T, n → ∞ simultaneously; and the relationship between these multidimensional limits is explored. The panel structures considered allow for
no time series cointegration, heterogeneous cointegration, homogeneous cointegration, and near-homogeneous cointegration. The paper explores the existence of long-run average relations between
integrated panel vectors when there is no individual time series cointegration and when there is heterogeneous cointegration. These relations are parameterized in terms of the matrix regression
coefficient of the long-run average covariance matrix. In the case of homogeneous and near homogeneous cointegrating panels, a panel fully modified regression estimator is developed and studied. The
limit theory enables us to test hypotheses about the long run average parameters both within and between subgroups of the full population.
- SIAM Review, 1999
Cited by 131 (1 self)
Abstract. Iterated random functions are used to draw pictures or simulate large Ising models, among other applications. They offer a method for studying the steady state distribution of a Markov
chain, and give useful bounds on rates of convergence in a variety of examples. The present paper surveys the field and presents some new examples. There is a simple unifying idea: the iterates of
random Lipschitz functions converge if the functions are contracting on the average.
Cited by 130 (9 self)
We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover
sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution
estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice.
- Internat. Statist. Rev., 2002
Cited by 84 (2 self)
Abstract. When studying convergence of measures, an important issue is the choice of probability metric. We provide a summary and some new results concerning bounds among some important probability
metrics/distances that are used by statisticians and probabilists. Knowledge of other metrics can provide a means of deriving bounds for another one in an applied problem. Considering other metrics
can also provide alternate insights. We also give examples that show that rates of convergence can strongly depend on the metric chosen. Careful consideration is necessary when choosing a metric.
Noetherian Rings and Their Applications
             
Researchers in ring theory or allied topics, such as the representation theory of finite dimensional Lie algebras, will appreciate this collection of expository lectures on advances in ring theory and their applications to other areas. Five of the lectures were delivered at a conference on Noetherian rings at the Mathematisches Forschungsinstitut, Oberwolfach, in January 1983, and the sixth was delivered at a London Mathematical Society Durham conference in July 1983. The study of the prime and primitive ideal spectra of various classes of rings forms a common theme in the lectures, and they touch on such topics as the structure of group rings of polycyclic-by-finite groups, localization in noncommutative rings, and rings of differential operators. The lectures require the background of an advanced graduate student in ring theory and may be used in seminars in ring theory at this level.
• J. T. Stafford -- The Goldie rank of a module
• D. R. Farkas -- Noetherian group rings: An exercise in creating folklore and intuition
• J. C. Jantzen -- Primitive ideals in the enveloping algebra of a semisimple Lie algebra
• T. J. Enright -- Representation theory of semisimple Lie algebras
• J.-E. Björk -- Filtered Noetherian rings
• R. Rentschler -- Primitive ideals in enveloping algebras
Mathematical Surveys and Monographs, Volume 24; 1987; 118 pp
ISBN-10: 0-8218-1525-3
List Price: US$57
Order Code: SURV/
Best Efficiency Point (B.E.P.): the point on a pump's performance curve that corresponds to the highest efficiency.
Casing: the body of the pump which encloses the impeller.
Cavitation: the formation and sudden collapse of vapor bubbles in a liquid, caused by a local drop in pressure below the vapor pressure followed by a sudden pressure increase.
Centrifugal force: a force associated with a rotating body. In the case of a pump, the rotating impeller pushes fluid on the back of the impeller blade, imparting motion. Since the motion is circular, there is a centrifugal force associated with it. This force pushes the fluid against the fixed pump casing, thereby pressurizing the fluid.
Control volume: limits imposed for the theoretical study of a system. The limits are usually set to intersect the system at locations where conditions are known
Datum plane: a reference plane. A conveniently accessible known surface from which all vertical measurements are taken or referred to.
Discharge Static Head: the difference in elevation between the liquid level of the discharge tank and the centerline of the pump. This head also includes any additional head that may be present at
the discharge tank fluid surface.
Enthalpy: a thermodynamic property of a fluid. The enthalpy of a fluid consist of the energy associated with the fluid at a microscopic level (related to the temperature of the fluid) plus the energy
present in the form of pressure at the inlet and outlet of a system.
Equipment: refers to any device in the system other than pipes, pipe fittings and isolation valves.
Equipment head difference: the difference in head between the outlet and inlet of an equipment.
Friction: the force produced as reaction to movement. All fluids produce friction when they are in motion. The higher the fluid viscosity, the higher the friction force for the same flow rate.
Friction is produced internally as one layer of fluid moves with respect to another and also at the fluid/wall interface.
Friction head difference: the difference in head required to move a mass of fluid from one position to another at a certain flow rate
Head: refers to the pressure produced by a vertical column of fluid
Heat loss: the heat lost by a system (i.e. the heat lost due to friction).
Heat transfer: the heat lost or gained by a system. This book has not considered the application of equipment that produce a significant change in the fluid temperature.
Impeller: the rotating element of a pump which imparts movement and pressure to a fluid.
Internal energy: a thermodynamic property. The energy associated with a substance at a molecular level.
Iteration: a method of solving an equation by trial and error. An iteration technique is used to solve equations where the unknown variable cannot be explicitly isolated. A frequently used technique
is the Newton-Raphson method.
Kinetic energy: a thermodynamic property. The energy associated with the mass and velocity of a body.
Laminar: a distinct flow regime that occurs at low Reynolds number (Re < 2000). It is characterized by particles in successive layers moving past one another in a well behaved manner.
Mercury (Hg): a metal which remains liquid at room temperature. This property makes it useful when used in a thin vertical glass tube since small changes in pressure can be measured as changes in the
mercury column height. The inch of mercury is often used as a unit for negative pressure
Negative pressure: pressure that is less than the pressure in the external environment.
Net Positive Suction Head (N.P.S.H.): the head in feet of water absolute as measured or calculated at the pump suction flange, less the vapor pressure (converted to feet of water absolute) of the fluid.
Newtonian: a fluid whose viscosity does not change with the rate of strain (shear rate) it is subjected to.
Operating point: the point on the system curve corresponding to the flow and head required to meet the process requirements
Performance curve: a curve of flow vs. Total Head for a specific pump model and impeller diameter
Pipe roughness: a measurement of the average height of peaks producing roughness on the internal surface of pipes. Roughness is measured in many locations, and is usually defined in micro-inches RMS
(root mean square).
Potential energy: a thermodynamic property. The energy associated with the mass and height of a body above a reference plane.
Pressure: the application of external or internal forces to a body producing tension or compression within the body. This tension divided by a surface is called pressure.
Shut-off head: the Total Head corresponding to zero flow on the pump performance curve
Specific gravity: the ratio of the density of a fluid to that of water at standard conditions
Strain: the ratio of the absolute displacement of a reference point within a body to a characteristic length of the body.
Stress: in this case refers to tangential stress or the force between the layers of fluid divided by the surface area between the layers.
Suction Static Head: the difference in elevation between the liquid level of the source of supply and the centerline of the pump. This head also includes any additional head that may be present at
the suction tank fluid surface
Suction Static Lift: the same definition as the Suction Static head. This term is only used when the pump centerline is above the suction tank fluid surface.
Siphon: is a system of piping or tubing where the exit point is lower than the entry point.
System: the system as referred to in this book includes all the piping with or without a pump, starting at the inlet point (often the fluid surface of the suction tank) and ending at the outlet point
(often the fluid surface of the discharge tank).
System curve: is a plot of flow vs. Total Head that satisfies the system requirements.
System equation: the equation for Total Head vs. flow for a specific system
System requirements: the parameters that determine Total Head, that is: friction and system inlet and outlet conditions (i.e. velocity, elevation and pressure).
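The system curve and system equation defined above often take a simple quadratic form: a fixed static head plus a friction term that grows with the square of the flow. The sketch below is illustrative only; the static head and friction coefficient are made-up values, not taken from any real system.

```python
# Illustrative system curve: H(Q) = H_static + k * Q^2.
# H_STATIC and K_FRICTION are hypothetical values for demonstration.

H_STATIC = 20.0     # total static head, m
K_FRICTION = 0.05   # lumped friction coefficient, m/(m^3/h)^2

def system_head(q_m3h):
    """Total Head the system requires at flow rate q (m^3/h)."""
    return H_STATIC + K_FRICTION * q_m3h ** 2

# At zero flow only the static head remains; the friction term grows as Q^2.
print(system_head(0.0))    # 20.0
print(system_head(20.0))   # 40.0
```

The operating point is where this curve intersects the pump's performance curve.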
Total Dynamic Head: identical to Total Head. This term is no longer used and has been replaced by the shorter Total Head.
Total Head: the difference between the head at the discharge and suction flange of the pump
Total Static Head: is the difference between the discharge and suction static head including the difference between the surface pressure of the discharge and suction tanks
Turbulent: a type of flow regime characterized by the rapid movement of fluid particles in many directions as well as the general direction of the overall fluid flow
Vapor pressure: the pressure at which a liquid boils at a specified temperature
Velocity Head difference: the difference in velocity head between the outlet and inlet of the system
Viscosity: a property, which measures a fluid's resistance to movement. The resistance is caused by friction between the fluid and the boundary wall and internally by the fluid layers moving at
different velocities
Work: the energy required to drive the fluid through the system
Frequently Asked Questions on Pumps
What is the difference between head and pressure?
To start, head is not equivalent to pressure. Head is a term which has units of length, such as feet. In the following equation (Bernoulli's equation) each of the terms is a head term: the elevation head h, the pressure head p/(ρg) and the velocity head v²/(2g). Head is equal to specific energy, of which the units are lbf-ft/lbf. Therefore the elevation head is actually the specific potential energy, the pressure head the specific pressure energy, and the velocity head the specific kinetic energy (specific means per unit weight).
So what is the difference? Head is energy per unit weight, whereas pressure is a force per unit area.
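The distinction can be made concrete with a short conversion, sketched here in SI units; the density and pressure values are illustrative.

```python
# Convert between pressure and head: head = p / (rho * g).
G = 9.81  # gravitational acceleration, m/s^2

def pressure_to_head(p_pa, rho=1000.0):
    """Head in metres of the given fluid for a pressure in pascals."""
    return p_pa / (rho * G)

def head_to_pressure(h_m, rho=1000.0):
    """Pressure in pascals for a head in metres of the given fluid."""
    return h_m * rho * G

# 100 kPa (about 1 atm) of water corresponds to roughly 10.2 m of head.
print(pressure_to_head(100_000.0))
```

A pump develops the same head regardless of fluid density, but the resulting pressure scales with density, which is why head rather than pressure characterizes a pump.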
What is the total pressure drop for several pieces of equipment in the same line?
The pressure drop associated with each piece of equipment is additive.
What are fittings?
Fittings are all the miscellaneous pipe connections (tees, elbows, Ys, etc.), sometimes known as hardware, required to run pipes and their branches in various directions to their destinations.
Manual valves are also considered fittings.
Why is the term pressure drop used when describing the effect of equipment on a system?
To drive fluid through a piece of equipment there must be a force at the inlet greater than the force at the outlet. These forces are converted to pressure, which is more convenient in a fluid
system. The difference (or drop) in pressure between the inlet and outlet is proportional to the overall force pushing the liquid forwards. If we convert pressure drop to head then we obtain the
pressure drop value in terms of head (i.e. fluid column height) or pressure head.
How can the same pump satisfy different flow requirements of a system?
If a pump is sized for a greater flow and head than are required under present conditions, then a manual valve at the outlet of the pump can be used to throttle the flow down to the present requirements. The flow can then be increased at a future date by simply opening the valve. This, however, wastes energy, and a variable speed drive should be considered.
Is the head at the suction side of a pump equal to the N.P.S.H. available?
No, the N.P.S.H. available is the head in absolute fluid column height minus the vapor pressure (in terms of fluid column height) of the fluid.
Is the head at the discharge side of the pump equal to the Total Head?
No, the Total Head is the difference in head between the discharge and the suction.
What is the difference between the N.P.S.H. available and the N.P.S.H. required?
The N.P.S.H. available can be calculated for a specific situation and depends on the barometric pressure, the friction loss between the system inlet and the pump suction flange, and other factors. The N.P.S.H. required is given by the pump manufacturer and depends on the head, flow and type of pump. The N.P.S.H. available must always be greater than the N.P.S.H. required for the pump to operate properly.
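As a sketch, the comparison described above can be written out as follows; the component formula is standard, but the margin and all numbers are invented for illustration.

```python
# N.P.S.H. available from its usual components (all in metres of fluid,
# absolute): atmospheric head + static suction head - suction friction
# head - vapor pressure head. Values below are illustrative only.

def npsh_available(h_atm, h_static, h_friction, h_vapor):
    return h_atm + h_static - h_friction - h_vapor

def pump_ok(npsha, npshr, margin=0.5):
    """True when NPSHa exceeds the manufacturer's NPSHr by a safety margin."""
    return npsha >= npshr + margin

npsha = npsh_available(h_atm=10.3, h_static=2.0, h_friction=1.2, h_vapor=0.24)
print(npsha, pump_ok(npsha, npshr=4.0))
```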
How is the pressure head at any location in a piping system determined and why bother?
First, calculate the Total Head of the system. Then, using a control volume, set one limit at the point where the pressure head is required and the other at the inlet or outlet of the system. Apply
an energy balance and convert all energy terms to head. The resulting equation gives the pressure head at the point required.
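A minimal sketch of that energy balance, assuming an open suction tank at gauge pressure zero and taking all quantities in metres of fluid; the argument names are mine, not from the text.

```python
# Solve Bernoulli's equation with a pump term for the pressure head at a
# point:  z_in + H_pump = z_pt + p/(rho*g) + v^2/(2g) + h_f
G = 9.81  # m/s^2

def pressure_head_at(z_inlet, z_point, v_point, h_pump, h_friction):
    """Pressure head p/(rho*g), in metres of fluid, at the chosen point."""
    return (z_inlet - z_point) + h_pump - h_friction - v_point ** 2 / (2 * G)

print(pressure_head_at(z_inlet=0.0, z_point=5.0, v_point=2.0,
                       h_pump=30.0, h_friction=3.0))
```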
What is the purpose of a variable speed drive?
All systems require a means of flow control. The plant's output requirements may change causing flow demand to vary and therefore the various systems throughout the process must be able to modify
their output flow rate. To achieve this, pumps are sized for the maximum anticipated flow rate. The most frequent means of reducing the output flow rate is to have a line which re-circulates flow
back to the suction tank. Another method is to have a valve in the discharge line which reduces the output flow rate when throttled. Either method works well, but there is a penalty to be paid in
consumption of extra power for running a system, which is oversized for the normal demand flow rate. A solution to this power waste is to use an electronic variable speed drive. For a new
installation this alternative should be considered. This provides the same flow control as a valved system without energy waste.
What is Total Head?
Total Head is the difference between the head at the discharge vs. the head at the inlet of the pump. Total Head is a measure of a pump's ability to push fluid through a system. This parameter (with the flow) is a more useful term than the pump discharge head since it is independent of a specific system. Also, Total Head, just as any head at any location in the system, is independent of the fluid density.
What is Friction Head?
Fluid layers move at different speeds depending on their position with respect to the pipe axis. The velocity is zero at the pipe wall and maximum at the pipe center. This difference in velocity
between fluid layers is a source of friction. Another source of friction is the interaction between the fluid layers close to the pipe wall and the pipe roughness or the small peaks and valleys on
the wall (for turbulent flow only). The sum of these two sources of friction is the total friction due to fluid movement. Friction head is the energy loss due to fluid movement and depends on the flow rate, pipe diameter and viscosity. Tables of values for friction head are available in many references. The Colebrook and Darcy equations provide a method of calculating friction head for Newtonian fluids. Another component of friction head is the pressure drop due to fittings. Many references supply the data for determining the friction loss due to fittings; the 2K method is one such approach.
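The Colebrook and Darcy calculation mentioned above can be sketched as follows. The Colebrook equation is implicit in f, so it is solved here by simple fixed-point iteration; the pipe dimensions in the example are illustrative.

```python
import math

def colebrook_f(reynolds, rel_roughness, iters=50):
    """Darcy friction factor from the Colebrook equation (turbulent flow):
    1/sqrt(f) = -2 log10( (eps/D)/3.7 + 2.51/(Re*sqrt(f)) ),
    solved by fixed-point iteration from an initial guess."""
    f = 0.02
    for _ in range(iters):
        f = (-2.0 * math.log10(rel_roughness / 3.7
                               + 2.51 / (reynolds * math.sqrt(f)))) ** -2
    return f

def darcy_head_loss(f, length, diameter, velocity, g=9.81):
    """Friction head loss (Darcy formula), in metres of fluid:
    h_f = f * (L/D) * v^2 / (2g)."""
    return f * (length / diameter) * velocity ** 2 / (2 * g)

f = colebrook_f(reynolds=1e5, rel_roughness=0.001)
print(f, darcy_head_loss(f, length=100.0, diameter=0.1, velocity=2.0))
```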
What is Velocity Head?
Velocity head is the kinetic energy of the fluid particles. Velocity head difference is the difference in kinetic energy between the inlet and outlet of the system.
What is Static Head or Total Static Head?
The static head or total static head is the potential energy of the system. It is the difference between the elevation of the outlet vs. the inlet point of the system.
What is N.P.S.H.?
The Net Positive Suction Head (N.P.S.H.) is the head at the suction flange of the pump less the vapor pressure (converted to fluid column height) of the fluid. The N.P.S.H. is always positive since it is expressed in terms of absolute fluid column height. The term "Net" refers to the actual head at the pump suction flange and not the static head. The N.P.S.H. is independent of the fluid density, as are all head terms.
What information is required to determine the Total Head of a pump?
1. Flow rate through the pump and everywhere throughout the system.
2. Physical parameters of the system: length and size of pipe, no. of fittings and type, elevation of inlet and outlet.
3. Equipment in the system: control valves, filters.
4. Fluid properties: temperature, viscosity and specific gravity.
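With those four pieces of information in hand, the Total Head calculation combines into a single expression. This is only a sketch of the bookkeeping; all numeric values below are invented for illustration.

```python
# Total Head = total static head + friction head + equipment head losses
#            + velocity head difference between outlet and inlet.
G = 9.81  # m/s^2

def total_head(h_static, h_friction, h_equipment, v_out, v_in):
    """All head terms in metres of fluid; velocities in m/s."""
    dv_head = (v_out ** 2 - v_in ** 2) / (2 * G)
    return h_static + h_friction + h_equipment + dv_head

print(total_head(h_static=15.0, h_friction=4.0, h_equipment=2.5,
                 v_out=3.0, v_in=0.0))
```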
What information do I need to order a pump?
Total Head, flow and fluid properties (i.e. temperature, pH, composition).
What is the best way to start a pump?
Start the pump with a closed discharge valve.
What does "centrifugal" refer to in centrifugal pump?
A centrifugal pump consists of an impeller rotating within a fixed casing or volute. Because the impeller blades are curved, the fluid is pushed in a tangential and radial direction. A force, which
acts in a radial direction, is known as a centrifugal force. This force is the same one that keeps water inside a bucket, which is rotating at the end of a string.
What is the Best Efficiency Point (B.E.P.)?
The B.E.P. (best efficiency point) is the point of highest efficiency of the pump. All points to the right or left of the B.E.P. have a lower efficiency. The impeller is subject to non-symmetrical forces when operating to the right or left of the B.E.P. These forces manifest themselves as vibration depending on the speed and construction of the pump. The most stable operating area is near or at the B.E.P.
What is the best way to measure Head and Flow?
Head: Total Head can be measured by installing pressure gauges at the outlet and inlet of the pump. The pump inlet pressure measurement can be eliminated if we can be sure what the pressure head is
at that point. For example, if the pump suction is large and short and the inlet shut off valve is fully open and is the type of design that offers little restriction, then we can assume that the
pressure head at the inlet of the pump is equal to the static head.
Flow:
1. If there is a flow transmitter in the line, then the problem is solved.
2. If you can measure the geometry of the discharge tank and you can get an operator to allow the tank to fill during a certain period of time, you will be able to calculate the flow. This is
probably the best method.
3. I have tried ultrasonic devices, which provide a non-invasive method of measuring flow. They do require particles in the fluid; I am told that air bubbles are sufficient. In any case, I have tried them and found them to be highly unreliable.
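Method 2 above reduces to simple geometry: for a vertical cylindrical tank, the average flow is the tank cross-section times the level rise, divided by the elapsed time. The tank dimensions in the example are made up.

```python
import math

def flow_from_tank_fill(diameter_m, level_rise_m, elapsed_s):
    """Average volumetric flow (m^3/s) inferred from a timed level rise
    in a vertical cylindrical tank of known diameter."""
    area = math.pi * diameter_m ** 2 / 4.0
    return area * level_rise_m / elapsed_s

# A 2 m diameter tank whose level rises 0.5 m in 60 s:
print(flow_from_tank_fill(2.0, 0.5, 60.0))  # about 0.026 m^3/s
```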
What is barometric pressure and why should I care?
Barometric pressure is the air pressure in absolute terms in the local environment. The air pressure is highest at sea level and gradually diminishes with elevation. Barometric pressure is often
expressed in psia (pound per square inch absolute) or feet of water absolute. The barometric pressure at sea level is 14.7 psia or 34 feet of water absolute. Barometric pressure is used to calculate
the N.P.S.H. available, which is required to determine if the pump will operate properly as designed.
What is my elevation above sea level and why should I care?
Your elevation above sea level varies with your location. Your local airport can give you their elevation and barometric pressure. The relationship between elevation and barometric pressure is well
documented and available in many reference books as charts or tables. You can find your local elevation on a topographic map and determine the barometric pressure at your location. For example, the
air pressure at sea level is 14.7 psia, at 10,000 feet it is 10.2 psia, and at 35,000 feet (the cruising altitude of most passenger jets) 3.5 psia. The local barometric pressure is required to
calculate the N.P.S.H. available at the pump suction.
Ever see a movie where people and things are sucked out of an airplane after the bad guy shoots a hole through a window. Well at a 35,000 feet altitude, an object located over a 12" diameter hole
(approximate size of a window) will be subject to a force of 1270 pounds, frightening isn’t it?
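The window figure can be checked with a one-line force balance: force equals the pressure difference across the hole times its area. The calculation below uses sea-level cabin pressure (14.7 psia) against 3.5 psia outside, which reproduces the quoted number; a real cabin is pressurized somewhat below sea level.

```python
import math

def force_on_hole(p_inside_psia, p_outside_psia, diameter_in):
    """Net force (pounds-force) on an object covering a circular hole."""
    area_in2 = math.pi * diameter_in ** 2 / 4.0
    return (p_inside_psia - p_outside_psia) * area_in2

print(force_on_hole(14.7, 3.5, 12.0))  # about 1270 pounds
```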
What is the effect of pipe roughness on friction?
The Colebrook equation gives the value of the friction parameter f with respect to the Reynolds number and the pipe roughness. When the Reynolds number is small, below 2,000 (the laminar flow region), pipe roughness has no effect at all. When the Reynolds number is between 4,000 and 50,000, that is, low velocity and/or high viscosity, the influence of pipe roughness is as important as the effect of velocity. When the Reynolds number is large, above 50,000, that is, high velocity and/or low viscosity, the friction is entirely dependent on pipe roughness.
What is the effect of pipe fittings on the total pipe friction loss?
Any fitting inserted into a pipe run has an effect since it either obstructs the flow or re-directs it or both. Most common fittings have been studied and their effect quantified, the results are
available in many reference books.
How can the Total Head of a system that has more than one outlet be determined, and what is the effect compared to a system with one outlet?
One fluid path from the inlet to a selected outlet is used for the calculation of Total Head. This path is assumed to require the highest Total Head; if there is any doubt about the head required for the other path, the calculation is done on the other path and a comparison is made. Also, the velocity head difference at the input to the two separate branches needs to be added to the Total Head. This, however, is normally a small and negligible term.
How do you calculate pressure drop due to fluid friction?
The Colebrook equation is the most widely accepted formula for calculating the pressure or head drop due to friction in pipes for Newtonian fluids. It relates the friction factor to the Reynolds
number and the pipe roughness. The friction factor is then used in the Darcy formula to calculate the head drop. For non-Newtonian fluids, which are mostly slurries of one kind or another, the process is
much more complicated and many factors must be taken into account: particle size and distribution, settling velocity of the particles in the mixture, viscosity variation of the
mixture, solids transportation mode, etc.
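The Darcy step mentioned above is explicit once f is known: h_f = f (L/D) v^2 / 2g. A sketch with illustrative numbers (the pipe length, diameter and velocity below are made-up values):

```python
def darcy_head_loss(f, length_m, diameter_m, velocity_m_s, g=9.81):
    """Head loss (m of fluid) from the Darcy-Weisbach formula."""
    return f * (length_m / diameter_m) * velocity_m_s ** 2 / (2 * g)

# 100 m of 0.1 m pipe at 2 m/s with f = 0.02
print(darcy_head_loss(0.02, 100.0, 0.1, 2.0))  # about 4.08 m
```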
What is negative pressure?
Pressure is said to be negative when it is less than the local barometric or atmospheric pressure.
What is relative and absolute pressure?
A pressure measurement that is absolute is not related to any other. The atmospheric pressure at sea level is 14.7 psia (pounds per square inch absolute), that is, 14.7 psi above absolute zero.
Relative pressure is always related to the local atmospheric pressure. For example, 10 psig (pounds per square inch gauge) is 10 psi above the local atmospheric pressure. Most pressure measurements
are taken in psig, which is relative to the local pressure. Pressure measurements do not normally have to be corrected for altitude, since all the measurements you might take on a system are relative
to the same atmospheric pressure; the effect of elevation is therefore not a factor. An important exception is a pressure measurement taken at the pump suction to determine the N.P.S.H.
available: this measurement is converted to absolute pressure, which should be corrected for altitude.
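The psig-to-psia conversion depends on the local barometric pressure, which falls with elevation. A sketch using the standard-atmosphere barometric formula (an assumption; actual weather deviates from it). Note it reproduces the 10,000 ft figure quoted earlier to within rounding:

```python
def atmospheric_psia(elevation_m):
    """Standard-atmosphere barometric pressure at a given elevation."""
    return 14.7 * (1.0 - 2.25577e-5 * elevation_m) ** 5.25588

def psig_to_psia(gauge_psi, elevation_m=0.0):
    """Absolute pressure from a gauge reading, corrected for altitude."""
    return gauge_psi + atmospheric_psia(elevation_m)

print(atmospheric_psia(3048))   # 10,000 ft: about 10.1 psia
print(psig_to_psia(10.0, 0.0))  # 10 psig at sea level: 24.7 psia
```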
What is a control volume and how is it used?
A control volume is a theoretical boundary which helps delimit the extent of a system, particularly all its inputs and outputs. The principles of conservation of mass and energy can then be applied
within this region.
What is an energy balance?
Because of the principle of conservation of energy, any energy gain or loss in a system must be accounted for. Therefore, making an energy balance is the process of identifying all the sources of
energy gain or loss and adding them up. The result must be equal to zero.
What is the system equation and how is it developed?
The system equation has on the left hand side the Total Head (difference between the pump discharge head and suction head), and on the right hand side, all the terms which impede fluid flow such as:
friction, velocity, elevation difference, etc. An energy balance is used to derive the system equation.
Does a fluid system with no pump have a Total Head?
No, Total Head is a term that is used only for a pump.
What other devices can create pressure in such a way as to move fluid through a system?
An eductor can raise the pressure of a fluid by using another fluid at a higher pressure.
What happens if the damaged pump's performance curve has all points at a lower head than the good pump's performance curve?
The best that the damaged pump can do is to produce the head corresponding to its shut-off head
ΔH[C] (point 2) at zero flow. Since the head produced by the good pump is higher, there will be flow through the damaged pump in the reverse direction. The flow, however, will be impeded since the
pump can still produce some head. The system behaves as a branch system: the branch flow sees a head drop which is the sum of the shut-off head of the damaged pump, plus any friction loss, plus the
static head of the suction tank on the inlet of the damaged pump.
What is laminar and turbulent flow?
Laminar flow is a very well-behaved flow, usually occurring at low speeds for most fluids. In the laminar flow regime it is possible to determine theoretically the speed of any particle between the
center of a pipe and the wall. Most fluids have to be carried at a much higher velocity, which puts them in the turbulent flow regime. In turbulent flow, the fluid particles move in many directions;
each particle interacts with its neighbors in an unpredictable fashion, creating much higher internal friction than is present in the laminar flow situation. If you put dye in a laminar flow system,
you will observe nice long streams of dye undisturbed by the surrounding liquid. The same dye inserted in a turbulent flow will immediately be dispersed throughout the liquid.
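The regime can be estimated from the Reynolds number, Re = vD/ν. A sketch using the 2,000 and 4,000 thresholds cited earlier (the water viscosity below is an assumed typical value):

```python
def reynolds(velocity_m_s, diameter_m, kinematic_viscosity_m2_s):
    """Reynolds number for pipe flow."""
    return velocity_m_s * diameter_m / kinematic_viscosity_m2_s

def flow_regime(re):
    if re < 2000:
        return "laminar"
    if re < 4000:
        return "transitional"
    return "turbulent"

# Water (nu ~ 1e-6 m^2/s) at 2 m/s in a 0.1 m pipe
re = reynolds(2.0, 0.1, 1e-6)
print(flow_regime(re))  # turbulent
```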
Adjustable Speed Drives
A device which is used to provide continuous range process speed control.
An ASD may be referred to by a variety of names, such as variable speed drive, adjustable frequency drive, or variable frequency inverter.
An ASD is capable of adjusting both the speed and the torque of a constant-speed electric motor.
Classifications of Drives
Electric Drives:
Variable frequency/Voltage AC motor controllers for squirrel cage motors
DC Motor controllers for DC Motors
Eddy current clutches for AC Motors
Cycloconverters (less efficient)
Mechanical Drives:
Adjustable belts and pulleys gears
Throttling valves
Fan dampers
Magnetic clutches
Hydraulic Drives:
Hydraulic clutches
Fluid couplings
Variable Frequency Drives
A variable frequency drive controls the speed of an AC motor by varying the frequency supplied to the motor.
A variable frequency drive has two stages of power conversion: rectification (AC to DC) and inversion (DC back to AC at a variable frequency).
Types of Inverters
Variable Voltage Inverter or Voltage Source Inverter (VSI)
Current Source Inverter (CSI)
Pulse Width Modulated (PWM) Inverter
Comparison of Adjustable Speed Drives
│ │ Variable Voltage Inverter │ Current Source Inverter │ Pulse width Modulated Inverter │
│ Motor Compatibility │ • Squirrel cage or Synchronous │ • Squirrel cage or Synchronous │ • Squirrel cage or Synchronous │
│ │ • Can handle motors smaller than inverter rating │ • Can handle motors smaller than inverter rating │ • Can handle motors smaller than inverter rating │
│ Typical power range (HP) │ 1 - 1000 │ 50 - 5000 │ 5 - 5000 │
│ Speed Reduction │ 10 : 1 │ 10 : 1 │ 30 : 1 │
│ Efficiency Range │ 88 - 93 % │ 88 - 93 % │ 85 - 95 % │
│ Multiple Motor │ Yes │ No │ Yes │
│ capability │ │ │ │
│ Soft Starting │ Yes │ Yes │ Yes │
│ Power factor to Motor │ Better than CSI. Drops with speed │ Drops with speed │ Near unity │
│ Advantages │ • High output frequencies │ • Short circuit and over load protection │ • Excellent power factor │
│ │ • Can be retrofitted to existing fixed speed │ • Soft start │ • Can be retrofitted to existing fixed speed │
│ │ motor │ │ motor │
│ │ • Soft start │ │ • Soft start │
│ Disadvantages │ • Harmonics increase losses │ • Harmonics increase losses │ • Motor is subject to voltage stresses │
│ │ • Lower HP ranges typically │ • Difficult to retrofit │ • Complex logic circuits │
│ │ │ • Only single motor control │ • High initial cost │
│ Applications: General │ • General purpose low-medium HP (< 500 HP) │ • General purpose when regenerative braking wanted │ • Best reliability AC type at added cost │
│ │ │ (hoists) │ • Suitable for most applications │
│ Applications :Specific │ • Conveyors │ • Pumps │ • Slow speed ranges │
│ │ • Machine tools │ • Fans │ • Conveyors │
│ │ • Pumps │ • Compressors │ • Pumps │
│ │ • Fans │ • Blowers │ • Fans │
│ │ │ │ • Packaging equipment │
Advantages of AC Variable drives
Continuous speed range : 0 to full speed
Improved process control
Improved efficiency and potential energy savings
Soft starting/regenerative braking
Wider speed, torque and power ranges
Short response time
Equipment life improvement
Multiple motor capability
Easy to retrofit
Safe operation in hazardous environments
Reduction in noise and vibration level
Operation above full load speeds
How to select an Adjustable Speed Drive
Determine the need for speed or process flow control
Describe the range of speed control
Estimate the process duty cycle
Gather equipment performance data
Calculate constant and Adjustable speed power requirements
Calculate Energy consumption
Select drive type and features, Estimate costs
Calculate simple Payback
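The last two selection steps can be sketched as a calculation. All of the prices and energy figures below are hypothetical placeholders, not values from this text:

```python
def annual_energy_kwh(avg_kw, hours_per_year):
    """Energy consumed per year at a given average load."""
    return avg_kw * hours_per_year

def simple_payback_years(drive_cost, kw_saved, hours_per_year, tariff_per_kwh):
    """Simple payback = installed cost / annual energy cost savings."""
    annual_saving = annual_energy_kwh(kw_saved, hours_per_year) * tariff_per_kwh
    return drive_cost / annual_saving

# Hypothetical: $12,000 drive saving 9 kW over 6,000 h/yr at $0.10/kWh
print(simple_payback_years(12000, 9.0, 6000, 0.10))  # about 2.2 years
```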
Advantages and Disadvantages of VSDs
Improved equipment (pump/Fan) life
Increased Motor life
Increased life of couplings, gear etc.
Reduced noise and vibration level.
Reduced maintenance
Disadvantages:
Reliability problems
System harmonics and associated problems
Case study 1:
Existing conditions:
Pump HP : 75
Readings with Throttle control :
│ Flow LPM │ 12000 │ 11000 │ 10000 │ 9000 │
│ System Head (m) │ 23.5 │ 21.4 │ 19.34 │ 17.93 │
│ Pump head (m) │ 23.5 │ 25 │ 26.5 │ 27.5 │
│ Pump efficiency │ 86 │ 85 │ 83 │ 79.5 │
│ Pump input KW │ 53.58 │ 52.86 │ 52.16 │ 50.87 │
│ Motor load % │ 97.41 │ 96.11 │ 94.85 │ 92.49 │
│ Motor Efficiency % │ 90 │ 89.9 │ 89.9 │ 89.6 │
│ Motor input KW │ 59.33 │ 58.8 │ 58.02 │ 56.77 │
│ Starter Efficiency │ 99.8 │ 99.8 │ 99.8 │ 99.8 │
│ Input KW │ 59.65 │ 58.92 │ 58.14 │ 56.88 │
Readings with Variable Speed Drive :
│ Flow LPM │ 12000 │ 11000 │ 10000 │ 9000 │
│ System Head (m) │ 23.5 │ 21.4 │ 19.34 │ 17.93 │
│ Pump head (m) │ 23.5 │ 25 │ 26.5 │ 27.5 │
│ Pump efficiency │ 86 │ 86 │ 85.5 │ 85 │
│ Pump input KW │ 53.58 │ 44.725 │ 36.96 │ 31.02 │
│ Motor RPM │ 1450 │ 1535 │ 1280 │ 1210 │
│ Motor load % │ 97.4 │ 81.32 │ 67.2 │ 56.4 │
│ Motor Efficiency % │ 93.7 │ 94 │ 93.7 │ 93.6 │
│ Motor input KW │ 57.18 │ 47.58 │ 39.45 │ 33.14 │
│ Controller Efficiency │ 97 │ 96 │ 95 │ 94 │
│ Input KW │ 58.95 │ 49.56 │ 41.52 │ 35.25 │
│ Saving KW │ 0.70 │ 9.36 │ 16.62 │ 21.63 │
│ % Saving │ 1.12 │ 15.89 │ 28.56 │ 38.03 │
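The "Saving KW" and "% Saving" rows can be reproduced directly from the two "Input KW" rows above; the savings grow at reduced flow broadly because pump shaft power scales roughly with the cube of speed (the affinity laws). A sketch using the table's own numbers, which match the tabulated percentages to within rounding:

```python
# Input KW rows from the throttle-control and VSD tables above,
# at flows of 12000, 11000, 10000 and 9000 LPM
throttle_kw = [59.65, 58.92, 58.14, 56.88]
vsd_kw = [58.95, 49.56, 41.52, 35.25]

for t, v in zip(throttle_kw, vsd_kw):
    saving = t - v
    print(f"saving {saving:.2f} kW  ({100 * saving / t:.2f} %)")
```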
Energy Conservation Techniques in Pumps
The following are different ways to conserve energy in a pumping system:
When actual operating conditions differ widely from design conditions (head or flow varying by more than 25 to 30%), replacement by an appropriately sized pump should be considered.
Replacement with High Efficiency Pumps.
Operating multiple pumps in either series or parallel as per requirement.
Reduction in number of pumps (when System Pressure requirement, Head and Flow requirement is less).
By improving the piping design to reduce Frictional Head Loss
By reducing number of bends and valves in the piping system.
By avoiding throttling process to reduce the flow requirement.
By Trimming or replacing the Impellers when capacity requirement is low.
By using Variable Speed Drives
By using Energy Efficient Motors
Preventive Maintenance Checks for Centrifugal Pumps and Drivers
The following are various preventive maintenance checks for centrifugal pumps, to be carried out daily, monthly, half-yearly and yearly:
Daily :
Check pump for noisy bearings & Cavitation noise
Check bearing oil for water, discoloration & contamination
Feel all bearings for temperature
Inspect bearings and oil rings through filling ports. Wipe bearing covers clean.
Check for oil leaks at gaskets, plugs & fittings
Determine if mechanical seal condition is normal.
Check for any water cooling for effective operation. Hand set temperature differential across coolers, jackets & exchangers. Disassemble & clean out as required.
Check for operation of heat tracing
Determine if steam leakage at packing & valves is normal
Check for leaks at pressure casing & gaskets. Determine if steam traps are operating properly - no continuous blow and no water in casing or drain lines.
Add oil if required
Clean oiler bulbs & level windows as required.
Ascertain that oil level is correct distance from shaft centerline. Adjust oiler as required.
Clean out debris from bearing brackets. Drain hole must be open.
Determine if hydraulic governor is working
Check for proper oil level & leaks at hydraulic governor. Check for oil leaks at lines, fittings & power piston.
Replace guards (repair if required)
Determine if pump unit required general cleaning by others.
Half Yearly :
Machines not running - standby service: overfill bearing housing to bottom of shaft & rotate several turns by hand to coat shaft & bearings with oil. Drain back down to reestablish proper level.
Clean & oil governor linkage & valve stems.
Thoroughly inspect disc coupling for signs of water & cracks in laminations. Tighten bolts.
Inspect trip valve & throttle valve stems & linkages for wear.
Change oil in hydraulic governors.
The following links have been reviewed and approved as the best engineering information available on the internet, so please enjoy them, and if you have a better engineering link, email
it to SiteManager@edasolutions.com .
Company Profile
EDA, Incorporated provides quality-engineering services on time, on schedule and within budget. EDA, Inc. is able to do this by performing the work correctly the first time. We accept the most
challenging problems and look forward to working with the client as a team member. EDA believes that the client should be an active participant in the work process to ensure that the product is
commensurate with client expectations and is delivered within schedule and budget constraints.
EDA, Inc. belongs to the American Society of Mechanical Engineers (ASME), the National Society of Professional Engineers (NSPE), the Instrument Society of America (ISA), and the American Nuclear
Society (ANS).
For more information on EDA, Incorporated services, please contact Client Service Manager at:
Client Service Manager
EDA, Inc.
6397 True Lane
Springfield, VA 22150
or email the Client Service Manager at SiteManager@edasolutions.com .

Source: http://edasolutions.com/old/Groups/Tech/GenPumpTerminology.htm (retrieved 2014-04-18)
e at Simon's Rock
Computer Science
Introduction to Robotics
Computer Science 240 Bergman
3 credits
This course gives an introduction to the background and theory of robotics, as well as to the practical electronic, mechanical, and programming aspects of building and controlling robots. Topics
include sensors, feedback, control, and mechanical construction. For ease of prototyping we use an off-the-shelf robot controller, the Handy Board, an 8-bit microprocessor that can run Interactive C,
and the LEGO Technic system. Along with a variety of sensors, these materials will allow the class to work through a series of projects that introduce robotics. In a broader sense, this course
serves as an introduction to solving engineering problems. Prerequisite: Permission of the instructor. No previous programming or robotics experience is required.
This course is generally offered once every two years. Last taught S12.
Computer Science I
Computer Science 242 Shields
3 credits
This course provides an introduction to fundamental concepts of computer science, both as a prelude to further study in the discipline and to serve broader educational goals. Focus will be on
principles of object-oriented programming and design, including the study of basic data types and control structures, objects and classes, and polymorphism and recursion. The course will use the Java
language. No prerequisites.
This course is generally offered three times every two years.
Algorithms and Data Structures
Computer Science 243 Shields
3 credits
This is the second course in the ACM computer science curriculum and lays the foundation for further work in the discipline. Topics covered include algorithmic analysis; asymptotic notation; central
data structures such as lists, stacks, queues, hash tables, trees, sets, and graphs; and an introduction to complexity theory. It is not a language course and is intended for students who already
have competence in a high level language such as C++ or Java. Prerequisite: Computer Science 242 or permission of the instructor.
This course is generally offered once every year and a half.
Computer Networking
Computer Science 244 Staff
3 credits
This is a course on computer networking covering the Internet protocol stack, implementation technologies, and management and security issues. Topics will include service paradigms and switching
alternatives; application layer protocols such as HTML, SMTP, and DNS; transport layer protocols like TCP and UDP, network layer (IP) and routing, data link protocols such as Ethernet, ATM, and Token
Ring; and physical media. We will also look at issues of network management and security, as well as new technologies involving multimedia and wireless networks. Prerequisite: Computer Science 242 or
permission of the instructor.
Last taught F09.
Computer Organization
Computer Science 250 Shields
3 credits
This course introduces the low-level organization and structure of computer systems, including boolean logic and digital circuits, forms of numeric representation and computer arithmetic, instruction
sets and assembly language programming, basic CPU design, and more advanced architecture topics such as pipelining and memory management. Prerequisite: Computer Science 242 or permission of the
instructor.
This course is generally offered once every year and a half. Last taught F10.
Discrete Mathematics
Computer Science 252 Shields
3 credits
The mathematical foundations of computer science, including propositional and predicate logic; sets, algorithm growth and asymptotic analysis; mathematical induction and recursion; permutations and
combinations; discrete probability; solving recurrences; order relations; graphs; trees; and models of computation. Prerequisite: Mathematics 210.
This course is offered when there is sufficient student interest. Last taught S12.
Scientific Computing
Computer Science 260 Kramer
3 credits
The course covers computer algorithms commonly used in the physical and biological sciences: Minimizing a function, special functions, Fast Fourier Transforms, numerical solution to differential
equations, etc. The end of the semester is devoted to an independent project, with a topic chosen by the student and subject to approval of the instructor. In recent years these projects have ranged
from bioinfomatics to quantum mechanics. Requirements: The student should have a laptop with compiler installed (one may be available as a loan from ITS, though the student is responsible for this
arrangement). The student should already be fluent in a programming language (a prior programming course is not required). The student should be taking or have completed vector calculus (Mathematics
This course is generally offered as a tutorial.
Artificial Intelligence
Computer Science 264 Shields
3 credits
An examination of selected areas and issues in the study of artificial intelligence, including search algorithms and heuristics, game-playing, models of deductive and probabilistic inference,
knowledge representation, machine learning, neural networks, pattern recognition, robotics topics, and social and philosophical implications. Prerequisite: Computer Science 243 or permission of the
instructor.
This course is offered when there is sufficient student interest. Last taught F07.
Programming Languages
Computer Science 312 Shields
4 credits
An examination of the design and implementation of modern programming languages, covering such paradigms as imperative languages, object-oriented languages, functional languages, and logic-oriented
languages. Topics will include syntax, semantics, pragmatics, grammars, parse trees, types, bindings, scope, parameter passing, and control structures. Prerequisite: Computer Science 243.
This course is generally offered once every two years. Last taught F11.
Operating Systems
Computer Science 316 Shields
4 credits
This course is an introduction to the principles of centralized and distributed operating systems. It examines the management of memory, processes, devices, and file systems. Topics covered include
scheduling algorithms, communications, synchronization and deadlock, and distributed operating systems. Prerequisite: Computer Science 250.
This course is offered when there is sufficient student interest. Last taught S11.
Theory of Computation
Computer Science 320 Shields
4 credits
The study of models of computation and their associated formal languages and grammars. Topics will include finite automata, pushdown automata, Turing machines, regular and context-free languages, the
Chomsky hierarchy, the Church-Turing thesis, and some major limitation results on computability and complexity. Prerequisite: Computer Science 243.
This course is generally offered once every two years. Last taught S11.

Source: https://www.simons-rock.edu/academics/divisions/division-of-science-mathematics-computing/computer-science/ (retrieved 2014-04-19)
Pseudo limits, biadjoints, and pseudo algebras: categorical foundations of conformal field theory
Results 1 - 10 of 16
- J. Homotopy Relat. Struct
"... Abstract. As an example of the categorical apparatus of pseudo algebras over 2-theories, we show that pseudo algebras over the 2-theory of categories can be viewed as pseudo double categories
with folding or as appropriate 2-functors into bicategories. Foldings are equivalent to connection pairs, an ..."
Cited by 19 (2 self)
Abstract. As an example of the categorical apparatus of pseudo algebras over 2-theories, we show that pseudo algebras over the 2-theory of categories can be viewed as pseudo double categories with
folding or as appropriate 2-functors into bicategories. Foldings are equivalent to connection pairs, and also to thin structures if the vertical and horizontal morphisms coincide. In a sense, the
squares of a double category with folding are determined in a functorial way by the 2-cells of the horizontal 2-category. As a special case, strict 2-algebras with one object and everything
invertible are crossed modules under a group.
"... Abstract. The microcosm principle, advocated by Baez and Dolan and formalized for Lawvere theories lately by three of the authors, has been applied to coalgebras in order to describe
compositional behavior systematically. Here we further illustrate the usefulness of the approach by extending it to a ..."
Cited by 6 (3 self)
Abstract. The microcosm principle, advocated by Baez and Dolan and formalized for Lawvere theories lately by three of the authors, has been applied to coalgebras in order to describe compositional
behavior systematically. Here we further illustrate the usefulness of the approach by extending it to a many-sorted setting. Then we can show that the coalgebraic component calculi of Barbosa are
examples, with compositionality of behavior following from microcosm structure. The algebraic structure on these coalgebraic components corresponds to variants of Hughes’ notion of arrow, introduced
to organize computations in functional programming. 1
- Appl. Categ. Structures
"... Abstract. In this note, we introduce a class of algebras that are in some sense related to conformal algebras. This class (called TC-algebras) includes Weyl algebras and some of their
(associative and Lie) subalgebras. By a conformal algebra we generally mean what is known as H-pseudo-algebra over t ..."
Cited by 3 (2 self)
Abstract. In this note, we introduce a class of algebras that are in some sense related to conformal algebras. This class (called TC-algebras) includes Weyl algebras and some of their (associative
and Lie) subalgebras. By a conformal algebra we generally mean what is known as H-pseudo-algebra over the polynomial Hopf algebra H = k[T1,..., Tn]. Some recent results in structure theory of
conformal algebras are applied to get a description of TC-algebras. 1.
"... Abstract. We define a general concept of pseudo algebras over theories and 2-theories. A more restrictive such notion was introduced in [5], but as noticed by M. Gould, did not capture the
desired examples. The approach taken in this paper corrects the mistake by introducing a more general concept, ..."
Cited by 3 (1 self)
Abstract. We define a general concept of pseudo algebras over theories and 2-theories. A more restrictive such notion was introduced in [5], but as noticed by M. Gould, did not capture the desired
examples. The approach taken in this paper corrects the mistake by introducing a more general concept, allowing more flexibility in selecting coherence diagrams for pseudo algebras. 1.
, 2008
"... Abstract. In this paper we obtain several model structures on DblCat, the category of small double categories. Our model structures have three sources. We first transfer across a
categorificationnerve adjunction. Secondly, we view double categories as internal categories in Cat and take as our weak ..."
Cited by 2 (2 self)
Abstract. In this paper we obtain several model structures on DblCat, the category of small double categories. Our model structures have three sources. We first transfer across a
categorificationnerve adjunction. Secondly, we view double categories as internal categories in Cat and take as our weak equivalences various internal equivalences defined via Grothendieck
topologies. Thirdly, DblCat inherits a model structure as a category of algebras over a 2-monad. Some of these model structures coincide and the different points of view give us further results about
cofibrant replacements and cofibrant objects. As part of this program we give explicit descriptions and discuss properties of free double categories, quotient double categories, colimits of double
categories, and several nerves
"... It has long been known that every weak monoidal category A is equivalent via monoidal functors and monoidal natural transformations to a strict monoidal category st(A). We generalise the
definition of weak monoidal category to give a definition of weak P-category for any strongly regular (operadic) ..."
Cited by 2 (0 self)
It has long been known that every weak monoidal category A is equivalent via monoidal functors and monoidal natural transformations to a strict monoidal category st(A). We generalise the definition
of weak monoidal category to give a definition of weak P-category for any strongly regular (operadic) theory P, and show that every weak P-category is equivalent via P-functors and P-transformations
to a strict P-category. This strictification functor is then shown to have an interesting universal property. 1
- ENTCS, TO APPEAR
"... ..."
"... As an example of the categorical apparatus of pseudo algebras over 2-theories, we show that pseudo algebras over the 2-theory of categories can be viewed as pseudo double categories with folding
or as appropriate 2-functors into bicategories. Foldings are equivalent to connection pairs, and also to ..."
As an example of the categorical apparatus of pseudo algebras over 2-theories, we show that pseudo algebras over the 2-theory of categories can be viewed as pseudo double categories with folding or
as appropriate 2-functors into bicategories. Foldings are equivalent to connection pairs, and also to thin structures if the vertical and horizontal morphisms coincide. In a sense, the squares of a
double category with folding are determined in a functorial way by the 2-cells of the horizontal 2-category. As a special case, strict 2-algebras with one object and everything invertible are crossed
modules under a group. 1.
, 2007
"... www.elsevier.com/locate/aim ..."
, 2006
"... the cobordism and commutative monoid with cancellation ..."

Source: http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.239.4802 (retrieved 2014-04-17)
Free Modules and Functional Linear Functionals
Mon 11 Jul 2011
Today I hope to start a new series of posts exploring constructive abstract algebra in Haskell.
In particular, I want to talk about a novel encoding of linear functionals, polynomials and linear maps in Haskell, but first we're going to have to build up some common terminology.
Having obtained the blessing of Wolfgang Jeltsch, I replaced the algebra package on hackage with something... bigger, although still very much a work in progress.
(Infinite) Modules over Semirings
Recall that a vector space V over a field F is given by an additive Abelian group on V, and a scalar multiplication operator
(.*) :: F -> V -> V subject to distributivity laws
s .* (u + v) = s .* u + s .* v
(s + t) .* v = s .* v + t .* v
and associativity laws
(s * t) .* v = s .* (t .* v)
and respect of the unit of the field.
1 .* v = v
Since multiplication on a field is commutative, we can also add
(*.) :: V -> F -> V
v *. f = f .* v
with analogous rules.
But when F is only a Ring, we call the analogous structure a module, and in a ring, we can't rely on the commutativity of multiplication, so we may have to deal left-modules and right-modules, where
only one of those products is available.
We can weaken the structure still further. If we lose the negation in our Ring and go to a Rig (often called a Semiring), our module is now an additive monoid.
If we get rid of the additive and multiplicative unit on our Rig we get down to what some authors call a Ringoid, but which we'll call a Semiring here, because it makes the connection between
semiring and semigroup clearer, and the -oid suffix is dangerously overloaded due to category theory.
First we'll define additive semigroups, because I'm going to need both additive and multiplicative monoids over the same types, and Data.Monoid has simultaneously too much and too little structure.
-- (a + b) + c = a + (b + c)
class Additive m where
(+) :: m -> m -> m
replicate1p :: Whole n => n -> m -> m -- (ignore this for now)
-- ...
their Abelian cousins
-- a + b = b + a
class Additive m => Abelian m
and Multiplicative semigroups
-- (a * b) * c = a * (b * c)
class Multiplicative m where
(*) :: m -> m -> m
pow1p :: Whole n => m -> n -> m
-- ...
Then we can define semirings:
-- a*(b + c) = a*b + a*c
-- (a + b)*c = a*c + b*c
class (Additive m, Abelian m, Multiplicative m) => Semiring m
With that we can define modules over a semiring:
-- r .* (x + y) = r .* x + r .* y
-- (r + s) .* x = r .* x + s .* x
-- (r * s) .* x = r .* (s .* x)
class (Semiring r, Additive m) => LeftModule r m where
(.*) :: r -> m -> m
and analogously:
class (Semiring r, Additive m) => RightModule r m where
(*.) :: m -> r -> m
For instance every additive semigroup forms a semiring module over the positive natural numbers (1,2..) using replicate1p.
If we know that our addition forms a monoid, then we can form a module over the naturals as well
-- | zero + a = a = a + zero
class (LeftModule Natural m,
RightModule Natural m
) => AdditiveMonoid m where
zero :: m
replicate :: Whole n => n -> m -> m
and if our addition forms a group, then we can form a module over the integers
-- | a + negate a = zero = negate a + a
class (LeftModule Integer m
, RightModule Integer m
) => AdditiveGroup m where
negate :: m -> m
times :: Integral n => n -> m -> m
-- ...
Free Modules over Semirings
A free module on a set E, is a module where the basis vectors are elements of E. Basically it is |E| copies of some (semi)ring.
In Haskell we can represent the free module of a ring directly by defining the action of the (semi)group pointwise.
instance Additive m => Additive (e -> m) where
f + g = \x -> f x + g x
instance Abelian m => Abelian (e -> m)
instance AdditiveMonoid m => AdditiveMonoid (e -> m) where
zero = const zero
instance AdditiveGroup m => AdditiveGroup (e -> m) where
f - g = \x -> f x - g x
We could define the following
instance Semiring r => LeftModule r (e -> r) where
r .* f = \x -> r * f x
but then we'd have trouble dealing with the Natural and Integer constraints above, so instead we lift modules
instance LeftModule r m => LeftModule r (e -> m) where
(.*) m f e = m .* f e
instance RightModule r m => RightModule r (e -> m) where
(*.) f m e = f e *. m
We could go one step further and define multiplication pointwise, but while the direct product of |e| copies of a ring _does_ define a ring, and this ring is the one provided by Conal Elliott's
vector-space package, it isn't the most general ring we could construct. But we'll need to take a detour first.
Linear Functionals
A linear functional f on a module M is a linear function from M to its scalars R.
That is to say that, f : M -> R such that
f (a .* x + y) = a * f x + f y
Consequently linear functionals also form a module over R. We call this module the dual module M*.
Dan Piponi has blogged about these dual vectors (or covectors) in the context of trace diagrams.
If we limit our discussion to free modules, then M = E -> R, so a linear functional on M looks like (E -> R) -> R
subject to additional linearity constraints on the result arrow.
The main thing we're not allowed to do in our function is apply our function from E -> R to two different E's and then multiply the results together. Our pointwise definitions above satisfy those
linearity constraints, but for example:
bad f = f 0 * f 0
does not.
We could capture this invariant in the type by saying that instead we want
newtype LinearM r e =
LinearM {
runLinearM :: forall m. LeftModule r m => (e -> m) -> m
}
but we'd have to make a new such type every time we subclassed Semiring. I'll leave further exploration of this more exotic type to another time. (Using some technically illegal module instances we can
recover more structure than you'd expect.)
Now we can package up the type of covectors/linear functionals:
infixr 0 $*
newtype Linear r a = Linear { ($*) :: (a -> r) -> r }
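Concretely, a covector consumes a vector-as-function and returns a scalar. In this standalone sketch the two-element basis Axis and the covector dot34 are invented for illustration:

```haskell
infixr 0 $*
newtype Linear r a = Linear { ($*) :: (a -> r) -> r }

-- a two-element basis
data Axis = X | Y

-- the linear functional "take the dot product with (3,4)"
dot34 :: Linear Integer Axis
dot34 = Linear $ \k -> 3 * k X + 4 * k Y

-- the vector (1,2), represented as a function from the basis to the scalars
v :: Axis -> Integer
v X = 1
v Y = 2
```

Applying it, dot34 $* v evaluates to 3*1 + 4*2 = 11.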
The sufficiently observant may have already noticed that this type is the same as the Cont monad (subject to the linearity restriction on the result arrow).
In fact the Functor, Monad, Applicative instances for Cont all carry over, and preserve linearity.
(We lose callCC, but that is at least partially due to the fact that callCC has a less than ideal type signature.)
In addition we get instances for Alternative and MonadPlus, by exploiting the knowledge that r is ring-like:
instance AdditiveMonoid r => Alternative (Linear r) where
Linear f <|> Linear g = Linear (f + g)
empty = Linear zero
Note that the (+) and zero there are the ones defined on functions from our earlier free module construction!
Linear Maps
Since Linear r is a monad, Kleisli (Linear r) forms an Arrow:
b -> ((a -> r) ~> r)
where the ~> denotes the arrow that is constrained to be linear.
If we swap the order of the arguments so that
(a -> r) ~> (b -> r)
this arrow has a very nice meaning! (See Numeric.Map.Linear)
infixr 0 $#
newtype Map r b a = Map { ($#) :: (a -> r) -> (b -> r) }
Map r b a represents the type of linear maps from a -> b. Unfortunately due to contravariance the arguments wind up in the "wrong" order.
instance Category (Map r) where
Map f . Map g = Map (g . f)
id = Map id
So we can see that a linear map from a module A with basis a to a vector space with basis b effectively consists of |b| linear functionals on A.
Map r b a provides a lot of structure. It is a valid instance of an insanely large number of classes.
Vectors and Covectors
In physics, we sometimes call linear functionals covectors or covariant vectors, and if we're feeling particularly loquacious, we'll refer to vectors as contravariant vectors.
This has to do with the fact that when you change basis, you map the change over covariant vectors covariantly, and map the change over vectors contravariantly. (This distinction is
beautifully captured by Einstein's summation notation.)
We also have a notion of covariance and contravariance in computer science!
Functions vary covariantly in their result, and contravariantly in their argument. E -> R is contravariant in E. But we chose this representation for our free modules, so the vectors in our free vector
space (or module) are contravariant in E.
class Contravariant f where
contramap :: (a -> b) -> f b -> f a
-- | Dual function arrows.
newtype Op a b = Op { getOp :: b -> a }
instance Contravariant (Op a) where
contramap f g = Op (getOp g . f)
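With the corrected contramap direction, a runnable sketch (lenOp and onInts are invented examples) shows how precomposition pulls an Op back along a function:

```haskell
class Contravariant f where
  contramap :: (a -> b) -> f b -> f a

-- | Dual function arrows.
newtype Op a b = Op { getOp :: b -> a }

instance Contravariant (Op a) where
  contramap f g = Op (getOp g . f)

-- a functional on strings ...
lenOp :: Op Int String
lenOp = Op length

-- ... pulled back along show to a functional on integers
onInts :: Op Int Integer
onInts = contramap show lenOp
```

getOp onInts 1234 computes length (show 1234), i.e. 4.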
On the other hand, (E -> R) ~> R varies covariantly with change of E, as witnessed by the fact that it is a Functor.
instance Functor (Linear r) where
fmap f m = Linear $ \k -> m $* k . f
We have lots of classes for manipulating covariant structures, and most of them apply to both (Linear r) and (Map r b).
Other Representations and Design Trade-offs
One common representation of vectors in a free vector space is as some kind of normalized list of scalars and basis vectors. In particular, David Amos's wonderful HaskellForMaths uses
newtype Vect r a = Vect { runVect :: [(r,a)] }
for free vector spaces, only considering them up to linearity, paying for normalization as it goes.
Given the insight above we can see that Vect isn't a representation of vectors in the free vector space, but instead represents the covectors of that space, quite simply because Vect r a varies
covariantly with change of basis!
Now the price of using the Monad on Vect r is that the monad denormalizes the representation. In particular, you can have multiple copies of the same basis vector, so any function that uses Vect r a
has to merge them together.
On the other hand with the directly encoded linear functionals we've described here, we've placed no obligations on the consumer of a linear functional. They can feed the directly encoded linear
functional any vector they want!
In fact, it'll even be quite a bit more efficient to compute. To see this, just consider:
instance MultiplicativeMonoid r => Monad (Vect r) where
return a = Vect [(1,a)]
Vect as >>= f = Vect
[ (p*q, b) | (p,a) <- as, (q,b) <- runVect (f a) ]
Every >>= must pay for multiplication. Every return will multiply the element by one. On the other hand, the price of return and bind in Linear r is function application.
instance Monad (Linear r) where
return a = Linear $ \k -> k a
m >>= f = Linear $ \k -> m $* \a -> f a $* k
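To see bind performing linear extension without any normalization, here is a standalone sketch; ret and bind are the monad operations written as plain functions (to skip the Functor/Applicative boilerplate), and the vector v and the map shift are invented:

```haskell
infixr 0 $*
newtype Linear r a = Linear { ($*) :: (a -> r) -> r }

ret :: a -> Linear r a
ret a = Linear $ \k -> k a

bind :: Linear r a -> (a -> Linear r b) -> Linear r b
bind m f = Linear $ \k -> m $* \a -> f a $* k

-- the vector 2*e1 + 3*e2 over an Int-indexed basis
v :: Linear Integer Int
v = Linear $ \k -> 2 * k 1 + 3 * k 2

-- a linear map sending each basis vector e_i to e_i + e_(i+1)
shift :: Int -> Linear Integer Int
shift i = Linear $ \k -> k i + k (i + 1)
```

Evaluating bind v shift against the covector fromIntegral gives 2*(1+2) + 3*(2+3) = 21, with only function applications along the way.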
A Digression on Free Linear Functionals
To wax categorical for a moment, we can construct a forgetful functor U : Vect_F -> Set that takes a vector space over F to just its set of covectors.
F E = (E -> F, F, \f g x -> f x + g x, \r f x -> r * f x)
using the pointwise constructions we built earlier.
Then in a classical setting, you can show that F is left adjoint to U.
In particular the witnesses of this adjunction provide the linear map from (E -> F) to V and the function E -> (V ~> F) giving a linear functional on V for each element of E.
In a classical setting you can go a lot farther, and show that all vector spaces (but not all modules) are free.
But in a constructive setting, such as Haskell, we need a fair bit to go back and forth; in particular we wind up needing E to be finitely enumerable to go one way, and for it to have decidable equality
to go in the other. The latter is fairly easy to see, because even going from E -> (E -> F) requires that we can define and partially apply something like Kronecker's delta:
delta :: (Rig r, Eq e) => e -> e -> r
delta i j | i == j = one
| otherwise = zero
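As a runnable sketch (with Num's 0 and 1 standing in for the Rig methods), partially applying delta turns a basis element into its one-hot characteristic function:

```haskell
-- Kronecker delta over any basis type with decidable equality
delta :: (Num r, Eq e) => e -> e -> r
delta i j
  | i == j    = 1
  | otherwise = 0

-- partially applied: the basis element 2 embedded as a function Int -> Integer
e2 :: Int -> Integer
e2 = delta 2
```

Sampling it, map e2 [1,2,3] produces [0,1,0].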
The Price of Power
The price we pay is that, given a Rig, we can go from Vect r a to Linear r a, but going back requires a to be finitely enumerable (or for our functional to satisfy other exotic side-conditions).
vectMap :: Rig r => Vect r a -> Linear r a
vectMap (Vect as) = Linear $ \k -> sum [ r * k a | (r, a) <- as ]
You can still probe Linear r a for individual coefficients, or pass it a vector for polynomial evaluation very easily, but for instance determining a degree of a polynomial efficiently requires
attaching more structure to your semiring, because the only value you can get out of Linear r a is an r.
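A self-contained version of that conversion (Num in place of Rig; the example vector w is invented):

```haskell
newtype Vect r a = Vect { runVect :: [(r, a)] }

infixr 0 $*
newtype Linear r a = Linear { ($*) :: (a -> r) -> r }

-- fold a normalized list of (coefficient, basis) pairs into a functional
vectLinear :: Num r => Vect r a -> Linear r a
vectLinear (Vect as) = Linear $ \k -> sum [ r * k a | (r, a) <- as ]

-- 2*e0 + 3*e1
w :: Vect Integer Int
w = Vect [(2, 0), (3, 1)]
```

Probing it with the covector \i -> fromIntegral i + 1 yields 2*1 + 3*2 = 8.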
Optimizing Linear Functionals
In both the Vect r and Linear r cases, excessive use of (>>=) without somehow normalizing or tabulating your data will cause a lot of repeated work.
This is perhaps easiest to see from the fact that Vect r never used the addition of r, so it distributed everything into a kind of disjunctive normal form. Linear r does the same thing.
If you look at the Kleisli arrows of Vect r or Linear r as linear mappings, then you can see that Kleisli composition is going to explode the number of terms.
So how can we collapse back down?
In the Kleisli (Vect r) case we usually build up a map as we walk through the list, then spit the list back out in order, having added up like terms.
In the Map r case, we can do better. My representable-tries package provides a readily instantiable HasTrie class, and the method:
memo :: HasTrie a => (a -> r) -> a -> r
which is responsible for providing a memoized version of the function from a -> r in a purely functional way. This is obviously a linear map!
memoMap :: HasTrie a => Map r a a
memoMap = Map memo
We can also flip memo around and memoize linear functionals.
memoLinear :: HasTrie a => a -> Linear r a
memoLinear = Linear . flip memo
Next time, (co)associative (co)algebras and the myriad means of multiplying (co)vectors!
9 Responses to “Free Modules and Functional Linear Functionals”
1. Mathnerd314 Says:
July 11th, 2011 at 9:48 pm
Your Multiplicative class seems to ignore the existence of noncommutative (semi)rings; surely this important area of mathematics shouldn’t be left out?
2. Edward Kmett Says:
July 11th, 2011 at 10:19 pm
Without commutative addition, you don’t get many normalization opportunities. So, in the interest of drawing the line somewhere I left that off. ;) I also avoided talking about left-seminearrings
and right-seminearrings. =) I do enjoy playing with the individual grains of sand in my sandbox though.
3. Paul Keir Says:
July 12th, 2011 at 3:07 am
You say “If we get rid of the additive and multiplicative unit on our Rig we get down to what some authors call a Ringoid, but which we’ll call a Semiring”, but earlier you “…go to a Rig (often
called a Semiring)”. Are both of them Semirings?
4. Edward Kmett Says:
July 12th, 2011 at 3:46 am
This is what I get for trying to nod to the fact that different authors use the same words for different things.
The vocabulary I’m working with here is that a semiring is a pair of semigroups with a pair of distributive laws and that a rig adds a 0 and 1 to a semiring.
Some authors call what I am calling a rig above a semiring, which is somewhat annoying because the semi- just means ‘not-quite’ in this setting, and has no connection to the relationship between
group and semigroup. Moreover it leaves the question of what to call the pair-of-semigroups construction that I’m calling semiring above. The name those authors often use is “ringoid”, but that
conflicts with the much more useful use of ringoid for referring to Ab-enriched categories, which is likely to wind up in my semigroupoid package.
If a group is a groupoid with one object then a ring is a ringoid with one object.
So in the vocabulary I’m using here, we get:
Semiring <= Rig <= Ring
which parallels
Semigroup <= Monoid <= Group
by adding unit(s) and additive inverses respectively.
I’ll see if I can clear up the verbiage to make it clearer.
5. beroal Says:
July 12th, 2011 at 5:48 am
I suppose that “replicate1p” and “pow1p” both define repetition. IMHO you can name them with “+” and “*”, adding some symbol to denote repetition, maybe “…”? (And with the right order of
arguments in “pow1p”. :) )
6. beroal Says:
July 12th, 2011 at 6:40 am
If we limit our discussion to free modules, then M = E -> R
And every element of M should be 0 on all but finite set of elements of E. This is the definition of free modules.
7. beroal Says:
July 12th, 2011 at 6:49 am
We could capture this invariant in the type by saying that instead we want
Can you please explain what “invariant” you are talking about?
8. Mathnerd314 Says:
July 12th, 2011 at 8:57 am
Oops, sorry. I meant nonassociative rings. But I guess you do have to draw the line somewhere.
9. Edward Kmett Says:
July 12th, 2011 at 9:21 am
@beroal: I didn’t make them symbolic because of the problem of what to name the three different operators for ((N+1)*), (N*) and (Z*),
whereas the precedent of log1p at least gives some intuition to the name replicate1p, and the action of replicate on lists is the repetition of a monoidal value. I had, however, originally started
using (#*) and (*#) for times and may revert. They are mostly provided as reasonable default implementations for the natural number multiplication and integer multiplication requirements placed
on you by the module superclasses.
As for the free module, technically I’m permitting the infinite free module. Haskell provides me a largely coinductive universe, I’d be remiss in not taking advantage of it. I’ll throw an
(infinite) in there. ;) But unlike the classical case I can’t prove the duality in most cases to begin with and I’m not able to enumerate the set of all e in E such that E -> R is non-zero except
when the set E is finitely enumerable itself, because I can’t intensionally inspect the function I’m given. Effectively the constraint moves from requiring the function to have a finite number of
non-zeros to requiring that any linear functional can only inspect a finite number of vectors, which being constructive is all it can do anyways.
The invariant in question is linearity. One (potentially conservative) way to enforce that would be to never apply our continuation vector twice and take the product of the results (or add it to
some value in the ring), but instead only multiply it by values we have lying around in the ring or add and subtract them. If you think about it, quantifying over the choice of a module then
prevents violations of linearity. \k -> r .* f k is well typed, we can plumb in zeroes, etc, but \k -> f k * f k requires our module be strengthened to a semiring, and the unmentioned but also
non-linear \k -> f k + r also would fail to type check, since f k :: m, and under quantification over the module all we’re permitted to do is use the (.*) and the (+) from our module. The reason
this is a problem is if you want to use the zero from a semigroup-with-zero module, you’d need a new type.
There is also a performance impact from the fact that we effectively use the LeftModule dictionary as an interpreter, and most damningly it costs us the ability to use representable-tries for
cheap memoization.
Graphs with Holes - Concept
Rational functions have points where they are undefined, which introduces us to graphing holes in the function. Graphing holes involves being able to find these points. A rational function is a
quotient of two functions, and if the denominator of this quotient has zeros, the rational function is undefined at those points. Graphing holes means showing which input values make the denominator
function zero.
We're talking about rational functions and I have one here that usually when we're analyzing the graph of a rational function we'd like to have the numerator and denominator factored, so let me do
this just to figure out what's going on with this rational function.
I would factor the numerator starting with an x in each of these factors, and I want to look for, let's say, factors of 48 that are going to give me 14, so 6 and 8, and if I use -6 and -8 I'll get
this exactly: -6 times -8 is 48 and -6x-8x is -14x. In the denominator I can just pull a -2 out, and that gives me -6+x, that is, x-6. So this is interesting: I've got the same factor in the numerator
and denominator, and I want to know, is it okay just to cancel? Right, is this just the same as negative one half times x-8? So we write an equals sign with a question mark there.
Let's analyze the behavior of this function and the function before the canceling, just to see what the differences and similarities are. I'll evaluate some points; let's try 2.
When I plug in 2 I get 2-8, -6; 2-6, -4; -2; and then x-6 is again -4. The x-6 terms cancel and I get 3. If I plug 2 into this guy, 2-8 is -6, times negative one half, also 3.
What about 4? I plug in 4 and I get 4-8, -4; 4-6, -2; -2; and again the x-6 is -2. These cancel and I get 2. If I plug 4 in over here, 4-8 is -4, times negative one half, also 2.
If I plug in 6, this is going to be undefined. If I plug 6 in here, I get 6-8, -2, times negative one half: 1. This is the key difference between the two functions, because it seems like they have the
same values everywhere else, but at x=6 this one's undefined and that one is not. So the way to make these two equal to one another is to say negative one half times the quantity x minus 8, for x not
equal to 6.
Saying this, giving this domain restriction, means that this function will now have the same domain as this function, and now they're equal; now you can write an equal sign here. The way to interpret
this function is that its graph is going to be a line (right, you know that the graph of this is going to be a line) but there'll be a hole at x=6. That's what happens here, and that's going to be
really important when graphing rational functions that have these common factors between the numerator and denominator.
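The example worked through above can be summarized symbolically:

```latex
f(x) = \frac{x^2 - 14x + 48}{12 - 2x}
     = \frac{(x-6)(x-8)}{-2(x-6)}
     = -\tfrac{1}{2}(x - 8), \qquad x \neq 6
```

So the graph is the line y = -(x-8)/2 with a hole at the point (6, 1), consistent with the values f(2) = 3 and f(4) = 2 computed above.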
COLIN: Planning with Continuous Numeric Effects
Click here to download the sources of the JAIR 2012 Release of COLIN.
This page contains the new benchmarks introduced in the paper. Some are modified versions of standard IPC benchmarks, whilst others are new encodings of other problems.
Rovers Continuous
Linear Continuous Numeric Effects and Computed Durational Inequalities
Evaluation Domain and Problem Files
This is a continuous version of the well-known Rovers domain that was introduced in IPC3, adding several features:
• The navigate action is modified so that instead of decreasing the energy of the rover by 8 units at the end of a navigate action (fixed duration 5) the energy of the rover is decreased
continuously throughout the duration of the action at a rate of 8/5 units per-second.
• The recharge action is modified such that its duration constraint states that the action cannot exceed (80 - current charge) / recharge rate; that is, assuming the rover has the capacity to hold
80 units of charge, the refuel action must not occur for a length of time above that needed to fully recharge the rover. Further the increase in the rover's charge at the end of the action is
dependent on the duration of the recharge action (and the recharge rate). The effect of this is that the planner can choose for the rover to charge as little or as much as required.
• An additional action journey-recharge is added, this allows the rover to recharge whilst navigating: as real rovers do on the surface of Mars. The energy of the rover increases continuously over
time at a specified recharge rate.
• The rover is required to film all of its journeys as it traverses the surface: this requires the use of a camera which drains energy at a rate of 0.5 units per-second.
The problems in this archive, and used in our experiments, are the standard IPC3 rovers time collection but the domain is modified to have continuous effects. Alternatively these single rover
problems can be used: they are generated with the same parameters to the IPC 3 generator as the standard benchmarks except that only one rover is allowed.
Satellite Cooled
Linear Continuous Numeric Effects and Durational Inequalities
Evaluation domain and problem files
This is an extension of the Time variant of the Satellite domain from IPC 2002. Our continuous domain variant results from three main changes to the domain model:
• Power consumption is represented by a numeric fluent rather than a proposition allowing for parallel usage of power, and consumption of different amounts of power for different activities
• Instruments can be operated in one of two modes: cooled, or uncooled. In cooled mode, active sensor cooling is used to reduce sensor noise, enabling images to be taken in less time. This cooling,
however, requires additional energy.
• A sunrise action models the charging of the satellite via its solar panels increasing the power availability over time, activities can only take place when sufficient power is available.
The problem files are slightly modified versions of the IPC competition problems, updated to include the necessary information about power requirements.
Linear Continuous Numeric Effects and Computed Durational Inequalities
Evaluation Domain and Problem Files (including problem generator)
The AUV domain is concerned with the operations of co-operating Autonomous Underwater Vehicles (AUVs). AUVs move between waypoints underwater and perform science gathering operations: water sampling
and photography. The latter is the most interesting as it requires coordination between multiple AUVs: one AUV must shine a torch to illuminate the object, whilst the other photographs it. As in the
Satellite and Rovers domains, the AUVs have finite battery power and the power usage by actions is continuous throughout their execution. Another interesting use of continuous effects in this domain
is the modelling of drift (how far the AUV has moved off course due to current): this updates continuously but can be reset to zero through the application of a localise action.
Airplane Landing
Linear Continuous Numeric Effects, Durational Inequalities, Temporal Conditional Effects and Optimisation
Evaluation Domain and Problem Files (Edinburgh Airport)
This is the airplane landing problem first described by Kim Larssen at ICAPS 2005. The planner must schedule the landing of several aeroplanes at an airport. Each plane has a target landing time, but
can land early or late; the penalty for doing so being defined by a continuous linear function. The late penalty function can be different to the early penalty function reflecting the fact that the
cost of the two possibilities is different, and further can be different for each plane. The domain uses temporal conditional effects to control the penalty application: conditioning on the value
of a variable at the end of the action and updating the cost at the end of an action by some function of the action's duration.
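Such a piecewise-linear penalty can be sketched directly; the rates and target below are invented, not taken from any of the problem files:

```haskell
-- cost of landing at time t, with separate linear rates for early and late
penalty :: Double -> Double -> Double -> Double -> Double
penalty earlyRate lateRate target t
  | t < target = earlyRate * (target - t)
  | otherwise  = lateRate  * (t - target)
```

With target 100, early rate 2 and late rate 5, landing 10 seconds early costs penalty 2 5 100 90 = 20, the same as being only 4 seconds late: penalty 2 5 100 104 = 20.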
Note that the domain formulation enclosed uses ADL, with a conditional effect that the penalty incurred is governed by one function if the landing was early, or another if the landing is late.
However, as the special ?duration variable can only appear in the (:duration ...) section of an action definition (according to the PDDL semantics), as a workaround the condition is written in terms
of a variable fake-duration, which after parsing can be internally converted into ?duration. The result is that because of a slight diversion from PDDL semantics the plan validator will not report
accurate plan costs for this domain.
Airport Fuel Loss
Linear Continuous Numeric Effects, Durational Inequalities, Optimisation
Evaluation Domain and Problem Files
Larger Domain and Problem Files (not used in the paper)
The Airport domain was introduced in IPC 2004 by Jörg Hoffmann and Sebastian Trüg. It is concerned with coordinating airport ground traffic: moving planes from gates to runways, and eventually to
take-off, whilst respecting physical separation safety constraints. We add to this domain the metric to minimise the total amount of fuel burnt between the engines of each aircraft starting up and
when it eventually takes off. To model this we make use of envelope actions that must start before the engines of a plane can start-up and cannot finish until the plane has started the take-off
action. This action increases the variable fuel-loss linearly over time according to the number of engines the plane has. We use the standard problem sets from the competition, adding any changes
needed to support the modifications made.
Durational Inequalities and Optimisation
Evaluation Domain and Problem Files (Delivery Window Metric)
Evaluation Domain and Problem Files (Heat Loss Metric)
The café domain, first described by Keith Halsey, models the preparation (cooking) and delivery of orders to a customer. The goals are specified as items of food or drink that the customers have
ordered. Two variants of the café domain were used in our experiments. In the first the objective is to minimise the sum of the time taken to deliver all ordered items (delivery window). The second
minimises heat loss that has occurred, that is the sum of the time between the completion of the preparation of orders and the time they arrive at the tables.
Other Domains - Linear Generator Domain
Linear Continuous Numeric Effects
PDDL Domain Description
Example Problem File
This is a linear version of the generator problem first described by Howey and Long in 2003. A generator must run for 100 seconds. The generator is powered by a fuel tank, which is initially filled to
its 90-unit capacity, and consumes fuel at a rate of 1 unit per second. The fuel tank can be refilled, the action to do this is named refuel and increases the fuel level in the tank at a rate of 2
units per second for a duration of 10 seconds. This problem is interesting because the time at which the refuel action must be applied within the generate action is critical: the fuel level in the
tank must not exceed the capacity of the tank so the planner must realise that the refuel action must be delayed at least ten seconds after the start of the generate action, but must also occur at
least 10 seconds before the end of the generate action to prevent the fuel level falling below zero.
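The timing argument can be checked with a small closed-form simulation (a sketch for intuition only, not part of COLIN):

```haskell
-- fuel level at time t when the 10-second, 2-units-per-second refuel starts
-- at time rs; the generator drains 1 unit per second from an initial 90
fuelAt :: Double -> Double -> Double
fuelAt rs t = 90 - t + 2 * overlap
  where overlap = max 0 (min t (rs + 10) - rs)

-- a refuel start time is feasible if fuel stays within [0, 90] throughout
feasible :: Double -> Bool
feasible rs = all ok [0, 1 .. 100]
  where ok t = let f = fuelAt rs t in f >= 0 && f <= 90
```

Here feasible 10 holds, while feasible 5 overflows the 90-unit tank and feasible 95 lets the fuel run out before the 100 seconds are up, matching the window described above.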
"COLIN: Planning with Continuous Linear Numeric Change." A. J. Coles, A. I. Coles, M. Fox and D. Long. Journal of Artificial Intelligence Research. vol. 44. February 2012. pp. 1--96. Download From
"Temporal Planning in Domains with Linear Processes." A. J. Coles, A. I. Coles, M. Fox, and D. Long. Proceedings of IJCAI 2009. July 2009. Download PDF (BibTeX)
Patent application title: SECURELY PROVIDING SECRET DATA FROM A SENDER TO A RECEIVER
The invention provides a system and a method for securely providing a secret data from a sender to one or more receivers. The receiver uses a sequence of functions originating from a hierarchy of
functions to migrate the secret data from an input transform space to an output transform space using a mathematical transformation under control of one or more seeds. The seeds are provided to the
receiver by the sender. The sender conditionally allows the receiver to obtain the secret data by controlling the seeds.
A system for securely providing a secret data from a sender to one or more receivers, wherein the receiver comprises a first memory configured for storing a sequence of functions originating from a
hierarchy of functions,wherein each function is configured to migrate the secret data from an input transform space to an output transform space using a mathematical transformation under control of a
seed,wherein the sender is configured to provide the seed to the receiver, andwherein the receiver is configured to migrate the secret data from the input transform space to a final output transform
space using the sequence of functions under control of the seed.
The system according to claim 1, wherein each function in the sequence of functions is controlled by a unique seed and wherein the sender is configured to provide each unique seed to the receiver.
The system according to claim 2, wherein the sequence of functions is unique to the receiver.
The system according to claim 1, wherein the receiver comprises a second memory configured for storing a personalized seed and wherein the receiver is configured to obtain the secret data by
migrating the secret data from the final output transform space to a clear text transform space under control of the personalized seed.
The system according to claim 1, wherein each function is protected by code obfuscation.
The system according to claim 1, wherein the sender is configured to transmit a new function to the receiver and wherein the receiver is configured to replace in the memory one or more of the
functions in the sequence of functions with the new function.
A sender for securely providing a secret data to one or more receivers and for use in a system according to claim 1,wherein the sender is configured to define a hierarchy of functions,wherein each
function is configured to migrate the secret data from an input transform space to an output transform space using a mathematical transformation under control of a seed, andwherein the sender is
configured to provide the seed to the receiver.
A receiver for securely receiving a secret data from a sender and for use in a system according to claim 1, comprising a first memory configured for storing a sequences of functions originating from
a hierarchy of functions,wherein each function is configured to migrate the secret data from an input transform space to an output transform space using a mathematical transformation under control of
a seed,wherein the receiver is configured to receive one or more seeds from the sender, andwherein the receiver is configured to migrate the secret data from the input transform space to a final
output transform space using the sequence of functions under control of the seeds.
A method for securely providing a secret data from a sender to one or more receivers, the receiver comprising a first memory configured for storing a sequence of functions originating from a
hierarchy of functions, wherein each function is configured to migrate the secret data from an input transform space to an output transform space using a mathematical transformation under control of
a seed, the method comprising:providing one or more seeds from the sender to the receiver, andmigrating in the receiver the secret data from the input transform space to a final output transform
space using the sequence of functions under control of the seeds.
The method according to claim 9, wherein each function in the sequence of functions is controlled by a unique seed and wherein the method comprises providing each unique seed from the sender to the
The method according to claim 10, wherein the sequence of functions is unique to the receiver.
The method according to claim 9, further comprising reading a personalized seed from a second memory in the receiver and obtaining in the receiver the secret data by migrating the secret data from
the final output transform space to a clear text transform space under control of the personalized seed.
The method according to claim 9, wherein each function is protected by code obfuscation.
The method according to claim 9, further comprising transmitting a new function from the sender to the receiver and replacing in the memory of the receiver one or more of the functions in the
sequence of functions with the new function.
A method in a sender for securely providing a secret data from the sender to one or more receivers, comprising: defining a hierarchy of functions, wherein each function is configured to migrate the secret data from an input transform space to an output transform space using a mathematical transformation under control of a seed, and providing one or more seeds to the receivers.
A method in a receiver for securely receiving a secret data from a sender, the receiver comprising a first memory configured for storing a sequence of functions originating from a hierarchy of functions, wherein each function is configured to migrate the secret data from an input transform space to an output transform space using a mathematical transformation under control of a seed, the method comprising: receiving one or more seeds from the sender, and migrating the secret data from the input transform space to a final output transform space using the sequence of functions under control of the seeds.
CLAIM OF PRIORITY [0001]
The present patent application claims priority under 35 U.S.C. 119 to European Patent Application (EPO) No. 09154129.2 filed Mar. 2, 2009, and to European Patent Application (EPO) No. 10154150.6
filed Feb. 19, 2010, the entire contents of which are incorporated herein by reference.
FIELD OF THE INVENTION [0002]
The present invention relates to a system for securely providing a secret data from a sender to one or more receivers, a sender for securely providing a secret data to one or more receivers, a
receiver for securely receiving a secret data from a sender, a method for securely providing a secret data from a sender to one or more receivers, a method in a sender for securely providing a secret
data from the sender to one or more receivers and a method in a receiver for securely receiving a secret data from a sender.
BACKGROUND [0003]
Various encryption techniques are known for protected provisioning of data from a sender to a receiver, wherein the data is encrypted in the sender using an encryption key, the encrypted data is
transmitted to the receiver and the encrypted data is decrypted in the receiver using a decryption key. The decryption key can be provided from the sender to the receiver as well, in which case the
decryption key is secret data that needs to be securely provided. If the sender is in control of which receiver is able to obtain the secret data then the secret data is conditionally provided.
E.g. in a conditional access system for pay-tv, premium content is typically scrambled in a head-end system using a control word (CW) as encryption key. The scrambled content is broadcast to
conditional access receivers. To allow a receiver to descramble the scrambled content, a smartcard is to be inserted into the receiver. Through the receiver the smartcard receives from the head-end
system an encrypted entitlement management message (EMM) comprising a chipset session key (CSSK) encrypted under a key CSUK of the receiver. Through the receiver the smartcard further receives from
the head-end system an entitlement control message (ECM) comprising the CW encrypted under the CSSK. Typically the CW has a shorter life time than the CSSK. Therefore the CSSK can be used to decrypt
multiple CWs received in multiple ECMs over time. Using the decrypted CSSK the smartcard decrypts the CW, which can subsequently be used by the receiver to descramble the scrambled content. It is
known that additional key layers may be used for decrypting the CW.
Manufacturing costs increase as the receiver is made more secure, because attackers develop new techniques over time to violate computing environments, and more sophisticated countermeasures need to
be incorporated.
Especially in the pay-tv field, smartcards have been the platform of choice for providing a trusted environment to the receivers. However, though secure, smartcards are expensive both in terms of
logistics--as they need to be distributed and tracked--and in terms of component costs. Moreover, as for any other hardware solution, it is difficult and costly to revoke and swap smartcards once
deployed in case some flaw has been discovered. That implies that design and development of smartcard applications need to be very careful, and testing very thorough. Moreover, a smartcard does not
provide sufficient CPU power to carry out bulk decryption of broadcast content. Therefore the role of the smartcard is mostly limited to relaying the obtained CW to more powerful hardware such as a
descrambler in the receiver, either dedicated or general purpose. Such receiver--in turn--disadvantageously has to ensure a minimum degree of confidentiality when communicating to the smartcard,
which entails some unique secret such as a key shared between the smartcard and the receiver.
There is a need for an improved solution for securely and conditionally providing secret data from a sender to a receiver.
SUMMARY OF THE INVENTION [0008]
It is an object of the invention to provide an improved method for securely providing secret data, such as e.g. a control word or a decryption key, from a sender to a receiver.
According to an aspect of the invention a system is proposed for securely providing a secret data from a sender to one or more receivers. The receiver comprises a first memory configured for storing
a sequence of functions originating from a hierarchy of functions. Each function is configured to migrate the secret data from an input transform space to an output transform space using a
mathematical transformation under control of a seed. The sender is configured to provide the seed to the receiver. The receiver is configured to migrate the secret data from the input transform space
to a final output transform space using the sequence of functions under control of the seed.
According to an aspect of the invention a method is proposed for securely providing a secret data from a sender to one or more receivers. The receiver comprises a first memory configured for storing
a sequence of functions originating from a hierarchy of functions, wherein each function is configured to migrate the secret data from an input transform space to an output transform space using a
mathematical transformation under control of a seed. The method comprises the step of providing one or more seeds from the sender to the receiver. The method further comprises the step of migrating
in the receiver the secret data from the input transform space to a final output transform space using the sequence of functions under control of the seeds.
According to an aspect of the invention a sender is proposed for securely providing a secret data to one or more receivers. The sender is for use in a system having one or more of the features as
defined above. The sender is configured to define a hierarchy of functions. Each function is configured to migrate the secret data from an input transform space to an output transform space using a
mathematical transformation under control of a seed. The sender is configured to provide the seed to the receiver.
According to an aspect of the invention a method in a sender is proposed for securely providing a secret data from the sender to one or more receivers. The method comprises the step of defining a
hierarchy of functions, wherein each function is configured to migrate the secret data from an input transform space to an output transform space using a mathematical transformation under control of
a seed. The method further comprises the step of providing one or more seeds to the receivers.
According to an aspect of the invention a receiver is proposed for securely receiving a secret data from a sender. The receiver is for use in a system having one or more of the features defined
above. The receiver comprises a first memory configured for storing a sequence of functions originating from a hierarchy of functions. Each function is configured to migrate the secret data from an
input transform space to an output transform space using a mathematical transformation under control of a seed. The receiver is configured to receive one or more seeds from the sender. The receiver
is configured to migrate the secret data from the input transform space to a final output transform space using the sequence of functions under control of the seeds.
According to an aspect of the invention a method in a receiver is proposed for securely receiving a secret data from a sender. The receiver comprises a first memory configured for storing a sequence
of functions originating from a hierarchy of functions, wherein each function is configured to migrate the secret data from an input transform space to an output transform space using a mathematical
transformation under control of a seed. The method comprises the step of receiving one or more seeds from the sender. The method further comprises the step of migrating the secret data from the input
transform space to a final output transform space using the sequence of functions under control of the seeds.
Thus, the secret data can advantageously be conditionally provided from the sender to the receiver without the need of specific hardware such as a smartcard at the receiver.
A transform (or transformation) is a particular data encoding, chosen to be lossless and not easily reversible to the original representation. Several classes of encodings are known, typically based
on properties of certain algebras. A transform space is the domain defined by a particular transform that includes the encodings for all possible clear data, and where operations on the clear data
are performed by mapped, equivalent operations on the encoded data.
"Under control of the seed" means that--in case the receiver is allowed to receive the secret data--the seed comprises specific data such as a value, a set of values or a function that matches with
the input transform space of the secret data in such a way that the mathematical transformation performed by the function results in a meaningful output transform space of the secret data. In other
words, the output transform space after transformation can be used as an input transform space in a subsequent transformation performed by a subsequent function under control of a corresponding seed
such that the secret data would be obtainable when subsequently migrated to a clear text transform space. In case the receiver is not allowed to receive the secret data, the sender can either not
send the seed resulting in the function being unable to perform the transformation or send an incorrect seed resulting in the function performing the mathematical transformation with a meaningless
output. In the latter case the secret data cannot be obtained by migration to the clear text transform space.
A function is typically a software code portion or a software module stored in the memory. A processor executes the functions in the sequence of functions to migrate the secret data from the input
transform space to the final output transform space.
The embodiments of claims 2 and 10 advantageously enable the sender to prevent a group of receivers from obtaining the secret data.
The embodiments of claims 3 and 11 advantageously enable the sender to prevent a specific receiver from obtaining the secret data.
The embodiments of claims 4 and 12 advantageously enable the secret data to be obtainable by a specific receiver only, i.e. the receiver that has the correct personalized seed which is typically
unique to the receiver.
The embodiments of claims 5 and 13 advantageously enable protection against reverse engineering and/or reverse execution of the function, whereby the interfaces between the functions need not be protected.
The embodiments of claims 6 and 14 advantageously provide additional protection against reverse engineering of the functions.
Hereinafter, embodiments of the invention will be described in further detail. It should be appreciated, however, that these embodiments should not be construed as limiting the scope of protection for
the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS [0025]
Aspects of the invention will be explained in greater detail by reference to exemplary embodiments shown in the drawings, in which:
FIG. 1 shows a function performing a mathematical transformation of the prior art;
FIG. 2 shows a function performing a mathematical transformation under control of a seed of an exemplary embodiment of the invention;
FIG. 3 shows a sequence of functions of an exemplary embodiment of the invention;
FIG. 4 shows a sequence of functions of an exemplary embodiment of the invention;
FIG. 5 shows a transformation hierarchy of an exemplary embodiment of the invention;
FIG. 6 shows a transformation hierarchy of an exemplary embodiment of the invention;
FIG. 7 shows a conditional access receiver of an exemplary embodiment of the invention;
FIG. 8 shows the steps of a method in a system of an exemplary embodiment of the invention;
FIG. 9 shows the steps of a method in a sender of an exemplary embodiment of the invention;
FIG. 10 shows the steps of a method in a receiver of an exemplary embodiment of the invention; and
FIG. 11 shows a diagram clarifying transformation functions and encryption in general terms.
DETAILED DESCRIPTION OF THE DRAWINGS [0037]
The function F shown in FIG. 1 is a mathematical operation that migrates data Z across two different transform spaces--e.g. encryption spaces--identified by IN and OUT. The dimension of the output
transform space OUT is at least as large as the input transform space IN, and any data Z is represented (possibly not uniquely) in both input and output transform spaces as X and Y respectively. The
transform spaces IN and OUT are defined in such a way that there is no apparent mapping between the data Z and its representation in either of the transform spaces, i.e. knowing only X and Y it is
difficult or even impossible to obtain the corresponding Z. The function F is designed such that it is difficult to run in reverse direction. Because no apparent mapping between the input and output
transform spaces exists and the dimension of transform spaces IN and OUT is preferably significantly large, recreation of the function F is prevented. Moreover, the function F is implemented in such
a way that it is difficult to extract the data Z as it passes through the function, e.g. using known white box techniques and/or known code obfuscation techniques.
With reference to FIG. 1, function F is e.g. defined as F(X)=3*X+2. If the input transform space IN is a clear text transform space, then X=(Z)^IN=Z. After migration the following result is obtained: Y=(Z)^OUT=3*X+2. To migrate Z from the output transform space to the clear text transform space again, a reverse function F^-1(Y)=(Y-2)/3 must be available in the receiver to obtain X as follows: F^-1(Y)=(3*X+2-2)/3=X. In this example Z, X and Y are numbers that can be transformed using simple arithmetic. It will be understood that Z, X and Y can be data in any data format, including binary values, numbers, characters, words, etc. The function F can be a more complex function suitable for operation on e.g. binary values, numbers, characters or words. Function F is e.g. an encryption function.
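The arithmetic of this example can be sketched in a few lines of code (an illustrative Python toy mirroring only the numbers in the text, not part of the claimed embodiments; the names `F` and `F_inv` are ours):

```python
# Toy sketch of the FIG. 1 example: F migrates data Z from a clear text
# input transform space (where X = Z) into an output transform space.
def F(x):
    return 3 * x + 2          # forward migration: Y = 3*X + 2

def F_inv(y):
    return (y - 2) // 3       # reverse function available in the receiver

Z = 10
X = Z                         # IN is the clear text transform space
Y = F(X)                      # representation of Z in the output space
assert Y == 32
assert F_inv(Y) == Z          # only a party holding F_inv recovers Z
```

In a real deployment F would be a one-way, white-box protected construction rather than a trivially invertible affine map; the toy only illustrates the migration and its reverse.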
The function F can be defined as a mathematical operation that can be seeded with an additional parameter (also referred to as "seed") S, as shown in FIG. 2. The migration that the function F performs is typically defined by the seed S only and no information about the input space IN and output space OUT is embedded into F. The function F is chosen in
such a way that manipulation of input data X or seed S yields an unpredictable resulting data Y in the output transform space. The seed S does not need to be stored in a secure environment as the
seed S is engineered in such a way that no information about transform space IN or OUT can be extracted.
With reference to FIG. 2, function F is e.g. defined as F(X,S)=X-7+S. If the input transform space IN is a clear text transform space, then X=(Z)^IN=Z. After migration the following result is thus obtained: Y=(Z)^OUT=X-7+S=Z-7+S. If e.g. a seed S is provided as data comprising the value of 5, then F(X,5)=X-7+5 and Y=(Z)^OUT=X-7+5=Z-2. To migrate Z from the output transform space to the clear text transform space again, a reverse function F^-1(Y,S)=Y+7-S must be available in the receiver to enable the receiver to obtain Z as follows: F^-1(Y,S)=(X-7+5)+7-S. If the seed S=5 is known in the receiver, then Z can correctly be obtained as: F^-1(Y,5)=(X-7+5)+7-5=X=Z. If the input transform space IN is not a clear text transform space, then function F typically first performs a reverse transformation in the input transform space IN and next a transformation in the output transform space OUT. Such a function F is e.g. defined as F(X,S1,S2)=F2(F1(X,S1),S2), wherein F1(X,S1)=X-2-S1 and F2(X,S2)=X-7+S2. After migration the following result is thus obtained: Y=(Z)^OUT=(X-2-S1)-7+S2=X-9-<S1,S2>, wherein X=(Z)^IN and the compound <S1,S2>=S1-S2. Seeds S1 and S2 can be provided as two separate seeds to first perform F1(X,S1) and next perform F2(X,S2), or as a single seed comprising the compound <S1,S2> that can be used as input to F2(F1(X,S1),S2). If e.g. S1=5 and S2=7, then the compound must equal <S1,S2>=5-7=-2 to successfully migrate Z to the output transform space OUT. In these examples Z, X, Y and S are numbers that can be transformed using simple addition and subtraction mathematics. It will be understood that Z, X, Y and S can be data in any data format, including binary values, numbers, characters, words, etc. The function F can be a more complex function suitable for operation on e.g. binary values, numbers, characters or words. Function F is e.g. an encryption function.
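The seeded example, including the composed variant with a compound seed, can be mirrored in code (illustrative Python; the helper names F1, F2 and F_inv are ours):

```python
# Toy sketch of the FIG. 2 example: the migration performed by F is
# defined entirely by the seed S.
def F(x, s):
    return x - 7 + s             # forward: Y = X - 7 + S

def F_inv(y, s):
    return y + 7 - s             # reverse: X = Y + 7 - S

Z = 100
Y = F(Z, 5)                      # clear text input space, seed S = 5
assert Y == Z - 2
assert F_inv(Y, 5) == Z          # correct seed recovers Z
assert F_inv(Y, 6) != Z          # incorrect seed yields a meaningless value

# Composed variant F2(F1(X, S1), S2) with compound seed <S1,S2> = S1 - S2:
def F1(x, s1):
    return x - 2 - s1

def F2(x, s2):
    return x - 7 + s2

S1, S2 = 5, 7
compound = S1 - S2               # -2 in the example from the text
assert F2(F1(Z, S1), S2) == Z - 9 - compound
```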
As shown in FIG. 3, the function F can be repeated multiple times in sequence, each time with a different seed (or compound of seeds) Si, to allow data Z to be migrated across multiple transform spaces. In the example of FIG. 3 the data Z is first migrated from the input transform space IN (i.e. X=(Z)^IN) to output transform space OUT1 (not shown) using function F and seed S1. The intermediate result (Z)^OUT1 (not shown) is then input to the function F with seed S2 to migrate the data Z from transform space OUT1 to transform space OUT2 (not shown). Finally, the intermediate result (Z)^OUT2 (not shown) is input to the function F with seed S3 to migrate the data Z from transform space OUT2 to transform space OUT3, resulting in Y=(Z)^OUT3. The total transformation from IN to OUT3 is fully dependent on all three seeds having the correct values in the correct order. The seeds have no meaning if used in isolation.
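The chained migration can be sketched as follows. Because a purely additive example function would make the seed order irrelevant, this toy uses a made-up order-sensitive function (a modular affine map); all names and values are illustrative:

```python
# Toy sketch of FIG. 3: one seeded function F applied three times in
# sequence migrates Z across transform spaces OUT1, OUT2 and OUT3.
M = 257                              # small modulus for the toy

def F(x, s):
    return (2 * x + s) % M           # order-sensitive seeded migration

def F_inv(y, s):
    return ((y - s) * 129) % M       # 129 is the inverse of 2 modulo 257

def migrate(x, seeds):
    for s in seeds:                  # (Z)^IN -> (Z)^OUT1 -> (Z)^OUT2 -> ...
        x = F(x, s)
    return x

Z, seeds = 50, [3, 11, 4]
Y = migrate(Z, seeds)                # Y in the final transform space OUT3

# Both the seed values and their order matter:
assert migrate(Z, [4, 11, 3]) != Y

# The reverse migration applies the seeds in reverse order:
x = Y
for s in reversed(seeds):
    x = F_inv(x, s)
assert x == Z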
To prevent reverse engineering of function F, information about intra-stage transform spaces (OUT1 and OUT2 in the example of FIG. 3) may be partially embedded into the relevant functions, thus creating a new sequence of non-interchangeable functions Fi based on the same principles as explained for FIG. 3. This is shown in FIG. 4. In FIG. 4, each of the functions F1, F2 and F3, and its corresponding seed S1, S2 and S3, produces meaningful output only if its input transform space matches the output transform space of the previous function in the sequence. In the example of FIG. 4 the seed S1 in conjunction with function F1 migrates data Z from the input transform space IN to the output transform space OUT1, thus requiring the subsequent seed S2 in conjunction with function F2 to be capable of migrating data Z from an input transform space equal to OUT1. Similar to S1 in conjunction with F1, S2 in conjunction with F2 and S3 in conjunction with F3 are capable of migrating data Z from transform space OUT1 to transform space OUT2 and from transform space OUT2 to transform space OUT3, respectively.
The seeds Si are preferably chosen such that the data Y=(Z)^OUT3 is only meaningful to a specific receiver, wherein Y is processed by a piece of hardware that is uniquely personalized and thereby capable of obtaining Z from Y=(Z)^OUT3.
As shown in FIG. 5, a transformation hierarchy--i.e. a tree or hierarchy of n levels of functions F1 . . . Fn--can be defined with individual seeds Si for each function. In general a transformation hierarchy has at least two levels of functions (e.g. the functions F1 and F2 of FIG. 5). In theory the maximum number of levels is indefinite, but in practice the maximum number of levels is restricted by memory constraints for storing the transformation hierarchy or the relevant part of the transformation hierarchy. The transformation hierarchy is used to transform a global transformed secret X=(Z)^OUT into a multitude of independent transform spaces. Typically a first transformation is performed in the sender to migrate the secret data Z from a clear text input transform space IN to an output transform space OUT. In the example of FIG. 5 the number of levels is 3, resulting in three different functions F1, F2 and F3 being used in the transformation hierarchy. The transformation hierarchy is used to conditionally migrate the global transformed secret X to final and possibly unique transform spaces OUT1 . . . OUT4, without exposing the secret data Z in a meaningful way.
With reference to FIG. 2, the function F can be chosen such that, for a given seed S* instead of S, it correctly transforms only a specific subset of data X from the input transform space IN to the output transform space
OUT. The characteristics of the subset are determined by the mathematical operation that F performs, whereby the outcome of the transformation is dependent on the correlation between the data X and
the data of the seed S*. In this case, the dimension of the output space OUT may be smaller than the input space IN. The seed S*, which is used to conditionally migrate Z from transform space IN to transform space OUT, can be seen as an augmented version of the plain seed S which is used to unconditionally migrate Z from transform space IN to transform space OUT. The function F is
chosen in such a way that it is difficult to deduce the resulting subset from a given data X and seed S*, and it is difficult to manipulate the subset by manipulating X and/or S* in order to include
a specific data of X without affecting the resulting data Y in the output transform space. A correct seed S* correlates to the input transform space IN such that the mathematical operation performed
by F yields the correct output transform space OUT. This technique is used to perform obscured conditional transformations that can be implemented using e.g. white box techniques or code obfuscation.
The technique can be applied to any secret data Z.
The conditional property of an augmented transformation function F allows an individual receiver, or group of receivers, to be revoked from obtaining the transformed control word Y, by choosing new
seeds Si* at the lowest level (i.e. closest to the Y1 . . . Y4, in FIG. 6 this is the level of functions F3) of the transformation hierarchy. An example of a transformation hierarchy with augmented
transformation functions F is shown in FIG. 6. Unlike traditional key hierarchy schemes wherein the valence equals 2, the valence of the bottom nodes can be made significantly larger than 2.
Consequently, receiver revocation can take place more efficiently. For the sake of simplicity, in the transformation hierarchy of FIG. 6 the valence is equal to 2.
In the example of FIG. 6, to revoke access of a specific receiver to Y2=(Z)^OUT2--indicated by "X" in-between Y1 and Y3--a new seed S2B1 can be provided in such a way that the resulting output space of F2B matches the input space of F3 only if seeded with the seed S31*. Herein S31* is specifically chosen to correlate with the F2 output space. The output space of F2B has now become useless when seeded with S32*. To prevent the revoked receiver from blocking any seed update, seeds S, S2A1 and S2A2 can be renewed too.
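The revocation idea can be illustrated with a toy migration function: the sender renews a pair of adjacent seeds, delivers the new pair only to receivers that remain entitled, and a receiver left with a stale seed no longer reaches the correct final transform space. All names and values below are illustrative, not taken from the patent:

```python
# Toy sketch of seed-based revocation (FIG. 6).
M = 257

def F(x, s):
    return (2 * x + s) % M           # seeded migration step

Z = 50
reference = F(F(Z, 10), 20)          # final space reached with the old seeds

# The sender renews the upstream seed and compensates in the downstream
# seed, so that only the new pair reproduces the same final space:
new_seeds = [14, 12]                 # chosen so 2*14 + 12 == 2*10 + 20

entitled = F(F(Z, new_seeds[0]), new_seeds[1])
assert entitled == reference         # entitled receiver: unchanged result

# A revoked receiver mixes the renewed upstream seed with its stale
# downstream seed and ends up in a meaningless output space:
revoked = F(F(Z, new_seeds[0]), 20)
assert revoked != reference
```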
The functions F1 . . . Fn can differ from each other by relying on different correlations between their input data X and seed S.
The invention advantageously enables globally transformed secrets X to be conditionally delivered and made available to a receiver in a preferably uniquely transformed form Y1 . . . Y4 without the
need to deliver these data to each receiver individually. The migration of said secrets to final transform space OUT1 . . . OUT4 is done in a number of steps--each with their own seed Si or Si*--yet
the individual steps, seeds and intermediate data are not meaningful in isolation. As long as the transformed data Y1 . . . Y4 is not meaningful outside the context of a specific receiver--e.g. it
must match the input transform space of a uniquely personalized secure chipset in order to be able to obtain Z, whereby the secure chipset is difficult to copy--distributing this data Y1 . . . Y4 to
other receivers is meaningless as the other receivers cannot obtain Z from Y1 . . . Y4. This provides protection against sharing and cloning the secret data Z, while keeping the resource requirements
associated with white-box cryptography or code obfuscation within the receiver to a minimum. Only minimal hardware support is required in a receiver to be able to interpret the output transform space
OUT1 . . . OUT4 of the conditional transform hierarchy and obtain Z.
The seeds Si and Si* are typically provided as dynamic data and can be cycled in time. Only specific seeds Si or Si* need to be updated and delivered to the appropriate receivers to manipulate
conditional access to secret data Z. This provides bandwidth benefits.
The transformation hierarchy such as shown in FIG. 6 is typically defined or known in the sender. The sender generates the seeds S or S* and transmits the seeds to the relevant receivers. Hereby the
seeds are generated such to enable or disable a specific receiver or a group of receivers, depending on the level of the functions whereto the seeds are applied, to transform X into Y. Moreover, the
sender migrates the secret data Z from a clear text input transform space IN to an output transform space OUT using function F1 under control of seed S1. Each receiver is typically configured to
transform X to Y along a predefined path of the transform hierarchy and subsequently derive Z from Y. Hereto typically a single path of functions is stored in a first memory of the receiver. It is
possible to have multiple paths stored in the receiver to be able to obtain Z along different paths depending on the seeds received, e.g. to allow the sender to control access to different secret
data Z. Several receivers can have the same path of functions Fi implemented or each receiver can have a unique path of functions Fi implemented. Referring to FIG. 6, Y1 . . . Y4 are e.g. data
targeted at four different receivers. The first receiver is configured to transform X into Y1 along the path F2A(S2A1)-F2B(S2B1)-F3(S31*), the second receiver is configured to transform X into Y2
along the path F2A(S2A1)-F2B(S2B1)-F3(S32*), the third receiver is configured to transform X into Y3 along the path F2A(S2A2)-F2B(S2B2)-F3(S32*) and the fourth receiver is configured to transform X
into Y4 along the path F2A(S2A2)-F2B(S2B2)-F3(S33*). The secret data Z is finally obtained by the receiver by migrating the data Z from the final output transform space OUT1, OUT2, OUT3 or OUT4 to a
clear text transform space under control of a personalized seed stored in a second memory in the receiver. The first memory where the sequence of functions is stored and the second memory for storing
the personalized seed can be parts of a single memory module or separate memory modules. In the clear text transform space the data Z is no longer transformed and thus usable by the receiver.
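The per-receiver path and the final migration under a personalized seed can be sketched with a toy function: the sender prepares X so that exactly this receiver's stored path, followed by its personalized unwrap, yields Z. All names and values are illustrative:

```python
# Toy sketch: path through the hierarchy plus personalized final unwrap.
M = 257

def F(x, s):
    return (2 * x + s) % M           # seeded migration step

def F_inv(y, s):
    return ((y - s) * 129) % M       # 129 = inverse of 2 modulo 257

Z = 77
p = 33                               # personalized seed in the second memory
s1, s2 = 5, 9                        # seeds for this receiver's stored path

# Sender side: place Z in the transform space from which this receiver's
# path, followed by its personalized unwrap, recovers Z.
X = F_inv(F_inv(F(Z, p), s2), s1)

# Receiver side: migrate along the stored path of functions ...
Y = F(F(X, s1), s2)                  # final output transform space
# ... then migrate to clear text under the personalized seed:
assert F_inv(Y, p) == Z
# A receiver holding a different personalized seed cannot obtain Z:
assert F_inv(Y, 34) != Z
```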
One or more of the transform functions Fi in the transformation hierarchy can be modified or replaced by uploading a new function F from the sender to one or more of the receivers in order to thwart
reverse engineering of the transformation functions within the receiver.
In the receiver the invention is typically implemented at least partly as software or as a field-programmable gate array (FPGA) program in a programmable array. The implementation can reside in an
unprotected, partially protected or secure memory of a processor. The processor executes the functions stored in the memory to migrate the secret data Z from the input transform space IN to the
output transform space OUT. Minimal hardware support is required in the receiver. Limited bandwidth is required between the sender and the receivers and no return path is needed from the receivers to
the sender. The secret data Z cannot be extracted or intercepted and thus cannot be illegally distributed to other receivers.
As explained above, the invention can be used to provide any kind of secret data Z from any kind of data sender to any kind of data receivers. An example application of the invention is conditionally
providing keys or control words from a head-end system to conditional access receivers in a broadcast network. Pay TV applications in the broadcast network rely on the encryption of content data
streams. Conditional access receivers need the relevant control words to decrypt the stream prior to decoding.
FIG. 7 shows an example of a path of the transformation hierarchy implemented in a conditional access receiver. The receiver receives a control word CW as a globally transformed control word CWD
in an entitlement control message ECM. The receiver migrates the CWD from the input transform space P into the final output transform space CSSK of the receiver in three steps. The last migration
step creates the transformed control word {CW}CSSK, which is the control word CW in the output transform space of the cluster shared secret key CSSK unique to the receiver. The conditional access
receiver of FIG. 7 comprises a generic computation environment and a secure computation environment.
The generic computation environment comprises an ECM Delivery Path for receiving the ECM from the head-end system. The generic computation environment further comprises an EMM Delivery Path for
receiving an Entitlement Management Message (EMM) from the head-end system. The EMM comprises the seeds that are needed to migrate the CW through the transform spaces along the path of the
transformation hierarchy. The seeds received in the EMM are stored in a NVRAM memory of the generic computation environment. A first seed equals the compound <P,G1>. A second seed equals the compound
<G1,U1>. A third seed equals the compound <CSSK,U1>.
The secure computation environment comprises a sequence of functions. A first function R transforms CWD from the input transform space P to the output transform space G1 using the compound <P,G1> as seed input. Subsequently a second function R transforms CWD, i.e. the CW in the transform space G1, from the input transform space G1 to the output transform space U1 using the compound <G1,U1>. Subsequently a third function, in this example a TDES Whitebox Encryption function, transforms CWD, i.e. the CW in the transform space U1, from the input transform space U1 to the output transform space CSSK. The resulting {CW}CSSK is the CW encrypted under the CSSK key, which can be decrypted by
the conditional access receiver using the CSSK that is pre-stored in a secured memory or securely derivable by the receiver.
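The three-step path of FIG. 7 can be modelled abstractly by treating each transform space as an additive mask and each compound seed as the difference of the two masks it bridges. This mirrors only the seeding mechanics; the real third step is a TDES white-box encryption, not an addition, and all mask values below are invented:

```python
# Toy model of the FIG. 7 path: CW travels P -> G1 -> U1 -> CSSK.
P, G1, U1, CSSK = 1000, 2000, 4000, 8000   # illustrative space masks

def R(x, compound):
    return x + compound                    # seeded migration step

CW = 123
CW_P = CW + P                              # globally transformed CW from the ECM

# Seeds delivered in the EMM as compounds bridging adjacent spaces:
seed1 = G1 - P                             # compound <P,G1>
seed2 = U1 - G1                            # compound <G1,U1>
seed3 = CSSK - U1                          # compound <CSSK,U1>

CW_CSSK = R(R(R(CW_P, seed1), seed2), seed3)
assert CW_CSSK == CW + CSSK                # {CW}CSSK in the receiver's space
# Only a receiver knowing CSSK obtains the clear control word:
assert CW_CSSK - CSSK == CW
```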
FIG. 8 shows the steps of a method for securely providing a secret data Z from a sender to one or more receivers as can be performed by a system as described above. Optional steps are indicated by
dashed lines. In optional step 5 a new function F is transmitted from the sender to the receiver. In optional step 6 the new function F replaces one or more of the functions in the memory of the
receiver. In step 1 one or more seeds S and/or S* are provided from the sender to the receiver. In step 2 the receiver migrates the secret data Z from the input transform space, e.g. input transform
space IN, to a final output transform space, e.g. output transform space OUT1, OUT2, OUT3 or OUT4, using the sequence of functions under control of the provided seeds. In optional step 3 a
personalized seed is read from the second memory in the receiver. In optional step 4 the receiver obtains the secret data Z by migrating the secret data from the final output transform space to a
clear text transform space under control of the personalized seed.
FIG. 9 shows the steps of a method for securely providing a secret data Z from a sender to one or more receivers as can be performed by a sender as described above. In step 10 the sender defines a
hierarchy of functions, wherein each function F is configured to migrate the secret data Z from an input transform space, e.g. input transform space IN, to an output transform space, e.g. output
transform space OUT, using a mathematical transformation under control of a seed S or S*. In step 11 one or more seeds S and/or S* are provided to the receivers.
FIG. 10 shows the steps of a method for securely providing a secret data Z from a sender to one or more receivers as can be performed by a receiver as described above. In step 20 one or more seeds S
and/or S* are received from the sender. In step 21 the secret data Z is migrated from the input transform space, e.g. input transform space IN, to a final output transform space, e.g. output
transform space OUT1, OUT2, OUT3 or OUT4, using the sequence of functions under control of the seeds S and/or S*.
The concept of transformation functions and encryption is clarified in general with reference to FIG. 11.
Assume there exists an input domain ID with a plurality of data elements in a non-transformed data space. An encryption function E using some key is defined that is configured to accept the data
elements of input domain ID as an input and deliver a corresponding encrypted data element in an output domain OD. The original data elements of input domain ID can be obtained by applying the
decryption function D to the data elements of output domain OD.
In a non-secure environment, an adversary is assumed to be able to control the input and output data elements and the operation of the implementation of the encryption function E, in order to
discover the confidential information (such as keys) that is embedded in the implementation.
Additional security can be obtained in such a non-secured environment by applying transformation functions to the input domain ID and output domain OD, i.e. the transformation functions are input-
and output operations. Transformation function T1 maps data elements from the input domain ID to transformed data elements of transformed input domain ID' of a transformed data space. Similarly,
transformation function T2 maps data elements from the output domain OD to the transformed output domain OD'. Transformed encryption and decryption functions E' and D' can now be defined between ID'
and OD' using transformed keys. T1 and T2 are bijections.
Using transformation functions T1, T2 together with encryption techniques implies that, instead of inputting data elements of input domain ID to encryption function E to obtain encrypted data
elements of output domain OD, transformed data elements of domain ID' are input to transformed encryption function E' by applying transformation function T1. Transformed encryption function E'
combines the inverse transformation function T1^(-1) and/or the transformation function T2 in the encryption operation to protect the confidential information, such as the key. Then transformed
encrypted data elements of domain OD' are obtained. By performing T1 and/or T2 in a secured portion, keys for encryption function E or decryption function D can neither be retrieved when analyzing
input data and output data in the transformed data space, nor when analyzing the white box implementation of E' and/or D'.
One of the transformation functions T1, T2 should be a non-trivial function. In case T1 is a trivial function, the input domains ID and ID' are the same domain. In case T2 is a trivial function,
the output domains OD and OD' are the same domain.
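The E' construction amounts to conjugating E by the input/output transformations: E' = T2 ∘ E ∘ T1⁻¹, so that feeding it T1-transformed inputs yields T2-transformed outputs. The byte-level bijections and the toy "encryption" below are my own placeholders, not real whitebox primitives:

```python
# Toy bijections on bytes standing in for T1, T2 and the cipher E.
T1 = lambda x: (x + 17) % 256          # input transformation (bijection)
T1_inv = lambda x: (x - 17) % 256
T2 = lambda x: x ^ 0xB4                # output transformation (bijection)
E = lambda x: (x * 5 + 3) % 256        # stand-in for encryption under a key

def E_prime(x_t):
    """Transformed encryption: operates entirely in the transformed spaces.
    (In a real whitebox build, T2, E and T1_inv would be fused into lookup
    tables so the key inside E is never exposed as data.)"""
    return T2(E(T1_inv(x_t)))

x = 200
assert E_prime(T1(x)) == T2(E(x))      # E' maps ID' to OD' consistently
```

The design point is that an adversary observing only ID'/OD' values and the fused implementation of E' sees neither the key nor untransformed data.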
Patent applications by Andrew Augustine Wajs, Haarlem NL
Patent applications by Arnoud Evert Van Foreest, Leiden NL
Patent applications by Philip Allan Eisen, Ottawa CA
Patent applications by Irdeto Access B.V.
Patent applications in class COMMUNICATION SYSTEM USING CRYPTOGRAPHY
Patent applications in all subclasses COMMUNICATION SYSTEM USING CRYPTOGRAPHY
| {"url":"http://www.faqs.org/patents/app/20100246822","timestamp":"2014-04-20T18:04:49Z","content_type":null,"content_length":"73543","record_id":"<urn:uuid:34c1a8ab-4557-47dd-a26b-e308caec969f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Nik Weaver's conceptualism and the correctness of the Schütte-Feferman analysis
Nik Weaver nweaver at math.wustl.edu
Sun Apr 9 03:29:52 EDT 2006
Okay, I will post a brief description of my formal systems for
hierarchies of Tarskian truth predicates, which prove relatively
strong well-ordering statements (e.g., enough to imply Kruskal's
theorem) and are supposed to be predicative.
But before I do that, can someone *please* tell me what is wrong
with my critique of Feferman-Schutte? This is independent of
the success of my truth theories.
At least two messages have now been posted which say that my
objections are unconvincing, but don't say why. I suspect that
neither of the authors had actually read my critique. The full
version is available in my paper "Predicativity beyond Gamma_0",
posted on my web site at
but here is a brief version.
Gamma_0 is the smallest predicatively non-provable ordinal. That
is, every ordinal less than Gamma_0 is isomorphic to an ordering
of omega which can be predicatively proven to be a well-ordering;
this is not true of Gamma_0 or any larger ordinal.
If a predicativist trusts some formal system for second order
arithmetic, then he should accept not only the theorems of the
system itself but also additional statements such as the assertion
that the system is consistent. He should indeed accept a "formalized
omega-rule schema" applied to the original system. Then the original
system plus the schema constitutes a new system that he accepts, and
the process can be iterated. It can even be transfinitely iterated,
yielding a family of formal systems S_a indexed by ordinal notations.
Kreisel proposed that a predicativist should accept the system S_a
when and only when he has a prior proof that a is an ordinal notation.
Feferman proved that if S_0 is a reasonable base system, then Gamma_0
is the smallest ordinal with the property that there is no finite
sequence of ordinal notations a_1, ..., a_n with a_1 a notation for
0, a_n a notation for Gamma_0, and such that S_{a_i} proves that
a_{i+1} is an ordinal notation. Thus, Gamma_0 is the smallest
predicatively non-provable ordinal.
The plausibility of Kreisel's proposal hinges on our conflating two
versions of the concept "ordinal notation" --- supports transfinite
induction for arbitrary sets versus supports transfinite induction
for arbitrary properties --- which are not predicatively equivalent.
When we prove a_i is an ordinal notation in Feferman's set-up, we
are only showing transfinite induction up to a_i for statements of
the form "b is in X". That is, if we know that "everything less than
b is in X implies b is in X", then we can infer that everything up
to a_i is in X. To infer soundness of S_{a_i} we need transfinite
induction up to a_i for the statement "S_b is sound". That is a
genuinely stronger assertion since, for example, S_{a_i} proves the
existence of arithmetical jump hierarchies up to a_i. So we should
not be able to infer soundness of S_{a_i} from the fact that a_i is
an ordinal notation.
Let us grant that the predicativist can somehow make the disputed
inference. Then for each a he has some way to make the deduction
(*) from I(a) and Prov_{S_a}(A(n)), infer A(n)
for any formula A, where I(a) formalizes the assertion that a is an
ordinal notation (supporting transfinite induction for sets).
Shouldn't he then accept the assertion
(**) (forall a)(forall n)[I(a) and Prov_{S_a}(A(n)) --> A(n)]
for any formula A?
It is easy to see that the second assertion implies I(a) where a
is a notation for Gamma_0. So we somehow have to accept every
instance of the rule (*) but not the general implication (**).
In my Gamma_0 paper I discuss three separate (indeed, contradictory)
attempts by Kreisel to justify this. All three seemed hopeless to me.
If my critique is truly fallacious, surely someone can explain
(1) why (*) is reasonable
(2) why (**) is not reasonable.
Nik Weaver
Math Dept.
Washington University
St. Louis, MO 63130 USA
nweaver at math.wustl.edu
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2006-April/010368.html","timestamp":"2014-04-20T23:28:27Z","content_type":null,"content_length":"7046","record_id":"<urn:uuid:e77022c1-f1b6-492f-8e94-57890642a278>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00395-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics 423/502
Mathematics 423/502. Student projects.
Instructor: Z. Reichstein. January-April 2013.
An example of a PID which is not a Euclidean Domain by Siqi Wei
On a principal ideal domain that is not a Euclidean Domain by Conan Wong
A principal ideal domain that is not a Euclidean Domain by Lucas Guillen
A subresultant polynomial remainder sequence algorithm by Ronnie Chen
Introduction to spectral sequences by Huan Vo
Division points on curves by Niki Myrto Mavraki
Wedderburn's Little Theorem by Shamil Asgarli
Points of small height on curves by Vanessa Radzimski
Fibre Dimension Theorem by Xinyu Liu
Model-theoretic proofs of Hilbert's Nullstellensatz and Chevalley's theorem about the image of a constructible set by Yiyang Zhan
Krull Dimension by Cathleen Childs | {"url":"http://www.math.ubc.ca/~reichst/423-502S13projects.html","timestamp":"2014-04-16T10:31:21Z","content_type":null,"content_length":"1932","record_id":"<urn:uuid:c12360e3-73ba-44cd-a9c6-01bd5e0b2774>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00499-ip-10-147-4-33.ec2.internal.warc.gz"} |
Antiferromagnetic potts models on the square lattice: A high-precision Monte Carlo study
Ferreira, SJ; Sokal, AD; (1999) Antiferromagnetic potts models on the square lattice: A high-precision Monte Carlo study. J STAT PHYS , 96 (3-4) 461 - 530.
Full text not available from this repository.
We study the antiferromagnetic q-state Potts model on the square lattice for q = 3 and q = 4, using the Wang-Swendsen-Kotecky (WSK) Monte Carlo algorithm and a powerful finite-size-scaling extrapolation
method. For q = 3 we obtain good control up to correlation length xi ~ 5000; the data are consistent with xi(beta) = A e^(2 beta) beta^p (1 + a_1 e^(-beta) + ...) as beta --> infinity, with p
approximately 1. The staggered susceptibility behaves as chi_stagg ~ xi^(5/3). For q = 4 the model is disordered (xi <~ 2) even at zero temperature. In appendices we
prove a correlation inequality for Potts antiferromagnets on a bipartite lattice, and we prove ergodicity of the WSK algorithm at zero temperature for Potts antiferromagnets on a bipartite lattice.
Type: Article
Title: Antiferromagnetic potts models on the square lattice: A high-precision Monte Carlo study
Keywords: Potts model, antiferromagnet, square lattice, phase transition, zero-temperature critical point, Monte Carlo, cluster algorithm, Swendsen-Wang algorithm, Wang-Swendsen-Kotecky
algorithm, finite-size scaling, NONLINEAR SIGMA-MODELS, LOGARITHMIC CORRECTIONS, MULTICRITICAL POINT, PHASE-TRANSITIONS, CRITICAL-BEHAVIOR, 3 DIMENSIONS, SPIN MODELS, MASS GAP,
XY-MODEL, SIMULATIONS
UCL UCL > School of BEAMS > Faculty of Maths and Physical Sciences > Mathematics
Archive Staff Only: edit this record | {"url":"http://discovery.ucl.ac.uk/1323526/","timestamp":"2014-04-20T01:30:40Z","content_type":null,"content_length":"21751","record_id":"<urn:uuid:ef472049-cfeb-41b7-b7e6-c043b10ceaf8>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00637-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computability and λ-definability
Results 1 - 10 of 17
- COMPUTER , 2006
"... Copyright © 2006, by the author(s). ..."
, 1996
"... This paper relates to system-level design of signal processing systems, which are often heterogeneous in implementation technologies and design styles. The heterogeneous approach, by combining
small, specialized models of computation, achieves generality and also lends itself to automatic synthesis ..."
Cited by 17 (4 self)
Add to MetaCart
This paper relates to system-level design of signal processing systems, which are often heterogeneous in implementation technologies and design styles. The heterogeneous approach, by combining small,
specialized models of computation, achieves generality and also lends itself to automatic synthesis and formal verification. Key to the heterogeneous approach is to define interaction semantics that
resolve the ambiguities when different models of computation are brought together. For this purpose, we introduce a tagged signal model as a formal framework within which the models of computation
can be precisely described and unambiguously differentiated, and their interactions can be understood. In this paper, we will focus on the interaction between dataflow models, which have partially
ordered events, and discrete-event models, with their notion of time that usually defines a total order of events. A variety of interaction semantics, mainly in handling the different notions of time
in the two models, are explored to illustrate the subtleties involved. An implementation based on the Ptolemy system from U.C. Berkeley is described and critiqued.
"... Abstract. Traditional combinatory logic is able to represent all Turing computable functions on natural numbers, but there are effectively calculable functions on the combinators themselves that
cannot be so represented, because they have direct access to the internal structure of their arguments. S ..."
Cited by 5 (4 self)
Add to MetaCart
Abstract. Traditional combinatory logic is able to represent all Turing computable functions on natural numbers, but there are effectively calculable functions on the combinators themselves that
cannot be so represented, because they have direct access to the internal structure of their arguments. Some of this expressive power is captured by adding a factorisation combinator. It supports
structural equality, and more generally, a large class of generic queries for updating of, and selecting from, arbitrary structures. The resulting combinatory logic is structure complete in the sense
of being able to represent pattern-matching functions, as well as simple abstractions. §1. Introduction. Traditional combinatory logic [21, 4, 10] is computationally equivalent to pure λ-calculus [3]
and able to represent all of the Turing computable functions on natural numbers [23], but there are effectively calculable functions on the combinators themselves that cannot be so represented, as
they examine the internal structure of their arguments.
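The representability claim above can be made concrete in a few lines. Below is a standard-construction sketch (not code from the paper) of leftmost reduction for S and K terms, showing that S K K behaves as the identity; note that nothing in pure SK can inspect the internal structure of its argument, which is exactly the gap the factorisation combinator fills:

```python
# Applicative terms: 'S', 'K', a variable string, or a pair (f, arg).
def step(t):
    """One leftmost reduction step, or None if t is in normal form."""
    if not isinstance(t, tuple):
        return None
    f, a = t
    if isinstance(f, tuple) and f[0] == 'K':          # K x y -> x
        return f[1]
    if (isinstance(f, tuple) and isinstance(f[0], tuple)
            and f[0][0] == 'S'):                      # S x y z -> x z (y z)
        x, y, z = f[0][1], f[1], a
        return ((x, z), (y, z))
    r = step(f)                                       # otherwise reduce inside
    if r is not None:
        return (r, a)
    r = step(a)
    return None if r is None else (f, r)

def normalize(t, fuel=1000):
    while fuel:
        r = step(t)
        if r is None:
            return t
        t, fuel = r, fuel - 1
    raise RuntimeError("no normal form within fuel")

I = (('S', 'K'), 'K')                 # S K K behaves as the identity
assert normalize((I, 'v')) == 'v'
```

Both reduction rules discard or duplicate arguments wholesale; neither branches on what an argument *is*, which is why structural-equality queries need an extension such as the factorisation combinator.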
- Minds and Machines 13(1 , 2003
"... Abstract. This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself,
had shown that no finite set of rules could be used to generate all true mathematical statements. Yet accord ..."
Cited by 4 (2 self)
Add to MetaCart
Abstract. This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had
shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by
intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated
by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the “mathematical
objection ” to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and
eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the
limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do. Key words: artificial intelligence, Church-Turing thesis, computability,
effective procedure, incompleteness, machine, mathematical objection, ordinal logics, Turing, undecidability The ‘skin of an onion ’ analogy is also helpful. In considering the functions of the mind
or the brain we find certain operations which we can express in purely mechanical terms. This we say does not correspond to the real mind: it is a sort of skin which we must strip off if we are to
find the real mind. But then in what remains, we find a further skin to be stripped off, and so on. Proceeding in this way, do we ever come to the ‘real ’ mind, or do we eventually come to the skin
which has nothing in it? In the latter case, the whole mind is mechanical (Turing, 1950, p. 454–455). 1.
"... A course in discrete mathematics is a relatively recent addition, within the last 30 or 40 years, to the modern American undergraduate curriculum, born out of a need to instruct computer science
majors in algorithmic thought. The roots of discrete mathematics, however, are as old as mathematics itse ..."
Cited by 2 (1 self)
Add to MetaCart
A course in discrete mathematics is a relatively recent addition, within the last 30 or 40 years, to the modern American undergraduate curriculum, born out of a need to instruct computer science
majors in algorithmic thought. The roots of discrete mathematics, however, are as old as mathematics itself, with the notion of counting a discrete operation, usually cited as the first mathematical
, 2011
"... Undecidability of various properties of first order term rewriting systems is well-known. An undecidable property can be classified by the complexity of the formula defining it. This
classification gives rise to a hierarchy of distinct levels of undecidability, starting from the arithmetical hierarc ..."
Cited by 2 (1 self)
Add to MetaCart
Undecidability of various properties of first order term rewriting systems is well-known. An undecidable property can be classified by the complexity of the formula defining it. This classification
gives rise to a hierarchy of distinct levels of undecidability, starting from the arithmetical hierarchy classifying properties using first order arithmetical formulas, and continuing into the
analytic hierarchy, where quantification over function variables is allowed. In this paper we give an overview of how the main properties of first order term rewriting systems are classified in these
hierarchies. We consider properties related to normalization (strong normalization, weak normalization and dependency problems) and properties related to confluence (confluence, local confluence and
the unique normal form property). For all of these we distinguish between the single term version and the uniform version. Where appropriate, we also distinguish between ground and open terms. Most
uniform properties are Π^0_2-complete. The particular problem of local confluence turns out to be Π^0_2-complete for ground terms, but only Σ^0_1-complete (and thereby recursively enumerable) for
open terms. The most surprising result concerns dependency pair problems without minimality flag: we prove this problem to be Π^1_1-complete, hence not in the arithmetical hierarchy, but properly in
the analytic hierarchy. Some of our results are new or have appeared in our earlier publications [35, 7]. Others are based on folklore constructions, and are included for completeness as their
precise classifications have hardly been noticed previously.
"... Abstract. This paper argues that basing the semantics of concurrent systems on the notions of state and state transitions is neither advisable nor necessary. The tendency to do this is deeply
rooted in our notions of computation, but these roots have proved problematic in concurrent software in gene ..."
Cited by 1 (0 self)
Add to MetaCart
Abstract. This paper argues that basing the semantics of concurrent systems on the notions of state and state transitions is neither advisable nor necessary. The tendency to do this is deeply rooted
in our notions of computation, but these roots have proved problematic in concurrent software in general, where they have led to such poor programming practice as threads. I review approaches (some
of which have been around for some time) to the semantics of concurrent programs that rely on neither state nor state transitions. Specifically, these approaches rely on a broadened notion of
computation consisting of interacting components. The semantics of a concurrent compositions of such components generally reduces to a fixed point problem. Two families of fixed point problems have
emerged, one based on metric spaces and their generalizations, and the other based on domain theories. The purpose of this paper is to argue for these approaches over those based on transition
systems, which require the notion of state. 1
"... was one of the founders of computability theory. His main contributions to this field were published in three papers that appeared in the span of a few years, and especially in his
ground-breaking 1936–1937 paper, published when he was twenty-four years old. As indicated by its title, “On Computable ..."
Add to MetaCart
was one of the founders of computability theory. His main contributions to this field were published in three papers that appeared in the span of a few years, and especially in his ground-breaking
1936–1937 paper, published when he was twenty-four years old. As indicated by its title, “On Computable Numbers, with an Application to the Entscheidungsproblem, ” Turing’s paper deals ostensibly
with real numbers that are computable in the sense that their decimal expansion “can be written down by a machine. ” As he pointed out, however, the ideas carry over easily to computable functions on
the integers or to computable predicates. The paper was based on work that Turing had carried out as a Cambridge graduate student, under the direction of Maxwell Newman (1897–1984). When Turing first
saw a 1936 paper by Alonzo Church, he realized at once that the two of them were tackling the same problem—making computability precise— albeit from different points of view. Turing wrote to Church
and then traveled to Princeton University to meet with him. The final form of the paper was
, 2000
"... this paper please consult me first, via my home page. ..."
"... In this project we will learn about both primitive recursive and general recursive functions. We will also learn about Turing computable functions, and will discuss why the class of general
recursive functions coincides with the class of Turing computable functions. We will introduce the effectively ..."
Add to MetaCart
In this project we will learn about both primitive recursive and general recursive functions. We will also learn about Turing computable functions, and will discuss why the class of general recursive
functions coincides with the class of Turing computable functions. We will introduce the effectively calculable functions, and the ideas behind Alonzo Church’s (1903–1995) proposal to identify the | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2828824","timestamp":"2014-04-20T07:32:57Z","content_type":null,"content_length":"36749","record_id":"<urn:uuid:4e7978a7-7b9b-4263-adcd-b70974b19819>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
Section: XKB FUNCTIONS (3) Updated: libX11 1.4.2
XkbAllocGeomShapes - Allocate space for an arbitrary number of geometry shapes
Status XkbAllocGeomShapes (XkbGeometryPtr geom, int num_needed);
- geom
geometry for which shapes should be allocated
- num_needed
number of new shapes required
Xkb provides a number of functions to allocate and free subcomponents of a keyboard geometry. Use these functions to create or modify keyboard geometries. Note that these functions merely allocate
space for the new element(s), and it is up to you to fill in the values explicitly in your code. These allocation functions increase sz_* but never touch num_* (unless there is an allocation failure,
in which case they reset both sz_* and num_* to zero). These functions return Success if they succeed, BadAlloc if they are not able to allocate space, or BadValue if a parameter is not as expected.
XkbAllocGeomShapes allocates space for num_needed shapes in the specified geometry geom. The shapes are not initialized.
To free geometry shapes, use XkbFreeGeomShapes.
BadAlloc
Unable to allocate storage
BadValue
An argument is out of range
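The sz_*/num_* contract described above can be restated in a few lines. This is a hypothetical Python model of the documented semantics, not the libX11 implementation: allocation grows the capacity field (sz) but never the count field (num), and a failed allocation resets both to zero:

```python
# Toy model of XkbAllocGeomShapes' documented sz_*/num_* behavior.
Success, BadAlloc = 0, 11   # placeholder status codes

class Geometry:
    def __init__(self):
        self.num_shapes = 0     # shapes actually filled in by the caller
        self.sz_shapes = 0      # shapes allocated (capacity)
        self.shapes = []

def alloc_geom_shapes(geom, num_needed, fail=False):
    if fail:                    # simulate an allocation failure
        geom.num_shapes = geom.sz_shapes = 0
        geom.shapes = []
        return BadAlloc
    geom.sz_shapes += num_needed
    geom.shapes.extend(None for _ in range(num_needed))  # uninitialized slots
    return Success

g = Geometry()
assert alloc_geom_shapes(g, 4) == Success
assert g.sz_shapes == 4 and g.num_shapes == 0   # sz grows, num untouched
```

As the man page notes, it is up to the caller to fill in the allocated slots and to bump num_* explicitly.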
This document was created by man2html, using the manual pages.
Time: 21:58:41 GMT, April 16, 2011 | {"url":"http://www.makelinux.net/man/3/X/XkbAllocGeomShapes","timestamp":"2014-04-20T18:53:01Z","content_type":null,"content_length":"9427","record_id":"<urn:uuid:f18cef82-053b-468e-98dc-e21008fda190>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jackson, NJ Math Tutor
Find a Jackson, NJ Math Tutor
...I have tons of materials in my possession and am looking forward to hearing from you soon.I tutor all components of Algebra 1 including polynomials, x and y intercepts, slopes, simplifying
with exponents, absolute value and everything else in between. I am passionate about writing. So many kids do not know where to start.
19 Subjects: including algebra 2, English, writing, linear algebra
...I have some experience tutoring a variety of age groups and academic levels. References and Background check available upon request. I have much experience in tutoring in Algebra.
22 Subjects: including calculus, SAT math, trigonometry, statistics
...I have been home tutoring for 15 years students from K-12, and special needs students. My area of expertise is Math Grades 5-8 and American history. I have 7 years experience proctoring the
SAT's and the NJASK.
11 Subjects: including algebra 1, prealgebra, geometry, GED
...I have tutored high school students in Algebra I and Geometry as well as assisting many in obtaining their GED. I was a teacher of Elementary Education in a local school system for 11 years
and can provide many letters of recommendation upon request. I look forward to hearing from you and helpi...
20 Subjects: including probability, SAT math, statistics, prealgebra
...While teaching in the classroom setting, I enjoy creating and implementing engaging lessons that get the students working together. In a one-on-one tutoring session, I am able to tailor my
lessons right to the specific student. I begin by getting to know the child.
7 Subjects: including trigonometry, algebra 1, algebra 2, geometry
Related Jackson, NJ Tutors
Jackson, NJ Accounting Tutors
Jackson, NJ ACT Tutors
Jackson, NJ Algebra Tutors
Jackson, NJ Algebra 2 Tutors
Jackson, NJ Calculus Tutors
Jackson, NJ Geometry Tutors
Jackson, NJ Math Tutors
Jackson, NJ Prealgebra Tutors
Jackson, NJ Precalculus Tutors
Jackson, NJ SAT Tutors
Jackson, NJ SAT Math Tutors
Jackson, NJ Science Tutors
Jackson, NJ Statistics Tutors
Jackson, NJ Trigonometry Tutors
Nearby Cities With Math Tutor
Berkeley Township, NJ Math Tutors
Brick Math Tutors
Bricktown, NJ Math Tutors
East Brunswick Math Tutors
Howell, NJ Math Tutors
Jackson Township, NJ Math Tutors
Lakewood, NJ Math Tutors
Long Branch, NJ Math Tutors
Manchester Township Math Tutors
Manchester, NJ Math Tutors
Millstone Township, NJ Math Tutors
Point Pleasant, NJ Math Tutors
Tinton Falls, NJ Math Tutors
Toms River Math Tutors
Wall Township, NJ Math Tutors | {"url":"http://www.purplemath.com/jackson_nj_math_tutors.php","timestamp":"2014-04-17T13:35:57Z","content_type":null,"content_length":"23721","record_id":"<urn:uuid:f0d75efd-60f2-4289-9474-471160ddc346>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00407-ip-10-147-4-33.ec2.internal.warc.gz"} |
Torresdale South, PA
Find a Torresdale South, PA Math Tutor
...I am in my 7th year as a local high school physics teacher. I graduated from the University of Maryland in 2007 with a degree in physics and I have been teaching ever since. I am very
passionate about my profession and about physics in particular.
4 Subjects: including algebra 1, algebra 2, geometry, physics
...Additionally, I have had some experience tutoring for both the Asvab Exams and the SAT Math exam. I look forward to working with you and helping you achieve success in physics or math. PeteI
am Pennsylvania Certified to teach high school math.
9 Subjects: including algebra 2, geometry, trigonometry, statistics
...I performed especially well on the math portions of the ACT and SAT, scoring a 35/36 on the ACT and a 2210/2400 on the SAT. Also, I have been classically trained in piano since the age of 4
and have playing competitive tennis at the national level since the age of 10. Please feel free to contact me if you have an interest in any of the above subjects.
12 Subjects: including algebra 1, algebra 2, biology, prealgebra
I'm a retired college instructor and software developer and live in Philadelphia. I have tutored SAT math and reading for The Princeton Review, tutored K-12 math and reading and SAT for
Huntington Learning Centers for over ten years, and developed award-winning math tutorials.
14 Subjects: including algebra 1, algebra 2, geometry, precalculus
I am certified as a math teacher in Pennsylvania and spent ten years teaching math courses for grades 7-12 in the Philadelphia area. I enjoy tutoring students one-on-one, and watching them become
stronger math students. I like to help them build their confidence and problem solving ability as well as their skills.I taught Algebra to 8th and 9th grade students for over 5 years.
3 Subjects: including algebra 1, geometry, prealgebra
Related Torresdale South, PA Tutors
Torresdale South, PA Accounting Tutors
Torresdale South, PA ACT Tutors
Torresdale South, PA Algebra Tutors
Torresdale South, PA Algebra 2 Tutors
Torresdale South, PA Calculus Tutors
Torresdale South, PA Geometry Tutors
Torresdale South, PA Math Tutors
Torresdale South, PA Prealgebra Tutors
Torresdale South, PA Precalculus Tutors
Torresdale South, PA SAT Tutors
Torresdale South, PA SAT Math Tutors
Torresdale South, PA Science Tutors
Torresdale South, PA Statistics Tutors
Torresdale South, PA Trigonometry Tutors
Nearby Cities With Math Tutor
Andalusia, PA Math Tutors
Baederwood, PA Math Tutors
Bridgeboro, NJ Math Tutors
Cornwells Heights, PA Math Tutors
Delair, NJ Math Tutors
Eddington, PA Math Tutors
Lynnewood Gardens, PA Math Tutors
Masonville, NJ Math Tutors
Meadowbrook, PA Math Tutors
Newportville, PA Math Tutors
North Delran, NJ Math Tutors
Oak Lane, PA Math Tutors
Roslyn, PA Math Tutors
Rydal, PA Math Tutors
Trevose, PA Math Tutors | {"url":"http://www.purplemath.com/Torresdale_South_PA_Math_tutors.php","timestamp":"2014-04-18T05:56:43Z","content_type":null,"content_length":"24404","record_id":"<urn:uuid:11de848b-96b1-417b-9c5c-50d0f29b6c8d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
The King of Bad Math: Dembski's Bad Probability
It's time to take a look at one of the most obnoxious, duplicitous promoters of Bad Math, William Dembski. I have a deep revulsion for this character, because he's actually a decent mathematician, but he's devoted his skills to creating convincing mathematical arguments based on invalid premises. But he's careful: he does his meticulous best to hide his assumptions under a flurry of mathematical notation.
For today, I'm going to focus on his paper "Fitness Among Competitive Agents".
One of the arguments that he loves to make, and which is at the heart of this paper, is what he calls the No Free Lunch (NFL) theorem. NFL states that "Averaged over all fitness functions, evolution
does no better than blind search."
Now, first, let's just take a moment to consider the meaning of NFL.
In Dembski's framework, evolution is treated as a search algorithm. The search space is a graph. (This is a graph in the discrete mathematics sense: a set of discrete nodes, with a finite number of edges to other nodes.) The nodes of the graph in this search space are "outcomes" of the search process at particular points in time; the edges exiting a node correspond to the possible changes that could be made to that node to produce a different outcome. To model the quality of a node's outcome, we apply a fitness function, which produces a numeric value describing the fitness (quality) of the node.
The evolutionary search starts at some arbitrary node. It proceeds by looking at the edges exiting that node, and computes the fitness of their targets. Whichever edge produces the best result is
selected, and the search algorithm progresses to that node, and then repeats the process.
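The greedy step just described can be sketched in a few lines of code. (The graph encoding and function names below are our own illustration, not Dembski's notation.)

```python
def evolutionary_search(graph, fitness, start, max_steps):
    """Greedy hill climb on a graph: from the current node, follow the
    edge whose target scores highest under the fitness function; stop
    when no edge leads to an improvement."""
    node = start
    for _ in range(max_steps):
        neighbors = graph[node]
        if not neighbors:
            break
        best = max(neighbors, key=fitness)
        if fitness(best) <= fitness(node):  # no improving edge left
            break
        node = best
    return node

# Toy example: a path 0-1-2-3-4 where fitness is just the node label.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(evolutionary_search(graph, lambda n: n, 0, 10))  # → 4
```

The search climbs the path node by node and stops at 4, the local (here also global) fitness peak.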
How do you test how well a search process works? You select a fitness function which describes the desired outcome, and see how well the search process matches your assigned fitness. The quality of your search process is defined by the limit as maxlength approaches infinity:
• For all possible starting points in the graph:
□ Run your search using your fitness metric for maxlength steps to reach an end point.
□ Using the desired-outcome fitness, compute the fitness of the end point.
□ Compute the ratio of your outcome to the maximum result of the desired outcome. This is the quality of your search for this length.
So - what does NFL really say?
"Averaged over all fitness functions": take every possible assignment of fitness values to nodes. For each one, compute the quality of its result. Take the average of the overall quality. This is the
quality of the directed, or evolutionary, search.
"blind search": blind search means instead of using a fitness function, at each step just pick an edge to traverse randomly.
So - NFL says that if you consider every possible assignment of fitness functions, you get the same result as if you didn't use a fitness function at all.
This is just a really fancy way of using mathematical jargon to create a tautology. The key is that "averaged over all fitness functions" bit. If you average over all fitness functions, then every node has the same expected fitness. So, in other words, if you consider a search in which you can't tell the difference between different nodes, and a search in which you don't look at the difference between different nodes, then you'll get equivalently bad results.
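That averaged claim can even be checked by brute force on a toy search space: take a handful of points, enumerate every possible assignment of fitness values, and compare a fixed "blind" sampling order against a rule that adapts to the values it sees. The setup below is our own illustration, not from Dembski's paper; averaged over all fitness assignments, the two strategies come out exactly equal, just as NFL predicts.

```python
from itertools import permutations
from fractions import Fraction

N, SAMPLES = 6, 3            # 6 points; each strategy may evaluate 3 of them

def fixed_search(f):
    """Blind search: always evaluate points 0, 1, 2; report the best seen."""
    return max(f[i] for i in range(SAMPLES))

def adaptive_search(f):
    """A 'clever' rule: let each observed value steer the next probe
    (never revisiting a point, as the NFL theorems require)."""
    visited, current, best = set(), 0, 0
    for _ in range(SAMPLES):
        visited.add(current)
        best = max(best, f[current])
        nxt = f[current] % N
        while nxt in visited:
            nxt = (nxt + 1) % N
        current = nxt
    return best

def average(strategy):
    """Average a strategy's best-found value over ALL fitness assignments."""
    funcs = list(permutations(range(1, N + 1)))
    return Fraction(sum(strategy(f) for f in funcs), len(funcs))

print(average(fixed_search), average(adaptive_search))  # both print 21/4
```

No matter what non-revisiting rule you substitute for `adaptive_search`, the average over all 720 fitness assignments stays at 21/4: averaged over everything, cleverness buys nothing.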
Now, in the paper that I linked to, he's responding to someone who showed that if you limit yourself to a restricted class of fitness functions (loosely defined: fitness functions where, the majority of the times that you compare two edges from a node, the target you select will be the one that is better according to the desired fitness function), then the result of running the search will, on average, be better than a random traversal.
Dembski's response to this (sorry I'm not quoting directly; he only posts the paper in PDF, which I can't cut and paste from) is to go into a long discussion of competitive functions. His focus is on the fact that a pairwise fitness function is not necessarily transitive: if it says A is fitter than B, and B is fitter than C, that doesn't necessarily mean that it will say A is fitter than C.
The example he uses for this is a chess tournament: if you create a fitness function for chess players from the results of a series of tournaments, you can wind up with results like player A can consistently beat player B; B can consistently beat C; and C can consistently beat A.
That's true. Competitive fitness functions can have that property. But if you're considering evolution, that doesn't matter. In an evolutionary process, you'd wind up picking one, two, or all three as the fittest. That's what speciation is. In one situation, A is better, so it "wins". Starting from the same point, but in a slightly different environment, B is better, so it wins.
You're still selecting a better result. The fact that you can't always select one as best doesn't matter. And it doesn't change the fundamental outcome, which Dembski doesn't really address: that competitive fitness functions produce a better result than random walks.
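And on any landscape with some structure, a fitness-guided search really does beat a random walk, often dramatically. A quick sketch on the classic "onemax" landscape (maximize the number of 1-bits in a 20-bit string; the parameters here are our own illustration):

```python
import random

N, BUDGET = 20, 400          # 20-bit strings (~10^6 points), 400 evaluations

def onemax(bits):            # a smooth, structured fitness function
    return sum(bits)

def hill_climb(rng):
    """Flip one random bit at a time, keeping any flip that improves fitness."""
    x = [rng.randint(0, 1) for _ in range(N)]
    best = onemax(x)
    for _ in range(BUDGET - 1):
        i = rng.randrange(N)
        y = x[:]
        y[i] ^= 1
        fy = onemax(y)
        if fy > best:
            x, best = y, fy
    return best

def blind_search(rng):
    """Sample BUDGET random strings and report the best fitness seen."""
    return max(onemax([rng.randint(0, 1) for _ in range(N)])
               for _ in range(BUDGET))

rng, trials = random.Random(0), 200
hc = sum(hill_climb(rng) for _ in range(trials)) / trials
bl = sum(blind_search(rng) for _ in range(trials)) / trials
print(hc, bl)  # hill climbing averages ~20, blind search ~16
```

With the same evaluation budget, the fitness-guided climber essentially always reaches the optimum of 20, while blind sampling typically tops out around 16: the structure of the landscape is exactly what NFL's "average over everything" washes away.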
In my taxonomy of statistical errors, this is basically modifying the search space: he's essentially arguing for properties of the search space that eliminate any advantage that can be gained by the nature of the evolutionary search algorithm. But his only arguments for making those modifications have nothing to do with evolution: he's carefully picking search spaces that have the properties he wants, even though they have fundamentally different properties from evolution.
It's all hidden behind a lot of low-budget equations which are used to obfuscate things. (In "A Brief History of Time", Stephen Hawking said that his publisher told him that each equation in the book
would cut the readership in half. Dembski appears to have taken that idea to heart, and throws in equations even when they aren't needed, in order to try to prevent people from actually reading
through the details of the paper where this error is hidden.)
46 Comments:
• "(sorry I'm not quoting directly; he only posts the paper in PDF which I can't cut and paste from)"
You can, even if you are just using Adobe Reader. Click the button just to the right of the button with a hand icon, an I-beam cursor with the legend "Select". Or pick "Tools" "Basic" "Select".
Select and copy. Of course, fancy formatting and characters may not come through properly, depending on the receiving program.
Nice post.
By , at 11:28 AM
• He'll also have to manually remove all the paragraph marks at the end of each line.
By Orac, at 12:10 PM
• "You can, even if you are just using Adobe Reader."
This is only true if the pdf file has not been protected. I have many pdf papers for which I cannot use the copy function.
By , at 12:20 PM
• Mind if I check my intuitive understanding of the NFL theorems with you? I interpreted it as follows, but I'm no operational researcher.
You can have algorithms that are good at a great many things - for the sake of graphicality consider an algorithm to produce a washing machine, another to produce a toaster, and a "null
algorithm" that does nothing. The NFL theorems say that no such algorithm can be consistently better than the null algorithm - for example, if you try to use the toaster algorithm in place of the
washing machine algorithm you'll get burnt shirts, which is a worse outcome than if you hadn't done anything at all.
These are all blind algorithms - the toaster algorithm, for example, takes no account of the fact that its product is likely to set fire to the laundry. If, however, you choose a non-blind
algorithm, the problems evaporate.
An example of such a non-blind algorithm might be an engineering company producing a product for a customer. The "target space" in this case is basically "whatever the customer wants", which can
be determined by contact with the customer. If the customer wants a toaster, they'll get a toaster. If they want a washing machine, they'll get a washing machine. Such non-blind searches can
consistently perform better than average.
The evolutionary search is hunting for "whatever will help me survive in this environment", and it's determining this by contact with the environment. Thus it's a non-blind search and hence the
NFL theorems don't apply.
By Lifewish, at 12:28 PM
• Oh, by the way, in the last few hours you've been listed on about 4 ScienceBlogs blogs and counting. Expect to be swarmed :P
By Lifewish, at 12:29 PM
• Excellent work! Maybe you could go to Uncommon Descent, Buffalo Bill's blog and beat up DaveScott (Springer). I must admit it would make me laugh to see them cry!
By , at 12:30 PM
• Good blog. I often address Dembski on my blog, Notes in Samsara.
By Mumon, at 12:33 PM
• lifewish:
What Dembski does is even worse than what you're describing. He wants to argue that an evolutionary algorithm - a search algorithm with a fitness function - cannot produce better results than
randomness. But the way that he does that is by talking about the averaged performance of all possible fitness functions.
He doesn't talk about what kind of performance you can get if you select a fitness function, even a bad one. He sticks to the composite of the average of all fitness functions - which is exactly
equivalent to a random walk.
By MarkCC, at 12:35 PM
• Back in 1999, I commented on Dembski's then-new citation of NFL results:
"The central question that Dembski poses concerning evolutionary algorithms is not one of comparative efficiency, but rather that of essential capacity. I find it difficult to see on what grounds
Dembski advances NFL as a relevant finding concerning the capability of evolutionary algorithms to perform tasks. I ask Dembski to clarify his reasoning on this point."
As Mark points out, Dembski has yet to clarify matters concerning this issue.
Wesley R. Elsberry
By , at 12:52 PM
Great post and great new blog. I look forward to more. Thanks for demonstrating in yet another way the vapidity of IDiots like Dembski.
By , at 1:03 PM
• Another problem with the NFL argument to evolution:
Evolution doesn't use all possible fitness functions, it uses one specific fitness function: the phenome's chances of surviving long enough to reproduce. So whether or not the search process is no better than random search when looking at all possible fitness functions isn't important. What's important is whether the search is better than random on the specific fitness function under consideration.
By Stormy Dragon, at 1:26 PM
• Very nicely written post. Understandable even to me, a non mathematician. One more bunk argument busted.
By , at 1:44 PM
• Ok, now you've got a shout-out from PZ. And well deserved, I should add.
By , at 1:50 PM
• In an earlier paper called "Searching Large Spaces: Displacement and the No Free Lunch Regress", Dembski laid out his basic scheme in what he called the "fundamental theorem" of ID. The piece
under consideration in this thread is an elaboration of one issue associated with that paper.
Dembski's argument founders on this simple point: "The evolutionary search starts at some arbitrary node."
Biological evolution does not start "at some arbitrary node". It starts on a node already known to be sufficiently fit -- the population is reproducing successfully. In the terms of Dembski's
Displacement Theorem paper, the population doesn't have to search for T, a small target in some large space, starting from some arbitrary point; it is already in T. Biological evolution consists
in 'finding' better (more fit) nodes that are also in T without "searching" for them.
Moreover, evolutionary "search" does not sample the whole space of possible variants. It samples adjacent variants, where "adjacent" means "one application of one of the evolutionary operators
away from the current node". Since most selectively relevant variables display nonzero local autocorrelations (i.e. they are characterized by more or less smooth gradients rather than random
topography in space and/or time), the population preferentially samples nodes that are also within T.
In my less-than-humble opinion, the "search" metaphor for biological evolution is seductively and profoundly misleading.
By , at 1:51 PM
• Great job!
Of Dembski's two "big ideas," the NFL stuff was the harder for me to address, but you've explained it very clearly.
If I'd proposed his "Explanatory Filter" in my freshman combinatorics/probability class, I would have been redirected to the English department.
*sigh* Where's my PhD?
By ArtK, at 2:02 PM
• As others have said, nice post.
A separate problem is with the search strategy. In a graph where the evolutionary search has to always choose the more fit route, you could imagine situations where evolution gets stuck on a
local peak, but can't go through a lower fitness region to reach a higher peak. Yet Holland's genetic algorithms show why genetic combination searches complex fitness spaces (multi-peaked) quite
broadly, and overcomes this problem.
By cw, at 2:57 PM
• There's a nice summary of Dembski's mathematical history on the Panda's Thumb here.
A nice summary (which echoes markcc's and Wesley Elsberry's comments):
"Understanding the NFL theorems may require some work, but the basic flaw in Dembski's argument is easy, even trivial, to spot. It is that the NFL theorems only tell us something about the average performance of a fitness algorithm over all possible fitness landscapes. It tells us nothing about the performance of an algorithm on any given fitness landscape. End of discussion."
And another extremely important point:
"There are further difficulties, such as the fact that in evolution the fitness landscape coevolves with the organisms residing upon it, but there is no need to get into that here."
By Tim Hague, at 3:44 PM
• Just to be clear, the NFL stuff was originally done by Wolpert and MacReady: http://www.no-free-lunch.org/WoMa95.pdf
and later published by the IEEE in 1996. There's also a whole kaboodle of refs here: http://www.no-free-lunch.org/
The basic premise is that no search algorithm can perform better than random search -- when averaged over all possible fitness LANDSCAPES. Not fitness FUNCTIONS as you said in this blogpost.
Since all possible landscapes include a large number of environments that have no structure to find, no search algorithm is going to work very well, ON AVERAGE. This is because most (all?)
algorithms assume some sort of hill-to-climb, and as cw said above, genetic algorithms ala Holland solve some of the local maxima difficulties inherent to climbing the nearest hill.
For a little more detailed rant, see my previous comment to the second Information Theory blogpost on this site.
By Michael Schippling, at 4:13 PM
• Very nice exposition. The only unclear thing about it is your description of Dembski as a "decent mathematician." From your deconstruction, he either doesn't know what he's talking about or he is being intentionally deceptive. Or both.
By , at 4:45 PM
• anonymous:
I meant that Dembski is a good mathematician in the sense of a skillful one. He's not making mistakes because he doesn't know better - he's doing a very slick job of using math to support the argument he wants to make. He is lying, deliberately cooking up invalid mathematical models that he can then use to present convincing-looking proofs.
His argument in the linked paper about pairwise competition is valid math. It's just that he's subtly shifted the definitions so that what he's talking about is not the same thing as what he's
allegedly refuting.
By MarkCC, at 4:53 PM
• Another point to consider is that evolution works at the population level and not at the level of an individual, so I am not trying to 'survive', but my species is.
By , at 5:34 PM
• Richard Wein pointed out here that the laws of physics give rise to smooth genetic fitness landscapes. Dembski called his argument rubbish, and stated that continuous physical laws don't
necessarily result in smooth fitness landscapes. Wein subsequently settled on a weaker claim - that the landscape is patterned, regardless of whether it's smooth.
But Wein was 100% correct the first time. The genetic fitness landscape is certainly smoother than average, and that smoothness is ultimately due to the continuity of physical laws. Dembski's
response was both dead wrong and inflammatory.
By , at 5:38 PM
• "The basic premise is that no search algorithm can perform better than random search -- when averaged over all possible fitness LANDSCAPES. Not fitness FUNCTIONS as you said in this blogpost."
The phrase used by Wolpert and MacReady was "cost functions", if you want to get picky. The set of all "cost functions" is the set of all mappings of items X in the domain to cost values Y in the range.
Wesley R. Elsberry
By , at 5:59 PM
• Evolution by natural selection is not an optimizing process at all, at least not in the sense that it optimizes the performance of the population or species. Rather, it usually optimizes the
performance ("fitness") of individuals (or better yet: genes). Individual optima do not necessarily coincide with group, population or species optima. A case in point: genes on Y-chromosomes
that cause fathers to produce only sons. Such a gene will spread (at the expense of X chromosomes) but it may cause population extinction (as soon as there are too few females). This has been
predicted by W.D. Hamilton (in a famous Science paper "Extraordinary sex ratios", 1967 I think), and it has been observed in certain insects, and is being studied seriously as a potential tool to
exterminate pest insects.
By Raevmo, at 6:05 PM
• The reason you state why the NFL theorem is a tautology is not really correct. The fact that you average over all fitness functions does not imply that every node has the same fitness. You cannot bring the averaging process inside the search algorithm.
As a counterexample, consider that you only average over all smoothly varying fitness functions. The average fitness of every node will again be the same, but the evolutionary search will perform better than random search.
The reason why NFL works is that almost all fitness functions are structureless random noise functions. On those functions no algorithm can do better than random search. Moreover, for every algorithm that uses some structure of the fitness function to improve the search, one can also find fitness functions with an opposite structure for which the algorithm does worse. (For evolutionary search these are the functions with a spiked fitness maximum surrounded by a deep, broad, low-fitness valley.)
These remarks however do not alter the main conclusion of your post, namely that Dembski consistently misuses these theorems in his crusade against evolution.
Keep up the good work.
By , at 7:10 PM
• I'm not seeing an RSS or ATOM feed link. Blogger should have an easy method to enable such things, if you wanted to be read more.
By , at 7:29 PM
• Wow but there's a lot of activity today. That's what happens when a ton of folks all link on the same day.
So, wrt to fitness functions/cost functions/fitness landscapes: I'm presenting it from a discrete math point of view: instead of a continuous landscape, I'm using a graph. Each point in the graph represents a state at a point in time; it has edges to represent states that can be transitioned to in one time unit. Given that representation, the difference between a fitness function and a fitness landscape is that they're duals: you can make it a fitness landscape by assigning a cost to the graph edges, and counting that cost against the evaluation function; or you can assign those numbers to the nodes of the graph. (The dual transformation is slightly more complicated than that, but that's the principle.) In a continuous domain, that transformation doesn't work so well. But discrete graphs work very nicely as a model of evolution.
In the discrete model, it works out pretty nicely; and the fitness functions do operate as the equivalent of a fitness landscape, but I think that the idea of an evaluation function is easier to understand than a landscape.
By MarkCC, at 7:47 PM
• With respect to the combination of fitness function, I'm afraid my wording was rather imprecise. As I've mentioned, I'm still figuring out how to write for a non-expert audience.
When I talk about averaging functions, I'm talking about the average of their performance over all inputs. If you use all fitness functions, then you can show that for each function f, there's another function f', where f''s performance score is the opposite of f's. So if f gets a 1, f' gets a 0; etc. So each pair cancels, and you wind up with the dead-average performance of a random walk. (Again, I'm being a bit simplistic here: the shape of the graph and the fundamental fitness function affect the way that you can generate an opposite function - but that's exactly matched by the performance impact of that shape and fitness function on the average result of a random walk.)
By MarkCC, at 7:57 PM
• Vis Landscape vs Cost Function.
OOPS...I guess I should at least (re-)read the abstracts of things I reference...I've been thinking about the problem from the gritty details of machine learning and can only deal with
simpleminded mappings of theory to praxis... I'll go with markcc's discrete node/edge explanation of the confusion (in my brain).
As per physics being responsible for _smoother_ "fitness landscapes", I believe it goes back to my point about there being some environmental structure on which to hang your algorithm. It doesn't
need to be smooth, but it needs to be ordered.
I promise not to say this again, but the useful order measurement is not straightforward Shannon Entropy, but what Gell-Mann & Lloyd describe as Effective Complexity.
By Michael Schippling, at 9:43 PM
If you use all fitness functions, then you can show that for each function f, there's another function f', where f''s performance score is the opposite of f's. So if f gets a 1, f' gets a 0; etc. So each pair cancels, and you wind up with the dead-average performance of a random walk.
...So the toaster scorches the socks and the washing machine dissolves the crumpets? Or am I getting myself confused again?
After 3 years of pure maths, I really should be able to come up with better analogies :(
By Lifewish, at 10:57 PM
• Writing for a lay-audience, my favorite way to describe the problem would be the continuous, not the discrete, case; I think it's easier to visualize.
It ought to be intuitively obvious that some search algorithms are clearly better than random for some surfaces. For example, if your surface is a simple cone, an algorithm that blindly seeks the
center point from wherever it starts is more efficient than a random path about the cone's surface.
It's a little harder to take this analogy to any surface, but if you can get people to picture weird, fantastical surfaces with crags and spikes and cliffs, then it shouldn't be too difficult to informally argue that even a finely tuned search algorithm that works for whole classes of surfaces will eventually encounter a set for which it performs miserably.
Thus random is as good as it gets if you try to tackle everything. Jack of all trades, master of none, as they say.
By , at 1:42 AM
• "Anonymous said...
Another point to consider is that evolution works at the population level and not at the level of an individual, so I am not trying to 'survive', but my species is.
This is a different Anonymous speaking now... Sorry - this kind of thinking was left behind a long time ago. The majority view is that selection acts at the individual level, not at the group/
population/species level. Some (e.g. Dawkins) would go as far as to say selection acts at the genetic level...
By , at 3:02 AM
• A simple (if crude) way to sum up the NFL theorem is as follows: in a uniformly random fitness landscape (aka fitness function) it doesn't matter where you search next, because every point is as
likely to be good or bad as every other point (more accurately, the probability distribution of the fitness at every point is uniform); therefore, no search strategy is any better than any other.
The NFL theorem is really quite uninteresting.
By , at 4:24 AM
• Honestly, even beyond the points made above, I would question whether attempting to model evolution as a search function is even the appropriate thing to do in the first place.
If I understand correctly, when theorems like the NFL ones talk about an algorithm "doing better" than another algorithm, they're talking about whether or not the algorithm finds the absolute
optimum result on the graph (or at least a better absolute optimum result than the other algorithm). Right? But we don't *care* about the absolute optimum result, not when we're talking about
evolution. We just care about finding *an* optimal result. If it's just a local maximum, that's fine.
I don't think anyone has ever reasonably claimed evolution finds *absolute* optimal results-- just that it approaches results which are optimal *for some evolutionary niche*. Saying that an
evolved organism is optimal for its niche and that a search algorithm solution is locally optimal seem to me intuitively identical statements. Since the "goal" (metaphorically speaking) of the
biological evolutionary process is not to search for an optimal position on the landscape, but to find some local peak on that landscape and guard it jealously, I don't see why Dembski's theorems
would be important even if they meant what he claims to think they do. One might as well claim the Bible is useless because it makes a poor cookbook.
Am I incorrect or missing something here?
By Andrew McClure, at 5:00 AM
• Andrew, maximums (global or local) do not enter into it. For the purposes of NFL, a search is evaluated in terms of some function of the fitness values of the points traversed. This could be
simply the highest fitness value discovered during the search. So we can say (per the NFL theorem) that, on a uniformly random fitness function, all algorithms attain the same highest fitness
value on average after a given number of steps.
Note: the NFL theorem only considers algorithms which never visit the same point twice. It should be clear that a search algorithm which keeps visiting the same point over and over again will on
average perform worse than one which keeps trying new points.
By , at 7:23 AM
• P.S. It's just occurred to me that you might have been confused by Mark's assertion that the performance of the search is measured "by the limit as maxlength approaches infinity" (where "maxlength" is the number of steps). This assertion was incorrect. The number of steps is simply fixed at any value you like. Moreover, since the algorithm cannot return to previously visited points (aka nodes), there cannot be more steps than there are points in the search space (since by then the search space will have been exhaustively searched).
In practice it is assumed that the number of points in the search space is so vast that the issues of revisiting previous points and exhausting the search space are irrelevant. The search space
is usually a continuous one, and the NFL theorems (which apply only to discrete search spaces) are considered an approximation.
By , at 7:40 AM
• I've only just read the Dembski paper that was the object of Mark's blog entry, and I think it's not all bad. (I should add here that I haven't read the Wolpert paper that Dembski claims to be
responding to, so I can't evaluate Dembski's paper as a response to Wolpert's.)
The particular objection to his NFL argument that Dembski is addressing here is the objection that the NFL theorems do not apply to coevolutionary systems, i.e. ones involving "competitive agents", because the fitness of each agent depends on the other agents. In this paper, Dembski argues that we can construct an absolute fitness value for each agent, i.e. a fixed fitness function over the search space. The search of this fitness function can then be considered an approximation to the coevolutionary search, and is subject to an NFL theorem. I'm not entirely convinced, but I think he may well have a good point. In fact, I've seen a similar argument made before (by someone who was no friend to Dembski).
Of course, the big problem with Dembski's paper is that it leaves unaddressed the more fundamental objection to his NFL argument, namely that the NFL theorems are all based on the assumption of a
random fitness function, and that this doesn't correspond to the situation in the real world. Additionally, Dembski makes a number of typically deceptive remarks, such as his claim that "David
Fogel ... look[s] to competitive environments in which NFL supposedly breaks down". He implies that Fogel was trying to overcome some sort of problem with NFL, when in reality all Fogel set out
to do was show how effective a simple evolutionary algorithm could be (and he succeeded).
By , at 9:16 AM
• Funny that Dembski spends time trying to prove that evolution doesn't work, considering that we can SEE that it does work in evolutionary computation, as it does, unfortunately, in the bacteria and viruses that we keep fighting. I just want to point those of you with (probably institutional) access to MathSciNet to the review that Wolpert wrote of Dembski's "No Free Lunch" book. For those who don't have access, I quote a bit: "I say Dembski 'attempts to' turn this trick because despite his invoking the NFL theorems, his arguments are fatally informal and imprecise. Like monographs on any philosophical topic in the first category, Dembski's is written in jello. There simply is not enough that is firm in his text, not sufficient precision of formulation, to allow one to declare unambiguously 'right' or 'wrong' when reading through the argument. All one can do is squint, furrow one's brows, and then shrug."
Nice blog, by the way!
By , at 12:46 PM
• I was reluctant to post a comment on MarkCC's fine critique of Dembski's misuse of the NFL theorem(s) because it may smack of self-promotion, but here it is: chapter 11 (which I authored) in the
anthology Why Intelligent Design Fails (Rutgers Univ. Press, 2004, editors Matt Young and Taner Edis)is devoted to a rather detailed debunking of Dembski's misuse of the NFL theorems. When
writing that chapter, I was in contact with David Wolpert who suggested no objections to my arguments. Regarding MarkCC's fine post, his rendition of the NFL theorem (just the first theorem for
search, there are others as well) is not exactly true to the original rendition by Wolpert & Macready: in its original form it says nothing about evolution but only states that the probability of
a given "sample" to be obtained in a search is the same for all searching "black-box" algorithms if averaged over all possible landscapes (the difference between fitness function and cost
function is irrelevant - they are just opposite ways to speak about the same search results). It says nothing about specific fitness functions or specific algorithms which, if not averaged, may
(and do) very well drastically outperform blind search. Dembski has never responded to my critique in any form, although I know for a fact he is familiar with it. Overall, he favors saturating his
writing with formulas mostly adding nothing to the argument but only serving as an embellishment and intimidation of poor laymen scared by mathematical symbols. There is a lot of anti-Dembski
stuff at http://www.talkreason.org. Mark Perakh
By Mark Perakh, at 3:17 PM
• NFL means natural selection is worthless if the environment changes very fast. Yawn.
By , at 7:41 PM
• OK, I think I follow what is going on in this particular formalism, but I don’t see the relevance to evolution. All biological organisms are multi-component and live in environments that have
multiple characteristics. Doesn’t that make fitness into multi-dimensional functions? Such functions cannot be ordered, so what is being compared here? (I haven’t looked at any of the linked
text, are the answers to be found there?)
By , at 4:26 PM
• Great posting. Sorry to beat a selected-out horse, but one more comments about PDF's: some people create PDFs by scanning originals as graphics; in that case, it's impossible to grab the text
without further OCR.
By , at 8:21 AM
• Thanks for causing me to think of it as node-exploration.
The node-choice is (obviously) random.
Successful node choices (by the increased reproduction fitness test) are what is called evolution.
By josh narins, at 7:57 PM
the encyclopedic entry of Function spaces
The mathematical concept of a function expresses dependence between two quantities, one of which is given (the independent variable, argument of the function, or its "input") and the other produced
(the dependent variable, value of the function, or "output"). A function associates a single output to each input element drawn from a fixed set, such as the real numbers.
There are many ways to give a function: by a formula, by a plot or graph, by an algorithm that computes it, or by a description of its properties. Sometimes, a function is described through its
relationship to other functions (see, for example, inverse function). In applied disciplines, functions are frequently specified by their tables of values or by a formula. Not all types of
description can be given for every possible function, and one must make a firm distinction between the function itself and multiple ways of presenting or visualizing it.
One idea of enormous importance in all of mathematics is composition of functions: if z is a function of y and y is a function of x, then z is a function of x. We may describe it informally by saying
that the composite function is obtained by using the output of the first function as the input of the second one. This feature of functions distinguishes them from other mathematical constructs, such
as numbers or figures, and provides the theory of functions with its most powerful structure.
Functions play a fundamental role in all areas of mathematics, as well as in other sciences and engineering. However, the intuition pertaining to functions, notation, and even the very meaning of the
term "function" varies between the fields. More abstract areas of mathematics, such as
set theory
, consider very general types of functions, which may not be specified by a concrete rule and are not governed by any familiar principles. The characteristic property of a function in the most
abstract sense is that it relates exactly one output to each of its admissible inputs. Such functions need not involve numbers and may, for example, associate each of a set of words with its own
first letter.
Functions in algebra are usually expressed in terms of algebraic operations. Functions studied in analysis, such as the exponential function, may have additional properties arising from continuity of
space, but in the most general case cannot be defined by a single formula. Analytic functions in complex analysis may be defined fairly concretely through their series expansions. On the other hand,
in lambda calculus, function is a primitive concept, instead of being defined in terms of set theory. The terms transformation and mapping are often synonymous with function. In some contexts,
however, they differ slightly. In the first case, the term transformation usually applies to functions whose inputs and outputs are elements of the same set or more general structure. Thus, we speak
of linear transformations from a vector space into itself and of symmetry transformations of a geometric object or a pattern. In the second case, used to describe sets whose nature is arbitrary, the
term mapping is the most general concept of function.
Mathematical functions are denoted frequently by letters, and the standard notation for the output of a function ƒ with the input x is ƒ(x). A function may be defined only for certain inputs, and the
collection of all acceptable inputs of the function is called its domain. The set of all resulting outputs is called the range of the function. However, in many fields, it is also important to
specify the codomain of a function, which contains the range, but need not be equal to it. The distinction between range and codomain lets us ask whether the two happen to be equal, which in
particular cases may be a question of some mathematical interest.
For example, the expression ƒ(x) = x^2 describes a function ƒ of a variable x, which, depending on the context, may be an integer, a real or complex number or even an element of a group. Let us
specify that x is an integer; then this function relates each input, x, with a single output, x^2, obtained from x by squaring. Thus, the input of 3 is related to the output of 9, the input of 1 to
the output of 1, and the input of −2 to the output of 4, and we write ƒ(3) = 9, ƒ(1)=1, ƒ(−2)=4. Since every integer can be squared, the domain of this function consists of all integers, while its
range is the set of perfect squares. If we choose integers as the codomain as well, we find that many numbers, such as 2, 3, and 6, are in the codomain but not the range.
It is a usual practice in mathematics to introduce functions with temporary names like ƒ; in the next paragraph we might define ƒ(x) = 2x+1, and then ƒ(3) = 7. When a name for the function is not
needed, often the form y = x^2 is used.
If we use a function often, we may give it a more permanent name as, for example,
$\operatorname{Square}(x) = x^2.$
The essential property of a function is that for each input there must be a unique output. Thus, for example, the formula
$\operatorname{Root}(x) = \pm\sqrt{x}$
does not define a real function of a positive real variable, because it assigns two outputs to each number: the square roots of 9 are 3 and −3. To make the square root a real function, we must
specify which square root to choose. The definition
$\operatorname{Posroot}(x) = \sqrt{x}$
for any positive input chooses the positive square root as an output.
As mentioned above, a function need not involve numbers. By way of examples, consider the function that associates with each word its first letter or the function that associates with each triangle
its area.
Because functions are used in so many areas of mathematics, and in so many different ways, no single definition of function has been universally adopted. Some definitions are elementary, while others
use technical language that may obscure the intuitive notion. Formal definitions are set-theoretical and, though there are variations, rely on the concept of relation. Intuitively, a function is a way to assign to each element of a given set (the domain or source) exactly one element of another given set (the codomain or target).
Intuitive definitions
One simple intuitive definition, for functions on numbers, says:
• A function is given by an arithmetic expression describing how one number depends on another.
An example of such a function is y = 5x−20x^3+16x^5, where the value of y depends on the value of x. This is entirely satisfactory for parts of elementary mathematics, but is too clumsy and
restrictive for more advanced areas. For example, the cosine function used in trigonometry cannot be written in this way; the best we can do is an infinite series,
$\cos(x) = 1 - \frac{1}{2} x^2 + \frac{1}{24} x^4 - \frac{1}{720} x^6 + \dotsb$
That said, if we are willing to accept series as an extended sense of "arithmetic expression", we have a definition that served mathematics reasonably well for hundreds of years.
Eventually the gradual transformation of intuitive "calculus" into formal "analysis" brought the need for a broader definition. The emphasis shifted from how a function was presented — as a formula
or rule — to a more abstract concept. Part of the new foundation was the use of sets, so that functions were no longer restricted to numbers. Thus we can say that
• A function ƒ from a set X to a set Y associates to each element x in X an element y = ƒ(x) in Y.
Note that X and Y need not be different sets; it is possible to have a function from a set to itself. Although it is possible to interpret the term "associates" in this definition with a concrete
rule for the association, it is essential to move beyond that restriction. For example, we can sometimes prove that a function with certain properties exists, yet not be able to give any explicit
rule for the association. In fact, in some cases it is impossible to give an explicit rule producing a specific y for each x, even though such a function exists. In the context of functions defined
on arbitrary sets, it is not even clear how the phrase "explicit rule" should be interpreted.
Set-theoretical definitions
As functions take on new roles and find new uses, the relationship of the function to the sets requires more precision. Perhaps every element in X is associated with some y, perhaps not. In some parts of mathematics, including recursion theory and functional analysis, it is convenient to allow values of x with no association (in this case, the term partial function is often used). To be able to discuss such distinctions, many authors split a function into three parts, each a set:
• A function ƒ is an ordered triple of sets (F,X,Y) with restrictions, where
• : F (the graph) is a set of ordered pairs (x,y),
• : X (the source) contains all the first elements of F and perhaps more, and
• : Y (the target) contains all the second elements of F and perhaps more.
The most common restrictions are that F pairs each x with just one y, and that X is just the set of first elements of F and no more.
When no restrictions are placed on F, we speak of a relation between X and Y rather than a function. The relation is "single-valued" when the first restriction holds: (x,y[1])∈F and (x,y[2])∈F
together imply y[1] = y[2]. Relations that are not single valued are sometimes called multivalued functions. A relation is "total" when a second restriction holds: if x∈X then (x,y)∈F for some y.
Thus we can also say that
• A function from X to Y is a single-valued, total relation between X and Y.
The range of F, and of ƒ, is the set of all second elements of F; it is often denoted by rng ƒ. The domain of F is the set of all first elements of F; it is often denoted by dom ƒ. There are two
common definitions for the domain of ƒ: some authors define it as the domain of F, while others define it as the source of F.
The target Y of ƒ is also called the codomain of ƒ, denoted by cod ƒ; and the range of ƒ is also called the image of ƒ, denoted by im ƒ. The notation ƒ:X→Y indicates that ƒ is a function with domain
X and codomain Y.
Some authors omit the source and target as unnecessary data. Indeed, given only the graph F, one can construct a suitable triple by taking dom F to be the source and rng F to be the target; this
automatically causes F to be total. However, most authors in advanced mathematics prefer the greater power of expression afforded by the triple, especially the distinction it allows between range and codomain.
Incidentally, the ordered pairs and triples we have used are not distinct from sets; we can easily represent them within set theory. For example, we can use {{x},{x,y}} for the pair (x,y). Then for a
triple (x,y,z) we can use the pair ((x,y),z). An important construction is the Cartesian product of sets X and Y, denoted by X×Y, which is the set of all possible ordered pairs (x,y) with x∈X and y∈Y
. We can also construct the set of all possible functions from set X to set Y, which we denote by either [X→Y] or Y^X.
We now have tremendous flexibility. By using pairs for X we can treat, say, subtraction of integers as a function, sub:Z×Z→Z. By using pairs for Y we can draw a planar curve using a function, crv:R→R
×R. On the unit interval, I, we can have a function defined to be one at rational numbers and zero otherwise, rat:I→2. By using functions for X we can consider a definite integral over the unit
interval to be a function, int:[I→R]→R.
Yet we still are not satisfied. We may want even more generality in some cases, like a function whose integral is a step function; thus we define so-called generalized functions. We may want less
generality, like a function we can always actually use to get a definite answer; thus we define primitive recursive functions and then limit ourselves to those we can prove are effectively computable
. Or we may want to relate not just sets, but algebraic structures, complete with operations; thus we define homomorphisms.
The idea of a function dates back to the Persian mathematician Sharaf al-Dīn al-Tūsī in the 12th century. In his analysis of the equation $x^3 + d = bx^2$, for example, he begins by changing the equation's form to $x^2(b - x) = d$. He then states that the question of whether the equation has a solution depends on whether or not the "function" on the left side reaches the value $d$. To determine this, he finds a maximum value for the function. Sharaf al-Dīn then states that if this value is less than $d$, there are no positive solutions; if it is equal to $d$, then there is one solution; and if it is greater than $d$, then there are two solutions.
The history of the function concept in mathematics is described by . As a mathematical term, "function" was coined by Gottfried Leibniz in 1694, to describe a quantity related to a curve, such as a
curve's slope at a specific point. The functions Leibniz considered are today called differentiable functions. For this type of function, one can talk about limits and derivatives; both are
measurements of the output or the change in the output as it depends on the input or the change in the input. Such functions are the basis of calculus.
The word function was later used by Leonhard Euler during the mid-18th century to describe an expression or formula involving various arguments, e.g. ƒ(x) = sin(x) + x^3.
During the 19th century, mathematicians started to formalize all the different branches of mathematics. Weierstrass advocated building calculus on arithmetic rather than on geometry, which favoured
Euler's definition over Leibniz's (see arithmetization of analysis).
At first, the idea of a function was rather limited. Joseph Fourier, for example, claimed that every function had a Fourier series, something no mathematician would claim today. By broadening the
definition of functions, mathematicians were able to study "strange" mathematical objects such as continuous functions that are nowhere differentiable. These functions were first thought to be only
theoretical curiosities, and they were collectively called "monsters" as late as the turn of the 20th century. However, powerful techniques from functional analysis have shown that these functions
are in some sense "more common" than differentiable functions. Such functions have since been applied to the modeling of physical phenomena such as Brownian motion.
Towards the end of the 19th century, mathematicians started to formalize all of mathematics using set theory, and they sought to define every mathematical object as a set. Dirichlet and Lobachevsky
are traditionally credited with independently giving the modern "formal" definition of a function as a relation in which every first element has a unique second element, but Dirichlet's claim to this
formalization is disputed by Imre Lakatos:
There is no such definition in Dirichlet's works at all. But there is ample evidence that he had no idea of this concept. In his [1837], for instance, when he discusses piecewise continuous
functions, he says that at points of discontinuity the function has two values: ...
(Proofs and Refutations, 151, Cambridge University Press 1976.)
defined a function as a relation between two variables x and y such that "to some values of x at any rate correspond values of y." He neither required the function to be defined for all values of x nor to associate each value of x to a single value of y. This broad definition of a function encompasses more relations than are ordinarily considered functions in contemporary mathematics.
The notion of a function as a rule for computing, rather than a special kind of relation, has been studied extensively in mathematical logic and theoretical computer science. Models for these
computable functions include the lambda calculus, the μ-recursive functions and Turing machines.
The idea of structure-preserving functions, or homomorphisms led to the abstract notion of morphism, the key concept of category theory. More recently, the concept of functor has been used as an
analogue of a function in category theory.
A specific input in a function is called an argument of the function. For each argument value x, the corresponding unique y in the codomain is called the function value at x, or the image of x under ƒ. The image of x may be written as ƒ(x) or as y. (See the section on notation.)
The graph of a function ƒ is the set of all ordered pairs (x, ƒ(x)), for all x in the domain X. If X and Y are subsets of R, the real numbers, then this definition coincides with the familiar sense
of "graph" as a picture or plot of the function, with the ordered pairs being the Cartesian coordinates of points.
The concept of the image can be extended from the image of a point to the image of a set. If A is any subset of the domain, then ƒ(A) is the subset of the range consisting of all images of elements
of A. We say that ƒ(A) is the image of A under ƒ.
Notice that the range of ƒ is the image ƒ(X) of its domain, and that the range of ƒ is a subset of its codomain.
The preimage (or inverse image, or more precisely, complete inverse image) of a subset B of the codomain Y under a function ƒ is the subset of the domain X defined by
$f^{-1}(B) = \{x \in X : f(x) \in B\}.$
So, for example, the preimage of {4, 9} under the squaring function is the set {−3,−2,+2,+3}.
In general, the preimage of a singleton set (a set with exactly one element) may contain any number of elements. For example, if ƒ is the constant function ƒ(x) = 7, then the preimage of {5} is the empty set but the preimage of {7} is the entire domain. Thus the preimage of an element in the codomain is a subset of the domain. The usual convention about the preimage of an element is that ƒ^−1(b) means ƒ^−1({b}), i.e.
$f^{-1}(b) = \{x \in X : f(x) = b\}.$
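Over a finite domain, the preimage definition above can be computed literally by filtering; a minimal Python sketch (the helper name `preimage` is our own, not from the text):

```python
def preimage(f, B, domain):
    """Complete inverse image: {x in domain : f(x) in B}."""
    return {x for x in domain if f(x) in B}

square = lambda x: x * x

# The preimage of {4, 9} under squaring, over the integers -5..5:
pre = preimage(square, {4, 9}, range(-5, 6))
# pre is the set {-3, -2, 2, 3}, matching the example in the text.
```

The same helper shows the singleton convention: `preimage(square, {5}, range(-5, 6))` is the empty set, since 5 is not a perfect square.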
Three important kinds of function are the injections (or one-to-one functions), which have the property that if ƒ(a) = ƒ(b) then a must equal b; the surjections (or onto functions), which have the
property that for every y in the codomain there is an x in the domain such that ƒ(x) = y; and the bijections, which are both one-to-one and onto. This nomenclature was introduced by the Bourbaki group.
When the first definition of function given above is used, since the codomain is not defined, the "surjection" must be accompanied with a statement about the set the function maps onto. For example,
we might say ƒ maps onto the set of all real numbers.
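Over finite sets, all three properties can be checked mechanically from the definitions; a small sketch (the predicate names are our own):

```python
def is_injective(f, domain):
    images = [f(x) for x in domain]
    return len(images) == len(set(images))           # no two inputs share an image

def is_surjective(f, domain, codomain):
    return set(codomain) <= {f(x) for x in domain}   # every element of the codomain is hit

def is_bijective(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

Z5 = range(5)
# Squaring mod 5 is not injective (1*1 and 4*4 are both 1 mod 5),
# while shifting by 2 mod 5 is a bijection of Z5 onto itself:
not_inj = is_injective(lambda x: x * x % 5, Z5)      # False
bij = is_bijective(lambda x: (x + 2) % 5, Z5, Z5)    # True
```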
Restrictions and extensions
Informally, a restriction of a function ƒ is the result of trimming its domain.
More precisely, if ƒ is a function from X to Y, and S is any subset of X, the restriction of ƒ to S is the function ƒ|[S] from S to Y such that ƒ|[S](s) = ƒ(s) for all s in S.
If g is any restriction of ƒ, we say that ƒ is an extension of g.
It is common to omit the parentheses around the argument when there is little chance of ambiguity, thus: sin x. In some formal settings, use of reverse Polish notation, x ƒ, eliminates the need for any parentheses; and, for example, the factorial function is always written n!, even though its generalization, the gamma function, is written Γ(n).
Formal description of a function typically involves the function's name, its domain, its codomain, and a rule of correspondence. Thus we frequently see a two-part notation, an example being
$\begin{align} f\colon \mathbb{N} &\to \mathbb{R} \\ n &\mapsto \frac{n}{\pi} \end{align}$
where the first part is read:
• "ƒ is a function from N to R" (one often writes informally "Let ƒ: X → Y" to mean "Let ƒ be a function from X to Y"), or
• "ƒ is a function on N into R", or
• "ƒ is a R-valued function of an N-valued variable",
and the second part is read:
• $n$ maps to $\frac{n}{\pi}$
Here the function named "ƒ" has the natural numbers as domain, the real numbers as codomain, and maps n to itself divided by π. Less formally, this long form might be abbreviated
$f(n) = \frac{n}{\pi},$
though with some loss of information; we no longer are explicitly given the domain and codomain. Even the long form here abbreviates the fact that the n on the right-hand side is silently treated as a real number using the standard embedding.
An alternative to the colon notation, convenient when functions are being composed, writes the function name above the arrow. For example, if ƒ is followed by g, where g produces the complex number e
^ix, we may write
$\mathbb{N} \xrightarrow{f} \mathbb{R} \xrightarrow{g} \mathbb{C}.$
A more elaborate form of this is the commutative diagram.
Use of ƒ(A) to denote the image of a subset A⊆X is consistent so long as no subset of the domain is also an element of the domain. In some fields (e.g. in set theory, where ordinals are also sets of
ordinals) it is convenient or even necessary to distinguish the two concepts; the customary notation is ƒ[A] for the set { ƒ(x): x ∈ A }; some authors write ƒ`x instead of ƒ(x), and ƒ``A instead of ƒ[A].
Function composition
The function composition of two or more functions uses the output of one function as the input of another. The functions ƒ: X → Y and g: Y → Z can be composed by first applying ƒ to an argument x to obtain y = ƒ(x) and then applying g to y to obtain z = g(y). The composite function formed in this way from general ƒ and g may be written
$\begin{align} g \circ f\colon X &\to Z \\ x &\mapsto g(f(x)). \end{align}$
The function on the right acts first and the function on the left acts second, reversing English reading order. We remember the order by reading the notation as "g of ƒ". The order is important, because rarely do we get the same result both ways. For example, suppose ƒ(x) = x^2 and g(x) = x + 1. Then g(ƒ(x)) = x^2 + 1, while ƒ(g(x)) = (x + 1)^2, which is x^2 + 2x + 1, a different function.
In a similar way, the function given above by the formula y = 5x−20x^3+16x^5 can be obtained by composing several functions, namely the addition, negation, and multiplication of real numbers.
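The order-sensitivity of composition is easy to check directly; a minimal Python sketch (the names `compose`, `g_of_f`, `f_of_g` are our own, not from the text):

```python
def compose(g, f):
    """Return the composite function g after f, mapping x to g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x ** 2   # squaring, as in the example above
g = lambda x: x + 1    # adding one

g_of_f = compose(g, f)   # x -> x**2 + 1
f_of_g = compose(f, g)   # x -> (x + 1)**2
```

Evaluating at 3 confirms the two orders differ: `g_of_f(3)` is 10 while `f_of_g(3)` is 16.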
Identity function
The unique function over a set X that maps each element to itself is called the identity function for X, and typically denoted by id_X. Each set has its own identity function, so the subscript cannot be omitted unless the set can be inferred from context. Under composition, an identity function is "neutral": if ƒ is any function from X to Y, then
$\begin{align} f \circ \mathrm{id}_X &= f, \\ \mathrm{id}_Y \circ f &= f. \end{align}$
Inverse function
If ƒ is a function from X to Y, then an inverse function for ƒ, denoted by ƒ^−1, is a function in the opposite direction, from Y to X, with the property that a round trip (a composition) returns each element to itself. Not every function has an inverse; those that do are called invertible.
As a simple example, if ƒ converts a temperature in degrees Celsius to degrees Fahrenheit, the function converting degrees Fahrenheit to degrees Celsius would be a suitable ƒ^−1:
$\begin{align} f(C) &= \tfrac{9}{5} C + 32 \\ f^{-1}(F) &= \tfrac{5}{9} (F - 32) \end{align}$
The notation for composition reminds us of multiplication; in fact, sometimes we denote it using juxtaposition, gƒ, without an intervening circle. Under this analogy, identity functions are like 1,
and inverse functions are like reciprocals (hence the notation).
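The Celsius/Fahrenheit pair makes the round-trip property concrete; a quick numeric check:

```python
def f(C):        # Celsius to Fahrenheit: f(C) = (9/5)C + 32
    return 9 / 5 * C + 32

def f_inv(F):    # Fahrenheit to Celsius: the inverse, (5/9)(F - 32)
    return 5 / 9 * (F - 32)

# A round trip returns each element to itself (up to float rounding):
round_trip_ok = all(abs(f_inv(f(C)) - C) < 1e-9 for C in (-40.0, 0.0, 37.0, 100.0))
```

Here `f(100.0)` is 212.0 and `round_trip_ok` is True, illustrating that composing a function with its inverse behaves like multiplying by a reciprocal.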
Specifying a function
A function can be defined by any mathematical condition relating each argument to the corresponding output value. If the domain is finite, a function ƒ may be defined by simply tabulating all the arguments x and their corresponding function values ƒ(x). More commonly, a function is defined by a formula, or (more generally) an algorithm — a recipe that tells how to compute the value of ƒ(x) given any x in the domain.
There are many other ways of defining functions. Examples include recursion, algebraic or analytic closure, limits, analytic continuation, infinite series, and as solutions to integral and
differential equations. The lambda calculus provides a powerful and flexible syntax for defining and combining functions of several variables.
Functions that send integers to integers, or finite strings to finite strings, can sometimes be defined by an algorithm, which gives a precise description of a set of steps for computing the output of the function from its input. Functions definable by an algorithm are called computable functions. For example, the Euclidean algorithm gives a precise process to compute the greatest common divisor of two positive integers. Many of the functions studied in the context of number theory are computable.
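The Euclidean algorithm mentioned above is short enough to state as code:

```python
def gcd(a, b):
    """Greatest common divisor of two positive integers, by Euclid's algorithm."""
    while b:
        a, b = b, a % b   # replace (a, b) with (b, a mod b) until the remainder is 0
    return a
```

For instance, `gcd(48, 18)` runs through the pairs (48, 18), (18, 12), (12, 6), (6, 0) and returns 6.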
Fundamental results of computability theory show that there are functions that can be precisely defined but are not computable. Moreover, in the sense of cardinality, almost all functions from the
integers to integers are not computable. The number of computable functions from integers to integers is countable, because the number of possible algorithms is. The number of all functions from
integers to integers is higher: the same as the cardinality of the real numbers. Thus most functions from integers to integers are not computable. Specific examples of uncomputable functions are
known, including the busy beaver function and functions related to the halting problem and other undecidable problems.
Functions with multiple inputs and outputs
The concept of function can be extended to an object that takes a combination of two (or more) argument values to a single result. This intuitive concept is formalized by a function whose domain is the Cartesian product of two or more sets.
For example, consider the multiplication function that associates two integers to their product: ƒ(x, y) = x·y. This function can be defined formally as having domain Z×Z , the set of all integer
pairs; codomain Z; and, for graph, the set of all pairs ((x,y), x·y). Note that the first component of any such pair is itself a pair (of integers), while the second component is a single integer.
The function value of the pair (x,y) is ƒ((x,y)). However, it is customary to drop one set of parentheses and consider ƒ(x,y) a function of two variables (or with two arguments), x and y.
The concept can be extended still further by considering a function that also produces output expressed as several variables. For example, consider the function mirror(x, y) = (y, x) with
domain R×R and codomain R×R as well. The pair (y, x) is a single value in the codomain seen as a Cartesian product.
There is an alternative approach: one could instead interpret a function of two variables, from A × B to C, as sending each element of A to a function from B to C; this is known as currying. The equivalence of these approaches is expressed by the bijection between the function spaces $C^{A \times B}$ and $(C^B)^A$.
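Currying and its inverse are directly expressible in code; a minimal sketch (the helper names `curry` and `uncurry` are conventional, not from the text):

```python
def curry(f):
    """Turn f : A x B -> C into the function A -> (B -> C)."""
    return lambda x: lambda y: f(x, y)

def uncurry(g):
    """The other direction of the bijection: (A -> (B -> C)) -> (A x B -> C)."""
    return lambda x, y: g(x)(y)

mul = lambda x, y: x * y   # a function of two variables
times3 = curry(mul)(3)     # a one-variable function, y -> 3 * y
```

Here `times3(7)` gives 21, and `uncurry(curry(mul))` agrees with `mul`, illustrating that the two views carry the same information.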
Binary operations
The familiar binary operations of arithmetic, addition and multiplication, can be viewed as functions from R×R to R. This view is generalized in abstract algebra, where n-ary functions are used to model the operations of arbitrary algebraic structures. For example, an abstract group is defined as a set G and a function ƒ from G×G to G that satisfies certain properties.
Traditionally, addition and multiplication are written in the infix notation: x+y and x×y instead of +(x, y) and ×(x, y).
Function spaces
The set of all functions from a set X to a set Y is denoted by X → Y, by [X → Y], or by Y^X.
The latter notation is motivated by the fact that, when X and Y are finite, of size |X| and |Y| respectively, then the number of functions X → Y is |Y^X| = |Y|^|X|. This is an example of the convention from enumerative combinatorics that provides notations for sets based on their cardinalities. Other examples are the multiplication sign X×Y used for the Cartesian product, where |X×Y| = |X|·|Y|; the factorial sign X! used for the set of permutations, where |X!| = |X|!; and the binomial coefficient sign $\tbinom{X}{n}$ used for the set of n-element subsets, where $|\tbinom{X}{n}| = \tbinom{|X|}{n}$.
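The counting rule |Y^X| = |Y|^|X| can be verified by enumerating a small function space; a sketch (the example sets are our own choice):

```python
from itertools import product

X = ('a', 'b', 'c')   # |X| = 3
Y = (0, 1)            # |Y| = 2

# A function X -> Y is a choice of one element of Y for each element of X,
# so the whole function space Y^X can be enumerated as tuples of values:
functions = [dict(zip(X, values)) for values in product(Y, repeat=len(X))]
```

Here `len(functions)` is 2**3 = 8, agreeing with |Y|^|X|.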
v in V, curl v=0 then v=0
May 7th 2012, 05:44 AM #1
I really need your help. I have the following problem:
Let Omega be a connected Lipschitz domain. The boundary of Omega is Gamma. It has two disjoint smooth open subsets Gamma_1 and Gamma_2. Each Gamma_i (i=1,2) is of class C^{1,1}.
Let us define the space:
V= { v in H1(omega)^3 | div v = 0 in Omega, v= 0 on Gamma_1, v x n = 0 on Gamma_2}
I want to prove this implication:
If v in V and curl v = 0 in Omega, then v = 0 in Omega.
So far, I have introduced smooth cuts into Omega to make a simply connected domain Omega0. Since v is a rotation-free function, there exists a unique class q in H1(Omega0)/R such that v = grad q in Omega0.
Since v is divergence-free, laplace q = 0 in Omega0.
What should be the next step to get q = 0 in Omega0? This would imply that v = 0 in Omega.
I really need your help. Thank you so much!
[Numpy-discussion] efficient function calculation between two matrices
John [H2O] washakie@gmail....
Tue Feb 17 04:09:12 CST 2009
I am trying to calculate the results of a function between two matrices:
>>> F.shape
(170, 2)
>>> T.shape
(170, 481, 2)
Where F contains lat/lon pairs and T contains 481 lat/lon pairs for 170
trajectories of length 481
I want a new array of shape
containing the results of my distance function:
""" returns great circle distance"""
return dist
Is there a way to do this without the use of loops?
I'm thinking something along the lines of:
distances = [[gcd(f[0],f[1],t[0],t[1]) for f in F] for t in T]
But obviously this doesn't work...
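One vectorized possibility, for reference, is to broadcast F against T. This is only a sketch: it assumes a haversine-style great-circle distance and that each F[i] is paired with the points of its own trajectory T[i]; the function body and column order (latitude first) are illustrative, not from the original code.

```python
import numpy as np

def gcd(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance via the haversine formula (degrees in, km out)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * radius_km * np.arcsin(np.sqrt(a))

rng = np.random.default_rng(0)
F = rng.uniform(-60.0, 60.0, size=(170, 2))       # (trajectories, lat/lon)
T = rng.uniform(-60.0, 60.0, size=(170, 481, 2))  # (trajectories, points, lat/lon)

# F[:, None, 0] has shape (170, 1); T[..., 0] has shape (170, 481).
# Broadcasting pairs each reference point with every point of its trajectory.
distances = gcd(F[:, None, 0], F[:, None, 1], T[..., 0], T[..., 1])
print(distances.shape)  # (170, 481)
```

If instead every F row should be compared with every trajectory point regardless of trajectory, reshape T to (-1, 2) and broadcast F[:, None, :] against it the same way.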
Thank you,
View this message in context: http://www.nabble.com/efficient-function-calculation-between-two-matrices-tp22054148p22054148.html
Sent from the Numpy-discussion mailing list archive at Nabble.com.
More information about the Numpy-discussion mailing list
Multivariable Calculus: Math 202 , Spring 2000
hypercube in mathematica You can create your own view of the hypercube by copying this mathematica file. The pictures have no perspective. Rather, they show the image of the hypercube under your
choice of a linear map from R^4 to R^2, which preserves the y and z axes.
hypercube movie
(Mac only) Download "Geometry games".
Note 1 Quadratic functions, Taylor approximation, Critical points, Second derivative test, Least squares
Exam 1 review, Exam 1 solutions
Note 2 Line integrals in the plane, vector fields, work integrals, conservative vector fields and independence of path
Solutions to exercises in Note 2
Extra exercises for Note 2
Pictures: Vector Fields with no flow and no flux around closed curves
Note 3 Double integrals, Area, Average, Center of Mass
Solutions to exercises in Note 3
Note 4 Gaussian Integral, Factorial function, Beta integral, Volumes of Spheres
Solutions to exercises in Note 4
Note 5 Green's Theorem, Two-dimensional Curl, Fundamental Theorem of Calculus, Divergence of a Vector field, Cauchy-Riemann equations, Most Interesting Vector Field, Linear mappings, Regions bounded by graphs
Solutions to exercises in Note 5
Extra exercises for Notes 3, 4, 5
Exam 2 solutions
Note 6 Line integrals in three dimensions, three dimensional curl, parametrized surfaces, surface integrals, flux integrals, Stokes theorem, triple integrals, divergence theorem
Extra Problems for Note 6
Final Exam (with solutions)
The Haskell School of Expression
Results 1 - 10 of 18
- Engineering theories of software construction, 2001
Cited by 97 (1 self)
Functional programming may be beautiful, but to write real applications we must grapple with awkward real-world issues: input/output, robustness, concurrency, and interfacing to programs written in
other languages. These lecture notes give an overview of the techniques that have been developed by the Haskell community to address these problems. I introduce various proposed extensions to Haskell
along the way, and I offer an operational semantics that explains what these extensions mean. This tutorial was given at the Marktoberdorf Summer School 2000. It appears in the book “Engineering
theories of software construction, Marktoberdorf Summer School 2000”, ed CAR Hoare, M Broy, and R Steinbrueggen, NATO ASI Series, IOS Press, 2001, pp47-96. This version has a few errors corrected
compared with the published version. Change summary: Apr 2005: some examples added to Section 5.2.2, to clarify evaluate. March 2002: substantial revision.
- SIAM Journal of Computing
Cited by 49 (1 self)
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the
Turing machine, and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the
lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some
of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of Linear Logic. We set
up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
- Journal of Automated Reasoning , 2004
Cited by 9 (2 self)
Mathematical reasoning may involve several arithmetic types, including those of the natural, integer, rational, real and complex numbers. These types satisfy many of the same algebraic laws. These
laws need to be made available to users, uniformly and preferably without repetition, but with due account for the peculiarities of each type. Subtyping, where a type inherits properties from a
supertype, can eliminate repetition only for a fixed type hierarchy set up in advance by implementors. The approach recently adopted for Isabelle uses axiomatic type classes, an established approach
to overloading. Abstractions such as semirings, rings, fields and their ordered counterparts are defined and theorems are proved algebraically. Types that meet the abstractions inherit the
appropriate theorems. 1
- In Verification: Theory and Practice, essays Dedicated to Zohar Manna on the Occasion of His 64th Birthday (2003 , 2003
Cited by 6 (3 self)
This case study applies techniques of formal program development by specification refinement and composition to the problem of concurrent garbage collection. The specification formalism is mainly
based on declarative programming paradigms, the imperative aspect is dealt with by using monads. We also sketch the use of temporal logic in connection with monadic specifications.
- N PROCEEDINGS OF THE IEEE VR 2008 WORKSHOP "SEARIS - SOFTWARE ENGINEERING AND ARCHITECTURES FOR INTERACTIVE SYSTEMS" , 2008
Cited by 4 (0 self)
The creation of engaging, interactive virtual environments is a difficult task, but one that can be eased with the development of better software support. This paper proposes that a better
understanding of the problem of building Dynamic, Interactive Virtual Environments must be developed. Equipped with an understanding of the design space of Dynamics, Dynamic Interaction, and
Interactive Dynamics, the requirements for such a support system can be established. Finally, a system that supports the development of such environments is briefly presented, Functional Reactive
Virtual Reality.
- IEEE TRANSACTIONS ON SOFTWARE ENGINEERING , 2006
Cited by 4 (1 self)
The desirability of maintaining multiple stakeholders' interests during the software design process argues for leaving choices undecided as long as possible. Yet, any form of underspecification,
either missing information or undecided choices, must be resolved before automated analysis tools can be used. This paper demonstrates how Constraint Satisfaction Problem Solution Techniques (CSTs)
can be used to automatically reduce the space of choices for ambiguities by incorporating the local effects of constraints, ultimately with more global consequences. As constraints typical of those
encountered during the software design process, we use UML consistency and well-formedness rules. It is somewhat surprising that CSTs are suitable for the software modeling domain since the
constraints may relate many ambiguities during their evaluation, encountering a well-known problem with CSTs called the k-consistency problem. This paper demonstrates that our CST-based approach is
computationally scalable and effective---as evidenced by empirical experiments based on dozens of industrial models.
, 2002
Cited by 3 (0 self)
I declare that this thesis is my own account of my research and contains as its main content work which has not previously been submitted for a degree at any tertiary education institution. Joel
Kelso ii The purported advantages of Visual Programming, as applied to general purpose programming languages, have remained largely unfulfilled. The essence of this thesis is that functional
programming languages have at least one natural visual representation, and that a useful programming environment can be based upon this representation. This thesis describes the implementation of a
Visual Functional Programming Environment (VFPE). The programming environment has several significant features. • The environment includes a program editor that is inherently
, 2001
Cited by 2 (2 self)
TyCO stands for "TYped Concurrent Objects". Not that the language includes any form of primitive objects. Instead, a few basic constructors provide for a form of Object-Based Programming (that is,
objects but no inheritance) . The language is quite simple. The basic syntax reduces to half-adozen constructors. To help in writing common programming patterns, a few derived constructors are
available. This report introduces TyCO by example, rather than explaining the language first and giving examples second.
Cited by 2 (2 self)
Abstract. The paper investigates the use of preprocessing in adding higher order functionalities to Java, that is in passing methods to other methods. The approach is based on a mechanism which
offers a restricted, disciplined, form of abstraction that is suitable to the integration of high order and object oriented programming. We discuss how the expressive power of the language is
improved. A new syntax is introduced for formal and actual parameters, hence the paper defines a translation that, at preprocessing time, maps programs of the extended language into programs of
ordinary Java.
Statistical Graphics
May 9th, 2009 | Published in Data, Social Science, Statistical Graphics
Final thing on the car-culture regression. Below is a comparison of the actual data on Vehicle Miles Traveled with my reconstruction of Nate Silver’s model, and my model including lagged gas prices,
housing prices, and the stock market.
I “seasonally adjusted” the miles data by fitting a model predicting miles based only on the month of the year. The miles data (whether the actual data or the prediction from a model) is then
corrected by subtracting the coefficient for the month it was collected. This data is normalized according to the level of driving in April.
An even better fit is possible with a more complex model that includes a) average monthly temperatures and b) an interaction between gas prices and time. But this simpler model suffices to show that
Silver’s original finding was probably an artifact of his failure to control for wealth effects and the lagged effect of gas prices.
The lesson, I suppose, is: beware of columnists on deadline bearing regressions!
Moment of Zen
May 8th, 2009 | Published in Data, Social Science, Statistical Graphics
Here are the variables I used in the models for the previous post. Simplistic social theories are left as an exercise for the reader.
Attempt to Regress
May 8th, 2009 | Published in Data, Social Science, Statistical Graphics
I’m loath to say an unkind word about Nate Silver. Besides boosting the profile of my alma mater, he’s done more than anyone else to improve the reputation and sexiness of my present occupation:
statistical data analyst. This is all the more welcome at a time when other people are blaming statistical models for, well, ruining everything.
But I confess to being a bit annoyed when I read Silver’s recent article about the changes in American driving habits. In that article, Silver argues that we’re seeing a real shift away from car
culture, based on the following:
I built a regression model that accounts for both gas prices and the unemployment rate in a given month and attempts to predict from this data how much the typical American will drive. The model
also accounts for the gradual increase in driving over time, as well as the seasonality of driving levels, which are much higher during the summer than during the winter.
All well and good, except that Silver doesn’t provide the model or the data! He asks us to take his word for it that in January, Americans “drove about 8 percent less than the model predicted.”
Now, I don’t expect anyone to publish regression coefficients in Esquire magazine, but Silver does have a rather well-known website, so he could have put it there. The analysis was already done and
published, so I don’t see how it would have hurt Silver to publish the data after the fact. Which is what makes me suspect that he kept things deliberately vague in order to maintain a sense of
mystery and awe around his regression models. Particularly because in this case, the underlying model is actually quite simple.
Which is a shame, because the simplicity of the model is actually the most appealing thing about it. It’s a great example of a situation where a regression illuminates a relationship that would be
really hard to discern using simple descriptive statistics. The model is a perfect balance between being simple enough to be believable, and complex enough to really gain you something over simple
descriptives. In fact, it’s something that I plan to refer to in the future when my less quant-y friends question the need for regressions.
Which is why I decided to recreate Silver’s analysis from scratch, which took me about an hour. First I had to figure out what Silver’s model was. Based on the paragraph above, I decided on:
miles = gas + unemployment + date + month
Monthly miles driven are modeled as a function of that month’s average gas prices, the unemployment rate in that month, the date, and which month of the year it is. The date variable will capture the
“gradual increase” in miles traveled. I use month to capture the “seasonality of driving levels”. I could have grouped the months into seasons, but why not use a more precise measure if you’ve got one?
The next step was to find the data: From different sources, I obtained data on miles traveled, gas prices, and unemployment. All of these sources start around 1990, so that’s the time frame we’ll
have to work with.
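For readers who don't use R, the same kind of specification can be sketched with plain numpy; the data below are synthetic stand-ins, and the month dummies play the role that R's lm() gives a factor (one month dropped as the baseline):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240  # twenty years of synthetic monthly observations

unemp = rng.uniform(4.0, 10.0, n)
gasprice = rng.uniform(100.0, 400.0, n)  # cents per gallon
date = np.arange(n) / 12.0               # years since the start of the series
month = np.arange(n) % 12

# Synthetic "miles" built from known coefficients, a seasonal cycle, and noise.
miles = (100.0 - 2.0 * unemp - 0.08 * gasprice + 1.2 * date
         + 15.0 * np.sin(2.0 * np.pi * month / 12.0) + rng.normal(0.0, 2.0, n))

# Design matrix: intercept, continuous predictors, and eleven month dummies.
dummies = (month[:, None] == np.arange(1, 12)[None, :]).astype(float)
X = np.column_stack([np.ones(n), unemp, gasprice, date, dummies])
coef, *_ = np.linalg.lstsq(X, miles, rcond=None)
print(coef[1:4])  # close to the true values (-2.0, -0.08, 1.2)
```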
With that in hand, it was time for some analysis. Using R, I combined the different data sources and ran myself a regression:
lm(formula = miles ~ unemp + gasprice + date + month)
coef.est coef.se
(Intercept) 98.52 3.71
unemp -2.09 0.34
gasprice -0.08 0.01
date 0.01 0.00
monthAugust 17.90 1.40
monthDecember -8.82 1.40
monthFebruary -30.26 1.42
monthJanuary -22.03 1.40
monthJuly 17.87 1.42
monthJune 11.34 1.42
monthMarch 0.42 1.42
monthMay 12.56 1.42
monthNovember -10.00 1.40
monthOctober 5.85 1.40
monthSeptember -2.55 1.40
n = 222, k = 15
residual sd = 4.25, R-Squared = 0.98
That R-Squared of 0.98 means that about 98% of the actual variation in miles traveled is explained by the variables in this model. So it’s a pretty comprehensive picture of the things that predict
how much Americans will drive. A one point increase in the unemployment rate, in this model, predicts a 2.09 billion mile decrease in miles driven. And gas prices are in cents, so a one-cent increase
in the price of gas will, all things being equal, translate into an 80 million mile decrease in miles driven.
The next step was to check out Silver’s assertion that recent data on miles driven is lower than the model would predict. Recall that Silver’s model over-predicted January miles driven by 8 percent.
My model predicts that in January, Americans should have driven 239.6 billion miles. The actual number was 222 billion miles. The prediction is–wait for it–7.9 percent more than the actual number!
That’s pretty amazing actually, and it indicates that my data and model must be pretty damn close to Silver’s.
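That 7.9 percent figure is just the relative error of the point prediction, which is easy to check:

```python
predicted = 239.6  # billions of miles, from my model
actual = 222.0     # the reported January figure

overprediction = (predicted - actual) / actual
print(round(100.0 * overprediction, 1))  # 7.9
```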
With the model in hand, however, we can do a bit better than this. Below is a chart showing how close the model was for every month in my dataset. It’s similar to the graphic accompanying Silver’s
Esquire article, only not as ugly and confusing.
The graph shows the difference between the prediction and the actual number. When the point is above the zero line, it means people drove more than the model would predict. When it’s below the line,
they drove less.
You can see here that there are multiple imperfections in the model. Mileage declined a little faster than predicted in the late 90′s, and then rose faster than expected in the early 2000′s. It’s
possible that this has something to do with a policy difference between the Bush and Clinton administrations, but I’m not enough of an expert to say.
What jumps out, though, are those last three points on the right, corresponding to this past November, December, and January. All of them are way off the prediction, and the error is bigger than for
any other time period. This strongly suggests that something really has changed. What’s not totally clear, though, is whether it’s the car culture that’s different, or whether it’s this recession
that’s unlike the other two recessions in this data set (the early 90′s and early 2000′s).
The next logical step is to consider some additional variables. Some commenters at Nate’s site pointed out that you might want to factor in changes in wealth–as opposed to changes in income, which
are at least partly captured by the unemployment variable. Directly measuring wealth is a little tricky, but we can easily measure two things that are proxies for wealth, or people’s perceptions of
wealth: the stock market and the housing market. So I went google-hunting again and found two more variables: the monthly closing of the Dow, and the government’s housing price index. Put those into
the regression, and away we go:
lm(formula = miles ~ unemp + gasprice + date + stocks + housing + month)
coef.est coef.se
(Intercept) 117.87 4.13
unemp -1.64 0.48
gasprice -0.11 0.01
date 0.01 0.00
stocks 1.01 0.30
housing 0.24 0.03
monthAugust 18.40 1.20
monthDecember -8.88 1.21
monthFebruary -30.58 1.21
monthJanuary -22.12 1.19
monthJuly 18.28 1.20
monthJune 11.74 1.20
monthMarch 0.30 1.20
monthMay 12.77 1.20
monthNovember -10.02 1.21
monthOctober 6.42 1.21
monthSeptember -1.92 1.21
n = 217, k = 17
residual sd = 3.60, R-Squared = 0.98
R-squared looks the same, but the residual standard deviation is lower, which indicates that this model predicts more of the variation in the data than the last one. And the new variables both have
pretty big and statistically significant effects. The stock market close is scaled in thousands, so the coefficient indicates that for every 1000 point increase in the Dow, we drive 1 billion more
miles. The housing price index defines 1991 prices as 100, and went into the 220′s during the bubble. Every one point increase in that index predicts a 240 million mile increase in driving.
Here’s another version of the graph above, for our new model:
The same patterns are still present, but the divergence between the predictions and the actual numbers is smaller now. (Incidentally, I have no idea what happened in January of 1995. Did everyone go
on a road trip without telling me?) It still looks like there’s been some qualitative change in US driving habits recently, but the case is less clear cut. In particular, the late 90′s now looks like
another outstanding mystery. Mileage declined by more than the model expected then, but why? At the moment I have no particular hypothesis about that.
My final model tests something else that appears in Nate’s article:
There is strong statistical evidence, in fact, that Americans respond rather slowly to changes in fuel prices. The cost of gas twelve months ago, for example, has historically been a much better
predictor of driving behavior than the cost of gas today. In the energy crisis of the early 1980s, for instance, the price of gas peaked in March 1981, but driving did not bottom out until a year later.
OK, so let’s try using the price of gas 12 months ago as a predictor along with current prices. This will force us to throw away a bit of data, but we can still fit a model on most of the data:
lm(formula = miles ~ unemp + gasprice + gasprice12 + date + stocks +
housing + month, data = data)
coef.est coef.se
(Intercept) 112.28 3.82
unemp -0.93 0.42
gasprice -0.07 0.01
gasprice12 -0.08 0.01
date 0.01 0.00
stocks 0.93 0.26
housing 0.25 0.02
monthAugust 18.19 1.04
monthDecember -8.99 1.05
monthFebruary -31.26 1.06
monthJanuary -22.20 1.05
monthJuly 18.17 1.05
monthJune 11.58 1.05
monthMarch 0.10 1.06
monthMay 12.88 1.05
monthNovember -10.06 1.04
monthOctober 6.29 1.04
monthSeptember -2.08 1.04
n = 210, k = 18
residual sd = 3.07, R-Squared = 0.99
It looks like current gas prices and last year’s gas prices are about equivalent in their effect on mileage. Now let’s look at the graph of prediction error again:
Lo and behold, the apparently anomalous findings from the last few months have disappeared. This isn’t the last word, of course, nor is it the perfect model. But it no longer appears that US driving
behavior is so unusual, when you account for all the relevant economic contextual factors.
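Incidentally, there's nothing exotic about the lagged predictor: it's the same gas-price series shifted down by twelve rows, which is exactly why a year of observations gets thrown away. A sketch in Python (the analysis above was done in R, and this series is made up):

```python
# A made-up monthly gas-price series, in cents.
gasprice = [110 + i for i in range(24)]

lag = 12
gasprice12 = gasprice[:-lag]  # the price 12 months before each kept row
current = gasprice[lag:]      # the current price, aligned with gasprice12
# The first `lag` rows have no 12-month history, so we keep n - lag rows.
print(len(current), current[0], gasprice12[0])  # 12 122 110
```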
Anyhow, that’s enough playing around in the data for me for the time being. In the end, this whole exercise helped me understand what I like best about Nate Silver’s work. He’s inventing a new media
niche, call it “statistical journalist”. He uses publicly available data to produce quick, topical analysis that illuminates the issues of the day in a way that neither anecdotes nor naive recitations
of descriptive statistics can. He may play fast and loose at times, but his methods are transparent enough that people like me can still check up on him. I certainly hope that this kind of writing
becomes an established sub-specialty with a wider base of practitioners than just Silver himself.
Graphs > Tables, again
March 16th, 2009 | Published in Data, Statistical Graphics
Over at the Monkey Cage, Lee Sigelman notes a new study from the CDC that tries to figure out how many people and households in each state have no land line and rely entirely on cell phones. Being a
good student of Andrew Gelman, my first thought upon clicking the link was: “these tables are horrible, they should be graphs!” My second thought was, “Gelman will probably come along and produce
graphs of the data himself”. So before that happens, I thought I’d take a stab at summarizing the paper’s first couple of tables:
Click the image to see it full-size.
The intervals aren’t classical 95% intervals–they’re some kind of fancy estimation from the CDC that you’ll have to click the link to find out about. The hollow points/dashed lines are the “modeled”
estimates, and the black points/solid lines are the “direct estimates”. The points are in order according to the modeled estimates.
The nice thing about displaying this graphically is that you can see how much uncertainty there is on some of these estimates, so you get a better idea of what this graph does and does not tell you.
For example, Washington DC is estimated to have the highest percentage of adults in cell-only households, but the confidence intervals reveal that this doesn’t really mean anything–the most you can
say is that DC is on the high end of cell-only prevalence.
Pessimism of the Intellect
October 30th, 2008 | Published in Politics, Statistical Graphics
My boss is a prominent political scientist and an Obama supporter. This afternoon, he was ribbing me for being a “pox on both your houses” ultra-leftist who only grudgingly acknowledges that electing
Obama would be good for the left.
After our meeting had ended, I came up with a perfect encapsulation of my feelings about Obama, which has the added benefit of being an extremely nerdy joke. My point estimate is that it does matter
whether Obama wins. But my confidence interval for how much it matters includes zero. In the spirit of Jessica Hagy, I present the argument in graph form:
isomorphism extension theorem
April 13th 2009, 06:35 PM #1
Junior Member
Oct 2008
isomorphism extension theorem
Here's the problem:
Let K be an algebraically closed field. Show that any isomorphism $\sigma$ of K onto a subfield of K such that K is algebraic over $\sigma[K]$ is an automorphism of K; that is, show $\sigma[K]=K$.
I know $\sigma^{-1}:\sigma[K] \rightarrow K$ can be extended to an isomorphism $\mu:K\rightarrow K'$ where K'<=K. And since K<=K'<=K we know $\sigma^{-1}$ can only be extended to an automorphism
of K. But does this help me? I don't see how to make the connection with $\sigma[K]$.
Any advice would be great! :-)
April 13th 2009, 08:48 PM #2
MHF Contributor
May 2008
Hint: show that $\sigma(K)$ is algebraically closed too and thus, since $K$ is algebraic over $\sigma(K) \subseteq K,$ we must have $\sigma(K)=K.$
Indexing Operations on Weight-Balanced Trees - MIT/GNU Scheme 9.1
11.7.4 Indexing Operations on Weight-Balanced Trees
Weight-balanced trees support operations that view the tree as a sorted sequence of associations. Elements of the sequence can be accessed by position, and the position of an element in the sequence
can be determined, both in logarithmic time.
— procedure: wt-tree/index wt-tree index
— procedure: wt-tree/index-datum wt-tree index
— procedure: wt-tree/index-pair wt-tree index
Returns the 0-based indexth association of wt-tree in the sorted sequence under the tree's ordering relation on the keys. wt-tree/index returns the indexth key, wt-tree/index-datum returns the
datum associated with the indexth key and wt-tree/index-pair returns a new pair (key . datum) which is the cons of the indexth key and its datum. The average and worst-case times required by this
operation are proportional to the logarithm of the number of associations in the tree.
These operations signal a condition of type condition-type:bad-range-argument if index<0 or if index is greater than or equal to the number of associations in the tree. If the tree is empty, they
signal an anonymous error.
Indexing can be used to find the median and maximum keys in the tree as follows:
median:  (wt-tree/index wt-tree
                        (quotient (wt-tree/size wt-tree) 2))
maximum: (wt-tree/index wt-tree
                        (- (wt-tree/size wt-tree) 1))
— procedure: wt-tree/rank wt-tree key
Determines the 0-based position of key in the sorted sequence of the keys under the tree's ordering relation, or #f if the tree has no association with for key. This procedure returns either an
exact non-negative integer or #f. The average and worst-case times required by this operation are proportional to the logarithm of the number of associations in the tree.
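Conceptually, wt-tree/rank answers the same question as a binary search over the sorted key sequence. A rough Python analogue over a plain sorted list (not part of this manual; the tree additionally keeps the logarithmic bound under insertion and deletion):

```python
import bisect

def rank(sorted_keys, key):
    """0-based position of key in sorted_keys, or None if absent."""
    i = bisect.bisect_left(sorted_keys, key)
    if i < len(sorted_keys) and sorted_keys[i] == key:
        return i
    return None

keys = ["apple", "cherry", "grape", "plum"]
print(rank(keys, "grape"))   # 2
print(rank(keys, "banana"))  # None
```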
— procedure: wt-tree/min wt-tree
— procedure: wt-tree/min-datum wt-tree
— procedure: wt-tree/min-pair wt-tree
Returns the association of wt-tree that has the least key under the tree's ordering relation. wt-tree/min returns the least key, wt-tree/min-datum returns the datum associated with the least key
and wt-tree/min-pair returns a new pair (key . datum) which is the cons of the minimum key and its datum. The average and worst-case times required by this operation are proportional to the
logarithm of the number of associations in the tree.
These operations signal an error if the tree is empty. They could have been written
(define (wt-tree/min tree)
(wt-tree/index tree 0))
(define (wt-tree/min-datum tree)
(wt-tree/index-datum tree 0))
(define (wt-tree/min-pair tree)
(wt-tree/index-pair tree 0))
— procedure: wt-tree/delete-min wt-tree
Returns a new tree containing all of the associations in wt-tree except the association with the least key under the wt-tree's ordering relation. An error is signalled if the tree is empty. The
average and worst-case times required by this operation are proportional to the logarithm of the number of associations in the tree. This operation is equivalent to
(wt-tree/delete wt-tree (wt-tree/min wt-tree))
— procedure: wt-tree/delete-min! wt-tree
Removes the association with the least key under the wt-tree's ordering relation. An error is signalled if the tree is empty. The average and worst-case times required by this operation are
proportional to the logarithm of the number of associations in the tree. This operation is equivalent to
(wt-tree/delete! wt-tree (wt-tree/min wt-tree))
sqrt function fortran 90 error
I am trying to use sqrt but somehow I cannot understand why, when compiling this program, I get a segmentation fault error. I have detected that it is because of the sqrt. But how can I manage
to use sqrt in this kind of expression without making a mistake?
SUBROUTINE constructImages(image,w,w_b,x_w)
USE cellConst
USE simParam, ONLY: xDim, yDim, obstX, obstY, obstR,L_max
USE D2Q9Const, ONLY: v
implicit none
integer, INTENT(INOUT):: image(yDim,xDim),w_b(yDim,xDim),w(yDim,xDim)
double precision, dimension(L_max,0:1), INTENT(INOUT):: x_w
integer:: x,y,i,cont_L
double precision::x2,y2
!Disk Shape
do x = 1, xDim
do y = 1, yDim
if (((x-obstX)**2.0d0 + (y-obstY)**2.0d0) <= (obstR**2.0d0) ) then
image(y,x) = wall
w(y,x) = wall
end if
end do
end do
do x = 1, xDim
do y = 3, yDim-2
do i= 1,8
if ((w(y,x) == fluid) .and. (w(y+v(i,1),x+v(i,0)) == wall)) then
w_b(y,x) = 2
end if
end do
end do
end do
do x = 1,xDim
do y = 3, yDim-2
if (w_b(y,x) == 2) then
w_b(y,x) = wall
w(y,x) = wall
image(y,x) = wall
end if
end do
end do
x_w = 0.0d0 !Lagrangian vector for boundary exact position
cont_L = 0
do x = 1, xDim
do y = 1, yDim
do i = 1, 8
if ((w(y+v(i,1),x+v(i,0)) == fluid) .and. (w_b(y,x) == wall)) then
cont_L = cont_L +1
x_w(cont_L,0) = x2 - ((x-obstX)**2.0d0 + (y-obstY)**2.0d0)**0.5d0 - obstR
x_w(cont_L,1) = y2 - ((x-obstX)**2.0d0 + (y-obstY)**2.0d0)**0.5d0 - obstR
! write(*,*) x2,y2
! write(*,*) x_w(cont_L,0),x_w(cont_L,1)
end if
end do
end do
end do
END SUBROUTINE constructImages
Please, let me know if more information is required,
Albert P
PS: the 3 integer Eulerian meshes are 0/1 2D meshes where 1 is assigned to a wall, which is delimited by a rounded disk; w_b(y,x) are the boundary points, w(y,x) are the whole disk points, and in x_w I
want to set a Lagrangian vector of the discretized exact positions on the obstacle. You don't really need to understand this.
fortran90 sqrt
Does it work (not segfault) for any values at all? – wallyk Jan 3 '13 at 18:06
Here the problem occurs after compilation. I don't understand why, but It seems that when I take out the square root, placed in bold on the problem statement, it magically performs fine. wallyk,
what do you want me to change in this case? can you be a bit more specific? – Albert Pa Jan 4 '13 at 12:24
Seg faults are very strange things. The exact point of occurrence is not easy to find. You remove the square root and the fault goes away? Believe it or not, this does not prove that the root is
causing the fault. I can't see how raising something to a power is in and of itself going to give a seg fault. On the other hand, that 0 index in array x_w in the same line looks a little
suspicious. I would triple check that you are passing the x_w array you should be passing, and that the subroutine recognizes it as such. – bob.sacamento Jan 4 '13 at 22:31
One more thing, I think using the FORTRAN sqrt() intrinsic is going to be at least slightly more efficient than raising a number to the power 1/2. Others who know more about this might disagree,
though. – bob.sacamento Jan 4 '13 at 22:32
Segfaults are far from strange - try a debugger and you will see exactly where the problem occurs. Without the whole of your code, you're asking us to analyze and think out something that can be
answered by a debugging tool by you in seconds. Give one a go, and see what you find. – David Jan 5 '13 at 0:08
1 Answer
The problem was actually an array index in w which went below 1, and there were multiple smaller problems that somehow led to that error when using sqrt.
do x = 1, xDim
do y = 1, yDim
do i = 1, 8
if ((w(y,x) == fluid) .and. (w(y+v(i,1),x+v(i,0)) == wall))
Here the index y+v(i,1) (and likewise x+v(i,0)) can step outside the bounds of w. I have solved the problem by not letting the indices go below 1 or above yDim, and this in all the routines; I
detected the problem with bounds checking enabled while executing the code.
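The shape of the fix, guarding every neighbor access so indices stay inside the grid, can be sketched like this (illustrative Python with 0-based indexing; the function name is mine, and the original code is Fortran with 1-based indexing):

```python
# Guard every neighbor access instead of indexing w(y+dy, x+dx)
# unconditionally -- the missing guard is what caused the
# out-of-bounds access in the original loops.

# the 8 neighbor offsets (dy, dx), analogous to v(i,1), v(i,0)
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def boundary_cells(w, wall=1, fluid=0):
    """Wall cells that have at least one fluid neighbor."""
    ydim, xdim = len(w), len(w[0])
    out = []
    for y in range(ydim):
        for x in range(xdim):
            if w[y][x] != wall:
                continue
            for dy, dx in NEIGHBORS:
                ny, nx = y + dy, x + dx
                # the guard that was missing in the original loop:
                if not (0 <= ny < ydim and 0 <= nx < xdim):
                    continue
                if w[ny][nx] == fluid:
                    out.append((y, x))
                    break
    return out
```

Even a wall cell on the grid edge is handled here without reading outside the array.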
Thank you all of you anyway. That was a tough one!!
Albert P
[Scipy-tickets] [SciPy] #1666: LinearNDInterpolator fails when dimensions differ by many orders of magnitude
SciPy Trac scipy-tickets@scipy....
Sat Jun 2 10:57:57 CDT 2012
#1666: LinearNDInterpolator fails when dimensions differ by many orders of magnitude
Reporter: bloop369 | Owner: somebody
Type: defect | Status: new
Priority: normal | Milestone: Unscheduled
Component: scipy.interpolate | Version: 0.10.0
Keywords: LinearNDInterpolator, QHull |
Comment(by pv):
The scattered data interpolation needs an answer to the question "which data
points are close to this one", and this requires a metric. The metric used
in Qhull is ||x-y||_2. Obviously, the results are not affine-invariant
(including scaling). Physically, using such a metric implies that all the
dimensions must have the same units.
Now, your suggestion of making the data axes dimensionless by scaling them
with the range of the data is one possibility, but this may be the wrong
choice in several cases (when the dimensions already have the same units).
I believe the behavior as it is now is correct. AFAIK, it's also how
things work in other numerical packages. This means the user must scale
the data axes before interpolation in a sensible way.
This can be spelled out in more detail in the documentation, and maybe
options could be added to make scaling the data more easily controllable
by the user.
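As an illustration of that workaround (my sketch, not part of the ticket; the function names are made up), each axis can be mapped to a dimensionless [0, 1] range before building the interpolant, with the same transform applied to query points:

```python
# Rescale each axis to a comparable, dimensionless range before
# interpolating.  With SciPy you would apply the same scaling to
# the points passed to LinearNDInterpolator and to the query
# points; here the scaling itself is shown in plain Python.

def scale_columns(points):
    """Map each column of `points` to [0, 1]; return the scaled
    points plus the per-column (offset, span) needed for queries."""
    cols = list(zip(*points))
    params = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0   # avoid division by zero on flat axes
        params.append((lo, span))
    scaled = [tuple((v - lo) / span for v, (lo, span) in zip(p, params))
              for p in points]
    return scaled, params

def scale_query(q, params):
    """Apply the stored per-column scaling to a query point."""
    return tuple((v - lo) / span for v, (lo, span) in zip(q, params))
```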
Ticket URL: <http://projects.scipy.org/scipy/ticket/1666#comment:1>
SciPy <http://www.scipy.org>
SciPy is open-source software for mathematics, science, and engineering.
More information about the Scipy-tickets mailing list
Find the value of x.
• one year ago
• one year ago
MathGroup Archive: July 2011 [00554]
Re: TransformedDistribution -- odd problem
• To: mathgroup at smc.vnet.net
• Subject: [mg120497] Re: TransformedDistribution -- odd problem
• From: Paul von Hippel <paulvonhippel at yahoo.com>
• Date: Tue, 26 Jul 2011 07:06:40 -0400 (EDT)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
• References: <j0gh7s$bd7$1@smc.vnet.net> <201107251129.HAA25540@smc.vnet.net> <4E2EA04E.813B.006A.0@newcastle.edu.au>
• Reply-to: Paul von Hippel <paulvonhippel at yahoo.com>
Thanks -- that fixes it!
Bonus question: if we don't specify v2>2, why doesn't Mathematica return a two part solution:
k v2 / (v2-2) v2>2
Indeterminate True
That's what it does if we request the mean of F -- why doesn't it do the same if we request the mean of k*F?
From: Barrie Stokes <Barrie.Stokes at newcastle.edu.au>
To: mathgroup at smc.vnet.net; paulvonhippel at yahoo <paulvonhippel at yahoo.com>
Sent: Monday, July 25, 2011 8:09 PM
Subject: [mg120497] Re: TransformedDistribution -- odd problem
Hi Paul
There are conditions on v1 and v2, the degrees of freedom of the F distribution:
Assuming[v2 > 2,
Mean[TransformedDistribution[F ,
F \[Distributed] FRatioDistribution[v1, v2]]]]
{Assuming[v2 > 2,
Mean[TransformedDistribution[k*F ,
F \[Distributed] FRatioDistribution[v1, v2]]]], k*v2/(-2 + v2)}
{Assuming[v2 > 2,
Mean[TransformedDistribution[k + F ,
F \[Distributed] FRatioDistribution[v1, v2]]]],
k + v2/(-2 + v2)} // FullSimplify
which shows precisely what you expect for k*F and k+F.
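The identities the thread relies on can also be checked numerically (my sketch, not part of the thread; it assumes the standard construction of an F(v1, v2) variate as a ratio of scaled chi-square variates):

```python
# E[F] = v2/(v2 - 2) for v2 > 2, and by linearity E[k*F] = k*E[F].
import random

def f_sample(v1, v2, rng):
    # chi-square(n) is Gamma(shape = n/2, scale = 2)
    num = rng.gammavariate(v1 / 2, 2) / v1
    den = rng.gammavariate(v2 / 2, 2) / v2
    return num / den

rng = random.Random(0)
v1, v2, k = 10, 10, 3.0
xs = [f_sample(v1, v2, rng) for _ in range(200000)]
mean_f = sum(xs) / len(xs)

# exact linearity on the same sample:
mean_kf = sum(k * x for x in xs) / len(xs)
assert abs(mean_kf - k * mean_f) < 1e-6

# agreement with the closed form v2/(v2 - 2) = 1.25, up to sampling error:
assert abs(mean_f - v2 / (v2 - 2)) < 0.05
```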
>>> On 25/07/2011 at 9:29 pm, in message <201107251129.HAA25540 at smc.vnet.net>,
paulvonhippel at yahoo <paulvonhippel at yahoo.com> wrote:
> A little more experimenting shows that the TransformedDistribution
> function will also not provide the mean of k+F where k is a constant
> and F has an F distribution -- i.e.,
> Mean[TransformedDistribution[k*F , F \[Distributed]
> FRatioDistribution[v, v]]]
> If I changce the distribution of F to NormalDistribution or
> ChiSquareDistribution, I can get a mean for k*F or k+F. So the problem
> only occurs when I define a simple function of an F variable using the
> TransformedDistribution function.
> This all strikes me as very strange, and I'd be curious to know if
> others can reproduce my results. If you can't reproduce my results,
> I'd be interested in theories about why my results differ from yours.
> E.g., is there a setting I should change in the software?
> I am using version 8.0.0.0 and looking to upgrade to 8.0.1, if that
> makes a difference.
> Many thanks for any pointers.
> On Jul 24, 2:22 am, paulvonhippel at yahoo <paulvonhip... at yahoo.com>
> wrote:
>> I'm having a very strange problem with TransformedDistribution, where
>> I can calculate the mean of an F distribution but I cannot calculate
>> the mean of a constant multiplied by an F distribution. That is, if I
>> type
>> Mean[TransformedDistribution[F, F \[Distributed]
>> FRatioDistribution[v, v]]]
>> Mathematica gives me an answer. But if I type
>> Mean[TransformedDistribution[k*F , F \[Distributed]
>> FRatioDistribution[v, v]]]
>> Mathematica just echoes the input. I swear I got an answer for the
>> second expression earlier today. What am I doing wrong?
Efficient calculation of Faulhaber's formula (sum of powers)?
07-21-2010 #1
What I'm actually trying to do is efficiently calculate Faulhaber(N, E) modulo M (or at least Faulhaber(N, E) - the modulo operation could be worked in later), where N, E, and M can be
arbitrarily large. From what I have read, it seems that this is generally expressed in terms of either a polynomial to the degree of N+1 (memory-expensive) or some recursive relationship
(time-expensive). Is there any way that this can be done that requires neither very much memory nor time?
Since the exponent changes, you'll be forced to use something like this:
Faulhaber's Formula -- from Wolfram MathWorld
(the first equation with the Kronecker delta etc, not the individual cases)
Power Sum -- from Wolfram MathWorld
may also be helpful.
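For reference, the obvious constant-memory baseline, a term-by-term modular sum, looks like this (my sketch; it is linear in N, so it does not meet the poster's speed goal for arbitrarily large N, but it gives a correctness oracle for any Faulhaber-based method):

```python
# S(N, E) = sum_{i=1}^{N} i^E (mod M), computed term by term with
# Python's three-argument pow.  O(N log E) time, O(1) memory.

def power_sum_mod(n, e, m):
    """sum_{i=1}^{n} i**e  (mod m)."""
    total = 0
    for i in range(1, n + 1):
        total = (total + pow(i, e, m)) % m
    return total
```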
07-21-2010 #2
The table shows the height of a tree as it grows. What equation in slope-intercept form gives the tree's height at any time? Time (months) - 2, 4, 6, 8 Height (inches)- 14, 23, 32, 41 My answer: y =
9x + 2
• one year ago
• one year ago
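The proposed answer can be checked against the table directly (my computation, not part of the original thread):

```python
# Two-point fit of a line y = slope*x + intercept to the table.
time   = [2, 4, 6, 8]
height = [14, 23, 32, 41]

slope = (height[1] - height[0]) / (time[1] - time[0])   # (23-14)/(4-2) = 4.5
intercept = height[0] - slope * time[0]                 # 14 - 4.5*2 = 5

# every tabulated point lies on y = 4.5x + 5:
assert all(abs((slope * t + intercept) - h) < 1e-9
           for t, h in zip(time, height))

# the proposed y = 9x + 2 does not fit the first point: 9*2 + 2 = 20, not 14
assert 9 * 2 + 2 != 14
```

So the table actually gives y = 4.5x + 5 rather than y = 9x + 2.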
Binomial distribution
April 25th 2010, 12:03 AM #1
Jul 2007
Binomial distribution
My Problem:
A crossword puzzle is published in a magazine 6 times a week(all days except Sunday). A woman is able to complete on average eight out of ten crosswords. Given that she completes the puzzle on
Monday, find the probability that she will complete at least four in the rest of the week.
My solution:
Let the random variable $X$ denote the number of crossword puzzles completed. The probability that she solves a crossword puzzle on Monday is $0.8$.
Hence for the rest of the week
X follows a binomial distribution with parameters (5,0.8).
$P(X \geq 4| X=1)=\frac{0.737}{0.8}=0.922$
and it is wrong. The right answer according to the book is $0.737$.
Any help is appreciated.
It looks to me like you don't have to divide out 0.8 - just basic cumulative probability will be enough:
$P(X\ge4)=P(4)+P(5)=\left({5 \choose 4} \times 0.8^4\times 0.2^1\right) + 0.8^5 = 0.737$
Last edited by losm1; April 25th 2010 at 01:41 AM.
Note that the events are independent, whether she complete the puzzle on Monday will not affect the result for the remaining days.
So $P(X \ge 4 \mid \text{puzzle completed on Monday}) = P(X \ge 4)=P(X=4)+P(X=5)$
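Numerically (my check, not part of the thread), this reasoning gives the book's answer:

```python
# Completion on Monday is independent of the other five days, so the
# answer is P(X >= 4) for X ~ Binomial(5, 0.8).
from math import comb

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_at_least_4 = binom_pmf(5, 4, 0.8) + binom_pmf(5, 5, 0.8)
# 5 * 0.8^4 * 0.2 + 0.8^5 = 0.4096 + 0.32768 = 0.73728, i.e. 0.737
```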
April 25th 2010, 01:29 AM #2
Junior Member
Apr 2010
April 25th 2010, 10:08 PM #3
Oct 2006
Linearly Dependent
By Kardi Teknomo, PhD.
In this page, you will learn more about linearly dependent and linearly independent vectors.
In the previous topic of Basis Vector, you learned that a set of vectors can form a coordinate system of basis vectors. One main characteristic of basis vectors is that no basis vector can be written as a
linear combination of the other basis vectors. This characteristic is called linear independence. Before we discuss linearly independent vectors, we will first discuss linearly dependent vectors.
Linearly Dependent Vectors
A set of vectors v1, v2, ..., vn of the same dimension is linearly dependent if there is a set of scalars c1, c2, ..., cn, not all zero, such that the linear combination c1*v1 + c2*v2 + ... + cn*vn is a zero vector.
Linearly dependent vectors cannot be used to make a coordinate system. Geometrically, two vectors are linearly dependent if they point to the same direction or opposite direction. These linearly
dependent vectors are parallel or lie on the same line (collinear). Three vectors are linearly dependent if they lie in a common plane passing through the origin (coplanar).
Algebraically, we can augment the set of vectors to form a matrix; the set is linearly dependent exactly when that matrix has rank less than the number of vectors (for a square matrix, when its determinant is zero).
Linearly Independent Vectors
Having discussed linearly dependent vectors, we are now ready for linearly independent vectors.
A set of vectors that is not linearly dependent is called linearly independent. When you set a linear combination of linearly independent vectors equal to the zero vector, the only solution is that all the scalar coefficients are zero.
Geometrically, linearly independent vectors form a coordinate system.
By inspection we can determine whether a set of vectors is linearly independent or linearly dependent. If at least one vector can be expressed as a linear combination (i.e. scalar multiple or sum) of
the other vectors, then the set of vectors is linearly dependent. If no vector can be expressed as a linear combination of the other vectors, then the set of vectors is linearly independent.
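The same test can be done numerically: a set of vectors is linearly independent exactly when the matrix formed from them has rank equal to the number of vectors. A pure-Python sketch (NumPy's `matrix_rank` does the same job):

```python
# Rank by Gaussian elimination, then compare with the number of vectors.

def rank(rows, eps=1e-12):
    m = [list(r) for r in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][c]) > eps), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # move pivot row up
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > eps:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def linearly_independent(vectors):
    return rank(vectors) == len(vectors)
```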
For two vectors, the test is simple: they are linearly dependent exactly when one is a scalar multiple of the other.
The interactive program below is designed to answer whether two vectors are linearly independent or linearly dependent.
See Also: Basis Vector, Changing Basis, Eigen Values & Eigen Vectors
This tutorial is copyrighted.
Preferable reference for this tutorial is
Teknomo, Kardi (2011) Linear Algebra tutorial. http:\\people.revoledu.com\kardi\ tutorial\LinearAlgebra\
On a Problem of sigfpe
> {-# LANGUAGE TypeFamilies, EmptyDataDecls, TypeOperators, GADTs #-}
At the end of his most recent blog post, Divided Differences and the Tomography of Types, Dan Piponi left his readers with a challenge:
In preparation for the next installment, here’s a problem to think about: consider the tree type above. We can easily build trees whose elements are of type A or of type B. We just need f(A+B).
We can scan this tree from left to right building a list of elements of type A+B, ie. whose types are each either A or B. How can we redefine the tree so that the compiler enforces the constraint
that at no point in the list, the types of four elements in a row spell the word BABA? Start with a simpler problem, like enforcing the constraint that AA never appears.
The tree type Dan is referring to is this one:
> data F a = Leaf a | Form (F a) (F a)
This is the type of binary trees with data at the leaves, also sometimes referred to as the type of parenthesizations.
(By the way, I highly recommend reading Dan’s whole post, which is brilliant; unfortunately, to really grok it you’ll probably want to first read his previous post Finite Differences of Types and
Conor McBride’s Clowns to the Left of Me, Jokers to the Right.)
For now let’s focus on the suggested warmup, to enforce that AA never appears. For example, the following tree is OK:
> tree1 = Form (Form (Leaf (Right 'x'))
>                    (Leaf (Left 1)))
>              (Leaf (Right 'y'))
because the types of the elements at its leaves form the sequence BAB. However, we would like to rule out trees like
> tree2 = Form (Form (Leaf (Right 'x'))
>                    (Leaf (Left 1)))
>              (Leaf (Left 2))
which contains the forbidden sequence AA.
Checking strings to see if they contain forbidden subexpressions… sounds like a job for regular expressions and finite state automata! First, we write down a finite state automaton which checks for
strings not containing AA:
State 0 is the starting state; the blue circles represent accepting states and the red circle is a rejecting state. (I made this one by hand, but of course there are automatic methods for generating
such automata given a regular expression.)
The idea now — based on another post by Dan — is to associate with each tree a transition function $f$ such that if the FSM starts in state $s$, after processing the string corresponding to the
leaves of the tree it will end up in state $f(s)$. Composing trees then corresponds to composing transition functions.
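Before lifting this to the type level, the same encoding can be sketched at the value level in Python (my illustration, not from the original post): a transition function is a list whose i-th entry is the state reached from state i, and branching is table composition.

```python
# Transition tables for the AA-rejecting automaton above.
leafA = [1, 2, 2]   # state 0 -> 1, states 1 and 2 -> 2
leafB = [0, 0, 2]   # states 0 and 1 -> 0, state 2 stays 2

def branch(f1, f2):
    """First run f1, then f2 (the :>>> of the post)."""
    return [f2[s] for s in f1]

def accepted(f):
    """Valid trees send the start state 0 to state 0 or 1."""
    return f[0] in (0, 1)

# leaves B, A, B: no AA, so the tree is accepted
tree1 = branch(branch(leafB, leafA), leafB)
assert accepted(tree1)

# leaves B, A, A: contains AA, so the tree is rejected
tree2 = branch(branch(leafB, leafA), leafA)
assert not accepted(tree2)
```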
There’s a twist, of course, due to that little phrase "compiler enforces the constraint"… we have to do all of this at the type level! Well, I’m not afraid of a little type-level computation, are
First, type-level naturals, and some aliases for readability:
> data Z
> data S n
> type S0 = Z
> type S1 = S Z
> type S2 = S (S Z)
We’ll use natural numbers to represent FSM states. Now, how can we represent transition functions at the type level? We certainly can’t represent functions in general. But transition functions are
just maps from the (finite) set of states to itself, so we can represent one just by enumerating its outputs $f(0), f(1), f(2), \dots$ So, we’ll need some type-level lists:
> data Nil
> data (x ::: xs)
> infixr 5 :::
And a list indexing function:
> type family (n :!! l) :: *
> type instance ((x ::: xs) :!! Z) = x
> type instance ((x ::: xs) :!! S n) = xs :!! n
(Did you know you could have infix type family operators? I didn’t. I just tried it and it worked!)
Finally, we need a way to compose transition functions. If f1 and f2 are transition functions, then f1 :>>> f2 is the transition function you get by doing first f1 and then f2. This is not hard to
compute: we just use each element of f1 in turn as an index into f2.
> type family (f1 :>>> f2) :: *
> type instance (Nil :>>> f2) = Nil
> type instance ((s ::: ss) :>>> f2) = (f2 :!! s) ::: (ss :>>> f2)
Great! Now we can write down a type of trees with two leaf types and a phantom type index indicating the FSM transition function for the tree.
> data Tree' a b f where
A tree containing only an A sends state 0 to state 1 and both remaining states to state 2:
>   LeafA :: a -> Tree' a b (S1 ::: S2 ::: S2 ::: Nil)
A tree containing only a B sends states 0 and 1 to state 0, and leaves state 2 alone:
>   LeafB :: b -> Tree' a b (S0 ::: S0 ::: S2 ::: Nil)
Finally, we compose trees by composing their transition functions:
>   Branch :: Tree' a b f1 -> Tree' a b f2 -> Tree' a b (f1 :>>> f2)
For the final step, we simply note that valid trees are those which send state 0 (the starting state) to either state 0 or state 1 (state 2 means we saw an AA somewhere). We existentially quantify
over the rest of the transition functions because we don’t care what the tree does if the FSM starts in some state other than the starting state.
> data Tree a b where
>   T0 :: Tree' a b (S0 ::: ss) -> Tree a b
>   T1 :: Tree' a b (S1 ::: ss) -> Tree a b
Does it work? We can write down our example tree with a BAB structure just fine:
*Main> :t T0 $ Branch (Branch (LeafB 'x') (LeafA 1)) (LeafB 'y')
T0 $ Branch (Branch (LeafB 'x') (LeafA 1)) (LeafB 'y')
:: (Num a) => Tree a Char
But if we try to write down the other example, we simply can’t:
*Main> :t T0 $ Branch (Branch (LeafB 'x') (LeafA 1)) (LeafA 2)
Couldn't match expected type `Z' against inferred type `S (S Z)'
*Main> :t T1 $ Branch (Branch (LeafB 'x') (LeafA 1)) (LeafA 2)
Couldn't match expected type `Z' against inferred type `S Z'
It’s a bit annoying that for any given tree we have to know whether we ought to use T0 or T1 as the constructor. However, if we kept a bit more information around at the value level, we could write
smart constructors leafA :: a -> Tree a b, leafB :: b -> Tree a b, and branch :: Tree a b -> Tree a b -> Maybe (Tree a b) which would take care of this for us; I leave this as an exercise.
This solution can easily be adapted to solve the original problem of avoiding BABA (or any regular expression). All that would need to be changed are the types of LeafA and LeafB, to encode the
transitions in an appropriate finite state machine.
This has been fun, but I can’t help thinking there must be a cooler and more direct way to do it. I’m looking forward to Dan’s next post with eager anticipation:
Matrices of types have another deeper and surprising interpretation that will allow me to unify just about everything I’ve ever said on automatic differentiation, divided differences, and
derivatives of types as well as solve a wide class of problems relating to building data types with certain constraints on them. I’ll leave that for my next article.
If that’s not a teaser, I don’t know what is!
2 Responses to On a Problem of sigfpe
1. Very nice! Conceptually your solution is isomorphic to mine but you use very different type-level datastructures to get there. Actually, your code is more elegant than mine. But I tried to write
something more general so I could do the unification I claimed. I’ll see if I can get something finished this weekend before you guess what I’m going to write next. :-)
I’ve also not been keeping up with Haskell extensions for type-level programming. So reading your post will probably allow me to remove some of the old-fashioned type classes I’ve been using. So thanks!
Thanks! At this point I am quite clueless as to the connection between this and matrices of types, so I look forward to seeing how it all fits together.
As for type-level programming, the biggest thing is using type families instead of multi-parameter type classes, which often makes things much clearer. You may be interested in reading this
post, which is basically a mini-tutorial on using type families for type-level programming.
This entry was posted in haskell and tagged automata, finite, state, trees, type-level, types.
Transpose Formulae?
December 11th 2008, 03:24 AM #1
Dec 2008
Transpose Formulae?
I must have totally missed an entire module of this, leaving me rather dumbfounded!!
1.) Y = 5x - 4 (X)
2.) E = mgh + ½mv² (H)
3.) E = mgh + ½mv² (V)
4.) E = mgh + ½mv² (M)
Apparently the letters in the brackets are the ones to "Isolate"
If anyone does help, could you please show me how to work them out for the future,
Thanks so much in advance,
D x
All you're doing is treating all variables except the ones in brackets as constants. You're just rearranging the equations so that the variable in the brackets is the subject.
3.) $E = mgh + \frac{1}{2}mv^2$ (v)...
$E - mgh = \frac{1}{2}mv^2$
$2(E - mgh) = mv^2$
$\frac{2(E - mgh)}{m} = v^2$
$\sqrt{\frac{2(E - mgh)}{m}} = v$
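Each rearrangement can be sanity-checked numerically: pick values, compute the left-hand side, then recover the isolated variable (a quick self-check habit; the h and m forms below follow the same undo-each-operation steps as the worked case for v):

```python
# Round-trip checks for the four transpositions in the thread:
# (1) x from Y = 5x - 4;  (2-4) h, v, m from E = mgh + (1/2)mv^2.
from math import sqrt, isclose

m, g, h, v = 2.0, 9.81, 3.0, 4.0
E = m * g * h + 0.5 * m * v ** 2

# (2) h = (E - (1/2) m v^2) / (m g)
assert isclose((E - 0.5 * m * v**2) / (m * g), h)
# (3) v = sqrt(2 (E - m g h) / m)   -- the worked answer above
assert isclose(sqrt(2 * (E - m * g * h) / m), v)
# (4) m = E / (g h + (1/2) v^2)
assert isclose(E / (g * h + 0.5 * v**2), m)
# (1) x = (Y + 4) / 5
Y = 5 * 7 - 4
assert isclose((Y + 4) / 5, 7)
```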
But it's how you get there that confuses me, how come you divide certain aspects?
Any help with the other ones?
December 11th 2008, 03:30 AM #2
December 12th 2008, 12:25 AM #3
Dec 2008
Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning
Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning
From: Hans Aberg
Subject: Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning
Date: Thu, 10 Sep 2009 11:00:07 +0200
On 10 Sep 2009, at 10:26, Torsten Anders wrote:
[Your mail does not cc to the list - added: seems relevant.]
It is part of the font
distribution itself at
I also found
It seems to be that that staff indicates the Pythagorean tuning, with accidentals to indicate offsets relative that. Right?
Exactly: nominals (c, d, e...) and the "common accientals" (natural, #, b, x, bb) denote a spiral of Pythagorean fifths. Other accidentals detune this Pythagorean by commas etc. Multiple comma-
accidentals can be freely combined for notating arbitrary just intonation pitches. The Sagittal notation (http://users.bigpond.net.au/d.keenan/sagittal/ ) follows exactly the same idea.
Yes, I thought so.
This is in contrast, for example, to the older just intonation notation by Ben Johnston (see David B. Doty (2002). The Just Intonation Primer. Just Intonation Network), where some intervals
between nominals are Pythagorean (e.g., C G) and others are a just third etc (e.g., C E). Accidentals again denotes various comma shifts exactly. However, as the notation is less uniform music
not notated in C is harder to read. I assume this experience led to the development of the Pythagorean-based approach of the Helmholtz- Ellis and Sagittal notation.
The Sagittal notation allows for an even more fine-grained tuning (e.g., even comma fractions for adaptive just intonation), and also provides a single sign for each comma combination. However, I
find the Helmholtz-Ellis notation more easy to read (signs differ more, less signs).
The Western musical notation system is limited to what I call a diatonic pitch system (as "extended meantone" suggests a certain closeness to the major third).
For a major second M and minor second m, this is the system of pitches generated by p m + q M, where p, q are integers. The case (p, q) = (0,0) could be taken to be the tuning frequency. Sharps and
flats alter with the interval M - m.
I have implemented it in ChucK, so that it can easily be played in various tunings. The Pythagorean and quarter-comma meantone tunings are of course special cases, but so are others, like the Bohlen-Pierce scale, in which the diapason is not the octave.
Now, inspired by Hormoz Farhat's thesis on Persian music, I extended it by adding neutral seconds. For each neutral second n between m and M, one needs accidentals to go from m to n, and from M to n. This suffices in Farhat's description of Persian music (sori and koron). For Turkish music, one needs the "dual" neutral n' := M - n; the reason is that a different division of the perfect fourth leads to negative n coefficients. So then one needs two more accidentals, to go from m to n', and from M to n'.
In this kind of music notation, one just tries to extend the Pythagorean tuning with 5-limit intervals. So one neutral n is sufficient in this description. For higher limits, one needs more neutrals,
and for notation, a way to sort out preferred choice and order.
Now, one advantage of this model is that, like the Western notation system, one does not need to have explicit values for these symbols, though one can do so.
Basically just an FYI.
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, (continued)
• Message not available
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Hans Aberg <=
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Torsten Anders, 2009/09/10
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Hans Aberg, 2009/09/10
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Kees van den Doel, 2009/09/10
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Torsten Anders, 2009/09/09
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Torsten Anders, 2009/09/09
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Robin Bannister, 2009/09/10
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Torsten Anders, 2009/09/11
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Robin Bannister, 2009/09/11
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Trevor Daniels, 2009/09/11
• Re: Microtonal Helmholtz-Ellis notation in Lilypond: fine-tuning, Robin Bannister, 2009/09/11 | {"url":"http://lists.gnu.org/archive/html/lilypond-user/2009-09/msg00318.html","timestamp":"2014-04-19T05:11:42Z","content_type":null,"content_length":"13386","record_id":"<urn:uuid:9e97d397-fb65-48d8-a5d9-aa4c5fadaf36>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00072-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can someone answer these SAT questions?
Discus: SAT/ACT Tests and Test Preparation: June 2004 Archive: Can some one answer these SAT questions?
If the 5 cards are placed in a row so that 1 card is never at either end, how many different arrangements are possible?
This is it for the present.
Your question doesn't make sense as written. It probably should be 5! (the number of arrangements of n items is n!), but "so that 1 card is never at either end" is ambiguous. Is it that the "1" card is never at either end? As you phrased it, there can never be any arrangements, because there is always some card at either end.
Let's rephrase the question - if we have 5 cards labelled '1', '2', ..., '5', how many different arrangements are possible so that the '1' card is at neither end?
The answer is (total #arrangements of 5 cards) - (# arrangements with '1' card at beginning) - (# arrangements with '1' card at end)
= 5! -4! -4!
= (1)(2)(3)(4)(5) - (1)(2)(3)(4) -(1)(2)(3)(4)
= 120 - 24 - 24
= 72
An easier way to think about it is:
You have 4 choices for the #1 spot, 3 choices for the #5 spot, 3 choices for the #3 spot, 2 choices for the #2 spot, and 1 choice for the #4 spot = 4*3*3*2*1 = 72
Nice trick, processing the spots in the sequence #1,#5,#3,#2,#4 . Trying them in the instinctive sequence #1,#2,#3,#4,#5 would've given you problems, I think!
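For what it's worth, the 72 can be confirmed by brute force (a quick sketch of my own, not part of the original thread):

```python
from itertools import permutations

# Count arrangements of cards 1..5 in which card '1' is at neither end.
count = sum(1 for p in permutations(range(1, 6)) if p[0] != 1 and p[-1] != 1)
print(count)  # 72, matching both 5! - 4! - 4! and 4*3*3*2*1
```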
how about these:
the 6 cabins at a camp are arranged so that there is exactly 1 straight path between each pair of them, but no 3 of them are on a straight path. What is the total number of such straight paths joining these cabins?
The cells of a certain type of bacteria increase in number by each splitting into 2 new cells every 30 minutes. At that rate, if a colony of this type of bacteria starts with a single cell, how many
hours elapse before the colony contains 2^11 cells?
I used total=pe^(rt) for this one and got some completely different answer.
Q1. Think of the cabins as being the points of a hexagon. There are 5 paths linking C1 to C2,C3,...,C6, then 4 paths linking C2 to C3,C4,..C6 etc. Answer is 5+4+3+2+1 = 15 paths.
Q2. After 1 half_hour period, you have 2^1 cells; after 2 half_hour periods, you have 2^2 cells, and so on. So, after 11 half_hour periods (or 5.5 hours) you have 2^11 cells.
{"url":"http://www.collegeconfidential.com/discus/messages/69/71762.html","timestamp":"2014-04-21T12:16:00Z","content_type":null,"content_length":"14482","record_id":"<urn:uuid:0281abb5-f40a-4761-b4cc-d0e51dfac08e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Maplewood, NJ Statistics Tutor
Find a Maplewood, NJ Statistics Tutor
...Finally, I studied mathematics at Princeton University, where I again encountered this material. I'm very familiar with it. I have a strong background in mathematics, including statistics, and have applied this knowledge to the statistical study of economics: econometrics.
40 Subjects: including statistics, chemistry, reading, physics
...I can show you that too. Many classes also call on you to use SPSS or SAS to run analyses. Indeed, you’ll want to have the computer do most of the work instead of doing it by hand.
4 Subjects: including statistics, SPSS, SAS, biostatistics
...I acquired my Bachelor's with high honors (GPA 3.72) in Mathematics and Economics as well as my Master's in Statistics (GPA 4.00) from Rutgers University. I think that anyone can learn and love mathematics when the material is delivered in a fashion that is conducive to the person's understandi...
18 Subjects: including statistics, calculus, algebra 1, algebra 2
...Through the years, I have seen students do well in class, complete their homework, review the materials before an exam and yet, fail the test. This is because they are trying to memorize the
information rather than understand the concepts and apply them appropriately. Students need to appreciate the significance of mathematics.
14 Subjects: including statistics, calculus, algebra 1, algebra 2
I graduated from Columbia University, and hold both New Jersey and New York (7-12) Teaching Certification. I used to be an actuary and have a very deep understanding of Mathematics. I have years of experience teaching Middle/High School Mathematics and completed my student teaching in Stuyvesant...
14 Subjects: including statistics, calculus, geometry, algebra 1
Vauxhall statistics Tutors | {"url":"http://www.purplemath.com/maplewood_nj_statistics_tutors.php","timestamp":"2014-04-17T10:47:33Z","content_type":null,"content_length":"24131","record_id":"<urn:uuid:f6879f86-4307-4a07-93ec-259995441278>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: sum matrix element in different array
Replies: 2 Last Post: Jan 3, 2013 1:59 PM
Saad sum matrix element in different array
Posted: Jan 3, 2013 7:24 AM
Dear All,
I would appreciate some help on this one please.
I am summing matrix elements in different arrays. Say we have 2 arrays:
Price is a cell composed of different matrices. The same for Coupon.
Now the sizes of the matrices inside "Price" and "Coupon" are not always equal. Of course, if I run code similar to this...
for i=1:10
    Price{1,i} + Coupon{1,i};
end
...I get an error, which is what you would expect, because the first matrices of "Price" and "Coupon" are of different sizes, ....
Now I would like to run an if statement to say: if the matrix sizes of Price and Coupon are equal, then sum them; if they are not equal, then drop the extra elements (at the end) and just sum the first elements. How can I do that, please?
example to illustrate:
size(Price{1,1})=size(Coupon{1,1})= (1000,1)
size(Price{1,2})=(995,1) < size(Coupon{1,2})= (1000,1)
I would like to ignore the extra elements (at the end) of Coupon{1,2} and then sum the remaining elements (i.e. from 1:995) with Price{1,2}. How can I do that, please?
Thanks a lot
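The replies themselves are not reproduced in this archive view, but the truncate-and-sum logic being asked for can be sketched like this (a sketch of my own, in plain Python rather than MATLAB, with made-up toy data):

```python
# Plain Python lists stand in for the column vectors inside the Price and
# Coupon cell arrays; the second pair has deliberately mismatched lengths.
price  = [[0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0, 4.0]]
coupon = [[1.0] * 4,            [1.0] * 7]

totals = []
for p, c in zip(price, coupon):
    n = min(len(p), len(c))                # length of the overlapping part
    totals.append([a + b for a, b in zip(p[:n], c[:n])])  # drop extras, then sum

print(totals)  # [[1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0, 5.0]]
```

In MATLAB the same idea would presumably be something like `n = min(numel(Price{1,i}), numel(Coupon{1,i})); Price{1,i}(1:n) + Coupon{1,i}(1:n)` inside the loop.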
Date Subject Author
1/3/13 sum matrix element in different array Saad
1/3/13 Re: sum matrix element in different array dpb
1/3/13 Re: sum matrix element in different array Saad | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2425956&messageID=7950975","timestamp":"2014-04-21T09:55:10Z","content_type":null,"content_length":"19268","record_id":"<urn:uuid:d189f852-f7ec-4149-be54-5a24c7f77084>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
Seq -package
Sequential strategies provide ways to compositionally specify the degree of evaluation of a data type between the extremes of no evaluation and full evaluation. Sequential strategies may be viewed as complementary to the parallel ones (see module Control.Parallel.Strategies).
General-purpose finite sequences.
General purpose finite sequences. Apart from being finite and having strict operations, sequences also differ from lists in supporting a wider variety of operations efficiently. An amortized running
time is given for each operation, with n referring to the length of the sequence and i being the integral index used by some operations. These bounds hold even in a persistent (shared) setting. The
implementation uses 2-3 finger trees annotated with sizes, as described in section 4.2 of Ralf Hinze and Ross Paterson, "Finger trees: a simple general-purpose data structure", Journal of
Functional Programming 16:2 (2006) pp 197-217. http://www.soi.city.ac.uk/~ross/papers/FingerTree.html Note: Many of these operations have the same names as similar operations on lists in the Prelude.
The ambiguity may be resolved using either qualification or the hiding clause.
a name for Control.Seq.Strategy, for documentation only.
This provides String instances for RegexMaker and RegexLike based on Text.Regex.Posix.Wrap, and a (RegexContext Regex String String) instance. To use these instances, you would normally import Text.Regex.Posix. You only need to import this module to use the medium-level API of the compile, regexec, and execute functions. All of these report errors by returning Left values instead of undefined or error or fail.
Evaluates its first argument to head normal form, and then returns its second argument as the result.
Evaluate each action in the sequence from left to right, and collect the results.
Evaluate each action in the sequence from left to right, and ignore the results.
Evaluate each monadic action in the structure from left to right, and ignore the results.
Evaluate each action in the structure from left to right, and ignore the results.
Evaluate the elements of an array according to the given strategy. Evaluation of the array bounds may be triggered as a side effect.
Evaluate the bounds of an array according to the given strategy.
Evaluate the elements of a foldable data structure according to the given strategy.
Evaluate each element of a list according to the given strategy. This function is a specialisation of seqFoldable to lists.
Evaluate the first n elements of a list according to the given strategy.
Evaluate the nth element of a list (if there is such) according to the given strategy. The spine of the list up to the nth element is evaluated as a side effect.
Show more results | {"url":"http://www.haskell.org/hoogle/?hoogle=Seq+-package","timestamp":"2014-04-23T16:33:46Z","content_type":null,"content_length":"21364","record_id":"<urn:uuid:5299dbac-6fb5-4a70-88e3-61ae54532324>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding speed of a rock, given height acceleration and velocity
April 12th 2011, 10:00 AM
I have no idea how to do this question. It was on a test so I don't know the correct answer, and when the prof gave out the solutions for the test I was away, so any help with this would be greatly appreciated :)
A rock is thrown upward from the top of a 100m building at 10 m/s. At what speed will it hit the ground? (The acceleration due to gravity is -9.8 m/s^2.)
I know this is probably the easiest question ever, but I'm having trouble seeing what to do.
April 12th 2011, 10:10 AM
Are you familiar with the equation
V = u + at
If you google for this equation you will get some pointers to the answer
In passing, why the minus sign in -9.8 m/s^2?
It's a 2-part question: you need to find out how far up the rock will go first.
April 12th 2011, 10:15 AM
The only formulas we were taught for these types of problems were:
s = -4.9t^2 + V0t + S0
and then the derivative of that = velocity, so they're similar to the one you stated. But I'm confused because I have no idea what to plug in for t. :S
April 12th 2011, 10:37 AM
Okay, but I don't subscribe to 'plug and solve' maths; you need to understand the process, and it's not hard.
The equation you have provided looks like the second equation of motion, which is better written
S = ut + 0.5at^2
S = distance traveled,
u = initial velocity
a = acceleration
t = time
and for the first eq
V = velocity at time t under acceleration a with initial velocity of u
So for the first piece you know u, a and V, which is zero at top of flight.
So using the first equation you can get t, and then you can calculate S from my second equation; for the next piece you then have the total distance to the ground...
Give this a shot
April 12th 2011, 11:32 AM
Okay, I think I still did this wrong so bear with me (thank you very much by the way)
so i did:
v= u + at
10 = 100 + (-9.8) t
-90 = -9.8 t
-9.813 = t
s= ut + - 0.5 att
=100 (-9.813) + 0.5(-9.8)(-9.813)^2
= -981.3 + 413.204
= -568. 095
does this make sense? i guess since its traveling downward would speed be negative?
April 12th 2011, 12:26 PM
Hello, katyi!
A rock is thrown upward from the top of a 100m building at 10 m/s.
At what speed will it hit the ground?
(The acceleration due to gravity is -9.8 m/s^2)
You know the formula: . $s \;=\;s_o + v_ot - 4.9t^2$
We are given: . $s_o = 100,\;v_o = 10$
. . So we have: . $s \;=\;100 + 10t - 4.9t^2$
"Hit the ground" means: $s = 0.$
. . Hence: . $100 + 10t - 4.9t^2 \:=\:0 \quad\Rightarrow\quad 4.9t^2 - 10t - 100 \:=\:0$
Quadratic Formula: . $t \;=\;\dfrac{\text{-}(\text{-}10) \pm \sqrt{(\text{-}10)^2 - 4(4.9)(\text{-}100)}}{2(4.9)}$
. . $\displaystyle t \;=\;\frac{10 \pm\sqrt{2060}}{9.8} \;=\;\begin{Bmatrix}5.65 \\ \text{-}3.61 \end{Bmatrix}$
The rock hits the ground 5.65 seconds after it was thrown.
The velocity of the rock is given by: . $v \:=\:10 - 9.8t$
When $t = 5.65,\; v \:=\:10-9.8(5.65) \:=\:-45.37$
The rock hits the ground at 45.37 m/s.
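The arithmetic above is easy to double-check numerically (a sketch of my own, not part of the original thread):

```python
import math

v0, s0, g = 10.0, 100.0, 9.8    # throw speed (up), building height, gravity

# Impact time: solve s0 + v0*t - 4.9*t^2 = 0, i.e. 4.9*t^2 - 10*t - 100 = 0.
a, b, c = 4.9, -v0, -s0
t = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

v_impact = v0 - g * t           # velocity at impact (negative = downward)

# One-step energy shortcut: v^2 = v0^2 + 2*g*s0, no time of flight needed.
v_energy = -math.sqrt(v0 ** 2 + 2 * g * s0)

print(round(t, 2), round(v_impact, 2), round(v_energy, 2))
```

Carrying full precision for t gives an impact speed of about 45.39 m/s (the thread's 45.37 comes from rounding t to 5.65 first), which agrees exactly with the energy-method value sqrt(2060).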
April 12th 2011, 12:36 PM
No bother with trying to help here. The solution offered above seems to ignore the stone being thrown up, but maybe not.
You need to get a handle on what's what here, and maybe read both the question and my posts a bit closer.
I also would like you to pay attention to the units as we go
You write
v= u + at
10 = 100 + (-9.8) t
We are told that 10 is initial velocity and in the formula V is final velocity so .....:)
To get the show on the road
V = u + at
0 = 10 + (-9.81)t
which gives t = .. well u tell me
Let's say the answer was 1
S = ut + 0.5at^2
S = distance traveled,
u = initial velocity
a = acceleration
t = time
S = 10*1 + 0.5(-9.81)*1*1
So now the stone is 5 m above the top of the building, at zero velocity, and we know that
S = ut + 0.5at^2
S = distance traveled,
u = initial velocity
a = acceleration
t = time
So solve for t
105 = 0*t + 0.5*9.81*t^2
and this goes into
V = u + at
April 12th 2011, 02:05 PM
Soroban's solution is correct in all aspects. However I will note that there is a more straightforward way to obtain the solution.
$v^2 = v_0^2 + 2a(s - s_0)$
$v^2 = (10)^2 - 2g(0 - 100)$
and note that when you take the square root at the end you select the negative solution since we are assuming positive is upward.
April 12th 2011, 02:20 PM
In this case then, what's the solution if the stone is just dropped from 100m?
Given the difficulty the OP was having with inserting the right values into simple equations a more step by step approach seemed a little more appropriate.
April 12th 2011, 06:08 PM
Then, using my equation, v0 = 0 m/s.
I was simply pointing out that we don't need to find the max height to do the problem. It can be done in one step.
April 15th 2011, 12:34 PM
Essentially, topsquark is using "conservation" of energy. The potential energy, 100 meters above the ground, is 100mg, where m is the mass of the rock and g = 9.8 m/s^2 is the acceleration due to gravity. The kinetic energy, at 10 m/s, is (1/2)m(10^2) = 50m. The total energy is 100mg + 50m = (980 + 50)m = 1030m Joules. At the ground the potential energy is 0, so all of that energy has converted
to kinetic energy: (1/2)mv^2= 1030 so v^2= 2060. | {"url":"http://mathhelpforum.com/math-topics/177647-finding-speed-rock-given-height-acceleration-velocity-print.html","timestamp":"2014-04-18T20:07:28Z","content_type":null,"content_length":"15897","record_id":"<urn:uuid:87e62207-7cd1-4b46-abf1-8c18d06ef488>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00537-ip-10-147-4-33.ec2.internal.warc.gz"} |
matematicasVisuales | Plane developments of geometric bodies (4): Cylinders cut by an oblique plane
The solid cut from an infinite circular cylinder by two planes is a cylindrical segment or a truncated cylinder. The simplest case is when one of the cutting planes is perpendicular to the axis of
the cylinder. Then the cylindrical segment has a circular base.
The main interest of this page is to see how a truncated cylinder can be developed into a plane.
This is another example:
The volume of a cylindrical segment it is easy to obtain if we notice that two copies of the cylindrical segment one of them turned upside-down, together form a cylinder.
If the cutting plane is not perpendicular to the axis, the section is an ellipse. An ellipse is commonly defined as the locus of points P such that the sum of the distances from P to two fixed points
F1, F2 (called foci) are constant.
We are going to follow Hilbert and Cohn-Vossen's book 'Geometry and the Imagination' to see a wonderful demonstration of this fact:
"A circular cylinder intersects every plane at right angles to its axis in a circle. A plane not at right angles to the axis nor parallel to it intersects the cylinder in a curve that looks like an
ellipse. We shall prove this curve really is an ellipse. To this end, we take a sphere that just fits into the cylinder, and move it within the cylinder until it touches the intersecting plane (Fig.
Hilbert and Cohn-Vossen. Geometry and the Imagination. Chelsea Publishing Company. pag.7.
"We then take another such sphere and do the same thing with it ont the other side of the plane. The spheres touch the cylinder in two circles and touch the intersecting plane at two points, F1 and
F2. Let B be any point on the curve of intersection of the plane with the cylinder. Consider the straight line through B lying on the cylinder (i.e. parallel to the axis). It meets the circle of
contact of the spheres at two points P1 and P2. BF1 and BP1 are tangents to a fixed sphere through a fixed point B, and all such tangents must be equal, because of the rotational symmetry of the
sphere. Thus BF1=BP1; and similarly BF2=BP2. It follows that
But by the rotational symmetry of our figure, the distance P1P2 is independent of the point B on the curve. Therefore BF1+BF2 is constant for all points B of the section; i.e. the curve is an ellipse
with foci at F1 and F2."
"The fact that we have just proved can also be formulated in terms of the theory of projections as follows: The shadow that a circle throws onto an oblique plane is an ellipse if the light rays are
perpendicular to the plane of the circle." (Hilbert and Cohn-Vossen. Geometry and the Imagination)
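Hilbert and Cohn-Vossen's argument can also be checked numerically (a sketch of my own; the unit cylinder and the cutting plane z = x are an arbitrary choice of example):

```python
import math

# Unit cylinder x^2 + y^2 = 1 cut by the oblique plane z = m*x (here m = 1).
# The two spheres inscribed in the cylinder that touch this plane (the
# Dandelin spheres) touch it at the foci F1, F2 of the section curve.
m = 1.0
c = math.sqrt(m * m + 1)                 # +/- c are the sphere centre heights
k = m * m + 1
F1 = (-c * m / k, 0.0, -c * m * m / k)   # tangency point of the lower sphere
F2 = ( c * m / k, 0.0,  c * m * m / k)   # tangency point of the upper sphere

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Sample points B on the section and check that BF1 + BF2 is constant.
sums = []
for i in range(12):
    t = 2 * math.pi * i / 12
    B = (math.cos(t), math.sin(t), m * math.cos(t))
    sums.append(dist(B, F1) + dist(B, F2))

print(max(sums) - min(sums) < 1e-12)     # constant sum => the curve is an ellipse
```

The constant sum comes out to 2*sqrt(2), which is exactly the distance P1P2 between the two contact circles measured along a generator, as in the proof.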
Plane net of pyramids cut by an oblique plane.
Plane developments of cones and conical frustum. How to calculate the lateral surface area.
Plane developments of cones cut by an oblique plane. The section is an ellipse.
Plane nets of prisms with a regular base with different side number cut by an oblique plane.
We study different prisms and we can see how they develop into a plane net. Then we explain how to calculate the lateral surface area.
Every ellipse has two foci and if we add the distance between a point on the ellipse and these two foci we get a constant.
The first drawing of a plane net of a regular dodecahedron was published by Dürer in his book 'Underweysung der Messung' ('Four Books of Measurement'), published in 1525 .
Transforming a circle we can get an ellipse (as Archimedes did to calculate its area). From the equation of a circle we can deduce the equation of an ellipse.
In his book 'On Conoids and Spheroids', Archimedes calculated the area of an ellipse. We can see an intuitive approach to Archimedes' ideas.
In his book 'On Conoids and Spheroids', Archimedes calculated the area of an ellipse. It si a good example of a rigorous proof using a double reductio ad absurdum.
Using Cavalieri's Principle we can calculate the volume of a sphere.
We can cut in half a cube by a plane and get a section that is a regular hexagon. Using eight of this pieces we can made a truncated octahedron.
Using eight half cubes we can make a truncated octahedron. The cube tesselate the space an so do the truncated octahedron. We can calculate the volume of a truncated octahedron.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the truncated octahedron.
The truncated octahedron is an Archimedean solid. It has 8 regular hexagonal faces and 6 square faces. Its volume can be calculated knowing the volume of an octahedron.
The volume of a tetrahedron is one third of the prism that contains it.
The volume of an octahedron is four times the volume of a tetrahedron. It is easy to calculate and then we can get the volume of a tetrahedron. | {"url":"http://www.matematicasvisuales.com/english/html/geometry/planenets/cylinderobliq.html","timestamp":"2014-04-16T16:11:47Z","content_type":null,"content_length":"24131","record_id":"<urn:uuid:c8e51694-5ee7-4851-a28a-0ba37135f15c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Addition Property of Equality
Hey, I am having a lot of trouble with the Addition Property of Equality. I really don't get it at all. Can someone help?
Thanks, iceking
Please reply with what your book means by this property. When you reply, please explain which part you're having trouble with.
Thank you!
stapel_eliz wrote:Please reply with what your book means by this property. When you reply, please explain which part you're having trouble with.
Thank you!
The definition is : "if a=b, then a+c=b+c"
The property says that, if you add the same thing to both sides of an equation, the equation is still true.
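A concrete instance may help (my own example, not from the thread):

```python
# Addition Property of Equality: if a == b, then a + c == b + c.
a = b = 7.5
for c in (-3, 0, 2, 100):
    assert a + c == b + c      # adding the same c to both sides keeps them equal

# Using the property to solve x - 3 = 5: add 3 to both sides to isolate x.
x = 5 + 3
assert x - 3 == 5              # the original equation still holds
print(x)  # 8
```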
stapel_eliz wrote:The property says that, if you add the same thing to both sides of an equation, the equation is still true. | {"url":"http://www.purplemath.com/learning/viewtopic.php?p=2901","timestamp":"2014-04-21T10:57:25Z","content_type":null,"content_length":"23876","record_id":"<urn:uuid:db666adb-5504-4ad5-8543-1dc2dd20170d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00289-ip-10-147-4-33.ec2.internal.warc.gz"} |
Universality Of The Local Eigenvalue Statistics For A Class Of Unitary Invariant Random Matrix Ensembles (1997)
by L. Pastur, M. Shcherbina
Documents Related by Co-Citation
200 Uniform asymptotics for polynomials orthogonal with respect to varying exponential weights and applications to universality questions in random matrix theory Comm – P Deift, T Kriecherbauer, K T-R McLaughlin, S Venakides, X Zhou - 1999
115 Semiclassical asymptotics of orthogonal polynomials, Riemann-Hilbert problem, and universality in the matrix – Pavel Bleher, Er Its - 1999
296 Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach (Courant Lecture Notes – P Deift - 2000
138 Strong asymptotics of orthogonal polynomials with respect to exponential weights – P Deift, T Kriecherbauer, K D T-R McLaughlin, S Venakides, X Zhou - 1999
109 The spectrum edge of random matrix ensembles, Nuclear Phys – P Forrester - 1993
244 Level-spacing distributions and the Airy kernel – Craig A. Tracy, Harold Widom - 1993
307 Logarithmic Potentials with External Field – E B Saff, V Totik - 1997
172 A steepest descent method for oscillatory Riemann–Hilbert problems: asymptotics for the MKdV equation – P. Deift, X. Zhou - 1993
30 Universality of the correlations between eigenvalues of large random matrices, Nucl. Phys. B402 – E Brezin, A Zee - 1993
101 Universality at the edge of the spectrum in wigner random matrices – A Soshnikov - 1999
140 Random Matrices, 2nd ed – M L Mehta - 1991
69 A Riemann-Hilbert approach to asymptotic problems arising in the theory of random matrix models, and also in the theory of integrable statistical mechanics – P A DEIFT, A R ITS, X ZHOU - 1997
347 On the distribution of the length of the longest increasing subsequence of random permutations – Jinho Baik, Percy Deift, Kurt Johansson - 1999
25 On the statistical mechanics approach in the random matrix theory: integrated density of states – A Boutet de Monvel, L Pastur, M Shcherbina - 1995
60 New results on the equilibrium measure for logarithmic potentials in the presence of an external field – P Deift, T Kriecherbauer, K T R McLaughlin - 1998
53 New results in small dispersion KdV by an extension of the steepest descent method for Riemann–Hilbert problems – P Deift, S Venakides, X Zhou - 1997
171 On the distribution of the roots of certain symmetric matrices – E Wigner - 1958
37 Correlation functions of random matrix ensembles related to classical orthogonal polynomials – T Nagao, M Wadati - 1991
57 Correlations between eigenvalues of a random matrix – F J Dyson - 1970 | {"url":"http://citeseerx.ist.psu.edu/viewdoc/similar?doi=10.1.1.55.3541&type=cc","timestamp":"2014-04-16T09:00:09Z","content_type":null,"content_length":"19851","record_id":"<urn:uuid:e560b52f-99b1-4823-88bd-b9a3adb680a3>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |
The force acting on a particle varies as in the figure below. (The...
Introduction: Force Questions!
More Details: The force acting on a particle varies as in the figure below. (The x axis is marked in increments of 2.00 m.)
Find the work done by the force as the particle moves across the following distances.
(a) from x = 0 m to x = 16.0 m
(b) from x = 16.0 m to x = 24.0 m
(c) from x = 0 m to x = 24.0 m
Thank you! Will rate LIFESAVER
Related questions | {"url":"http://www.thephysics.org/5006/the-force-acting-on-particle-varies-as-in-the-figure-below-the","timestamp":"2014-04-19T15:31:15Z","content_type":null,"content_length":"106443","record_id":"<urn:uuid:ee4cb4d9-4757-4aff-98dc-6bb95605dc77>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Harbor Acres, NY Algebra 2 Tutor
Find a Harbor Acres, NY Algebra 2 Tutor
...I took the test July 27th 2013. I have been accepted to medical school and am matriculating in August, though I currently tutor full time. I am excited at the idea of helping other students to
overcome the stressful burden of the MCAT.
24 Subjects: including algebra 2, chemistry, physics, ASVAB
...I taught high school students and privately tutored all levels, various subjects. I was brought up in Paris (France) and Toronto (Canada), thus I am perfectly bilingual. My main challenging
tutoring position was to tutor an 11 year old English speaking boy who joined the French Lycee in Toronto, Canada.
18 Subjects: including algebra 2, chemistry, physics, calculus
...The one semester course usually ends with Laplace Transform techniques and systems of differential equations. The geometric approach and dynamical systems can be included as well. I have
taught the course at Fairleigh Dickinson University.
19 Subjects: including algebra 2, reading, geometry, writing
...Later, as a special needs teacher in Uganda, I worked to support the literacy, math, and life skills of students with learning delays/disabilities including dyslexia and autism spectrum
disorders. Most recently, while teaching for the Pittsburgh Public Schools, I collaborated with special educat...
39 Subjects: including algebra 2, reading, Spanish, ESL/ESOL
As a former engineering student with a long time spent tutoring students of various ages and backgrounds, I believe I have all the tools and knowledge necessary to impart current mathematical models, concepts and applications, from a very basic approach to a level conceivable by any student. I hav...
20 Subjects: including algebra 2, physics, writing, algebra 1
University Gardens, NY algebra 2 Tutors | {"url":"http://www.purplemath.com/Harbor_Acres_NY_Algebra_2_tutors.php","timestamp":"2014-04-18T13:48:02Z","content_type":null,"content_length":"24454","record_id":"<urn:uuid:fb182d88-d7b0-4994-a802-2ad1c312aa9d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math 100 Problems
Math 100. Exploring Mathematics. Spring 2009.
Various problems
1. (Cookie Jar Problem) There was a jar of cookies on the table. Amanda was hungry because she hadn't had breakfast, so she ate half the cookies. Then Beth came along and noticed the cookies. She
thought they looked good, so she ate a third of what was left in the jar. Christine came by and decided to take a fourth of the remaining cookies with her to her next class. Then Daniel came dashing
up and took a cookie to munch on. When Eva looked into the cookie jar, she saw that there were two cookies left. "How many cookies were there in the jar to begin with?" she asked.
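(An aside for checking work, not part of the original worksheet.) Working backwards, each step scales the jar by 1/2, 2/3, then 3/4, so the jar ends with start/4 - 1 cookies. A short Python sketch searches for the start that leaves exactly 2:

```python
from fractions import Fraction

def cookies_left(start):
    n = Fraction(start)
    n -= n / 2    # Amanda eats half
    n -= n / 3    # Beth eats a third of what remains
    n -= n / 4    # Christine takes a fourth of the rest
    return n - 1  # Daniel takes one cookie

# Exact fractions rule out spurious integer-division matches.
answer = [s for s in range(1, 50) if cookies_left(s) == 2]  # [12]
```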
2. (Tile Problem) An artist is planning to construct a rectangular wall design from square tiles. The wall is 72 inches long and 42 inches wide. All the square tiles must be the same size, and the
length of the sides must be a whole number of inches.
• Find three different sizes of square tiles that could be used to completely fill the rectangular space, with no tiles overlapping and no tiles overhanging the border.
• Determine the smallest number of square tiles that could be used to fill the rectangular space.
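(A checking sketch, not part of the worksheet.) The possible square tile sizes are exactly the common divisors of 72 and 42, i.e. the divisors of their GCD; the fewest tiles come from the largest size:

```python
import math

length, width = 72, 42
g = math.gcd(length, width)                              # 6
tile_sizes = [d for d in range(1, g + 1) if g % d == 0]  # 1, 2, 3, and 6 inches
fewest_tiles = (length // g) * (width // g)              # 12 * 7 = 84 six-inch tiles
```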
3. (Boxes) A carpenter has three large boxes. Inside each box are two medium-sized boxes. Inside each medium-sized box are five small boxes. How many boxes are there altogether?
4. (Pick the Numbers) Given seven numbers 2, 3, 4, 5, 7, 10, 11, pick five of them that when multiplied together give 2310. Find as many different solutions as you can.
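(An aside.) Since the problem asks for as many solutions as possible, a brute-force check over all 21 five-number choices settles how many there are:

```python
from itertools import combinations
from math import prod

numbers = [2, 3, 4, 5, 7, 10, 11]
hits = [c for c in combinations(numbers, 5) if prod(c) == 2310]
# Since 2310 = 2*3*5*7*11, the only five-number choice is {2, 3, 5, 7, 11}.
```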
5. (Find the Area) Find the area of the figure shown below. Which of the problem strategies have you used? Find at least two (or more: as many as you can) different solutions.
6. (Geoboard) Create other shapes (similar to 5) using Geoboard and find their areas. Use as many different problem solving strategies as possible.
Exercises with tiles
11. If the area of the triangle is 1 square unit, what are the areas of the other three figures?
12. If the area of the hexagon is 1 square unit, what are the areas of the other three figures?
13. If the side of the triangle is 1 unit, what are the areas of all four figures?
14. If the area of the given hexagon is 1 square unit, what is the area of a hexagon each of whose sides is twice as long? What if each side is three times as long? Cover such hexagons with the above figures and verify.
Explorations with patterns
21. For each of the following sequences of figures:
• draw the next figure,
• find the number of squares/triangles in each figure,
• find the number of squares/triangles in the 50th figure in the sequence,
• find the area and the perimeter of each of the first five figures and of the 50th figure in the sequence,
• organize your data in a table as shown in (a).
│Figure in sequence │Number of squares│Area │Perimeter│
│1st │4 │4 square units│10 units │
│2nd │ │ │ │
│3rd │ │ │ │
│4th │ │ │ │
│5th │ │ │ │
│50th │ │ │ │
22. Create your own sequence of figures (using rectangles, triangles, hexagons, or any other shapes you like). Find the number(s) of shapes needed for the 50th figure in your sequence. Can you also
find its area and/or perimeter?
23. Find the following sums:
What do you notice? Do you think the pattern will continue? Why? Calculate 1+3+5+7+...+97+99.
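(An aside for checking the pattern in problem 23.) The partial sums of odd numbers are the perfect squares, and 1+3+...+99 has fifty terms:

```python
odds = list(range(1, 100, 2))                     # 1, 3, 5, ..., 99 -- fifty terms
partials = [sum(odds[:k]) for k in range(1, 8)]   # 1, 4, 9, 16, 25, 36, 49
total = sum(odds)                                 # 50**2 = 2500
```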
24. Calculate the following differences:
• 1/1 - 1/2
• 1/2 - 1/3
• 1/3 - 1/4
• 1/4 - 1/5
What do you notice? What is the value of 1/99-1/100?
25. Calculate the following sums (reduce your answers):
• 1/(1*2)
• 1/(1*2)+1/(2*3)
• 1/(1*2)+1/(2*3)+1/(3*4)
• 1/(1*2)+1/(2*3)+1/(3*4)+1/(4*5)
• 1/(1*2)+1/(2*3)+1/(3*4)+1/(4*5)+1/(5*6)
What do you notice? Do you think the pattern will continue? Why? (Hint: use problem 24 to write each fraction as a difference, then cancel terms.)
What is the value of 1/(1*2)+1/(2*3)+1/(3*4)+...+1/(98*99)+1/(99*100)?
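(An aside checking problems 24 and 25 together.) Exact fractions confirm that each difference collapses to a unit fraction, and that the telescoping sum reaches 99/100:

```python
from fractions import Fraction

# Problem 24: 1/n - 1/(n+1) equals 1/(n(n+1)).
diffs = [Fraction(1, n) - Fraction(1, n + 1) for n in range(1, 5)]

# Problem 25: rewriting each term as a difference makes the sum telescope to n/(n+1).
total = sum(Fraction(1, k * (k + 1)) for k in range(1, 100))  # 99/100
```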
26. The sequence 1, 1, 2, 3, 5, 8, 13, ... , where each number is equal to the sum of the two previous numbers, is called the Fibonacci sequence. The numbers in this sequence are called Fibonacci numbers.
• Calculate the next 4 Fibonacci numbers.
• Determine the parity (i.e. whether it is even or odd) of each number in the sequence and describe the pattern. Do you think the pattern will continue? Explain why. (That is, explain how you can
be sure that it will.)
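(A checking sketch for problem 26.) Generating the sequence shows the parity pattern odd, odd, even repeating with period 3 -- which must continue, since odd+odd=even, odd+even=odd, even+odd=odd:

```python
fib = [1, 1]
while len(fib) < 15:
    fib.append(fib[-1] + fib[-2])

parity = ['odd' if f % 2 else 'even' for f in fib]
# The next 4 Fibonacci numbers after 13 are 21, 34, 55, 89.
```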
27. Calculate the following sums:
• 1 and 1^3
• 1+2 and 1^3+2^3
• 1+2+3 and 1^3+2^3+3^3
• 1+2+3+4 and 1^3+2^3+3^3+4^3
• 1+2+3+4+5 and 1^3+2^3+3^3+4^3+5^3
Compare the two sequences you get. What do you notice? Describe the pattern in words and write a formula. Can you explain why this pattern always holds, no matter how far you go in the sums?
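(An aside.) The pattern in problem 27 is the identity (1+2+...+n)^2 = 1^3+2^3+...+n^3; a quick loop verifies it for small n:

```python
for n in range(1, 10):
    triangular = n * (n + 1) // 2                  # 1 + 2 + ... + n
    cubes = sum(k ** 3 for k in range(1, n + 1))   # 1^3 + 2^3 + ... + n^3
    assert cubes == triangular ** 2                # e.g. 1 + 8 + 27 = 36 = (1 + 2 + 3)**2
```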
Operations with fractions
31. Write and solve your own story problems involving fraction addition and subtraction (one problem for each operation, so you need two separate problems) similar to those in the article you've read.
32. Write and solve your own story problems involving fraction multiplication and division (one problem for each operation, so you need two separate problems) similar to those in the article you've read.
33. Two-thirds of a fish weighs 10 1/2 pounds. How heavy is the whole fish?
34. A suit is on sale for $180. What was the original price of the suit if the discount was 1/4 of the original price? Explain how you found your answer and how you can check your answer.
35. James uses 1 1/2 cups of milk and 2 1/4 cups of flour for his favorite cookie recipe. This makes 60 cookies. How much milk and flour would he need to make 40 cookies?
Problems from competitions for 6-graders
41. The figure shown consists of 8 congruent squares. The perimeter of the figure is 36 units. What is the area of the figure?
42. The cafeteria sells each apple at one price and each banana at another price. For 1 apple and 3 bananas Jose pays $2.05. For 4 apples and 2 bananas April pays $2.70. Maria buys 2 apples and 2
bananas. How much does she have to pay?
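(An aside, not the intended grade-level method.) Problem 42 is a system of two equations; working in cents keeps the arithmetic exact:

```python
# Let a, b be the prices in cents: a + 3b = 205 and 4a + 2b = 270.
# Multiply the first equation by 4 and subtract the second: 10b = 4*205 - 270.
b = (4 * 205 - 270) // 10   # banana: 55 cents
a = 205 - 3 * b             # apple: 40 cents
maria_pays = 2 * a + 2 * b  # 190 cents, i.e. $1.90
```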
43. There are 8-legged spiders and 6-legged flies in a room. There are 80 creatures in total, and the total number of legs is 604. How many more spiders than flies are in the room?
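(A checking sketch for problem 43.) If all 80 were flies there would be 480 legs, and each spider adds 2 more:

```python
# spiders + flies = 80 and 8*spiders + 6*flies = 604.
spiders = (604 - 6 * 80) // 2   # 62
flies = 80 - spiders            # 18
surplus = spiders - flies       # 44 more spiders than flies
```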
44. A tractor has a 14-gallon gasoline tank. The tractor starts with a full tank of gasoline. It runs out of gasoline when it is done plowing 3/5 of a field. How many gallons of gasoline does the tractor need to plow the whole field?
45. If the length of a rectangle is reduced by 50% and the width of the rectangle is increased by 50%, how does the area of the rectangle change?
46. An animal walks 30 feet in 5 seconds. What is the speed of this animal in miles per hour?
Natural numbers, Integer numbers, and operations with them
51. Use the base 10 manipulatives to calculate:
52. Represent the following problems on a number line.
• 2+4
• 8-3
• 4-5
• -3+5
• -3-5
• 4+(-6)
• -1+(-3)+2+6-5
53. Place the digits 1, 2, 3, 6, 7, and 8 in the boxes below to obtain
(a) the greatest possible sum;
(b) the smallest possible sum.
Explain your strategy!
54. A group of second grade students are playing the following game. They write digits from 1 to 9 in a row, and put a "+" or a "-" between every two consecutive digits. Then they calculate the
result. For example,
1+2-3+4+5-6+7+8-9=9, etc.
The goal is to come up with a sequence of +/- signs for each answer between 1 and 10. (The person who first comes up with 10 sequences, one for each answer, will win.) Is it actually possible to do this?
55. Modify the game in problem 54 as follows: allow any order of the 9 digits, e.g. 4+8-1+5-7+3+6-2-9. Will your answer to the question in problem 54 change?
56. What if in the game in problem 54 we allow "combining" two or more consecutive digits, to form two- or more digit numbers. E.g., 123-4-56-7+8-9 will now be allowed. Will your answer to the
question in problem 54 change?
57. Read the following article: Understanding Subtraction (part of section 3.1 from "Mathematics for Elementary School Teachers" by T. Bassarear, a link is available on the course schedule page).
Answer the following question: Do you agree that "trading" (or "renaming" or "regrouping") is a better term than "carrying" or "borrowing"? Why or why not? If so, which term do you like most and why?
58. Place the digits 1, 2, 3, 6, 7, 8 in the boxes shown below to obtain
(a) the greatest possible difference;
(b) the smallest possible positive difference.
Explain your strategy!
59. A mule and a horse were carrying some bales of cloth. The mule said to the horse, "If you give me one of your bales, I shall carry as many as you." "If you give me one of yours," replied the
horse, "I will be carrying twice as many as you." How many bales was each animal carrying? Find as many different solutions as you can.
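(An aside.) Taken literally, the two statements in problem 59 pin down a unique answer, which a brute-force search confirms:

```python
# Mule carries m bales, horse carries h.
# If the horse gives the mule a bale, they carry equal loads: m + 1 == h - 1.
# If the mule gives the horse a bale, the horse carries twice as much: h + 1 == 2*(m - 1).
solutions = [(m, h) for m in range(1, 100) for h in range(1, 100)
             if m + 1 == h - 1 and h + 1 == 2 * (m - 1)]
# [(5, 7)]: the mule carries 5 bales and the horse 7.
```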
Divisibility. Prime factorization. GCF and LCM.
61. Using divisibility tests by 2, 3, 4, 5, and 9, explain how to determine whether a number is divisible by
Important: In questions 62 and 63, provide an explanation for each answer! A correct answer without explanation will not receive full credit.
Examples of explanations: The number 4005 is not divisible by 6 because it is not divisible by 2. The number 4005 is divisible by 15 since it is divisible by both 3 and 5.
62. Which of the following numbers divide the number 2,010?
6, 12, 15, 18
63. Which of the following numbers divide the number 1,245?
6, 12, 15, 18
64. Find prime factorizations of 2,010 and 1,245.
65. Find the greatest common factor and the least common multiple of 2,010 and 1,245.
66. The GCF of 66 and x is 11; the LCM of 66 and x is 858. Find x.
67. The GCF of two numbers m and n is 12, their LCM is 600, and both m and n are less than 500. Find m and n.
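(A checking sketch for problems 64-66.) Python's `math.gcd` and the identity gcd(a, b) * lcm(a, b) = a * b make these quick to verify:

```python
import math

def lcm(a, b):
    return a * b // math.gcd(a, b)

# Problems 64-65: 2010 = 2*3*5*67 and 1245 = 3*5*83.
gcf = math.gcd(2010, 1245)   # 15
mult = lcm(2010, 1245)       # 166830

# Problem 66: gcd(66, x) * lcm(66, x) = 66 * x, so x = (11 * 858) / 66.
x = 11 * 858 // 66           # 143
```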
Real numbers
71. Real numbers A, B, C, D, E, and F are represented by points on the number line below.
Determine the following if each answer is one of the numbers shown:
• D+E
• C+D
• B-D
• E^2
• BE
• A^2
• E/D
• B/F
72. First read this chapter about real numbers.
Daniel writes 0.4<0.13 "because 4 is less than 13". Is he correct or wrong? Explain the correct reasoning
• by drawing hundredths charts (as shown in the chapter you've read),
• by a number line picture, and
• by converting to fractions.
• For which of your explanations does it help to write 0.4 as 0.40?
Recall: to solve an equation means to find all solutions, i.e. all numbers x that make the equality true.
Solve the following equations over the set of real numbers (i.e. find all real solutions of these equations). Show all steps of your solutions. Make sure that you can justify each step (recall that
you can add the same number to both sides of an equation, subtract the same number from both sides of an equation, multiply both sides of an equation by the same number, or divide both sides of an
equation by the same nonzero number. Be careful with division: if dividing by a variable/expression, remember that division is legal only when the variable/expression is nonzero!)
81. 2x+5=19
82. 2x+5=4x+11
83. 3x=5x
84. 3x+(x/2)=5-(x/3)
85. 3/x=9
86. x^2=16
87. x^3=-27
88. (x-3)(x+5)=0
89. x^2+2x-35=0
90. 2x^2+7x-4=0
91. 3x^2-4x-5=0
92. 2x^2-5x+7=0
93. (x+2)(x-5)=6
94. x(x-3)=(2x+1)(x+2)-8
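(A study aid, not part of the worksheet.) Several of equations 89-92 are quadratics; the quadratic formula can be scripted to check hand solutions, including the case of a negative discriminant:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0 (complex when the discriminant is negative)."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# Problem 89: x^2 + 2x - 35 = (x + 7)(x - 5), so the roots are 5 and -7.
r1, r2 = quadratic_roots(1, 2, -35)
# Problem 92 has a negative discriminant (25 - 56), hence no real solutions.
c1, c2 = quadratic_roots(2, -5, 7)
```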
Problems involving equations (use equations to solve these problems)
101. In a small town, three children deliver all the newspapers. Abby delivers 3 times as many papers as Bob, and Connie delivers 13 more than Abby. If the three children deliver a total of 496
papers, how many papers does each deliver?
102. The formula for converting degrees Celsius (C) to degrees Fahrenheit (F) is F=(9/5)C+32. Your European friend asks you how warm it is now in Fresno. Your outdoor thermometer shows 80 degrees Fahrenheit. How many degrees Celsius is it?
103. Two silk butterflies and a silk rose cost $18. One silk butterfly and a silk rose cost $11. What is the cost of each?
104. A teacher instructed her class as follows: Take any number and add 15 to it. Now multiply that sum by 4. Next subtract 8 and divide the difference by 4. Now subtract 12 from the quotient and
tell me the answer, I will tell you the original number. Analyze the instructions to see how the teacher was able to determine the original number.
105. Make up your own procedure similar to that in Problem 104. Test it by asking your group members to follow the steps you give them and tell you the result.
106. For an event at school, 812 tickets were sold for a total of $1912. If students paid $2 per ticket and nonstudents paid $3 per ticket, how many student tickets were sold?
Pythagorean theorem
111. One leg of a right triangle is 2 cm longer than the other leg, and the hypotenuse is 3 cm longer than the shorter leg. Find all sides of the triangle. (Hint: use the Pythagorean Theorem.)
112. Estimate the number Pi as follows: consider a circle of radius 1. Its circumference is 2Pi.
• Inscribe a regular hexagon into the circle. Find its perimeter. (Hint: divide the hexagon into 6 equilateral triangles.) Let's denote this perimeter p[1].
• Circumscribe a regular hexagon. Find its perimeter. (Hint: divide the hexagon into 6 equilateral triangles. Use the Pythagorean Theorem to find the length of the sides of these triangles.) Let's
denote this perimeter p[2].
• Notice that p[1] < 2Pi < p[2]. What inequality do you obtain for Pi?
Remark: a similar procedure can be done with regular polygons with more than 6 sides. The more sides, the harder and longer the calculations, but the better the estimate. As many groups indicated in their projects, Archimedes used polygons with up to 96 sides.
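(An aside.) This pi-squeezing can be tabulated numerically; note this uses the library's trig functions (which already know pi), so it is a check of the geometry, not an independent derivation. For a unit circle the inscribed and circumscribed half-perimeters are n*sin(pi/n) and n*tan(pi/n):

```python
import math

def pi_bounds(n):
    """Half-perimeter bounds on pi from regular n-gons around a unit circle."""
    lo = n * math.sin(math.pi / n)   # inscribed n-gon: p1 / 2
    hi = n * math.tan(math.pi / n)   # circumscribed n-gon: p2 / 2
    return lo, hi

# n = 6 reproduces the hexagon estimate: 3 < pi < 2*sqrt(3) ~ 3.4641.
hex_lo, hex_hi = pi_bounds(6)
# Archimedes' 96-gon tightens the sandwich considerably.
arch_lo, arch_hi = pi_bounds(96)
```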
Volume and surface area
121. Find the height of a regular tetrahedron with edge 1 cm. Find its volume.
122. Find the volume of the ice cream cone pictured.
123. Find the volume of the house pictured.
124. Find the surface area of the ice cream cone pictured above.
125. Find the surface area of the house pictured above.
131. An equilateral triangle with sides 2008 units long is divided into smaller equilateral triangles with sides 4 units long. How many of these small triangles are there in the big one?
132. How many times longer is the perimeter of the big triangle than the perimeter of each small triangle in problem 131?
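(A checking sketch for problems 131-132.) Each side of the big triangle holds 2008/4 = 502 small sides, a side-k equilateral triangle splits into k^2 unit copies (row i contributes 2i - 1 of them), and perimeters scale linearly:

```python
side_big, side_small = 2008, 4
k = side_big // side_small   # 502 small sides along each big side
count = k ** 2               # number of small triangles
perimeter_ratio = k          # big perimeter is 502 times a small one
```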
133. In problems 123 and 125, you found the volume and surface area of a house. (Correct answers are: Volume = 24000 ft^3, Surface Area = 5360 ft^2.) What are the volume and surface area of the house
shown below?
Proportional Reasoning
141. Suppose you are given an unfamiliar book. You have 10 minutes to estimate (not just guess!) how long it will take you to read this book. How can you do this? (Hint: use proportional reasoning!)
Solve each of the following problems in at least two different ways. At least one of your solutions should be very simple (accessible to a first-grader, i.e. should not use fractions or anything a
first-grader will not understand).
142. Aby can run 5 laps in 12 minutes. Ben can run 6 laps in 14 minutes. Who is the faster runner?
143. Two camps of Scouts are having pizza parties. The Bunny Camp ordered enough so that every 5 campers will have 2 pizzas. The Fox Camp ordered enough so that every 7 campers will have 3 pizzas. If
within each camp the pizzas are split equally, campers of which camp will eat more pizza?
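(An aside -- this exact-fraction comparison is a third method, beyond the two the problems ask for.) Both problems 142 and 143 reduce to comparing rates:

```python
from fractions import Fraction

# Problem 142: laps per minute, compared exactly (35/84 vs 36/84).
aby, ben = Fraction(5, 12), Fraction(6, 14)
faster_runner = 'Ben' if ben > aby else 'Aby'

# Problem 143: pizzas per camper (14/35 vs 15/35).
bunny, fox = Fraction(2, 5), Fraction(3, 7)
more_pizza = 'Fox' if fox > bunny else 'Bunny'
```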
This page was last revised on 15 January 2009. | {"url":"http://zimmer.csufresno.edu/~mnogin/math100spring09/problems-math100.html","timestamp":"2014-04-20T16:18:28Z","content_type":null,"content_length":"23122","record_id":"<urn:uuid:76bbb783-9dfe-43bf-a1f5-92df89bf7b57>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
Peoria, AZ Calculus Tutor
Find a Peoria, AZ Calculus Tutor
...Everyone learns differently (memorization, repetition, rationalization) and at different rates so it is important for me to learn about my client first, then design a tutorial program that
fits their learning style so they can readily receive the tools I am providing and incorporate them into the...
21 Subjects: including calculus, reading, geometry, algebra 1
...I have tutored many individuals, and inspired in them the ability to go on and do more complex math, and actually get excited about it! My enjoyment of physics often feeds through to them, and I will take as much time as necessary for the student to feel comfortable and confident in the subject. ...
17 Subjects: including calculus, reading, physics, algebra 2
...I am currently spending the fall semester of 2013 as an Estrella Mountain Community College math tutor. I am tutoring in algebra, trigonometry, pre-calculus, and calculus I-III. I very much enjoy what I am doing, using my knowledge to help other people, and it proves to be a profound learning experience for me in terms of my listening and communication skills.
7 Subjects: including calculus, geometry, algebra 1, algebra 2
...I have previous experience tutoring through Huntington Learning Center where I worked for several years in my free time and as a tutor at Arizona State University while I was working on my
undergraduate degree. I have also done private one on one and group tutoring. My goal as a tutor is to make subject material easy, understandable, and retain-able.
20 Subjects: including calculus, physics, computer programming, C
...This is done by providing them with thoughtful sequential questions. This road map I take with the students will enable them to continue the thought process without me there. That is, when it comes to the homework and tests on their own, they should ask themselves, "What questions did Dr.
20 Subjects: including calculus, chemistry, physics, geometry
Related Peoria, AZ Tutors
Peoria, AZ Accounting Tutors
Peoria, AZ ACT Tutors
Peoria, AZ Algebra Tutors
Peoria, AZ Algebra 2 Tutors
Peoria, AZ Calculus Tutors
Peoria, AZ Geometry Tutors
Peoria, AZ Math Tutors
Peoria, AZ Prealgebra Tutors
Peoria, AZ Precalculus Tutors
Peoria, AZ SAT Tutors
Peoria, AZ SAT Math Tutors
Peoria, AZ Science Tutors
Peoria, AZ Statistics Tutors
Peoria, AZ Trigonometry Tutors | {"url":"http://www.purplemath.com/peoria_az_calculus_tutors.php","timestamp":"2014-04-19T02:15:47Z","content_type":null,"content_length":"24064","record_id":"<urn:uuid:9ef07be2-1a7e-45a7-9e0b-63c89d01ca00>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
Programming Help
11-13-2006 #1
Registered User
Join Date
Nov 2006
I'm just getting into C++ programming and I need some help getting these programs done. Here are the problems:
Program 1
Write a program that will calculate the number of students going on a field trip to the zoo and the number going on a field trip to the museum. The information needed is contained in a data file
named “trip.txt”. The data file contains the first name of each student and next to each student’s name is a 1 if the student is going to the zoo or a 2 if the student is going to the museum.
Your program should calculate and print to the screen the total number of students going to the zoo and the total number of students going to the museum. Your program should contain at least one
while loop.
Place the following information in your data file.
Here’s the data:
Sam 1
Tim 2
Lou 1
Scott 1
Samantha 2
Sara 2
Lee 2
Malik 1
Kita 1
Bob 1
Sue 1
Ming 2
Tom 1
Larry 1
Bea 2
Jay 1
Program 2
Write an interactive C++ program that accepts as input any two positive integers (the first integer should be less than the second integer). The program should determine and print all the odd
numbers between the first integer and the second integer.
For example, if 3 is input for the first number and the input for the second number is 20, the program would print:
Your program should work for any two positive integers (the first integer should be less than the second integer).
Might as well close this thread already. Read the rules or post some code, whippersnapper!
Videogame Memories!
A site dedicated to keeping videogame memories alive!
Share your experiences with us now!
"We will game forever!"
it will not let me post my tags
What? Copy and paste! It's not difficult!
Silence is better than unmeaning words.
- Pythagoras
My blog
You need to post code between code tags. [code] code here [/code]
Seek and ye shall find. quaere et invenies.
"Simplicity does not precede complexity, but follows it." -- Alan Perlis
"Testing can only prove the presence of bugs, not their absence." -- Edsger Dijkstra
"The only real mistake is the one from which we learn nothing." -- John Powell
Other boards: DaniWeb, TPS
Unofficial Wiki FAQ: cpwiki.sf.net
My website: http://dwks.theprogrammingsite.com/
Projects: codeform, xuni, atlantis, nort, etc.
{"url":"http://cboard.cprogramming.com/cplusplus-programming/85336-programming-help.html","timestamp":"2014-04-16T22:30:48Z","content_type":null,"content_length":"52232","record_id":"<urn:uuid:0922e90c-0735-465e-9eb7-c1c8a1853b8b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
270A: Introduction to Artificial Intelligence
ICS 270A: Introduction to Artificial Intelligence
• When: Tuesdays and Thursdays, 3:30 to 5.
• Where: CS 253
• Professor: Padhraic Smyth
• Email: smyth@ics.uci.edu
• Office Location: CS 414E
• Office Hours: Tuesdays, 10 to 12.
NOTE: THE FINAL EXAM AND SOLUTIONS FOR FALL 97 ARE NOW ONLINE (IN POSTSCRIPT FORMAT).
Topics covered will include search, logic, knowledge representation, probabilistic reasoning, decision theory, learning, and (as time permits) discussion of problems in natural language, vision, and
planning. Prerequisites are a basic understanding of computer science concepts (data structures, complexity, Boolean logic), a basic understanding of linear algebra and probability, and the ability
to program in a modern programming language such as C or C++.
1. Introduction and Background
What is artificial intelligence (AI)? AI from a rational agent perspective. Related fields: philosophy, psychology, mathematics, computer engineering, etc. Review of the history of AI. Rational
action and rational agents. Autonomous agents. Agent architectures and programs.
2. Problem-Solving by Search
□ Principles of Search
Goal and problem formulation. Searching for solutions. Types of search problems. Components of search problems. Abstraction.
□ Uninformed ("Blind") Search
Breadth-first, depth-first, uniform-cost, depth-limited, iterative-deepening, and bidirectional search techniques. Constraint satisfaction problems. Time-space complexity. Completeness and optimality.
□ Informed ("Heuristic") Search
Best-first, A*, iterative deepening A* (IDA*), and SMA*, search techniques. Heuristic functions. Search and optimization. Hill-climbing techniques.
□ Game Playing
Two player game trees, decision making with perfect and imperfect information, minimax principle, evaluation functions, search cutoff strategies, alpha-beta pruning, performance of
alpha-beta, state-of-the-art in game-playing programs.
3. Logical Knowledge Representation and Reasoning
□ Propositional Logic
□ First-Order Logic
□ Knowledge Bases
□ Inference
4. Probabilistic Knowledge Representation and Reasoning
□ Review of Probability Theory
Axioms of probability. Conditional probability. Bayes' rule and its application.
□ Probabilistic Reasoning with Belief Networks
Belief network semantics. Inference algorithms for singly-connected graphs. Inference in junction trees. Practical issues in building belief networks.
□ Decision-Theoretic Agents
Utility theory. Preferences and utility functions. Decision networks. Value of information.
5. Learning
□ General Principles
Representation, estimation. Inductive learning, prior knowledge, performance estimation. Learning logical descriptions. Probabilistic and statistical approaches.
□ Learning Problems and Solutions
Classification, function approximation, clustering, online learning, reinforcement learning. Learning with trees, neural networks, memory-based systems, statistical models.
6. Agents in the Real-World
□ Vision and Speech
Review of image processing and analysis techniques. Extracting information from images. Basic principles of speech recognition systems.
□ Natural Language
Grammars and their applications. Parsing algorithms. Stochastic models for handling ambiguity.
□ Planning
Planning problems and general solutions. Planning representations. Partial-Order Planning.
The required text is "Artificial Intelligence: A Modern Approach", by Stuart Russell and Peter Norvig, Prentice Hall, 1995.
• Homeworks
□ Bi-weekly homeworks, handed out Thursday in class, due at the beginning of class the following Thursday (hand them in at the start of class).
□ No late homeworks, solutions will be discussed in class after homework is handed in.
□ General discussion of homework problems with classmates allowed, details of problem must be worked out individually.
□ Important!
Homework solutions should be clear and to the point: you need to clearly convince me that you understand the solution.
• Computer Assignments
There will be two or three computer assignments/projects during the quarter. You can use whatever programming language you wish, although C or C++ is preferred. Reports will be due Thursday at
the start of class on the relevant week. You can hand in the report late, but will be graded out of 80%, 60%, etc., for every 1, 2, etc days that the report is late.
• Exams
Midterm and final
• Grading
Final grades will be a monotonic function of the sum of 30% of your homeworks, 30% of your computer assignments, and 40% of your midterm and final exams.
FOR NON-ICS MAJORS: HOW TO GET AN ICS UNIX ACCOUNT:
All projects must be working and running under an ICS Unix account to get project credit. Thus, all students in the class will need an ICS Unix account for this class. To get an ICS Unix account, see
the Lab Attendant in the 364 Hallway, CS Building: bring your ID card and it will take about 20 minutes to get you signed up, for you to read the ethical use of computing documents, and have your
account activated.
A list of Web resources about AI , organized by chapter in Russell and Norvig. Padhraic Smyth / smyth@ics.uci.edu | {"url":"http://www.ics.uci.edu/~smyth/courses/introai/","timestamp":"2014-04-24T03:15:53Z","content_type":null,"content_length":"6667","record_id":"<urn:uuid:42f6d81b-d307-4171-a0d2-1cff9ff9a615>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to find infinite limits of a function without plugging in values but instead alegbrically? - Homework Help - eNotes.com
How to find infinite limits of a function without plugging in values but instead alegbrically?
(1) Infinite limits -- a function is said to have an infinite limit at x=c if the function grows without bound (or decreases without bound) as `x -> c`.
The limit does not exist -- we say that the limit is `+- oo` , but this is just a description of how the limit fails to exist.
To determine if a rational function has an infinite limit, first divide out any common factors of the numerator and denominator. Then any factor of the denominator that has a value of zero at x=c
causes the function to have a vertical asymptote at x=c. The limit of the function at x=c will be positive or negative infinity. (To determine whether it is positive or negative consider the sign of
the numerator and denominator as `x->c^+,x->c^-` .
Ex `lim_(x->-2)(x^2+2x-8)/(x^2-4)`
Since the numerator is nonzero at x=-2 and the denominator is zero, the function has a vertical asymptote at x=-2; the function increases without bound as `x->-2^+` and the function decreases without
bound as `x->-2^-` .
(2) Limits at infinity -- Obviously you cannot substitute `+-oo` for x, so one method is to substitute increasingly larger (smaller) values for x.
Nonconstant polynomials have infinite limits at infinity.
Exponential functions will have infinite limits in one direction, and finite limits in the other direction. Ex `lim_(x->oo)2^(-x)=0` and `lim_(x->oo)2^(-x)+k=k` .
Rational functions -- Use `lim_(x->oo)c/(x^r)=0` to find a possible finite limit.
Ex: `lim_(x->oo)(2x-1)/(x+1)` We divide the numerator and denominator by the highest power of x in the expression -- in this case x: `lim_(x->oo)(2-1/x)/(1+1/x)=(2-0)/(1+0)=2`
There are many other specialized techniques, e.g. when a function has two horizontal asymptotes, etc...
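As a quick numeric sanity check of the two worked examples above (an aside, not part of the original answer), tabulating values near the point of interest shows the claimed behavior:

```python
def f(x):
    return (x**2 + 2*x - 8) / (x**2 - 4)

# Approaching -2 from the right, f blows up toward +infinity...
right = [f(-2 + h) for h in (0.1, 0.01, 0.001)]
# ...and from the left it plunges toward -infinity.
left = [f(-2 - h) for h in (0.1, 0.01, 0.001)]

def g(x):
    return (2*x - 1) / (x + 1)

# At infinity, g settles toward the horizontal asymptote y = 2.
tail = [g(10.0**k) for k in (2, 4, 6)]
```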
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes | {"url":"http://www.enotes.com/homework-help/how-find-infinite-limits-function-without-plugging-453816","timestamp":"2014-04-17T02:45:16Z","content_type":null,"content_length":"26650","record_id":"<urn:uuid:a83f6520-1ec0-4da5-b86e-f411ad7d3fa7>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00095-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving Equations with Wolfram|Alpha
January 14, 2010
Posted by
Need a tutor for solving equations? Solving equations is just one of hundreds of mathematical tasks that can be done using Wolfram|Alpha. Wolfram|Alpha can solve equations from middle school level
all the way through college level and beyond. So next time you are stumped on an equation, consult Wolfram|Alpha for a little help.
Let’s start with the simpler stuff. Wolfram|Alpha can easily solve linear and quadratic equations, and even allows you to view a step-by-step solution of each problem.
What if the roots of the equation are complex? No worries; Wolfram|Alpha has no trouble solving equations over the complex plane.
Wolfram|Alpha can also solve cubic and quartic equations in terms of radicals.
Of course, some solutions are too large or cannot be represented in terms of radicals; Wolfram|Alpha will then return numerical solutions with a “More digits” button.
Conveniently, Wolfram|Alpha not only solves polynomial equations but also equations involving trigonometric, hyperbolic, or even special functions, as in the following example.
Do you want to solve an equation over the reals? Just tell Wolfram|Alpha to restrict the domain.
Wolfram|Alpha can also solve systems of linear and nonlinear equations.
In the near future, it will be possible to see step-by-step solutions for systems of linear equations.
Let’s take it one step further. Do you need to solve a system of polynomial congruences? Wolfram|Alpha is not stumped!
Are you working with recurrence relations? Wolfram|Alpha can solve recurrence equations in seconds.
For more, check out the Examples pages. The underlying technologies used here are the Mathematica functions FindInstance, Solve, RSolve, FindRoot, NSolve, and Reduce.
Remember that these are just a few of the ways that Wolfram|Alpha can solve equations—try some of your own math problems and explore other equation types. In a future blog post, we’ll show you how
Wolfram|Alpha can also solve inequalities and systems of inequalities.
24 Comments
I noticed that it can show steps. I was thrilled when I typed in 1/(1-x^2) as it not only split it into partial fractions, but it revealed the relationship that the above has with tanh^-1(x) upon
This makes it an even better reference and educational resource, but I was wondering if there was any software available for Mathematica to enable similar functionality.
Posted by gm January 14, 2010 at 6:09 pm Reply
The solutions to fourth degree polynomial equations can always be expressed in terms of radicals. The solutions to the quartic in the given example all were less than 6 in absolute value. “Of course,
some solutions are too large or cannot be represented in terms of radicals”?!
Perhaps Wolfram|Alpha does not want to display the exact solutions because they are messy, but those solutions can be expressed in terms of radicals and are not large.
Posted by Bruce Yoshiwara January 15, 2010 at 12:00 am Reply
It is sad that wolfram doesn’t have a Polish translation.
Unfortunately, someone bought the pl domain.
Posted by atominium January 18, 2010 at 9:50 am Reply
I used wolfram alpha to teach my students about solving equations; I put some examples in my blog, http://abdulkarim.wordpress.com
Posted by Abdul Karim February 6, 2010 at 7:52 pm Reply
Still has a long way to go. I wanted to solve (1 + 1.6*10^-19*x/(1.38*10^-23 * 300))*exp(1.6*10^-19*x/(1.38*10^-23 * 300)) = 35.68/30*10^-9 and it returned x = 0.025875 W_n((3568000000 e)/3)-0.025875
and n element Z as the result. Strange. I did the same thing with Matlab fsolve and got 0.464, which is the solution I was looking for. Alpha still has a long way to go. I didn’t think I would still need
MATLAB for little things like this…
Posted by Ob February 9, 2010 at 4:47 pm Reply
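For what it is worth, the W_n(…) expression Alpha returned is the Lambert W closed form of the same real root. Reading the right-hand side as 35.68/(30*10^-9) — an assumption about the commenter's intent — a few lines of Newton iteration (a sketch, not anyone's actual code) land on the same value fsolve found:

```python
from math import exp

# Newton's method on the commenter's equation, reading the RHS as
# 35.68/(30*10^-9); Alpha's W_n(...) answer is the Lambert W closed form
# of the same real root.
a = 1.6e-19 / (1.38e-23 * 300)   # ~38.65 per volt (q/kT at 300 K)
b = 35.68 / 30e-9                # ~1.189e9

def f(x):
    return (1 + a * x) * exp(a * x) - b

def fprime(x):
    # d/dx[(1 + a*x)*e^(a*x)] = a*(2 + a*x)*e^(a*x)
    return a * (2 + a * x) * exp(a * x)

x = 0.5                          # start near the expected root
for _ in range(50):
    x -= f(x) / fprime(x)

print(round(x, 2))               # 0.46, matching the fsolve result
```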
I cannot get Wolfram|alpha to solve a system (linear or otherwise) larger than 3 by 3. Has anyone been able to do this? Yes, I have read all of the tutorials and I am familiar with the RREF command,
but I’d just like to enter in the equations and see the solution. Imagine telling someone to get the solution to a 5 by 5 system, but having to teach them basic matrix operations first. That’s kind
of annoying.
All the previous notwithstanding, I very much like Wolfram|alpha and will continue to use it in the future. I just wish solving a “larger” system was easily done.
Posted by sven February 10, 2010 at 1:59 am Reply
Can Wolfram solve linear inequalities?
Posted by Sean January 30, 2011 at 4:50 pm Reply
Can it put Linear Inequalities in Interval Notation. Example [-4,0)
Posted by Sean April 9, 2011 at 7:43 am Reply
can wolfram alpha solve radicals?
Posted by janine February 15, 2011 at 1:35 pm Reply
can wolfram alpha solve radicals like finding the 5th root of -32?
Posted by brandon February 23, 2011 at 1:24 pm Reply
Can I solve a trig equation for restricted values of theta
eg solve tan(th + pi/4) = 1 - 2tan(th) for theta from 0 to pi/2
and can I do that in degrees?
Posted by Colleen Young March 13, 2011 at 3:34 am Reply
Hello, I was wondering if Wolfram can solve system problems, such as:
4x + 5y – z = -38
3x – 4y = -3
4x + 3y = -29
Posted by Rachel March 22, 2011 at 12:28 pm Reply
Hi, I’m trying to use Wolfram to solve a large system of equations entirely symbolically. Wolfram can’t seem to handle the equations. The following is what I entered, an example of only the first two:
Paf= (Taf/Ta1)^Cp/R, Paf= Pa1*Va1*Taf/(Ta1*(Va1+Vb1-Vbf))
Posted by Annie March 28, 2011 at 11:30 pm Reply
Would you, please, teach us how to get answers assuming all functions in equations are over real numbers? The point is that W|A practically always works over complex numbers. That is correct, of course.
However, many elementary problems are restricted to the field of real numbers (for those who do not know “i”).
So, trying to solve equations with radicals one gets solutions where some terms of the eq. become complex. The similar story is with trigonometry. Very often one gets answers like sin^(-1) 4 etc. That is
fine for people advanced in math, but can be treated as a “mistake” in an elementary school…
Commands like Reduce[..., Real] don’t provide that restriction.
Posted by gorod March 31, 2011 at 12:03 am Reply
hi there
i have a problem and i want to solve a set of equations altogether but i can’t.
i appreciate if yo can help me.
thank you
we have 12 equations and 12 unknowns.
here is the equations:
f1 = y_CO + y_CO2 + y_H2O + y_O2 + y_NO + y_H2 + y_N2 == 1
f2 = 1.641*10^-006*T^2 - 0.09051*T - 111.1 + R*T*Log[y_CO] +
lambda_C + lambda_O == 0
f3 = 8.221*10^-022*T^6 - 1.865*10^-017*T^5 + 1.676*10^-013*T^4 -
7.443*10^-010*T^3 + 2.381*10^-006*T^2 - 0.004387*T - 393.3 +
R*T*Log[y_CO2] + lambda_C + 2*lambda_O == 0
f4 = 8.058*10^-014*T^4 - 1.165*10^-009*T^3 + 6.132*10^-006*T^2 +
0.04503*T - 242.4 + R*T*Log[y_H2O] + lambda_O + 2*lambda_H == 0
f5 = R*T*Log[y_N2] + 2*lambda_N == 0
f6 = R*T*Log[y_H2] + 2*lambda_H == 0
f7 = R*T*Log[y_O2] + 2*lambda_O == 0
f8 = -0.01236*T + 89.93 + R*T*Log[y_NO] + lambda_N + lambda_O == 0
f9 = y_CO2 + y_CO - 3/sigma_n == 0
f10 = 2 y_CO2 + y_CO + y_H2O + y_NO + 2 y_O2 - 10/sigma_n == 0
f11 = 2 y_N2 + y_NO - 37.62/sigma_n == 0
f12 = 2 y_H2O + 2 y_H2 - 8/sigma_n == 0
and unknowns are y_CO, y_CO2, y_H2, y_H2O, y_N2, y_NO, y_O2, lambda_C, lambda_O, lambda_H, lambda_N, sigma_n
i am waiting for the answer
tnx again
Posted by mohammad April 17, 2011 at 11:52 pm Reply
I am able to use alpha to find the particular solution to a differential equation with initial condition like:
{f''(x) = 1, f'(0) = 0, f(0) = 1, x = 1}, i.e. f(1) = 3/2
but not for:
{f''''(x)=1, f''(0)=0, f(0)=0, f''(2)=0, f(2)=0, x=1}, i.e. f(1)=5/24
Alpha produces the solution, i.e. f(x) = 1/24 x (x^3-4 x^2+8), but does not substitute x=1 like before
Is alpha able to produce the particular solution for this equation in one go? Or, alternatively, can I use the outcome of one query to feed the next? Thanks!
Posted by harry June 5, 2011 at 7:42 am Reply
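As a quick sanity check of the closed form Alpha produced for the second problem, the quoted quartic does satisfy all the stated conditions and gives f(1) = 5/24; this can be verified in exact rational arithmetic:

```python
from fractions import Fraction as F

# Alpha's closed form from the comment above: f(x) = x*(x^3 - 4x^2 + 8)/24.
# Since f is (x^4 - 4x^3 + 8x)/24, f''''(x) = 24/24 = 1 automatically;
# the boundary conditions and f(1) are checked in exact arithmetic.
def f(x):
    x = F(x)
    return x * (x**3 - 4 * x**2 + 8) / 24

def f2(x):
    # second derivative, computed by hand: (12x^2 - 24x)/24 = (x^2 - 2x)/2
    x = F(x)
    return (x**2 - 2 * x) / 2

assert f(0) == 0 and f(2) == 0       # f(0)=0, f(2)=0
assert f2(0) == 0 and f2(2) == 0     # f''(0)=0, f''(2)=0
print(f(1))                          # 5/24, the value Alpha didn't substitute
```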
Can this solve of system of equations… like this where I have m equations, n variables and a few values that are known? situations that can be done on paper?
2x+ 3y+ Z=10
Posted by RG July 30, 2011 at 10:38 am Reply
hello,how can I solve these equations with mathematica?
Posted by mohsen January 19, 2012 at 10:01 am Reply
Can WolframAlpha transpose an equation to make a certain variable the subject?
I have this equation >>> x=y*((1+B)/B)*(1-((D/d)^(2*B))) <<<
I need to solve for B… Any help?
Posted by Brad May 8, 2012 at 12:07 am Reply
A comment related to the above quote: “In the near future, it will be possible to see step-by-step solutions for systems of linear equations.” As of today, June 23, 2013, you still cannot see the
step-by-step solution of it solving a system of linear equations. Or if this exists please inform me how to do it.
Posted by Charlotte Dyck June 23, 2013 at 8:02 pm Reply
Patent US5113140 - Microprocessor-controlled high-voltage capacitance bridge
This invention relates to a high-voltage alternating current-comparator capacitor bridge for measuring the values of unknown impedances, such as a capacitor with unknown capacitance and dissipation
factor values, using standard capacitors and standard conductances.
U.S. Pat. No. 3,142,015 shows one type of high voltage alternating current capacitive measuring apparatus. The current comparator device in this U.S. patent consists of a primary, a secondary, a
deviation and a detection winding wound on a core with a standard capacitor being connected to a tap on the secondary winding, the other end of the standard capacitance being connected to an
alternating voltage source. An unknown capacitor is connected between the voltage source and the primary winding with a null detector being connected across the detection winding. The apparatus also
includes circuit means to obtain a balanced condition for a quadrature component so that a dissipation factor for the unknown capacitance can be determined. This circuit includes a further capacitor
connected to the other side of the secondary winding so that almost all the voltage drop occurs across the standard capacitor while a low voltage supply is obtained across the further capacitor which
is accurately proportional to the high voltage supply. This source of accurately controlled low voltage makes it practicable to obtain a correction current for the quadrature adjustment such as by
applying that voltage, or a voltage of opposite polarity, through a variable resistor to the tap on the secondary winding.
U.S. Pat. No. 3,579,101 shows another similar type of measuring apparatus wherein a variable resistor is in series with the standard capacitor and secondary winding and the low voltage is applied,
via a polarity reversing switch, to an autotransformer whose output is applied across the variable resistor.
A paper entitled "A Transformer-Ratio-Arm Bridge for Measuring Large Capacitors Above 100 Volts" by Oskars Petersons in IEEE Transactions on Power Apparatus and Systems, Vol. PAS-87, No. 5, May 1968
(pp. 1354-1361) describes the basic operating principle of these types of high-voltage capacitance bridges. This paper also discusses various methods of compensating for errors caused by lead
impedances and other factors which will be discussed in more detail later.
Another paper entitled "A Wide-Range High-Voltage Capacitance Bridge With One PPM Accuracy" by Oskars Petersons et al, in IEEE Transactions on Instrumentation and Measurement, Vol. IM-24, No. 4,
December 1975 (pp. 336-344) describes in more detail the basic operating principle of these types of high-voltage capacitance bridges along with the nature of bridge errors and their reduction. This
paper also describes circuitry to balance the bridge to obtain the capacitance value (C.sub.x) of the unknown capacitance as well as its conductance (G.sub.x) from which the dissipation factor DF of
the unknown capacitor can be obtained. A discussion of range extending transformers is included in that paper.
It is an object of the present invention to provide an improved high-voltage current-comparator capacitance bridge measuring apparatus for measuring values of an unknown impedance, such as a
capacitor, and which is particularly adaptable to be controlled by a microprocessor through a switch controller. An automatic balancing feature facilitates the use of the bridge for load loss
measurement of large high-voltage inductive loads, such as shunt reactors and power transformers.
The invention consists of the provision in a current comparator bridge for measuring values of an unknown impedance, said bridge comprising (a) a current comparator having a pair of ratio windings
(N.sub.s, N.sub.x) and a detection winding (N.sub.D) for detecting an ampere turns unbalance in the comparator, (b) means for varying the number of turns on a first one (N.sub.s) of said ratio
windings, (c) a standard capacitor (C.sub.s) having a first side connected to a first end of said first ratio winding, (d) means for connecting a first side of an unknown impedance to a first end of
the second ratio winding, (e) means connecting the second end of the second ratio winding to ground, (f) means for connecting the second side of the standard capacitor and the second side of the
unknown impedance to an alternating voltage input (E), (g) means connected between the second end of the first ratio winding and ground for generating a voltage replica (E.sub.f) of the high voltage
input, (h) converter means connected to receive said replica voltage for providing a current (I.sub.GS) proportional to and in phase with the high voltage input, (i) means for varying the value of
said proportional current, and (j) means connecting said current generating means to the first end of the first ratio winding for passing said proportional current therethrough in phase with the
voltage input, the improvement comprising (k) phase sensitive detecting means connected to said detection winding and to said replica voltage for generating a first output signal (E.sub.O)
proportional to an ampere-turn unbalance in the comparator in phase with a reference current (I.sub.NS) passing through the standard capacitor, and a second output signal (E.sub.90) proportional to
an ampere-turn unbalance in the comparator in quadrature with said reference current, (1) an RMS/DC converter connected to the second end of the first ratio winding for generating a third output
signal (E.sub.INS) proportional to the RMS value of the voltage input and hence proportional to the value of said reference current in the standard capacitor, and (m) a microprocessor connected to
receive said first, second and third output signals for controlling said means (b) for varying the number of turns on the first ratio winding and said means (i) for varying the value of said
proportional current to bring the bridge towards ampere-turn balance with the number of turns on the first ratio winding proportional to the ratio (E.sub.O /E.sub.INS) between said first and third
signals providing a measure of the value of the reactance of the unknown impedance, said third signal (E.sub.INS) providing a measure of the value of the reference current (I.sub.NS), and the ratio
(E.sub.90 /E.sub.INS) between the second and third signals providing a measure of the value of the dissipation factor of the unknown impedance.
Other features and advantages of the present invention will become more readily apparent from the following detailed description of the invention with reference to the accompanying drawings; in which:
FIG. 1 illustrates a prior art capacitance bridge having a circuit to compensate for errors caused by lead impedances in both bridge arms,
FIG. 2 illustrates a prior art capacitance bridge having a circuit to compensate for the internal impedance of windings of a current comparator,
FIG. 3 is a circuit diagram of a current comparator based capacitance bridge according to the present invention,
FIG. 4 is a circuit diagram of a phase sensitive detector for the capacitance bridge shown in FIG. 3; and
FIG. 5 is a block diagram of a microprocessor controlled high voltage capacitance bridge according to the present invention.
FIG. 1 shows a prior art current-comparator capacitor bridge which is described in a paper by Oskars Petersons in the IEEE Transactors On Power Apparatus and Systems, Vol. PAS-87, No. 5, May 1968,
pages 1354-1361. In this bridge a supply voltage V.sub.s is applied to one side of an unknown capacitor C.sub.x and a standard capacitor C.sub.s, the other side of C.sub.x being connected to a
winding N.sub.x of a current comparator TR1 while the other side of standard capacitor C.sub.s is connected to a tap on a winding N.sub.s of current comparator TR1. The other end of winding N.sub.x
is connected to one side of winding N.sub.s and to ground. The term N indicates the number of active turns on the windings. The windings N.sub.x and N.sub.s are wound so that the current I.sub.x
flowing through capacitor C.sub.x and winding N.sub.x produces a flux on the core of TR1 which opposes the flux created by the current I.sub.s flowing through standard capacitor C.sub.s and winding
N.sub.s to ground. By adjusting the tap on N.sub.s these two fluxes can be made equal, which produces a total flux in the core equal to zero under balanced conditions. The flux in the core can be
detected by a detection winding N.sub.D wound on the core, the winding N.sub.D being connected to a detector D. The tap of N.sub.s is adjusted until the bridge is balanced and detector D indicates no
flux in the core. Ignoring the effects of lead impedances (Z.sub.1 to Z.sub.3), at the balanced condition
I.sub.x N.sub.x =I.sub.s N.sub.s (1)
EjωC.sub.x N.sub.x =EjωC.sub.s N.sub.s (2)
from which it can be determined that C.sub.x =(N.sub.s /N.sub.x)C.sub.s (3)
If N.sub.x is constant, C.sub.s being a standard capacitor, the value of the unknown capacitor C.sub.x can be determined from the number of turns of winding N.sub.s from the tap to ground. In this
case the bridge can be made direct reading with the value of the unknown capacitor C.sub.x being determined by the location of the tap on N.sub.s at the balanced condition of this bridge.
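The direct-reading idea above can be sketched in a few lines; the component values below are assumed for illustration and are not taken from the patent:

```python
# Direct-reading sketch: at ampere-turn balance I_x*N_x = I_s*N_s,
# so C_x = (N_s/N_x)*C_s.  All values below are assumed for illustration.
C_s = 100e-12        # standard capacitor, 100 pF
N_x = 10             # fixed turns on the unknown-side winding
N_s = 153            # tap position found at balance

C_x = (N_s / N_x) * C_s
print(round(C_x * 1e12))   # 1530, i.e. the bridge reads 1530 pF
```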
One error that occurs in the type of bridge shown in FIG. 1 is caused by lead impedances which are indicated in this circuit by Z.sub.1 to Z.sub.4 for the unknown capacitor C.sub.x. This type of
error can be compensated for by applying a correction current into the standard capacitor branch to N.sub.s. An inverting amplifier A.sub.1 and capacitor C.sub.s ' between the low voltage sides of
C.sub.x and C.sub.s generate the necessary correction current I.sub.s '. In this circuit, at bridge balance
I.sub.x N.sub.x =(I.sub.s +I.sub.s ')N.sub.s (4)
since I.sub.s ' is added to N.sub.s so that
(E-e.sub.1)jωC.sub.x N.sub.x =(EjωC.sub.s +e.sub.2 jωC.sub.s ')N.sub.s (5)
where E is the voltage applied to C.sub.s,
e.sub.1 is the voltage at the input to amplifier A.sub.1 and,
e.sub.2 is the voltage at the output of amplifier A.sub.1 whose output is applied to capacitor C.sub.s '. Solving equation (5) for the unknown capacitance gives C.sub.x =(N.sub.s /N.sub.x)(C.sub.s +(e.sub.2 /E)C.sub.s ')/(1-(e.sub.1 /E)) (6)
The amplifier gain of A.sub.1 is selected so that the voltage ratio -(e.sub.2 C.sub.s ')/(e.sub.1 C.sub.s) in equation (6) is equal to 1 and this results in
e.sub.2 =-(C.sub.s /C.sub.s ')e.sub.1 (7)
The current I.sub.s ' does not have to be supplied with a great deal of accuracy since it is only a very small fraction of the total current in N.sub.s. Even a modest 10% overall accuracy in the
amplifier gain and C.sub.s ' value will reduce the lead effects by a factor of 10. To adjust the gain and the C.sub.s value quickly, a simple self-contained calibration facility can be provided in
the bridge. However, high-voltage capacitance bridges are frequently operated with only one permanently installed standard capacitor so that there will rarely be a need for readjusting the
compensating circuit. In practice, the capacitance C.sub.s ' is generally selected to have the same nominal value as C.sub.s. The current injection circuit in FIG. 1 can be extended to compensate for
lead impedances in the standard capacitor branch.
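A small numeric sketch (all values assumed) of why the injected current cancels the lead error: once the amplifier gain satisfies e.sub.2 =-(C.sub.s /C.sub.s ')e.sub.1, the balance condition (5) yields a reading that no longer depends on the lead drop e.sub.1. Phasors are treated as plain magnitudes here to keep the sketch minimal.

```python
# Why the injected current I_s' cancels the lead error (assumed values):
# with the amplifier gain set so e2 = -(C_s/C_s')*e1, the balance
# condition (5) gives a reading C_s*(N_s/N_x) that no longer depends
# on the voltage e1 lost across the unknown-branch leads.
E    = 1000.0         # supply voltage (magnitudes only, for illustration)
e1   = 12.0           # lead drop at the amplifier input
C_s  = 100e-12        # standard capacitor
C_sp = 100e-12        # correction capacitor C_s', nominally equal to C_s
C_x_true = 1.53e-9    # the unknown being measured

e2 = -(C_s / C_sp) * e1                 # inverting-amplifier output

# Balance condition (5): (E - e1)*C_x*N_x = (E*C_s + e2*C_s')*N_s
ratio   = (E - e1) * C_x_true / (E * C_s + e2 * C_sp)   # N_s/N_x at balance
reading = C_s * ratio                   # what the direct-reading dials show

print(abs(reading - C_x_true) < 1e-18)  # True: lead error cancelled
```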
Amplifier A.sub.1 can also be adapted to compensate for errors caused by shield capacitance C.sub.gs to provide an additional correction current I.sub.gs to winding N.sub.s. The lead containing
I.sub.gs and C.sub.gs is shown in dotted lines in FIG. 1 since, when different standard capacitors are used, C.sub.gs is a somewhat unpredictable quantity. For every variation in C.sub.gs, the
amplifier gain would have to be readjusted to compensate for errors due to shield capacitance.
The circuit shown in FIG. 1 only has a means to measure the capacitance of an unknown capacitor C.sub.x but not any means to measure the conductance associated with C.sub.x. A circuit to measure the
conductance G.sub.x of unknown capacitor C.sub.x is shown in FIG. 2 which is described in an article by Oskars Petersons et al (IEEE Transactions on Instrumentation and Measurement, op.cit.). The
currents in the unknown and standard capacitors are compared in ratio windings N.sub.x and N.sub.s of the current comparator TR1 with the number of active turns in windings N.sub.x and N.sub.s being
adjusted until ampere-turn balance is achieved. This portion of the circuit operates in the same manner as the circuit shown in FIG. 1.
The current for balancing the equivalent loss conductance (G.sub.x) of the unknown capacitor is generated by capacitor C.sub.f, amplifier A.sub.2, autotransformer T.sub.a, inverting amplifier A.sub.3
and a standard conductance G.sub.s. It is not practical to connect a standard conductance G.sub.s or an equivalent circuit across the full high supply voltage E. This difficulty is overcome by
generating an auxiliary potential E.sub.f which is an accurate scaled down replica of the high voltage E. An output terminal of winding N.sub.s is connected to the capacitor C.sub.f and amplifier
A.sub.2 whose output is connected to the other side of capacitor C.sub.f. The current in the reference capacitor C.sub.s is passed through comparator winding N.sub.s to the input of amplifier A.sub.2
which input, at bridge balance, is virtually at ground potential. C.sub.s and C.sub.f then act as a capacitive voltage divider and if the gain of amplifier A.sub.2 is high, the voltage E.sub.f at the
connection between the output of amplifier A.sub.2 and capacitor C.sub.f is defined by:
E.sub.f =(C.sub.s /C.sub.f)E (8)
The output voltage E.sub.f is thus in phase with the high voltage E but reduced in magnitude by the ratio of the capacitance C.sub.s /C.sub.f.
The current I.sub.GS for balancing the equivalent loss component of the unknown capacitor is generated by E.sub.f through autotransformer T.sub.a, inverting amplifier A.sub.3 and the standard
conductance G.sub.s which is also connected, along with standard capacitor C.sub.s, to a tap on winding N.sub.s of current comparator TR1. The autotransformer T.sub.a has N turns with the tap being
located at αN turns. The balance equations for this bridge are then
C.sub.x =(N.sub.s /N.sub.x)C.sub.s (9)
G.sub.x =(N.sub.s /N.sub.x)(C.sub.s /C.sub.f)αG.sub.s(10)
However it is customary to express the loss component as the dissipation factor DF or tangent of the loss angle (δ). In these terms, the balance equation (10) becomes
DF=tanδ=(G.sub.x /ωC.sub.x)=α(G.sub.s /ωC.sub.f)(11)
G.sub.s /ωC.sub.f is constant and any convenient value may be selected for these elements. Therefore, the circuit can be made direct reading for the dissipation factor.
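Equation (11) is what makes the dissipation-factor dial direct reading. A sketch with assumed component values, chosen so that G.sub.s /(ωC.sub.f)=0.1 at 60 Hz:

```python
from math import pi

# Equation (11): DF = alpha * G_s/(w*C_f).  Values below are assumed,
# chosen so the scale factor G_s/(w*C_f) is 0.1 at 60 Hz.
w   = 2 * pi * 60
C_f = 1e-6                      # feedback capacitor
G_s = 0.1 * w * C_f             # ~3.77e-5 S gives a direct-reading dial

alpha = 0.042                   # tap fraction found at balance
DF = alpha * G_s / (w * C_f)
print(round(DF, 6))             # 0.0042 -- the dial reads 0.1*alpha
```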
Another standard conductance G.sub.s ' which is of the same value as G.sub.s, is connected between the input to amplifier A.sub.2 and the input to inverting amplifier A.sub.3. As a result, a current
equal in magnitude to the compensating current I.sub.GS is drawn from the input of amplifier A.sub.2 which will avoid second-order errors in the bridge reading. Without G.sub.s ' the bridge would not
be exactly direct reading. By interchanging the connections between the conductances and the inverting amplifier A.sub.3, the bridge can measure equivalent negative dissipation factors. This is a
necessary feature if the unknown capacitor has lower losses than the standard. It is also required when the bridge is used to measure voltage transformer ratios.
FIG. 3 is a circuit diagram of a current comparator capacitor bridge 10 according to the present invention while FIG. 5 is a block diagram illustrating the control system for that bridge. In this
bridge, the high voltage source applies a voltage E to one side of an "unknown" capacitor C.sub.x and to a standard capacitor C.sub.s. The other side of "unknown" capacitor C.sub.x is connected to a
tap of ratio winding N.sub.x on current comparator TR1, the other end of winding N.sub.x being connected to ground. The low voltage side of standard capacitor C.sub.s is connected to an adjustable
tap of ratio winding N.sub.s on current comparator TR1, the further side of N.sub.s being connected to one side of capacitor C.sub.f and a high gain open-loop amplifier A.sub.2. The other side of
capacitor C.sub.f is connected to the output of amplifier A.sub.2 so that C.sub.f is a feedback capacitor for that amplifier. C.sub.f and A.sub.2 in combination with capacitor C.sub.s form, at the
output of amplifier A.sub.2, a low-voltage E.sub.f which is proportional to the supply voltage E. The low-voltage E.sub.f at bridge balance is equal to (C.sub.s /C.sub.f)E, E.sub.f being an
accurate scaled down replica of the high voltage E.
The bridge circuit 10 includes a circuit to compensate for errors caused by effects of lead and winding impedances consisting of an amplifier A.sub.6 and a capacitance C.sub.s ' connected between the
low voltage sides of capacitors C.sub.x and C.sub.s. In practice C.sub.s ' is generally of the same value as standard capacitor C.sub.s. This branch of circuit 10 operates in the same manner as
amplifier A.sub.1 and capacitor C.sub.s ' in FIG. 1. Capacitor C.sub.f along with capacitor C.sub.s and high gain amplifier A.sub.2 function in the same manner as the elements in FIG. 2 to provide a
voltage E.sub.f which is an accurate scaled down replica of voltage E. Instead of applying the voltage E.sub.f to an autotransformer, as previously described with reference to FIG. 2, the voltage
E.sub.f in bridge circuit 10 is applied to a multiplying digital-to-analog converter (MDAC) 2. The output of MDAC 2 is applied through a unity gain inverting amplifier A.sub.3 and a standard
conductance G.sub.s to the adjustable tap on winding N.sub.s. This arrangement supplies a current I.sub.GS to the tap on winding N.sub.s, a current I.sub.NS through standard capacitance C.sub.s being
also supplied to the adjustable tap on winding N.sub.s. The "unknown" capacitor has a conductance value G.sub.x as well as a capacitance value C.sub.x. MDAC 2, inverting amplifier A.sub.3 and
standard conductance G.sub.s can generate a dissipation factor balance current I.sub.GS to balance the bridge 10 for an in-phase impedance, the quadrature component being balanced by the current I.sub.NS through the standard capacitor C.sub.s.
In order to prevent second order errors in the bridge reading, as previously described with reference to FIG. 2, a conductance G.sub.s ' is connected between the input of amplifier A.sub.2 and the
input of inverting amplifier A.sub.3. The effect of the N.sub.s winding impedance is eliminated by introducing a compensating voltage at the output of inverting amplifier A.sub.3. This compensating
voltage has the same magnitude and phase as those of the voltage drop across the winding impedance of winding N.sub.s. This is accomplished by connecting an additional unity gain inverting amplifier
A.sub.4 between the adjustable tap of winding N.sub.s and the summing input of inverting amplifier A.sub.3. In addition a compensating winding N.sub.c is connected in parallel with winding N.sub.s in
order to reduce its leakage impedance.
The MDAC 2 in this bridge circuit 10 is of the four quadrant type allowing measurements of equivalent positive and negative dissipation factors without having to interchange connections between the
conductances G.sub.s and G.sub.s ' and the unity gain inverting amplifier A.sub.3.
The balance equation for this bridge is as follows:
E(G.sub.x +jωC.sub.x)N.sub.x =EjωC.sub.s N.sub.s +αE(C.sub.s /C.sub.f)G.sub.s N.sub.s (12)
where ω=2πf, f being the frequency, and α is the equivalent multiplying factor of an adjustable proportion determining device MDAC 2. From this, the value C.sub.x =C.sub.s (N.sub.s /N.sub.x) and the
dissipation factor DF for the "unknown" capacitor can be determined. The dissipation factor DF=loss tangent tan δ=α(G.sub.s /ωC.sub.f) where δ is the loss angle of the unknown capacitance C.sub.x.
For 60 Hz, the term (G.sub.s /ωC.sub.f) can be made equal to 0.1, so that DF=0.1α and the dissipation factor balance is direct reading in α. The value C.sub.x can also be made direct reading in
terms of the ratio of the turns N.sub.s /N.sub.x once the value of the standard capacitor C.sub.s is established.
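The balance equation (12) can be checked numerically with phasors. All component values below are assumed for illustration; the unknown is constructed backwards from the dial settings so the check is exact:

```python
from math import pi

# Phasor check of equation (12):
#   E*(G_x + j*w*C_x)*N_x = E*j*w*C_s*N_s + alpha*E*(C_s/C_f)*G_s*N_s
# Component values are assumed for illustration only.
w   = 2 * pi * 60
E   = 1.0
C_s = 100e-12
C_f = 1e-6
G_s = 0.1 * w * C_f              # direct-reading scaling as in the text
N_x, N_s = 10, 153               # turns at balance
alpha = 0.042                    # MDAC setting at balance

# Work backwards: the unknown that these settings balance exactly.
C_x = C_s * N_s / N_x                      # capacitance balance
G_x = alpha * (G_s / (w * C_f)) * w * C_x  # so DF = G_x/(w*C_x) = 0.1*alpha

lhs = E * complex(G_x, w * C_x) * N_x
rhs = E * complex(0, w * C_s) * N_s + alpha * E * (C_s / C_f) * G_s * N_s
print(abs(lhs - rhs) < 1e-12)              # True: ampere-turns balance
```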
The current comparator TR1 is provided with a detection winding N.sub.D for detecting a balanced condition of the bridge where the ampere-turns by currents (I.sub.NS +I.sub.GS) and I.sub.NX in
windings N.sub.s and N.sub.x respectively are equal and opposed to each other. The detection winding N.sub.D is connected to a current-to-voltage converter A.sub.5 which provides an output signal
E.sub.D that is applied to an input terminal 10 of a phase sensitive detector 1. Current-to-voltage converter A.sub.5 has four gain settings covering four decades of operation, the gain being
controlled by feedback impedance R.sub.D. The voltage E.sub.f, which is a reduced replica of voltage E, is applied to input terminal 9 of phase sensitive detector 1 which supplies output signals
E.sub.O and E.sub.90 on terminals 11 and 12 respectively.
The basic circuit of the phase sensitive detector 1 is shown in FIG. 4 and includes in-phase and quadrature phase sensitive detectors of the gating type 8 and 7 respectively. The reduced replica
voltage E.sub.f from amplifier A.sub.2 is applied to input terminal 9 and then to a first comparator 6 which supplies a first square wave reference signal. E.sub.f is also applied through an
integrator 4 to a second comparator 5 to supply a second square wave reference signal. The detection winding signal E.sub.D, from current-to-voltage converter A.sub.5, at input terminal 10 is applied
to a phase sensitive detector 8 of the gating type along with the second square wave reference signal to provide an in-phase signal E.sub.O at output terminal 11. The signal E.sub.D is also applied
to a phase sensitive detector 7 of the gating type along with the first square wave reference signal to provide a quadrature signal E.sub.90 at output terminal 12. The gating-type detector will
respond to the fundamental frequency as well as to any odd harmonics. Therefore, the signal E.sub.D is prefiltered (not shown) at the detector input.
The adjustable taps for the windings in this bridge are created by switching of the winding turns using reed relays in a make-before-break sequence under control of a microprocessor 20 through a
switching controller 21 as illustrated by the block diagram in FIG. 5. The microprocessor 20 controls the gain setting for current-to-voltage converter A.sub.5 through the switch controller 21 and
also controls all of the features of the bridge including the self-balancing procedure, the data formatting and the input-output processing. The measurement process that is performed automatically by
the bridge can be considered to consist of the following three steps:
1) Measuring the current I.sub.NS in the reference side of the bridge, i.e. in the N.sub.s winding, the in-phase and quadrature outputs of the phase sensitive detector and storing the results.
2) Calculating and adjusting or revising the settings of the N.sub.s winding, the N.sub.x winding if necessary, as well as the gain of MDAC 2 based on results obtained in step 1, and
3) Repeating steps 1 and 2 until the bridge is balanced.
The I.sub.NS current is measured by measuring the RMS value of output voltage E.sub.f from amplifier A.sub.2 using a RMS/DC converter 3 as shown in FIG. 3. I.sub.NS can then be calculated by
microprocessor 20 from the formula
I.sub.NS =E.sub.f ωC.sub.f (13)
where ω=2πf, f being the frequency, and C.sub.f is the capacitance of the feedback capacitor for amplifier A.sub.2. E.sub.f is a reduced replica of E such that E.sub.f =E(C.sub.s /C.sub.f). The
output of RMS/DC converter 3 is applied as a signal E.sub.INS to a multiplexer 25 whose output is applied via a 12-bit A/D converter 26 to microprocessor 20 which can then determine the I.sub.NS
current. The in-phase (E.sub.O) and quadrature (E.sub.90) outputs of the phase sensitive detector 1 are also sent to microprocessor 20 via multiplexer 25 and A/D converter 26. E.sub.O and E.sub.90
are a measure of the in-phase and quadrature ampere-turn unbalance in the current comparator TR1.
The DC offsets present in the phase sensitive detectors are cancelled by performing a second measurement with the polarity of the detection winding reversed. Subtracting the digital representations
of the detector outputs, obtained before and after the polarity reversal, eliminates all DC offset errors. These measurements are stored in a memory of microprocessor 20 and processed to yield the
proper settings for the N.sub.s winding, N.sub.x winding and the gain of MDAC 2. The proper settings for the N.sub.s winding and MDAC 2 to achieve a balance are:
N.sub.s =(N.sub.D /I.sub.NS)(E.sub.O /R.sub.D) (14)
DF=0.1α=(N.sub.D /I.sub.NS)(E.sub.90 /R.sub.D) (15)
where E.sub.O and E.sub.90 are the in-phase and quadrature outputs, respectively, of the phase sensitive detector 1.
In one particular bridge circuit of the type shown in FIG. 3, the ratio windings N.sub.s and N.sub.x have a nominal 100 turns providing two-digit resolution. Additional resolution in winding N.sub.s
can be obtained by cascading a 100 turn two-stage current transformer for the third and fourth digits and a 100 turn single-stage transformer for the fifth and sixth digits. Winding N.sub.x is
subdivided to yield overall ratios multipliers of 1,2,5,10,20,50 and 100. The nominal rating of the comparator and auxiliary transformers in cascade is one ampere-turn which limits the current in the
N.sub.s and N.sub.x windings to 0.01 A and 1 A, respectively. For ratios larger than 100 to one, up to 100,000 to one, and to accommodate load currents of up to 1000 A, an additional range-extending
two-stage current transformer with ratios of 1000, 100 and 10 to one is cascaded into the N.sub.x winding. The compensating winding N.sub.c, which is connected in parallel with the N.sub.s windings
to reduce its leakage impedance, also has 100 turns. In this bridge, a 500-turn detection winding N.sub.D is connected to a current-to-voltage converter A.sub.5 to obtain a voltage proportional to,
and in-phase with, the unbalanced ampere-turns in the current comparator.
The bridge settings at the start of the measurement process, unless specified otherwise by the operators, are the default settings which are: the N.sub.s winding set at zero turn; the N.sub.x winding
set at 1 turn (ratio multiplier of 100) and the current-to-voltage converter A.sub.5 set at its lowest gain. Since N.sub.x =1 turn, the calculated N.sub.s value is numerically equal to the winding
ratio N.sub.s /N.sub.x. If the calculated N.sub.s value is greater than 100, the measurement process is stopped and a message is displayed or printed (by items 22 and 23 as shown in FIG. 5)
indicating that an external range extender is required to complete the measurement process. For a calculated N.sub.s value of less than or equal to 100, the ratio multiplier of the N.sub.x winding,
the N.sub.s winding turns, and the equivalent multiplying factor α of the MDAC 2 are set accordingly.
In the iteration process (previously mentioned steps 1 and 2 being repeated) the resulting calculated N.sub.s and DF values become correction values ΔN.sub.s and ΔDF, respectively. These correction
values are then added to, or subtracted from, the previous settings obtained in order to revise the bridge balance. Before each iteration, the gain of the current-to-voltage converter A.sub.5 is
increased by a factor of ten, up to the maximum sensitivity. Four iterations or fewer are required to advance from the lowest to the higher sensitivities, a process that may require up to 20 seconds.
A balanced condition of the bridge is achieved when ΔN.sub.s and ΔDF are less than or equal to 1 ppm. This 1 ppm limit is under control of the operator and can be set to another value during
initialization of the bridge. To accommodate a measurement capability of 10% for the dissipation factor DF with 1 ppm resolution, a 17-bit MDAC 2 is used.
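The measurement loop of steps 1-3 can be sketched numerically. This is only an illustrative sketch: the function names and the sample values are assumptions for illustration and are not taken from the patent; only equation (13) and the 1 ppm stopping criterion come from the text above.

```python
import math

PPM = 1e-6  # default balance tolerance described above (operator-adjustable)

def reference_current(e_f, freq, c_f):
    """Eq. (13): I_NS = E_f * omega * C_f, with omega = 2 * pi * f."""
    return e_f * 2.0 * math.pi * freq * c_f

def balanced(delta_ns, delta_df, tol=PPM):
    """The bridge is balanced once both corrections are at or below the tolerance."""
    return abs(delta_ns) <= tol and abs(delta_df) <= tol

# Illustrative numbers only: E_f = 1 V, f = 60 Hz, C_f = 1 nF.
print(reference_current(1.0, 60.0, 1e-9))  # ~3.77e-07 A
print(balanced(5e-7, 8e-7))                # True: both corrections under 1 ppm
```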
Provided that the ratio, multiplier and gain settings need not be changed after reaching a balance, this particular bridge recycles and provides new measurements at a rate of about once every 3
seconds. The balance settings are displayed and, if desired, printed along with the capacitance and dissipation factor of the "unknown" capacitor and the applied high voltage. The use of this bridge
is facilitated by nested menus which guide the operator through different modes of operation. For example, if the nominal value of the "unknown" capacitor is known and entered as data via the
keyboard, the bridge settings are automatically preset and balance may be reached in about 10 seconds.
Various modifications may be made to the preferred embodiments without departing from the spirit and scope of the invention as defined in the appended claims.
Sum of Arithmetic Series
November 25th 2009, 01:16 PM
Sum of Arithmetic Series
I'm having trouble with this question:
Find the sum of the arithmetic series with 200 terms, where k=1 is below the sigma and (3k+4) is on the right of the sigma.
So it's something like: above sigma: 200, below sigma: k=1, and on the right: (3k+4).
I would edit this post if I could find a way to input those nice graphics with actual math symbols but I don't know where to find it on this site, so I hope it is clear.
Here are my answers:
term 1 is 3*1 + 4 = 7
term 2 is 3*2+4 = 10
term 3 is 13...
The sum of arithmetic series gives me this:
200/2 * [2x7 + (199)3]
= 6110
Please help.
November 25th 2009, 02:48 PM
Hello, thekrown!
You dropped a zero in the last step . . .
Find the sum of the arithmetic series: $\sum^{200}_{k=1}(3k+4)$
Here are my answers:
first term: $a = 7$
common difference: $d = 3$
no. of terms: $n = 200$
The sum of arithmetic series gives me this:
$\tfrac{200}{2}[2(7) + (199)3] \;=\; {\color{blue}(100)}(611) \;=\; {\color{blue}61,\!100}$
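A quick check of the corrected total (a throwaway Python snippet, not part of the original thread):

```python
# Direct summation of sum_{k=1}^{200} (3k + 4).
direct = sum(3 * k + 4 for k in range(1, 201))

# Closed form S_n = (n/2) * [2a + (n - 1)d] with a = 7, d = 3, n = 200.
n, a, d = 200, 7, 3
closed = n * (2 * a + (n - 1) * d) // 2

print(direct, closed)  # 61100 61100
```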
November 25th 2009, 03:40 PM
Nice! So aside from that small error I got it?
How did you reply with the nice image symbols to display my problem?
November 26th 2009, 02:25 AM
He is using "LaTeX" codes. If you move your pointer over the formula it will turn into a little hand. Clicking on the formula then shows the code used.
Abstract Algebra
These are notes from a first term abstract algebra course, an introduction to groups, rings, and fields. There is an emphasis on specific examples.
I hope to get the notes for additional topics in abstract algebra written soon.
The first link in each item is to a Web page; the second is to a PDF (Adobe Acrobat) file. Use the PDF if you want to print it.
Send comments about this page to: Bruce.Ikenaga@millersville.edu.
An Introduction to Copulas
Results 11 - 20 of 549
- Reliable Computing , 2003
"... We present Statool, a software tool for obtaining bounds on the distributions of sums, products, and various other functions of random variables where the dependency relationship of the random
variables need not be specified. Statool implements the DEnv algorithm, which we have described previously ..."
Cited by 43 (11 self)
We present Statool, a software tool for obtaining bounds on the distributions of sums, products, and various other functions of random variables where the dependency relationship of the random
variables need not be specified. Statool implements the DEnv algorithm, which we have described previously [4] but not implemented. Our earlier tool addressed only the much more elementary case of
independent random variables [3]. An existing tool, RiskCalc [13], also addresses the case of unknown dependency using a different algorithm [33] based on copulas [23], while descriptions and
implementations of still other algorithms for similar problems will be reported soon [17] as the area proceeds through a phase of rapid development.
- Advances in Neural Information Processing Systems 17 , 2005
"... Abstract Embedding algorithms search for low dimensional structure in complex data, but most algorithms only handle objects of a single type for which pairwise distances are specified. This paper describes a method for embedding objects of different types, such as images and text, into a single comm ..."
Cited by 36 (2 self)
Abstract Embedding algorithms search for low dimensional structure in complex data, but most algorithms only handle objects of a single type for which pairwise distances are specified. This paper describes a method for embedding objects of different types, such as images and text, into a single common Euclidean space based on their co-occurrence statistics. The joint distributions are modeled as exponentials of Euclidean distances in the low-dimensional embedding space, which links the problem to convex optimization over positive semidefinite matrices. The local structure of our embedding corresponds to the statistical correlations via random walks in the Euclidean space. We quantify the performance of our method on two text datasets, and show that it consistently and significantly outperforms standard methods of statistical correspondence modeling, such as multidimensional scaling and correspondence analysis. 1 Introduction Embeddings of objects in a low-dimensional space are an important tool in unsupervised learning and in preprocessing data for supervised learning algorithms. They are especially valuable for exploratory data analysis and visualization by providing easily interpretable representations of the relationships among objects. Most current embedding techniques build low dimensional mappings that preserve certain relationships among objects and differ in the relationships they choose to preserve, which range from pairwise distances in multidimensional scaling (MDS) [4] to neighborhood structure in locally linear embedding [12]. All these methods operate on objects of a single type endowed with a measure of similarity or dissimilarity. However, real-world data often involve objects of several very different types without a natural measure of similarity. For example, typical web pages or scientific papers contain
- J. Econometrics , 2006
"... This paper studies the estimation of a class of copula-based semiparametric stationary Markov models. These models are characterized by nonparametric invariant (or marginal) distributions and
parametric copula functions that capture the temporal dependence of the processes; the implied transition di ..."
Cited by 35 (9 self)
This paper studies the estimation of a class of copula-based semiparametric stationary Markov models. These models are characterized by nonparametric invariant (or marginal) distributions and
parametric copula functions that capture the temporal dependence of the processes; the implied transition distributions are all semiparametric. Models in this class are easy to simulate, and can be
expressed as semiparametric regression transformation models. One advantage of this copula approach is to separate out the temporal dependence (such as tail dependence) from the marginal behavior
(such as fat tailedness) of a time series. We present conditions under which processes generated by models in this class are β-mixing; naturally, these conditions depend only on the copula
specification. Simple estimators of the marginal distribution and the copula parameter are provided, and their asymptotic properties are established under easily verifiable conditions. Estimators of
important features of the transition distribution such as the (nonlinear) conditional moments and conditional quantiles are easily obtained from estimators of the marginal distribution and the copula
parameter; their √ n − consistency and asymptotic normality can be obtained using the Delta method. In addition, the semiparametric
, 2002
"... This paper investigates the potential for extreme co-movements between financial assets by directly testing the underlying dependence structure. In particular, a t-dependence structure, derived from the Student t distribution, is used as a proxy to test for this extremal behavior. Tests in three ..."
Cited by 34 (5 self)
This paper investigates the potential for extreme co-movements between financial assets by directly testing the underlying dependence structure. In particular, a t-dependence structure, derived from the Student t distribution, is used as a proxy to test for this extremal behavior. Tests in three different markets (equities, currencies, and commodities) indicate that extreme co-movements are statistically significant. Moreover, the "correlation-based" Gaussian dependence structure, underlying the multivariate Normal distribution, is rejected with negligible error probability when tested against the t-dependence alternative. The economic significance of these results is illustrated via three examples: co-movements across the G5 equity markets; portfolio value-at-risk calculations; and, pricing credit derivatives. JEL Classification: C12, C15, C52, G11. Keywords: asset returns, extreme co-movements, copulas, dependence modeling, hypothesis testing, pseudo-likelihood, portfolio models, risk management. The authors would like to thank Andrew Ang, Mark Broadie, Loran Chollete, and Paul Glasserman for their helpful comments on an earlier version of this manuscript. Both authors are with the Columbia Graduate School of Business, e-mail: {rm586,assaf.zeevi}@columbia.edu; current version available at www.columbia.edu/~rm586. 1 Introduction. Specification and identification of dependencies between financial assets is a key ingredient in almost all financial applications: portfolio management, risk assessment, pricing, and hedging, to name but a few. The seminal work of Markowitz (1959) and the early introduction of the Gaussian modeling paradigm, in particular dynamic Brownian-based models, have both contributed greatly to making the concept of correlation almost synony...
, 2002
"... This paper develops efficient methods for computing portfolio value-at-risk (VAR) when the underlying risk factors have a heavy-tailed distribution. In modeling heavy tails, we focus on
multivariate t distributions and some extensions thereof. We develop two methods for VAR calculation that exploit ..."
Cited by 34 (2 self)
This paper develops efficient methods for computing portfolio value-at-risk (VAR) when the underlying risk factors have a heavy-tailed distribution. In modeling heavy tails, we focus on multivariate
t distributions and some extensions thereof. We develop two methods for VAR calculation that exploit a quadratic approximation to the portfolio loss, such as the delta-gamma approximation. In the
first method, we derive the characteristic function of the quadratic approximation and then use numerical transform inversion to approximate the portfolio loss distribution. Because the quadratic
approximation may not always yield accurate VAR estimates, we also develop a low variance Monte Carlo method. This method uses the quadratic approximation to guide the selection of an effective
importance sampling distribution that samples risk factors so that large losses occur more often. Variance is further reduced by combining the importance sampling with stratified sampling. Numerical
results on a variety of test portfolios indicate that large variance reductions are typically obtained. Both methods developed in this paper overcome difficulties associated with VAR calculation with
heavy-tailed risk factors. The Monte Carlo method also extends to the problem of estimating the conditional excess, sometimes known as the conditional VAR.
- RISK , 2000
"... We consider the modelling of dependent defaults using latent variable models (the approach that underlies KMV and CreditMetrics) and mixture models (the approach underlying CreditRisk+). We
explore the role of copulas in the latent variable framework and present results from a simulation study sh ..."
Cited by 31 (6 self)
We consider the modelling of dependent defaults using latent variable models (the approach that underlies KMV and CreditMetrics) and mixture models (the approach underlying CreditRisk+). We explore
the role of copulas in the latent variable framework and present results from a simulation study showing that even for fixed asset correlation assumptions concerning the dependence of the latent
variables can have a large effect on the distribution of credit losses. We explore the effect of the tail of the mixing-distribution for the tail of the credit-loss distributions. Finally, we discuss
the relation between latent variable models and mixture models and provide general conditions under which these models can be mapped into each other. Our contribution can be viewed as an analysis of
the model risk associated with the modelling of dependence between credit losses.
, 2005
"... Integrated risk management in a financial institution requires an approach for aggregating risk types (market, credit, and operational) whose distributional shapes vary considerably. In this
paper, we construct the joint risk distribution for a typical large, internationally active bank using the me ..."
Cited by 31 (2 self)
Integrated risk management in a financial institution requires an approach for aggregating risk types (market, credit, and operational) whose distributional shapes vary considerably. In this paper,
we construct the joint risk distribution for a typical large, internationally active bank using the method of copulas. This technique allows us to incorporate realistic marginal distributions, both
conditional and unconditional, that capture some of the essential empirical features of these risks such as skewness and fat-tails while allowing for a rich dependence structure. We explore the
impact of business mix and inter-risk correlations on total risk, whether measured by value-at-risk or expected shortfall. We find that given a risk type, total risk is more sensitive to differences
in business mix or risk weights than to differences in inter-risk correlations. There is a complex relationship between volatility and fat-tails in determining the total risk: depending on the
setting, they either offset or reinforce each other. The choice of copula (normal versus Student-t), which determines the level of tail dependence, has a more modest effect on risk. We then compare
the copula-based method with several conventional approaches to computing risk.
, 2006
"... In order to allow for the analysis of data sets including numerical attributes, several generalizations of association rule mining based on fuzzy sets have been proposed in the literature. While
the formal specification of fuzzy associations is more or less straightforward, the assessment of such ru ..."
Cited by 30 (6 self)
In order to allow for the analysis of data sets including numerical attributes, several generalizations of association rule mining based on fuzzy sets have been proposed in the literature. While the
formal specification of fuzzy associations is more or less straightforward, the assessment of such rules by means of appropriate quality measures is less obvious. Particularly, it assumes an
understanding of the semantic meaning of a fuzzy rule. This aspect has been ignored by most existing proposals, which must therefore be considered as ad-hoc to some extent. In this paper, we develop
a systematic approach to the assessment of fuzzy association rules. To this end, we proceed from the idea of partitioning the data stored in a database into examples of a given rule, counterexamples,
and irrelevant data. Evaluation measures are then derived from the cardinalities of the corresponding subsets. The problem of finding a proper partition has a rather obvious solution for standard
association rules but becomes less trivial in the fuzzy case. Our results not only provide a sound justification for commonly used measures but also suggest a means for constructing meaningful
alternatives. 1.
- Journal of Econometrics
"... Recently Chen and Fan (2003a) introduced a new class of semiparametric copula-based multivariate dynamic (SCOMDY) models. A SCOMDY model specifies the conditional mean and the conditional
variance of a multivariate time series parametrically (such as VAR, GARCH), but specifies the multivariate distr ..."
Cited by 28 (4 self)
Recently Chen and Fan (2003a) introduced a new class of semiparametric copula-based multivariate dynamic (SCOMDY) models. A SCOMDY model specifies the conditional mean and the conditional variance of
a multivariate time series parametrically (such as VAR, GARCH), but specifies the multivariate distribution of the standardized innovation semiparametrically as a parametric copula evaluated at
nonparametric marginal distributions. In this paper, we first study large sample properties of the estimators of SCOMDY model parameters under amisspecified parametric copula, and then establish
pseudo likelihood ratio (PLR) tests for model selection between two SCOMDY models with possibly misspecified copulas. Finally we develop PLR tests for model selection between more than two SCOMDY
models along the lines of the reality check of White (2000). The limiting distributions of the estimators of copula parameters and the PLR tests do not depend on the estimation of conditional mean
and conditional variance parameters. Although the tests are affected by the estimation of unknown marginal distributions of standardized innovations, they have standard parametric rates and the
limiting null distributions are very easy to simulate. Empirical applications to multiple
, 1999
"... This paper derives and implements a nonparametric, arbitrage-free technique for multivariate contingent claims (MVCC) pricing. This technique is based on nonparametric estimation of a
multivariate risk-neutral density function using data from traded options markets and historical asset returns. “New ..."
Cited by 27 (2 self)
This paper derives and implements a nonparametric, arbitrage-free technique for multivariate contingent claims (MVCC) pricing. This technique is based on nonparametric estimation of a multivariate
risk-neutral density function using data from traded options markets and historical asset returns. “New ” multivariate claims are priced using expectations under this measure. An appealing feature of
nonparametric arbitrage-free derivative pricing is that fitted prices are obtained that are consistent with traded option prices and are not based on specific restrictions on the underlying asset
price process or the functional form of the risk-neutral density. Nonparametric MVCC pricing utilizes the method of copulas to combine nonparametrically estimated marginal risk-neutral densities
(based on options data) into a joint density using a separately estimated nonparametric dependence function (based on historical returns data). This paper provides theory linking objective and
risk-neutral dependence functions, and empirically testable conditions that justify the use of historical data for estimation of the risk-neutral dependence function. The nonparametric MVCC pricing
technique is implemented for the valuation of bivariate underperformance and outperformance options on the S&P500 and DAX index. Price deviations are | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=90702&sort=cite&start=10","timestamp":"2014-04-21T08:29:39Z","content_type":null,"content_length":"43149","record_id":"<urn:uuid:b4c7d708-758c-4892-b636-9e0ddb6edb92>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bhargava's Factorials
Wow. Hard to believe we’re already a week into February. I began my new year with a trip to the 2010 Joint Mathematical Meetings, and I came back with a head and several notebooks full of ideas to
contemplate and problems to solve. I’ve missed my little blog here while I’ve been lost in the unexplored corners of the mathemativerse. Sad truth that, until and unless I am tenured, I can’t afford
to pass up any opportunities for research; I do have a family to think of. I haven’t been around here for a while, and I’m sorry for that. But today’s post should be a doozy.
Our topic for today comes from this year’s JMM, where the inimitable Manjul Bhargava gave an MAA invited address on factorial functions. (Coincidentally, I first learned about the topic of his talk
when I read an article of his, reprinted in the charming book Biscuits of Number Theory, which I bought at the 2009 JMM.)
Wait a minute, factorial functions? That’s what gets talked about at the biggest math conference of the year, where people on the cutting edge of research mathematics get together, factorials? The
ones you learned about in high school? Can there really be more to learn about $n!$?
Amazingly enough, yes. In a conference full of zonotopal algebra and cohomology and harmonic analysis, one of the invited addresses was about factorials. And it turns out that there is more to learn;
factorials are really just the tip of an enormous iceberg.
The Factorial Function Revisited
By now you should all have heard of the traditional factorial function $n!$, defined as the product of all the integers between 1 and $n$ inclusive.
Here is an alternative definition of the factorial function.
Pick a prime number $p$.
• Pick any integer whatsoever to be $a_0$.
• Now we want to choose an integer to be $a_1$; we choose it so that $(a_1-a_0)$ is divisible by as small a power of $p$ as possible. (There will probably be many equally good choices for $a_1$, so
make the choice arbitrarily.)
• Now choose a third integer $a_2$ so that $(a_2-a_0)(a_2-a_1)$ is divisible by as small a power of $p$ as possible. (Again, if there are equally good choices, choose whichever you like.)
• In general, choose $a_k$ so that $(a_k-a_0)(a_k-a_1)(a_k-a_2)\cdots(a_k-a_{k-1})$ is divisible by the smallest power of $p$ possible.
As we go along, we record the list of $a_i$ and also (more importantly) the list of powers of $p$ which divide the products.
As an example, consider the prime 2.
• I choose $a_0=19$. This is totally arbitrary. (By convention I’ll say that the power of 2 is always 1 at the first stage, when there is no product to speak of.)
• If I take $a_1$ to be even, then $a_1-19$ won’t be divisible by two at all, so I take $a_1=42$. (The power of 2 is 1 at this step.)
• Now it is inevitable that $(a_2-19)(a_2-42)$ will be even, but we can prevent it from being divisible by 4. Take, say, $a_2=9$. (The power of 2 is 2 now.)
• Again, an even product is inevitable, but I can still keep $(a_3-19)(a_3-42)(a_3-9)$ from being divisible by 4 by taking $a_3=0$, say. (Again, the power of 2 is 2 at this stage.)
• In the product $(a_4-19)(a_4-42)(a_4-9)(a_4-0)$, no matter what I do two of the terms will be even, and one of them will be divisible by 4. So the best I can hope for is to make the product divisible by 8 (but not 16), and that can be done by taking $a_4=15$. (The power of 2 is 8 here.)
In this example, I began the sequence $19, 42, 9, 0, 15,\ldots$, and obtained the sequence of powers $1,1,2,2,8,\ldots$. What is amazing, given all the free choices involved in constructing the sequence, is that the sequence of $p$-powers is well-defined! It does not depend on any of our choices! For the prime 2, we get the sequence $1,1,2,2,8,8,16,16,128,128,256,256,\ldots$.
We repeat the process for all the other primes as well. For the prime 3 we get $1,1,1,3,3,3,9,9,9,81,81,81,243,243,243,\ldots$; for the prime 5 we get $1,1,1,1,1,5,5,5,5,5,25,25,25,25,25,\ldots$;
etc. (You may begin to see some patterns; you might want to play around with building the sequences $a_i$ for other primes.)
Once you have all these sequences of $p$-powers, we can multiply them all together. The whole big sequence starts $1,1,2,6,24,120,720,\ldots$.
And we should recognize this sequence! It’s $0!, 1!, 2!, 3!, 4!, 5!, 6!,\ldots$. This whole elaborate process serves as an alternative description of the classical factorial function!
I regret that I can’t give you any idea how a person would ever think of this construction for the factorials. I think you have to be Manjul Bhargava to do that.
And you will be forgiven for thinking that this is an absurdly elaborate way to define something that already had a perfectly simple and user-friendly definition. But read on.
Generalized Factorials
Now let $S$ be any infinite set of integers whatsoever. (Actually this definition is dramatically more general, and we can work with quite general rings instead of just the integers; but I don’t want
to assume that you are familiar with such objects.) For example, you might take the set of square numbers, or the set of prime numbers, or the set of integers with no 7's anywhere in their decimal expansion.
Then apply the definition as before! For every prime $p$, form a sequence of $a_i$ in order to generate the sequence of powers of $p$ in the product. The only new rule is that all your $a_i$ must
come from $S$. Just as before (and just as miraculously), the resulting sequence of powers will always be the same no matter what choices you make. Then multiply the prime powers together to obtain
the values $0!_S, 1!_S, 2!_S, 3!_S,\ldots$.
I’m a number theorist, so the example I’ll play around with is the set of prime numbers. Let $P$ be the set of primes. I spent a little bit of time on the bus this morning playing with numbers to
work out this sequence; I encourage you to engage in a little number-play of your own.
• For $p=2$, one possible sequence that minimizes the powers is $2, 3, 5, 7, 17, 11, 13,\ldots$, giving the sequence of powers $1, 1, 2, 8, 16, 128, 256,\ldots$. Notice this sequences is growing
faster than the one for the classical factorial. This is because we have fewer choices for the $a_i$, so we get forced into higher powers of $2$ sooner.
• For $p=3$, one possible sequence that minimizes the powers is $2, 3, 7, 5, 13, 17, 19, \ldots$, giving the sequence of powers $1, 1, 1, 3, 3, 9, 9,\ldots$
• For $p=5$, one possible sequence that minimizes the powers is $2, 3, 5, 19, 11, 7, 13, \ldots$, giving the sequence of powers $1, 1, 1, 1, 1, 5, 5,\ldots$
• For $p\geq 7$, we have enough freedom that the sequence begins $1, 1, 1, 1, 1, 1, 1,\ldots$
Assembling these sequences gives us $1, 1, 2, 24, 48, 5760, 11520,\ldots$. This gives us the first few values of the "prime factorial function": $0!_P=1, 1!_P=1, 2!_P=2, 3!_P=24, 4!_P=48, 5!_P=5760, 6!_P=11520$.
This is the upshot: when restated as above, the definition of factorial applies to any infinite set of integers whatsoever! With one definition, we get a factorial function based on the primes, a
factorial function based on the squares, etc. For centuries mathematicians have worked with the classical factorial function, little suspecting that it was just the most basic member of an enormous
family of related functions.
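The construction is also easy to experiment with on a computer. Here is a rough sketch of my own (not from Bhargava's paper): it restricts $S$ to a finite window and the primes to a finite bound, which is enough for the small cases below, and greedily builds the $a_i$ for each prime just as described above.

```python
from math import prod

def vp(n, p):
    """p-adic valuation of the nonzero integer n."""
    v, n = 0, abs(n)
    while n % p == 0:
        n //= p
        v += 1
    return v

def valuation_sequence(S, p, length):
    """Greedily build a_0, a_1, ..., recording at each step the minimal p-adic
    valuation of (a_k - a_0)(a_k - a_1)...(a_k - a_{k-1}).  Bhargava's theorem
    says these valuations do not depend on the choices made along the way."""
    chosen, vals = [S[0]], [0]   # first pick is arbitrary; empty product -> 0
    for _ in range(1, length):
        v, a = min((sum(vp(a - b, p) for b in chosen), a)
                   for a in S if a not in chosen)
        chosen.append(a)
        vals.append(v)
    return vals

def generalized_factorials(S, length, prime_bound=50):
    """n!_S for n < length: multiply p^(n-th minimal valuation) over primes p."""
    primes = [p for p in range(2, prime_bound) if all(p % q for q in range(2, p))]
    seqs = {p: valuation_sequence(S, p, length) for p in primes}
    return [prod(p ** seqs[p][n] for p in primes) for n in range(length)]

# A window of the integers recovers the classical factorials ...
print(generalized_factorials(list(range(60)), 7))   # [1, 1, 2, 6, 24, 120, 720]

# ... while a window of the primes gives the "prime factorial" values.
prime_window = [p for p in range(2, 200) if all(p % q for q in range(2, p))]
print(generalized_factorials(prime_window, 7))      # [1, 1, 2, 24, 48, 5760, 11520]
```

The window and prime bound are big enough for these small cases; for larger $n$ or sparser sets $S$ both would have to grow.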
Key Idea: Ubiquity
Okay, you have every right not to be impressed yet. I mean, just because you can make up a function doesn’t mean that it’s interesting.
The classical factorial function is well-known to all mathematicians, and it’s not just because exclamation points are fun to write. It’s because the factorial function enjoys lots of interesting
properties and shows up in the answer to lots of questions from all corners of mathematics.
• $\frac{(m+n)!}{m!n!}$ is always an integer, for any $m,n\geq 0$. (Moreover, this integer is the binomial coefficient, and is the answer to a whole raft of counting problems.)
• If you pick any integers $a_0, a_1, \ldots, a_n$, then the product of all the possible differences among them, $\prod_{i>j} (a_i-a_j)$, is always divisible by $0!1!2!3!\cdots n!$, and that's not true for any larger number.
• Consider the polynomial $f(x)=3x^2+5x+2$, to be evaluated at integers. Even though the coefficients (3 and 5 and 2) have no factors in common, all the values of $f(x)$ are even. (Why is that?)
You might think of 2 as a “hidden” factor of $f$ (hidden because 2 doesn’t divide all the coefficients). In general, if we form an $n$-th degree polynomial with integer coefficients, how large
can the hidden factors be? The answer is $n!$.
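The “hidden factor” in that last bullet can be found empirically: evaluate the polynomial at a run of integers and take the gcd of the values. A quick Python sketch (mine, for illustration), consistent with the $n!$ bound stated:

```python
from math import gcd

def hidden_factor(coeffs, samples=50):
    """gcd of f(0), f(1), ..., f(samples-1) for a polynomial with integer
    coefficients, given highest-degree first (evaluated by Horner's rule)."""
    def f(x):
        v = 0
        for c in coeffs:
            v = v * x + c
        return v
    g = 0
    for x in range(samples):
        g = gcd(g, f(x))
    return g

# f(x) = 3x^2 + 5x + 2 from the text: hidden factor 2, even though gcd(3,5,2) = 1.
# f(x) = x^3 - x: hidden factor 6 = 3!, the largest possible for degree 3.
```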
(And I could go on with this list almost endlessly, all across the spectrum of mathematical sophistication.)
In a word, the factorial function is ubiquitous. One of the reasons that mathematics is so powerful, so beautiful, so borderline-magical, is that there are some big ideas that manifest themselves in
totally unrelated-seeming areas. Think of $\pi$ and $e$, for example.
And this is what’s so amazing, almost belief-defying. You can adapt the questions in my above list (and lots of others) from $\mathbb{Z}$ to arbitrary sets $S$, and the answers always come by just
replacing the classical factorial function by the generalized one.
• $\frac{(m+n)!_S}{m!_Sn!_S}$ is always an integer, for any $m,n\geq 0$.
• If you pick any integers $a_0, a_1, \ldots, a_n$ in $S$, then the product of all the possible differences among them $\prod (a_i-a_j)$ is \emph{always} divisible by $0!_S1!_S2!_S3!_S\cdots n!_S$, and that’s
not true for any larger number.
• In general, if we form an $n$-th degree polynomial $f(x)$ with integer coefficients and evaluate $f$ at numbers in $S$, how large can the common factors of all the $f(x)$ be? The answer is $n!_S$.
I close on a note of wonder and mystery. Mathematical innovation is happening all the time, at the hands of mathematicians, both university professors and home enthusiasts, and admittedly most of it
is happening in distant corners of the mathiverse, in places that would take years or decades of study even to understand the questions. This might lead you to think that mathematical research has
nothing to do with anything that nonmathematicians understand. But Bhargava’s factorials tell us a different story.
The next big mathematical innovation could be about anything. Any idea, even an idea that’s been thoroughly studied for hundreds of years, might conceal a big secret, just waiting to be looked at in
the right way.
One Response to Bhargava’s Factorials
1. [...] Permalink: Bhargava’s Factorials. [...] | {"url":"http://notaboutapples.wordpress.com/2010/02/08/bhargavas-factorials/","timestamp":"2014-04-21T09:46:49Z","content_type":null,"content_length":"74221","record_id":"<urn:uuid:ffcd4c85-4781-4a18-90eb-f7f9b31d6bca>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Course Description -
MATH 1411
MATH 1411 Calculus I (4-0) (Common Course Number MATH 2413)
Topics include limits, continuity, differentiation, and integration of functions of a single variable. Prerequisites: Four years of high school mathematics including trigonometry and analytic
geometry and an adequate score on a placement examination or MATH 1410 or MATH 1508. | {"url":"http://www.utep.edu/catalogs/undergrad/classes/math1411.htm","timestamp":"2014-04-19T11:57:18Z","content_type":null,"content_length":"1928","record_id":"<urn:uuid:adca46f0-08af-4438-921a-857520b1960a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00032-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - wxMaxima graph paths
Date: May 12, 2013 2:31 PM
Author: Sergio
Subject: wxMaxima graph paths
Good evening,
I am learning how to use wxMaxima so I am trying to solve some exercises, but the answers given and the ones that I think they should be are different in two of them.
Would you be so kind as to help me?
Graph G has 15 vertices; u and v are adjacent if remainder(abs(u-v),3)=0.
How many closed paths of length <= 6 begin at the first vertex?
The answer given is 3255 but I get 1092.
This is my code:
Bool_2(u,v):= if remainder(abs(u-v),3)=0 then true else false$
Grafo_1: make_graph(15,Bool_2);
Matriz: adjacency_matrix(Grafo_1)$
cont: 0$
for j:1 thru 6 do cont:cont+(Matriz^^j)[1,1];
cont is like a counter, it is supposed to increase by every path it finds.
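The same count can be reproduced outside Maxima. Here is an illustrative Python check (mine, not part of the post) that builds the adjacency matrix from the stated rule and sums the [1,1] entries of its powers; it agrees with the 1092 figure:

```python
def matmul(A, B):
    """Plain-Python product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def closed_walks_from_first_vertex(n=15, max_len=6):
    """Sum of (A^j)[1,1] for j = 1..max_len, where vertices u, v are adjacent
    iff u != v and (u - v) is divisible by 3 (the graph is three disjoint K5's)."""
    A = [[1 if i != j and (i - j) % 3 == 0 else 0 for j in range(n)] for i in range(n)]
    total, P = 0, A
    for _ in range(max_len):
        total += P[0][0]  # closed walks of the current length from vertex 1
        P = matmul(P, A)
    return total

print(closed_walks_from_first_vertex())  # -> 1092
```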
Thank you very much.
Message was edited by: Sergio
Message was edited by: Sergio | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=9122778","timestamp":"2014-04-18T00:33:06Z","content_type":null,"content_length":"1813","record_id":"<urn:uuid:64261fe3-3757-45a5-82f9-b70d3e5c3a90>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
View Source
<p>This document describes a request for comments for a Mathematics namespace within Zend Framework. The reason I am writing this RFC is that I believe mathematics will play a bigger role in the near
future in web projects. With WebGL coming up and HTML5's canvas element already being there, I am sure that more complex projects and applications will come in the future. These applications
will do more calculations than a 'normal' website. Zend Framework could have a Zend\Math namespace which will act as a core for all mathematical concepts.</p>
<h2>Math currently used in Zend Framework</h2>
<p>At the moment of writing, Zend Framework uses a BigInteger class within the Zend\Crypt namespace. This should be refactored to the Zend\Math namespace in order to improve reusability. Also within
the Zend\Locale namespace mathematical functions are often used such as a custom round function, normalize, etc. There even is a special class devoted to doing this (Zend\Locale\Math).</p>
<p>The namespace Zend\XmlRpc contains a wrapper class around Zend\Crypt\Math\BigInteger. I'd like to think that a dependency between Zend\XmlRpc and Zend\Math makes more sense than a dependency
between Zend\XmlRpc and Zend\Crypt (unless there actually is done some en/decrypting).</p>
<h2>What would Zend\Math contain?</h2>
<p>As already pointed out the Zend\Math namespace should contain the common used math-related classes in other namespaces. This improves reusability. Of course the options are endless but I'd like to
share some first ideas:</p>
<li>Basic math functions such as normalize, clamping, etc.</li>
<li>Spline algorithms such as Bezier, Hermite or Catmull-Rom</li>
<li>Random number generating algorithms where "rand();" is not sufficient.</li>
<li>Vector operations</li>
<li>Matrix operations</li>
<li>Quaternion operations</li>
<li>Latitude/longitude related calculations (how often did we need to calculate the distance to...?)</li>
<li>Zend\Math\BigInteger component is ready (with tests). It provides basic arithmetic operations, base conversion and int/binary conversions. Code is <a href="https://github.com/denixport/zf2/tree/
<li>Zend\Math\Rand component is ready. Please see <a href="http://framework.zend.com/wiki/x/HADTAg">updated RFC</a> and code in <a href="https://github.com/denixport/zf2/tree/feature/rand/library/
Zend/Math/Rand">this feature branch</a></li> | {"url":"http://framework.zend.com/wiki/pages/viewpagesrc.action?pageId=46793054","timestamp":"2014-04-16T16:24:01Z","content_type":null,"content_length":"7917","record_id":"<urn:uuid:16886666-9a43-4576-9ed6-ab9c9103d736>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00464-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Do the neural network solves the higher non-linear equation and can it make online real implementation?
Replies: 1 Last Post: Feb 21, 2013 8:33 PM
Re: Do the neural network solves the higher non-linear equation and can it make online real implementation?
Posted: Feb 21, 2013 8:33 PM
"Muna Adhikari" <adh.muna@gmail.com> wrote in message <kg3ie7$h5s$1@newscl01ah.mathworks.com>...
> Dear Sir or Madam;
> I am a graduate student in Physics from NEPAL. I am going to study the effects of soil moisture content and its prediction using 15 minute remotely sensed data. I am going to design the prediction
based on neural network, as it is said that it is good on forecasting. However, I need to test and validate the instantaneous data from the sensors. I need to do some experimental work. I need to
know how can I used neural network model in my case?
> I found in some literature that people use multi layer backpropagation neural network for the static design. And then, they performed sliding window algorithm or accumulated training algorithm to
make the system online? Could you provide me some idea and if some sample of matlab how I could use sliding window algorithm or accumulated training and generalization of the neural network.
> I have sample of remotely sensed data of 15 minute duration of 6 months and total sample data is 17,280. My input data are: soil temperature, air temperature, precipitation, soil adjusted
vegetation index and land surface temperature. My output data is: soil moisture content.
> Also, the effect of soil moisture content is a polynomial equation based on empirical physical modeling.
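The "sliding window" retraining mentioned in the question boils down to rebuilding lagged input/output windows as new samples arrive. A minimal Python illustration (mine; this is not the MATLAB narxnet API, and the names are made up):

```python
def sliding_windows(target, exog, lag):
    """Build (features, labels): the previous `lag` values of the target series
    and of each exogenous input predict the next target value (NARX-style)."""
    X, y = [], []
    for t in range(lag, len(target)):
        feats = []
        for series in [target] + exog:
            feats.extend(series[t - lag:t])
        X.append(feats)
        y.append(target[t])
    return X, y

# Online use: as each new 15-minute sample arrives, append it to the series,
# rebuild the most recent windows, and refit (or incrementally update) the model.
soil_moisture = [1, 2, 3, 4]          # toy target series
soil_temp = [[10, 20, 30, 40]]        # toy exogenous inputs
X, y = sliding_windows(soil_moisture, soil_temp, lag=2)
```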
I think the word "effect" is being misused. Do you mean you have a polynomial equation that estimates the current soil moisture and now you want a neural net that will predict future values?
The latter is a timeseries problem suitable for narxnet.
See the narxnet documentation and examples and try the SISO simpleseries_dataset.
Also look at NEWSGROUP and ANSWERS posts by searching on narxnet. Next try the MIMO pollution_dataset with 8 inputs and 3 outputs. Or you can just consider one of the outputs instead of considering
all three.
Hope this helps. | {"url":"http://mathforum.org/kb/message.jspa?messageID=8390451","timestamp":"2014-04-19T23:30:05Z","content_type":null,"content_length":"16075","record_id":"<urn:uuid:4cc65208-7fcf-4dfe-815e-8d492e170eca>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the value of a playoff win?
A question has been weighing on my mind recently. It’s something most people probably wouldn’t care about, but if you’re reading this column, it’s something I think you’ll find of great interest. At
least, I do. The question is, what is the proper amount of weight we should give to a player’s performance in the playoffs when evaluating his overall career?
See, all the sabermetric attempts at a total value measure, such as Clay Davenport’s translations or Dan Rosenheck’s WARP statistic or Bill James’ Win Shares, consider only regular season
accomplishments, even though postseason performances are arguably more memorable and certainly no less important. I haven’t seen many people point out this issue—and indeed, until recently, I
considered it no big deal—but in light of Curt Schilling’s potential retirement, it’s a problem I’ve spent the past week trying to solve.
Now this column won’t be about Schilling or about how including playoff data might impact other players—the latter topic is for some future column (which I hope to have done next week, but given my
recently sporadic publishing schedule I can in no way guarantee) and the former I already addressed in Heater Magazine; if you don’t have a subscription or don’t want one, suffice it to say, once you
include Schilling’s playoff numbers there is no way you can’t put him in the Hall of Fame.
This column is only a start. Before I move into any calculations, I want to hear reader feedback and critiques, because if what I’m about to tell you correct, the implications for anyone who cares
about historical valuations are pretty huge.
Alright, enough with the vagaries; let’s get to some real results. But first, let’s start with a graph I printed in a column last year on calculating Pennants Added. Pennants Added, for the
uninitiated, is a Bill James invention and is supposed to measure not how many wins a player would have added above some baseline, but how many pennants. James’ idea—which ultimately was not
really correct—was that Pennants Added would give high-peak players extra (but deserved) credit for concentrating their greatness over a few seasons rather than a long, steady, but boring career.
I extended James’ idea one step by calculating “World Series Added” (which is really just Pennants Added divided by two), which would tell us how many World Series victories a player would add to a
randomly selected team, above a replacement level player. The graph I printed looked like this:
So what does this graph mean? That’s a fair question. Essentially, the horizontal axis gives us different wins above replacement levels (a handy reference is that two wins is an average player, five
is an All-Star, and eight is an MVP), while the vertical axis tells us how many World Series they would add to a randomly selected team (you should read both my Pennants Added articles if you want to
better understand the exact process I used).
The lines drawn represent different league setups: Eight teams, one that makes the playoffs; ten teams, with one making the playoffs; 12 teams with two making the postseason; 14 teams with two making
the postseason; 14 teams with four making the postseason; and 16 teams with four getting into the playoffs (the lines for the last two leagues overlap as the results are almost equivalent). These
numbers correspond to the following leagues:
8 Teams, 1 Playoff Spot: 1904-1960 AL and NL, 1961 NL
10 Teams, 1 Playoff Spot: 1962-1968 AL and NL, 1961 AL
12 Teams, 2 Playoff Spots: 1969-1976 AL and NL, 1977-1992 NL
14 Teams, 2 Playoff Spots: 1993 AL and NL, 1977-1992 AL
14 Teams, 4 Playoff Spots: 1996-1997 AL and NL, 1998- AL
16 Teams, 4 Playoff Spots: 1998- NL
The main thing that can be gleaned from this graph is that the value of any given player, in terms of how much he can add to a randomly selected team’s odds of winning the World Series, has vastly
decreased over time. An MVP player half-a-century ago could increase a randomly selected team’s odds of winning the World Series by 27 percent, while today, he would only up their odds by 6 percent.
Obviously, the value of an elite player 50 years ago was much different from his value today.
Or was it? Indulge me if you will, and let me do a little math. Specifically, let’s calculate the value of a great World Series performance—say, one win above replacement. What is one win above
replacement? It’s the equivalent to something like two starts of six shutout innings each. So by how much would a player raise his team’s probability of winning it all if he turned in two starts like
Well, essentially, he would turn a series in which his team needed to win four games into one in which they (theoretically) needed to win three. (And by the way, for those who take issue with using
replacement level here, it doesn’t really matter what level you use; the argument remains the same.) With a little math, we find that their odds of winning the Series improve from 50 percent
(assuming an equal match-up, and ignoring things like home field advantage) to 68.75 percent. In other words, that player would be worth .1875 World Series victories—not too bad.
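The bookkeeping in this passage can be reproduced with the standard "first to N wins" recursion. The sketch below is mine, not the article's; it computes the chance that a team needing `a` more wins beats one needing `b`, with each game an independent fair coin flip. Which exact percentage you get depends on which state you assume the banked win creates, but the recursion itself is standard.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_win(a, b, p=0.5):
    """Chance the first team wins a series it needs `a` more games to take,
    against a team needing `b` more, winning each game with probability p."""
    if a == 0:
        return 1.0
    if b == 0:
        return 0.0
    return p * p_win(a - 1, b, p) + (1 - p) * p_win(a, b - 1, p)

# An even best-of-seven (each side needs 4) is a coin flip:
print(p_win(4, 4))  # -> 0.5
```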
Now this number is the same no matter what era you look at. There are, however, two confounding factors: The introduction of the League Championship Series and subsequently the divisional round, and
the relative value of a playoff win to a regular season win. We’ll first tackle the former.
The math for the League Championship Series is much the same as the math for the World Series; the only difference is that winning the LCS only gives a team a 50 percent shot at winning the World
Series, so a player who is one win above replacement (or whatever baseline you choose) in the LCS is worth not .1875 World Series Added, but .094 (rounded).
In the divisional round, you only need to win three games, so the value of one win is a little more—it increases the team’s odds of advancing by 25 percent—but of course, now we need to divide the
number not by two but by four, so a player who is one win above replacement in a divisional series is actually worth .0625 World Series Added.
So the first conclusion we can make is that modern players have a lot more chances to contribute in the playoffs than did players from previous eras. The (realistic) potential value of a player who
stars throughout the playoffs is twice what it was 50 years ago. Whether that makes up for their regular season disadvantage, we’ll look into at a later date. What I want to address right now is of
more pressing importance.
What I want to address right now is the great increase in the relative value of playoff contributions to regular season contributions. A player who contributed 10 wins above replacement in the
regular season 50 years ago would have .345 World Series Added. If he continued on the same pace in the playoffs (assuming all series go the distance), he would tack on .085, for a total of .43 World
Series Added.
A player who contributes 10 wins above replacement in the regular season these days gets just .073 World Series Added. If he continues on that pace in the playoffs, he ends up with .20 more, for a
total of .273.
Given this fictional example, the player from 50 years ago will still end up with a lot more credit for the same performance, but today’s player will get a lot more credit for his playoff numbers. We
can show this more clearly by graphing the relative value of a win above replacement in the playoffs versus the value of a win in the regular season for a given level of player in different league
types. For example, here is that graph for an MVP-type player (eight wins above replacement):
What this graph tells you is that while a win in the World Series 50 years ago had about five times the value of a regular season win, today its relative value is 25 times greater! The relative value
of a win in the League Championship Series is 12 times greater, and even for the divisional round, it’s eight. In other words, relative to a regular season win, one win in the divisional round has a
greater value than one win in the World Series win did half-a-century ago! Crazy.
Since the values change depending on just how good a player is, let’s take a look at the graph for an All-Star level player (five wins above replacement).
The story here is much the same, the only real difference being that in the eight and ten team leagues, the relative value of a World Series win for an All-Star would be about seven-to-eight times
greater than the value of a regular season win.
For posterity’s sake, here is the graph for an average player (two wins above replacement).
Again, the relative value of a World Series win in older years rises, but still is far below what we see today.
The pattern is clear: A great playoff performance in 2008 is worth tons more, relative to a great regular season, than a great playoff performance in 1958! The implications are potentially
staggering. For example, what if we evaluated every player’s statistics in a modern context, giving World Series performances 25 times the weight of regular season games? Would Curt Schilling end up
the greatest player of all-time?
For those interested in those types of questions, I think this is a really important point to digest. There’s a lot that can be said about this, and probably some things I didn’t think of, so if you
have any thoughts you’d like to share, feel free to e-mail me. Let’s figure this thing out together. | {"url":"http://www.hardballtimes.com/what-is-the-value-of-a-playoff-win/","timestamp":"2014-04-19T09:58:18Z","content_type":null,"content_length":"50890","record_id":"<urn:uuid:b850d448-f95d-408f-8c23-9819a50ac987>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
Standard matrix of T
July 31st 2010, 05:13 PM #1
Assume that T: R2 -> R2 is a linear transformation such that
T(1,2) = (1,-3)
T(3,-1) = (2,1)
Find the standard matrix of T.
Ok, so I get that (1,2)=e1+2e2 and that (3,-1)=3e1-e2, but what do I do from there on? It shows in the solutions that
e1 = (1/7)(1,2) + (2/7)(3,-1)
I understand that the are using the same thing from e1+2e2 and 3e1-e2, but where is the divisor of 7 coming from?
Sorry if it's formatted badly, I can scan the original page? Thanks
Okay so you know that
$T(e_1+2e_2)=e_1-3e_2$ and
Now use the linearity of the operator to get
$T(e_1+2e_2)=T(e_1)+2T(e_2)=e_1-3e_2$ and
This gives a system of equations that you can solve for $T(e_1),T(e_2)$
After you solve for them you should get
$T(e_1)=\frac{1}{7}(5e_1-e_2); T(e_2)=\frac{1}{7}(e_1-10e_2)$
These are the columns of the matrix representation of the transform.
$\begin{bmatrix} \frac{5}{7} & \frac{1}{7} \\ \frac{-1}{7} & \frac{-10}{7}\end{bmatrix}=\frac{1}{7}\begin{bmatrix} 5 & 1\\ -1 & -10\end{bmatrix}$
Thanks for the reply but I'm still confused. I don't get where you said it gives a system of equations for T(e1) and T(e2). Where would I get that from?
is it just
1 -1
Which I'm pretty sure is wrong... could you maybe put in the step of where you got the system of equations, and where from?
Thanks a lot
Assume that T: R2 -> R2 is a linear transformation such that
T(1,2) = (1,-3)
T(3,-1) = (2,1)
Find the standard matrix of T.
Ok, so I get that (1,2)=e1+2e2 and that (3,-1)=3e1-e2, but what do I do from there on? It shows in the solutions that
e1 = (1/7)(1,2) + (2/7)(3,-1)
I understand that the are using the same thing from e1+2e2 and 3e1-e2, but where is the divisor of 7 coming from?
Sorry if it's formatted badly, I can scan the original page? Thanks
Let's take the following as given:
T is completely determined once the images of any basis for R^2 are known. *
With * in mind, I think I would read the initial statement, i.e.,
that T(1,2) = (1,-3) and T(3,-1) = (2,1), as telling me that
with respect to the basis B = {(1,2), (3,-1)} T is uniquely determined.
In this context the matrix $\begin{bmatrix}5/7 & 1/7 \\ -1/7 & -10/7\end{bmatrix}$ is the matrix representation of T
relative to basis B. Call it M'.
But, the problem asks you to find the "standard matrix" of T.
I read that as meaning the matrix representation of T relative to
the standard (or natural) basis E = {(1,0),(0,1)}.
So I think there's more work to do.
You already have M'.
Can you say what the matrix $\begin{bmatrix}1/7 & 3/7 \\ 2/7 & -1/7\end{bmatrix}$ might represent?
Maybe the matrix of transition from E to B?
What about $\begin{bmatrix}1 & 3 \\ 2 & -1\end{bmatrix}$?
Thanks for the reply but I'm still confused. I don't get where you said it gives a system of equations for T(e1) and T(e2). Where would I get that from?
is it just
1 -1
Which I'm pretty sure is wrong... could you maybe put in the step of where you got the system of equations, and where from?
Thanks a lot
$T(e_1+2e_2)=T(e_1)+2T(e_2)=e_1-3e_2$ and
This is the system of equations or more explicitly
$T(e_1)+2T(e_2)=e_1-3e_2$ and
Solving this gives you the transform of the standard basis.
Hi, I don't know if its because it's late, or because I'm just really stupid, but I'm still really lost.
To solve, do you mean in this fashion:
$\begin{bmatrix} 1 & -3 \\ 2 & 1\end{bmatrix}$
$\begin{bmatrix} 1 & -3 \\ 0 & 7\end{bmatrix}$
Just a wild guess here
I completely get this:
$T(e_1)+2T(e_2)=e_1-3e_2$ and
and I also get that you have to use
$T(e_1)+2T(e_2)$ and
where the e1 and e2 are (1,2) and (3,-1) respectively, but how do you find out what T is for each?
From what I'm looking at, the multiple stays the same, but there is a divisor for each T, in this case 7.
How do I solve for that T? Hopefully I'm being clear enough
EDIT I GOT IT, I guess I just needed some sleep
Last edited by shibble; August 2nd 2010 at 06:44 AM.
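For anyone checking this kind of problem later, the whole computation is just M = B A^{-1}, where the columns of A are the given input vectors and the columns of B are their images. A small exact-arithmetic Python sketch (mine, not from the thread):

```python
from fractions import Fraction as F

def solve_standard_matrix(v1, Tv1, v2, Tv2):
    """2x2 standard matrix M with M v1 = Tv1 and M v2 = Tv2,
    computed as M = B A^{-1} in exact rational arithmetic."""
    a, c = v1
    b, d = v2                                    # A = [[a, b], [c, d]], columns v1, v2
    det = F(a * d - b * c)
    Ainv = [[d / det, F(-b) / det],
            [F(-c) / det, a / det]]
    Bmat = [[Tv1[0], Tv2[0]],
            [Tv1[1], Tv2[1]]]                    # columns Tv1, Tv2
    return [[sum(Bmat[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = solve_standard_matrix((1, 2), (1, -3), (3, -1), (2, 1))
print(M)  # -> [[Fraction(5, 7), Fraction(1, 7)], [Fraction(-1, 7), Fraction(-10, 7)]]
```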
July 31st 2010, 11:11 PM #2
August 1st 2010, 04:49 PM #3
Jul 2010
August 1st 2010, 07:58 PM #4
Junior Member
Oct 2006
August 1st 2010, 08:29 PM #5
August 1st 2010, 09:39 PM #6
Jul 2010 | {"url":"http://mathhelpforum.com/advanced-algebra/152450-standard-matrix-t.html","timestamp":"2014-04-17T05:43:59Z","content_type":null,"content_length":"55418","record_id":"<urn:uuid:ed537021-00be-427f-967d-5216c690cd45>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
Burr Ridge, IL Algebra 2 Tutor
Find a Burr Ridge, IL Algebra 2 Tutor
...I was a French teacher, but I also did official district homebound instruction (for students that cannot attend regular high school for health related reasons) for 2 years as well. I also
tutored officially after school for students on campus, again in various subjects. I have been tutoring for WyzAnt for over 4 years now and very much enjoy it!
16 Subjects: including algebra 2, English, chemistry, French
...I have developed strategies that are very effective at helping students to efficiently process these sections, as well as to develop their skills at correctly interpreting the many charts,
tables, and graphs that appear in this section. It is a beast that can be tamed! I am particularly qualified to tutor students on the ISEE because, as a teacher, I am quite familiar with the test.
20 Subjects: including algebra 2, reading, English, writing
...Wilcox scholarship). In high school, I scored a 2400 on the SAT, and earned a 5 on the AP Calculus BC exam from self study. I also received a 5 on the AP Statistics exam. I have teaching
experience, as well.
13 Subjects: including algebra 2, calculus, geometry, statistics
...I work well with students from middle school through college and I can tutor all K-12 math, including college Calculus, Probability, Statistics, Discrete Math, Linear Algebra, and other
subjects. I have flexible days and afternoons, and I can get around Chicago without difficulty. I look forward to hearing from you.I took discrete math undergraduate at Tufts and received an A.
22 Subjects: including algebra 2, calculus, geometry, statistics
...I love teaching, and gain great enjoyment from doing so. I have a Bachelor's degree in Math and believe that I can proficiently tutor that subject. I am also a native French speaker and can
tutor that subject as well.
4 Subjects: including algebra 2, French, algebra 1, precalculus | {"url":"http://www.purplemath.com/Burr_Ridge_IL_Algebra_2_tutors.php","timestamp":"2014-04-17T16:00:25Z","content_type":null,"content_length":"24331","record_id":"<urn:uuid:03a2aa04-5799-4845-89ab-2bfeb58d04cd>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
The direct movement of viruses between contacting cells as a mode of dissemination distinct from the release of cell-free virions was hinted at in pioneering experiments first reported almost eighty
years ago [1], and confirmed and extended 30 years later [2,3]. This early work was carried out using the tools of the time in the absence of the modern cell biological, immunological and virological
techniques available today. As such, although many of the basic concepts were established for cell-to-cell spread prior to the discovery of retroviruses, descriptions of the molecular and cellular
mechanisms underlying this phenomenon were lacking. Papers from two decades ago revealed that HIV-1 could spread between cultured lymphocytes by cell-to-cell spread [4], proposed that this mechanism
of dissemination was substantially more efficient than diffusion-limited spread of cell-free virions [5,6], and suggested that this might be a mechanism of evasion from antibody neutralization [4].
Investigation of the cell-to-cell spread of viruses, and particularly retroviruses, has seen a renaissance in the past five years with the discovery of a multi-molecular structure termed the
virological, or infectious, synapse [7,8,9,10]. The definition of this structure was to a great extent based upon the paradigm established by two other well-established synaptic junctions, neural and
immunological synapses [11], and the virological synapse shares features of these synapses. Foremost amongst these shared features are the relatively stable adhesive junction formed between the
pre-synaptic (virus infected donor) cell and the post-synaptic (receptor-expressing target) cell, and the cytoskeleton-dependent directed release of intercellular information, which in the case of
the virological synapse is infectious material in the form of virions. Thus the virological synapse becomes a ‘third synapse’, distinct from the neural and immunological synapses in that it transfers
‘pathogenic information’ between cells. Although first described for retroviruses, other viruses can use virological synapses for spread between immune cells [12], and the list will no doubt grow
Although we do not yet have direct evidence supporting a role for retroviral cell-to-cell spread in vivo, its importance seems certain. HTLV-1 is almost non-infectious in vitro in a cell-free form,
strongly implying that the predominant means of spread in vitro and in vivo is cell-to-cell, and helping to explain in vivo viral tropism [13]. In the early stages of HIV-1 infection the virus
infects and kills CD4^+ T cells so rapidly that the comparatively slow dissemination by cell-free virus is unlikely to account for this [14]. Moreover, HIV-1 preferentially targets CD4^+ T cells with
T cell receptors specific for itself, implying that the virus is able to infect such cells across immunological synapses [15]. Finally, the focal distribution of SIV and HIV-1 infected cells in
secondary lymphoid tissue and the multiplicity of infection implied by multiple integration events are consistent with direct movement of virus between contacting cells [16,17,18].
Several other virus families including rhabdo, herpes, pox, paramyxo, Flavi and African Swine Fever can travel by directed cell-to-cell spread via diverse mechanisms [8]. The induction of virological
synapses dictates interaction between the host cell cytoskeleton and the pathogen in ways similar to, but distinct from that described for these other viruses and for intracellular bacteria [19].
Understanding microbial entry and spread reveals a lot about the pathogenesis of the infectious agent, but we can learn as much about host molecular cell biology using pathogens as functional probes,
as we can about the pathogens themselves. This will be the case for the virological synapse, which will shed light not only on processes relating to intercellular communication including
immunological synapse assembly and function, but may help identify potential molecular targets for intervention in the virus life cycle.
Many of the central questions relating to the cellular and molecular basis of virological synapse structure and function have been, or are being, addressed, and the concept of cell-to-cell spread by
these and related structures is well established. Nevertheless, substantial gaps remain in our knowledge, and several of the key concepts relating to this mode of viral spread are controversial and
remain to be confirmed or properly understood. This issue of Viruses presents a series of state-of-the art reviews of the field from experts in the major areas of retroviral virological synapse
research, discussing areas of particular interest and highlighting significant lacunae in our understanding. | {"url":"http://www.mdpi.com/1999-4915/2/4/1008/xml","timestamp":"2014-04-18T05:47:21Z","content_type":null,"content_length":"26635","record_id":"<urn:uuid:0088b843-1efe-4dc8-b45b-798e178febb9>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00124-ip-10-147-4-33.ec2.internal.warc.gz"} |
Converse of the Pythagorean Theorem
Date: 02/14/2003 at 00:30:29
From: Michael
Subject: The Pythagorean Theorem
What is the converse of the Pythagorean theorem?
Date: 02/14/2003 at 08:38:48
From: Doctor Rick
Subject: Re: The Pythagorean Theorem
Hi, Michael.
This will help you understand what a converse is:
Converse, Inverse, Contrapositive
If we have a theorem that says "If A is true, then B is true," the
converse of this theorem would be "If B is true, then A is true."
This is a very different statement, and you can't just take any
theorem and say that its converse is also true. You need to prove it
as a separate theorem.
The Pythagorean theorem is:
In a right triangle, the sum of the squares of the two shorter sides
is equal to the square of the hypotenuse.
In other words:
IF triangle ABC is a right triangle (with the right angle at B),
then AB^2 + BC^2 = AC^2.
The converse of this is:
If the sides of triangle ABC are such that AB^2 + BC^2 = AC^2, then
the triangle is a right triangle (with right angle at B).
The converse of the Pythagorean theorem allows you to determine
whether a triangle is a right triangle if you know the lengths of its
sides. The Pythagorean theorem itself cannot be used in this way,
because you have to know that the triangle is a right triangle before
you can apply it. Sometimes people miss this distinction, and they can
get away with it because both theorems happen to be true. But it's
important to see the difference because not every theorem is like
this: it may be that a theorem is true but its converse is not.
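A short Python sketch of the converse used as a test (it assumes the longest side is the candidate hypotenuse):

```python
import math

def is_right_triangle(a, b, c):
    """Apply the converse of the Pythagorean theorem: if the sum of
    the squares of the two shorter sides equals the square of the
    longest side, the triangle is a right triangle."""
    x, y, z = sorted((a, b, c))  # z is the candidate hypotenuse
    return math.isclose(x * x + y * y, z * z)

print(is_right_triangle(3, 4, 5))  # True: a 3-4-5 triangle is right
print(is_right_triangle(2, 3, 4))  # False
```

Note that this is exactly the direction the original theorem cannot be used in: we start from the side lengths and conclude the angle.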
Is this what you wanted to know?
- Doctor Rick, The Math Forum
Bartlett, IL Prealgebra Tutor
Find a Bartlett, IL Prealgebra Tutor
...Geometry students can expect topics such as: proofs using theorems and axioms, calculation of distance, area and volume, congruence and similarity of triangles, transformations (rotations,
reflections, translations), and properties of geometric shapes. I can also help with the use of a compass, ...
11 Subjects: including prealgebra, calculus, geometry, algebra 1
...I received a Master's Degree in School Counseling so that I could better understand the many parts of the student that contribute to both struggles and success in achieving happiness in school
and life. I am currently employed as a high school resource teacher, specializing in working with stude...
15 Subjects: including prealgebra, English, reading, writing
...I have a bachelor's in Mathematics, so I have used algebra for many years. I have a knowledge of tips and tricks to simplify elementary algebra concepts. As an undergraduate math major, I
volunteered with a program that tutored local middle school students in math.
5 Subjects: including prealgebra, statistics, algebra 1, probability
...I am a patient and determined teacher who will keep working with my students until they understand the material. My experience includes leading workshops in Biology at the University of Miami
as well as coaching Pop-Warner football. I lead workshops for a year at the University.
27 Subjects: including prealgebra, reading, English, chemistry
I am currently employed as a high school math teacher, going into my eighth year. I will be teaching pre-algebra and Algebra 2. I have taught every math class from Pre-Algebra to Pre-Calculus. I
am also very knowledgeable about the math portion of ACT. I have numerous resources for test prep.
7 Subjects: including prealgebra, geometry, algebra 1, algebra 2
Measuring student growth: A guide to informed decision making
Which student would you say achieved more? A high-scoring student who started out as a high-performer or an average-scoring student who started at the bottom?
Currently under the 2001 No Child Left Behind Act (NCLB) schools are given credit for the percent of students achieving the state's "proficient" level, regardless of how far students progressed to
get to proficient. Recent policy discussions about school and teacher accountability, however, are expanding the proficient view of achievement by recognizing that some students have much farther to
go to reach proficiency, even though it remains the minimum target for everyone. This has led policymakers to look at ways to measure academic growth via growth models.
Simply put, a growth model measures the amount of students' academic progress between two points in time. "Value added" and "growth models" are the statistical methods most often cited for
measuring student growth for accountability purposes. But what exactly are these methods? Do they measure what they claim to measure? How should they be used? More important, as a school policymaker,
educator, parent, or voter, why should you care?
This guide is intended to answer these and other questions and to help you decide which model, if any, should be used in your state or district. Although we explain growth models within the framework
of NCLB, they can be used for a variety of educational purposes—not just for high-stakes accountability as we also illustrate throughout this guide. To help you get the most from this guide, it is
organized as follows:
Growth models can be sophisticated tools that help gauge how much student learning is taking place. But like all tools, they are most effective in the hands of those who understand how to use them.
This guide illustrates why.
Why are policymakers talking about growth models?
Terms you should know
Status model: A method for measuring how students perform at one point in time. For example, the percent of fourth graders scoring at proficient or above in 2006.
Growth model: A method for measuring the amount of academic progress each student makes between two points in time. For example, Johnny showed a fifty point growth by improving his math score from
three hundred last year in the fourth grade to three hundred fifty on this year's fifth grade exam.
Value-Added model: A method of measuring the degree in which teachers, schools, or education programs improve student performance.
Achievement level: Established categories of performance that describe how well students have mastered the knowledge and skills being assessed. For this guide, we use advanced, proficient, basic, and
below basic for achievement levels. Proficient or above is assumed to represent the level that meets the state standard.
Scale score: A single numeric score that shows the overall performance on a standardized test. Typically, a raw score (number of questions answered correctly) is converted to a scale score according
to the difficulty of the test and/or individual items. (For example, the 200–800 scale used for the SAT.)
Vertical scale scores: Numeric scores on standardized tests that have been constructed so that the scale used for scoring is the same for two or more grade levels. Hence, a student's scale score gain
over multiple years represents the student's level of academic growth over that period of time.
Most of us are accustomed to getting information about academic results in the form of a score. Whether it's reported as a number or a letter grade, it tells us basically the same thing—how well
students have learned certain subject matter or skills at one point in time. However, a score does not typically tell us how far students grew academically to produce that number or grade. We don't
know if the score reflects relatively normal progress, if it represents a huge leap forward, or even if students lost ground.
This poses a real question for policymakers because of the challenges to our present definition of achievement for school accountability purposes. Since the 1990s, education policy has been mostly
focused on results as defined by state academic standards, which all students are expected to meet. These standards are the tracks on which school accountability runs. Students are tested on the
material described by state standards and schools are held accountable for whether or not students meet those standards. But growth models will shift accountability to include measures for how much
progress students make, not just whether they meet state standards.
Most states had some form of standards-based accountability in place when NCLB was signed into law in 2002. However, NCLB took accountability to a new level by requiring schools to meet specific
targets each year—called "Adequate Yearly Progress," or AYP—with all groups of students. AYP targets are based on a status model of achievement, meaning that schools are evaluated on the achievement
status of their students, in this case the percent of students scoring at or above a "proficient" level of achievement. Each state establishes AYP targets that must culminate at one hundred percent
student proficiency in the year 2014.
However, many educators argue that a status criterion alone is an unfair way to measure school effectiveness, particularly for high-poverty urban and rural schools that receive a large proportion of
students who enter school already behind their peers who have greater home and community advantages. Under current law, schools can be labeled "in need of improvement" for failing to meet the state's
AYP target, even if they produced more sizable gains with their students than more affluent schools. Many educators and an increasing number of policymakers believe that these schools should still be
recognized for effecting significant student growth. California's Superintendent of Public Instruction, Jack O'Connell, echoed the sentiments of many educators when he said that "The growth model is
a much more accurate portrayal of a school's performance" (Wallis and Steptoe 2007).
In the current NCLB environment calls for growth models have largely centered on using them for high-stakes school accountability purposes, but some growth models, especially value-added models, can
also be used to evaluate teacher or program effectiveness and as a tool for school improvement. Most researchers agree that these statistical tools present a more complete picture of school
performance. However, they disagree over how precisely various growth models measure student growth and what role they should play in accountability. In the following sections, we describe each
growth model; discuss what is needed to develop, implement, and maintain a growth model; and explore the strengths and limitations of different models.
What are the different types of growth models?
Growth models measure the amount of academic progress students make between two points in time. There are numerous types of growth models but most tend to fall into five general categories:
Each of these categories encompass several variations depending on the model's purpose and available data. Because growth models are relatively new in education, and different models continue to be
developed, these five categories may not capture all models.
Also, we're including two models in this guide that do not necessarily measure the academic progress of individual students over time, which some researchers consider the definition of a growth model.
These models—"Improvement" and "Performance Index"—typically measure the change in the percent of students meeting a certain benchmark (typically "proficient") but do not measure the amount of growth
each individual student made from year to year. However, both have been allowed as growth models for NCLB and state accountability programs, so we discuss them immediately below. Afterward, the rest
of this guide refers only to growth models that follow individual students over time.
The improvement model
The Improvement Model compares the scores of one cohort, or class, of students in a particular grade to the scores of a subsequent cohort of students in the same grade. This model is based on
achievement status—for example, students scoring at proficient or above—but the difference in scores over time would be considered growth in the school. For example, if fifty-five percent of last
year's fourth graders scored at or above proficient and sixty percent of this year's fourth graders reached proficient, then, using the Improvement model, this school showed five percentage points in
growth for fourth grade scores.
Figure 1: How an Improvement Model works
In this hypothetical school, the performance of this year's fourth graders is compared to last year's fourth graders. The difference is the "improvement" or change.
Improvement Model
│ Achievement level │ Last year's 4th graders │ This year's 4th graders │ Change │
│ Proficient + │ 55% │ 60% │ + 5 pts. │
Sound familiar? Many of you will recognize this model as NCLB's "Safe Harbor" provision. It does not measure growth among individual students or even the same cohort of students. The model actually
compares two totally distinct groups of students, or in this example, last year's fourth graders to this year's fourth graders. The benefit of the improvement model is that it is fairly easy to
implement and understand. While it doesn't track how individual students progress, it provides some indication of whether more students in a particular grade level are getting to proficiency from
year to year. However, the change in the percent of students reaching proficiency may be due to different characteristics of the students in each cohort rather than a change in school effectiveness.
For example, the difference between last year's fourth graders' performance and this year's fourth graders could have been due to an increase in class sizes after a nearby school closed.
Performance index model
Most Performance Index models are status type models that give credit to schools for getting more students out of the lowest achievement levels even if they haven't reached proficiency. Just as with
Status models, Performance Index models can be used as an Improvement Model. And just as with Improvement models they do not necessarily measure the academic growth of individual students, but
account for change in the schools' performance from one year to the next. There is, however, one important distinction: Most Index models currently used by states recognize change across a range of
academic achievement levels, not just at proficient or above.^1 As the example below shows, the school received partial credit for the students scoring at the basic level but not below basic level.
In statistics, an index combines several indicators into one. Grade Point Average (GPA) is an index that we are all familiar with. It covers several indicators—grades students earn in various
courses—and it is weighted in favor of the highest grades, an "A" is worth four points, a "B" is three points, a "C" is two points, and so on. To figure the GPA it's a matter of elementary math: Add
up the grade points, divide by the number of courses, and the result is the GPA. The GPA shows how close students come to earning A's across their classes with straight A's earning a perfect 4.0 GPA.
Performance Index models developed by states for school accountability follow this same general principle. Think of it as the GPA for a school where the goal is to determine how close the school
comes to getting all students to proficiency. It does so by measuring student performance based on the percent of students scoring in each achievement level. More points are then awarded for students
scoring at the highest levels, just as students earn more points for higher grades. The points are averaged for the school and the result is the index.
Figure 2: How a Performance Index Model works
This hypothetical school is in a state using an Index Model for school accountability. The index awards points for achievement are as follows:
Students at proficient and above 100 pts.
Students at basic 50 pts.
Students at below basic 0 pts.
A perfect score of one hundred points means that all students reached proficient. Our school would earn sixty-eight points as shown in the table. Using an Improvement Model, this same school would
earn only fifty-five points for the percent of students who reached proficient.
Performance Index Model
│ Achievement level │ This year's 4th graders │ Computation │ Points awarded │
│ Proficient + │ 55% │ .55 X 100 pts │ 55 pts. │
│ Basic │ 25% │ .25 X 50 pts. │ 13 pts. │
│ Below basic │ 20% │ .20 X 0 pts │ 0 pts. │
│ Index score for school │ │ │ 68 pts. │
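The index arithmetic in Figure 2 can be sketched in a few lines; the point values are the hypothetical ones used above, not a prescribed scheme (the table's 13 pts for basic is 12.5 rounded up):

```python
# Points per achievement level (the hypothetical values from Figure 2)
POINTS = {"proficient+": 100, "basic": 50, "below_basic": 0}

def performance_index(percents):
    """Index = weighted average of achievement-level percentages.
    `percents` maps each level to the percent of students at that level."""
    return sum(percents[level] * POINTS[level] for level in POINTS) / 100

percents = {"proficient+": 55, "basic": 25, "below_basic": 20}
print(performance_index(percents))  # 67.5 (rounded to 68 in Figure 2)
```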
When comparing performance from year to year, a Performance Index will include changes that occurred at the low end of student achievement and can also be designed to include changes among students
scoring at proficient or better.
Performance Index models can mitigate one of the frequently cited cautions about using Status models in accountability systems. That is, critics charge that evaluating schools only on the percent of
students at proficient might motivate schools to concentrate their efforts on the so-called "bubble kids"—those students who score just above or just below the proficient level—to the possible
neglect of students on the lowest and the highest ends of the performance scale. By instituting a Performance Index Model schools have more incentive to concentrate on more students below proficiency
not just those on the cusp.
Although Performance Index models can be developed to give credit to schools for moving students from proficient to advanced, most do not because NCLB does not give credit for growth for students
already above the proficiency level. However, changes in the index from year to year are a better indicator of how well schools are educating students who began at the lowest achievement levels—not
just those on the proficient bubble—than the current status models.
As of 2006 twelve states have adopted some form of Index Model for NCLB purposes: Alabama, Louisiana, Massachusetts, Minnesota, Mississippi, New Mexico, New York, Oklahoma, Pennsylvania, Rhode
Island, South Carolina, and Vermont (Sunderman 2006).
From a practical standpoint, there is one good thing about Performance Index models. Most don't require sophisticated data systems. Keep in mind, however, that these models generally don't measure
the growth of individual students from year to year. They also don't capture change within each achievement level. For example, if a state set a cut score of two hundred for "basic" and three hundred
for "proficient," schools wouldn't get credit for students whose scores improved from two hundred to two hundred ninety-eight. They would get credit for students who improved from two hundred
ninety-nine to three hundred one. Establishing more achievement levels would help to capture these changes, making the model a more accurate measure of growth.
Simple growth model
In most cases, simple growth models don't require a statistician to explain or even compute data. Typically it's just the difference in scale scores from one year to the next. But unlike the
Improvement and most Performance Index models, which compare successive cohorts at the same grade level (fourth graders in our hypothetical school) Simple Growth models actually document change in
the scores of individual students as they move from grade to grade. For example if a fourth grader in school X scored three hundred fifty last year and four hundred on this year's fifth grade
assessment, the student made a fifty point growth. The growth is calculated for each student who took both the fourth and fifth grade tests and then averaged to calculate the school's growth.
Figure 3: How a Simple Growth Model works
This hypothetical school has five fifth graders who took the fourth grade assessment last year. The change in scores is calculated in the table below for each student, and a school average is computed.
Simple growth model
│ Student │ Last year's 4th grade scale score │ This year's 5th grade scale score │ Change │
│ Student A │ 350 │ 400 │ + 50 │
│ Student B │ 370 │ 415 │ + 45 │
│ Student C │ 380 │ 415 │ + 35 │
│ Student D │ 325 │ 390 │ + 65 │
│ Student E │ 310 │ 370 │ + 60 │
│ School average │ 347 │ 398 │ + 51 │
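The arithmetic in Figure 3 is simple enough to sketch directly (students are matched by position here, standing in for matching on student IDs):

```python
def simple_growth(prior, current):
    """Per-student score change and the school average, for students
    who were tested in both years."""
    changes = [c - p for p, c in zip(prior, current)]
    return changes, sum(changes) / len(changes)

prior = [350, 370, 380, 325, 310]    # last year's 4th-grade scores
current = [400, 415, 415, 390, 370]  # this year's 5th-grade scores
changes, avg = simple_growth(prior, current)
print(changes, avg)  # [50, 45, 35, 65, 60] 51.0
```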
One drawback of this model? Only those students who took the tests in both years are included in the school's growth calculation. Another is that the points themselves provide no information
(DePascale 2006). A fifty point gain may or may not mean the student has met a set target or is on track to meet it in the future. For Simple Growth models to be useful, experts, educators, and in
many cases, policymakers must make informed judgments about how much growth is enough.
Growth to proficiency model
While Simple Growth models measure individual student growth, they do not indicate if students are developing the skills they need to meet state standards of proficiency. Growth to Proficiency
models—also known as Growth to Standards or On-Track—are designed to show whether students are on track to meet standards for proficient and above. As such, they have become popular, mainly in
response to the U.S. Department of Education's NCLB Growth Model Pilot program.
At this writing, the federal pilot program is allowing nine states to use Growth models for NCLB accountability. However, these nine states had to meet a list of conditions to do so, most notably
that one hundred percent of their students are still expected to be proficient by 2014, as the 2001 law states. This provision precluded from consideration some Growth models that were already in
use, such as those used in North Carolina and Tennessee, because they did not require students to reach a certain benchmark in a certain amount of time.
Nearly twenty states submitted plans, including North Carolina and Tennessee who revised their models in accordance with the federal guidelines. Several states developed a hybrid of Growth and Status
models, or Growth to Proficiency. Although there are several variations, the key ingredient across all Growth to Proficiency models is that schools get credit when a student's progress keeps them on
pace to reach an established benchmark—usually proficient—at a set time in the future, typically within three to four years or by the end of high school (Davidson and Davidson 2004).
The advantages to this model are (1) that schools are recognized for producing gains in student performance even if their students score below proficient and (2) there is more incentive to focus on
all students below the proficiency level, not just the "bubble kids." There are even models that give incentives to schools to focus on students above the proficiency level. Tennessee developed its
model so schools must ensure that students who are already scoring above proficient are still on pace to remain proficient in the coming years.
However, without targets, the model itself cannot determine which students are on track for proficiency. No matter what model is chosen, Growth or Status, it is up to policymakers to set goals to
determine how much students should know and when they should know it. Then the model can be designed to determine which students are meeting those targets.
Figure 4: How a Growth to Proficiency Model works
Our hypothetical school has five students whose growth targets were established at the end of fourth grade based on meeting proficiency in seventh grade. Growth targets are based on the yearly growth
needed to hit the seventh grade proficient score:
Fifth grade proficient score = 400 Seventh grade proficient score = 500
The goal for this year (NCLB Annual Measurable Objective, or AMO) is that seventy-five percent of the students must hit their targets. If students don't score at the proficient level they must hit
their growth target for the school to make AYP.
│ Student │ Last year's 4th grade scale score │ This year's 5th grade scale score │ Change │ Scored proficient? │ Growth target │ Hit growth target? │ Made AYP? │
│ Student A │ 350 │ 400 │ + 50 │ Yes │ -- │ -- │ Yes │
│ Student B │ 370 │ 415 │ + 45 │ Yes │ -- │ -- │ Yes │
│ Student C │ 380 │ 415 │ + 35 │ Yes │ -- │ -- │ Yes │
│ Student D │ 325 │ 390 │ + 65 │ No │ 59 │ Yes │ Yes │
│ Student E │ 310 │ 370 │ + 60 │ No │ 64 │ No │ No │
(A growth target applies only to students who did not score proficient.)
In this example, three of five students met the proficient target and therefore do not have to meet a growth target. Two students did not meet the proficiency target: One met his growth target while
the other student did not meet hers. This means four out of five students met AYP, or eighty percent, which exceeds the seventy-five percent goal for this year (AMO). Therefore, this school made AYP.
Value-added model
Note: A Value-Added Model is one type of growth model, but not all growth models are Value-Added. As a growth model, Value-Added measures change in individual students' performance over a period of
time. But unlike other Growth models, Value-Added also measures how much a particular instructional resource, such as a school, teacher, or education program, contributed to that change (Hershberg,
Simon and Lea-Kruger 2004). It's a distinction that often gets lost in discussions about measuring growth and somehow the two terms have become intertwined.
A Value-Added Model (VAM) is typically the most statistically complex of all Growth models. However, if used correctly, VAMs are quite possibly the most powerful statistical tools available for
evaluating the effectiveness of teachers, schools, and education programs. Value-added models are designed to isolate the effects of outside factors—such as prior performance or student
characteristics—from student achievement in order to determine how much value teachers, schools, and/or programs added to students' academic growth.
The calculations for value-added models take many forms and cannot be easily illustrated due to their complex statistical methodology. Instead, we present a simplified version of a VAM calculation in
order to illustrate the basic principles. This Value-Added Model looks at individual students' past performance to predict how they should perform in the upcoming year. Usually an individual's
predicted score is based on the average score of similar students in previous years who shared similar past performance patterns and characteristics.
If students perform above their predicted performance, they are considered to have shown positive growth. If they perform as predicted, then they are considered to have made expected growth. If they
perform below their predicted performance, then they are considered to have negative growth. Under a VAM, a student could show improvement but still make "negative growth" if the improvement was less
than predicted.
Figure 5: How a Value-Added model works
Say a VAM is being used to evaluate the effectiveness of our hypothetical school. Each of the five students' fifth grade scores are compared to their predicted scores based on fourth grade
performance. The difference determines whether students made positive, negative, or expected growth and becomes an indicator of the value this school added to its students' learning. Value-Added
makes no statement about high or low scores, only the amount of student gains.
Fifth grade predicted growth based on fourth grade performance
│ │ Last year 4th grade scale score │ This year 5th grade scale score │ Change │ Expected growth │ Growth effect │
│ Student A │ 350 │ 400 │ + 50 │ + 50 │ Expected │
│ Student B │ 370 │ 415 │ + 45 │ + 50 │ Negative │
│ Student C │ 380 │ 415 │ + 35 │ + 55 │ Negative │
│ Student D │ 325 │ 390 │ + 65 │ + 50 │ Positive │
│ Student E │ 310 │ 370 │ + 60 │ + 50 │ Positive │
│ School effect │ 347 │ 398 │ + 51 │ + 51 │ Expected │
In this example, the school, on average, produced expected gains. However, the growth was largely due to more than expected growth among low-performers. High-performing students did not make their
predicted growth. This school should evaluate what it is doing with its high-performers to make sure they make progress, even as it continues to do what seems to be helping its low-performers.
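Real value-added models rely on statistical machinery such as hierarchical linear models to produce the predictions; the sketch below covers only the final classification step from Figure 5, taking the predicted gains as given:

```python
def growth_effect(actual_gain, expected_gain):
    """Classify a student's gain relative to the model's prediction."""
    if actual_gain > expected_gain:
        return "positive"
    if actual_gain < expected_gain:
        return "negative"
    return "expected"

# (actual gain, expected gain) pairs for the five students in Figure 5
pairs = [(50, 50), (45, 50), (35, 55), (65, 50), (60, 50)]
print([growth_effect(a, e) for a, e in pairs])
# ['expected', 'negative', 'negative', 'positive', 'positive']

# The school effect is the average gap between actual and expected gains
school_effect = sum(a - e for a, e in pairs) / len(pairs)
print(school_effect)  # 0.0 -> "expected" growth for the school overall
```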
VAMs isolate the effects of teachers, schools, and education programs by using data from each of these categories. From there, statistical tools such as Hierarchical Linear Models (HLMs) are used to
separate the teacher, school, or program effectiveness from other factors that may have influenced the change in student achievement. Tennessee is the well-known VAM pioneer, but states such as
Pennsylvania and Ohio and various districts across the nation have been using VAM data in some form.
There are many variations to value-added models but they all share the same goal of measuring student growth and determining who or what is responsible for it.
What is needed to implement a Growth Model?
No matter which growth model is used and for what purpose, some basic features should be in place to design and implement a valid and reliable measure of student growth (Goldschmidt, et al. 2005).
These features include:
• A statement of policy intent
• Properly designed, annual tests
• Data systems to collect, store, and analyze the data
• Statistical expertise
• Professional development
• Transparency and good communication
• Funding
Although these features do not garner much attention in the growth model discussion they are the foundation on which most growth models should be built.
Statement of policy intent
Policymakers should have a clear statement of intent when thinking about adopting a growth model because different purposes need different models (Goldschmidt, et al. 2005, McCaffrey, et al. 2003).
Deciding exactly what is to be measured and how this information will be used is the first step. For example, a growth model used for high-stakes school accountability purposes may look different
from a model used to identify professional development needs. A "Growth to Proficiency Model" might fill the first purpose, especially if the goal is to evaluate student progress toward an
established performance target. On the other hand, decisions about professional development would benefit from understanding the effect teachers have with all their students, high- and low-achievers
alike. A Value-Added model would probably work best for this purpose because it provides information about gains made by different students that are attributable to schools or teachers, regardless of
whether they are high- or low-performers. Articulating a clear goal will assure that the growth model you design will be the best fit.
Properly designed tests
To get the most accurate results, the tests used for measuring growth should have three key characteristics: (1) they document year-to-year growth on a single scale, (2) they measure a broad range of
skills, and (3) the test content is aligned with state standards.
• Tests that document year-to-year growth on a single scale
Although not a requirement, the best tests to use for measuring yearly growth are vertically aligned and scaled. This means that each successive test builds upon the content and skills measured
by the previous test. It assures that tests taken over multiple grade levels show a coherent progression in learning. For example, the fifth grade math test should represent what a student is expected to have learned in the year since taking the fourth grade test.
Think of a growth chart in a pediatrician's office. Children's height is measured against it and recorded during yearly check-ups. The measurements change as they grow, but the chart remains the
same. The chart is also based on physical averages; doctors don't use a twenty-foot chart to measure human growth. Yet it still accommodates the lower and upper extremes of children's height.
Tests that are vertically scaled work the same way. Knowledge is gained and students are tested over several grade levels, but students are scored against the same scale. The range of the scale
varies depending on the range of knowledge the tests are measuring and the number of grade levels they are addressing.
NCLB now requires states to conduct annual testing in grades three to eight, which takes care of one requirement of vertical scaling—annual tests. However, being tested every year doesn't
necessarily mean that the change in scores reflects a year's growth in student achievement. That is where vertical scaling comes in. Tests are developed for different grade levels—for example,
for fourth and fifth—but scored on the same scale. It's as though the student took the same test covering the range of skills in both grades. This way, educators are assured that a change in
scores represents a change in student achievement instead of differences in the tests themselves.
Although it is possible to create some growth models without vertically scaled tests, there is disagreement among researchers about the accuracy of such models and the technical details of how to build them (CCSSO 2007). When
it's done without vertical scaling, testing experts use statistical techniques to approximate the change in growth from year to year (Gong, Perie, and Dunn 2006). Lacking a vertical scale, the
data is typically converted to a normed scale (McCall, Kingsbury, and Olson 2004), meaning statisticians compare students' performance to each other. Converting to a norm scale, sometimes called
norming, is like teachers who grade on a curve by awarding "A's" to the highest ten percent, "F's" to the lowest ten percent, and "C's" to the bulk of their students. By definition, someone will
always be above average and someone will always be below average on a normed scale, regardless of whether students are meeting standards or not. So policymakers and educators need to consider
whether such a normed scale approach is what they want in a standards-based system.^2
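The norming idea above can be made concrete with a small sketch (the scores, cohorts, and proficiency bar below are invented for illustration): on a normed scale, students are ranked only against each other, so two groups with very different absolute mastery can receive identical percentile ranks.

```python
def percentile_ranks(scores):
    """Percent of scores in the group strictly below each score (0-100)."""
    n = len(scores)
    return [100.0 * sum(other < s for other in scores) / n for s in scores]

# Hypothetical cohorts: one where everyone clears a 70-point proficiency bar,
# one where no one does. Their normed ranks come out exactly the same.
high_cohort = [85, 90, 92, 95, 99]
low_cohort = [35, 40, 42, 45, 49]
print(percentile_ranks(high_cohort))  # [0.0, 20.0, 40.0, 60.0, 80.0]
print(percentile_ranks(low_cohort))   # identical ranks despite far lower mastery
```

This is the heart of the policy concern: the normed scale reports relative standing, not whether students are meeting standards.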
• Tests that measure a broad range of skills
Some researchers say that, to effectively measure growth, tests should cover the full range of skills students may possess at that grade level, meaning that basic and advanced skills should be measured in addition to the skills that define proficiency (Sanders 2003, McCaffrey, et al. 2003). For example, a test that focuses on basic skills would be ineffective at measuring the growth of Advanced Placement (AP) students because the test does not include high-end content and most AP students would likely get all the answers correct. This is what researchers refer to as a ceiling effect. Conversely, there is a floor effect when an assessment of advanced skills reveals nothing about what a poor performance means for the test-taker's skills. Using the AP example again, if students in remedial math took the AP Calculus exam, most would likely get all the questions incorrect, masking any growth they made in mastering basic math.
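A toy sketch of the ceiling effect described above (the scores and the 100-point cap are invented for illustration): when a test tops out below a student's true ability, most of that student's real growth never shows up in the gain score.

```python
def observed_score(true_ability, max_score=100):
    """A test score is capped at the ceiling; ability above the cap is invisible."""
    return min(true_ability, max_score)

# A hypothetical advanced student who truly grows 15 points in a year,
# starting near the top of the scale.
year1_ability, year2_ability = 95, 110
observed_gain = observed_score(year2_ability) - observed_score(year1_ability)
print(observed_gain)  # 5, not 15: the ceiling masks two-thirds of the growth
```

A floor effect is the mirror image: scores pinned at the bottom of the scale hide growth among low performers.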
Here's another way to look at it: Picture the body of students' knowledge as a football field that is one hundred yards long with the fifty yard line being "proficient." To get an accurate view
of the range of student performance, the assessment would have to measure the full one hundred yards. However, many state tests currently in use are focused on the area near the fifty yard line,
because their aim is to determine if a student is proficient or not. A low score on such a test means that the student is in his or her own team's territory and therefore not proficient, but it
doesn't show whether the low-scoring student is on the ten or forty yard line. Likewise, a high score could mean the student scored a touchdown or just barely made it into field-goal range—if we
didn't measure the broad range of skills, we just wouldn't know.
Because tests focused on the proficient level are unlikely to capture progress near the end zones, they are not the best instruments for capturing growth. On the other hand, tests that can
distinguish how low a low performance is, and how high a high performance is, are capable of showing when students improve from the one yard line to the twenty-five yard line, even if they have
yet to cross midfield.
• Test content is aligned with state standards
Researchers also emphasize that tests used for growth models should be aligned with standards (Braun 2004, Davidson and Davidson 2004). A teacher who effectively teaches the content in the standards, and the students who learn it, should not be penalized because the tests did not accurately measure the standards (Braun 2004). Even though this is a problem with today's status
models, the problem is exacerbated when comparing results at two points in time. Without proper alignment between state standards and tests the results of any growth model will be meaningless no
matter how technically sound the model is. However, according to the American Federation of Teachers, many current state tests are not adequately aligned with their state standards (AFT 2006).
Data systems to collect, store and analyze the data
It doesn't matter whether your assessments are vertically aligned and scaled, effectively measure a full range of knowledge, and are properly aligned to standards if there is nowhere to store the
data. In recent years most, if not all, states have improved their data systems in response to NCLB (McCall, Kingsbury, and Olson 2004). However, in many cases, the data systems required by growth
models, especially value-added models, tend to be more expensive (Goldschmidt, et al. 2005)—a cost that should be taken under consideration if your state or district is looking to implement a growth model.
To measure individual student growth you need to set up a longitudinal data system. It sounds complicated but it's really not. A longitudinal data system has the ability to follow the same students
as they move at least from grade to grade. Preferably it can also follow students from school to school and even from district to district within a state. Following students across state lines is not
possible at the present time.
Ten Essential Elements of a Longitudinal Data System
1. A unique statewide student ID system
2. Student-level enrollment, demographic and program participation information
3. The ability to match individual students' test records from year to year to measure academic growth
4. Information on untested students and the reasons why they were not tested
5. A teacher identifier system with the ability to match teachers to students
6. Student-level transcript information, including information on courses completed and grades earned
7. Student-level college readiness test scores
8. Student-level graduation and dropout data
9. The ability to match student records between the P-12 and higher education systems
10. A state audit system assessing data quality, validity and reliability
From the Data Quality Campaign
Most current data systems are designed to collect and store grade level data (not student level data) to determine what percent of students reached a certain benchmark. But they are not able to
follow those students to the next grade to determine how their performance changed.
The key ingredient in a data system for growth models is a unique student identifier, or more commonly, a student ID number. A student ID works just the same as a Social Security number. Each student
is assigned a unique number when they enter the school system and it remains with them throughout their academic career even when they change schools or move to a new district in the state. Each
student's test scores and characteristics such as race and socioeconomic status should be included with their student ID in the data system (Blank and Cavell 2005). This information is important for
accountability based on student subgroups and for value-added models. Other information to consider collecting, so that program effectiveness can be monitored, includes the courses the student has taken, the grades earned in those courses, and the educational programs in which the student participated.
According to the Data Quality Campaign it costs between one and three million dollars just to develop and deploy a unique student identifier system (Smith 2007). These dollars primarily represent the
cost to build technology systems associated with assigning student IDs, verifying that no students have more than one ID, sharing the ID with districts, updating state data systems, and vendor prices
/contractors costs (Smith 2007). But this cost does not include data collection at either the state or district level. That cost will vary by state depending on the size of the state and the data
systems previously in place. However, almost all states will have already incurred this cost and will have student ID systems in place by the end of the 2007–2008 school year (Smith 2007).
Some states also assign unique IDs to teachers to match teachers and students. This helps schools monitor teacher effectiveness, which can be valuable information for school improvement planning
(Doran 2003). However, including teacher data should not be undertaken lightly. Before moving ahead, teachers should be included in the discussion about what data is collected and particularly how it
will be used. Other school characteristics, such as specific academic programs, can also be coded and monitored for effectiveness.
Once a system is in place to collect individual student data there must be somewhere to store and analyze it. Databases that can store the vast amount of data needed to implement a growth model are
usually much larger and more expensive than the databases needed for most current status model systems (Goldschmidt, et al. 2005). This is because growth models require data for every student while
most current status model systems only need grade level data like the percent of students who score proficient. You will also need to invest in specialized software that can handle the calculations
required by your particular growth model.
According to testimony by Dr. Chrys Dougherty, Director of Research at the National Center for Education Accountability (NCEA), to the House of Representatives' Committee on Education and Labor, only
twenty-seven states will have the capacity to implement a growth model in the 2007-2008 school year. He also said that the number is likely to grow to forty states in the following three years. The
elements in data systems missing most often are: (1) statewide student identifiers, (2) the ability to link students' test score records over time, and (3) information on untested students and the
reasons they were not tested (2007). States need to add these elements and collect those data for at least two years, preferably three, before even the most basic growth model can be implemented
(Blank and Cavell 2005, Davidson and Davidson 2004, Doran 2003).
Statistical expertise
Just as engineers are needed to build a bridge, psychometricians and statisticians are needed to build a growth model that accurately measures student achievement growth. Creating an effective model
can be a complex technical process requiring adequately trained psychometricians and other statistical experts (Goldschmidt, et al. 2005). These experts are qualified to design a growth model that is
aligned with the statement of policy intent using the data they have available to them. Sometimes these experts will already be in-house but if not, they need to be hired (Goldschmidt, et al. 2005).
Professional development/training
Although the type of training will differ depending on the purpose of the growth model, some researchers believe it is vital for all stakeholders to receive training so they understand how to use
this new information effectively (Drury and Doran 2003). Stakeholders include teachers, principals, school board members, central office administrators (Drury and Doran 2003), and students and
parents. Without buy-in from these groups, the growth model's usefulness will be seriously limited (Drury and Doran 2003). As William Sanders noted, "You can do the best analysis in the world but if
you don't have people trained and coached to use the information, not much is going to happen" (Schaeffer 2004).
Clear and open communication
Growth models can be complex and very difficult for non-statisticians to understand; therefore, some say professional development is a key factor for gaining buy-in. This is especially true for
value-added models because there is no simple way to isolate the impact of teaching on student learning (Hershberg, Simon, and Lea-Kruger 2004). Others, such as University of Massachusetts economics
professor Dale Ballou, believe that complex value-added models fail one of educators' most important criteria: That the models be transparent (Ballou 2002). For example, he says that few, if any, who
are evaluated by these sophisticated models will understand why two schools (or teachers) with similar achievement gains receive different ratings even though this can be a potential outcome in a
Value-Added system (Ballou 2002).
Others aren't so skeptical. To these researchers complexity isn't necessarily a drawback, noting that not everyone who uses a personal computer or drives a car understands how they work (Hershberg,
Simon, and Lea-Kruger 2004). However, it is necessary that all stakeholders, especially teachers and administrators, know what is being measured and, more important, what the results mean.
Even if states can't provide enough transparency for non-experts to understand the model, they still need to assure stakeholders that the model is statistically sound and measures what it is intended
to measure. Some researchers recommend that each state open its model's methodology to outside expert reviewers (Doran 2004), as states are required to do in the NCLB Pilot program. Educators and the
public at large are likely to be more open to accepting and using results from a model that is open to review rather than kept a secret.
Adequate funding
Behind every great idea there is a need for money to support it. Growth models are typically more expensive to administer than most current status models (Goldschmidt, et al. 2005). However, the cost
will vary considerably from state to state or district to district depending on what elements may already be in place (Goldschmidt, et al. 2005). Designing new tests, implementing a new longitudinal
data system, hiring statistical experts and additional staff to collect and analyze the data, and providing effective professional development can be expensive enterprises. Keep in mind there is also
likely to be a cost to districts for collecting and reporting data for each individual student to the state. Of course, the cost of developing and implementing a growth model will vary by state
depending on its size and the data systems previously in place. But the rewards of implementing a growth model may well outweigh the costs, particularly if some of these elements already exist or can
be easily refitted to serve a growth model.
Although it costs between one and three million dollars to implement a student ID system—which is required to calculate individual student growth—there is an additional cost to actually make
calculations. While there isn't much information on what it costs states to develop and run the calculations for a growth model, Ohio contracts with the SAS Institute in North Carolina for such
purposes at two dollars per student. Ohio provides the necessary student level data to the SAS Institute, and the Institute performs the Value-Added calculations and provides the results to each
school via a secure website. Ohio also contracts with Battelle for Kids to provide training on using Value-Added data effectively in the classroom. For approximately three million dollars per year
Battelle works with Ohio using a train-the-trainer system. Ohio is just one example, but it provides some indication on some of the costs of implementing a growth model. There are likely additional
costs at both the state and district levels that have not been included in the cost of implementing a growth model, such as staff time for collecting and using the data.
What are the limitations of growth models?
Growth models hold great promise of evaluating schools on the amount of growth their students achieve. But growth models, especially value-added models, are relatively new in the education realm and
their limitations are still being debated within the research community. Please note, however, that the research community is almost united in the opinion that growth models provide more accurate
measures of schools than the current status models alone. Moreover, current status models also suffer from many of the same limitations. While none of these issues should preclude states or districts
from considering implementing a growth model, they do need to be acknowledged so the model developed will be the most effective tool for its purpose.
The limitations can be described as follows:
Measures of achievement can be good, but none is perfect
This guide doesn't debate the pros and cons of standardized testing; there are plenty of publications that do. But it is necessary to discuss limitations and how they can affect the reliability of a
growth measurement.
As discussed earlier, it's important to use tests that are appropriate for growth models. Growth cannot be measured without tests, and any tests used should have the following features:
• They cover the lower and upper ranges of student performance, rather than cluster test content around the knowledge and skills that constitute "proficient."
• They are vertically aligned and scaled to more accurately measure student achievement from year to year.
• They are aligned with state standards.
Unfortunately, while some tests are clearly better than others, there is no perfect measure of achievement (Ballou 2002, McCaffrey, et al. 2003), a statement to which even the most ardent supporter
of standardized testing would agree.
One of the problems with tests used for growth models is that gain scores over time tend to be what statisticians call "noisier" than measures of achievement at a single point in time. By this,
statisticians mean that gain scores tend to fluctuate even though a student's true ability typically does not change much from year to year (Ballou 2002, Doran 2004). This happens because on any
given test a student's performance is a result of his or her true ability and random influences, like distractions, during the test and the selection of items—effects that statisticians call
measurement error. When scores from the two tests are subtracted from each other, as in Simple Growth models, the measurement error increases so the "true" performance becomes less clear (Ballou 2002).
There are statistical adjustments to minimize "noise," such as including scores from other subjects and previous years (Raudenbush 2004b). Another way to minimize the effect of "noisy" data is to
create rolling averages by averaging growth over multiple years to provide a more stable picture of performance (Drury and Doran 2003, Raudenbush 2004a). However, such adjustments will add to the
complexity of the growth model and may make it difficult to explain to educators why two schools (or teachers) with similar achievement gains received different ratings of effectiveness (Ballou 2002).
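The noise amplification and multi-year averaging ideas above can be sketched with a small simulation (the ability score, error size, and three-year window below are invented for illustration): subtracting two noisy test scores inflates the measurement error, and averaging gains over several years shrinks it back down.

```python
import random

random.seed(1)
TRUE_ABILITY = 500  # true ability, roughly constant from year to year
ERROR_SD = 20       # measurement error on any single test administration

def test_score():
    """One noisy observation of the (unchanging) true ability."""
    return TRUE_ABILITY + random.gauss(0, ERROR_SD)

def sd(xs):
    """Standard deviation of a list of numbers."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# A gain score subtracts two noisy measurements, so its error variance is
# roughly the SUM of the two tests' error variances (sd grows by ~sqrt(2)).
single = [test_score() for _ in range(10_000)]
gains = [test_score() - test_score() for _ in range(10_000)]
print(round(sd(single)))  # around 20
print(round(sd(gains)))   # around 28, i.e. roughly sqrt(2) * 20

# Averaging gains over (here, non-overlapping) three-year spans stabilizes them:
three_year = [sum(gains[i:i + 3]) / 3 for i in range(0, len(gains) - 2, 3)]
print(round(sd(three_year)))  # around 16, noticeably less noisy
```

The trade-off the text describes is visible here: each adjustment makes the numbers steadier but the model harder to explain.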
There is no data for untested subjects
As with all other test-based accountability systems, growth models are restricted to those subjects that can be and are tested. In many states, that confines judgments of growth to only two subjects, reading and mathematics, as is the case with the status models in many states today.
Some growth models, usually value-added models, incorporate scores from all tested subjects, such as social studies and science, that some argue have been overlooked since the passage of NCLB. Even
when these other subjects are included in the formulas, researchers estimate that about one-third of teachers in a school will not be included in measuring the school's effectiveness (Andrejko 2004).
For the most part, these would be teachers of subjects such as music and art which are not typically assessed using standardized tests.
There can be missing or incomplete student data
Even the best data collection system cannot assure that all data will be produced and reported for every student. In any given year, some students are absent during testing. Some students transfer
into school during the school year from other states. Others transfer out. These factors and others lead to missing or incomplete data for some students. Not all growth models are able to incorporate
information on students that do not have all their previous test scores, although some do (CCSSO 2007).
Missing data can have a large impact on growth results, depending on the characteristics of students who typically fall into this group. Students who are highly mobile, for example, tend to be lower
achievers. A high incidence of mobility in a school would produce gaps in the data and could distort the "effect" reported for the school or its teachers (McCaffrey, et al. 2003). One analysis found
gain scores would be affected if ten percent or more of the student records contained missing scores (Braun 2004).
However, researchers are still divided over the impact students with missing test scores actually have on growth calculations. Some value-added models, such as the one used in Tennessee, do not
exclude students from the calculation simply because they are missing some test scores. Tennessee is able to do this by including enough other data—such as previous scores and scores in other
subjects—that missing some data is said to have little impact. States and districts need to decide how to deal with students with missing test scores when designing their growth model.
Experts dispute how completely "value-added" models capture teacher effect
There is a continuing debate between statisticians on the extent to which preexisting student factors, such as socioeconomic status (SES) and prior achievement, can be controlled for to truly isolate
the effect a teacher has on student achievement. Since these measures are not perfect, most statisticians agree that they should not be the only tool used in evaluating teachers. However, they
disagree on the role they should play. Although this section focuses on the use of value-added models (VAMs) in evaluating teachers, many of the same issues pertain to VAMs when evaluating schools
and educational programs.
Value-added models used to evaluate teacher effectiveness are designed to measure a teacher's contribution to student achievement. VAMs typically compare an individual teacher's effectiveness to the
average effective teacher in her district. Simply put, teacher effectiveness is computed as the difference between students' achievement after being in a teacher's class compared to what their
achievement would have been if they had been with the "average" teacher. But there are other factors outside of teachers' control that could influence achievement including student characteristics,
school climate, district policies, and even the student's previous teachers (McCaffrey, et al. 2003).
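The comparison described above can be sketched in a deliberately simplified form (the numbers and the "prior score plus district-average gain" prediction rule are invented simplifications; real VAMs control for many more of the outside factors just listed): a teacher's estimated effect is the average difference between her students' actual scores and the scores predicted for an "average" teacher.

```python
def teacher_effect(prior_scores, current_scores, district_avg_gain):
    """Average residual of actual scores over scores predicted for an
    'average' teacher (hypothetical rule: prior score + district-average gain)."""
    predicted = [p + district_avg_gain for p in prior_scores]
    residuals = [c - pred for c, pred in zip(current_scores, predicted)]
    return sum(residuals) / len(residuals)

# Hypothetical class: students started low but gained 12 points each
# against a district-average gain of 8, so the estimated effect is +4.
print(teacher_effect([40, 45, 50], [52, 57, 62], district_avg_gain=8))  # 4.0
```

Note that this sketch attributes every residual point to the teacher; the debate in the text is precisely about how much of that residual really belongs to her rather than to the other factors.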
Most researchers agree that VAMs must account for these outside factors to provide an accurate measure of teacher effectiveness. Statisticians have developed various techniques in an attempt to
minimize the influence of outside factors, but the debate is by no means settled (McCaffrey, Lockwood, Koretz, and Hamilton 2004).
Another consideration is that most VAMs are meant to evaluate the effect of a single teacher. For this reason, they have been criticized for not accounting for the growing trend of team teaching. The
argument goes like this: If a student is being taught by multiple teachers how can any model isolate the effect of each one individually? Models like Tennessee's represent teacher effects as
independent and additive even though other teachers and tutors may affect academic growth as well (Kupermintz 2003). However, some statisticians say that VAMs can be designed to account for team
teaching and departmentalized instruction (Sanders 2006). And it is likely that VAMs could be set up to evaluate the impact of the team of teachers as well. Both may be done by including data from
each of the teachers in the team by which the student was taught.
A single growth model does not serve all purposes
Growth data can be helpful in many ways, and it is tempting to create one growth model and use it for multiple purposes. Policymakers and educators should resist this temptation. Although a single
model could save a lot of time and money, many researchers strongly discourage using just one model, because trying to pull distinct pieces of information from one model would likely lead to false
conclusions (Ballou 2002). For example, a growth model developed for high-stakes school accountability, such as NCLB, should not be used for program evaluation when the evaluation controls for other
variables such as socioeconomic status, which is not acceptable for NCLB (Gong, Perie, and Dunn 2006).
Measuring growth in high school is difficult
Some growth models are difficult and in some instances impossible to use in certain high school settings (Yeagley 2007). This is because high schools tend to lack key elements needed to track growth.
For one, many states assess students only once during high school and do not administer annual tests. Even in states with more high school tests, the tests are typically by subject, not grade level,
and are not typically vertically aligned and scaled.
How can growth-models be used effectively?
By acknowledging their limitations and working to minimize them, growth models can be used effectively and provide information about schooling that is not otherwise available. Keep in mind, however,
that the type of growth model used should be appropriate for its purpose.
Other ways growth models can be used include:
School accountability
Probably the most familiar application of growth models is school accountability. The discussion now is not whether growth should be used for school accountability but the best way to measure it,
particularly for NCLB. From almost the time NCLB was enacted in 2002, states and researchers have been developing growth models that would give credit to schools for making significant progress on
students who entered school far from proficient while adhering to the goal of NCLB to have all students proficient by 2014. This led to the proliferation of what are known as Growth to Proficiency or
Growth to Standards models, which we described earlier. Previous growth models struggled to answer the question, "How much growth is enough growth?" Growth to Proficiency models answer the question
by declaring, "Enough growth for the student to reach proficiency by a set time," (Gong, Perie, and Dunn 2006). However, whatever the set time should be can only be determined by policymakers. The
models can then be designed around that timeline. You should keep in mind that while a growth model may accurately measure student growth, its effectiveness can only be judged by how the data is
Growth to Proficiency models also mitigate one of the negative effects of some models that do little to narrow achievement gaps. For example, some Simple Growth models are set up to recognize
average growth or one year's worth of growth among students.
When all students stay on pace, gaps between high- and low-achievers do not change. Models based on predicted growth—such as value-added models—that expect a student only to grow at a rate similar to past growth can actually widen these gaps, because they do not expect students to grow more than they have in the past.
Thus low-achievers, who begin with less-than-average gains, will be "expected" to fall further and further behind (Blank and Cavell 2005). In contrast, Growth to Proficiency models expect
low-performing students to accelerate their gains in order to meet a set target (Davidson and Davidson 2004, Doran 2003).
Three different Growth to Proficiency models include those in North Carolina and Tennessee, which were approved by the Department of Education for its NCLB Growth Model Pilot Program, and Harold Doran and Lance Izumi's Rate of Expected Academic Change (REACH). Although each model is calculated differently, they share the same overarching goal of giving credit to schools that are getting their students on track to proficiency.
Evaluate teacher performance
Value-added models probably work best for evaluating teacher performance because they are the most effective at isolating teacher effects from outside factors. But again, the tools are not perfect.
Most researchers assert that VAMs should be just one of several indicators used when evaluating teachers (Andrejko 2004, Ballou 2002, Braun 2005, McCaffrey, et al. 2003). Other measures such as
classroom observations, examinations of lesson plans, portfolios, and other evaluations of professional practice should also be considered (Andrejko 2004, Braun 2005, Hershberg, Simon and Lea-Kruger
2004). Moreover, Value-Added measures should be used to inform local decision making, not replace it (Ballou 2002).
Improving practice
The policy discussion focuses on the usefulness of growth models in high-stakes, school accountability systems. But quite possibly the most effective use of growth information is for improving
practice: to inform instructional improvement, evaluate the effectiveness of academic programs, and target professional development for teachers and administrators.
Value-Added measures can provide valuable information about the effects of curriculum, instructional techniques, and other instructional practices on student learning. Armed with data, teachers and
administrators can help pinpoint what works and what needs to be improved to best meet the needs of their students—information that can also become the basis for school improvement plans (Hershberg, Simon, and Lea-Kruger 2004).
By seeing where and why they are effective, teachers can reflect on their own practices and share their best techniques with their colleagues. Administrators can analyze the data and target
professional development for staff. Clearly, teachers found to be less effective can be given the assistance they need to improve. But even otherwise effective teachers can benefit from analyzing
their own data. For example, a teacher may be effective overall, but a Value-Added analysis may suggest that she is more effective with her higher performing students than her low performers
(Hershberg, Simon, and Lea-Kruger 2004). Professional development that provides strategies for helping struggling learners would help this teacher advance the learning of all her students.
Teacher assignments
The same data from VAMs that can help pinpoint professional development needs can also provide principals with valuable information for assigning teachers strategically. An analysis of the data will
help principals identify which teachers are most effective in which subjects, grade levels, or even groups of students. They can then make the best match between teachers' individual strengths and
students' needs (Hershberg, Simon, and Lea-Kruger 2004).
Evaluate teacher preparation programs
With the right data collection system, Value-Added measures can also be used to evaluate teacher preparation programs in state universities. The state of Louisiana is a leader in this regard. The
state's Value-Added Teacher Preparation Program Assessment Model has the capacity to examine the performance of its K–12 students, and connect growth in student learning to state teacher preparation
programs to determine the effect of their graduates (Berry and Fuller 2006). However, keep in mind that only those teachers of subject areas that are tested are included in the data, so caution
should be taken when evaluating a teacher preparation program on a subset of their graduates (Berry and Fuller 2006).
How can you get the most from growth models?
The book on growth models is just being written. Researchers have only begun to determine how accurate the measures are and how they should best be used. Educators have just scratched the surface on
how growth models can help them improve their schools.
Researchers emphasize that growth models should not be the sole basis for making high-stakes decisions, particularly in regard to teacher evaluation (Ballou 2002, Braun 2005, Kupermintz 2003,
McCaffrey, et al. 2003, Raudenbush 2004a). Others caution that using growth models such as value-added models in teacher accountability could discourage teachers from using this valuable data to
inform their instruction (Yeagley 2007).
We need to learn more before we will know the full effects of using growth models like VAMs for any sort of accountability. Nonetheless, several researchers acknowledge that even though growth models
are not perfect, they are probably better than the accountability systems we currently have in place (McCaffrey, et al. 2003).
What all researchers can agree on is that growth models provide valuable information that is not otherwise available from models that only look at achievement status. Before policymakers consider
implementing a growth model, they should ask some pertinent questions.
Questions for District Policymakers to Consider
• What is the purpose for using growth data and who will be affected?
• What additional resources will be needed at the school and district level to implement and maintain a growth model?
• How will teachers, administrators, and other education stakeholders be trained on how growth data will be calculated and used effectively?
• How will the growth data be disseminated to districts, schools, teachers, and the community at-large?
Additional Questions for State Policymakers to Consider
• Is a growth measure a better measure than is currently used?
• Which stakeholders should be involved in developing the growth model?
• What elements do we need to implement a valid and reliable growth model and how much would it cost?
□ What elements are already in place?
□ How will the absence of an element or elements affect the reliability and validity of the growth model results?
□ Do the benefits outweigh the costs of implementing a growth model?
• How important is it that all stakeholders, including teachers and parents, understand how the growth data is calculated? Will stakeholders accept a growth model if they do not fully understand how it is calculated?
• Will the growth model be open for peer and public review? If so, how often will it be reviewed and by whom?
Questions for Federal Policymakers to Consider
• How many states have the necessary elements in place to develop a valid and reliable growth model?
• What will it cost states to develop, implement, and maintain a growth model?
□ How will the cost vary across states?
□ What financial burden will there be for districts? Do rural and other small districts have the capacity to collect and disseminate the data necessary for a growth model?
• Which growth models would be best to meet the goals of NCLB?
• Which growth models will provide the most accurate identification of schools that need improvement?
• How will growth models affect the goal of one hundred percent proficiency by 2014?
• Which students should be included in growth model calculations and accountability? Should all students have the same growth targets?
• How much flexibility should states have in designing growth models for NCLB?
• What other purposes should the growth model serve besides school accountability?
• Would a growth model have a greater effect on improving schools if it were designed to inform instruction rather than for high-stakes school/teacher accountability?
• What are the limitations of growth models?
• What continuing research is needed to understand the impact growth models are having in classrooms and to help improve their impact at the classroom level?
^1 Variations of the Performance Index models such as Transition Matrix models and Value Table models are able to measure individual students' growth as they score at higher achievement levels such
as going from Below Basic in fourth grade to Basic in fifth.
^2 There are other statistical techniques available such as using Z-scores or Multilevel Modeling Approaches that can be used to create a growth model in the absence of vertically scaled tests. But
they add to the already complex model and are likely to alter the results you are hoping to measure. Gong, B., Perie, M., and Dunn, J. (2006). Using Student Longitudinal Growth Measures for School
Accountability Under No Child Left Behind: An Update to Inform Design Decisions. Center for Assessment. Retrieved on June 7, 2007, from http://www.nciea.org/publications/
The document was written by Jim Hull, policy analyst, Center for Public Education. Special thanks to Cyndie Schmeiser and Jeff Allen at ACT, Inc.; Mary Delagardelle and staff at the Iowa School Boards
Foundation; and Joseph Montacalvo for their insightful feedback and suggestions. However, the opinions and any errors found within the paper are solely those of the author.
Posted: November 9, 2007
©2007 Center for Public Education
MathGroup Archive: October 1999 [00323]
Re: how to avoid numeric error
• To: mathgroup at smc.vnet.net
• Subject: [mg20397] Re: [mg20309] how to avoid numeric error
• From: "Atul Sharma" <atulksharma at yahoo.com>
• Date: Tue, 26 Oct 1999 00:32:51 -0400
• References: <7ubq06$5q7$4@dragonfly.wolfram.com>
• Sender: owner-wri-mathgroup at wolfram.com
Thanks for the comment and function. I still am not clear on why the
floating point x1 doesn't force machine precision, regardless of whether the
exponent is a float or integer, since I had thought that even one floating
point parameter results in the entire expression being evaluated to machine precision.
BobHanlon at aol.com wrote in message <7ubq06$5q7$4 at dragonfly.wolfram.com>...
>8.551437202665365431742222523666`12.6535*^864 -
> 2.4803091328936406321397756468`12.6535*^852*I
>Despite its large magnitude, the magnitude of the imaginary part is small
>compared to the magnitude of the real part (they differ by 12 orders of magnitude):
>10^864*Chop[(-5.2)^1208./10^864] == (-5.2)^1208
>Extending the concept of Chop
>relativeChop[x_ , delta_:10^-10] /; (Abs[x] == 0) = 0;
>relativeChop[x_, delta_:10^-10] :=
> Module[{mag = Abs[x]}, mag*Chop[x/mag, delta]]
>relativeChop[(-5.2)^1208.] == (-5.2)^1208
>Bob Hanlon
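For readers outside Mathematica, the same scale-relative chopping idea can be sketched in Python. This is an analogue, not a transliteration, of the function above: instead of normalizing by the magnitude and chopping, it zeroes any component that is tiny relative to the number's overall magnitude (the 1e-10 threshold mirrors Chop's default tolerance).

```python
def relative_chop(z, delta=1e-10):
    """Zero out any component of z that is negligible relative to |z|,
    rather than negligible in absolute terms as a plain chop would be."""
    mag = abs(z)
    if mag == 0:
        return 0
    re = z.real if abs(z.real) / mag > delta else 0.0
    im = z.imag if abs(z.imag) / mag > delta else 0.0
    return complex(re, im)

# An imaginary part ~12 orders of magnitude below the real part (mirroring
# the scale gap in the thread's example) is chopped to zero:
chopped = relative_chop(8.551e10 - 2.48e-2j)
```

(The thread's actual values, around 10^864, overflow a machine double, which is exactly the kind of regime where Mathematica's arbitrary precision matters; the smaller magnitudes here are chosen to keep the illustration in float range.)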
>In a message dated 10/16/1999 12:38:51 AM, atulksharma at yahoo.com writes:
>>I am at a loss to explain this behavior, though I perhaps
>>misunderstand how Mathematica implements its machine precision routines.
>>This is a simple example of a problem that cropped up during evaluation of
>>constants of integration
>>in a WKB approximation, where I would get quite different results depending
>>on how the constant was evaluated. I have localized the discrepancy to a
>>term of the form shown below:
>>testParameters =
>> {x1 -> 5.2, x2 -> 0.3, x3 -> 0.002, x4 -> -0.00025}
>>(-x1)^(-(x2 + x3)/x4) /. testParameters
>>In this case, as it turns out, x1 = -5.2, which is a floating point number,
>>and the exponent = 1208 (which may be integer or floating point, but is
>>floating point in this case).
>>I assumed that the result would be evaluated to machine precision in either
>>case, since x1 is a float regardless. However, depending on whether the exponent
>>is integer or not, I get two different results, with a large imaginary part:
>>8.55143720266536543174145`12.6535*^864 -
>> 2.48026735232231456274073`12.6535*^852*I
>>I assume that this has some simple relationship to machine precision and
>>round-off error, but am I wrong in assuming that x1 should determine the
>>numeric precision of the entire operation?
>>I am using Mathematica 3.01.1 on a PC/Win95 platform.
>>I also encountered another problem, which bothers me because it's so
>>insidious. In moving a notebook from one machine to another by floppy (work
>>to home), a parsing error occurred buried deep inside about 30 pages of
>>code. A decimal number of the form 1.52356 was parsed as 1.5235 6 with a
>>space inserted and interpreted as multiplication. The same error occurred in
>>the same place on several occasions (i.e. when I start getting bizarre
>>results, I know to go and correct this error).
>>I know these sound minor, but they have a large effect on the solution and
>>could easily go undetected. Thanks in advance.
Process Capability Study
What is a Process Capability Study?
A Process Capability Study is the direct comparison of voice-of-the-process (VOP) to the voice-of-the-customer (VOC). It's the ability of a process to meet requirements, either internal or external,
without process adjustment.
Its primary metrics are Cpk and Ppk.
VOC and VOP
Voice of the Customer & Voice of the Process
A Process Capability Study is a key quality improvement tool. It tells you:
• If your process has a targeting problem.
• If your process has a variation problem.
• If your process has both targeting and variation problems.
• If your specification limits are not appropriate.
• How well the process is capable of performing.
Process Capability studies have traditionally been performed on process outputs (Y's) but are even more important for process inputs (X's).
Control The Inputs (X's)....Monitor The Output (Y's)
Data Types
Attribute Data
With attribute data, a process's capability is defined in terms of pass/fail.
Process Capability For Attribute Data
Continuous Data
With continuous data, a process's capability is defined in terms of defects under the curve that fall outside of the specification limits.
Continuous Data Process Capability
Process Capability Study Steps
1. Select Study Target
2. Verify Requirement
3. Validate Specification Limits
4. Collect the Data
5. Determine Data Type (short-term or long-term)
6. Check Data Normality
7. Calculate Cp, Cpk, Pp, Ppk
Step 1 - Select Study Target
There should be a good reason to perform a Process Capability Study and that reason must be that some parameters performance must be understood in relation to specification requirements.
Steps 2 & 3 - Verify Requirement & Validate Specification Limits
Specifications or requirements should be verified and validated before starting the Process Capability Study.
I can't tell you how many times I've seen situations where the specification was not what folks thought it was. In the majority of these instances the misconception about what the specification was
had been in place for years.
What's the source of the specification(s)?
• Customer requirement?
• Business requirement?
• Regulation requirement?
• Design requirements?
• Is the specification current?
• Has the specification changed?
• Is the specification understood and agreed upon?
• Is the specification clear?
Step 4 - Collect The Data
If you're going to go through the time, effort and expense to perform capability analysis it's important to pause and really consider data collection.
How you collect the data will determine how much performance information can be extracted from it. The most information is available when the data is collected in rational subgroups.
Rational subgroups are simply items that are alike. They are an attempt to separate what's called "common-cause and special-cause" variation.
A Rational Subgroup Could Be-
• Close in time
• Made within the same set-up
• Done by the same people
• Processed using the same method
• Using the same material batch
• etc
The goal is to have a "rational" for collecting the data that will minimize the variation within each subgroup.
By collecting data this way we force a theoretical condition where only common-cause (normal) variation exists within each subgroup. All of the special-cause variation therefore lies between the subgroups.
If you can collect the Process Capability Study data in rational subgroups, not only will you get the overall capability data, you'll also be able to understand the process potential (what is possible).
Process Variation
Process Variation Between Lots
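The within-subgroup vs. between-subgroup distinction can be made concrete in a few lines. A toy Python sketch (the data is invented; Rbar/d2 with d2 = 2.326, the standard control-chart constant for subgroups of size 5, is the classic short-term sigma estimate):

```python
# Short-term (within-subgroup) sigma from subgroup ranges: the Rbar/d2 estimate.
subgroups = [
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.4, 10.2, 10.3, 10.5, 10.1],  # a shifted lot: between-subgroup variation
    [9.9, 10.0, 9.8, 10.1, 10.0],
]
D2 = 2.326  # chart constant for subgroup size n = 5
r_bar = sum(max(sg) - min(sg) for sg in subgroups) / len(subgroups)
sigma_short = r_bar / D2  # reflects common-cause variation only

# Long-term sigma: overall sample standard deviation of every point,
# which also absorbs the lot-to-lot (special-cause) shifts.
all_points = [x for sg in subgroups for x in sg]
mean = sum(all_points) / len(all_points)
sigma_long = (sum((x - mean) ** 2 for x in all_points) / (len(all_points) - 1)) ** 0.5
```

Here sigma_long exceeds sigma_short precisely because the second lot is shifted; if nothing changed between subgroups, the two estimates would converge.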
The Real World
Some process capability target inputs or outputs do not naturally lend themselves to rational sub-grouping.
• A transactional metric that is only calculated once a day.
• A test that is performed once a shift.
• Any production, transactional or service process with very low volume.
In these instances try to create a rational subgroup of two or more by using the closest consecutive data points. These consecutive data points will be the best estimate of short-term common-cause variation.
This will be the closest that you'll be able to get to estimating short-term variation.
If we do something on Monday and don't do it again until Wednesday, combine these two consecutive data points into one subgroup.
IMPORTANT - The goal is to separate special-cause variation from common-cause variation where practically possible. If rational sub-grouping just doesn't make sense due to your volume, forget about it
and use single sample data points.
Process Control and Process Capability
The validity of the results of a Process Capability Study depends upon the overall long term stability of the process. In Quality terms "the process must be in a state of statistical control."
When a process is in a state of statistical control it is predictable. Control Charts should be used to assess stability and control. Control Charts separate common-cause variation from special-cause variation.
A process is said to be in a state of "statistical control" when no special-cause variation exists within that process. Only common-cause variation remains.
The control chart below shows a process parameter in a "state of statistical control". The parameter is stable over time. None of the data points are falling above or below its calculated +/-3 sigma
control limits.
Control Chart
Control Chart Showing Only Common Cause Variation
The probability of getting a data point above or below these limits, with the existence of only common-cause variation, is approximately 0.3%. It's not very likely!
With 99.7% certainty we can conclude that any data point(s) that fall outside of these limits is an indication of special-cause variation. In other words, something meaningful has occurred to cause
the point to go outside of the calculated control limit.
Its important to know that these +/- 3 sigma control limits have nothing to do with the specification limits. Think of control limits as the mathematical boundaries of the Voice-of-the-Process.
The actual specification limits are used during the process capability study itself.
Takeaway - for the results of the Process Capability Study to be predictable long term, the process must first be stable.
Processes that are not in statistical control are unpredictable. No projections of long term capability should be made until you know that the process is stable.
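For reference, the +/-3 sigma Xbar-R control limits described above can be computed from subgroup data with the standard chart constants. A Python sketch (the three subgroups are invented; A2 = 0.577, D3 = 0, and D4 = 2.114 are the published constants for subgroups of size 5):

```python
# Xbar-R control limits from subgroup data, using constants for n = 5.
subgroups = [
    [60.1, 59.9, 60.0, 60.2, 59.8],
    [59.9, 60.1, 60.0, 59.8, 60.2],
    [60.0, 60.2, 59.9, 60.1, 59.8],
]
A2, D3, D4 = 0.577, 0.0, 2.114

xbar_bar = sum(sum(sg) / len(sg) for sg in subgroups) / len(subgroups)  # grand mean
r_bar = sum(max(sg) - min(sg) for sg in subgroups) / len(subgroups)     # mean range

ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar  # Xbar chart limits
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar                        # R chart limits
```

Note that the specification limits appear nowhere in this calculation: control limits are the voice-of-the-process, derived entirely from the data.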
Step 5 - Determine Data Type - S/T or L/T
Customers are typically interested in long-term data (L/T) or performance over time. And as you would imagine, variation is at its maximum long term.
Short term data (S/T) is also important. With short-term data we can learn what is possible for our process with existing resources. Variation's impact on a process is typically at its minimum short term.
Short term variation is called "entitlement". Entitlement is the best that the process can do with no changes. Entitlement is represented by the capability index Cp. More on Cp later.
Taking measurements for a day is short-term data. Taking measurements over the course of a few months is most likely short-term data. Taking measurements for an entire calendar quarter or more is
typically long-term data.
Short Term and Long Term Data
Short Term and Long term Data Examples
Step 6 - Check Data Normality
Common-Cause Variation (Normal Variation) is inherent in the process/system itself and can only be reduced by changes to the system.
It's a direct result of the way the system operates. It usually requires management action due to management's control over the system - changing a process or upgrading equipment.
Special-Cause Variation is directly assignable and can often be tracked down and fixed without extensive changes to the system - broken or worn equipment, wrong materials, etc.
A process that is free from special-causes of variation, only common-causes exist, is said to be "in statistical control" and stable. An "in-control" process experiences only normal variation.
When a process is in a state of statistical control a fundamental rule of statistics applies, the empirical rule.
The rule states that for a normal distribution (bell shaped):
• 99.7% of the measurements will fall between +/-3 standard deviations from the mean.
• 95% will fall between +/- 2 standard deviations from the mean,
• 68% will fall between +/- 1 standard deviations from the mean.
Area Under The Normal Curve
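The empirical rule is easy to verify by simulation. A Python sketch using stdlib only (the process parameters are synthetic; the fixed seed just makes the run reproducible):

```python
import random
import statistics

# Simulated in-control process data: 10,000 draws from one normal distribution.
random.seed(42)
data = [random.gauss(50.0, 2.0) for _ in range(10_000)]
mean, sd = statistics.fmean(data), statistics.stdev(data)

def within(k):
    """Fraction of points within k standard deviations of the mean."""
    return sum(abs(x - mean) <= k * sd for x in data) / len(data)

# Expect roughly 68% / 95% / 99.7% within 1, 2, and 3 standard deviations.
coverage = (within(1), within(2), within(3))
```

This is exactly why a point beyond the +/-3 sigma control limits is treated as a signal: under pure common-cause variation it should happen only about 0.3% of the time.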
Step 7 - Calculate Cp, Cpk, Pp, Ppk
Process Capability Measures - Cp, Cpk, Pp, Ppk
Cp is Process Capability: Cp compares the width of the tolerance or specification (USL-LSL) to the width of the short term process variation. Cp is entitlement which is the best the process could
possibly do with no changes.
Cpk is Process Capability Index: Cpk is almost the same as Cp but it imposes a "k factor" penalty for not being centered on the target. It adjusts Cp for the effect of the non-centered distribution
within the specification limits. Like Cp it uses the short term standard deviation in its calculation.
Calculating Cp and Cpk
Cp and Cpk Capability Calculation
Pp is Process Performance: Pp compares the width of the tolerance or specification (USL-LSL) to the width of the long term process variation.
Ppk is Process Performance Index: Ppk is almost the same as Pp but it imposes a "k factor" penalty for not being centered on the target. It adjusts Pp for the effect of a non-centered distribution
within the specification limits. Like Pp it uses the long term standard deviation in its calculation.
Calculating Pp and Ppk
Pp and Ppk Capability Calculation
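Putting the four formulas together, here is a stdlib-only Python sketch. The RPM subgroups and 59.5/61.5 limits are invented for illustration, and the pooled within-subgroup standard deviation is used as the short-term sigma estimate (one of several common estimators):

```python
import statistics

def capability_indices(subgroups, lsl, usl):
    """Compute Cp, Cpk (short-term) and Pp, Ppk (long-term).

    subgroups: list of rational subgroups (lists of measurements).
    Short-term sigma = pooled within-subgroup standard deviation;
    long-term sigma  = standard deviation of all points combined.
    """
    all_points = [x for sg in subgroups for x in sg]
    mean = statistics.fmean(all_points)

    # Short-term: pool the within-subgroup variances.
    dof = sum(len(sg) - 1 for sg in subgroups)
    pooled_var = sum((len(sg) - 1) * statistics.variance(sg) for sg in subgroups) / dof
    sigma_st = pooled_var ** 0.5

    # Long-term: overall sample standard deviation.
    sigma_lt = statistics.stdev(all_points)

    cp = (usl - lsl) / (6 * sigma_st)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma_st)
    pp = (usl - lsl) / (6 * sigma_lt)
    ppk = min(usl - mean, mean - lsl) / (3 * sigma_lt)
    return cp, cpk, pp, ppk

# Hypothetical RPM subgroups against a 60.5 +/- 1.0 spec.
data = [[60.1, 59.9, 60.0, 60.2], [59.8, 60.0, 60.1, 59.9], [60.0, 60.2, 59.9, 60.1]]
cp, cpk, pp, ppk = capability_indices(data, lsl=59.5, usl=61.5)
```

Because this invented process is centered near 60.0 rather than the 60.5 target, the "k factor" penalty shows up directly: Cpk comes out well below Cp, and Ppk below Pp.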
Capability Analysis Example
A precision motor manufacturer produces a model of motor whose specification is 60.5 +/- 1.0 RPM. The company has been producing this model for some time with varying results and feedback from its customers.
In order to understand the behavior of the process the company decides to perform a Process Capability Study.
The target for the study is the requirement of 60.5 RPM - Step 1.
This requirement and its tolerance of +/- 1.0 RPM are verified and validated against customer requirements and the design drawings and check out - Steps 2 and 3.
A Data Collection Plan is created to collect and format the existing data - Step 4.
Checking Data Normality (Step 6)
Looking at the Histogram below we can see that our RPM data resembles the general shape of a bell-curve. This is our first strong hint that our data is normally distributed and can be modeled by the
normal distribution.
Secondly, the P-value of 0.611 is greater than our level of significance of 0.05. This tells us that we fail to reject the hypothesis that the data is normally distributed.
All is okay here! Now let's take a look to see if our RPM data is stable over time and in a state of statistical control?
Histogram Example
Histogram Showing A Normal Distribution
Checking Process Stability
The X-bar & R Chart below of the last 20 weeks of data shows that all of our rational subgroup averages (sample means) and ranges are falling within the mathematically calculated upper and lower
control limits (UCL/LCL). These limits are mathematically set at +/- 3 standard deviations from the mean (average).
99.7% of our plotted data points will fall between these limits unless something in the process changes.
Our process is in statistical control - it's stable. Only common-cause, or inherent and natural, variation is acting upon it.
If special-cause variation was in the process you would see it in the control chart. One or more of the plotted data point would fall outside of the +/- 3 standard deviation (sigma) control limits.
Our process can be modeled using the normal distribution. And, because our data shows statistical control, our performance is very predictable long term.
Now it's finally time to see how we're performing against the RPM specification of 60.5 +/- 1.0 RPM.
Control Chart
Xbar and R Control Chart
Check Process Capability (Step 7)
In the Xbar & R Chart above take note that there is no mention of the specification.
The graph below is the actual Process Capability Study. It's the direct comparison of the voice-of-the-process, which is the data, to the specification, which is the voice-of-the-customer.
The study in the graph below shows an issue.
The motor RPM is not centered on the target of 60.5 RPM. The data is to the left and is centered around 60.0 RPM. The actual mean or average RPM is 60.01.
Our Cp and Pp values are 1.86 and 1.81. Cp and Pp measure the width of the tolerance band, which in this case is 2 RPM, to the width of the process output.
Cp uses the within subgroup variation in its calculation while Pp uses the overall variation.
Our Cp and Pp values show that the tolerance width is almost twice the size of the process output width. This is great news and is clearly visible in the capability study chart below.
Our Cpk or Ppk values are less than 1.0 which means that we will produce motors outside of the RPM specification. By viewing the PPM (parts per million) data we can estimate our performance over the
long term.
Process Capability Study Example
Capability Study with Calculated Statistics
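The link between a sub-1.0 Cpk and long-term PPM can be sanity-checked with the normal CDF. A Python sketch using the example's reported mean of 60.01 (sigma here is back-solved from the reported Cp of 1.86, so the exact PPM figure is illustrative only):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

lsl, usl = 59.5, 61.5              # the 60.5 +/- 1.0 RPM spec
mu = 60.01                         # reported process mean
sigma = (usl - lsl) / (6 * 1.86)   # back-solved from the reported Cp of 1.86

cpk = min(usl - mu, mu - lsl) / (3 * sigma)
# Expected long-term fraction out of spec, scaled to parts per million.
ppm = (norm_cdf((lsl - mu) / sigma) + (1.0 - norm_cdf((usl - mu) / sigma))) * 1e6
```

With these inputs Cpk comes out just under 1.0, and the model predicts on the order of a couple thousand out-of-spec motors per million: the cost of a capable but off-center process.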
Handling Non-normally Distributed Data
An assumption with a Process Capability Study is that our data comes from a normal distribution. It is based upon having normal or near normal data. Process Capability Analysis loses its accuracy,
either high or low, when the data departs from a normal distribution.
With a normal distribution the data falls symmetrically about the mean or average. The distribution is bell-shaped with approximately 50% of the data falling on each side of the mean.
Knowing that a normal distribution is bell shaped we can use this criteria to give our data the "eyeball test" to assess whether or not it is normally distributed.
Looking at the Histogram below we can clearly see that the data is symmetric about the mean or average value of approximately 50. The data output is bell-shaped with approximately 50% of the data
falling on each side of the mean.
Normal Distribution Example
Normal "Bell Shaped" Distribution
Another technique used to assess whether data is normally distributed is probability plotting. If the data is normally distributed the plotted data will approximate a straight line.
The data in the Histogram above is shown below in a Probability Plot. In general it follows a straight line. Pretty solid evidence that the data is normally distributed.
P-Value Method: Advanced Topic
Another technique is the P-value approach which is also widely used to assess whether data is normally distributed.
If the calculated P-value exceeds the significance level of the test, you conclude that the data is normally distributed. The significance level, or alpha risk, is usually set at 0.05 (5%) to 0.10 (10%).
In the Probability Plot below you can see that the calculated P-value is 0.716. This is much greater than our alpha risk of 0.05 (5%). Further evidence that the above distribution of data is normally distributed.
Note: The p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.
Straight Linear Plot
An "eyeball test" of the data's Histogram (shape) and Probability Plot is all that is typically needed to judge a data sets distribution normality.
But, as distributions depart from this perfect bell-shape it becomes more difficult to judge normality and the probability plot and P-value is needed.
Process Capability - Non-normally Distributed Data
Many processes do not output data that follows a normal distribution.
Cycle-time data from transactional processes is rarely normally distributed. Luckily though there are methods to perform Process Capability Studies on non-normally distributed data.
To perform the analysis you first must transform your non-normal data distribution to a normal distribution.
With transformation magic you force your non-normal data to become normal........
Here's an example......
Non Normal Data Example
A custom engineering company has determined that responding to its customers Request-for-Quote (RFQ) within 5 business days is critical-to-satisfaction.
They decide to perform a Process Capability Study to quantify their performance against this 5 day requirement.
Generally following steps 1-5 above, they're fine.
Then Step 6 - Check Data Normality. Looking at the Graphical Summary below they see that the RFQ cycle-time data is not symmetrical or bell-shaped.
It does not pass the "eyeball test" for a normal distribution.
Non-normal Distribution Example
Data Distribution Not Bell Shaped
The probability plot confirms it. The RFQ cycle-time data does not follow a straight line. The P-value is also below the alpha risk of 0.05 (5%). No doubt about it - this is not a normal distribution.
Non Straight Line
With non-normally distributed data you need to find a statistical transformation function to transform the non-normal distribution into a normal distribution.
This is best accomplished with a good statistical software package, there are many available. Data transformation can also be done with Excel or any standard spreadsheet type software package.
Transformation Functions
Two of the most popular transformation functions used with Process Capability Analysis are the:
• Box-Cox Transformation: transforms the data by raising it to the power lambda, where lambda is any number between –5 and 5.
• Johnson Transformation: optimally selects a function from three families of distributions of a variable, which are then easily transformed into a normal distribution.
Either the Box-Cox or Johnson Transformation will always work. Using statistical software to identify the best transformation choice shows the results in the Probability Plot graphs below.
With these plots you're looking for the Goodness-of-Fit to a straight line.
To determine the best data transformation choice or "best fit" use the P-value. The higher the P-value the better the fit of the data to the model.
In this case we choose the Johnson Transformation Function - second graph bottom right corner. It's P-value is 0.95. The Box-Cox P-value is 0.769 which is second best. Many other transformation
functions are available but the Johnson or Box-Cox will typically work.
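Under the hood a Box-Cox search is just the power transform plus a goodness criterion. A stdlib-only Python sketch (the data is simulated skewed cycle-time, and minimizing absolute skewness over a coarse lambda grid stands in for the profile-likelihood search that real statistical packages perform):

```python
import math
import random

def box_cox(y, lam):
    """Box-Cox power transform; lam = 0 is the log case."""
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1.0) / lam for v in y]

def skewness(xs):
    """Population skewness: third standardized moment."""
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

# Simulated right-skewed "RFQ cycle-time" data (seed fixed for reproducibility).
random.seed(1)
times = [random.expovariate(1 / 3.0) + 0.5 for _ in range(500)]

# Pick lambda on a coarse grid by minimizing |skewness| of the transform,
# a crude stand-in for the likelihood search real packages perform.
best_lam = min((l / 10 for l in range(-20, 21)),
               key=lambda lam: abs(skewness(box_cox(times, lam))))
transformed = box_cox(times, best_lam)
```

The capability study is then run on `transformed`, with the specification limit pushed through the same transform so the comparison stays apples-to-apples.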
Probability Plots
Once you know which transformation function to use to transform your non-normal data to normal, you perform the transformation and perform the Process Capability Study on the transformed data.
In the analysis below notice that the observed performance is that approximately 17% of the RFQ's exceeded the 5 day requirement.
The long-term expected overall performance is that approximately 19% of RFQ's will exceed the 5 day turn around limit.
Capability Study - Transformed Data
Transformed Data
Cycles, and the cycle decomposition of a permutation
March 1, 2009, 4:40 pm
Keeping things in context
I’ve started reading through Stewart and Tall’s book on algebraic number theory, partly to give myself some fodder for learning Sage and partly because it’s an area of math I’d like to explore. I’m
discovering a lot about algebra in the process that I should have known already. For example, I didn’t know until reading this book that the Gaussian integers were invented to study quadratic
reciprocity. For me, the Gaussian integers were always just this abstract construction that Gauss invented evidently for his own amusement (which maybe isn’t too far off from the truth) and which
exists primarily so that I would have something to do in abstract algebra class. Here are the Gaussian integers! Now, go and find which ones are units, whether this is a principal ideal domain, and
so on. Isn’t this fun?
Well, yes, actually it is fun for me, but that’s because I like a…
November 28, 2011, 7:45 am
Cycles, and the cycle decomposition of a permutation
Last week’s installment on columnar transposition ciphers described a formula for the underlying permutation for a CTC. If we assume that the number of columns being used divides the length of the
message, we get a nice, self-contained way of determining where the characters in the message go when enciphered. Now that we have the permutation fully specified, we’ll use it to learn a little
about how the CTC permutation works — in particular, we’re going to learn about cycles in permutations and try to understand the cycle structure of a CTC.
First, what’s a cycle? Let’s go back to a simpler permutation to get the basic concept. Consider the bijective function \(p\) that maps the set \(\{0,1,2,3,4, 5\} \) onto itself by the rule
$$p(0) = 4 \quad p(1) = 5 \quad p(2) = 0 \quad p(3) = 3 \quad p(4) = 2 \quad p(5) = 1$$
If you look carefully at the numbers here, you’ll see that some of…
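For the curious, the cycle decomposition described here can be computed mechanically; a short Python sketch (the function name and the tuple representation are mine, not from the post):

```python
def cycles(perm):
    """Cycle decomposition of a permutation given as a list where perm[i]
    is the image of i; fixed points come out as 1-cycles."""
    seen, result = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = perm[i]
        result.append(tuple(cycle))
    return result

# The permutation p from above: p(0)=4, p(1)=5, p(2)=0, p(3)=3, p(4)=2, p(5)=1.
decomposition = cycles([4, 5, 0, 3, 2, 1])
```

Tracing p gives the 3-cycle (0 4 2), the 2-cycle (1 5), and the fixed point 3.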
Math Question
Having completed algebra with flying colors lo these many moons ago, I'd have thought this would be an easy question, but for some reason it's eluding me.
Bn is the unaltered speed of a song expressed as beats per minute.
Bd is the desired speed of a song expressed as beats per minute.
Pn = 100% is the length of a song expressed as a percentage of the unaltered overall length.
Pd is the length of a song to achieve the new Bd expressed as the percentage of the unaltered overall length.
Create an expression describing Pd as a simple function of the other variables where all other variables are known.
Pd = (Bn*Pn)/Bd
As rough notation, as some units dropped out. Bn in my equation is no longer bars per minute, just bars (aka 30 bars, instead of 30 BPM), Pn is just an integer (that's where you cancel out the
minutes in BPM), and BD is just in Bars too. Hrmm, which means I've lost the Minute unit on right hand side. But whatever, believe that should work, roughed it out on paper, going to actually try
it with a song now.
And of course, don't forget proper order of operations there (though I doubt it matters in this case).
Can recheck my work and do some more test "time warps", but went from 28.9 BPM to 32 BPM right on the dot (which is what I was aiming for) using that math. Normally I just guesstimate it, so
that's going to be useful in the future. Thanks.
Me New Member
*head explodes*
bluebereft Member
Is P a percentage?
Pd/Pn = Bd/Bn ?
Test file was 2:15 (Pn), current BPM was 28.9 (Bn). Target BPM was 32 (Bd), so using my equation (at least, the way it works in my head; my notation might not work for anyone else).
Another way to do the math (to skip the first multiplying step), is to figure out the total number of bars in the song, then just divide by the tempo you want in BPM. That will give you the
target length (and in fact is all my equation is doing)
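The thread's own numbers make a compact worked example of the formula Pd = (Bn*Pn)/Bd (a sketch; the variable names are the thread's):

```python
def stretch_percent(bn, bd, pn=100.0):
    """Pd = Bn*Pn/Bd: percent of the original length needed to hit tempo Bd."""
    return bn * pn / bd

# The test case from this thread: a 2:15 (135 s) song at 28.9 BPM,
# retargeted to 32 BPM.
pd = stretch_percent(bn=28.9, bd=32)   # ~90.3% of the original length
new_len = 135 * pd / 100               # 2:15 shrinks to about 2:02

print(round(pd, 2), round(new_len, 1))  # 90.31 121.9
```

Speeding a song up (Bd > Bn) gives Pd < 100%, and slowing it down gives Pd > 100%, which is exactly the "111% to stretch" surprise discussed below.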
Thank you etp! You are a life saver.
j_alexandra Well-Known Member
mine, too
lol @ me and j.
and you're welcome SK. Happy to help, esp. when it's something I should have done a while ago, and can use now.
Ah, I just figured out where I was going wrong. I was getting the same formula, then plugging in the numbers for slowing a 60 MPM international Viennese to 54 MPM American and getting 111%. Then
I'd say to myself, "Self, this can't be right. Why would the number be higher if the song got slower?" forgetting that this was the percentage to <i>stretch</i>, not compress. Thanks for stepping
me through the math.
I think that's inverted.
Writing etp's result another way and expressing the number of beats as N, we could say:
BdPd = BnPn = N
or, with units:
Bd beats/min * Pd min = Bn beats/min * Pn min = N beats
That is, the number of beats is the same regardless of the rate at which they are played or how long it takes to play them.
Too late for me to do the math on your equation (not doubting, just the way I was raised).
I just multiplied both sides of your equation by Bd, which incidentally puts the equation in a form other than OP requested. So I think you got it right just the way you had it.
BlueBereft's version / question isn't the same as yours. I added the dimensional analysis in the hopes that it made things more clear and would minimize head explosions. Maybe I just added
confusion, though. Oh, well.
ah yeah, I see what you did. Makes sense to me. Of course, I won't say you didn't add confusion for people. Math, even algebra (I avoid calculus and higher, even though profs and dad all say I understand it), is definitely an acquired skill. And to really think that way, I think you have to be born with it. Otherwise you're thinking in one "language" and converting in your head. Just like I do with Spanish.
Determining Fundamental Cosmological Parameters
At multipoles to a precision of a few percent (see e.g. Bond, Efstathiou & Tegmark 1997; Zaldarriaga, Spergel & Seljak 1997; Efstathiou & Bond 1999).
The unique ability of Planck to distinguish between theoretical models with very similar cosmological parameters is illustrated in figure 1.9a. In this figure, we compare CMB power spectra for
spatially flat adiabatic CDM models with different parameters and compare the ability of MAP and Planck to differentiate between them.
Figure 1.9a: Simulations of the CMB power spectrum of a cold dark matter model illustrating how Planck can determine cosmological parameters to high precision. The solid curves in the upper panel
show the CMB power spectrum for an adiabatic CDM model with baryon density ω_b = Ω_b h^2 = 0.0125, CDM density ω_c = Ω_c h^2 = 0.2375, zero cosmological constant, Hubble constant H_0 = 50 km s^-1 Mpc^-1, scale-invariant spectra, n_s = 1, n_t = 0, and a ratio of r = 0.2 for the tensor to scalar amplitudes. The dashed lines (barely distinguishable from the solid lines) show spatially flat models with the parameters listed above each figure. The differences in these power spectra are plotted on an expanded scale in the lower panels. The points show simulated observations and 1σ errors.
The Table below displays the 1σ parameter uncertainties. As already demonstrated in the Phase A report, Planck has been carefully designed to minimise such sources of systematic error. In particular, the wide
frequency coverage provided by the Planck LFI and HFI instruments allows accurate subtraction of the Galaxy and extragalactic foregrounds from the primordial cosmological signal. Furthermore, LFI
and HFI have different beam profiles and noise characteristics, and both sample the 100 GHz frequency range at comparable sensitivities. The Planck instruments have been designed to allow a large
number of cross-checks of the data. Such consistency checks are essential for a comprehensive and convincing analysis of cosmological parameters.
Parameter     MAP      MAP+     PLANCK LFI   PLANCK HFI
Δω_b/ω_b      0.11     0.05     0.016        0.0068
Δω_c/ω_c      0.21     0.11     0.04         0.0063
ΔΩ_Λ          0.15     0.081    0.035        0.0049
…             0.014    0.0046   0.00139
…             0.81     0.67     0.13         0.49
Δn_s          0.066    0.032    0.01         0.005
Δn_t          0.74     0.72     0.57
…             0.22     0.19     0.18
[1] We use ω_i = Ω_i h^2 to denote physical densities (Ω_b, Ω_c and Ω_Λ are the baryon, CDM and cosmological constant densities, respectively). Q denotes the quadrupole amplitude, n_s (n_t) stands for the scalar (tensor) spectral index, while
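Forecasts like the ones in this table are conventionally produced with a Fisher-matrix calculation over the angular power spectrum. The sketch below is purely illustrative (a toy two-parameter model with cosmic-variance-style errors, not the actual Planck instrument or noise model):

```python
import numpy as np

# Toy Fisher-matrix forecast: how well an amplitude A and tilt n are
# constrained by noisy band-power measurements of C_l = A * (l/100)**(n-1).
ls = np.arange(2, 1500)
A_fid, n_fid = 1.0, 1.0

def model(A, n):
    return A * (ls / 100.0) ** (n - 1.0)

C_fid = model(A_fid, n_fid)
sigma = np.sqrt(2.0 / (2 * ls + 1)) * C_fid   # cosmic-variance-style 1-sigma errors

# Numerical derivatives of C_l with respect to each parameter.
eps = 1e-5
dC_dA = (model(A_fid + eps, n_fid) - C_fid) / eps
dC_dn = (model(A_fid, n_fid + eps) - C_fid) / eps

derivs = [dC_dA, dC_dn]
F = np.array([[np.sum(d1 * d2 / sigma**2) for d2 in derivs] for d1 in derivs])
errors = np.sqrt(np.diag(np.linalg.inv(F)))   # marginalized 1-sigma forecasts
print(errors)
```

Extending the derivative list to more parameters gives exactly the kind of correlated-uncertainty forecasts shown in Figure 1.9b.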
Determining fundamental cosmological parameters with high accuracy would lead to a profound change in our understanding of cosmology. For example:
□ PLANCK promises the first accurate geometrical estimate of the spatial curvature of the Universe. Will it be compatible with the inflationary prediction of a spatially flat Universe (Ω = 1)?
□ Will CMB estimates of the Hubble constant be compatible with estimates from more traditional techniques, such as from Cepheid distances to galaxies in the Virgo cluster (e.g. Freedman et al.
1994) or from Type Ia supernova light curves (Riess et al. 1995)? Will the values of H[0] be compatible with the ages of stars in globular clusters?
□ PLANCK can set tight limits on the value of a cosmological constant, which has been invoked by some authors (e.g. Ostriker and Steinhardt) to solve the age-Hubble constant problem and to explain observed large-scale structure in the Universe. Such a stringent limit on
□ Will estimates of the baryon density and Hubble constant be compatible with the predictions of primordial nucleosynthesis? The PLANCK limits on the baryon density (cf. e.g. Walker et al. 1991) will serve as a stimulus for more accurate measurements of primordial element abundances and for theoretical investigations of deviations in the predicted abundances caused by physics beyond the Standard Model of particle physics (e.g. massive neutrinos, etc.).
□ Do we require dark baryonic matter in the present Universe? Luminous stars in galaxies contribute only
These and many other questions will, for the first time, be open to quantitative analysis. PLANCK would truly revolutionize cosmology, turning it from a qualitative science fraught with
systematic errors and uncertainties, into a quantitative science in which most of the key parameters are constrained to high precision.
Figure 1.9b shows examples of the correlations between some of these parameters. In Figure 1.9b we have assumed a universe with h = 0.5, n_s = 1, r = 1,
Figure 1.9b: The contours show 50, 5, 2 and 0.1 percentile likelihood contours for pairs of parameters determined from fits to the CMB power spectrum. The figures to the left show results for an experiment with resolution θ_FWHM = 1°. Those to the right are for a higher resolution experiment with θ_FWHM = 10', plotted with the same scales (central column) and with expanded scales (rightmost column). Notice how the accuracy of parameter estimation increases dramatically at the higher angular resolution. In these examples, we have assumed that 1/3 of the sky is observed at a sensitivity of ΔT/T ~ 10^-6 per resolution element. (See Figure 1.5 for simulations of the CMB power spectra for these experimental configurations.)
This figure shows that: (1) the parameters h, Q_rms and the spectral index n_s are strongly correlated, but again with the angular resolution of PLANCK the uncertainty on each parameter can be reduced to a few percent;
(3) the uncertainty on the amplitude of the gravitational wave component of the fluctuations, r, depends only weakly on angular resolution because it is determined by low order multipoles (cf.
e.g. Knox 1995).
In summary, the analysis of this section shows that observations of the CMB anisotropies with PLANCK are capable of determining fundamental cosmological parameters to high precision. The only
assumptions involved are that the primordial fluctuations are adiabatic and characterized by an approximately power-law spectral index, assumptions which themselves can be verified from the
PLANCK maps of the CMB anisotropies. The physics underlying these predictions is extremely well understood, involving only linear perturbation theory, and hence the theoretical predictions
presented here should be realistic. The analysis provided in From Observations to Scientific Information shows that the frequency coverage and high sensitivity of PLANCK will allow subtraction of
foregrounds and discrete sources, so that the CMB anisotropies should be retrieved over at least 1/3 of the sky with a sensitivity of
[last update: 1 August 1999 by P. Fosalba]
Help Me With Gcse Maths
Re: Help Me With Gcse Maths
Hi bobbym yes i mean flash cards?
Re: Help Me With Gcse Maths
I will look around for them, give me some time.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Help Me With Gcse Maths
You can try this page:
Bob is here and he will take over now.
Re: Help Me With Gcse Maths
hi Mandy,
I've just logged on.
If you give me a moment I will mark your fractions and tell you how many you have got right.
Then we can work through any that need correcting.
stay on line please.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Help Me With Gcse Maths
Sorry. My computer crashed and I've got to type it all again.
5 minutes please.
Re: Help Me With Gcse Maths
Right. Let's hope it works this time.
Your fractions are nearly all correct now but .....
I'll tick completely right in green.
And if part of a question is wrong, I'll make that bit red . The rest of the line is correct!
(1) 1/2 = 3/4 or 12/24
(2) 1/3 = 2/4 or 8/24
(3) 2/3 = 16/24 tick
(4) 3/4 = 3/6 or 18/24
(5) 5/12 = NO tick
(6) 2/6 = 1/3 or 2/4 or 8/24
(7) 8/24 = 1/3 or 2/4 or 2/6
(8) 16/24 = 2/3 tick
(9) 18/24 = 3/6 or 2/9 where did 9 come from?
(10) 7/24 = NO tick
You should try to correct Q9. Hint. Look at Q4
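A quick way to check equivalences like the ones marked here is Python's fractions module, which reduces every fraction to lowest terms automatically (a small sketch, not part of the thread):

```python
from fractions import Fraction

# Two fractions are equivalent exactly when they reduce to the same value.
assert Fraction(12, 24) == Fraction(1, 2)   # Q1: 12/24 is another name for 1/2
assert Fraction(16, 24) == Fraction(2, 3)   # Q8
assert Fraction(18, 24) == Fraction(3, 4)   # the hint: compare Q9 with Q4
assert Fraction(18, 24) != Fraction(2, 9)   # so 2/9 can't be right for Q9
```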
Re: Help Me With Gcse Maths
Hi bob bundy so how do you think i did then? and do you know if there are flash cards that i can get so i can practise when not on the computer? and what about giving me some help with making my timetable for my course work then?
Re: Help Me With Gcse Maths
I think you need more practice with fractions.
I think you should try to do a correction for Q9 (hint look at Q4)
Can you print my picture so you have it on paper?
If you can, I'll make you some flash cards that you can print off.
Meanwhile you can practice fractions at these places:
Re: Help Me With Gcse Maths
Hi bob yes i can print your pictures out so you can make me some flash cards please? it would help me a lot? send me a message back as now off to bed as getting tired ok?
Re: Help Me With Gcse Maths
Making a time plan:
Get a calendar.
Put a big "EXAM TODAY" in the days when you have exams.
Count up how many days from now until then.
Try to set aside some time every day for studying.
Decide how much time you can spend each day.
Write in how much you actually do at the end of each day.
If it is less than you should have done, make up for it the next day by doing more. There are bound to be days when you just cannot spend the right amount of time, but try to keep up the average.
Make a list of all the topics you must get good at.
Share out the list amongst the days you have, so you can see that, if you keep to the plan, you will complete the list in time.
Plan also to try past exam papers. One per week would be good. If I know what paper you have done and you post your answers here, I can do two things for you:
(i) tell you roughly what grade you would get if that was the real thing,
(ii) give some help with the bits you got wrong / couldn't do.
If you keep that up, you will gradually see your grade improving, I'm sure.
Re: Help Me With Gcse Maths
I'll make some flash cards tomorrow (Tuesday).
Re: Help Me With Gcse Maths
Good morning Mandy,
I've made 4 cards. If this is what you want I'll make more.
I've put a picture for the fraction, with the numbers and words.
If you cut out each one and fold it in half and stick it, you'll have the writing on the back.
So you can look at a picture, say what fraction it is, and then turn over to see if you were correct.
Or you can read what fraction is required and draw the picture, then turn over to see if you are correct.
And later, when I've made lots more, you can shuffle up all the cards and then sort them out into piles that are the same.
And I can add decimals and percentages later too.
Let me know if you would like me to make more.
I'll be out in the garden this morning, but I'll log in around 1pm.
Re: Help Me With Gcse Maths
Hi bob bundy Mandy here sorry i haven't got back to you sooner? But had a problem with the computer it was on then off but my neighbour fixed it for me. And yes please can you make me some more flash cards? The syllabus i am doing is AQA 4365 B How do i get the flash cards to be bigger please? send me a message now or as soon as possible?
Last edited by mandy jane (2012-04-11 00:43:58)
Re: Help Me With Gcse Maths
hi Mandy,
Please put a ruler on your laptop screen and measure how long and wide each card is for you. Then post back the measurements. Say cm or inches I don't mind which, but say.
I'll start making more cards.
Re: Help Me With Gcse Maths
Hi bob bundy mandy here they are 4.5cm long and 3cm wide could i have them 7cm long and 5cm wide? I am doing syllabus AQA 4365 B ok for my exam? hope that will help you? send message back as soon as
you can?
Re: Help Me With Gcse Maths
Hi guys
Can I help somehow? Make a timetable or something? Do you need any help?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Help Me With Gcse Maths
hi anonimnystefy mandy here how do i sort out my timetable for study then? can you help?
Re: Help Me With Gcse Maths
Hi mandy;
What do you have besides math?Tell me all the subjects you have and how easy it is for you to learn each one.
Last edited by anonimnystefy (2012-04-11 06:21:25)
Re: Help Me With Gcse Maths
hi mandy here i only have maths so how do i make a timetable for this subject and do you know much about flash cards? send message?
Re: Help Me With Gcse Maths
Hi mandy;
Bob is working on them, I believe.
Re: Help Me With Gcse Maths
Hi Mandy,
I'll make those cards again the size you have said.
And some more tomorrow.
I've got the exam syllabus.
It has a list of all the things you have to be able to do.
Have you got this list?
I have also been looking at what special arrangements are allowable for candidates who have a specific learning difficulty.
I would like to ask you about your dyslexia and how it affects your learning.
Would you be prepared to answer questions about this.
If you would like I can send you a private message about your dyslexia so you do not have to tell everyone who might read this post.
Re: Help Me With Gcse Maths
mandy jane wrote:
hi mandy here i only have maths so how do i make a timetable for this subject and do you know much about flash cards? send message?
bobbym wrote:
Hi mandy;
Bob is working on them, I believe.
I believe bobbym is right.Bob is working on the flashcards.
Tell me what is it you need to practice for your GSCE?Which parts of math?
Re: Help Me With Gcse Maths
hi everybody,
My router keeps tripping out so I lose the connection.
If the forum 'says' I'm logged in and it all goes quiet, that's why.
The difficulty is deciding whether it is a faulty router or my broadband connection that is at fault. The last time this happened (last summer) it was because bt had plugged the wrong device in to my
line at the telephone exchange. No point buying a new router if it's the line.
It seems to be worse in the evenings so maybe the exchange cannot handle the extra demand.
Re: Help Me With Gcse Maths
Hi bob
Is it on the new laptop or the old one? I don't remember if you got the new one yet.
If you are on the new one than,I don't know,but I would suggest trying to call someone to see if the cables or whatever are wrongly connected.Hope you get it fixed soon!
Re: Help Me With Gcse Maths
hi bob no i don't have the list of what i should be able to do for the exam ok? can you send it to me then? yes i would answer questions on my dyslexia and no i don't mind who reads this? send message now please?
ECCC - Reports tagged with Interactive proofs
Reports tagged with Interactive proofs:
TR94-007 | 12th December 1994
Oded Goldreich, Rafail Ostrovsky, Erez Petrank
Computational Complexity and Knowledge Complexity
We study the computational complexity of languages which have
interactive proofs of logarithmic knowledge complexity. We show that
all such languages can be recognized in ${\cal BPP}^{\cal NP}$. Prior
to this work, for languages with greater-than-zero knowledge
complexity (and specifically, even for knowledge complexity 1) only
trivial computational complexity bounds ... more >>>
TR94-008 | 12th December 1994
Oded Goldreich
Probabilistic Proof Systems (A Survey)
Various types of probabilistic proof systems have played
a central role in the development of computer science in the last decade.
In this exposition, we concentrate on three such proof systems ---
interactive proofs, zero-knowledge proofs,
and probabilistic checkable proofs --- stressing the essential
role of randomness in each ... more >>>
TR95-024 | 23rd May 1995
Mihir Bellare, Oded Goldreich, Madhu Sudan
Free bits, PCP and Non-Approximability - Towards tight results
Revisions: 4
This paper continues the investigation of the connection between proof
systems and approximation. The emphasis is on proving ``tight''
non-approximability results via consideration of measures like the
``free bit complexity'' and the ``amortized free bit complexity'' of
proof systems.
The first part of the paper presents a collection of new ... more >>>
TR98-075 | 9th December 1998
Adam Klivans, Dieter van Melkebeek
Graph Nonisomorphism has Subexponential Size Proofs Unless the Polynomial-Time Hierarchy Collapses.
We establish hardness versus randomness trade-offs for a
broad class of randomized procedures. In particular, we create efficient
nondeterministic simulations of bounded round Arthur-Merlin games using
a language in exponential time that cannot be decided by polynomial
size oracle circuits with access to satisfiability. We show that every
language with ... more >>>
TR99-025 | 2nd July 1999
Yonatan Aumann, Johan Hastad, Michael O. Rabin, Madhu Sudan
Linear Consistency Testing
We extend the notion of linearity testing to the task of checking
linear-consistency of multiple functions. Informally, functions
are ``linear'' if their graphs form straight lines on the plane.
Two such functions are ``consistent'' if the lines have the same
slope. We propose a variant of a test of ... more >>>
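The linearity test discussed in this abstract has a compact concrete form over GF(2): f is linear iff f(x) XOR f(y) = f(x XOR y) for all x, y, and the classic BLR test checks that identity at random pairs. A toy sketch (illustrative only, not the paper's consistency-test construction):

```python
import random

def blr_test(f, n, trials=200, seed=0):
    """Accept f: n-bit int -> {0,1} if it passes the BLR linearity check
    f(x) ^ f(y) == f(x ^ y) at `trials` random pairs."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.getrandbits(n), rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

# A parity function (linear over GF(2)) always passes.
linear = lambda x: bin(x & 0b1011).count("1") % 2
# Flipping a linear function by a constant makes it affine, not linear,
# and then the identity fails on every single pair.
affine = lambda x: linear(x) ^ 1

assert blr_test(linear, 8)
assert not blr_test(affine, 8, trials=1)
```

"Linear consistency" in the abstract's sense asks the analogous cross-function question: whether several such functions share the same underlying linear map.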
TR01-046 | 2nd July 2001
Oded Goldreich, Salil Vadhan, Avi Wigderson
On Interactive Proofs with a Laconic Prover
We continue the investigation of interactive proofs with bounded
communication, as initiated by Goldreich and Hastad (IPL 1998).
Let $L$ be a language that has an interactive proof in which the prover
sends few (say $b$) bits to the verifier.
We prove that the complement $\bar L$ has ... more >>>
TR05-114 | 9th October 2005
Boaz Barak, Shien Jin Ong, Salil Vadhan
Derandomization in Cryptography
We give two applications of Nisan--Wigderson-type ("non-cryptographic") pseudorandom generators in cryptography. Specifically, assuming the existence of an appropriate NW-type generator, we obtain:
A one-message witness-indistinguishable proof system for every language in NP, based on any trapdoor permutation. This proof system does not assume a shared random string or any ... more >>>
TR07-031 | 26th March 2007
Yael Tauman Kalai, Ran Raz
Interactive PCP
An interactive-PCP (say, for the membership $x \in L$) is a
proof that can be verified by reading only one of its bits, with the
help of a very short interactive-proof.
We show that for membership in some languages $L$, there are
interactive-PCPs that are significantly shorter than the known
more >>>
TR08-005 | 15th January 2008
Scott Aaronson, Avi Wigderson
Algebrization: A New Barrier in Complexity Theory
Any proof of P!=NP will have to overcome two barriers: relativization
and natural proofs. Yet over the last decade, we have seen circuit
lower bounds (for example, that PP does not have linear-size circuits)
that overcome both barriers simultaneously. So the question arises of
whether there ... more >>>
TR10-155 | 14th October 2010
Brendan Juba, Madhu Sudan
Efficient Semantic Communication via Compatible Beliefs
In previous works, Juba and Sudan (STOC 2008) and Goldreich, Juba and Sudan (ECCC TR09-075) considered the idea of "semantic communication", wherein two players, a user and a server, attempt to
communicate with each other without any prior common language (or communication protocol). They showed that if communication was goal-oriented ... more >>>
TR10-159 | 28th October 2010
Graham Cormode, Justin Thaler, Ke Yi
Verifying Computations with Streaming Interactive Proofs
Applications based on outsourcing computation require guarantees to the data owner that the desired computation has been performed correctly by the service provider. Methods based on proof systems
can give the data owner the necessary assurance, but previous work does not give a sufficiently scalable and practical solution, requiring a ... more >>>
TR11-122 | 14th September 2011
Gillat Kol, Ran Raz
Competing Provers Protocols for Circuit Evaluation
Let $C$ be a (fan-in $2$) Boolean circuit of size $s$ and depth $d$, and let $x$ be an input for $C$. Assume that a verifier that knows $C$ but doesn't know $x$ can access the low degree extension of
$x$ at one random point. Two competing provers try to ... more >>>
TR12-156 | 12th November 2012
Andrej Bogdanov, Chin Ho Lee
Limits of provable security for homomorphic encryption
Revisions: 1
We show that public-key bit encryption schemes which support weak homomorphic evaluation of parity or majority cannot be proved message indistinguishable beyond AM intersect coAM via general
(adaptive) reductions, and beyond statistical zero-knowledge via reductions of constant query complexity.
Previous works on the limitation of reductions for proving security of ... more >>>
[racket] weirdness with complex numbers
From: Todd O'Bryan (toddobryan at gmail.com)
Date: Mon Aug 6 19:29:07 EDT 2012
Oh, I didn't expect Racket to change and I think when you're teaching
programming you can just say "this is how the language is." My
question is what people might suggest for a system that's geared
toward algebra help, not programming.
On Mon, Aug 6, 2012 at 6:18 PM, J. Ian Johnson <ianj at ccs.neu.edu> wrote:
> The identifier/number grammar is unlikely to change. If it really bugs you, perhaps you can re-provide make-rectangular as c or complex and have students write (complex real imag). You could also use @ notation and have them write @complex[real imag].
> -Ian
> ----- Original Message -----
> From: "Todd O'Bryan" <toddobryan at gmail.com>
> To: "PLT-Scheme Mailing List" <users at lists.racket-lang.org>
> Sent: Monday, August 6, 2012 6:05:31 PM GMT -05:00 US/Canada Eastern
> Subject: [racket] weirdness with complex numbers
> I just discovered that the way you enter (and display) a number like
> 1/2 + (2/3)i
> in Racket (and Scheme, presumably) is 1/2+2/3i.
> I understand why that is, and can't think of what else to do, but has
> anyone had students get confused because the form looks like the i is
> in the denominator of the imaginary part?
> What's more potentially confusing is that 1/2+2i/3 is a legal
> identifier in its own right.
> I'm working on a program that models basic algebra in the way that
> high school students are taught to do it, and one of my self-imposed
> rules has been that "math should look like math." In other words, I'm
> trying to minimize the conversion gymnastics that students have to put
> up with when they enter math in calculators or computer programs. In
> that spirit, I'm not sure if it would be better to allow the
> inconsistency with the way order of operations normally works or just
> have students enter 1/2+(2/3)i (or 1/2+2i/3, maybe) and do the
> conversion behind the scenes.
> Anyone have any thoughts or prejudices one way or the other?
> Todd
> ____________________
> Racket Users list:
> http://lists.racket-lang.org/users
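[As an editorial aside on the precedence worry raised above: Python makes the opposite choice from Racket for essentially the same surface syntax. Because `3j` is a single imaginary literal, `2/3j` parses as 2/(3j), so the i really does land in the denominator. A quick illustration:]

```python
# In Racket, 1/2+2/3i is one number literal meaning 1/2 + (2/3)i.
# In Python, the lexically similar expression parses very differently:
z = 1/2 + 2/3j                     # == (1/2) + (2 / 3j), i.e. 0.5 - (2/3)i

racket_style = 0.5 + (2/3) * 1j    # what a student might expect
python_style = 0.5 + 2 / (3 * 1j)  # what Python actually computes

assert z == python_style
assert z != racket_style
```

So the "i in the denominator" misreading Todd worries about is exactly what another mainstream language's grammar delivers.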
Posted on the users mailing list.
Welcome to The Pre-lab Quiz Preparation Page
This page is an index file of interactive quizzes which should help you prepare for part of the lab quiz which occurs during the start of your lab period. Note, the actual quiz covers theoretical
concepts, mathematical skills and procedural material relating to the laboratory you are about to perform. The objective is to be sure you come to lab prepared. The quizzes on this site will cover
the mathematical competencies associated with the theoretical concepts behind the experiment.
You must allow pop-up windows if you wish to see the hints. They are safe for your computer.
Note: The printer friendly version is more concise, opens in a new window, is easier to print and is more ecologically sound. Please Do NOT print the interactive version. Thank you.
If you have any questions or comments please feel free to contact your instructor or Dr. Belford rebelford@ualr.edu.
Laboratory Prep Quizzes
│Printer Friendly Version│Interactive Version │
│Print 2A1: │Experiment 2A: Solution Concentrations │
│Print 2B1: │Experiment 2B: Solubility Curves │
│Not Available: │Experiment 4: Introduction to Spectroscopy │
│Print 5a1 │5a1: Kinetics and Rate Data │
│Print 5b1 │5b1:Zero Order Reactions │
│Print 5b2 │5b2: First Order Reactions │
│Print 5b3 │5b3: Second Order Reactions │
│Print 6a1: TBA │6a1" Equilibrium constant calculations │
│Print 7a1: │7a1: Stoichiometric Acid Base Titrations │
│Print 8a1: │8a1: Potentiometric Titrations │
│Print 8b1: │8b1: More Potentiometric Titrations and Gram Equivalent Weight (Molar Mass) Calculations│
│TBA │9: Calculating Free Energy and Entropy (says 6b) │
Non-proper intersection of projective schemes
Let $X, Y$ be projective varieties in $\mathbb{P}^n$ for $n>10$. Assume that dimensions of $X,Y$ are greater than $n/2$. My first question is as follows: Is there any criterion (other than the
definition) which tells us when $X$ and $Y$ will intersect properly?
Suppose that $X, Y$ do not intersect properly. Is there any general way of checking whether/when the intersection of $X$ and $Y$ (by intersection we mean the fiber product $X \times_{\mathbb{P}^n} Y$
followed by the pull-back by the diagonal morphism) is a non-reduced scheme?
ag.algebraic-geometry intersection-theory deformation-theory
For the first question, maybe this is too obvious, but what about transversality ? – aginensky Apr 8 '13 at 21:20
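(A numerical aside, not part of the question or comments: for linear subspaces of $\mathbb{R}^n$ — a toy analogue of the projective setting — "proper" intersection means $\dim(X \cap Y) = \dim X + \dim Y - n$, and this is easy to check with rank computations. The function and matrices below are my own illustration.)

```python
import numpy as np

# Toy linear-algebra analogue: X and Y are the row spaces of A and B.
# dim(X ∩ Y) = dim X + dim Y - dim(X + Y), and the intersection is
# "proper" (expected-dimensional) when this equals dim X + dim Y - n.
def dims(A, B):
    X, Y = np.atleast_2d(A), np.atleast_2d(B)
    dx = np.linalg.matrix_rank(X)
    dy = np.linalg.matrix_rank(Y)
    span = np.linalg.matrix_rank(np.vstack([X, Y]))   # dim(X + Y)
    return dx, dy, dx + dy - span                     # dim(X ∩ Y)

A = np.eye(4)[:3]    # X = span(e1, e2, e3) in R^4
B = np.eye(4)[1:]    # Y = span(e2, e3, e4)
dx, dy, dcap = dims(A, B)
print(dx, dy, dcap)  # 3 3 2, and 3 + 3 - 4 = 2, so this pair meets properly
```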
{"url":"http://mathoverflow.net/questions/126861/non-proper-intersection-of-projective-schemes","timestamp":"2014-04-20T06:13:58Z","content_type":null,"content_length":"47006","record_id":"<urn:uuid:07c9c7e8-caf5-4c38-93c1-32fc8f88d61e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
Schaum's Outline of Continuum Mechanics
For comprehensive—and comprehensible—coverage of both theory and real-world applications, you can’t find a better study guide than Schaum’s Outline of Continuum Mechanics. It gives you everything you
need to get ready for tests and earn better grades! You get plenty of worked problems—solved for you step by step—along with hundreds of practice problems. From the mathematical foundations to fluid
mechanics and viscoelasticity, this guide covers all the fundamentals—plus it shows you how theory is applied. This is the study guide to choose if you want to ace continuum mechanics!
User Review
It is a well written continuum mechanics book that encompasses most of the introductory topics. However, the author has a newer book on the subject with new and revised content, issued by a different publisher. It is a good book to learn, study, and review concepts.
Chapter MATHEMATICAL FOUNDATIONS 1
ANALYSIS OF STRESS 44
DEFORMATION AND STRAIN 77
7 other sections not shown
Bibliographic information | {"url":"http://books.google.ca/books?id=bAdg6yxC0xUC&rview=1","timestamp":"2014-04-19T12:01:09Z","content_type":null,"content_length":"113713","record_id":"<urn:uuid:4c0b4824-7aad-4c38-93c1-32fc8f88d61e>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Word Problem - I think. Don't know how to setup the equation
November 15th 2011, 09:55 PM
Algebra Word Problem - I think. Don't know how to setup the equation
Please give me a hint on how to setup the equation for this word problem.
Mickey and Minnie are on different mobile plans.
Minnie pays 55 cents a call and 3 cents a minute
Mickey pays 51 cents a call and 4 cents a minute
If a call costs the same on both plans, how long is the call?
My first guess is
.55 + .03x = .51 + .04x
And then solve for x?
November 16th 2011, 02:03 AM
Re: Algebra Word Problem - I think. Don't know how to setup the equation
Please give me a hint on how to setup the equation for this word problem.
Mickey and Minnie are on different mobile plans.
Minnie pays 55 cents a call and 3 cents a minute
Mickey pays 51 cents a call and 4 cents a minute
If a call costs the same on both plans, how long is the call?
My first guess is
.55 + .03x = .51 + .04x
And then solve for x?
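(That setup is right. As a quick numeric check — my sketch, not part of the original thread — working in cents avoids floating-point noise:)

```python
from fractions import Fraction

# Minnie: 55 + 3x cents, Mickey: 51 + 4x cents for an x-minute call.
# Solve 55 + 3x = 51 + 4x  =>  x = (55 - 51) / (4 - 3) = 4
minutes = Fraction(55 - 51, 4 - 3)
print(minutes)              # 4 (minutes)
print(55 + 3 * minutes)     # 67 cents on Minnie's plan
print(51 + 4 * minutes)     # 67 cents on Mickey's plan
```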
Correct! (Yes) | {"url":"http://mathhelpforum.com/algebra/192017-algebra-word-problem-i-think-dont-know-how-setup-equation-print.html","timestamp":"2014-04-20T18:25:43Z","content_type":null,"content_length":"4846","record_id":"<urn:uuid:f05924c8-cd30-4515-9946-36ee6b5a16f1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
Teaching Activities
Current Search Limits
Results 1 - 10 of 35 matches
Shift in life expectancy part of 2012 Sustainability in Math Workshop:Activities
Holly Partridge
Determining the shift in expected life span over a century and the social and environmental impact
Population Growth, Ecological Footprints, and Overshoot part of 2012 Sustainability in Math Workshop:Activities
Rikki Wagstrom
In this activity, students develop and apply linear, exponential, and rational functions to explore past and projected U.S. population growth, carbon footprint trend, ecological overshoot, and
effectiveness of hypothetical carbon dioxide reduction initiatives.
Choosing Between Home Appliances: Benefits to the Planet and Your Wallet part of 2012 Sustainability in Math Workshop:Activities
Corri Taylor, Wellesley College
Students research various options for new appliances and make purchasing decisions based not merely on purchase price, but also on energy efficiency, which has implications for the planet AND for
longer-term personal finances. Students calculate the "payback period" for the more energy efficient appliance and calculate long-term savings.
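(Illustrative aside, not part of the activity: the "payback period" computation the students do reduces to one division; the dollar figures below are hypothetical.)

```python
# Payback period = extra purchase cost / yearly energy savings.
# The numbers here are made up for illustration only.
def payback_years(extra_cost, yearly_savings):
    return extra_cost / yearly_savings

# e.g. an efficient model costing $150 more but saving $30/year in energy:
print(payback_years(150.0, 30.0))   # 5.0 years
```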
Plastic Waste Production part of 2012 Sustainability in Math Workshop:Activities
Karen Bliss
In this exercise, students will use data to predict the amount of plastic waste in the next ten years.
Control Chart Project part of 2012 Sustainability in Math Workshop:Activities
Owen Byer
This is a short assignment that asks students to find some data related to sustainability and determine whether the mean of that data set is statistically stable, and whether the process being
measured is in control or out of control. It is often used for quality control in a production process, but in this activity, it is used to see if an ecosystem process is stable and healthy or
disrupted (out of control.)
Economics of installing Solar PV panels: is it worth it to the individual? part of 2012 Sustainability in Math Workshop:Activities
Martin Walter
We show that it is economical for an individual to install solar photovoltaic panels in Denver, Colorado; and this is a sustainable strategy for society at large.
Replacing Household Appliances: Refrigerator part of 2012 Sustainability in Math Workshop:Activities
Krys Stave, University of Nevada Las Vegas (UNLV)
In this problem, students compare the energy use of their existing refrigerator with a new refrigerator.
Teaching Mathematics as Though Our Survival Mattered part of 2012 Sustainability in Math Workshop:Activities
Martin Walker
Mathematics plays a pivotal role in helping us understand "the current human condition." This attached article provides multiple examples and is useful as a supplemental reading. A variety of math
problems could also be extracted for course use.
Energy Cost of Engine Idling part of 2012 Sustainability in Math Workshop:Activities
Ben Fusaro
This is an open-ended but elementary modeling exercise about idling energy behaviors and impacts.
What's for Dinner? Analyzing Historical Data about the American Diet part of 2012 Sustainability in Math Workshop:Activities
Jessica Libertini
In this activity, students research the historical food consumption data from the U.S. Department of Agriculture to observe trends, develop regressions, predict future behavior, and discuss broader | {"url":"http://serc.carleton.edu/sisl/activities.html?q1=sercvocabs__43%3A8","timestamp":"2014-04-18T21:02:37Z","content_type":null,"content_length":"27629","record_id":"<urn:uuid:bd0c4bd3-cf40-4613-9c35-09a6bfa40c8f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00336-ip-10-147-4-33.ec2.internal.warc.gz"} |
Heat equation with Neumann BC
Consider the heat equation $u_t=\Delta u$ with Neumann boundary condition in a bounded domain $\Omega$.
Is this true to say:
$$\|u(. , t)-v(. , t)\|_p\leq \|u(. , 0)-v(. , 0)\|_p$$ where $u$ and $v$ are two solutions of the heat equation in $W^{2,p}$.
linear-pde ap.analysis-of-pdes
Two questions: (1) Are you interested in all values of $p$ or just $p > 1$?; (2) Is $\|\cdot\|_p$ the $L^p$ norm or the $H^p$ norm? Since the equation is linear, it suffices to consider v = 0. If
you care about $L^p$ norms, the maximum principle gives the result for $p = \infty$ while $\|f \ast g\|_1 \le \|f\|_1 \|g\|_1$ gives the result for $p = 1$ (together with the fact that the Green's
function has unit $L^1$ norm constant in time). Interpolation now gives the result for $1 < p < \infty$. I might be missing something of course. – Aaron Hoffman Oct 19 '11 at 0:24
Can we simply use the Young's inequality for convolution and say: $\|K\ast u_0\|_p\leq \|u_0\|_p\|K\|_1$ where $K$ is the kernel with $\|K\|_1=1$ or any constant? – user18626 Oct 21 '11 at 2:47
no, we can't. because in general the heat semigroup is not given by a convolution (although this is true if $\Omega=R^n$). – Delio M. Dec 12 '12 at 21:24
2 Answers
yes, with some regularity on the boundary.
Theorem 3.2.9, p. 90 of E. B. Davies' book, Heat Kernels and Spectral Theory, gives Gaussian bounds for the heat kernel of an elliptic operator with Neumann boundary conditions. These bounds imply that the heat flow preserves $L^p$.
Actually, much more is true: you even have "ultracontractivity", meaning that the solution is immediately in $L^\infty$ (not clear a priori, unless you are in dimension 1 where
you can use the Sobolev embedding) and moreover you can estimate the $L^\infty$-norm of $u(t)$ by the $L^1$-norm of $u(0)$. – Delio M. Dec 20 '12 at 16:11
A very naive answer:
Assume that the initial data is positive (the same should be true dealing with absolute values...) and take p>2 (p=2 follows the same idea). Multiply the equation by $pu^{p-1}$ and integrate. One gets $$ p\int_\Omega u^{p-1}u_t\,dx=\frac{d}{dt}\int_\Omega u^p\,dx=p\int_\Omega \Delta u\, u^{p-1}\,dx=-p\int_\Omega \nabla u\cdot((p-1)u^{p-2}\nabla u)\,dx\leq 0, $$ where in the last equality we use Green's formula and the homogeneous Neumann BC. Due to the linearity of the equation one gets the same result for the difference of two solutions.
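(A numerical aside, not from the thread: the claimed $L^p$ contraction can be sanity-checked with a simple explicit finite-difference scheme in 1-D. By linearity the difference of two solutions is itself a solution, so it suffices to watch $\|u(t)\|_p$ for one solution. The grid size, initial data, and tolerances below are my own arbitrary choices.)

```python
import numpy as np

# Explicit finite differences for u_t = u_xx on [0,1] with homogeneous
# Neumann (zero-flux) boundaries, enforced via mirrored ghost points.
n = 200
dx = 1.0 / n
dt = 0.4 * dx * dx                  # explicit scheme is stable for dt <= dx^2/2
x = np.linspace(0.0, 1.0, n)
u = np.sin(3 * np.pi * x) + 0.5 * np.cos(7 * np.pi * x)   # arbitrary data

def lp_norm(v, p):
    return (np.sum(np.abs(v) ** p) * dx) ** (1.0 / p)

norms0 = {p: lp_norm(u, p) for p in (1, 2, 4)}
for _ in range(2000):               # march to t = 2000 * dt = 0.02
    new = np.empty_like(u)
    new[1:-1] = u[1:-1] + dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    new[0] = u[0] + dt / dx**2 * 2 * (u[1] - u[0])       # zero flux at x = 0
    new[-1] = u[-1] + dt / dx**2 * 2 * (u[-2] - u[-1])   # zero flux at x = 1
    u = new

print([round(lp_norm(u, p) / norms0[p], 3) for p in (1, 2, 4)])  # ratios <= 1
```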
{"url":"http://mathoverflow.net/questions/78488/heat-equation-with-neumann-bc?sort=newest","timestamp":"2014-04-16T04:49:49Z","content_type":null,"content_length":"57852","record_id":"<urn:uuid:577f33f3-d05f-47ab-9f02-406c5e7e3499>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
Show your work and calculate limit of the following questions.
For the first problem, which do you mean? $\lim_{n \to \infty} 3 \sqrt{n}^{\frac{1}{2n}}$ or $\lim_{n \to \infty} 3 \sqrt{n}^{\frac{1}{2}n}$? 2nd problem: $\lim_{n \to \infty} (n+1)^{\frac{1}{\ln(n+1)}}$ Take the ln of the limit, just remember to raise it to e later. $\lim_{n \to \infty} \frac{\ln(n+1)}{\ln(n+1)}$
the first way you wrote it. and how can i just take the ln and do what you said for the second problem
I took the ln for you in problem 2... I guess I'll show it to you step by step. $\lim_{n \to \infty} (n+1)^{\frac{1}{\ln(n+1)}}$ Just remember to raise the answer to e. $\lim_{n \to \infty} \ln (n+1)^{\frac{1}{\ln(n+1)}}$ By the law of lns.. $\lim_{n \to \infty} \frac{1}{\ln(n+1)}\ln(n+1)$ $\lim_{n \to \infty} \frac{\ln(n+1)}{\ln(n+1)} = 1$ So the answer will be $e^1$ or just plain e.
Try the first problem this way. You will probably need to l'hospital it.
Have you tried it? I just did and you don't actually need l'hospital. $\lim_{n \to \infty} 3 \sqrt{n}^{\frac{1}{2n}}$ $3 \lim_{n \to \infty} \sqrt{n}^{\frac{1}{2n}}$ $3 \lim_{n \to \infty} \frac{\ln\sqrt{n}}{2n}$ It should be obvious from here. Remember, just raise the limit to e, not including the 3. $3 \lim_{n \to \infty} \frac{\ln n}{4n}$
Last edited by Linnus; September 25th 2008 at 04:45 PM. | {"url":"http://mathhelpforum.com/calculus/50629-limit.html","timestamp":"2014-04-20T08:35:06Z","content_type":null,"content_length":"50475","record_id":"<urn:uuid:8dff4f5b-25ad-4fe3-b098-69fedf91c46b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pacific Institute for the Mathematical Sciences - PIMS
Probability Seminar: Codina Cotar
• Date: 04/18/2012
• Time: 15:00
Lecturer(s): Codina Cotar
University of British Columbia
Density functional theory and optimal transport with Coulomb cost
In this talk I explain a promising and previously unnoticed link between the electronic structure of molecules and optimal transportation (OT), and I give some first results. The 'exact' mathematical model for electronic structure, the many-electron Schroedinger equation, becomes computationally unfeasible for more than a dozen or so electrons. For larger systems, the standard model underlying a huge literature in computational physics/chemistry/materials science is density functional theory (DFT). In DFT, one only computes the single-particle density instead of the full many-particle wave function. In order to obtain a closed equation, one needs a closure assumption which expresses the pair density in terms of the single-particle density rho.

We show that in the semiclassical Hohenberg-Kohn limit, there holds an exact closure relation, namely the pair density is the solution to an optimal transport problem with Coulomb cost. We prove that for the case with $N=2$ electrons this problem has a unique solution given by an optimal map; moreover we derive an explicit formula for the optimal map in the case when $\rho$ is radially symmetric (note: atomic ground state densities are radially symmetric for many atoms such as He, Li, N, Ne, Na, Mg, Cu).

In my talk I focus on how to deal with its main mathematical novelties (cost decreases with distance; cost has a singularity on the diagonal). I also discuss the derivation of the Coulombic OT problem from the many-electron Schroedinger equation for the case with $N\ge 3$ electrons, and give some results and explicit solutions for the many-marginals OT problem.
Joint works with Gero Friesecke (TU Munich) and Claudia Klueppelberg (TU Munich). | {"url":"http://www.pims.math.ca/scientific-event/120418-pscc","timestamp":"2014-04-16T07:19:44Z","content_type":null,"content_length":"18371","record_id":"<urn:uuid:bf827979-1a4f-47f7-8033-fdf0e7550191>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
Digraph walk, path, circuit
December 17th 2012, 07:00 PM #1
Junior Member
Oct 2012
Digraph walk, path, circuit
Given the graph, determine if the following sequences form a walk, path and/or a circuit.
1. a, b, c, e
2. b, c, d, d, e, c, f
3. a, b, c, f, g, a
4. b, c, d, e
My answer:
1. It's not possible for the vertices to form a walk, path, or a circuit in this configuration. There is no relation from c to e.
2. walk
3. walk, circuit
4. walk, path
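(Not part of the thread: the graph itself isn't reproduced here, so the edge set below is a made-up placeholder, but the classifier is general. Note that some texts additionally require a circuit to have distinct edges; this sketch uses the simpler closed-walk definition.)

```python
# Classify a vertex sequence in a digraph as walk / path / circuit.
def classify(seq, edges):
    if any((a, b) not in edges for a, b in zip(seq, seq[1:])):
        return set()                      # not even a walk
    kinds = {"walk"}
    if len(set(seq)) == len(seq):
        kinds.add("path")                 # walk with no repeated vertex
    if len(seq) > 1 and seq[0] == seq[-1]:
        kinds.add("circuit")              # closed walk
    return kinds

edges = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")}   # placeholder graph
print(classify(["b", "c", "d", "e"], edges))   # walk and path
print(classify(["a", "b", "c", "e"], edges))   # empty set: no edge c -> e
```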
{"url":"http://mathhelpforum.com/discrete-math/210009-digraph-walk-path-circuit.html","timestamp":"2014-04-19T10:34:15Z","content_type":null,"content_length":"28839","record_id":"<urn:uuid:83902297-7c28-4282-96f6-b3b33ea840b4>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by
Total # Posts: 203
factor completely
simplify 4+17+(-29) + 13 + (-22)+ (-3)
simplify 4+17+(-29) + 13 + (-22)+ (-3)
median: find 3% of 80 answer:2.4
i did and i got 25
mean 18,25,32 answer:25
mean 18,25,32 answer:25
(2r+s)14sr= 28rs
(2r+s)14sr= 28rs
y^3- 7y^2+ 8y -13 y=2 9
what is the product of (x-3) and (x-1) x^2 - 4x +3
what is the product of (x+4)and (x-5) j
which of the following is the gcf of 4x^2y and 4x f- x^2 g-4x h-4 j-2x
which of the following is a prince F)1 G)31 H)51 J)57
12ab + 8ab + 5ab is it 25ab
if p=a^2+a-1,which expression represents P+R
foods that are canned are cooked at high temperatures and then placed in airtight containers what efffect do these action have on the food
what are 3 ways bacteria affect your life?
the plant disease that causes flower young leaves and stems to die quickly is called ____________.
its fill in the blanks
bacteria that change nitrogen from the air into compounds that plants can use is called _______________ i dont know what the answer is
i need a science fair project that will last more than a week
(3x-5)(2x -8) 5x^2+3
3a^2b + 6a^2b 9a^4b^2
simplify the expression below (3x^2y-5xy+12xy^2)-(5xy^2+4xy) 3x^2y - xy + 17xy^2
a) 4x^2+9 B) 4x^2-9 c) 4x^2-6x-9 d) 4x^2 -12x +9
multiply the two binomial below (2x-3)(2x+3) the answer is 4x^2+9 is this correct
If 3-x>10 x<-7
(5+ 4x)-(3-3x)= 2+7x
7x-x= 6x
2x(3y) 6xy
wai ling averaged 84 on her first three exams and 82 on her next 2 exams.what grade must she obtain on her sixth test in order to average 85 for all six exams? 94
the value of 3^5+3^5+3^5 is 3^6
eight year from now. Oana will be twice as old as her brother is now. Oana is now 12 year old. how old her brother now umm 10
60x30/60+30 20
(x+ 6)(x-2)
which transformation is the image not congruent line reflection translation rotation dilation
under which transformation is the area of a triangle not equal to the area of its image? i think the answer is dilation
helllppp wannnteddd
x y -1 1 0 4 1 7 2 10 3 5 i got the answer it 13 and 15 what duhh rule
4/1 12/x
is it 3
bob has a container with a capacity of 12 quarts. he wants to express this capacity in gallons. write a proportion to find the number of gallons of water bob's container can hold.
x y -1 1 0 4 1 7 2 10 3 5 i got the answer it 13 and 15 what duhh rule
Rowan raised $640 in a chairty walked last year . this year he raised 15 percent more than last year. how much did rowan raise this year. wat do i do
when simipliified (2x10^3)(3x10^4)is equivalent 6x10^7
what is 10^5-10^6 is it 10^11
48/(-12) + / - = - 48 / -12 = -4
-5-(-5) 0
-6x8 -x+=- -6x8=48
9/(-3) + / - = - 9/-3=-3
-15/(-5)= -x-=+ 15/5= 3 3
-2x-5= negative x negative= positive 5x2=10 -5x-2=10
in human most of the energy needed for life activities come from carbon dioxide
maria collected the gas given off by a glowing piece of charcoal . the gas was then bubbled through a small amount of colorless limewater . part of maria's report stated . after the gas was put into
the jar the lime water gradually changed to milky white color .the statmen...
what is the distance between the points (-1,7) and (5,7)?
what is the value of 9090/90 in decimal
Louis attended some baseball games this past summer. The last four games he attended were at the shea stadium. All but five games he attended were at the Yankee Stadium. All but five games he
attended were at the Yankee Stadium. At least three games he atttended were Yankee St...
social studies
what was the importance of cash crops?
social studies
which group's lifestyle was the purpose of navigation act
r|2=24 is it 48
WAT IS 12 FACTORIAL
what is 6 fatorial
what is5^3
3 squared
65% of 298
19% of 112
84is 7% of wat number
40 is 160% of wat question
51 is 755 of wat number
55 is 50% of what number
how is ! used in math
what does ! mean in math
what is 8P3
how do you this question: 4P2
what is 7c2
:D HOW DO U DO 7C4
I MEAN 4P3
how do u do 3P4
in math i think it called factorial
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=pavleen","timestamp":"2014-04-21T03:20:30Z","content_type":null,"content_length":"16265","record_id":"<urn:uuid:97fe8dd3-05d4-4a47-98a8-09a5e4f13331>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Is logic part of mathematics - or is mathematics part of logic?
Replies: 52 Last Post: Jul 5, 2013 1:27 AM
Re: Is logic part of mathematics - or is mathematics part of logic?
Posted: Jul 2, 2013 9:30 AM
G S Chandy
>Our previous exchanges at this thread clearly indicate that the 'world of academic scholarship and convivial discussion' itself has thus far failed to "say what 'mathematics' and 'logic' may mean
This is quite absurd. There is a large body of literature from the last 150 years which shows that people have not only agreed to fix the terms math and logic precisely enough to discuss, study, and
draw conclusions, they've delved into incredibly precise details about exactly how, why and where logic fails to capture "classical" mathematics, and this has led to a profusion of new
understandings about logical systems, formal systems, axiomatics, new sorts of restricted mathematics, etc. etc. etc.
That was the entire point of my first response to this thread.
On the other hand, when you print in all capital letters "IS INCLUDED IN", I have no idea what you mean. Do you mean set theoretical inclusion? Why not say so? Or if it's not, how does it differ? As
I pointed out, if you do mean something like that then the notion that logic "IS INCLUDED IN" math is quite out there on the spectrum of craziness. Why bother?
As far as I can tell ps+g doesn't even turn over, much less take off.
Joe N
------- End of Forwarded Message | {"url":"http://mathforum.org/kb/message.jspa?messageID=9151906","timestamp":"2014-04-21T13:23:13Z","content_type":null,"content_length":"80642","record_id":"<urn:uuid:4f7990a2-69d9-4448-9f52-6aad622b1d46>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
The New Tesla Electromagnetics and the Secrets of Free Electrical Energy
Written By: T. E. Bearden
A: Discrepancies in Present EM Theory
There are at least twenty-two major discrepancies presently existing in conventional electromagnetics theory. This paper presents a summary of those flaws, and is a further commentary on my discussion of scalar longitudinal waves in a previous paper, "Solutions to Tesla's Secrets and the Soviet Tesla Weapons," Tesla Book Company, 1981 and 1982.
I particularly wish to express my deep appreciation to two of my friends and colleagues who at this time, I believe, wish to remain anonymous. One of the two is an experimental genius who can produce items that do not work by orthodox theory. The second is a master of materials science and electromagnetics theory. I thank them both for their exceptional contributions and stimuli regarding potential shortcomings in present electromagnetics theory, and their forbearance with the many discussions we have held on this and related subjects.
It goes without saying that any errors in this paper are strictly my own, and not the fault of either of my distinguished colleagues.
(1) In present electromagnetics theory, charge and charged mass are falsely made identical. Actually, on a charged particle, the "charge" is the flux of virtual particles on the "bare particle" of observable mass. The charged particle is thus a "system" of true massless charge coupled to a bare chargeless mass. The observable "mass" is static, three-dimensional and totally spatial. "Charge" is dynamic, four-dimensional or more, virtual and spatiotemporal. Further, the charge and observable mass can be de-coupled, contrary to present theory. Decoupled charge -- that is, the absence of mass -- is simply what we presently refer to as "vacuum." Vacuum, spacetime, and massless charge are all identical. Rigorously, we should utilize any of these three as an "ether," as suggested for vacuum by Einstein himself (see Max Born, Einstein's Theory of Relativity, Revised Edition, Dover Publications, New York, 1965, p. 224). And all three of them are identically anenergy -- not energy, but more fundamental components of energy.
(2) Electrostatic potential is regarded as a purely 3-dimensional spatial stress. Instead, it is the intensity of a many-dimensional (at least four-dimensional) virtual flux and a stress on all four dimensions of spacetime. This is easily seen, once one recognizes that spacetime is identically massless charge. (It is not "filled" with charge; rather, it is charge!) Just as, in a gas under pressure, the accumulation of additional gas further stresses the gas, the accumulation of charge (spacetime) stresses charge (spacetime). Further, if freed from its attachment to mass, charge can flow exclusively in time, exclusively in space, or in any combination of the two. Tesla waves -- which are scalar waves in pure massless charge flux itself -- thus can exhibit extraordinary characteristics that ordinary vector waves do not possess. And Tesla waves have extra dimensional degrees of freedom in which to move, as compared to vector waves. Indeed, one way to visualize a Tesla scalar wave is to regard it as a pure oscillation of time itself.
(3) Voltage and potential are often confused in the electrostatic case, or at least thought of as "composed of the same thing." For that reason, voltage is regarded as "potential drop." This also is not true. Rigorously, the potential is the intensity of the virtual particle flux at a single point -- whether or not there is any mass at the point -- and both the pressure and the point itself are spatiotemporal (4-dimensional) and not spatial (3-dimensional) as presently assumed. Voltage represents the spatial intersection of the difference in potential between two separated spatial points, and always implies at least a miniscule flow of mass current (that is what makes it spatial!). "Voltage" is spatial and depends upon the presence of observable mass flow, while scalar electrostatic potential is spatiotemporal and depends upon the absence of observable mass flow. The two are not even of the same dimensionality.
(4) The charge of vacuum spacetime is assumed to be zero, when in fact it is a very high value. Vacuum has no mass, but it has great massless charge and virtual particle charge flux. For proof that a charged vacuum is the seat of something in motion, see G. M. Graham and D. G. Lahoz, "Observation of static electromagnetic angular momentum in vacuo," Nature, Vol. 285, 15 May 1980, pp. 154-155. In fact, vacuum IS charge, identically, and it is also spacetime, and at least four-dimensional.
(5) Contrary to its present usage, zero is dimensional and relative in its context. A three-dimensional spatial hole, for example, exists in time. If we model time as a dimension, then the spatial hole has one dimension in 4-space. So a spatial absence is a spatiotemporal presence. In the vacuum 4-space, a spatial nothing is still a something. The "virtual" concept and the mathematical concept of a derivative are simply two present ways of unconsciously addressing this fundamental problem of the dimensional relativity of zero.
(6) The concepts of "space" and "time" imply that spacetime (vacuum) has been separated into two parts. We can only think of a space as "continuing to exist in time." To separate vacuum spacetime into two pieces, an operation is continually required. The operator that accomplishes this splitting operation is the photon interaction, the interaction of vector electromagnetic energy or waves with mass. I have already strongly pointed out this effect and presented a "raindrop model" of first-order physical change itself in my book, The Excalibur Briefing, Strawberry Hill Press, San Francisco, 1980, pp. 128-130.
(7) "Vector magnetic potential" is assumed to be always an aspect of (and connected to) the magnetic field. In fact it is a separate, fundamental field of nature and it can be entirely disconnected from the magnetic field. See Richard P. Feynman et al, The Feynman Lectures on Physics, Addison-Wesley Publishing Co., New York, 1964, Vol. II, pp. 15-8 to 15-14. Curiously, this fact has been proven for years, yet it has been almost completely ignored in the West. The curl operator "∇×", when applied to the A-field, makes the B-field. If the "∇×" operator is not applied, the "freed" A-field possesses much-expanded characteristics from those presently allowed in the "bound" theory. Specifically, it becomes a scalar or "shadow vector" field; it is not a normal vector field.
(8) The speed of light in vacuum is assumed to be a fundamental constant of nature. Instead it is a function of the intensity of the massless charge flux (that is, of the magnitude of the electrostatic potential) of the vacuum in which it moves. (Indeed, since vacuum and massless charge are one and the same, one may say that the speed of light is a function of the intensity of the spatiotemporal vacuum!) The higher the flux intensity (charge) of the vacuum, the faster the speed of light in it. This is an observed fact and already shown by hardcore measurements. For example, distinct differences actually exist in the speed of light in vacuo, when measured on the surface of the earth as compared to measurements in space away from planetary masses. In a vacuum on the surface of the earth, light moves significantly faster. For a discussion and the statistics, see B. N. Belyaev, "On Random Fluctuations of the Velocity of Light in Vacuum," Soviet Physics Journal, No. 11, Nov. 1980, pp. 37-42 (original in Russian, translation by Plenum Publishing Corporation). The Russians have used this knowledge for over two decades in their strategic psychotronics (energetics) program; yet hardly a single U.S. scientist is aware of the measured variation of c in vacuo. In fact, most Western scientists simply cannot believe it when it is pointed out to them!
(9) Energy is considered fundamental and equivalent to work. In fact, energy arises from vector processes, and it can be disassembled into more fundamental (anenergy) scalar components, since the vectors can. These scalar components individually can be moved to a distant location without expending work, since one is not moving force vectors. There the scalar components can be joined and reassembled into vectors to provide "free energy" appearing at a distance, with no loss in between the initial and distant points. For proof that a vector field can be replaced by (and considered to be composed of) two scalar fields, see E. T. Whittaker, Proceedings of the London Mathematical Society, Volume 1, 1903, p. 367. By extension, any vector wave can be replaced by two coupled scalar waves.
(10) The classical Poynting vector predicts no longitudinal wave of energy from a time-varying, electrically charged source. In fact, an exact solution of the problem does allow this longitudinal wave. See T. D. Keech and J. F. Corum, "A New Derivation for the Field of a Time-Varying Charge in Einstein's Theory," International Journal of Theoretical Physics, Vol. 20, No. 1, 1981, pp. 63-68 for the details.
(11) The present concepts of vector and scalar are severely limited, and do not permit the explicit consideration of the internal, finer-grained structures of a vector or a scalar. That is, a fundamental problem exists with the basic assumptions in the vector mathematics itself. The "space" of a vector field, for example, does not have inter-nested sublevels (subspaces) containing finer "shadow vectors" or "virtual vectors." Yet particle physics has already discovered that electrical reality is built that way. Thus one should actually use a "hypernumber" theory after the manner of Charles Muses. A scalar is filled with (and composed of) nested levels of other "spaces" containing vectors, where these sum to "zero" in the ordinary observable frame without an observable vector resultant. In Muses' mathematics, for example, zero has real roots. Real physical devices can be -- and have been -- constructed in accordance with Muses' theory. For an introduction to Muses' profound hypernumbers approach, see Charles Muses' foreword to Jerome Rothstein, Communication, Organization and Science, The Falcon's Wing Press, Indian Hills, Colorado, 1958. See also Charles Muses, "Applied Hypernumbers: Computational Concepts," Applied Mathematics and Computation, Vol. 3, 1976. See also Charles Muses, "Hypernumbers II," Applied Mathematics and Computation, January 1978.
(12) With the expanded Tesla electromagnetics, a new conservation of energy law is required. Let us recapitulate for a moment. The oldest law called for the conservation of mass. The present law calls for the conservation of "mass and energy," but not each separately. If mass is regarded as simply another aspect of energy, then the present law calls for the conservation of energy. However, this assumes that energy is a basic, fundamental concept. Since the energy concept is tied to work and the movement of vector forces, it implicitly assumes "vector movement" to be a "most fundamental" and irreducible concept. But as we pointed out, Whittaker showed that vectors can always be further broken down into more fundamental coupled scalar components. Further, Tesla discovered that these "coupled components" of "energy" can be individually separated, transmitted, processed, rejoined, etc. This directly implies that energy per se need not be conserved. The new law therefore calls for the conservation of anenergy, the components of energy. These components may be coupled into energy, and the energy may be further compacted into mass. It is the sum total of the (anenergy) components -- coupled and uncoupled -- that is conserved, not the matter or the energy per se. Further, this conservation of anenergy is not spatial; rather it is spatiotemporal in a spacetime of at least four or more dimensions.
(13) Relativity is presently regarded as a theory or statement about fundamental physical reality. In fact, it is only a statement about FIRST ORDER reality -- the reality that emerges from the vector interaction of electromagnetic energy with matter. When we break down the vectors into scalars (shadow vectors or hypervectors), we immediately enter a vastly different, far more fundamental reality. In this reality superluminal velocity, multiple universes, travel back and forth in time, higher dimensions, variation of all "fundamental constants" of nature, materialization and dematerialization, and violation of the "conservation of energy" are all involved. Even our present Aristotelian logic -- fitted to the photon interaction by vector light as the fundamental observation mechanism -- is incapable of describing or modeling this more fundamental reality. Using scalar waves and scalar interactions as much subtler, far less limited observation/detection mechanisms, we must have a new "superrelativity" to describe the expanded electromagnetic reality uncovered by Nikola Tesla.
(14) "Charge" is assumed to be quantized, in addition to always occuring with -- and locked to --mass. Indeed, charge is not necessarily quantized, just as it is not necessarily locked to mass.
Ehrnhaft discovered and reported fractional charges for years, in the 30’s and 40’s, and was ignored. See P.A.M. Dirac, "Development of the Physicist’s Concption of Nature", Sumposium on the
Development of the Physicist’s Conception of Nature, ed. Jagdish erha, D. Reidel, Boston, 1973, pp. 12-14 for a presen tation of some of Ehrenhaft’s results. Within the last few years Stanford
University researchers hav also positively demonstrated the existence of "fractional charge." For a layman’s description of thir work, see "A Spector Haunting Physics," Science Ne ws, Vol. 119,
January 31, 1981, pp. 68-69. Indeed, Dirac in his referenced article points out that Mllikan himself -- in his original oildrop experiments -- reported one measurement of fractional chare, but
discounted it as probably due to error.
(15) Presently, things are always regarded as traveling through normal space. Thus we use or model only the most elementary type of motion -- that performed by vector electromagnetic energy. We do not allow for things to "travel inside the vector flow itself." Yet, actually, there is a second, more subtle flow inside the first, and a third, even more subtle flow inside the second, and so on. We may operate inside, onto, into, and out of energy itself -- and any anenergy component of energy. There are hypervectors and hyperscalars unlimited, within the ordinary vectors and scalars we already know. Further, these "internal flows" can be engineered and utilized, allowing physical reality itself to be directly engineered, almost without limits.
(16) We always assume everything exists in time. Actually, nothing presently measured exists in time, because the physical detection/measurement process of our present instruments destroys time, ripping it off and tossing it away -- and thereby "collapsing the wave function." Present scientific methodology thus is seriously flawed. It does not yield fundamental (spacetime) truth, but only a partial (spatial) truth. This in turn leads to great scientific oversights. For example, mass does not exist in time, but mass x time (masstime) does. A fundamental constant does not exist in time, but "constant x time" does. Energy does not exist in time, but energy x time (action) does. Even space itself does not exist in time -- spacetime does. We are almost always one dimension short in every observable we model. Yet we persist in thinking spatially, and we have developed instruments that detect and measure spatially only. Such instruments can never measure and detect the phenomenology of the nested substrata of time. By using scalar technology, however, less limited instruments can indeed be constructed -- and they have been. With such new instruments, the phenomenology of the new electromagnetics can be explored and an engineering technology developed.
(17) We do not recognize the connection between nested levels of virtual state (particle physics) and orthogonally rotated frames (hyperspaces). Actually, the two are identical, as I showed in the appendix to my book, The Excalibur Briefing, Strawberry Hills Press, San Francisco, 1980, pp. 233-235. A virtual particle in the laboratory frame is an observable particle in a hyperspatial frame rotated more than one orthogonal turn away. This of course implies that the hyperspatial velocity of all virtual particles is greater than the speed of light. The particle physicist is already deeply involved in hyperspaces and hyperspatial charge fluxes without realizing it. In other words, he is using tachyons (particles that move faster than light) without realizing it.
(18) Presently quantum mechanics rigorously states that time is not an observable, and therefore it cannot be measured or detected. According to this assumption, one must always infer time from spatial measurements, because all detections and measurements are spatial. With this assumption, our scientists prejudice themselves against looking for finer, subquantal measurement methodologies and instrumentation. Actually this present limitation is the result of the type of electromagnetics we presently know, where all instruments (the "measurers") have been interacted with by vector electromagnetic energy (light). Every mass that has temperature (and all masses do!) is continually absorbing and emitting photons, and in the process they are continually connecting to time and disconnecting from time. If time is continually being carried away from the detector itself by its emitted photons, then the detector cannot hold and "detect" that which it has just lost. With Tesla electromagnetics, however, the fundamental limitation of our present instruments need not apply. With finer instruments, we can show there are an infinite number of levels to "time," and it is only the "quantum level time" which is continually being lost by vector light (photon) interaction. By using subquantal scalar waves, instruments can move to deeper levels of time -- in which case the upper levels of time ARE measurable and detectable, in contradistinction to present assumptions.
(19) In the present physics, time is modeled as, and considered to be, a continuous dimension such as length. This is only a gross approximation. Indeed, time is not like a continuous "dimension," but more like a series of "stitches," each of which is individually made and then ripped out before the next stitch appears. "Vector light" photons interact one at a time, and it is this interaction with mass that creates quantum change itself. The absorption of a photon -- which is energy x time -- by a spatial mass converts it to masstime: the time was added by the photon. The emission of a photon tears away the time, leaving behind again a spatial mass. It is not accidental, then, that time flows at the speed of light, for it is light which contains and carries time. It is also not accidental that the photon IS the individual quantum. Since all our instruments presently are continually absorbing and emitting photons, they are all "quantized," and they accordingly "quantize" their detections. This is true because all detection is totally internal to the detector, and the instruments detect only their own internal changes. Since these detections are on a totally granular quantized background, the detections themselves are quantized. The Minkowski model is fundamentally erroneous in its modeling of time, and for that reason relativity and quantum mechanics continue to resist all attempts to successfully combine them, quantum field theory notwithstanding.
(20) Presently, gravitational field and electrical field are considered mutually exclusive. Actually this is also untrue. In 1974, for example, Santilli proved that electrical field and gravitational field indeed are not mutually exclusive. In that case one is left with two possibilities: (a) they are totally the same thing, or (b) they are partially the same thing. For the proof, see R. M. Santilli, "Partons and Gravitation: Some Puzzling Questions," Annals of Physics, Vol. 83, No. 1, March 1974. With the new Tesla electromagnetics, pure scalar waves in time itself can be produced electrically, and electrostatics (when the charge has been separated from the mass) becomes a "magic" tool capable of directly affecting anything that exists in time -- including the gravitational field. Antigravity and the inertial drive are immediate and direct consequences of the new electromagnetics.
(21) Presently, mind is considered metaphysical, not a part of physics, and not affected by physical means. Literally, the prevailing belief of Western scientists is that man is a mechanical robot -- even though relativity depends entirely upon the idea of the "observer." Western science today thus has essentially become dogmatic, and in this respect borders on a religion. Since this "religion," so to speak, is now fairly well entrenched in its power in the state, Western science is turning itself into an oligarchy. But mind occupies time, and when we measure and affect time, we can directly measure and affect mind itself. In the new electromagnetics, then, Man regains his dignity and his humanity by restoring the reality of mind and thought to science. In my book, The Excalibur Briefing, I have already pointed out the reality of mind and a simplified way in which it can be modeled to the first order. With scalar wave instruments, the reality of mind and thought can be measured in the laboratory, and parapsychology becomes a working, engineering, scientific discipline.
(22) Multiple valued basic dimensional functions are either not permitted or severely discouraged in the present theory. For one thing, integrals of multiple valued derivative functions have the annoying habit of "blowing up" and yielding erroneous answers, or none at all. And we certainly do not allow multiple types of time! This leads to the absurdity of the present interpretation of relativity, which permits only a single observer (and a single observation) at a time. So if one believes as "absurd" a thing as the fact that more than one person can observe an apple at the same time, the present physics fails. However, the acceptance of such a simple proposition as multiple simultaneous observation leads to a physics so bizarre and incredible that most Western physicists have been unable to tolerate it, much less examine its consequences. In the physics that emerges from multiple simultaneous observation, all possibilities are real and physical. There are an infinite number of worlds, orthogonal to one another, and each world is continually splitting into additional such "worlds" at a stupendous rate. Nonetheless, this physics was worked out by Everett for his doctoral thesis in 1956, and the thesis was published in 1957. (See Hugh Everett, III, The Many-Worlds Interpretation of Quantum Mechanics: A Fundamental Exposition, with papers by J. A. Wheeler, B. S. DeWitt, L. N. Cooper and D. Van Vechten, and N. Graham; eds. Bryce S. DeWitt and Neill Graham, Princeton Series in Physics, Princeton University Press, 1973.) Even though it is bizarre, Everett's physics is entirely consistent with the present experimental basis of physics. The present electromagnetic theory is constructed for only a single "world" or universe -- or "level." The expanded theory, on the other hand, contains multiply nested levels of virtual state charge -- and these levels are identically the same as orthogonal universes, or "hyperframes." Multiple kinds -- and values -- of time also exist. The new concept differs from Everett's, however, in that the orthogonal universes intercommunicate in the virtual state. That is, an observable in one universe is always a virtual quantity in each of the other universes. Thus one can have multi-level "continuities" and "discontinuities" simultaneously, without logical conflict. It is precisely these levels of charge -- these levels of scalar vacuum -- that lace together the discontinuous quanta generated by the interaction of vector light with mass.
However, to understand the new electromagnetic reality, one requires a new, expanded logic which contains the old Aristotelian logic as a subset. I have already pointed out the new logic in my paper, "A Conditional Criterion for Identity, Leading to a Fourth Law of Logic," 1979, available from the National Technical Information Center, AD-A071032.
Even as logic is extended, quantum mechanics, quantum electrodynamics, and relativity are drastically changed by the Tesla electromagnetics, as I pointed out in my paper, "Solutions to Tesla's Secrets and the Soviet Tesla Weapons," Tesla Book Company, 1580 Magnolia, Millbrae, CA, 94030, 1980.
The present electromagnetics is just a special case of a much more fundamental electromagnetics discovered by Nikola Tesla, just as Newtonian physics is a special case of the relativistic physics. But in the new electromagnetics case, the differences between the old and the new are far more drastic and profound.
Additional References
1. Boren, Dr. Lawrence Milton, "Discovery of the Fundamental Magnetic Charge (Arising from the new Conservation of Magnetic Energy)," 1981/1982 (private communication). Dr. Boren has a cogent argument that the positron is the fundamental unit of magnetic charge. His theory thus assigns fundamentally different natures to positive charge and negative charge. In support of Dr. Boren, one should point out that the "positive" end of circuits can simply be "less negative" than the "negative" end. In other words, the circuit works simply from a higher accumulation of negative charges (the "negative" end) to a lesser accumulation of negative charges (the "positive" end). Nowhere need there be positive charges (protons, positrons, etc.) to make the circuit work. Dr. Boren's theory, though dramatic at first encounter, nonetheless bears close and meticulous examination -- particularly since he has been able to gather experimental data which support his theory and disagree with present theory.
2. Eagle, Albert, "An Alternative Explanation of Relativity Phenomena," Philosophical Magazine and Journal of Science, No. 191, December 1939, pp. 694-701.
3. Ehrenhaft, Felix and Wasser, Emanuel, "Determination of the Size and Weight of Single Submicroscopic Spheres of the Order of Magnitude r = 4 x 10(-5) cm. to 5 x 10(-6) cm., as well as the Production of Real Images of Submicroscopic Particles by means of Ultraviolet Light," Phil. Mag. and Jour. of Sci., Vol. II (Seventh Series), No. 7, July 1926, pp. 3-51.
4. Ehrenhaft, Felix and Wasser, Emanuel, "New Evidence of the Existence of Charges Smaller than the Electron - (a) The Micromagnet; (b) The Law of Resistance; (c) The Computation of Errors of the Method," Phil. Mag. and Jour. of Sci., Vol. V (Seventh Series), No. 28, February 1928, pp. 225-241.
5. See also Ehrenhaft's last paper dealing with the electronic charge, in Philosophy of Science, Vol. 8, 1941, p. 403.
6. McGregor, Donald Rait, The Inertia of the Vacuum: A New Foundation for Theoretical Physics, Exposition Press, Smithtown, NY, First Edition, 1981, pp. 15-20.
7. Ignat'ev, Yu. G. and Balakin, A. B., "Nonlinear Gravitational Waves in Plasma," Soviet Physics Journal, Vol. 24, No. 7, July 1981 (U.S. Translation, Consultants Bureau, NY, January 1982), pp.
8. Yater, Joseph C., "Relation of the second law of thermodynamics to the power conversion of energy fluctuations," Phys. Review A, Vol. 20, No. 4, October 1979, pp. 1614-1618.
9. DeSantis, Romano M. et al., "On the Analysis of Feedback Systems With a Multipower Open Loop Chain," October 1973, available through the Defense Technical Information Center (AD 773188).
10. Graneau, Peter, "Electromagnetic Jet-Propulsion in the Direction of Current Flow," Nature, Vol. 95, 28 January 1982, pp. 311-312.
11. "Gravity and acceleration aren’t always equivalent," New Scientist, 17 September 1981, p. 723.
12. Gonyaev, V. V., "Experimental Determination of the Free-Fall Acceleration of a Relativistic Charged Particle. II. A Cylindrical Solenoid in a Time-Independent Field of Inertial Forces," Izvestiya VUZ, Fizika, No. 7, 1979, pp. 28-32. English Translation: Soviet Physics Journal, No. 7, 1979, pp. 829-833. If one understands the new, expanded electromagnetics, this Soviet paper indicates a means of generating antigravity and pure inertial fields.
13. R. Schaffranke, "The Development of Post-Relativistic Concepts in Physics and Advanced Technology Abroad," Energy Unlimited, No. 12, Winter 1981, pp. 15-20.
14. F. K. Preikschat, A Critical Look at the Theory of Relativity, Library of Congress Catalogue No. 77-670044. Extensive compilation of measurements of the speed of light. Clearly shows the speed of light is not constant but changes, sometimes even daily.
B: The Secret of Electrical Free Energy
Present electromagnetic theory is only a special case of the much more fundamental electromagnetic theory discovered by Nikola Tesla at the turn of the century.
Pure vacuum is pure charge flux, without mass. The vacuum has a very high electrical potential -- something on the order of 200 million volts, with respect to a hypothetical zero charge.
Thus in an ordinary electrical circuit, each point of the "ground" -- which has the same potential as the vacuum -- actually has a non-zero absolute potential. This circuit ground has a value of zero only with respect to something else which has the same absolute electrical potential.
Voltage, which is always associated with a flow of electrical "mass" current (even if only a minuscule flow), is, by definition, a difference dropped in potential when a charge mass moves between two spatially separated points. What we have termed "electrical current" only flows where there is a suitable conducting medium between things which have a difference in absolute potential. Furthermore, between any two points in any material, there is considered to be a finite resistance -- if we apply a voltage and have a mass current flowing between the two points! Rigorously, to have one of the three is to have them all. To lose one is to lose all three. Immediately we see a major error in present theory: One can have a "difference in scalar potential" between two points without having a "voltage drop" between them. Specifically, if no mass current flows between them, no resistance exists between them, and no voltage drop exists between them.
In the same fashion, one can have a "scalar wave" through the vacuum without a voltage wave. In that case, the wave has no E-field and no H-field. The only reason one has an E-field around a statically charged object is because the charged electrons accumulated on the object are actually in violent motion. It is this motion of the charged masses that produces the E-field -- as well as the H-field whenever that entire E-field ensemble moves through laboratory space.
Now let us reason together in the "approximate" manner utilized in present electromagnetic theory. For example, let us examine a bird sitting on a high tension line.
The bird sits on the high tension line without a flow of mass electricity, because there is no significant difference in potential drop between the bird and the line. Specifically, between the bird's two feet -- each in contact with a different portion of the line -- there exists no potential difference. This is true even though, with respect to the vacuum, each foot is at a potential that would be "100,000 volts higher," were a mass current flowing. And it is true even though the absolute potential of each foot may be some 200.1 million "volts," were a mass current flowing.
Now an interesting thing happens to the bird when he flies through the air to light upon the high tension wire. As he flies towards the wire, he is flying through the massless electrostatic potential field of the wire, for that field extends an infinite distance away from the wire. The electrostatic potential field -- pure 0-field -- is actually the spatiotemporal intensity of the massless charge at a point. In other words, as the bird flies to the wire, he flies into an increasing "massless charge" potential, building up to 100,000 "volts" higher than the earth. However, very little (if any) "mass flow potential difference" is experienced upon his body in approaching the wire, and so essentially no "charged mass currents" are induced in his body. Thus the little flier safely navigates into the teeth of a very high electrostatic potential, lights upon the wire, and is not "fried" in the process. When he lights on the wire, his body has reached the electrostatic potential that each foot's contact point has. Again, there is no mass current flow. But his body is immersed in an increased flux of massless charge -- which is what the electrostatic potential represents. And each "virtual particle" flow in that charge represents a "massless (scalar)" electrical current.
The point is, one can have any amount of massless charge flow -- "scalar" current -- without any mechanical work being done in the system. All electrical work in a circuit is done against the physical mass of the charged masses that flow. Rigorously, force is defined as the time rate of change of momentum. Even in the relativistic case where F = ma + v(dm/dt), change of momentum requires mass movement. No mechanical work, and hence no energy, is expended by massless charge flow.
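The force law invoked here is just the standard product-rule expansion of Newton's second law for momentum p = mv: F = dp/dt = m(dv/dt) + v(dm/dt). A quick finite-difference check illustrates it; the mass and velocity profiles below are hypothetical, chosen purely for the demonstration.

```python
def m(t):  # hypothetical time-varying mass profile
    return 2.0 + 0.1 * t

def v(t):  # hypothetical velocity profile
    return 3.0 * t

def deriv(f, t, h=1e-6):
    """Central-difference numerical derivative of f at t."""
    return (f(t + h) - f(t - h)) / (2 * h)

t = 1.5
dp_dt = deriv(lambda s: m(s) * v(s), t)             # direct dp/dt
expanded = m(t) * deriv(v, t) + v(t) * deriv(m, t)  # ma + v(dm/dt)
print(abs(dp_dt - expanded) < 1e-6)  # True: the two forms agree
```

The two expressions agree to numerical precision, as the product rule guarantees for any differentiable m(t) and v(t).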
That is why the vacuum massless charge -- which is composed of a very high flux of massless "particles" -- normally does no work on our systems, and expends none of its very high "potential energy." It is exactly the same as the bird which flew into an increasing scalar field as it approached the high tension wire -- no work was done upon the bird by the increasing scalar flux currents encountered by its body.
By existing "in the vacuum," so to speak, we (the whole earth) are as birds sitting on a high tension line! Until we create a significant difference in potential, via our present electromagnetic circuits, no current can flow -- anywhere. Even if we produce potential differences, we must have a conductor and charged masses to flow, if we wish to produce mechanical work. Presently our electromagnetic theory allows us to create a difference in potential within different parts of a circuit, but only by moving and shifting charged mass. We therefore have to do work on this electrical mass in moving it around, and we only get back the work we have put into the circuit. In other words, presently all we do is "pump" electrical mass.
Now notice what would happen to the bird on the line if we substantially "pulsed" the potential on the line. Suppose we "pulsed" it such that the bird's physical system -- considered as a circuit containing a capacitance, a resistance, an inductance, and many free electrons -- became resonant to the pulsing frequency. In that case the "bird system" would resonate, and a great deal of electrical mass would surge back and forth in the body of the bird. In the bird's body, voltage would exist, charged mass current would flow, work would be done, and the bird would be electrocuted.
Also, note that, without mass movement, electromagnetic vector fields are not produced (and a portion of the difficulty lies with the actual vector mechanics itself). Scalar (nonvector) waves continually penetrate the "space" where there is no mass movement. This means there can exist a "delta-0" without a voltage or an E-field. The present theory does not allow this, because it always uses "q" (charge) to be charged mass. Briefly, without belaboring the point, let us just say that it is the mechanical spin of the individual charged particle -- such as the electron -- which "entangles" or "knits together" or "couples" independent scalar waves into vector waves. A vector wave is simply two coupled scalar waves. The entire force field concept -- such as the E-field and the B-field -- is operationally defined in terms of the force exhibited on a test particle or test mass. Rigorously, an E-field does not exist as a force field in a vacuum, but as two coupled scalar 0-fields "tumbling about each other." When these two coupled, tumbling fields meet a spinning electron, e.g., the force emerges on the electron mass. In short, movement of a rotating mass changes delta-0 to "voltage," creating the V/I/R triad.
By "accululating charged mass particles" -- such as electrons -- one certainly can increase the valu of 0, which represents the charge intensity or "scalar electrostatic potential." However, that is
nt the only way to increase it. Resonance and rotatio n of charged mass can also be appropriately employed to vary the vacuum charge potential 0, under prper circumstances.
By the correct application of rotary principles and Tesla electromagnetic theory, it is possible to oscillate -- and change -- the vacuum potential itself, in one part of an electrical system.
This information can be found at: http://www.totse.com/en/fringe/free_energy/freepwr.html
Posted by: Lisa Marie Storm
Contributing Editor for Paranormalnews.com
E-mail: Lisa@Paranormalnews.com
probability of a game show
January 29th 2009, 10:46 AM #1
On a weekly television game show, contestants are shown three closed boxes and asked to choose one of them. They know that one of the boxes contains cash and the other two contain carrots. Each week, after a contestant has chosen a box, the game show host opens one of the other two boxes to reveal that it contains carrots. The contestant is then given a choice: stick with the box they have already chosen or change their mind and pick the remaining unopened box.
Use a suitable probability calculation to decide what the contestant should do (assume that they are aiming for the cash rather than the carrots).
Ok, for this question I don't really understand; I just know that P(carrot) = 1/3. I don't even have a clue how to do it or what kind of probability calculation to use. Can somebody give me the structure to do this question, please?
Any help would be appreciated! Thanks for your time
monty hall problem - Google Search
I like this site: Marilyn vos Savant | The Game Show Problem
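This is the classic Monty Hall problem, as the links above indicate. The short probability argument: sticking wins only when the first pick was right (probability 1/3), while switching wins whenever the first pick was wrong (probability 2/3), so the contestant should switch. A quick Monte Carlo simulation makes that concrete; the function name and trial count below are illustrative choices, not part of the original question.

```python
import random

def play(switch, trials=100_000, seed=0):
    """Simulate the game show: 3 boxes, one holds the cash.
    Returns the fraction of trials in which the contestant wins the cash."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        cash = rng.randrange(3)   # box hiding the cash
        pick = rng.randrange(3)   # contestant's first choice
        # Host opens a box that is neither the pick nor the cash box.
        opened = next(b for b in range(3) if b != pick and b != cash)
        if switch:
            # Switch to the one remaining unopened box.
            pick = next(b for b in range(3) if b != pick and b != opened)
        wins += (pick == cash)
    return wins / trials

print(f"stick:  {play(switch=False):.3f}")   # close to 1/3
print(f"switch: {play(switch=True):.3f}")    # close to 2/3
```

With 100,000 trials the stick strategy wins near 33% of the time and the switch strategy near 67%, matching the 1/3 vs. 2/3 analysis.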
January 29th 2009, 03:14 PM #2
Physics Forums - View Single Post - What do you do when instructors inflict horrible books on you?
Students often characterize textbooks as difficult when (a) they do not invest sufficient time into learning the material or (b) they do not invest sufficient time into reviewing the prerequisite material.
I recommend that you review the prerequisite material over the summer. When your course begins, attend every lecture, read the chapters multiple times, and meet with your teacher regularly to clarify
and/or fortify the concepts.
That's not necessarily true. I heard Arfken does not have solutions to the problems. That is a major negative. There are books like that with very few solutions, few examples, etc. All these factors make it much more difficult to learn from than a book that has a solutions manual, numerous worked examples, etc.
Also, some books have typographic errors, small and/or hard to read fonts, or bad organization.
Grab a bunch of books from the library and see which one "clicks" the best. I always do that, even if the assigned book is good. Sometimes if just helps hearing a different author explain it, or to
read topics presented in slightly different orders.
Thank you, that's what I have in mind now. Would you consider "Mathematical Methods for Physics and Engineering" by Riley a good mathematical physics text to study from? The major plus is that it has
an answer book for every odd-numbered problem, plus numerous examples.
In addition, what would you recommend for a readable graduate level quantum mechanics text that has numerous examples and a solutions page at the least?
Thin-film Interference
Note that, in the simulation, the incident wave is shown on the left. The wave that reflects off the top surface of the film is moved horizontally to the right, so we can see it easily without it
being on top of the incident wave. The wave that reflects off the bottom surface of the film is moved even farther to the right. Look at the interference that occurs between the two waves traveling
up in the top medium.
We can start our analysis by thinking about the path-length difference that occurs for the two waves. One wave just bounces off the film, while the other wave goes through the film, reflects, and
travels through the film again before emerging back into the first medium. If the film thickness is t, then the second wave travels an extra distance of 2t compared to the first wave. The path-length
difference, in other words, is 2t.
Based on our previous understanding of interference, we might expect that if this path-length difference was equal to an integer number of wavelengths, we would see constructive interference, and if
the path-length difference was an integer number of wavelengths, plus or minus one half of the wavelength, we would see destructive interference. It is just a little more complicated than this,
however - there are two more ideas that we need to consider.
First, we have up to three media in this situation, and the wavelength of the light is different in the different media - which wavelength is it that really matters? To satisfy the interference
conditions, we need to align the wave that goes down and back in the film with the wave that bounces off the top of the film. Thus, it is the wavelength in the film that really matters.
Note that the wavelength in any medium is related to the wavelength in vacuum by the equation: λ_medium = λ_vacuum / n_medium
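This relation is simple enough to capture in a one-line sketch; the function name and the 700 nm / n = 1.33 values below are example choices of my own, not taken from the page:

```python
def wavelength_in_medium(lambda_vacuum_nm, n):
    """Wavelength of light inside a medium of refractive index n."""
    return lambda_vacuum_nm / n

# Red light (700 nm in vacuum) inside a soap-like film with n = 1.33:
print(wavelength_in_medium(700, 1.33))  # ≈ 526.3 nm
```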
Second, we have to account for the fact that when light reflects from a higher-n medium, it gets inverted (there is no inversion when light reflects from a lower-n medium). Inverting a sine wave is
equivalent to simply shifting the wave by half a wavelength. Thus, in our thin-film situation, if both reflections result in an inversion, or neither one does, the 2t path-length difference we derived
above is all we need to consider. If only one of the reflections results in an inversion, however, the effective path-length difference is 2t plus or minus (it doesn't really matter which) half a
wavelength.
Once we've determined the effective path-length difference between the two waves, we can set that equal to the appropriate interference condition. This gives us an equation that relates the thickness
of the thin film to the wavelength of light in the film.
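Putting the three ideas together (the 2t path-length difference, the wavelength in the film, and the reflection inversions), the thickness conditions can be sketched in Python. The function and its interface are my own illustration, not part of the simulation:

```python
def reflected_film_thicknesses(lambda_vacuum, n1, n2, n3, m_max=3):
    """Film thicknesses (same units as lambda_vacuum) giving constructive
    and destructive interference for the reflected light.

    n1: index above the film, n2: the film itself, n3: below the film.
    """
    lam_film = lambda_vacuum / n2   # wavelength inside the film
    inversion_top = n2 > n1         # does the top-surface reflection invert?
    inversion_bottom = n3 > n2      # does the bottom-surface reflection invert?
    # If exactly one reflection inverts, the effective extra path is
    # 2t ± lam_film/2, which swaps the two interference conditions.
    one_inversion = inversion_top != inversion_bottom
    constructive, destructive = [], []
    for m in range(m_max + 1):
        if one_inversion:
            constructive.append((m + 0.5) * lam_film / 2)  # 2t = (m + 1/2) λ_film
            destructive.append(m * lam_film / 2)           # 2t = m λ_film
        else:
            constructive.append(m * lam_film / 2)
            destructive.append((m + 0.5) * lam_film / 2)
    return constructive, destructive
```

For an air / soap-film / air stack (n = 1.0 / 1.33 / 1.0) only the top reflection inverts, so a vanishingly thin film gives destructive reflected interference, matching what the simulation shows as the thickness goes to zero.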
1. Start with only the red light source on, and showing the interference for red light. With the initial settings for the indices of refraction of the various layers, vary the film thickness to
determine which film thicknesses result in constructive interference for the reflected light, and which result in destructive interference for the reflected light. Express these thicknesses in terms
of the wavelength of the red light in the film. Do you see a pattern in these two sets of thicknesses?
2. In the limit that the film thickness goes to zero, what kind of interference occurs for the reflected light? How can you explain this?
3. Now, adjust the index of refraction of medium 1 so that it is larger than that of medium 2. Repeat the observations you made in steps 1 and 2 above. What similarities and differences do you
observe for your two sets of observations?
4. Find the smallest non-zero film thickness that gives constructive interference for the reflected light when the light is red. Now, make a prediction - when you switch to green light, will the
smallest non-zero film thickness that gives constructive interference for green light be larger than, smaller than, or equal to the thickness you found for the red light? Justify your prediction, and
then try it to see if you were correct. Repeat the process for blue light.
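As a rough numerical check on the prediction in step 4, assuming exactly one phase inversion (e.g. an air / film / air stack with n_film > 1), the smallest constructive thickness is a quarter of the wavelength in the film. The n_film and wavelength values below are assumed examples, not taken from the exercise:

```python
# Smallest non-zero film thickness giving constructive reflected interference,
# assuming exactly one phase inversion, so t_min = lambda_film / 4.
n_film = 1.33
wavelengths_nm = {"red": 700, "green": 550, "blue": 450}
t_min = {name: (lam / n_film) / 4 for name, lam in wavelengths_nm.items()}
for name, t in t_min.items():
    print(f"{name}: {t:.1f} nm")
# Shorter vacuum wavelength -> shorter wavelength in the film -> smaller t_min,
# so the green and blue thicknesses come out smaller than the red one.
```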
5. At the left of the simulation, you can see some colored boxes representing the color of the incident light, the reflected light, and the transmitted light. For instance, if you have both red and
blue incident light, the incident light would look purple to you, because it is actually red and blue mixed together. With this purple (red and blue, that is) incident light, can you find a film
thickness that produces blue reflected light and red transmitted light? If so, how can you explain this?
This simulation was developed by Andrew Duffy (original at ) and slightly modified by Taha Mzoughi.
This work is licensed under a
Creative Commons Attribution 2.5 Taiwan License