| name | title | abstract | fulltext | keywords |
|---|---|---|---|---|
| 261998 | Practical Algorithms for Selection on Coarse-Grained Parallel Computers. | In this paper, we consider the problem of selection on coarse-grained distributed memory parallel computers. We discuss several deterministic and randomized algorithms for parallel selection. We also consider several algorithms for load balancing needed to keep a balanced distribution of data across processors ... | better in practice than its deterministic counterpart due to the low constant associated with the algorithm. Parallel selection algorithms are useful in such practical applications as dynamic distribution of multidimensional data sets, parallel graph partitioning and parallel construction of multidimensional binary sea... | parallel algorithms;selection;parallel computers;coarse-grained;median finding;randomized algorithms;load balancing;meshes;hypercubes |
| 262003 | Parallel Incremental Graph Partitioning. | Partitioning graphs into equally large groups of nodes while minimizing the number of edges between different groups is an extremely important problem in parallel computing. For instance, efficiently parallelizing several scientific and engineering applications requires the partitioning of data or tasks among p... | Introduction Graph partitioning is a well-known problem for which fast solutions are extremely important in parallel computing and in research areas such as circuit partitioning for VLSI design. For instance, parallelization of many scientific and engineering problems requires partitioning data among the processors in ... | remapping;mapping;refinement;parallel;linear-programming |
| 262369 | Computing Accumulated Delays in Real-time Systems. | We present a verification algorithm for duration properties of real-time systems. While simple real-time properties constrain the total elapsed time between events, duration properties constrain the accumulated satisfaction time of state predicates. We formalize the concept of durations by introducing duration measures... | Introduction Over the past decade, model checking [CE81, QS81] has emerged as a powerful tool for the automatic verification of finite-state systems. Recently the model-checking paradigm has been extended to real-time systems [ACD93, HNSY94, AFH96]. Thus, given the description of a finite-state system together with its... | real-time systems;duration properties;formal verification;model checking |
| 262521 | Compile-Time Scheduling of Dynamic Constructs in Dataflow Program Graphs. | Scheduling dataflow graphs onto processors consists of assigning actors to processors, ordering their execution within the processors, and specifying their firing time. While all scheduling decisions can be made at runtime, the overhead is excessive for most real systems. To reduce this overhead, compile-time d... | Introduction A dataflow graph representation, either as a programming language or as an intermediate representation during compilation, is suitable for programming multiprocessors because parallelism can be extracted automatically from the representation [1], [2]. Each node, or actor, in a dataflow graph represents eit... | macro actor;dynamic constructs;dataflow program graphs;profile;multiprocessor scheduling |
| 262549 | Singular and Plural Nondeterministic Parameters. | The article defines algebraic semantics of singular (call-time-choice) and plural (run-time-choice) nondeterministic parameter passing and presents a specification language in which operations with both kinds of parameters can be defined simultaneously. Sound and complete calculi for both semantics are introduced. We s... | Introduction The notion of nondeterminism arises naturally in describing concurrent systems. Various approaches to the theory and specification of such systems, for instance, CCS [16], CSP [9], process algebras [1], event structures [26], include the phenomenon of nondeterminism. But nondeterminism is also a natural co... | many-sorted algebra;sequent calculus;nondeterminism;algebraic specification |
| 262588 | Adaptive Multilevel Techniques for Mixed Finite Element Discretizations of Elliptic Boundary Value Problems. | We consider mixed finite element discretizations of linear second-order elliptic boundary value problems with respect to an adaptively generated hierarchy of possibly highly nonuniform simplicial triangulations. By a well-known postprocessing technique the discrete problem is equivalent to a modified nonconforming disc... | Introduction In this work, we are concerned with adaptive multilevel techniques for the efficient solution of mixed finite element discretizations of linear second-order elliptic boundary value problems. In recent years, mixed finite element methods have been increasingly used in applications, in particular for such ... | mixed finite elements;multilevel preconditioned CG iterations;a posteriori error estimator |
| 262640 | Decomposition of Gray-Scale Morphological Templates Using the Rank Method. | Convolutions are a fundamental tool in image processing. Classical examples of two-dimensional linear convolutions include image correlation, the mean filter, the discrete Fourier transform, and a multitude of edge mask filters. Nonlinear convolutions are used in such operations as the median filter, the medial... | INTRODUCTION Both linear convolution and morphological methods are widely used in image processing. One of the common characteristics among them is that they both require applying a template to a given image, pixel by pixel, to yield a new image. In the case of convolution, the template is usually called convolution win... | structuring element;morphology;morphological template;template rank;convolution;template decomposition |
| 263203 | The Matrix Sign Function Method and the Computation of Invariant Subspaces. | A perturbation analysis shows that if a numerically stable procedure is used to compute the matrix sign function, then it is competitive with conventional methods for computing invariant subspaces. Stability analysis of the Newton iteration improves an earlier result of Byers and confirms that ill-conditioned iterates ... | Introduction If $A \in \mathbb{R}^{n \times n}$ has no eigenvalue on the imaginary axis, then the matrix sign function $\operatorname{sign}(A)$ may be defined as $\operatorname{sign}(A) = \frac{1}{\pi i} \oint_{\gamma} (zI - A)^{-1} \, dz - I$ (1), where $\gamma$ is any simple closed curve in the complex plane enclosing all eigenvalues of $A$ with positive real part. The sign function is used to compute eigenvalues and invariant subspace... | matrix sign function;invariant subspaces;perturbation theory |
| 263207 | An Analysis of Spectral Envelope Reduction via Quadratic Assignment Problems. | A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size was described in [Barnard, Pothen, and Simon, Numer. Linear Algebra Appl., 2 (1995), pp. 317--334]. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specifi... | Introduction We provide a raison d'être for a novel spectral algorithm to reduce the envelope of a sparse, symmetric matrix, described in a companion paper [2]. The algorithm associates a discrete Laplacian matrix with the given symmetric matrix, and then computes a reordering of the matrix by sorting the components... | sparse matrices;quadratic assignment problems;reordering algorithms;2-sum problem;envelope reduction;eigenvalues of graphs;1-sum problem;laplacian matrices |
| 263210 | Perturbation Analyses for the QR Factorization. | This paper gives perturbation analyses for $Q_1$ and $R$ in the QR factorization $A=Q_1R$, $Q_1^TQ_1=I$ for a given real $m\times n$ matrix $A$ of rank $n$ and general perturbations in $A$ which are sufficiently small in norm. The analyses more accurately reflect the sensitivity of the problem than previous such result... | Introduction The QR factorization is an important tool in matrix computations (see for example [4]): given an $m \times n$ real matrix $A$ with full column rank, there exists a unique $m \times n$ real matrix $Q_1$ with orthonormal columns, and a unique nonsingular upper triangular $n \times n$ real matrix $R$ with positive diag... | matrix equations;pivoting;condition estimation;QR factorization;perturbation analysis |
| 263442 | Optimal Local Weighted Averaging Methods in Contour Smoothing. | In several applications where binary contours are used to represent and classify patterns, smoothing must be performed to attenuate noise and quantization error. This is often implemented with local weighted averaging of contour point coordinates, because of the simplicity, low cost and effectiveness of such me... | Introduction There are numerous applications involving the processing of 2-D images, and 2-D views of 3-D images, where binary contours are used to represent and classify patterns of interest. Measurements are then made using the contour information (e.g. perimeter, area, moments, slopes, curvature, deviation angles ... | optimal local weighted averaging;contour smoothing;gaussian smoothing;digitization noise modeling |
| 263443 | Bias in Robust Estimation Caused by Discontinuities and Multiple Structures. | When fitting models to data containing multiple structures, such as when fitting surface patches to data taken from a neighborhood that includes a range discontinuity, robust estimators must tolerate both gross outliers and pseudo outliers. Pseudo outliers are outliers to the structure of interest, but inliers ... | Introduction Robust estimation techniques have been used with increasing frequency in computer vision applications because they have proven effective in tolerating the gross errors (outliers) characteristic of both sensors and low-level vision algorithms. Most often, robust estimators are used when fitting model parame... | parameter estimation;multiple structures;outliers;discontinuities;robust estimation |
| 263444 | Affine Structure from Line Correspondences With Uncalibrated Affine Cameras. | This paper presents a linear algorithm for recovering 3D affine shape and motion from line correspondences with uncalibrated affine cameras. The algorithm requires a minimum of seven line correspondences over three views. The key idea is the introduction of a one-dimensional projective camera. This converts 3D ... | Introduction Using line segments instead of points as features has attracted the attention of many researchers [1], [2], [3], [4], [5], [6], [7], [8], [9] for various tasks such as pose estimation, stereo and structure from motion. In this paper, we are interested in structure from motion using line correspondences a... | structure from motion;one-dimensional camera;uncalibrated image;factorization method;affine structure;affine camera;line correspondence |
| 263446 | A Sequential Factorization Method for Recovering Shape and Motion From Image Streams. | We present a sequential factorization method for recovering the three-dimensional shape of an object and the motion of the camera from a sequence of images, using tracked features. The factorization method originally proposed by Tomasi and Kanade produces robust and accurate results incorporating the singular v... | Introduction Recovering both the 3D shape of an object and the motion of the camera simultaneously from a stream of images is an important task and has wide applicability in many tasks such as navigation and robot manipulation. Tomasi and Kanade [1] first developed a factorization method to recover shape and motion unde... | image understanding;feature tracking;shape from motion;3D object reconstruction;real-time vision;singular value decomposition |
| 263449 | On the Sequential Determination of Model Misfit. | Many strategies in computer vision assume the existence of general purpose models that can be used to characterize a scene or environment at various levels of abstraction. The usual assumptions are that a selected model is competent to describe a particular attribute and that the parameters of this model can be... | Introduction [Figure, panels (a)-(c): (a) A shaded range image scanned from above a wooden mannequin lying face down. (b) Superellipsoids fitted to segmented data from (a); the dark dots are the range data points. Note that the mannequin's right arm has failed to segment and only a single model has been fitted where two would have been ...] | lack-of-fit statistics;active vision;autonomous exploration;misfit |
| 263452 | Approximating Bayesian Belief Networks by Arc Removal. | I propose a general framework for approximating Bayesian belief networks through model simplification by arc removal. Given an upper bound on the absolute error allowed on the prior and posterior probability distributions of the approximated network, a subset of arcs is removed, thereby speeding up probabilisti... | Introduction Today, more and more applications based on the Bayesian belief network formalism are emerging for reasoning and decision making in problem domains with inherent uncertainty. Current applications range from medical diagnosis and prognosis [1], computer vision [10], to information retrieval [2]. As applica... | belief network approximation;approximate probabilistic inference;information theory;model simplification;bayesian belief networks |
| 263489 | Closure properties of constraints. | Many combinatorial search problems can be expressed as constraint satisfaction problems and this class of problems is known to be NP-complete in general. In this paper, we investigate the subclasses that arise from restricting the possible constraint types. We first show that any set of constraints that does not give r... | Introduction Solving a constraint satisfaction problem is known to be NP-complete [20]. However, many of the problems which arise in practice have special properties which allow them to be solved efficiently. The question of identifying restrictions to the general problem which are sufficient to ensure tractability is ... | complexity;NP-completeness;indicator problem;constraint satisfaction problem |
| 263499 | Constraint tightness and looseness versus local and global consistency. | Constraint networks are a simple representation and reasoning framework with diverse applications. In this paper, we identify two new complementary properties on the restrictiveness of the constraints in a network (constraint tightness and constraint looseness) and we show their usefulness for estimating the level of local... | Introduction Constraint networks are a simple representation and reasoning framework. A problem is represented as a set of variables, a domain of values for each variable, and a set of constraints between the variables. A central reasoning task is then to find an instantiation of the variables that satisfies the constr... | relations;constraint networks;constraint-based reasoning;constraint satisfaction problems;local consistency |
| 263881 | Effective erasure codes for reliable computer communication protocols. | Reliable communication protocols require that all the intended recipients of a message receive the message intact. Automatic Repeat reQuest (ARQ) techniques are used in unicast protocols, but they do not scale well to multicast protocols with large groups of receivers, since segment losses tend to become uncorrelated t... | Introduction Computer communications generally require reliable data transfers among the communicating parties. This is usually achieved by implementing reliability at different levels in the protocol ... [This paper appears in ACM Computer Communication Review, Vol. 27, No. 2, Apr. 1997, pp. 24-36. The work described in this ...] | FEC;reliable multicast;erasure codes |
| 263883 | Possibilities of using protocol converters for NIR system construction. | Volumes of information available from network information services have been increasing considerably in recent years. Users' satisfaction with an information service depends very much on the quality of the network information retrieval (NIR) system used to retrieve information. The construction of such a system involve... | Introduction Important terms used in this paper are illustrated in Fig. 1. An information provider makes some information available to a user through a network by means of a network information retrieval (NIR) system, thereby ... [Figure 1: server, client, user, information provider, NIR system, NIR protocol, information service.] | network information services;network information retrieval;protocol conversion |
| 263927 | Scale-sensitive dimensions, uniform convergence, and learnability. | Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property ... | Introduction In typical learning problems, the learner is presented with a finite sample of data generated by an unknown source and has to find, within a given class, the model yielding best predictions on future data generated by the same source. In a realistic scenario, the information provided by the sample is incom... | uniform laws of large numbers;vapnik-chervonenkis dimension;PAC learning |
| 264145 | Software Reuse by Specialization of Generic Procedures through Views. | A generic procedure can be specialized, by compilation through views, to operate directly on concrete data. A view is a computational mapping that describes how a concrete type implements an abstract type. Clusters of related views are needed for specialization of generic procedures that involve several types o... | Introduction Reuse of software has the potential to reduce cost, increase the speed of software production, and increase reliability. Facilitating the reuse of software could therefore be of great benefit. [G. S. Novak, Jr. is with the Department of Computer Sciences, University of Texas, Austin.] An Automatic Programmin... | abstract data type;software reuse;direct-manipulation editor;generic procedure;partial evaluation;generic algorithm;algorithm specialization |
| 264211 | Trading conflict and capacity aliasing in conditional branch predictors. | As modern microprocessors employ deeper pipelines and issue multiple instructions per cycle, they are becoming increasingly dependent on accurate branch prediction. Because hardware resources for branch-predictor tables are invariably limited, it is not possible to hold all relevant branch history for all active branch... | to the branch-predictor tables. Although this redundancy increases capacity aliasing compared to a standard one-bank structure of comparable size, our simulations show that the reduction in conflict aliasing overcomes this effect to yield a gain in prediction accuracy. Alternatively, we show that a skewed organization ... | 3 C's classification;aliasing;skewed branch predictor;branch prediction |
| 264266 | Scheduling and data layout policies for a near-line multimedia storage architecture. | Recent advances in computer technologies have made it feasible to provide multimedia services, such as news distribution and entertainment, via high-bandwidth networks. The storage and retrieval of large multimedia objects (e.g., video) becomes a major design issue of the multimedia information system. While most other... | Introduction In the past few years, we have witnessed tremendous advances in computer technologies, such as storage architectures (e.g. fault tolerant disk arrays and parallel I/O architectures), high speed networking systems (e.g., ATM switching technology), compression and coding algorithms. These advances have mad... | multimedia storage;scheduling;data layout |
| 264406 | Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. | A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper consider... | Introduction One of the first results in the mathematics of computation, which underlies the subsequent development of much of theoretical computer science, was the distinction between computable and non-computable functions shown in papers of Church [1936], Turing [1936], and Post [1936]. Central to this result is Chu... | spin systems;quantum computers;algorithmic number theory;church's thesis;fourier transforms;foundations of quantum mechanics;prime factorization;discrete logarithms |
| 264407 | Strengths and Weaknesses of Quantum Computing. | Recently a great deal of attention has been focused on quantum computation following a sequence of results [Bernstein and Vazirani, in Proc. 25th Annual ACM Symposium Theory Comput., 1993, pp. 11--20, SIAM J. Comput., 26 (1997), pp. 1277--1339], [Simon, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994... | Introduction Quantum computational complexity is an exciting new area that touches upon the foundations of both theoretical computer science and quantum physics. In the early eighties, Feynman [12] pointed out that straightforward simulations of quantum mechanics on a classical computer appear to require a simulation o... | quantum Turing machines;quantum polynomial time;oracle quantum Turing machines |
| 264525 | A Prioritized Multiprocessor Spin Lock. | In this paper, we present the PR lock, a prioritized spin lock mutual exclusion algorithm. The PR lock is a contention-free spin lock, in which blocked processes spin on locally stored or cached variables. In contrast to previous work on prioritized spin locks, our algorithm maintains a pointer to the lock hold... | Introduction Mutual exclusion is a fundamental synchronization primitive for exclusive access to critical sections or shared resources on multiprocessors [17]. The spin-lock is one of the mechanisms that can be used to provide mutual exclusion on shared memory multiprocessors [2]. A spin-lock is usually implemented usi... | spin lock;priority queue;mutual exclusion;parallel processing;real-time system |
| 264530 | An Optimal Algorithm for the Angle-Restricted All Nearest Neighbor Problem on the Reconfigurable Mesh, with Applications. | Given a set S of n points in the plane and two directions $r_1$ and $r_2$, the Angle-Restricted All Nearest Neighbor problem (ARANN, for short) asks to compute, for every point p in S, the nearest point in S lying in the planar region bounded by two rays in the directions $r_1$ and $r_2$ emanating from p. The A... | Introduction Recently, in an effort to enhance both its power and flexibility, the mesh-connected architecture has been endowed with various reconfigurable features. Examples include the bus automaton [21, 22], the reconfigurable mesh [15], the mesh with bypass capability [8], the content addressable array processor [2... | reconfigurable mesh;mobile computing;ARANN;proximity problems;lower bounds;ANN |
| 264559 | Optimal Registration of Object Views Using Range Data. | This paper deals with robust registration of object views in the presence of uncertainties and noise in depth data. Errors in registration of multiple views of a 3D object severely affect view integration during automatic construction of object models. We derive a minimum variance estimator (MVE) for computing ... | Introduction An important issue in the design of 3D object recognition systems is building models of physical objects. Object models are extensively used for synthesizing and predicting object appearances from desired viewpoints and also for recognizing them in many applications such as robot navigation and industrial ... | 3D free-form objects;automatic object modeling;view transformation estimation;image registration;range data;view integration |
| 264997 | Datapath scheduling with multiple supply voltages and level converters. | We present an algorithm called MOVER (Multiple Operating Voltage Energy Reduction) to minimize datapath energy dissipation through use of multiple supply voltages. In a single voltage design, the critical path length, clock period, and number of control steps limit minimization of voltage and power. Multiple supply vol... | INTRODUCTION A great deal of current research is motivated by the need for decreased power dissipation while satisfying requirements for increased computing capacity. In portable ... [An earlier abbreviated version of this work was reported in the Proceedings of the 1997 IEEE International Symposium on Circuits and Systems, ...] | DSP;low power design;multiple voltage;datapath scheduling;power optimization;scheduling;high-level synthesis;level conversion |
| 265177 | On Parallelization of Static Scheduling Algorithms. | Most static algorithms that schedule parallel programs represented by macro dataflow graphs are sequential. This paper discusses the essential issues pertaining to parallelization of static scheduling and presents two efficient parallel scheduling algorithms. The proposed algorithms have been implemented on an ... | Introduction Static scheduling utilizes knowledge of problem characteristics to reach a globally optimal, or near-optimal, solution. Although many researchers have approached the problem in various ways, their work shares a similar underlying idea: take a directed acyclic graph representing the parallel program as input... | parallel scheduling algorithm;static scheduling;modified critical-path algorithm;macro dataflow graph |
| 265189 | A Simple Algorithm for Nearest Neighbor Search in High Dimensions. | The problem of finding the closest point in high-dimensional spaces is common in pattern recognition. Unfortunately, the complexity of most existing search algorithms, such as k-d tree and R-tree, grows exponentially with dimension, making them impractical for dimensionality above 15. In nearly all applications... | Introduction Searching for nearest neighbors continues to prove itself as an important problem in many fields of science and engineering. The nearest neighbor problem in multiple dimensions is stated as follows: given a set of n points and a novel query point Q in a d-dimensional space, "Find a point in the set such th... | object recognition;benchmarks;nearest neighbor;searching by slicing;visual correspondence;pattern classification;hardware architecture |
| 265213 | Design and Evaluation of a Window-Consistent Replication Service. | Real-time applications typically operate under strict timing and dependability constraints. Although traditional data replication protocols provide fault tolerance, real-time guarantees require bounded overhead for managing this redundancy. This paper presents the design and evaluation of a window-consistent pr... | Introduction Many embedded real-time applications, such as automated manufacturing and process control, require timely access to a fault-tolerant data repository. Fault-tolerant systems typically employ some form of redundancy to insulate applications from failures. Time redundancy protects applications by repeating co... | temporal consistency;real-time systems;replication protocols;scheduling;fault tolerance |
| 266408 | Decomposition of timed decision tables and its use in presynthesis optimizations. | Presynthesis optimizations transform a behavioral HDL description into an optimized HDL description that results in improved synthesis results. We introduce the decomposition of timed decision tables (TDT), a tabular model of system behavior. The TDT decomposition is based on the kernel extraction algorithm. By experim... | Introduction Presynthesis optimizations have been introduced in [1] as source-level transformations that produce "better" HDL descriptions. For instance, these transformations are used to reduce control-flow redundancies and make synthesis results relatively insensitive to the HDL coding style. They are also used to red... | benchmarks;presynthesis optimizations;timed decision table decomposition;decision tables;system behavior model;TDT decomposition;behavioral HDL description;circuit synthesis;optimized HDL description;kernel extraction algorithm |
| 266421 | Effects of delay models on peak power estimation of VLSI sequential circuits. | Previous work has shown that maximum switching density at a given node is extremely sensitive to a slight change in the delay at that node. However, when estimating the peak power for the entire circuit, the powers estimated must not be as sensitive to a slight variation or inaccuracy in the assumed gate delays because... | Introduction The continuing decrease in feature size and increase in chip density in recent years give rise to concerns about excessive power dissipation in VLSI chips. As pointed out in [1], large instantaneous power dissipation can cause overheating (local hot spots), and the failure rate for components roughly dou... | n-cycle power;sustainable power;variable delay;peak power;genetic optimization |
| 266441 | Power optimization using divide-and-conquer techniques for minimization of the number of operations. | We develop an approach to minimizing power consumption of portable wireless DSP applications using a set of compilation and architectural techniques. The key technical innovation is a novel divide-and-conquer compilation technique to minimize the number of operations for general DSP computations. Our technique optimize... | INTRODUCTION 1.1 Motivation. The pace of progress in integrated circuits and system design has been dictated by the push from application trends and the pull from technology improvements. The goal and role of designers and design tool developers has been to develop design methodologies, architectures, and synthesis tool... | power consumption;data flow graphs;portable wireless DSP applications;architectural techniques;DSP computations;compilation;divide-and-conquer compilation |
| 266458 | Approximate timing analysis of combinational circuits under the XBD0 model. | This paper is concerned with approximate delay computation algorithms for combinational circuits. As a result of intensive research in the early 90's, efficient tools exist which can analyze circuits of thousands of gates in a few minutes, or even in seconds in many cases. However, the computation time of these tools is... | Introduction During the late 80's and early 90's, significant progress [2, 8] was made in the theory of exact gate-level timing analysis, in which false paths are correctly identified so that exact delays can be computed. As the theory progressed, the efficiency and size limitations of actual implementations of timing analys... | false path;delay computation;timing analysis |
| 266472 | Sequential optimisation without state space exploration. | We propose an algorithm for area optimization of sequential circuits through redundancy removal. The algorithm finds compatible redundancies by implying values over nets in the circuit. The potentially exponential cost of state space traversal is avoided and the redundancies found can all be removed at once. The optimi... | Introduction Sequential optimisation seeks to replace a given sequential circuit with another one optimised with respect to some criterion (area, performance or power), in a way such that the environment of the circuit cannot detect the replacement. In this work, we deal with the problem of optimising sequential circuits... | sequential optimization;recursive learning;sequential circuits;safe delay replacement;compatible unobservability |
266476 | Decomposition and technology mapping of speed-independent circuits using Boolean relations. | Presents a new technique for the decomposition and technology mapping of speed-independent circuits. An initial circuit implementation is obtained in the form of a netlist of complex gates, which may not be available in the design library. The proposed method iteratively performs Boolean decomposition of each such gate... | Introduction
Speed-independent circuits, originating from D.E. Muller's work [12], are hazard-free under the unbounded
gate delay model. With recent progress in developing efficient analysis and synthesis techniques, supported
by CAD tools, this sub-class has moved closer to practice, bearing in mind the advantages of... | technology mapping;two-input sequential gate;complex gates;decomposed logic sharing;netlist;speed-independent circuits;two-input combinational gate;signal insertion;optimization;library matching;boolean decomposition;boolean relations;circuit CAD;logic resynthesis;design library;logic decomposition |
266552 | A deductive technique for diagnosis of bridging faults. | A deductive technique is presented that uses voltage testing for the diagnosis of single bridging faults between two gate input or output lines and is applicable to combinational or full-scan sequential circuits. For defects in this class of faults the method is accurate by construction while making no assumptions abou... | Introduction
A bridging fault [1] between two lines A and B in a circuit occurs
when the two lines are unintentionally shorted. When the lines A
and B have different logic values, the gates driving the lines will be
engaged in a drive fight (logic contention). Depending on the gates
driving the lines A and B, their inp... | bridging faults;deduction;diagnosis |
266569 | A SAT-based implication engine for efficient ATPG, equivalence checking, and optimization of netlists. | The paper presents a flexible and efficient approach to evaluating implications as well as deriving indirect implications in logic circuits. Evaluation and derivation of implications are essential in ATPG, equivalence checking, and netlist optimization. Contrary to other methods, the approach is based on a graph model ... | Introduction
Recently, substantial progress has been achieved in the fields of
Boolean equivalence checking and optimization of netlists. Techniques
for deriving indirect implications, which were originally developed
for ATPG tools, play a key role in this development.
Indirect implications have been successfully appli... | efficient ATPG;structure based methods;SAT-based implication engine;logic circuits;implication evaluation;indirect implications;implication graph;equivalence checking;graph algorithms;netlist optimization;automatic testing;graph model;circuit clause description |
266611 | Java as a specification language for hardware-software systems. | The specification language is a critical component of the hardware-software co-design process since it is used for functional validation and as a starting point for hardware-software partitioning and co-synthesis. This paper proposes the Java programming language as a specification language for hardware-software syst... | Introduction
Hardware-software system solutions have increased in
popularity in a variety of design domains [1] because these
systems provide both high performance and flexibility.
Mixed hardware-software implementations have a number
of benefits. Hardware components provide higher performance
than can be achieved by s... | specification languages;hardware-software co-design |
266615 | Delay bounded buffered tree construction for timing driven floorplanning. | As devices and lines shrink into the deep submicron range, the propagation delay of signals can be effectively improved by repowering the signals using intermediate buffers placed within the routing trees. Almost no existing timing driven floorplanning and placement approaches consider the option of buffer insertion. A... | Introduction
In high speed design, long on-chip interconnects can be modeled as distributed delay lines,
where the delay of the lines can often be reduced by wire sizing or intermediate buffer
insertion. Simple wire sizing is one degree of freedom available to the designer, but often it
is ineffective due to area, rout... | Total Wire Length;MST;floorplanning;DBB-tree;SPT;elmore delay;buffer insertion;delay bounds |
266802 | Path-based next trace prediction. | The trace cache has been proposed as a mechanism for providing increased fetch bandwidth by allowing the processor to fetch across multiple branches in a single cycle. But to date predicting multiple branches per cycle has meant paying a penalty in prediction accuracy. We propose a next trace predictor that treats the ... | Introduction
Current superscalar processors fetch and issue four to
six instructions per cycle - about the same number as in
an average basic block for integer programs. It is obvious
that as designers reach for higher levels of instruction
level parallelism, it will become necessary to fetch more
than one basic block ... | trace cache;Return History Stack;Next Trace Prediction;Multiple Branch Prediction;Path-Based Prediction |
266806 | Run-time spatial locality detection and optimization. | As the disparity between processor and main memory performance grows, the number of execution cycles spent waiting for memory accesses to complete also increases. As a result, latency hiding techniques are critical for improved application performance on future processors. We present a microarchitecture scheme which de... | Introduction
This paper introduces an approach to solving the growing
memory latency problem [2] by intelligently exploiting spatial
locality. Spatial locality refers to the tendency for neighboring
memory locations to be referenced close together in
time. Traditionally there have been two main approaches
used to explo... | data cache;prefetching;block size;cache management;spatial locality |
266808 | The design and performance of a conflict-avoiding cache. | High performance architectures depend heavily on efficient multi-level memory hierarchies to minimize the cost of accessing data. This dependence will increase with the expected increases in relative distance to main memory. There have been a number of published proposals for cache conflict-avoidance schemes. We invest... | Introduction
On current projections, the next 10 years could see CPU clock frequencies increase by a factor of
twenty whereas DRAM row-address-strobe delays are projected to decrease by only a factor of
two. This potential ten-fold increase in the distance to main memory has serious implications for
the design of future... | multi-level memory hierarchies;cache architecture design;polynomial modulus functions;conflict-avoiding cache performance;high performance architectures;cache storage;data access cost minimization;main memory;conflict miss ratios |
266810 | A framework for balancing control flow and predication. | Predicated execution is a promising architectural feature for exploiting instruction-level parallelism in the presence of control flow. Compiling for predicated execution involves converting program control flow into conditional, or predicated, instructions. This process is known as if-conversion. In order to effective... | Introduction
The performance of modern processors is becoming highly
dependent on the ability to execute multiple instructions per cycle.
In order to realize their performance potential, these processors
demand that increasing levels of instruction-level parallelism
(ILP) be exposed in programs. One of the major chal... | conditional instructions;compiler;instruction-level parallelism;parallel architecture;schedule time;program control flow;scheduling decisions;optimising compilers;predicated instructions;if-conversion;predicated execution |
266812 | Tuning compiler optimizations for simultaneous multithreading. | Compiler optimizations are often driven by specific assumptions about the underlying architecture and implementation of the target machine. For example, when targeting shared-memory multiprocessors, parallel programs are compiled to minimize sharing, in order to decrease high-cost, inter-processor communication. This p... | Introduction
Compiler optimizations are typically driven by
specific assumptions about the underlying architecture
and implementation of the target machine. For example,
compilers schedule long-latency operations early to
minimize critical paths, order instructions based on the
processor's issue slot restrictions to ma... | simultaneous multithreading;compiler optimizations;processor architecture;software speculative execution;performance;loop-iteration scheduling;parallel architecture;cache size;inter-processor communication;memory system resources;latency hiding;parallel programs;optimising compilers;shared-memory multiprocessors;loop t... |
266814 | Trace processors. | Traces are dynamic instruction sequences constructed and cached by hardware. A microarchitecture organized around traces is presented as a means for efficiently executing many instructions per cycle. Trace processors exploit both control flow and data flow hierarchy to overcome complexity and architectural limitations ... | Introduction
Improvements in processor performance come about
in two ways - advances in semiconductor technology and
advances in processor microarchitecture. To sustain the
historic rate of increase in computing power, it is important
for both kinds of advances to continue. It is almost
certain that clock frequencies w... | trace cache;selective reissuing;context-based value prediction;next trace prediction;trace processors;multiscalar processors |
266816 | Out-of-order vector architectures. | Register renaming and out-of-order instruction issue are now commonly used in superscalar processors. These techniques can also be used to significant advantage in vector processors, as this paper shows. Performance is improved and available memory bandwidth is used more effectively. Using a trace driven simulation we ... | Introduction
Vector architectures have been used for many years
for high performance numerical applications - an area
where they still excel. The first vector machines were
supercomputers using memory-to-memory operation,
but vector machines only became commercially successful
with the addition of vector registers in t... | memory latency;memory traffic elimination;vector architecture;register renaming;microarchitecture;precise interrupts;out-of-order execution |
266819 | Improving code density using compression techniques. | We propose a method for compressing programs in embedded processors where instruction memory size dominates cost. A post-compilation analyzer examines a program and replaces common sequences of instructions with a single instruction codeword. A microprocessor executes the compressed instruction sequences by fetching co... | Introduction
According to a recent prediction by In-Stat Inc., the merchant processor market is set to
exceed $60 billion by 1999, and nearly half of that will be for embedded processors. However, by
unit count, embedded processors will exceed the number of general purpose microprocessors by a
factor of 20. Compared to... | embedded systems;compression;code density;Code Space Optimization |
266820 | Procedure based program compression. | Cost and power consumption are two of the most important design factors for many embedded systems, particularly consumer devices. Products such as personal digital assistants, pagers with integrated data services and smart phones have fixed performance requirements but unlimited appetites for reduced cost and increased... | Introduction
We will present a technique for saving power and
reducing cost in embedded systems. We are concerned
primarily with data-rich consumer devices used for
computation and communications, e.g. the so-called
information appliance. Currently this product category
includes devices such as simple Personal Digital
... | pagers;run-time performance overhead;procedural reference resolution;multimedia applications;procedure-based program compression;compressed memory;cached procedures;battery life;power consumption;high-capacitance bus traffic;consumer devices;embedded systems;RAM;memory references;performance requirements;source coding;... |
266821 | Improving the accuracy and performance of memory communication through renaming. | As processors continue to exploit more instruction-level parallelism, a greater demand is placed on reducing the effects of memory access latency. In this paper, we introduce a novel modification of the processor pipeline called memory renaming. Memory renaming applies register access techniques to load instructions, r... | Introduction
Two trends in the design of microprocessors combine
to place an increased burden on the implementation
of the memory system: more aggressive, wider
instruction issue and higher clock speeds. As more instructions
are pushed through the pipeline per cycle,
there is a proportionate increase in the processing ... | address calculation;heap segment;instruction-level parallelism;stores;storage allocation;performance;data fetching;execution time;data value speculation;data dependence speculation;instruction loading;memory renaming;memory references;processor pipeline;memory communication;memory segments;delays;memory access latency;... |
266824 | The predictability of data values. | The predictability of data values is studied at a fundamental level. Two basic predictor models are defined: Computational predictors perform an operation on previous values to yield predicted next values. Examples we study are stride value prediction (which adds a delta to a previous value) and last value prediction (... | Introduction
There is a clear trend in high performance processors toward
performing operations speculatively, based on predictions.
If predictions are correct, the speculatively executed
instructions usually translate into improved performance.
Although program execution contains a variety of information
that can be... | Context Based Prediction;Last Value Prediction;stride prediction;prediction;value prediction |
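The two basic predictor models named in the abstract above (last-value and stride prediction) are simple enough to sketch directly. The classes below are an illustrative sketch with names and an interface of our own, not taken from the paper; a real predictor would also track confidence.

```python
class LastValuePredictor:
    """Predicts that an instruction (keyed by its PC) produces
    the same value it produced last time."""
    def __init__(self):
        self.last = {}

    def predict(self, pc):
        return self.last.get(pc)  # None until the first update

    def update(self, pc, value):
        self.last[pc] = value


class StridePredictor:
    """Predicts the last value plus the most recent delta (stride)."""
    def __init__(self):
        self.last = {}
        self.stride = {}

    def predict(self, pc):
        if pc in self.last:
            return self.last[pc] + self.stride.get(pc, 0)
        return None

    def update(self, pc, value):
        if pc in self.last:
            self.stride[pc] = value - self.last[pc]
        self.last[pc] = value
```

Feeding the value stream 10, 20, 30 at one PC, the stride predictor predicts 40 next, while the last-value predictor predicts 30.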
266825 | Value profiling. | variables as invariant or constant at compile-time allows the compiler to perform optimizations including constant folding, code specialization, and partial evaluation. Some variables, which cannot be labeled as constants, may exhibit semi-invariant behavior. A "semi-invariant" variable is one that cannot be identified... | Introduction
Many compiler optimization techniques depend upon
analysis to determine which variables have invariant behavior.
Variables which have invariant run-time behavior, but cannot
be labeled as such at compile-time, do not
fully benefit from these optimizations. This paper examines
using profile feedback inf... | profiling;invariance;compiler optimization |
266826 | Can program profiling support value prediction?. | This paper explores the possibility of using program profiling to enhance the efficiency of value prediction. Value prediction attempts to eliminate true-data dependencies by predicting the outcome values of instructions at run-time and executing true-data dependent instructions based on that prediction. So far, all pu... | Introduction
Modern microprocessor architectures are increasingly
designed to employ multiple execution units that are
capable of executing several instructions (retrieved from a
sequential instruction stream) in parallel. The efficiency of
such architectures is highly dependent on the
instruction-level parallelism (IL... | value-prediction;speculative execution;instruction-level parallelism |
266829 | Procedure placement using temporal ordering information. | Instruction cache performance is very important to instruction fetch efficiency and overall processor performance. The layout of an executable has a substantial effect on the cache miss rate during execution. This means that the performance of an executable can be improved significantly by applying a code-placement alg... | Introduction
The linear ordering of procedures in a program's text segment fixes the addresses of each of these
procedures and this in turn determines the cache line(s) that each procedure will occupy in the
instruction cache. In the case of a direct-mapped cache, conflict misses result when the execution
of the progra... | profiling;conflict misses;code layout |
266830 | Predicting data cache misses in non-numeric applications through correlation profiling. | To maximize the benefit and minimize the overhead of software-based latency tolerance techniques, we would like to apply them precisely to the set of dynamic references that suffer cache misses. Unfortunately, the information provided by the state-of-the-art cache miss profiling technique (summary profiling) is inadequ... | Introduction
As the disparity between processor and memory speeds continues to grow, memory latency is becoming an
increasingly important performance bottleneck. Cache hierarchies are an essential step toward coping with
this problem, but they are not a complete solution. To further tolerate latency, a number of promis... | non-numeric applications;profiling;latency tolerance;correlation;cache miss prediction |
266833 | Cache sensitive modulo scheduling. | This paper focuses on the interaction between software prefetching (both binding and nonbinding) and software pipelining for VLIW machines. First, it is shown that evaluating software pipelined schedules without considering memory effects can be rather inaccurate due to stalls caused by dependences with memory instruct... | Introduction
Software pipelining is a well-known loop scheduling technique that tries to exploit instruction
level parallelism by overlapping several consecutive iterations of the loop and executing them in
parallel ([14]).
Different algorithms can be found in the literature for generating software pipelined
schedule... | locality analysis;software pipelining;software prefetching;VLIW machines |
266836 | Resource-sensitive profile-directed data flow analysis for code optimization. | Instruction schedulers employ code motion as a means of instruction reordering to enable scheduling of instructions at points where the resources required for their execution are available. In addition, driven by the profiling data, schedulers take advantage of predication and speculation for aggressive code motion acr... | Introduction
Data flow analysis provides us with facts about
a program by statically analyzing it. Algorithms
for partial dead code elimination (PDE)
Copyright 1997 IEEE. Published in the Proceedings of
Micro-30, December 1-3, 1997 in Research Triangle Park, North
Carolina. Personal use of this material is per... | code optimization;functional unit resources;aggressive code motion;instruction schedulers;data flow algorithms;optimization;data flow analysis;partial dead code elimination;instruction reordering;resource availability;partial redundancy elimination;resource-sensitive profile-directed data flow analysis |
267960 | An approach for exploring code improving transformations. | Although code transformations are routinely applied to improve the performance of programs for both scalar and parallel machines, the properties of code-improving transformations are not well understood. In this article we present a framework that enables the exploration, both analytically and experimentally, of proper... | Introduction
Although code improving transformations have been applied by compilers for many years,
the properties of these transformations are not well understood. It is widely recognized that the
place in the program code where a transformation is applied, the order of applying code
transformations, and the selection... | code-improving transformations;enabling and disabling of optimizations;automatic generation of optimizers;specification of program optimizations;parallelizing transformations |
268184 | The T-Ruby Design System. | This paper describes the T-Ruby system for designing VLSI circuits, starting from formal specifications in which they are described in terms of relational abstractions of their behaviour. The design process involves correctness-preserving transformations based on proved equivalences between relations, together with the... | Introduction
This paper describes a computer-based system, known as T-Ruby [12], for designing
VLSI circuits starting from a high-level, mathematical specification of their behaviour:
A circuit is described by a binary relation between appropriate, possibly complex domains
of values, and simple relations can be compose... | hardware description languages;relational specification;synchronous circuit design;correctness-preserving transformations |
268186 | Bounded Delay Timing Analysis of a Class of CSP Programs. | We describe an algebraic technique for performing timing analysis of a class of asynchronous circuits described as CSP programs (including Martins probe operator) with the restrictions that there is no OR-causality and that guard selection is either completely free or mutually exclusive. Such a description is transform... | Introduction
There has been much work in the past decade on the synthesis of speed-independent
(quasi-delay-insensitive) circuits. What we develop in this paper are basic results that allow designers to reason
about, and thus synthesize, non-speed-independent or timed circuits. Whether designing timed
asynchronous cir... | asynchronous systems;concurrent systems;time separation of events;timing analysis;abstract algebra |
268421 | Understanding the sources of variation in software inspections. | In a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that c... | Introduction
Software inspection has long been regarded as a simple, effective, and inexpensive way of detecting
and removing defects from software artifacts. Most organizations follow a three-step procedure
of Preparation, Collection, and Repair. First, each member of a team of reviewers reads the
artifact, detecting ... | software process;software inspection;empirical studies;statistical models |
268898 | Asynchronous parallel algorithms for test set partitioned fault simulation. | We propose two new asynchronous parallel algorithms for test set partitioned fault simulation. The algorithms are based on a new two-stage approach to parallelizing fault simulation for sequential VLSI circuits in which the test set is partitioned among the available processors. These algorithms provide the same result... | Introduction
Fault simulation is an important step in the electronic design
process and is used to identify faults that cause erroneous
responses at the outputs of a circuit for a given test set. The
objective of a fault simulation algorithm is to find the fraction
of total faults in a sequential circuit that is detect... | circuit analysis computing;synchronous two stage approach;test set partitioned fault simulation;MPI;dynamic characteristics;shared memory multiprocessor;redundant work;sequential VLSI circuits;circuit CAD;software portability;Message Passing Interface;asynchronous parallel algorithms |
268906 | Optimistic distributed simulation based on transitive dependency tracking. | In traditional optimistic distributed simulation protocols, a logical process (LP) receiving a straggler rolls back and sends out anti-messages. The receiver of an anti-message may also roll back and send out more anti-messages. So a single straggler may result in a large number of anti-messages and multiple rollbacks ... | Introduction
We modify the time warp algorithm to quickly stop
the spread of erroneous computation. Our scheme
does not require output queues and anti-messages.
This results in less memory overhead and simple memory
management algorithms. It also eliminates the
problem of cascading rollbacks and echoing [15], resulting... | message tagging;optimistic distributed simulation;transitive dependency information;transitive dependency tracking;time warp simulation;process rollback;rollback broadcasting;straggler;memory management;dependency information;distributed recovery;anti-messages;optimistic distributed simulation protocols;logical process |
268910 | Breadth-first rollback in spatially explicit simulations. | The efficiency of parallel discrete event simulations that use the optimistic protocol is strongly dependent on the overhead incurred by rollbacks. The paper introduces a novel approach to rollback processing which limits the number of events rolled back as a result of a straggler or antimessage. The method, called bre... | Introduction
One of the major challenges of Parallel Discrete
Event Simulation (PDES) is to achieve good performance.
This goal is difficult to attain, because, by its
very nature, discrete event simulation organizes events
in a priority queue based on the timestamp of events,
and processes them in that order. When p... | speedup;simulation objects;spatially explicit simulations;discrete event simulation;causal relationship recovery;rollback overhead;incremental state saving;optimistic protocol;straggler;antimessage;breadth-first rollback;parallel discrete event simulations;rollback processing |
268916 | Billiards and related systems on the bulk-synchronous parallel model. | With two examples we show the suitability of the bulk-synchronous parallel (BSP) model for discrete-event simulation of homogeneous large-scale systems. This model provides a unifying approach for general purpose parallel computing which in addition to efficient and scalable computation, ensures portability across diff... | Introduction
Parallel discrete-event simulation of billiards and
related systems is considered a non-obvious algorithmic
problem, and has received attention in the literature
[1, 5, 7, 8, 9, 11, 13, 18, 24, 23, 25]. Currently an
important class of applications for these simulations
is in computational physics [6, 7, 10... | billiards;general purpose parallel computing;discrete-event simulation;homogeneous large-scale systems;discrete event simulation;ising-spin models;scalable computation;colliding hard-spheres;bulk-synchronous parallel model |
269004 | Compositional refinement of interactive systems. | We introduce a method to describe systems and their components by functional specification techniques. We define notions of interface and interaction refinement for interactive systems and their components. These notions of refinement allow us to change both the syntactic (the number of channels and sorts of messages a... | Introduction
A distributed interactive system consists of a family of interacting components.
For reducing the complexity of the development of distributed interactive systems
they are developed by a number of successive development steps. By each
step the system is described in more detail and closer to an implementat... | specification;interactive systems;refinement |
269971 | Making graphs reducible with controlled node splitting. | Several compiler optimizations, such as data flow analysis, the exploitation of instruction-level parallelism (ILP), loop transformations, and memory disambiguation, require programs with reducible control flow graphs. However, not all programs satisfy this property. A new method for transforming irreducible control fl... | Introduction
In current computer architectures, improvements can be obtained by the exploitation of instruction level parallelism
(ILP). ILP is made possible by higher transistor densities, which allow the duplication of function units
and data paths. Exploitation of ILP consists of mapping the ILP of the applicatio... | instruction-level parallelism;control flow graphs;reducibility;irreducibility;compilation;node splitting |
270413 | Optimal Parallel Routing in Star Networks. | AbstractStar networks have recently been proposed as attractive alternatives to the popular hypercube for interconnecting processors on a parallel computer. In this paper, we present an efficient algorithm that constructs an optimal parallel routing in star networks. Our result improves previous results for the problem... | Introduction
The star network [2] has recently received considerable attention from researchers as a graph model
for interconnection network. It has been shown that it is an attractive alternative to the widely used
hypercube model. Like the hypercube, the star network is vertex- and edge-symmetric, strongly
hierarchical... | shortest path;parallel routing;star network;partition matching;network routing;graph container |
270650 | Analysis for Chorin''s Original Fully Discrete Projection Method and Regularizations in Space and Time. | Over twenty-five years ago, Chorin proposed a computationally efficient method for computing viscous incompressible flow which has influenced the development of efficient modern methods and inspired much analytical work. Using asymptotic error analysis techniques, it is now possible to describe precisely the kind of er... | Introduction
In 1968, Chorin [3] proposed a computationally efficient method for computing viscous,
incompressible flow. The method was based on the primitive variables, velocity and
pressure, with all unknowns at the same grid points. The discretization was centered
in space (second order in space step h) and implicit... | parasitic modes;numerical boundary layers;projection methods |
270656 | First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity. | Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equation... | Introduction
In earlier work [10], [11], we developed least-squares functionals for
a first-order system formulation of general second-order elliptic scalar partial differential
equations. The functional developed in [11] was shown to be elliptic in the sense that its
homogeneous form applied to the (pressure and vel... | stokes equations;multigrid;least squares |
270671 | On Krylov Subspace Approximations to the Matrix Exponential Operator. | Krylov subspace methods for approximating the action of matrix exponentials are analyzed in this paper. We derive error bounds via a functional calculus of Arnoldi and Lanczos methods that reduces the study of Krylov subspace approximations of functions of matrices to that of linear systems of equations. As a side res... | Introduction
In this article we study Krylov subspace methods for the approximation
of exp(-A)v when A is a matrix of large dimension and v is a given vector;
A may be scaled by a factor associated with the step size in a time integration
method. Such Krylov approximations were apparently first used in Chemical
Physics ... | matrix exponential function;matrix-free time integration methods;arnoldi method;superlinear convergence;lanczos method;conjugate gradient-type methods;krylov subspace methods |
270932 | Star Unfolding of a Polytope with Applications. | We introduce the notion of a star unfolding of the surface ${\cal P}$ of a three-dimensional convex polytope with n vertices, and use it to solve several problems related to shortest paths on ${\cal P}$.The first algorithm computes the edge sequences traversed by shortest paths on ${\cal P}$ in time $O(n^6 \beta (n) \l... | Introduction
The problem of computing shortest paths in Euclidean space amidst polyhedral obstacles arises
in planning optimal collision-free paths for a given robot, and has been widely studied. In two
dimensions, the problem is easy to solve and a number of efficient algorithms have been developed,
see e.g. [SS86, We... | geodesics;shortest paths;star unfolding;convex polytopes |
270934 | Scalable Parallel Implementations of List Ranking on Fine-Grained Machines. | AbstractWe present analytical and experimental results for fine-grained list ranking algorithms. We compare the scalability of two representative algorithms on random lists, then address the question of how the locality properties of image edge lists can be used to improve the performance of this highly data-dependent ... | Introduction
List ranking is a fundamental operation in many algorithms for graph theory and computer vision
problems. Moreover, it is representative of a large class of fine grained data dependent algorithms.
Given a linked list of n cells, list ranking determines the distance of each cell from the head of
the list. O... | parallel algorithms;fine-grained parallel processing;image processing;scalable algorithms;list ranking;computer vision |
270935 | A Spectral Technique for Coloring Random 3-Colorable Graphs. | Let G_{3n,p,3} be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algori... | Introduction
A vertex coloring of a graph G is proper if no adjacent vertices receive the same color. The chromatic
number χ(G) of G is the minimum number of colors in a proper vertex coloring of it. The problem
of determining or estimating this parameter has received a considerable amount of attention in
Combinatorics... | graph eigenvalues;graph coloring;algorithms;random graphs |
270948 | Vienna-Fortran/HPF Extensions for Sparse and Irregular Problems and Their Compilation. | Vienna Fortran, High Performance Fortran (HPF), and other data parallel languages have been introduced to allow the programming of massively parallel distributed-memory machines (DMMP) at a relatively high level of abstraction, based on the SPMD paradigm. Their main features include directives to express the di... | Introduction
During the past few years, significant efforts have been undertaken by academia, government
laboratories and industry to define high-level extensions of standard programming languages, in
particular Fortran, to facilitate data parallel programming on a wide range of parallel architectures
without sacrifici... | sparse computation;data-parallel language and compiler;distributed-memory machines;runtime support |
271013 | Structuring Communication Software for Quality-of-Service Guarantees. | A growing number of real-time applications require quality-of-service (QoS) guarantees from the underlying communication subsystem. The communication subsystem (host and network) must support real-time communication services to provide the required QoS of these applications. In this paper, we propose architectu... | Introduction
Distributed multimedia applications (e.g., video conferencing,
video-on-demand, digital libraries) and distributed real-time
command/control systems require certain quality-of-service
(QoS) guarantees from the underlying network. QoS
guarantees may be specified in terms of parameters such
as the end-to-e... | traffic enforcement;CPU;QoS-sensitive resource management;and link scheduling;real-time communication |
271016 | A Multiframe Model for Real-Time Tasks. | The well-known periodic task model of Liu and Layland [10] assumes a worst-case execution time bound for every task and may be too pessimistic if the worst-case execution time of a task is much longer than the average. In this paper, we give a multiframe real-time task model which allows the execution time of a... | Introduction
The well-known periodic task model by Liu and Layland (L&L) [1] assumes a worst-case execution
time bound for every task. While this is a reasonable assumption for process-control-type real-time
applications, it may be overly conservative [4] for situations where the average-case execution
time of a task is... | utilization bound;task model;scheduling;real-time |
271027 | An interaction of coherence protocols and memory consistency models in DSM systems. | Coherence protocols and memory consistency models are two important issues in hardware coherent shared memory multiprocessors and software distributed shared memory (DSM) systems. Over the years, many researchers have studied each of these two issues extensively. However, the interaction between them has not been... | Introduction
Distributed Shared Memory (DSM) systems have gained popular acceptance by combining the scalability
and low cost of distributed systems with the ease of use of a single address space. Generally,
there are two methods to implement DSM systems: hardware vs. software. Cache-Coherent
Non-Uniform-Memory-Access ... | event ordering;software DSM systems;memory consistency models;coherence protocol;hardware DSM systems |
271188 | Analysis and Reduction for Angle Calculation Using the CORDIC Algorithm. | In this paper, we consider the errors appearing in angle computations with the CORDIC algorithm (circular and hyperbolic coordinate systems) using fixed-point arithmetic. We include errors arising not only from the finite number of iterations and the finite width of the data path, but also from the finite numbe... | INTRODUCTION
The CORDIC (COordinate Rotation DIgital Computer) algorithm is an iterative technique that
permits computing several transcendental functions using only addition and shift operations
[15] [16]. Among these functions are trigonometric functions such as sine, cosine, tangent,
arctangent and module... | error analysis;operand normalization;angle computation;redundant arithmetic;CORDIC algorithm |
271426 | Parallel Cluster Identification for Multidimensional Lattices. | The cluster identification problem is a variant of connected component labeling that arises in cluster algorithms for spin models in statistical physics. We present a multidimensional version of Belkhale and Banerjee's Quad algorithm for connected component labeling on distributed memory parallel computers. Our... | Introduction
The cluster identification problem is a variant of connected component labeling that arises
in cluster algorithms for spin models in statistical mechanics. In these applications, the
graph to be labeled is a d-dimensional hypercubic lattice of variables called spins, with edges
(bonds) that may exist betwe... | swendson-wang dynamics;parallel algorithm;cluster identification;ising model;connected component labeling |
271442 | Prior Learning and Gibbs Reaction-Diffusion. | This article addresses two important themes in early visual computation: First, it presents a novel theory for learning the universal statistics of natural images (a prior model for typical cluttered scenes of the world) from a set of natural images, and, second, it proposes a general framework of designing reactio... | texture pattern rendering, denoising, image enhancement and clutter removal by careful
choice of both prior and data models of this type, incorporating the appropriate features.
Song Chun Zhu is now with the Computer Science Department, Stanford University,
Stanford, CA 94305, and David Mumford is with the Division of ... | clutter modeling;reaction-diffusion;anisotropic diffusion;gibbs distribution;image restoration;texture synthesis;visual learning |
271577 | Proximal Minimization Methods with Generalized Bregman Functions. | We consider methods for minimizing a convex function f that generate a sequence {x_k} by taking x_{k+1} to be an approximate minimizer of f(x) + D_h(x, x_k)/c_k, where c_k > 0 and D_h is the D-function of a Bregman function h. Extensions are made to B-functions that generalize Bregman functions and cover more applications. Conve... | Introduction
We consider the convex minimization problem
min { f(x) : x ∈ X },    (1.1)
where f is a closed proper convex function and X is a nonempty closed
convex set in IR^n. One method for solving (1.1) is the proximal point algorithm (PPA)
[Mar70, Roc76b], which generates a sequence {x_k} via
x_{k+1} = argmin_{x ∈ X} { f(x) + ||x - x_k||^2 / (2 c_k) },
starting from any point x_0, where ||.|| is the Euclidean norm and {c_k} is a sequen... | convex programming;nondifferentiable optimization;b-functions;bregman functions;proximal methods |
271602 | A Cartesian Grid Projection Method for the Incompressible Euler Equations in Complex Geometries. | Many problems in fluid dynamics require the representation of complicated internal or external boundaries of the flow. Here we present a method for calculating time-dependent incompressible inviscid flow which combines a projection method with a "Cartesian grid" approach for representing geometry. In this approach, t... | Introduction
In this paper, we present a numerical method for solving the unsteady incompressible Euler
equations in domains with irregular boundaries. The underlying discretization method is
a projection method [22, 5]. Discretizations of the nonlinear convective terms and lagged
pressure gradient are first used to co... | cartesian grid;incompressible Euler equations;projection method |
271621 | The Spectral Decomposition of Nonsymmetric Matrices on Distributed Memory Parallel Computers. | The implementation and performance of a class of divide-and-conquer algorithms for computing the spectral decomposition of nonsymmetric matrices on distributed memory parallel computers are studied in this paper. After presenting a general framework, we focus on a spectral divide-and-conquer (SDC) algorithm with Newton... | Introduction
A standard technique in parallel computing is to build new algorithms from existing high
performance building blocks. For example, the LAPACK linear algebra library [1] is writ-
Department of Mathematics, University of Kentucky, Lexington, KY 40506.
Computer Science Division and Mathematics Department, U... | spectral divide-and-conquer;invariant subspaces;nonsymmetric matrices;ScaLAPACK;parallelizable;eigenvalue problem;spectral decomposition |
271753 | The Influence of Interface Conditions on Convergence of Krylov-Schwarz Domain Decomposition for the Advection-Diffusion Equation. | Several variants of Schwarz domain decomposition, which differ in the choice of interface conditions, are studied in a finite volume context. Krylov subspace acceleration, GMRES in this paper, is used to accelerate convergence. Using a detailed investigation of the systems involved, we can minimize the memory requireme... | Introduction
We consider domain decomposition for the two-dimensional advection-diffusion equation with
application to a boundary conforming finite volume incompressible Navier-Stokes solver in mind,
see [27, 6, 20]. Therefore, our interests are more practical than theoretical. An advantage of the
boundary conforming a... | krylov subspace method;advection-diffusion equation;neumann-dirichlet method;schwarz alternating method;domain decomposition;krylov-schwarz algorithm |
271759 | A Parallel Grid Modification and Domain Decomposition Algorithm for Local Phenomena Capturing and Load Balancing. | Lions' nonoverlapping Schwarz domain decomposition method based on a finite difference discretization is applied to problems with fronts or layers. To obtain an accurate approximation of the solution by solving small linear systems, grid refinement is performed on subdomains that contain fronts and layers and ... | Introduction
Grid refinement methods have proved to be essential and efficient in solving large-scale problems
with localized phenomena, such as boundary layers or wave fronts. For many engineering
problems, however, this still leads to large linear systems of algebraic equations, which cannot
be solved easi... | finite difference method;grid modification;parallel computing;domain decomposition method |
271778 | Constructing compact models of concurrent Java programs. | Finite-state verification technology (e.g., model checking) provides a powerful means to detect concurrency errors, which are often subtle and difficult to reproduce. Nevertheless, widespread use of this technology by developers is unlikely until tools provide automated support for extracting the required finite-state ... | Introduction
Finite-state analysis tools (e.g., model checkers) can automatically
detect concurrency errors, which are often subtle
and difficult to reproduce. Before such tools can be applied
to software, a finite-state model of the program must be
constructed. This model must be accurate enough to verify
the requirem... | model extraction;finite-state verification;static analysis |
271798 | Improving efficiency of symbolic model checking for state-based system requirements. | We present various techniques for improving the time and space efficiency of symbolic model checking for system requirements specified as synchronous finite state machines. We used these techniques in our analysis of the system requirements specification of TCAS II, a complex aircraft collision avoidance system. They t... | Introduction
Formal verification based on state exploration can be considered
an extreme form of simulation: every possible behavior
of the system is checked for correctness. Symbolic model
checking [?] using binary decision diagrams (BDDs) [?] is
an efficient state-exploration technique for finite state systems;
it ... | formal verification;reachability analysis;abstraction;system requirements specification;TCAS II;partitioned transition relation;binary decision diagrams;statecharts;symbolic model checking;RSML |
271802 | On the limit of control flow analysis for regression test selection. | Automated analyses for regression test selection (RTS) attempt to determine if a modified program, when run on a test t, will have the same behavior as an old version of the program run on t, but without running the new program on t. RTS analyses must confront a price/performance tradeoff: a more precise analysis might... | Introduction
The goal of regression test selection (RTS) analysis is to answer the
following question as inexpensively as possible:
Given test input t and programs old and new, does new(t)
have the same observable behavior as old(t)?
To appear, 1998 ACM/SIGSOFT International Symposium on
Software Testing and Analysis
O... | profiling;coverage;control flow analysis;regression testing |
271804 | All-du-path coverage for parallel programs. | One significant challenge in bringing the power of parallel machines to application programmers is providing them with a suite of software tools similar to the tools that sequential programmers currently utilize. In particular, automatic or semi-automatic testing tools for parallel programs are lacking. This paper desc... | Introduction
Recent trends in computer architecture and computer networks suggest
that parallelism will pervade workstations, personal computers,
and network clusters, causing parallelism to become available
to more than just the users of traditional supercomputers. Experience
with using parallelizing compilers and a... | parallel programming;all-du-path coverage;testing tool |
272132 | The Static Parallelization of Loops and Recursions. | We demonstrate approaches to the static parallelization of loops and recursions on the example of the polynomial product. Phrased as a loop nest, the polynomial product can be parallelized automatically by applying a space-time mapping technique based on linear algebra and linear programming. One can choose a parallel ... | Introduction
We give an overview of several approaches to the static parallelization of loops and recursions,
which we pursue at the University of Passau. Our emphasis in this paper is on
divide-and-conquer recursions.
Static parallelization has the following benefits:
1. Efficiency of the target code. One avoids t... | polytope model;divide-and-conquer;higher-order function;SPMD;loop nest;homomorphism;parallelization;skeletons |
272790 | Approximate Inverse Techniques for Block-Partitioned Matrices. | This paper proposes some preconditioning options when the system matrix is in block-partitioned form. This form may arise naturally, for example, from the incompressible Navier-Stokes equations, or may be imposed after a domain decomposition reordering. Approximate inverse techniques are used to generate sparse appr... | Introduction
Consider the block partitioning of a matrix A, in the form
(1)
where the blocking naturally occurs due to the ordering of the equations and the variables.
Matrices of this form arise in many applications, such as in the incompressible Navier-Stokes
equations, where the scalar momentum equations and the contin... | sparse approximate inverse;block-partitioned matrix;navier-stokes;schur complement;preconditioning |
272879 | Circuit Retiming Applied to Decomposed Software Pipelining. | This paper elaborates on a new view on software pipelining, called decomposed software pipelining, introduced by Gasperoni and Schwiegelshohn, and by Wang, Eisenbeis, Jourdan, and Su. The approach is to decouple the problem into resource constraints and dependence constraints. Resource constraints managemen... | Introduction
SOFTWARE PIPELINING is an instruction-level loop
scheduling technique for achieving high performance on
processors such as superscalar or VLIW (Very Long Instruction
Word) architectures. The main problem is to
cope with both data dependences and resource constraints
which make the problem NP-complete in gen... | software pipelining;circuit retiming;list scheduling;cyclic scheduling;modulo scheduling |
272885 | Abstractions for Portable, Scalable Parallel Programming. | In parallel programming, the need to manage communication, load imbalance, and irregularities in the computation puts substantial demands on the programmer. Key properties of the architecture, such as the number of processors and the cost of communication, must be exploited to achieve good performance, but codi... | Introduction
The diversity of parallel architectures puts the goals of performance and portability in conflict. Programmers
are tempted to exploit machine details, such as the interconnection structure and the granularity of
parallelism, to maximize performance. Yet software portability is needed to reduce the high cost ...
273411 | Automatic Determination of an Initial Trust Region in Nonlinear Programming. | This paper presents a simple but efficient way to find a good initial trust region radius (ITRR) in trust region methods for nonlinear optimization. The method consists of monitoring the agreement between the model and the objective function along the steepest descent direction, computed at the st... | Introduction
Trust region methods for unconstrained optimization were
first introduced by Powell in [14]. Since then, these methods have enjoyed a good
reputation on the basis of their remarkable numerical reliability in conjunction with
a sound and complete convergence theory. They have been intensively studied and
... | initial trust region;trust region methods;numerical results;nonlinear optimization |
273573 | On the Stability of Null-Space Methods for KKT Systems. | This paper considers the numerical stability of null-space methods for Karush-Kuhn-Tucker (KKT) systems, particularly in the context of quadratic programming. The methods we consider are based on the direct elimination of variables, which is attractive for solving large sparse systems. Ill-conditioning in a certain s... | Introduction
A Karush-Kuhn-Tucker (KKT) system is a linear system
involving a symmetric matrix of the form
(A version of this paper was presented at the Dundee Biennial Conference in Numerical
Analysis, June 1995, and at the Manchester IMA Conference on Linear Algebra, July 1995.
R. Fletcher and T. Johnson)
Such systems are... | null-space method;KKT system;ill-conditioning |
273575 | Spectral Perturbation Bounds for Positive Definite Matrices. | Let H and H~ be positive definite matrices. It was shown by Barlow and Demmel, and by Demmel and Veselic, that if one takes a componentwise approach one can prove much stronger bounds on $\lambda_i(H)/\lambda_i(\tilde{H})$ and the components of the eigenvectors of H and H~ than by using the standard normwise perturbation theory. Here a un... | Introduction
If the positive definite matrix H can be written as H = D A D,
where D is diagonal and A is much better conditioned than H, then the eigenvalues
and eigenvectors of H are determined to a high relative accuracy if the entries of the
matrix H are determined to a high relative accuracy. This was shown by Demmel and
Ves... | jacobi's method;symmetric eigenvalue problem;error analysis;perturbation theory;positive definite matrix;graded matrix |