| signature | body | docstring | id |
|---|---|---|---|
@callback_register.register("<STR_LIT>")<EOL>@dice_result(higher_than=<NUM_LIT:3>)<EOL>def print_high_result(result): | print("<STR_LIT>" % result)<EOL>assert result > <NUM_LIT:3><EOL> | Function that prints all rolls higher than 3. | f13339:m3 |
@callback_register.register("<STR_LIT>")<EOL>@dice_result(equal=<NUM_LIT:3>)<EOL>def print_three_message(result): | print("<STR_LIT>" % result)<EOL>assert result == <NUM_LIT:3><EOL> | Function that prints all results equal to 3. | f13339:m4 |
@callback_register.register("<STR_LIT>")<EOL>@dice_result(equal=<NUM_LIT:2>)<EOL>def print_two_message(result): | print("<STR_LIT>" % result)<EOL>assert result == <NUM_LIT:2><EOL> | Function that prints all results equal to 2. | f13339:m5 |
@callback_register.register("<STR_LIT>")<EOL>@dice_result(equal=<NUM_LIT:1>)<EOL>def print_one_message(result): | print("<STR_LIT>" % result)<EOL>assert result == <NUM_LIT:1><EOL> | Function that prints all results equal to 1. | f13339:m6 |
@classmethod<EOL><INDENT>def setup_class(cls):<DEDENT> | class Register(SequenceModel):<EOL><INDENT>id = AttributeField(ID())<EOL>name = CharField()<EOL>addressOffset = IntegerField()<EOL>size = ModelField(Size)<EOL>class Meta:<EOL><INDENT>sequence = [<EOL>SequenceElement('<STR_LIT:name>', min_occurs=<NUM_LIT:1>),<EOL>SequenceElement('<STR_LIT>', min_occurs=<NUM_LIT:1>),<EOL>SequenceElement('<STR_LIT:size>', min_occurs=<NUM_LIT:1>),<EOL>]<EOL><DEDENT><DEDENT>cls.register_dict = {<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT:0>',<EOL>'<STR_LIT:size>': '<STR_LIT>'<EOL>}<EOL>cls.cls = Register<EOL>cls.instance = Register.from_dict(cls.register_dict)<EOL> | setup any state specific to the execution of the given class (which
usually contains tests). | f13340:c0:m0 |
@classmethod<EOL><INDENT>def setup_class(cls):<DEDENT> | class Register(Model):<EOL><INDENT>id = AttributeField(ID())<EOL>name = CharField()<EOL>addressOffset = IntegerField()<EOL>size = ModelField(Size)<EOL><DEDENT>register_dict = {<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT:0>',<EOL>'<STR_LIT:size>': {<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>}<EOL>}<EOL>cls.instance = Register()<EOL>cls.instance.populate(register_dict)<EOL>cls.instance.validate()<EOL> | setup any state specific to the execution of the given class (which
usually contains tests). | f13340:c1:m0 |
def parse_time(timestring): | timestring = str(timestring).strip()<EOL>for regex, pattern in TIME_FORMATS:<EOL><INDENT>if regex.match(timestring):<EOL><INDENT>found = regex.search(timestring).groupdict()<EOL>dt = datetime.strptime(found['<STR_LIT:time>'], pattern)<EOL>dt = datetime.combine(date.today(), dt.time())<EOL>if '<STR_LIT>' in found and found['<STR_LIT>'] is not None:<EOL><INDENT>dt = dt.replace(microsecond=int(found['<STR_LIT>'][<NUM_LIT:1>:]))<EOL><DEDENT>if '<STR_LIT>' in found and found['<STR_LIT>'] is not None:<EOL><INDENT>dt = dt.replace(tzinfo=Timezone(found.get('<STR_LIT>', '<STR_LIT>')))<EOL><DEDENT>return dt<EOL><DEDENT><DEDENT>raise ParseError()<EOL> | Attempts to parse an ISO8601 formatted ``timestring``.
Returns a ``datetime.datetime`` object. | f13346:m1 |
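The literals in the row above are anonymized, but the try-each-pattern loop it describes can be sketched with the stdlib alone. This is a hypothetical helper, not the library's `parse_time`; the format strings are assumptions standing in for the anonymized `TIME_FORMATS` table:

```python
from datetime import date, datetime

def parse_time_sketch(timestring):
    """Hypothetical sketch of the loop above: try a few common
    ISO 8601 time layouts, then anchor the parsed time on today's
    date, mirroring the datetime.combine(date.today(), ...) step."""
    timestring = str(timestring).strip()
    for pattern in ("%H:%M:%S.%f", "%H:%M:%S", "%H:%M"):
        try:
            parsed = datetime.strptime(timestring, pattern)
        except ValueError:
            continue  # this layout did not match, try the next one
        return datetime.combine(date.today(), parsed.time())
    raise ValueError("unrecognized ISO 8601 time: %r" % timestring)
```

Falling through every pattern raises, which plays the role of the `ParseError()` at the end of the original body.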
def parse(representation): | representation = str(representation).upper().strip()<EOL>return parse_date(representation)<EOL> | Attempts to parse an ISO8601 formatted ``representation`` string,
which could be of any valid ISO8601 format (date, time, duration,
interval).
Return value is specific to ``representation``. | f13349:m0 |
def get_source(self, key, name_spaces=None, default_prefix='<STR_LIT>'): | source = self.source or key<EOL>prefix = default_prefix<EOL>if name_spaces and self.name_space and self.name_space in name_spaces:<EOL><INDENT>prefix = '<STR_LIT>'.join([name_spaces[self.name_space], '<STR_LIT::>'])<EOL><DEDENT>return '<STR_LIT>'.join([prefix, source])<EOL> | Generates the dictionary key for the serialized representation
based on the instance variable source and a provided key.
:param str key: name of the field in model
:returns: self.source or key | f13351:c1:m3 |
def validate(self, raw_data, **kwargs): | return raw_data<EOL> | The validate method validates raw_data against the field.
:param raw_data: raw data for field
:type raw_data: str or other valid formats
:returns: validated_data
:raises ValidationException: if self.required and raw_data is None | f13351:c1:m4 |
def validate(self, raw_data, **kwargs): | try:<EOL><INDENT>converted_data = int(raw_data)<EOL>return super(IntegerField, self).validate(converted_data)<EOL><DEDENT>except ValueError:<EOL><INDENT>raise ValidationException(self.messages['<STR_LIT>'], repr(raw_data))<EOL><DEDENT> | Convert the raw_data to an integer. | f13351:c12:m1 |
def validate(self, raw_data, **kwargs): | try:<EOL><INDENT>converted_data = float(raw_data)<EOL>super(FloatField, self).validate(converted_data, **kwargs)<EOL>return raw_data<EOL><DEDENT>except ValueError:<EOL><INDENT>raise ValidationException(self.messages['<STR_LIT>'], repr(raw_data))<EOL><DEDENT> | Convert the raw_data to a float. | f13351:c16:m1 |
def validate(self, raw_data, **kwargs): | super(BooleanField, self).validate(raw_data, **kwargs)<EOL>if isinstance(raw_data, string_types):<EOL><INDENT>valid_data = raw_data.strip().lower() == '<STR_LIT:true>'<EOL><DEDENT>elif isinstance(raw_data, bool):<EOL><INDENT>valid_data = raw_data<EOL><DEDENT>else:<EOL><INDENT>valid_data = raw_data > <NUM_LIT:0><EOL><DEDENT>return valid_data<EOL> | The string ``'True'`` (case insensitive) will be converted
to ``True``, as will any positive integer. | f13351:c18:m0 |
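The coercion rule in that docstring is simple enough to restate as a standalone function. This is a sketch of the same rule, not the library's `BooleanField` (which also runs the parent class's validation first):

```python
def to_bool(raw_data):
    """Sketch of the rule above: the string 'true' (any case,
    surrounding whitespace ignored) maps to True, booleans pass
    through unchanged, and any positive number counts as True."""
    if isinstance(raw_data, str):
        return raw_data.strip().lower() == "true"
    if isinstance(raw_data, bool):
        return raw_data
    return raw_data > 0
```

Note the ordering: `bool` is a subclass of `int`, so the `bool` check must come before the numeric fallback or `False` would be handled by `raw_data > 0` (harmlessly here, but the explicit branch keeps the intent clear).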
def validate(self, raw_data, **kwargs): | super(DateTimeField, self).validate(raw_data, **kwargs)<EOL>try:<EOL><INDENT>if isinstance(raw_data, datetime.datetime):<EOL><INDENT>self.converted = raw_data<EOL><DEDENT>elif self.serial_format is None:<EOL><INDENT>self.converted = parse(raw_data)<EOL><DEDENT>else:<EOL><INDENT>self.converted = datetime.datetime.strptime(raw_data,<EOL>self.serial_format)<EOL><DEDENT>return raw_data<EOL><DEDENT>except (ParseError, ValueError) as e:<EOL><INDENT>msg = self.messages['<STR_LIT>'] % dict(cls=self.__class__.__name__,<EOL>data=raw_data,<EOL>format=self.serial_format)<EOL>raise ValidationException(msg, raw_data)<EOL><DEDENT> | The raw_data is returned unchanged. | f13351:c20:m1 |
def deserialize(self, raw_data, **kwargs): | super(DateTimeField, self).deserialize(raw_data, **kwargs)<EOL>return self.converted<EOL> | A :class:`datetime.datetime` object is returned. | f13351:c20:m2 |
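The two rows above describe a two-phase pattern: `validate` parses and caches the `datetime` but returns the raw input unchanged, and `deserialize` hands back the cached object. A hypothetical field mirroring that pattern (the default format string is an assumption, since the real one is anonymized):

```python
import datetime

class DateTimeSketch:
    """Hypothetical two-phase field: validate() parses and caches,
    deserialize() returns the cached datetime object."""
    def __init__(self, serial_format="%Y-%m-%dT%H:%M:%S"):
        self.serial_format = serial_format
        self.converted = None

    def validate(self, raw_data):
        # Already-parsed datetimes pass through; strings are parsed
        # with the configured format. The raw data is returned
        # unchanged, as the docstring above states.
        if isinstance(raw_data, datetime.datetime):
            self.converted = raw_data
        else:
            self.converted = datetime.datetime.strptime(
                raw_data, self.serial_format)
        return raw_data

    def deserialize(self, raw_data):
        self.validate(raw_data)
        return self.converted
```

Caching the parsed value on the instance is what lets `deserialize` return a `datetime.datetime` while `validate` still echoes the raw input.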
@classmethod<EOL><INDENT>def from_dict(cls, raw_data, **kwargs):<DEDENT> | instance = cls()<EOL>instance.populate(raw_data, **kwargs)<EOL>instance.validate(**kwargs)<EOL>return instance<EOL> | This factory for :class:`Model` creates a Model from a dict object. | f13355:c4:m5 |
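The `from_dict` row above is a compact factory: build an empty instance, populate it from the dict, validate, and only then return it. A minimal sketch of that pattern (a hypothetical base class, not the library's `Model`):

```python
class ModelSketch:
    """Hypothetical base class showing the populate/validate
    factory pattern: from_dict never returns an unvalidated
    instance."""
    def populate(self, raw_data, **kwargs):
        # Naive population: copy each dict entry onto the instance.
        for key, value in raw_data.items():
            setattr(self, key, value)

    def validate(self, **kwargs):
        pass  # subclasses would raise here on invalid state

    @classmethod
    def from_dict(cls, raw_data, **kwargs):
        instance = cls()
        instance.populate(raw_data, **kwargs)
        instance.validate(**kwargs)
        return instance
```

Because `validate` runs inside the factory, callers either get a consistent object or see the validation exception; there is no window where a half-built model escapes.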
def tt2nav(toctree, klass=None, appendix=None, divider=False): | tt = toctree<EOL>divider = '<STR_LIT>' if divider else '<STR_LIT>'<EOL>if appendix:<EOL><INDENT>tt = re.sub(r'<STR_LIT>', r'<STR_LIT>'.format(appendix), tt)<EOL><DEDENT>tt = re.sub(r'<STR_LIT>', r'<STR_LIT>', tt)<EOL>if klass:<EOL><INDENT>tt = re.sub(r'<STR_LIT>', r'<STR_LIT>'.format(klass), tt)<EOL><DEDENT>return tt<EOL> | Injects ``has-dropdown`` and ``dropdown`` classes to HTML
generated by the :func:`toctree` function.
:param str toctree:
HTML generated by the :func:`toctree` function. | f13359:m0 |
def patch_connection(filename='<STR_LIT>'): | if no_redis:<EOL><INDENT>raise Exception("<STR_LIT>")<EOL><DEDENT>RliteConnection.set_file(filename)<EOL>global orig_classes<EOL>if orig_classes:<EOL><INDENT>return<EOL><DEDENT>orig_classes = (redis.connection.Connection,<EOL>redis.connection.ConnectionPool)<EOL>_set_classes(RliteConnection, RliteConnectionPool)<EOL> | ``filename``: rlite filename to store db in, or memory
Patch the redis-py Connection and the
static from_url() of Redis and StrictRedis to use RliteConnection | f13362:m0 |
@contextmanager<EOL>def patch(filename='<STR_LIT>'): | patch_connection(filename)<EOL>yield<EOL>unpatch_connection()<EOL> | Context manager version of patch_connection/unpatch_connection | f13362:m2 |
def open(filename, mode = '<STR_LIT:r>', iline = <NUM_LIT>,<EOL>xline = <NUM_LIT>,<EOL>strict = True,<EOL>ignore_geometry = False,<EOL>endian = '<STR_LIT>' ): | if '<STR_LIT:w>' in mode:<EOL><INDENT>problem = '<STR_LIT>'<EOL>solution = '<STR_LIT>'<EOL>raise ValueError('<STR_LIT:U+002CU+0020>'.join((problem, solution)))<EOL><DEDENT>endians = {<EOL>'<STR_LIT>': <NUM_LIT>, <EOL>'<STR_LIT>': <NUM_LIT>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>}<EOL>if endian not in endians:<EOL><INDENT>problem = '<STR_LIT>'<EOL>candidates = '<STR_LIT:U+0020>'.join(endians.keys())<EOL>raise ValueError(problem + candidates)<EOL><DEDENT>from .. import _segyio<EOL>fd = _segyio.segyiofd(str(filename), mode, endians[endian])<EOL>fd.suopen()<EOL>metrics = fd.metrics()<EOL>f = sufile(<EOL>fd,<EOL>filename = str(filename),<EOL>mode = mode,<EOL>iline = iline,<EOL>xline = xline,<EOL>)<EOL>h0 = f.header[<NUM_LIT:0>]<EOL>dt = h0[words.dt] / <NUM_LIT><EOL>t0 = h0[words.delrt]<EOL>samples = metrics['<STR_LIT>']<EOL>f._samples = (numpy.arange(samples) * dt) + t0<EOL>if ignore_geometry:<EOL><INDENT>return f<EOL><DEDENT>return infer_geometry(f, metrics, iline, xline, strict)<EOL> | Open a seismic unix file.
Behaves identically to open(), except it expects the seismic unix format,
not SEG-Y.
Parameters
----------
filename : str
Path to file to open
mode : {'r', 'r+'}
File access mode, read-only ('r', default) or read-write ('r+')
iline : int or segyio.TraceField
Inline number field in the trace headers. Defaults to 189 as per the
SEG-Y rev1 specification
xline : int or segyio.TraceField
Crossline number field in the trace headers. Defaults to 193 as per the
SEG-Y rev1 specification
strict : bool, optional
Abort if a geometry cannot be inferred. Defaults to True.
ignore_geometry : bool, optional
Opt out on building geometry information, useful for e.g. shot
organised files. Defaults to False.
endian : {'big', 'msb', 'little', 'lsb'}
File endianness, big/msb (default) or little/lsb
Returns
-------
file : segyio.su.file
An open seismic unix file handle
Raises
------
ValueError
If the mode string contains 'w', as it would truncate the file
See also
--------
segyio.open : SEG-Y open
Notes
-----
.. versionadded:: 1.8 | f13371:m0 |
def create(filename, spec): | from . import _segyio<EOL>if not structured(spec):<EOL><INDENT>tracecount = spec.tracecount<EOL><DEDENT>else:<EOL><INDENT>tracecount = len(spec.ilines) * len(spec.xlines) * len(spec.offsets)<EOL><DEDENT>ext_headers = spec.ext_headers if hasattr(spec, '<STR_LIT>') else <NUM_LIT:0><EOL>samples = numpy.asarray(spec.samples)<EOL>endians = {<EOL>'<STR_LIT>': <NUM_LIT>, <EOL>'<STR_LIT>': <NUM_LIT>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>}<EOL>endian = spec.endian if hasattr(spec, '<STR_LIT>') else '<STR_LIT>'<EOL>if endian is None:<EOL><INDENT>endian = '<STR_LIT>'<EOL><DEDENT>if endian not in endians:<EOL><INDENT>problem = '<STR_LIT>'<EOL>opts = '<STR_LIT:U+0020>'.join(endians.keys())<EOL>raise ValueError(problem.format(endian) + opts)<EOL><DEDENT>fd = _segyio.segyiofd(str(filename), '<STR_LIT>', endians[endian])<EOL>fd.segymake(<EOL>samples = len(samples),<EOL>tracecount = tracecount,<EOL>format = int(spec.format),<EOL>ext_headers = int(ext_headers),<EOL>)<EOL>f = segyio.SegyFile(fd,<EOL>filename = str(filename),<EOL>mode = '<STR_LIT>',<EOL>iline = int(spec.iline),<EOL>xline = int(spec.xline),<EOL>endian = endian,<EOL>)<EOL>f._samples = samples<EOL>if structured(spec):<EOL><INDENT>sorting = spec.sorting if hasattr(spec, '<STR_LIT>') else None<EOL>if sorting is None:<EOL><INDENT>sorting = TraceSortingFormat.INLINE_SORTING<EOL><DEDENT>f.interpret(spec.ilines, spec.xlines, spec.offsets, sorting)<EOL><DEDENT>f.text[<NUM_LIT:0>] = default_text_header(f._il, f._xl, segyio.TraceField.offset)<EOL>if len(samples) == <NUM_LIT:1>:<EOL><INDENT>interval = int(samples[<NUM_LIT:0>] * <NUM_LIT:1000>)<EOL><DEDENT>else:<EOL><INDENT>interval = int((samples[<NUM_LIT:1>] - samples[<NUM_LIT:0>]) * <NUM_LIT:1000>)<EOL><DEDENT>f.bin.update(<EOL>ntrpr = tracecount,<EOL>nart = tracecount,<EOL>hdt = interval,<EOL>dto = interval,<EOL>hns = len(samples),<EOL>nso = len(samples),<EOL>format = int(spec.format),<EOL>exth = 
ext_headers,<EOL>)<EOL>return f<EOL> | Create a new segy file.
Create a new segy file with the geometry and properties given by `spec`.
This enables creating SEGY files from your data. The created file supports
all segyio modes, but has an emphasis on writing. The spec must be
complete, otherwise an exception will be raised. A default, empty spec can
be created with ``segyio.spec()``.
Very little data is written to the file, so just calling create is not
sufficient to re-read the file with segyio. Rather, every trace header and
trace must be written to the file to be considered complete.
Create should be used together with Python's ``with`` statement. This ensures
the data is written. Please refer to the examples.
The ``segyio.spec()`` function will default sorting, offsets and everything
in the mandatory group, except format and samples, and requires the caller
to fill in *all* the fields in either of the exclusive groups.
If any field is missing from the first exclusive group, and the tracecount
is set, the resulting file will be considered unstructured. If the
tracecount is set, and all fields of the first exclusive group are
specified, the file is considered structured and the tracecount is inferred
from the xlines/ilines/offsets. The offsets are defaulted to ``[1]`` by
``segyio.spec()``.
Parameters
----------
filename : str
Path to file to create
spec : segyio.spec
Structure of the segy file
Returns
-------
file : segyio.SegyFile
An open segyio file handle, similar to that returned by `segyio.open`
See also
--------
segyio.spec : template for the `spec` argument
Notes
-----
.. versionadded:: 1.1
.. versionchanged:: 1.4
Support for creating unstructured files
.. versionchanged:: 1.8
Support for creating lsb files
The ``spec`` is any object that has the following attributes
Mandatory::
iline : int or segyio.BinField
xline : int or segyio.BinField
samples : array of int
format : { 1, 5 }
1 = IBM float, 5 = IEEE float
Exclusive::
ilines : array_like of int
xlines : array_like of int
offsets : array_like of int
sorting : int or segyio.TraceSortingFormat
OR
tracecount : int
Optional::
ext_headers : int
endian : str { 'big', 'msb', 'little', 'lsb' }
defaults to 'big'
Examples
--------
Create a file:
>>> spec = segyio.spec()
>>> spec.ilines = [1, 2, 3, 4]
>>> spec.xlines = [11, 12, 13]
>>> spec.samples = list(range(50))
>>> spec.sorting = 2
>>> spec.format = 1
>>> with segyio.create(path, spec) as f:
... ## fill the file with data
... pass
...
Copy a file, but shorten all traces by 50 samples:
>>> with segyio.open(srcpath) as src:
... spec = segyio.spec()
... spec.sorting = src.sorting
... spec.format = src.format
... spec.samples = src.samples[:len(src.samples) - 50]
... spec.ilines = src.ilines
... spec.xlines = src.xlines
... with segyio.create(dstpath, spec) as dst:
... dst.text[0] = src.text[0]
... dst.bin = src.bin
... dst.header = src.header
... dst.trace = src.trace
Copy a file, but shift samples time by 50:
>>> with segyio.open(srcpath) as src:
... delrt = 50
... spec = segyio.spec()
... spec.samples = src.samples + delrt
... spec.ilines = src.ilines
... spec.xlines = src.xlines
... with segyio.create(dstpath, spec) as dst:
... dst.text[0] = src.text[0]
... dst.bin = src.bin
... dst.header = src.header
... dst.header = { TraceField.DelayRecordingTime: delrt }
... dst.trace = src.trace
Copy a file, but shorten all traces by 50 samples (since v1.4):
>>> with segyio.open(srcpath) as src:
... spec = segyio.tools.metadata(src)
... spec.samples = spec.samples[:len(spec.samples) - 50]
... with segyio.create(dstpath, spec) as dst:
... dst.text[0] = src.text[0]
... dst.bin = src.bin
... dst.header = src.header
... dst.trace = src.trace | f13373:m2 |
def flush(self): | self.xfd.flush()<EOL> | Flush the file
Write the library buffers to disk, like C's ``fflush``. This method is
mostly useful for testing.
It is not necessary to call this method unless you want to observe your
changes on-disk while the file is still open. The file will
automatically be flushed for you if you use the `with` statement when
your routine is completed.
Notes
-----
.. versionadded:: 1.1
.. warning::
This is not guaranteed to actually write changes to disk, it only
flushes the library buffers. Your kernel is free to defer writing
changes to disk until a later time.
Examples
--------
Flush:
>>> with segyio.open(path) as f:
... # write something to f
... f.flush() | f13375:c0:m5 |
def close(self): | self.xfd.close()<EOL> | Close the file
This method is mostly useful for testing.
It is not necessary to call this method if you're using the `with`
statement, which will close the file for you. Calling methods on a
previously-closed file will raise `IOError`.
Notes
-----
.. versionadded:: 1.1 | f13375:c0:m6 |
def mmap(self): | return self.xfd.mmap()<EOL> | Memory map the file
Memory map the file. This is an advanced feature for speed and
optimization; however, it is no silver bullet. If your file is smaller
than the memory available on your system this will likely result in
faster reads and writes, especially for line modes. However, if the
file is very large, or memory is very pressured, this optimization
might cause overall system slowdowns. However, if you're opening the
same file from many different instances of segyio then memory mapping
may significantly reduce the memory pressure.
If this call returns true, the file is memory mapped. If memory mapping
was build-time disabled or is not available for your platform this call
always returns false. If the memory mapping is unsuccessful you can keep
using segyio - reading and writing falls back on non-memory mapped
features.
Returns
-------
success : bool
Returns True if the file was successfully memory mapped, False if
not
Notes
-----
.. versionadded:: 1.1
Examples
--------
Memory map:
>>> mapped = f.mmap()
>>> if mapped: print( "File is memory mapped!" )
File is memory mapped!
>>> pass # keep using segyio as per usual
>>> print( f.trace[10][7] )
1.02548 | f13375:c0:m7 |
@property<EOL><INDENT>def dtype(self):<DEDENT> | return self._dtype<EOL> | The data type object of the traces. This is the format most accurate
and efficient to exchange with the underlying file, and the data type
in which you will find the trace data.
Returns
-------
dtype : numpy.dtype
Notes
-----
.. versionadded:: 1.6 | f13375:c0:m8 |
@property<EOL><INDENT>def sorting(self):<DEDENT> | return self._sorting<EOL> | Inline or crossline sorting, or Falsey (None or 0) if unstructured.
Returns
-------
sorting : int | f13375:c0:m9 |
@property<EOL><INDENT>def tracecount(self):<DEDENT> | return self._tracecount<EOL> | Number of traces in this file
Equivalent to ``len(f.trace)``
Returns
-------
count : int
Number of traces in this file | f13375:c0:m10 |
@property<EOL><INDENT>def samples(self):<DEDENT> | return self._samples<EOL> | Return the array of samples with appropriate intervals.
Returns
-------
samples : numpy.ndarray of int
Notes
-----
It holds that ``len(f.samples) == len(f.trace[0])`` | f13375:c0:m11 |
@property<EOL><INDENT>def offsets(self):<DEDENT> | return self._offsets<EOL> | Return the array of offset names. For post-stack data, this array has a
length of 1
Returns
-------
offsets : numpy.ndarray of int | f13375:c0:m12 |
@property<EOL><INDENT>def ext_headers(self):<DEDENT> | return self._ext_headers<EOL> | Extra text headers
The number of extra text headers, given by the ``ExtendedHeaders``
field in the binary header.
Returns
-------
headers : int
Number of extra text headers | f13375:c0:m13 |
@property<EOL><INDENT>def unstructured(self):<DEDENT> | return self.ilines is None<EOL> | If the file is unstructured, sophisticated addressing modes that
require the file to represent a proper cube won't work, and only raw
data reading and writing is supported.
Returns
-------
unstructured : bool
``True`` if this file is unstructured, ``False`` if not | f13375:c0:m14 |
@property<EOL><INDENT>def header(self):<DEDENT> | return self._header<EOL> | Interact with segy in header mode
Returns
-------
header : Header
Notes
-----
.. versionadded:: 1.1 | f13375:c0:m15 |
@header.setter<EOL><INDENT>def header(self, val):<DEDENT> | self.header[:] = val<EOL> | headers macro assignment
A convenient way for operating on all headers of a file is to use the
default full-file range. It will write headers 0, 1, ..., n, but uses
the iteration specified by the right-hand side (i.e. can skip headers
etc).
If the right-hand-side headers are exhausted before all the destination
file headers the writing will stop, i.e. not all headers in the
destination file will be written to.
Examples
--------
Copy headers from file g to file f:
>>> f.header = g.header
Set offset field:
>>> f.header = { TraceField.offset: 5 }
Copy every 12th header from the file g to f's 0, 1, 2...:
>>> f.header = g.header[::12]
>>> f.header[0] == g.header[0]
True
>>> f.header[1] == g.header[12]
True
>>> f.header[2] == g.header[2]
False
>>> f.header[2] == g.header[24]
True | f13375:c0:m16 |
def attributes(self, field): | return Attributes(field, self.xfd, self.tracecount)<EOL> | File-wide attribute (header word) reading
Lazily gather a single header word for every trace in the file. The
array can be sliced, supports index lookup, and numpy-style
list-of-indices.
Parameters
----------
field : int or segyio.TraceField
field
Returns
-------
attrs : Attributes
A sliceable array_like of header words
Notes
-----
.. versionadded:: 1.1 | f13375:c0:m17 |
@property<EOL><INDENT>def trace(self):<DEDENT> | return self._trace<EOL> | Interact with segy in trace mode.
Returns
-------
trace : Trace
Notes
-----
.. versionadded:: 1.1 | f13375:c0:m18 |
@trace.setter<EOL><INDENT>def trace(self, val):<DEDENT> | self.trace[:] = val<EOL> | traces macro assignment
Convenient way for setting traces from 0, 1, ... n, based on the
iterable set of traces on the right-hand-side.
If the right-hand-side traces are exhausted before all the destination
file traces the writing will stop, i.e. not all traces in the
destination file will be written.
Notes
-----
.. versionadded:: 1.1
Examples
--------
Copy traces from file f to file g:
>>> f.trace = g.trace
Copy first half of the traces from g to f:
>>> f.trace = g.trace[:len(g.trace)//2]
Fill the file with one trace (filled with zeros):
>>> tr = np.zeros(len(f.samples))
>>> f.trace = itertools.repeat(tr)
For advanced users: sometimes you want to load the entire segy file
to memory and apply your own structural manipulations or operations
on it. Some segy files are very large and may not fit, in which
case this feature will break down. This is an optimisation feature;
using it should generally be driven by measurements.
Read the first 10 traces:
>>> f.trace.raw[0:10]
Read *all* traces to memory:
>>> f.trace.raw[:]
Read every other trace to memory:
>>> f.trace.raw[::2] | f13375:c0:m19 |
@property<EOL><INDENT>def ilines(self):<DEDENT> | return self._ilines<EOL> | Inline labels
The inline labels in this file, if structured, else None
Returns
-------
inlines : array_like of int or None | f13375:c0:m20 |
@property<EOL><INDENT>def iline(self):<DEDENT> | if self.unstructured:<EOL><INDENT>raise ValueError(self._unstructured_errmsg)<EOL><DEDENT>if self._iline is not None:<EOL><INDENT>return self._iline<EOL><DEDENT>self._iline = Line(self,<EOL>self.ilines,<EOL>self._iline_length,<EOL>self._iline_stride,<EOL>self.offsets,<EOL>'<STR_LIT>',<EOL>)<EOL>return self._iline<EOL> | Interact with segy in inline mode
Returns
-------
iline : Line or None
Raises
------
ValueError
If the file is unstructured
Notes
-----
.. versionadded:: 1.1 | f13375:c0:m21 |
@iline.setter<EOL><INDENT>def iline(self, value):<DEDENT> | self.iline[:] = value<EOL> | inlines macro assignment
Convenient way for setting inlines, from left-to-right as the inline
numbers are specified in the file.ilines property, from an iterable
set on the right-hand-side.
If the right-hand-side inlines are exhausted before all the destination
file inlines the writing will stop, i.e. not all inlines in the
destination file will be written.
Notes
-----
.. versionadded:: 1.1
Examples
--------
Copy inlines from file f to file g:
>>> f.iline = g.iline | f13375:c0:m22 |
@property<EOL><INDENT>def xlines(self):<DEDENT> | return self._xlines<EOL> | Crossline labels
The crossline labels in this file, if structured, else None
Returns
-------
crosslines : array_like of int or None | f13375:c0:m23 |
@property<EOL><INDENT>def xline(self):<DEDENT> | if self.unstructured:<EOL><INDENT>raise ValueError(self._unstructured_errmsg)<EOL><DEDENT>if self._xline is not None:<EOL><INDENT>return self._xline<EOL><DEDENT>self._xline = Line(self,<EOL>self.xlines,<EOL>self._xline_length,<EOL>self._xline_stride,<EOL>self.offsets,<EOL>'<STR_LIT>',<EOL>)<EOL>return self._xline<EOL> | Interact with segy in crossline mode
Returns
-------
xline : Line or None
Raises
------
ValueError
If the file is unstructured
Notes
-----
.. versionadded:: 1.1 | f13375:c0:m24 |
@xline.setter<EOL><INDENT>def xline(self, value):<DEDENT> | self.xline[:] = value<EOL> | crosslines macro assignment
Convenient way for setting crosslines, from left-to-right as the crossline
numbers are specified in the file.xlines property, from an iterable set
on the right-hand-side.
If the right-hand-side crosslines are exhausted before all the destination
file crosslines the writing will stop, i.e. not all crosslines in the
destination file will be written.
Notes
-----
.. versionadded:: 1.1
Examples
--------
Copy crosslines from file f to file g:
>>> f.xline = g.xline | f13375:c0:m25 |
@property<EOL><INDENT>def fast(self):<DEDENT> | if self.sorting == TraceSortingFormat.INLINE_SORTING:<EOL><INDENT>return self.iline<EOL><DEDENT>elif self.sorting == TraceSortingFormat.CROSSLINE_SORTING:<EOL><INDENT>return self.xline<EOL><DEDENT>else:<EOL><INDENT>raise RuntimeError("<STR_LIT>")<EOL><DEDENT> | Access the 'fast' dimension
This mode yields iline or xline mode, depending on which one is laid
out `faster`, i.e. the line with linear disk layout. Use this mode if
the inline/crossline distinction isn't as interesting as traversing in
a fast manner (typically when you want to apply a function to the whole
file, line-by-line).
Returns
-------
fast : Line
line addressing mode
Notes
-----
.. versionadded:: 1.1 | f13375:c0:m26 |
@property<EOL><INDENT>def slow(self):<DEDENT> | if self.sorting == TraceSortingFormat.INLINE_SORTING:<EOL><INDENT>return self.xline<EOL><DEDENT>elif self.sorting == TraceSortingFormat.CROSSLINE_SORTING:<EOL><INDENT>return self.iline<EOL><DEDENT>else:<EOL><INDENT>raise RuntimeError("<STR_LIT>")<EOL><DEDENT> | Access the 'slow' dimension
This mode yields iline or xline mode, depending on which one is laid
out `slower`, i.e. the line with strided disk layout. Use this mode if
the inline/crossline distinction isn't as interesting as traversing in
the slower direction.
Returns
-------
slow : Line
line addressing mode
Notes
-----
.. versionadded:: 1.1 | f13375:c0:m27 |
@property<EOL><INDENT>def depth_slice(self):<DEDENT> | if self.depth is not None:<EOL><INDENT>return self.depth<EOL><DEDENT>from .depth import Depth<EOL>self.depth = Depth(self)<EOL>return self.depth<EOL> | Interact with segy in depth slice mode (fixed z-coordinate)
Returns
-------
depth : Depth
Notes
-----
.. versionadded:: 1.1
.. versionchanged:: 1.7.1
enabled for unstructured files | f13375:c0:m28 |
@depth_slice.setter<EOL><INDENT>def depth_slice(self, value):<DEDENT> | self.depth_slice[:] = value<EOL> | depth macro assignment
Convenient way for setting depth slices, from left-to-right as the depth slice
numbers are specified in the file.depth_slice property, from an iterable
set on the right-hand-side.
If the right-hand-side depth slices are exhausted before all the destination
file depth slices the writing will stop, i.e. not all depth slices in the
destination file will be written.
Examples
--------
Copy depth slices from file f to file g:
>>> f.depth_slice = g.depth_slice
Copy first half of the depth slices from g to f:
>>> f.depth_slice = g.depth_slice[:len(g.samples)//2] | f13375:c0:m29 |
@property<EOL><INDENT>def gather(self):<DEDENT> | if self.unstructured:<EOL><INDENT>raise ValueError(self._unstructured_errmsg)<EOL><DEDENT>if self._gather is not None:<EOL><INDENT>return self._gather<EOL><DEDENT>self._gather = Gather(self.trace, self.iline, self.xline, self.offsets)<EOL>return self._gather<EOL> | Interact with segy in gather mode
Returns
-------
gather : Gather | f13375:c0:m30 |
@property<EOL><INDENT>def text(self):<DEDENT> | return Text(self.xfd, self._ext_headers + <NUM_LIT:1>)<EOL> | Interact with segy in text mode
This mode gives access to reading and writing functionality for textual
headers.
The primary data type is the python string. Reading textual headers is
done with ``[]``, and writing is done via assignment. No additional
structure is built around the textual header, so everything is treated
as one long string without line breaks.
Returns
-------
text : Text
See also
--------
segyio.tools.wrap : line-wrap a text header
Notes
-----
.. versionadded:: 1.1 | f13375:c0:m31 |
@property<EOL><INDENT>def bin(self):<DEDENT> | return Field.binary(self)<EOL> | Interact with segy in binary mode
This mode gives access to reading and writing functionality for the
binary header. Please note that using numeric binary offsets uses the
offset numbers from the specification, i.e. the first field of the
binary header starts at 3201, not 1. If you're using the enumerations
this is handled for you.
Returns
-------
binary : Field
Notes
-----
.. versionadded:: 1.1 | f13375:c0:m32 |
@bin.setter<EOL><INDENT>def bin(self, value):<DEDENT> | self.bin.update(value)<EOL> | Update binary header
Update a value or replace the binary header
Parameters
----------
value : dict_like
dict_like, keys of int or segyio.BinField or segyio.su | f13375:c0:m33 |
@property<EOL><INDENT>def readonly(self):<DEDENT> | return '<STR_LIT:+>' not in self._mode<EOL> | File is read-only
Returns
-------
readonly : bool
True if this file is read-only
Notes
-----
.. versionadded:: 1.6 | f13375:c0:m35 |
def interpret(self, ilines, xlines, offsets=None, sorting=TraceSortingFormat.INLINE_SORTING): | valid_sortings = {<EOL><NUM_LIT:1> : TraceSortingFormat.CROSSLINE_SORTING,<EOL><NUM_LIT:2> : TraceSortingFormat.INLINE_SORTING,<EOL>'<STR_LIT>' : TraceSortingFormat.INLINE_SORTING,<EOL>'<STR_LIT>' : TraceSortingFormat.INLINE_SORTING,<EOL>'<STR_LIT>' : TraceSortingFormat.CROSSLINE_SORTING,<EOL>'<STR_LIT>' : TraceSortingFormat.CROSSLINE_SORTING,<EOL>TraceSortingFormat.INLINE_SORTING : TraceSortingFormat.INLINE_SORTING,<EOL>TraceSortingFormat.CROSSLINE_SORTING : TraceSortingFormat.CROSSLINE_SORTING,<EOL>}<EOL>if sorting not in valid_sortings:<EOL><INDENT>error = "<STR_LIT>"<EOL>solution = "<STR_LIT>".format(valid_sortings.keys())<EOL>raise ValueError('<STR_LIT>'.format(error, solution))<EOL><DEDENT>if offsets is None:<EOL><INDENT>offsets = np.arange(<NUM_LIT:1>)<EOL><DEDENT>ilines = np.copy(np.asarray(ilines, dtype=np.intc))<EOL>xlines = np.copy(np.asarray(xlines, dtype=np.intc))<EOL>offsets = np.copy(np.asarray(offsets, dtype=np.intc))<EOL>if np.unique(ilines).size != ilines.size:<EOL><INDENT>error = "<STR_LIT>"<EOL>solution = "<STR_LIT>"<EOL>raise ValueError("<STR_LIT>".format(error, solution))<EOL><DEDENT>if np.unique(xlines).size != xlines.size:<EOL><INDENT>error = "<STR_LIT>"<EOL>solution = "<STR_LIT>"<EOL>raise ValueError("<STR_LIT>".format(error, solution))<EOL><DEDENT>if np.unique(offsets).size != offsets.size:<EOL><INDENT>error = "<STR_LIT>"<EOL>solution = "<STR_LIT>"<EOL>raise ValueError("<STR_LIT>".format(error, solution))<EOL><DEDENT>if ilines.size * xlines.size * offsets.size != self.tracecount:<EOL><INDENT>error = ("<STR_LIT>"<EOL>"<STR_LIT>").format(ilines.size,<EOL>xlines.size,<EOL>offsets.size,<EOL>self.tracecount)<EOL>raise ValueError(error)<EOL><DEDENT>from . 
import _segyio<EOL>line_metrics = _segyio.line_metrics(sorting,<EOL>self.tracecount,<EOL>ilines.size,<EOL>xlines.size,<EOL>offsets.size)<EOL>self._iline_length = line_metrics['<STR_LIT>']<EOL>self._iline_stride = line_metrics['<STR_LIT>']<EOL>self._xline_length = line_metrics['<STR_LIT>']<EOL>self._xline_stride = line_metrics['<STR_LIT>']<EOL>self._sorting = sorting<EOL>self._offsets = offsets<EOL>self._ilines = ilines<EOL>self._xlines = xlines<EOL>return self<EOL> | (Re-)interpret structure on top of a file
(Re-)interpret the structure of the file given the new sorting, ilines,
xlines and offset indices. Note that the file itself is not changed in any
way, it is only segyio's interpretation of the file that changes. It's
a way of telling segyio that a file is laid out in a particular way,
even though the header fields say otherwise.
`interpret` expects the ilines-, xlines- and offsets-indices to be
unique. It also expects the dimensions of ilines, xlines and offsets to
match the tracecount.
Parameters
----------
f : SegyFile
ilines : array_like
ilines indices in new structure
xlines : array_like
xlines indices in new structure
offsets : array_like
offset indices in new structure
sorting : int, string or TraceSortingFormat
Sorting in new structure
Notes
-----
.. versionadded:: 1.8
Examples
--------
(Re)interpret the structure of the file:
>>> ilines = [10, 11, 12, 13]
>>> xlines = [20, 21, 22, 23, 24]
>>> with segyio.open(file, ignore_geometry=True) as f:
... f.interpret(ilines, xlines) | f13375:c0:m36 |
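The uniqueness and dimension checks that `interpret` performs before accepting a new structure can be sketched in plain Python. `validate_geometry` is a hypothetical helper written for illustration, not part of segyio's API:

```python
def validate_geometry(ilines, xlines, offsets, tracecount):
    # Each index set must be free of duplicates, mirroring interpret()
    for name, xs in (("ilines", ilines), ("xlines", xlines), ("offsets", offsets)):
        if len(set(xs)) != len(xs):
            raise ValueError("duplicate indices in " + name)
    # The inferred grid must account for every trace in the file
    if len(ilines) * len(xlines) * len(offsets) != tracecount:
        msg = "{} ilines x {} xlines x {} offsets != tracecount {}"
        raise ValueError(msg.format(len(ilines), len(xlines),
                                    len(offsets), tracecount))
```

Only when both checks pass does it make sense to compute line lengths and strides for the new layout.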
def __getitem__(self, i): | try:<EOL><INDENT>i = self.wrapindex(i)<EOL>buf = np.empty(self.shape, dtype=self.dtype)<EOL>return self.filehandle.getdepth(i, buf.size, self.offsets, buf)<EOL><DEDENT>except TypeError:<EOL><INDENT>try:<EOL><INDENT>indices = i.indices(len(self))<EOL><DEDENT>except AttributeError:<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise TypeError(msg.format(type(i).__name__))<EOL><DEDENT>def gen():<EOL><INDENT>x = np.empty(self.shape, dtype=self.dtype)<EOL>y = np.copy(x)<EOL>for j in range(*indices):<EOL><INDENT>self.filehandle.getdepth(j, x.size, self.offsets, x)<EOL>x, y = y, x<EOL>yield y<EOL><DEDENT><DEDENT>return gen()<EOL><DEDENT> | depth[i]
ith depth, a horizontal cross-section of the file, starting at 0.
depth[i] returns a numpy.ndarray, and changes to this array will *not*
be reflected on disk.
When i is a slice, a generator of numpy.ndarray is returned.
The depth slices are returned as a fast-by-slow shaped array, i.e. an
inline sorted file with 10 inlines and 5 crosslines has the shape
(10,5). If the file is unsorted, the array shape == tracecount.
Be aware that this interface uses zero-based indices (like traces) and
*not keys* (like ilines), so you can *not* use the values file.samples
as indices.
Parameters
----------
i : int or slice
Returns
-------
depth : numpy.ndarray of dtype or generator of numpy.ndarray of dtype
Notes
-----
.. versionadded:: 1.1
.. warning::
The segyio 1.5 and 1.6 series, and 1.7.0, would return the depth_slice in the
wrong shape for most files. Since segyio 1.7.1, the arrays have the
correct shape, i.e. fast-by-slow. The underlying data was always
fast-by-slow, so a numpy array reshape can fix programs using the
1.5 and 1.6 series.
Behaves like [] for lists.
Examples
--------
Read a single cut (one sample per trace):
>>> x = f.depth_slice[199]
Copy every depth slice into a list:
>>> l = [numpy.copy(x) for x in depth[:]]
Every third depth:
>>> for d in depth[::3]:
... (d * 6).mean()
Read up to 250:
>>> for d in depth[:250]:
... d.mean()
>>> len(ilines), len(xlines)
(1, 6)
>>> f.depth_slice[0]
array([[0. , 0.01, 0.02, 0.03, 0.04, 0.05]], dtype=float32) | f13377:c0:m1 |
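The slice branch in the body above swaps between two buffers so only one slice is live per iteration. A minimal pure-Python sketch of that double-buffer pattern, where `read_into` is a hypothetical stand-in for the C-level read call:

```python
def lazy_slices(read_into, indices, size):
    # Two reusable buffers: the consumer sees one while the next
    # iteration overwrites the other, then the roles are swapped
    x = [0.0] * size
    y = [0.0] * size
    for j in indices:
        read_into(j, x)
        x, y = y, x
        yield y
```

Because the buffers are reused, earlier yielded values are overwritten on later iterations; copy each yielded slice if it must be kept, as the `numpy.copy` example above does.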
def __setitem__(self, depth, val): | if isinstance(depth, slice):<EOL><INDENT>for i, x in zip(range(*depth.indices(len(self))), val):<EOL><INDENT>self[i] = x<EOL><DEDENT>return<EOL><DEDENT>val = castarray(val, dtype = self.dtype)<EOL>self.filehandle.putdepth(depth, val.size, self.offsets, val)<EOL> | depth[i] = val
Write the ith depth, a horizontal cross-section, of the file, starting
at 0. It accepts any array_like, but `val` must be at least as big as
the underlying data slice.
If `val` is longer than the underlying slice, `val` is essentially truncated.
Parameters
----------
i : int or slice
val : array_like
Notes
-----
.. versionadded:: 1.1
Behaves like [] for lists.
Examples
--------
Copy a depth:
>>> depth[4] = other[19]
Copy consecutive depths, and assign to a sub volume (inject a sub cube
into the larger volume):
>>> depth[10:50] = other[:]
Copy into every other depth from an iterable:
>>> depth[::2] = other | f13377:c0:m2 |
def open(filename, mode="<STR_LIT:r>", iline = <NUM_LIT>,<EOL>xline = <NUM_LIT>,<EOL>strict = True,<EOL>ignore_geometry = False,<EOL>endian = '<STR_LIT>'): | if '<STR_LIT:w>' in mode:<EOL><INDENT>problem = '<STR_LIT>'<EOL>solution = '<STR_LIT>'<EOL>raise ValueError('<STR_LIT:U+002CU+0020>'.join((problem, solution)))<EOL><DEDENT>endians = {<EOL>'<STR_LIT>': <NUM_LIT>, <EOL>'<STR_LIT>': <NUM_LIT>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>}<EOL>if endian not in endians:<EOL><INDENT>problem = '<STR_LIT>'<EOL>opts = '<STR_LIT:U+0020>'.join(endians.keys())<EOL>raise ValueError(problem.format(endian) + opts)<EOL><DEDENT>from . import _segyio<EOL>fd = _segyio.segyiofd(str(filename), mode, endians[endian])<EOL>fd.segyopen()<EOL>metrics = fd.metrics()<EOL>f = segyio.SegyFile(fd,<EOL>filename = str(filename),<EOL>mode = mode,<EOL>iline = iline,<EOL>xline = xline,<EOL>endian = endian,<EOL>)<EOL>try:<EOL><INDENT>dt = segyio.tools.dt(f, fallback_dt = <NUM_LIT>) / <NUM_LIT><EOL>t0 = f.header[<NUM_LIT:0>][segyio.TraceField.DelayRecordingTime]<EOL>samples = metrics['<STR_LIT>']<EOL>f._samples = (numpy.arange(samples) * dt) + t0<EOL><DEDENT>except:<EOL><INDENT>f.close()<EOL>raise<EOL><DEDENT>if ignore_geometry:<EOL><INDENT>return f<EOL><DEDENT>return infer_geometry(f, metrics, iline, xline, strict)<EOL> | Open a segy file.
Opens a segy file and tries to figure out its sorting, inline numbers,
crossline numbers, and offsets, and enables reading and writing to this
file in a simple manner.
For reading, the access mode `r` is preferred. All write operations will
raise an exception. For writing, the mode `r+` is preferred (as `rw` would
truncate the file). Any mode with `w` will raise an error. The modes used
are standard C file modes; please refer to that documentation for a
complete reference.
Open should be used together with python's ``with`` statement. Please refer
to the examples. When the ``with`` statement is used the file will
automatically be closed when the routine completes or an exception is
raised.
By default, segyio tries to open in ``strict`` mode. This means the file will
be assumed to represent a geometry with consistent inline, crosslines and
offsets. If strict is False, segyio will still try to establish a geometry,
but it won't abort if it fails. When a file is opened in non-strict mode,
geometry-dependent modes such as iline will raise an error.
If ``ignore_geometry=True``, segyio will *not* try to build iline/xline or
other geometry related structures, which leads to faster opens. This is
essentially the same as using ``strict=False`` on a file that has no
geometry.
Parameters
----------
filename : str
Path to file to open
mode : {'r', 'r+'}
File access mode, read-only ('r', default) or read-write ('r+')
iline : int or segyio.TraceField
Inline number field in the trace headers. Defaults to 189 as per the
SEG-Y rev1 specification
xline : int or segyio.TraceField
Crossline number field in the trace headers. Defaults to 193 as per the
SEG-Y rev1 specification
strict : bool, optional
Abort if a geometry cannot be inferred. Defaults to True.
ignore_geometry : bool, optional
Opt out on building geometry information, useful for e.g. shot
organised files. Defaults to False.
endian : {'big', 'msb', 'little', 'lsb'}
File endianness, big/msb (default) or little/lsb
Returns
-------
file : segyio.SegyFile
An open segyio file handle
Raises
------
ValueError
If the mode string contains 'w', as it would truncate the file
Notes
-----
.. versionadded:: 1.1
.. versionchanged:: 1.8
endian argument
When a file is opened non-strict, only raw trace access is allowed, and
using modes such as ``iline`` raise an error.
Examples
--------
Open a file in read-only mode:
>>> with segyio.open(path, "r") as f:
... print(f.ilines)
...
[1, 2, 3, 4, 5]
Open a file in read-write mode:
>>> with segyio.open(path, "r+") as f:
... f.trace = np.arange(100)
Open two files at once:
>>> with segyio.open(path) as src, segyio.open(path, "r+") as dst:
... dst.trace = src.trace # copy all traces from src to dst
Open a little-endian file:
>>> with segyio.open(path, endian = 'little') as f:
... f.trace[0] | f13378:m1 |
def __getitem__(self, index): | if len(index) < <NUM_LIT:3>:<EOL><INDENT>index = (index[<NUM_LIT:0>], index[<NUM_LIT:1>], None)<EOL><DEDENT>il, xl, off = index<EOL>if off is None and len(self.offsets) == <NUM_LIT:1>:<EOL><INDENT>off = self.offsets[<NUM_LIT:0>]<EOL><DEDENT>off = off or slice(None)<EOL>def isslice(x): return isinstance(x, slice)<EOL>if not any(map(isslice, [il, xl, off])):<EOL><INDENT>o = self.iline.offsets[off]<EOL>i = o + self.iline.heads[il] + self.xline.heads[xl]<EOL>return self.trace[i]<EOL><DEDENT>offs = off if isslice(off) else slice(off, off+<NUM_LIT:1>, <NUM_LIT:1>)<EOL>xs = list(filter(self.offsets.__contains__,<EOL>range(*offs.indices(self.offsets[-<NUM_LIT:1>]+<NUM_LIT:1>))))<EOL>empty = np.empty(<NUM_LIT:0>, dtype = self.trace.dtype)<EOL>if not any(map(isslice, [il, xl])):<EOL><INDENT>if len(xs) == <NUM_LIT:0>: return empty<EOL>i = self.iline.heads[il] + self.xline.heads[xl]<EOL>return tools.collect(self.trace[i + self.iline.offsets[x]] for x in xs)<EOL><DEDENT>def gen():<EOL><INDENT>xlinds = { xlno: i for i, xlno in enumerate(self.xline.keys()) }<EOL>last_il = self.iline.keys()[-<NUM_LIT:1>] + <NUM_LIT:1><EOL>last_xl = self.xline.keys()[-<NUM_LIT:1>] + <NUM_LIT:1><EOL>il_slice = il if isslice(il) else slice(il, il+<NUM_LIT:1>)<EOL>xl_slice = xl if isslice(xl) else slice(xl, xl+<NUM_LIT:1>)<EOL>if il_slice.start is None:<EOL><INDENT>start = self.iline.keys()[<NUM_LIT:0>]<EOL>il_slice = slice(start, il_slice.stop, il_slice.step)<EOL><DEDENT>if xl_slice.start is None:<EOL><INDENT>start = self.xline.keys()[<NUM_LIT:0>]<EOL>xl_slice = slice(start, xl_slice.stop, xl_slice.step)<EOL><DEDENT>il_range = range(*il_slice.indices(last_il))<EOL>xl_range = range(*xl_slice.indices(last_xl))<EOL>if not isslice(off):<EOL><INDENT>for iline in self.iline[il_slice, off]:<EOL><INDENT>for xlno in xl_range:<EOL><INDENT>try:<EOL><INDENT>xind = xlinds[xlno]<EOL><DEDENT>except KeyError:<EOL><INDENT>pass<EOL><DEDENT>else:<EOL><INDENT>yield 
iline[xind]<EOL><DEDENT><DEDENT><DEDENT>return<EOL><DEDENT>if len(xs) == <NUM_LIT:0>:<EOL><INDENT>for _, _ in itertools.product(il_range, xl_range): yield empty<EOL>return<EOL><DEDENT>for ilno in il_range:<EOL><INDENT>iline = tools.collect(self.iline[ilno, off])<EOL>for x in xl_range:<EOL><INDENT>try:<EOL><INDENT>xind = xlinds[x]<EOL><DEDENT>except KeyError:<EOL><INDENT>pass<EOL><DEDENT>else:<EOL><INDENT>yield iline[:, xind]<EOL><DEDENT><DEDENT><DEDENT><DEDENT>return gen()<EOL> | gather[i, x, o], gather[:,:,:]
Get the gather or range of gathers, defined as the offsets at the
intersection of an inline and a crossline. Also works on post-stack files (with
only 1 offset), although it is less useful in those cases.
If offsets are omitted, the default is all offsets.
A group of offsets is always returned as an offset-by-samples
numpy.ndarray. If either inline, crossline, or both, are slices, a
generator of such ndarrays is returned.
If the slice of offsets misses all offsets, a special, empty ndarray is
returned.
Parameters
----------
i : int or slice
inline
x : int or slice
crossline
o : int or slice
offsets (default is :)
Returns
-------
gather : numpy.ndarray or generator of numpy.ndarray
Notes
-----
.. versionadded:: 1.1
Examples
--------
Read one offset at an intersection:
>>> gather[200, 241, 25] # returns same shape as trace
Read all offsets at an intersection:
>>> gather[200, 241, :] # returns offsets x samples ndarray
>>> # If no offset is specified, this is implicitly (:)
>>> gather[200, 241, :] == gather[200, 241]
All offsets for a set of ilines, intersecting one crossline:
>>> gather[200:300, 241, :] == gather[200:300, 241]
Some offsets for a set of ilines, intersecting one crossline:
>>> gather[200:300, 241, 10:25:5]
Some offsets for a set of ilines and xlines. This effectively yields a
subcube:
>>> f.gather[200:300, 241:248, 1:10] | f13381:c0:m1 |
def __len__(self): | return self.length<EOL> | x.__len__() <==> len(x) | f13382:c0:m1 |
def __iter__(self): | <EOL>return self[:]<EOL> | x.__iter__() <==> iter(x) | f13382:c0:m2 |
def __getitem__(self, i): | try:<EOL><INDENT>i = self.wrapindex(i)<EOL>buf = np.zeros(self.shape, dtype = self.dtype)<EOL>return self.filehandle.gettr(buf, i, <NUM_LIT:1>, <NUM_LIT:1>)<EOL><DEDENT>except TypeError:<EOL><INDENT>try:<EOL><INDENT>indices = i.indices(len(self))<EOL><DEDENT>except AttributeError:<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise TypeError(msg.format(type(i).__name__))<EOL><DEDENT>def gen():<EOL><INDENT>x = np.zeros(self.shape, dtype=self.dtype)<EOL>y = np.zeros(self.shape, dtype=self.dtype)<EOL>for j in range(*indices):<EOL><INDENT>self.filehandle.gettr(x, j, <NUM_LIT:1>, <NUM_LIT:1>)<EOL>x, y = y, x<EOL>yield y<EOL><DEDENT><DEDENT>return gen()<EOL><DEDENT> | trace[i]
ith trace of the file, starting at 0. trace[i] returns a numpy array,
and changes to this array will *not* be reflected on disk.
When i is a slice, a generator of numpy arrays is returned.
Parameters
----------
i : int or slice
Returns
-------
trace : numpy.ndarray of dtype or generator of numpy.ndarray of dtype
Notes
-----
.. versionadded:: 1.1
Behaves like [] for lists.
.. note::
This operator reads lazily from the file, meaning the file is read
on ``next()``, and only one trace is fixed in memory. This means
segyio can run through arbitrarily large files without consuming
much memory, but it is potentially slow if the goal is to read the
entire file into memory. If that is the case, consider using
`trace.raw`, which reads eagerly.
Examples
--------
Read every other trace:
>>> for tr in trace[::2]:
... print(tr)
Read all traces, last-to-first:
>>> for tr in trace[::-1]:
... tr.mean()
Read a single value. The second [] is regular numpy array indexing, and
supports all numpy operations, including negative indexing and slicing:
>>> trace[0][0]
1490.2
>>> trace[0][1]
1490.8
>>> trace[0][-1]
1871.3
>>> trace[-1][100]
1562.0 | f13382:c1:m1 |
def __setitem__(self, i, val): | if isinstance(i, slice):<EOL><INDENT>for j, x in zip(range(*i.indices(len(self))), val):<EOL><INDENT>self[j] = x<EOL><DEDENT>return<EOL><DEDENT>xs = castarray(val, self.dtype)<EOL>self.filehandle.puttr(self.wrapindex(i), xs)<EOL> | trace[i] = val
Write the ith trace of the file, starting at 0. It accepts any
array_like, but val must be at least as big as the underlying data
trace.
If val is longer than the underlying trace, it is essentially
truncated.
For the best performance, val should be a numpy.ndarray of sufficient
size and same dtype as the file. segyio will warn on mismatched types,
and attempt a conversion for you.
Data is written immediately to disk. If writing multiple traces at
once, and a write fails partway through, the resulting file is left in
an unspecified state.
Parameters
----------
i : int or slice
val : array_like
Notes
-----
.. versionadded:: 1.1
Behaves like [] for lists.
Examples
--------
Write a single trace:
>>> trace[10] = list(range(1000))
Write multiple traces:
>>> trace[10:15] = np.array([cube[i] for i in range(5)])
Write multiple traces with stride:
>>> trace[10:20:2] = np.array([cube[i] for i in range(5)]) | f13382:c1:m2 |
@property<EOL><INDENT>def raw(self):<DEDENT> | return RawTrace(self.filehandle,<EOL>self.dtype,<EOL>len(self),<EOL>self.shape,<EOL>self.readonly,<EOL>)<EOL> | An eager version of Trace
Returns
-------
raw : RawTrace | f13382:c1:m4 |
@property<EOL><INDENT>@contextlib.contextmanager<EOL>def ref(self):<DEDENT> | x = RefTrace(self.filehandle,<EOL>self.dtype,<EOL>len(self),<EOL>self.shape,<EOL>self.readonly,<EOL>)<EOL>yield x<EOL>x.flush()<EOL> | A write-back version of Trace
Returns
-------
ref : RefTrace
`ref` is returned in a context manager, and must be in a ``with``
statement
Notes
-----
.. versionadded:: 1.6
Examples
--------
>>> with trace.ref as ref:
... ref[10] += 1.617 | f13382:c1:m5 |
def __getitem__(self, i): | try:<EOL><INDENT>i = self.wrapindex(i)<EOL>buf = np.zeros(self.shape, dtype = self.dtype)<EOL>return self.filehandle.gettr(buf, i, <NUM_LIT:1>, <NUM_LIT:1>)<EOL><DEDENT>except TypeError:<EOL><INDENT>try:<EOL><INDENT>indices = i.indices(len(self))<EOL><DEDENT>except AttributeError:<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise TypeError(msg.format(type(i).__name__))<EOL><DEDENT>start, _, step = indices<EOL>length = len(range(*indices))<EOL>buf = np.empty((length, self.shape), dtype = self.dtype)<EOL>return self.filehandle.gettr(buf, start, step, length)<EOL><DEDENT> | trace[i]
Eagerly read the ith trace of the file, starting at 0. trace[i] returns
a numpy array, and changes to this array will *not* be reflected on
disk.
When i is a slice, this returns a 2-dimensional numpy.ndarray .
Parameters
----------
i : int or slice
Returns
-------
trace : numpy.ndarray of dtype
Notes
-----
.. versionadded:: 1.1
Behaves like [] for lists.
.. note::
Reading this way is more efficient if you know you can afford the
extra memory usage. It reads the requested traces immediately to
memory. | f13382:c2:m1 |
def flush(self): | garbage = []<EOL>for i, (x, signature) in self.refs.items():<EOL><INDENT>if sys.getrefcount(x) == <NUM_LIT:3>:<EOL><INDENT>garbage.append(i)<EOL><DEDENT>if fingerprint(x) == signature: continue<EOL>self.filehandle.puttr(i, x)<EOL>signature = fingerprint(x)<EOL><DEDENT>for i in garbage:<EOL><INDENT>del self.refs[i]<EOL><DEDENT> | Commit cached writes to the file handle. Does not flush libc buffers or
notifies the kernel, so these changes may not immediately be visible to
other processes.
Updates the fingerprints when writes happen, so successive ``flush()``
invocations are no-ops.
It is not necessary to call this method in user code.
Notes
-----
.. versionadded:: 1.6
This method is not intended as user-oriented functionality, but might
be useful in certain contexts to provide stronger guarantees. | f13382:c3:m1 |
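The write-back scheme described above can be sketched with an ordinary dict cache. Here `hash(tuple(...))` is a hypothetical stand-in for segyio's internal fingerprint function, and `put` for the actual disk write:

```python
def flush(refs, put):
    # Write back only entries whose content changed since they were
    # cached, then refresh the stored fingerprint so a second flush
    # of the same data is a no-op
    for i, (x, signature) in refs.items():
        current = hash(tuple(x))
        if current == signature:
            continue  # unchanged since last flush: skip the write
        put(i, x)
        refs[i] = (x, current)
```

The fingerprint comparison is what makes repeated flushes cheap: untouched traces are never rewritten.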
def __getitem__(self, i): | try:<EOL><INDENT>i = self.wrapindex(i)<EOL>if i in self.refs:<EOL><INDENT>return self.refs[i][<NUM_LIT:0>]<EOL><DEDENT>x = self.fetch(i)<EOL>self.refs[i] = (x, fingerprint(x))<EOL>return x<EOL><DEDENT>except TypeError:<EOL><INDENT>try:<EOL><INDENT>indices = i.indices(len(self))<EOL><DEDENT>except AttributeError:<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise TypeError(msg.format(type(i).__name__))<EOL><DEDENT>def gen():<EOL><INDENT>x = np.zeros(self.shape, dtype = self.dtype)<EOL>try:<EOL><INDENT>for j in range(*indices):<EOL><INDENT>x = self.fetch(j, x)<EOL>y = fingerprint(x)<EOL>yield x<EOL>if not fingerprint(x) == y:<EOL><INDENT>self.filehandle.puttr(j, x)<EOL><DEDENT><DEDENT><DEDENT>finally:<EOL><INDENT>self.refs[j] = (x, y)<EOL><DEDENT><DEDENT>return gen()<EOL><DEDENT> | trace[i]
Read the ith trace of the file, starting at 0. trace[i] returns a numpy
array, but unlike Trace, changes to this array *will* be reflected on
disk. The modifications must happen to the actual array (views are ok),
so in-place operations work, but assignments will not::
>>> with ref as ref:
... x = ref[10]
... x += 1.617 # in-place, works
... numpy.copyto(x, x + 10) # works
... x = x + 10 # re-assignment, won't change the original x
Works on newly created files that have yet to have any traces written,
which opens up a natural way of filling newly created files with data.
When getting unwritten traces, a trace filled with zeros is returned.
Parameters
----------
i : int or slice
Returns
-------
trace : numpy.ndarray of dtype
Notes
-----
.. versionadded:: 1.6
Behaves like [] for lists.
Examples
--------
Merge two files with a binary operation. Relies on python3 iterator
zip:
>>> with ref as ref:
... for x, lhs, rhs in zip(ref, L, R):
... numpy.copyto(x, lhs + rhs)
Create a file and fill with data (the repeated trace index):
>>> f = create()
>>> with f.trace.ref as ref:
... for i, x in enumerate(ref):
... x.fill(i) | f13382:c3:m3 |
def __getitem__(self, i): | try:<EOL><INDENT>i = self.wrapindex(i)<EOL>return Field.trace(traceno = i, segy = self.segy)<EOL><DEDENT>except TypeError:<EOL><INDENT>try:<EOL><INDENT>indices = i.indices(len(self))<EOL><DEDENT>except AttributeError:<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise TypeError(msg.format(type(i).__name__))<EOL><DEDENT>def gen():<EOL><INDENT>x = Field.trace(None, self.segy)<EOL>buf = bytearray(x.buf)<EOL>for j in range(*indices):<EOL><INDENT>buf = x.fetch(buf, j)<EOL>x.buf[:] = buf<EOL>x.traceno = j<EOL>yield x<EOL><DEDENT><DEDENT>return gen()<EOL><DEDENT> | header[i]
ith header of the file, starting at 0.
Parameters
----------
i : int or slice
Returns
-------
field : Field
dict_like header
Notes
-----
.. versionadded:: 1.1
Behaves like [] for lists.
Examples
--------
Reading a header:
>>> header[10]
Read a field in the first 5 headers:
>>> [x[25] for x in header[:5]]
[1, 2, 3, 4, 5]
Read a field in every other header:
>>> [x[37] for x in header[::2]]
[1, 3, 1, 3, 1, 3] | f13382:c4:m1 |
def __setitem__(self, i, val): | x = self[i]<EOL>try:<EOL><INDENT>x.update(val)<EOL><DEDENT>except AttributeError:<EOL><INDENT>if isinstance(val, Field) or isinstance(val, dict):<EOL><INDENT>val = itertools.repeat(val)<EOL><DEDENT>for h, v in zip(x, val):<EOL><INDENT>h.update(v)<EOL><DEDENT><DEDENT> | header[i] = val
Write the ith header of the file, starting at 0. Unlike data traces
(which return numpy.ndarrays), changes to the returned headers *will*
be reflected on disk.
Parameters
----------
i : int or slice
val : Field or array_like of dict_like
Notes
-----
.. versionadded:: 1.1
Behaves like [] for lists
Examples
--------
Copy a header to a different trace:
>>> header[28] = header[29]
Write multiple fields in a trace:
>>> header[10] = { 37: 5, TraceField.INLINE_3D: 2484 }
Set a fixed set of values in all headers:
>>> for x in header[:]:
... x[37] = 1
... x.update({ TraceField.offset: 1, 2484: 10 })
Write a field in multiple headers
>>> for x in header[:10]:
... x.update({ TraceField.offset : 2 })
Write a field in every other header:
>>> for x in header[::2]:
... x.update({ TraceField.offset : 2 }) | f13382:c4:m2 |
@property<EOL><INDENT>def iline(self):<DEDENT> | return HeaderLine(self, self.segy.iline, '<STR_LIT>')<EOL> | Headers, accessed by inline
Returns
-------
line : HeaderLine | f13382:c4:m3 |
@iline.setter<EOL><INDENT>def iline(self, value):<DEDENT> | for i, src in zip(self.segy.ilines, value):<EOL><INDENT>self.iline[i] = src<EOL><DEDENT> | Write iterables to lines
Examples:
Supports writing to *all* inlines via assignment, regardless of
data source and format. Will respect the sample size and structure
of the file being assigned to, so if the argument traces are longer
than that of the file being written to the surplus data will be
ignored. Uses same rules for writing as `f.iline[i] = x`. | f13382:c4:m4 |
@property<EOL><INDENT>def xline(self):<DEDENT> | return HeaderLine(self, self.segy.xline, '<STR_LIT>')<EOL> | Headers, accessed by crossline
Returns
-------
line : HeaderLine | f13382:c4:m5 |
@xline.setter<EOL><INDENT>def xline(self, value):<DEDENT> | for i, src in zip(self.segy.xlines, value):<EOL><INDENT>self.xline[i] = src<EOL><DEDENT> | Write iterables to lines
Examples:
Supports writing to *all* crosslines via assignment, regardless of
data source and format. Will respect the sample size and structure
of the file being assigned to, so if the argument traces are longer
than that of the file being written to the surplus data will be
ignored. Uses same rules for writing as `f.xline[i] = x`. | f13382:c4:m6 |
def __getitem__(self, i): | try:<EOL><INDENT>xs = np.asarray(i, dtype = self.dtype)<EOL>xs = xs.astype(dtype = self.dtype, order = '<STR_LIT:C>', copy = False)<EOL>attrs = np.empty(len(xs), dtype = self.dtype)<EOL>return self.filehandle.field_foreach(attrs, xs, self.field)<EOL><DEDENT>except TypeError:<EOL><INDENT>try:<EOL><INDENT>i = slice(i, i + <NUM_LIT:1>, <NUM_LIT:1>)<EOL><DEDENT>except TypeError:<EOL><INDENT>pass<EOL><DEDENT>traces = self.tracecount<EOL>filehandle = self.filehandle<EOL>field = self.field<EOL>start, stop, step = i.indices(traces)<EOL>indices = range(start, stop, step)<EOL>attrs = np.empty(len(indices), dtype = self.dtype)<EOL>return filehandle.field_forall(attrs, start, stop, step, field)<EOL><DEDENT> | attributes[:]
Parameters
----------
i : int or slice or array_like
Returns
-------
attributes : array_like of dtype
Examples
--------
Read all unique sweep frequency end:
>>> end = segyio.TraceField.SweepFrequencyEnd
>>> sfe = np.unique(f.attributes( end )[:])
Discover the first traces of each unique sweep frequency end:
>>> end = segyio.TraceField.SweepFrequencyEnd
>>> attrs = f.attributes(end)
>>> sfe, tracenos = np.unique(attrs[:], return_index = True)
Scatter plot group x/y-coordinates with SFEs (using matplotlib):
>>> end = segyio.TraceField.SweepFrequencyEnd
>>> attrs = f.attributes(end)
>>> _, tracenos = np.unique(attrs[:], return_index = True)
>>> gx = f.attributes(segyio.TraceField.GroupX)[tracenos]
>>> gy = f.attributes(segyio.TraceField.GroupY)[tracenos]
>>> scatter(gx, gy) | f13382:c5:m2 |
def __getitem__(self, i): | try:<EOL><INDENT>i = self.wrapindex(i)<EOL>return self.filehandle.gettext(i)<EOL><DEDENT>except TypeError:<EOL><INDENT>try:<EOL><INDENT>indices = i.indices(len(self))<EOL><DEDENT>except AttributeError:<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise TypeError(msg.format(type(i).__name__))<EOL><DEDENT>def gen():<EOL><INDENT>for j in range(*indices):<EOL><INDENT>yield self.filehandle.gettext(j)<EOL><DEDENT><DEDENT>return gen()<EOL><DEDENT> | text[i]
Read the text header at i. 0 is the mandatory, main textual header.
Examples
--------
Print the textual header:
>>> print(f.text[0])
Print the first extended textual header:
>>> print(f.text[1])
Print a textual header line-by-line:
>>> # using zip, from the zip documentation
>>> text = str(f.text[0])
>>> lines = map(''.join, zip( *[iter(text)] * 80))
>>> for line in lines:
... print(line)
... | f13382:c6:m1 |
def __setitem__(self, i, val): | if isinstance(val, Text):<EOL><INDENT>self[i] = val[<NUM_LIT:0>]<EOL>return<EOL><DEDENT>try:<EOL><INDENT>i = self.wrapindex(i)<EOL>self.filehandle.puttext(i, val)<EOL><DEDENT>except TypeError:<EOL><INDENT>try:<EOL><INDENT>indices = i.indices(len(self))<EOL><DEDENT>except AttributeError:<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise TypeError(msg.format(type(i).__name__))<EOL><DEDENT>for i, text in zip(range(*indices), val):<EOL><INDENT>if isinstance(text, Text):<EOL><INDENT>text = text[<NUM_LIT:0>]<EOL><DEDENT>self.filehandle.puttext(i, text)<EOL><DEDENT><DEDENT> | text[i] = val
Write the ith text header of the file, starting at 0.
If val is an instance of Text, or an iterable of Text, the value
written is the first element of each Text.
Parameters
----------
i : int or slice
val : str, Text or iterable if i is slice
Examples
--------
Write a new textual header:
>>> f.text[0] = make_new_header()
>>> f.text[1:3] = ["new_header1", "new_header_2"]
Copy a textual header:
>>> f.text[1] = g.text[0]
Write a textual header based on Text:
>>> f.text[1] = g.text
>>> assert f.text[1] == g.text[0]
>>> f.text[1:3] = [g1.text, g2.text]
>>> assert f.text[1] == g1.text[0]
>>> assert f.text[2] == g2.text[0] | f13382:c6:m2 |
def dt(f, fallback_dt=<NUM_LIT>): | return f.xfd.getdt(fallback_dt)<EOL> | Delta-time
Infer a ``dt``, the sample rate, from the file. If none is found, use the
fallback.
Parameters
----------
f : segyio.SegyFile
fallback_dt : float
delta-time to fall back to, in microseconds
Returns
-------
dt : float
Notes
-----
.. versionadded:: 1.1 | f13383:m0 |
def sample_indexes(segyfile, t0=<NUM_LIT:0.0>, dt_override=None): | if dt_override is None:<EOL><INDENT>dt_override = dt(segyfile)<EOL><DEDENT>return [t0 + t * dt_override for t in range(len(segyfile.samples))]<EOL> | Creates a list of values representing the samples in a trace at depth or time.
The list starts at *t0* and is incremented with a *dt* for the number of samples.
If a *dt_override* is not provided it will try to find a *dt* in the file.
Parameters
----------
segyfile : segyio.SegyFile
t0 : float
initial sample, or delay-recording-time
dt_override : float or None
Returns
-------
samples : array_like of float
Notes
-----
.. versionadded:: 1.1 | f13383:m1 |
def create_text_header(lines): | rows = []<EOL>for line_no in range(<NUM_LIT:1>, <NUM_LIT>):<EOL><INDENT>line = "<STR_LIT>"<EOL>if line_no in lines:<EOL><INDENT>line = lines[line_no]<EOL><DEDENT>row = "<STR_LIT>".format(line_no, line)<EOL>rows.append(row)<EOL><DEDENT>rows = '<STR_LIT>'.join(rows)<EOL>return rows<EOL> | Format textual header
Create a "correct" SEG-Y textual header. Every line will be prefixed with
C## and there are 40 lines. The input must be a dictionary with the line
number [1-40] as a key. The value for each key should be a string of at
most 76 characters.
Parameters
----------
lines : dict
`lines` dictionary with fields:
- ``no`` : line number (`int`)
- ``line`` : line (`str`)
Returns
-------
text : str | f13383:m2 |
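The layout described above (40 rows, each prefixed with its C## line number) can be sketched as follows. The exact padding convention, 80-character rows with a right-aligned two-digit number and a 76-character payload, is an assumption modelled on the SEG-Y rev1 textual header, not taken verbatim from segyio's source:

```python
def make_text_header(lines):
    # 40 rows of 80 characters each: 'C', a right-aligned row number,
    # a space, and the payload padded or cut to 76 characters
    rows = []
    for no in range(1, 41):
        payload = str(lines.get(no, ""))[:76]
        rows.append("C{0:>2} {1:<76}".format(no, payload))
    return "".join(rows)
```

The result is a fixed 3200-character block, the size of a SEG-Y textual header.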
def wrap(s, width=<NUM_LIT>): | return '<STR_LIT:\n>'.join(textwrap.wrap(str(s), width=width))<EOL> | Formats the text input with newlines given the user specified width for
each line.
Parameters
----------
s : str
width : int
Returns
-------
text : str
Notes
-----
.. versionadded:: 1.1 | f13383:m3 |
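`wrap` is a thin shim over the standard library; the equivalent call and its behaviour:

```python
import textwrap

def wrap(s, width=80):
    # Reflow a long string into newline-separated lines of at most `width`
    return '\n'.join(textwrap.wrap(str(s), width=width))
```

Note that ``textwrap.wrap`` collapses runs of whitespace (including existing newlines) by default, so pre-formatted text is reflowed, not preserved.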
def native(data,<EOL>format = segyio.SegySampleFormat.IBM_FLOAT_4_BYTE,<EOL>copy = True): | data = data.view( dtype = np.single )<EOL>if copy:<EOL><INDENT>data = np.copy( data )<EOL><DEDENT>format = int(segyio.SegySampleFormat(format))<EOL>return segyio._segyio.native(data, format)<EOL> | Convert numpy array to native float
Converts a numpy array from raw SEG-Y trace data to native floats.
Parameters
----------
data : numpy.ndarray
format : int or segyio.SegySampleFormat
copy : bool
If True, convert on a copy, and leave the input array unmodified
Returns
-------
data : numpy.ndarray
Notes
-----
.. versionadded:: 1.1
Examples
--------
Convert mmap'd trace to native float:
>>> d = np.memmap('file.sgy', offset = 3600, dtype = np.uintc)
>>> samples = 1500
>>> trace = segyio.tools.native(d[240:240+samples]) | f13383:m4 |
def collect(itr): | return np.stack([np.copy(x) for x in itr])<EOL> | Collect traces or lines into one ndarray
Eagerly copy a series of traces, lines or depths into one numpy ndarray. If
collecting traces or fast-direction lines of a post-stack file, reshaping the
resulting array is equivalent to calling ``segyio.tools.cube``.
Parameters
----------
itr : iterable of numpy.ndarray
Returns
-------
data : numpy.ndarray
Notes
-----
.. versionadded:: 1.1
Examples
--------
collect-cube identity:
>>> f = segyio.open('post-stack.sgy')
>>> x = segyio.tools.collect(f.trace[:])
>>> x = x.reshape((len(f.ilines), len(f.xlines), len(f.samples)))
>>> numpy.all(x == segyio.tools.cube(f)) | f13383:m5 |
def cube(f): | if not isinstance(f, segyio.SegyFile):<EOL><INDENT>with segyio.open(f) as fl:<EOL><INDENT>return cube(fl)<EOL><DEDENT><DEDENT>ilsort = f.sorting == segyio.TraceSortingFormat.INLINE_SORTING<EOL>fast = f.ilines if ilsort else f.xlines<EOL>slow = f.xlines if ilsort else f.ilines<EOL>fast, slow, offs = len(fast), len(slow), len(f.offsets)<EOL>smps = len(f.samples)<EOL>dims = (fast, slow, smps) if offs == <NUM_LIT:1> else (fast, slow, offs, smps)<EOL>return f.trace.raw[:].reshape(dims)<EOL> | Read a full cube from a file
Takes an open segy file (created with segyio.open) or a file name.
If the file is a prestack file, the cube returned has the dimensions
``(fast, slow, offset, sample)``. If it is post-stack (only the one
offset), the dimensions are normalised to ``(fast, slow, sample)``
Parameters
----------
f : str or segyio.SegyFile
Returns
-------
cube : numpy.ndarray
Notes
-----
.. versionadded:: 1.1 | f13383:m6 |
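The shape normalisation described in the docstring - drop the offset axis when the file is post-stack - can be stated as a tiny pure function. A sketch of the dimension logic only, not the I/O:

```python
def cube_dims(n_fast, n_slow, n_offsets, n_samples):
    """Mirror cube()'s shape rule: a single offset means post-stack,
    so the offset axis is omitted from the resulting dimensions."""
    if n_offsets == 1:
        return (n_fast, n_slow, n_samples)
    return (n_fast, n_slow, n_offsets, n_samples)

print(cube_dims(10, 5, 1, 100))   # post-stack: (10, 5, 100)
print(cube_dims(10, 5, 3, 100))   # prestack:   (10, 5, 3, 100)
```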
def rotation(f, line = '<STR_LIT>'): | if f.unstructured:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>lines = { '<STR_LIT>': f.fast,<EOL>'<STR_LIT>': f.slow,<EOL>'<STR_LIT>': f.iline,<EOL>'<STR_LIT>': f.xline,<EOL>}<EOL>if line not in lines:<EOL><INDENT>error = "<STR_LIT>".format(line)<EOL>solution = "<STR_LIT>".format('<STR_LIT:U+0020>'.join(lines.keys()))<EOL>raise ValueError('<STR_LIT>'.format(error, solution))<EOL><DEDENT>l = lines[line]<EOL>origin = f.header[<NUM_LIT:0>][segyio.su.cdpx, segyio.su.cdpy]<EOL>cdpx, cdpy = origin[segyio.su.cdpx], origin[segyio.su.cdpy]<EOL>rot = f.xfd.rotation( len(l),<EOL>l.stride,<EOL>len(f.offsets),<EOL>np.fromiter(l.keys(), dtype = np.intc) )<EOL>return rot, cdpx, cdpy<EOL> | Find rotation of the survey
Find the clock-wise rotation and origin of `line` as ``(rot, cdpx, cdpy)``
The clock-wise rotation is defined as the angle in radians between the line
given by the first and last trace of the first line and the axis that gives
increasing CDP-Y, in the direction that gives increasing CDP-X.
By default, the first line is the 'fast' direction, which is inlines if the
file is inline sorted, and crossline if it's crossline sorted.
Parameters
----------
f : SegyFile
line : { 'fast', 'slow', 'iline', 'xline' }
Returns
-------
rotation : float
cdpx : int
cdpy : int
Notes
-----
.. versionadded:: 1.2 | f13383:m7 |
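The geometric definition above - clockwise angle between the first-to-last-trace vector and the increasing CDP-Y axis - reduces to an `atan2` bearing. This is a plain-math sketch of that definition under the stated convention, not segyio's compiled `rotation` routine:

```python
import math

def bearing(x0, y0, x1, y1):
    """Clockwise angle in radians between the CDP-Y axis and the
    vector from (x0, y0) to (x1, y1), normalised to [0, 2*pi)."""
    return math.atan2(x1 - x0, y1 - y0) % (2 * math.pi)

# A line running due east (increasing CDP-X only) is a quarter turn:
print(bearing(0, 0, 100, 0))   # pi / 2
```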
def metadata(f): | if not isinstance(f, segyio.SegyFile):<EOL><INDENT>with segyio.open(f) as fl:<EOL><INDENT>return metadata(fl)<EOL><DEDENT><DEDENT>spec = segyio.spec()<EOL>spec.iline = f._il<EOL>spec.xline = f._xl<EOL>spec.samples = f.samples<EOL>spec.format = f.format<EOL>spec.ilines = f.ilines<EOL>spec.xlines = f.xlines<EOL>spec.offsets = f.offsets<EOL>spec.sorting = f.sorting<EOL>spec.tracecount = f.tracecount<EOL>spec.ext_headers = f.ext_headers<EOL>spec.endian = f.endian<EOL>return spec<EOL> | Get survey structural properties and metadata
Create a description object that, when passed to ``segyio.create()``, would
create a new file with the same structure, dimensions, and metadata as
``f``.
Takes an open segy file (created with segyio.open) or a file name.
Parameters
----------
f : str or segyio.SegyFile
Returns
-------
spec : segyio.spec
Notes
-----
.. versionadded:: 1.4 | f13383:m8 |
def resample(f, rate = None, delay = None, micro = False,<EOL>trace = True,<EOL>binary = True): | if rate is not None:<EOL><INDENT>if not micro: rate *= <NUM_LIT:1000><EOL>if binary: f.bin[segyio.su.hdt] = rate<EOL>if trace: f.header = { segyio.su.dt: rate}<EOL><DEDENT>if delay is not None:<EOL><INDENT>f.header = { segyio.su.delrt: delay }<EOL><DEDENT>t0 = delay if delay is not None else f.samples[<NUM_LIT:0>]<EOL>rate = rate / <NUM_LIT:1000> if rate is not None else f.samples[<NUM_LIT:1>] - f.samples[<NUM_LIT:0>]<EOL>f._samples = (np.arange(len(f.samples)) * rate) + t0<EOL>return f<EOL> | Resample a file
Resample all data traces, and update the file handle to reflect the new
sample rate. No actual samples (data traces) are modified, only the header
fields and interpretation.
By default, the rate and the delay are in milliseconds - if you need higher
resolution, passing micro=True interprets rate as microseconds (as it is
represented in the file). Delay is always milliseconds.
By default, both the global binary header and the trace headers are updated
to reflect this. If preserving either the trace header interval field or
the binary header interval field is important, pass trace=False and
binary=False respectively, to not have that field updated. This only applies
to sample rates - the recording delay is only found in trace headers and
will be written unconditionally, if delay is not None.
.. warning::
This function requires an open file handle and is **DESTRUCTIVE**. It
will modify the file, and if an exception is raised then partial writes
might have happened and the file might be corrupted.
This function assumes all traces have uniform delays and frequencies.
Parameters
----------
f : SegyFile
rate : int
delay : int
micro : bool
if True, interpret rate as microseconds
trace : bool
Update the trace header if True
binary : bool
Update the binary header if True
Notes
-----
.. versionadded:: 1.4 | f13383:m9 |
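The last step of `resample` rebuilds the sample-time axis as delay plus a uniform step, i.e. `f.samples[i] = t0 + i * rate`. A stdlib sketch of just that recomputation (a hypothetical helper, assuming both arguments in milliseconds):

```python
def resampled_times(n_samples, rate_ms, delay_ms):
    """New interpretation of the sample axis after resampling:
    recording delay plus a uniform sample interval."""
    return [delay_ms + i * rate_ms for i in range(n_samples)]

print(resampled_times(5, 4, 100))   # [100, 104, 108, 112, 116]
```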
def from_array(filename, data, iline=<NUM_LIT>,<EOL>xline=<NUM_LIT>,<EOL>format=SegySampleFormat.IBM_FLOAT_4_BYTE,<EOL>dt=<NUM_LIT>,<EOL>delrt=<NUM_LIT:0>): | dt = int(dt)<EOL>delrt = int(delrt)<EOL>data = np.asarray(data)<EOL>dimensions = len(data.shape)<EOL>if dimensions not in range(<NUM_LIT:2>, <NUM_LIT:5>):<EOL><INDENT>problem = "<STR_LIT>".format(dimensions)<EOL>raise ValueError(problem)<EOL><DEDENT>spec = segyio.spec()<EOL>spec.iline = iline<EOL>spec.xline = xline<EOL>spec.format = format<EOL>spec.sorting = TraceSortingFormat.INLINE_SORTING<EOL>if dimensions == <NUM_LIT:2>:<EOL><INDENT>spec.ilines = [<NUM_LIT:1>]<EOL>spec.xlines = list(range(<NUM_LIT:1>, np.size(data,<NUM_LIT:0>) + <NUM_LIT:1>))<EOL>spec.samples = list(range(np.size(data,<NUM_LIT:1>)))<EOL>spec.tracecount = np.size(data, <NUM_LIT:1>)<EOL><DEDENT>if dimensions == <NUM_LIT:3>:<EOL><INDENT>spec.ilines = list(range(<NUM_LIT:1>, np.size(data, <NUM_LIT:0>) + <NUM_LIT:1>))<EOL>spec.xlines = list(range(<NUM_LIT:1>, np.size(data, <NUM_LIT:1>) + <NUM_LIT:1>))<EOL>spec.samples = list(range(np.size(data, <NUM_LIT:2>)))<EOL><DEDENT>if dimensions == <NUM_LIT:4>:<EOL><INDENT>spec.ilines = list(range(<NUM_LIT:1>, np.size(data, <NUM_LIT:0>) + <NUM_LIT:1>))<EOL>spec.xlines = list(range(<NUM_LIT:1>, np.size(data, <NUM_LIT:1>) + <NUM_LIT:1>))<EOL>spec.offsets = list(range(<NUM_LIT:1>, np.size(data, <NUM_LIT:2>)+ <NUM_LIT:1>))<EOL>spec.samples = list(range(np.size(data,<NUM_LIT:3>)))<EOL><DEDENT>samplecount = len(spec.samples)<EOL>with segyio.create(filename, spec) as f:<EOL><INDENT>tr = <NUM_LIT:0><EOL>for ilno, il in enumerate(spec.ilines):<EOL><INDENT>for xlno, xl in enumerate(spec.xlines):<EOL><INDENT>for offno, off in enumerate(spec.offsets):<EOL><INDENT>f.header[tr] = {<EOL>segyio.su.tracf : tr,<EOL>segyio.su.cdpt : tr,<EOL>segyio.su.offset : off,<EOL>segyio.su.ns : samplecount,<EOL>segyio.su.dt : dt,<EOL>segyio.su.delrt : delrt,<EOL>segyio.su.iline : il,<EOL>segyio.su.xline : xl<EOL>}<EOL>if dimensions == 
<NUM_LIT:2>: f.trace[tr] = data[tr, :]<EOL>if dimensions == <NUM_LIT:3>: f.trace[tr] = data[ilno, xlno, :]<EOL>if dimensions == <NUM_LIT:4>: f.trace[tr] = data[ilno, xlno, offno, :]<EOL>tr += <NUM_LIT:1><EOL><DEDENT><DEDENT><DEDENT>f.bin.update(<EOL>tsort=TraceSortingFormat.INLINE_SORTING,<EOL>hdt=dt,<EOL>dto=dt<EOL>)<EOL><DEDENT> | Create a new SEGY file from an n-dimensional array. Create a structured
SEGY file with defaulted headers from a 2-, 3- or 4-dimensional array.
ilines, xlines, offsets and samples are inferred from the size of the
array. Please refer to the documentation for functions from_array2D,
from_array3D and from_array4D to see how the arrays are interpreted.
Structure-defining fields in the binary header and in the traceheaders are
set accordingly. Such fields include, but are not limited to iline, xline
and offset. The file also contains a defaulted textual header.
Parameters
----------
filename : string-like
Path to new file
data : 2-,3- or 4-dimensional array-like
iline : int or segyio.TraceField
Inline number field in the trace headers. Defaults to 189 as per the
SEG-Y rev1 specification
xline : int or segyio.TraceField
Crossline number field in the trace headers. Defaults to 193 as per the
SEG-Y rev1 specification
format : int or segyio.SegySampleFormat
Sample format field in the trace header. Defaults to IBM float 4 byte
dt : int-like
sample interval
delrt : int-like
Notes
-----
.. versionadded:: 1.8
Examples
--------
Create a file from a 3D array, open it and read an iline:
>>> segyio.tools.from_array(path, array3d)
>>> with segyio.open(path, mode) as f:
... iline = f.iline[0]
... | f13383:m10 |
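The triple loop in the body writes traces inline-outermost, offsets-innermost, so the flat trace number follows directly from the three loop indices. A sketch of that numbering (hypothetical helper names):

```python
def trace_index(ilno, xlno, offno, n_xlines, n_offsets):
    """Flat trace number for inline-sorted writing: inlines vary
    slowest, offsets fastest, matching the nested loops above."""
    return (ilno * n_xlines + xlno) * n_offsets + offno

# 2 ilines x 3 xlines x 2 offsets -> 12 traces; last index is 11
print(trace_index(1, 2, 1, n_xlines=3, n_offsets=2))
```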
def from_array2D(filename, data, iline=<NUM_LIT>,<EOL>xline=<NUM_LIT>,<EOL>format=SegySampleFormat.IBM_FLOAT_4_BYTE,<EOL>dt=<NUM_LIT>,<EOL>delrt=<NUM_LIT:0>): | data = np.asarray(data)<EOL>dimensions = len(data.shape)<EOL>if dimensions != <NUM_LIT:2>:<EOL><INDENT>problem = "<STR_LIT>".format(dimensions)<EOL>raise ValueError(problem)<EOL><DEDENT>from_array(filename, data, iline=iline, xline=xline, format=format,<EOL>dt=dt,<EOL>delrt=delrt)<EOL> | Create a new SEGY file from a 2D array
Create a structured SEGY file with defaulted headers from a 2-dimensional
array. The file is inline-sorted and structured as a slice, i.e. it has one
iline and the xlinecount equals the tracecount. The tracecount and
samplecount are inferred from the size of the array. Structure-defining
fields in the binary header and in the traceheaders are set accordingly.
Such fields include, but are not limited to iline, xline and offset. The
file also contains a defaulted textual header.
The 2 dimensional array is interpreted as::
samples
--------------------
trace 0 | s0 | s1 | ... | sn |
--------------------
trace 1 | s0 | s1 | ... | sn |
--------------------
.
.
--------------------
trace n | s0 | s1 | ... | sn |
--------------------
traces = [0, len(axis(0))]
samples = [0, len(axis(1))]
Parameters
----------
filename : string-like
Path to new file
data : 2-dimensional array-like
iline : int or segyio.TraceField
Inline number field in the trace headers. Defaults to 189 as per the
SEG-Y rev1 specification
xline : int or segyio.TraceField
Crossline number field in the trace headers. Defaults to 193 as per the
SEG-Y rev1 specification
format : int or segyio.SegySampleFormat
Sample format field in the trace header. Defaults to IBM float 4 byte
dt : int-like
sample interval
delrt : int-like
Notes
-----
.. versionadded:: 1.8
Examples
--------
Create a file from a 2D array, open it and read a trace:
>>> segyio.tools.from_array2D(path, array2d)
>>> with segyio.open(path, mode, strict=False) as f:
... tr = f.trace[0] | f13383:m11 |
def from_array3D(filename, data, iline=<NUM_LIT>,<EOL>xline=<NUM_LIT>,<EOL>format=SegySampleFormat.IBM_FLOAT_4_BYTE,<EOL>dt=<NUM_LIT>,<EOL>delrt=<NUM_LIT:0>): | data = np.asarray(data)<EOL>dimensions = len(data.shape)<EOL>if dimensions != <NUM_LIT:3>:<EOL><INDENT>problem = "<STR_LIT>".format(dimensions)<EOL>raise ValueError(problem)<EOL><DEDENT>from_array(filename, data, iline=iline, xline=xline, format=format,<EOL>dt=dt,<EOL>delrt=delrt)<EOL> | Create a new SEGY file from a 3D array
Create a structured SEGY file with defaulted headers from a 3-dimensional
array. The file is inline-sorted. ilines, xlines and samples are inferred
from the array. Structure-defining fields in the binary header and
in the traceheaders are set accordingly. Such fields include, but are not
limited to iline, xline and offset. The file also contains a defaulted
textual header.
The 3-dimensional array is interpreted as::
xl0 xl1 xl2
-----------------
/ | tr0 | tr1 | tr2 | il0
-----------------
| / | tr3 | tr4 | tr5 | il1
-----------------
| / | tr6 | tr7 | tr8 | il2
-----------------
| / / / / n-samples
------------------
ilines = [1, len(axis(0)) + 1]
xlines = [1, len(axis(1)) + 1]
samples = [0, len(axis(2))]
Parameters
----------
filename : string-like
Path to new file
data : 3-dimensional array-like
iline : int or segyio.TraceField
Inline number field in the trace headers. Defaults to 189 as per the
SEG-Y rev1 specification
xline : int or segyio.TraceField
Crossline number field in the trace headers. Defaults to 193 as per the
SEG-Y rev1 specification
format : int or segyio.SegySampleFormat
Sample format field in the trace header. Defaults to IBM float 4 byte
dt : int-like
sample interval
delrt : int-like
Notes
-----
.. versionadded:: 1.8
Examples
--------
Create a file from a 3D array, open it and read an iline:
>>> segyio.tools.from_array3D(path, array3d)
>>> with segyio.open(path, mode) as f:
... iline = f.iline[0]
... | f13383:m12 |
def from_array4D(filename, data, iline=<NUM_LIT>,<EOL>xline=<NUM_LIT>,<EOL>format=SegySampleFormat.IBM_FLOAT_4_BYTE,<EOL>dt=<NUM_LIT>,<EOL>delrt=<NUM_LIT:0>): | data = np.asarray(data)<EOL>dimensions = len(data.shape)<EOL>if dimensions != <NUM_LIT:4>:<EOL><INDENT>problem = "<STR_LIT>".format(dimensions)<EOL>raise ValueError(problem)<EOL><DEDENT>from_array(filename, data, iline=iline, xline=xline, format=format,<EOL>dt=dt,<EOL>delrt=delrt)<EOL> | Create a new SEGY file from a 4D array
Create a structured SEGY file with defaulted headers from a 4-dimensional
array. The file is inline-sorted. ilines, xlines, offsets and samples are
inferred from the array. Structure-defining fields in the binary header and
in the traceheaders are set accordingly. Such fields include, but are not
limited to iline, xline and offset. The file also contains a defaulted
textual header.
The 4D array is interpreted:
ilines = [1, len(axis(0)) + 1]
xlines = [1, len(axis(1)) + 1]
offsets = [1, len(axis(2)) + 1]
samples = [0, len(axis(3))]
Parameters
----------
filename : string-like
Path to new file
data : 4-dimensional array-like
iline : int or segyio.TraceField
Inline number field in the trace headers. Defaults to 189 as per the
SEG-Y rev1 specification
xline : int or segyio.TraceField
Crossline number field in the trace headers. Defaults to 193 as per the
SEG-Y rev1 specification
format : int or segyio.SegySampleFormat
Sample format field in the trace header. Defaults to IBM float 4 byte
dt : int-like
sample interval
delrt : int-like
Notes
-----
.. versionadded:: 1.8
Examples
--------
Create a file from a 4D array, open it and read an iline:
>>> segyio.tools.from_array4D(path, array4d)
>>> with segyio.open(path, mode) as f:
... iline = f.iline[0]
... | f13383:m13 |
def fetch(self, buf = None, traceno = None): | if buf is None:<EOL><INDENT>buf = self.buf<EOL><DEDENT>if traceno is None:<EOL><INDENT>traceno = self.traceno<EOL><DEDENT>try:<EOL><INDENT>if self.kind == TraceField:<EOL><INDENT>if traceno is None: return buf<EOL>return self.filehandle.getth(traceno, buf)<EOL><DEDENT>else:<EOL><INDENT>return self.filehandle.getbin()<EOL><DEDENT><DEDENT>except IOError:<EOL><INDENT>if not self.readonly:<EOL><INDENT>return bytearray(len(self.buf))<EOL><DEDENT>else: raise<EOL><DEDENT> | Fetch the header from disk
This object will read the header when it is constructed, which means it
might be out-of-date if the file is updated through some other handle.
This method is largely meant for internal use - if you need to reload
disk contents, use ``reload``.
Fetch does not update any internal state (unless `buf` is ``None`` on a
trace header, and the read succeeds), but returns the fetched header
contents.
This method can be used to reposition the trace header, which is useful
for constructing generators.
If this is called on a writable, new file, and this header has not yet
been written to, it will successfully return an empty buffer that, when
written to, will be reflected on disk.
Parameters
----------
buf : bytearray
buffer to read into instead of ``self.buf``
traceno : int
Returns
-------
buf : bytearray
Notes
-----
.. versionadded:: 1.6
This method is not intended as user-oriented functionality, but might
be useful in high-performance code. | f13385:c0:m1 |
def reload(self): | self.buf = self.fetch(buf = self.buf)<EOL>return self<EOL> | This object will read the header when it is constructed, which means it
might be out-of-date if the file is updated through some other handle.
It's rarely required to call this method, and it's a symptom of fragile
code. However, if you have multiple handles to the same header, it
might be necessary. Consider the following example::
>>> x = f.header[10]
>>> y = f.header[10]
>>> x[1, 5]
{ 1: 5, 5: 10 }
>>> y[1, 5]
{ 1: 5, 5: 10 }
>>> x[1] = 6
>>> x[1], y[1] # write to x[1] is invisible to y
6, 5
>>> y.reload()
>>> x[1], y[1]
6, 6
>>> x[1] = 5
>>> x[1], y[1]
5, 6
>>> y[5] = 1
>>> x.reload()
>>> x[1], y[1, 5] # the write to x[1] is lost
6, { 1: 6, 5: 1 }
In segyio, headers writes are atomic, and the write to disk writes the
full cache. If this cache is out of date, some writes might get lost,
even though the updates are compatible.
The fix to this issue is either to use ``reload`` and maintain buffer
consistency, or simply don't let header handles alias and overlap in
lifetime.
Notes
-----
.. versionadded:: 1.6 | f13385:c0:m2 |
def flush(self): | if self.kind == TraceField:<EOL><INDENT>self.filehandle.putth(self.traceno, self.buf)<EOL><DEDENT>elif self.kind == BinField:<EOL><INDENT>self.filehandle.putbin(self.buf)<EOL><DEDENT>else:<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise RuntimeError(msg.format(self.kind))<EOL><DEDENT> | Commit backing storage to disk
This method is largely internal, and it is not necessary to call this
from user code. It should not be explicitly invoked and may be removed
in future versions. | f13385:c0:m3 |
def __getitem__(self, key): | try: return self.getfield(self.buf, int(key))<EOL>except TypeError: pass<EOL>return {self.kind(k): self.getfield(self.buf, int(k)) for k in key}<EOL> | d[key]
Read the associated value of `key`.
`key` can be any iterable, to retrieve multiple keys at once. In this
case, a mapping of key -> value is returned.
Parameters
----------
key : int, or iterable of int
Returns
-------
value : int or dict_like
Notes
-----
.. versionadded:: 1.1
.. note::
Since version 1.6, KeyError is appropriately raised on key misses,
whereas ``IndexError`` was raised before. This is an old bug, since
header types were documented to be dict-like. If you rely on
catching key-miss errors in your code, you might want to handle
both ``IndexError`` and ``KeyError`` for multi-version robustness.
.. warning::
segyio considers reads/writes full headers, not individual fields,
and does the read from disk when this class is constructed. If the
file is updated through some other handle, including a secondary
access via `f.header`, this cache might be out-of-date.
Examples
--------
Read a single value:
>>> d[3213]
15000
Read multiple values at once:
>>> d[37, 189]
{ 37: 5, 189: 2484 }
>>> d[37, TraceField.INLINE_3D]
{ 37: 5, 189: 2484 } | f13385:c0:m4 |
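The body's try/except dispatch - attempt `int(key)` for a single field, fall back to iterating when that raises ``TypeError`` - can be reproduced with a plain dict as backing store. A minimal stand-in for the header field lookup, not the real buffer-backed class:

```python
class FieldSketch:
    """Dict-like lookup: a single key returns its value, an
    iterable of keys returns a key -> value mapping."""
    def __init__(self, values):
        self.values = values

    def __getitem__(self, key):
        try:
            return self.values[int(key)]   # single-key fast path
        except TypeError:
            pass                           # key is an iterable of keys
        return {k: self.values[int(k)] for k in key}

d = FieldSketch({37: 5, 189: 2484})
print(d[189])        # 2484
print(d[37, 189])    # {37: 5, 189: 2484}
```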
def __setitem__(self, key, val): | self.putfield(self.buf, key, val)<EOL>self.flush()<EOL>return val<EOL> | d[key] = val
Set d[key] to val. Setting keys commits changes to disk, although the
changes may not be visible until the kernel schedules the write.
Unlike d[key], this method does not support assigning multiple values
at once. To set multiple values at once, use the `update` method.
Parameters
----------
key : int_like
val : int_like
Returns
-------
val : int
The value set
Notes
-----
.. versionadded:: 1.1
.. note::
Since version 1.6, KeyError is appropriately raised on key misses,
whereas ``IndexError`` was raised before. This is an old bug, since
header types were documented to be dict-like. If you rely on
catching key-miss errors in your code, you might want to handle
both ``IndexError`` and ``KeyError`` for multi-version robustness.
.. warning::
segyio considers reads/writes full headers, not individual fields,
and does the read from disk when this class is constructed. If the
file is updated through some other handle, including a secondary
access via `f.header`, this cache might be out-of-date. That means
writing an individual field will write the full header to disk,
possibly overwriting previously set values.
Examples
--------
Set a value and keep in a variable:
>>> x = header[189] = 5
>>> x
5 | f13385:c0:m5 |
def __delitem__(self, key): | self[key] = <NUM_LIT:0><EOL> | del d[key]
'Delete' the key by setting value to zero. Equivalent to ``d[key] =
0``.
Notes
-----
.. versionadded:: 1.6 | f13385:c0:m6 |
def keys(self): | return list(self._keys)<EOL> | D.keys() -> a set-like object providing a view on D's keys | f13385:c0:m7 |
def __len__(self): | return len(self._keys)<EOL> | x.__len__() <==> len(x) | f13385:c0:m8 |
def __iter__(self): | return iter(self._keys)<EOL> | x.__iter__() <==> iter(x) | f13385:c0:m9 |
def __eq__(self, other): | if not isinstance(other, collections.Mapping):<EOL><INDENT>return NotImplemented<EOL><DEDENT>if len(self) != len(other):<EOL><INDENT>return False<EOL><DEDENT>def intkeys(d):<EOL><INDENT>return { int(k): v for k, v in d.items() }<EOL><DEDENT>return intkeys(self) == intkeys(other)<EOL> | x.__eq__(y) <==> x == y | f13385:c0:m10 |
def update(self, *args, **kwargs): | if len(args) > <NUM_LIT:1>:<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise TypeError(msg.format(len(args)))<EOL><DEDENT>buf = bytearray(self.buf)<EOL>if len(args) == <NUM_LIT:1>:<EOL><INDENT>other = args[<NUM_LIT:0>]<EOL>if isinstance(other, collections.Mapping):<EOL><INDENT>for key in other:<EOL><INDENT>self.putfield(buf, int(key), other[key])<EOL><DEDENT><DEDENT>elif hasattr(other, "<STR_LIT>"):<EOL><INDENT>for key in other.keys():<EOL><INDENT>self.putfield(buf, int(key), other[key])<EOL><DEDENT><DEDENT>else:<EOL><INDENT>for key, value in other:<EOL><INDENT>self.putfield(buf, int(key), value)<EOL><DEDENT><DEDENT><DEDENT>for key, value in kwargs.items():<EOL><INDENT>self.putfield(buf, int(self._kwargs[key]), value)<EOL><DEDENT>self.buf = buf<EOL>self.flush()<EOL> | d.update([E, ]**F) -> None. Update D from mapping/iterable E and F.
Overwrite the values in `d` with the values from `E` and `F`. If any key
in `E` or `F` is invalid in `d`, ``KeyError`` is raised.
This method is atomic - either all values in `value` are set in `d`, or
none are. ``update`` does not commit a partially-updated version to
disk.
For kwargs, Seismic Unix-style names are supported. `BinField` and
`TraceField` are not, because there are name collisions between them,
although this restriction may be lifted in the future.
Notes
-----
.. versionchanged:: 1.3
Support for common dict operations (update, keys, values)
.. versionchanged:: 1.6
Atomicity guarantee
.. versionchanged:: 1.6
`**kwargs` support
Examples
--------
>>> e = { 1: 10, 9: 5 }
>>> d.update(e)
>>> l = [ (105, 11), (169, 4) ]
>>> d.update(l)
>>> d.update(e, iline=189, xline=193, hour=5)
>>> d.update(sx=7) | f13385:c0:m11 |
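The atomicity guarantee works the same way the body does: stage every write on a copy of the buffer, and only install the copy once all keys have validated. A stdlib sketch of that pattern with a dict standing in for the header buffer:

```python
class AtomicStore:
    """Sketch of the update() atomicity guarantee: mutate a copy,
    and only install it once every key has been validated."""
    def __init__(self, data):
        self.data = dict(data)

    def update(self, other):
        staged = dict(self.data)        # work on a private copy
        for key, value in other.items():
            if key not in staged:
                raise KeyError(key)     # nothing committed yet
            staged[key] = value
        self.data = staged              # single atomic swap

s = AtomicStore({1: 10, 9: 5})
try:
    s.update({1: 99, 42: 0})            # 42 is an invalid key
except KeyError:
    pass
print(s.data[1])                        # still 10: no partial write
```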
def __getitem__(self, index): | offset = self.default_offset<EOL>try: index, offset = index<EOL>except TypeError: pass<EOL>try:<EOL><INDENT>head = self.heads[index] + self.offsets[offset]<EOL><DEDENT>except TypeError:<EOL><INDENT>pass<EOL><DEDENT>else:<EOL><INDENT>return self.filehandle.getline(head,<EOL>self.length,<EOL>self.stride,<EOL>len(self.offsets),<EOL>np.empty(self.shape, dtype=self.dtype),<EOL>)<EOL><DEDENT>irange, orange = self.ranges(index, offset)<EOL>def gen():<EOL><INDENT>x = np.empty(self.shape, dtype=self.dtype)<EOL>y = np.copy(x)<EOL>for line in irange:<EOL><INDENT>for off in orange:<EOL><INDENT>head = self.heads[line] + self.offsets[off]<EOL>self.filehandle.getline(head,<EOL>self.length,<EOL>self.stride,<EOL>len(self.offsets),<EOL>y,<EOL>)<EOL>y, x = x, y<EOL>yield x<EOL><DEDENT><DEDENT><DEDENT>return gen()<EOL> | line[i] or line[i, o]
The line `i`, or the line `i` at a specific offset `o`. ``line[i]``
returns a numpy array, and changes to this array will *not* be
reflected on disk.
The `i` and `o` are *keys*, and should correspond to the line- and
offset labels in your file, and in the `ilines`, `xlines`, and
`offsets` attributes.
Slices can contain lines and offsets not in the file, and like with
list slicing, these are handled gracefully and ignored.
When `i` or `o` is a slice, a generator of numpy arrays is returned. If
the slice is defaulted (:), segyio knows enough about the structure to
give you all of the respective labels.
When both `i` and `o` are slices, only one generator is returned, and
the lines are yielded offsets-first, roughly equivalent to the double
for loop::
>>> for line in lines:
... for off in offsets:
... yield line[line, off]
...
Parameters
----------
i : int or slice
o : int or slice
Returns
-------
line : numpy.ndarray of dtype or generator of numpy.ndarray of dtype
Raises
------
KeyError
If `i` or `o` don't exist
Notes
-----
.. versionadded:: 1.1
Examples
--------
Read an inline:
>>> x = line[2400]
Copy every inline into a list:
>>> l = [numpy.copy(x) for x in iline[:]]
Numpy operations on every other inline:
>>> for line in line[::2]:
... line = line * 2
... avg = np.average(line)
Read lines up to 2430:
>>> for line in line[:2430]:
... line.mean()
Copy all lines at all offsets:
>>> l = [numpy.copy(x) for x in line[:,:]]
Copy all offsets of a line:
>>> x = numpy.copy(iline[10,:])
Copy all lines at a fixed offset:
>>> x = numpy.copy(iline[:, 120])
Copy every other line and offset:
>>> map(numpy.copy, line[::2, ::2])
Copy all offsets [200, 250, 300, 350, ...] in the range [200, 800) for
all lines [2420,2460):
>>> l = [numpy.copy(x) for x in line[2420:2460, 200:800:50]] | f13387:c0:m2 |
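The double-slice traversal order described in the docstring - one generator, offsets varying fastest within each line - can be sketched without any file I/O. `read` here is a hypothetical callback standing in for the actual line read:

```python
def lines_gen(lines, offsets, read):
    """Sketch of the line[i, o] slice traversal: for every line,
    yield each offset in turn (offsets vary fastest)."""
    for line in lines:
        for off in offsets:
            yield read(line, off)

order = list(lines_gen([2400, 2401], [100, 200], lambda l, o: (l, o)))
print(order)
```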