def append(self, element):
    """
    Append a PileupElement to this Pileup. If an identical PileupElement
    is already part of this Pileup, do nothing.
    """
    assert element.locus == self.locus, (
        "<STR_LIT>"
        % (element.locus, self.locus))
    self.elements[element] = None
# f14239:c0:m3
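The `self.elements[element] = None` line relies on dicts preserving insertion order (Python 3.7+), so the dict's keys act as an ordered set with O(1) duplicate detection. A minimal sketch of the idiom, using a hypothetical `OrderedSet` wrapper rather than the actual Pileup class:

```python
class OrderedSet:
    """Ordered set built on a dict: keys are the members, values unused."""

    def __init__(self, items=()):
        self._d = dict.fromkeys(items)

    def add(self, item):
        # Re-adding an existing member is a no-op, as in Pileup.append.
        self._d[item] = None

    def update(self, other):
        # Union in another OrderedSet, as in Pileup.update.
        self._d.update(other._d)

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)


s = OrderedSet([3, 1, 2])
s.add(1)          # duplicate: no effect
s.add(4)
print(list(s))    # [3, 1, 2, 4]
```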
def update(self, other):
    """
    Add all pileup elements from other into self.
    """
    assert self.locus == other.locus
    self.elements.update(other.elements)
# f14239:c0:m4
def filter(self, filters):
    """
    Apply filters to the pileup elements, and return a new Pileup with
    the filtered elements removed.

    Parameters
    ----------
    filters : list of PileupElement -> bool callables
        A PileupElement is retained if all filters return True when
        called on it.
    """
    new_elements = [
        e for e in self.elements
        if all(function(e) for function in filters)]
    return Pileup(self.locus, new_elements)
# f14239:c0:m5
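`Pileup.filter` keeps an element only when every predicate accepts it. The same all-predicates pattern in isolation, on plain integers rather than pileup elements (a hypothetical example):

```python
def apply_filters(items, filters):
    # Keep an item only when every filter callable returns True for it.
    return [x for x in items if all(f(x) for f in filters)]


# Keep even numbers greater than 3.
kept = apply_filters(range(10), [lambda x: x % 2 == 0, lambda x: x > 3])
print(kept)  # [4, 6, 8]
```

An empty filter list keeps everything, since `all(())` is True.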
def __init__(self, locus, offset_start, offset_end, alignment):
assert offset_end >= offset_start,"<STR_LIT>" % (offset_start, offset_end)<EOL>self.locus = locus<EOL>self.offset_start = offset_start<EOL>self.offset_end = offset_end<EOL>self.alignment = alignment<EOL>self.alignment_key = alignment_key(self.alignment)<EOL>
Construct a PileupElement object.
f14240:c0:m0
def fields(self):
    """
    Fields that should be considered for our notion of object equality.
    """
    return (
        self.locus, self.offset_start, self.offset_end, self.alignment_key)
# f14240:c0:m1
@property
def bases(self):
    """
    The sequenced bases in the alignment that align to this locus in the
    genome, as a string. Empty string in the case of a deletion. String
    of length > 1 if there is an insertion here.
    """
    sequence = self.alignment.query_sequence
    assert self.offset_end <= len(sequence), "<STR_LIT>" % (
        self.offset_end,
        len(sequence),
        self.alignment.cigarstring,
        sequence)
    return sequence[self.offset_start:self.offset_end]
# f14240:c0:m4
@property
def base_qualities(self):
    """
    The phred-scaled base quality scores corresponding to `self.bases`,
    as a list.
    """
    return self.alignment.query_qualities[
        self.offset_start:self.offset_end]
# f14240:c0:m5
@property
def min_base_quality(self):
    """
    The minimum of the base qualities. In the case of a deletion, where
    there are no bases in this PileupElement, the minimum is taken over
    the sequenced bases immediately before and after the deletion.
    """
    try:
        return min(self.base_qualities)
    except ValueError:
        # An empty quality list means an empty offset range: a deletion.
        assert self.offset_start == self.offset_end
        adjacent_qualities = [
            self.alignment.query_qualities[offset]
            for offset in [self.offset_start - 1, self.offset_start]
            if 0 <= offset < len(self.alignment.query_qualities)
        ]
        return min(adjacent_qualities)
# f14240:c0:m6
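`min()` on an empty sequence raises ValueError; the property catches that (the deletion case) and falls back to the qualities flanking the empty offset range. A self-contained sketch with plain lists, where `qualities` stands in for `alignment.query_qualities`:

```python
def min_quality(qualities, start, end):
    """Minimum quality in qualities[start:end]; for an empty range
    (a deletion), use the positions immediately before and at `start`."""
    try:
        return min(qualities[start:end])
    except ValueError:
        assert start == end  # only a deletion yields an empty slice
        adjacent = [
            qualities[offset]
            for offset in (start - 1, start)
            if 0 <= offset < len(qualities)  # clamp at read boundaries
        ]
        return min(adjacent)


print(min_quality([30, 20, 40], 1, 2))  # 20 (a normal single base)
print(min_quality([30, 20, 40], 2, 2))  # 20 (deletion: min of flanking 20, 40)
```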
@staticmethod
def from_pysam_alignment(locus, pileup_read):
    """
    Factory function to create a new PileupElement from a pysam
    `PileupRead`.

    Parameters
    ----------
    locus : varcode.Locus
        Reference locus for which to construct a PileupElement. Must
        include exactly one base.

    pileup_read : pysam.calignmentfile.PileupRead
        pysam PileupRead instance. Its alignment must overlap the locus.

    Returns
    ----------
    PileupElement
    """
    assert not pileup_read.is_refskip, (
        "<STR_LIT>"
        "<STR_LIT>")
    offset_start = None
    offset_end = len(pileup_read.alignment.query_sequence)
    for (offset, position) in pileup_read.alignment.aligned_pairs:
        if offset is not None and position is not None:
            if position == locus.position:
                offset_start = offset
            elif position > locus.position:
                offset_end = offset
                break
    if offset_start is None:
        offset_start = offset_end
    assert pileup_read.is_del == (offset_end - offset_start == 0), (
        "<STR_LIT>" % (
            pileup_read.is_del,
            offset_start,
            offset_end,
            offset_end - offset_start,
            locus.position,
            pileup_read.alignment.aligned_pairs))
    assert offset_end >= offset_start
    result = PileupElement(
        locus, offset_start, offset_end, pileup_read.alignment)
    return result
# f14240:c0:m7
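The factory walks `aligned_pairs` (pairs of read offset and reference position, either of which may be None for insertions or deletions, as in pysam) to find the half-open read-offset range covering the locus. The scan can be sketched with plain tuples, no pysam required; `offsets_for_locus` and its arguments are hypothetical stand-ins:

```python
def offsets_for_locus(aligned_pairs, query_length, locus_position):
    # aligned_pairs: list of (read_offset, reference_position) tuples.
    offset_start = None
    offset_end = query_length
    for offset, position in aligned_pairs:
        if offset is not None and position is not None:
            if position == locus_position:
                offset_start = offset           # base aligned at the locus
            elif position > locus_position:
                offset_end = offset             # first base past the locus
                break
    if offset_start is None:
        offset_start = offset_end               # deletion: empty range
    return offset_start, offset_end


# Read aligned 0->100, 1->101, 2->103 (reference base 102 deleted).
pairs = [(0, 100), (1, 101), (None, 102), (2, 103)]
print(offsets_for_locus(pairs, 3, 101))  # (1, 2): one sequenced base
print(offsets_for_locus(pairs, 3, 102))  # (2, 2): deletion, empty range
```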
def to_locus(variant_or_locus):
    """
    Return a Locus object for a Variant instance.

    This is necessary since the read evidence module expects Variant
    instances to have a locus attribute set to a varcode.Locus instance
    of interbase genomic coordinates. The rest of varcode uses a
    different Variant class, but will eventually be transitioned to
    interbase coordinates. See test/test_read_evidence.py for a
    definition of the Variant class that the read_evidence module is
    meant to work with.

    This function can be passed a regular varcode.Variant instance (with
    fields start, end, contig, etc.), a different kind of variant object
    that has a 'locus' field, or a varcode.Locus. It will return a
    varcode.Locus instance.

    This should all get cleaned up once varcode switches to interbase
    coordinates and we standardize on a Variant class.
    """
    if isinstance(variant_or_locus, Locus):
        return variant_or_locus
    try:
        return variant_or_locus.locus
    except AttributeError:
        return Locus.from_inclusive_coordinates(
            variant_or_locus.contig,
            variant_or_locus.start,
            variant_or_locus.end)
# f14242:m0
def __init__(self, pileups=None, parent=None):
if pileups is None:<EOL><INDENT>pileups = {}<EOL><DEDENT>self.pileups = pileups<EOL>self.parent = parent<EOL>
Construct a new PileupCollection.
f14242:c0:m0
def pileup(self, locus):
locus = to_locus(locus)<EOL>if len(locus.positions) != <NUM_LIT:1>:<EOL><INDENT>raise ValueError("<STR_LIT>" % locus)<EOL><DEDENT>return self.pileups[locus]<EOL>
Given a 1-base locus, return the Pileup at that locus. Raises a KeyError if this PileupCollection does not have a Pileup at the specified locus.
f14242:c0:m1
def at(self, *loci):
loci = [to_locus(obj) for obj in loci]<EOL>single_position_loci = []<EOL>for locus in loci:<EOL><INDENT>for position in locus.positions:<EOL><INDENT>single_position_loci.append(<EOL>Locus.from_interbase_coordinates(locus.contig, position))<EOL><DEDENT><DEDENT>pileups = dict(<EOL>(locus, self.pileups[locus]) for locus in single_position_loci)<EOL>return PileupCollection(pileups, self)<EOL>
Return a new PileupCollection instance including only pileups for the specified loci.
f14242:c0:m2
def loci(self):
return list(self.pileups)<EOL>
Returns the loci included in this instance.
f14242:c0:m3
def reads(self):
<EOL>def alignment_precedence(pysam_alignment_record):<EOL><INDENT>return pysam_alignment_record.mapping_quality<EOL><DEDENT>result = {}<EOL>for pileup in self.pileups.values():<EOL><INDENT>for e in pileup.elements:<EOL><INDENT>key = read_key(e.alignment)<EOL>if key not in result or (<EOL>alignment_precedence(e.alignment) ><EOL>alignment_precedence(result[key])):<EOL><INDENT>result[key] = e.alignment<EOL><DEDENT><DEDENT><DEDENT>return list(result.values())<EOL>
The reads in this PileupCollection. All reads will have an alignment that overlaps at least one of the included loci. Since SAM (and pysam) have no real notion of a "read", the returned instances are actually pysam.AlignedSegment instances, (i.e. alignments). However, only one alignment will be returned by this method per read. Returns ---------- List of pysam.AlignedSegment instances. If a particular read has more than one alignment in this PileupCollection (e.g. one primary and one secondary), then the alignment returned is the one with the highest mapping quality.
f14242:c0:m4
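The loop above is a generic "keep the best item per key" reduction: one dict pass, replacing an entry only when a higher-precedence item for the same key appears. In isolation, with (read name, mapping quality) tuples standing in for alignments (a hypothetical example):

```python
def best_per_key(items, key, precedence):
    # Keep one item per key: the one with the highest precedence value.
    result = {}
    for item in items:
        k = key(item)
        if k not in result or precedence(item) > precedence(result[k]):
            result[k] = item
    return list(result.values())


# Two alignments for read1 (qualities 30 and 60), one for read2.
alignments = [("read1", 30), ("read2", 50), ("read1", 60)]
best = best_per_key(alignments, key=lambda a: a[0], precedence=lambda a: a[1])
print(sorted(best))  # [('read1', 60), ('read2', 50)]
```

Ties keep the first item seen, since replacement requires strictly greater precedence.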
def num_reads(self):
return len(self.reads())<EOL>
Returns the number of reads in this PileupCollection.
f14242:c0:m5
def read_attribute(self, attribute):
return self.read_attributes([attribute])[attribute]<EOL>
Query a read attribute across all reads in this PileupCollection. Parameters ---------- attribute : string Attribute to query. See `PileupCollection.read_attributes` for possible attributes. Returns ---------- pandas.Series instance giving values for the attribute in each read. The order of reads is fixed, so multiple calls to this function will return corresponding values.
f14242:c0:m6
def read_attributes(self, attributes=None):
def include(attribute):<EOL><INDENT>return attributes is None or attribute in attributes<EOL><DEDENT>reads = self.reads()<EOL>possible_column_names = list(PileupCollection._READ_ATTRIBUTE_NAMES)<EOL>result = OrderedDict(<EOL>(name, [getattr(read, name) for read in reads])<EOL>for name in PileupCollection._READ_ATTRIBUTE_NAMES<EOL>if include(name))<EOL>if reads:<EOL><INDENT>tag_dicts = [dict(x.get_tags()) for x in reads]<EOL>tag_keys = set.union(<EOL>*[set(item.keys()) for item in tag_dicts])<EOL>for tag_key in sorted(tag_keys):<EOL><INDENT>column_name = "<STR_LIT>" % tag_key<EOL>possible_column_names.append(column_name)<EOL>if include(column_name):<EOL><INDENT>result[column_name] = [d.get(tag_key) for d in tag_dicts]<EOL><DEDENT><DEDENT><DEDENT>possible_column_names.append("<STR_LIT>")<EOL>if include("<STR_LIT>"):<EOL><INDENT>result["<STR_LIT>"] = reads<EOL><DEDENT>if attributes is not None:<EOL><INDENT>for attribute in attributes:<EOL><INDENT>if attribute not in result:<EOL><INDENT>raise ValueError(<EOL>"<STR_LIT>"<EOL>% (attribute, "<STR_LIT:U+0020>".join(possible_column_names)))<EOL><DEDENT><DEDENT>assert set(attributes) == set(result)<EOL><DEDENT>return pandas.DataFrame(result)<EOL>
Collect read attributes across reads in this PileupCollection into a pandas.DataFrame. Valid attributes are the following properties of a pysam.AlignedSegment instance. See: http://pysam.readthedocs.org/en/latest/api.html#pysam.AlignedSegment for the meaning of these attributes. * cigarstring * flag * inferred_length * is_duplicate * is_paired * is_proper_pair * is_qcfail * is_read1 * is_read2 * is_reverse * is_secondary * is_unmapped * mapping_quality * mate_is_reverse * mate_is_unmapped * next_reference_id * next_reference_start * query_alignment_end * query_alignment_length * query_alignment_qualities * query_alignment_sequence * query_alignment_start * query_length * query_name * reference_end * reference_id * reference_length * reference_start * template_length (Note: the above list is parsed into the _READ_ATTRIBUTE_NAMES class variable, so be careful when modifying it.) Additionally, for alignment "tags" (arbitrary key values associated with an alignment), a column of the form "TAG_{tag name}" is included. Finally, the column "pysam_alignment_record" gives the underlying `pysam.AlignedSegment` instances. Parameters ---------- attributes (optional): list of strings List of columns to include. If unspecified, all columns are included in the result. Returns ---------- pandas.DataFrame of read attributes.
f14242:c0:m7
def group_by_allele(self, locus):
locus = to_locus(locus)<EOL>read_to_allele = None<EOL>loci = []<EOL>if locus.positions:<EOL><INDENT>for position in locus.positions:<EOL><INDENT>base_position = Locus.from_interbase_coordinates(<EOL>locus.contig, position)<EOL>loci.append(base_position)<EOL>new_read_to_allele = {}<EOL>for element in self.pileups[base_position]:<EOL><INDENT>allele_prefix = "<STR_LIT>"<EOL>key = alignment_key(element.alignment)<EOL>if read_to_allele is not None:<EOL><INDENT>try:<EOL><INDENT>allele_prefix = read_to_allele[key]<EOL><DEDENT>except KeyError:<EOL><INDENT>continue<EOL><DEDENT><DEDENT>allele = allele_prefix + element.bases<EOL>new_read_to_allele[key] = allele<EOL><DEDENT>read_to_allele = new_read_to_allele<EOL><DEDENT><DEDENT>else:<EOL><INDENT>position_before = Locus.from_interbase_coordinates(<EOL>locus.contig, locus.start)<EOL>loci.append(position_before)<EOL>read_to_allele = {}<EOL>for element in self.pileups[position_before]:<EOL><INDENT>allele = element.bases[<NUM_LIT:1>:]<EOL>read_to_allele[alignment_key(element.alignment)] = allele<EOL><DEDENT><DEDENT>split = defaultdict(lambda: PileupCollection(pileups={}, parent=self))<EOL>for locus in loci:<EOL><INDENT>pileup = self.pileups[locus]<EOL>for e in pileup.elements:<EOL><INDENT>key = read_to_allele.get(alignment_key(e.alignment))<EOL>if key is not None:<EOL><INDENT>if locus in split[key].pileups:<EOL><INDENT>split[key].pileups[locus].append(e)<EOL><DEDENT>else:<EOL><INDENT>split[key].pileups[locus] = Pileup(locus, [e])<EOL><DEDENT><DEDENT><DEDENT><DEDENT>def sorter(pair):<EOL><INDENT>(allele, pileup_collection) = pair<EOL>return (-<NUM_LIT:1> * pileup_collection.num_reads(), allele)<EOL><DEDENT>return OrderedDict(sorted(split.items(), key=sorter))<EOL>
Split the PileupCollection by the alleles suggested by the reads at the specified locus. If a read has an insertion immediately following the locus, then the insertion is included in the allele. For example, if locus is the 1-base range [5,6), one allele might be "AGA", indicating that at locus 5 some read has an "A" followed by a 2-base insertion ("GA"). If a read has a deletion at the specified locus, the allele is the empty string. The given locus may include any number of bases. If the locus includes multiple bases, then the alleles consist of all bases aligning to that range in any read. Note that only sequences actually sequenced in a particular read are included. For example, if one read has "ATT" at a locus and another read has "GCC", then the alleles are "ATT" and "GCC", but not "GTT". That is, the bases in each allele are phased. For this reason, only reads that overlap the entire locus are included. If the locus is an empty interval (e.g. [5,5) ), then the alleles consist only of inserted bases. In this example, only bases inserted immediately after locus 5 would be included (but *not* the base actually at locus 5). In the previous insertion example, the allele would be "GA", indicating a 2-base insertion. Reads that have no insertion at that position (matches or deletions) would have the empty string as their allele. Parameters ---------- locus : Locus The reference locus, encompassing 0 or more bases. Returns ---------- A dict of string -> PileupCollection. The keys are nucleotide strings giving the bases sequenced at the locus, and the values are PileupCollection instances of the alignments that support that allele.
f14242:c0:m8
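The phasing step above builds each allele by folding reads position by position: a read's allele at position N+1 extends its allele at position N, and reads missing from any position are dropped. A self-contained sketch of that fold, with per-position `{read_name: bases}` dicts (hypothetical names) standing in for pileups:

```python
def group_reads_by_allele(position_to_reads):
    # position_to_reads: ordered list of {read_name: bases} dicts, one
    # per reference position. Only reads spanning every position are
    # kept, so the bases within each allele are phased.
    read_to_allele = None
    for reads in position_to_reads:
        new = {}
        for name, bases in reads.items():
            if read_to_allele is None:
                new[name] = bases                       # first position
            elif name in read_to_allele:
                new[name] = read_to_allele[name] + bases  # extend allele
        read_to_allele = new
    groups = {}
    for name, allele in read_to_allele.items():
        groups.setdefault(allele, []).append(name)
    return groups


pos1 = {"r1": "A", "r2": "G", "r3": "A"}
pos2 = {"r1": "T", "r2": "C"}               # r3 does not span position 2
print(group_reads_by_allele([pos1, pos2]))  # {'AT': ['r1'], 'GC': ['r2']}
```

Note "AT" and "GC" are the only alleles: "AC" never appears because each allele is read from a single spanning read, never stitched across reads.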
def allele_summary(self, locus, score=lambda x: x.num_reads()):
locus = to_locus(locus)<EOL>return [<EOL>(allele, score(x))<EOL>for (allele, x) in self.group_by_allele(locus).items()<EOL>]<EOL>
Convenience method to summarize the evidence for each of the alleles present at a locus. Applies a score function to the PileupCollection associated with each allele. See also `PileupCollection.group_by_allele`. Parameters ---------- locus : Locus The reference locus, encompassing 0 or more bases. score (optional) : PileupCollection -> object Function to apply to summarize the evidence for each allele. Default: count number of reads. Returns ---------- List of (allele, score) pairs.
f14242:c0:m9
def group_by_match(self, variant):
locus = to_locus(variant)<EOL>if len(variant.ref) != len(locus.positions):<EOL><INDENT>logging.warning(<EOL>"<STR_LIT>" %<EOL>(len(variant.ref), len(locus.positions), str(variant)))<EOL><DEDENT>alleles_dict = self.group_by_allele(locus)<EOL>single_base_loci = [<EOL>Locus.from_interbase_coordinates(locus.contig, position)<EOL>for position in locus.positions<EOL>]<EOL>empty_pileups = dict(<EOL>(locus, Pileup(locus=locus, elements=[]))<EOL>for locus in single_base_loci)<EOL>empty_collection = PileupCollection(pileups=empty_pileups, parent=self)<EOL>ref = {variant.ref: alleles_dict.pop(variant.ref, empty_collection)}<EOL>alt = {variant.alt: alleles_dict.pop(variant.alt, empty_collection)}<EOL>other = alleles_dict<EOL>return MatchingEvidence(ref, alt, other)<EOL>
Given a variant, split the PileupCollection based on whether it the data supports the reference allele, the alternate allele, or neither. Parameters ---------- variant : Variant The variant. Must have fields 'locus', 'ref', and 'alt'. Returns ---------- A MatchingEvidence named tuple with fields (ref, alt, other), each of which is a string -> PileupCollection dict mapping alleles to the PileupCollection of evidence supporting them.
f14242:c0:m10
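The ref/alt/other split is two `dict.pop` calls with a default: popping the ref and alt alleles out of the `group_by_allele` result (falling back to an empty collection when absent) leaves exactly the "third" alleles behind. The same mechanics with read counts standing in for PileupCollections (a hypothetical example):

```python
# allele -> read count at a locus; "G" is a third allele (neither ref nor alt).
alleles = {"A": 10, "T": 5, "G": 1}

ref = {"A": alleles.pop("A", 0)}   # evidence for the reference allele
alt = {"T": alleles.pop("T", 0)}   # evidence for the alternate allele
other = alleles                    # whatever remains after the two pops

print(ref, alt, other)  # {'A': 10} {'T': 5} {'G': 1}
```

The default argument to `pop` matters: a variant with no supporting reads still yields a ref/alt entry (here 0; in the real method, an empty PileupCollection) instead of a KeyError.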
def match_summary(self, variant, score=lambda x: x.num_reads()):
split = self.group_by_match(variant)<EOL>def name(allele_to_pileup_collection):<EOL><INDENT>return "<STR_LIT:U+002C>".join(allele_to_pileup_collection)<EOL><DEDENT>def aggregate_and_score(pileup_collections):<EOL><INDENT>merged = PileupCollection.merge(*pileup_collections)<EOL>return score(merged)<EOL><DEDENT>result = [<EOL>(name(split.ref), aggregate_and_score(split.ref.values())),<EOL>(name(split.alt), aggregate_and_score(split.alt.values())),<EOL>]<EOL>result.extend(<EOL>(allele, score(collection))<EOL>for (allele, collection) in split.other.items())<EOL>return result<EOL>
Convenience method to summarize the evidence for and against a variant using a user-specified score function. See also `PileupCollection.group_by_match`. Parameters ---------- variant : Variant The variant. Must have fields 'locus', 'ref', and 'alt'. score (optional) : PileupCollection -> object Function to apply to summarize the evidence for each allele. Default: count number of reads. Returns ---------- List of (allele, score) pairs. This list will always have at least two elements. The first pair in the list is the reference allele. The second pair is the alternate. The subsequent items give the "third" alleles (neither ref nor alt), if any.
f14242:c0:m11
def filter(self,<EOL>drop_duplicates=False,<EOL>drop_improper_mate_pairs=False,<EOL>min_mapping_quality=None,<EOL>min_base_quality=None,<EOL>filters=None):
if filters is None:<EOL><INDENT>filters = []<EOL><DEDENT>if drop_duplicates:<EOL><INDENT>filters.append(lambda e: not e.alignment.is_duplicate)<EOL><DEDENT>if drop_improper_mate_pairs:<EOL><INDENT>filters.append(lambda e: e.alignment.is_proper_pair)<EOL><DEDENT>if min_mapping_quality is not None:<EOL><INDENT>filters.append(<EOL>lambda e: e.alignment.mapping_quality >= min_mapping_quality)<EOL><DEDENT>if min_base_quality is not None:<EOL><INDENT>filters.append(<EOL>lambda e: e.min_base_quality >= min_base_quality)<EOL><DEDENT>pileups = OrderedDict(<EOL>(locus, pileup.filter(filters))<EOL>for (locus, pileup)<EOL>in self.pileups.items())<EOL>return PileupCollection(pileups=pileups, parent=self)<EOL>
Return a new PileupCollection that includes only pileup elements satisfying the specified criteria. Parameters ---------- drop_duplicates (optional, default False) : boolean Remove alignments with the is_duplicate flag. drop_improper_mate_pairs (optional, default False) : boolean Retain only alignments that have mapped mate pairs, where one alignment in the pair is on the forward strand and the other is on the reverse. min_mapping_quality (optional) : int If specified, retain only alignments with mapping quality >= the specified threshold. min_base_quality (optional) : int If specified, retain only pileup elements where the base quality for the bases aligning to the pileup's locus are >= the specified threshold. filters (optional) : list of PileupElement -> bool functions User-specified filter functions to apply. This will be called on each PileupElement, and should return True if the element should be retained. Returns ---------- A new PileupCollection that includes the subset of the pileup elements matching all the specified filters.
f14242:c0:m12
def merge(self, *others):
new_pileups = {}<EOL>for collection in (self,) + others:<EOL><INDENT>for (locus, pileup) in collection.pileups.items():<EOL><INDENT>if locus in new_pileups:<EOL><INDENT>new_pileups[locus].update(pileup)<EOL><DEDENT>else:<EOL><INDENT>new_pileups[locus] = Pileup(locus, pileup.elements)<EOL><DEDENT><DEDENT><DEDENT>return PileupCollection(new_pileups, parent=self)<EOL>
Return a new PileupCollection that is the union of self and the other specified collections.
f14242:c0:m13
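`merge` is a per-locus union over dicts of pileups: reuse-or-create each locus entry, then fold additional elements in without duplicating. A minimal sketch with dicts of element lists (hypothetical `merge_pileups` helper) in place of PileupCollection and Pileup objects:

```python
def merge_pileups(*collections):
    # Union dicts of locus -> list of elements, combining lists per
    # locus and skipping duplicates (Pileup.append semantics).
    merged = {}
    for collection in collections:
        for locus, elements in collection.items():
            merged.setdefault(locus, [])
            for e in elements:
                if e not in merged[locus]:
                    merged[locus].append(e)
    return merged


a = {"chr1:10": ["r1", "r2"]}
b = {"chr1:10": ["r2", "r3"], "chr1:11": ["r4"]}
print(merge_pileups(a, b))
# {'chr1:10': ['r1', 'r2', 'r3'], 'chr1:11': ['r4']}
```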
@staticmethod
def from_bam(pysam_samfile, loci, normalized_contig_names=True):
    """
    Create a PileupCollection for a set of loci from a BAM file.

    Parameters
    ----------
    pysam_samfile : `pysam.Samfile` instance, or filename string to a
        BAM file. The BAM file must be indexed.

    loci : list of Locus instances
        Loci to collect pileups for.

    normalized_contig_names : whether the contig names have been
        normalized (e.g. pyensembl removes the 'chr' prefix). Set to
        true to de-normalize the names when querying the BAM file.

    Returns
    ----------
    PileupCollection instance containing pileups for the specified loci.
    All alignments in the BAM file are included (e.g. duplicate reads,
    secondary alignments, etc.). See `PileupCollection.filter` if these
    need to be removed.
    """
    loci = [to_locus(obj) for obj in loci]
    close_on_completion = False
    if typechecks.is_string(pysam_samfile):
        pysam_samfile = Samfile(pysam_samfile)
        close_on_completion = True
    try:
        if normalized_contig_names:
            chromosome_name_map = {}
            for name in pysam_samfile.references:
                normalized = pyensembl.locus.normalize_chromosome(name)
                chromosome_name_map[normalized] = name
                chromosome_name_map[name] = name
        else:
            chromosome_name_map = None
        result = PileupCollection({})
        locus_iterator = itertools.chain.from_iterable(
            (Locus.from_interbase_coordinates(locus_interval.contig, pos)
             for pos
             in locus_interval.positions)
            for locus_interval in sorted(loci))
        for locus in locus_iterator:
            result.pileups[locus] = Pileup(locus, [])
            if normalized_contig_names:
                try:
                    chromosome = chromosome_name_map[locus.contig]
                except KeyError:
                    logging.warning("<STR_LIT>" % locus.contig)
                    continue
            else:
                chromosome = locus.contig
            columns = pysam_samfile.pileup(
                chromosome,
                locus.position,
                locus.position + 1,
                truncate=True,
                stepper="<STR_LIT>")
            try:
                column = next(columns)
            except StopIteration:
                # No reads align to this locus.
                continue
            pileups = column.pileups
            assert list(columns) == []  # only one column with truncate=True
            for pileup_read in pileups:
                if not pileup_read.is_refskip:
                    element = PileupElement.from_pysam_alignment(
                        locus, pileup_read)
                    result.pileups[locus].append(element)
        return result
    finally:
        if close_on_completion:
            pysam_samfile.close()
# f14242:c0:m14
def load_from_args_as_dataframe(args):
if not args.variants and not args.single_variant:<EOL><INDENT>return None<EOL><DEDENT>if args.variant_source_name:<EOL><INDENT>variant_source_names = util.expand(<EOL>args.variant_source_name,<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>len(args.variants))<EOL><DEDENT>else:<EOL><INDENT>variant_source_names = util.drop_prefix(args.variants)<EOL><DEDENT>variant_to_sources = collections.defaultdict(list)<EOL>dfs = []<EOL>for i in range(len(args.variants)):<EOL><INDENT>name = variant_source_names[i]<EOL>prefix = (<EOL>'<STR_LIT>' if len(args.variants) == <NUM_LIT:1> else "<STR_LIT>" % name)<EOL>df = load_as_dataframe(<EOL>args.variants[i],<EOL>name=name,<EOL>genome=args.genome,<EOL>max_variants=args.max_variants_per_source,<EOL>only_passing=not args.include_failing_variants,<EOL>metadata_column_prefix=prefix)<EOL>if df.shape[<NUM_LIT:0>] == <NUM_LIT:0>:<EOL><INDENT>logging.warn("<STR_LIT>" % args.variants[i])<EOL><DEDENT>else:<EOL><INDENT>for variant in df.variant:<EOL><INDENT>variant_to_sources[variant].append(name)<EOL><DEDENT>dfs.append(df)<EOL><DEDENT><DEDENT>if args.single_variant:<EOL><INDENT>variants = []<EOL>extra_args = {}<EOL>if args.genome:<EOL><INDENT>extra_args = {<EOL>'<STR_LIT>': varcode.reference.infer_genome(args.genome)<EOL>}<EOL><DEDENT>for (locus_str, ref, alt) in args.single_variant:<EOL><INDENT>locus = Locus.parse(locus_str)<EOL>variant = varcode.Variant(<EOL>locus.contig,<EOL>locus.inclusive_start,<EOL>ref,<EOL>alt,<EOL>**extra_args)<EOL>variants.append(variant)<EOL>variant_to_sources[variant].append("<STR_LIT>")<EOL><DEDENT>dfs.append(variants_to_dataframe(variants))<EOL><DEDENT>df = dfs.pop(<NUM_LIT:0>)<EOL>for other_df in dfs:<EOL><INDENT>df = pandas.merge(<EOL>df,<EOL>other_df,<EOL>how='<STR_LIT>',<EOL>on=["<STR_LIT>"] + STANDARD_DATAFRAME_COLUMNS)<EOL><DEDENT>genomes = df["<STR_LIT>"].unique()<EOL>if len(genomes) > <NUM_LIT:1>:<EOL><INDENT>raise ValueError(<EOL>"<STR_LIT>"<EOL>"<STR_LIT>" % 
("<STR_LIT:U+002CU+0020>".join(genomes)))<EOL><DEDENT>df["<STR_LIT>"] = ["<STR_LIT:U+0020>".join(variant_to_sources[v]) for v in df.variant]<EOL>if args.ref:<EOL><INDENT>df = df.ix[df.ref.isin(args.ref)]<EOL><DEDENT>if args.alt:<EOL><INDENT>df = df.ix[df.alt.isin(args.alt)]<EOL><DEDENT>loci = loci_util.load_from_args(<EOL>util.remove_prefix_from_parsed_args(args, "<STR_LIT>"))<EOL>if loci is not None:<EOL><INDENT>df = df.ix[[<EOL>loci.intersects(pileup_collection.to_locus(v))<EOL>for v in df.variant<EOL>]]<EOL><DEDENT>return df<EOL>
Given parsed variant-loading arguments, return a pandas DataFrame. If no variant loading arguments are specified, return None.
f14243:m1
def load_from_args(args):
if not args.locus:<EOL><INDENT>return None<EOL><DEDENT>loci_iterator = (Locus.parse(locus) for locus in args.locus)<EOL>if args.neighbor_offsets:<EOL><INDENT>loci_iterator = expand_with_neighbors(<EOL>loci_iterator, args.neighbor_offsets)<EOL><DEDENT>return Loci(loci_iterator)<EOL>
Return a Loci object giving the loci specified on the command line. If no loci-related arguments are specified, return None. This makes it possible to distinguish an empty set of loci, for example due to filters removing all loci, from the case where the user didn't specify any arguments.
f14244:m1
def __init__(self, hla=None, hla_dataframe=None, donor_to_hla=None):
if bool(hla) + (hla_dataframe is not None) + bool(donor_to_hla) != <NUM_LIT:1>:<EOL><INDENT>raise TypeError(<EOL>"<STR_LIT>")<EOL><DEDENT>self.hla = (<EOL>self.string_to_hla_alleles(hla) if typechecks.is_string(hla)<EOL>else hla)<EOL>self.donor_to_hla = donor_to_hla<EOL>if hla_dataframe is not None:<EOL><INDENT>self.donor_to_hla = {}<EOL>for (i, row) in hla_dataframe.iterrows():<EOL><INDENT>if row.donor in self.donor_to_hla:<EOL><INDENT>raise ValueError("<STR_LIT>" % row.donor)<EOL><DEDENT>if pandas.isnull(row.hla):<EOL><INDENT>self.donor_to_hla[row.donor] = None<EOL><DEDENT>else:<EOL><INDENT>self.donor_to_hla[row.donor] = self.string_to_hla_alleles(<EOL>row.hla)<EOL><DEDENT><DEDENT><DEDENT>assert self.hla is not None or self.donor_to_hla is not None<EOL>
Specify exactly one of hla, hla_dataframe, or donor_to_hla. Parameters ----------- hla : list of string HLA alleles to use for all donors hla_dataframe : pandas.DataFrame with columns 'donor' and 'hla' DataFrame giving HLA alleles for each donor. The 'hla' column should be a space separated list of alleles for that donor. donor_to_hla : dict of string -> string list Map from donor to HLA alleles for that donor.
f14245:c4:m3
def column_name(self, source, allele_group):
return self.column_format.format(<EOL>source=source,<EOL>allele_group=allele_group)<EOL>
Parameters ---------- source : string name of the ReadSource allele_group : string one of: num_ref, num_alt, total_depth Returns ---------- column name : string
f14245:c5:m5
def allele_support_df(loci, sources):
return pandas.DataFrame(<EOL>allele_support_rows(loci, sources),<EOL>columns=EXPECTED_COLUMNS)<EOL>
Returns a DataFrame of allele counts for all given loci in the read sources
f14246:m0
def variant_support(variants, allele_support_df, ignore_missing=False):
missing = [<EOL>c for c in EXPECTED_COLUMNS if c not in allele_support_df.columns<EOL>]<EOL>if missing:<EOL><INDENT>raise ValueError("<STR_LIT>" % "<STR_LIT:U+0020>".join(missing))<EOL><DEDENT>allele_support_df[["<STR_LIT>", "<STR_LIT>"]] = (<EOL>allele_support_df[["<STR_LIT>", "<STR_LIT>"]].astype(int))<EOL>sources = sorted(allele_support_df["<STR_LIT:source>"].unique())<EOL>allele_support_dict = collections.defaultdict(dict)<EOL>for (i, row) in allele_support_df.iterrows():<EOL><INDENT>key = (<EOL>row['<STR_LIT:source>'],<EOL>row.contig,<EOL>row.interbase_start,<EOL>row.interbase_end)<EOL>allele_support_dict[key][row.allele] = row["<STR_LIT:count>"]<EOL><DEDENT>allele_support_dict = dict(allele_support_dict)<EOL>dataframe_dicts = collections.defaultdict(<EOL>lambda: collections.defaultdict(list))<EOL>for variant in variants:<EOL><INDENT>for source in sources:<EOL><INDENT>key = (source, variant.contig, variant.start - <NUM_LIT:1>, variant.end)<EOL>try:<EOL><INDENT>alleles = allele_support_dict[key]<EOL><DEDENT>except KeyError:<EOL><INDENT>message = (<EOL>"<STR_LIT>" % (<EOL>source, str(variant)))<EOL>if ignore_missing:<EOL><INDENT>logging.warning(message)<EOL>alleles = {}<EOL><DEDENT>else:<EOL><INDENT>raise ValueError(message)<EOL><DEDENT><DEDENT>alt = alleles.get(variant.alt, <NUM_LIT:0>)<EOL>ref = alleles.get(variant.ref, <NUM_LIT:0>)<EOL>total = sum(alleles.values())<EOL>other = total - alt - ref<EOL>dataframe_dicts["<STR_LIT>"][source].append(alt)<EOL>dataframe_dicts["<STR_LIT>"][source].append(ref)<EOL>dataframe_dicts["<STR_LIT>"][source].append(other)<EOL>dataframe_dicts["<STR_LIT>"][source].append(total)<EOL>dataframe_dicts["<STR_LIT>"][source].append(<EOL>float(alt) / max(<NUM_LIT:1>, total))<EOL>dataframe_dicts["<STR_LIT>"][source].append(<EOL>float(alt + other) / max(<NUM_LIT:1>, total))<EOL><DEDENT><DEDENT>dataframes = dict(<EOL>(label, pandas.DataFrame(value, index=variants))<EOL>for (label, value) in dataframe_dicts.items())<EOL>return 
pandas.Panel(dataframes)<EOL>
Collect the read evidence support for the given variants. Parameters ---------- variants : iterable of varcode.Variant allele_support_df : dataframe Allele support dataframe, as output by the varlens-allele-support tool. It should have columns: source, contig, interbase_start, interbase_end, allele. The remaining columns are interpreted as read counts of various subsets of reads (e.g. all reads, non-duplicate reads, etc.) ignore_missing : boolean If True, then varaints with no allele counts will be interpreted as having 0 depth. If False, then an exception will be raised if any variants have no allele counts. Returns ---------- A pandas.Panel4D frame with these axes: labels (axis=0) : the type of read being counted, i.e. the read count fields in allele_support_df. items (axis=1) : the type of measurement (num_alt, num_ref, num_other, total_depth, alt_fraction, any_alt_fraction) major axis (axis=2) : the variants minor axis (axis=3) : the sources
f14246:m2
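The per-variant arithmetic in the inner loop reduces an allele-count dict to the six measurements; the `max(1, total)` guard avoids division by zero at loci with no coverage. That arithmetic in isolation (hypothetical counts):

```python
alleles = {"A": 12, "T": 7, "AT": 2}   # allele -> read count at a locus
ref, alt = "A", "T"

num_alt = alleles.get(alt, 0)          # reads supporting the alternate
num_ref = alleles.get(ref, 0)          # reads supporting the reference
total = sum(alleles.values())          # total depth across all alleles
other = total - num_alt - num_ref      # reads supporting "third" alleles

alt_fraction = num_alt / max(1, total)            # 0 when depth is 0
any_alt_fraction = (num_alt + other) / max(1, total)

print(num_alt, num_ref, other, round(alt_fraction, 3))  # 7 12 2 0.333
```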
def __init__(self, name=None, version="<STR_LIT>", **kwargs):
if not name:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>self.proxy = kwargs.get('<STR_LIT>', None)<EOL>self.max_pages = kwargs.get('<STR_LIT>', <NUM_LIT:5>)<EOL>self.page_size = kwargs.get('<STR_LIT>', <NUM_LIT:100>)<EOL>self.key = kwargs.get('<STR_LIT:key>', None)<EOL>self.access_token = kwargs.get('<STR_LIT>', None)<EOL>self._endpoint = None<EOL>self._api_key = None<EOL>self._name = None<EOL>self._version = version<EOL>self._previous_call = None<EOL>self._base_url = '<STR_LIT>'.format(version)<EOL>sites = self.fetch('<STR_LIT>', filter='<STR_LIT>', pagesize=<NUM_LIT:1000>)<EOL>for s in sites['<STR_LIT>']:<EOL><INDENT>if name == s['<STR_LIT>']:<EOL><INDENT>self._name = s['<STR_LIT:name>']<EOL>self._api_key = s['<STR_LIT>']<EOL>break<EOL><DEDENT><DEDENT>if not self._name:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>
The object used to interact with the Stack Exchange API :param name: (string) **(Required)** A valid ``api_site_parameter`` (available from http://api.stackexchange.com/docs/sites) which will be used to connect to a particular site on the Stack Exchange Network. :param version: (float) **(Required)** The version of the API you are connecting to. The default of ``2.2`` is the current version :param proxy: (dict) (optional) A dictionary of http and https proxy locations Example: .. code-block:: python {'http': 'http://example.com', 'https': 'https://example.com'} By default, this is ``None``. :param max_pages: (int) (optional) The maximum number of pages to retrieve (Default: ``100``) :param page_size: (int) (optional) The number of elements per page. The API limits this to a maximum of 100 items on all end points except ``site`` :param key: (string) (optional) An API key :param access_token: (string) (optional) An access token associated with an application and a user, to grant more permissions (such as write access)
f14250:c1:m0
def fetch(self, endpoint=None, page=<NUM_LIT:1>, key=None, filter='<STR_LIT:default>', **kwargs):
if not endpoint:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>self._endpoint = endpoint<EOL>params = {<EOL>"<STR_LIT>": self.page_size,<EOL>"<STR_LIT>": page,<EOL>"<STR_LIT>": filter<EOL>}<EOL>if self.key:<EOL><INDENT>params['<STR_LIT:key>'] = self.key<EOL><DEDENT>if self.access_token:<EOL><INDENT>params['<STR_LIT>'] = self.access_token<EOL><DEDENT>for k, value in list(kwargs.items()):<EOL><INDENT>if "<STR_LIT:{>" + k + "<STR_LIT:}>" in endpoint:<EOL><INDENT>endpoint = endpoint.replace("<STR_LIT:{>"+k+"<STR_LIT:}>", '<STR_LIT:;>'.join(requests.compat.quote_plus(str(x)) for x in value))<EOL>kwargs.pop(k, None)<EOL><DEDENT><DEDENT>date_time_keys = ['<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>']<EOL>for k in date_time_keys:<EOL><INDENT>if k in kwargs:<EOL><INDENT>if isinstance(kwargs[k], datetime.datetime):<EOL><INDENT>kwargs[k] = int(calendar.timegm(kwargs[k].utctimetuple()))<EOL><DEDENT><DEDENT><DEDENT>if '<STR_LIT>' in kwargs:<EOL><INDENT>ids = '<STR_LIT:;>'.join(str(x) for x in kwargs['<STR_LIT>'])<EOL>kwargs.pop('<STR_LIT>', None)<EOL>endpoint += "<STR_LIT>".format(ids)<EOL><DEDENT>params.update(kwargs)<EOL>if self._api_key:<EOL><INDENT>params['<STR_LIT>'] = self._api_key<EOL><DEDENT>data = []<EOL>run_cnt = <NUM_LIT:1><EOL>backoff = <NUM_LIT:0><EOL>total = <NUM_LIT:0><EOL>while run_cnt <= self.max_pages:<EOL><INDENT>run_cnt += <NUM_LIT:1><EOL>base_url = "<STR_LIT>".format(self._base_url, endpoint)<EOL>try:<EOL><INDENT>response = requests.get(base_url, params=params, proxies=self.proxy)<EOL><DEDENT>except requests.exceptions.ConnectionError as e:<EOL><INDENT>raise StackAPIError(self._previous_call, str(e), str(e), str(e))<EOL><DEDENT>self._previous_call = response.url<EOL>try:<EOL><INDENT>response.encoding = '<STR_LIT>'<EOL>response = response.json()<EOL><DEDENT>except ValueError as e:<EOL><INDENT>raise StackAPIError(self._previous_call, str(e), str(e), str(e))<EOL><DEDENT>try:<EOL><INDENT>error = response["<STR_LIT>"]<EOL>code = response["<STR_LIT>"]<EOL>message = response["<STR_LIT>"]<EOL>raise StackAPIError(self._previous_call, error, code, message)<EOL><DEDENT>except KeyError:<EOL><INDENT>pass <EOL><DEDENT>if key:<EOL><INDENT>data.append(response[key])<EOL><DEDENT>else:<EOL><INDENT>data.append(response)<EOL><DEDENT>if len(data) < <NUM_LIT:1>:<EOL><INDENT>break<EOL><DEDENT>backoff = <NUM_LIT:0><EOL>total = <NUM_LIT:0><EOL>page = <NUM_LIT:1><EOL>if '<STR_LIT>' in response:<EOL><INDENT>backoff = int(response['<STR_LIT>'])<EOL>sleep(backoff+<NUM_LIT:1>) <EOL><DEDENT>if '<STR_LIT>' in response:<EOL><INDENT>total = response['<STR_LIT>']<EOL><DEDENT>if '<STR_LIT>' in response and response['<STR_LIT>'] and run_cnt <= self.max_pages:<EOL><INDENT>params["<STR_LIT>"] += <NUM_LIT:1><EOL><DEDENT>else:<EOL><INDENT>break<EOL><DEDENT><DEDENT>r = []<EOL>for d in data:<EOL><INDENT>if '<STR_LIT>' in d:<EOL><INDENT>r.extend(d['<STR_LIT>'])<EOL><DEDENT><DEDENT>result = {'<STR_LIT>': backoff,<EOL>'<STR_LIT>': False if '<STR_LIT>' not in data[-<NUM_LIT:1>] else data[-<NUM_LIT:1>]['<STR_LIT>'],<EOL>'<STR_LIT>': params['<STR_LIT>'],<EOL>'<STR_LIT>': -<NUM_LIT:1> if '<STR_LIT>' not in data[-<NUM_LIT:1>] else data[-<NUM_LIT:1>]['<STR_LIT>'],<EOL>'<STR_LIT>': -<NUM_LIT:1> if '<STR_LIT>' not in data[-<NUM_LIT:1>] else data[-<NUM_LIT:1>]['<STR_LIT>'],<EOL>'<STR_LIT>': total,<EOL>'<STR_LIT>': list(chain(r))}<EOL>return result<EOL>
Returns the results of an API call. This is the main workhorse of the class. It builds the API query string and sends the request to Stack Exchange. If there are multiple pages of results, and we've configured `max_pages` to be greater than 1, it will automatically paginate through the results and return a single object. Returned data will appear in the `items` key of the resulting dictionary. :param endpoint: (string) The API end point being called. Available endpoints are listed on the official API documentation: http://api.stackexchange.com/docs This can be as simple as ``fetch('answers')``, to call the answers end point. If calling an end point that takes additional parameters, such as ``id``s, pass the ids as a list to the `ids` key: .. code-block:: python fetch('answers/{}', ids=[1,2,3]) This will attempt to retrieve the answers for the three listed ids. If no end point is passed, a ``ValueError`` will be raised. :param page: (int) The page in the results to start at. By default, it will start on the first page and automatically paginate until the result set reaches ``max_pages``. :param key: (string) The site you are issuing queries to. :param filter: (string) The filter to utilize when calling an endpoint. Different filters will return different keys. The default is ``default`` and this will still vary depending on what the API returns as default for a particular endpoint :param kwargs: Parameters accepted by individual endpoints. These parameters **must** be named the same as described in the endpoint documentation :rtype: (dictionary) A dictionary containing wrapper data regarding the API call and the results of the call in the `items` key. If multiple pages were received, all of the results will appear in the ``items`` tag.
f14250:c1:m2
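The parameter preparation the docstring above describes (datetime values converted to Unix timestamps, vectorised ids joined with semicolons) can be sketched as two standalone helpers; the function names here are illustrative, not part of StackAPI:

```python
import calendar
import datetime


def to_epoch_seconds(value):
    # The API expects dates as integer Unix timestamps (UTC)
    if isinstance(value, datetime.datetime):
        return int(calendar.timegm(value.utctimetuple()))
    return value


def join_ids(ids):
    # Vectorised endpoints such as answers/{ids} take ';'-separated ids
    return ';'.join(str(x) for x in ids)
```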
def send_data(self, endpoint=None, page=<NUM_LIT:1>, key=None, filter='<STR_LIT:default>', **kwargs):
if not endpoint:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>self._endpoint = endpoint<EOL>params = {<EOL>"<STR_LIT>": self.page_size,<EOL>"<STR_LIT>": page,<EOL>"<STR_LIT>": filter<EOL>}<EOL>if self.key:<EOL><INDENT>params['<STR_LIT:key>'] = self.key<EOL><DEDENT>if self.access_token:<EOL><INDENT>params['<STR_LIT>'] = self.access_token<EOL><DEDENT>if '<STR_LIT>' in kwargs:<EOL><INDENT>ids = '<STR_LIT:;>'.join(str(x) for x in kwargs['<STR_LIT>'])<EOL>kwargs.pop('<STR_LIT>', None)<EOL><DEDENT>else:<EOL><INDENT>ids = None<EOL><DEDENT>params.update(kwargs)<EOL>if self._api_key:<EOL><INDENT>params['<STR_LIT>'] = self._api_key<EOL><DEDENT>data = []<EOL>base_url = "<STR_LIT>".format(self._base_url, endpoint)<EOL>response = requests.post(base_url, data=params, proxies=self.proxy)<EOL>self._previous_call = response.url<EOL>response = response.json()<EOL>try:<EOL><INDENT>error = response["<STR_LIT>"]<EOL>code = response["<STR_LIT>"]<EOL>message = response["<STR_LIT>"]<EOL>raise StackAPIError(self._previous_call, error, code, message)<EOL><DEDENT>except KeyError:<EOL><INDENT>pass <EOL><DEDENT>data.append(response)<EOL>r = []<EOL>for d in data:<EOL><INDENT>r.extend(d['<STR_LIT>'])<EOL><DEDENT>result = {'<STR_LIT>': data[-<NUM_LIT:1>]['<STR_LIT>'],<EOL>'<STR_LIT>': params['<STR_LIT>'],<EOL>'<STR_LIT>': data[-<NUM_LIT:1>]['<STR_LIT>'],<EOL>'<STR_LIT>': data[-<NUM_LIT:1>]['<STR_LIT>'],<EOL>'<STR_LIT>': list(chain(r))}<EOL>return result<EOL>
Sends data to the API. This call is similar to ``fetch``, but **sends** data to the API instead of retrieving it. Returned data will appear in the ``items`` key of the resulting dictionary. Sending data **requires** that the ``access_token`` is set. This is enforced on the API side, not within this library. :param endpoint: (string) The API end point being called. Available endpoints are listed on the official API documentation: http://api.stackexchange.com/docs This can be as simple as ``fetch('answers')``, to call the answers end point. If calling an end point that takes additional parameters, such as ``id``s, pass the ids as a list to the `ids` key: .. code-block:: python fetch('answers/{}', ids=[1,2,3]) This will attempt to retrieve the answers for the three listed ids. If no end point is passed, a ``ValueError`` will be raised. :param page: (int) The page in the results to start at. By default, it will start on the first page and automatically paginate until the result set reaches ``max_pages``. :param key: (string) The site you are issuing queries to. :param filter: (string) The filter to utilize when calling an endpoint. Different filters will return different keys. The default is ``default`` and this will still vary depending on what the API returns as default for a particular endpoint :param kwargs: Parameters accepted by individual endpoints. These parameters **must** be named the same as described in the endpoint documentation :rtype: (dictionary) A dictionary containing wrapper data regarding the API call and the results of the call in the `items` key. If multiple pages were received, all of the results will appear in the ``items`` tag.
f14250:c1:m3
@task<EOL>def release(part='<STR_LIT>'):
<EOL>bumpver = subprocess.check_output(<EOL>['<STR_LIT>', part, '<STR_LIT>', '<STR_LIT>'],<EOL>stderr=subprocess.STDOUT)<EOL>m = re.search(r'<STR_LIT>', bumpver.decode('<STR_LIT:utf-8>'))<EOL>version = m.groups(<NUM_LIT:0>)[<NUM_LIT:0>]<EOL>bv_args = ['<STR_LIT>', part]<EOL>bv_args += ['<STR_LIT>', version]<EOL>subprocess.check_output(bv_args)<EOL>
Automated software release workflow * (Configurably) bumps the version number * Tags the release You can run it like:: $ fab release which, by default, will create a 'patch' release (0.0.1 => 0.0.2). You can also specify a patch level (patch, minor, major) to change to:: $ fab release:part=major which will create a 'major' release (0.0.2 => 1.0.0).
f14252:m0
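The version-extraction step in the release task above can be sketched as a small parser over bumpversion's verbose dry-run output; the `new_version=` line format is an assumption about that output:

```python
import re


def extract_new_version(bumpversion_output):
    # bumpversion --verbose --dry-run prints the computed version on a
    # 'new_version=...' line (assumed format)
    m = re.search(r"new_version=([\d.]+)", bumpversion_output)
    if m is None:
        raise ValueError("could not find new_version in bumpversion output")
    return m.group(1)
```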
def construct_mail(recipients=None, context=None, template_base='<STR_LIT>', subject=None, message=None, site=None,<EOL>subject_templates=None, body_templates=None, html_templates=None, from_email=None, language=None,<EOL>**kwargs):
language = language or translation.get_language()<EOL>with force_language(language):<EOL><INDENT>recipients = recipients or []<EOL>if isinstance(recipients, basestring):<EOL><INDENT>recipients = [recipients]<EOL><DEDENT>from_email = from_email or settings.DEFAULT_FROM_EMAIL<EOL>subject_templates = subject_templates or get_template_names(language, template_base, '<STR_LIT>', '<STR_LIT>')<EOL>body_templates = body_templates or get_template_names(language, template_base, '<STR_LIT:body>', '<STR_LIT>')<EOL>html_templates = html_templates or get_template_names(language, template_base, '<STR_LIT:body>', '<STR_LIT:html>')<EOL>if not context:<EOL><INDENT>context = {}<EOL><DEDENT>site = site or Site.objects.get_current()<EOL>context['<STR_LIT>'] = site<EOL>context['<STR_LIT>'] = site.name<EOL>protocol = '<STR_LIT:http>' <EOL>base_url = "<STR_LIT>" % (protocol, site.domain)<EOL>if message:<EOL><INDENT>context['<STR_LIT:message>'] = message<EOL><DEDENT>subject = subject or render_to_string(subject_templates, context)<EOL>subject = subject.replace('<STR_LIT:\n>', '<STR_LIT>').replace('<STR_LIT:\r>', '<STR_LIT>').strip()<EOL>context['<STR_LIT>'] = subject<EOL>try:<EOL><INDENT>html = render_to_string(html_templates, context)<EOL><DEDENT>except TemplateDoesNotExist:<EOL><INDENT>html = '<STR_LIT>'<EOL><DEDENT>else:<EOL><INDENT>html = premailer.transform(html, base_url=base_url)<EOL><DEDENT>try:<EOL><INDENT>body = render_to_string(body_templates, context)<EOL><DEDENT>except TemplateDoesNotExist:<EOL><INDENT>body = '<STR_LIT>'<EOL><DEDENT>mail = EmailMultiAlternatives(subject, body, from_email, recipients, **kwargs)<EOL>if not (body or html):<EOL><INDENT>render_to_string([html_templates], context)<EOL>render_to_string([body_templates], context)<EOL><DEDENT>if html:<EOL><INDENT>mail.attach_alternative(html, '<STR_LIT>')<EOL><DEDENT><DEDENT>return mail<EOL>
usage: construct_mail(['my@email.com'], {'my_obj': obj}, template_base='myapp/emails/my_obj_notification').send() :param recipients: recipient or list of recipients :param context: context for template rendering :param template_base: the base template. '.subject.txt', '.body.txt' and '.body.html' will be added :param subject: optional subject instead of rendering it through a template :param message: optional message (will be inserted into the base email template) :param site: the site this is on. uses current site by default :param subject_templates: override the subject template :param body_templates: override the body template :param html_templates: override the html body template :param from_email: defaults to settings.DEFAULT_FROM_EMAIL :param language: the language that should be active for this email. defaults to currently active lang :param kwargs: kwargs to pass into the Email class :return:
f14257:m0
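The subject sanitisation performed above (newlines stripped so the rendered template collapses to a single header line) can be sketched as:

```python
def clean_subject(subject):
    # Email subject headers must be a single line: drop newlines and
    # carriage returns left over from template rendering, then trim
    return subject.replace('\n', '').replace('\r', '').strip()
```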
def combine_metadata(metadata_collection: Iterable[IrodsMetadata]) -> IrodsMetadata:
combined = IrodsMetadata()<EOL>for metadata in metadata_collection:<EOL><INDENT>for key, values in metadata.items():<EOL><INDENT>for value in values:<EOL><INDENT>combined.add(key, value)<EOL><DEDENT><DEDENT><DEDENT>return combined<EOL>
Combines n metadata objects into a single metadata object. Key values are merged, duplicate values are removed. :param metadata_collection: the collection of metadata to combine :return: the combined metadata
f14261:m6
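Since `IrodsMetadata` maps each key to a set of values, the merge above can be sketched with plain dicts of sets standing in for the `IrodsMetadata` type:

```python
def combine_metadata(metadata_collection):
    # Merge every key's value set; duplicates collapse automatically
    # because the values live in sets
    combined = {}
    for metadata in metadata_collection:
        for key, values in metadata.items():
            combined.setdefault(key, set()).update(values)
    return combined
```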
def get_all_descendants(self) -> Sequence[EntityNode]:
descendants = []<EOL>for child in self.children:<EOL><INDENT>descendants.append(child)<EOL>if isinstance(child, CollectionNode):<EOL><INDENT>descendants.extend(child.get_all_descendants())<EOL><DEDENT><DEDENT>return descendants<EOL>
Gets all descendants of the collection node.
f14261:c1:m1
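The depth-first traversal above can be sketched with a minimal node type; unlike the original, this simplified version lets every node carry children rather than recursing only into collection nodes:

```python
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def get_all_descendants(self):
        # Collect each child, then recurse into its subtree (depth-first)
        descendants = []
        for child in self.children:
            descendants.append(child)
            descendants.extend(child.get_all_descendants())
        return descendants
```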
@abstractmethod<EOL><INDENT>def create_mapper(self) -> _BatonIrodsEntityMapper:<DEDENT>
Creates a mapper to test with. :return: the created mapper
f14263:c0:m2
@abstractmethod<EOL><INDENT>def create_irods_entity(self, name: str, metadata: IrodsMetadata=IrodsMetadata()) -> IrodsEntity:<DEDENT>
Creates an iRODS entity to test with. :param name: the name of the entity to create :param metadata: the metadata to give to the entity :return: the created entity
f14263:c0:m3
def create_data_object_with_baton_json_representation() -> Tuple[DataObject, Dict]:
global _data_object, _data_object_as_json<EOL>if _data_object is None:<EOL><INDENT>test_with_baton = TestWithBaton()<EOL>test_with_baton.setup()<EOL>metadata = IrodsMetadata({"<STR_LIT>": {"<STR_LIT>", "<STR_LIT>"}, "<STR_LIT>": {"<STR_LIT>"}})<EOL>_data_object = create_data_object(test_with_baton, "<STR_LIT>", metadata)<EOL>baton_query = {<EOL>BATON_COLLECTION_PROPERTY: _data_object.get_collection_path(),<EOL>BATON_DATA_OBJECT_PROPERTY: _data_object.get_name()<EOL>}<EOL>baton_runner = BatonRunner(test_with_baton.baton_location, test_with_baton.irods_server.users[<NUM_LIT:0>].zone)<EOL>_data_object_as_json = baton_runner.run_baton_query(<EOL>BatonBinary.BATON_LIST, ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>"], input_data=baton_query)[<NUM_LIT:0>]<EOL><DEDENT>return deepcopy(_data_object), deepcopy(_data_object_as_json)<EOL>
Creates a data object and returns it along with the JSON representation of it given by baton. Uses baton to get the JSON representation on the first use: the JSON is retrieved from a cache in subsequent uses. :return: a tuple where the first element is the created data object and the second is its JSON representation according to baton
f14266:m0
def create_collection_with_baton_json_representation() -> Tuple[Collection, Dict]:
global _collection, _collection_as_json<EOL>if _collection is None:<EOL><INDENT>test_with_baton = TestWithBaton()<EOL>test_with_baton.setup()<EOL>metadata = IrodsMetadata({"<STR_LIT>": {"<STR_LIT>", "<STR_LIT>"}, "<STR_LIT>": {"<STR_LIT>"}})<EOL>_collection = create_collection(test_with_baton, "<STR_LIT>", metadata)<EOL>baton_query = {<EOL>BATON_COLLECTION_PROPERTY: _collection.path<EOL>}<EOL>baton_runner = BatonRunner(test_with_baton.baton_location, test_with_baton.irods_server.users[<NUM_LIT:0>].zone)<EOL>_collection_as_json = baton_runner.run_baton_query(<EOL>BatonBinary.BATON_LIST, ["<STR_LIT>", "<STR_LIT>"], input_data=baton_query)[<NUM_LIT:0>]<EOL><DEDENT>return deepcopy(_collection), deepcopy(_collection_as_json)<EOL>
Creates a collection and returns it along with the JSON representation of it given by baton. Uses baton to get the JSON representation on the first use: the JSON is retrieved from a cache in subsequent uses. :return: a tuple where the first element is the created collection and the second is its JSON representation according to baton
f14266:m1
def create_specific_query_with_baton_json_representation() -> Tuple[SpecificQuery, Dict]:
global _specific_query, _specific_query_as_json<EOL>if _specific_query is None:<EOL><INDENT>test_with_baton = TestWithBaton()<EOL>test_with_baton.setup()<EOL>baton_runner = BatonRunner(test_with_baton.baton_location, test_with_baton.irods_server.users[<NUM_LIT:0>].zone)<EOL>baton_query = baton_runner.run_baton_query(BatonBinary.BATON, ["<STR_LIT>", IRODS_SPECIFIC_QUERY_FIND_QUERY_BY_ALIAS,<EOL>"<STR_LIT>", IRODS_SPECIFIC_QUERY_FIND_QUERY_BY_ALIAS])<EOL>_specific_query_as_json = baton_runner.run_baton_query(<EOL>BatonBinary.BATON_SPECIFIC_QUERY, input_data=baton_query)[<NUM_LIT:0>]<EOL>_specific_query = SpecificQuery(<EOL>IRODS_SPECIFIC_QUERY_FIND_QUERY_BY_ALIAS, _specific_query_as_json[<NUM_LIT:0>][BATON_SPECIFIC_QUERY_SQL_PROPERTY])<EOL><DEDENT>return deepcopy(_specific_query), deepcopy(_specific_query_as_json)<EOL>
Creates a specific query and returns it along with the JSON representation of it given by baton. Uses baton to get the JSON representation on the first use: the JSON is retrieved from a cache in subsequent uses. :return: a tuple where the first element is the created specific query and the second is its JSON representation according to baton
f14266:m2
@staticmethod<EOL><INDENT>def _parse_iquest_ls(iquest_ls_response: str) -> Sequence[SpecificQuery]:<DEDENT>
iquest_ls_response_lines = iquest_ls_response.split('<STR_LIT:\n>')<EOL>assert (len(iquest_ls_response_lines) + <NUM_LIT:1>) % <NUM_LIT:3> == <NUM_LIT:0><EOL>specific_queries = []<EOL>for i in range(int((len(iquest_ls_response_lines) + <NUM_LIT:1>) / <NUM_LIT:3>)):<EOL><INDENT>i3 = int(<NUM_LIT:3> * i)<EOL>alias = iquest_ls_response_lines[i3]<EOL>sql = iquest_ls_response_lines[i3 + <NUM_LIT:1>]<EOL>specific_queries.append(SpecificQuery(alias, sql))<EOL><DEDENT>return specific_queries<EOL>
Gets the installed specific queries by parsing the output returned by "iquest --sql ls". :param iquest_ls_response: the response returned by the iquest command :return: the specific queries installed on the iRODS server
f14267:c1:m2
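The three-line grouping above can be sketched as follows; the alias/SQL/blank-separator layout, with no trailing separator after the final group (hence the `+ 1`), is an assumption about `iquest --sql ls` output:

```python
def parse_iquest_ls(response):
    # Each installed query occupies three lines: alias, SQL, blank
    # separator; the final group has no trailing separator, hence the +1
    lines = response.split('\n')
    assert (len(lines) + 1) % 3 == 0
    queries = []
    for i in range((len(lines) + 1) // 3):
        queries.append((lines[3 * i], lines[3 * i + 1]))
    return queries
```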
@abstractmethod<EOL><INDENT>def create_mapper(self) -> _BatonIrodsMetadataMapper:<DEDENT>
Creates a mapper to test with. :return: the created mapper
f14269:c0:m0
@abstractmethod<EOL><INDENT>def create_irods_entity(self, name: str, metadata: IrodsMetadata=IrodsMetadata()) -> IrodsEntity:<DEDENT>
Creates an iRODS entity to test with. :param name: the name of the entity to create :param metadata: the metadata to give to the entity :return: the created entity
f14269:c0:m1
@property<EOL><INDENT>def entity(self) -> IrodsEntity:<DEDENT>
if self._entity is None:<EOL><INDENT>self._entity = self.create_irods_entity("<STR_LIT>" % NAMES[<NUM_LIT:0>])<EOL><DEDENT>return self._entity<EOL>
Lazily creates the test entity (prevents spending time creating unused entities in iRODS). :return: an example entity
f14269:c0:m4
@property<EOL><INDENT>def entities(self) -> List[IrodsEntity]:<DEDENT>
if self._entities is None:<EOL><INDENT>self._entities = [self.create_irods_entity(name) for name in NAMES]<EOL><DEDENT>return self._entities<EOL>
Lazily creates a set of test entities (prevents spending time creating unused entities in iRODS). :return: an example collection of entities
f14269:c0:m5
@abstractmethod<EOL><INDENT>def create_mapper(self) -> _BatonAccessControlMapper:<DEDENT>
Creates a mapper to test with. :return: the created mapper
f14270:c0:m0
@abstractmethod<EOL><INDENT>def create_irods_entity(self, name: str, access_controls: Iterable[AccessControl]) -> IrodsEntity:<DEDENT>
Creates an iRODS entity to test with. :param name: the name of the entity to create :param access_controls: the access controls the entity should have :return: the created entity
f14270:c0:m1
def _create_entity_tree_in_container(self, entity_tree: EntityNode) -> (List[IrodsEntity], IrodsEntity):
self._collection_count += <NUM_LIT:1><EOL>container_path = self.setup_helper.create_collection("<STR_LIT>" % self._collection_count)<EOL>entities = list(create_entity_tree(self.test_with_baton, container_path, entity_tree, self.access_controls))<EOL>for entity in entities:<EOL><INDENT>if entity.get_collection_path() == container_path:<EOL><INDENT>assert entity.get_name() == entity_tree.name<EOL>return entities, entity<EOL><DEDENT><DEDENT>assert False<EOL>
Creates the given entity tree inside a container. :param entity_tree: the entity to create :return: tuple where the first element is a list of all the created iRODS entities (not including the container) and the second is the top level iRODS entity (not including the container)
f14270:c2:m11
@staticmethod<EOL><INDENT>def _fix_set_as_list_in_json_issue(data_object_as_json: dict):<DEDENT>
avus = set()<EOL>for avu in data_object_as_json[BATON_AVU_PROPERTY]:<EOL><INDENT>avus.add(frozendict(avu))<EOL><DEDENT>data_object_as_json[BATON_AVU_PROPERTY] = avus<EOL>
Works around the issue that an (unordered) set is represented as an (ordered) list in JSON :param data_object_as_json: a JSON representation of a `DataObject` instance
f14271:c8:m0
def _assert_data_object_as_json_equal(self, target: dict, actual: dict):
TestDataObjectJSONEncoder._fix_set_as_list_in_json_issue(target)<EOL>TestDataObjectJSONEncoder._fix_set_as_list_in_json_issue(actual)<EOL>self.assertEqual(target, actual)<EOL>
Assert that two JSON representations of `DataObject` instances are equal. :param target: the data object to check :param actual: the data object to check against
f14271:c8:m5
@staticmethod<EOL><INDENT>def from_metadata(metadata: Metadata) -> Any:<DEDENT>
irods_metadata = IrodsMetadata()<EOL>for key, value in metadata.items():<EOL><INDENT>irods_metadata[key] = {value}<EOL><DEDENT>return irods_metadata<EOL>
Static factory method to create an equivalent instance of this type from the given `Metadata` instance. :param metadata: the `Metadata` instance to create an instance of this class from :return: the created instance of this class
f14273:c0:m0
def add(self, key: str, value: str):
if key in self:<EOL><INDENT>super().__getitem__(key).add(value)<EOL><DEDENT>else:<EOL><INDENT>self[key] = {value}<EOL><DEDENT>
Adds the given value to the set of data stored under the given key. :param key: the set's key :param value: the value to add in the set associated to the given key
f14273:c0:m3
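The add-to-set behaviour above can be sketched as a small dict subclass (a simplified stand-in for `IrodsMetadata`):

```python
class IrodsMetadata(dict):
    """Maps each key to a set of string values, mirroring iRODS AVUs."""

    def add(self, key, value):
        # Create the set on first use, otherwise extend it in place
        if key in self:
            self[key].add(value)
        else:
            self[key] = {value}
```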
def __init__(self, data_object_replicas: Iterable[DataObjectReplica]=()):
self._data = dict() <EOL>for data_object_replica in data_object_replicas:<EOL><INDENT>self.add(data_object_replica)<EOL><DEDENT>
Constructor. :param data_object_replicas: (optional) replicas to go into this collection initially
f14273:c1:m0
def get_by_number(self, number: int) -> Optional[DataObjectReplica]:
return self._data.get(number, None)<EOL>
Gets the data object replica in this collection with the given number. Will return `None` if such replica does not exist. :param number: the number of the data object replica to get :return: the data object replica in this collection with the given number
f14273:c1:m1
def get_out_of_date(self) -> Sequence[DataObjectReplica]:
out_of_date = []<EOL>for number, data_object_replica in self._data.items():<EOL><INDENT>if not data_object_replica.up_to_date:<EOL><INDENT>out_of_date.append(data_object_replica)<EOL><DEDENT><DEDENT>return out_of_date<EOL>
Gets any data object replicas that are marked as out of date. :return: the out-of-date data object replicas
f14273:c1:m2
def add(self, data_object_replica: DataObjectReplica):
if data_object_replica.number in self._data:<EOL><INDENT>raise ValueError(<EOL>"<STR_LIT>" % data_object_replica.number)<EOL><DEDENT>self._data[data_object_replica.number] = data_object_replica<EOL>
Adds the given data object replica to this collection. Will raise a `ValueError` if a data object replica with the same number already exists. :param data_object_replica: the data object replica to add
f14273:c1:m3
def remove(self, identifier: Union[DataObjectReplica, int]):
if isinstance(identifier, int):<EOL><INDENT>self._remove_by_number(identifier)<EOL><DEDENT>elif isinstance(identifier, DataObjectReplica):<EOL><INDENT>self._remove_by_object(identifier)<EOL><DEDENT>else:<EOL><INDENT>raise TypeError("<STR_LIT>" % type(identifier))<EOL><DEDENT>
Removes a data object from this collection that has the given unique identifier. A `ValueError` will be raised if a data object with the given identifier does not exist. :param identifier: the identifier of the data object
f14273:c1:m4
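The type-based dispatch above (remove by number or by replica object) can be sketched together with the duplicate-number guard from `add`; the `Replica` tuple here is a stand-in for `DataObjectReplica`:

```python
import collections

Replica = collections.namedtuple("Replica", ["number"])


class ReplicaCollection:
    def __init__(self):
        self._data = {}  # replica number -> replica

    def add(self, replica):
        # Replica numbers are unique identifiers within the collection
        if replica.number in self._data:
            raise ValueError("replica number %d already in use" % replica.number)
        self._data[replica.number] = replica

    def remove(self, identifier):
        # Accept either the replica object itself or its number
        number = identifier if isinstance(identifier, int) else identifier.number
        if number not in self._data:
            raise ValueError("no replica with number %d" % number)
        del self._data[number]
```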
def _remove_by_number(self, number: int):
if number not in self._data:<EOL><INDENT>raise ValueError("<STR_LIT>" % number)<EOL><DEDENT>del self._data[number]<EOL>assert number not in self._data<EOL>
Removes the data object from this collection with the given number. A `ValueError` will be raised if a data object with the given number does not exist. :param number: the number of the data object to remove
f14273:c1:m5
def _remove_by_object(self, data_object_replica: DataObjectReplica):
self.remove(data_object_replica.number)<EOL>
Removes the given data object from this collection. A `ValueError` will be raised if the given data object does not exist within this collection. :param data_object_replica: the data object replica to remove
f14273:c1:m6
@abstractmethod<EOL><INDENT>def _path_to_baton_json(self, path: str) -> Dict:<DEDENT>
Converts a path to the type of iRODS entity the mapper deals with, to its JSON representation. :param path: the path to convert :return: the JSON representation of the path
f14276:c0:m0
@abstractmethod<EOL><INDENT>def _baton_json_to_irods_entity(self, entity_as_baton_json: Dict) -> EntityType:<DEDENT>
Converts the baton representation of an iRODS entity to an `EntityType` model. :param entity_as_baton_json: the baton serialization representation of the entity :return: the equivalent model
f14276:c0:m1
@abstractmethod<EOL><INDENT>def _extract_irods_entities_of_entity_type_from_baton_json(self, entities_as_baton_json: List[Dict]) -> List[Dict]:<DEDENT>
Extract from the list the JSON representation of entities of type `EntityType`. :param entities_as_baton_json: iRODS entities encoded as baton JSON :return: extracted entities as baton JSON
f14276:c0:m2
def __init__(self, additional_metadata_query_arguments: List[str], *args, **kwargs):
super().__init__(*args, **kwargs)<EOL>self._additional_metadata_query_arguments = additional_metadata_query_arguments<EOL>
Constructor. :param additional_metadata_query_arguments: TODO
f14276:c0:m3
def _create_entity_query_arguments(self, load_metadata: bool=True) -> List[str]:
arguments = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>"]<EOL>if load_metadata:<EOL><INDENT>arguments.append("<STR_LIT>")<EOL><DEDENT>return arguments<EOL>
Create arguments to use with baton. :param load_metadata: whether baton should load metadata :return: the arguments to use with baton
f14276:c0:m7
def _baton_json_to_irods_entities(self, entities_as_baton_json: List[Dict]) -> List[EntityType]:
assert(isinstance(entities_as_baton_json, list))<EOL>entities = []<EOL>for file_as_baton_json in entities_as_baton_json:<EOL><INDENT>entity = self._baton_json_to_irods_entity(file_as_baton_json)<EOL>entities.append(entity)<EOL><DEDENT>return entities<EOL>
Converts the baton representation of multiple iRODS entities to a list of `EntityType` models. :param entities_as_baton_json: the baton serialization representation of the entities :return: the equivalent models
f14276:c0:m8
@abstractmethod<EOL><INDENT>def _create_entity_with_path(self, path: str) -> IrodsEntity:<DEDENT>
Creates an entity model with the given path. :param path: the path the entity should have :return: the created entity model
f14277:c0:m0
@abstractmethod<EOL><INDENT>def _entity_to_baton_json(self, entity: IrodsEntity) -> Dict:<DEDENT>
Converts an entity model to its baton JSON representation. :param entity: the entity to produce the JSON representation of :return: the JSON representation
f14277:c0:m1
def _modify(self, paths: Union[str, List[str]], metadata_for_paths: Union[IrodsMetadata, List[IrodsMetadata]],<EOL>operation: str):
if isinstance(paths, str):<EOL><INDENT>paths = [paths]<EOL><DEDENT>if isinstance(metadata_for_paths, IrodsMetadata):<EOL><INDENT>metadata_for_paths = [metadata_for_paths for _ in paths]<EOL><DEDENT>elif len(paths) != len(metadata_for_paths):<EOL><INDENT>raise ValueError("<STR_LIT>"<EOL>"<STR_LIT>")<EOL><DEDENT>assert len(paths) == len(metadata_for_paths)<EOL>baton_in_json = []<EOL>for i in range(len(metadata_for_paths)):<EOL><INDENT>entity = self._create_entity_with_path(paths[i])<EOL>entity.metadata = metadata_for_paths[i]<EOL>baton_in_json.append(self._entity_to_baton_json(entity))<EOL><DEDENT>arguments = [BATON_METAMOD_OPERATION_FLAG, operation]<EOL>self.run_baton_query(BatonBinary.BATON_METAMOD, arguments, input_data=baton_in_json)<EOL>
Modifies the metadata of the entity or entities in iRODS with the given paths. :param paths: the paths of the entity or entities to modify :param metadata_for_paths: the metadata to change. If only one metadata object is given, that metadata is set for all, else the metadata is matched against the path with the corresponding index :param operation: the baton operation used to modify the metadata
f14277:c0:m7
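The normalisation above (a single path promoted to a list, a single metadata object broadcast across all paths, otherwise a strict one-to-one pairing) can be sketched with plain dicts standing in for `IrodsMetadata`:

```python
def pair_paths_with_metadata(paths, metadata_for_paths):
    if isinstance(paths, str):
        paths = [paths]
    if isinstance(metadata_for_paths, dict):
        # One metadata object applies to every path
        metadata_for_paths = [metadata_for_paths] * len(paths)
    elif len(paths) != len(metadata_for_paths):
        raise ValueError("number of paths must match number of metadata objects")
    return list(zip(paths, metadata_for_paths))
```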
def _path_to_baton_json(self, path: str) -> Dict:
entity = self._create_entity_with_path(path)<EOL>return self._entity_to_baton_json(entity)<EOL>
Converts a path to the type of iRODS entity the mapper deals with, to its JSON representation. :param path: the path to convert :return: the JSON representation of the path
f14277:c0:m8
@abstractmethod<EOL><INDENT>def _object_deserialiser(self, object_as_json: dict) -> CustomObjectType:<DEDENT>
Function used to take the JSON representation of the custom object returned by the specific query and produce a Python model. :param object_as_json: JSON representation of the custom object :return: Python model of the custom object
f14278:c0:m0
@staticmethod<EOL><INDENT>def validate_baton_binaries_location(baton_binaries_directory: str) -> Optional[Exception]:<DEDENT>
if os.path.isfile(baton_binaries_directory):<EOL><INDENT>return ValueError("<STR_LIT>"<EOL>"<STR_LIT>")<EOL><DEDENT>for baton_binary in BatonBinary:<EOL><INDENT>binary_location = os.path.join(baton_binaries_directory, baton_binary.value)<EOL>if not (os.path.isfile(binary_location) and os.access(binary_location, os.X_OK)):<EOL><INDENT>return ValueError("<STR_LIT>"<EOL>"<STR_LIT>"<EOL>% (baton_binaries_directory, [name.value for name in BatonBinary]))<EOL><DEDENT><DEDENT>return None<EOL>
Validates that the given directory contains the baton binaries required to use the runner. :param baton_binaries_directory: the directory to check :return: exception that describes the issue else `None` if no issues
f14281:c1:m0
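The validation above (the location is not a plain file, and every required binary is present and executable) can be sketched as follows; returning an error message rather than an exception object keeps the sketch simple:

```python
import os


def validate_binaries(directory, required_names):
    # A file where the directory should be is an immediate failure
    if os.path.isfile(directory):
        return "%s is a file, not a directory" % directory
    for name in required_names:
        location = os.path.join(directory, name)
        # Binaries must both exist and carry the execute bit
        if not (os.path.isfile(location) and os.access(location, os.X_OK)):
            return "missing or non-executable binary: %s" % location
    return None
```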
@staticmethod<EOL><INDENT>def _raise_any_errors_given_in_baton_out(baton_out_as_json: List[Dict]):<DEDENT>
if not isinstance(baton_out_as_json, list):<EOL><INDENT>baton_out_as_json = [baton_out_as_json]<EOL><DEDENT>for baton_item_as_json in baton_out_as_json:<EOL><INDENT>if BATON_ERROR_PROPERTY in baton_item_as_json:<EOL><INDENT>error = baton_item_as_json[BATON_ERROR_PROPERTY]<EOL>error_message = error[BATON_ERROR_MESSAGE_KEY]<EOL>error_code = error[BATON_ERROR_CODE_KEY]<EOL>if error_code == IRODS_ERROR_USER_FILE_DOES_NOT_EXIST or (error_code == IRODS_ERROR_CAT_INVALID_ARGUMENT and "<STR_LIT>" in error_message):<EOL><INDENT>raise FileNotFoundError(error_message)<EOL><DEDENT>elif error_code == IRODS_ERROR_CATALOG_ALREADY_HAS_ITEM_BY_THAT_NAME or error_code == IRODS_ERROR_CAT_SUCCESS_BUT_WITH_NO_INFO:<EOL><INDENT>raise KeyError(error_message)<EOL><DEDENT>else:<EOL><INDENT>raise RuntimeError(error_message)<EOL><DEDENT><DEDENT><DEDENT>
Raises any errors that baton has expressed in its output. :param baton_out_as_json: the output baton gave as parsed serialization
f14281:c1:m1
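The code-to-exception mapping above can be sketched as follows; the numeric error codes here are placeholders, not the real iRODS constants:

```python
USER_FILE_DOES_NOT_EXIST = -1    # placeholder, not the real iRODS code
CATALOG_ALREADY_HAS_ITEM = -2    # placeholder, not the real iRODS code


def raise_any_errors(items):
    # baton reports failures in-band via an "error" object per item
    if not isinstance(items, list):
        items = [items]
    for item in items:
        if "error" not in item:
            continue
        code = item["error"]["code"]
        message = item["error"]["message"]
        if code == USER_FILE_DOES_NOT_EXIST:
            raise FileNotFoundError(message)
        if code == CATALOG_ALREADY_HAS_ITEM:
            raise KeyError(message)
        raise RuntimeError(message)
```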
def __init__(self, baton_binaries_directory: str, skip_baton_binaries_validation: bool=False,<EOL>timeout_queries_after: timedelta=None):
if not skip_baton_binaries_validation:<EOL><INDENT>exception = BatonRunner.validate_baton_binaries_location(baton_binaries_directory)<EOL>if exception is not None:<EOL><INDENT>raise exception<EOL><DEDENT><DEDENT>self._baton_binaries_directory = baton_binaries_directory<EOL>self.timeout_queries_after = timeout_queries_after<EOL>
Constructor. :param baton_binaries_directory: the directory containing baton's binaries :param skip_baton_binaries_validation: skips validation of baton binaries (intended for testing only)
f14281:c1:m2
def run_baton_query(self, baton_binary: BatonBinary, program_arguments: List[str]=None, input_data: Any=None) -> List[Dict]:
if program_arguments is None:<EOL><INDENT>program_arguments = []<EOL><DEDENT>baton_binary_location = os.path.join(self._baton_binaries_directory, baton_binary.value)<EOL>program_arguments = [baton_binary_location] + program_arguments<EOL>_logger.info("<STR_LIT>" % (program_arguments, input_data))<EOL>start_at = time.monotonic()<EOL>baton_out = self._run_command(program_arguments, input_data=input_data)<EOL>time_taken_to_run_query = time.monotonic() - start_at<EOL>_logger.debug("<STR_LIT>" % (time_taken_to_run_query, baton_out))<EOL>if len(baton_out) == <NUM_LIT:0>:<EOL><INDENT>return []<EOL><DEDENT>if len(baton_out) > <NUM_LIT:0> and baton_out[<NUM_LIT:0>] != '<STR_LIT:[>':<EOL><INDENT>baton_out = "<STR_LIT>" % baton_out.replace('<STR_LIT:\n>', '<STR_LIT:U+002C>')<EOL><DEDENT>baton_out_as_json = json.loads(baton_out)<EOL>BatonRunner._raise_any_errors_given_in_baton_out(baton_out_as_json)<EOL>return baton_out_as_json<EOL>
Runs a baton query. :param baton_binary: the baton binary to use :param program_arguments: arguments to give to the baton binary :param input_data: input data to the baton binary :return: parsed serialization returned by baton
f14281:c1:m3
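`run_baton_query` normalises baton's output before parsing: baton may emit one JSON document per line rather than a single JSON array, so the runner wraps the lines in brackets and joins them with commas. A minimal sketch of that normalisation, with hypothetical inputs:

```python
import json

def parse_baton_out(raw):
    """Parse baton's stdout: either a JSON array already, or one JSON
    document per line, which gets wrapped into an array (as in
    run_baton_query above). Empty output yields an empty list."""
    raw = raw.strip()
    if len(raw) == 0:
        return []
    if raw[0] != "[":
        raw = "[%s]" % raw.replace("\n", ",")
    return json.loads(raw)

single = parse_baton_out('{"collection": "/zone/home"}')
multiple = parse_baton_out('{"a": 1}\n{"a": 2}')
```

Normalising to a list means downstream code (such as the error-raising step) can always iterate, regardless of how many documents baton produced.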
def _run_command(self, arguments: List[str], input_data: Any=None, output_encoding: str="<STR_LIT:utf-8>") -> str:
process = subprocess.Popen(arguments, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)<EOL>if isinstance(input_data, List):<EOL><INDENT>for to_write in input_data:<EOL><INDENT>to_write_as_json = json.dumps(to_write)<EOL>process.stdin.write(str.encode(to_write_as_json))<EOL><DEDENT>input_data = None<EOL><DEDENT>else:<EOL><INDENT>input_data = str.encode(json.dumps(input_data))<EOL><DEDENT>timeout_in_seconds = self.timeout_queries_after.total_seconds() if self.timeout_queries_after is not None else None<EOL>out, error = process.communicate(input=input_data, timeout=timeout_in_seconds)<EOL>if len(out) == <NUM_LIT:0> and len(error) > <NUM_LIT:0>:<EOL><INDENT>raise RuntimeError(error)<EOL><DEDENT>return out.decode(output_encoding).rstrip()<EOL>
Run a command as a subprocess. Ignores errors given over stderr if there is output on stdout (this is the case where baton has run correctly and has expressed the error in its JSON output, which can be handled more appropriately upstream of this method). :param arguments: the arguments to run :param input_data: the input data to pass to the subprocess :param output_encoding: optional specification of the output encoding to expect :return: the process' standard out
f14281:c1:m4
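The core of `_run_command` is `Popen` with all three streams piped, JSON-encoded input over stdin, and a `communicate` timeout. A self-contained sketch, using a small Python one-liner as a stand-in for a baton binary so it runs anywhere:

```python
import json
import subprocess
import sys

def run_json_subprocess(arguments, input_data=None, timeout=None):
    """Run a subprocess, feeding JSON-encoded input over stdin and
    returning decoded stdout (a simplified sketch of _run_command)."""
    process = subprocess.Popen(
        arguments, stdin=subprocess.PIPE,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdin_bytes = json.dumps(input_data).encode() if input_data is not None else None
    out, err = process.communicate(input=stdin_bytes, timeout=timeout)
    if len(out) == 0 and len(err) > 0:
        # Only surface stderr when there was no usable stdout, matching
        # the behaviour described in the docstring above.
        raise RuntimeError(err.decode())
    return out.decode().rstrip()

# A stand-in "binary" that echoes its stdin back.
echoed = run_json_subprocess(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
    input_data={"avus": []}, timeout=10)
```

Passing the timeout to `communicate` (rather than polling) means `subprocess.TimeoutExpired` is raised if the query hangs, which callers can catch and handle.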
def connect_to_irods_with_baton(baton_binaries_directory: str, skip_baton_binaries_validation: bool=False) -> Connection:
return Connection(baton_binaries_directory, skip_baton_binaries_validation)<EOL>
Convenience method to create a pseudo connection to iRODS. :param baton_binaries_directory: see `Connection.__init__` :param skip_baton_binaries_validation: see `Connection.__init__` :return: pseudo connection to iRODS
f14282:m0
def __init__(self, baton_binaries_directory: str, skip_baton_binaries_validation: bool=False):
self.data_object = BatonDataObjectMapper(baton_binaries_directory, skip_baton_binaries_validation)<EOL>self.collection = BatonCollectionMapper(baton_binaries_directory, skip_baton_binaries_validation)<EOL>self.specific_query = BatonSpecificQueryMapper(baton_binaries_directory, skip_baton_binaries_validation)<EOL>
Constructor. :param baton_binaries_directory: the directory containing the baton binaries :param skip_baton_binaries_validation: whether checks that the correct baton binaries exist within the given directory should be skipped
f14282:c0:m0
@abstractmethod<EOL><INDENT>def _create_entity_with_path(self, path: str) -> IrodsEntity:<DEDENT>
Creates an entity model with the given path. :param path: the path the entity should have :return: the created entity model
f14283:c0:m0
@abstractmethod<EOL><INDENT>def _entity_to_baton_json(self, entity: IrodsEntity) -> Dict:<DEDENT>
Converts an entity model to its baton JSON representation. :param entity: the entity to produce the JSON representation of :return: the JSON representation
f14283:c0:m1
def _path_to_baton_json(self, path: str) -> Dict:
entity = self._create_entity_with_path(path)<EOL>return self._entity_to_baton_json(entity)<EOL>
Converts a path, referring to the type of iRODS entity the mapper deals with, to its baton JSON representation. :param path: the path to convert :return: the JSON representation of the path
f14283:c0:m7
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)<EOL>self._original_run_baton_query = self.run_baton_query<EOL>self._hijack_frame_ids = set() <EOL>self.run_baton_query = self._hijacked_run_baton_query<EOL>
Constructor.
f14283:c2:m0
def _do_recursive(self, method_that_runs_baton_chmod: callable, *args, **kwargs):
current_frame_id = id(inspect.currentframe())<EOL>try:<EOL><INDENT>self._hijack_frame_ids.add(current_frame_id)<EOL>method_that_runs_baton_chmod(*args, **kwargs)<EOL><DEDENT>finally:<EOL><INDENT>self._hijack_frame_ids.remove(current_frame_id)<EOL><DEDENT>
Adds the `--recursive` argument to all calls to `baton-chmod`. :param method_that_runs_baton_chmod: the method that, at a lower level, calls out to baton-chmod :param args: positional arguments to call given method with :param kwargs: named arguments to call given method with
f14283:c2:m7
def _hijacked_run_baton_query(<EOL>self, baton_binary: BatonBinary, program_arguments: List[str]=None, input_data: Any=None) -> List[Dict]:
if baton_binary == BatonBinary.BATON_CHMOD:<EOL><INDENT>current_frame = inspect.currentframe()<EOL>def frame_code_in_same_file(frame) -> bool:<EOL><INDENT>return frame.f_code.co_filename == current_frame.f_code.co_filename<EOL><DEDENT>frame_back = current_frame.f_back<EOL>assert frame_code_in_same_file(frame_back)<EOL>while frame_back is not None and frame_code_in_same_file(frame_back):<EOL><INDENT>if id(frame_back) in self._hijack_frame_ids:<EOL><INDENT>return self._original_run_baton_query(baton_binary, [BATON_CHMOD_RECURSIVE_FLAG], input_data)<EOL><DEDENT>frame_back = frame_back.f_back<EOL><DEDENT><DEDENT>return self._original_run_baton_query(baton_binary, program_arguments, input_data)<EOL>
Hijacked `run_baton_query` method that adds the `--recursive` flag to calls to `baton-chmod` originating from code called from frames with ids in `self._hijack_frame_ids`. :param baton_binary: see `BatonRunner.run_baton_query` :param program_arguments: see `BatonRunner.run_baton_query` :param input_data: see `BatonRunner.run_baton_query` :return: see `BatonRunner.run_baton_query`
f14283:c2:m8
@staticmethod<EOL><INDENT>def create_from_str(name_and_zone: str):<DEDENT>
if _NAME_ZONE_SEGREGATOR not in name_and_zone:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>name, zone = name_and_zone.split(_NAME_ZONE_SEGREGATOR)<EOL>if len(name) == <NUM_LIT:0>:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>if len(zone) == <NUM_LIT:0>:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>return User(name, zone)<EOL>
Factory method for creating a user from a string in the form `name#zone`. :param name_and_zone: the user's name, followed by a hash, followed by the user's zone :return: the created user
f14285:c2:m0
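`create_from_str` splits on the separator and rejects a missing separator or an empty name or zone. A standalone sketch of the same parsing, assuming the separator is `#` (the original keeps it in the `_NAME_ZONE_SEGREGATOR` constant) and returning a plain tuple rather than a `User`:

```python
SEPARATOR = "#"  # assumed value of _NAME_ZONE_SEGREGATOR

def parse_user(name_and_zone):
    """Split a `name#zone` string into (name, zone), rejecting malformed
    input (mirrors User.create_from_str)."""
    if SEPARATOR not in name_and_zone:
        raise ValueError("no %r separator in %r" % (SEPARATOR, name_and_zone))
    name, zone = name_and_zone.split(SEPARATOR)
    if len(name) == 0:
        raise ValueError("empty user name in %r" % name_and_zone)
    if len(zone) == 0:
        raise ValueError("empty zone in %r" % name_and_zone)
    return name, zone

parsed = parse_user("rods#testZone")
```

Note that the two-target unpacking also rejects strings with more than one separator (`a#b#c`), since `split` then yields three parts.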
@property<EOL><INDENT>def access_controls(self) -> Optional[Set[AccessControl]]:<DEDENT>
return copy(self._access_controls)<EOL>
Gets a copy of the access controls associated to this entity. :return: copy of the access controls
f14285:c4:m3
@access_controls.setter<EOL><INDENT>def access_controls(self, access_controls: Optional[Iterable[AccessControl]]):<DEDENT>
if access_controls is not None:<EOL><INDENT>access_controls = set(access_controls)<EOL><DEDENT>self._access_controls = access_controls<EOL>
Sets the access controls associated to this entity. :param access_controls: the access controls (immutable) or `None`
f14285:c4:m4
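The getter/setter pair above implements a defensive-copy property: the setter normalises any iterable to a `set`, and the getter hands out a copy so callers cannot mutate the entity's internal state. A compact runnable sketch of the pattern (the `Entity` class and string access controls here are illustrative stand-ins):

```python
from copy import copy

class Entity:
    """Sketch of the access-control property pattern above."""
    def __init__(self):
        self._access_controls = None

    @property
    def access_controls(self):
        # Hand out a copy: mutating the returned set must not affect us.
        return copy(self._access_controls)

    @access_controls.setter
    def access_controls(self, access_controls):
        # Normalise any iterable to a set; preserve None as "not known".
        if access_controls is not None:
            access_controls = set(access_controls)
        self._access_controls = access_controls

entity = Entity()
entity.access_controls = ["read", "read", "own"]  # duplicates collapse
snapshot = entity.access_controls
snapshot.add("write")                             # does not touch the entity
```

Keeping `None` distinct from the empty set matters here: `None` means the access controls were never fetched, while `set()` means they were fetched and there are none.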
def get_collection_path(self) -> str:
return self.path.rsplit('<STR_LIT:/>', <NUM_LIT:1>)[<NUM_LIT:0>]<EOL>
Gets the path of the collection in which this entity resides. :return: the path of the collection that this entity is in
f14285:c4:m5
def get_name(self) -> str:
return self.path.rsplit('<STR_LIT:/>', <NUM_LIT:1>)[-<NUM_LIT:1>]<EOL>
Gets the name of this entity. :return: the name of this entity
f14285:c4:m6
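Both helpers above rely on `str.rsplit` with a limit of one: the part before the final `/` is the containing collection, and the part after it is the entity's name. A minimal sketch with free functions (hypothetical names) in place of the methods:

```python
def collection_path(path):
    """Everything before the final '/' (mirrors get_collection_path)."""
    return path.rsplit("/", 1)[0]

def entity_name(path):
    """Everything after the final '/' (mirrors get_name)."""
    return path.rsplit("/", 1)[-1]

parent = collection_path("/testZone/home/user/data.txt")
name = entity_name("/testZone/home/user/data.txt")
```

`rsplit` with `maxsplit=1` splits only at the last separator, so names are recovered correctly even when the collection path itself contains many `/` segments.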
def __init__(self, path: str, access_controls: Iterable[AccessControl]=None,<EOL>metadata: IrodsMetadata=None, replicas: Iterable[DataObjectReplica]=None):
from baton.collections import DataObjectReplicaCollection<EOL>super().__init__(path, access_controls, metadata)<EOL>self.replicas = DataObjectReplicaCollection(replicas) if replicas is not None else None<EOL>
Constructor. :param path: path of data object in iRODS :param access_controls: access controls or `None` if not known :param metadata: iRODS metadata or `None` if not known :param replicas: replicas or `None` if not known
f14285:c5:m0
def get_number_of_arguments(self) -> int:
return self.sql.count("<STR_LIT:?>")<EOL>
Gets the number of arguments in the specific query. :return: the number of arguments
f14285:c7:m1
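Counting the query's arguments is just counting its `?` placeholders. A sketch with a hypothetical free function and an illustrative iRODS-style query:

```python
def number_of_arguments(sql):
    """Number of positional arguments a specific query takes, counted
    as '?' placeholders (mirrors get_number_of_arguments)."""
    return sql.count("?")

count = number_of_arguments(
    "SELECT data_name FROM r_data_main "
    "WHERE data_owner_name = ? AND data_repl_num = ?")
```

One caveat of this simple approach: a literal `?` inside a quoted string in the SQL would be counted as a placeholder too, so it assumes queries do not embed question marks in string constants.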
@abstractmethod<EOL><INDENT>def get_all(self, paths: Union[str, Sequence[str]]) -> Union[IrodsMetadata, List[IrodsMetadata]]:<DEDENT>
Gets all of the metadata for the iRODS entities at the given path or paths. If multiple paths are given, the metadata collection at index `i` in the output corresponds to the path at index `i` in the input. i.e. ``` output = mapper.get_all(["path_1", "path_2"]) metadata_for_path_1 = output[0] metadata_for_path_2 = output[1] ``` A `ValueError` will be raised if a path does not correspond to a valid entity. :param paths: the path or paths of the entities to get the metadata for :return: metadata for the given entity or entities
f14287:c0:m0
@abstractmethod<EOL><INDENT>def add(self, paths: Union[str, Iterable[str]], metadata: Union[IrodsMetadata, List[IrodsMetadata]]):<DEDENT>
Adds the given metadata to the iRODS entities at the given path or paths. If a single metadata collection is given, that metadata is added to all paths. If a list of metadata collections is given, each collection is added to the path with the corresponding index. A `ValueError` will be raised if a path does not correspond to a valid entity. :param paths: the path or paths of the entities to add the metadata to :param metadata: the metadata to write
f14287:c0:m1
@abstractmethod<EOL><INDENT>def set(self, paths: Union[str, Iterable[str]], metadata: Union[IrodsMetadata, List[IrodsMetadata]]):<DEDENT>
Sets the given metadata on the iRODS entities at the given path or paths. Similar to `add`, except pre-existing metadata with matching keys will be overwritten. If a single metadata collection is given, that metadata is set for all paths. If a list of metadata collections is given, each collection is set for the path with the corresponding index. A `ValueError` will be raised if a path does not correspond to a valid entity. :param paths: the path or paths of the entities to set the metadata for :param metadata: the metadata to set
f14287:c0:m2
@abstractmethod<EOL><INDENT>def remove(self, paths: Union[str, Iterable[str]], metadata: Union[IrodsMetadata, List[IrodsMetadata]]):<DEDENT>
Removes the given metadata from the given iRODS entity or entities. If a single metadata collection is given, that metadata is removed from all paths. If a list of metadata collections is given, each collection is removed for the path with the corresponding index. A `KeyError` will be raised if an entity does not have metadata with the given key and value. If this exception is raised part-way through the removal of multiple pieces of metadata, a rollback will not occur - it would be necessary to get the metadata for the entity to determine what metadata in the collection was removed successfully. A `ValueError` will be raised if a path does not correspond to a valid entity. :param paths: the path or paths of the entities to remove metadata from :param metadata: the metadata to remove
f14287:c0:m3
@abstractmethod<EOL><INDENT>def remove_all(self, paths: Union[str, Iterable[str]]):<DEDENT>
Removes all of the metadata from the iRODS entities at the given path or paths. A `ValueError` will be raised if a path does not correspond to a valid entity. :param paths: the path or paths of the entities to remove all of the metadata from
f14287:c0:m4
@abstractmethod<EOL><INDENT>def get_all(self, paths: Union[str, Sequence[str]]) -> Union[Set[AccessControl], Sequence[Set[AccessControl]]]:<DEDENT>
Gets all of the access controls for the iRODS entities at the given path or paths. If multiple paths are given, the set of access controls at index `i` in the output corresponds to the path at index `i` in the input. :param paths: the path or paths of the entities to find access controls for :return: the access controls for the given entity or entities
f14287:c1:m0
@abstractmethod<EOL><INDENT>def add_or_replace(self, paths: Union[str, Iterable[str]],<EOL>access_controls: Union[AccessControl, Iterable[AccessControl]]):<DEDENT>
Adds the given access controls to those associated with the given path or collection of paths. If an access control already exists for a user, the access control is replaced. :param paths: the paths to add the access controls to :param access_controls: the access controls to add
f14287:c1:m1