<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _list_patient_ids(self):
""" Utility function to return a list of patient ids in the Cohort """ |
results = []
for patient in self:
    results.append(patient.id)
return results |
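The id-listing utility above can also be written as a single list comprehension. Below is a minimal, self-contained sketch using stand-in `FakePatient`/`FakeCohort` classes (hypothetical, for illustration only; the real Cohort class is iterable over Patient objects with an `id` attribute):

```python
# Stand-in classes mimicking the iterable Cohort / Patient shape assumed above.
class FakePatient:
    def __init__(self, id):
        self.id = id

class FakeCohort:
    def __init__(self, patients):
        self._patients = patients

    def __iter__(self):
        return iter(self._patients)

    def _list_patient_ids(self):
        """Utility function to return a list of patient ids in the Cohort."""
        return [patient.id for patient in self]

cohort = FakeCohort([FakePatient("P1"), FakePatient("P2")])
```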
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def summarize_provenance_per_cache(self):
"""Utility function to summarize provenance files for cached items used by a Cohort, for each cache_dir that exists.

    Only existing cache_dirs are summarized. This is a summary of provenance files because the function checks whether all patients' data have the same provenance within the cache dir. The function assumes that it is desirable to have all patients' data generated using the same environment, for each cache type.

    At the moment, most PROVENANCE files contain details about packages used to generate the cached data file. However, this function is generic and so it summarizes the contents of those files irrespective of their contents.

    Returns
    -------
    Dict containing summarized provenance for each existing cache_dir, after checking that provenance files are identical among all patients in the data frame for that cache_dir.

    If conflicting PROVENANCE files are discovered within a cache-dir:
    - a warning is generated, describing the conflict
    - a value of `None` is returned in the dictionary for that cache-dir

    See also
    --------
    * `?cohorts.Cohort.summarize_provenance` which summarizes provenance files among cache_dirs.
    * `?cohorts.Cohort.summarize_dataframe` which hashes/summarizes contents of the data frame for this cohort.
    """ |
provenance_summary = {}
df = self.as_dataframe()
for cache in self.cache_names:
    cache_name = self.cache_names[cache]
    cache_provenance = None
    num_discrepant = 0
    this_cache_dir = path.join(self.cache_dir, cache_name)
    if path.exists(this_cache_dir):
        for patient_id in self._list_patient_ids():
            patient_cache_dir = path.join(this_cache_dir, patient_id)
            try:
                this_provenance = self.load_provenance(patient_cache_dir=patient_cache_dir)
            except Exception:
                # provenance file may be missing or unreadable for this patient
                this_provenance = None
            if this_provenance:
                if not cache_provenance:
                    cache_provenance = this_provenance
                else:
                    num_discrepant += compare_provenance(this_provenance, cache_provenance)
        if num_discrepant == 0:
            provenance_summary[cache_name] = cache_provenance
        else:
            provenance_summary[cache_name] = None
return provenance_summary |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def summarize_provenance(self):
"""Utility function to summarize provenance files for cached items used by a Cohort.

    At the moment, most PROVENANCE files contain details about packages used to generate files. However, this function is generic and so it summarizes the contents of those files irrespective of their contents.

    Returns
    -------
    Dict containing a summary of provenance items, among all cache dirs used by the Cohort. If all provenances are identical across all cache dirs, then a single set of provenances is returned. Otherwise, the provenance items per cache_dir are returned.

    See also `?cohorts.Cohort.summarize_provenance_per_cache` which is used to summarize provenance for each existing cache_dir.
    """ |
provenance_per_cache = self.summarize_provenance_per_cache()
summary_provenance = None
num_discrepant = 0
for cache in provenance_per_cache:
    if not summary_provenance:
        ## pick arbitrary provenance & call this the "summary" (for now)
        summary_provenance = provenance_per_cache[cache]
        summary_provenance_name = cache
    ## for each cache, check equivalence with summary_provenance
    num_discrepant += compare_provenance(
        provenance_per_cache[cache],
        summary_provenance,
        left_outer_diff="In %s but not in %s" % (cache, summary_provenance_name),
        right_outer_diff="In %s but not in %s" % (summary_provenance_name, cache)
    )
## compare provenance across cached items
if num_discrepant == 0:
    prov = summary_provenance  ## report summary provenance if it exists
else:
    prov = provenance_per_cache  ## otherwise, return provenance per cache
return prov |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def summarize_data_sources(self):
"""Utility function to summarize data source status for this Cohort, useful for confirming the state of data used for an analysis Returns Dictionary with summary of data sources Currently contains - dataframe_hash: hash of the dataframe (see `?cohorts.Cohort.summarize_dataframe`) - provenance_file_summary: summary of provenance file contents (see `?cohorts.Cohort.summarize_provenance`) """ |
provenance_file_summary = self.summarize_provenance()
dataframe_hash = self.summarize_dataframe()
results = {
"provenance_file_summary": provenance_file_summary,
"dataframe_hash": dataframe_hash
}
return results |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def strelka_somatic_variant_stats(variant, variant_metadata):
"""Parse out the variant calling statistics for a given variant from a Strelka VCF.

    Parameters
    ----------
    variant : varcode.Variant
    variant_metadata : dict
        Dictionary of metadata for this variant; its "sample_info" entry maps sample names to variant calling statistics, corresponding to the sample columns in a Strelka VCF.

    Returns
    -------
    SomaticVariantStats
    """ |
sample_info = variant_metadata["sample_info"]
# Ensure there are exactly two samples in the VCF, a tumor and normal
assert len(sample_info) == 2, "Expected exactly two samples (tumor and normal) in the somatic VCF"
tumor_stats = _strelka_variant_stats(variant, sample_info["TUMOR"])
normal_stats = _strelka_variant_stats(variant, sample_info["NORMAL"])
return SomaticVariantStats(tumor_stats=tumor_stats, normal_stats=normal_stats) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _strelka_variant_stats(variant, sample_info):
"""Parse a single sample's variant calling statistics based on Strelka VCF output.

    Parameters
    ----------
    variant : varcode.Variant
    sample_info : dict
        Dictionary of Strelka-specific variant calling fields

    Returns
    -------
    VariantStats
    """ |
if variant.is_deletion or variant.is_insertion:
    # ref: https://sites.google.com/site/strelkasomaticvariantcaller/home/somatic-variant-output
    ref_depth = int(sample_info['TAR'][0])  # number of reads supporting ref allele (non-deletion)
    alt_depth = int(sample_info['TIR'][0])  # number of reads supporting alt allele (deletion)
    depth = ref_depth + alt_depth
else:
    # Retrieve the Tier 1 counts from Strelka
    ref_depth = int(sample_info[variant.ref + "U"][0])
    alt_depth = int(sample_info[variant.alt + "U"][0])
    depth = alt_depth + ref_depth
if depth > 0:
    vaf = float(alt_depth) / depth
else:
    # unclear how to define vaf if no reads support the variant;
    # up to the user to interpret this (hopefully filtered out in QC settings)
    vaf = None
return VariantStats(depth=depth, alt_depth=alt_depth, variant_allele_frequency=vaf) |
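The tier-1 arithmetic above can be tried in isolation. Here is a standalone sketch of the SNV branch, operating on a plain dict rather than a PyVCF sample record (the assumed shape, as in Strelka output, is that each `{base}U` field maps to a list whose first element is the tier-1 count):

```python
# Standalone sketch of Strelka tier-1 SNV depth/VAF computation.
def strelka_snv_stats(ref, alt, sample_info):
    ref_depth = int(sample_info[ref + "U"][0])  # tier-1 reads supporting ref allele
    alt_depth = int(sample_info[alt + "U"][0])  # tier-1 reads supporting alt allele
    depth = ref_depth + alt_depth
    # VAF is undefined (None) when no reads cover the site
    vaf = float(alt_depth) / depth if depth > 0 else None
    return depth, alt_depth, vaf
```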
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mutect_somatic_variant_stats(variant, variant_metadata):
"""Parse out the variant calling statistics for a given variant from a Mutect VCF.

    Parameters
    ----------
    variant : varcode.Variant
    variant_metadata : dict
        Dictionary of metadata for this variant; its "sample_info" entry maps sample names to variant calling statistics, corresponding to the sample columns in a Mutect VCF.

    Returns
    -------
    SomaticVariantStats
    """ |
sample_info = variant_metadata["sample_info"]
# Ensure there are exactly two samples in the VCF, a tumor and normal
assert len(sample_info) == 2, "Expected exactly two samples (tumor and normal) in the somatic VCF"
# Find the sample with the genotype field set to variant in the VCF
tumor_sample_infos = [info for info in sample_info.values() if info["GT"] == "0/1"]
# Ensure there is only one such sample
assert len(tumor_sample_infos) == 1, "More than one tumor sample found in the VCF file"
tumor_sample_info = tumor_sample_infos[0]
normal_sample_info = [info for info in sample_info.values() if info["GT"] != "0/1"][0]
tumor_stats = _mutect_variant_stats(variant, tumor_sample_info)
normal_stats = _mutect_variant_stats(variant, normal_sample_info)
return SomaticVariantStats(tumor_stats=tumor_stats, normal_stats=normal_stats) |
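The tumor/normal selection logic above hinges on the genotype field: Mutect marks the tumor sample with a heterozygous `0/1` call. A minimal sketch of just that selection step, over a plain dict of sample records:

```python
# Sketch: identify the tumor sample by its "0/1" genotype, as in the
# Mutect parsing above; everything else is treated as the normal sample.
def split_tumor_normal(sample_info):
    tumor = [name for name, info in sample_info.items() if info["GT"] == "0/1"]
    normal = [name for name, info in sample_info.items() if info["GT"] != "0/1"]
    assert len(tumor) == 1, "Expected exactly one tumor sample"
    assert len(normal) == 1, "Expected exactly one normal sample"
    return tumor[0], normal[0]
```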
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def maf_somatic_variant_stats(variant, variant_metadata):
""" Parse out the variant calling statistics for a given variant from a MAF file Assumes the MAF format described here: https://www.biostars.org/p/161298/#161777 Parameters variant : varcode.Variant variant_metadata : dict Dictionary of metadata for this variant Returns ------- SomaticVariantStats """ |
tumor_stats = None
normal_stats = None
if "t_ref_count" in variant_metadata:
    tumor_stats = _maf_variant_stats(variant, variant_metadata, prefix="t")
if "n_ref_count" in variant_metadata:
    normal_stats = _maf_variant_stats(variant, variant_metadata, prefix="n")
return SomaticVariantStats(tumor_stats=tumor_stats, normal_stats=normal_stats) |
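The MAF convention above keys tumor fields with a `t_` prefix and normal fields with `n_`. A sketch of what `_maf_variant_stats` plausibly computes from those columns; note the `{prefix}_alt_count` key name is an assumption based on the standard MAF format, since only `{prefix}_ref_count` appears in the source:

```python
# Hypothetical sketch of per-sample MAF stats from t_*/n_* count columns.
def maf_stats(variant_metadata, prefix):
    ref_depth = int(variant_metadata[prefix + "_ref_count"])
    alt_depth = int(variant_metadata[prefix + "_alt_count"])  # assumed key name
    depth = ref_depth + alt_depth
    vaf = float(alt_depth) / depth if depth > 0 else None
    return depth, alt_depth, vaf
```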
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _vcf_is_strelka(variant_file, variant_metadata):
"""Return True if variant_file given is in strelka format """ |
if "strelka" in variant_file.lower():
    return True
elif "NORMAL" in variant_metadata["sample_info"].keys():
    return True
else:
    with open(variant_file, "r") as fd:
        vcf_reader = vcf.Reader(fd)
        try:
            vcf_type = vcf_reader.metadata["content"]
        except KeyError:
            vcf_type = ""
    if "strelka" in vcf_type.lower():
        return True
return False |
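The first two checks in the heuristic above are pure string tests and can be sketched without any VCF parsing (the third check, reading the VCF `##content` metadata line, is omitted here):

```python
# Sketch of the filename/sample-name portion of the Strelka detection
# heuristic; sample_names stands in for variant_metadata["sample_info"].keys().
def looks_like_strelka(variant_file, sample_names):
    if "strelka" in variant_file.lower():
        return True
    # Strelka somatic VCFs label their sample columns NORMAL/TUMOR
    return "NORMAL" in sample_names
```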
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def variant_stats_from_variant(variant, metadata, merge_fn=(lambda all_stats: max(all_stats, key=(lambda stats: stats.tumor_stats.depth)))):
"""Parse the variant calling stats from a variant called from multiple variant files. The stats are merged based on `merge_fn` Parameters variant : varcode.Variant metadata : dict Dictionary of variant file to variant calling metadata from that file merge_fn : function Function from list of SomaticVariantStats to single SomaticVariantStats. This is used if a variant is called by multiple callers or appears in multiple VCFs. By default, this uses the data from the caller that had a higher tumor depth. Returns ------- SomaticVariantStats """ |
all_stats = []
for (variant_file, variant_metadata) in metadata.items():
    if _vcf_is_maf(variant_file=variant_file):
        stats = maf_somatic_variant_stats(variant, variant_metadata)
    elif _vcf_is_strelka(variant_file=variant_file,
                         variant_metadata=variant_metadata):
        stats = strelka_somatic_variant_stats(variant, variant_metadata)
    elif _vcf_is_mutect(variant_file=variant_file,
                       variant_metadata=variant_metadata):
        stats = mutect_somatic_variant_stats(variant, variant_metadata)
    else:
        raise ValueError("Cannot parse sample fields, variant file {} is from an unsupported caller.".format(variant_file))
    all_stats.append(stats)
return merge_fn(all_stats) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_ensembl_coverage(cohort, coverage_path, min_tumor_depth, min_normal_depth=0, pageant_dir_fn=None):
"""
    Load in Pageant CoverageDepth results with Ensembl loci.

    coverage_path is a path to the Pageant CoverageDepth output directory, with one subdirectory per patient and a `cdf.csv` file inside each patient subdir.

    If min_normal_depth is 0, calculate tumor coverage. Otherwise, calculate joint tumor/normal coverage.

    pageant_dir_fn is a function that takes in a Patient and produces a Pageant dir name.

    Last tested with Pageant CoverageDepth version 1ca9ed2.
    """ |
# Function to grab the pageant file name using the Patient
if pageant_dir_fn is None:
    pageant_dir_fn = lambda patient: patient.id
columns_both = [
"depth1", # Normal
"depth2", # Tumor
"onBP1",
"onBP2",
"numOnLoci",
"fracBPOn1",
"fracBPOn2",
"fracLociOn",
"offBP1",
"offBP2",
"numOffLoci",
"fracBPOff1",
"fracBPOff2",
"fracLociOff",
]
columns_single = [
"depth",
"onBP",
"numOnLoci",
"fracBPOn",
"fracLociOn",
"offBP",
"numOffLoci",
"fracBPOff",
"fracLociOff"
]
if min_normal_depth < 0:
    raise ValueError("min_normal_depth must be >= 0")
use_tumor_only = (min_normal_depth == 0)
columns = columns_single if use_tumor_only else columns_both
ensembl_loci_dfs = []
for patient in cohort:
    patient_ensembl_loci_df = pd.read_csv(
        path.join(coverage_path, pageant_dir_fn(patient), "cdf.csv"),
        names=columns,
        header=1)
    # pylint: disable=no-member
    # pylint gets confused by read_csv
    if use_tumor_only:
        depth_mask = (patient_ensembl_loci_df.depth == min_tumor_depth)
    else:
        depth_mask = (
            (patient_ensembl_loci_df.depth1 == min_normal_depth) &
            (patient_ensembl_loci_df.depth2 == min_tumor_depth))
    patient_ensembl_loci_df = patient_ensembl_loci_df[depth_mask]
    assert len(patient_ensembl_loci_df) == 1, (
        "Incorrect number of tumor={}, normal={} depth loci results: {} for patient {}".format(
            min_tumor_depth, min_normal_depth, len(patient_ensembl_loci_df), patient))
    patient_ensembl_loci_df["patient_id"] = patient.id
    ensembl_loci_dfs.append(patient_ensembl_loci_df)
ensembl_loci_df = pd.concat(ensembl_loci_dfs)
ensembl_loci_df["MB"] = ensembl_loci_df.numOnLoci / 1000000.0
return ensembl_loci_df[["patient_id", "numOnLoci", "MB"]] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def vertical_percent(plot, percent=0.1):
""" Using the size of the y axis, return a fraction of that size. """ |
plot_bottom, plot_top = plot.get_ylim()
return percent * (plot_top - plot_bottom) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_significance_indicator(plot, col_a=0, col_b=1, significant=False):
""" Add a p-value significance indicator. """ |
plot_bottom, plot_top = plot.get_ylim()
# Give the plot a little room for the significance indicator
line_height = vertical_percent(plot, 0.1)
# Add some extra spacing below the indicator
plot_top = plot_top + line_height
# Add some extra spacing above the indicator
plot.set_ylim(top=plot_top + line_height * 2)
color = "black"
line_top = plot_top + line_height
plot.plot([col_a, col_a, col_b, col_b], [plot_top, line_top, line_top, plot_top], lw=1.5, color=color)
indicator = "*" if significant else "ns"
plot.text((col_a + col_b) * 0.5, line_top, indicator, ha="center", va="bottom", color=color) |
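The layout math in the two plotting helpers above is independent of matplotlib: `vertical_percent` is a fraction of the y-axis span, and the significance bar sits one "line height" above a top padded by another line height. A plot-free sketch of that geometry:

```python
# Geometry of the significance indicator, computed from y-limits alone.
def vertical_percent_from_ylim(ylim, percent=0.1):
    bottom, top = ylim
    return percent * (top - bottom)

def indicator_line_top(ylim, percent=0.1):
    # Mirrors add_significance_indicator: pad the top by one line height,
    # then draw the bar one further line height above that.
    bottom, top = ylim
    line_height = vertical_percent_from_ylim(ylim, percent)
    padded_top = top + line_height
    return padded_top + line_height
```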
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stripboxplot(x, y, data, ax=None, significant=None, **kwargs):
""" Overlay a stripplot on top of a boxplot. """ |
ax = sb.boxplot(
x=x,
y=y,
data=data,
ax=ax,
fliersize=0,
**kwargs
)
plot = sb.stripplot(
x=x,
y=y,
data=data,
ax=ax,
jitter=kwargs.pop("jitter", 0.05),
color=kwargs.pop("color", "0.3"),
**kwargs
)
if data[y].min() >= 0:
    hide_negative_y_ticks(plot)
if significant is not None:
    add_significance_indicator(plot=plot, significant=significant)
return plot |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fishers_exact_plot(data, condition1, condition2, ax=None, condition1_value=None, alternative="two-sided", **kwargs):
""" Perform a Fisher's exact test to compare to binary columns Parameters data: Pandas dataframe Dataframe to retrieve information from condition1: str First binary column to compare (and used for test sidedness) condition2: str Second binary column to compare ax : Axes, default None Axes to plot on condition1_value: If `condition1` is not a binary column, split on =/!= to condition1_value alternative: Specify the sidedness of the test: "two-sided", "less" or "greater" """ |
plot = sb.barplot(
x=condition1,
y=condition2,
ax=ax,
data=data,
**kwargs
)
plot.set_ylabel("Percent %s" % condition2)
condition1_mask = get_condition_mask(data, condition1, condition1_value)
count_table = pd.crosstab(data[condition1], data[condition2])
print(count_table)
oddsratio, p_value = fisher_exact(count_table, alternative=alternative)
add_significance_indicator(plot=plot, significant=p_value <= 0.05)
only_percentage_ticks(plot)
if alternative != "two-sided":
    raise ValueError("We need to better understand the one-sided Fisher's Exact test")
sided_str = "two-sided"
print("Fisher's Exact Test: OR: {}, p-value={} ({})".format(oddsratio, p_value, sided_str))
return FishersExactResults(oddsratio=oddsratio,
p_value=p_value,
sided_str=sided_str,
with_condition1_series=data[condition1_mask][condition2],
without_condition1_series=data[~condition1_mask][condition2],
plot=plot) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mann_whitney_plot(data, condition, distribution, ax=None, condition_value=None, alternative="two-sided", skip_plot=False, **kwargs):
""" Create a box plot comparing a condition and perform a Mann Whitney test to compare the distribution in condition A v B Parameters data: Pandas dataframe Dataframe to retrieve information from condition: str Column to use as the splitting criteria distribution: str Column to use as the Y-axis or distribution in the test ax : Axes, default None Axes to plot on condition_value: If `condition` is not a binary column, split on =/!= to condition_value alternative: Specify the sidedness of the Mann-Whitney test: "two-sided", "less" or "greater" skip_plot: Calculate the test statistic and p-value, but don't plot. """ |
condition_mask = get_condition_mask(data, condition, condition_value)
U, p_value = mannwhitneyu(
data[condition_mask][distribution],
data[~condition_mask][distribution],
alternative=alternative
)
plot = None
if not skip_plot:
    plot = stripboxplot(
        x=condition,
        y=distribution,
        data=data,
        ax=ax,
        significant=p_value <= 0.05,
        **kwargs
    )
sided_str = sided_str_from_alternative(alternative, condition)
print("Mann-Whitney test: U={}, p-value={} ({})".format(U, p_value, sided_str))
return MannWhitneyResults(U=U,
p_value=p_value,
sided_str=sided_str,
with_condition_series=data[condition_mask][distribution],
without_condition_series=data[~condition_mask][distribution],
plot=plot) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def roc_curve_plot(data, value_column, outcome_column, bootstrap_samples=100, ax=None):
"""Create a ROC curve and compute the bootstrap AUC for the given variable and outcome Parameters data : Pandas dataframe Dataframe to retrieve information from value_column : str Column to retrieve the values from outcome_column : str Column to use as the outcome variable bootstrap_samples : int, optional Number of bootstrap samples to use to compute the AUC ax : Axes, default None Axes to plot on Returns ------- (mean_bootstrap_auc, roc_plot) : (float, matplotlib plot) Mean AUC for the given number of bootstrap samples and the plot """ |
scores = bootstrap_auc(df=data,
col=value_column,
pred_col=outcome_column,
n_bootstrap=bootstrap_samples)
mean_bootstrap_auc = scores.mean()
print("{}, Bootstrap (samples = {}) AUC:{}, std={}".format(
value_column, bootstrap_samples, mean_bootstrap_auc, scores.std()))
outcome = data[outcome_column].astype(int)
values = data[value_column]
fpr, tpr, thresholds = roc_curve(outcome, values)
if ax is None:
    ax = plt.gca()
roc_plot = ax.plot(fpr, tpr, lw=1, label=value_column)
ax.set_xlim([-0.05, 1.05])
ax.set_ylim([-0.05, 1.05])
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.legend(loc=2, borderaxespad=0.)
ax.set_title('{} ROC Curve (n={})'.format(value_column, len(values)))
return (mean_bootstrap_auc, roc_plot) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _strip_column_name(col_name, keep_paren_contents=True):
"""Utility function applying several regexes to a string. Intended to be used by `strip_column_names`.

    This function will:
    1. replace informative punctuation components with text
    2. (optionally) remove text within parentheses
    3. replace remaining punctuation/whitespace with _
    4. strip leading/trailing punctuation/whitespace

    Parameters
    ----------
    col_name (str):
        input character string
    keep_paren_contents (logical):
        controls behavior of within-paren elements of text
        - if True (the default), all text within parens is retained
        - if False, text within parens is removed from the field name

    Returns
    -------
    modified string for new field name

    Examples
    --------
    > print([_strip_column_name(col) for col in ['PD-L1', 'PD L1', 'PD L1_']])
    """ |
# start with input
new_col_name = col_name
# replace meaningful punctuation with text equivalents
# surround each with whitespace to enforce consistent use of _
punctuation_to_text = {
'<=': 'le',
'>=': 'ge',
'=<': 'le',
'=>': 'ge',
'<': 'lt',
'>': 'gt',
'#': 'num'
}
for punctuation, punctuation_text in punctuation_to_text.items():
    new_col_name = new_col_name.replace(punctuation, punctuation_text)
# remove contents within ()
if not keep_paren_contents:
    new_col_name = re.sub(r'\([^)]*\)', '', new_col_name)
# replace remaining punctuation/whitespace with _
punct_pattern = r'[\W_]+'
punct_replacement = '_'
new_col_name = re.sub(punct_pattern, punct_replacement, new_col_name)
# remove leading/trailing _ if it exists (if last char was punctuation)
new_col_name = new_col_name.strip("_")
# TODO: check for empty string
# return lower-case version of column name
return new_col_name.lower() |
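The normalization above can be packaged as a self-contained function so the transformation is easy to try outside the module. This sketch mirrors the steps in order: punctuation-to-text replacement, optional paren removal, collapsing remaining punctuation/whitespace to `_`, then strip and lowercase:

```python
import re

# Self-contained version of the column-name normalization sketched above.
PUNCT_TO_TEXT = {'<=': 'le', '>=': 'ge', '=<': 'le', '=>': 'ge',
                 '<': 'lt', '>': 'gt', '#': 'num'}

def strip_column_name(col_name, keep_paren_contents=True):
    new = col_name
    # two-character operators are replaced before single-character ones
    for punct, text in PUNCT_TO_TEXT.items():
        new = new.replace(punct, text)
    if not keep_paren_contents:
        new = re.sub(r'\([^)]*\)', '', new)
    # collapse remaining punctuation/whitespace runs into a single underscore
    new = re.sub(r'[\W_]+', '_', new)
    return new.strip('_').lower()
```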
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def strip_column_names(cols, keep_paren_contents=True):
"""Utility function for renaming pandas columns to patsy-friendly names.

    Revised names have been:
    - stripped of all punctuation and whitespace (converted to text or `_`)
    - converted to lower case

    Takes a list of column names and returns a dict mapping names to revised names. If there are any concerns with the conversion, this will print a warning and return the original column names.

    Parameters
    ----------
    cols (list):
        list of strings containing column names
    keep_paren_contents (logical):
        controls behavior of within-paren elements of text
        - if True (the default), all text within parens is retained
        - if False, text within parens is removed from the field name

    Returns
    -------
    dict mapping col_names -> new_col_names

    Example
    -------
    > df = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
            'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd']),
            'PD L1 (value)': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd']),
            'PD L1 (>1)': pd.Series([0., 1., 1., 0.], index=['a', 'b', 'c', 'd'])}
    > df = pd.DataFrame(df)
    > df = df.rename(columns=strip_column_names(df.columns))
    ## observe, by comparison
    > df2 = df.rename(columns=strip_column_names(df.columns, keep_paren_contents=False))
    """ |
# strip/replace punctuation
new_cols = [
_strip_column_name(col, keep_paren_contents=keep_paren_contents)
for col in cols]
if len(new_cols) != len(set(new_cols)):
    warn_str = 'Warning: strip_column_names (if run) would introduce duplicate names.'
    warn_str += ' Reverting column names to the original.'
    warnings.warn(warn_str, Warning)
    return dict(zip(cols, cols))
return dict(zip(cols, new_cols)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_attributes(obj, additional_data):
""" Given an object and a dictionary, give the object new attributes from that dictionary. Uses _strip_column_name to get rid of whitespace/uppercase/special characters. """ |
for key, value in additional_data.items():
    stripped_key = _strip_column_name(key)
    # check the stripped name, since that is the attribute actually set
    if hasattr(obj, stripped_key):
        raise ValueError("Key %s in additional_data already exists in this object" % stripped_key)
    setattr(obj, stripped_key, value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def return_obj(cols, df, return_cols=False):
"""Construct a DataFrameHolder and then return either that or the DataFrame.""" |
df_holder = DataFrameHolder(cols=cols, df=df)
return df_holder.return_self(return_cols=return_cols) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compare_provenance(this_provenance, other_provenance, left_outer_diff="In current but not comparison", right_outer_diff="In comparison but not current"):
    """Utility function to compare two arbitrary provenance dicts; returns the number of discrepancies.

    Parameters
    ----------
    this_provenance: provenance dict (to be compared to "other_provenance")
    other_provenance: comparison provenance dict

    (optional)
    left_outer_diff: description/prefix used when printing items in this_provenance but not in other_provenance
    right_outer_diff: description/prefix used when printing items in other_provenance but not in this_provenance

    Returns
    -------
    Number of discrepancies (0: None)
    """ |
## if either this or other provenance is null, return 0
if not this_provenance or not other_provenance:
    return 0
this_items = set(this_provenance.items())
other_items = set(other_provenance.items())
# Two-way diff: are any modules introduced, and are any modules lost?
new_diff = this_items.difference(other_items)
old_diff = other_items.difference(this_items)
warn_str = ""
if len(new_diff) > 0:
    warn_str += "%s: %s" % (
        left_outer_diff,
        _provenance_str(new_diff))
if len(old_diff) > 0:
    warn_str += "%s: %s" % (
        right_outer_diff,
        _provenance_str(old_diff))
if len(warn_str) > 0:
    warnings.warn(warn_str, Warning)
return len(new_diff) + len(old_diff) |
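Stripped of the warning machinery, the comparison above is a symmetric set difference over `(package, version)` pairs. A self-contained sketch of just the counting logic:

```python
# Two-way provenance diff: count items present on one side but not the other.
def count_provenance_discrepancies(this_provenance, other_provenance):
    if not this_provenance or not other_provenance:
        return 0
    this_items = set(this_provenance.items())
    other_items = set(other_provenance.items())
    return len(this_items - other_items) + len(other_items - this_items)
```

Note that a single changed version counts as two discrepancies (one per side), matching the behavior of `compare_provenance`.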
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_random_missense_variants(num_variants=10, max_search=100000, reference="GRCh37"):
""" Generate a random collection of missense variants by trying random variants repeatedly. """ |
variants = []
for i in range(max_search):
    bases = ["A", "C", "T", "G"]
    random_ref = choice(bases)
    bases.remove(random_ref)
    random_alt = choice(bases)
    random_contig = choice(["1", "2", "3", "4", "5"])
    random_variant = Variant(contig=random_contig, start=randint(1, 1000000),
                             ref=random_ref, alt=random_alt, ensembl=reference)
    try:
        effects = random_variant.effects()
        for effect in effects:
            if isinstance(effect, Substitution):
                variants.append(random_variant)
                break
    except Exception:
        # random positions may not be annotatable; skip and keep searching
        continue
    if len(variants) == num_variants:
        break
return VariantCollection(variants) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_simple_vcf(filename, variant_collection):
""" Output a very simple metadata-free VCF for each variant in a variant_collection. """ |
contigs = []
positions = []
refs = []
alts = []
for variant in variant_collection:
    contigs.append("chr" + variant.contig)
    positions.append(variant.start)
    refs.append(variant.ref)
    alts.append(variant.alt)
df = pd.DataFrame()
df["contig"] = contigs
df["position"] = positions
df["id"] = ["."] * len(variant_collection)
df["ref"] = refs
df["alt"] = alts
df["qual"] = ["."] * len(variant_collection)
df["filter"] = ["."] * len(variant_collection)
df["info"] = ["."] * len(variant_collection)
df["format"] = ["GT:AD:DP"] * len(variant_collection)
normal_ref_depths = [randint(1, 10) for v in variant_collection]
normal_alt_depths = [randint(1, 10) for v in variant_collection]
df["n1"] = ["0:%d,%d:%d" % (normal_ref_depths[i], normal_alt_depths[i],
normal_ref_depths[i] + normal_alt_depths[i])
for i in range(len(variant_collection))]
tumor_ref_depths = [randint(1, 10) for v in variant_collection]
tumor_alt_depths = [randint(1, 10) for v in variant_collection]
df["t1"] = ["0/1:%d,%d:%d" % (tumor_ref_depths[i], tumor_alt_depths[i], tumor_ref_depths[i] + tumor_alt_depths[i])
for i in range(len(variant_collection))]
with open(filename, "w") as f:
    f.write("##fileformat=VCFv4.1\n")
    f.write("##reference=file:///projects/ngs/resources/gatk/2.3/ucsc.hg19.fasta\n")
with open(filename, "a") as f:
    df.to_csv(f, sep="\t", index=None, header=None) |
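The sample columns above are built with the `"%s:%d,%d:%d"` pattern matching the declared `GT:AD:DP` format: genotype, then ref/alt allele depths, then total depth. A one-function sketch of that formatting step:

```python
# Build one GT:AD:DP sample field, as done for the n1/t1 columns above
# ("0/1" genotype for tumor, "0" for normal).
def format_sample_field(gt, ref_depth, alt_depth):
    return "%s:%d,%d:%d" % (gt, ref_depth, alt_depth, ref_depth + alt_depth)
```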
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_folder(self, path):
"""Looks up folder contents of `path`.""" |
# Inspired by https://github.com/rspivak/sftpserver/blob/0.3/src/sftpserver/stub_sftp.py#L70
try:
    folder_contents = []
    for f in os.listdir(path):
        attr = paramiko.SFTPAttributes.from_stat(os.stat(os.path.join(path, f)))
        attr.filename = f
        folder_contents.append(attr)
    return folder_contents
except OSError as e:
    return SFTPServer.convert_errno(e.errno) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filter_variants(variant_collection, patient, filter_fn, **kwargs):
"""Filter variants from the Variant Collection Parameters variant_collection : varcode.VariantCollection patient : cohorts.Patient filter_fn: function Takes a FilterableVariant and returns a boolean. Only variants returning True are preserved. Returns ------- varcode.VariantCollection Filtered variant collection, with only the variants passing the filter """ |
if filter_fn:
    return variant_collection.clone_with_new_elements([
        variant
        for variant in variant_collection
        if filter_fn(FilterableVariant(
            variant=variant,
            variant_collection=variant_collection,
            patient=patient,
        ), **kwargs)
    ])
else:
    return variant_collection |
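Both `filter_variants` and `filter_effects` follow the same pattern: when a predicate is supplied, keep only elements for which it returns True; otherwise pass the collection through untouched. A generic sketch of the pattern on plain lists (without the varcode wrapper types):

```python
# Generic predicate-based filtering, mirroring the filter_fn convention above.
def filter_collection(collection, filter_fn=None, **kwargs):
    if filter_fn:
        return [item for item in collection if filter_fn(item, **kwargs)]
    return collection
```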
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filter_effects(effect_collection, variant_collection, patient, filter_fn, all_effects, **kwargs):
"""Filter variants from the Effect Collection Parameters effect_collection : varcode.EffectCollection variant_collection : varcode.VariantCollection patient : cohorts.Patient filter_fn : function Takes a FilterableEffect and returns a boolean. Only effects returning True are preserved. all_effects : boolean Return the single, top-priority effect if False. If True, return all effects (don't filter to top-priority). Returns ------- varcode.EffectCollection Filtered effect collection, with only the variants passing the filter """ |
def top_priority_maybe(effect_collection):
    """
    Always (unless all_effects=True) take the top priority effect per variant
    so we end up with a single effect per variant.
    """
    if all_effects:
        return effect_collection
    return EffectCollection(list(effect_collection.top_priority_effect_per_variant().values()))

def apply_filter_fn(filter_fn, effect):
    """
    Return True if filter_fn is true for the effect or its alternate_effect.
    If there is no alternate_effect, just return whether filter_fn is True.
    """
    applied = filter_fn(FilterableEffect(
        effect=effect,
        variant_collection=variant_collection,
        patient=patient), **kwargs)
    if hasattr(effect, "alternate_effect"):
        applied_alternate = filter_fn(FilterableEffect(
            effect=effect.alternate_effect,
            variant_collection=variant_collection,
            patient=patient), **kwargs)
        return applied or applied_alternate
    return applied

if filter_fn:
    return top_priority_maybe(EffectCollection([
        effect
        for effect in effect_collection
        if apply_filter_fn(filter_fn, effect)]))
else:
    return top_priority_maybe(effect_collection) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def count_lines_in(filename):
"Count lines in a file"
lines = 0
buf_size = 1024 * 1024
with open(filename) as f:  # close the file handle even on error
    read_f = f.read  # loop optimization
    buf = read_f(buf_size)
    while buf:
        lines += buf.count('\n')
        buf = read_f(buf_size)
return lines
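The buffered counting technique above can be exercised in isolation; this sketch writes a throwaway file and counts its newline-terminated lines (file contents and names are illustrative):

```python
import os
import tempfile

def count_lines(filename, buf_size=1024 * 1024):
    """Count newline characters by reading fixed-size chunks."""
    lines = 0
    with open(filename) as f:  # context manager closes the handle
        buf = f.read(buf_size)
        while buf:
            lines += buf.count('\n')
            buf = f.read(buf_size)
    return lines

# Demo with a temporary three-line file.
with tempfile.NamedTemporaryFile('w', delete=False) as tmp:
    tmp.write('a\nb\nc\n')
    path = tmp.name
print(count_lines(path))  # 3
os.remove(path)
```

Reading in fixed-size chunks keeps memory flat even for multi-gigabyte logs, which is why the analyzer counts this way before starting its progress bar.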
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def view_name_from(path):
"Resolve a path to the full python module name of the related view function"
try:
return CACHED_VIEWS[path]
except KeyError:
view = resolve(path)  # resolve once and reuse the result
module = path
name = ''
if hasattr(view.func, '__module__'):
    module = view.func.__module__
if hasattr(view.func, '__name__'):
    name = view.func.__name__
view = "%s.%s" % (module, name)
CACHED_VIEWS[path] = view
return view |
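The try/except-KeyError lookup above is a general memoization idiom: attempt the cache read first and only compute on a miss. A minimal standalone sketch, with a string transform standing in for Django's `resolve` (which needs a configured URLconf):

```python
CACHE = {}

def resolve_name(path):
    """Return a dotted name for path, computing it only on a cache miss."""
    try:
        return CACHE[path]  # fast path: already resolved
    except KeyError:
        value = path.strip('/').replace('/', '.')  # stand-in for resolve()
        CACHE[path] = value
        return value

print(resolve_name('/app/view/'))  # app.view (computed and cached)
print(resolve_name('/app/view/'))  # app.view (cache hit)
```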
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def generate_table_from(data):
"Output a nicely formatted ascii table"
table = Texttable(max_width=120)
table.add_row(["view", "method", "status", "count", "minimum", "maximum", "mean", "stdev", "queries", "querytime"])
table.set_cols_align(["l", "l", "l", "r", "r", "r", "r", "r", "r", "r"])
for item in sorted(data):
mean = round(sum(data[item]['times'])/data[item]['count'], 3)
mean_sql = round(sum(data[item]['sql'])/data[item]['count'], 3)
mean_sqltime = round(sum(data[item]['sqltime'])/data[item]['count'], 3)
sdsq = sum([(i - mean) ** 2 for i in data[item]['times']])
try:
stdev = '%.2f' % ((sdsq / (len(data[item]['times']) - 1)) ** .5)
except ZeroDivisionError:
stdev = '0.00'
minimum = "%.2f" % min(data[item]['times'])
maximum = "%.2f" % max(data[item]['times'])
table.add_row([data[item]['view'], data[item]['method'], data[item]['status'], data[item]['count'], minimum, maximum, '%.3f' % mean, stdev, mean_sql, mean_sqltime])
return table.draw() |
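The per-row statistics are a plain mean and a sample (n-1) standard deviation, computed inline above. The arithmetic can be checked in isolation (the function name here is illustrative):

```python
def summary_stats(times):
    """Mean and sample standard deviation, mirroring the table math."""
    count = len(times)
    mean = sum(times) / count
    sdsq = sum((t - mean) ** 2 for t in times)
    try:
        stdev = (sdsq / (count - 1)) ** 0.5  # n-1: sample, not population
    except ZeroDivisionError:
        stdev = 0.0  # a single observation has no spread to estimate
    return mean, stdev

mean, stdev = summary_stats([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print('%.3f %.2f' % (mean, stdev))  # 5.000 2.14
```

The ZeroDivisionError branch matches the table code: a view seen only once gets a stdev of 0.00 rather than crashing the report.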
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def analyze_log_file(logfile, pattern, reverse_paths=True, progress=True):
"Given a log file and regex group and extract the performance data"
if progress:
lines = count_lines_in(logfile)
pbar = ProgressBar(widgets=[Percentage(), Bar()], maxval=lines+1).start()
counter = 0
data = {}
compiled_pattern = compile(pattern)
for line in fileinput.input([logfile]):
if progress:
counter = counter + 1
matches = compiled_pattern.findall(line)
if not matches:
    continue  # skip lines that don't match the pattern
parsed = matches[0]
date = parsed[0]
method = parsed[1]
path = parsed[2]
status = parsed[3]
time = parsed[4]
sql = parsed[5]
sqltime = parsed[6]
try:
ignore = False
for ignored_path in IGNORE_PATHS:
compiled_path = compile(ignored_path)
if compiled_path.match(path):
ignore = True
if not ignore:
if reverse_paths:
view = view_name_from(path)
else:
view = path
key = "%s-%s-%s" % (view, status, method)
try:
data[key]['count'] = data[key]['count'] + 1
data[key]['times'].append(float(time))
data[key]['sql'].append(int(sql))
data[key]['sqltime'].append(float(sqltime))
except KeyError:
data[key] = {
'count': 1,
'status': status,
'view': view,
'method': method,
'times': [float(time)],
'sql': [int(sql)],
'sqltime': [float(sqltime)],
}
except Resolver404:
pass
if progress:
pbar.update(counter)
if progress:
pbar.finish()
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_string(self, limit=None):
""" Create a string representation of this collection, showing up to `limit` items. """ |
header = self.short_string()
if len(self) == 0:
return header
contents = ""
element_lines = [
" -- %s" % (element,)
for element in self.elements[:limit]
]
contents = "\n".join(element_lines)
if limit is not None and len(self.elements) > limit:
contents += "\n ... and %d more" % (len(self) - limit)
return "%s\n%s" % (header, contents) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def safe_log_error(self, error: Exception, *info: str):
"""Log error failing silently on error""" |
self.__do_safe(lambda: self.logger.error(error, *info)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def safe_log_info(self, *info: str):
"""Log info failing silently on error""" |
self.__do_safe(lambda: self.logger.info(*info)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _default_client(jws_client, reactor, key, alg):
""" Make a client if we didn't get one. """ |
if jws_client is None:
pool = HTTPConnectionPool(reactor)
agent = Agent(reactor, pool=pool)
jws_client = JWSClient(HTTPClient(agent=agent), key, alg)
return jws_client |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _find_supported_challenge(authzr, responders):
""" Find a challenge combination that consists of a single challenge that the responder can satisfy. :param ~acme.messages.AuthorizationResource auth: The authorization to examine. :type responder: List[`~txacme.interfaces.IResponder`] :param responder: The possible responders to use. :raises NoSupportedChallenges: When a suitable challenge combination is not found. :rtype: Tuple[`~txacme.interfaces.IResponder`, `~acme.messages.ChallengeBody`] :return: The responder and challenge that were found. """ |
matches = [
(responder, challbs[0])
for challbs in authzr.body.resolved_combinations
for responder in responders
if [challb.typ for challb in challbs] == [responder.challenge_type]]
if len(matches) == 0:
raise NoSupportedChallenges(authzr)
else:
return matches[0] |
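The matching logic above can be sketched without the acme/txacme types: plain dicts stand in for challenge bodies, and challenge-type strings stand in for responders. A combination is usable only when it consists of exactly one challenge of a supported type:

```python
def find_supported(combinations, responder_types):
    """First (type, challenge) pair whose combination is a single
    challenge of a supported type -- same shape as the list
    comprehension in _find_supported_challenge."""
    matches = [
        (rtype, combo[0])
        for combo in combinations
        for rtype in responder_types
        if [c['type'] for c in combo] == [rtype]]
    if not matches:
        raise ValueError('no supported challenges')
    return matches[0]

combos = [
    [{'type': 'dns-01'}, {'type': 'http-01'}],  # two challenges: skipped
    [{'type': 'http-01'}],                      # single challenge: usable
]
print(find_supported(combos, ['http-01']))  # ('http-01', {'type': 'http-01'})
```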
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def answer_challenge(authzr, client, responders):
""" Complete an authorization using a responder. :param ~acme.messages.AuthorizationResource auth: The authorization to complete. :param .Client client: The ACME client. :type responders: List[`~txacme.interfaces.IResponder`] :param responders: A list of responders that can be used to complete the challenge with. :return: A deferred firing when the authorization is verified. """ |
responder, challb = _find_supported_challenge(authzr, responders)
response = challb.response(client.key)
def _stop_responding():
return maybeDeferred(
responder.stop_responding,
authzr.body.identifier.value,
challb.chall,
response)
return (
maybeDeferred(
responder.start_responding,
authzr.body.identifier.value,
challb.chall,
response)
.addCallback(lambda _: client.answer_challenge(challb, response))
.addCallback(lambda _: _stop_responding)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def poll_until_valid(authzr, clock, client, timeout=300.0):
""" Poll an authorization until it is in a state other than pending or processing. :param ~acme.messages.AuthorizationResource auth: The authorization to complete. :param clock: The ``IReactorTime`` implementation to use; usually the reactor, when not testing. :param .Client client: The ACME client. :param float timeout: Maximum time to poll in seconds, before giving up. :raises txacme.client.AuthorizationFailed: if the authorization is no longer in the pending, processing, or valid states. :raises: ``twisted.internet.defer.CancelledError`` if the authorization was still in pending or processing state when the timeout was reached. :rtype: Deferred[`~acme.messages.AuthorizationResource`] :return: A deferred firing when the authorization has completed/failed; if the authorization is valid, the authorization resource will be returned. """ |
def repoll(result):
authzr, retry_after = result
if authzr.body.status in {STATUS_PENDING, STATUS_PROCESSING}:
return (
deferLater(clock, retry_after, lambda: None)
.addCallback(lambda _: client.poll(authzr))
.addCallback(repoll)
)
if authzr.body.status != STATUS_VALID:
raise AuthorizationFailed(authzr)
return authzr
def cancel_timeout(result):
if timeout_call.active():
timeout_call.cancel()
return result
d = client.poll(authzr).addCallback(repoll)
timeout_call = clock.callLater(timeout, d.cancel)
d.addBoth(cancel_timeout)
return d |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_url(cls, reactor, url, key, alg=RS256, jws_client=None):
""" Construct a client from an ACME directory at a given URL. :param url: The ``twisted.python.url.URL`` to fetch the directory from. See `txacme.urls` for constants for various well-known public directories. :param reactor: The Twisted reactor to use. :param ~josepy.jwk.JWK key: The client key to use. :param alg: The signing algorithm to use. Needs to be compatible with the type of key used. :param JWSClient jws_client: The underlying client to use, or ``None`` to construct one. :return: The constructed client. :rtype: Deferred[`Client`] """ |
action = LOG_ACME_CONSUME_DIRECTORY(
url=url, key_type=key.typ, alg=alg.name)
with action.context():
check_directory_url_type(url)
jws_client = _default_client(jws_client, reactor, key, alg)
return (
DeferredContext(jws_client.get(url.asText()))
.addCallback(json_content)
.addCallback(messages.Directory.from_json)
.addCallback(
tap(lambda d: action.add_success_fields(directory=d)))
.addCallback(cls, reactor, key, jws_client)
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register(self, new_reg=None):
""" Create a new registration with the ACME server. :param ~acme.messages.NewRegistration new_reg: The registration message to use, or ``None`` to construct one. :return: The registration resource. :rtype: Deferred[`~acme.messages.RegistrationResource`] """ |
if new_reg is None:
new_reg = messages.NewRegistration()
action = LOG_ACME_REGISTER(registration=new_reg)
with action.context():
return (
DeferredContext(
self.update_registration(
new_reg, uri=self.directory[new_reg]))
.addErrback(self._maybe_registered, new_reg)
.addCallback(
tap(lambda r: action.add_success_fields(registration=r)))
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _maybe_registered(self, failure, new_reg):
""" If the registration already exists, we should just load it. """ |
failure.trap(ServerError)
response = failure.value.response
if response.code == http.CONFLICT:
reg = new_reg.update(
resource=messages.UpdateRegistration.resource_type)
uri = self._maybe_location(response)
return self.update_registration(reg, uri=uri)
return failure |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def agree_to_tos(self, regr):
""" Accept the terms-of-service for a registration. :param ~acme.messages.RegistrationResource regr: The registration to update. :return: The updated registration resource. :rtype: Deferred[`~acme.messages.RegistrationResource`] """ |
return self.update_registration(
regr.update(
body=regr.body.update(
agreement=regr.terms_of_service))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_registration(self, regr, uri=None):
""" Submit a registration to the server to update it. :param ~acme.messages.RegistrationResource regr: The registration to update. Can be a :class:`~acme.messages.NewRegistration` instead, in order to create a new registration. :param str uri: The url to submit to. Must be specified if a :class:`~acme.messages.NewRegistration` is provided. :return: The updated registration resource. :rtype: Deferred[`~acme.messages.RegistrationResource`] """ |
if uri is None:
uri = regr.uri
if isinstance(regr, messages.RegistrationResource):
message = messages.UpdateRegistration(**dict(regr.body))
else:
message = regr
action = LOG_ACME_UPDATE_REGISTRATION(uri=uri, registration=message)
with action.context():
return (
DeferredContext(self._client.post(uri, message))
.addCallback(self._parse_regr_response, uri=uri)
.addCallback(self._check_regr, regr)
.addCallback(
tap(lambda r: action.add_success_fields(registration=r)))
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_regr_response(self, response, uri=None, new_authzr_uri=None, terms_of_service=None):
""" Parse a registration response from the server. """ |
links = _parse_header_links(response)
if u'terms-of-service' in links:
terms_of_service = links[u'terms-of-service'][u'url']
if u'next' in links:
new_authzr_uri = links[u'next'][u'url']
if new_authzr_uri is None:
raise errors.ClientError('"next" link missing')
return (
response.json()
.addCallback(
lambda body:
messages.RegistrationResource(
body=messages.Registration.from_json(body),
uri=self._maybe_location(response, uri=uri),
new_authzr_uri=new_authzr_uri,
terms_of_service=terms_of_service))
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_regr(self, regr, new_reg):
""" Check that a registration response contains the registration we were expecting. """ |
body = getattr(new_reg, 'body', new_reg)
for k, v in body.items():
if k == 'resource' or not v:
continue
if regr.body[k] != v:
raise errors.UnexpectedUpdate(regr)
if regr.body.key != self.key.public_key():
raise errors.UnexpectedUpdate(regr)
return regr |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def request_challenges(self, identifier):
""" Create a new authorization. :param ~acme.messages.Identifier identifier: The identifier to authorize. :return: The new authorization resource. :rtype: Deferred[`~acme.messages.AuthorizationResource`] """ |
action = LOG_ACME_CREATE_AUTHORIZATION(identifier=identifier)
with action.context():
message = messages.NewAuthorization(identifier=identifier)
return (
DeferredContext(
self._client.post(self.directory[message], message))
.addCallback(self._expect_response, http.CREATED)
.addCallback(self._parse_authorization)
.addCallback(self._check_authorization, identifier)
.addCallback(
tap(lambda a: action.add_success_fields(authorization=a)))
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _expect_response(cls, response, code):
""" Ensure we got the expected response code. """ |
if response.code != code:
raise errors.ClientError(
'Expected {!r} response but got {!r}'.format(
code, response.code))
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_authorization(cls, response, uri=None):
""" Parse an authorization resource. """ |
links = _parse_header_links(response)
try:
new_cert_uri = links[u'next'][u'url']
except KeyError:
raise errors.ClientError('"next" link missing')
return (
response.json()
.addCallback(
lambda body: messages.AuthorizationResource(
body=messages.Authorization.from_json(body),
uri=cls._maybe_location(response, uri=uri),
new_cert_uri=new_cert_uri))
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_authorization(cls, authzr, identifier):
""" Check that the authorization we got is the one we expected. """ |
if authzr.body.identifier != identifier:
raise errors.UnexpectedUpdate(authzr)
return authzr |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def answer_challenge(self, challenge_body, response):
""" Respond to an authorization challenge. :param ~acme.messages.ChallengeBody challenge_body: The challenge being responded to. :param ~acme.challenges.ChallengeResponse response: The response to the challenge. :return: The updated challenge resource. :rtype: Deferred[`~acme.messages.ChallengeResource`] """ |
action = LOG_ACME_ANSWER_CHALLENGE(
challenge_body=challenge_body, response=response)
with action.context():
return (
DeferredContext(
self._client.post(challenge_body.uri, response))
.addCallback(self._parse_challenge)
.addCallback(self._check_challenge, challenge_body)
.addCallback(
tap(lambda c:
action.add_success_fields(challenge_resource=c)))
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_challenge(cls, response):
""" Parse a challenge resource. """ |
links = _parse_header_links(response)
try:
authzr_uri = links['up']['url']
except KeyError:
raise errors.ClientError('"up" link missing')
return (
response.json()
.addCallback(
lambda body: messages.ChallengeResource(
authzr_uri=authzr_uri,
body=messages.ChallengeBody.from_json(body)))
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_challenge(cls, challenge, challenge_body):
""" Check that the challenge resource we got is the one we expected. """ |
if challenge.uri != challenge_body.uri:
raise errors.UnexpectedUpdate(challenge.uri)
return challenge |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def retry_after(cls, response, default=5, _now=time.time):
""" Parse the Retry-After value from a response. """ |
val = response.headers.getRawHeaders(b'retry-after', [default])[0]
try:
return int(val)
except ValueError:
return http.stringToDatetime(val) - _now() |
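The Retry-After header is either delta-seconds or an HTTP-date, and the method above branches on which one parses. The same logic can be sketched with the stdlib standing in for `twisted.web.http.stringToDatetime` (the function name here is illustrative):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def parse_retry_after(value, default=5, now=None):
    """Seconds to wait, from delta-seconds or an HTTP-date value."""
    if value is None:
        return default
    try:
        return int(value)  # delta-seconds form, e.g. "120"
    except ValueError:
        when = parsedate_to_datetime(value)  # RFC 1123 date form
        now = now or datetime.now(timezone.utc)
        return (when - now).total_seconds()

print(parse_retry_after('120'))  # 120
print(parse_retry_after('Wed, 21 Oct 2015 07:28:01 GMT',
                        now=datetime(2015, 10, 21, 7, 28, 0,
                                     tzinfo=timezone.utc)))  # 1.0
```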
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def request_issuance(self, csr):
""" Request a certificate. Authorizations should have already been completed for all of the names requested in the CSR. Note that unlike `acme.client.Client.request_issuance`, the certificate resource will have the body data as raw bytes. .. seealso:: `txacme.util.csr_for_names` .. todo:: Delayed issuance is not currently supported, the server must issue the requested certificate immediately. :param csr: A certificate request message: normally `txacme.messages.CertificateRequest` or `acme.messages.CertificateRequest`. :rtype: Deferred[`acme.messages.CertificateResource`] :return: The issued certificate. """ |
action = LOG_ACME_REQUEST_CERTIFICATE()
with action.context():
return (
DeferredContext(
self._client.post(
self.directory[csr], csr,
content_type=DER_CONTENT_TYPE,
headers=Headers({b'Accept': [DER_CONTENT_TYPE]})))
.addCallback(self._expect_response, http.CREATED)
.addCallback(self._parse_certificate)
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_certificate(cls, response):
""" Parse a response containing a certificate resource. """ |
links = _parse_header_links(response)
try:
cert_chain_uri = links[u'up'][u'url']
except KeyError:
cert_chain_uri = None
return (
response.content()
.addCallback(
lambda body: messages.CertificateResource(
uri=cls._maybe_location(response),
cert_chain_uri=cert_chain_uri,
body=body))
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch_chain(self, certr, max_length=10):
""" Fetch the intermediary chain for a certificate. :param acme.messages.CertificateResource certr: The certificate to fetch the chain for. :param int max_length: The maximum length of the chain that will be fetched. :rtype: Deferred[List[`acme.messages.CertificateResource`]] :return: The issuer certificate chain, ordered with the trust anchor last. """ |
action = LOG_ACME_FETCH_CHAIN()
with action.context():
if certr.cert_chain_uri is None:
return succeed([])
elif max_length < 1:
raise errors.ClientError('chain too long')
return (
DeferredContext(
self._client.get(
certr.cert_chain_uri,
content_type=DER_CONTENT_TYPE,
headers=Headers({b'Accept': [DER_CONTENT_TYPE]})))
.addCallback(self._parse_certificate)
.addCallback(
lambda issuer:
self.fetch_chain(issuer, max_length=max_length - 1)
.addCallback(lambda chain: [issuer] + chain))
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _wrap_in_jws(self, nonce, obj):
""" Wrap ``JSONDeSerializable`` object in JWS. .. todo:: Implement ``acmePath``. :param ~josepy.interfaces.JSONDeSerializable obj: :param bytes nonce: :rtype: `bytes` :return: JSON-encoded data """ |
with LOG_JWS_SIGN(key_type=self._key.typ, alg=self._alg.name,
nonce=nonce):
jobj = obj.json_dumps().encode()
return (
JWS.sign(
payload=jobj, key=self._key, alg=self._alg, nonce=nonce)
.json_dumps()
.encode()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _check_response(cls, response, content_type=JSON_CONTENT_TYPE):
""" Check response content and its type. .. note:: Unlike :mod:`acme.client`, checking is strict. :param bytes content_type: Expected Content-Type response header. If the response Content-Type does not match, :exc:`ClientError` is raised. :raises .ServerError: If server response body carries HTTP Problem (draft-ietf-appsawg-http-problem-00). :raises ~acme.errors.ClientError: In case of other networking errors. """ |
def _got_failure(f):
f.trap(ValueError)
return None
def _got_json(jobj):
if 400 <= response.code < 600:
if response_ct == JSON_ERROR_CONTENT_TYPE and jobj is not None:
raise ServerError(
messages.Error.from_json(jobj), response)
else:
# response is not JSON object
raise errors.ClientError(response)
elif response_ct != content_type:
raise errors.ClientError(
'Unexpected response Content-Type: {0!r}'.format(
response_ct))
elif content_type == JSON_CONTENT_TYPE and jobj is None:
raise errors.ClientError(response)
return response
response_ct = response.headers.getRawHeaders(
b'Content-Type', [None])[0]
action = LOG_JWS_CHECK_RESPONSE(
expected_content_type=content_type,
response_content_type=response_ct)
with action.context():
# TODO: response.json() is called twice, once here, and
# once in _get and _post clients
return (
DeferredContext(response.json())
.addErrback(_got_failure)
.addCallback(_got_json)
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def head(self, url, *args, **kwargs):
""" Send HEAD request without checking the response. Note that ``_check_response`` is not called, as there will be no response body to check. :param str url: The URL to make the request to. """ |
with LOG_JWS_HEAD().context():
return DeferredContext(
self._send_request(u'HEAD', url, *args, **kwargs)
).addActionFinish() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, url, content_type=JSON_CONTENT_TYPE, **kwargs):
""" Send GET request and check response. :param str method: The HTTP method to use. :param str url: The URL to make the request to. :raises txacme.client.ServerError: If server response body carries HTTP Problem (draft-ietf-appsawg-http-problem-00). :raises acme.errors.ClientError: In case of other protocol errors. :return: Deferred firing with the checked HTTP response. """ |
with LOG_JWS_GET().context():
return (
DeferredContext(self._send_request(u'GET', url, **kwargs))
.addCallback(self._check_response, content_type=content_type)
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_nonce(self, response):
""" Store a nonce from a response we received. :param twisted.web.iweb.IResponse response: The HTTP response. :return: The response, unmodified. """ |
nonce = response.headers.getRawHeaders(
REPLAY_NONCE_HEADER, [None])[0]
with LOG_JWS_ADD_NONCE(raw_nonce=nonce) as action:
if nonce is None:
raise errors.MissingNonce(response)
else:
try:
decoded_nonce = Header._fields['nonce'].decode(
nonce.decode('ascii')
)
action.add_success_fields(nonce=decoded_nonce)
except DeserializationError as error:
raise errors.BadNonce(nonce, error)
self._nonces.add(decoded_nonce)
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_nonce(self, url):
""" Get a nonce to use in a request, removing it from the nonces on hand. """ |
action = LOG_JWS_GET_NONCE()
if len(self._nonces) > 0:
with action:
nonce = self._nonces.pop()
action.add_success_fields(nonce=nonce)
return succeed(nonce)
else:
with action.context():
return (
DeferredContext(self.head(url))
.addCallback(self._add_nonce)
.addCallback(lambda _: self._nonces.pop())
.addCallback(tap(
lambda nonce: action.add_success_fields(nonce=nonce)))
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _post(self, url, obj, content_type, **kwargs):
""" POST an object and check the response. :param str url: The URL to request. :param ~josepy.interfaces.JSONDeSerializable obj: The serializable payload of the request. :param bytes content_type: The expected content type of the response. :raises txacme.client.ServerError: If server response body carries HTTP Problem (draft-ietf-appsawg-http-problem-00). :raises acme.errors.ClientError: In case of other protocol errors. """ |
with LOG_JWS_POST().context():
headers = kwargs.setdefault('headers', Headers())
headers.setRawHeaders(b'content-type', [JSON_CONTENT_TYPE])
return (
DeferredContext(self._get_nonce(url))
.addCallback(self._wrap_in_jws, obj)
.addCallback(
lambda data: self._send_request(
u'POST', url, data=data, **kwargs))
.addCallback(self._add_nonce)
.addCallback(self._check_response, content_type=content_type)
.addActionFinish()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def post(self, url, obj, content_type=JSON_CONTENT_TYPE, **kwargs):
""" POST an object and check the response. Retry once if a badNonce error is received. :param str url: The URL to request. :param ~josepy.interfaces.JSONDeSerializable obj: The serializable payload of the request. :param bytes content_type: The expected content type of the response. By default, JSON. :raises txacme.client.ServerError: If server response body carries HTTP Problem (draft-ietf-appsawg-http-problem-00). :raises acme.errors.ClientError: In case of other protocol errors. """ |
def retry_bad_nonce(f):
f.trap(ServerError)
# The current RFC draft defines the namespace as
# urn:ietf:params:acme:error:<code>, but earlier drafts (and some
# current implementations) use urn:acme:error:<code> instead. We
# don't really care about the namespace here, just the error code.
if f.value.message.typ.split(':')[-1] == 'badNonce':
# If one nonce is bad, others likely are too. Let's clear them
# and re-add the one we just got.
self._nonces.clear()
self._add_nonce(f.value.response)
return self._post(url, obj, content_type, **kwargs)
return f
return (
self._post(url, obj, content_type, **kwargs)
.addErrback(retry_bad_nonce)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _daemon_thread(*a, **kw):
""" Create a `threading.Thread`, but always set ``daemon``. """ |
thread = Thread(*a, **kw)
thread.daemon = True
return thread |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _defer_to_worker(deliver, worker, work, *args, **kwargs):
""" Run a task in a worker, delivering the result as a ``Deferred`` in the reactor thread. """ |
deferred = Deferred()
def wrapped_work():
try:
result = work(*args, **kwargs)
except BaseException:
f = Failure()
deliver(lambda: deferred.errback(f))
else:
deliver(lambda: deferred.callback(result))
worker.do(wrapped_work)
return deferred |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _split_zone(server_name, zone_name):
""" Split the zone portion off from a DNS label. :param str server_name: The full DNS label. :param str zone_name: The zone name suffix. """ |
server_name = server_name.rstrip(u'.')
zone_name = zone_name.rstrip(u'.')
if not (server_name == zone_name or
server_name.endswith(u'.' + zone_name)):
raise NotInZone(server_name=server_name, zone_name=zone_name)
return server_name[:-len(zone_name)].rstrip(u'.') |
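The suffix-stripping above is plain string manipulation and is easy to check standalone. This sketch mirrors `_split_zone`, with `ValueError` standing in for the `NotInZone` exception:

```python
def split_zone(server_name, zone_name):
    """Return the subdomain part of server_name relative to zone_name."""
    server_name = server_name.rstrip('.')  # ignore trailing root dot
    zone_name = zone_name.rstrip('.')
    if not (server_name == zone_name or
            server_name.endswith('.' + zone_name)):
        raise ValueError('%s not in zone %s' % (server_name, zone_name))
    return server_name[:-len(zone_name)].rstrip('.')

print(split_zone('acme.example.com.', 'example.com'))   # acme
print(repr(split_zone('example.com', 'example.com')))   # ''
```

Note the `'.' + zone_name` check: it prevents `badexample.com` from matching zone `example.com` on a bare suffix test.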
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_existing(driver, zone_name, server_name, validation):
""" Get existing validation records. """ |
if zone_name is None:
zones = sorted(
(z for z
in driver.list_zones()
if server_name.rstrip(u'.')
.endswith(u'.' + z.domain.rstrip(u'.'))),
key=lambda z: len(z.domain),
reverse=True)
if len(zones) == 0:
raise NotInZone(server_name=server_name, zone_name=None)
else:
zones = [
z for z
in driver.list_zones()
if z.domain == zone_name]
if len(zones) == 0:
raise ZoneNotFound(zone_name=zone_name)
zone = zones[0]
subdomain = _split_zone(server_name, zone.domain)
existing = [
record for record
in zone.list_records()
if record.name == subdomain and
record.type == 'TXT' and
record.data == validation]
return zone, existing, subdomain |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _validation(response):
""" Get the validation value for a challenge response. """ |
h = hashlib.sha256(response.key_authorization.encode("utf-8"))
return b64encode(h.digest()).decode() |
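The `response.key_authorization` attribute comes from an ACME challenge-response object; the hashing itself needs only the standard library. A sketch that takes the key-authorization string directly (the name `validation_value` is ours, not from the source):

```python
import hashlib
from base64 import b64encode

def validation_value(key_authorization):
    # SHA-256 the UTF-8 key authorization, then base64-encode the digest,
    # mirroring the _validation helper above.
    h = hashlib.sha256(key_authorization.encode("utf-8"))
    return b64encode(h.digest()).decode()

print(validation_value("token.thumbprint"))
```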
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_or_create_client_key(pem_path):
""" Load the client key from a directory, creating it if it does not exist. .. note:: The client key that will be created will be a 2048-bit RSA key. :type pem_path: ``twisted.python.filepath.FilePath`` :param pem_path: The certificate directory to use, as with the endpoint. """ |
acme_key_file = pem_path.asTextMode().child(u'client.key')
if acme_key_file.exists():
key = serialization.load_pem_private_key(
acme_key_file.getContent(),
password=None,
backend=default_backend())
else:
key = generate_private_key(u'rsa')
acme_key_file.setContent(
key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.TraditionalOpenSSL,
encryption_algorithm=serialization.NoEncryption()))
return JWKRSA(key=key) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse(reactor, directory, pemdir, *args, **kwargs):
""" Parse a txacme endpoint description. :param reactor: The Twisted reactor. :param directory: ``twisted.python.url.URL`` for the ACME directory to use for issuing certs. :param str pemdir: The path to the certificate directory to use. """ |
def colon_join(items):
return ':'.join([item.replace(':', '\\:') for item in items])
sub = colon_join(list(args) + ['='.join(item) for item in kwargs.items()])
pem_path = FilePath(pemdir).asTextMode()
acme_key = load_or_create_client_key(pem_path)
return AutoTLSEndpoint(
reactor=reactor,
directory=directory,
client_creator=partial(Client.from_url, key=acme_key, alg=RS256),
cert_store=DirectoryStore(pem_path),
cert_mapping=HostDirectoryMap(pem_path),
sub_endpoint=serverFromString(reactor, sub)) |
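The `colon_join` helper inside `_parse` rebuilds a Twisted endpoint description string, escaping any literal colons in the parts. It can be tried on its own:

```python
def colon_join(items):
    # Colons separate endpoint-string fields, so literal colons in a field
    # must be backslash-escaped before joining.
    return ':'.join([item.replace(':', '\\:') for item in items])

print(colon_join(['tcp', '443', 'interface=0.0.0.0']))  # -> tcp:443:interface=0.0.0.0
print(colon_join(['a:b', 'c']))                         # -> a\:b:c
```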
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lazyread(f, delimiter):
""" Generator which continually reads ``f`` to the next instance of ``delimiter``. This allows you to do batch processing on the contents of ``f`` without loading the entire file into memory. :param f: Any file-like object which has a ``.read()`` method. :param delimiter: Delimiter on which to split up the file. """ |
# Get an empty string to start with. We need to make sure that if the
# file is opened in binary mode, we're using byte strings, and similar
# for Unicode. Otherwise trying to update the running string will
# hit a TypeError.
try:
running = f.read(0)
except Exception as e:
# The boto3 APIs don't let you read zero bytes from an S3 object, but
# they always return bytestrings, so in this case we know what to
# start with.
if e.__class__.__name__ == 'IncompleteReadError':
running = b''
else:
raise
while True:
new_data = f.read(1024)
# When a call to read() returns nothing, we're at the end of the file.
if not new_data:
yield running
return
# Otherwise, update the running stream and look for instances of
# the delimiter. Remember we might have read more than one delimiter
# since the last time we checked
running += new_data
while delimiter in running:
curr, running = running.split(delimiter, 1)
yield curr + delimiter |
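The generator works with any file-like object. A minimal version (dropping the boto3 `IncompleteReadError` special case) run against an in-memory stream:

```python
import io

def lazyread(f, delimiter):
    running = f.read(0)  # empty str or bytes, matching the file's mode
    while True:
        new_data = f.read(1024)
        if not new_data:          # end of file: emit whatever is left
            yield running
            return
        running += new_data
        while delimiter in running:
            curr, running = running.split(delimiter, 1)
            yield curr + delimiter

f = io.BytesIO(b"alpha\nbeta\ngamma")
print(list(lazyread(f, b"\n")))  # -> [b'alpha\n', b'beta\n', b'gamma']
```

Each yielded chunk keeps its trailing delimiter; only the final, unterminated chunk lacks one.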
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_private_key(key_type):
""" Generate a random private key using sensible parameters. :param str key_type: The type of key to generate. One of: ``rsa``. """ |
if key_type == u'rsa':
return rsa.generate_private_key(
public_exponent=65537, key_size=2048, backend=default_backend())
raise ValueError(key_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tap(f):
""" "Tap" a Deferred callback chain with a function whose return value is ignored. """ |
@wraps(f)
def _cb(res, *a, **kw):
d = maybeDeferred(f, res, *a, **kw)
d.addCallback(lambda ignored: res)
return d
return _cb |
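The version above depends on Twisted's `Deferred`; a synchronous analog (our sketch, named `tap_sync` to avoid implying it is the txacme implementation) shows the pass-through idea:

```python
from functools import wraps

def tap_sync(f):
    """Call f for its side effect, then pass the original value through."""
    @wraps(f)
    def _cb(res, *a, **kw):
        f(res, *a, **kw)  # return value ignored, as in the Deferred version
        return res
    return _cb

seen = []
log_and_pass = tap_sync(seen.append)
print(log_and_pass(42))  # -> 42 (the value flows through unchanged)
print(seen)              # -> [42] (the side effect still ran)
```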
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decode_csr(b64der):
""" Decode JOSE Base-64 DER-encoded CSR. :param str b64der: The encoded CSR. :rtype: `cryptography.x509.CertificateSigningRequest` :return: The decoded CSR. """ |
try:
return x509.load_der_x509_csr(
decode_b64jose(b64der), default_backend())
except ValueError as error:
raise DeserializationError(error) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def csr_for_names(names, key):
""" Generate a certificate signing request for the given names and private key. .. seealso:: `acme.client.Client.request_issuance` .. seealso:: `generate_private_key` :param ``List[str]``: One or more names (subjectAltName) for which to request a certificate. :param key: A Cryptography private key object. :rtype: `cryptography.x509.CertificateSigningRequest` :return: The certificate request message. """ |
if len(names) == 0:
raise ValueError('Must have at least one name')
if len(names[0]) > 64:
common_name = u'san.too.long.invalid'
else:
common_name = names[0]
return (
x509.CertificateSigningRequestBuilder()
.subject_name(x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, common_name)]))
.add_extension(
x509.SubjectAlternativeName(list(map(x509.DNSName, names))),
critical=False)
.sign(key, hashes.SHA256(), default_backend())) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _wrap_parse(code, filename):
""" async wrapper is required to avoid await calls raising a SyntaxError """ |
code = 'async def wrapper():\n' + indent(code, ' ')
return ast.parse(code, filename=filename).body[0].body[0].value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def layers_to_solr(self, layers):
""" Sync n layers in Solr. """ |
layers_dict_list = []
layers_success_ids = []
layers_errors_ids = []
for layer in layers:
layer_dict, message = layer2dict(layer)
if not layer_dict:
layers_errors_ids.append([layer.id, message])
LOGGER.error(message)
else:
layers_dict_list.append(layer_dict)
layers_success_ids.append(layer.id)
layers_json = json.dumps(layers_dict_list)
try:
url_solr_update = '%s/solr/hypermap/update/json/docs' % SEARCH_URL
headers = {"content-type": "application/json"}
params = {"commitWithin": 1500}
requests.post(url_solr_update, data=layers_json, params=params, headers=headers)
LOGGER.info('Solr synced for the given layers')
except Exception:
message = "Error saving solr records: %s" % sys.exc_info()[1]
layers_errors_ids.append([-1, message])
LOGGER.error(message)
return False, layers_errors_ids
return True, layers_errors_ids |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def layer_to_solr(self, layer):
""" Sync a layer in Solr. """ |
success = True
message = 'Synced layer id %s to Solr' % layer.id
layer_dict, message = layer2dict(layer)
if not layer_dict:
success = False
else:
layer_json = json.dumps(layer_dict)
try:
url_solr_update = '%s/solr/hypermap/update/json/docs' % SEARCH_URL
headers = {"content-type": "application/json"}
params = {"commitWithin": 1500}
res = requests.post(url_solr_update, data=layer_json, params=params, headers=headers)
res = res.json()
if 'error' in res:
success = False
message = "Error syncing layer id %s to Solr: %s" % (layer.id, res["error"].get("msg"))
except Exception as e:
success = False
message = "Error syncing layer id %s to Solr: %s" % (layer.id, sys.exc_info()[1])
LOGGER.error(e, exc_info=True)
if success:
LOGGER.info(message)
else:
LOGGER.error(message)
return success, message |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clear_solr(self, catalog="hypermap"):
"""Clear all indexes in the solr core""" |
solr_url = "{0}/solr/{1}".format(SEARCH_URL, catalog)
solr = pysolr.Solr(solr_url, timeout=60)
solr.delete(q='*:*')
LOGGER.debug('Solr core cleared') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_service_from_endpoint(endpoint, service_type, title=None, abstract=None, catalog=None):
""" Create a service from an endpoint if it does not already exists. """ |
from models import Service
if Service.objects.filter(url=endpoint, catalog=catalog).count() == 0:
# check if endpoint is valid
request = requests.get(endpoint)
if request.status_code == 200:
LOGGER.debug('Creating a %s service for endpoint=%s catalog=%s' % (service_type, endpoint, catalog))
service = Service(
type=service_type, url=endpoint, title=title, abstract=abstract,
csw_type='service', catalog=catalog
)
service.save()
return service
else:
LOGGER.warning('This endpoint is invalid, status code is %s' % request.status_code)
else:
LOGGER.warning('A service for this endpoint %s in catalog %s already exists' % (endpoint, catalog))
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def service_url_parse(url):
""" Function that parses from url the service and folder of services. """ |
endpoint = get_sanitized_endpoint(url)
url_split_list = url.split(endpoint + '/')
# str.split never returns an empty list, so test for a part after the endpoint
if len(url_split_list) > 1:
url_split_list = url_split_list[1].split('/')
else:
raise Exception('Wrong url parsed')
# Remove unnecessary items from list of the split url.
parsed_url = [s for s in url_split_list if '?' not in s if 'Server' not in s]
return parsed_url |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def inverse_mercator(xy):
""" Given coordinates in spherical mercator, return a lon,lat tuple. """ |
lon = (xy[0] / 20037508.34) * 180
lat = (xy[1] / 20037508.34) * 180
lat = 180 / math.pi * \
(2 * math.atan(math.exp(lat * math.pi / 180)) - math.pi / 2)
return (lon, lat) |
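The constant 20037508.34 is half the circumference of the spherical-mercator earth in metres, so the x axis maps linearly to longitude while latitude needs the inverse Gudermannian step. A quick check at the origin and the antimeridian:

```python
import math

def inverse_mercator(xy):
    lon = (xy[0] / 20037508.34) * 180
    lat = (xy[1] / 20037508.34) * 180
    # Inverse Gudermannian: undo the mercator latitude stretch.
    lat = 180 / math.pi * \
        (2 * math.atan(math.exp(lat * math.pi / 180)) - math.pi / 2)
    return (lon, lat)

print(inverse_mercator((0, 0)))            # -> (0.0, 0.0)
print(inverse_mercator((20037508.34, 0)))  # -> (180.0, 0.0)
```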
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_wms_version_negotiate(url, timeout=10):
""" OWSLib wrapper function to perform version negotiation against owslib.wms.WebMapService """ |
try:
LOGGER.debug('Trying a WMS 1.3.0 GetCapabilities request')
return WebMapService(url, version='1.3.0', timeout=timeout)
except Exception as err:
LOGGER.warning('WMS 1.3.0 support not found: %s', err)
LOGGER.debug('Trying a WMS 1.1.1 GetCapabilities request instead')
return WebMapService(url, version='1.1.1', timeout=timeout) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_sanitized_endpoint(url):
""" Sanitize an endpoint, as removing unneeded parameters """ |
# sanitize esri
sanitized_url = url.rstrip()
esri_string = '/rest/services'
if esri_string in url:
match = re.search(esri_string, sanitized_url)
sanitized_url = url[0:(match.start(0)+len(esri_string))]
return sanitized_url |
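For ESRI URLs the function truncates everything after `/rest/services`; other URLs pass through with trailing whitespace stripped. The example URLs below are made up for illustration:

```python
import re

def get_sanitized_endpoint(url):
    # sanitize esri: cut the URL at the end of '/rest/services'
    sanitized_url = url.rstrip()
    esri_string = '/rest/services'
    if esri_string in url:
        match = re.search(esri_string, sanitized_url)
        sanitized_url = url[0:(match.start(0) + len(esri_string))]
    return sanitized_url

print(get_sanitized_endpoint('https://host/arcgis/rest/services/Folder/MapServer?f=json'))
# -> https://host/arcgis/rest/services
print(get_sanitized_endpoint('https://host/geoserver/wms '))
# -> https://host/geoserver/wms
```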
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_esri_extent(esriobj):
""" Get the extent of an ESRI resource """ |
extent = None
srs = None
if 'fullExtent' in esriobj._json_struct:
extent = esriobj._json_struct['fullExtent']
if 'extent' in esriobj._json_struct:
extent = esriobj._json_struct['extent']
try:
srs = extent['spatialReference']['wkid']
except KeyError as err:
LOGGER.error(err, exc_info=True)
return [extent, srs] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bbox2wktpolygon(bbox):
""" Return OGC WKT Polygon of a simple bbox list of strings """ |
minx = float(bbox[0])
miny = float(bbox[1])
maxx = float(bbox[2])
maxy = float(bbox[3])
return 'POLYGON((%.2f %.2f, %.2f %.2f, %.2f %.2f, %.2f %.2f, %.2f %.2f))' \
% (minx, miny, minx, maxy, maxx, maxy, maxx, miny, minx, miny) |
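The polygon walks the bbox counter-clockwise from the lower-left corner and closes back on the starting vertex, as WKT requires:

```python
def bbox2wktpolygon(bbox):
    # bbox is [minx, miny, maxx, maxy] as strings
    minx = float(bbox[0])
    miny = float(bbox[1])
    maxx = float(bbox[2])
    maxy = float(bbox[3])
    return 'POLYGON((%.2f %.2f, %.2f %.2f, %.2f %.2f, %.2f %.2f, %.2f %.2f))' \
        % (minx, miny, minx, maxy, maxx, maxy, maxx, miny, minx, miny)

print(bbox2wktpolygon(['-180', '-90', '180', '90']))
# -> POLYGON((-180.00 -90.00, -180.00 90.00, 180.00 90.00, 180.00 -90.00, -180.00 -90.00))
```

Coordinates are rounded to two decimals by the `%.2f` format, which is lossy for high-precision extents.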
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_solr_date(pydate, is_negative):
""" Returns a date in a valid Solr format from a string. """ |
# check if date is valid and then set it to solr format YYYY-MM-DDThh:mm:ssZ
try:
if isinstance(pydate, datetime.datetime):
solr_date = '%sZ' % pydate.isoformat()[0:19]
if is_negative:
LOGGER.debug('%s This layer has a negative date' % solr_date)
solr_date = '-%s' % solr_date
return solr_date
else:
return None
except Exception as e:
LOGGER.error(e, exc_info=True)
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_date(layer):
""" Returns a custom date representation. A date can be detected or from metadata. It can be a range or a simple date in isoformat. """ |
date = None
sign = '+'
date_type = 1
layer_dates = layer.get_layer_dates()
# we index the first date!
if layer_dates:
sign = layer_dates[0][0]
date = layer_dates[0][1]
date_type = layer_dates[0][2]
if date is None:
date = layer.created
# layer date > 2300 is invalid for sure
# TODO put this logic in date miner
if date.year > 2300:
date = None
if date_type == 0:
date_type = "Detected"
if date_type == 1:
date_type = "From Metadata"
return get_solr_date(date, (sign == '-')), date_type |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def detect_metadata_url_scheme(url):
"""detect whether a url is a Service type that HHypermap supports""" |
scheme = None
url_lower = url.lower()
if any(x in url_lower for x in ['wms', 'service=wms']):
scheme = 'OGC:WMS'
if any(x in url_lower for x in ['wmts', 'service=wmts']):
scheme = 'OGC:WMTS'
elif all(x in url for x in ['/MapServer', 'f=json']):
scheme = 'ESRI:ArcGIS:MapServer'
elif all(x in url for x in ['/ImageServer', 'f=json']):
scheme = 'ESRI:ArcGIS:ImageServer'
return scheme |
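Note the control flow: the WMS check is a plain `if` while the WMTS check is another `if` (not `elif`), so a URL matching both patterns is classified as WMTS; the ESRI branches use case-sensitive path matching. The example URLs are made up:

```python
def detect_metadata_url_scheme(url):
    scheme = None
    url_lower = url.lower()
    if any(x in url_lower for x in ['wms', 'service=wms']):
        scheme = 'OGC:WMS'
    if any(x in url_lower for x in ['wmts', 'service=wmts']):
        scheme = 'OGC:WMTS'  # a separate `if`, so it overrides a WMS match
    elif all(x in url for x in ['/MapServer', 'f=json']):
        scheme = 'ESRI:ArcGIS:MapServer'
    elif all(x in url for x in ['/ImageServer', 'f=json']):
        scheme = 'ESRI:ArcGIS:ImageServer'
    return scheme

print(detect_metadata_url_scheme('http://host/ows?SERVICE=WMS'))                 # -> OGC:WMS
print(detect_metadata_url_scheme('http://host/ows?service=WMTS'))                # -> OGC:WMTS
print(detect_metadata_url_scheme('http://host/rest/services/X/MapServer?f=json'))  # -> ESRI:ArcGIS:MapServer
```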
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def serialize_checks(check_set):
""" Serialize a check_set for raphael """ |
check_set_list = []
for check in check_set.all()[:25]:
check_set_list.append(
{
'datetime': check.checked_datetime.isoformat(),
'value': check.response_time,
'success': 1 if check.success else 0
}
)
return check_set_list |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def domains(request):
""" A page with number of services and layers faceted on domains. """ |
url = ''
query = '*:*&facet=true&facet.limit=-1&facet.pivot=domain_name,service_id&wt=json&indent=true&rows=0'
if settings.SEARCH_TYPE == 'elasticsearch':
url = '%s/select?q=%s' % (settings.SEARCH_URL, query)
if settings.SEARCH_TYPE == 'solr':
url = '%s/solr/hypermap/select?q=%s' % (settings.SEARCH_URL, query)
LOGGER.debug(url)
response = urllib2.urlopen(url)
data = response.read().replace('\n', '')
# stats
layers_count = Layer.objects.all().count()
services_count = Service.objects.all().count()
template = loader.get_template('aggregator/index.html')
context = RequestContext(request, {
'data': data,
'layers_count': layers_count,
'services_count': services_count,
})
return HttpResponse(template.render(context)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tasks_runner(request):
""" A page that let the admin to run global tasks. """ |
# server info
cached_layers_number = 0
cached_layers = cache.get('layers')
if cached_layers:
cached_layers_number = len(cached_layers)
cached_deleted_layers_number = 0
cached_deleted_layers = cache.get('deleted_layers')
if cached_deleted_layers:
cached_deleted_layers_number = len(cached_deleted_layers)
# task actions
if request.method == 'POST':
if 'check_all' in request.POST:
if settings.REGISTRY_SKIP_CELERY:
check_all_services()
else:
check_all_services.delay()
if 'index_all' in request.POST:
if settings.REGISTRY_SKIP_CELERY:
index_all_layers()
else:
index_all_layers.delay()
if 'index_cached' in request.POST:
if settings.REGISTRY_SKIP_CELERY:
index_cached_layers()
else:
index_cached_layers.delay()
if 'drop_cached' in request.POST:
cache.set('layers', None)
cache.set('deleted_layers', None)
if 'clear_index' in request.POST:
if settings.REGISTRY_SKIP_CELERY:
clear_index()
else:
clear_index.delay()
if 'remove_index' in request.POST:
if settings.REGISTRY_SKIP_CELERY:
unindex_layers_with_issues()
else:
unindex_layers_with_issues.delay()
return render(
request,
'aggregator/tasks_runner.html', {
'cached_layers_number': cached_layers_number,
'cached_deleted_layers_number': cached_deleted_layers_number,
}
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def layer_mapproxy(request, catalog_slug, layer_uuid, path_info):
""" Get Layer with matching catalog and uuid """ |
layer = get_object_or_404(Layer,
uuid=layer_uuid,
catalog__slug=catalog_slug)
# for WorldMap layers we need to use the url of the layer
if layer.service.type == 'Hypermap:WorldMap':
layer.service.url = layer.url
# Set up a mapproxy app for this particular layer
mp, yaml_config = get_mapproxy(layer)
query = request.META['QUERY_STRING']
if len(query) > 0:
path_info = path_info + '?' + query
params = {}
headers = {
'X-Script-Name': '/registry/{0}/layer/{1}/map/'.format(catalog_slug, layer.id),
'X-Forwarded-Host': request.META['HTTP_HOST'],
'HTTP_HOST': request.META['HTTP_HOST'],
'SERVER_NAME': request.META['SERVER_NAME'],
}
if path_info == '/config':
response = HttpResponse(yaml_config, content_type='text/plain')
return response
# Get a response from MapProxy as if it was running standalone.
mp_response = mp.get(path_info, params, headers)
# Create a Django response from the MapProxy WSGI response.
response = HttpResponse(mp_response.body, status=mp_response.status_int)
for header, value in mp_response.headers.iteritems():
response[header] = value
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_datetime(date_str):
""" Parses a date string to date object. for BCE dates, only supports the year part. """ |
is_common_era = True
date_str_parts = date_str.split("-")
if date_str_parts and date_str_parts[0] == '':
is_common_era = False
# for now, only support BCE years
# assume the datetime comes complete, but
# when it comes only the year, add the missing datetime info:
if len(date_str_parts) == 2:
date_str = date_str + "-01-01T00:00:00Z"
parsed_datetime = {
'is_common_era': is_common_era,
'parsed_datetime': None
}
if is_common_era:
if date_str == '*':
return parsed_datetime # open ended.
default = datetime.datetime.now().replace(
hour=0, minute=0, second=0, microsecond=0,
day=1, month=1
)
parsed_datetime['parsed_datetime'] = parse(date_str, default=default)
return parsed_datetime
parsed_datetime['parsed_datetime'] = date_str
return parsed_datetime |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_ids(self, ids):
""" Query by list of identifiers """ |
results = self._get_repo_filter(Layer.objects).filter(uuid__in=ids).all()
if len(results) == 0: # try services
results = self._get_repo_filter(Service.objects).filter(uuid__in=ids).all()
return results |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_domain(self, domain, typenames, domainquerytype='list', count=False):
""" Query by property domain values """ |
objects = self._get_repo_filter(Layer.objects)
if domainquerytype == 'range':
return [tuple(objects.aggregate(Min(domain), Max(domain)).values())]
else:
if count:
return [(d[domain], d['%s__count' % domain])
for d in objects.values(domain).annotate(Count(domain))]
else:
return objects.values_list(domain).distinct() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_source(self, source):
""" Query by source """ |
return self._get_repo_filter(Layer.objects).filter(url=source) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query(self, constraint, sortby=None, typenames=None, maxrecords=10, startposition=0):
""" Query records from underlying repository """ |
# run the raw query and get total
# we want to exclude layers which are not valid, as it is done in the search engine
if 'where' in constraint: # GetRecords with constraint
query = self._get_repo_filter(Layer.objects).filter(
is_valid=True).extra(where=[constraint['where']], params=constraint['values'])
else: # GetRecords sans constraint
query = self._get_repo_filter(Layer.objects).filter(is_valid=True)
total = query.count()
# apply sorting, limit and offset
if sortby is not None:
if 'spatial' in sortby and sortby['spatial']: # spatial sort
desc = False
if sortby['order'] == 'DESC':
desc = True
query = query.all()
return [str(total),
sorted(query,
key=lambda x: float(util.get_geometry_area(getattr(x, sortby['propertyname']))),
reverse=desc,
)[startposition:startposition+int(maxrecords)]]
else:
if sortby['order'] == 'DESC':
pname = '-%s' % sortby['propertyname']
else:
pname = sortby['propertyname']
return [str(total),
query.order_by(pname)[startposition:startposition+int(maxrecords)]]
else: # no sort
return [str(total), query.all()[startposition:startposition+int(maxrecords)]] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insert(self, resourcetype, source, insert_date=None):
""" Insert a record into the repository """ |
caller = inspect.stack()[1][3]
if caller == 'transaction': # insert of Layer
hhclass = 'Layer'
source = resourcetype
resourcetype = resourcetype.csw_schema
else: # insert of service
hhclass = 'Service'
if resourcetype not in HYPERMAP_SERVICE_TYPES.keys():
raise RuntimeError('Unsupported Service Type')
return self._insert_or_update(resourcetype, source, mode='insert', hhclass=hhclass) |