code string | signature string | docstring string | loss_without_docstring float64 | loss_with_docstring float64 | factor float64 |
|---|---|---|---|---|---|
assert isinstance(catalogs, string_types + (dict, list))
# If a single catalog is passed, wrap it in a list
if isinstance(catalogs, string_types + (dict,)):
catalogs = [catalogs]
harvestable_catalogs = [readers.read_catalog(c) for c in catalogs]
catalogs_urls = [catalog if isinstance(catalog, string_types)
else None for catalog in catalogs]
# apply the harvest criteria
if harvest == 'all':
pass
elif harvest == 'none':
for catalog in harvestable_catalogs:
catalog["dataset"] = []
elif harvest == 'valid':
report = self.generate_datasets_report(catalogs, harvest)
return self.generate_harvestable_catalogs(
catalogs=catalogs, harvest='report', report=report,
export_path=export_path)
elif harvest == 'report':
if not report:
raise ValueError("harvest='report' requires a report argument")
datasets_to_harvest = self._extract_datasets_to_harvest(report)
for idx_cat, catalog in enumerate(harvestable_catalogs):
catalog_url = catalogs_urls[idx_cat]
if ("dataset" in catalog and
isinstance(catalog["dataset"], list)):
catalog["dataset"] = [
dataset for dataset in catalog["dataset"]
if (catalog_url, dataset.get("title")) in
datasets_to_harvest
]
else:
catalog["dataset"] = []
else:
raise ValueError("Invalid harvest criterion: {}".format(harvest))
# return the harvestable catalogs
if export_path and os.path.isdir(export_path):
# Write one JSON file per catalog
for idx, catalog in enumerate(harvestable_catalogs):
filename = os.path.join(export_path, "catalog_{}".format(idx))
writers.write_json(catalog, filename)
elif export_path:
# Write a single JSON file with all the catalogs
writers.write_json(harvestable_catalogs, export_path)
else:
return harvestable_catalogs | def generate_harvestable_catalogs(self, catalogs, harvest='all',
report=None, export_path=None) | Filters the provided catalogs according to the criterion given in
`harvest`.
Args:
catalogs (str, dict or list): One (str or dict) or several (list of
strs and/or dicts) catalogs.
harvest (str): Criterion deciding which datasets to keep from
each catalog ('all', 'none', 'valid' or 'report').
report (list or str): Report table generated by
generate_datasets_report() as a list of dicts, or a file in
XLSX or CSV format. Only used when `harvest=='report'`.
export_path (str): Path to a JSON file or a directory where the
filtered catalogs will be exported. If it ends in ".json", the
list of catalogs is written to a single file. If it is a
directory, one JSON file per catalog is saved in it. If
`export_path` is given, the method returns nothing.
Returns:
list of dicts: List of catalogs. | 2.801926 | 2.529704 | 1.10761 |
catalog = readers.read_catalog(catalog)
# Try to read every well-formed dataset from the
# catalog["dataset"] list, if it exists.
if "dataset" in catalog and isinstance(catalog["dataset"], list):
datasets = [d if isinstance(d, dict) else {} for d in
catalog["dataset"]]
else:
# Otherwise, assume there are no datasets present
datasets = []
validation = self.validate_catalog(catalog)["error"]["dataset"]
def info_dataset(index, dataset):
info = OrderedDict()
info["indice"] = index
info["titulo"] = dataset.get("title")
info["identificador"] = dataset.get("identifier")
info["estado_metadatos"] = validation[index]["status"]
info["cant_errores"] = len(validation[index]["errors"])
info["cant_distribuciones"] = len(dataset.get("distribution", []))
return info
summary = [info_dataset(i, ds) for i, ds in enumerate(datasets)]
if export_path:
writers.write_table(summary, export_path)
else:
return summary | def generate_datasets_summary(self, catalog, export_path=None) | Generates a report about the datasets present in a catalog,
indicating for each one:
- Index in the catalog["dataset"] list
- Title
- Identifier
- Number of distributions
- Status of its metadata ["OK"|"ERROR"]
It is used by the daily routine of `libreria-catalogos` to report
on the datasets of the maintained catalogs.
Args:
catalog (str or dict): Path to a catalog in any supported format,
JSON, XLSX, or a python dictionary.
export_path (str): Path where the generated report is exported (in
XLSX or CSV format). If given, the method returns nothing.
Returns:
list: Contains as many dicts as there are datasets in
`catalog`, with the data described above. | 4.239777 | 3.501631 | 1.210801 |
assert isinstance(report, string_types + (list,))
# If `report` is a list of 2-tuples, assume it is a report already
# processed to extract the datasets to harvest. Return it
# untouched.
if (isinstance(report, list) and all([isinstance(x, tuple) and
len(x) == 2 for x in report])):
return report
table = readers.read_table(report)
table_keys = table[0].keys()
expected_keys = ["catalog_metadata_url", "dataset_title",
"dataset_accrualPeriodicity"]
# Check for the basic keys of a harvester config
for key in expected_keys:
if key not in table_keys:
raise KeyError("Expected key '{}' not found in the report".format(key))
if "harvest" in table_keys:
# The file is a datasets report.
datasets_to_harvest = [
(row["catalog_metadata_url"], row["dataset_title"]) for row in
table if int(row["harvest"])]
else:
# The file is a harvester config.
datasets_to_harvest = [
(row["catalog_metadata_url"], row["dataset_title"]) for row in
table]
return datasets_to_harvest | def _extract_datasets_to_harvest(cls, report) | Extracts from a report the data needed to recognize which
datasets to mark for harvesting in any generator.
Args:
report (str or list): Report (list of dicts) or path to one.
Returns:
list: List of tuples with the catalog and dataset titles of each
extracted report. | 4.032939 | 3.895339 | 1.035324 |
catalog = DataJson(catalog) if catalog else self  # fall back to self when no catalog is given
dataset = catalog.get_dataset(dataset_identifier)
text = documentation.dataset_to_markdown(dataset)
if export_path:
with open(export_path, "wb") as f:
f.write(text.encode("utf-8"))
else:
return text | def generate_dataset_documentation(self, dataset_identifier,
export_path=None, catalog=None) | Generates markdown text from the metadata of a `dataset`.
Args:
dataset_identifier (str): Unique identifier of a dataset.
export_path (str): Path where the generated text is exported. If
given, the method returns nothing.
catalog (dict, str or unicode): External (path/URL) or internal
(dict) representation of a catalog. If not given, the catalog
loaded in `self` (the DataJson object itself) is used.
Returns:
str: Text describing a `dataset`. | 3.701756 | 3.389489 | 1.092128 |
module_path, _, class_name = path.rpartition('.')
return getattr(import_module(module_path), class_name) | def import_by_path(path: str) -> Callable | Import a class or function given its absolute path.
Parameters
----------
path:
Path to object to import | 3.480417 | 4.926301 | 0.706497 |
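`import_by_path` resolves a dotted path with `importlib`; the function is small enough to run standalone as a sketch:

```python
from importlib import import_module
from typing import Callable


def import_by_path(path: str) -> Callable:
    """Import a class or function given its absolute dotted path."""
    # Split "pkg.module.Name" into "pkg.module" and "Name".
    module_path, _, class_name = path.rpartition('.')
    return getattr(import_module(module_path), class_name)


# Example: resolve a stdlib function from its dotted path.
dumps = import_by_path('json.dumps')
```

Any importable dotted name works the same way, since `rpartition` always takes the last segment as the attribute.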
unknown_catalog_repr_msg = \
"Unrecognized catalog representation: {}".format(catalog)
assert isinstance(catalog, string_types + (dict,)
), unknown_catalog_repr_msg
if isinstance(catalog, dict):
catalog_dict = catalog
else:
# catalog is a remote URL or a local path
suffix = catalog.split(".")[-1].strip("/")
if suffix in ('json', 'xlsx'):
catalog_format = catalog_format or suffix
if catalog_format == "xlsx":
try:
catalog_dict = read_xlsx_catalog(catalog)
except openpyxl_exceptions + \
(ValueError, AssertionError, IOError, BadZipfile) as e:
raise ce.NonParseableCatalog(catalog, str(e))
elif catalog_format == "json":
try:
catalog_dict = read_json(catalog)
except(ValueError, TypeError, IOError) as e:
raise ce.NonParseableCatalog(catalog, str(e))
elif catalog_format == "ckan":
catalog_dict = read_ckan_catalog(catalog)
else:
catalog_dict = read_suffixless_catalog(catalog)
# if default values were passed, apply them to the catalog that was read
if default_values:
_apply_default_values(catalog_dict, default_values)
return catalog_dict | def read_catalog(catalog, default_values=None, catalog_format=None) | Takes any representation of a catalog and returns its internal
representation (a Python dictionary with its metadata).
If it receives an _internal_ representation (a dictionary), it returns it
untouched. If it receives an _external_ representation (path/URL to a
JSON/XLSX file), it returns its internal representation, i.e. a dictionary.
Args:
catalog (dict or str): External/internal representation of a catalog.
An _external_ representation is a local path or a remote URL to a
file with the metadata of a catalog, in JSON or XLSX format. The
_internal_ representation of a catalog is a dictionary.
Returns:
dict: Internal representation of a catalog for use by the functions
of this library. | 3.999774 | 3.762373 | 1.063099 |
for field, default_value in iteritems(default_values):
class_metadata = field.split("_")[0]
field_json_path = field.split("_")[1:]
# catalog-level defaults
if class_metadata == "catalog":
_set_default_value(catalog, field_json_path, default_value)
# dataset-level defaults
elif class_metadata == "dataset":
for dataset in catalog["dataset"]:
_set_default_value(dataset, field_json_path, default_value)
# distribution-level defaults
elif class_metadata == "distribution":
for dataset in catalog["dataset"]:
for distribution in dataset["distribution"]:
_set_default_value(
distribution, field_json_path, default_value)
# field-level defaults
elif class_metadata == "field":
for dataset in catalog["dataset"]:
for distribution in dataset["distribution"]:
# campo "field" en una "distribution" no es obligatorio
if distribution.get("field"):
for field in distribution["field"]:
_set_default_value(
field, field_json_path, default_value) | def _apply_default_values(catalog, default_values) | Applies default values to the fields of a catalog.
If a field is empty, the default is applied. If it has a value, the
existing value is kept. Defaults are only supported for the following
classes:
catalog
dataset
distribution
field
Args:
catalog (dict): A catalog.
default_values (dict): Default values for some of the catalog's
fields.
{
"dataset_issued": "2017-06-22",
"distribution_issued": "2017-06-22"
} | 2.426125 | 2.078608 | 1.167187 |
variable = dict_obj
if len(keys) == 1:
if not variable.get(keys[0]):
variable[keys[0]] = value
else:
for idx, field in enumerate(keys):
if idx < len(keys) - 1:
variable = variable[field]
if not variable.get(keys[-1]):
variable[keys[-1]] = value | def _set_default_value(dict_obj, keys, value) | Sets a value in a nested dictionary, following a list of keys.
Args:
dict_obj (dict): A nested dictionary.
keys (list): A list of keys to navigate the dictionary.
value (any): A value to set. | 2.571801 | 2.795322 | 0.920038 |
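`_set_default_value` walks a nested dict by keys and only writes when the leaf is empty; a compact runnable equivalent (`set_default_value` is an illustrative standalone name):

```python
def set_default_value(dict_obj, keys, value):
    # Walk to the innermost dict following the list of keys...
    variable = dict_obj
    for field in keys[:-1]:
        variable = variable[field]
    # ...and only set the value if the last key is empty or missing.
    if not variable.get(keys[-1]):
        variable[keys[-1]] = value


catalog = {"dataset": {"issued": None, "title": "kept"}}
set_default_value(catalog, ["dataset", "issued"], "2017-06-22")
set_default_value(catalog, ["dataset", "title"], "ignored")
```

Note that, like the original, a falsy existing value (`None`, `""`, `0`) is treated as "empty" and gets overwritten by the default.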
assert isinstance(json_path_or_url, string_types)
parsed_url = urlparse(json_path_or_url)
if parsed_url.scheme in ["http", "https"]:
res = requests.get(json_path_or_url, verify=False)
json_dict = json.loads(res.content.decode('utf-8'))
else:
# If json_path_or_url looks like a remote URL, warn about it.
path_start = parsed_url.path.split(".")[0]
if path_start == "www" or path_start.isdigit():
warnings.warn("{} looks like a URL but will be read as a local path".format(json_path_or_url))
with io.open(json_path_or_url, encoding='utf-8') as json_file:
json_dict = json.load(json_file)
return json_dict | def read_json(json_path_or_url) | Takes the path to a JSON file and returns the dictionary it represents.
The parameter is assumed to be a URL if it starts with 'http' or 'https',
and a local path otherwise.
Args:
json_path_or_url (str): Local path or remote URL to a plain-text
file in JSON format.
Returns:
dict: The dictionary resulting from deserializing json_path_or_url. | 2.995646 | 2.964351 | 1.010557 |
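The local-path branch of `read_json` can be tried end-to-end with a temporary file; this sketch keeps only that branch (remote URLs, handled with `requests` in the original, are stubbed out):

```python
import io
import json
import tempfile
from urllib.parse import urlparse


def read_json_local(json_path_or_url):
    # Only the local branch of read_json(): URLs would go through requests.
    parsed_url = urlparse(json_path_or_url)
    if parsed_url.scheme in ("http", "https"):
        raise NotImplementedError("remote fetching not sketched here")
    with io.open(json_path_or_url, encoding="utf-8") as json_file:
        return json.load(json_file)


# Round-trip a small catalog through a temporary JSON file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"title": "catálogo"}, f)
    path = f.name

catalog = read_json_local(path)
```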
logger = logger or pydj_logger
assert isinstance(xlsx_path_or_url, string_types)
parsed_url = urlparse(xlsx_path_or_url)
if parsed_url.scheme in ["http", "https"]:
res = requests.get(xlsx_path_or_url, verify=False)
tmpfilename = ".tmpfile.xlsx"
with io.open(tmpfilename, 'wb') as tmpfile:
tmpfile.write(res.content)
catalog_dict = read_local_xlsx_catalog(tmpfilename, logger)
os.remove(tmpfilename)
else:
# If xlsx_path_or_url looks like a remote URL, warn about it.
path_start = parsed_url.path.split(".")[0]
if path_start == "www" or path_start.isdigit():
warnings.warn("{} looks like a URL but will be read as a local path".format(xlsx_path_or_url))
catalog_dict = read_local_xlsx_catalog(xlsx_path_or_url)
return catalog_dict | def read_xlsx_catalog(xlsx_path_or_url, logger=None) | Takes the path to a catalog in XLSX format and returns the dictionary
it represents.
The parameter is assumed to be a URL if it starts with 'http' or 'https',
and a local path otherwise.
Args:
xlsx_path_or_url (str): Local path or remote URL to an XLSX workbook
in the specific format used to store a catalog's metadata.
Returns:
dict: The dictionary resulting from processing xlsx_path_or_url. | 3.322587 | 3.471924 | 0.956987 |
level = catalog_or_dataset
keys = [k for k in ["publisher_name", "publisher_mbox"] if k in level]
if keys:
level["publisher"] = {
key.replace("publisher_", ""): level.pop(key) for key in keys
}
return level | def _make_publisher(catalog_or_dataset) | If the necessary keys are present, builds the "publisher"
dictionary at the catalog or dataset level. | 4.488162 | 3.363976 | 1.334184 |
keys = [k for k in ["contactPoint_fn", "contactPoint_hasEmail"]
if k in dataset]
if keys:
dataset["contactPoint"] = {
key.replace("contactPoint_", ""): dataset.pop(key) for key in keys
}
return dataset | def _make_contact_point(dataset) | If the necessary keys are present, builds the "contactPoint"
dictionary of a dataset. | 5.157434 | 3.707397 | 1.39112 |
logger = logger or pydj_logger
matching_datasets = []
for idx, dataset in enumerate(catalog["catalog_dataset"]):
if dataset["dataset_identifier"] == dataset_identifier:
if dataset["dataset_title"] == dataset_title:
matching_datasets.append(idx)
else:
logger.warning(
ce.DatasetUnexpectedTitle(
dataset_identifier,
dataset["dataset_title"],
dataset_title
)
)
# There must be exactly one dataset with the given identifier.
no_dsets_msg = "There is no dataset with identifier {}".format(
dataset_identifier)
many_dsets_msg = "There is more than one dataset with identifier {}: {}".format(
dataset_identifier, matching_datasets)
if len(matching_datasets) == 0:
logger.error(no_dsets_msg)
return None
elif len(matching_datasets) > 1:
logger.error(many_dsets_msg)
return None
else:
return matching_datasets[0] | def _get_dataset_index(catalog, dataset_identifier, dataset_title,
logger=None) | Returns the index of a dataset in the catalog based on its
identifier | 2.787692 | 2.764123 | 1.008527 |
logger = logger or pydj_logger
dataset_index = _get_dataset_index(
catalog, dataset_identifier, dataset_title)
if dataset_index is None:
return None, None
else:
dataset = catalog["catalog_dataset"][dataset_index]
matching_distributions = []
for idx, distribution in enumerate(dataset["dataset_distribution"]):
if distribution["distribution_identifier"] == distribution_identifier:
if distribution["distribution_title"] == distribution_title:
matching_distributions.append(idx)
else:
logger.warning(
ce.DistributionUnexpectedTitle(
distribution_identifier,
distribution["distribution_title"],
distribution_title
)
)
# There must be exactly one distribution with the given identifiers
if len(matching_distributions) == 0:
logger.warning(
ce.DistributionTitleNonExistentError(
distribution_title, dataset_identifier
)
)
return dataset_index, None
elif len(matching_distributions) > 1:
logger.warning(
ce.DistributionTitleRepetitionError(
distribution_title, dataset_identifier, matching_distributions)
)
return dataset_index, None
else:
distribution_index = matching_distributions[0]
return dataset_index, distribution_index | def _get_distribution_indexes(catalog, dataset_identifier, dataset_title,
distribution_identifier, distribution_title,
logger=None) | Returns the index of a distribution within its dataset based on its
title, together with the index of its parent dataset in the catalog
based on its identifier | 2.662928 | 2.705692 | 0.984195 |
assert isinstance(path, string_types + (list,)), "Invalid path type: {}".format(path)
# If `path` is a list, return it untouched if it has tabular format.
# Otherwise, raise an exception.
if isinstance(path, list):
if helpers.is_list_of_matching_dicts(path):
return path
else:
raise ValueError("Not all dicts in the list share the same keys")
# Infer the file format from `path` and dispatch accordingly.
suffix = path.split(".")[-1]
if suffix == "csv":
return _read_csv_table(path)
elif suffix == "xlsx":
return _read_xlsx_table(path)
else:
raise ValueError("Unsupported file extension: {}".format(suffix)) | def read_table(path) | Reads a tabular file (CSV or XLSX) into a list of dictionaries.
The file extension must be ".csv" or ".xlsx". It determines which
method is used to read the file.
If it receives a list, it checks that all of its dictionaries have the
same keys and, if so, returns it untouched. Otherwise it raises an
exception.
Args:
path (str or list): As a 'str', path to a CSV or XLSX file.
Returns:
list: List of dictionaries with identical keys representing the
original file. | 4.80335 | 4.480114 | 1.072149 |
with open(path, 'rb') as csvfile:
reader = csv.DictReader(csvfile)
table = list(reader)
return table | def _read_csv_table(path) | Reads a CSV into a list of dictionaries. | 2.50956 | 2.208205 | 1.13647 |
workbook = pyxl.load_workbook(path)
worksheet = workbook.active
table = helpers.sheet_to_table(worksheet)
return table | def _read_xlsx_table(path) | Reads the active sheet of an XLSX file into a list of dictionaries. | 4.422461 | 4.267183 | 1.036389 |
if not os.path.isfile(file_path):
raise ConfigurationError('If you provide a model specification file, it must be a file. '
f'You provided {file_path}')
extension = file_path.split('.')[-1]
if extension not in ['yaml', 'yml']:
raise ConfigurationError(f'Model specification files must be in a yaml format. You provided {extension}')
# Attempt to load the file contents to verify it parses as yaml
with open(file_path) as f:
yaml.full_load(f)
return file_path | def validate_model_specification_file(file_path: str) -> str | Ensures the provided file is a yaml file | 3.380564 | 3.014925 | 1.121276 |
if layer:
return self._values[layer]
for layer in reversed(self._layers):
if layer in self._values:
return self._values[layer]
raise KeyError(layer) | def get_value_with_source(self, layer=None) | Returns a tuple of the value's source and the value at the specified
layer. If no layer is specified then the outer layer is used.
Parameters
----------
layer : str
Name of the layer to use. If None then the outermost where the value
exists will be used.
Raises
------
KeyError
If the value is not set for the specified layer | 3.432079 | 4.511666 | 0.760712 |
result = []
for layer in self._layers:
if layer in self._values:
result.append({
'layer': layer,
'value': self._values[layer][1],
'source': self._values[layer][0],
'default': layer == self._layers[-1]
})
return result | def metadata(self) | Returns all values and associated metadata for this node as a dict.
The value which would be selected if the node's value was requested
is indicated by the `default` flag. | 3.885005 | 3.161484 | 1.228855 |
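The layered-value pattern behind `ConfigNode` can be illustrated with a toy class (`MiniConfigNode` is hypothetical, not part of the library): values are stored per layer, and lookups search from the outermost layer inward.

```python
class MiniConfigNode:
    """Toy version of ConfigNode: one value per layer, outermost wins."""

    def __init__(self, layers):
        self._layers = layers        # inner -> outer order
        self._values = {}            # layer -> (source, value)

    def set_value(self, value, layer=None, source=None):
        # Default to the outermost layer, as the original does.
        layer = layer or self._layers[-1]
        self._values[layer] = (source, value)

    def get_value(self):
        # Search from the outermost (last) layer inward.
        for layer in reversed(self._layers):
            if layer in self._values:
                return self._values[layer][1]
        raise KeyError("no value set in any layer")


node = MiniConfigNode(["base", "override"])
node.set_value(1, layer="base", source="defaults.yaml")
node.set_value(2, layer="override", source="cli")
```

With both layers set, a plain lookup returns the `override` value; dropping that layer would expose the `base` value again.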
if self._frozen:
raise TypeError('Frozen ConfigNode does not support assignment')
if not layer:
layer = self._layers[-1]
self._values[layer] = (source, value) | def set_value(self, value, layer=None, source=None) | Set a value for a particular layer with optional metadata about source.
Parameters
----------
value : str
Data to store in the node.
layer : str
Name of the layer to use. If None then the outermost where the value
exists will be used.
source : str
Metadata indicating the source of this value (e.g. a file path)
Raises
------
TypeError
If the node is frozen
KeyError
If the named layer does not exist | 6.38259 | 5.892467 | 1.083178 |
if self._frozen:
raise TypeError('Frozen ConfigNode does not support modification')
self.reset_layer(layer)
self._layers.remove(layer) | def drop_layer(self, layer) | Removes the named layer and the value associated with it from the node.
Parameters
----------
layer : str
Name of the layer to drop.
Raises
------
TypeError
If the node is frozen
KeyError
If the named layer does not exist | 8.404955 | 7.524879 | 1.116955 |
if self._frozen:
raise TypeError('Frozen ConfigNode does not support modification')
if layer in self._values:
del self._values[layer] | def reset_layer(self, layer) | Removes any value and metadata associated with the named layer.
Parameters
----------
layer : str
Name of the layer to reset.
Raises
------
TypeError
If the node is frozen
KeyError
If the named layer does not exist | 8.007431 | 6.053699 | 1.322733 |
self.__dict__['_frozen'] = True
for child in self._children.values():
child.freeze() | def freeze(self) | Causes the ConfigTree to become read only.
This is useful for loading and then freezing configurations that should not be modified at runtime. | 4.671229 | 4.962743 | 0.94126 |
if name not in self._children:
if self._frozen:
raise KeyError(name)
self._children[name] = ConfigTree(layers=self._layers)
child = self._children[name]
if isinstance(child, ConfigNode):
return child.get_value(layer)
else:
return child | def get_from_layer(self, name, layer=None) | Get a configuration value from the named layer.
Parameters
----------
name : str
The name of the value to retrieve
layer: str
The name of the layer to retrieve the value from. If it is not supplied
then the outermost layer in which the key is defined will be used. | 3.877659 | 3.802012 | 1.019897 |
if self._frozen:
raise TypeError('Frozen ConfigTree does not support assignment')
if isinstance(value, dict):
if name not in self._children or not isinstance(self._children[name], ConfigTree):
self._children[name] = ConfigTree(layers=list(self._layers))
self._children[name].update(value, layer, source)
else:
if name not in self._children or not isinstance(self._children[name], ConfigNode):
self._children[name] = ConfigNode(list(self._layers))
child = self._children[name]
child.set_value(value, layer, source) | def _set_with_metadata(self, name, value, layer=None, source=None) | Set a value in the named layer with the given source.
Parameters
----------
name : str
The name of the value
value
The value to store
layer : str, optional
The name of the layer to store the value in. If none is supplied
then the value will be stored in the outermost layer.
source : str, optional
The source to attribute the value to.
Raises
------
TypeError
if the ConfigTree is frozen | 2.518978 | 2.421613 | 1.040207 |
if isinstance(data, dict):
self._read_dict(data, layer, source)
elif isinstance(data, ConfigTree):
# TODO: set this to parse the other config tree including layer and source info. Maybe.
self._read_dict(data.to_dict(), layer, source)
elif isinstance(data, str):
if data.endswith(('.yaml', '.yml')):
source = source if source else data
self._load(data, layer, source)
else:
try:
self._loads(data, layer, source)
except AttributeError:
raise ValueError("The string data should be a yaml-formatted string or a path to a .yaml/.yml file")
elif data is None:
pass
else:
raise ValueError(f"Update must be called with dictionary, string, or ConfigTree. "
f"You passed in {type(data)}") | def update(self, data: Union[Mapping, str, bytes], layer: str=None, source: str=None) | Adds additional data into the ConfigTree.
Parameters
----------
data :
source data
layer :
layer to load data into. If none is supplied the outermost one is used
source :
Source to attribute the values to
See Also
--------
read_dict | 4.256856 | 3.943052 | 1.079584 |
for k, v in data_dict.items():
self._set_with_metadata(k, v, layer, source) | def _read_dict(self, data_dict, layer=None, source=None) | Load a dictionary into the ConfigTree. If the dict contains nested dicts
then the values will be added recursively. See module docstring for example code.
Parameters
----------
data_dict : dict
source data
layer : str
layer to load data into. If none is supplied the outermost one is used
source : str
Source to attribute the values to | 4.454414 | 5.581613 | 0.798052 |
data_dict = yaml.full_load(data_string)
self._read_dict(data_dict, layer, source) | def _loads(self, data_string, layer=None, source=None) | Load data from a yaml formatted string.
Parameters
----------
data_string : str
yaml formatted string. The root element of the document should be
an associative array
layer : str
layer to load data into. If none is supplied the outermost one is used
source : str
Source to attribute the values to | 4.232496 | 5.755999 | 0.735319 |
if hasattr(f, 'read'):
self._loads(f.read(), layer=layer, source=source)
else:
with open(f) as f:
self._loads(f.read(), layer=layer, source=source) | def _load(self, f, layer=None, source=None) | Load data from a yaml formatted file.
Parameters
----------
f : str or file like object
If f is a string then it is interpreted as a path to the file to load
If it is a file like object then data is read directly from it.
layer : str
layer to load data into. If none is supplied the outermost one is used
source : str
Source to attribute the values to | 2.190146 | 2.581348 | 0.84845 |
if name in self._children:
return self._children[name].metadata()
else:
head, _, tail = name.partition('.')
if head in self._children:
return self._children[head].metadata(key=tail)
else:
raise KeyError(name) | def metadata(self, name) | Return value and metadata associated with the named value
Parameters
----------
name : str
name to retrieve. If the name contains '.'s it will be retrieved recursively
Raises
------
KeyError
if name is not defined in the ConfigTree | 3.007869 | 3.295599 | 0.912693 |
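The dotted-key recursion in `metadata` follows a common pattern: try the full key first, then split on the first `.` and recurse one level down. A standalone sketch with plain dicts (`get_nested` is an illustrative name):

```python
def get_nested(tree, name):
    # Try the full key first, then split on '.' and recurse one level.
    if name in tree:
        return tree[name]
    head, _, tail = name.partition('.')
    if head in tree and tail:
        return get_nested(tree[head], tail)
    raise KeyError(name)


config = {"db": {"host": "localhost", "port": 5432}}
```

`get_nested(config, "db.port")` descends through `"db"` and then resolves `"port"` in the nested dict.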
self._reset_layer(layer, [k.split('.') for k in preserve_keys], prefix=[]) | def reset_layer(self, layer, preserve_keys=()) | Removes any value and metadata associated with the named layer.
Parameters
----------
layer : str
Name of the layer to reset.
preserve_keys : list or tuple
A list of keys to skip while removing data
Raises
------
TypeError
If the node is frozen
KeyError
If the named layer does not exist | 11.007118 | 17.38706 | 0.633064 |
if self._frozen:
raise TypeError('Frozen ConfigTree does not support modification')
for child in self._children.values():
child.drop_layer(layer)
self._layers.remove(layer) | def drop_layer(self, layer) | Removes the named layer and the value associated with it from the node.
Parameters
----------
layer : str
Name of the layer to drop.
Raises
------
TypeError
If the node is frozen
KeyError
If the named layer does not exist | 5.373329 | 4.824787 | 1.113692 |
unused = set()
for k, c in self._children.items():
if isinstance(c, ConfigNode):
if not c.has_been_accessed():
unused.add(k)
else:
for ck in c.unused_keys():
unused.add(k+'.'+ck)
return unused | def unused_keys(self) | Lists all keys which are present in the ConfigTree but which have not been accessed. | 3.639158 | 2.490697 | 1.461101 |
schema_filename = schema_filename or DEFAULT_CATALOG_SCHEMA_FILENAME
schema_dir = schema_dir or ABSOLUTE_SCHEMA_DIR
schema_path = os.path.join(schema_dir, schema_filename)
schema = readers.read_json(schema_path)
# Per https://github.com/Julian/jsonschema/issues/98
# this allows resolving local references to other schemas.
if platform.system() == 'Windows':
base_uri = "file:///" + schema_path.replace("\\", "/")
else:
base_uri = "file://" + schema_path
resolver = jsonschema.RefResolver(base_uri=base_uri, referrer=schema)
format_checker = jsonschema.FormatChecker()
validator = jsonschema.Draft4Validator(
schema=schema, resolver=resolver, format_checker=format_checker)
return validator | def create_validator(schema_filename=None, schema_dir=None) | Creates the validator needed to initialize a DataJson object.
To resolve inter-schema references, a Validator requires a RefResolver
to be specified with the (absolute) base directory and the file from
which the directory is referenced.
To validate formats, a Validator requires a FormatChecker to be
provided explicitly. Currently the library default,
jsonschema.FormatChecker(), is used.
Args:
schema_filename (str): Name of the file containing the "master"
validation schema.
schema_dir (str): (Absolute) directory where the master validation
schema and its references, if any, are located.
Returns:
Draft4Validator: A JSONSchema Draft #4 validator. The validator is
created with a RefResolver that resolves references of
`schema_filename` within `schema_dir`. | 2.697798 | 2.607422 | 1.034661 |
catalog = readers.read_catalog(catalog)
if not validator:
if hasattr(catalog, "validator"):
validator = catalog.validator
else:
validator = create_validator()
jsonschema_res = validator.is_valid(catalog)
custom_errors = iter_custom_errors(catalog)
return jsonschema_res and len(list(custom_errors)) == 0 | def is_valid_catalog(catalog, validator=None) | Validates that a `data.json` file complies with the defined schema.
Checks that the data.json has all the required fields and that both
required and optional fields follow the structure defined in the
schema.
Args:
catalog (str or dict): Catalog (dict, JSON or XLSX) to be validated.
Returns:
bool: True if the data.json complies with the schema, otherwise False. | 4.114791 | 4.851897 | 0.848079 |
try:
# check that the ids of the specific taxonomy are not repeated
if "themeTaxonomy" in catalog:
theme_ids = [theme["id"] for theme in catalog["themeTaxonomy"]]
dups = _find_dups(theme_ids)
if len(dups) > 0:
yield ce.ThemeIdRepeated(dups)
# check that a dataset does not use a theme missing from the taxonomy
# TODO: still to be implemented
# check that a dataset does not use a theme whose id is repeated in the taxonomy
# TODO: still to be implemented
# check that the extensions of fileName, downloadURL and format are
# consistent
for dataset_idx, dataset in enumerate(catalog["dataset"]):
for distribution_idx, distribution in enumerate(
dataset["distribution"]):
for attribute in ['downloadURL', 'fileName']:
if not format_matches_extension(distribution, attribute):
yield ce.ExtensionError(dataset_idx, distribution_idx,
distribution, attribute)
# check that there are no duplicates among the distributions'
# downloadURLs
# urls = []
# for dataset in catalog["dataset"]:
# urls += [distribution['downloadURL']
# for distribution in dataset['distribution']
# if distribution.get('downloadURL')]
# dups = _find_dups(urls)
# if len(dups) > 0:
# yield ce.DownloadURLRepetitionError(dups)
# check that the distributions' URLs are reachable
# needs review
# for distribution in catalog.distributions:
# url = distribution.get("downloadURL")
# status_code = requests.head(url).status_code
# if status_code != 200:
# yield ce.DownloadURLBrokenError(
# distribution.get("identifier"), url, status_code)
except Exception:
logger.exception("Validation error.") | def iter_custom_errors(catalog) | Performs validations without using jsonschema.
This function holds blocks of Python code that perform validations too
complicated or impossible to express with jsonschema. | 3.606358 | 3.540058 | 1.018728 |
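`iter_custom_errors` relies on a `_find_dups` helper whose body is not shown in this dump; a plausible stdlib sketch with `collections.Counter` (the implementation here is an assumption, not necessarily the library's actual code):

```python
from collections import Counter


def find_dups(items):
    # Keep every value that appears more than once, one copy each.
    return [item for item, count in Counter(items).items() if count > 1]


# Mirrors the themeTaxonomy check: repeated theme ids become errors.
theme_ids = ["econ", "salud", "econ", "educ", "salud"]
dups = find_dups(theme_ids)
```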
# builds a list of dicts to be dumped into a table (catalog)
rows_catalog = []
validation_result = {
"catalog_title": response["error"]["catalog"]["title"],
"catalog_status": response["error"]["catalog"]["status"],
}
for error in response["error"]["catalog"]["errors"]:
catalog_result = dict(validation_result)
catalog_result.update({
"catalog_error_message": error["message"],
"catalog_error_location": ", ".join(error["path"]),
})
rows_catalog.append(catalog_result)
if len(response["error"]["catalog"]["errors"]) == 0:
catalog_result = dict(validation_result)
catalog_result.update({
"catalog_error_message": None,
"catalog_error_location": None
})
rows_catalog.append(catalog_result)
# builds a list of dicts to be dumped into a table (dataset)
rows_dataset = []
for dataset in response["error"]["dataset"]:
validation_result = {
"dataset_title": dataset["title"],
"dataset_identifier": dataset["identifier"],
"dataset_list_index": dataset["list_index"],
"dataset_status": dataset["status"]
}
for error in dataset["errors"]:
dataset_result = dict(validation_result)
dataset_result.update({
"dataset_error_message": error["message"],
"dataset_error_location": error["path"][-1]
})
rows_dataset.append(dataset_result)
if len(dataset["errors"]) == 0:
dataset_result = dict(validation_result)
dataset_result.update({
"dataset_error_message": None,
"dataset_error_location": None
})
rows_dataset.append(dataset_result)
return {"catalog": rows_catalog, "dataset": rows_dataset} | def _catalog_validation_to_list(response) | Formats a catalog validation as two lists of errors.
One list of errors for "catalog" and another for "dataset". | 1.877625 | 1.795614 | 1.045673 |
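The catalog half of the flattening above follows a simple pattern: one base dict of catalog metadata, one row per error, and a single row with `None` fields when there are no errors. A self-contained sketch of that pattern (the response shape is taken from the code above):

```python
def flatten_catalog_errors(response):
    """Minimal sketch of the catalog-level row building shown above."""
    base = {
        "catalog_title": response["error"]["catalog"]["title"],
        "catalog_status": response["error"]["catalog"]["status"],
    }
    errors = response["error"]["catalog"]["errors"]
    if not errors:
        # no errors: emit a single row with empty error fields
        return [dict(base, catalog_error_message=None,
                     catalog_error_location=None)]
    return [
        dict(base,
             catalog_error_message=error["message"],
             catalog_error_location=", ".join(error["path"]))
        for error in errors
    ]


response = {"error": {"catalog": {
    "title": "Demo", "status": "ERROR",
    "errors": [{"message": "missing field",
                "path": ["catalog", "publisher"]}],
}}}
```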
if attribute in distribution and "format" in distribution:
if "/" in distribution['format']:
possible_format_extensions = mimetypes.guess_all_extensions(
distribution['format'])
else:
possible_format_extensions = ['.' + distribution['format'].lower()]
file_name = urlparse(distribution[attribute]).path
extension = os.path.splitext(file_name)[-1].lower()
if attribute == 'downloadURL' and not extension:
return True
# some extensions are exempt because they mask other formats
if extension.lower().replace(".", "") in EXTENSIONS_EXCEPTIONS:
return True
if extension not in possible_format_extensions:
return False
return True | def format_matches_extension(distribution, attribute) | Checks whether an extension could correspond to a given format. | 4.680861 | 4.062706 | 1.152154 |
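For formats without a MIME slash, the logic above reduces to comparing the URL's extension against `'.' + format.lower()`, with two escape hatches: a missing extension on a `downloadURL` passes, and so do exempt extensions. A sketch of that branch — note that `EXTENSIONS_EXCEPTIONS` is not shown in this excerpt, so the values below are assumed for illustration:

```python
import os
from urllib.parse import urlparse

# assumed values; the real EXTENSIONS_EXCEPTIONS constant is not in this excerpt
EXTENSIONS_EXCEPTIONS = ["zip", "rar"]


def extension_matches(url, fmt, attribute="downloadURL"):
    """Sketch of format_matches_extension for non-MIME formats like 'CSV'."""
    possible = ["." + fmt.lower()]
    extension = os.path.splitext(urlparse(url).path)[-1].lower()
    if attribute == "downloadURL" and not extension:
        return True  # a downloadURL without an extension is not penalized
    if extension.lstrip(".") in EXTENSIONS_EXCEPTIONS:
        return True  # exempt extensions may mask other formats
    return extension in possible
```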
res = validate_catalog(catalog, only_errors=True, fmt="list",
export_path=None, validator=None)
print("")
print("=== CATALOG-level errors ===")
pprint(res["catalog"])
print("")
print("=== DATASET-level errors ===")
pprint(res["dataset"]) | def main(catalog) | Validates a catalog from the command line.
Args:
catalog (str): Local path or URL of a catalog. | 7.278668 | 7.325413 | 0.993619 |
dataset = catalog.get_dataset(dataset_origin_identifier)
ckan_portal = RemoteCKAN(portal_url, apikey=apikey)
package = map_dataset_to_package(catalog, dataset, owner_org, catalog_id,
demote_superThemes, demote_themes)
# Get license id
if dataset.get('license'):
license_list = ckan_portal.call_action('license_list')
try:
ckan_license = next(
license_item for license_item in license_list if
license_item['title'] == dataset['license'] or
license_item['url'] == dataset['license']
)
package['license_id'] = ckan_license['id']
except StopIteration:
package['license_id'] = 'notspecified'
else:
package['license_id'] = 'notspecified'
try:
pushed_package = ckan_portal.call_action(
'package_update', data_dict=package)
except NotFound:
pushed_package = ckan_portal.call_action(
'package_create', data_dict=package)
with resource_files_download(catalog, dataset.get('distribution', []),
download_strategy) as resource_files:
resources_update(portal_url, apikey, dataset.get('distribution', []),
resource_files, generate_new_access_url, catalog_id)
ckan_portal.close()
return pushed_package['id'] | def push_dataset_to_ckan(catalog, owner_org, dataset_origin_identifier,
portal_url, apikey, catalog_id=None,
demote_superThemes=True, demote_themes=True,
download_strategy=None, generate_new_access_url=None) | Writes a dataset's metadata to the portal passed as a parameter.
Args:
catalog (DataJson): The source catalog containing the dataset.
owner_org (str): The organization the dataset belongs to.
dataset_origin_identifier (str): The id of the dataset to
federate.
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to create
or update the dataset.
catalog_id (str): The prefix prepended to the dataset id in the
target catalog.
demote_superThemes (bool): If true, the ids of the dataset's
superThemes are propagated as groups.
demote_themes (bool): If true, the labels of the dataset's
themes become tags; otherwise they are passed as groups.
download_strategy (callable): A function (catalog, distribution)
-> bool. For distributions it evaluates to True, the resource
at downloadURL is downloaded and uploaded to the target
portal. By default no distribution is uploaded.
generate_new_access_url (list): Ids of the distributions whose
accessURL will be regenerated in the target portal. For the
rest, the portal keeps the value passed in the DataJson.
Returns:
str: The dataset id in the target catalog. | 2.488294 | 2.604848 | 0.955255 |
ckan_portal = RemoteCKAN(portal_url, apikey=apikey)
result = []
generate_new_access_url = generate_new_access_url or []
for distribution in distributions:
updated = False
resource_id = catalog_id + '_' + distribution['identifier']\
if catalog_id else distribution['identifier']
fields = {'id': resource_id}
if distribution['identifier'] in generate_new_access_url:
fields.update({'accessURL': ''})
updated = True
if distribution['identifier'] in resource_files:
fields.update({'resource_type': 'file.upload',
'upload':
open(resource_files[distribution['identifier']],
'rb')
})
updated = True
if updated:
try:
pushed = ckan_portal.action.resource_patch(**fields)
result.append(pushed['id'])
except CKANAPIError as e:
logger.exception(
"Error uploading resource {} to distribution {}: {}"
.format(resource_files[distribution['identifier']],
distribution['identifier'], str(e)))
return result | def resources_update(portal_url, apikey, distributions,
resource_files, generate_new_access_url=None,
catalog_id=None) | Uploads local files to their corresponding distributions in the portal
passed as a parameter.
Args:
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to create
or update the dataset.
distributions (list): List of candidate distributions to
update.
resource_files (dict): Dictionary with
distribution_id:path_to_resource entries to upload.
generate_new_access_url (list): List of distribution ids whose
accessURL will be updated with the values generated by the
target portal.
catalog_id (str): Prepended to the resource id in order to find
it before uploading.
Returns:
list: the ids of the modified resources | 3.065748 | 2.800487 | 1.09472 |
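The resource lookup in `resources_update` hinges on one convention: when a `catalog_id` is present, it is prepended to the distribution identifier with an underscore. A tiny standalone version of that id construction:

```python
def build_resource_id(distribution_identifier, catalog_id=None):
    """How resources_update locates a resource in the target portal:
    the catalog_id, when present, is prepended to the distribution id."""
    if catalog_id:
        return "{}_{}".format(catalog_id, distribution_identifier)
    return distribution_identifier
```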
ckan_portal = RemoteCKAN(portal_url, apikey=apikey)
identifiers = []
datajson_filters = filter_in or filter_out or only_time_series
if datajson_filters:
identifiers += get_datasets(
portal_url + '/data.json',
filter_in=filter_in, filter_out=filter_out,
only_time_series=only_time_series, meta_field='identifier'
)
if organization:
query = 'organization:"' + organization + '"'
search_result = ckan_portal.call_action('package_search', data_dict={
'q': query, 'rows': 500, 'start': 0})
org_identifiers = [dataset['id']
for dataset in search_result['results']]
start = 500
while search_result['count'] > start:
search_result = ckan_portal.call_action(
'package_search', data_dict={
'q': query, 'rows': 500, 'start': start})
org_identifiers += [dataset['id']
for dataset in search_result['results']]
start += 500
if datajson_filters:
identifiers = set(identifiers).intersection(set(org_identifiers))
else:
identifiers = org_identifiers
for identifier in identifiers:
ckan_portal.call_action('dataset_purge', data_dict={'id': identifier}) | def remove_datasets_from_ckan(portal_url, apikey, filter_in=None,
filter_out=None, only_time_series=False,
organization=None) | Deletes datasets from the portal passed as a parameter.
Args:
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to delete
the datasets.
filter_in (dict): Positive filter dictionary, similar to the one
in search.get_datasets.
filter_out (dict): Negative filter dictionary, similar to the
one in search.get_datasets.
only_time_series (bool): Only select datasets that have time
series resources.
organization (str): Only select datasets belonging to the given
organization.
Returns:
None | 2.206173 | 2.316457 | 0.952391 |
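The organization branch above pages through `package_search` 500 rows at a time, advancing `start` until it passes the reported `count`. That pagination pattern can be sketched against a fake search callable (a stand-in for `ckan_portal.call_action('package_search', ...)`):

```python
def collect_all_ids(search, page_size=500):
    """Sketch of the package_search pagination loop above.

    `search(start=..., rows=...)` is a hypothetical stand-in for the
    CKAN action call; it must return {'count': int, 'results': [...]}.
    """
    result = search(start=0, rows=page_size)
    identifiers = [dataset["id"] for dataset in result["results"]]
    start = page_size
    while result["count"] > start:
        result = search(start=start, rows=page_size)
        identifiers += [dataset["id"] for dataset in result["results"]]
        start += page_size
    return identifiers


# fake backend with 1200 "datasets" to exercise three pages
data = [{"id": str(i)} for i in range(1200)]


def fake_search(start, rows):
    return {"count": len(data), "results": data[start:start + rows]}
```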
ckan_portal = RemoteCKAN(portal_url, apikey=apikey)
theme = catalog.get_theme(identifier=identifier, label=label)
group = map_theme_to_group(theme)
pushed_group = ckan_portal.call_action('group_create', data_dict=group)
return pushed_group['name'] | def push_theme_to_ckan(catalog, portal_url, apikey,
identifier=None, label=None) | Writes a theme's metadata to the portal passed as a parameter.
Args:
catalog (DataJson): The source catalog containing the theme.
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to create
or update the dataset.
identifier (str): The identifier used to find the theme in the
taxonomy.
label (str): The label used to find the theme in the taxonomy.
Returns:
str: The name of the theme in the target catalog. | 3.076289 | 3.80268 | 0.808979 |
return push_dataset_to_ckan(catalog, owner_org,
dataset_origin_identifier, portal_url,
apikey, None, False, False, download_strategy,
generate_new_access_url) | def restore_dataset_to_ckan(catalog, owner_org, dataset_origin_identifier,
portal_url, apikey, download_strategy=None,
generate_new_access_url=None) | Restores a dataset's metadata to the portal passed as a parameter.
Args:
catalog (DataJson): The source catalog containing the dataset.
owner_org (str): The organization the dataset belongs to.
dataset_origin_identifier (str): The id of the dataset to
restore.
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to create
or update the dataset.
download_strategy (callable): A function (catalog, distribution)
-> bool. For distributions it evaluates to True, the resource
at downloadURL is downloaded and uploaded to the target
portal. By default no distribution is uploaded.
generate_new_access_url (list): Ids of the distributions whose
accessURL will be regenerated in the target portal. For the
rest, the portal keeps the value passed in the DataJson.
Returns:
str: The id of the restored dataset. | 2.494988 | 3.204886 | 0.778495 |
return push_dataset_to_ckan(catalog, owner_org, dataset_origin_identifier,
portal_url, apikey, catalog_id=catalog_id,
download_strategy=download_strategy) | def harvest_dataset_to_ckan(catalog, owner_org, dataset_origin_identifier,
portal_url, apikey, catalog_id,
download_strategy=None) | Federates a dataset's metadata to the portal passed as a parameter.
Args:
catalog (DataJson): The source catalog containing the dataset.
owner_org (str): The organization the dataset belongs to.
dataset_origin_identifier (str): The id of the dataset to
federate.
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to create
or update the dataset.
catalog_id (str): The id prepended to the dataset and its
resources.
download_strategy (callable): A function (catalog, distribution)
-> bool. For distributions it evaluates to True, the resource
at downloadURL is downloaded and uploaded to the target
portal. By default no distribution is uploaded.
Returns:
str: The id of the federated dataset. | 2.128142 | 2.594772 | 0.820165 |
# Avoid treating an empty dataset list as a falsy value
harvested = []
if dataset_list is None:
try:
dataset_list = [ds['identifier'] for ds in catalog.datasets]
except KeyError:
logger.exception('There are datasets without identifiers')
return harvested
owner_org = owner_org or catalog_id
errors = {}
for dataset_id in dataset_list:
try:
harvested_id = harvest_dataset_to_ckan(catalog, owner_org,
dataset_id, portal_url,
apikey, catalog_id,
download_strategy)
harvested.append(harvested_id)
except Exception as e:
msg = "Error federating catalog: %s, dataset: %s to portal: %s\n"\
% (catalog_id, dataset_id, portal_url)
msg += str(e)
logger.error(msg)
errors[dataset_id] = str(e)
return harvested, errors | def harvest_catalog_to_ckan(catalog, portal_url, apikey, catalog_id,
dataset_list=None, owner_org=None,
download_strategy=None) | Federates a catalog's datasets to the portal passed as a parameter.
Args:
catalog (DataJson): The source catalog being federated.
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to create
or update the datasets.
catalog_id (str): The prefix prepended to the dataset ids in the
target catalog.
dataset_list (list(str)): The ids of the datasets to federate.
If no list is passed, all datasets are federated.
owner_org (str): The organization the datasets belong to. If not
passed, catalog_id is used.
download_strategy (callable): A function (catalog, distribution)
-> bool. For distributions it evaluates to True, the resource
at downloadURL is downloaded and uploaded to the target
portal. By default no distribution is uploaded.
Returns:
tuple: The list of federated dataset ids and a dict mapping
failed dataset ids to their error messages. | 2.855575 | 2.75243 | 1.037474 |
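The loop in `harvest_catalog_to_ckan` is deliberately fault-tolerant: one failing dataset is recorded in `errors` without aborting the rest of the harvest. The pattern, isolated with a stand-in `push_one` callable in place of `harvest_dataset_to_ckan`:

```python
def harvest_all(dataset_ids, push_one):
    """Sketch of the per-dataset error handling above: failures are
    collected per dataset id and the loop keeps going."""
    harvested, errors = [], {}
    for dataset_id in dataset_ids:
        try:
            harvested.append(push_one(dataset_id))
        except Exception as e:
            errors[dataset_id] = str(e)
    return harvested, errors
```

A quick run with a callable that fails on one id shows the other datasets still get through.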
ckan_portal = RemoteCKAN(portal_url, apikey=apikey)
existing_themes = ckan_portal.call_action('group_list')
new_themes = [theme['id'] for theme in catalog[
'themeTaxonomy'] if theme['id'] not in existing_themes]
pushed_names = []
for new_theme in new_themes:
name = push_theme_to_ckan(
catalog, portal_url, apikey, identifier=new_theme)
pushed_names.append(name)
return pushed_names | def push_new_themes(catalog, portal_url, apikey) | Takes a catalog and writes the taxonomy themes that are not yet
present in the portal.
Args:
catalog (DataJson): The source catalog containing the
taxonomy.
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to create
or update the themes.
Returns:
list(str): The names of the created themes. | 2.932934 | 3.103695 | 0.944981 |
ckan_portal = RemoteCKAN(portal_url)
return ckan_portal.call_action('organization_show',
data_dict={'id': org_id}) | def get_organization_from_ckan(portal_url, org_id) | Takes a portal URL and an id, and returns the requested organization.
Args:
portal_url (str): The URL of the source CKAN portal.
org_id (str): The id of the organization to look up.
Returns:
dict: Dictionary with the organization's information. | 3.207804 | 4.878435 | 0.657548 |
created = []
for node in org_tree:
pushed_org = push_organization_to_ckan(portal_url,
apikey,
node,
parent=parent)
if pushed_org['success']:
pushed_org['children'] = push_organization_tree_to_ckan(
portal_url, apikey, node['children'], parent=node['name'])
created.append(pushed_org)
return created | def push_organization_tree_to_ckan(portal_url, apikey, org_tree, parent=None) | Takes an organization tree and replicates it in the target
portal.
Args:
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to create
the organizations.
org_tree (list): List of dictionaries with the data of the
organizations to create.
parent (str): The name field of the parent organization.
Returns:
(list): The tree of visited organizations, each with a status
detailing whether its creation succeeded or not. | 2.336669 | 2.358572 | 0.990714 |
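The recursion in `push_organization_tree_to_ckan` has one important property: children are only pushed when their parent was created successfully, and each child is linked to its parent by the parent's `name`. A self-contained sketch with a stand-in `push_org` callable in place of the CKAN call:

```python
def push_tree(org_tree, push_org, parent=None):
    """Sketch of push_organization_tree_to_ckan: recurse into a node's
    children only when that node was pushed successfully."""
    created = []
    for node in org_tree:
        pushed = push_org(node, parent)
        if pushed["success"]:
            pushed["children"] = push_tree(node.get("children", []),
                                           push_org, parent=node["name"])
        created.append(pushed)
    return created


def fake_push(node, parent):
    # hypothetical stand-in for push_organization_to_ckan
    return {"name": node["name"], "parent": parent, "success": True}


tree = [{"name": "a", "children": [{"name": "b", "children": []}]}]
```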
portal = RemoteCKAN(portal_url, apikey=apikey)
if parent:
organization['groups'] = [{'name': parent}]
try:
pushed_org = portal.call_action('organization_create',
data_dict=organization)
pushed_org['success'] = True
except Exception as e:
logger.exception('An error occurred while creating organization {}: {}'
.format(organization['title'], str(e)))
pushed_org = {'name': organization.get('name'), 'success': False}
return pushed_org | def push_organization_to_ckan(portal_url, apikey, organization, parent=None) | Takes an organization and creates it in the target portal.
Args:
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to create
the organization.
organization (dict): Data of the organization to create.
parent (str): The name field of the parent organization.
Returns:
(dict): The submitted organization's dictionary, with a status
detailing whether its creation succeeded or not. | 3.216917 | 3.450665 | 0.93226 |
portal = RemoteCKAN(portal_url, apikey=apikey)
try:
portal.call_action('organization_purge',
data_dict={'id': organization_id})
except Exception as e:
logger.exception('An error occurred while deleting organization {}: {}'
.format(organization_id, str(e))) | def remove_organization_from_ckan(portal_url, apikey, organization_id) | Takes an organization id and purges it from the target portal.
Args:
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to delete
the organization.
organization_id (str): Id or name of the organization to delete.
Returns:
None. | 3.215241 | 3.361009 | 0.95663 |
for org in organization_list:
remove_organization_from_ckan(portal_url, apikey, org) | def remove_organizations_from_ckan(portal_url, apikey, organization_list) | Takes a list of organization ids and purges them from the target portal.
Args:
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to delete
the organizations.
organization_list (list): Ids or names of the organizations to
delete.
Returns:
None. | 2.034202 | 3.268094 | 0.622443 |
push_new_themes(catalog, portal_url, apikey)
restored = []
if dataset_list is None:
try:
dataset_list = [ds['identifier'] for ds in catalog.datasets]
except KeyError:
logger.exception('There are datasets without identifiers')
return restored
for dataset_id in dataset_list:
try:
restored_id = restore_dataset_to_ckan(catalog, owner_org,
dataset_id, portal_url,
apikey, download_strategy,
generate_new_access_url)
restored.append(restored_id)
except (CKANAPIError, KeyError, AttributeError) as e:
logger.exception('An error occurred while restoring dataset {}: {}'
.format(dataset_id, str(e)))
return restored | def restore_organization_to_ckan(catalog, owner_org, portal_url, apikey,
dataset_list=None, download_strategy=None,
generate_new_access_url=None) | Restores a catalog organization's datasets to the portal passed
as a parameter. If the DataJson contains themes that are missing
from the CKAN portal, it creates them.
Args:
catalog (DataJson): The source catalog being restored.
owner_org (str): The organization the datasets belong to.
portal_url (str): The URL of the target CKAN portal.
apikey (str): The apikey of a user with permissions to create
or update the datasets.
dataset_list (list(str)): The ids of the datasets to restore.
If no list is passed, all datasets are restored.
download_strategy (callable): A function (catalog, distribution)
-> bool. For distributions it evaluates to True, the resource
at downloadURL is downloaded and uploaded to the target
portal. By default no distribution is uploaded.
generate_new_access_url (list): Ids of the distributions whose
accessURL will be regenerated in the target portal. For the
rest, the portal keeps the value passed in the DataJson.
Returns:
list(str): The list of uploaded dataset ids. | 2.642062 | 2.718641 | 0.971832 |
catalog['homepage'] = catalog.get('homepage') or origin_portal_url
res = {}
origin_portal = RemoteCKAN(origin_portal_url)
try:
org_list = origin_portal.action.organization_list()
except CKANAPIError as e:
logger.exception(
'An error occurred while listing the organizations of portal {}: {}'
.format(origin_portal_url, str(e)))
return res
for org in org_list:
print("Restoring organization {}".format(org))
response = origin_portal.action.organization_show(
id=org, include_datasets=True)
datasets = [package['id'] for package in response['packages']]
pushed_datasets = restore_organization_to_ckan(
catalog, org, destination_portal_url, apikey,
dataset_list=datasets, download_strategy=download_strategy,
generate_new_access_url=generate_new_access_url
)
res[org] = pushed_datasets
return res | def restore_catalog_to_ckan(catalog, origin_portal_url, destination_portal_url,
apikey, download_strategy=None,
generate_new_access_url=None) | Restores an origin catalog's datasets to the portal passed as a
parameter. If the DataJson contains themes that are missing from
the CKAN portal, it creates them.
Args:
catalog (DataJson): The source catalog being restored.
origin_portal_url (str): The URL of the source CKAN portal.
destination_portal_url (str): The URL of the target CKAN
portal.
apikey (str): The apikey of a user with permissions to create
or update the datasets.
download_strategy (callable): A function
(catalog, distribution) -> bool. For distributions it
evaluates to True, the resource at downloadURL is
downloaded and uploaded to the target portal. By default
no distribution is uploaded.
generate_new_access_url (list): Ids of the distributions whose
accessURL will be regenerated in the target portal. For the
rest, the portal keeps the value passed in the DataJson.
Returns:
dict: Dictionary keyed by organization, with the list of
dataset ids uploaded to each organization as values. | 2.894881 | 2.853836 | 1.014382 |
config = build_simulation_configuration()
config.update(input_config)
plugin_manager = PluginManager(plugin_config)
return InteractiveContext(config, components, plugin_manager) | def initialize_simulation(components: List, input_config: Mapping=None,
plugin_config: Mapping=None) -> InteractiveContext | Construct a simulation from a list of components, component
configuration, and a plugin configuration.
The simulation context returned by this method still needs to be setup by
calling its setup method. It is mostly useful for testing and debugging.
Parameters
----------
components
A list of initialized simulation components. Corresponds to the
components block of a model specification.
input_config
A nested dictionary with any additional simulation configuration
information needed. Corresponds to the configuration block of a model
specification.
plugin_config
A dictionary containing a description of any simulation plugins to
include in the simulation. If you're using this argument, you're either
deep in the process of simulation development or the maintainers have
done something wrong. Corresponds to the plugins block of a model
specification.
Returns
-------
An initialized (but not set up) simulation context. | 4.231943 | 6.217498 | 0.68065 |
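`initialize_simulation` layers configuration: `build_simulation_configuration` supplies defaults, then the caller's `input_config` is applied on top via `update`. A dict-based stand-in illustrating that layering (vivarium's real configuration object is a nested tree, not a plain dict; this is only the precedence idea):

```python
def layered_config(defaults, overrides=None):
    """Illustrates the precedence in initialize_simulation: defaults
    first, then user-supplied input_config wins on conflicts.

    Plain-dict stand-in, not vivarium's configuration tree.
    """
    config = dict(defaults)
    config.update(overrides or {})
    return config
```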
simulation = initialize_simulation(components, input_config, plugin_config)
simulation.setup()
return simulation | def setup_simulation(components: List, input_config: Mapping=None,
plugin_config: Mapping=None) -> InteractiveContext | Construct a simulation from a list of components and call its setup
method.
Parameters
----------
components
A list of initialized simulation components. Corresponds to the
components block of a model specification.
input_config
A nested dictionary with any additional simulation configuration
information needed. Corresponds to the configuration block of a model
specification.
plugin_config
A dictionary containing a description of any simulation plugins to
include in the simulation. If you're using this argument, you're either
deep in the process of simulation development or the maintainers have
done something wrong. Corresponds to the plugins block of a model
specification.
Returns
-------
A simulation context that is setup and ready to run. | 5.042899 | 7.268949 | 0.693759 |
model_specification = build_model_specification(model_specification_file)
plugin_config = model_specification.plugins
component_config = model_specification.components
simulation_config = model_specification.configuration
plugin_manager = PluginManager(plugin_config)
component_config_parser = plugin_manager.get_plugin('component_configuration_parser')
components = component_config_parser.get_components(component_config)
return InteractiveContext(simulation_config, components, plugin_manager) | def initialize_simulation_from_model_specification(model_specification_file: str) -> InteractiveContext | Construct a simulation from a model specification file.
The simulation context returned by this method still needs to be setup by
calling its setup method. It is mostly useful for testing and debugging.
Parameters
----------
model_specification_file
The path to a model specification file.
Returns
-------
An initialized (but not set up) simulation context. | 3.109859 | 3.72827 | 0.834129 |
simulation = initialize_simulation_from_model_specification(model_specification_file)
simulation.setup()
return simulation | def setup_simulation_from_model_specification(model_specification_file: str) -> InteractiveContext | Construct a simulation from a model specification file and call its setup
method.
Parameters
----------
model_specification_file
The path to a model specification file.
Returns
-------
A simulation context that is setup and ready to run. | 4.119403 | 5.670702 | 0.726436 |
super().setup()
self._start_time = self.clock.time
self.initialize_simulants() | def setup(self) | Setup the simulation and initialize its population. | 10.767998 | 7.735951 | 1.391942 |
super().initialize_simulants()
self._initial_population = self.population.get_population(True) | def initialize_simulants(self) | Initialize this simulation's population. Should not be called
directly. | 6.912361 | 5.12751 | 1.348093 |
warnings.warn("This reset method is very crude. It should work for "
"many simple simulations, but we make no guarantees. In "
"particular, if you have components that manage their "
"own state in any way, this might not work.")
self.population._population = self._initial_population
self.clock._time = self._start_time | def reset(self) | Reset the simulation to its initial state. | 12.033056 | 10.062705 | 1.195807 |
return self.run_until(self.clock.stop_time, with_logging=with_logging) | def run(self, with_logging: bool=True) -> int | Run the simulation for the time duration specified in the
configuration.
Parameters
----------
with_logging
Whether or not to log the simulation steps. Only works in an ipython
environment.
Returns
-------
The number of steps the simulation took. | 8.408422 | 9.140856 | 0.919873 |
return self.run_until(self.clock.time + duration, with_logging=with_logging) | def run_for(self, duration: Timedelta, with_logging: bool=True) -> int | Run the simulation for the given time duration.
Parameters
----------
duration
The length of time to run the simulation for. Should be the same
type as the simulation clock's step size (usually a pandas
Timedelta).
with_logging
Whether or not to log the simulation steps. Only works in an ipython
environment.
Returns
-------
The number of steps the simulation took. | 5.056088 | 5.042626 | 1.00267 |
if not isinstance(end_time, type(self.clock.time)):
raise ValueError(f"Provided time must be an instance of {type(self.clock.time)}")
iterations = int(ceil((end_time - self.clock.time)/self.clock.step_size))
self.take_steps(number_of_steps=iterations, with_logging=with_logging)
assert self.clock.time - self.clock.step_size < end_time <= self.clock.time
return iterations | def run_until(self, end_time: Time, with_logging=True) -> int | Run the simulation until the provided end time.
Parameters
----------
end_time
The time to run the simulation until. The simulation will run until
its clock is greater than or equal to the provided end time.
with_logging
Whether or not to log the simulation steps. Only works in an ipython
environment.
Returns
-------
The number of steps the simulation took. | 3.213567 | 3.132933 | 1.025738 |
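The step count in `run_until` is the ceiling of the remaining time divided by the step size, so the clock always reaches or passes `end_time`. A standalone check of that arithmetic, using `datetime` objects as a stand-in for the pandas timestamps the simulation clock usually holds:

```python
import math
from datetime import datetime, timedelta


def steps_until(current_time, end_time, step_size):
    """Reproduces run_until's iteration count: the smallest whole number
    of steps that reaches or passes end_time."""
    return int(math.ceil((end_time - current_time) / step_size))
```

For example, covering 10 days in 3-day steps takes 4 steps, overshooting the end time by 2 days, which the assertion at the end of `run_until` permits.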
old_step_size = self.clock.step_size
if step_size is not None:
if not isinstance(step_size, type(self.clock.step_size)):
raise ValueError(f"Provided step size must be an instance of {type(self.clock.step_size)}")
self.clock._step_size = step_size
super().step()
self.clock._step_size = old_step_size | def step(self, step_size: Timedelta=None) | Advance the simulation one step.
Parameters
----------
step_size
An optional size of step to take. Must be the same type as the
simulation clock's step size (usually a pandas.Timedelta). | 2.652776 | 2.644299 | 1.003206 |
if not isinstance(number_of_steps, int):
raise ValueError('Number of steps must be an integer.')
if run_from_ipython() and with_logging:
for _ in log_progress(range(number_of_steps), name='Step'):
self.step(step_size)
else:
for _ in range(number_of_steps):
self.step(step_size) | def take_steps(self, number_of_steps: int=1, step_size: Timedelta=None, with_logging: bool=True) | Run the simulation for the given number of steps.
Parameters
----------
number_of_steps
The number of steps to take.
step_size
An optional size of step to take. Must be the same type as the
simulation clock's step size (usually a pandas.Timedelta).
with_logging
Whether or not to log the simulation steps. Only works in an ipython
environment. | 2.847781 | 2.619975 | 1.08695 |
return self.population.get_population(untracked) | def get_population(self, untracked: bool=False) -> pd.DataFrame | Get a copy of the population state table. | 8.921798 | 5.246425 | 1.700548 |
if event_type not in self.events:
raise ValueError(f'No event {event_type} in system.')
return self.events.get_listeners(event_type) | def get_listeners(self, event_type: str) -> List[Callable] | Get all listeners of a particular type of event. | 3.862578 | 3.173713 | 1.217053 |
if event_type not in self.events:
raise ValueError(f'No event {event_type} in system.')
return self.events.get_emitter(event_type) | def get_emitter(self, event_type: str) -> Callable | Get the callable that emits the given type of events. | 3.951651 | 3.251425 | 1.21536 |
return list(self.component_manager._components + self.component_manager._managers)
return self._lookup_table_manager.build_table(data, key_columns, parameter_columns, value_columns) | def build_table(self, data, key_columns=('sex',), parameter_columns=(['age', 'age_group_start', 'age_group_end'],
['year', 'year_start', 'year_end']),
value_columns=None) -> LookupTable | Construct a LookupTable from input data.
If data is a ``pandas.DataFrame``, an interpolation function of the specified
order will be calculated for each permutation of the set of key_columns.
The columns in parameter_columns will be used as parameters for the interpolation
functions which will estimate all remaining columns in the table.
If data is a number, time, list, or tuple, a scalar table will be constructed with
the values in data as the values in each column of the table, named according to
value_columns.
Parameters
----------
data : The source data which will be used to build the resulting LookupTable.
key_columns : [str]
Columns used to select between interpolation functions. These
should be the non-continuous variables in the data. For
example 'sex' in data about a population.
parameter_columns : [str]
The columns which contain the parameters to the interpolation functions.
These should be the continuous variables. For example 'age'
in data about a population.
value_columns : [str]
The data columns that will be in the resulting LookupTable. Columns to be
interpolated over if interpolation or the names of the columns in the scalar
table.
Returns
-------
LookupTable | 3.706764 | 5.530312 | 0.670263 |
param_edges = [p[1:] for p in parameter_columns if isinstance(p, (Tuple, List))]  # strip out the column name, keep only the start/end edge columns
# check no overlaps/gaps
for p in param_edges:
other_params = [p_ed[0] for p_ed in param_edges if p_ed != p]
if other_params:
sub_tables = data.groupby(list(other_params))
else:
sub_tables = {None: data}.items()
n_p_total = len(set(data[p[0]]))
for _, table in sub_tables:
param_data = table[[p[0], p[1]]].copy().sort_values(by=p[0])
start, end = param_data[p[0]].reset_index(drop=True), param_data[p[1]].reset_index(drop=True)
if len(set(start)) < n_p_total:
raise ValueError(f'You must provide a value for every combination of {parameter_columns}.')
if len(start) <= 1:
continue
for i in range(1, len(start)):
e = end[i-1]
s = start[i]
if e > s or s == start[i-1]:
raise ValueError(f'Parameter data must not contain overlaps. Parameter {p} '
f'contains overlapping data.')
if e < s:
raise NotImplementedError(f'Interpolation only supported for parameter columns '
f'with continuous bins. Parameter {p} contains '
f'non-continuous bins.') | def check_data_complete(data, parameter_columns) | For any parameters specified with edges, make sure edges
don't overlap and don't have any gaps. Assumes that edges are
specified with ends and starts overlapping (but one exclusive and
the other inclusive) so can check that end of previous == start
of current.
If multiple parameters, make sure all combinations of parameters
are present in data. | 4.005291 | 3.876679 | 1.033176 |
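The edge-validation logic above can be illustrated outside pandas. This is a minimal, hypothetical sketch (not part of the library) of the same invariant: consecutive bins must share an edge, so the end of one bin being greater than the next start is an overlap, and being smaller is a gap.

```python
def check_bins_contiguous(bins):
    """Validate (start, end) bins: consecutive bins must share an edge,
    with no overlaps and no gaps (end of one == start of the next)."""
    ordered = sorted(bins)
    for (s0, e0), (s1, e1) in zip(ordered, ordered[1:]):
        if e0 > s1:
            raise ValueError("overlapping bins: ({}, {}) and ({}, {})".format(s0, e0, s1, e1))
        if e0 < s1:
            raise ValueError("gap between bins: ({}, {}) and ({}, {})".format(s0, e0, s1, e1))

check_bins_contiguous([(0, 5), (5, 10), (10, 15)])  # contiguous: passes silently
```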
sub_dir_path = os.path.join(self.results_root, path)
os.makedirs(sub_dir_path, exist_ok=True)
self._directories[key] = sub_dir_path
return sub_dir_path | def add_sub_directory(self, key, path) | Adds a sub-directory to the results directory.
Parameters
----------
key: str
A look-up key for the directory path.
path: str
The relative path from the root of the results directory to the sub-directory.
Returns
-------
str:
The absolute path to the sub-directory. | 2.361924 | 2.355188 | 1.00286 |
path = os.path.join(self._directories[key], file_name)
extension = file_name.split('.')[-1]
if extension == 'yaml':
with open(path, 'w') as f:
yaml.dump(data, f)
elif extension == 'hdf':
# to_hdf breaks with categorical dtypes.
categorical_columns = data.dtypes[data.dtypes == 'category'].index
data.loc[:, categorical_columns] = data.loc[:, categorical_columns].astype('object')
# Writing to an hdf over and over balloons the file size, so write to a new file and move it over to avoid that.
data.to_hdf(path + "update", 'data')
if os.path.exists(path):
os.remove(path)
os.rename(path + "update", path)
else:
raise NotImplementedError(
f"Only 'yaml' and 'hdf' file types are supported. You requested {extension}") | def write_output(self, data, file_name, key=None) | Writes output data to disk.
Parameters
----------
data: pandas.DataFrame or dict
The data to write to disk.
file_name: str
The name of the file to write.
key: str, optional
The lookup key for the sub_directory to write results to, if any. | 4.239399 | 4.266361 | 0.99368 |
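The write-then-rename step in `write_output` is a general pattern worth isolating. The following is an illustrative sketch (the function name is hypothetical, not from the library) of writing to a sibling temp file and atomically replacing the target, so a crash mid-write never leaves a truncated file behind.

```python
import os

def atomic_write_bytes(path, data):
    """Write bytes to `path` via a temp file plus rename, so readers
    never observe a partially written file."""
    tmp = path + ".update"
    with open(tmp, "wb") as f:
        f.write(data)
    os.replace(tmp, path)  # atomic on POSIX; also overwrites on Windows
```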
path = os.path.join(self._directories[key], file_name)
shutil.copyfile(src_path, path) | def copy_file(self, src_path, file_name, key=None) | Copies a file unmodified to a location inside the ouput directory.
Parameters
----------
src_path: str
Path to the src file
file_name: str
name of the destination file | 3.554964 | 4.884506 | 0.727804 |
'''
Loads combined SAR format logfile in ASCII format.
:return: ``True`` if loading and parsing of file went fine, \
``False`` if it failed (at any point)
'''
daychunks = self.__split_file()
if (daychunks):
maxcount = len(self.__splitpointers)
for i in range(maxcount):
start = self.__splitpointers[i]
end = None
if (i < (maxcount - 1)):
end = self.__splitpointers[i + 1]
chunk = self.__get_chunk(start, end)
parser = sarparse.Parser()
cpu_usage, mem_usage, swp_usage, io_usage = \
parser._parse_file(parser._split_file(chunk))
self.__sarinfos[self.__get_part_date(chunk)] = {
"cpu": cpu_usage,
"mem": mem_usage,
"swap": swp_usage,
"io": io_usage
}
del cpu_usage, mem_usage, swp_usage, io_usage
del parser
return True
:return: ``True`` if loading and parsing of file went fine, \
``False`` if it failed (at any point) | 4.966671 | 3.209181 | 1.547644 |
'''
Gets chunk from the sar combo file, from start to end
:param start: where to start a pulled chunk
:type start: int.
:param end: where to end a pulled chunk
:type end: int.
:return: str.
'''
piece = False
if (self.__filename and os.access(self.__filename, os.R_OK)):
fhandle = None
try:
fhandle = os.open(self.__filename, os.O_RDONLY)
except OSError:
print("Couldn't open file %s" % self.__filename)
fhandle = None
if (fhandle):
try:
sarmap = mmap.mmap(fhandle, length=0, prot=mmap.PROT_READ)
except (TypeError, IndexError):
os.close(fhandle)
traceback.print_exc()
#sys.exit(-1)
return False
if (not end):
end = sarmap.size()
try:
sarmap.seek(start)
piece = sarmap.read(end - start)
except Exception:
traceback.print_exc()
os.close(fhandle)
return piece
:param start: where to start a pulled chunk
:type start: int.
:param end: where to end a pulled chunk
:type end: int.
:return: str. | 3.7174 | 2.590755 | 1.434871 |
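The mmap-based chunk read above can be condensed into a standalone sketch. This hypothetical helper (not part of the parser class) uses context managers so the file descriptor and mapping are always released, and the portable `ACCESS_READ` flag instead of the POSIX-only `PROT_READ`.

```python
import mmap

def read_chunk(path, start=0, end=None):
    """Return bytes [start:end) of a file via mmap, without reading
    the whole file into memory first."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            if end is None:
                end = m.size()
            return m[start:end]
```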
'''
Splits combined SAR output file (in ASCII format) in order to
extract info we need for it, in the format we want.
:return: ``List``-style of SAR file sections separated by
the type of info they contain (SAR file sections) without
parsing what is exactly what at this point
'''
# Filename passed checks through __init__
if (self.__filename and os.access(self.__filename, os.R_OK)):
fhandle = None
try:
fhandle = os.open(self.__filename, os.O_RDONLY)
except OSError:
print("Couldn't open file %s" % self.__filename)
fhandle = None
if (fhandle):
try:
sarmap = mmap.mmap(fhandle, length=0, prot=mmap.PROT_READ)
except (TypeError, IndexError):
os.close(fhandle)
traceback.print_exc()
#sys.exit(-1)
return False
sfpos = sarmap.find(PATTERN_MULTISPLIT, 0)
while (sfpos > -1):
'''Split by day found'''
self.__splitpointers.append(sfpos)
# Iterate for new position
try:
sfpos = sarmap.find(PATTERN_MULTISPLIT, (sfpos + 1))
except ValueError:
print("ValueError on mmap.find()")
return True
if (self.__splitpointers):
# Not sure if this will work - if empty set
# goes back as True here
return True
return False | def __split_file(self) | Splits combined SAR output file (in ASCII format) in order to
extract info we need for it, in the format we want.
:return: ``List``-style of SAR file sections separated by
the type of info they contain (SAR file sections) without
parsing what is exactly what at this point | 8.496337 | 4.304893 | 1.973647 |
'''
Retrieves date of the combo part from the file
:param part: Part of the combo file (parsed out whole SAR file
from the combo
:type part: str.
:return: string containing date in ISO format (YYYY-MM-DD)
'''
if (type(part) is not StringType):
# We can cope with strings only
return False
firstline = part.split("\n")[0]
info = firstline.split()
datevalue = ''
try:
datevalue = info[3]
except IndexError:
datevalue = False
except Exception:
traceback.print_exc()
datevalue = False
return datevalue | def __get_part_date(self, part='') | Retrieves date of the combo part from the file
:param part: Part of the combo file (parsed out whole SAR file
from the combo
:type part: str.
:return: string containing date in ISO format (YYYY-MM-DD) | 8.838806 | 3.294129 | 2.683199 |
portal = RemoteCKAN(portal_url)
try:
status = portal.call_action(
'status_show', requests_kwargs={"verify": False})
packages_list = portal.call_action(
'package_list', requests_kwargs={"verify": False})
groups_list = portal.call_action(
'group_list', requests_kwargs={"verify": False})
# itera leyendo todos los datasets del portal
packages = []
num_packages = len(packages_list)
for index, pkg in enumerate(packages_list):
# progreso (necesario cuando son muchos)
msg = "Leyendo dataset {} de {}".format(index + 1, num_packages)
logger.info(msg)
# agrega un nuevo dataset a la lista
packages.append(portal.call_action(
'package_show', {'id': pkg},
requests_kwargs={"verify": False}
))
# tiempo de espera para evitar baneos
time.sleep(0.2)
# itera leyendo todos los temas del portal
groups = [portal.call_action(
'group_show', {'id': grp},
requests_kwargs={"verify": False})
for grp in groups_list]
catalog = map_status_to_catalog(status)
catalog["dataset"] = map_packages_to_datasets(
packages, portal_url)
catalog["themeTaxonomy"] = map_groups_to_themes(groups)
except (CKANAPIError, RequestException) as e:
logger.exception(
'Error al procesar el portal %s', portal_url, exc_info=True)
raise NonParseableCatalog(portal_url, e)
return catalog | def read_ckan_catalog(portal_url) | Convierte los metadatos de un portal disponibilizados por la Action API
v3 de CKAN al estándar data.json.
Args:
portal_url (str): URL de un portal de datos CKAN que soporte la API v3.
Returns:
dict: Representación interna de un catálogo para uso en las funciones
de esta librería. | 3.434539 | 3.54091 | 0.96996 |
catalog = dict()
catalog_mapping = {
"site_title": "title",
"site_description": "description"
}
for status_key, catalog_key in iteritems(catalog_mapping):
try:
catalog[catalog_key] = status[status_key]
except BaseException:
logger.exception("Error mapeando '%s' a '%s'", status_key, catalog_key)
publisher_mapping = {
"site_title": "name",
"error_emails_to": "mbox"
}
if any([k in status for k in publisher_mapping.keys()]):
catalog["publisher"] = dict()
for status_key, publisher_key in iteritems(publisher_mapping):
try:
catalog['publisher'][publisher_key] = status[status_key]
except BaseException:
logger.exception("Error mapeando '%s' a 'publisher.%s'",
status_key, publisher_key)
else:
logger.info("El portal no tiene metadatos de 'publisher'.")
catalog['superThemeTaxonomy'] = (
'http://datos.gob.ar/superThemeTaxonomy.json')
return catalog | def map_status_to_catalog(status) | Convierte el resultado de action.status_show() en metadata a nivel de
catálogo. | 3.636632 | 3.650705 | 0.996145 |
dataset = dict()
resources = package["resources"]
groups = package["groups"]
tags = package["tags"]
dataset_mapping = {
'title': 'title',
'notes': 'description',
'metadata_created': 'issued',
'metadata_modified': 'modified',
'license_title': 'license',
'id': 'identifier',
'url': 'landingPage'
}
for package_key, dataset_key in iteritems(dataset_mapping):
try:
dataset[dataset_key] = package[package_key]
except BaseException:
logger.exception("Error mapeando '%s' del package '%s' a '%s'",
package_key, package['name'], dataset_key)
publisher_mapping = {
'author': 'name',
'author_email': 'mbox'
}
if any([k in package for k in publisher_mapping.keys()]):
dataset["publisher"] = dict()
for package_key, publisher_key in iteritems(publisher_mapping):
try:
dataset['publisher'][publisher_key] = package[package_key]
except BaseException:
logger.exception("Error mapeando '%s' del package '%s' a 'publisher.%s'",
package_key, package['name'], publisher_key)
contact_point_mapping = {
'maintainer': 'fn',
'maintainer_email': 'hasEmail'
}
if any([k in package for k in contact_point_mapping.keys()]):
dataset["contactPoint"] = dict()
for package_key, contact_key in iteritems(contact_point_mapping):
try:
dataset['contactPoint'][contact_key] = package[package_key]
except BaseException:
logger.exception("Error mapeando '%s' del package '%s' a 'contactPoint.%s'",
package_key, package['name'], contact_key)
# Si existen campos extras en la información del package, busco las claves
# "Frecuencia de actualización" y "Temática global" para completar los
# campos "accrualPeriodicity" y "superTheme" del dataset, respectivamente.
if "extras" in package:
add_accrualPeriodicity(dataset, package)
add_superTheme(dataset, package)
add_temporal(dataset, package)
dataset["distribution"] = map_resources_to_distributions(resources,
portal_url)
dataset["theme"] = [grp['name'] for grp in groups]
dataset['keyword'] = [tag['name'] for tag in tags]
return dataset | def map_package_to_dataset(package, portal_url) | Mapea un diccionario con metadatos de cierto 'package' de CKAN a un
diccionario con metadatos de un 'dataset' según el estándar data.json. | 2.729531 | 2.669619 | 1.022442 |
distribution = dict()
distribution_mapping = {
'url': 'downloadURL',
'name': 'title',
'created': 'issued',
'description': 'description',
'format': 'format',
'last_modified': 'modified',
'mimetype': 'mediaType',
'size': 'byteSize',
'id': 'identifier' # No es parte del estandar de PAD pero es relevante
}
for resource_key, distribution_key in iteritems(distribution_mapping):
try:
distribution[distribution_key] = resource[resource_key]
except BaseException:
logger.exception("Error mapeando '%s' del resource '%s' a '%s'",
resource_key, resource['name'], distribution_key)
if 'attributesDescription' in resource:
try:
distribution['field'] = json.loads(
resource['attributesDescription'])
except BaseException:
logger.exception(
"Error parseando los fields del resource '%s'",
resource['name'])
url_path = ['dataset', resource['package_id'], 'resource', resource['id']]
distribution["accessURL"] = urljoin(portal_url, "/".join(url_path))
return distribution | def map_resource_to_distribution(resource, portal_url) | Mapea un diccionario con metadatos de cierto 'resource' CKAN a dicts
con metadatos de una 'distribution' según el estándar data.json. | 4.240444 | 3.904371 | 1.086076 |
theme = dict()
theme_mapping = {
'name': 'id',
'title': 'label',
'description': 'description'
}
for group_key, theme_key in iteritems(theme_mapping):
try:
theme[theme_key] = group[group_key]
except BaseException:
logger.exception("Error mapeando '%s' del group '%s' a '%s'",
group_key, group.get('name'), theme_key)
return theme | def map_group_to_theme(group) | Mapea un diccionario con metadatos de cierto 'group' de CKAN a un
diccionario con metadatos de un 'theme' según el estándar data.json. | 4.372663 | 4.164337 | 1.050026 |
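The `map_group_to_theme`, `map_status_to_catalog`, and related functions all repeat the same key-mapping pattern. A generic version could look like this sketch (the helper name is hypothetical; it is not part of the library):

```python
def map_fields(source, mapping, strict=False):
    """Copy source[src_key] into result[dst_key] for each pair in
    `mapping`, skipping (or raising, if strict) missing source keys."""
    result = {}
    for src_key, dst_key in mapping.items():
        if src_key in source:
            result[dst_key] = source[src_key]
        elif strict:
            raise KeyError(src_key)
    return result

# mirrors map_group_to_theme: CKAN 'group' keys -> data.json 'theme' keys
theme = map_fields({"name": "econ", "title": "Economía"},
                   {"name": "id", "title": "label", "description": "description"})
```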
assert isinstance(path, string_types), "`path` debe ser un string"
assert isinstance(tables, dict), "`table` es dict de listas de dicts"
# Deduzco el formato de archivo de `path` y redirijo según corresponda.
suffix = path.split(".")[-1]
if suffix == "csv":
for table_name, table in tables.items():
root_path = "".join(path.split(".")[:-1])
table_path = "{}_{}.csv".format(root_path, table_name)
_write_csv_table(table, table_path)
elif suffix == "xlsx":
return _write_xlsx_table(tables, path, column_styles, cell_styles,
tables_fields=tables_fields,
tables_names=tables_names)
else:
raise ValueError("Formato de archivo no soportado: '{}'".format(suffix)) | def write_tables(tables, path, column_styles=None, cell_styles=None,
tables_fields=None, tables_names=None) | Exporta un reporte con varias tablas en CSV o XLSX.
Si la extensión es ".csv" se crean varias tablas agregando el nombre de la
tabla al final del "path". Si la extensión es ".xlsx" todas las tablas se
escriben en el mismo excel.
Args:
table (dict of (list of dicts)): Conjunto de tablas a ser exportadas
donde {
"table_name": [{
"field_name1": "field_value1",
"field_name2": "field_value2",
"field_name3": "field_value3"
}]
}
path (str): Path al archivo CSV o XLSX de exportación. | 3.744866 | 3.633339 | 1.030696 |
assert isinstance(path, string_types), "`path` debe ser un string"
assert isinstance(table, list), "`table` debe ser una lista de dicts"
# si la tabla está vacía, no escribe nada
if len(table) == 0:
logger.warning("Tabla vacía: no se genera ningún archivo.")
return
# Sólo sabe escribir listas de diccionarios con información tabular
if not helpers.is_list_of_matching_dicts(table):
raise ValueError("`table` debe ser una lista de diccionarios con las mismas claves.")
# Deduzco el formato de archivo de `path` y redirijo según corresponda.
suffix = path.split(".")[-1]
if suffix == "csv":
return _write_csv_table(table, path)
elif suffix == "xlsx":
return _write_xlsx_table(table, path, column_styles, cell_styles)
else:
raise ValueError("Formato de archivo no soportado: '{}'".format(suffix)) | def write_table(table, path, column_styles=None, cell_styles=None) | Exporta una tabla en el formato deseado (CSV o XLSX).
La extensión del archivo debe ser ".csv" o ".xlsx", y en función de
ella se decidirá qué método usar para escribirlo.
Args:
table (list of dicts): Tabla a ser exportada.
path (str): Path al archivo CSV o XLSX de exportación. | 4.384207 | 4.210912 | 1.041154 |
obj_str = text_type(json.dumps(obj, indent=4, separators=(",", ": "),
ensure_ascii=False))
helpers.ensure_dir_exists(os.path.dirname(path))
with io.open(path, "w", encoding='utf-8') as target:
target.write(obj_str) | def write_json(obj, path) | Escribo un objeto a un archivo JSON con codificación UTF-8. | 2.847426 | 2.713369 | 1.049406 |
xlsx_fields = xlsx_fields or XLSX_FIELDS
catalog_dict = {}
catalog_dict["catalog"] = [
_tabulate_nested_dict(catalog.get_catalog_metadata(
exclude_meta_fields=["themeTaxonomy"]),
"catalog")
]
catalog_dict["dataset"] = _generate_dataset_table(catalog)
catalog_dict["distribution"] = _generate_distribution_table(catalog)
catalog_dict["field"] = _generate_field_table(catalog)
catalog_dict["theme"] = _generate_theme_table(catalog)
write_tables(
catalog_dict, path, tables_fields=xlsx_fields,
tables_names=["catalog", "dataset", "distribution", "field", "theme"]
) | def write_xlsx_catalog(catalog, path, xlsx_fields=None) | Escribe el catálogo en Excel.
Args:
catalog (DataJson): Catálogo de datos.
path (str): Directorio absoluto donde se crea el archivo XLSX.
xlsx_fields (dict): Orden en que los campos del perfil de metadatos
se escriben en cada hoja del Excel. | 3.442171 | 3.732368 | 0.922248 |
for i in range(tries):
try:
return requests.get(url, timeout=try_timeout, proxies=proxies,
verify=verify).content
except Exception as e:
download_exception = e
if i < tries - 1:
time.sleep(retry_delay)
raise download_exception | def download(url, tries=DEFAULT_TRIES, retry_delay=RETRY_DELAY,
try_timeout=None, proxies=None, verify=True) | Descarga un archivo a través del protocolo HTTP, en uno o más intentos.
Args:
url (str): URL (schema HTTP) del archivo a descargar.
tries (int): Intentos a realizar (default: 1).
retry_delay (int o float): Tiempo a esperar, en segundos, entre cada
intento.
try_timeout (int o float): Tiempo máximo a esperar por intento.
proxies (dict): Proxies a utilizar. El diccionario debe contener los
valores 'http' y 'https', cada uno asociados a la URL del proxy
correspondiente.
Returns:
bytes: Contenido del archivo | 2.298947 | 2.919031 | 0.787572 |
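The retry loop in `download` generalizes to any flaky callable. Here is a minimal sketch of the same control flow (the helper name is hypothetical): try up to `tries` times, sleep between failures, and re-raise the last exception only after the final attempt fails.

```python
import time

def call_with_retries(fn, tries=3, retry_delay=0.0):
    """Call fn() up to `tries` times; sleep between failed attempts
    and re-raise the last exception if every attempt fails."""
    last_exc = None
    for attempt in range(tries):
        try:
            return fn()
        except Exception as e:
            last_exc = e
            if attempt < tries - 1:
                time.sleep(retry_delay)
    raise last_exc
```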
content = download(url, **kwargs)
with open(file_path, "wb") as f:
f.write(content) | def download_to_file(url, file_path, **kwargs) | Descarga un archivo a través del protocolo HTTP, en uno o más intentos, y
escribe el contenido descargado el el path especificado.
Args:
url (str): URL (schema HTTP) del archivo a descargar.
file_path (str): Path del archivo a escribir. Si un archivo ya existe
en el path especificado, se sobrescribirá con nuevos contenidos.
kwargs: Parámetros para download(). | 2.471137 | 3.656251 | 0.675866 |
args = list(args) + [value]
return mutator(*args, **kwargs) | def replace_combiner(value, mutator, *args, **kwargs) | Replaces the output of the source or mutator with the output
of the subsequent mutator. This is the default combiner. | 4.106236 | 4.42412 | 0.928148 |
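To see how `replace_combiner` threads values through a pipeline, here is a toy sketch of the surrounding machinery (the `run_pipeline` driver is hypothetical, standing in for the framework's pipeline object): each mutator receives the previous stage's output as an extra last argument, and its return value replaces the running value.

```python
def replace_combiner(value, mutator, *args):
    """Pass the running value as the mutator's last positional arg;
    the mutator's return value replaces it."""
    return mutator(*(list(args) + [value]))

def run_pipeline(source, mutators, combiner, *args):
    """Toy driver: seed with source(*args), then fold in each mutator."""
    value = source(*args)
    for mutator in mutators:
        value = combiner(value, mutator, *args)
    return value

# source: 3 + 1 = 4; first mutator: 4 * 2 = 8; second: 8 + 3 = 11
result = run_pipeline(lambda x: x + 1,
                      [lambda x, prev: prev * 2, lambda x, prev: prev + x],
                      replace_combiner, 3)
```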
value.add(mutator(*args, **kwargs))
return value | def set_combiner(value, mutator, *args, **kwargs) | Expects the output of the source to be a set to which
the result of each mutator is added. | 6.309406 | 5.605295 | 1.125615 |
value.append(mutator(*args, **kwargs))
return value | def list_combiner(value, mutator, *args, **kwargs) | Expects the output of the source to be a list to which
the result of each mutator is appended. | 4.6581 | 5.44303 | 0.855792 |
# if there is only one value, return the value
if len(a) == 1:
return a[0]
# if there are multiple values, calculate the joint value
product = 1
for v in a:
new_value = (1-v)
product = product * new_value
joint_value = 1 - product
return joint_value | def joint_value_post_processor(a, _) | The final step in calculating joint values like disability weights.
If the combiner is list_combiner then the effective formula is:
.. math::
value(args) = 1 - \prod_{i=1}^{mutator count} 1-mutator_{i}(args)
Parameters
----------
a : List[pd.Series]
a is a list of series, indexed on the population. Each series
corresponds to a different value in the pipeline and each row
in a series contains a value that applies to a specific simulant. | 3.497045 | 3.222251 | 1.08528 |
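The joint-value formula above, 1 - prod(1 - v_i), can be restated compactly with scalars instead of population-indexed series. This sketch (using plain floats, unlike the `pd.Series` inputs the real post-processor handles) preserves the single-value pass-through:

```python
from functools import reduce

def joint_value(values):
    """Combine independent per-cause values v_i into 1 - prod(1 - v_i);
    a single value passes through unchanged, as in the snippet above."""
    if len(values) == 1:
        return values[0]
    return 1 - reduce(lambda prod, v: prod * (1 - v), values, 1)
```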
return self._value_manager.register_value_producer(value_name, source,
preferred_combiner,
preferred_post_processor) | def register_value_producer(self, value_name: str, source: Callable[..., pd.DataFrame]=None,
preferred_combiner: Callable=replace_combiner,
preferred_post_processor: Callable[..., pd.DataFrame]=None) -> Pipeline | Marks a ``Callable`` as the producer of a named value.
Parameters
----------
value_name :
The name of the new dynamic value pipeline.
source :
A callable source for the dynamic value pipeline.
preferred_combiner :
A strategy for combining the source and the results of any calls to mutators in the pipeline.
``vivarium`` provides the strategies ``replace_combiner`` (the default), ``list_combiner``,
and ``set_combiner`` which are importable from ``vivarium.framework.values``. Client code
may define additional strategies as necessary.
preferred_post_processor :
A strategy for processing the final output of the pipeline. ``vivarium`` provides the strategies
``rescale_post_processor`` and ``joint_value_post_processor`` which are importable from
``vivarium.framework.values``. Client code may define additional strategies as necessary.
Returns
-------
Callable
A callable reference to the named dynamic value pipeline. | 3.225509 | 4.315459 | 0.747431 |
return self._value_manager.register_rate_producer(rate_name, source) | def register_rate_producer(self, rate_name: str, source: Callable[..., pd.DataFrame]=None) -> Pipeline | Marks a ``Callable`` as the producer of a named rate.
This is a convenience wrapper around ``register_value_producer`` that makes sure
rate data is appropriately scaled to the size of the simulation time step.
It is equivalent to ``register_value_producer(value_name, source,
preferred_combiner=replace_combiner, preferred_post_processor=rescale_post_processor)``
Parameters
----------
rate_name :
The name of the new dynamic rate pipeline.
source :
A callable source for the dynamic rate pipeline.
Returns
-------
Callable
A callable reference to the named dynamic rate pipeline. | 7.146166 | 9.357697 | 0.763667 |
self._value_manager.register_value_modifier(value_name, modifier, priority) | def register_value_modifier(self, value_name: str, modifier: Callable, priority: int=5) | Marks a ``Callable`` as the modifier of a named value.
Parameters
----------
value_name :
The name of the dynamic value pipeline to be modified.
modifier :
A function that modifies the source of the dynamic value pipeline when called.
If the pipeline has a ``replace_combiner``, the modifier should accept the same
arguments as the pipeline source with an additional last positional argument
for the results of the previous stage in the pipeline. For the ``list_combiner`` and
``set_combiner`` strategies, the pipeline modifiers should have the same signature
as the pipeline source.
priority : {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
An indication of the order in which pipeline modifiers should be called. Modifiers with
smaller priority values will be called earlier in the pipeline. Modifiers with
the same priority have no guaranteed ordering, and so should be commutative. | 4.748083 | 4.923932 | 0.964287 |
log_level = logging.DEBUG if verbose else logging.ERROR
logging.basicConfig(filename=log, level=log_level)
try:
run_simulation(model_specification, results_directory)
except (BdbQuit, KeyboardInterrupt):
raise
except Exception as e:
if with_debugger:
import pdb
import traceback
traceback.print_exc()
pdb.post_mortem()
else:
logging.exception("Uncaught exception {}".format(e))
raise | def run(model_specification, results_directory, verbose, log, with_debugger) | Run a simulation from the command line.
The simulation itself is defined by the given MODEL_SPECIFICATION yaml file.
Within the results directory, which defaults to ~/vivarium_results if none
is provided, a subdirectory will be created with the same name as the
MODEL_SPECIFICATION if one does not exist. Results will be written to a
further subdirectory named after the start time of the simulation run. | 2.424988 | 3.077524 | 0.787967 |
model_specification = Path(model_specification)
results_directory = Path(results_directory)
out_stats_file = results_directory / f'{model_specification.name}'.replace('yaml', 'stats')
command = f'run_simulation("{model_specification}", "{results_directory}")'
cProfile.runctx(command, globals=globals(), locals=locals(), filename=out_stats_file)
if process:
out_txt_file = results_directory / (out_stats_file.name + '.txt')
with open(out_txt_file, 'w') as f:
p = pstats.Stats(str(out_stats_file), stream=f)
p.sort_stats('cumulative')
p.print_stats() | def profile(model_specification, results_directory, process) | Run a simulation based on the provided MODEL_SPECIFICATION and profile
the run. | 2.538977 | 2.519149 | 1.007871 |
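The profile-then-post-process flow above (run under `cProfile`, then sort by cumulative time with `pstats`) can be wrapped in a small reusable helper. This is an illustrative sketch, not part of the CLI, that profiles an arbitrary callable and returns the stats report as text instead of writing files:

```python
import cProfile
import io
import pstats

def profile_call(fn, *args, lines=5):
    """Run fn(*args) under cProfile and return the top cumulative-time
    entries as a text report."""
    profiler = cProfile.Profile()
    profiler.enable()
    fn(*args)
    profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(lines)
    return buf.getvalue()
```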