code string | signature string | docstring string | loss_without_docstring float64 | loss_with_docstring float64 | factor float64 |
|---|---|---|---|---|---|
import_from_ding0(file=file, network=self.network) | def import_from_ding0(self, file, **kwargs) | Import grid data from DINGO file
For details see
:func:`edisgo.data.import_data.import_from_ding0` | 8.620342 | 10.303177 | 0.836668 |
if generator_scenario:
self.network.generator_scenario = generator_scenario
data_source = 'oedb'
import_generators(network=self.network, data_source=data_source) | def import_generators(self, generator_scenario=None) | Import generators
For details see
:func:`edisgo.data.import_data.import_generators` | 5.271786 | 5.851731 | 0.900893 |
if timesteps is None:
timesteps = self.network.timeseries.timeindex
# check if timesteps is array-like, otherwise convert to list
if not hasattr(timesteps, "__len__"):
timesteps = [timesteps]
if self.network.pypsa is None:
# Translate eDisGo grid topology representation to PyPSA format
self.network.pypsa = pypsa_io.to_pypsa(
self.network, mode, timesteps)
else:
if self.network.pypsa.edisgo_mode != mode:
# Translate eDisGo grid topology representation to PyPSA format
self.network.pypsa = pypsa_io.to_pypsa(
self.network, mode, timesteps)
# check if all timesteps are in pypsa.snapshots, if not update time
# series
if not all(_ in self.network.pypsa.snapshots for _ in timesteps):
pypsa_io.update_pypsa_timeseries(self.network, timesteps=timesteps)
# run power flow analysis
pf_results = self.network.pypsa.pf(timesteps)
if all(pf_results['converged']['0'].tolist()):
pypsa_io.process_pfa_results(
self.network, self.network.pypsa, timesteps)
else:
raise ValueError("Power flow analysis did not converge.") | def analyze(self, mode=None, timesteps=None) | Analyzes the grid by power flow analysis
Analyze the grid for violations of hosting capacity. That is, perform a
power flow analysis and obtain voltages at nodes (load, generator,
stations/transformers and branch tees) and active/reactive power at
lines.
The power flow analysis can currently only be performed for both grid
levels MV and LV. See ToDos section for more information.
A static `non-linear power flow analysis is performed using PyPSA
<https://www.pypsa.org/doc/power_flow.html#full-non-linear-power-flow>`_.
The high-voltage to medium-voltage transformers are not included in the
analysis. The slack bus is defined at the secondary side of these
transformers, assuming an ideal tap changer. Hence, potential
overloading of the transformers is not studied here.
Parameters
----------
mode : str
Allows toggling between power flow analysis (PFA) on the whole
grid topology (MV + LV), only the MV or only the LV grid. Defaults to
None, which corresponds to a power flow analysis for MV + LV and is the
only implemented option at the moment. See the ToDos section for
more information.
timesteps : :pandas:`pandas.DatetimeIndex<datetimeindex>` or :pandas:`pandas.Timestamp<timestamp>`
Timesteps specifies for which time steps to conduct the power flow
analysis. It defaults to None in which case the time steps in
timeseries.timeindex (see :class:`~.grid.network.TimeSeries`) are
used.
Notes
-----
The current implementation always translates the grid topology
representation to the PyPSA format and stores it to
:attr:`self.network.pypsa`.
ToDos
------
The option to export only the edisgo MV grid (mode = 'mv') to conduct
a power flow analysis is implemented in
:func:`~.tools.pypsa_io.to_pypsa` but NotImplementedError is raised
since the rest of edisgo does not handle this option yet. The analyze
function will throw an error since
:func:`~.tools.pypsa_io.process_pfa_results`
does not handle aggregated loads and generators in the LV grids. Also,
grid reinforcement, pypsa update of time series, and probably other
functionalities do not work when only the MV grid is analysed.
Further ToDos are:
* explain how power plants are modeled, if possible use a link
* explain where to find and adjust power flow analysis defining
parameters
See Also
--------
:func:`~.tools.pypsa_io.to_pypsa`
Translator to PyPSA data format | 3.891594 | 2.809073 | 1.385366 |
results = reinforce_grid(
self, max_while_iterations=kwargs.get(
'max_while_iterations', 10),
copy_graph=kwargs.get('copy_graph', False),
timesteps_pfa=kwargs.get('timesteps_pfa', None),
combined_analysis=kwargs.get('combined_analysis', False))
# add measure to Results object
if not kwargs.get('copy_graph', False):
self.network.results.measures = 'grid_expansion'
return results | def reinforce(self, **kwargs) | Reinforces the grid and calculates grid expansion costs.
See :meth:`edisgo.flex_opt.reinforce_grid` for more information. | 6.411062 | 5.680119 | 1.128685 |
StorageControl(edisgo=self, timeseries=timeseries,
position=position, **kwargs) | def integrate_storage(self, timeseries, position, **kwargs) | Integrates storage into grid.
See :class:`~.grid.network.StorageControl` for more information. | 23.177015 | 17.150698 | 1.351374 |
package_path = edisgo.__path__[0]
equipment_dir = self.config['system_dirs']['equipment_dir']
data = {}
equipment = {'mv': ['trafos', 'lines', 'cables'],
'lv': ['trafos', 'cables']}
for voltage_level, eq_list in equipment.items():
for i in eq_list:
equipment_parameters = self.config['equipment'][
'equipment_{}_parameters_{}'.format(voltage_level, i)]
data['{}_{}'.format(voltage_level, i)] = pd.read_csv(
os.path.join(package_path, equipment_dir,
equipment_parameters),
comment='#', index_col='name',
delimiter=',', decimal='.')
return data | def _load_equipment_data(self) | Load equipment data for transformers, cables etc.
Returns
-------
:obj:`dict` of :pandas:`pandas.DataFrame<dataframe>` | 5.170385 | 5.020022 | 1.029953 |
config_files = ['config_db_tables', 'config_grid',
'config_grid_expansion', 'config_timeseries']
# load configs
if isinstance(config_path, dict):
for conf in config_files:
config.load_config(filename='{}.cfg'.format(conf),
config_dir=config_path[conf],
copy_default_config=False)
else:
for conf in config_files:
config.load_config(filename='{}.cfg'.format(conf),
config_dir=config_path)
config_dict = config.cfg._sections
# convert numeric values to float
for sec, subsecs in config_dict.items():
for subsec, val in subsecs.items():
# try str -> float conversion
try:
config_dict[sec][subsec] = float(val)
except ValueError:
pass
# convert to time object
config_dict['demandlib']['day_start'] = datetime.datetime.strptime(
config_dict['demandlib']['day_start'], "%H:%M")
config_dict['demandlib']['day_start'] = datetime.time(
config_dict['demandlib']['day_start'].hour,
config_dict['demandlib']['day_start'].minute)
config_dict['demandlib']['day_end'] = datetime.datetime.strptime(
config_dict['demandlib']['day_end'], "%H:%M")
config_dict['demandlib']['day_end'] = datetime.time(
config_dict['demandlib']['day_end'].hour,
config_dict['demandlib']['day_end'].minute)
return config_dict | def _load_config(config_path=None) | Load config files.
Parameters
-----------
config_path : None or :obj:`str` or dict
See class definition for more information.
Returns
-------
:obj:`collections.OrderedDict`
eDisGo configuration data from config files. | 2.401367 | 2.393884 | 1.003126 |
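The numeric-conversion pass over the parsed config sections can be shown in isolation; the section and option names below are hypothetical stand-ins for the real config files:

```python
import datetime

# stand-in for config.cfg._sections after parsing
config_dict = {
    "grid": {"nominal_voltage": "0.4", "name": "example"},
    "demandlib": {"day_start": "06:00"},
}

# try str -> float conversion; non-numeric values stay strings
for sec, subsecs in config_dict.items():
    for subsec, val in subsecs.items():
        try:
            config_dict[sec][subsec] = float(val)
        except ValueError:
            pass

# time-of-day strings are converted separately into datetime.time objects
config_dict["demandlib"]["day_start"] = datetime.datetime.strptime(
    config_dict["demandlib"]["day_start"], "%H:%M").time()
```

The "06:00" entry survives the float pass untouched (the conversion raises `ValueError`) and is then parsed as a time of day.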
try:
self.timeseries.generation_fluctuating
self.timeseries.generation_dispatchable
self.timeseries.load
self.timeseries.generation_reactive_power
self.timeseries.load_reactive_power
except Exception:
message = 'Time index of feed-in and load time series does ' \
'not match.'
logging.error(message)
raise KeyError(message) | def _check_timeindex(self) | Check function to check if all feed-in and load time series contain
values for the specified time index. | 7.024692 | 4.729941 | 1.485154 |
self.timeseries.generation_fluctuating = pd.DataFrame(
{'solar': [worst_case_scale_factors[
'{}_feedin_pv'.format(mode)] for mode in modes],
'wind': [worst_case_scale_factors[
'{}_feedin_other'.format(mode)] for mode in modes]},
index=self.timeseries.timeindex)
self.timeseries.generation_dispatchable = pd.DataFrame(
{'other': [worst_case_scale_factors[
'{}_feedin_other'.format(mode)] for mode in modes]},
index=self.timeseries.timeindex) | def _worst_case_generation(self, worst_case_scale_factors, modes) | Define worst case generation time series for fluctuating and
dispatchable generators.
Parameters
----------
worst_case_scale_factors : dict
Scale factors defined in config file 'config_timeseries.cfg'.
Scale factors describe the ratio of actual power to nominal power in
worst-case scenarios.
modes : list
List with worst-cases to generate time series for. Can be
'feedin_case', 'load_case' or both. | 3.679775 | 2.773935 | 1.326554 |
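Stripped of the surrounding class, the worst-case generation table is a dict comprehension over the requested modes; the scale factor values below are illustrative, not the shipped defaults:

```python
import pandas as pd

# illustrative worst-case scale factors (not the config defaults)
worst_case_scale_factors = {
    "feedin_case_feedin_pv": 0.85, "feedin_case_feedin_other": 1.0,
    "load_case_feedin_pv": 0.0, "load_case_feedin_other": 0.0,
}
modes = ["feedin_case", "load_case"]
timeindex = pd.date_range("1970-01-01", periods=len(modes), freq="D")

# one row per worst case, one column per technology
generation_fluctuating = pd.DataFrame(
    {"solar": [worst_case_scale_factors["{}_feedin_pv".format(m)]
               for m in modes],
     "wind": [worst_case_scale_factors["{}_feedin_other".format(m)]
              for m in modes]},
    index=timeindex)
```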
sectors = ['residential', 'retail', 'industrial', 'agricultural']
lv_power_scaling = np.array(
[worst_case_scale_factors['lv_{}_load'.format(mode)]
for mode in modes])
mv_power_scaling = np.array(
[worst_case_scale_factors['mv_{}_load'.format(mode)]
for mode in modes])
lv = {(sector, 'lv'): peakload_consumption_ratio[sector] *
lv_power_scaling
for sector in sectors}
mv = {(sector, 'mv'): peakload_consumption_ratio[sector] *
mv_power_scaling
for sector in sectors}
self.timeseries.load = pd.DataFrame({**lv, **mv},
index=self.timeseries.timeindex) | def _worst_case_load(self, worst_case_scale_factors,
peakload_consumption_ratio, modes) | Define worst case load time series for each sector.
Parameters
----------
worst_case_scale_factors : dict
Scale factors defined in config file 'config_timeseries.cfg'.
Scale factors describe the ratio of actual power to nominal power in
worst-case scenarios.
peakload_consumption_ratio : dict
Ratios of peak load to annual consumption per sector, defined in
config file 'config_timeseries.cfg'
modes : list
List with worst-cases to generate time series for. Can be
'feedin_case', 'load_case' or both. | 2.875672 | 2.921384 | 0.984353 |
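The `{**lv, **mv}` merge above yields MultiIndex columns keyed by (sector, voltage level); a runnable sketch with invented scale factors and peak-load ratios:

```python
import numpy as np
import pandas as pd

modes = ["feedin_case", "load_case"]
sectors = ["residential", "retail"]
# illustrative values, not the shipped config defaults
scale_factors = {"lv_feedin_case_load": 0.1, "lv_load_case_load": 1.0,
                 "mv_feedin_case_load": 0.15, "mv_load_case_load": 1.0}
peakload_ratio = {"residential": 0.0004, "retail": 0.0003}

lv_scaling = np.array([scale_factors["lv_{}_load".format(m)] for m in modes])
mv_scaling = np.array([scale_factors["mv_{}_load".format(m)] for m in modes])
# dict keys become (sector, level) MultiIndex columns
lv = {(s, "lv"): peakload_ratio[s] * lv_scaling for s in sectors}
mv = {(s, "mv"): peakload_ratio[s] * mv_scaling for s in sectors}

load = pd.DataFrame({**lv, **mv},
                    index=pd.date_range("1970-01-01", periods=2, freq="D"))
```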
if curtailment_timeseries is None:
message = 'No curtailment given.'
logging.error(message)
raise KeyError(message)
try:
curtailment_timeseries.loc[network.timeseries.timeindex]
except KeyError:
message = 'Time index of curtailment time series does not match ' \
'with load and feed-in time series.'
logging.error(message)
raise KeyError(message) | def _check_timeindex(self, curtailment_timeseries, network) | Raises an error if time index of curtailment time series does not
comply with the time index of load and feed-in time series.
Parameters
-----------
curtailment_timeseries : :pandas:`pandas.Series<series>` or \
:pandas:`pandas.DataFrame<dataframe>`
See parameter `curtailment_timeseries` in class definition for more
information. | 3.551528 | 2.930092 | 1.212088 |
if not feedin_df.empty:
feedin_selected_sum = feedin_df.sum(axis=1)
diff = feedin_selected_sum - curtailment_timeseries
# add tolerance (set small negative values to zero)
diff[diff.between(-1, 0)] = 0
if not (diff >= 0).all():
bad_time_steps = [_ for _ in diff.index
if diff[_] < 0]
message = 'Curtailment demand exceeds total feed-in in time ' \
'steps {}.'.format(bad_time_steps)
logging.error(message)
raise ValueError(message)
else:
bad_time_steps = [_ for _ in curtailment_timeseries.index
if curtailment_timeseries[_] > 0]
if bad_time_steps:
message = 'Curtailment given for time steps {} but there ' \
'are no generators to meet the curtailment target ' \
'for {}.'.format(bad_time_steps, curtailment_key)
logging.error(message)
raise ValueError(message) | def _precheck(self, curtailment_timeseries, feedin_df, curtailment_key) | Raises an error if the curtailment at any time step exceeds the
total feed-in of all generators curtailment can be distributed among
at that time.
Parameters
-----------
curtailment_timeseries : :pandas:`pandas.Series<series>`
Curtailment time series in kW for the technology (and weather
cell) specified in `curtailment_key`.
feedin_df : :pandas:`pandas.DataFrame<dataframe>`
Feed-in time series in kW for all generators of the type (and in the
weather cell) specified in `curtailment_key`.
curtailment_key : :obj:`str` or :obj:`tuple` with :obj:`str`
Technology (and weather cell) curtailment is given for. | 3.11715 | 2.931599 | 1.063293 |
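The tolerance trick in `_precheck` — zeroing small negative differences before flagging violations — works like this on toy data:

```python
import pandas as pd

index = pd.date_range("1970-01-01", periods=3, freq="D")
feedin_sum = pd.Series([10.0, 5.0, 4.9995], index=index)  # total feed-in
curtailment = pd.Series([8.0, 5.0, 5.0], index=index)     # curtailment target

diff = feedin_sum - curtailment
# tolerance: small negative values are treated as numerical noise
diff[diff.between(-1, 0)] = 0
bad_time_steps = [t for t in diff.index if diff[t] < 0]
```

The 0.0005 kW shortfall in the last step falls inside the tolerance band, so no time step is flagged.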
curtailment = network.timeseries.curtailment
gen_repr = [repr(_) for _ in curtailment.columns]
feedin_repr = feedin.loc[:, gen_repr]
curtailment_repr = curtailment
curtailment_repr.columns = gen_repr
if not ((feedin_repr - curtailment_repr) > -1e-1).all().all():
message = 'Curtailment exceeds feed-in.'
logging.error(message)
raise ValueError(message)
feed-in of that generator at any time step.
Parameters
-----------
network : :class:`~.grid.network.Network`
feedin : :pandas:`pandas.DataFrame<dataframe>`
DataFrame with feed-in time series in kW. Columns of the dataframe
are :class:`~.grid.components.GeneratorFluctuating`, index is
time index. | 5.004216 | 4.453457 | 1.12367 |
# place storage
params = self._check_nominal_power(params, timeseries)
if isinstance(position, (Station, BranchTee, Generator, Load)):
storage = storage_integration.set_up_storage(
node=position, parameters=params, voltage_level=voltage_level)
line = storage_integration.connect_storage(storage, position)
elif isinstance(position, str) \
and position == 'hvmv_substation_busbar':
storage, line = storage_integration.storage_at_hvmv_substation(
self.edisgo.network.mv_grid, params)
elif isinstance(position, str) \
and position == 'distribute_storages_mv':
# check active power time series
if not isinstance(timeseries, pd.Series):
raise ValueError(
"Storage time series needs to be a pandas Series if "
"`position` is 'distribute_storages_mv'.")
else:
timeseries = pd.DataFrame(data={'p': timeseries},
index=timeseries.index)
self._check_timeindex(timeseries)
# check reactive power time series
if reactive_power_timeseries is not None:
self._check_timeindex(reactive_power_timeseries)
timeseries['q'] = reactive_power_timeseries.loc[
timeseries.index]
else:
timeseries['q'] = 0
# start storage positioning method
storage_positioning.one_storage_per_feeder(
edisgo=self.edisgo, storage_timeseries=timeseries,
storage_nominal_power=params['nominal_power'], **kwargs)
return
else:
message = 'Provided storage position option {} is not ' \
'valid.'.format(position)
logging.error(message)
raise KeyError(message)
# implement operation strategy (active power)
if isinstance(timeseries, pd.Series):
timeseries = pd.DataFrame(data={'p': timeseries},
index=timeseries.index)
self._check_timeindex(timeseries)
storage.timeseries = timeseries
elif isinstance(timeseries, str) and timeseries == 'fifty-fifty':
storage_operation.fifty_fifty(self.edisgo.network, storage)
else:
message = 'Provided storage timeseries option {} is not ' \
'valid.'.format(timeseries)
logging.error(message)
raise KeyError(message)
# reactive power
if reactive_power_timeseries is not None:
self._check_timeindex(reactive_power_timeseries)
storage.timeseries = pd.DataFrame(
{'p': storage.timeseries.p,
'q': reactive_power_timeseries.loc[storage.timeseries.index]},
index=storage.timeseries.index)
# update pypsa representation
if self.edisgo.network.pypsa is not None:
pypsa_io.update_pypsa_storage(
self.edisgo.network.pypsa,
storages=[storage], storages_lines=[line]) | def _integrate_storage(self, timeseries, position, params, voltage_level,
reactive_power_timeseries, **kwargs) | Integrate storage units in the grid.
Parameters
----------
timeseries : :obj:`str` or :pandas:`pandas.Series<series>`
Parameter used to obtain time series of active power the storage
storage is charged (negative) or discharged (positive) with. Can
either be a given time series or an operation strategy. See class
definition for more information
position : :obj:`str` or :class:`~.grid.components.Station` or :class:`~.grid.components.BranchTee` or :class:`~.grid.components.Generator` or :class:`~.grid.components.Load`
Parameter used to place the storage. See class definition for more
information.
params : :obj:`dict`
Dictionary with storage parameters for one storage. See class
definition for more information on what parameters must be
provided.
voltage_level : :obj:`str` or None
`voltage_level` defines which side of the LV station the storage is
connected to. Valid options are 'lv' and 'mv'. Default: None. See
class definition for more information.
reactive_power_timeseries : :pandas:`pandas.Series<series>` or None
Reactive power time series in kvar (generator sign convention).
Index of the series needs to be a
:pandas:`pandas.DatetimeIndex<datetimeindex>`. | 3.532122 | 3.247713 | 1.087572 |
if storage_parameters.get('nominal_power', None) is None:
try:
storage_parameters['nominal_power'] = max(abs(timeseries))
except (TypeError, ValueError):
raise ValueError("Could not assign a nominal power to the "
"storage. Please provide either a nominal "
"power or an active power time series.")
return storage_parameters | def _check_nominal_power(self, storage_parameters, timeseries) | Tries to assign a nominal power to the storage.
Checks if nominal power is provided through `storage_parameters`,
otherwise tries to return the absolute maximum of `timeseries`. Raises
an error if it cannot assign a nominal power.
Parameters
----------
timeseries : :obj:`str` or :pandas:`pandas.Series<series>`
See parameter `timeseries` in class definition for more
information.
storage_parameters : :obj:`dict`
See parameter `parameters` in class definition for more
information.
Returns
--------
:obj:`dict`
The given `storage_parameters` is returned extended by an entry for
'nominal_power', if it didn't already have that key. | 3.755155 | 3.024935 | 1.2414 |
try:
timeseries.loc[self.edisgo.network.timeseries.timeindex]
except KeyError:
message = 'Time index of storage time series does not match ' \
'with load and feed-in time series.'
logging.error(message)
raise KeyError(message) | def _check_timeindex(self, timeseries) | Raises an error if time index of storage time series does not
comply with the time index of load and feed-in time series.
Parameters
-----------
timeseries : :pandas:`pandas.DataFrame<dataframe>`
DataFrame containing active power the storage is charged (negative)
and discharged (positive) with in kW in column 'p' and
reactive power in kVA in column 'q'. | 11.474419 | 6.119104 | 1.87518 |
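The `.loc`-based index check used by the various `_check_timeindex` helpers can be reproduced with two small frames; pandas raises a `KeyError` when any requested label is missing:

```python
import pandas as pd

timeindex = pd.date_range("1970-01-01", periods=3, freq="D")

# storage time series covering only part of the required index
partial_ts = pd.DataFrame({"p": [1.0, -1.0]}, index=timeindex[:2])
try:
    partial_ts.loc[timeindex]
    partial_ok = True
except KeyError:
    partial_ok = False

# a series covering the full index passes the check
full_ts = pd.DataFrame({"p": [1.0, -1.0, 0.0]}, index=timeindex)
try:
    full_ts.loc[timeindex]
    full_ok = True
except KeyError:
    full_ok = False
```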
try:
return self._generation_dispatchable.loc[[self.timeindex], :]
except Exception:
return self._generation_dispatchable.loc[self.timeindex, :] | def generation_dispatchable(self) | Get generation time series of dispatchable generators (only active
power)
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
See class definition for details. | 5.486078 | 4.031999 | 1.360635 |
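The `loc[[self.timeindex], :]` / `loc[self.timeindex, :]` fallback in the properties above exploits a pandas detail: a list-wrapped scalar label returns a DataFrame, while a bare scalar label returns a Series. A minimal demonstration:

```python
import pandas as pd

df = pd.DataFrame({"other": [1.0, 2.0]},
                  index=pd.date_range("1970-01-01", periods=2, freq="D"))
single_step = df.index[0]

as_frame = df.loc[[single_step], :]   # list-wrapped label -> DataFrame
as_series = df.loc[single_step, :]    # bare label -> Series
```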
try:
return self._generation_fluctuating.loc[[self.timeindex], :]
except Exception:
return self._generation_fluctuating.loc[self.timeindex, :] | def generation_fluctuating(self) | Get generation time series of fluctuating renewables (only active
power)
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
See class definition for details. | 4.281615 | 3.730923 | 1.147602 |
try:
return self._load.loc[[self.timeindex], :]
except Exception:
return self._load.loc[self.timeindex, :] | def load(self) | Get load time series (only active power)
Returns
-------
dict or :pandas:`pandas.DataFrame<dataframe>`
See class definition for details. | 7.128857 | 5.085573 | 1.40178 |
if self._curtailment is not None:
if isinstance(self._curtailment, pd.DataFrame):
try:
return self._curtailment.loc[[self.timeindex], :]
except Exception:
return self._curtailment.loc[self.timeindex, :]
elif isinstance(self._curtailment, list):
curtailment = pd.DataFrame()
for gen in self._curtailment:
curtailment[gen] = gen.curtailment
return curtailment
else:
return None | def curtailment(self) | Get curtailment time series of dispatchable generators (only active
power)
Parameters
----------
curtailment : list or :pandas:`pandas.DataFrame<dataframe>`
See class definition for details.
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
In the case curtailment is applied to all solar and wind generators,
curtailment time series aggregated either by technology type or by
type and weather cell ID are returned. In the first case columns
of the DataFrame are 'solar' and 'wind'; in the second case columns
need to be a :pandas:`pandas.MultiIndex<multiindex>` with the
first level containing the type and the second level the weather
cell ID.
In the case curtailment is only applied to specific generators,
curtailment time series of all curtailed generators, specified by
the column names, are returned.
if self._curtailment is not None:
result_dict = {}
for key, gen_list in self._curtailment.items():
curtailment_df = pd.DataFrame()
for gen in gen_list:
curtailment_df[gen] = gen.curtailment
result_dict[key] = curtailment_df
return result_dict
else:
return None | def curtailment(self) | Holds curtailment assigned to each generator per curtailment target.
Returns
-------
:obj:`dict` with :pandas:`pandas.DataFrame<dataframe>`
Keys of the dictionary are generator types (and weather cell ID)
curtailment targets were given for. E.g. if curtailment is provided
as a :pandas:`pandas.DataFrame<dataframe>` with
:pandas:`pandas.MultiIndex<multiindex>` columns with levels 'type' and
'weather cell ID' the dictionary key is a tuple of
('type','weather_cell_id').
Values of the dictionary are dataframes with the curtailed power in
kW per generator and time step. Index of the dataframe is a
:pandas:`pandas.DatetimeIndex<datetimeindex>`. Columns are the
generators of type
:class:`edisgo.grid.components.GeneratorFluctuating`. | 2.439375 | 2.298124 | 1.061464 |
grids = [self.network.mv_grid] + list(self.network.mv_grid.lv_grids)
storage_results = {}
storage_results['storage_id'] = []
storage_results['nominal_power'] = []
storage_results['voltage_level'] = []
storage_results['grid_connection_point'] = []
for grid in grids:
for storage in grid.graph.nodes_by_attribute('storage'):
storage_results['storage_id'].append(repr(storage))
storage_results['nominal_power'].append(storage.nominal_power)
storage_results['voltage_level'].append(
'mv' if isinstance(grid, MVGrid) else 'lv')
storage_results['grid_connection_point'].append(
grid.graph.neighbors(storage)[0])
return pd.DataFrame(storage_results).set_index('storage_id') | def storages(self) | Gathers relevant storage results.
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Dataframe containing all storages installed in the MV grid and
LV grids. Index of the dataframe are the storage representatives,
columns are the following:
nominal_power : :obj:`float`
Nominal power of the storage in kW.
voltage_level : :obj:`str`
Voltage level the storage is connected to. Can either be 'mv'
or 'lv'. | 3.282539 | 2.757126 | 1.190565 |
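The `storages` property accumulates a dict of lists and converts it once at the end; with hypothetical storage entries the pattern looks like:

```python
import pandas as pd

storage_results = {"storage_id": [], "nominal_power": [], "voltage_level": []}
# hypothetical storages gathered while walking the grids
for sid, power, level in [("Storage_MVGrid_1", 50.0, "mv"),
                          ("Storage_LVGrid_2", 10.0, "lv")]:
    storage_results["storage_id"].append(sid)
    storage_results["nominal_power"].append(power)
    storage_results["voltage_level"].append(level)

storages_df = pd.DataFrame(storage_results).set_index("storage_id")
```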
storages_p = pd.DataFrame()
storages_q = pd.DataFrame()
grids = [self.network.mv_grid] + list(self.network.mv_grid.lv_grids)
for grid in grids:
for storage in grid.graph.nodes_by_attribute('storage'):
ts = storage.timeseries
storages_p[repr(storage)] = ts.p
storages_q[repr(storage)] = ts.q
return storages_p, storages_q | def storages_timeseries(self) | Returns a dataframe with storage time series.
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Dataframe containing time series of all storages installed in the
MV grid and LV grids. Index of the dataframe is a
:pandas:`pandas.DatetimeIndex<datetimeindex>`. Columns are the
storage representatives. | 4.090868 | 3.892131 | 1.051061 |
if components is not None:
labels_included = []
labels_not_included = []
labels = [repr(l) for l in components]
for label in labels:
if (label in list(self.pfa_p.columns) and
label in list(self.pfa_q.columns)):
labels_included.append(label)
else:
labels_not_included.append(label)
if labels_not_included:
logging.warning(
"Apparent power for {lines} are not returned from "
"PFA".format(lines=labels_not_included))
else:
labels_included = self.pfa_p.columns
s_res = (self.pfa_p[labels_included] ** 2 +
self.pfa_q[labels_included] ** 2).applymap(sqrt)
return s_res | def s_res(self, components=None) | Get resulting apparent power in kVA at line(s) and transformer(s).
The apparent power at a line (or transformer) is determined from the
maximum values of active power P and reactive power Q.
.. math::
S = max(\sqrt{p_0^2 + q_0^2}, \sqrt{p_1^2 + q_1^2})
Parameters
----------
components : :obj:`list`
List with all components (of type :class:`~.grid.components.Line`
or :class:`~.grid.components.Transformer`) to get apparent power
for. If not provided defaults to return apparent power of all lines
and transformers in the grid.
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Apparent power in kVA for lines and/or transformers. | 3.677521 | 3.18824 | 1.153465 |
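The element-wise apparent-power computation reduces to S = sqrt(P² + Q²) per component and time step; a self-contained version with toy values:

```python
from math import sqrt

import pandas as pd

pfa_p = pd.DataFrame({"Line_1": [3.0, 6.0]})   # active power in kW
pfa_q = pd.DataFrame({"Line_1": [4.0, 8.0]})   # reactive power in kvar

# S = sqrt(P^2 + Q^2), applied element-wise as in s_res above
s_res = (pfa_p ** 2 + pfa_q ** 2).applymap(sqrt)
```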
# check if voltages are available:
if hasattr(self, 'pfa_v_mag_pu'):
self.pfa_v_mag_pu.sort_index(axis=1, inplace=True)
else:
message = "No voltage results available."
raise AttributeError(message)
if level is None:
level = ['mv', 'lv']
if nodes is None:
return self.pfa_v_mag_pu.loc[:, (level, slice(None))]
else:
not_included = [_ for _ in nodes
if _ not in list(self.pfa_v_mag_pu[level].columns)]
labels_included = [_ for _ in nodes if _ not in not_included]
if not_included:
logging.warning("Voltage levels for {nodes} are not returned "
"from PFA".format(nodes=not_included))
return self.pfa_v_mag_pu[level][labels_included] | def v_res(self, nodes=None, level=None) | Get resulting voltage level at node.
Parameters
----------
nodes : :obj:`list`
List of string representatives of grid topology components, e.g.
:class:`~.grid.components.Generator`. If not provided defaults to
all nodes available in grid level `level`.
level : :obj:`str`
Either 'mv' or 'lv' or None (default). Depending on which grid
level results you are interested in. It is required to provide this
argument in order to distinguish voltage levels at primary and
secondary side of the transformer/LV station.
If not provided (respectively None) defaults to ['mv', 'lv'].
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Resulting voltage levels obtained from power flow analysis | 4.094538 | 3.610523 | 1.134057 |
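Selecting one voltage level from `pfa_v_mag_pu` relies on `.loc` with a `(level, slice(None))` tuple over MultiIndex columns, which needs lexsorted columns — that is what the `sort_index` call above ensures. With invented node names:

```python
import pandas as pd

columns = pd.MultiIndex.from_tuples(
    [("mv", "Bus_1"), ("lv", "Bus_3"), ("mv", "Bus_2")])
v = pd.DataFrame([[1.01, 1.02, 0.99]], columns=columns)

# .loc slicing on MultiIndex columns requires sorted columns
v = v.sort_index(axis=1)
mv_only = v.loc[:, (["mv"], slice(None))]
```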
if components is None:
return self.apparent_power
else:
not_included = [_ for _ in components
if _ not in self.apparent_power.index]
labels_included = [_ for _ in components if _ not in not_included]
if not_included:
logging.warning(
"No apparent power results available for: {}".format(
not_included))
return self.apparent_power.loc[:, labels_included] | def s_res(self, components=None) | Get apparent power in kVA at line(s) and transformer(s).
Parameters
----------
components : :obj:`list`
List of string representatives of :class:`~.grid.components.Line`
or :class:`~.grid.components.Transformer`. If not provided defaults
to return apparent power of all lines and transformers in the grid.
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Apparent power in kVA for lines and/or transformers. | 4.174804 | 3.748493 | 1.113728 |
storage = set_up_storage(node=mv_grid.station, parameters=parameters,
operational_mode=mode)
line = connect_storage(storage, mv_grid.station)
return storage, line | def storage_at_hvmv_substation(mv_grid, parameters, mode=None) | Place storage at HV/MV substation bus bar.
Parameters
----------
mv_grid : :class:`~.grid.grids.MVGrid`
MV grid instance
parameters : :obj:`dict`
Dictionary with storage parameters. Must at least contain
'nominal_power'. See :class:`~.grid.network.StorageControl` for more
information.
mode : :obj:`str`, optional
Operational mode. See :class:`~.grid.network.StorageControl` for
possible options and more information. Default: None.
Returns
-------
:class:`~.grid.components.Storage`, :class:`~.grid.components.Line`
Created storage instance and newly added line to connect storage. | 8.079207 | 6.699947 | 1.205861 |
# if the node the storage is connected to is an LVStation, voltage_level
# defines which side of the station the storage is connected to
if isinstance(node, LVStation):
if voltage_level == 'lv':
grid = node.grid
elif voltage_level == 'mv':
grid = node.mv_grid
else:
raise ValueError(
"{} is not a valid option for voltage_level.".format(
voltage_level))
else:
grid = node.grid
return Storage(operation=operational_mode,
id='{}_storage_{}'.format(grid,
len(grid.graph.nodes_by_attribute(
'storage')) + 1),
grid=grid,
geom=node.geom,
**parameters) | def set_up_storage(node, parameters,
voltage_level=None, operational_mode=None) | Sets up a storage instance.
Parameters
----------
node : :class:`~.grid.components.Station` or :class:`~.grid.components.BranchTee`
Node the storage will be connected to.
parameters : :obj:`dict`, optional
Dictionary with storage parameters. Must at least contain
'nominal_power'. See :class:`~.grid.network.StorageControl` for more
information.
voltage_level : :obj:`str`, optional
This parameter only needs to be provided if `node` is of type
:class:`~.grid.components.LVStation`. In that case `voltage_level`
defines which side of the LV station the storage is connected to. Valid
options are 'lv' and 'mv'. Default: None.
operational_mode : :obj:`str`, optional
Operational mode. See :class:`~.grid.network.StorageControl` for
possible options and more information. Default: None. | 5.067085 | 3.838447 | 1.320087 |
# add storage itself to graph
storage.grid.graph.add_node(storage, type='storage')
# add 1m connecting line to node the storage is connected to
if isinstance(storage.grid, MVGrid):
voltage_level = 'mv'
else:
voltage_level = 'lv'
# the apparent power the line must be able to carry is set to the
# storage's nominal power and an equal amount of reactive power, divided
# by the minimum load factor
lf_dict = storage.grid.network.config['grid_expansion_load_factors']
lf = min(lf_dict['{}_feedin_case_line'.format(voltage_level)],
lf_dict['{}_load_case_line'.format(voltage_level)])
apparent_power_line = sqrt(2) * storage.nominal_power / lf
line_type, line_count = select_cable(storage.grid.network, voltage_level,
apparent_power_line)
line = Line(
id=storage.id,
type=line_type,
kind='cable',
length=1e-3,
grid=storage.grid,
quantity=line_count)
storage.grid.graph.add_edge(node, storage, line=line, type='line')
return line | def connect_storage(storage, node) | Connects storage to the given node.
The storage is connected by a 1 m cable.
The cable is selected such that it can carry the storage's nominal power
and an equal amount of reactive power, divided by the minimum of the
feed-in case and load case line load factors.
Parameters
----------
storage : :class:`~.grid.components.Storage`
Storage instance to be integrated into the grid.
node : :class:`~.grid.components.Station` or :class:`~.grid.components.BranchTee`
Node the storage will be connected to.
Returns
-------
:class:`~.grid.components.Line`
Newly added line to connect storage. | 6.512207 | 5.198901 | 1.252612 |
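The cable rating in `connect_storage` follows from assuming reactive power equal to the nominal active power, so S = sqrt(P² + P²) = sqrt(2)·P, then dividing by the minimum load factor. With illustrative numbers (the load factors are not the shipped defaults):

```python
from math import sqrt

nominal_power = 100.0  # storage nominal power in kW
# min of (hypothetical) feed-in case and load case line load factors
lf = min(0.5, 1.0)

# S = sqrt(P^2 + P^2) = sqrt(2) * P, scaled up by the minimum load factor
apparent_power_line = sqrt(2) * nominal_power / lf
```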
service, method = call_details.method[1:].split("/")
log_fields.add_fields({
"system": "grpc",
"span.kind": self.KIND,
"grpc.service": service,
"grpc.method": method,
}) | def add_request_log_fields(
self, log_fields: LogFields,
call_details: Union[grpc.HandlerCallDetails,
grpc.ClientCallDetails]
) | Add log fields related to a request to the provided log fields
:param log_fields: log fields instance to which to add the fields
:param call_details: some information regarding the call | 4.940898 | 6.261128 | 0.789139 |
code = "Unknown" if err is not None else "OK"
duration = (datetime.utcnow() - start_time).total_seconds() * 1000
log_fields.add_fields({
"grpc.start_time": start_time.isoformat() + "Z",
"grpc.code": code,
"duration": "{duration}ms".format(duration=duration),
}) | def add_response_log_fields(self, log_fields: LogFields,
start_time: datetime, err: Exception) | Add log fields related to a response to the provided log fields
:param log_fields: log fields instance to which to add the fields
:param start_time: start time of the request
:param err: exception raised during the handling of the request. | 3.651333 | 3.964513 | 0.921004 |
timeseries_load_feedin_case = network.timeseries.timesteps_load_feedin_case
timestamp = {}
timestamp['load_case'] = (
timeseries_load_feedin_case.residual_load.idxmax()
if max(timeseries_load_feedin_case.residual_load) > 0 else None)
timestamp['feedin_case'] = (
timeseries_load_feedin_case.residual_load.idxmin()
if min(timeseries_load_feedin_case.residual_load) < 0 else None)
return timestamp | def select_worstcase_snapshots(network) | Select two worst-case snapshots from time series
Two time steps in a time series represent worst-case snapshots. These are
1. Load case: refers to the point in the time series where the
(load - generation) achieves its maximum and is greater than 0.
2. Feed-in case: refers to the point in the time series where the
(load - generation) achieves its minimum and is smaller than 0.
These two points are identified based on the generation and load time
series. If a load or feed-in case does not exist, None is returned.
Parameters
----------
network : :class:`~.grid.network.Network`
Network for which worst-case snapshots are identified.
Returns
-------
:obj:`dict`
Dictionary with keys 'load_case' and 'feedin_case'. Values are
corresponding worst-case snapshots of type
:pandas:`pandas.Timestamp<timestamp>` or None. | 3.451476 | 2.97518 | 1.16009 |
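On a plain residual load series the worst-case selection reduces to `idxmax`/`idxmin` with a sign check; a small sketch with made-up values:

```python
import pandas as pd

# Hypothetical residual load (load - generation) in kW per time step.
idx = pd.to_datetime(["2011-01-01 00:00", "2011-01-01 01:00",
                      "2011-01-01 02:00", "2011-01-01 03:00"])
residual_load = pd.Series([120.0, -80.0, 310.0, -150.0], index=idx)

timestamp = {
    # load case: maximum positive residual load
    "load_case": residual_load.idxmax() if residual_load.max() > 0 else None,
    # feed-in case: minimum negative residual load
    "feedin_case": residual_load.idxmin() if residual_load.min() < 0 else None,
}
```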
residual_load = \
pypsa_network.loads_t.p_set.sum(axis=1) - (
pypsa_network.generators_t.p_set.loc[
:, pypsa_network.generators_t.p_set.columns !=
'Generator_slack'].sum(axis=1) +
pypsa_network.storage_units_t.p_set.sum(axis=1))
return residual_load | def get_residual_load_from_pypsa_network(pypsa_network) | Calculates residual load in MW in MV grid and underlying LV grids.
Parameters
----------
pypsa_network : :pypsa:`pypsa.Network<network>`
The `PyPSA network
<https://www.pypsa.org/doc/components.html#network>`_ container,
containing load flow results.
Returns
-------
:pandas:`pandas.Series<series>`
Series with residual load in MW for each time step. Positive values
indicate a higher demand than generation and vice versa. Index of the
series is a :pandas:`pandas.DatetimeIndex<datetimeindex>` | 3.066046 | 3.526221 | 0.869499 |
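The column filter that drops the slack generator before summing can be seen on a toy frame (component names here are made up):

```python
import pandas as pd

snapshots = pd.to_datetime(["2011-01-01 00:00", "2011-01-01 01:00"])
# Stand-ins for pypsa_network.loads_t.p_set and generators_t.p_set.
loads_p = pd.DataFrame({"Load_1": [2.0, 3.0], "Load_2": [1.0, 1.0]},
                       index=snapshots)
gens_p = pd.DataFrame({"Gen_1": [1.5, 5.0], "Generator_slack": [9.9, 9.9]},
                      index=snapshots)

# Sum generation over all columns except the slack generator.
residual = loads_p.sum(axis=1) - gens_p.loc[
    :, gens_p.columns != "Generator_slack"].sum(axis=1)
```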
if network.pypsa is not None:
residual_load = get_residual_load_from_pypsa_network(network.pypsa) * \
1e3
else:
grids = [network.mv_grid] + list(network.mv_grid.lv_grids)
gens = []
loads = []
for grid in grids:
gens.extend(grid.generators)
gens.extend(list(grid.graph.nodes_by_attribute('storage')))
loads.extend(list(grid.graph.nodes_by_attribute('load')))
generation_timeseries = pd.Series(
0, index=network.timeseries.timeindex)
for gen in gens:
generation_timeseries += gen.timeseries.p
load_timeseries = pd.Series(0, index=network.timeseries.timeindex)
for load in loads:
load_timeseries += load.timeseries.p
residual_load = load_timeseries - generation_timeseries
timeseries_load_feedin_case = residual_load.rename(
'residual_load').to_frame()
timeseries_load_feedin_case['case'] = \
timeseries_load_feedin_case.residual_load.apply(
lambda _: 'feedin_case' if _ < 0 else 'load_case')
return timeseries_load_feedin_case | def assign_load_feedin_case(network) | For each time step evaluate whether it is a feed-in or a load case.
Feed-in and load case are identified based on the
generation and load time series and defined as follows:
1. Load case: positive (load - generation) at HV/MV substation
2. Feed-in case: negative (load - generation) at HV/MV substation
Output of this function is written to `timesteps_load_feedin_case`
attribute of the network.timeseries (see
:class:`~.grid.network.TimeSeries`).
Parameters
----------
network : :class:`~.grid.network.Network`
Network for which worst-case snapshots are identified.
Returns
--------
:pandas:`pandas.DataFrame<dataframe>`
Dataframe with information on whether time step is handled as load case
('load_case') or feed-in case ('feedin_case') for each time step in
`timeindex` attribute of network.timeseries.
Index of the dataframe is network.timeseries.timeindex. Columns of the
dataframe are 'residual_load' with (load - generation) in kW at HV/MV
substation and 'case' with 'load_case' for positive residual load and
'feedin_case' for negative residual load. | 3.273568 | 2.793394 | 1.171896 |
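The per-time-step classification is a simple sign test on the residual load; a sketch with invented numbers (note that a residual load of exactly zero falls into the load case):

```python
import pandas as pd

idx = pd.to_datetime(["2011-01-01 00:00", "2011-01-01 01:00",
                      "2011-01-01 02:00"])
residual_load = pd.Series([150.0, -40.0, 0.0], index=idx)

df = residual_load.rename("residual_load").to_frame()
df["case"] = df.residual_load.apply(
    lambda _: "feedin_case" if _ < 0 else "load_case")
```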
if not config_dir:
config_file = os.path.join(get_default_config_path(), filename)
else:
config_file = os.path.join(config_dir, filename)
# config file does not exist -> copy default
if not os.path.isfile(config_file):
if copy_default_config:
logger.info('Config file {} not found, I will create a '
'default version'.format(config_file))
make_directory(os.path.dirname(config_file))
shutil.copy(os.path.join(package_path, 'config', filename.
replace('.cfg', '_default.cfg')),
config_file)
else:
message = 'Config file {} not found.'.format(config_file)
logger.error(message)
raise FileNotFoundError(message)
if len(cfg.read(config_file)) == 0:
message = 'Config file {} not found or empty.'.format(config_file)
logger.error(message)
raise FileNotFoundError(message)
global _loaded
_loaded = True | def load_config(filename, config_dir=None, copy_default_config=True) | Loads the specified config file.
Parameters
-----------
filename : :obj:`str`
Config file name, e.g. 'config_grid.cfg'.
config_dir : :obj:`str`, optional
Path to config file. If None uses default edisgo config directory
specified in config file 'config_system.cfg' in section 'user_dirs'
by subsections 'root_dir' and 'config_dir'. Default: None.
copy_default_config : Boolean
If True copies a default config file into `config_dir` if the
specified config file does not exist. Default: True. | 2.436588 | 2.390383 | 1.019329 |
config_dir = get('user_dirs', 'config_dir')
root_dir = get('user_dirs', 'root_dir')
root_path = os.path.join(os.path.expanduser('~'), root_dir)
config_path = os.path.join(root_path, config_dir)
# root directory does not exist
if not os.path.isdir(root_path):
# create it
logger.info('eDisGo root path {} not found, I will create it.'
.format(root_path))
make_directory(root_path)
# config directory does not exist
if not os.path.isdir(config_path):
# create it
make_directory(config_path)
# copy default config files
logger.info('eDisGo config path {} not found, I will create it.'
.format(config_path))
# copy default config files if they don't exist
internal_config_dir = os.path.join(package_path, 'config')
for file in glob(os.path.join(internal_config_dir, '*.cfg')):
filename = os.path.join(config_path,
os.path.basename(file).replace('_default', ''))
if not os.path.isfile(filename):
logger.info('I will create a default config file {} in {}'
.format(file, config_path))
shutil.copy(file, filename)
return config_path | def get_default_config_path() | Returns the basic edisgo config path. If it does not yet exist it creates
it and copies all default config files into it.
Returns
--------
:obj:`str`
Path to default edisgo config directory specified in config file
'config_system.cfg' in section 'user_dirs' by subsections 'root_dir'
and 'config_dir'. | 2.433639 | 2.17618 | 1.118308 |
if not os.path.isdir(directory):
logger.info('Path {} not found, I will create it.'
.format(directory))
os.mkdir(directory) | def make_directory(directory) | Makes directory if it does not exist.
Parameters
-----------
directory : :obj:`str`
Directory path | 4.085363 | 5.493032 | 0.743736 |
crit_lines = pd.DataFrame()
crit_lines = _line_load(network, network.mv_grid, crit_lines)
if not crit_lines.empty:
logger.debug('==> {} line(s) in MV grid has/have load issues.'.format(
crit_lines.shape[0]))
else:
logger.debug('==> No line load issues in MV grid.')
return crit_lines | def mv_line_load(network) | Checks for over-loading issues in MV grid.
Parameters
----------
network : :class:`~.grid.network.Network`
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Dataframe containing over-loaded MV lines, their maximum relative
over-loading and the corresponding time step.
Index of the dataframe are the over-loaded lines of type
:class:`~.grid.components.Line`. Columns are 'max_rel_overload'
containing the maximum relative over-loading as float and 'time_index'
containing the corresponding time step the over-loading occurred in as
:pandas:`pandas.Timestamp<timestamp>`.
Notes
-----
Line over-load is determined based on allowed load factors for feed-in and
load cases that are defined in the config file 'config_grid_expansion' in
section 'grid_expansion_load_factors'. | 5.346944 | 4.978475 | 1.074013 |
crit_lines = pd.DataFrame()
for lv_grid in network.mv_grid.lv_grids:
crit_lines = _line_load(network, lv_grid, crit_lines)
if not crit_lines.empty:
logger.debug('==> {} line(s) in LV grids has/have load issues.'.format(
crit_lines.shape[0]))
else:
logger.debug('==> No line load issues in LV grids.')
return crit_lines | def lv_line_load(network) | Checks for over-loading issues in LV grids.
Parameters
----------
network : :class:`~.grid.network.Network`
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Dataframe containing over-loaded LV lines, their maximum relative
over-loading and the corresponding time step.
Index of the dataframe are the over-loaded lines of type
:class:`~.grid.components.Line`. Columns are 'max_rel_overload'
containing the maximum relative over-loading as float and 'time_index'
containing the corresponding time step the over-loading occurred in as
:pandas:`pandas.Timestamp<timestamp>`.
Notes
-----
Line over-load is determined based on allowed load factors for feed-in and
load cases that are defined in the config file 'config_grid_expansion' in
section 'grid_expansion_load_factors'. | 5.014227 | 4.742022 | 1.057403 |
if isinstance(grid, LVGrid):
grid_level = 'lv'
else:
grid_level = 'mv'
for line in list(grid.graph.lines()):
i_line_allowed_per_case = {}
i_line_allowed_per_case['feedin_case'] = \
line['line'].type['I_max_th'] * line['line'].quantity * \
network.config['grid_expansion_load_factors'][
'{}_feedin_case_line'.format(grid_level)]
i_line_allowed_per_case['load_case'] = \
line['line'].type['I_max_th'] * line['line'].quantity * \
network.config['grid_expansion_load_factors'][
'{}_load_case_line'.format(grid_level)]
# maximum allowed line load in each time step
i_line_allowed = \
network.timeseries.timesteps_load_feedin_case.case.apply(
lambda _: i_line_allowed_per_case[_])
try:
# check if maximum current from power flow analysis exceeds
# allowed maximum current
i_line_pfa = network.results.i_res[repr(line['line'])]
if any((i_line_allowed - i_line_pfa) < 0):
# find out largest relative deviation
relative_i_res = i_line_pfa / i_line_allowed
crit_lines = crit_lines.append(pd.DataFrame(
{'max_rel_overload': relative_i_res.max(),
'time_index': relative_i_res.idxmax()},
index=[line['line']]))
except KeyError:
logger.debug('No results for line {} '.format(str(line)) +
'to check overloading.')
return crit_lines | def _line_load(network, grid, crit_lines) | Checks for over-loading issues of lines.
Parameters
----------
network : :class:`~.grid.network.Network`
grid : :class:`~.grid.grids.LVGrid` or :class:`~.grid.grids.MVGrid`
crit_lines : :pandas:`pandas.DataFrame<dataframe>`
Dataframe containing over-loaded lines, their maximum relative
over-loading and the corresponding time step.
Index of the dataframe are the over-loaded lines of type
:class:`~.grid.components.Line`. Columns are 'max_rel_overload'
containing the maximum relative over-loading as float and 'time_index'
containing the corresponding time step the over-loading occurred in as
:pandas:`pandas.Timestamp<timestamp>`.
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Dataframe containing over-loaded lines, their maximum relative
over-loading and the corresponding time step.
Index of the dataframe are the over-loaded lines of type
:class:`~.grid.components.Line`. Columns are 'max_rel_overload'
containing the maximum relative over-loading as float and 'time_index'
containing the corresponding time step the over-loading occurred in as
:pandas:`pandas.Timestamp<timestamp>`. | 4.822997 | 4.239796 | 1.137554 |
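The per-case allowed current and the relative overload computation can be sketched in isolation; the thermal limit and load factors below are invented:

```python
import pandas as pd

i_max_th, quantity = 0.4, 1          # assumed thermal limit (kA), parallel lines
load_factors = {"mv_feedin_case_line": 1.0, "mv_load_case_line": 0.5}

i_allowed_per_case = {
    "feedin_case": i_max_th * quantity * load_factors["mv_feedin_case_line"],
    "load_case": i_max_th * quantity * load_factors["mv_load_case_line"],
}
cases = pd.Series(["load_case", "feedin_case"],
                  index=pd.to_datetime(["2011-01-01 00:00",
                                        "2011-01-01 01:00"]))
i_allowed = cases.apply(lambda _: i_allowed_per_case[_])
i_pfa = pd.Series([0.3, 0.3], index=cases.index)   # power flow result (kA)

relative_i_res = i_pfa / i_allowed
overloaded = (i_allowed - i_pfa) < 0
```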
crit_stations = pd.DataFrame()
crit_stations = _station_load(network, network.mv_grid.station,
crit_stations)
if not crit_stations.empty:
logger.debug('==> HV/MV station has load issues.')
else:
logger.debug('==> No HV/MV station load issues.')
return crit_stations | def hv_mv_station_load(network) | Checks for over-loading of HV/MV station.
Parameters
----------
network : :class:`~.grid.network.Network`
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Dataframe containing over-loaded HV/MV stations, their apparent power
at maximal over-loading and the corresponding time step.
Index of the dataframe are the over-loaded stations of type
:class:`~.grid.components.MVStation`. Columns are 's_pfa'
containing the apparent power at maximal over-loading as float and
'time_index' containing the corresponding time step the over-loading
occurred in as :pandas:`pandas.Timestamp<timestamp>`.
Notes
-----
Over-load is determined based on allowed load factors for feed-in and
load cases that are defined in the config file 'config_grid_expansion' in
section 'grid_expansion_load_factors'. | 5.996403 | 6.804676 | 0.881218 |
crit_stations = pd.DataFrame()
for lv_grid in network.mv_grid.lv_grids:
crit_stations = _station_load(network, lv_grid.station,
crit_stations)
if not crit_stations.empty:
logger.debug('==> {} MV/LV station(s) has/have load issues.'.format(
crit_stations.shape[0]))
else:
logger.debug('==> No MV/LV station load issues.')
return crit_stations | def mv_lv_station_load(network) | Checks for over-loading of MV/LV stations.
Parameters
----------
network : :class:`~.grid.network.Network`
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Dataframe containing over-loaded MV/LV stations, their apparent power
at maximal over-loading and the corresponding time step.
Index of the dataframe are the over-loaded stations of type
:class:`~.grid.components.LVStation`. Columns are 's_pfa'
containing the apparent power at maximal over-loading as float and
'time_index' containing the corresponding time step the over-loading
occurred in as :pandas:`pandas.Timestamp<timestamp>`.
Notes
-----
Over-load is determined based on allowed load factors for feed-in and
load cases that are defined in the config file 'config_grid_expansion' in
section 'grid_expansion_load_factors'. | 4.992682 | 4.95682 | 1.007235 |
if isinstance(station, LVStation):
grid_level = 'lv'
else:
grid_level = 'mv'
# maximum allowed apparent power of station for feed-in and load case
s_station = sum([_.type.S_nom for _ in station.transformers])
s_station_allowed_per_case = {}
s_station_allowed_per_case['feedin_case'] = s_station * network.config[
'grid_expansion_load_factors']['{}_feedin_case_transformer'.format(
grid_level)]
s_station_allowed_per_case['load_case'] = s_station * network.config[
'grid_expansion_load_factors']['{}_load_case_transformer'.format(
grid_level)]
# maximum allowed apparent power of station in each time step
s_station_allowed = \
network.timeseries.timesteps_load_feedin_case.case.apply(
lambda _: s_station_allowed_per_case[_])
try:
if isinstance(station, LVStation):
s_station_pfa = network.results.s_res(
station.transformers).sum(axis=1)
else:
s_station_pfa = network.results.s_res([station]).iloc[:, 0]
s_res = s_station_allowed - s_station_pfa
s_res = s_res[s_res < 0]
# check if apparent power from power flow analysis exceeds the
# maximum allowed apparent power of the station at any time step
if not s_res.empty:
# find out largest relative deviation
load_factor = \
network.timeseries.timesteps_load_feedin_case.case.apply(
lambda _: network.config[
'grid_expansion_load_factors'][
'{}_{}_transformer'.format(grid_level, _)])
relative_s_res = load_factor * s_res
crit_stations = crit_stations.append(pd.DataFrame(
{'s_pfa': s_station_pfa.loc[relative_s_res.idxmin()],
'time_index': relative_s_res.idxmin()},
index=[station]))
except KeyError:
logger.debug('No results for {} station to check overloading.'.format(
grid_level.upper()))
return crit_stations | def _station_load(network, station, crit_stations) | Checks for over-loading of stations.
Parameters
----------
network : :class:`~.grid.network.Network`
station : :class:`~.grid.components.LVStation` or :class:`~.grid.components.MVStation`
crit_stations : :pandas:`pandas.DataFrame<dataframe>`
Dataframe containing over-loaded stations, their apparent power at
maximal over-loading and the corresponding time step.
Index of the dataframe are the over-loaded stations either of type
:class:`~.grid.components.LVStation` or
:class:`~.grid.components.MVStation`. Columns are 's_pfa'
containing the apparent power at maximal over-loading as float and
'time_index' containing the corresponding time step the over-loading
occurred in as :pandas:`pandas.Timestamp<timestamp>`.
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Dataframe containing over-loaded stations, their apparent power at
maximal over-loading and the corresponding time step.
Index of the dataframe are the over-loaded stations either of type
:class:`~.grid.components.LVStation` or
:class:`~.grid.components.MVStation`. Columns are 's_pfa'
containing the apparent power at maximal over-loading as float and
'time_index' containing the corresponding time step the over-loading
occurred in as :pandas:`pandas.Timestamp<timestamp>`.
def _append_crit_node(series):
return pd.DataFrame({'v_mag_pu': series.max(),
'time_index': series.idxmax()},
index=[node])
crit_nodes_grid = pd.DataFrame()
v_mag_pu_pfa = network.results.v_res(nodes=nodes, level=voltage_level)
for node in nodes:
# check for over- and under-voltage
overvoltage = v_mag_pu_pfa[repr(node)][
(v_mag_pu_pfa[repr(node)] > (v_dev_allowed_upper.loc[
v_mag_pu_pfa.index]))]
undervoltage = v_mag_pu_pfa[repr(node)][
(v_mag_pu_pfa[repr(node)] < (v_dev_allowed_lower.loc[
v_mag_pu_pfa.index]))]
# write greatest voltage deviation to dataframe
if not overvoltage.empty:
overvoltage_diff = overvoltage - v_dev_allowed_upper.loc[
overvoltage.index]
if not undervoltage.empty:
undervoltage_diff = v_dev_allowed_lower.loc[
undervoltage.index] - undervoltage
if overvoltage_diff.max() > undervoltage_diff.max():
crit_nodes_grid = crit_nodes_grid.append(
_append_crit_node(overvoltage_diff))
else:
crit_nodes_grid = crit_nodes_grid.append(
_append_crit_node(undervoltage_diff))
else:
crit_nodes_grid = crit_nodes_grid.append(
_append_crit_node(overvoltage_diff))
elif not undervoltage.empty:
undervoltage_diff = v_dev_allowed_lower.loc[
undervoltage.index] - undervoltage
crit_nodes_grid = crit_nodes_grid.append(
_append_crit_node(undervoltage_diff))
return crit_nodes_grid | def _voltage_deviation(network, nodes, v_dev_allowed_upper,
v_dev_allowed_lower, voltage_level) | Checks for voltage deviation issues at the given nodes.
Parameters
----------
network : :class:`~.grid.network.Network`
nodes : :obj:`list`
List of nodes (of type :class:`~.grid.components.Generator`,
:class:`~.grid.components.Load`, etc.) to check voltage deviation for.
v_dev_allowed_upper : :pandas:`pandas.Series<series>`
Series with time steps (of type :pandas:`pandas.Timestamp<timestamp>`)
power flow analysis was conducted for and the allowed upper limit of
voltage deviation for each time step as float.
v_dev_allowed_lower : :pandas:`pandas.Series<series>`
Series with time steps (of type :pandas:`pandas.Timestamp<timestamp>`)
power flow analysis was conducted for and the allowed lower limit of
voltage deviation for each time step as float.
voltage_level : :obj:`str`
Specifies which voltage level to retrieve power flow analysis results
for. Possible options are 'mv' and 'lv'.
Returns
-------
:pandas:`pandas.DataFrame<dataframe>`
Dataframe with critical nodes and their maximum voltage deviation.
Index of the dataframe are all nodes (of type
:class:`~.grid.components.Generator`, :class:`~.grid.components.Load`,
etc.) with voltage issues. Columns are 'v_mag_pu' containing the
maximum voltage deviation as float and 'time_index' containing the
corresponding time step the over-voltage occurred in as
:pandas:`pandas.Timestamp<timestamp>`. | 2.285026 | 1.975258 | 1.156824 |
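The over-/under-voltage split can be exercised on a single node series; the allowed band below is an assumed one of ±0.1 p.u.:

```python
import pandas as pd

idx = pd.to_datetime(["2011-01-01 00:00", "2011-01-01 01:00"])
v_mag_pu = pd.Series([1.12, 0.94], index=idx)        # node voltage in p.u.
v_dev_allowed_upper = pd.Series([1.10, 1.10], index=idx)
v_dev_allowed_lower = pd.Series([0.90, 0.90], index=idx)

overvoltage = v_mag_pu[v_mag_pu > v_dev_allowed_upper.loc[v_mag_pu.index]]
undervoltage = v_mag_pu[v_mag_pu < v_dev_allowed_lower.loc[v_mag_pu.index]]
# Deviation beyond the allowed upper limit.
overvoltage_diff = overvoltage - v_dev_allowed_upper.loc[overvoltage.index]
```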
v_mag_pu_pfa = network.results.v_res()
if (v_mag_pu_pfa > 1.1).any().any() or (v_mag_pu_pfa < 0.9).any().any():
message = "Maximum allowed voltage deviation of 10% exceeded."
raise ValueError(message) | def check_ten_percent_voltage_deviation(network) | Checks if the 10% voltage deviation criterion is exceeded.
Parameters
----------
network : :class:`~.grid.network.Network` | 5.440531 | 5.423748 | 1.003094 |
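The double `any()` reduces the whole voltage frame to a single flag; a quick sketch:

```python
import pandas as pd

v_mag_pu_pfa = pd.DataFrame({"Bus_1": [1.05, 1.12],
                             "Bus_2": [0.98, 0.97]})
# True if any node at any time step leaves the 0.9..1.1 p.u. band.
exceeded = bool((v_mag_pu_pfa > 1.1).any().any()
                or (v_mag_pu_pfa < 0.9).any().any())
```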
srid = int(network.config['geo']['srid'])
return partial(pyproj.transform,
pyproj.Proj(init='epsg:{}'
.format(str(srid))), # source coordinate system
pyproj.Proj(init='epsg:3035') # destination coordinate system
) | def proj2equidistant(network) | Defines conformal (e.g. WGS84) to ETRS (equidistant) projection
Source CRS is loaded from Network's config.
Parameters
----------
network : :class:`~.grid.network.Network`
The eDisGo container object
Returns
-------
:py:func:`functools.partial` | 4.019285 | 4.491731 | 0.894819 |
lines = []
while not lines:
node_shp = transform(proj2equidistant(network), node.geom)
buffer_zone_shp = node_shp.buffer(radius)
for line in grid.graph.lines():
nodes = line['adj_nodes']
branch_shp = transform(proj2equidistant(network), LineString([nodes[0].geom, nodes[1].geom]))
if buffer_zone_shp.intersects(branch_shp):
lines.append(line)
radius += radius_inc
return sorted(lines, key=lambda _: repr(_)) | def calc_geo_lines_in_buffer(network, node, grid, radius, radius_inc) | Determines lines in the node's associated graph that are at least partly
within a buffer of radius `radius` around the node. If there are no lines, the buffer is
successively extended by radius_inc until lines are found.
Parameters
----------
network : :class:`~.grid.network.Network`
The eDisGo container object
node : :class:`~.grid.components.Component`
Origin node the buffer is created around (e.g. :class:`~.grid.components.Generator`).
Node must be a member of grid's graph (grid.graph)
grid : :class:`~.grid.grids.Grid`
Grid whose lines are searched
radius : :obj:`float`
Buffer radius in m
radius_inc : :obj:`float`
Buffer radius increment in m
Returns
-------
:obj:`list` of :class:`~.grid.components.Line`
Sorted (by repr()) list of lines
Notes
-----
Adapted from `Ding0 <https://github.com/openego/ding0/blob/\
21a52048f84ec341fe54e0204ac62228a9e8a32a/\
ding0/tools/geo.py#L53>`_. | 4.600093 | 4.025896 | 1.142626 |
branch_detour_factor = network.config['grid_connection'][
'branch_detour_factor']
# notice: vincenty takes (lat,lon)
branch_length = branch_detour_factor * vincenty((node_source.geom.y, node_source.geom.x),
(node_target.geom.y, node_target.geom.x)).m
# ========= BUG: LINE LENGTH=0 WHEN CONNECTING GENERATORS ===========
# When importing generators, the geom_new field is used as position. If it is empty, EnergyMap's geom
# is used and so there are a couple of generators at the same position => length of interconnecting
# line is 0. See issue #76
if branch_length == 0:
branch_length = 1
logger.debug('Geo distance is zero, check objects\' positions. '
'Distance is set to 1m')
# ===================================================================
return branch_length | def calc_geo_dist_vincenty(network, node_source, node_target) | Calculates the geodesic distance between node_source and node_target
incorporating the detour factor in config.
Parameters
----------
network : :class:`~.grid.network.Network`
The eDisGo container object
node_source : :class:`~.grid.components.Component`
Node to connect (e.g. :class:`~.grid.components.Generator`)
node_target : :class:`~.grid.components.Component`
Target node (e.g. :class:`~.grid.components.BranchTee`)
Returns
-------
:obj:`float`
Distance in m | 10.362311 | 9.585139 | 1.081081 |
user = details.get('user')
if not user:
user_uuid = kwargs.get('uid')
if not user_uuid:
return
username = uuid_to_username(user_uuid)
else:
username = user.username
return {
'username': username
} | def get_username(details, backend, response, *args, **kwargs) | Sets the `username` argument.
If the user exists already, use the existing username. Otherwise
generate a username from the user's UUID using the
`helusers.utils.uuid_to_username` function. | 2.916434 | 2.549176 | 1.144069 |
user = sociallogin.user
# If the user hasn't been saved yet, it will be updated
# later on in the sign-up flow.
if not user.pk:
return
data = sociallogin.account.extra_data
oidc = sociallogin.account.provider == 'helsinki_oidc'
update_user(user, data, oidc) | def pre_social_login(self, request, sociallogin) | Update user based on token information. | 5.362392 | 5.183607 | 1.03449 |
ad_list = ADGroupMapping.objects.values_list('ad_group', 'group')
mappings = {ad_group: group for ad_group, group in ad_list}
user_ad_groups = set(self.ad_groups.filter(groups__isnull=False).values_list(flat=True))
all_mapped_groups = set(mappings.values())
old_groups = set(self.groups.filter(id__in=all_mapped_groups).values_list(flat=True))
new_groups = set([mappings[x] for x in user_ad_groups])
groups_to_delete = old_groups - new_groups
if groups_to_delete:
self.groups.remove(*groups_to_delete)
groups_to_add = new_groups - old_groups
if groups_to_add:
self.groups.add(*groups_to_add) | def sync_groups_from_ad(self) | Determine which Django groups to add or remove based on AD groups. | 2.291382 | 2.170097 | 1.055889 |
defaults = api_settings.defaults
defaults['JWT_PAYLOAD_GET_USER_ID_HANDLER'] = (
__name__ + '.get_user_id_from_payload_handler')
if 'allauth.socialaccount' not in settings.INSTALLED_APPS:
return
from allauth.socialaccount.models import SocialApp
try:
app = SocialApp.objects.get(provider='helsinki')
except SocialApp.DoesNotExist:
return
defaults['JWT_SECRET_KEY'] = app.secret
defaults['JWT_AUDIENCE'] = app.client_id | def patch_jwt_settings() | Patch rest_framework_jwt authentication settings from allauth | 2.633072 | 2.585269 | 1.01849 |
uuid_data = getattr(uuid, 'bytes', None) or UUID(uuid).bytes
b32coded = base64.b32encode(uuid_data)
return 'u-' + b32coded.decode('ascii').replace('=', '').lower() | def uuid_to_username(uuid) | Convert UUID to username.
>>> uuid_to_username('00fbac99-0bab-5e66-8e84-2e567ea4d1f6')
'u-ad52zgilvnpgnduefzlh5jgr6y'
>>> uuid_to_username(UUID('00fbac99-0bab-5e66-8e84-2e567ea4d1f6'))
'u-ad52zgilvnpgnduefzlh5jgr6y' | 4.67607 | 5.456963 | 0.8569 |
if not username.startswith('u-') or len(username) != 28:
raise ValueError('Not a UUID based username: %r' % (username,))
decoded = base64.b32decode(username[2:].upper() + '======')
return UUID(bytes=decoded) | def username_to_uuid(username) | Convert username to UUID.
>>> username_to_uuid('u-ad52zgilvnpgnduefzlh5jgr6y')
UUID('00fbac99-0bab-5e66-8e84-2e567ea4d1f6') | 4.400019 | 4.120029 | 1.067958 |
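The two helpers are inverses of each other; the round trip from the docstring examples can be reproduced standalone:

```python
import base64
from uuid import UUID

def uuid_to_username(u):
    # Accept either a UUID instance or a UUID string.
    uuid_data = getattr(u, 'bytes', None) or UUID(u).bytes
    b32coded = base64.b32encode(uuid_data)
    return 'u-' + b32coded.decode('ascii').replace('=', '').lower()

def username_to_uuid(username):
    # 'u-' prefix plus 26 base32 characters for 16 UUID bytes.
    if not username.startswith('u-') or len(username) != 28:
        raise ValueError('Not a UUID based username: %r' % (username,))
    decoded = base64.b32decode(username[2:].upper() + '======')
    return UUID(bytes=decoded)

name = uuid_to_username('00fbac99-0bab-5e66-8e84-2e567ea4d1f6')
```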
payload = payload.copy()
field_map = {
'given_name': 'first_name',
'family_name': 'last_name',
'email': 'email',
}
ret = {}
for token_attr, user_attr in field_map.items():
if token_attr not in payload:
continue
ret[user_attr] = payload.pop(token_attr)
ret.update(payload)
return ret | def oidc_to_user_data(payload) | Map OIDC claims to Django user fields. | 2.413439 | 2.130475 | 1.132817 |
if self._authorized_api_scopes is None:
return None
return all((x in self._authorized_api_scopes) for x in api_scopes) | def has_api_scopes(self, *api_scopes) | Test if all given API scopes are authorized.
:type api_scopes: list[str]
:param api_scopes: The API scopes to test
:rtype: bool|None
:return:
True or False, if the API Token has the API scopes field set,
otherwise None | 3.94145 | 3.928102 | 1.003398 |
if self._authorized_api_scopes is None:
return None
return any(
x == prefix or x.startswith(prefix + '.')
for x in self._authorized_api_scopes) | def has_api_scope_with_prefix(self, prefix) | Test if there is an API scope with the given prefix.
:rtype: bool|None | 3.983978 | 3.991555 | 0.998102 |
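The prefix test deliberately matches either the scope itself or a dotted sub-scope, so `'consents'` matches `'consents.read'` but `'consent'` does not; a standalone sketch:

```python
def has_scope_with_prefix(authorized, prefix):
    # Match the exact scope or any "<prefix>.<something>" sub-scope.
    return any(s == prefix or s.startswith(prefix + '.') for s in authorized)

scopes = ['profile', 'consents.read']
```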
# Separate the module path from the function name.
module_path, function_name = tuple(dotted_function_module_path.rsplit('.', 1))
module = import_module(module_path)
return getattr(module, function_name) | def get_function_by_path(dotted_function_module_path) | Returns the function for a given path (e.g. 'my_app.my_module.my_function'). | 2.518083 | 2.442326 | 1.031018 |
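Usage example, resolving a stdlib function from its dotted path:

```python
from importlib import import_module

def get_function_by_path(dotted_path):
    # Separate the module path from the function name and resolve it.
    module_path, function_name = dotted_path.rsplit('.', 1)
    return getattr(import_module(module_path), function_name)

sqrt = get_function_by_path('math.sqrt')
```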
if self.in_task_section:
raise RuntimeError(u'Waypoints must be written before any tasks')
if not name:
raise ValueError(u'Waypoint name must not be empty')
fields = [
self.escape(name),
self.escape(shortname),
country,
self.format_latitude(latitude),
self.format_longitude(longitude),
self.format_distance(elevation),
str(style),
str(runway_direction),
self.format_distance(runway_length),
self.escape(frequency),
self.escape(description),
]
self.write_fields(fields)
self.wps.add(name) | def write_waypoint(
self, name, shortname, country, latitude, longitude, elevation=u'',
style=WaypointStyle.NORMAL, runway_direction=u'', runway_length=u'',
frequency=u'', description=u'') | Write a waypoint::
writer.write_waypoint(
'Meiersberg',
'MEIER',
'DE',
(51 + 7.345 / 60.),
(6 + 24.765 / 60.),
)
# -> "Meiersberg","MEIER",DE,5107.345N,00624.765E,,1,,,,
:param name: name of the waypoint (must not be empty)
:param shortname: short name for depending GPS devices
:param country: IANA top level domain country code (see
http://www.iana.org/cctld/cctld-whois.htm)
:param latitude: latitude of the point (between -90 and 90 degrees)
:param longitude: longitude of the point (between -180 and 180 degrees)
:param elevation: elevation of the waypoint in meters or as
``(elevation, unit)`` tuple
:param style: the waypoint type (see official specification for the
list of valid styles, defaults to "Normal")
:param runway_direction: heading of the runway in degrees if the
waypoint is landable
:param runway_length: length of the runway in meters or as ``(length,
unit)`` tuple if the waypoint is landable
:param frequency: radio frequency of the airport
:param description: optional description of the waypoint (no length
limit) | 2.754475 | 2.951339 | 0.933297 |
if not self.in_task_section:
self.write_line()
self.write_line(self.DIVIDER)
self.in_task_section = True
fields = [self.escape(description)]
for waypoint in waypoints:
if waypoint not in self.wps:
raise ValueError(u'Waypoint "%s" was not found' % waypoint)
fields.append(self.escape(waypoint))
self.write_fields(fields) | def write_task(self, description, waypoints) | Write a task definition::
writer.write_task('500 km FAI', [
'MEIER',
'BRILO',
'AILER',
'MEIER',
])
# -> "500 km FAI","MEIER","BRILO","AILER","MEIER"
Make sure that the referenced waypoints have been written with
:meth:`~aerofiles.seeyou.Writer.write_waypoint` before writing the
task. The task section divider will be written to automatically when
:meth:`~aerofiles.seeyou.Writer.write_task` is called the first time.
After the first task is written
:meth:`~aerofiles.seeyou.Writer.write_waypoint` must not be called
anymore.
:param description: description of the task (may be blank)
:param waypoints: list of waypoints in the task (names must match the
long names of previously written waypoints) | 3.531622 | 3.148623 | 1.12164 |
if not self.in_task_section:
raise RuntimeError(
u'Task options have to be written in task section')
fields = ['Options']
if 'start_time' in kw:
fields.append(u'NoStart=' + self.format_time(kw['start_time']))
if 'task_time' in kw:
fields.append(u'TaskTime=' + self.format_timedelta(kw['task_time']))
if 'waypoint_distance' in kw:
fields.append(u'WpDis=%s' % kw['waypoint_distance'])
if 'distance_tolerance' in kw:
fields.append(
u'NearDis=' + self.format_distance(kw['distance_tolerance']))
if 'altitude_tolerance' in kw:
fields.append(
u'NearAlt=' + self.format_distance(kw['altitude_tolerance']))
if 'min_distance' in kw:
fields.append(u'MinDis=%s' % kw['min_distance'])
if 'random_order' in kw:
fields.append(u'RandomOrder=%s' % kw['random_order'])
if 'max_points' in kw:
fields.append(u'MaxPts=%d' % kw['max_points'])
if 'before_points' in kw:
fields.append(u'BeforePts=%d' % kw['before_points'])
if 'after_points' in kw:
fields.append(u'AfterPts=%d' % kw['after_points'])
if 'bonus' in kw:
fields.append(u'Bonus=%d' % kw['bonus'])
self.write_fields(fields) | def write_task_options(self, **kw) | Write an options line for a task definition::
writer.write_task_options(
start_time=time(12, 34, 56),
task_time=timedelta(hours=1, minutes=45, seconds=12),
waypoint_distance=False,
distance_tolerance=(0.7, 'km'),
altitude_tolerance=300.0,
)
# -> Options,NoStart=12:34:56,TaskTime=01:45:12,WpDis=False,NearDis=0.7km,NearAlt=300.0m
:param start_time: opening time of the start line as
:class:`datetime.time` or string
:param task_time: designated time for the task as
:class:`datetime.timedelta` or string
:param waypoint_distance: task distance calculation (``False``: use
fixes, ``True``: use waypoints)
:param distance_tolerance: distance tolerance in meters or as
``(distance, unit)`` tuple
:param altitude_tolerance: altitude tolerance in meters or as
``(distance, unit)`` tuple
:param min_distance: uncompleted leg (``False``: calculate maximum
distance from last observation zone)
:param random_order: if ``True``, random order of waypoints is
checked
:param max_points: maximum number of points
:param before_points: number of mandatory waypoints at the beginning.
``1`` means start line only, ``2`` means start line plus first
point in task sequence (Task line).
:param after_points: number of mandatory waypoints at the end. ``1``
means finish line only, ``2`` means finish line and one point
before finish in task sequence (Task line).
:param bonus: bonus for crossing the finish line | 2.199834 | 1.467194 | 1.499348 |
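The ``NoStart=`` and ``TaskTime=`` values in the example above are produced by the writer's time-formatting helpers. A minimal sketch of what such helpers might look like (``format_time``/``format_timedelta`` here are stand-ins, not the library's actual implementations):

```python
from datetime import time, timedelta

def format_time(value):
    # Render a datetime.time as HH:MM:SS; pass strings through unchanged.
    if isinstance(value, time):
        return value.strftime('%H:%M:%S')
    return str(value)

def format_timedelta(value):
    # Render a timedelta as HH:MM:SS (hours may exceed 24).
    if isinstance(value, timedelta):
        total = int(value.total_seconds())
        hours, rest = divmod(total, 3600)
        minutes, seconds = divmod(rest, 60)
        return '%02d:%02d:%02d' % (hours, minutes, seconds)
    return str(value)

print(format_time(time(12, 34, 56)))                                  # -> 12:34:56
print(format_timedelta(timedelta(hours=1, minutes=45, seconds=12)))   # -> 01:45:12
```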
if not self.in_task_section:
raise RuntimeError(
u'Observation zones have to be written in task section')
fields = [u'ObsZone=%d' % num]
if 'style' in kw:
fields.append(u'Style=%d' % kw['style'])
if 'radius' in kw:
fields.append(u'R1=' + self.format_distance(kw['radius']))
if 'angle' in kw:
fields.append(u'A1=' + self.format_angle(kw['angle']))
if 'radius2' in kw:
fields.append(u'R2=' + self.format_distance(kw['radius2']))
if 'angle2' in kw:
fields.append(u'A2=' + self.format_angle(kw['angle2']))
if 'angle12' in kw:
fields.append(u'A12=' + self.format_angle(kw['angle12']))
if 'line' in kw:
fields.append(u'Line=' + ('1' if kw['line'] else '0'))
self.write_fields(fields) | def write_observation_zone(self, num, **kw) | Write observation zone information for a taskpoint::
writer.write_observation_zone(
    num=1,
    style=1,
    radius=(2, 'km'),
)
# -> ObsZone=1,Style=1,R1=2km
:param num: consecutive number of a waypoint (``0``: Start)
:param style: direction (``0``: Fixed value, ``1``: Symmetrical, ``2``:
To next point, ``3``: To previous point, ``4``: To start point)
:param radius: radius 1 in meters or as ``(radius, unit)`` tuple
:param angle: angle 1 in degrees
:param radius2: radius 2 in meters or as ``(radius, unit)`` tuple
:param angle2: angle 2 in degrees
:param angle12: angle 12 in degrees
:param line: should be ``True`` if start or finish line | 2.114472 | 1.743648 | 1.212672 |
if self.oxd_id:
logger.info('Client is already registered. ID: %s', self.oxd_id)
return self.oxd_id
# add required params for the command
params = {
"authorization_redirect_uri": self.authorization_redirect_uri,
"oxd_rp_programming_language": "python",
}
# add other optional params if they exist in config
for op in self.opt_params:
if self.config.get("client", op):
params[op] = self.config.get("client", op)
for olp in self.opt_list_params:
if self.config.get("client", olp):
params[olp] = self.config.get("client", olp).split(",")
logger.debug("Sending command `register_site` with params %s", params)
response = self.msgr.request("register_site", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
self.oxd_id = response["data"]["oxd_id"]
self.config.set("oxd", "id", self.oxd_id)
logger.info("Site registration successful. Oxd ID: %s", self.oxd_id)
return self.oxd_id | def register_site(self) | Function to register the site and generate a unique ID for the site
Returns:
**string:** The ID of the site (also called client id) if the registration is successful
Raises:
**OxdServerError:** If the site registration fails. | 3.179627 | 3.063392 | 1.037943 |
params = {"oxd_id": self.oxd_id}
if scope and isinstance(scope, list):
params["scope"] = scope
if acr_values and isinstance(acr_values, list):
params["acr_values"] = acr_values
if prompt and isinstance(prompt, str):
params["prompt"] = prompt
if custom_params:
params["custom_parameters"] = custom_params
logger.debug("Sending command `get_authorization_url` with params %s",
params)
response = self.msgr.request("get_authorization_url", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
return response['data']['authorization_url'] | def get_authorization_url(self, acr_values=None, prompt=None, scope=None,
custom_params=None) | Function to get the authorization url that can be opened in the
browser for the user to provide authorization and authentication
Parameters:
* **acr_values (list, optional):** acr values in the order of priority
* **prompt (string, optional):** prompt=login is required if you want to force alter current user session (in case user is already logged in from site1 and site2 constructs authorization request and want to force alter current user session)
* **scope (list, optional):** scopes required; defaults to the scopes provided during site registration
* **custom_params (dict, optional):** Any custom arguments that the client wishes to pass on to the OP can be passed on as extra parameters to the function
Returns:
**string:** The authorization url that the user must access for authentication and authorization
Raises:
**OxdServerError:** If the oxd throws an error for any reason. | 2.551392 | 2.756165 | 0.925704 |
params = dict(oxd_id=self.oxd_id, code=code, state=state)
logger.debug("Sending command `get_tokens_by_code` with params %s",
params)
response = self.msgr.request("get_tokens_by_code", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
return response['data'] | def get_tokens_by_code(self, code, state) | Function to get access code for getting the user details from the
OP. It is called after the user authorizes by visiting the auth URL.
Parameters:
* **code (string):** code, parse from the callback URL querystring
* **state (string):** state value parsed from the callback URL
Returns:
**dict:** The tokens object with the following data structure.
Example response::
{
"access_token": "<token string>",
"expires_in": 3600,
"refresh_token": "<token string>",
"id_token": "<token string>",
"id_token_claims":
{
"iss": "https://server.example.com",
"sub": "24400320",
"aud": "s6BhdRkqt3",
"nonce": "n-0S6_WzA2Mj",
"exp": 1311281970,
"iat": 1311280970,
"at_hash": "MTIzNDU2Nzg5MDEyMzQ1Ng"
}
}
Raises:
**OxdServerError:** If oxd server throws an error OR if the params code
and state are of improper data type.
params = {
"oxd_id": self.oxd_id,
"refresh_token": refresh_token
}
if scope:
params['scope'] = scope
logger.debug("Sending command `get_access_token_by_refresh_token` with"
" params %s", params)
response = self.msgr.request("get_access_token_by_refresh_token",
**params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
return response['data'] | def get_access_token_by_refresh_token(self, refresh_token, scope=None) | Function that is used to get a new access token using refresh token
Parameters:
* **refresh_token (str):** refresh_token from get_tokens_by_code command
* **scope (list, optional):** a list of scopes. If not specified, access is granted with the scope provided in the previous request
Returns:
**dict:** the tokens with expiry time.
Example response::
{
"access_token":"SlAV32hkKG",
"expires_in":3600,
"refresh_token":"aaAV32hkKG1"
} | 3.023305 | 3.423936 | 0.882991 |
params = dict(oxd_id=self.oxd_id, access_token=access_token)
logger.debug("Sending command `get_user_info` with params %s",
params)
response = self.msgr.request("get_user_info", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
return response['data']['claims'] | def get_user_info(self, access_token) | Function to get the information about the user using the access code
obtained from the OP
Note:
Refer to the /.well-known/openid-configuration URL of your OP for
the complete list of the claims for different scopes.
Parameters:
* **access_token (string):** access token from the get_tokens_by_code function
Returns:
**dict:** The user data claims that are returned by the OP in format
Example response::
{
"sub": ["248289761001"],
"name": ["Jane Doe"],
"given_name": ["Jane"],
"family_name": ["Doe"],
"preferred_username": ["j.doe"],
"email": ["janedoe@example.com"],
"picture": ["http://example.com/janedoe/me.jpg"]
}
Raises:
**OxdServerError:** If the param access_token is empty OR if the oxd
Server returns an error. | 3.775229 | 3.914922 | 0.964318 |
params = {"oxd_id": self.oxd_id}
if id_token_hint:
params["id_token_hint"] = id_token_hint
if post_logout_redirect_uri:
params["post_logout_redirect_uri"] = post_logout_redirect_uri
if state:
params["state"] = state
if session_state:
params["session_state"] = session_state
logger.debug("Sending command `get_logout_uri` with params %s", params)
response = self.msgr.request("get_logout_uri", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
return response['data']['uri'] | def get_logout_uri(self, id_token_hint=None, post_logout_redirect_uri=None,
state=None, session_state=None) | Function to logout the user.
Parameters:
* **id_token_hint (string, optional):** oxd server will use last used ID Token, if not provided
* **post_logout_redirect_uri (string, optional):** URI to redirect, this uri would override the value given in the site-config
* **state (string, optional):** site state
* **session_state (string, optional):** session state
Returns:
**string:** The URI to which the user must be directed in order to
perform the logout | 2.253099 | 2.385897 | 0.944341 |
params = {
"oxd_id": self.oxd_id,
"authorization_redirect_uri": self.authorization_redirect_uri
}
if client_secret_expires_at:
params["client_secret_expires_at"] = client_secret_expires_at
for param in self.opt_params:
if self.config.get("client", param):
value = self.config.get("client", param)
params[param] = value
for param in self.opt_list_params:
if self.config.get("client", param):
value = self.config.get("client", param).split(",")
params[param] = value
logger.debug("Sending `update_site` with params %s",
params)
response = self.msgr.request("update_site", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
return True | def update_site(self, client_secret_expires_at=None) | Function to update the site's information with OpenID Provider.
This should be called after changing the values in the cfg file.
Parameters:
* **client_secret_expires_at (long, OPTIONAL):** milliseconds since 1970, can be used to extends client lifetime
Returns:
**bool:** ``True`` if the update succeeded
Raises:
**OxdServerError:** When the update fails and oxd server returns error | 2.732516 | 2.78376 | 0.981592 |
params = dict(oxd_id=self.oxd_id, resources=resources)
if overwrite:
params["overwrite"] = overwrite
logger.debug("Sending `uma_rs_protect` with params %s", params)
response = self.msgr.request("uma_rs_protect", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
return True | def uma_rs_protect(self, resources, overwrite=None) | Function to be used in a UMA Resource Server to protect resources.
Parameters:
* **resources (list):** list of resource to protect. See example at `here <https://gluu.org/docs/oxd/3.1.2/api/#uma-rs-protect-resources>`_
* **overwrite (bool):** If true, Allows to update existing resources
Returns:
**bool:** The status of the request. | 3.838717 | 4.001997 | 0.9592 |
params = {"oxd_id": self.oxd_id,
"rpt": rpt,
"path": path,
"http_method": http_method}
logger.debug("Sending command `uma_rs_check_access` with params %s",
params)
response = self.msgr.request("uma_rs_check_access", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
if response['data']['error'] == 'invalid_request':
raise InvalidRequestError(response['data'])
else:
raise OxdServerError(response['data'])
return response['data'] | def uma_rs_check_access(self, rpt, path, http_method) | Function to be used in a UMA Resource Server to check access.
Parameters:
* **rpt (string):** RPT or blank value if absent (not send by RP)
* **path (string):** Path of resource (e.g. for http://rs.com/phones, /phones should be passed)
* **http_method (string):** Http method of RP request (GET, POST, PUT, DELETE)
Returns:
**dict:** The access information received in the format below.
If the access is granted::
{ "access": "granted" }
If the access is denied with ticket response::
{
"access": "denied",
"www-authenticate_header": "UMA realm='example',
as_uri='https://as.example.com',
error='insufficient_scope',
ticket='016f84e8-f9b9-11e0-bd6f-0021cc6004de'",
"ticket": "016f84e8-f9b9-11e0-bd6f-0021cc6004de"
}
If the access is denied without ticket response::
{ "access": "denied" }
Raises:
``oxdpython.exceptions.InvalidRequestError`` if the resource is not
protected | 2.923757 | 3.067817 | 0.953041 |
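A resource server typically branches on the dict returned by ``uma_rs_check_access``. A minimal sketch, using the response shapes documented above (the mapping to HTTP status codes is an assumption, not part of the library):

```python
def handle_access(result):
    # Branch on a uma_rs_check_access response dict.
    if result.get('access') == 'granted':
        return 200                    # serve the resource
    if 'ticket' in result:
        return 401                    # denied with ticket: RP should get an RPT
    return 403                        # denied outright

print(handle_access({'access': 'granted'}))   # -> 200
```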
params = {
"oxd_id": self.oxd_id,
"ticket": ticket
}
if claim_token:
params["claim_token"] = claim_token
if claim_token_format:
params["claim_token_format"] = claim_token_format
if pct:
params["pct"] = pct
if rpt:
params["rpt"] = rpt
if scope:
params["scope"] = scope
if state:
params["state"] = state
logger.debug("Sending command `uma_rp_get_rpt` with params %s", params)
response = self.msgr.request("uma_rp_get_rpt", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'ok':
return response['data']
if response['data']['error'] == 'internal_error':
raise OxdServerError(response['data'])
if response['data']['error'] == 'need_info':
return response['data']
if response['data']['error'] == 'invalid_ticket':
raise InvalidTicketError(response['data']) | def uma_rp_get_rpt(self, ticket, claim_token=None, claim_token_format=None,
pct=None, rpt=None, scope=None, state=None) | Function to be used by a UMA Requesting Party to get RPT token.
Parameters:
* **ticket (str, REQUIRED):** ticket
* **claim_token (str, OPTIONAL):** claim token
* **claim_token_format (str, OPTIONAL):** claim token format
* **pct (str, OPTIONAL):** pct
* **rpt (str, OPTIONAL):** rpt
* **scope (list, OPTIONAL):** scope
* **state (str, OPTIONAL):** state that is returned from `uma_rp_get_claims_gathering_url` command
Returns:
**dict:** The response from the OP.
Success response::
{
"status":"ok",
"data":{
"access_token":"SSJHBSUSSJHVhjsgvhsgvshgsv",
"token_type":"Bearer",
"pct":"c2F2ZWRjb25zZW50",
"upgraded":true
}
}
NeedInfoError response::
{
"error":"need_info",
"ticket":"ZXJyb3JfZGV0YWlscw==",
"required_claims":[
{
"claim_token_format":[
"http://openid.net/specs/openid-connect-core-1_0.html#IDToken"
],
"claim_type":"urn:oid:0.9.2342.19200300.100.1.3",
"friendly_name":"email",
"issuer":["https://example.com/idp"],
"name":"email23423453ou453"
}
],
"redirect_user":"https://as.example.com/rqp_claims?id=2346576421"
}
Raises:
**OxdServerError:** When oxd-server reports a generic internal_error
**InvalidTicketError:** When the oxd server returns a "invalid_ticket" error | 2.086625 | 1.890133 | 1.103957 |
params = {
'oxd_id': self.oxd_id,
'claims_redirect_uri': self.config.get('client',
'claims_redirect_uri'),
'ticket': ticket
}
logger.debug("Sending command `uma_rp_get_claims_gathering_url` with "
"params %s", params)
response = self.msgr.request("uma_rp_get_claims_gathering_url",
**params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
return response['data']['url'] | def uma_rp_get_claims_gathering_url(self, ticket) | UMA RP function to get the claims gathering URL.
Parameters:
* **ticket (str):** ticket to pass to the auth server. for 90% of the cases, this will be obtained from 'need_info' error of get_rpt
Returns:
**string** specifying the claims gathering url | 3.323818 | 3.688902 | 0.901032 |
# add required params for the command
params = {
"authorization_redirect_uri": self.authorization_redirect_uri,
"oxd_rp_programming_language": "python",
}
# add other optional params if they exist in config
for op in self.opt_params:
if self.config.get("client", op):
params[op] = self.config.get("client", op)
for olp in self.opt_list_params:
if self.config.get("client", olp):
params[olp] = self.config.get("client", olp).split(",")
logger.debug("Sending command `setup_client` with params %s", params)
response = self.msgr.request("setup_client", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
data = response["data"]
self.oxd_id = data["oxd_id"]
self.config.set("oxd", "id", data["oxd_id"])
self.config.set("client", "client_id", data["client_id"])
self.config.set("client", "client_secret", data["client_secret"])
if data["client_registration_access_token"]:
self.config.set("client", "client_registration_access_token",
data["client_registration_access_token"])
if data["client_registration_client_uri"]:
self.config.set("client", "client_registration_client_uri",
data["client_registration_client_uri"])
self.config.set("client", "client_id_issued_at",
str(data["client_id_issued_at"]))
return data | def setup_client(self) | The command registers the client for communication protection. This
will be used to obtain an access token via the Get Client Token
command. The access token will be passed as a protection_access_token
parameter to other commands.
Note:
If you are using the oxd-https-extension, you must set up the client
Returns:
**dict:** the client setup information
Example response::
{
"oxd_id":"6F9619FF-8B86-D011-B42D-00CF4FC964FF",
"op_host": "<op host>",
"client_id":"<client id>",
"client_secret":"<client secret>",
"client_registration_access_token":"<Client registration access token>",
"client_registration_client_uri":"<URI of client registration>",
"client_id_issued_at":"<client_id issued at>",
"client_secret_expires_at":"<client_secret expires at>"
} | 2.467308 | 2.31216 | 1.0671 |
# override the values from config
params = dict(client_id=client_id, client_secret=client_secret,
op_host=op_host)
if op_discovery_path:
params['op_discovery_path'] = op_discovery_path
if scope and isinstance(scope, list):
params['scope'] = scope
# If client id and secret aren't passed, then just read from the config
if not client_id:
params["client_id"] = self.config.get("client", "client_id")
if not client_secret:
params["client_secret"] = self.config.get("client",
"client_secret")
if not op_host:
params["op_host"] = self.config.get("client", "op_host")
logger.debug("Sending command `get_client_token` with params %s",
params)
response = self.msgr.request("get_client_token", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
self.config.set("client", "protection_access_token",
response["data"]["access_token"])
self.msgr.access_token = response["data"]["access_token"]
# Setup a new timer thread to refresh the access token.
if auto_update:
interval = int(response['data']['expires_in'])
args = [client_id, client_secret, op_host, op_discovery_path,
scope, auto_update]
logger.info("Setting up a threading.Timer to get_client_token in "
"%s seconds", interval)
t = Timer(interval, self.get_client_token, args)
t.start()
return response['data'] | def get_client_token(self, client_id=None, client_secret=None,
op_host=None, op_discovery_path=None, scope=None,
auto_update=True) | Function to get the client token which can be used for protection in
all future communication. The access token received by this method is
stored in the config file and used as the `protection_access_token`
for all subsequent calls to oxd.
Parameters:
* **client_id (str, optional):** client id from OP or from previous `setup_client` call
* **client_secret (str, optional):** client secret from the OP or from `setup_client` call
* **op_host (str, optional):** OP Host URL, default is read from the site configuration file
* **op_discovery_path (str, optional):** op discovery path provided by OP
* **scope (list, optional):** scopes of access required, default values are obtained from the config file
* **auto_update(bool, optional):** automatically get a new access_token when the current one expires. If this is set to False, then the application must call `get_client_token` when the token expires to update the client with a new access token.
Returns:
**dict:** The client token and the refresh token in the form.
Example response ::
{
"access_token":"6F9619FF-8B86-D011-B42D-00CF4FC964FF",
"expires_in": 399,
"refresh_token": "fr459f",
"scope": "openid"
} | 2.692165 | 2.501926 | 1.076037 |
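The ``auto_update`` path relies on :class:`threading.Timer`, which runs a callable once after a delay. A minimal sketch (the 0.01 s interval and the ``refresh_token`` stub stand in for the real ``expires_in`` value and ``get_client_token`` call):

```python
from threading import Timer

refreshed = []

def refresh_token():
    # Stub for the re-issued get_client_token call.
    refreshed.append(True)

# Schedule a single run after 0.01 seconds, as get_client_token does
# with the token's expires_in value.
t = Timer(0.01, refresh_token)
t.start()
t.join()   # Timer subclasses Thread, so we can wait for it
print(refreshed)   # -> [True]
```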
params = dict(oxd_id=self.oxd_id)
logger.debug("Sending command `remove_site` with params %s",
params)
response = self.msgr.request("remove_site", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
return response['data']['oxd_id'] | def remove_site(self) | Cleans up the data for the site.
Returns:
oxd_id if the process was completed without error
Raises:
OxdServerError if there was an issue with the operation | 4.6144 | 4.046369 | 1.14038 |
params = dict(oxd_id=self.oxd_id)
params['rpt'] = rpt
logger.debug("Sending command `introspect_rpt` with params %s",
params)
response = self.msgr.request("introspect_rpt", **params)
logger.debug("Received response: %s", response)
if response['status'] == 'error':
raise OxdServerError(response['data'])
return response['data'] | def introspect_rpt(self, rpt) | Gives information about an RPT.
Parameters:
* **rpt (str, required):** rpt from uma_rp_get_rpt function
Returns:
**dict:** The information about the RPT.
Example response ::
{
"active": true,
"exp": 1256953732,
"iat": 1256912345,
"nbf": null,
"permissions": [{
"resource_id": "112210f47de98100",
"resource_scopes": [
"view",
"http://photoz.example.com/dev/actions/print"
],
"exp": 1256953732
}],
"client_id": "@6F96!19756yCF4F!C964FF",
"sub": "John Doe",
"aud": "@6F96!19756yCF4F!C964FF",
"iss": "https://idp.example.com",
"jti": null
}
Raises:
OxdServerError if there was an issue with the operation | 4.413846 | 4.360898 | 1.012141 |
self.convert_timedelta(kw, 'aat_min_time')
self.convert_time(kw, 'start_open_time')
self.convert_time(kw, 'start_close_time')
self.convert_bool(kw, 'fai_finish')
self.convert_bool(kw, 'start_requires_arm')
return self.write_tag_with_content('Task', **kw) | def write_task(self, **kw) | Write the main task to the file::
with writer.write_task(type=TaskType.RACING):
...
# <Task type="RT"> ... </Task>
Inside the with clause the :meth:`~aerofiles.xcsoar.Writer.write_point`
method should be used to write the individual task points. All
parameters are optional.
:param type: type of the task (one of the constants in
:class:`~aerofiles.xcsoar.constants.TaskType`)
:param start_requires_arm: ``True``: start has to be armed manually,
``False``: task will be started automatically
:param start_max_height: maximum altitude when the task is started
(in m)
:param start_max_height_ref: altitude reference of
``start_max_height`` (one of the constants in
:class:`~aerofiles.xcsoar.constants.AltitudeReference`)
:param start_max_speed: maximum speed when the task is started (in m/s)
:param start_open_time: time that the start line opens as
:class:`datetime.time`
:param start_close_time: time that the start line is closing as
:class:`datetime.time`
:param aat_min_time: AAT time as :class:`datetime.timedelta`
:param finish_min_height: minimum altitude when the task is finished
(in m)
:param finish_min_height_ref: altitude reference of
``finish_min_height`` (one of the constants in
:class:`~aerofiles.xcsoar.constants.AltitudeReference`)
:param fai_finish: ``True``: FAI finish rules apply | 5.890191 | 3.02958 | 1.944227 |
assert 'type' in kw
self.convert_bool(kw, 'score_exit')
return self.write_tag_with_content('Point', **kw) | def write_point(self, **kw) | Write a task point to the file::
with writer.write_point(type=PointType.TURN):
writer.write_waypoint(...)
writer.write_observation_zone(...)
# <Point type="Turn"> ... </Point>
Inside the with clause the
:meth:`~aerofiles.xcsoar.Writer.write_waypoint` and
:meth:`~aerofiles.xcsoar.Writer.write_observation_zone` methods must be
used to write the details of the task point.
:param type: type of the task point (one of the constants in
:class:`~aerofiles.xcsoar.constants.PointType`) | 16.180368 | 14.493114 | 1.116418 |
assert 'name' in kw
assert 'latitude' in kw
assert 'longitude' in kw
location_kw = {
'latitude': kw['latitude'],
'longitude': kw['longitude'],
}
del kw['latitude']
del kw['longitude']
with self.write_tag_with_content('Waypoint', **kw):
self.write_tag('Location', **location_kw) | def write_waypoint(self, **kw) | Write a waypoint to the file::
writer.write_waypoint(
name='Meiersberg',
latitude=51.4,
longitude=7.1
)
# <Waypoint name="Meiersberg">
# <Location latitude="51.4" longitude="7.1"/>
# </Waypoint>
:param name: name of the waypoint
:param latitude: latitude of the waypoint (in WGS84)
:param longitude: longitude of the waypoint (in WGS84)
:param altitude: altitude of the waypoint (in m, optional)
:param id: internal id of the waypoint (optional)
:param comment: extended description of the waypoint (optional) | 3.260821 | 2.858322 | 1.140817 |
assert 'type' in kw
if kw['type'] == ObservationZoneType.LINE:
assert 'length' in kw
elif kw['type'] == ObservationZoneType.CYLINDER:
assert 'radius' in kw
elif kw['type'] == ObservationZoneType.SECTOR:
assert 'radius' in kw
assert 'start_radial' in kw
assert 'end_radial' in kw
elif kw['type'] == ObservationZoneType.SYMMETRIC_QUADRANT:
assert 'radius' in kw
elif kw['type'] == ObservationZoneType.CUSTOM_KEYHOLE:
assert 'radius' in kw
assert 'inner_radius' in kw
assert 'angle' in kw
self.write_tag('ObservationZone', **kw) | def write_observation_zone(self, **kw) | Write an observation zone declaration to the file::
writer.write_observation_zone(
type=ObservationZoneType.CYLINDER,
radius=30000,
)
# <ObservationZone type="Cylinder" radius="30000"/>
The required parameters depend on the type parameter. Different
observation zone types require different parameters.
:param type: observation zone type (one of the constants in
:class:`~aerofiles.xcsoar.constants.ObservationZoneType`)
:param length: length of the line
(only used with type
:const:`~aerofiles.xcsoar.constants.ObservationZoneType.LINE`)
:param radius: (outer) radius of the observation zone
(used with types
:const:`~aerofiles.xcsoar.constants.ObservationZoneType.CYLINDER`,
:const:`~aerofiles.xcsoar.constants.ObservationZoneType.SECTOR`,
:const:`~aerofiles.xcsoar.constants.ObservationZoneType.SYMMETRIC_QUADRANT` and
:const:`~aerofiles.xcsoar.constants.ObservationZoneType.CUSTOM_KEYHOLE`)
:param inner_radius: inner radius of the observation zone
(only used with type
:const:`~aerofiles.xcsoar.constants.ObservationZoneType.CUSTOM_KEYHOLE`)
:param angle: angle of the observation zone
(only used with type
:const:`~aerofiles.xcsoar.constants.ObservationZoneType.CUSTOM_KEYHOLE`)
:param start_radial: start radial of the observation zone
(only used with type
:const:`~aerofiles.xcsoar.constants.ObservationZoneType.SECTOR`)
:param end_radial: end radial of the observation zone
(only used with type
:const:`~aerofiles.xcsoar.constants.ObservationZoneType.SECTOR`) | 2.690745 | 1.494052 | 1.800971 |
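The per-type assertions above amount to a table of required keyword arguments. A data-driven sketch of the same check (string keys stand in for the ``ObservationZoneType`` constants):

```python
# Required parameters per observation zone type, mirroring the asserts.
REQUIRED = {
    'Line': {'length'},
    'Cylinder': {'radius'},
    'Sector': {'radius', 'start_radial', 'end_radial'},
    'SymmetricQuadrant': {'radius'},
    'CustomKeyhole': {'radius', 'inner_radius', 'angle'},
}

def missing_params(zone_type, kw):
    # Return the required parameters absent from kw for this zone type.
    return REQUIRED.get(zone_type, set()) - set(kw)

print(missing_params('Sector', {'radius': 30000}))
```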
return [field.name for field in self._meta.fields if field.get_internal_type() == 'PlaceholderField'] | def get_placeholder_field_names(self) | Returns a list with the names of all PlaceholderFields. | 4.29345 | 3.181447 | 1.349528 |
resp = make_response('Logging Out')
resp.set_cookie('sub', 'null', expires=0)
resp.set_cookie('session_id', 'null', expires=0)
return resp | def logout_callback() | Route called by the OpenID provider when user logs out.
Clear the cookies here. | 3.755299 | 3.50305 | 1.072008 |
try:
logger.debug("Socket connecting to %s:%s", self.host, self.port)
self.sock.connect((self.host, self.port))
except socket.error as e:
logger.exception("socket error %s", e)
logger.error("Closing socket and recreating a new one.")
self.sock.close()
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.connect((self.host, self.port)) | def __connect(self) | A helper function to make connection. | 2.130497 | 2.035966 | 1.04643 |
cmd = json.dumps(command)
cmd = "{:04d}".format(len(cmd)) + cmd
msg_length = len(cmd)
# make the first time connection
if not self.firstDone:
logger.info('Initiating first time socket connection.')
self.__connect()
self.firstDone = True
# Send the message the to the server
totalsent = 0
while totalsent < msg_length:
try:
logger.debug("Sending: %s", cmd[totalsent:])
sent = self.sock.send(cmd[totalsent:])
totalsent = totalsent + sent
except socket.error as e:
logger.exception("Reconnecting due to socket error. %s", e)
self.__connect()
logger.info("Reconnected to socket.")
# Check and receive the response if available
parts = []
resp_length = 0
received = 0
done = False
while not done:
part = self.sock.recv(1024)
if part == "":
logger.error("Socket connection broken, read empty.")
self.__connect()
logger.info("Reconnected to socket.")
# Find out the length of the response
if len(part) > 0 and resp_length == 0:
resp_length = int(part[0:4])
part = part[4:]
# Set Done flag
received = received + len(part)
if received >= resp_length:
done = True
parts.append(part)
response = "".join(parts)
# return the JSON as a namedtuple object
return json.loads(response) | def send(self, command) | send function sends the command to the oxd server and recieves the
response.
Parameters:
* **command (dict):** Dict representation of the JSON command string
Returns:
**response (dict):** The JSON response from the oxd Server as a dict | 3.340259 | 3.420592 | 0.976515 |
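The wire format used by ``send`` is a zero-padded four-digit decimal length prefix followed by the JSON body. The framing step can be sketched in isolation:

```python
import json

def frame(command):
    # Serialize the command and prepend its length as a zero-padded
    # four-digit decimal string, matching the protocol in `send`.
    body = json.dumps(command)
    return '{:04d}'.format(len(body)) + body

msg = frame({'command': 'get_user_info'})
print(msg[:4])   # the four-digit length prefix
```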
payload = {
"command": command,
"params": dict()
}
for item in kwargs.keys():
payload["params"][item] = kwargs.get(item)
if self.access_token:
payload["params"]["protection_access_token"] = self.access_token
return self.send(payload) | def request(self, command, **kwargs) | Function that builds the request and returns the response from
oxd-server
Parameters:
* **command (str):** The command that has to be sent to the oxd-server
* ** **kwargs:** The parameters that should accompany the request
Returns:
**dict:** the returned response from oxd-server as a dictionary | 3.686864 | 4.149626 | 0.888481 |
url = self.base + command.replace("_", "-")
req = urllib2.Request(url, json.dumps(kwargs))
req.add_header("User-Agent", "oxdpython/%s" % __version__)
req.add_header("Content-type", "application/json; charset=UTF-8")
# add the protection token if available
if self.access_token:
req.add_header("Authorization",
"Bearer {0}".format(self.access_token))
gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
resp = urllib2.urlopen(req, context=gcontext)
return json.load(resp) | def request(self, command, **kwargs) | Function that builds the request and returns the response
Parameters:
* **command (str):** The command that has to be sent to the oxd-server
* ** **kwargs:** The parameters that should accompany the request
Returns:
**dict:** the returned response from oxd-server as a dictionary | 2.961978 | 2.847313 | 1.040271 |
'''Example code, showing the instantiation of a ChebiEntity, a call to
get_name(), get_outgoings() and the calling of a number of methods of the
returned Relation objects.'''
chebi_entity = ChebiEntity('CHEBI:15903')
print(chebi_entity.get_name())
for outgoing in chebi_entity.get_outgoings():
target_chebi_entity = ChebiEntity(outgoing.get_target_chebi_id())
print(outgoing.get_type() + '\t' + target_chebi_entity.get_name()) | def main() | Example code, showing the instantiation of a ChebiEntity, a call to
get_name(), get_outgoings() and the calling of a number of methods of the
returned Relation objects. | 4.996578 | 2.030106 | 2.46124 |
'''Returns parent id'''
parent_id = parsers.get_parent_id(self.__chebi_id)
return None if math.isnan(parent_id) else 'CHEBI:' + str(parent_id) | def get_parent_id(self) | Returns parent id | 7.186992 | 7.155188 | 1.004445 |
'''Returns mass'''
mass = parsers.get_mass(self.__chebi_id)
if math.isnan(mass):
mass = parsers.get_mass(self.get_parent_id())
if math.isnan(mass):
for parent_or_child_id in self.__get_all_ids():
mass = parsers.get_mass(parent_or_child_id)
if not math.isnan(mass):
break
return mass | def get_mass(self) | Returns mass | 4.033407 | 4.098724 | 0.984064 |
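The same three-step cascade (entity itself, then parent, then every related id) recurs in `get_mass`, `get_charge`, `get_name`, and the other getters below. A hedged sketch of that pattern factored into one helper; the helper name and the fake getter are illustrative, not the library's actual code:

```python
import math

def lookup_with_fallback(getter, chebi_id, parent_id, related_ids):
    # Try the entity itself, then its parent, then every related id,
    # mirroring the cascade in get_mass() (NaN marks a missing value).
    value = getter(chebi_id)
    if math.isnan(value):
        value = getter(parent_id)
    if math.isnan(value):
        for other_id in related_ids:
            value = getter(other_id)
            if not math.isnan(value):
                break
    return value

# Fake getter standing in for parsers.get_mass:
masses = {'CHEBI:2': 18.01}
mass = lookup_with_fallback(lambda cid: masses.get(cid, float('nan')),
                            'CHEBI:1', 'CHEBI:0', ['CHEBI:2'])
```

In the real class the parent id may be `None`, so a production helper would need to guard that case before calling the getter.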
'''Returns charge'''
charge = parsers.get_charge(self.__chebi_id)
if math.isnan(charge):
charge = parsers.get_charge(self.get_parent_id())
if math.isnan(charge):
for parent_or_child_id in self.__get_all_ids():
charge = parsers.get_charge(parent_or_child_id)
if not math.isnan(charge):
break
return charge | def get_charge(self) | Returns charge | 4.224051 | 4.259617 | 0.99165 |
'''Returns name'''
name = parsers.get_name(self.__chebi_id)
if name is None:
name = parsers.get_name(self.get_parent_id())
if name is None:
for parent_or_child_id in self.__get_all_ids():
name = parsers.get_name(parent_or_child_id)
if name is not None:
break
return name | def get_name(self) | Returns name | 4.026978 | 4.077095 | 0.987708 |
'''Returns definition'''
definition = parsers.get_definition(self.__chebi_id)
if definition is None:
definition = parsers.get_definition(self.get_parent_id())
if definition is None:
for parent_or_child_id in self.__get_all_ids():
definition = parsers.get_definition(parent_or_child_id)
if definition is not None:
break
return definition | def get_definition(self) | Returns definition | 4.015901 | 4.028841 | 0.996788 |
'''Returns created by'''
created_by = parsers.get_created_by(self.__chebi_id)
if created_by is None:
created_by = parsers.get_created_by(self.get_parent_id())
if created_by is None:
for parent_or_child_id in self.__get_all_ids():
created_by = parsers.get_created_by(parent_or_child_id)
if created_by is not None:
break
return created_by | def get_created_by(self) | Returns created by | 3.418134 | 3.319546 | 1.029699 |
'''Returns inchi'''
inchi = parsers.get_inchi(self.__chebi_id)
if inchi is None:
inchi = parsers.get_inchi(self.get_parent_id())
if inchi is None:
for parent_or_child_id in self.__get_all_ids():
inchi = parsers.get_inchi(parent_or_child_id)
if inchi is not None:
break
return inchi | def get_inchi(self) | Returns inchi | 3.290353 | 3.303436 | 0.99604 |
'''Returns mol'''
structure = parsers.get_mol(self.__chebi_id)
if structure is None:
structure = parsers.get_mol(self.get_parent_id())
if structure is None:
for parent_or_child_id in self.__get_all_ids():
structure = parsers.get_mol(parent_or_child_id)
if structure is not None:
break
return None if structure is None else structure.get_structure() | def get_mol(self) | Returns mol | 4.269081 | 4.238369 | 1.007246 |
'''Returns mol filename'''
mol_filename = parsers.get_mol_filename(self.__chebi_id)
if mol_filename is None:
mol_filename = parsers.get_mol_filename(self.get_parent_id())
if mol_filename is None:
for parent_or_child_id in self.__get_all_ids():
mol_filename = \
parsers.get_mol_filename(parent_or_child_id)
if mol_filename is not None:
break
return mol_filename | def get_mol_filename(self) | Returns mol filename | 3.279481 | 3.226799 | 1.016327 |
'''Returns all ids'''
if self.__all_ids is None:
parent_id = parsers.get_parent_id(self.__chebi_id)
self.__all_ids = parsers.get_all_ids(self.__chebi_id
if math.isnan(parent_id)
else parent_id)
if self.__all_ids is None:
self.__all_ids = []
return self.__all_ids | def __get_all_ids(self) | Returns all ids | 3.541501 | 3.537751 | 1.00106 |
for con in self.conditions:
if http_method in con['httpMethods']:
if isinstance(scope, list):
con['scopes'] = scope
elif isinstance(scope, (str, unicode)):
con['scopes'].append(scope)
return
# If not present, then create a new condition
if isinstance(scope, list):
self.conditions.append({'httpMethods': [http_method],
'scopes': scope})
elif isinstance(scope, (str, unicode)):
self.conditions.append({'httpMethods': [http_method],
'scopes': [scope]}) | def set_scope(self, http_method, scope) | Set a scope condition for the resource for a http_method
Parameters:
* **http_method (str):** HTTP method like GET, POST, PUT, DELETE
* **scope (str, list):** the scope of access control as str if single, or as a list of strings if multiple scopes are to be set | 2.290137 | 2.259347 | 1.013628 |
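A small self-contained demo of the `conditions` structure that `set_scope` maintains, assuming only that the resource carries a `conditions` list; `DemoResource` and the scope URLs are illustrative (Python 3, `str` only):

```python
# Illustrative stand-in for the Resource class, showing how repeated
# set_scope() calls for the same HTTP method merge into one condition.
class DemoResource(object):
    def __init__(self):
        self.conditions = []

    def set_scope(self, http_method, scope):
        # Reuse an existing condition for this method if one exists.
        for con in self.conditions:
            if http_method in con['httpMethods']:
                if isinstance(scope, list):
                    con['scopes'] = scope
                else:
                    con['scopes'].append(scope)
                return
        scopes = scope if isinstance(scope, list) else [scope]
        self.conditions.append({'httpMethods': [http_method],
                                'scopes': scopes})

res = DemoResource()
res.set_scope('GET', 'https://example.com/scopes/view')
res.set_scope('GET', 'https://example.com/scopes/all')
```

Both calls target `GET`, so the second appends to the existing condition instead of creating a new one.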
if not isinstance(path, (str, unicode)):
raise TypeError('The value passed for parameter path is not a str'
' or unicode')
resource = Resource(path)
self.resources[path] = resource
return resource | def add(self, path) | Adds a new resource with the given path to the resource set.
Parameters:
* **path (str, unicode):** path of the resource to be protected
Raises:
TypeError when the path is not a string or a unicode string | 4.820497 | 4.428902 | 1.088418 |
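A minimal Python 3 sketch of the `add()` contract above; the Py2 `unicode` branch is dropped, and `Resource`/`ResourceSet` here are simplified stand-ins for the real classes:

```python
# Demo of add(): valid string paths are stored and returned, while
# non-string paths raise TypeError as the docstring promises.
class Resource(object):
    def __init__(self, path):
        self.path = path

class ResourceSet(object):
    def __init__(self):
        self.resources = {}

    def add(self, path):
        if not isinstance(path, str):
            raise TypeError('The value passed for parameter path is not'
                            ' a str')
        resource = Resource(path)
        self.resources[path] = resource
        return resource

rs = ResourceSet()
photos = rs.add('/photos')
try:
    rs.add(123)
    rejected = False
except TypeError:
    rejected = True
```

Returning the new `Resource` lets callers chain straight into `set_scope` on the object they just registered.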