code string | signature string | docstring string | loss_without_docstring float64 | loss_with_docstring float64 | factor float64 |
|---|---|---|---|---|---|
dont_care = 'image'
chan = ChannelResource(
name='', collection_name=collection_name,
experiment_name=experiment_name, type=dont_care)
return self._list_resource(chan) | def list_channels(self, collection_name, experiment_name) | List all channels belonging to the named experiment that is part
of the named collection.
Args:
collection_name (string): Name of the parent collection.
experiment_name (string): Name of the parent experiment.
Returns:
(list)
Raises:
requests.HTTPError on failure. | 7.19036 | 10.75777 | 0.668388 |
chan = ChannelResource(chan_name, coll_name, exp_name)
return self.get_project(chan) | def get_channel(self, chan_name, coll_name, exp_name) | Helper that gets a fully initialized ChannelResource for an *existing* channel.
Args:
chan_name (str): Name of channel.
coll_name (str): Name of channel's collection.
exp_name (str): Name of channel's experiment.
Returns:
(intern.resource.boss.ChannelResource) | 5.637154 | 7.599108 | 0.741818 |
self.project_service.set_auth(self._token_project)
return self.project_service.create(resource) | def create_project(self, resource) | Create the entity described by the given resource.
Args:
resource (intern.resource.boss.BossResource)
Returns:
(intern.resource.boss.BossResource): Returns resource of type
requested on success.
Raises:
requests.HTTPError on failure. | 7.066056 | 9.880226 | 0.715172 |
self.project_service.set_auth(self._token_project)
return self.project_service.get(resource) | def get_project(self, resource) | Get attributes of the data model object named by the given resource.
Args:
resource (intern.resource.boss.BossResource): resource.name as well
as any parents must be identified to succeed.
Returns:
(intern.resource.boss.BossResource): Returns resource of type
requested on success.
Raises:
requests.HTTPError on failure. | 6.950017 | 10.264949 | 0.677063 |
self.project_service.set_auth(self._token_project)
return self.project_service.update(resource_name, resource) | def update_project(self, resource_name, resource) | Updates an entity in the data model using the given resource.
Args:
resource_name (string): Current name of the resource (in case the
resource is getting its name changed).
resource (intern.resource.boss.BossResource): New attributes for
the resource.
Returns:
(intern.resource.boss.BossResource): Returns updated resource of
given type on success.
Raises:
requests.HTTPError on failure. | 5.595921 | 8.242013 | 0.678951 |
self.project_service.set_auth(self._token_project)
self.project_service.delete(resource) | def delete_project(self, resource) | Deletes the entity described by the given resource.
Args:
resource (intern.resource.boss.BossResource)
Raises:
requests.HTTPError on a failure. | 7.532389 | 9.372861 | 0.803638 |
self.metadata_service.set_auth(self._token_metadata)
return self.metadata_service.list(resource) | def list_metadata(self, resource) | List all keys associated with the given resource.
Args:
resource (intern.resource.boss.BossResource)
Returns:
(list)
Raises:
requests.HTTPError on a failure. | 7.398228 | 11.462329 | 0.645438 |
self.metadata_service.set_auth(self._token_metadata)
self.metadata_service.create(resource, keys_vals) | def create_metadata(self, resource, keys_vals) | Associates new key-value pairs with the given resource.
Will attempt to add all key-value pairs even if some fail.
Args:
resource (intern.resource.boss.BossResource)
keys_vals (dictionary): Collection of key-value pairs to assign to
given resource.
Raises:
HTTPErrorList on failure. | 5.879491 | 7.804356 | 0.75336 |
self.metadata_service.set_auth(self._token_metadata)
return self.metadata_service.get(resource, keys) | def get_metadata(self, resource, keys) | Gets the values for given keys associated with the given resource.
Args:
resource (intern.resource.boss.BossResource)
keys (list)
Returns:
(dictionary)
Raises:
HTTPErrorList on failure. | 6.255074 | 9.323138 | 0.670919 |
self.metadata_service.set_auth(self._token_metadata)
self.metadata_service.update(resource, keys_vals) | def update_metadata(self, resource, keys_vals) | Updates key-value pairs associated with the given resource.
Will attempt to update all key-value pairs even if some fail.
Keys must already exist.
Args:
resource (intern.resource.boss.BossResource)
keys_vals (dictionary): Collection of key-value pairs to update on
the given resource.
Raises:
HTTPErrorList on failure. | 5.852152 | 7.718175 | 0.75823 |
self.metadata_service.set_auth(self._token_metadata)
self.metadata_service.delete(resource, keys) | def delete_metadata(self, resource, keys) | Deletes the given key-value pairs associated with the given resource.
Will attempt to delete all key-value pairs even if some fail.
Args:
resource (intern.resource.boss.BossResource)
keys (list)
Raises:
HTTPErrorList on failure. | 6.567978 | 8.957031 | 0.733276 |
return self.get_channel(t[2], t[0], t[1])
raise ValueError("Cannot parse URI " + uri + ".") | def parse_bossURI(self, uri): # type: (str) -> Resource
t = uri.split("://")[1].split("/")
if len(t) == 3: | Parse a bossDB URI and handle malformed URIs.
Arguments:
uri (str): URI of the form bossdb://<collection>/<experiment>/<channel>
Returns:
Resource | 7.353967 | 9.193282 | 0.799928 |
if no_cache is not None:
warnings.warn("The no_cache option has been deprecated and will not be used in future versions of intern.")
warnings.warn("Please use `from intern.service.boss.volume import CacheMode` and pass access_mode=CacheMode.[cache,no_cache,raw] instead.")
if no_cache and access_mode != CacheMode.no_cache:
warnings.warn("Both no_cache and access_mode were used; please use access_mode only, as no_cache has been deprecated.")
warnings.warn("Your request will be made using the default mode no_cache.")
access_mode = CacheMode.no_cache
if no_cache:
access_mode = CacheMode.no_cache
elif no_cache is False:
access_mode = CacheMode.cache
return self._volume.get_cutout(resource, resolution, x_range, y_range, z_range, time_range, id_list, access_mode, **kwargs) | def get_cutout(self, resource, resolution, x_range, y_range, z_range, time_range=None, id_list=[], no_cache=None, access_mode=CacheMode.no_cache, **kwargs) | Get a cutout from the volume service.
Note that access_mode=no_cache is desirable when reading large amounts of
data at once. In these cases, the data is not first read into the
cache, but instead, is sent directly from the data store to the
requester.
Args:
resource (intern.resource.boss.resource.ChannelResource | str): Channel or layer Resource. If a
string is provided instead, BossRemote.parse_bossURI is called on a URI-formatted
string of the form `bossdb://collection/experiment/channel`.
resolution (int): 0 indicates native resolution.
x_range (list[int]): x range such as [10, 20] which means x>=10 and x<20.
y_range (list[int]): y range such as [10, 20] which means y>=10 and y<20.
z_range (list[int]): z range such as [10, 20] which means z>=10 and z<20.
time_range (optional [list[int]]): time range such as [30, 40] which means t>=30 and t<40.
id_list (optional [list[int]]): list of object ids to filter the cutout by.
no_cache (optional [boolean or None]): Deprecated flag for enabling or disabling
the cache; access_mode should be used instead.
access_mode (optional [Enum]): Identifies one of three cache access options:
cache = Will check both cache and for dirty keys
no_cache = Will skip cache check but check for dirty keys
raw = Will skip both the cache and dirty keys check
TODO: Add mode to documentation
Returns:
(numpy.array): A 3D or 4D (time) numpy matrix in (time)ZYX order.
Raises:
requests.HTTPError on error. | 4.139916 | 3.666432 | 1.12914 |
exp = ExperimentResource(exp_name, coll_name)
return self.get_project(exp) | def get_experiment(self, coll_name, exp_name) | Convenience method that gets experiment resource.
Args:
coll_name (str): Collection name
exp_name (str): Experiment name
Returns:
(ExperimentResource) | 8.051491 | 11.259634 | 0.715076 |
return self._volume.get_neuroglancer_link(resource, resolution, x_range, y_range, z_range, **kwargs) | def get_neuroglancer_link(self, resource, resolution, x_range, y_range, z_range, **kwargs) | Get a neuroglancer link for the specified cutout, served from the host configured during the remote configuration step.
Args:
resource (intern.resource.Resource): Resource compatible with cutout operations.
resolution (int): 0 indicates native resolution.
x_range (list[int]): x range such as [10, 20] which means x>=10 and x<20.
y_range (list[int]): y range such as [10, 20] which means y>=10 and y<20.
z_range (list[int]): z range such as [10, 20] which means z>=10 and z<20.
Returns:
(string): Return neuroglancer link.
Raises:
RuntimeError when given invalid resource.
Other exceptions may be raised depending on the volume service's implementation. | 2.729626 | 3.172363 | 0.860439 |
if not isinstance(model, Model):
raise TypeError("Expected a Model, not %r." % model)
return model._model_views[:] | def views(model: "Model") -> list | Return a copy of the list of a model's views.
Model views are added by calling :func:`view` on a model. | 7.47624 | 7.873293 | 0.94957 |
if not isinstance(model, Model):
raise TypeError("Expected a Model, not %r." % model)
def setup(function: Callable):
model._model_views.append(function)
return function
if functions:
for f in functions:
setup(f)
else:
return setup | def view(model: "Model", *functions: Callable) -> Optional[Callable] | A decorator for registering a callback to a model.
Parameters:
model: the model object whose changes the callback should respond to.
Examples:
.. code-block:: python
from spectate import mvc
items = mvc.List()
@mvc.view(items)
def printer(items, events):
for e in events:
print(e)
items.append(1) | 3.811784 | 5.109589 | 0.746006 |
if isinstance(callback, Control):
callback = callback._before
self._before = callback
return self | def before(self, callback: Union[Callable, str]) -> "Control" | Register a control method that reacts before the trigger method is called.
Parameters:
callback:
The control method. If given as a callable, then that function will be
used as the callback. If given as a string, then the control will look
up a method with that name when reacting (useful when subclassing). | 5.100338 | 9.783922 | 0.521298 |
if isinstance(callback, Control):
callback = callback._after
self._after = callback
return self | def after(self, callback: Union[Callable, str]) -> "Control" | Register a control method that reacts after the trigger method is called.
Parameters:
callback:
The control method. If given as a callable, then that function will be
used as the callback. If given as a string, then the control will look
up a method with that name when reacting (useful when subclassing). | 5.033739 | 9.825574 | 0.51231 |
@app.route(base_url + '/(.*)')
def serve(env, req):
try:
base = pathlib.Path(base_path).resolve()
path = (base / req.match.group(1)).resolve()
except FileNotFoundError:
return Response(None, 404, 'Not Found')
# Don't let bad paths through
if base == path or base in path.parents:
if path.is_file():
return ResponseFile(str(path))
if index and path.is_dir():
if base == path:
ret = ''
else:
ret = '<a href="../">../</a><br/>\r\n'
for item in path.iterdir():
name = item.parts[-1]
if item.is_dir():
name += '/'
ret += '<a href="{}">{}</a><br/>\r\n'.format(urllib.parse.quote(name), html.escape(name))
ret = ResponseString(ret, 'text/html')
return ret
return Response(None, 404, 'Not Found') | def serve_static(app, base_url, base_path, index=False) | Serve a directory statically
Parameters:
* app: Grole application object
* base_url: Base URL to serve from, e.g. /static
* base_path: Base path to look for files in
* index: Provide simple directory indexes if True | 2.972739 | 3.068206 | 0.968885 |
@app.route(url, doc=False)
def index(env, req):
ret = ''
for d in env['doc']:
ret += 'URL: {url}, supported methods: {methods}{doc}\n'.format(**d)
return ret | def serve_doc(app, url) | Serve API documentation extracted from request handler docstrings
Parameters:
* app: Grole application object
* url: URL to serve at | 6.766177 | 8.245365 | 0.820604 |
parser = argparse.ArgumentParser()
parser.add_argument('-a', '--address', help='address to listen on, default localhost',
default='localhost')
parser.add_argument('-p', '--port', help='port to listen on, default 1234',
default=1234, type=int)
parser.add_argument('-d', '--directory', help='directory to serve, default .',
default='.')
parser.add_argument('-n', '--noindex', help='do not show directory indexes',
default=False, action='store_true')
loglevel = parser.add_mutually_exclusive_group()
loglevel.add_argument('-v', '--verbose', help='verbose logging',
default=False, action='store_true')
loglevel.add_argument('-q', '--quiet', help='quiet logging',
default=False, action='store_true')
return parser.parse_args(args) | def parse_args(args=sys.argv[1:]) | Parse command line arguments for Grole server running as static file server | 1.803276 | 1.700486 | 1.060447 |
args = parse_args(args)
if args.verbose:
logging.basicConfig(level=logging.DEBUG)
elif args.quiet:
logging.basicConfig(level=logging.ERROR)
else:
logging.basicConfig(level=logging.INFO)
app = Grole()
serve_static(app, '', args.directory, not args.noindex)
app.run(args.address, args.port) | def main(args=sys.argv[1:]) | Run Grole static file server | 3.049257 | 2.497237 | 1.221052 |
start_line = await self._readline(reader)
self.method, self.location, self.version = start_line.decode().split()
path_query = urllib.parse.unquote(self.location).split('?', 1)
self.path = path_query[0]
self.query = {}
if len(path_query) > 1:
for q in path_query[1].split('&'):
try:
k, v = q.split('=', 1)
self.query[k] = v
except ValueError:
self.query[q] = None
self.headers = {}
while True:
header_raw = await self._readline(reader)
if header_raw.strip() == b'':
break
header = header_raw.decode().split(':', 1)
self.headers[header[0]] = header[1].strip()
# TODO implement chunked handling
self.data = b''
await self._buffer_body(reader) | async def _read(self, reader) | Parses HTTP request into member variables | 2.244199 | 2.078668 | 1.079633 |
ret = await reader.readline()
if len(ret) == 0 and reader.at_eof():
raise EOFError()
return ret | async def _readline(self, reader) | Readline helper | 3.582786 | 3.320412 | 1.079018 |
remaining = int(self.headers.get('Content-Length', 0))
if remaining > 0:
try:
self.data = await reader.readexactly(remaining)
except asyncio.IncompleteReadError:
raise EOFError() | async def _buffer_body(self, reader) | Buffers the body of the request | 3.134701 | 2.901439 | 1.080395 |
def register_func(func):
if doc:
self.env['doc'].append({'url': path_regex, 'methods': ', '.join(methods), 'doc': func.__doc__})
for method in methods:
self._handlers[method].append((re.compile(path_regex), func))
return func # Return the original function
return register_func | def route(self, path_regex, methods=['GET'], doc=True) | Decorator to register a handler
Parameters:
* path_regex: Request path regex to match against for running the handler
* methods: HTTP methods to use this handler for
* doc: Add to internal doc structure | 3.934204 | 4.345284 | 0.905396 |
peer = writer.get_extra_info('peername')
self._logger.debug('New connection from {}'.format(peer))
try:
# Loop handling requests
while True:
# Read the request
req = Request()
await req._read(reader)
# Find and execute handler
res = None
for path_regex, handler in self._handlers.get(req.method, []):
match = path_regex.fullmatch(req.path)
if match:
req.match = match
try:
if inspect.iscoroutinefunction(handler):
res = await handler(self.env, req)
else:
res = handler(self.env, req)
if not isinstance(res, Response):
res = Response(data=res)
except Exception:
# Error - log it and return 500
self._logger.error(traceback.format_exc())
res = Response(code=500, reason='Internal Server Error')
break
# No handler - send 404
if res is None:
res = Response(code=404, reason='Not Found')
# Respond
await res._write(writer)
self._logger.info('{}: {} -> {}'.format(peer, req.path, res.code))
except EOFError:
self._logger.debug('Connection closed from {}'.format(peer))
except Exception as e:
self._logger.error('Connection error ({}) from {}'.format(e, peer))
writer.close() | async def _handle(self, reader, writer) | Handle a single TCP connection
Parses requests, finds appropriate handlers and returns responses | 2.422891 | 2.404147 | 1.007797 |
# Setup loop
loop = asyncio.get_event_loop()
coro = asyncio.start_server(self._handle, host, port, loop=loop)
try:
server = loop.run_until_complete(coro)
except Exception as e:
self._logger.error('Could not launch server: {}'.format(e))
return
# Run the server
self._logger.info('Serving on {}'.format(server.sockets[0].getsockname()))
try:
loop.run_forever()
except KeyboardInterrupt:
pass
# Close the server
server.close()
loop.run_until_complete(server.wait_closed())
loop.close() | def run(self, host='localhost', port=1234) | Launch the server. Will run forever accepting connections until interrupted.
Parameters:
* host: The host to listen on
* port: The port to listen on | 1.835423 | 1.926039 | 0.952952 |
if not isinstance(model, Model):
raise TypeError("Expected a Model, not %r." % model)
events = []
restore = model.__dict__.get("_notify_model_views")
model._notify_model_views = lambda e: events.extend(e)
try:
yield events
finally:
if restore is None:
del model._notify_model_views
else:
model._notify_model_views = restore
events = tuple(events)
if reducer is not None:
events = tuple(map(Data, reducer(model, events)))
model._notify_model_views(events) | def hold(model: Model, reducer: Optional[Callable] = None) -> Iterator[list] | Temporarily withhold change events in a modifiable list.
All changes that are captured within a "hold" context are forwarded to a list
which is yielded to the user before being sent to views of the given ``model``.
If desired, the user may modify the list of events before the context is left in
order to change the events that are ultimately sent to the model's views.
Parameters:
model:
The model object whose change events will be temporarily withheld.
reducer:
A function for modifying the events list at the end of the context.
Its signature is ``(model, events) -> new_events`` where ``model`` is the
given model, ``events`` is the complete list of events produced in the
context, and the returned ``new_events`` is a list of events that will
actually be distributed to views.
Notes:
All changes withheld from views will be sent as a single notification. For
example, if you view a :class:`spectate.mvc.models.List` and its ``append()``
method is called three times within a :func:`hold` context, its views receive
a single notification containing all three events.
Examples:
Note how the event from ``l.append(1)`` is omitted from the printed statements.
.. code-block:: python
from spectate import mvc
l = mvc.List()
mvc.view(l, lambda l, e: list(map(print, e)))
with mvc.hold(l) as events:
l.append(1)
l.append(2)
del events[0]
.. code-block:: text
{'index': 1, 'old': Undefined, 'new': 2} | 3.700713 | 3.45996 | 1.069583 |
with hold(model, *args, **kwargs) as events:
try:
yield events
except Exception as error:
if undo is not None:
with mute(model):
undo(model, tuple(events), error)
events.clear()
raise | def rollback(
model: Model, undo: Optional[Callable] = None, *args, **kwargs
) -> Iterator[list] | Withhold events if an error occurs.
Generally operates like :func:`hold`, but discards any captured events if an exception is raised within the context.
Parameters:
model:
The model object whose change events may be withheld.
undo:
An optional function for reversing any changes that may have taken place.
Its signature is ``(model, events, error)`` where ``model`` is the given
model, ``event`` is a tuple of all the events that took place, and ``error``
is the exception that was raised. Any changes that you make to the model
within this function will not produce events.
Examples:
Simple suppression of events:
.. code-block:: python
from spectate import mvc
d = mvc.Dict()
@mvc.view(d)
def should_not_be_called(d, events):
# we never call this view
assert False
try:
with mvc.rollback(d):
d["a"] = 1
d["b"] # key doesn't exist
except KeyError:
pass
Undo changes for a dictionary:
.. code-block:: python
from spectate import mvc
def undo_dict_changes(model, events, error):
seen = set()
for e in reversed(events):
if e.old is mvc.Undefined:
del model[e.key]
else:
model[e.key] = e.old
try:
with mvc.rollback(d, undo=undo_dict_changes):
d["a"] = 1
d["b"] = 2
print(d)
d["c"]
except KeyError:
pass
print(d)
.. code-block:: text
{'a': 1, 'b': 2}
{} | 5.638781 | 5.634758 | 1.000714 |
if not isinstance(model, Model):
raise TypeError("Expected a Model, not %r." % model)
restore = model.__dict__.get("_notify_model_views")
model._notify_model_views = lambda e: None
try:
yield
finally:
if restore is None:
del model._notify_model_views
else:
model._notify_model_views = restore | def mute(model: Model) | Block a model's views from being notified.
All changes within a "mute" context will be blocked. No content is yielded to the
user as in :func:`hold`, and the views of the model are never notified that changes
took place.
Parameters:
model: The model whose change events will be blocked.
Examples:
The view is never called due to the :func:`mute` context:
.. code-block:: python
from spectate import mvc
l = mvc.List()
@mvc.view(l)
def raises(events):
raise ValueError("Events occured!")
with mvc.mute(l):
l.append(1) | 3.754198 | 4.008885 | 0.936469 |
def setup(base):
return expose_as(base.__name__, base, *methods)
return setup | def expose(*methods) | A decorator for exposing the methods of a class.
Parameters
----------
*methods : str
A str representation of the methods that should be exposed to callbacks.
Returns
-------
decorator : function
A function accepting one argument - the class whose methods will be
exposed - and which returns a new :class:`Watchable` that will
notify a :class:`Spectator` when those methods are called.
Notes
-----
This is essentially a decorator version of :func:`expose_as` | 10.916621 | 12.875589 | 0.847854 |
classdict = {}
for method in methods:
if not hasattr(base, method):
raise AttributeError(
"Cannot expose '%s', because '%s' "
"instances lack this method" % (method, base.__name__)
)
else:
classdict[method] = MethodSpectator(getattr(base, method), method)
return type(name, (base, Watchable), classdict) | def expose_as(name, base, *methods) | Return a new type with certain methods that are exposed to callback registration.
Parameters
----------
name : str
The name of the new type.
base : type
A type such as list or dict.
*methods : str
A str representation of the methods that should be exposed to callbacks.
Returns
-------
exposed : obj:
A :class:`Watchable` with methods that will notify a :class:`Spectator`. | 4.617033 | 4.371266 | 1.056223 |
check = issubclass if inspect.isclass(value) else isinstance
return check(value, Watchable) | def watchable(value) | Returns True if the given value is a :class:`Watchable` subclass or instance. | 6.40051 | 4.540544 | 1.409635 |
if isinstance(value, Watchable):
wtype = type(value)
else:
raise TypeError("Expected a Watchable, not %r." % value)
spectator = getattr(value, "_instance_spectator", None)
if not isinstance(spectator, Spectator):
spectator = spectator_type(wtype)
value._instance_spectator = spectator
return spectator | def watch(value, spectator_type=Spectator) | Register a :class:`Spectator` to a :class:`Watchable` and return it.
In order to register callbacks to an eventful object, you need to create
a Spectator that will watch it for you. A :class:`Spectator` is a relatively simple
object that has methods for adding, deleting, and triggering callbacks. To
create a spectator we call ``spectator = watch(x)``, where x is a Watchable
instance.
Parameters
----------
value : Watchable
A :class:`Watchable` instance.
spectator_type : Spectator
The type of spectator that will be returned.
Returns
-------
spectator: spectator_type
The :class:`Spectator` (specified by ``spectator_type``) that
was registered to the given instance. | 3.304322 | 3.362421 | 0.982721 |
value = cls(*args, **kwargs)
return value, watch(value) | def watched(cls, *args, **kwargs) | Create and return a :class:`Watchable` with its :class:`Spectator`.
See :func:`watch` for more info on :class:`Spectator` registration.
Parameters
----------
cls: type:
A subclass of :class:`Watchable`
*args:
Positional arguments used to create the instance
**kwargs:
Keyword arguments used to create the instance. | 10.019233 | 21.083757 | 0.475211 |
if not isinstance(value, Watchable):
raise TypeError("Expected a Watchable, not %r." % value)
spectator = watcher(value)
try:
del value._instance_spectator
except Exception:
pass
return spectator | def unwatch(value) | Return the :class:`Spectator` of a :class:`Watchable` instance. | 6.645329 | 5.049259 | 1.3161 |
if isinstance(name, (list, tuple)):
for name in name:
self.callback(name, before, after)
else:
if not isinstance(getattr(self.subclass, name), MethodSpectator):
raise ValueError("No method specator for '%s'" % name)
if before is None and after is None:
raise ValueError("No pre or post '%s' callbacks were given" % name)
elif before is not None and not callable(before):
raise ValueError("Expected a callable, not %r." % before)
elif after is not None and not callable(after):
raise ValueError("Expected a callable, not %r." % after)
if name in self._callback_registry:
callback_list = self._callback_registry[name]
else:
callback_list = []
self._callback_registry[name] = callback_list
callback_list.append((before, after)) | def callback(self, name, before=None, after=None) | Add a callback pair to this spectator.
You can specify, with keywords, whether each callback should be triggered
before and/or after a given method is called - hereafter referred to as
"beforebacks" and "afterbacks" respectively.
Parameters
----------
name: str
The name of the method to which callbacks should respond.
before: None or callable
A callable of the form ``before(obj, call)`` where ``obj`` is
the instance which called a watched method, and ``call`` is a
:class:`Data` containing the name of the called method, along with
its positional and keyword arguments under the attributes "name"
"args", and "kwargs" respectively.
after: None or callable
A callable of the form ``after(obj, answer)`` where ``obj`` is
the instance which called a watched method, and ``answer`` is a
:class:`Data` containing the name of the called method, along with
the value it returned, and data ``before`` may have returned
under the attributes "name", "value", and "before" respectively. | 2.542336 | 2.573996 | 0.9877 |
if isinstance(name, (list, tuple)):
for name in name:
self.remove_callback(name, before, after)
elif before is None and after is None:
del self._callback_registry[name]
else:
if name in self._callback_registry:
callback_list = self._callback_registry[name]
else:
callback_list = []
self._callback_registry[name] = callback_list
callback_list.remove((before, after))
if len(callback_list) == 0:
# cleanup if all callbacks are gone
del self._callback_registry[name] | def remove_callback(self, name, before=None, after=None) | Remove a beforeback, and afterback pair from this Spectator
If ``before`` and ``after`` are None then all callbacks for
the given method will be removed. Otherwise, only the exact
callback pair will be removed.
Parameters
----------
name: str
The name of the method the callback pair is associated with.
before: None or callable
The beforeback that was originally registered to the given method.
after: None or callable
The afterback that was originally registered to the given method. | 1.906615 | 2.086628 | 0.91373 |
if name in self._callback_registry:
beforebacks, afterbacks = zip(*self._callback_registry.get(name, []))
hold = []
for b in beforebacks:
if b is not None:
call = Data(name=name, kwargs=kwargs.copy(), args=args)
v = b(obj, call)
else:
v = None
hold.append(v)
out = method(*args, **kwargs)
for a, bval in zip(afterbacks, hold):
if a is not None:
a(obj, Data(before=bval, name=name, value=out))
elif callable(bval):
# the beforeback's return value was an
# afterback that expects to be called
bval(out)
return out
else:
return method(*args, **kwargs) | def call(self, obj, name, method, args, kwargs) | Trigger a method along with its beforebacks and afterbacks.
Parameters
----------
name: str
The name of the method that will be called
args: tuple
The arguments that will be passed to the base method
kwargs: dict
The keyword args that will be passed to the base method | 4.247377 | 3.981593 | 1.066753 |
cut = text.find(delim, length)
if cut > -1:
return text[:cut]
else:
return text | def cut_to_length(text, length, delim) | Shorten given text on first delimiter after given number
of characters. | 3.275394 | 3.66945 | 0.892612 |
if version and version != str(sys.version_info[0]):
return settings.PYTHON_INTERPRETER + version
else:
return sys.executable | def get_interpreter_path(version=None) | Return the executable of a specified or current version. | 5.877654 | 5.055686 | 1.162583 |
license = []
for classifier in trove:
if 'License' in classifier:
stripped = classifier.strip()
# if taken from EGG-INFO, begins with Classifier:
stripped = stripped[stripped.find('License'):]
if stripped in settings.TROVE_LICENSES:
license.append(settings.TROVE_LICENSES[stripped])
return ' and '.join(license) | def license_from_trove(trove) | Finds out license from list of trove classifiers.
Args:
trove: list of trove classifiers
Returns:
Fedora name of the package license or empty string, if no licensing
information is found in trove classifiers. | 5.35823 | 5.318169 | 1.007533 |
versions = set()
for classifier in trove:
if 'Programming Language :: Python ::' in classifier:
ver = classifier.split('::')[-1]
major = ver.split('.')[0].strip()
if major:
versions.add(major)
return sorted(
set([v for v in versions if v.replace('.', '', 1).isdigit()])) | def versions_from_trove(trove) | Finds out python version from list of trove classifiers.
Args:
trove: list of trove classifiers
Returns:
python version string | 3.929016 | 3.513959 | 1.118116 |
def inner(self, client=None):
data = extraction_fce(self)
if client is None:
logger.warning("Client is None, it was probably disabled")
data.update_attr('source0', self.archive.name)
return data
try:
release_data = client.release_data(self.name, self.version)
except BaseException:
logger.warning("Some kind of error while communicating with "
"client: {0}.".format(client), exc_info=True)
return data
try:
url, md5_digest = get_url(client, self.name, self.version)
except exc.MissingUrlException:
url, md5_digest = ('FAILED TO EXTRACT FROM PYPI',
'FAILED TO EXTRACT FROM PYPI')
data_dict = {'source0': url, 'md5': md5_digest}
for data_field in settings.PYPI_USABLE_DATA:
data_dict[data_field] = release_data.get(data_field, '')
# we usually get better license representation from trove classifiers
data_dict["license"] = license_from_trove(release_data.get(
'classifiers', ''))
data.set_from(data_dict, update=True)
return data
return inner | def pypi_metadata_extension(extraction_fce) | Extracts data from PyPI and merges it with data from the extraction
method. | 5.254622 | 5.168066 | 1.016748 |
def inner(self):
data = extraction_fce(self)
if virtualenv is None or not self.venv:
logger.debug("Skipping virtualenv metadata extraction.")
return data
temp_dir = tempfile.mkdtemp()
try:
extractor = virtualenv.VirtualEnv(self.name, temp_dir,
self.name_convertor,
self.base_python_version)
data.set_from(extractor.get_venv_data, update=True)
except exc.VirtualenvFailException as e:
logger.error("{}, skipping virtualenv metadata extraction.".format(
e))
finally:
shutil.rmtree(temp_dir)
return data
return inner | def venv_metadata_extension(extraction_fce) | Extracts virtualenv-specific metadata and merges it with data
from the given extraction method. | 5.462939 | 5.463952 | 0.999815 |
def inner(description):
clear_description = \
re.sub(r'\s+', ' ', # multiple whitespaces
# general URLs
re.sub(r'\w+:\/{2}[\d\w-]+(\.[\d\w-]+)*(?:(?:\/[^\s/]*))*', '',
# delimiters
re.sub('(#|=|---|~|`)*', '',
# very short lines, typically titles
re.sub('((\r?\n)|^).{0,8}((\r?\n)|$)', '',
# PyPI's version and downloads tags
re.sub(
'((\r*.. image::|:target:) https?|(:align:|:alt:))[^\n]*\n', '',
description_fce(description))))))
return ' '.join(textwrap.wrap(clear_description, 80))
return inner | def process_description(description_fce) | Removes special character delimiters, titles
and wraps paragraphs. | 8.537432 | 7.979308 | 1.069946 |
py_vers = versions_from_trove(self.classifiers)
return [ver for ver in py_vers if ver != self.unsupported_version] | def versions_from_archive(self) | Return Python versions extracted from trove classifiers. | 10.927391 | 5.730243 | 1.906968 |
if self.rpm_name or self.name.startswith(('python-', 'Python-')):
return self.name_convertor.base_name(self.rpm_name or self.name) | def srcname(self) | Return srcname for the macro if the pypi name should be changed.
Those cases are:
- name was provided with -r option
- pypi name is like python-<name> | 9.603829 | 7.240546 | 1.326396 |
data = PackageData(
local_file=self.local_file,
name=self.name,
pkg_name=self.rpm_name or self.name_convertor.rpm_name(
self.name, pkg_name=True),
version=self.version,
srcname=self.srcname)
with self.archive:
data.set_from(self.data_from_archive)
# for example nose has the attribute `packages`, but instead of listing
# the packages by name it uses a function to find them; that makes
# data.packages an empty list if virtualenv is disabled
if self.venv_extraction_disabled and getattr(data, "packages") == []:
data.packages = [data.name]
return data | def extract_data(self) | Extracts data from archive.
Returns:
PackageData object containing the extracted data. | 10.618641 | 10.412421 | 1.019805 |
archive_data = {}
archive_data['runtime_deps'] = self.runtime_deps
archive_data['build_deps'] = [
['BuildRequires', 'python2-devel']] + self.build_deps
archive_data['py_modules'] = self.py_modules
archive_data['scripts'] = self.scripts
archive_data['home_page'] = self.home_page
archive_data['description'] = self.description
archive_data['summary'] = self.summary
archive_data['license'] = self.license
archive_data['has_pth'] = self.has_pth
archive_data['has_extension'] = self.has_extension
archive_data['has_test_suite'] = self.has_test_suite
archive_data['python_versions'] = self.versions_from_archive
(archive_data['doc_files'],
archive_data['doc_license']) = self.separate_license_files(
self.doc_files)
archive_data['dirname'] = self.archive.top_directory
return archive_data | def data_from_archive(self) | Returns all metadata extractable from the archive.
Returns:
dictionary containing metadata extracted from the archive | 3.272108 | 3.341981 | 0.979092 |
install_requires.append('setuptools') # entrypoints
return sorted(self.name_convert_deps_list(deps_from_pyp_format(
install_requires, runtime=True))) | def runtime_deps(self): # install_requires
install_requires = self.metadata['install_requires']
if self.metadata[
'entry_points'] and 'setuptools' not in install_requires | Returns list of runtime dependencies of the package specified in
setup.py.
Dependencies are in RPM SPECFILE format - see dependency_to_rpm()
for details, but names are already transformed according to
current distro.
Returns:
list of runtime dependencies of the package | 15.399007 | 23.656036 | 0.650955 |
build_requires += self.metadata['tests_require'] + self.metadata[
'install_requires']
if 'setuptools' not in build_requires:
build_requires.append('setuptools')
return sorted(self.name_convert_deps_list(deps_from_pyp_format(
build_requires, runtime=False))) | def build_deps(self): # setup_requires [tests_require, install_requires]
build_requires = self.metadata['setup_requires']
if self.has_test_suite | Same as runtime_deps, but build dependencies. Test and install
requires are included if package contains test suite to prevent
%check phase crashes because of missing dependencies
Returns:
list of build dependencies of the package | 7.006606 | 7.082216 | 0.989324 |
doc_files = []
for doc_file_re in settings.DOC_FILES_RE:
doc_files.extend(
self.archive.get_files_re(doc_file_re, ignorecase=True))
return ['/'.join(x.split('/')[1:]) for x in doc_files] | def doc_files(self) | Returns list of doc files that should be used for %doc in specfile.
Returns:
List of doc files from the archive - only basenames, not full
paths. | 4.159768 | 3.719105 | 1.118486 |
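The final list comprehension above drops the archive's top-level directory from each path; a quick illustration (the file names are made up):

```python
# sdist members are prefixed with the top directory, e.g. 'spam-1.0.0/';
# dropping the first path component yields the in-package paths %doc expects
doc_files = ['spam-1.0.0/README.rst', 'spam-1.0.0/docs/LICENSE']
stripped = ['/'.join(x.split('/')[1:]) for x in doc_files]
print(stripped)  # ['README.rst', 'docs/LICENSE']
```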
# search for sphinx dir doc/ or docs/ under the first directory in
# archive (e.g. spam-1.0.0/doc)
candidate_dirs = self.archive.get_directories_re(
settings.SPHINX_DIR_RE, full_path=True)
# search for conf.py in the dirs (TODO: what if more are found?)
for directory in candidate_dirs:
contains_conf_py = self.archive.get_files_re(
r'{0}/conf.py$'.format(re.escape(directory)), full_path=True)
in_tests = 'tests' in directory.split(os.sep)
if contains_conf_py and not in_tests:
return directory | def sphinx_dir(self) | Returns directory with sphinx documentation, if there is such.
Returns:
Full path to sphinx documentation dir inside the archive, or None
if there is no such. | 5.976896 | 5.555723 | 1.075809 |
archive_data = super(SetupPyMetadataExtractor, self).data_from_archive
archive_data['has_packages'] = self.has_packages
archive_data['packages'] = self.packages
archive_data['has_bundled_egg_info'] = self.has_bundled_egg_info
sphinx_dir = self.sphinx_dir
if sphinx_dir:
archive_data['sphinx_dir'] = "/".join(sphinx_dir.split("/")[1:])
archive_data['build_deps'].append(
['BuildRequires', self.name_convertor.rpm_name(
"sphinx", self.base_python_version)])
return archive_data | def data_from_archive(self) | Appends setup.py specific metadata to archive_data. | 4.982787 | 4.21856 | 1.181158 |
if not isinstance(requires_types, list):
requires_types = list(requires_types)
extracted_requires = []
for requires_name in requires_types:
for requires in self.json_metadata.get(requires_name, []):
if 'win' in requires.get('environment', {}):
continue
extracted_requires.extend(requires['requires'])
return extracted_requires | def get_requires(self, requires_types) | Extracts requires of the given types from the metadata file, filtering
out Windows-specific requires. | 3.646296 | 3.057793 | 1.19246 |
try:
release_urls = client.release_urls(name, version)
release_data = client.release_data(name, version)
except BaseException: # some kind of error with client
logger.debug('Client: {0} Name: {1} Version: {2}.'.format(
client, name, version))
raise exceptions.MissingUrlException(
"Some kind of error while communicating with client: {0}.".format(
client))  # exc_info is a logging kwarg, not an exception argument
url = ''
md5_digest = None
if not wheel:
# Preferred archive is tar.gz
if len(release_urls):
zip_url = zip_md5 = ''
for release_url in release_urls:
if release_url['url'].endswith("tar.gz"):
url = release_url['url']
md5_digest = release_url['md5_digest']
if release_url['url'].endswith(".zip"):
zip_url = release_url['url']
zip_md5 = release_url['md5_digest']
if url == '':
url = zip_url or release_urls[0]['url']
md5_digest = zip_md5 or release_urls[0]['md5_digest']
elif release_data:
url = release_data['download_url']
else:
# Only wheel is acceptable
for release_url in release_urls:
if release_url['url'].endswith("none-any.whl"):
url = release_url['url']
md5_digest = release_url['md5_digest']
break
if not url:
raise exceptions.MissingUrlException(
"Url of source archive not found.")
if url == 'UNKNOWN':
raise exceptions.MissingUrlException(
"{0} package has no sources on PyPI, Please ask the maintainer "
"to upload sources.".format(release_data['name']))
if not hashed_format:
url = ("https://files.pythonhosted.org/packages/source"
"/{0[0]}/{0}/{1}").format(name, url.split("/")[-1])
return (url, md5_digest) | def get_url(client, name, version, wheel=False, hashed_format=False) | Retrieves the list of package URLs using PyPI's XML-RPC. Chooses the URL
of the preferred archive and its md5_digest. | 2.869086 | 2.749558 | 1.043472 |
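The archive-preference logic in `get_url` can be isolated from the XML-RPC plumbing; a minimal sketch, assuming the entries have the same shape as PyPI's `release_urls` output (`'url'` and `'md5_digest'` keys — the helper name is mine):

```python
def pick_source_url(release_urls):
    """Prefer a .tar.gz URL, fall back to .zip, then to the first entry."""
    url = md5 = ''
    zip_url = zip_md5 = ''
    for entry in release_urls:
        if entry['url'].endswith('tar.gz'):
            url, md5 = entry['url'], entry['md5_digest']
        elif entry['url'].endswith('.zip'):
            zip_url, zip_md5 = entry['url'], entry['md5_digest']
    if not url and release_urls:
        url = zip_url or release_urls[0]['url']
        md5 = zip_md5 or release_urls[0]['md5_digest']
    return url, md5

urls = [
    {'url': 'https://example.org/spam-1.0.zip', 'md5_digest': 'aaa'},
    {'url': 'https://example.org/spam-1.0.tar.gz', 'md5_digest': 'bbb'},
]
print(pick_source_url(urls))  # tar.gz wins over zip
```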
try:
url = get_url(self.client, self.name, self.version,
wheel, hashed_format=True)[0]
except exceptions.MissingUrlException as e:
raise SystemExit(e)
if wheel:
self.temp_dir = tempfile.mkdtemp()
save_dir = self.temp_dir
else:
save_dir = self.save_dir
save_file = '{0}/{1}'.format(save_dir, url.split('/')[-1])
request.urlretrieve(url, save_file)
logger.info('Downloaded package from PyPI: {0}.'.format(save_file))
return save_file | def get(self, wheel=False) | Downloads the package from PyPI.
Returns:
Full path of the downloaded file.
Raises:
PermissionError if the save_dir is not writable. | 3.701308 | 3.324072 | 1.113486 |
if self.local_file.endswith('.whl'):
self.temp_dir = tempfile.mkdtemp()
save_dir = self.temp_dir
else:
save_dir = self.save_dir
save_file = '{0}/{1}'.format(save_dir, os.path.basename(
self.local_file))
if not os.path.exists(save_file) or not os.path.samefile(
self.local_file, save_file):
shutil.copy2(self.local_file, save_file)
logger.info('Local file: {0} copied to {1}.'.format(
self.local_file, save_file))
return save_file | def get(self) | Copies file from local filesystem to self.save_dir.
Returns:
Full path of the copied file.
Raises:
EnvironmentError if the file can't be found or the save_dir
is not writable. | 2.604593 | 2.246647 | 1.159324 |
# we don't use splitext, because on "a.tar.gz" it returns ("a.tar",
# "gz")
filename = os.path.basename(self.local_file)
for archive_suffix in settings.ARCHIVE_SUFFIXES:
if filename.endswith(archive_suffix):
# slice the suffix off; rstrip() strips a character set, not a suffix
return filename[:-len(archive_suffix)]
# if the loop is exhausted it means no known suffix was found
else:
raise exceptions.UnknownArchiveFormatException(
'Unknown archive format of file {0}.'.format(filename))
Returns:
Filename stripped of the suffix (extension). | 6.849054 | 5.92401 | 1.156152 |
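A note on why slicing beats `rstrip()` here: `rstrip()` removes trailing characters drawn from a *set*, not a literal suffix, so it can silently eat legitimate trailing characters of the name-version part:

```python
filename = 'pytz-2018.4.tar.gz'
suffix = '.tar.gz'

# rstrip() treats '.tar.gz' as the character set {'.', 't', 'a', 'r', 'g', 'z'}
# and strips any trailing run of those characters:
print('spam-0.z'.rstrip('.tar.gz'))   # 'spam-0' -- the 'z' and '.' are gone

# Slicing off len(suffix) characters removes exactly the suffix:
print(filename[:-len(suffix)])        # 'pytz-2018.4'
```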
'''
Scans content of directories
'''
self.bindir = set(os.listdir(path + 'bin/'))
self.lib_sitepackages = set(os.listdir(glob.glob(
path + 'lib/python?.?/site-packages/')[0])) | def fill(self, path) | Scans content of directories | 9.844398 | 6.70066 | 1.469169 |
'''
Installs package given as first argument to virtualenv without
dependencies
'''
try:
self.env.install(self.name, force=True, options=["--no-deps"])
except (ve.PackageInstallationException,
ve.VirtualenvReadonlyException):
raise VirtualenvFailException(
'Failed to install package to virtualenv')
self.dirs_after_install.fill(self.temp_dir + '/venv/') | def install_package_to_venv(self) | Installs package given as first argument to virtualenv without
dependencies | 11.924771 | 8.446174 | 1.411855 |
'''
Makes final versions of site_packages and scripts using DirsContent
sub method and filters
'''
try:
diff = self.dirs_after_install - self.dirs_before_install
except ValueError:
raise VirtualenvFailException(
"Some of the DirsContent attributes are uninitialized")
self.data['has_pth'] = \
any([x for x in diff.lib_sitepackages if x.endswith('.pth')])
site_packages = site_packages_filter(diff.lib_sitepackages)
self.data['packages'] = sorted(
[p for p in site_packages if not p.endswith(MODULE_SUFFIXES)])
self.data['py_modules'] = sorted(set(
[os.path.splitext(m)[0] for m in site_packages - set(
self.data['packages'])]))
self.data['scripts'] = scripts_filter(sorted(diff.bindir))
logger.debug('Data from file differences in virtualenv:')
logger.debug(pprint.pformat(self.data)) | def get_dirs_differance(self) | Makes final versions of site_packages and scripts using DirsContent
sub method and filters | 7.336321 | 5.070409 | 1.446889 |
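The packages/modules split above boils down to set operations on the site-packages diff; a sketch with made-up entries (`MODULE_SUFFIXES` and the egg-info filter are stand-ins for the real `site_packages_filter` and constants, which the row does not show):

```python
import os

MODULE_SUFFIXES = ('.py', '.pyc')  # illustrative, not the real constant

site_packages = {'spam', 'eggs.py', 'spam-1.0-py3.8.egg-info'}
# stand-in for site_packages_filter: drop egg-info entries
filtered = {x for x in site_packages if not x.endswith('.egg-info')}
# directories (no module suffix) become packages, the rest are py_modules
packages = sorted(p for p in filtered if not p.endswith(MODULE_SUFFIXES))
py_modules = sorted({os.path.splitext(m)[0] for m in filtered - set(packages)})
print(packages, py_modules)  # ['spam'] ['eggs']
```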
register_file_log_handler('/tmp/pyp2rpm-{0}.log'.format(getpass.getuser()))
if srpm or s:
register_console_log_handler()
distro = o
if t and os.path.splitext(t)[0] in settings.KNOWN_DISTROS:
distro = t
elif t and not (b or p):
raise click.UsageError("Default python versions for template {0} are "
"missing in settings, add them or use flags "
"-b/-p to set python versions.".format(t))
logger = logging.getLogger(__name__)
logger.info('Pyp2rpm initialized.')
convertor = Convertor(package=package,
version=v,
save_dir=d,
template=t or settings.DEFAULT_TEMPLATE,
distro=distro,
base_python_version=b,
python_versions=p,
rpm_name=r,
proxy=proxy,
venv=venv,
autonc=autonc)
logger.debug(
'Convertor: {0} created. Trying to convert.'.format(convertor))
converted = convertor.convert()
logger.debug('Convertor: {0} successfully converted.'.format(convertor))
if sclize:
converted = convert_to_scl(converted, scl_kwargs)
if srpm or s:
if r:
spec_name = r + '.spec'
else:
prefix = 'python-' if not convertor.name.startswith(
'python-') else ''
spec_name = prefix + convertor.name + '.spec'
logger.info('Using name: {0} for specfile.'.format(spec_name))
if d == settings.DEFAULT_PKG_SAVE_PATH:
# default save_path is rpmbuild tree so we want to save spec
# in rpmbuild/SPECS/
spec_path = d + '/SPECS/' + spec_name
else:
# if the user provided a save_path, save the spec there
spec_dir = os.path.dirname(spec_path)
if not os.path.exists(spec_dir):
os.makedirs(spec_dir)
logger.debug('Opening specfile: {0}.'.format(spec_path))
if not utils.PY3:
converted = converted.encode('utf-8')
with open(spec_path, 'w') as f:
f.write(converted)
logger.info('Specfile saved at: {0}.'.format(spec_path))
if srpm:
msg = utils.build_srpm(spec_path, d)
logger.info(msg)
else:
logger.debug('Printing specfile to stdout.')
if utils.PY3:
print(converted)
else:
print(converted.encode('utf-8'))
logger.debug('Specfile printed.')
logger.info("That's all folks!") | def main(package, v, d, s, r, proxy, srpm, p, b, o, t, venv, autonc, sclize,
**scl_kwargs) | Convert PyPI package to RPM specfile or SRPM.
\b
\b\bArguments:
PACKAGE Provide PyPI name of the package or path to compressed
source file. | 3.547898 | 3.688862 | 0.961787 |
scl_options['skip_functions'] = scl_options['skip_functions'].split(',')
scl_options['meta_spec'] = None
convertor = SclConvertor(options=scl_options)
return str(convertor.convert(spec)) | def convert_to_scl(spec, scl_options) | Convert spec into SCL-style spec file using `spec2scl`.
Args:
spec: (str) a spec file
scl_options: (dict) SCL options provided
Returns:
A converted spec file | 4.741767 | 5.475598 | 0.865982 |
super(Pyp2rpmCommand, self).format_options(ctx, formatter)
scl_opts = []
for param in self.get_params(ctx):
if isinstance(param, SclizeOption):
scl_opts.append(param.get_scl_help_record(ctx))
if scl_opts:
with formatter.section('SCL related options'):
formatter.write_dl(scl_opts) | def format_options(self, ctx, formatter) | Writes SCL related options into the formatter as a separate
group. | 4.841064 | 3.950279 | 1.2255 |
if 'sclize' in opts and not SclConvertor:
raise click.UsageError("Please install spec2scl package to "
"perform SCL-style conversion")
if self.name in opts and 'sclize' not in opts:
raise click.UsageError(
"`--{}` can only be used with --sclize option".format(
self.name))
return super(SclizeOption, self).handle_parse_result(ctx, opts, args) | def handle_parse_result(self, ctx, opts, args) | Validate SCL related options before parsing. | 5.631464 | 4.657577 | 1.209097 |
if var is None:
return []
if isinstance(var, str):
var = var.split('\n')
elif not isinstance(var, list):
try:
var = list(var)
except TypeError:
raise ValueError("{} cannot be converted to a list.".format(var))
return var | def to_list(var) | Checks if given value is a list, tries to convert, if it is not. | 2.646703 | 2.493463 | 1.061457 |
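A standalone version of the normalization above, with the behavior made explicit for each input kind:

```python
def to_list(var):
    """Normalize None, multi-line strings and other iterables to a list."""
    if var is None:
        return []
    if isinstance(var, str):
        return var.split('\n')      # one entry per line
    try:
        return list(var)            # tuples, sets, generators, ...
    except TypeError:
        raise ValueError('{} cannot be converted to a list.'.format(var))

print(to_list('a\nb'), to_list(None), to_list(('x', 'y')))
# ['a', 'b'] [] ['x', 'y']
```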
if self.stdout:
sys.stdout.write("extracted json data:\n" + json.dumps(
self.metadata, default=to_str) + "\n")
else:
extract_dist.class_metadata = self.metadata | def run(self) | Sends extracted metadata in json format to stdout if stdout
option is specified, assigns metadata dictionary to class_metadata
variable otherwise. | 10.2774 | 5.721556 | 1.79626 |
logger.debug('Dependencies provided: {0} runtime: {1}.'.format(
dep, runtime))
converted = []
if not len(dep.specs):
converted.append(['Requires', dep.project_name])
else:
for ver_spec in dep.specs:
if ver_spec[0] == '!=':
converted.append(
['Conflicts', dep.project_name, '=', ver_spec[1]])
elif ver_spec[0] == '==':
converted.append(
['Requires', dep.project_name, '=', ver_spec[1]])
else:
converted.append(
['Requires', dep.project_name, ver_spec[0], ver_spec[1]])
if not runtime:
for conv in converted:
conv[0] = "Build" + conv[0]
logger.debug('Converted dependencies: {0}.'.format(converted))
return converted | def dependency_to_rpm(dep, runtime) | Converts a dependency got by pkg_resources.Requirement.parse()
to RPM format.
Args:
dep - a dependency retrieved by pkg_resources.Requirement.parse()
runtime - whether the returned dependency should be runtime (True)
or build time (False)
Returns:
List of semi-SPECFILE dependencies (package names are not properly
converted yet).
For example: [['Requires', 'jinja2'],
['Conflicts', 'jinja2', '=', '2.0.1']] | 2.786091 | 2.225608 | 1.251833 |
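To see the mapping concretely without pulling in `pkg_resources`, here is the same logic run against a minimal stand-in exposing just the two attributes the function reads (`project_name` and `specs`):

```python
class FakeDep:
    """Minimal stand-in for pkg_resources.Requirement."""
    def __init__(self, project_name, specs):
        self.project_name = project_name
        self.specs = specs  # list of (operator, version) tuples

def dependency_to_rpm(dep, runtime):
    converted = []
    if not dep.specs:
        converted.append(['Requires', dep.project_name])
    else:
        for op, ver in dep.specs:
            if op == '!=':
                converted.append(['Conflicts', dep.project_name, '=', ver])
            elif op == '==':
                converted.append(['Requires', dep.project_name, '=', ver])
            else:
                converted.append(['Requires', dep.project_name, op, ver])
    if not runtime:  # build-time deps get the Build prefix
        for conv in converted:
            conv[0] = 'Build' + conv[0]
    return converted

dep = FakeDep('jinja2', [('>=', '2.0'), ('!=', '2.0.1')])
print(dependency_to_rpm(dep, runtime=True))
# [['Requires', 'jinja2', '>=', '2.0'], ['Conflicts', 'jinja2', '=', '2.0.1']]
```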
parsed = []
logger.debug("Dependencies from setup.py: {0} runtime: {1}.".format(
requires, runtime))
for req in requires:
try:
parsed.append(Requirement.parse(req))
except ValueError:
logger.warning("Unparsable dependency {0}.".format(req),
exc_info=True)
in_rpm_format = []
for dep in parsed:
in_rpm_format.extend(dependency_to_rpm(dep, runtime))
logger.debug("Dependencies from setup.py in rpm format: {0}.".format(
in_rpm_format))
return in_rpm_format | def deps_from_pyp_format(requires, runtime=True) | Parses dependencies extracted from setup.py.
Args:
requires: list of dependencies as written in setup.py of the package.
runtime: are the dependencies runtime (True) or build time (False)?
Returns:
List of semi-SPECFILE dependencies (see dependency_to_rpm for format). | 2.913749 | 2.718026 | 1.072009 |
parsed = []
for req in requires:
# req looks like 'some-name (>=X.Y,!=Y.X)' or 'some-name' where
# 'some-name' is the name of required package and '(>=X.Y,!=Y.X)'
# are specs
name, specs = None, None
# len(reqs) == 1 if no versions are specified, 2 otherwise
reqs = req.split(' ')
name = reqs[0]
if len(reqs) == 2:
specs = reqs[1]
# try if there are more specs in spec part of the requires
specs = specs.split(",")
# strip brackets
specs = [re.sub('[()]', '', spec) for spec in specs]
# this will divide (>=0.1.2) to ['>=', '0', '.1.2']
# or (0.1.2) into ['', '0', '.1.2']
specs = [re.split('([0-9])', spec, 1) for spec in specs]
# we have separated specs based on number as delimiter
# so we need to join it back to rest of version number
# e.g ['>=', '0', '.1.2'] to ['>=', '0.1.2']
for spec in specs:
spec[1:3] = [''.join(spec[1:3])]
if specs:
for spec in specs:
if '!' in spec[0]:
parsed.append(['Conflicts', name, '=', spec[1]])
elif spec[0] == '==':  # this spec's operator, not specs[0]
parsed.append(['Requires', name, '=', spec[1]])
else:
parsed.append(['Requires', name, spec[0], spec[1]])
else:
parsed.append(['Requires', name])
if not runtime:
for pars in parsed:
pars[0] = 'Build' + pars[0]
return parsed | def deps_from_pydit_json(requires, runtime=True) | Parses dependencies returned by pydist.json, since versions
use brackets we can't use pkg_resources to parse them, so we need a separate
method
Args:
requires: list of dependencies as written in pydist.json of the package
runtime: are the dependencies runtime (True) or build time (False)
Returns:
List of semi-SPECFILE dependencies (see dependency_to_rpm for format) | 4.135217 | 4.115095 | 1.00489 |
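The bracketed-spec splitting described in the comments above can be condensed into one helper; a sketch of the same steps (the function name is mine):

```python
import re

def parse_pydist_require(req):
    """Split "name (>=1.2,!=1.3)" into (name, [['>=', '1.2'], ['!=', '1.3']])."""
    parts = req.split(' ')
    name = parts[0]
    if len(parts) == 1:  # no version specs at all
        return name, []
    specs = [re.sub('[()]', '', s) for s in parts[1].split(',')]
    out = []
    for spec in specs:
        # '([0-9])' splits '>=1.9' into ['>=', '1', '.9']; rejoin the version
        op, first, rest = re.split('([0-9])', spec, maxsplit=1)
        out.append([op, first + rest])
    return name, out

print(parse_pydist_require('six (>=1.9,!=1.10)'))
# ('six', [['>=', '1.9'], ['!=', '1.10']])
```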
try:
packager = subprocess.Popen(
'rpmdev-packager', stdout=subprocess.PIPE).communicate(
)[0].strip()
except OSError:
# Hi John Doe, you should install rpmdevtools
packager = "John Doe <john@doe.com>"
logger.warning("Package rpmdevtools is missing, using default "
"name: {0}.".format(packager))
with utils.c_time_locale():
date_str = time.strftime('%a %b %d %Y', time.gmtime())
encoding = locale.getpreferredencoding()
return u'{0} {1}'.format(date_str, packager.decode(encoding)) | def get_changelog_date_packager(self) | Returns part of the changelog entry, containing date and packager. | 6.776869 | 6.021906 | 1.125369 |
with utils.ChangeDir(self.dirname):
sys.path.insert(0, self.dirname)
sys.argv[1:] = self.args
runpy.run_module(self.not_suffixed(self.filename),
run_name='__main__',
alter_sys=True) | def run(self) | Executes the code of the specified module. | 4.525481 | 3.999655 | 1.131468 |
with utils.ChangeDir(self.dirname):
command_list = ['PYTHONPATH=' + main_dir, interpreter,
self.filename] + list(self.args)
try:
proc = Popen(' '.join(command_list), stdout=PIPE, stderr=PIPE,
shell=True)
stream_data = proc.communicate()
except Exception as e:
logger.error(
"Error {0} while executing extract_dist command.".format(e))
raise ExtractionError
stream_data = [utils.console_to_str(s) for s in stream_data]
if proc.returncode:
logger.error(
"Subprocess failed, stdout: {0[0]}, stderr: {0[1]}".format(
stream_data))
self._result = json.loads(stream_data[0].split(
"extracted json data:\n")[-1].split("\n")[0]) | def run(self, interpreter) | Executes the code of the specified module. Deserializes captured
json data. | 4.65489 | 4.51334 | 1.031363 |
def wrapper(*args, **kw):
return list(fn(*args, **kw))
return wrapper | def generator_to_list(fn) | This decorator is for flat_list function.
It converts the returned generator to a list. | 2.633176 | 2.569935 | 1.024608 |
if isinstance(lst, list):
for item in lst:
for i in flat_list(item):
yield i
else:
yield lst | def flat_list(lst) | This function flattens the given nested list.
Argument:
nested list
Returns:
flat list | 2.272594 | 3.058479 | 0.743047 |
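Since the decorator and the generator are split across two rows, here is the flattening on its own:

```python
def flat_list(lst):
    """Recursively yield the leaves of an arbitrarily nested list."""
    if isinstance(lst, list):
        for item in lst:
            for leaf in flat_list(item):
                yield leaf
    else:
        yield lst  # non-list values are leaves

print(list(flat_list([1, [2, [3, 4]], 5])))  # [1, 2, 3, 4, 5]
```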
file_cls = None
# only catches ".gz", even from ".tar.gz"
if self.is_tar:
file_cls = TarFile
elif self.is_zip:
file_cls = ZipFile
else:
logger.info("Couldn't recognize archive suffix: {0}.".format(
self.suffix))
return file_cls | def extractor_cls(self) | Returns the class that can read this archive based on archive suffix.
Returns:
Class that can read this archive or None if no such exists. | 6.269431 | 5.622755 | 1.115011 |
if self.handle:
for member in self.handle.getmembers():
if (full_path and member.name == name) or (
not full_path and os.path.basename(
member.name) == name):
extracted = self.handle.extractfile(member)
return extracted.read().decode(
locale.getpreferredencoding())
return None | def get_content_of_file(self, name, full_path=False) | Returns content of file from archive.
If full_path is set to False and two files with given name exist,
content of one is returned (it is not specified which one that is).
If set to True, returns content of exactly that file.
Args:
name: name of the file to get content of
Returns:
Content of the file with given name or None, if no such. | 3.363004 | 3.243438 | 1.036864 |
if self.handle:
for member in self.handle.getmembers():
if (full_path and member.name == name or
not full_path and os.path.basename(
member.name) == name):
# TODO handle KeyError exception
self.handle.extract(member, path=directory) | def extract_file(self, name, full_path=False, directory=".") | Extract a member from the archive to the specified working directory.
Behaviour of name and pull_path is the same as in function
get_content_of_file. | 4.907257 | 3.830194 | 1.281203 |
if self.handle:
self.handle.extractall(path=directory, members=members) | def extract_all(self, directory=".", members=None) | Extract all members from the archive to the specified working
directory. | 5.2802 | 5.499587 | 0.960109 |
if not isinstance(suffixes, list):
suffixes = [suffixes]
if self.handle:
for member in self.handle.getmembers():
if os.path.splitext(member.name)[1] in suffixes:
return True
else:
# hack for .zip files, where directories are not returned
# themselves, therefore we can't find e.g. .egg-info
for suffix in suffixes:
if '{0}/'.format(suffix) in member.name:
return True
return False | def has_file_with_suffix(self, suffixes) | Finds out if there is a file with one of suffixes in the archive.
Args:
suffixes: list of suffixes or single suffix to look for
Returns:
True if there is at least one file with at least one given suffix
in the archive, False otherwise (or archive can't be opened) | 4.217182 | 4.283973 | 0.984409 |
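The suffix scan is easy to exercise against an in-memory tarball; a sketch covering the tar branch only (not the zip directory workaround):

```python
import io
import os
import tarfile

# build a small tar archive in memory with one .py and one .txt member
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
    for name in ('pkg/mod.py', 'pkg/notes.txt'):
        data = b'content'
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
buf.seek(0)

def has_file_with_suffix(handle, suffixes):
    """Tar-only sketch: does any member carry one of the given suffixes?"""
    return any(os.path.splitext(m.name)[1] in suffixes
               for m in handle.getmembers())

with tarfile.open(fileobj=buf) as tar:
    found_py = has_file_with_suffix(tar, ['.py'])
    found_egg = has_file_with_suffix(tar, ['.egg-info'])
print(found_py, found_egg)  # True False
```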
try:
if ignorecase:
compiled_re = re.compile(file_re, re.I)
else:
compiled_re = re.compile(file_re)
except sre_constants.error:
logger.error("Failed to compile regex: {}.".format(file_re))
return []
found = []
if self.handle:
for member in self.handle.getmembers():
if isinstance(member, TarInfo) and member.isdir():
pass # for TarInfo files, filter out directories
elif (full_path and compiled_re.search(member.name)) or (
not full_path and compiled_re.search(os.path.basename(
member.name))):
found.append(member.name)
return found | def get_files_re(self, file_re, full_path=False, ignorecase=False) | Finds all files that match file_re and returns their list.
Doesn't return directories, only files.
Args:
file_re: raw string to match files against (gets compiled into re)
full_path: whether to match against full path inside the archive
or just the filenames
ignorecase: whether to ignore case when using the given re
Returns:
List of full paths of files inside the archive that match the given
file_re. | 2.960821 | 2.762009 | 1.071981 |
if ignorecase:
compiled_re = re.compile(directory_re, re.I)
else:
compiled_re = re.compile(directory_re)
found = set()
if self.handle:
for member in self.handle.getmembers():
# zipfiles only list directories => have to work around that
if isinstance(member, ZipInfo):
to_match = os.path.dirname(member.name)
# tarfiles => only match directories
elif isinstance(member, TarInfo) and member.isdir():
to_match = member.name
else:
to_match = None
if to_match:
if ((full_path and compiled_re.search(to_match)) or (
not full_path and compiled_re.search(
os.path.basename(to_match)))):
found.add(to_match)
return list(found) | def get_directories_re(
self,
directory_re,
full_path=False,
ignorecase=False) | Same as get_files_re, but for directories | 3.061517 | 3.080424 | 0.993862 |
if self.handle:
return os.path.commonprefix(self.handle.getnames()).rstrip('/') | def top_directory(self) | Return the name of the archive topmost directory. | 9.655751 | 6.979515 | 1.383442 |
for meta_file in ("metadata.json", "pydist.json"):
try:
return json.loads(self.get_content_of_file(meta_file))
except TypeError as err:
logger.warning(
'Could not extract metadata from {}.'
' Error: {}'.format(meta_file, err))
sys.exit(
'Unable to extract package metadata from .whl archive. '
'This might be caused by an old .whl format version. '
'You may ask the upstream to upload fresh wheels created '
'with wheel >= 0.17.0 or to upload an sdist as well to '
'workaround this problem.') | def json_wheel_metadata(self) | Simple getter that get content of metadata.json file in .whl archive
Returns:
metadata from metadata.json or pydist.json in json format | 7.38764 | 6.670705 | 1.107475 |
modules = []
scripts = []
record_content = self.get_content_of_file('RECORD')
if record_content:
lines = record_content.splitlines()
for line in lines:
if 'dist-info' in line or '/' not in line:
continue
elif '.data/scripts' in line:
script = line.split(',', 1)[0]
# strip Name.version.data/scripts/
scripts.append(re.sub('.*/.*/', '', script))
else:
# strip everything from the first occurrence of a slash
modules.append(re.sub('/.*', '', line))
return {'modules': sorted(set(modules)),
'scripts': sorted(set(scripts))} | def record(self) | Getter that get content of RECORD file in .whl archive
Returns:
dict with keys `modules` and `scripts` | 5.281305 | 4.023477 | 1.312622 |
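The RECORD classification above can be demonstrated against a handful of fabricated wheel RECORD lines (file names and hashes are made up; the helper name is mine):

```python
import re

def split_record(lines):
    """Classify RECORD entries into top-level modules and scripts."""
    modules, scripts = set(), set()
    for line in lines:
        if 'dist-info' in line or '/' not in line:
            continue  # metadata entries and top-level files
        if '.data/scripts' in line:
            script = line.split(',', 1)[0]
            scripts.add(re.sub('.*/.*/', '', script))  # keep the basename
        else:
            modules.add(re.sub('/.*', '', line))       # keep the top package
    return {'modules': sorted(modules), 'scripts': sorted(scripts)}

record = [
    'spam/__init__.py,sha256=abc,120',
    'spam/utils.py,sha256=def,88',
    'spam-1.0.dist-info/METADATA,sha256=ghi,300',
    'spam-1.0.data/scripts/spam-cli,sha256=jkl,50',
]
print(split_record(record))
# {'modules': ['spam'], 'scripts': ['spam-cli']}
```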
regexp = re.compile(r'^python(\d*|)-(.*)')
auto_provides_regexp = re.compile(r'^python(\d*|)dist(.*)')
if (not version or version == cls.get_default_py_version() and
not default_number):
found = regexp.search(name)
# second check is to avoid renaming of python2-devel to
# python-devel
if found and found.group(2) != 'devel':
if 'epel' not in cls.template:
return 'python-{0}'.format(regexp.search(name).group(2))
return name
versioned_name = name
if version:
if regexp.search(name):
versioned_name = re.sub(r'^python(\d*|)-', 'python{0}-'.format(
version), name)
elif auto_provides_regexp.search(name):
versioned_name = re.sub(
r'^python(\d*|)dist', 'python{0}dist'.format(
version), name)
else:
versioned_name = 'python{0}-{1}'.format(version, name)
if ('epel' in cls.template and version !=
cls.get_default_py_version()):
versioned_name = versioned_name.replace('{0}'.format(
version), '%{{python{0}_pkgversion}}'.format(version))
return versioned_name | def rpm_versioned_name(cls, name, version, default_number=False) | Properly versions the name.
For example:
rpm_versioned_name('python-foo', '26') will return python26-foo
rpm_versioned_name('pyfoo, '3') will return python3-pyfoo
If version is same as settings.DEFAULT_PYTHON_VERSION, no change
is done.
Args:
name: name to version
version: version or None
Returns:
Versioned name or the original name if given version is None. | 3.539145 | 3.484045 | 1.015815 |
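The versioning rules from the docstring, minus the EPEL macro handling and the `pythonNdist` provides case, reduce to two regex cases; a simplified sketch:

```python
import re

def rpm_versioned_name(name, version):
    """Insert a Python version into an already 'python-' prefixed name,
    or prepend 'pythonN-' to a bare name (simplified Fedora-style sketch)."""
    if not version:
        return name
    if re.search(r'^python(\d*|)-', name):
        return re.sub(r'^python(\d*|)-', 'python{0}-'.format(version), name)
    return 'python{0}-{1}'.format(version, name)

print(rpm_versioned_name('python-foo', '3'))  # python3-foo
print(rpm_versioned_name('pyfoo', '3'))       # python3-pyfoo
```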
logger.debug("Converting name: {0} to rpm name, version: {1}.".format(
name, python_version))
rpmized_name = self.base_name(name)
rpmized_name = 'python-{0}'.format(rpmized_name)
if self.distro == 'mageia':
rpmized_name = rpmized_name.lower()
logger.debug('Rpmized name of {0}: {1}.'.format(name, rpmized_name))
return NameConvertor.rpm_versioned_name(rpmized_name, python_version) | def rpm_name(self, name, python_version=None, pkg_name=False) | Returns name of the package converted to (possibly) correct package
name according to Packaging Guidelines.
Args:
name: name to convert
python_version: python version for which to retrieve the name of
the package
pkg_name: flag to perform conversion of rpm package name,
present in this class just for API compatibility reason
Returns:
Converted name of the package, that should be in line with
Fedora Packaging Guidelines. If for_python is not None,
the returned name is in form python%(version)s-%(name)s | 4.012046 | 4.020956 | 0.997784 |
base_name = name.replace('.', "-")
# remove python prefix if present
found_prefix = self.reg_start.search(name)
if found_prefix:
base_name = found_prefix.group(2)
# remove -pythonXY like suffix if present
found_end = self.reg_end.search(name.lower())
if found_end:
base_name = found_end.group(1)
return base_name | def base_name(self, name) | Removes any python prefixes or suffixes from name if present. | 4.239743 | 3.923538 | 1.080592 |
if not isinstance(other, NameVariants):
raise TypeError("A NameVariants instance can only be merged "
"with another instance of the same class")
for key in self.variants:
self.variants[key] = self.variants[key] or other.variants[key]
return self | def merge(self, other) | Merges this object with another NameVariants object; unset values
in self.variants are replaced by values from the other object. | 6.548263 | 4.320083 | 1.515773 |
if pkg_name:
return super(DandifiedNameConvertor, self).rpm_name(
name, python_version)
original_name = name
converted = super(DandifiedNameConvertor, self).rpm_name(
name, python_version)
python_query = self.query.filter(name__substr=[
'python', 'py', original_name, canonical_form(original_name)])
if converted in [pkg.name for pkg in python_query]:
logger.debug("Converted name exists")
return converted
logger.debug("Converted name not found, searches for correct form")
not_versioned_name = NameVariants(self.base_name(original_name), '')
versioned_name = NameVariants(self.base_name(original_name),
python_version)
if self.base_name(original_name).startswith("py"):
nonpy_name = NameVariants(self.base_name(
original_name)[2:], python_version)
for pkg in python_query:
versioned_name.find_match(pkg.name)
not_versioned_name.find_match(pkg.name)
if 'nonpy_name' in locals():
nonpy_name.find_match(pkg.name)
if 'nonpy_name' in locals():
versioned_name = versioned_name.merge(nonpy_name)
correct_form = versioned_name.merge(not_versioned_name).best_matching
logger.debug("Most likely correct form of the name {0}.".format(
correct_form))
return correct_form or converted | def rpm_name(self, name, python_version=None, pkg_name=False) | Checks if the name converted using the superclass rpm_name method matches
the name of a package in the query. Searches for the correct name if it doesn't.
Args:
name: name to convert
python_version: python version for which to retrieve the name of
the package
pkg_name: flag to perform conversion of rpm package name
(foo -> python-foo) | 3.992516 | 3.828466 | 1.04285 |
if self.template == "epel6.spec":
# if user requested version greater than 2, writes error message
# and exits
requested_versions = self.python_versions
if self.base_python_version:
requested_versions += [self.base_python_version]
if any(int(ver[0]) > 2 for ver in requested_versions):
sys.stderr.write(
"Invalid version, major number of python version for "
"EPEL6 spec file must not be greater than 2.\n")
sys.exit(1)
# if version greater than 2 were extracted it is removed
data.python_versions = [
ver for ver in data.python_versions if not int(ver[0]) > 2]
# Set python versions from default values in settings.
base_version, additional_versions = (
self.template_base_py_ver, self.template_py_vers)
# Sync default values with extracted versions from PyPI classifiers.
if data.python_versions:
if base_version not in data.python_versions:
base_version = data.python_versions[0]
additional_versions = [
v for v in additional_versions if v in data.python_versions]
# Override default values with those set from command line if any.
if self.base_python_version:
base_version = self.base_python_version
if self.python_versions:
additional_versions = [
v for v in self.python_versions if v != base_version]
data.base_python_version = base_version
data.python_versions = additional_versions | def merge_versions(self, data) | Merges python versions specified in command line options with
extracted versions; checks that no version is greater than 2 if the EPEL6
template will be used. The attributes base_python_version and
python_versions contain values specified by command line options or
default values; data.python_versions contains extracted data. | 4.114692 | 3.324923 | 1.23753 |
# move file into position
try:
local_file = self.getter.get()
except (exceptions.NoSuchPackageException, OSError) as e:
logger.error(
"Failed and exiting:", exc_info=True)
logger.info("Pyp2rpm failed. See log for more info.")
sys.exit(e)
# save name and version from the file (rewrite if set previously)
self.name, self.version = self.getter.get_name_version()
self.local_file = local_file
data = self.metadata_extractor.extract_data(self.client)
logger.debug("Extracted metadata:")
logger.debug(pprint.pformat(data.data))
self.merge_versions(data)
jinja_env = jinja2.Environment(loader=jinja2.ChoiceLoader([
jinja2.FileSystemLoader(['/']),
jinja2.PackageLoader('pyp2rpm', 'templates'), ]))
for filter in filters.__all__:
jinja_env.filters[filter.__name__] = filter
try:
jinja_template = jinja_env.get_template(
os.path.abspath(self.template))
except jinja2.exceptions.TemplateNotFound:
# absolute path not found => search in default template dir
logger.warning('Template: {0} was not found at {1}, using default '
'template dir.'.format(
self.template, os.path.abspath(self.template)))
jinja_template = jinja_env.get_template(self.template)
logger.info('Using default template: {0}.'.format(self.template))
ret = jinja_template.render(data=data, name_convertor=name_convertor)
return re.sub(r'[ \t]+\n', "\n", ret) | def convert(self) | Returns RPM SPECFILE.
Returns:
rendered RPM SPECFILE. | 4.248132 | 4.275916 | 0.993502 |
if not hasattr(self, '_getter'):
if not self.pypi:
self._getter = package_getters.LocalFileGetter(
self.package,
self.save_dir)
else:
logger.debug(
'{0} does not exist as local file trying PyPI.'.format(
self.package))
self._getter = package_getters.PypiDownloader(
self.client,
self.package,
self.version,
self.save_dir)
return self._getter | def getter(self) | Returns an instance of proper PackageGetter subclass. Always
returns the same instance.
Returns:
Instance of the proper PackageGetter subclass according to
provided argument.
Raises:
NoSuchSourceException if source to get the package from is unknown
NoSuchPackageException if the package is unknown on PyPI | 4.380385 | 4.059115 | 1.079148 |
if not hasattr(self, '_local_file'):
raise AttributeError("local_file attribute must be set before "
"calling metadata_extractor")
if not hasattr(self, '_metadata_extractor'):
if self.local_file.endswith('.whl'):
logger.info("Getting metadata from wheel using "
"WheelMetadataExtractor.")
extractor_cls = metadata_extractors.WheelMetadataExtractor
else:
logger.info("Getting metadata from setup.py using "
"SetupPyMetadataExtractor.")
extractor_cls = metadata_extractors.SetupPyMetadataExtractor
base_python_version = (
self.base_python_version or self.template_base_py_ver)
self._metadata_extractor = extractor_cls(
self.local_file,
self.name,
self.name_convertor,
self.version,
self.rpm_name,
self.venv,
base_python_version)
return self._metadata_extractor | def metadata_extractor(self) | Returns an instance of proper MetadataExtractor subclass.
Always returns the same instance.
Returns:
The proper MetadataExtractor subclass according to local file
suffix. | 3.381762 | 3.293938 | 1.026662 |
transport = None
if self.proxy:
proxyhandler = urllib.ProxyHandler({"http": self.proxy})
opener = urllib.build_opener(proxyhandler)
urllib.install_opener(opener)
transport = ProxyTransport()
if not hasattr(self, '_client'):
if self.pypi:
if self.proxy:
logger.info('Using provided proxy: {0}.'.format(
self.proxy))
self._client = xmlrpclib.ServerProxy(settings.PYPI_URL,
transport=transport)
self._client_set = True
else:
self._client = None
return self._client | def client(self) | XMLRPC client for PyPI. Always returns the same instance.
If the package is provided as a path to compressed source file,
PyPI will not be used and the client will not be instantiated.
Returns:
XMLRPC client for PyPI or None. | 3.690442 | 3.410562 | 1.082063 |
memory = {}
@functools.wraps(func)
def memoized(*args):
if args not in memory:
memory[args] = func(*args)
return memory[args]
return memoized | def memoize_by_args(func) | Memoizes return value of a func based on args. | 2.21117 | 2.152606 | 1.027206 |
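A quick check of the decorator above; the calls list is purely illustrative, added here to make the caching visible:

```python
import functools


def memoize_by_args(func):
    """Cache func's return value keyed by its positional arguments."""
    memory = {}

    @functools.wraps(func)
    def memoized(*args):
        if args not in memory:
            memory[args] = func(*args)
        return memory[args]
    return memoized


calls = []


@memoize_by_args
def square(x):
    calls.append(x)  # record real invocations to show caching
    return x * x


results = [square(3), square(3), square(4)]
print(results)  # [9, 9, 16]
print(calls)    # [3, 4] -- the body ran once per distinct argument
```

Since the cache is keyed on the args tuple, only hashable positional arguments are supported; keyword arguments would bypass the key entirely.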