# ConeFlatGeometry.frommatrix
def frommatrix(cls, apart, dpart, src_radius, det_radius, init_matrix,
pitch=0, **kwargs):
"""Create an instance of `ConeFlatGeometry` using a matrix.
This alternative constructor uses a matrix to rotate and
translate the default configuration. It is most useful when
the transformation to be applied is already given as a matrix.
Parameters
----------
apart : 1-dim. `RectPartition`
Partition of the parameter interval.
dpart : 2-dim. `RectPartition`
Partition of the detector parameter set.
src_radius : nonnegative float
Radius of the source circle.
det_radius : nonnegative float
Radius of the detector circle. Must be nonzero if ``src_radius``
is zero.
init_matrix : `array_like`, shape ``(3, 3)`` or ``(3, 4)``
Transformation matrix whose left ``(3, 3)`` block is multiplied
with the default ``det_pos_init`` and ``det_axes_init`` to
determine the new vectors. If present, the fourth column acts
as a translation after the initial transformation.
The resulting ``det_axes_init`` will be normalized.
pitch : float, optional
Constant distance along the rotation axis that a point on the
helix traverses when increasing the angle parameter by
``2 * pi``. The default case ``pitch=0`` results in a circular
cone beam geometry.
kwargs :
Further keyword arguments passed to the class constructor.
Returns
-------
geometry : `ConeFlatGeometry`
Examples
--------
Map unit vectors ``e_y -> e_z`` and ``e_z -> -e_y``, keeping the
right-handedness:
>>> apart = odl.uniform_partition(0, 2 * np.pi, 10)
>>> dpart = odl.uniform_partition([-1, -1], [1, 1], (20, 20))
>>> matrix = np.array([[1, 0, 0],
... [0, 0, -1],
... [0, 1, 0]])
>>> geom = ConeFlatGeometry.frommatrix(
... apart, dpart, src_radius=5, det_radius=10, pitch=2,
... init_matrix=matrix)
>>> geom.axis
array([ 0., -1., 0.])
>>> geom.src_to_det_init
array([ 0., 0., 1.])
>>> geom.det_axes_init
array([[ 1., 0., 0.],
[ 0., -1., 0.]])
Adding a translation with a fourth matrix column:
>>> matrix = np.array([[0, 0, -1, 0],
... [0, 1, 0, 1],
... [1, 0, 0, 1]])
>>> geom = ConeFlatGeometry.frommatrix(
... apart, dpart, src_radius=5, det_radius=10, pitch=2,
... init_matrix=matrix)
>>> geom.translation
array([ 0., 1., 1.])
>>> geom.det_refpoint(0) # (0, 10, 0) + (0, 1, 1)
array([ 0., 11., 1.])
"""
for key in ('axis', 'src_to_det_init', 'det_axes_init', 'translation'):
if key in kwargs:
raise TypeError('got unknown keyword argument {!r}'
''.format(key))
# Get transformation and translation parts from `init_matrix`
init_matrix = np.asarray(init_matrix, dtype=float)
if init_matrix.shape not in ((3, 3), (3, 4)):
raise ValueError('`init_matrix` must have shape (3, 3) or (3, 4), '
'got array with shape {}'
''.format(init_matrix.shape))
trafo_matrix = init_matrix[:, :3]
translation = init_matrix[:, 3:].squeeze()
# Transform the default vectors
default_axis = cls._default_config['axis']
default_src_to_det_init = cls._default_config['src_to_det_init']
default_det_axes_init = cls._default_config['det_axes_init']
vecs_to_transform = (default_src_to_det_init,) + default_det_axes_init
transformed_vecs = transform_system(
default_axis, None, vecs_to_transform, matrix=trafo_matrix)
# Use the standard constructor with these vectors
axis, src_to_det, det_axis_0, det_axis_1 = transformed_vecs
if translation.size != 0:
kwargs['translation'] = translation
return cls(apart, dpart, src_radius, det_radius, pitch, axis,
src_to_det_init=src_to_det,
det_axes_init=[det_axis_0, det_axis_1],
**kwargs)
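The splitting of `init_matrix` into a rotation block and a translation column can be sketched in plain NumPy. This is a standalone illustration, not the ODL implementation; `split_and_transform` and the sample vector are assumptions for the demo.

```python
import numpy as np

def split_and_transform(init_matrix, vectors):
    """Split a (3, 3) or (3, 4) matrix into a rotation block and a
    translation column, apply the rotation to each vector and
    normalize the results, mirroring the frommatrix logic."""
    init_matrix = np.asarray(init_matrix, dtype=float)
    if init_matrix.shape not in ((3, 3), (3, 4)):
        raise ValueError('expected shape (3, 3) or (3, 4)')
    trafo = init_matrix[:, :3]
    translation = init_matrix[:, 3:].squeeze()
    transformed = [trafo.dot(v) for v in vectors]
    normalized = [v / np.linalg.norm(v) for v in transformed]
    return normalized, translation

# Map e_y -> e_z and e_z -> -e_y, as in the docstring example above;
# e_z is then sent to -e_y.
matrix = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])
vecs, transl = split_and_transform(matrix, [np.array([0.0, 0.0, 1.0])])
```

With no fourth column, the translation part comes back empty, which is why the code above only sets the `translation` keyword argument when `translation.size != 0`.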
def linear_deform(template, displacement, out=None):
"""Linearized deformation of a template with a displacement field.
The function maps a given template ``I`` and a given displacement
field ``v`` to the new function ``x --> I(x + v(x))``.
Parameters
----------
template : `DiscreteLpElement`
Template to be deformed by a displacement field.
displacement : element of power space of ``template.space``
Vector field (displacement field) used to deform the
template.
out : `numpy.ndarray`, optional
Array to which the function values of the deformed template
are written. It must have the same shape as ``template`` and
a data type compatible with ``template.dtype``.
Returns
-------
deformed_template : `numpy.ndarray`
Function values of the deformed template. If ``out`` was given,
the returned object is a reference to it.
Examples
--------
Create a simple 1D template to initialize the operator and
apply it to a displacement field. Where the displacement is zero,
the output value is the same as the input value.
At the 4th point, the value is taken from 0.2 (one cell) to the
left, i.e. 1.0.
>>> space = odl.uniform_discr(0, 1, 5)
>>> disp_field_space = space.tangent_bundle
>>> template = space.element([0, 0, 1, 0, 0])
>>> displacement_field = disp_field_space.element([[0, 0, 0, -0.2, 0]])
>>> linear_deform(template, displacement_field)
array([ 0., 0., 1., 1., 0.])
The result depends on the chosen interpolation. With 'linear'
interpolation and an offset equal to half the distance between two
points, 0.1, one gets the mean of the values.
>>> space = odl.uniform_discr(0, 1, 5, interp='linear')
>>> disp_field_space = space.tangent_bundle
>>> template = space.element([0, 0, 1, 0, 0])
>>> displacement_field = disp_field_space.element([[0, 0, 0, -0.1, 0]])
>>> linear_deform(template, displacement_field)
array([ 0. , 0. , 1. , 0.5, 0. ])
"""
image_pts = template.space.points()
for i, vi in enumerate(displacement):
image_pts[:, i] += vi.asarray().ravel()
values = template.interpolation(image_pts.T, out=out, bounds_check=False)
return values.reshape(template.space.shape)
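For intuition, the same 1D deformation x -> I(x + v(x)) can be reproduced with plain NumPy and `np.interp`. This is a sketch, not ODL code: it assumes a uniform grid of 5 cell midpoints on [0, 1] and linear interpolation, mimicking the second docstring example.

```python
import numpy as np

def linear_deform_1d(values, displacement, grid):
    """Evaluate the template at the deformed points x + v(x),
    using linear interpolation on a 1D grid."""
    return np.interp(grid + displacement, grid, values)

# Midpoints of a uniform partition of [0, 1] into 5 cells
grid = np.linspace(0.1, 0.9, 5)
template = np.array([0.0, 0.0, 1.0, 0.0, 0.0])

# A displacement of -0.1 (half a cell) at the 4th point samples
# halfway between the neighbors, giving the mean of 1.0 and 0.0.
disp = np.array([0.0, 0.0, 0.0, -0.1, 0.0])
deformed = linear_deform_1d(template, disp, grid)
```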
# LinDeformFixedTempl.derivative
def derivative(self, displacement):
"""Derivative of the operator at ``displacement``.
Parameters
----------
displacement : `domain` `element-like`
Point at which the derivative is computed.
Returns
-------
derivative : `PointwiseInner`
The derivative evaluated at ``displacement``.
"""
# To implement the complex case we need to be able to embed the real
# vector field space into the range of the gradient. Issue #59.
if not self.range.is_real:
raise NotImplementedError('derivative not implemented for complex '
'spaces.')
displacement = self.domain.element(displacement)
# TODO: allow users to select what method to use here.
grad = Gradient(domain=self.range, method='central',
pad_mode='symmetric')
grad_templ = grad(self.template)
def_grad = self.domain.element(
[linear_deform(gf, displacement) for gf in grad_templ])
return PointwiseInner(self.domain, def_grad)
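The formula this derivative relies on, d/dt I(x + v + t u)|_{t=0} = grad I(x + v(x)) · u(x), can be sanity-checked with a smooth 1D template and finite differences. A standalone NumPy sketch, not using ODL:

```python
import numpy as np

I = np.sin    # smooth 1D template
dI = np.cos   # its exact gradient

x = np.linspace(0.0, 1.0, 50)
v = 0.1 * x                   # displacement field
u = 0.05 * np.ones_like(x)    # perturbation direction

# Analytic directional derivative: gradient of the template,
# evaluated at the deformed points, times the perturbation
analytic = dI(x + v) * u

# Central finite-difference approximation of d/dt I(x + v + t*u)
eps = 1e-6
fd = (I(x + v + eps * u) - I(x + v - eps * u)) / (2 * eps)
```

The two agree to high accuracy, which is exactly the pointwise inner product `PointwiseInner(self.domain, def_grad)` computes in the operator above.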
# LinDeformFixedDisp.adjoint
def adjoint(self):
"""Adjoint of the linear operator.
Note that this implementation uses an approximation that is only
valid for small displacements.
"""
# TODO allow users to select what method to use here.
div_op = Divergence(domain=self.displacement.space, method='forward',
pad_mode='symmetric')
jacobian_det = self.domain.element(
np.exp(-div_op(self.displacement)))
return jacobian_det * self.inverse
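The small-displacement approximation behind this adjoint, exp(-div v) ≈ 1/det(Id + Dv), can be checked numerically in 1D, where the Jacobian determinant of x -> x + v(x) is simply 1 + v'(x). A standalone sketch:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
v = 0.01 * np.sin(2 * np.pi * x)                 # small displacement
dv = 0.01 * 2 * np.pi * np.cos(2 * np.pi * x)    # exact v'(x)

exact = 1.0 / (1.0 + dv)   # true inverse Jacobian determinant
approx = np.exp(-dv)       # approximation used by the adjoint
```

Both expansions agree to first order in v', so the error is O(|v'|^2), which is why the docstring warns that the approximation is only valid for small displacements.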
def cylinders_from_ellipses(ellipses):
"""Create 3d cylinders from ellipses."""
ellipses = np.asarray(ellipses)
ellipsoids = np.zeros((ellipses.shape[0], 10))
ellipsoids[:, [0, 1, 2, 4, 5, 7]] = ellipses
ellipsoids[:, 3] = 100000.0
return ellipsoids
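The column mapping — a 6-parameter 2D ellipse becomes a 10-parameter ellipsoid whose third half-axis is made huge so it is effectively infinite along z — can be illustrated directly. A standalone sketch assuming ODL's 10-parameter ellipsoid convention (value, three half-axes, three center coordinates, three rotation angles):

```python
import numpy as np

def cylinders_from_ellipses(ellipses):
    """Extend 2D ellipses (value, a, b, x, y, theta) to 3D
    ellipsoids that are effectively cylinders along the z axis."""
    ellipses = np.asarray(ellipses)
    ellipsoids = np.zeros((ellipses.shape[0], 10))
    # value -> 0, half-axes a, b -> 1, 2, center x, y -> 4, 5,
    # in-plane rotation -> 7
    ellipsoids[:, [0, 1, 2, 4, 5, 7]] = ellipses
    ellipsoids[:, 3] = 100000.0   # z half-axis: effectively infinite
    return ellipsoids

cyls = cylinders_from_ellipses([[1.0, 0.5, 0.3, 0.1, -0.1, 0.0]])
```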
# ElapsedMixIn.elapsed
def elapsed(self):
'''
Returns elapsed crawl time as a float in seconds.
This metric includes all the time that a site was in active rotation,
including any time it spent waiting for its turn to be brozzled.
In contrast `Site.active_brozzling_time` only counts time when a
brozzler worker claimed the site and was actively brozzling it.
'''
dt = 0
for ss in self.starts_and_stops[:-1]:
dt += (ss['stop'] - ss['start']).total_seconds()
ss = self.starts_and_stops[-1]
if ss['stop']:
dt += (ss['stop'] - ss['start']).total_seconds()
else: # crawl is active
dt += (doublethink.utcnow() - ss['start']).total_seconds()
return dt
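The interval bookkeeping reduces to summing closed intervals and counting an open final interval up to "now". A standalone sketch with plain `datetime` objects and hypothetical timestamps:

```python
import datetime

def elapsed(starts_and_stops, now):
    """Sum the durations of all intervals; an open last interval
    (stop is None) is counted up to `now`, i.e. the crawl is active."""
    dt = 0.0
    for ss in starts_and_stops[:-1]:
        dt += (ss['stop'] - ss['start']).total_seconds()
    last = starts_and_stops[-1]
    end = last['stop'] if last['stop'] else now
    dt += (end - last['start']).total_seconds()
    return dt

t0 = datetime.datetime(2024, 1, 1, 0, 0, 0)
intervals = [
    # one closed 60-second interval
    {'start': t0, 'stop': t0 + datetime.timedelta(seconds=60)},
    # one open interval that started 30 seconds before "now"
    {'start': t0 + datetime.timedelta(seconds=120), 'stop': None},
]
now = t0 + datetime.timedelta(seconds=150)
```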
# RethinkDbFrontier.enforce_time_limit
def enforce_time_limit(self, site):
'''
Raises `brozzler.ReachedTimeLimit` if appropriate.
'''
if (site.time_limit and site.time_limit > 0
and site.elapsed() > site.time_limit):
self.logger.debug(
"site FINISHED_TIME_LIMIT! time_limit=%s "
"elapsed=%s %s", site.time_limit, site.elapsed(), site)
raise brozzler.ReachedTimeLimit
# RethinkDbFrontier.honor_stop_request
def honor_stop_request(self, site):
"""Raises brozzler.CrawlStopped if stop has been requested."""
site.refresh()
if (site.stop_requested
and site.stop_requested <= doublethink.utcnow()):
self.logger.info("stop requested for site %s", site.id)
raise brozzler.CrawlStopped
if site.job_id:
job = brozzler.Job.load(self.rr, site.job_id)
if (job and job.stop_requested
and job.stop_requested <= doublethink.utcnow()):
self.logger.info("stop requested for job %s", site.job_id)
raise brozzler.CrawlStopped
# RethinkDbFrontier._maybe_finish_job
def _maybe_finish_job(self, job_id):
"""Returns True if job is finished."""
job = brozzler.Job.load(self.rr, job_id)
if not job:
return False
if job.status.startswith("FINISH"):
self.logger.warn("%s is already %s", job, job.status)
return True
results = self.rr.table("sites").get_all(job_id, index="job_id").run()
n = 0
for result in results:
site = brozzler.Site(self.rr, result)
if not site.status.startswith("FINISH"):
results.close()
return False
n += 1
self.logger.info(
"all %s sites finished, job %s is FINISHED!", n, job.id)
job.finish()
job.save()
return True
# RethinkDbFrontier._merge_page
def _merge_page(self, existing_page, fresh_page):
'''
Utility method for merging info from `brozzler.Page` instances
representing the same url but with possibly different metadata.
'''
existing_page.priority += fresh_page.priority
existing_page.hashtags = list(set(
existing_page.hashtags + fresh_page.hashtags))
existing_page.hops_off = min(
existing_page.hops_off, fresh_page.hops_off)
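The merge policy — priorities add, hashtags take the union, hops_off takes the minimum — can be shown with plain dicts standing in for `brozzler.Page` objects. A standalone sketch:

```python
def merge_page(existing, fresh):
    """Merge crawl metadata for two records of the same url,
    mirroring _merge_page above."""
    existing['priority'] += fresh['priority']
    existing['hashtags'] = sorted(set(existing['hashtags'] + fresh['hashtags']))
    existing['hops_off'] = min(existing['hops_off'], fresh['hops_off'])
    return existing

page = merge_page(
    {'priority': 10, 'hashtags': ['a'], 'hops_off': 2},
    {'priority': 5, 'hashtags': ['a', 'b'], 'hops_off': 1})
```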
def behaviors(behaviors_dir=None):
"""Return list of JS behaviors loaded from YAML file.
:param behaviors_dir: Directory containing `behaviors.yaml` and
`js-templates/`. Defaults to brozzler dir.
"""
import os, yaml
global _behaviors
if _behaviors is None:
d = behaviors_dir or os.path.dirname(__file__)
behaviors_yaml = os.path.join(d, 'behaviors.yaml')
with open(behaviors_yaml) as fin:
_behaviors = yaml.safe_load(fin)
return _behaviors
def behavior_script(url, template_parameters=None, behaviors_dir=None):
'''
Returns the javascript behavior string populated with template_parameters.
'''
import re, logging, json
for behavior in behaviors(behaviors_dir=behaviors_dir):
if re.match(behavior['url_regex'], url):
parameters = dict()
if 'default_parameters' in behavior:
parameters.update(behavior['default_parameters'])
if template_parameters:
parameters.update(template_parameters)
template = jinja2_environment(behaviors_dir).get_template(
behavior['behavior_js_template'])
script = template.render(parameters)
logging.info(
'using template=%r populated with parameters=%r for %r',
behavior['behavior_js_template'], json.dumps(parameters), url)
return script
return None
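Behavior selection is a first-match regex scan over the loaded YAML entries, with site-supplied parameters layered over the behavior's defaults. A minimal standalone sketch with hypothetical behavior entries, using `string.Template` in place of Jinja2:

```python
import re
from string import Template

# Hypothetical behaviors, analogous to entries in behaviors.yaml
behaviors = [
    {'url_regex': r'^https?://(www\.)?example\.com/.*$',
     'default_parameters': {'actions': '5'},
     'template': 'doScroll($actions);'},
    {'url_regex': r'^.*$',      # catch-all default behavior
     'default_parameters': {},
     'template': 'defaultBehavior();'},
]

def behavior_script(url, template_parameters=None):
    """Return the first behavior whose url_regex matches, with
    defaults overridden by caller-supplied parameters."""
    for behavior in behaviors:
        if re.match(behavior['url_regex'], url):
            parameters = dict(behavior['default_parameters'])
            parameters.update(template_parameters or {})
            return Template(behavior['template']).substitute(parameters)
    return None

script = behavior_script('https://example.com/page', {'actions': '9'})
```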
def thread_raise(thread, exctype):
'''
Raises or queues the exception `exctype` for the thread `thread`.
See the documentation on the function `thread_exception_gate()` for more
information.
Adapted from http://tomerfiliba.com/recipes/Thread2/ which explains:
"The exception will be raised only when executing python bytecode. If your
thread calls a native/built-in blocking function, the exception will be
raised only when execution returns to the python code."
Raises:
TypeError if `exctype` is not a class
ValueError, SystemError in case of unexpected problems
'''
import ctypes, inspect, threading, logging
if not inspect.isclass(exctype):
raise TypeError(
'cannot raise %s, only exception types can be raised (not '
'instances)' % exctype)
gate = thread_exception_gate(thread)
with gate.lock:
if gate.ok_to_raise.is_set() and thread.is_alive():
gate.ok_to_raise.clear()
logging.info('raising %s in thread %s', exctype, thread)
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
ctypes.c_long(thread.ident), ctypes.py_object(exctype))
if res == 0:
raise ValueError(
'invalid thread id? thread.ident=%s' % thread.ident)
elif res != 1:
# if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread.ident, 0)
raise SystemError('PyThreadState_SetAsyncExc failed')
else:
logging.info('queueing %s for thread %s', exctype, thread)
gate.queue_exception(exctype)
def sleep(duration):
'''
Sleeps for duration seconds in increments of 0.5 seconds.
Use this so that the sleep can be interrupted by thread_raise().
'''
import time
start = time.time()
while True:
elapsed = time.time() - start
if elapsed >= duration:
break
time.sleep(min(duration - elapsed, 0.5))
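Sleeping in 0.5-second chunks matters because an asynchronously raised exception (see `thread_raise` above) is only delivered between Python bytecode instructions; a single long `time.sleep()` would delay it by up to the full duration. A standalone sketch of the same pattern:

```python
import time

def chunked_sleep(duration, chunk=0.5):
    """Sleep for `duration` seconds in increments of at most `chunk`
    seconds, so pending async exceptions surface between chunks."""
    start = time.time()
    while True:
        elapsed = time.time() - start
        if elapsed >= duration:
            break
        time.sleep(min(duration - elapsed, chunk))

t0 = time.time()
chunked_sleep(0.3, chunk=0.1)
waited = time.time() - t0
```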
# BrowserPool.acquire_multi
def acquire_multi(self, n=1):
'''
Returns a list of up to `n` browsers.
Raises:
NoBrowsersAvailable if none available
'''
browsers = []
with self._lock:
if len(self._in_use) >= self.size:
raise NoBrowsersAvailable
while len(self._in_use) < self.size and len(browsers) < n:
browser = self._fresh_browser()
browsers.append(browser)
self._in_use.add(browser)
return browsers
# BrowserPool.acquire
def acquire(self):
'''
Returns an available instance.
Returns:
browser from pool, if available
Raises:
NoBrowsersAvailable if none available
'''
with self._lock:
if len(self._in_use) >= self.size:
raise NoBrowsersAvailable
browser = self._fresh_browser()
self._in_use.add(browser)
return browser
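Browser specifics aside, the pool logic reduces to a size-capped set guarded by one lock. A standalone sketch with a hypothetical `Pool` class and string resources in place of browsers:

```python
import threading

class NoBrowsersAvailable(Exception):
    pass

class Pool:
    """Minimal thread-safe pool: acquire raises when full,
    release returns a resource to availability."""
    def __init__(self, size):
        self.size = size
        self._lock = threading.Lock()
        self._in_use = set()
        self._counter = 0

    def acquire(self):
        with self._lock:
            if len(self._in_use) >= self.size:
                raise NoBrowsersAvailable
            self._counter += 1
            resource = 'browser-%d' % self._counter
            self._in_use.add(resource)
            return resource

    def release(self, resource):
        with self._lock:
            self._in_use.discard(resource)

pool = Pool(size=2)
a = pool.acquire()
b = pool.acquire()
```

Holding the lock for the whole check-then-add keeps concurrent callers from both passing the size check and overshooting the cap.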
# WebsockReceiverThread._on_error
def _on_error(self, websock, e):
'''
Raises BrowsingException in the thread that created this instance.
'''
if isinstance(e, (
websocket.WebSocketConnectionClosedException,
ConnectionResetError)):
self.logger.error('websocket closed, did chrome die?')
else:
self.logger.error(
'exception from websocket receiver thread',
exc_info=1)
brozzler.thread_raise(self.calling_thread, BrowsingException)
# Browser.start
def start(self, **kwargs):
'''
Starts chrome if it's not running.
Args:
**kwargs: arguments for self.chrome.start(...)
'''
if not self.is_running():
self.websock_url = self.chrome.start(**kwargs)
self.websock = websocket.WebSocketApp(self.websock_url)
self.websock_thread = WebsockReceiverThread(
self.websock, name='WebsockThread:%s' % self.chrome.port)
self.websock_thread.start()
self._wait_for(lambda: self.websock_thread.is_open, timeout=30)
# tell browser to send us messages we're interested in
self.send_to_chrome(method='Network.enable')
self.send_to_chrome(method='Page.enable')
self.send_to_chrome(method='Console.enable')
self.send_to_chrome(method='Runtime.enable')
self.send_to_chrome(method='ServiceWorker.enable')
self.send_to_chrome(method='ServiceWorker.setForceUpdateOnPageLoad')
# disable google analytics
self.send_to_chrome(
method='Network.setBlockedURLs',
params={'urls': ['*google-analytics.com/analytics.js',
'*google-analytics.com/ga.js']})
# Browser.stop
def stop(self):
'''
Stops chrome if it's running.
'''
try:
if (self.websock and self.websock.sock
and self.websock.sock.connected):
self.logger.info('shutting down websocket connection')
try:
self.websock.close()
except BaseException as e:
self.logger.error(
'exception closing websocket %s - %s',
self.websock, e)
self.chrome.stop()
if self.websock_thread and (
self.websock_thread != threading.current_thread()):
self.websock_thread.join(timeout=30)
if self.websock_thread.is_alive():
self.logger.error(
'%s still alive 30 seconds after closing %s, will '
'forcefully nudge it again', self.websock_thread,
self.websock)
self.websock.keep_running = False
self.websock_thread.join(timeout=30)
if self.websock_thread.is_alive():
self.logger.critical(
'%s still alive 60 seconds after closing %s',
self.websock_thread, self.websock)
self.websock_url = None
except:
self.logger.error('problem stopping', exc_info=True)
# Browser.browse_page
def browse_page(
self, page_url, extra_headers=None,
user_agent=None, behavior_parameters=None, behaviors_dir=None,
on_request=None, on_response=None,
on_service_worker_version_updated=None, on_screenshot=None,
username=None, password=None, hashtags=None,
skip_extract_outlinks=False, skip_visit_hashtags=False,
skip_youtube_dl=False, page_timeout=300, behavior_timeout=900):
'''
Browses page in browser.
Browser should already be running, i.e. start() should have been
called. Opens the page_url in the browser, runs behaviors, takes a
screenshot, extracts outlinks.
Args:
page_url: url of the page to browse
extra_headers: dict of extra http headers to configure the browser
to send with every request (default None)
user_agent: user agent string, replaces browser default if
supplied (default None)
behavior_parameters: dict of parameters for populating the
javascript behavior template (default None)
behaviors_dir: Directory containing behaviors.yaml and JS templates
(default None loads Brozzler default JS behaviors)
on_request: callback to invoke on every Network.requestWillBeSent
event, takes one argument, the json-decoded message (default
None)
on_response: callback to invoke on every Network.responseReceived
event, takes one argument, the json-decoded message (default
None)
on_service_worker_version_updated: callback to invoke on every
ServiceWorker.workerVersionUpdated event, takes one argument,
the json-decoded message (default None)
on_screenshot: callback to invoke when screenshot is obtained,
takes one argument, the raw jpeg bytes (default None)
# XXX takes two arguments, the url of the page at the time the
# screenshot was taken, and the raw jpeg bytes (default None)
username: username string to use to try logging in if a login form
is found in the page (default None)
password: password string to use to try logging in if a login form
is found in the page (default None)
... (there are more)
Returns:
A tuple (final_page_url, outlinks).
final_page_url: the url in the location bar at the end of the
browse_page cycle, which could be different from the original
page url if the page redirects, javascript has changed the url
in the location bar, etc
outlinks: a list of navigational links extracted from the page
Raises:
brozzler.ProxyError: in case of proxy connection error
BrowsingException: if browsing the page fails in some other way
'''
if not self.is_running():
raise BrowsingException('browser has not been started')
if self.is_browsing:
raise BrowsingException('browser is already busy browsing a page')
self.is_browsing = True
if on_request:
self.websock_thread.on_request = on_request
if on_response:
self.websock_thread.on_response = on_response
if on_service_worker_version_updated:
self.websock_thread.on_service_worker_version_updated = \
on_service_worker_version_updated
try:
with brozzler.thread_accept_exceptions():
self.configure_browser(
extra_headers=extra_headers,
user_agent=user_agent)
self.navigate_to_page(page_url, timeout=page_timeout)
if password:
self.try_login(username, password, timeout=page_timeout)
# if login redirected us, return to page_url
if page_url != self.url().split('#')[0]:
self.logger.debug(
'login navigated away from %s; returning!',
page_url)
self.navigate_to_page(page_url, timeout=page_timeout)
if on_screenshot:
self._try_screenshot(on_screenshot)
behavior_script = brozzler.behavior_script(
page_url, behavior_parameters,
behaviors_dir=behaviors_dir)
self.run_behavior(behavior_script, timeout=behavior_timeout)
if skip_extract_outlinks:
outlinks = []
else:
outlinks = self.extract_outlinks()
if not skip_visit_hashtags:
self.visit_hashtags(self.url(), hashtags, outlinks)
final_page_url = self.url()
return final_page_url, outlinks
except brozzler.ReachedLimit:
# websock_thread has stashed the ReachedLimit exception with
# more information, raise that one
raise self.websock_thread.reached_limit
except websocket.WebSocketConnectionClosedException as e:
self.logger.error('websocket closed, did chrome die?')
raise BrowsingException(e)
finally:
self.is_browsing = False
self.websock_thread.on_request = None
self.websock_thread.on_response = None
# Browser.url
def url(self, timeout=30):
'''
Returns value of document.URL from the browser.
'''
self.websock_thread.expect_result(self._command_id.peek())
msg_id = self.send_to_chrome(
method='Runtime.evaluate',
params={'expression': 'document.URL'})
self._wait_for(
lambda: self.websock_thread.received_result(msg_id),
timeout=timeout)
message = self.websock_thread.pop_result(msg_id)
return message['result']['result']['value']
def brozzler_new_job(argv=None):
'''
Command line utility entry point for queuing a new brozzler job. Takes a
yaml brozzler job configuration file, creates job, sites, and pages objects
in rethinkdb, which brozzler-workers will look at and start crawling.
'''
argv = argv or sys.argv
arg_parser = argparse.ArgumentParser(
prog=os.path.basename(argv[0]),
description='brozzler-new-job - queue new job with brozzler',
formatter_class=BetterArgumentDefaultsHelpFormatter)
arg_parser.add_argument(
'job_conf_file', metavar='JOB_CONF_FILE',
help='brozzler job configuration file in yaml')
add_rethinkdb_options(arg_parser)
add_common_options(arg_parser, argv)
args = arg_parser.parse_args(args=argv[1:])
configure_logging(args)
rr = rethinker(args)
frontier = brozzler.RethinkDbFrontier(rr)
try:
brozzler.new_job_file(frontier, args.job_conf_file)
except brozzler.InvalidJobConf as e:
print('brozzler-new-job: invalid job file:', args.job_conf_file, file=sys.stderr)
print(' ' + yaml.dump(e.errors).rstrip().replace('\n', '\n '), file=sys.stderr)
sys.exit(1)
def brozzler_new_site(argv=None):
'''
Command line utility entry point for queuing a new brozzler site.
Takes a seed url and creates a site and page object in rethinkdb, which
brozzler-workers will look at and start crawling.
'''
argv = argv or sys.argv
arg_parser = argparse.ArgumentParser(
prog=os.path.basename(argv[0]),
description='brozzler-new-site - register site to brozzle',
formatter_class=BetterArgumentDefaultsHelpFormatter)
arg_parser.add_argument('seed', metavar='SEED', help='seed url')
add_rethinkdb_options(arg_parser)
arg_parser.add_argument(
'--time-limit', dest='time_limit', default=None,
help='time limit in seconds for this site')
arg_parser.add_argument(
'--ignore-robots', dest='ignore_robots', action='store_true',
help='ignore robots.txt for this site')
arg_parser.add_argument(
'--warcprox-meta', dest='warcprox_meta',
help=(
'Warcprox-Meta http request header to send with each request; '
'must be a json blob, ignored unless warcprox features are '
'enabled'))
arg_parser.add_argument(
'--behavior-parameters', dest='behavior_parameters',
default=None, help=(
'json blob of parameters to populate the javascript behavior '
'template, e.g. {"parameter_username":"x",'
'"parameter_password":"y"}'))
arg_parser.add_argument(
'--username', dest='username', default=None,
help='use this username to try to log in if a login form is found')
arg_parser.add_argument(
'--password', dest='password', default=None,
help='use this password to try to log in if a login form is found')
add_common_options(arg_parser, argv)
args = arg_parser.parse_args(args=argv[1:])
configure_logging(args)
rr = rethinker(args)
site = brozzler.Site(rr, {
'seed': args.seed,
'time_limit': int(args.time_limit) if args.time_limit else None,
'ignore_robots': args.ignore_robots,
'warcprox_meta': json.loads(
args.warcprox_meta) if args.warcprox_meta else None,
'behavior_parameters': json.loads(
args.behavior_parameters) if args.behavior_parameters else None,
'username': args.username,
'password': args.password})
frontier = brozzler.RethinkDbFrontier(rr)
brozzler.new_site(frontier, site)
def brozzler_list_captures(argv=None):
'''
Handy utility for looking up entries in the rethinkdb "captures" table by
url or sha1.
'''
import urlcanon
argv = argv or sys.argv
arg_parser = argparse.ArgumentParser(
prog=os.path.basename(argv[0]),
formatter_class=BetterArgumentDefaultsHelpFormatter)
arg_parser.add_argument(
'-p', '--prefix', dest='prefix', action='store_true', help=(
'use prefix match for url (n.b. may not work as expected if '
'searching key has query string because canonicalization can '
'reorder query parameters)'))
arg_parser.add_argument(
'--yaml', dest='yaml', action='store_true', help=(
'yaml output (default is json)'))
add_rethinkdb_options(arg_parser)
add_common_options(arg_parser, argv)
arg_parser.add_argument(
'url_or_sha1', metavar='URL_or_SHA1',
help='url or sha1 to look up in captures table')
args = arg_parser.parse_args(args=argv[1:])
configure_logging(args)
rr = rethinker(args)
if args.url_or_sha1[:5] == 'sha1:':
if args.prefix:
logging.warn(
'ignoring supplied --prefix option which does not apply '
'to lookup by sha1')
# assumes it's already base32 (XXX could detect if hex and convert)
sha1base32 = args.url_or_sha1[5:].upper()
reql = rr.table('captures').between(
[sha1base32, r.minval, r.minval],
[sha1base32, r.maxval, r.maxval],
index='sha1_warc_type')
logging.debug('querying rethinkdb: %s', reql)
results = reql.run()
else:
key = urlcanon.semantic(args.url_or_sha1).surt().decode('ascii')
abbr_start_key = key[:150]
if args.prefix:
# surt is necessarily ascii and \x7f is the last ascii character
abbr_end_key = key[:150] + '\x7f'
end_key = key + '\x7f'
else:
abbr_end_key = key[:150]
end_key = key
reql = rr.table('captures').between(
[abbr_start_key, r.minval],
[abbr_end_key, r.maxval],
index='abbr_canon_surt_timestamp', right_bound='closed')
reql = reql.order_by(index='abbr_canon_surt_timestamp')
reql = reql.filter(
lambda capture: (capture['canon_surt'] >= key)
& (capture['canon_surt'] <= end_key))
logging.debug('querying rethinkdb: %s', reql)
results = reql.run()
if args.yaml:
yaml.dump_all(
results, stream=sys.stdout, explicit_start=True,
default_flow_style=False)
else:
for result in results:
print(json.dumps(result, cls=Jsonner, indent=2))
# BrozzlerEasyController._warcprox_opts
def _warcprox_opts(self, args):
'''
Takes args as produced by the argument parser built by
_build_arg_parser and builds warcprox arguments object suitable to pass
to warcprox.main.init_controller. Copies some arguments, renames some,
populates some with defaults appropriate for brozzler-easy, etc.
'''
warcprox_opts = warcprox.Options()
warcprox_opts.address = 'localhost'
# let the OS choose an available port; discover it later using
# sock.getsockname()[1]
warcprox_opts.port = 0
warcprox_opts.cacert = args.cacert
warcprox_opts.certs_dir = args.certs_dir
warcprox_opts.directory = args.warcs_dir
warcprox_opts.gzip = True
warcprox_opts.prefix = 'brozzler'
warcprox_opts.size = 1000 * 1000* 1000
warcprox_opts.rollover_idle_time = 3 * 60
warcprox_opts.digest_algorithm = 'sha1'
warcprox_opts.base32 = True
warcprox_opts.stats_db_file = None
warcprox_opts.playback_port = None
warcprox_opts.playback_index_db_file = None
warcprox_opts.rethinkdb_big_table_url = (
'rethinkdb://%s/%s/captures' % (
args.rethinkdb_servers, args.rethinkdb_db))
warcprox_opts.queue_size = 500
warcprox_opts.max_threads = None
warcprox_opts.profile = False
warcprox_opts.onion_tor_socks_proxy = args.onion_tor_socks_proxy
return warcprox_opts
def _reppy_rules_getitem(self, agent):
'''
Find the user-agent token matching the supplied full user-agent, using
a case-insensitive substring search.
'''
lc_agent = agent.lower()
for s in self.agents:
if s in lc_agent:
return self.agents[s]
return self.agents.get('*')
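The case-insensitive substring match against user-agent tokens, with a fallback to the wildcard agent, can be demonstrated standalone with a hypothetical agent table:

```python
def match_agent(agents, full_user_agent):
    """Find the robots.txt user-agent token contained in the full
    user-agent string (case-insensitive), falling back to '*'."""
    lc = full_user_agent.lower()
    for token in agents:
        if token in lc:
            return agents[token]
    return agents.get('*')

agents = {'brozzler': 'brozzler-rules', '*': 'default-rules'}
```

Note that robots.txt agent tokens are matched as substrings of the full user-agent header, which is why `Mozilla/5.0 Brozzler/1.0` picks up the `brozzler` rules.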
def is_permitted_by_robots(site, url, proxy=None):
'''
Checks if `url` is permitted by robots.txt.
Treats any kind of error fetching robots.txt as "allow all". See
http://builds.archive.org/javadoc/heritrix-3.x-snapshot/org/archive/modules/net/CrawlServer.html#updateRobots(org.archive.modules.CrawlURI)
for some background on that policy.
Returns:
bool: `True` if `site.ignore_robots` is set, or if `url` is permitted
by robots.txt, `False` otherwise
Raises:
brozzler.ReachedLimit: if warcprox responded with 420 Reached Limit
requests.exceptions.ProxyError: if the proxy is down
'''
if site.ignore_robots:
return True
try:
result = _robots_cache(site, proxy).allowed(
url, site.user_agent or "brozzler")
return result
except Exception as e:
if isinstance(e, reppy.exceptions.ServerError) and isinstance(
e.args[0], brozzler.ReachedLimit):
raise e.args[0]
elif hasattr(e, 'args') and isinstance(
e.args[0], requests.exceptions.ProxyError):
# reppy has wrapped an exception that we want to bubble up
raise brozzler.ProxyError(e)
else:
logging.warn(
"returning true (permitted) after problem fetching "
"robots.txt for %r: %r", url, e)
return True
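The underlying allow/deny decision can be illustrated with the standard library's `urllib.robotparser`. This is only a sketch of the robots.txt check itself; brozzler uses reppy and layers the "allow on fetch error" policy described above on top:

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# A hypothetical robots.txt with a group for brozzler and a
# deny-all default for everyone else
rp.parse([
    'User-agent: brozzler',
    'Disallow: /private/',
    '',
    'User-agent: *',
    'Disallow: /',
])

allowed_public = rp.can_fetch('brozzler', 'https://example.com/page')
allowed_private = rp.can_fetch('brozzler', 'https://example.com/private/x')
```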
def final_bounces(fetches, url):
"""
Resolves redirect chains in `fetches` and returns a list of fetches
representing the final redirect destinations of the given url. There could
be more than one if for example youtube-dl hit the same url with HEAD and
then GET requests.
"""
redirects = {}
for fetch in fetches:
# XXX check http status 301,302,303,307? check for "uri" header
# as well as "location"? see urllib.request.HTTPRedirectHandler
if 'location' in fetch['response_headers']:
redirects[fetch['url']] = fetch
final_url = url
while final_url in redirects:
fetch = redirects.pop(final_url)
final_url = urllib.parse.urljoin(
fetch['url'], fetch['response_headers']['location'])
final_bounces = []
for fetch in fetches:
if fetch['url'] == final_url:
final_bounces.append(fetch)
return final_bounces | python | {
"resource": ""
} |
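The redirect-chain walk can be exercised standalone: follow `location` headers from the starting url (resolving relative locations with `urljoin`, as in the code above), then collect every fetch of the final url. A plain-dict sketch:

```python
import urllib.parse

def final_bounces(fetches, url):
    """Resolve the redirect chain starting at `url` and return the
    fetches of the final destination."""
    redirects = {f['url']: f for f in fetches
                 if 'location' in f['response_headers']}
    final_url = url
    while final_url in redirects:
        fetch = redirects.pop(final_url)
        final_url = urllib.parse.urljoin(
            fetch['url'], fetch['response_headers']['location'])
    return [f for f in fetches if f['url'] == final_url]

# http://a/1 redirects to /2, which was hit with both HEAD and GET
fetches = [
    {'url': 'http://a/1', 'method': 'GET',
     'response_headers': {'location': '/2'}},
    {'url': 'http://a/2', 'method': 'HEAD', 'response_headers': {}},
    {'url': 'http://a/2', 'method': 'GET', 'response_headers': {}},
]
bounces = final_bounces(fetches, 'http://a/1')
```

Popping each url out of `redirects` as it is visited also guards against redirect loops.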
def _remember_videos(page, fetches, stitch_ups=None):
'''
Saves info about videos captured by youtube-dl in `page.videos`.
'''
if 'videos' not in page:
page.videos = []
for fetch in fetches or []:
content_type = fetch['response_headers'].get_content_type()
if (content_type.startswith('video/')
# skip manifests of DASH segmented video -
# see https://github.com/internetarchive/brozzler/pull/70
and content_type != 'video/vnd.mpeg.dash.mpd'
and fetch['method'] == 'GET'
and fetch['response_code'] in (200, 206)):
video = {
'blame': 'youtube-dl',
'url': fetch['url'],
'response_code': fetch['response_code'],
'content-type': content_type,
}
if 'content-length' in fetch['response_headers']:
video['content-length'] = int(
fetch['response_headers']['content-length'])
if 'content-range' in fetch['response_headers']:
video['content-range'] = fetch[
'response_headers']['content-range']
logging.debug('embedded video %s', video)
page.videos.append(video)
for stitch_up in stitch_ups or []:
if stitch_up['content-type'].startswith('video/'):
video = {
'blame': 'youtube-dl',
'url': stitch_up['url'],
'response_code': stitch_up['response_code'],
'content-type': stitch_up['content-type'],
'content-length': stitch_up['content-length'],
}
logging.debug('embedded video %s', video)
page.videos.append(video) | python | {
"resource": ""
} |
q35729 | do_youtube_dl | train | def do_youtube_dl(worker, site, page):
'''
Runs youtube-dl configured for `worker` and `site` to download videos from
`page`.
Args:
worker (brozzler.BrozzlerWorker): the calling brozzler worker
site (brozzler.Site): the site we are brozzling
page (brozzler.Page): the page we are brozzling
Returns:
tuple with two entries:
`list` of `dict`: with info about urls fetched:
[{
'url': ...,
'method': ...,
'response_code': ...,
'response_headers': ...,
}, ...]
`list` of `str`: outlink urls
'''
with tempfile.TemporaryDirectory(prefix='brzl-ydl-') as tempdir:
ydl = _build_youtube_dl(worker, tempdir, site)
ie_result = _try_youtube_dl(worker, ydl, site, page)
outlinks = set()
if ie_result and ie_result.get('extractor') == 'youtube:playlist':
# youtube watch pages as outlinks
outlinks = {'https://www.youtube.com/watch?v=%s' % e['id']
for e in ie_result.get('entries_no_dl', [])}
# any outlinks for other cases?
return ydl.fetch_spy.fetches, outlinks | python | {
"resource": ""
} |
q35730 | pages | train | def pages(site_id):
"""Pages already crawled."""
start = int(flask.request.args.get("start", 0))
end = int(flask.request.args.get("end", start + 90))
reql = rr.table("pages").between(
[site_id, 1, r.minval], [site_id, r.maxval, r.maxval],
index="least_hops").order_by(index="least_hops")[start:end]
logging.debug("querying rethinkdb: %s", reql)
pages_ = reql.run()
return flask.jsonify(pages=list(pages_)) | python | {
"resource": ""
} |
q35731 | check_version | train | def check_version(chrome_exe):
'''
Raises SystemExit if `chrome_exe` is not a supported browser version.
Must run in the main thread to have the desired effect.
'''
# mac$ /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --version
# Google Chrome 64.0.3282.140
# mac$ /Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary --version
# Google Chrome 66.0.3341.0 canary
# linux$ chromium-browser --version
# Using PPAPI flash.
# --ppapi-flash-path=/usr/lib/adobe-flashplugin/libpepflashplayer.so --ppapi-flash-version=
# Chromium 61.0.3163.100 Built on Ubuntu , running on Ubuntu 16.04
cmd = [chrome_exe, '--version']
out = subprocess.check_output(cmd, timeout=60)
m = re.search(br'(Chromium|Google Chrome) ([\d.]+)', out)
if not m:
sys.exit(
'unable to parse browser version from output of '
'%r: %r' % (subprocess.list2cmdline(cmd), out))
version_str = m.group(2).decode()
major_version = int(version_str.split('.')[0])
if major_version < 64:
sys.exit('brozzler requires chrome/chromium version 64 or '
'later but %s reports version %s' % (
chrome_exe, version_str)) | python | {
"resource": ""
} |
q35732 | BrozzlerWorker._service_heartbeat_if_due | train | def _service_heartbeat_if_due(self):
'''Sends service registry heartbeat if due'''
due = False
if self._service_registry:
if not hasattr(self, "status_info"):
due = True
else:
d = doublethink.utcnow() - self.status_info["last_heartbeat"]
due = d.total_seconds() > self.HEARTBEAT_INTERVAL
if due:
self._service_heartbeat() | python | {
"resource": ""
} |
q35733 | BrozzlerWorker._start_browsing_some_sites | train | def _start_browsing_some_sites(self):
'''
Starts browsing some sites.
Raises:
NoBrowsersAvailable if none available
'''
# acquire_multi() raises NoBrowsersAvailable if none available
browsers = self._browser_pool.acquire_multi(
(self._browser_pool.num_available() + 1) // 2)
try:
sites = self._frontier.claim_sites(len(browsers))
except:
self._browser_pool.release_all(browsers)
raise
for i in range(len(browsers)):
if i < len(sites):
th = threading.Thread(
target=self._brozzle_site_thread_target,
args=(browsers[i], sites[i]),
name="BrozzlingThread:%s" % browsers[i].chrome.port,
daemon=True)
with self._browsing_threads_lock:
self._browsing_threads.add(th)
th.start()
else:
self._browser_pool.release(browsers[i]) | python | {
"resource": ""
} |
q35734 | tokenizer._createtoken | train | def _createtoken(self, type_, value, flags=None):
'''create a token with position information'''
pos = None
assert len(self._positions) >= 2, (type_, value)
p2 = self._positions.pop()
p1 = self._positions.pop()
pos = [p1, p2]
return token(type_, value, pos, flags) | python | {
"resource": ""
} |
q35735 | parse | train | def parse(s, strictmode=True, expansionlimit=None, convertpos=False):
'''parse the input string, returning a list of nodes
top level node kinds are:
- command - a simple command
- pipeline - a series of simple commands
- list - a series of one or more pipelines
- compound - contains constructs for { list; }, (list), if, for..
leaves are word nodes (which in turn can also contain any of the
aforementioned nodes due to command substitutions).
when strictmode is set to False, we will:
- skip reading a heredoc if we're at the end of the input
expansionlimit is used to limit the amount of recursive parsing done due to
command substitutions found during word expansion.
'''
p = _parser(s, strictmode=strictmode, expansionlimit=expansionlimit)
parts = [p.parse()]
class endfinder(ast.nodevisitor):
def __init__(self):
self.end = -1
def visitheredoc(self, node, value):
self.end = node.pos[1]
# find the 'real' end in case we have a heredoc in there
ef = endfinder()
ef.visit(parts[-1])
index = max(parts[-1].pos[1], ef.end) + 1
while index < len(s):
part = _parser(s[index:], strictmode=strictmode).parse()
if not isinstance(part, ast.node):
break
ast.posshifter(index).visit(part)
parts.append(part)
ef = endfinder()
ef.visit(parts[-1])
index = max(parts[-1].pos[1], ef.end) + 1
if convertpos:
for tree in parts:
ast.posconverter(s).visit(tree)
return parts | python | {
"resource": ""
} |
q35736 | split | train | def split(s):
'''a utility function that mimics shlex.split but handles more
complex shell constructs such as command substitutions inside words
>>> list(split('a b"c"\\'d\\''))
['a', 'bcd']
>>> list(split('a "b $(c)" $(d) \\'$(e)\\''))
['a', 'b $(c)', '$(d)', '$(e)']
>>> list(split('a b\\n'))
['a', 'b', '\\n']
'''
p = _parser(s)
for t in p.tok:
if t.ttype == tokenizer.tokentype.WORD:
quoted = bool(t.flags & flags.word.QUOTED)
doublequoted = quoted and t.value[0] == '"'
parts, expandedword = subst._expandwordinternal(p, t, 0,
doublequoted, 0, 0)
yield expandedword
else:
yield s[t.lexpos:t.endlexpos] | python | {
"resource": ""
} |
q35737 | sleep_and_retry | train | def sleep_and_retry(func):
'''
Return a wrapped function that rescues rate limit exceptions, sleeping the
current thread until rate limit resets.
:param function func: The function to decorate.
:return: Decorated function.
:rtype: function
'''
@wraps(func)
def wrapper(*args, **kargs):
'''
Call the rate limited function. If the function raises a rate limit
exception, sleep for the remaining time period and retry the function.
:param args: non-keyword variable length argument list to the decorated function.
:param kargs: keyword variable-length argument list passed to the decorated function.
'''
while True:
try:
return func(*args, **kargs)
except RateLimitException as exception:
time.sleep(exception.period_remaining)
return wrapper | python | {
"resource": ""
} |
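The `sleep_and_retry` loop can be sketched self-contained; the `RateLimitException` below is a stand-in for the one exported by the ratelimit package, carrying only the `period_remaining` attribute the wrapper actually reads.

```python
import time

class RateLimitException(Exception):
    # minimal stand-in: only period_remaining matters to the wrapper
    def __init__(self, message, period_remaining):
        super().__init__(message)
        self.period_remaining = period_remaining

def sleep_and_retry(func):
    def wrapper(*args, **kwargs):
        while True:
            try:
                return func(*args, **kwargs)
            except RateLimitException as exc:
                time.sleep(exc.period_remaining)
    return wrapper

calls = []

@sleep_and_retry
def flaky():
    calls.append(1)
    if len(calls) < 2:  # rate-limited on the first call only
        raise RateLimitException('too many calls', 0.01)
    return 'ok'

print(flaky())  # ok -- the wrapper slept once, then retried
```

The wrapper never gives up: it keeps sleeping and retrying until the wrapped call stops raising, which is exactly the behavior the docstring describes.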
q35738 | RateLimitDecorator.__period_remaining | train | def __period_remaining(self):
'''
Return the period remaining for the current rate limit window.
:return: The remaining period.
:rtype: float
'''
elapsed = self.clock() - self.last_reset
return self.period - elapsed | python | {
"resource": ""
} |
q35739 | filter_lines_from_comments | train | def filter_lines_from_comments(lines):
""" Filter the lines from comments and non code lines. """
for line_nb, raw_line in enumerate(lines):
clean_line = remove_comments_from_line(raw_line)
if clean_line == '':
continue
yield line_nb, clean_line, raw_line | python | {
"resource": ""
} |
q35740 | _check_no_current_table | train | def _check_no_current_table(new_obj, current_table):
""" Raises exception if we try to add a relation or a column
with no current table. """
if current_table is None:
msg = 'Cannot add {} before adding table'
if isinstance(new_obj, Relation):
raise NoCurrentTableException(msg.format('relation'))
if isinstance(new_obj, Column):
raise NoCurrentTableException(msg.format('column')) | python | {
"resource": ""
} |
q35741 | update_models | train | def update_models(new_obj, current_table, tables, relations):
""" Update the state of the parsing. """
_update_check_inputs(current_table, tables, relations)
_check_no_current_table(new_obj, current_table)
if isinstance(new_obj, Table):
tables_names = [t.name for t in tables]
_check_not_creating_duplicates(new_obj.name, tables_names, 'table', DuplicateTableException)
return new_obj, tables + [new_obj], relations
if isinstance(new_obj, Relation):
tables_names = [t.name for t in tables]
_check_colname_in_lst(new_obj.right_col, tables_names)
_check_colname_in_lst(new_obj.left_col, tables_names)
return current_table, tables, relations + [new_obj]
if isinstance(new_obj, Column):
columns_names = [c.name for c in current_table.columns]
_check_not_creating_duplicates(new_obj.name, columns_names, 'column', DuplicateColumnException)
current_table.columns.append(new_obj)
return current_table, tables, relations
msg = "new_obj cannot be of type {}"
raise ValueError(msg.format(new_obj.__class__.__name__)) | python | {
"resource": ""
} |
q35742 | markdown_file_to_intermediary | train | def markdown_file_to_intermediary(filename):
""" Parse a file and return to intermediary syntax. """
with open(filename) as f:
lines = f.readlines()
return line_iterator_to_intermediary(lines) | python | {
"resource": ""
} |
q35743 | check_args | train | def check_args(args):
"""Checks that the args are coherent."""
check_args_has_attributes(args)
if args.v:
non_version_attrs = [v for k, v in args.__dict__.items() if k != 'v']
print('non_version_attrs', non_version_attrs)
if len([v for v in non_version_attrs if v is not None]) != 0:
fail('Cannot show the version number with another command.')
return
if args.i is None:
fail('Cannot draw ER diagram of no database.')
if args.o is None:
fail('Cannot draw ER diagram with no output file.') | python | {
"resource": ""
} |
q35744 | relation_to_intermediary | train | def relation_to_intermediary(fk):
"""Transform an SQLAlchemy ForeignKey object to it's intermediary representation. """
return Relation(
right_col=format_name(fk.parent.table.fullname),
left_col=format_name(fk._column_tokens[1]),
right_cardinality='?',
left_cardinality='*',
) | python | {
"resource": ""
} |
q35745 | column_to_intermediary | train | def column_to_intermediary(col, type_formatter=format_type):
"""Transform an SQLAlchemy Column object to it's intermediary representation. """
return Column(
name=col.name,
type=type_formatter(col.type),
is_key=col.primary_key,
) | python | {
"resource": ""
} |
q35746 | table_to_intermediary | train | def table_to_intermediary(table):
"""Transform an SQLAlchemy Table object to it's intermediary representation. """
return Table(
name=table.fullname,
columns=[column_to_intermediary(col) for col in table.c._data.values()]
) | python | {
"resource": ""
} |
q35747 | metadata_to_intermediary | train | def metadata_to_intermediary(metadata):
""" Transforms SQLAlchemy metadata to the intermediary representation. """
tables = [table_to_intermediary(table) for table in metadata.tables.values()]
relationships = [relation_to_intermediary(fk) for table in metadata.tables.values() for fk in table.foreign_keys]
return tables, relationships | python | {
"resource": ""
} |
q35748 | name_for_scalar_relationship | train | def name_for_scalar_relationship(base, local_cls, referred_cls, constraint):
""" Overriding naming schemes. """
name = referred_cls.__name__.lower() + "_ref"
return name | python | {
"resource": ""
} |
q35749 | intermediary_to_markdown | train | def intermediary_to_markdown(tables, relationships, output):
""" Saves the intermediary representation to markdown. """
er_markup = _intermediary_to_markdown(tables, relationships)
with open(output, "w") as file_out:
file_out.write(er_markup) | python | {
"resource": ""
} |
q35750 | intermediary_to_dot | train | def intermediary_to_dot(tables, relationships, output):
""" Save the intermediary representation to dot format. """
dot_file = _intermediary_to_dot(tables, relationships)
with open(output, "w") as file_out:
file_out.write(dot_file) | python | {
"resource": ""
} |
q35751 | intermediary_to_schema | train | def intermediary_to_schema(tables, relationships, output):
""" Transforms and save the intermediary representation to the file chosen. """
dot_file = _intermediary_to_dot(tables, relationships)
graph = AGraph()
graph = graph.from_string(dot_file)
extension = output.split('.')[-1]
graph.draw(path=output, prog='dot', format=extension) | python | {
"resource": ""
} |
q35752 | _intermediary_to_markdown | train | def _intermediary_to_markdown(tables, relationships):
""" Returns the er markup source in a string. """
t = '\n'.join(t.to_markdown() for t in tables)
r = '\n'.join(r.to_markdown() for r in relationships)
return '{}\n{}'.format(t, r) | python | {
"resource": ""
} |
q35753 | _intermediary_to_dot | train | def _intermediary_to_dot(tables, relationships):
""" Returns the dot source representing the database in a string. """
t = '\n'.join(t.to_dot() for t in tables)
r = '\n'.join(r.to_dot() for r in relationships)
return '{}\n{}\n{}\n}}'.format(GRAPH_BEGINNING, t, r) | python | {
"resource": ""
} |
q35754 | all_to_intermediary | train | def all_to_intermediary(filename_or_input, schema=None):
""" Dispatch the filename_or_input to the different function to produce the intermediary syntax.
All the supported class names are in `switch_input_class_to_method`.
The input can also be a list of strings in markdown format or a filename finishing by '.er' containing markdown
format.
"""
# Try to convert from the name of the class
input_class_name = filename_or_input.__class__.__name__
try:
this_to_intermediary = switch_input_class_to_method[input_class_name]
tables, relationships = this_to_intermediary(filename_or_input)
return tables, relationships
except KeyError:
pass
# try to read markdown file.
if isinstance(filename_or_input, basestring):
if filename_or_input.split('.')[-1] == 'er':
return markdown_file_to_intermediary(filename_or_input)
# try to read a markdown in a string
if not isinstance(filename_or_input, basestring):
if all(isinstance(e, basestring) for e in filename_or_input):
return line_iterator_to_intermediary(filename_or_input)
# try to read DB URI.
try:
make_url(filename_or_input)
return database_to_intermediary(filename_or_input, schema=schema)
except ArgumentError:
pass
msg = 'Cannot process filename_or_input {}'.format(input_class_name)
raise ValueError(msg) | python | {
"resource": ""
} |
q35755 | get_output_mode | train | def get_output_mode(output, mode):
"""
From the output name and the mode, returns the function that will transform the intermediary
representation to the output.
"""
if mode != 'auto':
try:
return switch_output_mode_auto[mode]
except KeyError:
raise ValueError('Mode "{}" is not supported.'.format(mode))
extension = output.split('.')[-1]
try:
return switch_output_mode[extension]
except KeyError:
return intermediary_to_schema | python | {
"resource": ""
} |
q35756 | handle_oneof | train | def handle_oneof(oneof_schema: list) -> tuple:
"""
Custom handling of the `oneOf` JSON schema validator. Tries to match a primitive type and checks whether it
should be allowed to be passed into a command multiple times
:param oneof_schema: `oneOf` JSON schema
:return: Tuple of :class:`click.ParamType`, ``multiple`` flag and ``description`` of option
"""
oneof_dict = {schema["type"]: schema for schema in oneof_schema}
click_type = None
multiple = False
description = None
for key, value in oneof_dict.items():
if key == "array":
continue
elif key in SCHEMA_BASE_MAP:
if oneof_dict.get("array") and oneof_dict["array"]["items"]["type"] == key:
multiple = True
# Found a match to a primitive type
click_type = SCHEMA_BASE_MAP[key]
description = value.get("title")
break
return click_type, multiple, description | python | {
"resource": ""
} |
q35757 | clean_data | train | def clean_data(data: dict) -> dict:
"""Removes all empty values and converts tuples into lists"""
new_data = {}
for key, value in data.items():
# Verify that only explicitly passed args get passed on
if not isinstance(value, bool) and not value:
continue
# Multiple choice options are passed as tuples, convert to list to match schema
if isinstance(value, tuple):
value = list(value)
new_data[key] = value
return new_data | python | {
"resource": ""
} |
q35758 | SchemaResource.schema | train | def schema(self) -> dict:
"""
A property method that'll return the constructed provider schema.
Schema MUST be an object and this method must be overridden
:return: JSON schema of the provider
"""
if not self._merged_schema:
log.debug("merging required dict into schema for %s", self.name)
self._merged_schema = self._schema.copy()
self._merged_schema.update(self._required)
return self._merged_schema | python | {
"resource": ""
} |
q35759 | SchemaResource._process_data | train | def _process_data(self, **data) -> dict:
"""
The main method that processes all resource data. Validates the schema, gets environs, validates data, prepares
it via provider requirements, merges defaults and checks for data dependencies
:param data: The raw data passed by the notifiers client
:return: Processed data
"""
env_prefix = data.pop("env_prefix", None)
environs = self._get_environs(env_prefix)
if environs:
data = merge_dicts(data, environs)
data = self._merge_defaults(data)
self._validate_data(data)
data = self._validate_data_dependencies(data)
data = self._prepare_data(data)
return data | python | {
"resource": ""
} |
q35760 | is_iso8601 | train | def is_iso8601(instance: str):
"""Validates ISO8601 format"""
if not isinstance(instance, str):
return True
return ISO8601.match(instance) is not None | python | {
"resource": ""
} |
q35761 | is_rfc2822 | train | def is_rfc2822(instance: str):
"""Validates RFC2822 format"""
if not isinstance(instance, str):
return True
return email.utils.parsedate(instance) is not None | python | {
"resource": ""
} |
q35762 | is_valid_port | train | def is_valid_port(instance: int):
"""Validates data is a valid port"""
if not isinstance(instance, (int, str)):
return True
return int(instance) in range(1, 65536)
"resource": ""
} |
q35763 | is_timestamp | train | def is_timestamp(instance):
"""Validates data is a timestamp"""
if not isinstance(instance, (int, str)):
return True
return datetime.fromtimestamp(int(instance)) | python | {
"resource": ""
} |
q35764 | func_factory | train | def func_factory(p, method: str) -> callable:
"""
Dynamically generates callback commands to correlate to provider public methods
:param p: A :class:`notifiers.core.Provider` object
:param method: A string correlating to a provider method
:return: A callback func
"""
def callback(pretty: bool = False):
res = getattr(p, method)
dump = partial(json.dumps, indent=4) if pretty else partial(json.dumps)
click.echo(dump(res))
return callback | python | {
"resource": ""
} |
q35765 | _notify | train | def _notify(p, **data):
"""The callback func that will be hooked to the ``notify`` command"""
message = data.get("message")
if not message and not sys.stdin.isatty():
message = click.get_text_stream("stdin").read()
data["message"] = message
data = clean_data(data)
ctx = click.get_current_context()
if ctx.obj.get("env_prefix"):
data["env_prefix"] = ctx.obj["env_prefix"]
rsp = p.notify(**data)
rsp.raise_on_errors()
click.secho(f"Successfully sent a notification to {p.name}!", fg="green")
"resource": ""
} |
q35766 | _resource | train | def _resource(resource, pretty: bool = None, **data):
"""The callback func that will be hooked to the generic resource commands"""
data = clean_data(data)
ctx = click.get_current_context()
if ctx.obj.get("env_prefix"):
data["env_prefix"] = ctx.obj["env_prefix"]
rsp = resource(**data)
dump = partial(json.dumps, indent=4) if pretty else partial(json.dumps)
click.echo(dump(rsp)) | python | {
"resource": ""
} |
q35767 | _resources | train | def _resources(p):
"""Callback func to display provider resources"""
if p.resources:
click.echo(",".join(p.resources))
else:
click.echo(f"Provider '{p.name}' does not have resource helpers") | python | {
"resource": ""
} |
q35768 | one_or_more | train | def one_or_more(
schema: dict, unique_items: bool = True, min: int = 1, max: int = None
) -> dict:
"""
Helper function to construct a schema that validates items matching
`schema` or an array containing items matching `schema`.
:param schema: The schema to use
:param unique_items: Flag if array items should be unique
:param min: Correlates to ``minItems`` attribute of JSON Schema array
:param max: Correlates to ``maxItems`` attribute of JSON Schema array
"""
multi_schema = {
"type": "array",
"items": schema,
"minItems": min,
"uniqueItems": unique_items,
}
if max:
multi_schema["maxItems"] = max
return {"oneOf": [multi_schema, schema]} | python | {
"resource": ""
} |
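A local copy of `one_or_more` shows the shape of the schema it builds; validating a payload against it (e.g. with the third-party `jsonschema` package) then accepts either a single matching value or a unique list of them.

```python
def one_or_more(schema, unique_items=True, min=1, max=None):
    # same construction as the helper above, minus type hints
    multi_schema = {
        "type": "array",
        "items": schema,
        "minItems": min,
        "uniqueItems": unique_items,
    }
    if max:
        multi_schema["maxItems"] = max
    return {"oneOf": [multi_schema, schema]}

to_schema = one_or_more({"type": "string"}, max=10)
print(to_schema["oneOf"][0]["maxItems"])  # 10
```

The scalar branch is the original schema unchanged, so existing single-value payloads keep validating after a provider switches a field to `one_or_more`.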
q35769 | text_to_bool | train | def text_to_bool(value: str) -> bool:
"""
Tries to convert a text value to a bool. If unsuccessful, returns whether the value is not None
:param value: Value to check
"""
try:
return bool(strtobool(value))
except (ValueError, AttributeError):
return value is not None | python | {
"resource": ""
} |
q35770 | merge_dicts | train | def merge_dicts(target_dict: dict, merge_dict: dict) -> dict:
"""
Merges ``merge_dict`` into ``target_dict`` if the latter does not already contain a value for each of the key
names in ``merge_dict``. Used to cleanly merge default and environ data into notification payload.
:param target_dict: The target dict to merge into and return, the user provided data for example
:param merge_dict: The data that should be merged into the target data
:return: A dict of merged data
"""
log.debug("merging dict %s into %s", merge_dict, target_dict)
for key, value in merge_dict.items():
if key not in target_dict:
target_dict[key] = value
return target_dict | python | {
"resource": ""
} |
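Because `merge_dicts` only fills keys missing from the target, explicit user data always wins over defaults or environ values. A stripped-down copy (without logging) makes the precedence visible:

```python
def merge_dicts(target_dict, merge_dict):
    # target values win; merge_dict only fills the gaps
    for key, value in merge_dict.items():
        if key not in target_dict:
            target_dict[key] = value
    return target_dict

user_data = {"message": "hi", "title": "user title"}
defaults = {"title": "default title", "priority": 1}
merged = merge_dicts(user_data, defaults)
print(merged)  # {'message': 'hi', 'title': 'user title', 'priority': 1}
```

Note the helper mutates and returns the target dict rather than building a new one.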
q35771 | snake_to_camel_case | train | def snake_to_camel_case(value: str) -> str:
"""
Convert a snake case param to CamelCase
:param value: The value to convert
:return: A CamelCase value
"""
log.debug("trying to convert %s to camel case", value)
return "".join(word.capitalize() for word in value.split("_")) | python | {
"resource": ""
} |
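Despite the name, `snake_to_camel_case` produces upper CamelCase (PascalCase), since every word, including the first, is capitalized:

```python
def snake_to_camel_case(value):
    # same one-liner as the helper above, minus the logging
    return "".join(word.capitalize() for word in value.split("_"))

print(snake_to_camel_case("max_retry_count"))  # MaxRetryCount
```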
q35772 | valid_file | train | def valid_file(path: str) -> bool:
"""
Verifies that a string path actually exists and is a file
:param path: The path to verify
:return: **True** if path exist and is a file
"""
path = Path(path).expanduser()
log.debug("checking if %s is a valid file", path)
return path.exists() and path.is_file() | python | {
"resource": ""
} |
q35773 | NotificationHandler.init_providers | train | def init_providers(self, provider, kwargs):
"""
Inits main and fallback provider if relevant
:param provider: Provider name to use
:param kwargs: Additional kwargs
:raises ValueError: If provider name or fallback names are not valid providers, a :exc:`ValueError` will
be raised
"""
self.provider = notifiers.get_notifier(provider, strict=True)
if kwargs.get("fallback"):
self.fallback = notifiers.get_notifier(kwargs.pop("fallback"), strict=True)
self.fallback_defaults = kwargs.pop("fallback_defaults", {}) | python | {
"resource": ""
} |
q35774 | provider_group_factory | train | def provider_group_factory():
"""Dynamically generate provider groups for all providers, and add all basic commands to them"""
for provider in all_providers():
p = get_notifier(provider)
provider_name = p.name
help = f"Options for '{provider_name}'"
group = click.Group(name=provider_name, help=help)
# Notify command
notify = partial(_notify, p=p)
group.add_command(schema_to_command(p, "notify", notify, add_message=True))
# Resources command
resources_callback = partial(_resources, p=p)
resources_cmd = click.Command(
"resources",
callback=resources_callback,
help="Show provider resources list",
)
group.add_command(resources_cmd)
pretty_opt = click.Option(
["--pretty/--not-pretty"], help="Output a pretty version of the JSON"
)
# Add any provider resources
for resource in p.resources:
rsc = getattr(p, resource)
rsrc_callback = partial(_resource, rsc)
rsrc_command = schema_to_command(
rsc, resource, rsrc_callback, add_message=False
)
rsrc_command.params.append(pretty_opt)
group.add_command(rsrc_command)
for name, description in CORE_COMMANDS.items():
callback = func_factory(p, name)
params = [pretty_opt]
command = click.Command(
name,
callback=callback,
help=description.format(provider_name),
params=params,
)
group.add_command(command)
notifiers_cli.add_command(group) | python | {
"resource": ""
} |
q35775 | entry_point | train | def entry_point():
"""The entry that CLI is executed from"""
try:
provider_group_factory()
notifiers_cli(obj={})
except NotifierException as e:
click.secho(f"ERROR: {e.message}", bold=True, fg="red")
exit(1) | python | {
"resource": ""
} |
q35776 | Requester.request | train | def request(
self, method, endpoint=None, headers=None, use_auth=True,
_url=None, _kwargs=None, **kwargs):
"""
Make a request to the Canvas API and return the response.
:param method: The HTTP method for the request.
:type method: str
:param endpoint: The endpoint to call.
:type endpoint: str
:param headers: Optional HTTP headers to be sent with the request.
:type headers: dict
:param use_auth: Optional flag to remove the authentication
header from the request.
:type use_auth: bool
:param _url: Optional argument to send a request to a URL
outside of the Canvas API. If this is selected and an
endpoint is provided, the endpoint will be ignored and
only the _url argument will be used.
:type _url: str
:param _kwargs: A list of 2-tuples representing processed
keyword arguments to be sent to Canvas as params or data.
:type _kwargs: `list`
:rtype: str
"""
full_url = _url if _url else "{}{}".format(self.base_url, endpoint)
if not headers:
headers = {}
if use_auth:
auth_header = {'Authorization': 'Bearer {}'.format(self.access_token)}
headers.update(auth_header)
# Convert kwargs into list of 2-tuples and combine with _kwargs.
_kwargs = _kwargs or []
_kwargs.extend(kwargs.items())
# Do any final argument processing before sending to request method.
for i, kwarg in enumerate(_kwargs):
kw, arg = kwarg
# Convert boolean objects to a lowercase string.
if isinstance(arg, bool):
_kwargs[i] = (kw, str(arg).lower())
# Convert any datetime objects into ISO 8601 formatted strings.
elif isinstance(arg, datetime):
_kwargs[i] = (kw, arg.isoformat())
# Determine the appropriate request method.
if method == 'GET':
req_method = self._get_request
elif method == 'POST':
req_method = self._post_request
elif method == 'DELETE':
req_method = self._delete_request
elif method == 'PUT':
req_method = self._put_request
# Call the request method
response = req_method(full_url, headers, _kwargs)
# Add response to internal cache
if len(self._cache) > 4:
self._cache.pop()
self._cache.insert(0, response)
# Raise for status codes
if response.status_code == 400:
raise BadRequest(response.text)
elif response.status_code == 401:
if 'WWW-Authenticate' in response.headers:
raise InvalidAccessToken(response.json())
else:
raise Unauthorized(response.json())
elif response.status_code == 403:
raise Forbidden(response.text)
elif response.status_code == 404:
raise ResourceDoesNotExist('Not Found')
elif response.status_code == 409:
raise Conflict(response.text)
elif response.status_code == 500:
raise CanvasException("API encountered an error processing your request")
return response | python | {
"resource": ""
} |
q35777 | Requester._get_request | train | def _get_request(self, url, headers, params=None):
"""
Issue a GET request to the specified endpoint with the data provided.
:param url: str
:param headers: dict
:param params: dict
"""
return self._session.get(url, headers=headers, params=params) | python | {
"resource": ""
} |
q35778 | Requester._post_request | train | def _post_request(self, url, headers, data=None):
"""
Issue a POST request to the specified endpoint with the data provided.
:param url: str
:param headers: dict
:param data: dict
"""
# Grab file from data.
files = None
for field, value in data:
if field == 'file':
if isinstance(value, dict):
files = value
else:
files = {'file': value}
break
# Remove file entry from data.
data[:] = [tup for tup in data if tup[0] != 'file']
return self._session.post(url, headers=headers, data=data, files=files) | python | {
"resource": ""
} |
q35779 | Requester._delete_request | train | def _delete_request(self, url, headers, data=None):
"""
Issue a DELETE request to the specified endpoint with the data provided.
:param url: str
:pararm headers: dict
:param data: dict
"""
return self._session.delete(url, headers=headers, data=data) | python | {
"resource": ""
} |
q35780 | Requester._put_request | train | def _put_request(self, url, headers, data=None):
"""
Issue a PUT request to the specified endpoint with the data provided.
:param url: str
:param headers: dict
:param data: dict
"""
return self._session.put(url, headers=headers, data=data) | python | {
"resource": ""
} |
q35781 | Uploader.request_upload_token | train | def request_upload_token(self, file):
"""
Request an upload token.
:param file: A file handler pointing to the file to upload.
:returns: True if the file uploaded successfully, False otherwise, \
and the JSON response from the API.
:rtype: tuple
"""
self.kwargs['name'] = os.path.basename(file.name)
self.kwargs['size'] = os.fstat(file.fileno()).st_size
response = self._requester.request(
'POST',
self.url,
_kwargs=combine_kwargs(**self.kwargs)
)
return self.upload(response, file) | python | {
"resource": ""
} |
q35782 | Uploader.upload | train | def upload(self, response, file):
"""
Upload the file.
:param response: The response from the upload request.
:type response: dict
:param file: A file handler pointing to the file to upload.
:returns: True if the file uploaded successfully, False otherwise, \
and the JSON response from the API.
:rtype: tuple
"""
response = response.json()
if not response.get('upload_url'):
raise ValueError('Bad API response. No upload_url.')
if not response.get('upload_params'):
raise ValueError('Bad API response. No upload_params.')
kwargs = response.get('upload_params')
response = self._requester.request(
'POST',
use_auth=False,
_url=response.get('upload_url'),
file=file,
_kwargs=combine_kwargs(**kwargs)
)
# remove `while(1);` that may appear at the top of a response
response_json = json.loads(response.text.lstrip('while(1);'))
return ('url' in response_json, response_json) | python | {
"resource": ""
} |
q35783 | CanvasObject.set_attributes | train | def set_attributes(self, attributes):
"""
Load this object with attributes.
This method attempts to detect special types based on the field's content
and will create an additional attribute of that type.
Consider a JSON response with the following fields::
{
"name": "New course name",
"course_code": "COURSE-001",
"start_at": "2012-05-05T00:00:00Z",
"end_at": "2012-08-05T23:59:59Z",
"sis_course_id": "12345"
}
The `start_at` and `end_at` fields match a date in ISO8601 format,
so two additional datetime attributes are created, `start_at_date`
and `end_at_date`.
:param attributes: The JSON object to build this object with.
:type attributes: dict
"""
self.attributes = attributes
for attribute, value in attributes.items():
self.__setattr__(attribute, value)
# datetime field
if DATE_PATTERN.match(text_type(value)):
naive = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
aware = naive.replace(tzinfo=pytz.utc)
self.__setattr__(attribute + '_date', aware) | python | {
"resource": ""
} |
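The ISO 8601 detection in `set_attributes` can be exercised without a Canvas connection. A self-contained sketch, substituting `datetime.timezone.utc` for `pytz.utc`, `str` for six's `text_type`, and a simplified `DATE_PATTERN`:

```python
import re
from datetime import datetime, timezone

# simplified stand-in for canvasapi's DATE_PATTERN
DATE_PATTERN = re.compile(r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z')

class Stub:
    def set_attributes(self, attributes):
        self.attributes = attributes
        for attribute, value in attributes.items():
            setattr(self, attribute, value)
            if DATE_PATTERN.match(str(value)):
                naive = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
                # aware companion attribute, e.g. start_at -> start_at_date
                setattr(self, attribute + '_date', naive.replace(tzinfo=timezone.utc))

obj = Stub()
obj.set_attributes({'name': 'New course name', 'start_at': '2012-05-05T00:00:00Z'})
print(obj.start_at_date.isoformat())  # 2012-05-05T00:00:00+00:00
```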
q35784 | Canvas.create_account | train | def create_account(self, **kwargs):
"""
Create a new root account.
:calls: `POST /api/v1/accounts \
<https://canvas.instructure.com/doc/api/accounts.html#method.accounts.create>`_
:rtype: :class:`canvasapi.account.Account`
"""
response = self.__requester.request(
'POST',
'accounts',
_kwargs=combine_kwargs(**kwargs)
)
return Account(self.__requester, response.json()) | python | {
"resource": ""
} |
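`combine_kwargs`, threaded through nearly every request above, flattens keyword arguments into the bracketed form Canvas expects, e.g. `account={'name': 'X'}` becomes `account[name]=X`. A simplified one-level sketch for illustration; the real helper in `canvasapi.util` also handles deeper nesting and lists:

```python
def combine_kwargs_sketch(**kwargs):
    # flatten one level of nested dicts into Canvas-style bracketed keys
    pairs = []
    for key, value in kwargs.items():
        if isinstance(value, dict):
            for subkey, subvalue in value.items():
                pairs.append(('{}[{}]'.format(key, subkey), subvalue))
        else:
            pairs.append((key, value))
    return pairs

print(combine_kwargs_sketch(account={'name': 'New account'}, per_page=50))
```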
q35785 | Canvas.get_account | train | def get_account(self, account, use_sis_id=False, **kwargs):
"""
Retrieve information on an individual account.
:calls: `GET /api/v1/accounts/:id \
<https://canvas.instructure.com/doc/api/accounts.html#method.accounts.show>`_
:param account: The object or ID of the account to retrieve.
:type account: int, str or :class:`canvasapi.account.Account`
:param use_sis_id: Whether or not account_id is an sis ID.
Defaults to `False`.
:type use_sis_id: bool
:rtype: :class:`canvasapi.account.Account`
"""
if use_sis_id:
account_id = account
uri_str = 'accounts/sis_account_id:{}'
else:
account_id = obj_or_id(account, "account", (Account,))
uri_str = 'accounts/{}'
response = self.__requester.request(
'GET',
uri_str.format(account_id),
_kwargs=combine_kwargs(**kwargs)
)
return Account(self.__requester, response.json()) | python | {
"resource": ""
} |
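The SIS-ID branch in `get_account` (and the analogous course, section, and group getters below) only changes the URI template. A stand-alone sketch of that routing:

```python
def account_uri(account_id, use_sis_id=False):
    # SIS identifiers are addressed as 'sis_account_id:<id>' in the path
    template = 'accounts/sis_account_id:{}' if use_sis_id else 'accounts/{}'
    return template.format(account_id)

print(account_uri(42))                        # accounts/42
print(account_uri('A-100', use_sis_id=True))  # accounts/sis_account_id:A-100
```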
q35786 | Canvas.get_accounts | train | def get_accounts(self, **kwargs):
"""
List accounts that the current user can view or manage.
Typically, students and teachers will get an empty list in
response. Only account admins can view the accounts that they
are in.
:calls: `GET /api/v1/accounts \
<https://canvas.instructure.com/doc/api/accounts.html#method.accounts.index>`_
:rtype: :class:`canvasapi.paginated_list.PaginatedList` of
:class:`canvasapi.account.Account`
"""
return PaginatedList(
Account,
self.__requester,
'GET',
'accounts',
_kwargs=combine_kwargs(**kwargs)
) | python | {
"resource": ""
} |
q35787 | Canvas.get_course | train | def get_course(self, course, use_sis_id=False, **kwargs):
"""
Retrieve a course by its ID.
:calls: `GET /api/v1/courses/:id \
<https://canvas.instructure.com/doc/api/courses.html#method.courses.show>`_
:param course: The object or ID of the course to retrieve.
:type course: int, str or :class:`canvasapi.course.Course`
:param use_sis_id: Whether or not course_id is an sis ID.
Defaults to `False`.
:type use_sis_id: bool
:rtype: :class:`canvasapi.course.Course`
"""
if use_sis_id:
course_id = course
uri_str = 'courses/sis_course_id:{}'
else:
course_id = obj_or_id(course, "course", (Course,))
uri_str = 'courses/{}'
response = self.__requester.request(
'GET',
uri_str.format(course_id),
_kwargs=combine_kwargs(**kwargs)
)
return Course(self.__requester, response.json()) | python | {
"resource": ""
} |
q35788 | Canvas.get_user | train | def get_user(self, user, id_type=None):
"""
Retrieve a user by their ID. `id_type` denotes which endpoint to try as there are
several different IDs that can pull the same user record from Canvas.
Refer to API documentation's
`User <https://canvas.instructure.com/doc/api/users.html#User>`_
example to see the ID types a user can be retrieved with.
:calls: `GET /api/v1/users/:id \
<https://canvas.instructure.com/doc/api/users.html#method.users.api_show>`_
:param user: The user's object or ID.
:type user: :class:`canvasapi.user.User` or int
:param id_type: The ID type.
:type id_type: str
:rtype: :class:`canvasapi.user.User`
"""
if id_type:
uri = 'users/{}:{}'.format(id_type, user)
elif user == 'self':
uri = 'users/self'
else:
user_id = obj_or_id(user, "user", (User,))
uri = 'users/{}'.format(user_id)
response = self.__requester.request(
'GET',
uri
)
return User(self.__requester, response.json()) | python | {
"resource": ""
} |
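`get_user`'s three-way branch (explicit `id_type`, the literal `'self'`, or a plain ID/object) reduces to URI selection. A sketch with a simplified stand-in for `obj_or_id`:

```python
def user_uri(user, id_type=None):
    # mirrors get_user's routing; ints and User objects both end up as a bare id
    if id_type:
        return 'users/{}:{}'.format(id_type, user)
    if user == 'self':
        return 'users/self'
    user_id = user.id if hasattr(user, 'id') else user
    return 'users/{}'.format(user_id)

print(user_uri(1234))                            # users/1234
print(user_uri('self'))                          # users/self
print(user_uri('4321', id_type='sis_user_id'))   # users/sis_user_id:4321
```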
q35789 | Canvas.get_courses | train | def get_courses(self, **kwargs):
"""
Return a list of active courses for the current user.
:calls: `GET /api/v1/courses \
<https://canvas.instructure.com/doc/api/courses.html#method.courses.index>`_
:rtype: :class:`canvasapi.paginated_list.PaginatedList` of
:class:`canvasapi.course.Course`
"""
return PaginatedList(
Course,
self.__requester,
'GET',
'courses',
_kwargs=combine_kwargs(**kwargs)
) | python | {
"resource": ""
} |
q35790 | Canvas.get_section | train | def get_section(self, section, use_sis_id=False, **kwargs):
"""
Get details about a specific section.
:calls: `GET /api/v1/sections/:id \
<https://canvas.instructure.com/doc/api/sections.html#method.sections.show>`_
:param section: The object or ID of the section to get.
:type section: :class:`canvasapi.section.Section` or int
:param use_sis_id: Whether or not section_id is an sis ID.
Defaults to `False`.
:type use_sis_id: bool
:rtype: :class:`canvasapi.section.Section`
"""
if use_sis_id:
section_id = section
uri_str = 'sections/sis_section_id:{}'
else:
section_id = obj_or_id(section, "section", (Section,))
uri_str = 'sections/{}'
response = self.__requester.request(
'GET',
uri_str.format(section_id),
_kwargs=combine_kwargs(**kwargs)
)
return Section(self.__requester, response.json()) | python | {
"resource": ""
} |
q35791 | Canvas.set_course_nickname | train | def set_course_nickname(self, course, nickname):
"""
Set a nickname for the given course. This will replace the
course's name in the output of subsequent API calls, as
well as in selected places in the Canvas web user interface.
:calls: `PUT /api/v1/users/self/course_nicknames/:course_id \
<https://canvas.instructure.com/doc/api/users.html#method.course_nicknames.update>`_
:param course: The ID of the course.
:type course: :class:`canvasapi.course.Course` or int
:param nickname: The nickname for the course.
:type nickname: str
:rtype: :class:`canvasapi.course.CourseNickname`
"""
from canvasapi.course import CourseNickname
course_id = obj_or_id(course, "course", (Course,))
response = self.__requester.request(
'PUT',
'users/self/course_nicknames/{}'.format(course_id),
nickname=nickname
)
return CourseNickname(self.__requester, response.json()) | python | {
"resource": ""
} |
q35792 | Canvas.search_accounts | train | def search_accounts(self, **kwargs):
"""
Return a list of up to 5 matching account domains. Partial matches on
name and domain are supported.
:calls: `GET /api/v1/accounts/search \
<https://canvas.instructure.com/doc/api/account_domain_lookups.html#method.account_domain_lookups.search>`_
:rtype: dict
"""
response = self.__requester.request(
'GET',
'accounts/search',
_kwargs=combine_kwargs(**kwargs)
)
return response.json() | python | {
"resource": ""
} |
q35793 | Canvas.get_group | train | def get_group(self, group, use_sis_id=False, **kwargs):
"""
Return the data for a single group. If the caller does not
have permission to view the group, a 401 will be returned.
:calls: `GET /api/v1/groups/:group_id \
<https://canvas.instructure.com/doc/api/groups.html#method.groups.show>`_
:param group: The object or ID of the group to get.
:type group: :class:`canvasapi.group.Group` or int
:param use_sis_id: Whether or not group_id is an sis ID.
Defaults to `False`.
:type use_sis_id: bool
:rtype: :class:`canvasapi.group.Group`
"""
if use_sis_id:
group_id = group
uri_str = 'groups/sis_group_id:{}'
else:
group_id = obj_or_id(group, "group", (Group,))
uri_str = 'groups/{}'
response = self.__requester.request(
'GET',
uri_str.format(group_id),
_kwargs=combine_kwargs(**kwargs)
)
return Group(self.__requester, response.json()) | python | {
"resource": ""
} |
q35794 | Canvas.get_group_category | train | def get_group_category(self, category):
"""
Get a single group category.
:calls: `GET /api/v1/group_categories/:group_category_id \
<https://canvas.instructure.com/doc/api/group_categories.html#method.group_categories.show>`_
:param category: The object or ID of the category.
:type category: :class:`canvasapi.group.GroupCategory` or int
:rtype: :class:`canvasapi.group.GroupCategory`
"""
category_id = obj_or_id(category, "category", (GroupCategory,))
response = self.__requester.request(
'GET',
'group_categories/{}'.format(category_id)
)
return GroupCategory(self.__requester, response.json()) | python | {
"resource": ""
} |
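`obj_or_id`, used in nearly every method in this file, accepts either a model instance or a raw ID. A minimal reimplementation for illustration (the real helper lives in `canvasapi.util`; `FakeModel` is a stand-in for classes like `GroupCategory`):

```python
class FakeModel:
    def __init__(self, id):
        self.id = id

def obj_or_id_sketch(obj, param_name, types):
    # accept a model instance (use its .id) or anything int()-coercible
    if isinstance(obj, types):
        return obj.id
    try:
        return int(obj)
    except (TypeError, ValueError):
        raise TypeError('Parameter {} must be an int or {}.'.format(param_name, types))

print(obj_or_id_sketch(7, 'category', (FakeModel,)))             # 7
print(obj_or_id_sketch(FakeModel(9), 'category', (FakeModel,)))  # 9
```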
q35795 | Canvas.create_conversation | train | def create_conversation(self, recipients, body, **kwargs):
"""
Create a new Conversation.
:calls: `POST /api/v1/conversations \
<https://canvas.instructure.com/doc/api/conversations.html#method.conversations.create>`_
:param recipients: An array of recipient ids.
These may be user ids or course/group ids prefixed
with 'course\\_' or 'group\\_' respectively,
e.g. recipients=['1', '2', 'course_3']
:type recipients: `list` of `str`
:param body: The body of the message being added.
:type body: `str`
:rtype: list of :class:`canvasapi.conversation.Conversation`
"""
from canvasapi.conversation import Conversation
kwargs['recipients'] = recipients
kwargs['body'] = body
response = self.__requester.request(
'POST',
'conversations',
_kwargs=combine_kwargs(**kwargs)
)
return [Conversation(self.__requester, convo) for convo in response.json()] | python | {
"resource": ""
} |
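The recipient convention in `create_conversation` — bare user IDs, or IDs prefixed with `course_`/`group_` — is easy to get wrong. A small helper sketch (`recipient` is an illustrative name, not part of canvasapi):

```python
def recipient(id_, context=None):
    # context is None for plain user ids, or 'course'/'group' for bulk recipients
    if context not in (None, 'course', 'group'):
        raise ValueError("context must be None, 'course' or 'group'")
    return '{}_{}'.format(context, id_) if context else str(id_)

recipients = [recipient(1), recipient(2), recipient(3, context='course')]
print(recipients)  # ['1', '2', 'course_3'], as in the docstring example
```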
q35796 | Canvas.get_conversation | train | def get_conversation(self, conversation, **kwargs):
"""
Return single Conversation
:calls: `GET /api/v1/conversations/:id \
<https://canvas.instructure.com/doc/api/conversations.html#method.conversations.show>`_
:param conversation: The object or ID of the conversation.
:type conversation: :class:`canvasapi.conversation.Conversation` or int
:rtype: :class:`canvasapi.conversation.Conversation`
"""
from canvasapi.conversation import Conversation
conversation_id = obj_or_id(conversation, "conversation", (Conversation,))
response = self.__requester.request(
'GET',
'conversations/{}'.format(conversation_id),
_kwargs=combine_kwargs(**kwargs)
)
return Conversation(self.__requester, response.json()) | python | {
"resource": ""
} |
q35797 | Canvas.get_conversations | train | def get_conversations(self, **kwargs):
"""
Return a list of conversations for the current user, most recent first.
:calls: `GET /api/v1/conversations \
<https://canvas.instructure.com/doc/api/conversations.html#method.conversations.index>`_
:rtype: :class:`canvasapi.paginated_list.PaginatedList` of \
:class:`canvasapi.conversation.Conversation`
"""
from canvasapi.conversation import Conversation
return PaginatedList(
Conversation,
self.__requester,
'GET',
'conversations',
_kwargs=combine_kwargs(**kwargs)
) | python | {
"resource": ""
} |
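`PaginatedList` defers requests until iteration, following `Link: rel="next"` headers one page at a time. A toy model of that laziness, with a dict standing in for the API:

```python
def paginate(fetch_page, first_url):
    # fetch_page(url) -> (items, next_url_or_None), like following Link headers
    url = first_url
    while url:
        items, url = fetch_page(url)
        for item in items:
            yield item

pages = {
    '/conversations?page=1': (['convo-1', 'convo-2'], '/conversations?page=2'),
    '/conversations?page=2': (['convo-3'], None),
}
print(list(paginate(lambda url: pages[url], '/conversations?page=1')))
```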
q35798 | Canvas.create_calendar_event | train | def create_calendar_event(self, calendar_event, **kwargs):
"""
Create a new Calendar Event.
:calls: `POST /api/v1/calendar_events \
<https://canvas.instructure.com/doc/api/calendar_events.html#method.calendar_events_api.create>`_
:param calendar_event: The attributes of the calendar event.
:type calendar_event: `dict`
:rtype: :class:`canvasapi.calendar_event.CalendarEvent`
"""
from canvasapi.calendar_event import CalendarEvent
if isinstance(calendar_event, dict) and 'context_code' in calendar_event:
kwargs['calendar_event'] = calendar_event
else:
raise RequiredFieldMissing(
"Dictionary with key 'context_codes' is required."
)
response = self.__requester.request(
'POST',
'calendar_events',
_kwargs=combine_kwargs(**kwargs)
)
return CalendarEvent(self.__requester, response.json()) | python | {
"resource": ""
} |
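The validation in `create_calendar_event` rejects anything that is not a dict carrying a `context_code`. A stand-alone sketch of that check, using a plain `ValueError` in place of `RequiredFieldMissing`:

```python
def validate_calendar_event(calendar_event):
    # Canvas requires a context_code such as 'course_123' or 'user_42'
    if isinstance(calendar_event, dict) and 'context_code' in calendar_event:
        return calendar_event
    raise ValueError("Dictionary with key 'context_code' is required.")

event = validate_calendar_event({'context_code': 'course_123', 'title': 'Exam'})
print(event['context_code'])  # course_123
```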
q35799 | Canvas.get_calendar_events | train | def get_calendar_events(self, **kwargs):
"""
List calendar events.
:calls: `GET /api/v1/calendar_events \
<https://canvas.instructure.com/doc/api/calendar_events.html#method.calendar_events_api.index>`_
:rtype: :class:`canvasapi.paginated_list.PaginatedList` of
:class:`canvasapi.calendar_event.CalendarEvent`
"""
from canvasapi.calendar_event import CalendarEvent
return PaginatedList(
CalendarEvent,
self.__requester,
'GET',
'calendar_events',
_kwargs=combine_kwargs(**kwargs)
) | python | {
"resource": ""
} |