Columns: signature (string, lengths 8–3.44k), body (string, lengths 0–1.41M), docstring (string, lengths 1–122k), id (string, lengths 5–17)
def get_type(self):
return self.obj_dict['<STR_LIT:type>']<EOL>
Get the graph's type, 'graph' or 'digraph'.
f15701:c6:m13
def set_name(self, graph_name):
self.obj_dict['<STR_LIT:name>'] = graph_name<EOL>
Set the graph's name.
f15701:c6:m14
def get_name(self):
return self.obj_dict['<STR_LIT:name>']<EOL>
Get the graph's name.
f15701:c6:m15
def set_strict(self, val):
self.obj_dict['<STR_LIT:strict>'] = val<EOL>
Set graph to 'strict' mode. This option is only valid for top level graphs.
f15701:c6:m16
def get_strict(self):
return self.obj_dict['<STR_LIT:strict>']<EOL>
Get the graph's 'strict' mode (True or False). This option is only valid for top-level graphs.
f15701:c6:m17
def set_suppress_disconnected(self, val):
self.obj_dict['<STR_LIT>'] = val<EOL>
Suppress disconnected nodes in the output graph. This option will skip nodes in the graph with no incoming or outgoing edges. This option also works for subgraphs and has effect only in the current graph/subgraph.
f15701:c6:m18
def get_suppress_disconnected(self):
return self.obj_dict['<STR_LIT>']<EOL>
Get whether suppress_disconnected is set. Refer to set_suppress_disconnected for more information.
f15701:c6:m19
def add_node(self, graph_node):
if not isinstance(graph_node, Node):<EOL><INDENT>raise TypeError(<EOL>'<STR_LIT>' +<EOL>'<STR_LIT>' + str(graph_node))<EOL><DEDENT>node = self.get_node(graph_node.get_name())<EOL>if not node:<EOL><INDENT>self.obj_dict['<STR_LIT>'][graph_node.get_name()] = [<EOL>graph_node.obj_dict ]<EOL>graph_node.set_parent_graph(self.get_parent_graph())<EOL><DEDENT>else:<EOL><INDENT>self.obj_dict['<STR_LIT>'][graph_node.get_name()].append(<EOL>graph_node.obj_dict )<EOL><DEDENT>graph_node.set_sequence(self.get_next_sequence_number())<EOL>
Adds a node object to the graph. It takes a node object as its only argument and returns None.
f15701:c6:m21
def del_node(self, name, index=None):
if isinstance(name, Node):<EOL><INDENT>name = name.get_name()<EOL><DEDENT>if name in self.obj_dict['<STR_LIT>']:<EOL><INDENT>if (index is not None and<EOL>index < len(self.obj_dict['<STR_LIT>'][name])):<EOL><INDENT>del self.obj_dict['<STR_LIT>'][name][index]<EOL>return True<EOL><DEDENT>else:<EOL><INDENT>del self.obj_dict['<STR_LIT>'][name]<EOL>return True<EOL><DEDENT><DEDENT>return False<EOL>
Delete a node from the graph. Given a node's name, all node(s) with that name will be deleted if 'index' is not specified or set to None. If there are several nodes with that same name and 'index' is given, only the node in that position will be deleted. 'index' should be an integer specifying the position of the node to delete. If index is larger than the number of nodes with that name, no action is taken. If nodes are deleted it returns True; if no action is taken it returns False.
f15701:c6:m22
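The index semantics documented above can be sketched with a plain dict of name-to-entry lists; `del_node_sketch` is a hypothetical stand-in for pydot's internal node store (the real dict keys are masked in the body above), and it follows the documented behavior of taking no action for an out-of-range index.

```python
# Hedged sketch of the del_node() semantics documented above: nodes live
# in a name -> list-of-entries mapping; no index deletes every entry for
# that name, a valid index deletes just one, anything else is a no-op.
def del_node_sketch(nodes, name, index=None):
    if name in nodes:
        if index is None:
            del nodes[name]          # drop all nodes with this name
            return True
        if index < len(nodes[name]):
            del nodes[name][index]   # drop only the node at that position
            return True
    return False
```

For example, with two nodes named "a", `del_node_sketch(nodes, "a", index=0)` removes only the first entry, while `del_node_sketch(nodes, "a")` removes both.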
def get_node(self, name):
match = list()<EOL>if name in self.obj_dict['<STR_LIT>']:<EOL><INDENT>match.extend(<EOL>[Node(obj_dict=obj_dict)<EOL>for obj_dict in self.obj_dict['<STR_LIT>'][name]])<EOL><DEDENT>return match<EOL>
Retrieve a node from the graph. Given a node's name, a list of all Node instances with that name is returned. An empty list is returned if no node with that name exists.
f15701:c6:m23
def get_nodes(self):
return self.get_node_list()<EOL>
Get the list of Node instances.
f15701:c6:m24
def get_node_list(self):
node_objs = list()<EOL>for node in self.obj_dict['<STR_LIT>']:<EOL><INDENT>obj_dict_list = self.obj_dict['<STR_LIT>'][node]<EOL>node_objs.extend( [ Node( obj_dict = obj_d )<EOL>for obj_d in obj_dict_list ] )<EOL><DEDENT>return node_objs<EOL>
Get the list of Node instances. This method returns the list of Node instances composing the graph.
f15701:c6:m25
def add_edge(self, graph_edge):
if not isinstance(graph_edge, Edge):<EOL><INDENT>raise TypeError(<EOL>'<STR_LIT>' +<EOL>str(graph_edge))<EOL><DEDENT>edge_points = ( graph_edge.get_source(),<EOL>graph_edge.get_destination() )<EOL>if edge_points in self.obj_dict['<STR_LIT>']:<EOL><INDENT>edge_list = self.obj_dict['<STR_LIT>'][edge_points]<EOL>edge_list.append(graph_edge.obj_dict)<EOL><DEDENT>else:<EOL><INDENT>self.obj_dict['<STR_LIT>'][edge_points] = [ graph_edge.obj_dict ]<EOL><DEDENT>graph_edge.set_sequence( self.get_next_sequence_number() )<EOL>graph_edge.set_parent_graph( self.get_parent_graph() )<EOL>
Adds an edge object to the graph. It takes an edge object as its only argument and returns None.
f15701:c6:m26
def del_edge(self, src_or_list, dst=None, index=None):
if isinstance( src_or_list, (list, tuple)):<EOL><INDENT>if dst is not None and isinstance(dst, int):<EOL><INDENT>index = dst<EOL><DEDENT>src, dst = src_or_list<EOL><DEDENT>else:<EOL><INDENT>src, dst = src_or_list, dst<EOL><DEDENT>if isinstance(src, Node):<EOL><INDENT>src = src.get_name()<EOL><DEDENT>if isinstance(dst, Node):<EOL><INDENT>dst = dst.get_name()<EOL><DEDENT>if (src, dst) in self.obj_dict['<STR_LIT>']:<EOL><INDENT>if (index is not None and<EOL>index < len(self.obj_dict['<STR_LIT>'][(src, dst)])):<EOL><INDENT>del self.obj_dict['<STR_LIT>'][(src, dst)][index]<EOL>return True<EOL><DEDENT>else:<EOL><INDENT>del self.obj_dict['<STR_LIT>'][(src, dst)]<EOL>return True<EOL><DEDENT><DEDENT>return False<EOL>
Delete an edge from the graph. Given an edge's (source, destination) node names, all matching edge(s) will be deleted if 'index' is not specified or set to None. If there are several matching edges and 'index' is given, only the edge in that position will be deleted. 'index' should be an integer specifying the position of the edge to delete. If index is larger than the number of matching edges, no action is taken. If edges are deleted it returns True; if no action is taken it returns False.
f15701:c6:m27
def get_edge(self, src_or_list, dst=None):
if isinstance( src_or_list, (list, tuple)) and dst is None:<EOL><INDENT>edge_points = tuple(src_or_list)<EOL>edge_points_reverse = (edge_points[<NUM_LIT:1>], edge_points[<NUM_LIT:0>])<EOL><DEDENT>else:<EOL><INDENT>edge_points = (src_or_list, dst)<EOL>edge_points_reverse = (dst, src_or_list)<EOL><DEDENT>match = list()<EOL>if edge_points in self.obj_dict['<STR_LIT>'] or (<EOL>self.get_top_graph_type() == '<STR_LIT>' and<EOL>edge_points_reverse in self.obj_dict['<STR_LIT>']):<EOL><INDENT>edges_obj_dict = self.obj_dict['<STR_LIT>'].get(<EOL>edge_points,<EOL>self.obj_dict['<STR_LIT>'].get( edge_points_reverse, None ))<EOL>for edge_obj_dict in edges_obj_dict:<EOL><INDENT>match.append(<EOL>Edge(edge_points[<NUM_LIT:0>],<EOL>edge_points[<NUM_LIT:1>],<EOL>obj_dict=edge_obj_dict))<EOL><DEDENT><DEDENT>return match<EOL>
Retrieve an edge from the graph. Given an edge's source and destination, the corresponding Edge instance(s) will be returned. If one or more edges exist with that source and destination, a list of Edge instances is returned. An empty list is returned otherwise.
f15701:c6:m28
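The body above also checks the reversed (destination, source) tuple when the top-level graph is undirected. That lookup rule can be sketched stand-alone (`find_edges` is a hypothetical helper mirroring the described behavior):

```python
# Sketch of the lookup rule described above: in an undirected graph, an
# edge stored under (src, dst) also matches a query for (dst, src); in a
# directed graph only the exact order matches.
def find_edges(edges, src, dst, directed=True):
    if (src, dst) in edges:
        return edges[(src, dst)]
    if not directed and (dst, src) in edges:
        return edges[(dst, src)]
    return []
```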
def get_edge_list(self):
edge_objs = list()<EOL>for edge in self.obj_dict['<STR_LIT>']:<EOL><INDENT>obj_dict_list = self.obj_dict['<STR_LIT>'][edge]<EOL>edge_objs.extend(<EOL>[Edge(obj_dict=obj_d)<EOL>for obj_d in obj_dict_list])<EOL><DEDENT>return edge_objs<EOL>
Get the list of Edge instances. This method returns the list of Edge instances composing the graph.
f15701:c6:m30
def add_subgraph(self, sgraph):
if (not isinstance(sgraph, Subgraph) and<EOL>not isinstance(sgraph, Cluster)):<EOL><INDENT>raise TypeError(<EOL>'<STR_LIT>' +<EOL>str(sgraph))<EOL><DEDENT>if sgraph.get_name() in self.obj_dict['<STR_LIT>']:<EOL><INDENT>sgraph_list = self.obj_dict['<STR_LIT>'][ sgraph.get_name() ]<EOL>sgraph_list.append( sgraph.obj_dict )<EOL><DEDENT>else:<EOL><INDENT>self.obj_dict['<STR_LIT>'][sgraph.get_name()] = [<EOL>sgraph.obj_dict]<EOL><DEDENT>sgraph.set_sequence( self.get_next_sequence_number() )<EOL>sgraph.set_parent_graph( self.get_parent_graph() )<EOL>
Adds a subgraph object to the graph. It takes a subgraph object as its only argument and returns None.
f15701:c6:m31
def get_subgraph(self, name):
match = list()<EOL>if name in self.obj_dict['<STR_LIT>']:<EOL><INDENT>sgraphs_obj_dict = self.obj_dict['<STR_LIT>'].get( name )<EOL>for obj_dict_list in sgraphs_obj_dict:<EOL><INDENT>match.append( Subgraph( obj_dict = obj_dict_list ) )<EOL><DEDENT><DEDENT>return match<EOL>
Retrieve a subgraph from the graph. Given a subgraph's name, the corresponding Subgraph instance will be returned. If one or more subgraphs exist with the same name, a list of Subgraph instances is returned. An empty list is returned otherwise.
f15701:c6:m32
def get_subgraph_list(self):
sgraph_objs = list()<EOL>for sgraph in self.obj_dict['<STR_LIT>']:<EOL><INDENT>obj_dict_list = self.obj_dict['<STR_LIT>'][sgraph]<EOL>sgraph_objs.extend(<EOL>[Subgraph(obj_dict=obj_d)<EOL>for obj_d in obj_dict_list])<EOL><DEDENT>return sgraph_objs<EOL>
Get the list of Subgraph instances. This method returns the list of Subgraph instances in the graph.
f15701:c6:m34
def to_string(self):
graph = list()<EOL>if self.obj_dict.get('<STR_LIT:strict>', None) is not None:<EOL><INDENT>if (self == self.get_parent_graph() and<EOL>self.obj_dict['<STR_LIT:strict>']):<EOL><INDENT>graph.append('<STR_LIT>')<EOL><DEDENT><DEDENT>graph_type = self.obj_dict['<STR_LIT:type>']<EOL>if (graph_type == '<STR_LIT>' and<EOL>not self.obj_dict.get('<STR_LIT>', True)):<EOL><INDENT>graph_type = '<STR_LIT>'<EOL><DEDENT>s = '<STR_LIT>'.format(<EOL>type=graph_type,<EOL>name=self.obj_dict['<STR_LIT:name>'])<EOL>graph.append(s)<EOL>for attr in sorted(self.obj_dict['<STR_LIT>']):<EOL><INDENT>if self.obj_dict['<STR_LIT>'].get(attr, None) is not None:<EOL><INDENT>val = self.obj_dict['<STR_LIT>'].get(attr)<EOL>if val == '<STR_LIT>':<EOL><INDENT>val = '<STR_LIT>'<EOL><DEDENT>if val is not None:<EOL><INDENT>graph.append('<STR_LIT>' %<EOL>(attr, quote_if_necessary(val)))<EOL><DEDENT>else:<EOL><INDENT>graph.append( attr )<EOL><DEDENT>graph.append( '<STR_LIT>' )<EOL><DEDENT><DEDENT>edges_done = set()<EOL>edge_obj_dicts = list()<EOL>for k in self.obj_dict['<STR_LIT>']:<EOL><INDENT>edge_obj_dicts.extend(self.obj_dict['<STR_LIT>'][k])<EOL><DEDENT>if edge_obj_dicts:<EOL><INDENT>edge_src_set, edge_dst_set = list(zip(<EOL>*[obj['<STR_LIT>'] for obj in edge_obj_dicts]))<EOL>edge_src_set, edge_dst_set = set(edge_src_set), set(edge_dst_set)<EOL><DEDENT>else:<EOL><INDENT>edge_src_set, edge_dst_set = set(), set()<EOL><DEDENT>node_obj_dicts = list()<EOL>for k in self.obj_dict['<STR_LIT>']:<EOL><INDENT>node_obj_dicts.extend(self.obj_dict['<STR_LIT>'][k])<EOL><DEDENT>sgraph_obj_dicts = list()<EOL>for k in self.obj_dict['<STR_LIT>']:<EOL><INDENT>sgraph_obj_dicts.extend(self.obj_dict['<STR_LIT>'][k])<EOL><DEDENT>obj_list = [(obj['<STR_LIT>'], obj)<EOL>for obj in (edge_obj_dicts +<EOL>node_obj_dicts + sgraph_obj_dicts) ]<EOL>obj_list.sort(key=lambda x: x[<NUM_LIT:0>])<EOL>for idx, obj in obj_list:<EOL><INDENT>if obj['<STR_LIT:type>'] == '<STR_LIT>':<EOL><INDENT>node = Node(obj_dict=obj)<EOL>if self.obj_dict.get('<STR_LIT>', False):<EOL><INDENT>if (node.get_name() not in edge_src_set and<EOL>node.get_name() not in edge_dst_set):<EOL><INDENT>continue<EOL><DEDENT><DEDENT>graph.append( node.to_string()+'<STR_LIT:\n>' )<EOL><DEDENT>elif obj['<STR_LIT:type>'] == '<STR_LIT>':<EOL><INDENT>edge = Edge(obj_dict=obj)<EOL>if (self.obj_dict.get('<STR_LIT>', False) and<EOL>edge in edges_done):<EOL><INDENT>continue<EOL><DEDENT>graph.append( edge.to_string() + '<STR_LIT:\n>' )<EOL>edges_done.add(edge)<EOL><DEDENT>else:<EOL><INDENT>sgraph = Subgraph(obj_dict=obj)<EOL>graph.append( sgraph.to_string()+'<STR_LIT:\n>' )<EOL><DEDENT><DEDENT>graph.append( '<STR_LIT>' )<EOL>return '<STR_LIT>'.join(graph)<EOL>
Return string representation of graph in DOT language. @return: graph and subelements @rtype: `str`
f15701:c6:m36
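One detail worth calling out in the body above: every node, edge and subgraph obj_dict carries a sequence number, and to_string() sorts all three kinds together by that number, so output follows global insertion order rather than being grouped by kind. A minimal stand-alone illustration (the sequence numbers and DOT fragments here are made up):

```python
# Objects tagged with their sequence number, listed in arbitrary order.
items = [(2, "node", "b"), (1, "node", "a"), (3, "edge", "a -> b")]
items.sort(key=lambda pair: pair[0])   # global insertion order, like obj_list
body = "\n".join("  {0};".format(text) for _seq, _kind, text in items)
dot_text = "digraph G {\n" + body + "\n}"
```

Node "a" (sequence 1) is emitted before node "b" (sequence 2) even though it was listed second.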
def set_shape_files(self, file_paths):
if isinstance( file_paths, str_type):<EOL><INDENT>self.shape_files.append( file_paths )<EOL><DEDENT>if isinstance( file_paths, (list, tuple) ):<EOL><INDENT>self.shape_files.extend( file_paths )<EOL><DEDENT>
Add the paths of the required image files. If the graph needs graphic objects to be used as shapes (or otherwise), those files need to be in the same folder from which the graph is going to be rendered. Alternatively, the absolute path to the files can be specified when including the graphics in the graph. The files at the path(s) specified as arguments to this method will be copied to the same temporary location where the graph is going to be rendered.
f15701:c9:m3
def set_prog(self, prog):
self.prog = prog<EOL>
Set the default program in charge of processing the dot file into a graph.
f15701:c9:m4
def write(self, path, prog=None, format='<STR_LIT>', encoding=None):
if prog is None:<EOL><INDENT>prog = self.prog<EOL><DEDENT>if format == '<STR_LIT>':<EOL><INDENT>s = self.to_string()<EOL>if not PY3:<EOL><INDENT>s = unicode(s)<EOL><DEDENT>with io.open(path, mode='<STR_LIT>', encoding=encoding) as f:<EOL><INDENT>f.write(s)<EOL><DEDENT><DEDENT>else:<EOL><INDENT>s = self.create(prog, format, encoding=encoding)<EOL>with io.open(path, mode='<STR_LIT:wb>') as f:<EOL><INDENT>f.write(s)<EOL><DEDENT><DEDENT>return True<EOL>
Writes a graph to a file. Given a filename 'path' it will open/create and truncate such file and write on it a representation of the graph defined by the dot object in the format specified by 'format', using the encoding specified by `encoding` for text. The format 'raw' is used to dump the string representation of the Dot object, without further processing. The output can be processed by any of the Graphviz tools, defined in 'prog', which defaults to 'dot'. Returns True or False according to the success of the write operation. There's also the preferred possibility of using: write_'format'(path, prog='program'), which is automatically defined for all the supported formats. [write_ps(), write_gif(), write_dia(), ...] The encoding is passed to `open` [1]. [1] https://docs.python.org/3/library/functions.html#open
f15701:c9:m5
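The 'raw' branch described above just writes the DOT text with `io.open` and an explicit encoding, no Graphviz involved. A hedged sketch (`write_raw` is a hypothetical helper, not part of pydot's API):

```python
# Sketch of the 'raw' path: dump the DOT text to a file with an explicit
# encoding, without invoking any Graphviz program.
import io
import os
import tempfile

def write_raw(dot_text, path, encoding="utf-8"):
    with io.open(path, mode="wt", encoding=encoding) as f:
        f.write(dot_text)
    return True

# Round-trip through a temporary file, like pydot's own write() tests.
fd, path = tempfile.mkstemp(suffix=".dot")
os.close(fd)
write_raw(u"digraph G { a -> b; }", path)
```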
def create(self, prog=None, format='<STR_LIT>', encoding=None):
if prog is None:<EOL><INDENT>prog = self.prog<EOL><DEDENT>assert prog is not None<EOL>if isinstance(prog, (list, tuple)):<EOL><INDENT>prog, args = prog[<NUM_LIT:0>], prog[<NUM_LIT:1>:]<EOL><DEDENT>else:<EOL><INDENT>args = []<EOL><DEDENT>tmp_fd, tmp_name = tempfile.mkstemp()<EOL>os.close(tmp_fd)<EOL>self.write(tmp_name, encoding=encoding)<EOL>tmp_dir = os.path.dirname(tmp_name)<EOL>for img in self.shape_files:<EOL><INDENT>f = open(img, '<STR_LIT:rb>')<EOL>f_data = f.read()<EOL>f.close()<EOL>f = open(os.path.join(tmp_dir, os.path.basename(img)), '<STR_LIT:wb>')<EOL>f.write(f_data)<EOL>f.close()<EOL><DEDENT>arguments = ['<STR_LIT>'.format(format), ] + args + [tmp_name]<EOL>try:<EOL><INDENT>stdout_data, stderr_data, process = call_graphviz(<EOL>program=prog,<EOL>arguments=arguments,<EOL>working_dir=tmp_dir,<EOL>)<EOL><DEDENT>except OSError as e:<EOL><INDENT>if e.errno == errno.ENOENT:<EOL><INDENT>args = list(e.args)<EOL>args[<NUM_LIT:1>] = '<STR_LIT>'.format(<EOL>prog=prog)<EOL>raise OSError(*args)<EOL><DEDENT>else:<EOL><INDENT>raise<EOL><DEDENT><DEDENT>for img in self.shape_files:<EOL><INDENT>os.unlink(os.path.join(tmp_dir, os.path.basename(img)))<EOL><DEDENT>os.unlink(tmp_name)<EOL>if process.returncode != <NUM_LIT:0>:<EOL><INDENT>message = (<EOL>'<STR_LIT>'<EOL>'<STR_LIT>'<EOL>).format(<EOL>prog=prog,<EOL>arguments=arguments,<EOL>code=process.returncode,<EOL>out=stdout_data,<EOL>err=stderr_data,<EOL>)<EOL>print(message)<EOL><DEDENT>assert process.returncode == <NUM_LIT:0>, process.returncode<EOL>return stdout_data<EOL>
Creates and returns a binary image for the graph. create will write the graph to a temporary dot file in the encoding specified by `encoding` and process it with the program given by 'prog' (which defaults to 'twopi'), reading the binary image output and returning it as: - `str` of bytes in Python 2 - `bytes` in Python 3 There's also the preferred possibility of using: create_'format'(prog='program'), which is automatically defined for all the supported formats, for example: - `create_ps()` - `create_gif()` - `create_dia()` If 'prog' is a list instead of a string, then the first item is expected to be the program name, followed by any optional command-line arguments for it: [ 'twopi', '-Tdot', '-s10' ] @param prog: either: - the name of a GraphViz executable that can be found in the `$PATH`, or - the absolute path to a GraphViz executable. If you have added GraphViz to the `$PATH` and use its executables as installed (without renaming any of them), then their names are: - `'dot'` - `'twopi'` - `'neato'` - `'circo'` - `'fdp'` - `'sfdp'` On Windows, these have the notorious ".exe" extension that, only for the above strings, will be added automatically. The `$PATH` is inherited from `os.env['PATH']` and passed to `subprocess.Popen` using the `env` argument. If you haven't added GraphViz to your `$PATH` on Windows, then you may want to give the absolute path to the executable (for example, to `dot.exe`) in `prog`.
f15701:c9:m6
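The prog-as-list handling in create() above can be sketched as follows. `build_command` is a hypothetical helper; `-T<format>` is Graphviz's real output-format flag, which the body appends before the extra arguments and the temporary file name:

```python
# Sketch of the prog handling described above: a list such as
# ['twopi', '-Tdot', '-s10'] splits into the program name plus extra
# command-line arguments; a plain string is the program name alone.
def build_command(prog, fmt, input_file):
    if isinstance(prog, (list, tuple)):
        prog, extra = prog[0], list(prog[1:])
    else:
        extra = []
    return [prog, "-T{0}".format(fmt)] + extra + [input_file]
```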
def parse_dot_data(s):
global top_graphs<EOL>top_graphs = list()<EOL>try:<EOL><INDENT>graphparser = graph_definition()<EOL>graphparser.parseWithTabs()<EOL>tokens = graphparser.parseString(s)<EOL>return list(tokens)<EOL><DEDENT>except ParseException as err:<EOL><INDENT>print(<EOL>err.line +<EOL>"<STR_LIT:U+0020>"*(err.column-<NUM_LIT:1>) + "<STR_LIT>" +<EOL>err)<EOL>return None<EOL><DEDENT>
Parse DOT description in (unicode) string `s`. @return: Graphs that result from parsing. @rtype: `list` of `pydot.Dot`
f15702:m13
def parse_args():
parser = argparse.ArgumentParser()<EOL>parser.add_argument(<EOL>'<STR_LIT>', action='<STR_LIT:store_true>',<EOL>help=('<STR_LIT>'<EOL>'<STR_LIT>'))<EOL>args, unknown = parser.parse_known_args()<EOL>sys.argv = [sys.argv[<NUM_LIT:0>]] + unknown<EOL>return args.no_check<EOL>
Return arguments.
f15703:m1
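The parse_known_args() pattern above can be demonstrated stand-alone. The actual flag string is masked in the record; `--no-check` is assumed from the returned `args.no_check` attribute:

```python
# parse_known_args() splits argv into recognized options and leftovers;
# the original hands the leftovers back through sys.argv.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--no-check", action="store_true",
                    help="skip the check (assumed wording)")
args, unknown = parser.parse_known_args(["--no-check", "leftover.py"])
```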
def __eq__(self, other):
if isinstance(other, UserMixin):<EOL><INDENT>return self.get_id() == other.get_id()<EOL><DEDENT>return NotImplemented<EOL>
Checks the equality of two `UserMixin` objects using `get_id`.
f15706:c0:m4
def __ne__(self, other):
equal = self.__eq__(other)<EOL>if equal is NotImplemented:<EOL><INDENT>return NotImplemented<EOL><DEDENT>return not equal<EOL>
Checks the inequality of two `UserMixin` objects using `get_id`.
f15706:c0:m5
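The two records above combine into a runnable class. Returning `NotImplemented` (rather than `False`) from `__eq__` for foreign types lets Python fall back to the other operand's comparison, and `__ne__` preserves that delegation; the `__init__`/`get_id` here are a minimal assumed scaffold around the methods shown:

```python
# __eq__ compares ids only for UserMixin instances and returns
# NotImplemented otherwise; __ne__ delegates to __eq__ and passes
# NotImplemented through untouched.
class UserMixin(object):
    def __init__(self, uid):
        self.uid = uid

    def get_id(self):
        return self.uid

    def __eq__(self, other):
        if isinstance(other, UserMixin):
            return self.get_id() == other.get_id()
        return NotImplemented

    def __ne__(self, other):
        equal = self.__eq__(other)
        if equal is NotImplemented:
            return NotImplemented
        return not equal
```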
def encode_cookie(payload, key=None):
return u'<STR_LIT>'.format(payload, _cookie_digest(payload, key=key))<EOL>
This will encode a ``unicode`` value into a cookie, and sign that cookie with the app's secret key. :param payload: The value to encode, as `unicode`. :type payload: unicode :param key: The key to use when creating the cookie digest. If not specified, the SECRET_KEY value from app config will be used. :type key: str
f15710:m0
def decode_cookie(cookie, key=None):
try:<EOL><INDENT>payload, digest = cookie.rsplit(u'<STR_LIT:|>', <NUM_LIT:1>)<EOL>if hasattr(digest, '<STR_LIT>'):<EOL><INDENT>digest = digest.decode('<STR_LIT:ascii>') <EOL><DEDENT><DEDENT>except ValueError:<EOL><INDENT>return<EOL><DEDENT>if safe_str_cmp(_cookie_digest(payload, key=key), digest):<EOL><INDENT>return payload<EOL><DEDENT>
This decodes a cookie given by `encode_cookie`. If verification of the cookie fails, ``None`` will be implicitly returned. :param cookie: An encoded cookie. :type cookie: str :param key: The key to use when creating the cookie digest. If not specified, the SECRET_KEY value from app config will be used. :type key: str
f15710:m1
def make_next_param(login_url, current_url):
l = urlparse(login_url)<EOL>c = urlparse(current_url)<EOL>if (not l.scheme or l.scheme == c.scheme) and (not l.netloc or l.netloc == c.netloc):<EOL><INDENT>return urlunparse(('<STR_LIT>', '<STR_LIT>', c.path, c.params, c.query, '<STR_LIT>'))<EOL><DEDENT>return current_url<EOL>
Reduces the scheme and host from a given URL so it can be passed to the given `login` URL more efficiently. :param login_url: The login URL being redirected to. :type login_url: str :param current_url: The URL to reduce. :type current_url: str
f15710:m2
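The reduction above can be mirrored directly with the stdlib's `urllib.parse`: when the login URL's scheme and host are absent or match the current URL's, only the path, params and query survive into the 'next' value.

```python
# Stdlib mirror of make_next_param: strip scheme/netloc when they are
# redundant relative to the login URL, otherwise keep the full URL.
from urllib.parse import urlparse, urlunparse

def next_param(login_url, current_url):
    l = urlparse(login_url)
    c = urlparse(current_url)
    if ((not l.scheme or l.scheme == c.scheme) and
            (not l.netloc or l.netloc == c.netloc)):
        return urlunparse(("", "", c.path, c.params, c.query, ""))
    return current_url
```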
def expand_login_view(login_view):
if login_view.startswith(('<STR_LIT>', '<STR_LIT>', '<STR_LIT:/>')):<EOL><INDENT>return login_view<EOL><DEDENT>else:<EOL><INDENT>return url_for(login_view)<EOL><DEDENT>
Returns the url for the login view, expanding the view name to a url if needed. :param login_view: The name of the login view or a URL for the login view. :type login_view: str
f15710:m3
def login_url(login_view, next_url=None, next_field='<STR_LIT>'):
base = expand_login_view(login_view)<EOL>if next_url is None:<EOL><INDENT>return base<EOL><DEDENT>parsed_result = urlparse(base)<EOL>md = url_decode(parsed_result.query)<EOL>md[next_field] = make_next_param(base, next_url)<EOL>netloc = current_app.config.get('<STR_LIT>') or parsed_result.netloc<EOL>parsed_result = parsed_result._replace(netloc=netloc,<EOL>query=url_encode(md, sort=True))<EOL>return urlunparse(parsed_result)<EOL>
Creates a URL for redirecting to a login page. If only `login_view` is provided, this will just return the URL for it. If `next_url` is provided, however, this will append a ``next=URL`` parameter to the query string so that the login view can redirect back to that URL. Flask-Login's default unauthorized handler uses this function when redirecting to your login url. To force the host name used, set `FORCE_HOST_FOR_REDIRECTS` to a host. This prevents redirecting to external sites if the request headers Host or X-Forwarded-For are present. :param login_view: The name of the login view. (Alternately, the actual URL to the login view.) :type login_view: str :param next_url: The URL to give the login view for redirection. :type next_url: str :param next_field: What field to store the next URL in. (It defaults to ``next``.) :type next_field: str
f15710:m4
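A hedged stdlib sketch of the URL construction above (the real code uses Werkzeug's url_decode/url_encode; `build_login_url` is a hypothetical name, and the forced-netloc config lookup is omitted):

```python
# Append the 'next' value to the login view's query string, keeping any
# existing parameters and sorting the result, as the body above does.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def build_login_url(base, next_url=None, next_field="next"):
    if next_url is None:
        return base
    parts = urlparse(base)
    query = dict(parse_qsl(parts.query))
    query[next_field] = next_url
    parts = parts._replace(query=urlencode(sorted(query.items())))
    return urlunparse(parts)
```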
def login_fresh():
return session.get('<STR_LIT>', False)<EOL>
This returns ``True`` if the current login is fresh.
f15710:m5
def login_user(user, remember=False, duration=None, force=False, fresh=True):
if not force and not user.is_active:<EOL><INDENT>return False<EOL><DEDENT>user_id = getattr(user, current_app.login_manager.id_attribute)()<EOL>session['<STR_LIT>'] = user_id<EOL>session['<STR_LIT>'] = fresh<EOL>session['<STR_LIT>'] = current_app.login_manager._session_identifier_generator()<EOL>if remember:<EOL><INDENT>session['<STR_LIT>'] = '<STR_LIT>'<EOL>if duration is not None:<EOL><INDENT>try:<EOL><INDENT>session['<STR_LIT>'] = (duration.microseconds +<EOL>(duration.seconds +<EOL>duration.days * <NUM_LIT> * <NUM_LIT>) *<EOL><NUM_LIT:10>**<NUM_LIT:6>) / <NUM_LIT>**<NUM_LIT:6><EOL><DEDENT>except AttributeError:<EOL><INDENT>raise Exception('<STR_LIT>'<EOL>'<STR_LIT>'.format(duration))<EOL><DEDENT><DEDENT><DEDENT>current_app.login_manager._update_request_context_with_user(user)<EOL>user_logged_in.send(current_app._get_current_object(), user=_get_user())<EOL>return True<EOL>
Logs a user in. You should pass the actual user object to this. If the user's `is_active` property is ``False``, they will not be logged in unless `force` is ``True``. This will return ``True`` if the log in attempt succeeds, and ``False`` if it fails (i.e. because the user is inactive). :param user: The user object to log in. :type user: object :param remember: Whether to remember the user after their session expires. Defaults to ``False``. :type remember: bool :param duration: The amount of time before the remember cookie expires. If ``None`` the value set in the settings is used. Defaults to ``None``. :type duration: :class:`datetime.timedelta` :param force: If the user is inactive, setting this to ``True`` will log them in regardless. Defaults to ``False``. :type force: bool :param fresh: setting this to ``False`` will log in the user with a session marked as not "fresh". Defaults to ``True``. :type fresh: bool
f15710:m6
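The duration handling in the body above converts a `datetime.timedelta` into seconds by hand. The numeric literals are masked in the record; assuming the usual 24 * 3600 seconds-per-day conversion, the expression is equivalent to `timedelta.total_seconds()`:

```python
# Manual timedelta -> seconds conversion, as the body above appears to
# do (24 * 3600 assumed for the masked literals).
from datetime import timedelta

def remember_seconds(duration):
    return (duration.microseconds +
            (duration.seconds + duration.days * 24 * 3600) * 10**6) / 10**6
```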
def logout_user():
user = _get_user()<EOL>if '<STR_LIT>' in session:<EOL><INDENT>session.pop('<STR_LIT>')<EOL><DEDENT>if '<STR_LIT>' in session:<EOL><INDENT>session.pop('<STR_LIT>')<EOL><DEDENT>if '<STR_LIT>' in session:<EOL><INDENT>session.pop('<STR_LIT>')<EOL><DEDENT>cookie_name = current_app.config.get('<STR_LIT>', COOKIE_NAME)<EOL>if cookie_name in request.cookies:<EOL><INDENT>session['<STR_LIT>'] = '<STR_LIT>'<EOL>if '<STR_LIT>' in session:<EOL><INDENT>session.pop('<STR_LIT>')<EOL><DEDENT><DEDENT>user_logged_out.send(current_app._get_current_object(), user=user)<EOL>current_app.login_manager._update_request_context_with_user()<EOL>return True<EOL>
Logs a user out. (You do not need to pass the actual user.) This will also clean up the remember me cookie if it exists.
f15710:m7
def confirm_login():
session['<STR_LIT>'] = True<EOL>session['<STR_LIT>'] = current_app.login_manager._session_identifier_generator()<EOL>user_login_confirmed.send(current_app._get_current_object())<EOL>
This sets the current session as fresh. Sessions become stale when they are reloaded from a cookie.
f15710:m8
def login_required(func):
@wraps(func)<EOL>def decorated_view(*args, **kwargs):<EOL><INDENT>if request.method in EXEMPT_METHODS:<EOL><INDENT>return func(*args, **kwargs)<EOL><DEDENT>elif current_app.config.get('<STR_LIT>'):<EOL><INDENT>return func(*args, **kwargs)<EOL><DEDENT>elif not current_user.is_authenticated:<EOL><INDENT>return current_app.login_manager.unauthorized()<EOL><DEDENT>return func(*args, **kwargs)<EOL><DEDENT>return decorated_view<EOL>
If you decorate a view with this, it will ensure that the current user is logged in and authenticated before calling the actual view. (If they are not, it calls the :attr:`LoginManager.unauthorized` callback.) For example:: @app.route('/post') @login_required def post(): pass If there are only certain times you need to require that your user is logged in, you can do so with:: if not current_user.is_authenticated: return current_app.login_manager.unauthorized() ...which is essentially the code that this function adds to your views. It can be convenient to globally turn off authentication when unit testing. To enable this, if the application configuration variable `LOGIN_DISABLED` is set to `True`, this decorator will be ignored. .. Note :: Per `W3 guidelines for CORS preflight requests <http://www.w3.org/TR/cors/#cross-origin-request-with-preflight-0>`_, HTTP ``OPTIONS`` requests are exempt from login checks. :param func: The view function to decorate. :type func: function
f15710:m9
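The decorator pattern described above can be sketched with the stdlib alone. `require_login`, its predicate argument, and the string responses are hypothetical; the real decorator also exempts ``OPTIONS`` requests and honors ``LOGIN_DISABLED``, which this sketch omits:

```python
# Wrap the view so an authentication predicate runs first; @wraps keeps
# the wrapped view's name and docstring intact.
from functools import wraps

def require_login(is_authenticated):
    def decorator(func):
        @wraps(func)
        def decorated_view(*args, **kwargs):
            if not is_authenticated():
                return "401 Unauthorized"   # stand-in for unauthorized()
            return func(*args, **kwargs)
        return decorated_view
    return decorator

@require_login(lambda: False)
def post():
    return "posted"
```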
def fresh_login_required(func):
@wraps(func)<EOL>def decorated_view(*args, **kwargs):<EOL><INDENT>if request.method in EXEMPT_METHODS:<EOL><INDENT>return func(*args, **kwargs)<EOL><DEDENT>elif current_app.config.get('<STR_LIT>'):<EOL><INDENT>return func(*args, **kwargs)<EOL><DEDENT>elif not current_user.is_authenticated:<EOL><INDENT>return current_app.login_manager.unauthorized()<EOL><DEDENT>elif not login_fresh():<EOL><INDENT>return current_app.login_manager.needs_refresh()<EOL><DEDENT>return func(*args, **kwargs)<EOL><DEDENT>return decorated_view<EOL>
If you decorate a view with this, it will ensure that the current user's login is fresh - i.e. their session was not restored from a 'remember me' cookie. Sensitive operations, like changing a password or e-mail, should be protected with this, to impede the efforts of cookie thieves. If the user is not authenticated, :meth:`LoginManager.unauthorized` is called as normal. If they are authenticated, but their session is not fresh, it will call :meth:`LoginManager.needs_refresh` instead. (In that case, you will need to provide a :attr:`LoginManager.refresh_view`.) Behaves identically to the :func:`login_required` decorator with respect to configuration variables. .. Note :: Per `W3 guidelines for CORS preflight requests <http://www.w3.org/TR/cors/#cross-origin-request-with-preflight-0>`_, HTTP ``OPTIONS`` requests are exempt from login checks. :param func: The view function to decorate. :type func: function
f15710:m10
def set_login_view(login_view, blueprint=None):
num_login_views = len(current_app.login_manager.blueprint_login_views)<EOL>if blueprint is not None or num_login_views != <NUM_LIT:0>:<EOL><INDENT>(current_app.login_manager<EOL>.blueprint_login_views[blueprint.name]) = login_view<EOL>if (current_app.login_manager.login_view is not None and<EOL>None not in current_app.login_manager.blueprint_login_views):<EOL><INDENT>(current_app.login_manager<EOL>.blueprint_login_views[None]) = (current_app.login_manager<EOL>.login_view)<EOL><DEDENT>current_app.login_manager.login_view = None<EOL><DEDENT>else:<EOL><INDENT>current_app.login_manager.login_view = login_view<EOL><DEDENT>
Sets the login view for the app or blueprint. If a blueprint is passed, the login view is set for this blueprint on ``blueprint_login_views``. :param login_view: The user object to log in. :type login_view: str :param blueprint: The blueprint which this login view should be set on. Defaults to ``None``. :type blueprint: object
f15710:m11
def setup_app(self, app, add_context_processor=True):
warnings.warn('<STR_LIT>',<EOL>DeprecationWarning)<EOL>self.init_app(app, add_context_processor)<EOL>
This method has been deprecated. Please use :meth:`LoginManager.init_app` instead.
f15711:c0:m1
def init_app(self, app, add_context_processor=True):
app.login_manager = self<EOL>app.after_request(self._update_remember_cookie)<EOL>if add_context_processor:<EOL><INDENT>app.context_processor(_user_context_processor)<EOL><DEDENT>
Configures an application. This registers an `after_request` call, and attaches this `LoginManager` to it as `app.login_manager`. :param app: The :class:`flask.Flask` object to configure. :type app: :class:`flask.Flask` :param add_context_processor: Whether to add a context processor to the app that adds a `current_user` variable to the template. Defaults to ``True``. :type add_context_processor: bool
f15711:c0:m2
def unauthorized(self):
user_unauthorized.send(current_app._get_current_object())<EOL>if self.unauthorized_callback:<EOL><INDENT>return self.unauthorized_callback()<EOL><DEDENT>if request.blueprint in self.blueprint_login_views:<EOL><INDENT>login_view = self.blueprint_login_views[request.blueprint]<EOL><DEDENT>else:<EOL><INDENT>login_view = self.login_view<EOL><DEDENT>if not login_view:<EOL><INDENT>abort(<NUM_LIT>)<EOL><DEDENT>if self.login_message:<EOL><INDENT>if self.localize_callback is not None:<EOL><INDENT>flash(self.localize_callback(self.login_message),<EOL>category=self.login_message_category)<EOL><DEDENT>else:<EOL><INDENT>flash(self.login_message, category=self.login_message_category)<EOL><DEDENT><DEDENT>config = current_app.config<EOL>if config.get('<STR_LIT>', USE_SESSION_FOR_NEXT):<EOL><INDENT>login_url = expand_login_view(login_view)<EOL>session['<STR_LIT>'] = self._session_identifier_generator()<EOL>session['<STR_LIT>'] = make_next_param(login_url, request.url)<EOL>redirect_url = make_login_url(login_view)<EOL><DEDENT>else:<EOL><INDENT>redirect_url = make_login_url(login_view, next_url=request.url)<EOL><DEDENT>return redirect(redirect_url)<EOL>
This is called when the user is required to log in. If you register a callback with :meth:`LoginManager.unauthorized_handler`, then it will be called. Otherwise, it will take the following actions: - Flash :attr:`LoginManager.login_message` to the user. - If the app is using blueprints find the login view for the current blueprint using `blueprint_login_views`. If the app is not using blueprints or the login view for the current blueprint is not specified use the value of `login_view`. - Redirect the user to the login view. (The page they were attempting to access will be passed in the ``next`` query string variable, so you can redirect there if present instead of the homepage. Alternatively, it will be added to the session as ``next`` if USE_SESSION_FOR_NEXT is set.) If :attr:`LoginManager.login_view` is not defined, then it will simply raise a HTTP 401 (Unauthorized) error instead. This should be returned from a view or before/after_request function, otherwise the redirect will have no effect.
f15711:c0:m3
def user_loader(self, callback):
self._user_callback = callback<EOL>return callback<EOL>
This sets the callback for reloading a user from the session. The function you set should take a user ID (a ``unicode``) and return a user object, or ``None`` if the user does not exist. :param callback: The callback for retrieving a user object. :type callback: callable
f15711:c0:m4
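The `user_loader` record above is a decorator that stores a callback mapping a user ID to a user object. A minimal sketch of that registration pattern, not Flask-Login itself (`MiniLoginManager` and the in-memory `users` store are assumptions for illustration):

```python
# Minimal sketch of the user_loader registration pattern: the manager
# remembers a callback that turns a user ID into a user object (or None).
class MiniLoginManager:
    def __init__(self):
        self._user_callback = None

    def user_loader(self, callback):
        # Decorator: store the callback and hand it back unchanged.
        self._user_callback = callback
        return callback

    def load_user(self, user_id):
        # Return the user object, or None if the ID is unknown.
        return self._user_callback(user_id) if self._user_callback else None


manager = MiniLoginManager()
users = {"42": {"id": "42", "name": "alice"}}  # hypothetical user store


@manager.user_loader
def load_user(user_id):
    return users.get(user_id)
```

The decorator returns the callback unchanged, so `load_user` remains directly callable after registration.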
def header_loader(self, callback):
print('<STR_LIT>' +<EOL>'<STR_LIT>')<EOL>self._header_callback = callback<EOL>return callback<EOL>
This function has been deprecated. Please use :meth:`LoginManager.request_loader` instead. This sets the callback for loading a user from a header value. The function you set should take an authentication token and return a user object, or `None` if the user does not exist. :param callback: The callback for retrieving a user object. :type callback: callable
f15711:c0:m5
def request_loader(self, callback):
self._request_callback = callback<EOL>return callback<EOL>
This sets the callback for loading a user from a Flask request. The function you set should take a Flask request object and return a user object, or `None` if the user does not exist. :param callback: The callback for retrieving a user object. :type callback: callable
f15711:c0:m6
def unauthorized_handler(self, callback):
self.unauthorized_callback = callback<EOL>return callback<EOL>
This will set the callback for the `unauthorized` method, which among other things is used by `login_required`. It takes no arguments, and should return a response to be sent to the user instead of their normal view. :param callback: The callback for unauthorized users. :type callback: callable
f15711:c0:m7
def needs_refresh_handler(self, callback):
self.needs_refresh_callback = callback<EOL>return callback<EOL>
This will set the callback for the `needs_refresh` method, which among other things is used by `fresh_login_required`. It takes no arguments, and should return a response to be sent to the user instead of their normal view. :param callback: The callback for unauthorized users. :type callback: callable
f15711:c0:m8
def needs_refresh(self):
user_needs_refresh.send(current_app._get_current_object())<EOL>if self.needs_refresh_callback:<EOL><INDENT>return self.needs_refresh_callback()<EOL><DEDENT>if not self.refresh_view:<EOL><INDENT>abort(<NUM_LIT>)<EOL><DEDENT>if self.localize_callback is not None:<EOL><INDENT>flash(self.localize_callback(self.needs_refresh_message),<EOL>category=self.needs_refresh_message_category)<EOL><DEDENT>else:<EOL><INDENT>flash(self.needs_refresh_message,<EOL>category=self.needs_refresh_message_category)<EOL><DEDENT>config = current_app.config<EOL>if config.get('<STR_LIT>', USE_SESSION_FOR_NEXT):<EOL><INDENT>login_url = expand_login_view(self.refresh_view)<EOL>session['<STR_LIT>'] = self._session_identifier_generator()<EOL>session['<STR_LIT>'] = make_next_param(login_url, request.url)<EOL>redirect_url = make_login_url(self.refresh_view)<EOL><DEDENT>else:<EOL><INDENT>login_url = self.refresh_view<EOL>redirect_url = make_login_url(login_url, next_url=request.url)<EOL><DEDENT>return redirect(redirect_url)<EOL>
This is called when the user is logged in, but they need to be reauthenticated because their session is stale. If you register a callback with `needs_refresh_handler`, then it will be called. Otherwise, it will take the following actions: - Flash :attr:`LoginManager.needs_refresh_message` to the user. - Redirect the user to :attr:`LoginManager.refresh_view`. (The page they were attempting to access will be passed in the ``next`` query string variable, so you can redirect there if present instead of the homepage.) If :attr:`LoginManager.refresh_view` is not defined, then it will simply raise an HTTP 401 (Unauthorized) error instead. This should be returned from a view or before/after_request function, otherwise the redirect will have no effect.
f15711:c0:m9
def _update_request_context_with_user(self, user=None):
ctx = _request_ctx_stack.top<EOL>ctx.user = self.anonymous_user() if user is None else user<EOL>
Store the given user as ctx.user.
f15711:c0:m10
def _load_user(self):
if self._user_callback is None and self._request_callback is None:<EOL><INDENT>raise Exception(<EOL>"<STR_LIT>"<EOL>"<STR_LIT>"<EOL>"<STR_LIT>")<EOL><DEDENT>user_accessed.send(current_app._get_current_object())<EOL>if self._session_protection_failed():<EOL><INDENT>return self._update_request_context_with_user()<EOL><DEDENT>user = None<EOL>user_id = session.get('<STR_LIT>')<EOL>if user_id is not None and self._user_callback is not None:<EOL><INDENT>user = self._user_callback(user_id)<EOL><DEDENT>if user is None:<EOL><INDENT>config = current_app.config<EOL>cookie_name = config.get('<STR_LIT>', COOKIE_NAME)<EOL>header_name = config.get('<STR_LIT>', AUTH_HEADER_NAME)<EOL>has_cookie = (cookie_name in request.cookies and<EOL>session.get('<STR_LIT>') != '<STR_LIT>')<EOL>if has_cookie:<EOL><INDENT>cookie = request.cookies[cookie_name]<EOL>user = self._load_user_from_remember_cookie(cookie)<EOL><DEDENT>elif self._request_callback:<EOL><INDENT>user = self._load_user_from_request(request)<EOL><DEDENT>elif header_name in request.headers:<EOL><INDENT>header = request.headers[header_name]<EOL>user = self._load_user_from_header(header)<EOL><DEDENT><DEDENT>return self._update_request_context_with_user(user)<EOL>
Load the user from the session or the remember_me cookie, as applicable.
f15711:c0:m11
@property<EOL><INDENT>def _login_disabled(self):<DEDENT>
if has_app_context():<EOL><INDENT>return current_app.config.get('<STR_LIT>', False)<EOL><DEDENT>return False<EOL>
Legacy property, use app.config['LOGIN_DISABLED'] instead.
f15711:c0:m19
@_login_disabled.setter<EOL><INDENT>def _login_disabled(self, newvalue):<DEDENT>
current_app.config['<STR_LIT>'] = newvalue<EOL>
Legacy property setter, use app.config['LOGIN_DISABLED'] instead.
f15711:c0:m20
@contextmanager<EOL>def listen_to(signal):
class _SignalsCaught(object):<EOL><INDENT>def __init__(self):<EOL><INDENT>self.heard = []<EOL><DEDENT>def add(self, *args, **kwargs):<EOL><INDENT>'''<STR_LIT>'''<EOL>self.heard.append((args, kwargs))<EOL><DEDENT>def assert_heard_one(self, *args, **kwargs):<EOL><INDENT>'''<STR_LIT>'''<EOL>if len(self.heard) == <NUM_LIT:0>:<EOL><INDENT>raise AssertionError('<STR_LIT>')<EOL><DEDENT>elif len(self.heard) > <NUM_LIT:1>:<EOL><INDENT>msg = '<STR_LIT>'.format(len(self.heard))<EOL>raise AssertionError(msg)<EOL><DEDENT>elif self.heard[<NUM_LIT:0>] != (args, kwargs):<EOL><INDENT>msg = '<STR_LIT>''<STR_LIT>'<EOL>raise AssertionError(msg.format(self.heard[<NUM_LIT:0>], args, kwargs))<EOL><DEDENT><DEDENT>def assert_heard_none(self, *args, **kwargs):<EOL><INDENT>'''<STR_LIT>'''<EOL>if len(self.heard) >= <NUM_LIT:1>:<EOL><INDENT>msg = '<STR_LIT>'.format(len(self.heard))<EOL>raise AssertionError(msg)<EOL><DEDENT><DEDENT><DEDENT>results = _SignalsCaught()<EOL>signal.connect(results.add)<EOL>try:<EOL><INDENT>yield results<EOL><DEDENT>finally:<EOL><INDENT>signal.disconnect(results.add)<EOL><DEDENT>
Context Manager that listens to signals and records emissions Example: with listen_to(user_logged_in) as listener: login_user(user) # Assert that a single emittance of the specific args was seen. listener.assert_heard_one(app, user=user) # Of course, you can always just look at the list yourself self.assertEqual(1, len(listener.heard))
f15714:m0
def parse(html_string, wrapper=Parser, *args, **kwargs):
return wrapper(lxml.html.fromstring(html_string), *args, **kwargs)<EOL>
Parse an HTML string and wrap the result in the given wrapper class
f15720:m0
def str2int(string_with_int):
return int("<STR_LIT>".join([char for char in string_with_int if char in string.digits]) or <NUM_LIT:0>)<EOL>
Collect digits from a string
f15720:m1
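With real literals in place of the dataset's `<STR_LIT>` placeholders, `str2int` is a one-liner over `string.digits`. A minimal sketch under that assumption:

```python
import string


def str2int(string_with_int):
    # Keep only the ASCII digits and parse them; fall back to 0
    # when the input contains no digits at all.
    digits = "".join(ch for ch in string_with_int if ch in string.digits)
    return int(digits or 0)
```

Note that digits are concatenated regardless of position, so `"v1.2"` yields `12`, not two separate numbers.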
def to_unicode(obj, encoding='<STR_LIT:utf-8>'):
if isinstance(obj, string_types) or isinstance(obj, binary_type):<EOL><INDENT>if not isinstance(obj, text_type):<EOL><INDENT>obj = text_type(obj, encoding)<EOL><DEDENT><DEDENT>return obj<EOL>
Convert string to unicode string
f15720:m2
def strip_accents(s, pass_symbols=(u'<STR_LIT>', u'<STR_LIT>', u'<STR_LIT:\n>')):
result = []<EOL>for char in s:<EOL><INDENT>if char in pass_symbols:<EOL><INDENT>result.append(char)<EOL>continue<EOL><DEDENT>for c in unicodedata.normalize('<STR_LIT>', char):<EOL><INDENT>if unicodedata.category(c) == '<STR_LIT>':<EOL><INDENT>continue<EOL><DEDENT>result.append(c)<EOL><DEDENT><DEDENT>return '<STR_LIT>'.join(result)<EOL>
Strip accents from a string
f15720:m3
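The `strip_accents` body decomposes each character and drops combining marks. A runnable sketch with concrete literals filled in (NFD normalization and Unicode category `Mn` for nonspacing marks; the default `pass_symbols` here is an assumption):

```python
import unicodedata


def strip_accents(s, pass_symbols=("\n",)):
    # Decompose each character to NFD, then drop combining marks
    # (category 'Mn'), letting listed pass_symbols through untouched.
    result = []
    for char in s:
        if char in pass_symbols:
            result.append(char)
            continue
        for c in unicodedata.normalize("NFD", char):
            if unicodedata.category(c) == "Mn":
                continue
            result.append(c)
    return "".join(result)
```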
def strip_symbols(s, pass_symbols=(u'<STR_LIT>', u'<STR_LIT>', u'<STR_LIT:\n>')):
result = []<EOL>for char in s:<EOL><INDENT>if char in pass_symbols:<EOL><INDENT>result.append(char)<EOL>continue<EOL><DEDENT>for c in unicodedata.normalize('<STR_LIT>', char):<EOL><INDENT>if unicodedata.category(c) == '<STR_LIT>':<EOL><INDENT>result.append(u'<STR_LIT:U+0020>')<EOL>continue<EOL><DEDENT>if unicodedata.category(c) not in ['<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>']:<EOL><INDENT>result.append(c)<EOL><DEDENT><DEDENT><DEDENT>return u"<STR_LIT>".join(result)<EOL>
Strip ugly unicode symbols from a string
f15720:m4
def strip_spaces(s):
return u"<STR_LIT:U+0020>".join([c for c in s.split(u'<STR_LIT:U+0020>') if c])<EOL>
Strip excess spaces from a string
f15720:m5
def strip_linebreaks(s):
return u"<STR_LIT:\n>".join([c for c in s.split(u'<STR_LIT:\n>') if c])<EOL>
Strip excess line breaks from a string
f15720:m6
def __call__(self, selector):
result = CSSSelector(selector)(self.element)<EOL>return [Parser(element) for element in result]<EOL>
Simple access to CSSSelector through obj(selector)
f15720:c0:m1
def get(self, selector, index=<NUM_LIT:0>, default=None):
elements = self(selector)<EOL>if elements:<EOL><INDENT>try:<EOL><INDENT>return elements[index]<EOL><DEDENT>except IndexError:<EOL><INDENT>pass<EOL><DEDENT><DEDENT>return default<EOL>
Get first element from CSSSelector
f15720:c0:m2
def html(self, unicode=False):
html = lxml.html.tostring(self.element, encoding=self.encoding)<EOL>if unicode:<EOL><INDENT>html = html.decode(self.encoding)<EOL><DEDENT>return html<EOL>
Return HTML of element
f15720:c0:m3
def parse(self, func, *args, **kwargs):
result = []<EOL>for element in self.xpath('<STR_LIT>'):<EOL><INDENT>if isinstance(element, Parser):<EOL><INDENT>children = element.parse(func, *args, **kwargs)<EOL>element_result = func(element, children, *args, **kwargs)<EOL>if element_result:<EOL><INDENT>result.append(element_result)<EOL><DEDENT><DEDENT>else:<EOL><INDENT>result.append(element)<EOL><DEDENT><DEDENT>return u"<STR_LIT>".join(result)<EOL>
Parse element with given function
f15720:c0:m6
def _wrap_result(self, func):
def wrapper(*args):<EOL><INDENT>result = func(*args)<EOL>if hasattr(result, '<STR_LIT>') and not isinstance(result, etree._Element):<EOL><INDENT>return [self._wrap_element(element) for element in result]<EOL><DEDENT>else:<EOL><INDENT>return self._wrap_element(result)<EOL><DEDENT><DEDENT>return wrapper<EOL>
Wrap result in Parser instance
f15720:c0:m7
def _wrap_element(self, result):
if isinstance(result, lxml.html.HtmlElement):<EOL><INDENT>return Parser(result)<EOL><DEDENT>else:<EOL><INDENT>return result<EOL><DEDENT>
Wrap single element in Parser instance
f15720:c0:m8
def __getattr__(self, name):
<EOL>if name in self.element.attrib:<EOL><INDENT>return self.element.attrib[name]<EOL><DEDENT>result = getattr(self.element, name, None)<EOL>if callable(result):<EOL><INDENT>return self._wrap_result(result)<EOL><DEDENT>else:<EOL><INDENT>return result<EOL><DEDENT>
Attribute getter: element attributes take precedence, and callable results are wrapped
f15720:c0:m9
def __setattr__(self, name, value):
<EOL>if name in ['<STR_LIT>', '<STR_LIT>']:<EOL><INDENT>super(Parser, self).__setattr__(name, value)<EOL><DEDENT>if name in self.element.attrib:<EOL><INDENT>self.element.attrib[name] = value<EOL><DEDENT>
Easy access to attribute modification
f15720:c0:m10
@abc.abstractmethod<EOL><INDENT>def consume(self):<DEDENT>
pass<EOL>
Consume from a queue.
f15722:c0:m0
def set_context_manager(self, context_manager):
self._context_manager = context_manager<EOL>
Wrap a context manager around subscriber execution. :param context_manager: object that implements __enter__ and __exit__
f15722:c0:m1
def process_event(self, name, subject, data):
method_mapping = Registry.get_event(name)<EOL>if not method_mapping:<EOL><INDENT>log.info('<STR_LIT>'<EOL>.format(self.__class__.__name__, name))<EOL>return<EOL><DEDENT>for event, methods in method_mapping.items():<EOL><INDENT>event_instance = event(subject, data)<EOL>log.info('<STR_LIT>'.format(<EOL>self.__class__.__name__,<EOL>event_instance.__class__.__name__,<EOL>subject<EOL>))<EOL>for method in methods:<EOL><INDENT>with self._context_manager:<EOL><INDENT>log.info('<STR_LIT>'<EOL>.format(method.__name__))<EOL>method(event_instance)<EOL><DEDENT><DEDENT><DEDENT>
Process a single event. :param name: name of the event. :param subject: identifier for resource. :param data: dictionary with information for this event.
f15722:c0:m2
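`process_event` looks the event name up in a registry of `{event class: [handler methods]}` and calls each handler with an event instance. A minimal stdlib sketch of that dispatch pattern (the `Event` class, `register` helper, and handler names are assumptions, not this framework's API):

```python
# Sketch of registry-based event dispatch: handlers register under an
# event name; processing instantiates the event and calls each handler.
class Event:
    def __init__(self, subject, data):
        self.subject = subject
        self.data = data


registry = {}  # event name -> {event class: [handler functions]}


def register(name, event_cls, handler):
    registry.setdefault(name, {}).setdefault(event_cls, []).append(handler)


def process_event(name, subject, data):
    # Unknown names are ignored, mirroring the log-and-return behavior above.
    handled = []
    for event_cls, handlers in registry.get(name, {}).items():
        instance = event_cls(subject, data)
        for handler in handlers:
            handled.append(handler(instance))
    return handled


register("user_created", Event, lambda e: ("welcome", e.subject))
```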
def thread(self):
log.info('<STR_LIT>'.format(self.__class__.__name__))<EOL>thread = threading.Thread(target=thread_wrapper(self.consume), args=())<EOL>thread.daemon = True<EOL>thread.start()<EOL>
Start a thread for this consumer.
f15722:c0:m3
@abc.abstractmethod<EOL><INDENT>def produce(self, topic, event, subject, data):<DEDENT>
pass<EOL>
Send a message to the queue. :param topic: the topic the message is for. :param event: name of the event. :param subject: identifier for resource. :param data: dictionary with information for this event.
f15723:c0:m0
def register(self):
Registry.register_producer(self)<EOL>
Add this producer to the `Registry` as the default producer.
f15723:c0:m1
@classmethod<EOL><INDENT>def register_event(cls, event_name, event, method):<DEDENT>
log.info('<STR_LIT>'<EOL>.format(event_name, method.__name__))<EOL>if event_name not in cls._events:<EOL><INDENT>cls._events[event_name] = {}<EOL><DEDENT>if event not in cls._events[event_name]:<EOL><INDENT>cls._events[event_name][event] = []<EOL><DEDENT>cls._events[event_name][event].append(method)<EOL>
Register an event class on its name with a method to process it. :param event_name: name of the event. :param event: class of the event. :param method: a method used to process this event.
f15725:c0:m0
@classmethod<EOL><INDENT>def register_producer(cls, producer):<DEDENT>
log.info('<STR_LIT>'<EOL>.format(producer.__class__.__name__))<EOL>cls._producer = (cls._producer or producer)<EOL>
Register a default producer for events to use. :param producer: the default producer to dispatch events on.
f15725:c0:m1
@classmethod<EOL><INDENT>def get_event(cls, event_name):<DEDENT>
return cls._events.get(event_name)<EOL>
Find the event class and registered methods. :param event_name: name of the event.
f15725:c0:m3
@classmethod<EOL><INDENT>def get_producer(cls):<DEDENT>
return cls._producer<EOL>
Get the default producer.
f15725:c0:m4
def dispatch(self, producer=None):
log.info('<STR_LIT>'<EOL>.format(self.name, self.subject))<EOL>producer = (producer or Registry.get_producer())<EOL>if not producer:<EOL><INDENT>raise MissingProducerError('<STR_LIT>')<EOL><DEDENT>try:<EOL><INDENT>producer.produce(self.topic, self.name, self.subject, self.data)<EOL><DEDENT>except Exception:<EOL><INDENT>fallback = Registry.get_fallback()<EOL>fallback(self)<EOL>raise<EOL><DEDENT>
Dispatch the event, sending a message to the queue using a producer. :param producer: optional `Producer` to replace the default one.
f15726:c0:m1
def event_subscriber(event):
def wrapper(method):<EOL><INDENT>Registry.register_event(event.name, event, method)<EOL>return method<EOL><DEDENT>return wrapper<EOL>
Register a method, which gets called when this event triggers. :param event: the event to register the decorator method on.
f15727:m0
def dispatch_event(event, subject='<STR_LIT:id>'):
def wrapper(method):<EOL><INDENT>def inner_wrapper(*args, **kwargs):<EOL><INDENT>resource = method(*args, **kwargs)<EOL>if isinstance(resource, dict):<EOL><INDENT>subject_ = resource.get(subject)<EOL>data = resource<EOL><DEDENT>else:<EOL><INDENT>subject_ = getattr(resource, subject)<EOL>data = resource.__dict__<EOL><DEDENT>event(subject_, data).dispatch()<EOL>return resource<EOL><DEDENT>return inner_wrapper<EOL><DEDENT>return wrapper<EOL>
Dispatch an event when the decorated method is called. :param event: the event class to instantiate and dispatch. :param subject: the property name used to get the subject from the returned resource.
f15727:m1
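The `dispatch_event` decorator dispatches after the wrapped function returns, reading the subject from a dict key or an attribute of the resource. A self-contained sketch of that pattern, with a stub event class recording dispatches in place of a real queue producer (the `UserCreated` name and `dispatched` list are assumptions):

```python
# Sketch of dispatch-after-return: the decorator wraps a function,
# extracts the subject from the returned resource, and dispatches an event.
dispatched = []  # stands in for a real message queue


class UserCreated:
    def __init__(self, subject, data):
        self.subject, self.data = subject, data

    def dispatch(self):
        dispatched.append((self.subject, self.data))


def dispatch_event(event, subject="id"):
    def wrapper(method):
        def inner(*args, **kwargs):
            resource = method(*args, **kwargs)
            if isinstance(resource, dict):
                subject_, data = resource.get(subject), resource
            else:
                subject_, data = getattr(resource, subject), resource.__dict__
            event(subject_, data).dispatch()
            return resource  # the caller still gets the original resource
        return inner
    return wrapper


@dispatch_event(UserCreated)
def create_user():
    return {"id": "7", "name": "bob"}
```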
def ping(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Ping Server Respond without doing anything. This endpoint is used to check that the service is up. This method is ``stable``
f15732:c0:m0
def task(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Get Task Definition This end-point will return the task-definition. Notice that the task definition may have been modified by the queue; if an optional property is not specified, the queue may provide a default value. This method gives output: ``v1/task.json#`` This method is ``stable``
f15732:c0:m1
def status(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT:status>"], *args, **kwargs)<EOL>
Get task status Get task status structure from `taskId` This method gives output: ``v1/task-status-response.json#`` This method is ``stable``
f15732:c0:m2
def listTaskGroup(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
List Task Group List tasks sharing the same `taskGroupId`. As a task-group may contain an unbounded number of tasks, this end-point may return a `continuationToken`. To continue listing tasks you must call the `listTaskGroup` again with the `continuationToken` as the query-string option `continuationToken`. By default this end-point will try to return up to 1000 members in one request. But it **may return less**, even if more tasks are available. It may also return a `continuationToken` even though there are no more results. However, you can only be sure to have seen all results if you keep calling `listTaskGroup` with the last `continuationToken` until you get a result without a `continuationToken`. If you are not interested in listing all the members at once, you may use the query-string option `limit` to return fewer. This method gives output: ``v1/list-task-group-response.json#`` This method is ``stable``
f15732:c0:m3
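The `continuationToken` protocol described above (keep calling until a page arrives without a token) can be sketched as a small loop. The stub queue below stands in for a real Queue client, and the exact call shape (`query=` keyword) is an assumption:

```python
# Sketch of the continuationToken pagination loop: pages may be short or
# even empty, so the only stop condition is a response without a token.
def list_all_members(queue, task_group_id):
    members, token = [], None
    while True:
        query = {"continuationToken": token} if token else {}
        page = queue.listTaskGroup(task_group_id, query=query)
        members.extend(page["tasks"])
        token = page.get("continuationToken")
        if not token:  # last page reached
            return members


class StubQueue:
    # Hypothetical two-page task group for illustration.
    def listTaskGroup(self, task_group_id, query=None):
        if not (query or {}).get("continuationToken"):
            return {"tasks": ["t1", "t2"], "continuationToken": "abc"}
        return {"tasks": ["t3"]}
```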
def listDependentTasks(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
List Dependent Tasks List tasks that depend on the given `taskId`. As many tasks from different task-groups may depend on a single task, this end-point may return a `continuationToken`. To continue listing tasks you must call `listDependentTasks` again with the `continuationToken` as the query-string option `continuationToken`. By default this end-point will try to return up to 1000 tasks in one request. But it **may return less**, even if more tasks are available. It may also return a `continuationToken` even though there are no more results. However, you can only be sure to have seen all results if you keep calling `listDependentTasks` with the last `continuationToken` until you get a result without a `continuationToken`. If you are not interested in listing all the tasks at once, you may use the query-string option `limit` to return fewer. This method gives output: ``v1/list-dependent-tasks-response.json#`` This method is ``stable``
f15732:c0:m4
def createTask(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Create New Task Create a new task, this is an **idempotent** operation, so repeat it if you get an internal server error or the network connection is dropped. **Task `deadline`**: the deadline property can be no more than 5 days into the future. This is to limit the number of pending tasks not being taken care of. Ideally, you should use a much shorter deadline. **Task expiration**: the `expires` property must be greater than the task `deadline`. If not provided it will default to `deadline` + one year. Notice that artifacts created by the task must expire before the task. **Task specific routing-keys**: using the `task.routes` property you may define task specific routing-keys. If a task has a task specific routing-key: `<route>`, then when the AMQP message about the task is published, the message will be CC'ed with the routing-key: `route.<route>`. This is useful if you want another component to listen for completed tasks you have posted. The caller must have scope `queue:route:<route>` for each route. **Dependencies**: any tasks referenced in `task.dependencies` must have already been created at the time of this call. **Scopes**: Note that the scopes required to complete this API call depend on the content of the `scopes`, `routes`, `schedulerId`, `priority`, `provisionerId`, and `workerType` properties of the task definition. **Legacy Scopes**: The `queue:create-task:..` scope without a priority and the `queue:define-task:..` and `queue:task-group-id:..` scopes are considered legacy and should not be used. Note that the new, non-legacy scopes require a `queue:scheduler-id:..` scope as well as scopes for the proper priority. This method takes input: ``v1/create-task-request.json#`` This method gives output: ``v1/task-status-response.json#`` This method is ``stable``
f15732:c0:m5
def defineTask(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Define Task **Deprecated**, this is the same as `createTask` with a **self-dependency**. This is only present for legacy. This method takes input: ``v1/create-task-request.json#`` This method gives output: ``v1/task-status-response.json#`` This method is ``deprecated``
f15732:c0:m6
def scheduleTask(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Schedule Defined Task scheduleTask will schedule a task to be executed, even if it has unresolved dependencies. A task would otherwise only be scheduled if its dependencies were resolved. This is useful if you have defined a task that depends on itself or on some other task that has not been resolved, but you wish the task to be scheduled immediately. This will announce the task as pending and workers will be allowed to claim it and resolve the task. **Note** this operation is **idempotent** and will not fail or complain if called with a `taskId` that is already scheduled, or even resolved. To reschedule a task previously resolved, use `rerunTask`. This method gives output: ``v1/task-status-response.json#`` This method is ``stable``
f15732:c0:m7
def rerunTask(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Rerun a Resolved Task This method _reruns_ a previously resolved task, even if it was _completed_. This is useful if your task completes unsuccessfully, and you just want to run it from scratch again. This will also reset the number of `retries` allowed. This method is deprecated in favour of creating a new task with the same task definition (but with a new taskId). Remember that `retries` in the task status counts the number of runs that the queue has started because the worker stopped responding, for example because a spot node died. **Remark** this operation is idempotent, if you try to rerun a task that is not either `failed` or `completed`, this operation will just return the current task status. This method gives output: ``v1/task-status-response.json#`` This method is ``deprecated``
f15732:c0:m8
def cancelTask(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Cancel Task This method will cancel a task that is either `unscheduled`, `pending` or `running`. It will resolve the current run as `exception` with `reasonResolved` set to `canceled`. If the task isn't scheduled yet, i.e. it doesn't have any runs, an initial run will be added and resolved as described above. Hence, after canceling a task, it cannot be scheduled with `queue.scheduleTask`, but a new run can be created with `queue.rerun`. These semantics are equivalent to calling `queue.scheduleTask` immediately followed by `queue.cancelTask`. **Remark** this operation is idempotent, if you try to cancel a task that isn't `unscheduled`, `pending` or `running`, this operation will just return the current task status. This method gives output: ``v1/task-status-response.json#`` This method is ``stable``
f15732:c0:m9
def claimWork(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Claim Work Claim pending task(s) for the given `provisionerId`/`workerType` queue. If any work is available (even if fewer than the requested number of tasks), this will return immediately. Otherwise, it will block for tens of seconds waiting for work. If no work appears, it will return an empty list of tasks. Callers should sleep a short while (to avoid denial of service in an error condition) and call the endpoint again. This is a simple implementation of "long polling". This method takes input: ``v1/claim-work-request.json#`` This method gives output: ``v1/claim-work-response.json#`` This method is ``stable``
f15732:c0:m10
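The long-polling discipline the `claimWork` docstring describes (claim, and back off briefly when no work arrives) can be sketched as a loop. The stub queue and the `rounds` bound are assumptions for illustration; a real worker would loop until shutdown:

```python
import time


# Sketch of the claimWork long-polling loop: each call may return fewer
# tasks than requested or none at all; sleep briefly on empty responses.
def poll_for_work(queue, provisioner_id, worker_type, payload, rounds=3):
    claimed = []
    for _ in range(rounds):
        response = queue.claimWork(provisioner_id, worker_type, payload)
        tasks = response.get("tasks", [])
        if tasks:
            claimed.extend(tasks)
        else:
            time.sleep(0.01)  # back off to avoid hammering the queue
    return claimed


class StubQueue:
    # Hypothetical queue that yields work only on the second poll.
    def __init__(self):
        self._pages = [[], [{"taskId": "t1"}], []]

    def claimWork(self, provisioner_id, worker_type, payload):
        return {"tasks": self._pages.pop(0) if self._pages else []}
```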
def claimTask(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Claim Task claim a task - never documented This method takes input: ``v1/task-claim-request.json#`` This method gives output: ``v1/task-claim-response.json#`` This method is ``deprecated``
f15732:c0:m11
def reclaimTask(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Reclaim task Refresh the claim for a specific `runId` for given `taskId`. This updates the `takenUntil` property and returns a new set of temporary credentials for performing requests on behalf of the task. These credentials should be used in-place of the credentials returned by `claimWork`. The `reclaimTask` request serves to: * Postpone `takenUntil` preventing the queue from resolving `claim-expired`, * Refresh temporary credentials used for processing the task, and * Abort execution if the task/run has been resolved. If the `takenUntil` timestamp is exceeded the queue will resolve the run as _exception_ with reason `claim-expired`, and proceed to retry the task. This ensures that tasks are retried, even if workers disappear without warning. If the task is resolved, this end-point will return `409` reporting `RequestConflict`. This typically happens if the task has been canceled or the `task.deadline` has been exceeded. If reclaiming fails, workers should abort the task and forget about the given `runId`. There is no need to resolve the run or upload artifacts. This method gives output: ``v1/task-reclaim-response.json#`` This method is ``stable``
f15732:c0:m12
def reportCompleted(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Report Run Completed Report a task completed, resolving the run as `completed`. This method gives output: ``v1/task-status-response.json#`` This method is ``stable``
f15732:c0:m13
def reportFailed(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Report Run Failed Report a run failed, resolving the run as `failed`. Use this to resolve a run that failed because the task specific code behaved unexpectedly. For example the task exited non-zero, or didn't produce expected output. Do not use this if the task couldn't be run because of a malformed payload, or some other unexpected condition. In these cases we have a task exception, which should be reported with `reportException`. This method gives output: ``v1/task-status-response.json#`` This method is ``stable``
f15732:c0:m14
def reportException(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Report Task Exception Resolve a run as _exception_. Generally, you will want to report tasks as failed instead of exception. You should `reportException` if, * The `task.payload` is invalid, * Non-existent resources are referenced, * Declared actions cannot be executed due to unavailable resources, * The worker had to shutdown prematurely, * The worker experienced an unknown error, or, * The task explicitly requested a retry. Do not use this to signal that some user-specified code crashed for any reason specific to this code. If user-specific code hits a resource that is temporarily unavailable, the worker should report the task as _failed_. This method takes input: ``v1/task-exception-request.json#`` This method gives output: ``v1/task-status-response.json#`` This method is ``stable``
f15732:c0:m15
def createArtifact(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Create Artifact This API end-point creates an artifact for a specific run of a task. This should **only** be used by a worker currently operating on this task, or from a process running within the task (i.e. on the worker). All artifacts must specify when they `expires`; the queue will automatically take care of deleting artifacts past their expiration point. This feature makes it feasible to upload large intermediate artifacts from data processing applications, as the artifacts can be set to expire a few days later. We currently support 3 different `storageType`s; each storage type has slightly different features and in some cases different semantics. We also have 2 deprecated `storageType`s which are only maintained for backwards compatibility and should not be used in new implementations. **Blob artifacts** are useful for storing large files. Currently, these are all stored in S3 but there are facilities for adding support for other backends in future. A call for this type of artifact must provide information about the file which will be uploaded. This includes sha256 sums and sizes. This method will return a list of general form HTTP requests which are signed by AWS S3 credentials managed by the Queue. Once these requests are completed, the list of `ETag` values returned by the requests must be passed to the queue `completeArtifact` method. **S3 artifacts** (DEPRECATED) are useful for static files which will be stored on S3. When creating an S3 artifact the queue will return a pre-signed URL to which you can do a `PUT` request to upload your artifact. Note that the `PUT` request **must** specify the `content-length` header and **must** give the `content-type` header the same value as in the request to `createArtifact`. **Azure artifacts** (DEPRECATED) are stored in the _Azure Blob Storage_ service, which, given the consistency guarantees and API interface offered by Azure, is more suitable for artifacts that will be modified during the execution of the task.
For example docker-worker has a feature that persists the task log to Azure Blob Storage every few seconds, creating a somewhat live log. A request to create an Azure artifact will return a URL featuring a [Shared-Access-Signature](http://msdn.microsoft.com/en-us/library/azure/dn140256.aspx); refer to MSDN for further information on how to use these. **Warning: azure artifact is currently an experimental feature subject to changes and data-drops.** **Reference artifacts** consist only of meta-data which the queue will store for you. These artifacts really only have a `url` property, and when the artifact is requested the client will be redirected to the URL provided with a `303` (See Other) redirect. Please note that we cannot delete artifacts you upload to other services; we can only delete the reference to the artifact when it expires. **Error artifacts** consist only of meta-data which the queue will store for you. These artifacts are only meant to indicate that the worker or the task failed to generate a specific artifact that it would otherwise have uploaded. For example docker-worker will upload an error artifact if the file it was supposed to upload doesn't exist or turns out to be a directory. Clients requesting an error artifact will get a `424` (Failed Dependency) response. This is mainly designed to ensure that dependent tasks can distinguish between artifacts that were supposed to be generated and artifacts for which the name is misspelled. **Artifact immutability**: generally speaking you cannot overwrite an artifact once created. But if you repeat the request with the same properties the request will succeed, as the operation is idempotent. This is useful if you need to refresh a signed URL while uploading. Do not abuse this to overwrite artifacts created by another entity! For example, a worker-host overwriting an artifact created by worker-code. As a special case the `url` property on _reference artifacts_ can be updated.
You should only use this to update the `url` property for reference artifacts your process has created. This method takes input: ``v1/post-artifact-request.json#`` This method gives output: ``v1/post-artifact-response.json#`` This method is ``stable``
f15732:c0:m16
def completeArtifact(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Complete Artifact This endpoint finalises an upload done through the blob `storageType`. The queue will ensure that the task/run is still allowing artifacts to be uploaded. For single-part S3 blob artifacts, this endpoint will simply ensure the artifact is present in S3. For multipart S3 artifacts, the endpoint will perform the commit step of the multipart upload flow. As the final step for both multi and single part artifacts, the `present` entity field will be set to `true` to reflect that the artifact is now present and a message published to pulse. NOTE: This endpoint *must* be called for all artifacts of storageType 'blob' This method takes input: ``v1/put-artifact-request.json#`` This method is ``experimental``
f15732:c0:m17