def _ipsi(y, tol=1.48e-9, maxiter=10):
    '''Inverse of psi (digamma) using Newton's method. For the purposes
    of Dirichlet MLE, since the parameters a[i] must always satisfy a > 0,
    we define ipsi :: R -> (0,inf).'''
    y = asanyarray(y, dtype='float')
    x0 = _piecewise(y, [y >= -2.22, y < -2.22],
                    [(lambda x: exp(x) + 0.5), (lambda x: -1/(x+euler))])
    for i in xrange(maxiter):
        x1 = x0 - (psi(x0) - y)/_trigamma(x0)
        if norm(x1 - x0) < tol:
            return x1
        x0 = x1
    raise Exception(
        'Unable to converge in {} iterations, value is {}'.format(maxiter, x1))

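Routines like this are easiest to sanity-check by round-tripping through the forward function. Below is a minimal standalone sketch assuming NumPy/SciPy; the name inverse_psi and the fixed iteration count are invented for the example, while the starting guesses mirror the piecewise init above:

import numpy as np
from scipy.special import psi, polygamma

def inverse_psi(y, maxiter=25):
    # Initial guess: exp(y)+0.5 for large y, -1/(y + Euler-Mascheroni) otherwise.
    x = np.where(y >= -2.22, np.exp(y) + 0.5, -1.0 / (y + np.euler_gamma))
    for _ in range(maxiter):
        x = x - (psi(x) - y) / polygamma(1, x)  # Newton step; polygamma(1, .) is trigamma
    return x

vals = np.array([0.1, 1.0, 10.0])
print(np.allclose(inverse_psi(psi(vals)), vals))  # True: the round trip recovers the input
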
def weather_history_at_place(self, name, start=None, end=None):
    """
    Queries the OWM Weather API for weather history for the specified
    location (eg: "London,uk"). A list of *Weather* objects is returned.
    It is possible to query for weather history in a closed time period,
    whose boundaries can be passed as optional parameters.

    :param name: the location's toponym
    :type name: str or unicode
    :param start: the object conveying the time value for the start query
        boundary (defaults to ``None``)
    :type start: int, ``datetime.datetime`` or ISO8601-formatted string
    :param end: the object conveying the time value for the end query
        boundary (defaults to ``None``)
    :type end: int, ``datetime.datetime`` or ISO8601-formatted string
    :returns: a list of *Weather* instances or ``None`` if history data is
        not available for the specified location
    :raises: *ParseResponseException* when OWM Weather API responses' data
        cannot be parsed, *APICallException* when OWM Weather API cannot be
        reached, *ValueError* if the time boundaries are not in the correct
        chronological order, if one of the time boundaries is not ``None``
        and the other is, or if one or both of the time boundaries are
        after the current time
    """
    assert isinstance(name, str), "Value must be a string"
    encoded_name = name
    params = {'q': encoded_name, 'lang': self._language}
    if start is None and end is None:
        pass
    elif start is not None and end is not None:
        unix_start = timeformatutils.to_UNIXtime(start)
        unix_end = timeformatutils.to_UNIXtime(end)
        if unix_start >= unix_end:
            raise ValueError("Error: the start time boundary must "
                             "precede the end time!")
        current_time = time()
        if unix_start > current_time:
            raise ValueError("Error: the start time boundary must "
                             "precede the current time!")
        params['start'] = str(unix_start)
        params['end'] = str(unix_end)
    else:
        raise ValueError("Error: one of the time boundaries is None, "
                         "while the other is not!")
    uri = http_client.HttpClient.to_url(CITY_WEATHER_HISTORY_URL,
                                        self._API_key,
                                        self._subscription_type,
                                        self._use_ssl)
    _, json_data = self._wapi.cacheable_get_json(uri, params=params)
    return self._parsers['weather_history'].parse_JSON(json_data)

def get_last_branch_location(cls):
    """
    Returns the source and destination addresses of the last taken branch.

    @rtype:  tuple( int, int )
    @return: Source and destination addresses of the last taken branch.

    @raise WindowsError:
        Raises an exception on error.

    @raise NotImplementedError:
        Current architecture is not C{i386} or C{amd64}.

    @warning:
        This method uses the processor's machine specific registers (MSR).
        It could potentially brick your machine.
        It works on my machine, but your mileage may vary.

    @note:
        It doesn't seem to work in VMWare or VirtualBox machines.
        Maybe it fails in other virtualization/emulation environments,
        no extensive testing was made so far.
    """
    LastBranchFromIP = cls.read_msr(DebugRegister.LastBranchFromIP)
    LastBranchToIP = cls.read_msr(DebugRegister.LastBranchToIP)
    return (LastBranchFromIP, LastBranchToIP)

def first_line_indent(self):
    """
    A |Length| value calculated from the values of `w:ind/@w:firstLine`
    and `w:ind/@w:hanging`. Returns |None| if the `w:ind` child is not
    present.
    """
    ind = self.ind
    if ind is None:
        return None
    hanging = ind.hanging
    if hanging is not None:
        return Length(-hanging)
    firstLine = ind.firstLine
    if firstLine is None:
        return None
    return firstLine

def commit(self):
    """
    Commit MySQL Transaction to database.
    MySQLDB: If the database and the tables support transactions, this
    commits the current transaction; otherwise this method successfully
    does nothing.

    @author: Nick Verbeck
    @since: 5/12/2008
    """
    try:
        if self.connection is not None:
            self.connection.commit()
            self._updateCheckTime()
            self.release()
    except Exception, e:
        pass

def check_virtualserver(self, name):
    ''' Check to see if a virtual server exists '''
    vs = self.bigIP.LocalLB.VirtualServer
    for v in vs.get_list():
        if v.split('/')[-1] == name:
            return True
    return False

def wsgi_wrap(app):
    '''
    Wraps a standard wsgi application, e.g.:

        def app(environ, start_response)

    It intercepts the start_response callback and grabs the results from
    it, so it can return the status, headers, and body as a tuple.
    '''
    @wraps(app)
    def wrapped(environ, start_response):
        status_headers = [None, None]
        def _start_response(status, headers):
            status_headers[:] = [status, headers]
        body = app(environ, _start_response)
        ret = body, status_headers[0], status_headers[1]
        return ret
    return wrapped

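For illustration, a hypothetical caller for the wrapper above (which needs functools.wraps in scope); the hello app and the empty environ dict are invented for the sketch, and the second argument is unused because the wrapper injects its own start_response:

from functools import wraps

def hello(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

wrapped = wsgi_wrap(hello)
body, status, headers = wrapped({}, None)  # start_response arg is ignored by the wrapper
print(status, headers, list(body))         # 200 OK [('Content-Type', 'text/plain')] [b'hello']
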
def edit(self, hardware_id, userdata=None, hostname=None, domain=None,
         notes=None, tags=None):
    """Edit hostname, domain name, notes, user data of the hardware.

    Parameters set to None will be ignored and not attempted to be updated.

    :param integer hardware_id: the instance ID to edit
    :param string userdata: user data on the hardware to edit.
                            If none exists it will be created
    :param string hostname: valid hostname
    :param string domain: valid domain name
    :param string notes: notes about this particular hardware
    :param string tags: tags to set on the hardware as a comma separated
                        list. Use the empty string to remove all tags.

    Example::

        # Change the hostname on instance 12345 to 'something'
        result = mgr.edit(hardware_id=12345, hostname="something")
        # result will be True or an Exception

    """
    obj = {}
    if userdata:
        self.hardware.setUserMetadata([userdata], id=hardware_id)
    if tags is not None:
        self.hardware.setTags(tags, id=hardware_id)
    if hostname:
        obj['hostname'] = hostname
    if domain:
        obj['domain'] = domain
    if notes:
        obj['notes'] = notes
    if not obj:
        return True
    return self.hardware.editObject(obj, id=hardware_id)

def get_dockercfg_credentials(self, docker_registry):
    """
    Read the .dockercfg file and return an empty dict, or else a dict
    with keys 'basic_auth_username' and 'basic_auth_password'.
    """
    if not self.registry_secret_path:
        return {}

    dockercfg = Dockercfg(self.registry_secret_path)
    registry_creds = dockercfg.get_credentials(docker_registry)
    if 'username' not in registry_creds:
        return {}
    return {
        'basic_auth_username': registry_creds['username'],
        'basic_auth_password': registry_creds['password'],
    }

def handle_one_request(self):
    """Copy of WSGIRequestHandler.handle(), but with different ServerHandler"""
    self.raw_requestline = self.rfile.readline(65537)
    if len(self.raw_requestline) > 65536:
        self.requestline = ''
        self.request_version = ''
        self.command = ''
        self.send_error(414)
        return

    if not self.parse_request():  # An error code has been sent, just exit
        return

    handler = ServerHandler(
        self.rfile, self.wfile, self.get_stderr(), self.get_environ()
    )
    handler.request_handler = self  # backpointer for logging
    handler.run(self.server.get_app())

def get_ssh_key(host,
                username,
                password,
                protocol=None,
                port=None,
                certificate_verify=False):
    '''
    Retrieve the authorized_keys entry for root.
    This function only works for ESXi, not vCenter.

    :param host: The location of the ESXi Host
    :param username: Username to connect as
    :param password: Password for the ESXi web endpoint
    :param protocol: defaults to https, can be http if ssl is disabled on ESXi
    :param port: defaults to 443 for https
    :param certificate_verify: If true require that the SSL connection present
        a valid certificate
    :return: a dictionary with ``status`` set to True and the ``key`` text on
        success, or ``status`` False with an ``Error`` message

    CLI Example:

    .. code-block:: bash

        salt '*' vsphere.get_ssh_key my.esxi.host root bad-password certificate_verify=True

    '''
    if protocol is None:
        protocol = 'https'
    if port is None:
        port = 443

    url = '{0}://{1}:{2}/host/ssh_root_authorized_keys'.format(protocol, host, port)
    ret = {}
    try:
        result = salt.utils.http.query(url,
                                       status=True,
                                       text=True,
                                       method='GET',
                                       username=username,
                                       password=password,
                                       verify_ssl=certificate_verify)
        if result.get('status') == 200:
            ret['status'] = True
            ret['key'] = result['text']
        else:
            ret['status'] = False
            ret['Error'] = result['error']
    except Exception as msg:
        ret['status'] = False
        ret['Error'] = msg
    return ret

def suites(self, request, pk=None):
    """
    List of test suite names available in this project
    """
    suites_names = self.get_object().suites.values_list('slug')
    suites_metadata = SuiteMetadata.objects.filter(kind='suite', suite__in=suites_names)
    page = self.paginate_queryset(suites_metadata)
    serializer = SuiteMetadataSerializer(page, many=True, context={'request': request})
    return self.get_paginated_response(serializer.data)

def fmpt(P):
    """
    Calculates the matrix of first mean passage times for an ergodic
    transition probability matrix.

    Parameters
    ----------
    P : array
        (k, k), an ergodic Markov transition probability matrix.

    Returns
    -------
    M : array
        (k, k), elements are the expected value for the number of
        intervals required for a chain starting in state i to first
        enter state j. If i=j then this is the recurrence time.

    Examples
    --------
    >>> import numpy as np
    >>> from giddy.ergodic import fmpt
    >>> p = np.array([[.5, .25, .25], [.5, 0, .5], [.25, .25, .5]])
    >>> fm = fmpt(p)
    >>> fm
    array([[2.5       , 4.        , 3.33333333],
           [2.66666667, 5.        , 2.66666667],
           [3.33333333, 4.        , 2.5       ]])

    Thus, if it is raining today in Oz we can expect a nice day to come
    along in another 4 days, on average, and snow to hit in 3.33 days. We
    can expect another rainy day in 2.5 days. If it is nice today in Oz,
    we would experience a change in the weather (either rain or snow) in
    2.67 days from today. (That wicked witch can only die once so I
    reckon that is the ultimate absorbing state).

    Notes
    -----
    Uses formulation (and examples on p. 218) in :cite:`Kemeny1967`.
    """
    P = np.matrix(P)
    k = P.shape[0]
    A = np.zeros_like(P)
    ss = steady_state(P).reshape(k, 1)
    for i in range(k):
        A[:, i] = ss
    A = A.transpose()
    I = np.identity(k)
    Z = la.inv(I - P + A)
    E = np.ones_like(Z)
    A_diag = np.diag(A)
    A_diag = A_diag + (A_diag == 0)
    D = np.diag(1. / A_diag)
    Zdg = np.diag(np.diag(Z))
    M = (I - Z + E * Zdg) * D
    return np.array(M)

def x_build_targets_target(self, node):
    '''
    Process the target dependency DAG into an ancestry tree so we can
    look up which top-level library and test targets specific build
    actions correspond to.
    '''
    target_node = node
    name = self.get_child_data(target_node, tag='name', strip=True)
    path = self.get_child_data(target_node, tag='path', strip=True)
    jam_target = self.get_child_data(target_node, tag='jam-target', strip=True)
    #~ Map for jam targets to virtual targets.
    self.target[jam_target] = {
        'name': name,
        'path': path
    }
    #~ Create the ancestry.
    dep_node = self.get_child(self.get_child(target_node, tag='dependencies'), tag='dependency')
    while dep_node:
        child = self.get_data(dep_node, strip=True)
        child_jam_target = '<p%s>%s' % (path, child.split('//', 1)[1])
        self.parent[child_jam_target] = jam_target
        dep_node = self.get_sibling(dep_node.nextSibling, tag='dependency')
    return None

def createproject(self, name, **kwargs):
    """
    Creates a new project owned by the authenticated user.

    :param name: new project name
    :param path: custom repository name for new project. By default
        generated based on name
    :param namespace_id: namespace for the new project (defaults to user)
    :param description: short project description
    :param issues_enabled:
    :param merge_requests_enabled:
    :param wiki_enabled:
    :param snippets_enabled:
    :param public: if true same as setting visibility_level = 20
    :param visibility_level:
    :param sudo:
    :param import_url:
    :return:
    """
    data = {'name': name}
    if kwargs:
        data.update(kwargs)

    request = requests.post(
        self.projects_url, headers=self.headers, data=data,
        verify=self.verify_ssl, auth=self.auth, timeout=self.timeout)

    if request.status_code == 201:
        return request.json()
    elif request.status_code == 403:
        if 'Your own projects limit is 0' in request.text:
            print(request.text)
            return False
    else:
        return False

def Shift(self, term):
    """Adds a term to the xs.

    term: how much to add
    """
    new = self.Copy()
    new.xs = [x + term for x in self.xs]
    return new

def get_icon(name, aspix=False, asicon=False):
    """Return the real file path to the given icon name

    If aspix is True return as QtGui.QPixmap, if asicon is True return
    as QtGui.QIcon.

    :param name: the name of the icon
    :type name: str
    :param aspix: If True, return a QtGui.QPixmap.
    :type aspix: bool
    :param asicon: If True, return a QtGui.QIcon.
    :type asicon: bool
    :returns: The real file path to the given icon name. If aspix is True
        return as QtGui.QPixmap, if asicon is True return as QtGui.QIcon.
        If both are True, a QtGui.QIcon is returned.
    :rtype: string
    :raises: None
    """
    datapath = os.path.join(ICON_PATH, name)
    icon = pkg_resources.resource_filename('jukeboxcore', datapath)
    if aspix or asicon:
        icon = QtGui.QPixmap(icon)
        if asicon:
            icon = QtGui.QIcon(icon)
    return icon

def keywords(s, top=10, **kwargs):
    """ Returns a sorted list of keywords in the given string.
    """
    return parser.find_keywords(s, top=top, frequency=parser.frequency)

def _compute_count_availability(resource, status, previous_status):
    '''Compute the `check:count-availability` extra value'''
    count_availability = resource.extras.get('check:count-availability', 1)
    return count_availability + 1 if status == previous_status else 1

def main():
    """
    NAME
        plotXY.py

    DESCRIPTION
        Makes simple X,Y plots

    INPUT FORMAT
        X,Y data in columns

    SYNTAX
        plotxy.py [command line options]

    OPTIONS
        -h prints this help message
        -f FILE to set file name on command line
        -c col1 col2 specify columns to plot
        -xsig col3 specify xsigma if desired
        -ysig col4 specify ysigma if desired
        -b xmin xmax ymin ymax, sets bounds
        -sym SYM SIZE specify symbol to plot: default is red dots, 10 pt
        -S don't plot the symbols
        -xlab XLAB
        -ylab YLAB
        -l connect symbols with lines
        -fmt [svg,png,pdf,eps] specify output format, default is svg
        -sav saves plot and quits
        -poly X plot a degree X polynomial through the data
        -skip n Number of lines to skip before reading in data
    """
    fmt, plot = 'svg', 0
    col1, col2 = 0, 1
    sym, size = 'ro', 50
    xlab, ylab = '', ''
    lines = 0
    if '-h' in sys.argv:
        print(main.__doc__)
        sys.exit()
    if '-f' in sys.argv:
        ind = sys.argv.index('-f')
        file = sys.argv[ind+1]
    if '-fmt' in sys.argv:
        ind = sys.argv.index('-fmt')
        fmt = sys.argv[ind+1]
    if '-sav' in sys.argv:
        plot = 1
    if '-c' in sys.argv:
        ind = sys.argv.index('-c')
        col1 = int(sys.argv[ind+1])-1
        col2 = int(sys.argv[ind+2])-1
    if '-xsig' in sys.argv:
        ind = sys.argv.index('-xsig')
        col3 = int(sys.argv[ind+1])-1
    if '-ysig' in sys.argv:
        ind = sys.argv.index('-ysig')
        col4 = int(sys.argv[ind+1])-1
    if '-xlab' in sys.argv:
        ind = sys.argv.index('-xlab')
        xlab = sys.argv[ind+1]
    if '-ylab' in sys.argv:
        ind = sys.argv.index('-ylab')
        ylab = sys.argv[ind+1]
    if '-b' in sys.argv:
        ind = sys.argv.index('-b')
        xmin = float(sys.argv[ind+1])
        xmax = float(sys.argv[ind+2])
        ymin = float(sys.argv[ind+3])
        ymax = float(sys.argv[ind+4])
    if '-poly' in sys.argv:
        ind = sys.argv.index('-poly')
        degr = sys.argv[ind+1]
    if '-sym' in sys.argv:
        ind = sys.argv.index('-sym')
        sym = sys.argv[ind+1]
        size = int(sys.argv[ind+2])
    if '-l' in sys.argv:
        lines = 1
    if '-S' in sys.argv:
        sym = ''
    skip = int(pmag.get_named_arg('-skip', default_val=0))
    X, Y = [], []
    Xerrs, Yerrs = [], []
    f = open(file, 'r')
    for num in range(skip):
        f.readline()
    data = f.readlines()
    for line in data:
        line.replace('\n', '')
        line.replace('\t', ' ')
        rec = line.split()
        X.append(float(rec[col1]))
        Y.append(float(rec[col2]))
        if '-xsig' in sys.argv:
            Xerrs.append(float(rec[col3]))
        if '-ysig' in sys.argv:
            Yerrs.append(float(rec[col4]))
    if '-poly' in sys.argv:
        coeffs = numpy.polyfit(X, Y, int(degr))
        correl = numpy.corrcoef(X, Y)**2
        polynomial = numpy.poly1d(coeffs)
        xs = numpy.linspace(numpy.min(X), numpy.max(X), 10)
        ys = polynomial(xs)
        pylab.plot(xs, ys)
        print(polynomial)
        if degr == '1':
            print('R-square value =', '%5.4f' % (correl[0, 1]))
    if sym != '':
        pylab.scatter(X, Y, marker=sym[1], c=sym[0], s=size)
    else:
        pylab.plot(X, Y)
    if '-xsig' in sys.argv and '-ysig' in sys.argv:
        pylab.errorbar(X, Y, xerr=Xerrs, yerr=Yerrs, fmt=None)
    if '-xsig' in sys.argv and '-ysig' not in sys.argv:
        pylab.errorbar(X, Y, xerr=Xerrs, fmt=None)
    if '-xsig' not in sys.argv and '-ysig' in sys.argv:
        pylab.errorbar(X, Y, yerr=Yerrs, fmt=None)
    if xlab != '':
        pylab.xlabel(xlab)
    if ylab != '':
        pylab.ylabel(ylab)
    if lines == 1:
        pylab.plot(X, Y, 'k-')
    if '-b' in sys.argv:
        pylab.axis([xmin, xmax, ymin, ymax])
    if plot == 0:
        pylab.show()
    else:
        pylab.savefig('plotXY.' + fmt)
        print('Figure saved as ', 'plotXY.' + fmt)
    sys.exit()

def from_json(json_str, allow_pickle=False):
    """
    Decodes a JSON object specified in the utool convention

    Args:
        json_str (str):
        allow_pickle (bool): (default = False)

    Returns:
        object: val

    CommandLine:
        python -m utool.util_cache from_json --show

    Example:
        >>> # ENABLE_DOCTEST
        >>> from utool.util_cache import *  # NOQA
        >>> import utool as ut
        >>> json_str = 'just a normal string'
        >>> json_str = '["just a normal string"]'
        >>> allow_pickle = False
        >>> val = from_json(json_str, allow_pickle)
        >>> result = ('val = %s' % (ut.repr2(val),))
        >>> print(result)
    """
    if six.PY3:
        if isinstance(json_str, bytes):
            json_str = json_str.decode('utf-8')
    UtoolJSONEncoder = make_utool_json_encoder(allow_pickle)
    object_hook = UtoolJSONEncoder._json_object_hook
    val = json.loads(json_str, object_hook=object_hook)
    return val

def diffplot(self, f, delay=1, lfilter=None, **kargs):
    """diffplot(f, delay=1, lfilter=None)
    Applies a function to couples (l[i],l[i+delay])

    A list of matplotlib.lines.Line2D is returned.
    """
    # Get the list of packets, pairing each element with the one
    # `delay` positions later, as the docstring promises
    if lfilter is None:
        lst_pkts = [f(self.res[i], self.res[i + delay])
                    for i in range(len(self.res) - delay)]
    else:
        lst_pkts = [f(self.res[i], self.res[i + delay])
                    for i in range(len(self.res) - delay)
                    if lfilter(self.res[i])]

    # Mimic the default gnuplot output
    if kargs == {}:
        kargs = MATPLOTLIB_DEFAULT_PLOT_KARGS
    lines = plt.plot(lst_pkts, **kargs)

    # Call show() if matplotlib is not inlined
    if not MATPLOTLIB_INLINED:
        plt.show()

    return lines

def count(start=0, step=1, *, interval=0):
    """Generate consecutive numbers indefinitely.

    Optional starting point and increment can be defined,
    respectively defaulting to ``0`` and ``1``.

    An optional interval can be given to space the values out.
    """
    agen = from_iterable.raw(itertools.count(start, step))
    return time.spaceout.raw(agen, interval) if interval else agen

def all_coplanar(triangles):
    """
    Check to see if a list of triangles are all coplanar

    Parameters
    ----------------
    triangles: (n, 3, 3) float
        Vertices of triangles

    Returns
    ---------------
    all_coplanar : bool
        True if all triangles are coplanar
    """
    triangles = np.asanyarray(triangles, dtype=np.float64)
    if not util.is_shape(triangles, (-1, 3, 3)):
        raise ValueError('Triangles must be (n,3,3)!')

    test_normal = normals(triangles)[0]
    test_vertex = triangles[0][0]
    distances = point_plane_distance(points=triangles[1:].reshape((-1, 3)),
                                     plane_normal=test_normal,
                                     plane_origin=test_vertex)
    all_coplanar = np.all(np.abs(distances) < tol.zero)
    return all_coplanar

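The same test can be sketched with plain NumPy, without the library helpers: take the plane of the first triangle and check every remaining vertex against it. All names and the tolerance below are invented for the example:

import numpy as np

def all_coplanar_simple(triangles, eps=1e-12):
    tri = np.asarray(triangles, dtype=np.float64)
    # Plane from the first triangle: unit normal via cross product, origin at a vertex
    normal = np.cross(tri[0, 1] - tri[0, 0], tri[0, 2] - tri[0, 0])
    normal /= np.linalg.norm(normal)
    # Signed distance of every remaining vertex from that plane
    dist = (tri[1:].reshape(-1, 3) - tri[0, 0]) @ normal
    return bool(np.all(np.abs(dist) < eps))

flat = [[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
        [[1, 1, 0], [2, 0, 0], [0, 2, 0]]]    # both triangles in the z=0 plane
tilted = [[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
          [[0, 0, 1], [1, 0, 1], [0, 1, 0]]]  # second triangle leaves the plane
print(all_coplanar_simple(flat), all_coplanar_simple(tilted))  # True False
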
def parse_query(query_str):
    """
    Drives the whole logic, by parsing, restructuring and finally,
    generating an ElasticSearch query.

    Args:
        query_str (six.text_type): the given query to be translated to an
            ElasticSearch query

    Returns:
        six.text_type: Return an ElasticSearch query.

    Notes:
        In case there's an error, an ElasticSearch `multi_match` query is
        generated with its `query` value, being the query_str argument.
    """
    def _generate_match_all_fields_query():
        # Strip colon character (special character for ES)
        stripped_query_str = ' '.join(query_str.replace(':', ' ').split())
        return {'multi_match': {'query': stripped_query_str,
                                'fields': ['_all'],
                                'zero_terms_query': 'all'}}

    if not isinstance(query_str, six.text_type):
        query_str = six.text_type(query_str.decode('utf-8'))

    logger.info('Parsing: "' + query_str + '".')

    parser = StatefulParser()
    rst_visitor = RestructuringVisitor()
    es_visitor = ElasticSearchVisitor()

    try:
        unrecognized_text, parse_tree = parser.parse(query_str, Query)

        if unrecognized_text:  # Usually, should never happen.
            msg = 'Parser returned unrecognized text: "' + unrecognized_text + \
                  '" for query: "' + query_str + '".'

            if query_str == unrecognized_text and parse_tree is None:
                # Didn't recognize anything.
                logger.warn(msg)
                return _generate_match_all_fields_query()
            else:
                msg += 'Continuing with recognized parse tree.'
            logger.warn(msg)

    except SyntaxError as e:
        logger.warn('Parser syntax error (' + six.text_type(e) + ') with query: "' +
                    query_str + '". Continuing with a match_all with the given query.')
        return _generate_match_all_fields_query()

    # Try-Catch-all exceptions for visitors, so that search functionality never fails for the user.
    try:
        restructured_parse_tree = parse_tree.accept(rst_visitor)
        logger.debug('Parse tree: \n' + emit_tree_format(restructured_parse_tree))
    except Exception as e:
        logger.exception(
            RestructuringVisitor.__name__ + " crashed" + (": " + six.text_type(e) + ".")
            if six.text_type(e) else '.'
        )
        return _generate_match_all_fields_query()

    try:
        es_query = restructured_parse_tree.accept(es_visitor)
    except Exception as e:
        logger.exception(
            ElasticSearchVisitor.__name__ + " crashed" + (": " + six.text_type(e) + ".")
            if six.text_type(e) else '.'
        )
        return _generate_match_all_fields_query()

    if not es_query:
        # Case where an empty query was generated (i.e. date query with malformed date, e.g. "d < 200").
        return _generate_match_all_fields_query()

    return es_query

def find_longest_match(self, alo, ahi, blo, bhi):
    """Find longest matching block in a[alo:ahi] and b[blo:bhi].

    Wrapper for the C implementation of this function.
    """
    besti, bestj, bestsize = _cdifflib.find_longest_match(self, alo, ahi, blo, bhi)
    return _Match(besti, bestj, bestsize)

def bind_super(self, opr):
    """ Grant the super administrator all permissions """
    for path in self.routes:
        route = self.routes.get(path)
        route['oprs'].append(opr)

def _get_resources_string(res_dict, pid):
    """ Returns the nextflow resources string from a dictionary object

    If the dictionary has at least one of the resource directives, these
    will be compiled for each process in the dictionary and returned as a
    string ready for injection in the nextflow config file template.

    This dictionary should be::

        dict = {"processA": {"cpus": 1, "memory": "4GB"},
                "processB": {"cpus": 2}}

    Parameters
    ----------
    res_dict : dict
        Dictionary with the resources for processes.
    pid : int
        Unique identifier of the process

    Returns
    -------
    str
        nextflow config string
    """
    config_str = ""
    ignore_directives = ["container", "version"]

    for p, directives in res_dict.items():
        for d, val in directives.items():
            if d in ignore_directives:
                continue
            config_str += '\n\t${}_{}.{} = {}'.format(p, pid, d, val)

    return config_str

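As a rough illustration, calling the function with the docstring's example dictionary (pid=1 chosen arbitrarily, and a "container" entry added to show the ignore list at work) yields one tab-indented config line per non-ignored directive:

res = {"processA": {"cpus": 1, "memory": "4GB"},
       "processB": {"cpus": 2, "container": "ubuntu"}}  # container is skipped
print(_get_resources_string(res, 1))
# 	$processA_1.cpus = 1
# 	$processA_1.memory = 4GB
# 	$processB_1.cpus = 2
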
def classifications(ctx, classifications, results, readlevel, readlevel_path):
    """Retrieve performed metagenomic classifications"""

    # basic operation -- just print
    if not readlevel and not results:
        cli_resource_fetcher(ctx, "classifications", classifications)

    # fetch the results
    elif not readlevel and results:
        if len(classifications) != 1:
            log.error("Can only request results data on one Classification at a time")
        else:
            classification = ctx.obj["API"].Classifications.get(classifications[0])
            if not classification:
                log.error(
                    "Could not find classification {} (404 status code)".format(classifications[0])
                )
                return
            results = classification.results(json=True)
            pprint(results, ctx.obj["NOPPRINT"])

    # fetch the readlevel
    elif readlevel is not None and not results:
        if len(classifications) != 1:
            log.error("Can only request read-level data on one Classification at a time")
        else:
            classification = ctx.obj["API"].Classifications.get(classifications[0])
            if not classification:
                log.error(
                    "Could not find classification {} (404 status code)".format(classifications[0])
                )
                return
            tsv_url = classification._readlevel()["url"]
            log.info("Downloading tsv data from: {}".format(tsv_url))
            download_file_helper(tsv_url, readlevel_path)

    # both given -- complain
    else:
        log.error("Can only request one of read-level data or results data at a time")

def Run(self, unused_arg):
    """Run the kill."""
    # Send a message back to the service to say that we are about to shutdown.
    reply = rdf_flows.GrrStatus(status=rdf_flows.GrrStatus.ReturnedStatus.OK)
    # Queue up the response message, jump the queue.
    self.SendReply(reply, message_type=rdf_flows.GrrMessage.Type.STATUS)

    # Give the http thread some time to send the reply.
    self.grr_worker.Sleep(10)

    # Die ourselves.
    logging.info("Dying on request.")
    os._exit(242)

def get(self, idx, default=''):
    '''Returns the element at idx, or default if idx is beyond the
    length of the list'''
    # if the index is beyond the length of the list, return the default
    if isinstance(idx, int) and (idx >= len(self) or idx < -1 * len(self)):
        return default
    # else do the regular list function (for int, slice types, etc.)
    return super().__getitem__(idx)

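A hedged usage sketch: the SafeList host class below is hypothetical, standing in for whichever list subclass actually defines this method:

class SafeList(list):
    def get(self, idx, default=''):
        if isinstance(idx, int) and (idx >= len(self) or idx < -1 * len(self)):
            return default
        return super().__getitem__(idx)

row = SafeList(['a', 'b'])
print(row.get(1))         # 'b'
print(row.get(5, 'n/a'))  # 'n/a' instead of an IndexError
print(row.get(-2))        # 'a' (negative indices inside range still work)
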
def register_sub_command(self, sub_command, additional_ids=[]):
    """
    Register a command as a subcommand.

    It will have its CommandDesc.command string used as id.
    Additional ids can be provided.

    Args:
        sub_command (CommandBase): Subcommand to register.
        additional_ids (List[str]): List of additional ids. Can be empty.
    """
    self.__register_sub_command(sub_command, sub_command.command_desc().command)

    self.__additional_ids.update(additional_ids)
    for id in additional_ids:
        self.__register_sub_command(sub_command, id)

def river_sources(world, water_flow, water_path):
    """Find places on map where sources of river can be found"""
    river_source_list = []

    # Using the wind and rainfall data, create river 'seeds' by
    # flowing rainfall along paths until a 'flow' threshold is reached
    # and we have a beginning of a river... trickle->stream->river->sea

    # step one: Using flow direction, follow the path for each cell
    #     adding the previous cell's flow to the current cell's flow.
    # step two: We loop through the water flow map looking for cells
    #     above the water flow threshold. These are our river sources and
    #     we mark them as rivers. While looking, the cells with no
    #     out-going flow, above water flow threshold and are still
    #     above sea level are marked as 'sources'.
    for y in range(0, world.height - 1):
        for x in range(0, world.width - 1):
            rain_fall = world.layers['precipitation'].data[y, x]
            water_flow[y, x] = rain_fall

            if water_path[y, x] == 0:
                continue  # ignore cells without flow direction
            cx, cy = x, y  # begin with starting location
            neighbour_seed_found = False
            # follow flow path to where it may lead
            while not neighbour_seed_found:

                # have we found a seed?
                if world.is_mountain((cx, cy)) and water_flow[cy, cx] >= RIVER_TH:

                    # try not to create seeds around other seeds
                    for seed in river_source_list:
                        sx, sy = seed
                        if in_circle(9, cx, cy, sx, sy):
                            neighbour_seed_found = True
                    if neighbour_seed_found:
                        break  # we do not want seeds for neighbors

                    river_source_list.append([cx, cy])  # river seed
                    break

                # no path means dead end...
                if water_path[cy, cx] == 0:
                    break  # break out of loop

                # follow path, add water flow from previous cell
                dx, dy = DIR_NEIGHBORS_CENTER[water_path[cy, cx]]
                nx, ny = cx + dx, cy + dy  # calculate next cell
                water_flow[ny, nx] += rain_fall
                cx, cy = nx, ny  # set current cell to next cell
    return river_source_list

def precision(Ntp, Nsys, eps=numpy.spacing(1)):
    """Precision.

    Wikipedia entry https://en.wikipedia.org/wiki/Precision_and_recall

    Parameters
    ----------
    Ntp : int >=0
        Number of true positives.

    Nsys : int >=0
        Amount of system output.

    eps : float
        eps.
        Default value numpy.spacing(1)

    Returns
    -------
    precision: float
        Precision
    """
    if Nsys == 0:
        return numpy.nan
    else:
        return float(Ntp / float(Nsys))

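A quick worked example, assuming numpy is in scope as the function above requires: with 8 true positives among 10 system outputs, precision is 8/10 = 0.8, and an empty system output yields NaN rather than a division error:

print(precision(Ntp=8, Nsys=10))  # 0.8: 8 of the 10 system outputs were correct
print(precision(Ntp=0, Nsys=0))   # nan: no system output at all, precision undefined
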
def get_class_alias(klass):
    """
    Tries to find a suitable L{pyamf.ClassAlias} subclass for C{klass}.
    """
    for k, v in pyamf.ALIAS_TYPES.iteritems():
        for kl in v:
            try:
                if issubclass(klass, kl):
                    return k
            except TypeError:
                # not a class
                if hasattr(kl, '__call__'):
                    if kl(klass) is True:
                        return k

def get_next_url(request, redirect_field_name):
    """Retrieves next url from request

    Note: This verifies that the url is safe before returning it. If the
    url is not safe, this returns None.

    :arg HttpRequest request: the http request
    :arg str redirect_field_name: the name of the field holding the next url

    :returns: safe url or None
    """
    next_url = request.GET.get(redirect_field_name)
    if next_url:
        kwargs = {
            'url': next_url,
            'require_https': import_from_settings(
                'OIDC_REDIRECT_REQUIRE_HTTPS', request.is_secure())
        }

        hosts = list(import_from_settings('OIDC_REDIRECT_ALLOWED_HOSTS', []))
        hosts.append(request.get_host())
        kwargs['allowed_hosts'] = hosts

        is_safe = is_safe_url(**kwargs)
        if is_safe:
            return next_url
    return None

def xyz_with_ports(self, arrnx3):
    """Set the positions of the particles in the Compound, including the Ports.

    Parameters
    ----------
    arrnx3 : np.ndarray, shape=(n,3), dtype=float
        The new particle positions
    """
    if not self.children:
        if not arrnx3.shape[0] == 1:
            raise ValueError(
                'Trying to set position of {} with more than one '
                'coordinate: {}'.format(self, arrnx3))
        self.pos = np.squeeze(arrnx3)
    else:
        for atom, coords in zip(
                self._particles(include_ports=True), arrnx3):
            atom.pos = coords

def post_grade2(self, grade, user=None, comment=''):
    """
    Post grade to LTI consumer using REST/JSON
    URL munging is related to:
    https://openedx.atlassian.net/browse/PLAT-281

    :param: grade: 0 <= grade <= 1
    :return: True if post successful and grade valid
    :exception: LTIPostMessageException if call failed
    """
    content_type = 'application/vnd.ims.lis.v2.result+json'
    if user is None:
        user = self.user_id
    lti2_url = self.response_url.replace(
        "/grade_handler",
        "/lti_2_0_result_rest_handler/user/{}".format(user))
    score = float(grade)
    if 0 <= score <= 1.0:
        body = json.dumps({
            "@context": "http://purl.imsglobal.org/ctx/lis/v2/Result",
            "@type": "Result",
            "resultScore": score,
            "comment": comment
        })
        ret = post_message2(self._consumers(), self.key,
                            lti2_url,
                            body,
                            method='PUT',
                            content_type=content_type)
        if not ret:
            raise LTIPostMessageException("Post Message Failed")
        return True

    return False

def _configure_manager(self):
    """
    Creates the Manager instance to handle networks.
    """
    self._manager = CloudNetworkManager(self, resource_class=CloudNetwork,
                                        response_key="network",
                                        uri_base="os-networksv2")

def info_label(self, indicator):
    """Set info label by given settings.

    Parameters
    ----------
    indicator : int
        A number where 0-8 is the number of mines in the surrounding
        fields, 9 is a flag, 10 a question mark, 11 an empty field and
        12 a mine field.
    """
    if indicator in xrange(1, 9):
        self.id = indicator
        self.setPixmap(QtGui.QPixmap(NUMBER_PATHS[indicator]).scaled(
            self.field_width, self.field_height))
    elif indicator == 0:
        self.id = 0
        self.setPixmap(QtGui.QPixmap(NUMBER_PATHS[0]).scaled(
            self.field_width, self.field_height))
    elif indicator == 12:
        self.id = 12
        self.setPixmap(QtGui.QPixmap(BOOM_PATH).scaled(self.field_width,
                                                       self.field_height))
        self.setStyleSheet("QLabel {background-color: black;}")
    elif indicator == 9:
        self.id = 9
        self.setPixmap(QtGui.QPixmap(FLAG_PATH).scaled(self.field_width,
                                                       self.field_height))
        self.setStyleSheet("QLabel {background-color: #A3C1DA;}")
    elif indicator == 10:
        self.id = 10
        self.setPixmap(QtGui.QPixmap(QUESTION_PATH).scaled(
            self.field_width, self.field_height))
        self.setStyleSheet("QLabel {background-color: yellow;}")
    elif indicator == 11:
        self.id = 11
        self.setPixmap(QtGui.QPixmap(EMPTY_PATH).scaled(
            self.field_width*3, self.field_height*3))
        self.setStyleSheet('QLabel {background-color: blue;}')

def to_digital(d, num):
    """
    Base conversion: convert a base-10 integer to the given base.

    :param d: the base-10 number to convert
    :param num: the target base (2 through 9)
    :return: the converted number as a string
    """
    if not isinstance(num, int) or not 1 < num < 10:
        raise ValueError('digital num must between 1 and 10')
    d = int(d)
    result = []
    x = d % num
    d = d - x
    result.append(str(x))
    while d > 0:
        d = d // num
        x = d % num
        d = d - x
        result.append(str(x))
    return ''.join(result[::-1])

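A usage sketch for the converter above, cross-checked against Python's own base parsing:

print(to_digital(10, 2))   # '1010'  (10 in binary)
print(to_digital(255, 8))  # '377'   (255 in octal)
# int() with an explicit base inverts the conversion:
assert int(to_digital(123, 7), 7) == 123
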
def find_doc(self, name=None, ns_uri=None, first_only=False):
    """
    Find :class:`Element` node descendants of the document containing
    this node, with optional constraints to limit the results.

    Delegates to :meth:`find` applied to this node's owning document.
    """
    return self.document.find(name=name, ns_uri=ns_uri, first_only=first_only)

def is_first_root(self):
    """Return ``True`` if this page is the first root page."""
    if self.parent:
        return False

    if self._is_first_root is not None:
        return self._is_first_root

    first_root_id = cache.get('PAGE_FIRST_ROOT_ID')
    if first_root_id is not None:
        self._is_first_root = first_root_id == self.id
        return self._is_first_root

    try:
        first_root_id = Page.objects.root().values('id')[0]['id']
    except IndexError:
        first_root_id = None

    if first_root_id is not None:
        cache.set('PAGE_FIRST_ROOT_ID', first_root_id)

    self._is_first_root = self.id == first_root_id
    return self._is_first_root

def revoke_all(self, paths: Union[str, Iterable[str]], recursive: bool = False):
    """
    See `AccessControlMapper.revoke_all`.

    :param paths: see `AccessControlMapper.revoke_all`
    :param recursive: whether the access control list should be changed
        recursively for all nested collections
    """

def append_note(self, player, text):
    """Append text to an already existing note."""
    note = self._find_note(player)
    note.text += text

def init_app(self, app):
    """
    Register this extension with the flask app

    :param app: A flask application
    """
    # Save this so we can use it later in the extension
    if not hasattr(app, 'extensions'):  # pragma: no cover
        app.extensions = {}
    app.extensions['flask-jwt-simple'] = self

    # Set all the default configurations for this extension
    self._set_default_configuration_options(app)
    self._set_error_handler_callbacks(app)

    # Set propagate exceptions, so all of our error handlers properly
    # work in production
    app.config['PROPAGATE_EXCEPTIONS'] = True

def filter_any_above_threshold(
        self,
        multi_key_fn,
        value_dict,
        threshold,
        default_value=0.0):
    """Like filter_above_threshold but `multi_key_fn` returns multiple
    keys and the element is kept if any of them have a value above the
    given threshold.

    Parameters
    ----------
    multi_key_fn : callable
        Given an element of this collection, returns multiple keys
        into `value_dict`

    value_dict : dict
        Dict from keys returned by `multi_key_fn` to float values

    threshold : float
        Only keep elements whose value in `value_dict` is above this
        threshold.

    default_value : float
        Value to use for elements whose key is not in `value_dict`
    """
    def filter_fn(x):
        for key in multi_key_fn(x):
            value = value_dict.get(key, default_value)
            if value > threshold:
                return True
        return False
    return self.filter(filter_fn)

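The keep-if-any-key-exceeds logic can be sketched standalone; the records, the expression dict, and the gene names below are all invented for the example:

records = [{"genes": ["TP53", "KRAS"]}, {"genes": ["BRCA2"]}]
expression = {"TP53": 0.9, "BRCA2": 0.1}  # KRAS missing -> falls back to default_value

def any_above(record, threshold=0.5, default_value=0.0):
    # record["genes"] plays the role of multi_key_fn(record)
    return any(expression.get(g, default_value) > threshold
               for g in record["genes"])

print([r for r in records if any_above(r)])  # keeps only the TP53/KRAS record
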
def get_randomized_guid_sample(self, item_count):
    """ Fetch a subset of randomized GUIDs from the whitelist """
    dataset = self.get_whitelist()
    random.shuffle(dataset)
    return dataset[:item_count]

def apex(self, axis):
    '''
    Find the most extreme vertex in the direction of the axis provided.

    axis: A vector, which is a 3x1 np.array.
    '''
    from blmath.geometry.apex import apex
    return apex(self.v, axis)

def categorization(self, domains, labels=False):
    '''Get the domain status and categorization of a domain or list of domains.
    'domains' can be either a single domain, or a list of domains.
    Setting 'labels' to True will give back categorizations in human-readable
    form.

    For more detail, see https://investigate.umbrella.com/docs/api#categorization
    '''
    if type(domains) is str:
        return self._get_categorization(domains, labels)
    elif type(domains) is list:
        return self._post_categorization(domains, labels)
    else:
        raise Investigate.DOMAIN_ERR

def set_pid_params(self, *args, **kwargs):
    '''Set PID parameters for all joints in the skeleton.

    Parameters for this method are passed directly to the `pid`
    constructor.
    '''
    for joint in self.joints:
        joint.target_angles = [None] * joint.ADOF
        joint.controllers = [pid(*args, **kwargs) for i in range(joint.ADOF)]

def datetime(self, start: int = 2000, end: int = 2035,
             timezone: Optional[str] = None) -> DateTime:
    """Generate random datetime.

    :param start: Minimum value of year.
    :param end: Maximum value of year.
    :param timezone: Set custom timezone (pytz required).
    :return: Datetime
    """
    datetime_obj = datetime.combine(
        date=self.date(start, end),
        time=self.time(),
    )
    if timezone:
        if not pytz:
            raise ImportError('Timezones are supported only with pytz')
        tz = pytz.timezone(timezone)
        datetime_obj = tz.localize(datetime_obj)

    return datetime_obj

def _default_buffer_pos_changed(self, _):
    """ When the cursor changes in the default buffer. Synchronize with
    history buffer. """
    # Only when this buffer has the focus.
    if self.app.current_buffer == self.default_buffer:
        try:
            line_no = self.default_buffer.document.cursor_position_row - \
                self.history_mapping.result_line_offset

            if line_no < 0:  # When the cursor is above the inserted region.
                raise IndexError

            history_lineno = sorted(self.history_mapping.selected_lines)[line_no]
        except IndexError:
            pass
        else:
            self.history_buffer.cursor_position = \
                self.history_buffer.document.translate_row_col_to_index(history_lineno, 0)

def create_label(self, name, justify=Gtk.Justification.CENTER, wrap_mode=True, tooltip=None):
    """
    The function is used for creating a label with HTML text
    """
    label = Gtk.Label()
    name = name.replace('|', '\n')
    label.set_markup(name)
    label.set_justify(justify)
    label.set_line_wrap(wrap_mode)
    if tooltip is not None:
        label.set_has_tooltip(True)
        label.connect("query-tooltip", self.parent.tooltip_queries, tooltip)
    return label

def get_related_models(cls, model):
    """ Get a dictionary with related structure models for given class or model:

    >> SupportedServices.get_related_models(gitlab_models.Project)
    {
        'service': nodeconductor_gitlab.models.GitLabService,
        'service_project_link': nodeconductor_gitlab.models.GitLabServiceProjectLink,
        'resources': [
            nodeconductor_gitlab.models.Group,
            nodeconductor_gitlab.models.Project,
        ]
    }
    """
    from waldur_core.structure.models import ServiceSettings

    if isinstance(model, ServiceSettings):
        model_str = cls._registry.get(model.type, {}).get('model_name', '')
    else:
        model_str = cls._get_model_str(model)

    for models in cls.get_service_models().values():
        if model_str == cls._get_model_str(models['service']) or \
                model_str == cls._get_model_str(models['service_project_link']):
            return models

        for resource_model in models['resources']:
            if model_str == cls._get_model_str(resource_model):
                return models

def update(self, **kwargs):
    """Update `params` values using alias.
    """
    for k in self.prior_params:
        try:
            self.params[k] = kwargs[self.alias[k]]
        except KeyError:
            pass

def set_source_morphology(self, name, **kwargs):
    """Set the spatial model of a source.

    Parameters
    ----------
    name : str
        Source name.

    spatial_model : str
        Spatial model name (PointSource, RadialGaussian, etc.).

    spatial_pars : dict
        Dictionary of spatial parameters (optional).

    use_cache : bool
        Generate the spatial model by interpolating the cached source map.

    use_pylike : bool
    """
    name = self.roi.get_source_by_name(name).name

    src = self.roi[name]
    spatial_model = kwargs.get('spatial_model', src['SpatialModel'])
    spatial_pars = kwargs.get('spatial_pars', {})
    use_pylike = kwargs.get('use_pylike', True)
    psf_scale_fn = kwargs.get('psf_scale_fn', None)
    update_source = kwargs.get('update_source', False)

    if hasattr(pyLike.BinnedLikelihood, 'setSourceMapImage') and not use_pylike:
        src.set_spatial_model(spatial_model, spatial_pars)
        self._update_srcmap(src.name, src, psf_scale_fn=psf_scale_fn)
    else:
        src = self.delete_source(name, loglevel=logging.DEBUG,
                                 save_template=False)
        src.set_spatial_model(spatial_model, spatial_pars)
        self.add_source(src.name, src, init_source=False,
                        use_pylike=use_pylike, loglevel=logging.DEBUG)

    if update_source:
        self.update_source(name)

def disconnect(self, code):
    """Called when WebSocket connection is closed."""
    Subscriber.objects.filter(session_id=self.session_id).delete()

def page_not_found(request, template_name='404.html'):
    """
    Custom page not found (404) handler.

    Don't raise a Http404 or anything like that in here otherwise you will
    cause an infinite loop. That would be bad.

    If no ResponsePage exists with type ``RESPONSE_HTTP404`` then the
    default template render view will be used.

    Templates: :template:`404.html`
    Context:
        request_path
            The path of the requested URL (e.g., '/app/pages/bad_page/')
        page
            A ResponsePage with type ``RESPONSE_HTTP404`` if it exists.
    """
    rendered_page = get_response_page(
        request,
        http.HttpResponseNotFound,
        'icekit/response_pages/404.html',
        abstract_models.RESPONSE_HTTP404
    )
    if rendered_page is None:
        return defaults.page_not_found(request, template_name)
    return rendered_page

def prepare_hooks(self, hooks):
    """Prepares the given hooks."""
    # hooks can be passed as None to the prepare method and to this
    # method. To prevent iterating over None, simply use an empty list
    # if hooks is False-y
    hooks = hooks or []
    for event in hooks:
        self.register_hook(event, hooks[event])

def _getInputValue(self, obj, fieldName):
    """
    Gets the value of a given field from the input record
    """
    if isinstance(obj, dict):
        if not fieldName in obj:
            knownFields = ", ".join(
                key for key in obj.keys() if not key.startswith("_")
            )
            raise ValueError(
                "Unknown field name '%s' in input record. Known fields are '%s'.\n"
                "This could be because input headers are mislabeled, or because "
                "input data rows do not contain a value for '%s'." % (
                    fieldName, knownFields, fieldName
                )
            )
        return obj[fieldName]
    else:
        return getattr(obj, fieldName)

def getLogLevelNo(level):
    """Return numerical log level or raise ValueError.

    A valid level is either an integer or a string such as WARNING etc.
    """
    if isinstance(level, (int, long)):
        return level
    try:
        return(int(logging.getLevelName(level.upper())))
    except:
        raise ValueError('illegal loglevel %s' % level)

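The lookup relies on logging.getLevelName being reversible for registered names: given a known level name it returns the numeric value, and given anything else it returns a "Level X" string that fails int(). A small Python 3 illustration (the long check dropped and the bare except narrowed for the sketch):

import logging

def level_no(level):
    if isinstance(level, int):
        return level
    try:
        return int(logging.getLevelName(level.upper()))
    except ValueError:
        raise ValueError('illegal loglevel %s' % level)

print(level_no('warning'))  # 30
print(level_no(10))         # 10 (already numeric, passed through)
# level_no('nonsense') raises ValueError: getLevelName returns 'Level NONSENSE'
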
def _write(self, request):
    """Actually serialize and write the request."""
    with sw("serialize_request"):
        request_str = request.SerializeToString()
    with sw("write_request"):
        with catch_websocket_connection_errors():
            self._sock.send(request_str)

def propagate_cols_up(self, cols, target_df_name, source_df_name):
    """
    Take values from source table, compile them into a colon-delimited
    list, and apply them to the target table.
    This method won't overwrite values in the target table, it will only
    supply values where they are missing.

    Parameters
    ----------
    cols : list-like
        list of columns to propagate
    target_df_name : str
        name of table to propagate values into
    source_df_name:
        name of table to propagate values from

    Returns
    ---------
    target_df : MagicDataFrame
        updated MagicDataFrame with propagated values
    """
    print("-I- Trying to propagate {} columns from {} table into {} table".format(cols, source_df_name, target_df_name))
    # make sure target table is read in
    if target_df_name not in self.tables:
        self.add_magic_table(target_df_name)
    if target_df_name not in self.tables:
        print("-W- Couldn't read in {} table".format(target_df_name))
        return
    # make sure source table is read in
    if source_df_name not in self.tables:
        self.add_magic_table(source_df_name)
    if source_df_name not in self.tables:
        print("-W- Couldn't read in {} table".format(source_df_name))
        return
    target_df = self.tables[target_df_name]
    source_df = self.tables[source_df_name]
    target_name = target_df_name[:-1]
    # make sure source_df has relevant columns
    for col in cols:
        if col not in source_df.df.columns:
            source_df.df[col] = None
    # if target_df has info, propagate that into all rows
    target_df.front_and_backfill(cols)
    # make sure target_name is in source_df for merging
    if target_name not in source_df.df.columns:
        print("-W- You can't merge data from {} table into {} table".format(source_df_name, target_df_name))
        print("    Your {} table is missing {} column".format(source_df_name, target_name))
        self.tables[target_df_name] = target_df
        return target_df
    source_df.front_and_backfill([target_name])
    # group source df by target_name
    grouped = source_df.df.groupby(source_df.df[target_name])
    if not len(grouped):
        print("-W- Couldn't propagate from {} to {}".format(source_df_name, target_df_name))
        return target_df

    # function to generate capitalized, sorted, colon-delimited list
    # of unique, non-null values from a column
    def func(group, col_name):
        lst = group[col_name][group[col_name].notnull()].unique()
        split_lst = [col.split(':') for col in lst if col]
        sorted_lst = sorted(np.unique([item.capitalize() for sublist in split_lst for item in sublist]))
        group_col = ":".join(sorted_lst)
        return group_col

    # apply func to each column
    for col in cols:
        res = grouped.apply(func, col)
        target_df.df['new_' + col] = res
        target_df.df[col] = np.where(target_df.df[col],
                                     target_df.df[col],
                                     target_df.df['new_' + col])
        target_df.df.drop(['new_' + col], axis='columns', inplace=True)
    # set table
    self.tables[target_df_name] = target_df
    return target_df

def delete_project(self, owner, id, **kwargs): """ Delete a project Permanently deletes a project and all data associated with it. This operation cannot be undone, although a new project may be created with the same id. This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.delete_project(owner, id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param str owner: User name and unique identifier of the creator of a project. For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), government is the unique identifier of the owner. (required) :param str id: Project unique identifier. For example, in the URL:[https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), how-to-add-depth-to-your-data-with-the-us-census-acs is the unique identifier of the project. (required) :return: SuccessMessage If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('callback'): return self.delete_project_with_http_info(owner, id, **kwargs) else: (data) = self.delete_project_with_http_info(owner, id, **kwargs) return data
Delete a project Permanently deletes a project and all data associated with it. This operation cannot be undone, although a new project may be created with the same id. This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.delete_project(owner, id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param str owner: User name and unique identifier of the creator of a project. For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), government is the unique identifier of the owner. (required) :param str id: Project unique identifier. For example, in the URL:[https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), how-to-add-depth-to-your-data-with-the-us-census-acs is the unique identifier of the project. (required) :return: SuccessMessage If the method is called asynchronously, returns the request thread.
def bulkMinuteBars(symbol, dates, token='', version=''): '''fetch many dates worth of minute-bars for a given symbol''' _raiseIfNotStr(symbol) dates = [_strOrDate(date) for date in dates] list_orig = dates.__class__ args = [] for date in dates: args.append((symbol, '1d', date, token, version)) pool = ThreadPool(20) rets = pool.starmap(chart, args) pool.close() return list_orig(itertools.chain(*rets))
Fetch many dates' worth of minute-bars for a given symbol
def get_file(self, filename): """ Return the raw data of the specified filename inside the APK :rtype: bytes """ try: return self.zip.read(filename) except KeyError: raise FileNotPresent(filename)
Return the raw data of the specified filename inside the APK :rtype: bytes
def convert_uv(pinyin):
    """Convert ü back to the original final.

    When a final from the ü row follows the initials j, q or x, it is written
    as ju (居), qu (区) or xu (虚), and the two dots over ü are omitted; after
    the initials n and l it is still written as nü (女) or lü (吕).
    """
    return UV_RE.sub(
        lambda m: ''.join((m.group(1), UV_MAP[m.group(2)], m.group(3))),
        pinyin)
Convert ü back to the original final. When a final from the ü row follows the initials j, q or x, it is written as ju (居), qu (区) or xu (虚), and the two dots over ü are omitted; after the initials n and l it is still written as nü (女) or lü (吕).
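convert_uv depends on the module-level UV_RE and UV_MAP, which are not shown here. A self-contained sketch with plausible stand-ins (an assumption; the library's real table also covers the tone-marked vowels):

# -*- coding: utf-8 -*-
import re

# Hypothetical stand-ins for the module globals: after j/q/x a written "u"
# really means "ü" (assumption, not the library's actual definitions).
UV_MAP = {'u': 'ü', 'ū': 'ǖ', 'ú': 'ǘ', 'ǔ': 'ǚ', 'ù': 'ǜ'}
UV_RE = re.compile('^(j|q|x)({})(.*)$'.format('|'.join(UV_MAP)))

def restore(pinyin):
    return UV_RE.sub(
        lambda m: ''.join((m.group(1), UV_MAP[m.group(2)], m.group(3))),
        pinyin)

print(restore('ju'))   # jü -- the omitted dots are restored after j
print(restore('nü'))   # nü -- unchanged: n and l already keep the dots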
def has_own_property(self, attr):
    """ Returns True if the object has the given attribute, False otherwise """
    try:
        object.__getattribute__(self, attr)
    except AttributeError:
        return False
    else:
        return True
Returns True if the object has the given attribute, False otherwise
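The whole check reduces to a guarded attribute lookup. Note that object.__getattribute__ also finds class-level and inherited attributes, so despite the name this is broader than JavaScript's hasOwnProperty; a strict "own" check would consult obj.__dict__. A standalone illustration of the same idiom:

class Node(object):
    kind = 'node'          # class attribute: also found by the lookup

    def __init__(self):
        self.value = 1     # instance attribute

def has_attr(obj, attr):
    # Same shape as has_own_property: probe, map AttributeError to False.
    try:
        object.__getattribute__(obj, attr)
    except AttributeError:
        return False
    return True

n = Node()
print(has_attr(n, 'value'))    # True
print(has_attr(n, 'kind'))     # True -- class attribute, not strictly "own"
print(has_attr(n, 'missing'))  # False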
def resolve_object_number(self, ref):
    """Resolve a variety of object numbers to a dataset number"""
    if not isinstance(ref, ObjectNumber):
        on = ObjectNumber.parse(ref)
    else:
        on = ref
    ds_on = on.as_dataset
    return ds_on
Resolve a variety of object numbers to a dataset number
def count_missense_per_gene(lines): """ count the number of missense variants in each gene. """ counts = {} for x in lines: x = x.split("\t") gene = x[0] consequence = x[3] if gene not in counts: counts[gene] = 0 if consequence != "missense_variant": continue counts[gene] += 1 return counts
count the number of missense variants in each gene.
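count_missense_per_gene expects tab-separated rows with the gene symbol in column 0 and the VEP consequence in column 3. A sketch on synthetic rows (fabricated coordinates, illustration only), with an equivalent collections.Counter formulation:

from collections import Counter

lines = [
    "BRCA1\tchr17\t41244000\tmissense_variant",
    "BRCA1\tchr17\t41245000\tsynonymous_variant",
    "TP53\tchr17\t7578000\tmissense_variant",
]

rows = [line.split("\t") for line in lines]
counts = Counter(r[0] for r in rows if r[3] == "missense_variant")
print(counts)  # Counter({'BRCA1': 1, 'TP53': 1})
# Difference from the original: genes with zero missense variants get no key
# here, but Counter lookups default to 0, so counts[gene] still agrees.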
def describe_constructor(self, s):
    """
    Describe the input byte sequence (constructor arguments) s based on the
    loaded contract ABI definition

    :param s: bytes constructor arguments
    :return: AbiMethod instance
    """
    method = self.signatures.get(b"__constructor__")
    if not method:
        # constructor not available
        m = AbiMethod({"type": "constructor", "name": "",
                       "inputs": [], "outputs": []})
        return m

    types_def = method["inputs"]
    types = [t["type"] for t in types_def]
    names = [t["name"] for t in types_def]

    if not len(s):
        values = len(types) * ["<nA>"]
    else:
        values = decode_abi(types, s)

    # (type, name, data)
    method.inputs = [{"type": t, "name": n, "data": v}
                     for t, n, v in list(zip(types, names, values))]
    return method
Describe the input byte sequence (constructor arguments) s based on the loaded contract ABI definition :param s: bytes constructor arguments :return: AbiMethod instance
def head(self, n=5): """Get the first n rows of the DataFrame. Args: n (int): The number of rows to return. Returns: A new DataFrame with the first n rows of the DataFrame. """ if n >= len(self.index): return self.copy() return self.__constructor__(query_compiler=self._query_compiler.head(n))
Get the first n rows of the DataFrame. Args: n (int): The number of rows to return. Returns: A new DataFrame with the first n rows of the DataFrame.
def lineMatchingPattern(pattern, lines):
    """
    Searches through the specified list of strings and returns the regular
    expression match for the first line that matches the specified
    pre-compiled regex pattern, or None if no match was found.

    Note: if you are using a regex pattern string (i.e. not already compiled),
    use lineMatching() instead.

    :type pattern: Compiled regular expression pattern to use
    :type lines: List of lines to search
    :return: the regular expression match for the first line that matches the
             specified regex, or None if no match was found
    :rtype: re.Match
    """
    for line in lines:
        m = pattern.match(line)
        if m:
            return m
    return None
Searches through the specified list of strings and returns the regular expression match for the first line that matches the specified pre-compiled regex pattern, or None if no match was found Note: if you are using a regex pattern string (i.e. not already compiled), use lineMatching() instead :type pattern: Compiled regular expression pattern to use :type lines: List of lines to search :return: the regular expression match for the first line that matches the specified regex, or None if no match was found :rtype: re.Match
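A quick usage sketch for lineMatchingPattern with a pre-compiled pattern (hypothetical input lines); the next() expression below is the equivalent stdlib idiom:

import re

pattern = re.compile(r"version:\s*(\d+\.\d+\.\d+)")
lines = ["name: demo", "version: 1.2.3", "version: 9.9.9"]

# First matching line wins, None if nothing matches -- the same contract.
m = next((pattern.match(l) for l in lines if pattern.match(l)), None)
print(m.group(1) if m else None)  # 1.2.3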
def create_tables(self, tables): """Creates database tables in sqlite lookup db""" cursor = self.get_cursor() for table in tables: columns = mslookup_tables[table] try: cursor.execute('CREATE TABLE {0}({1})'.format( table, ', '.join(columns))) except sqlite3.OperationalError as error: print(error) print('Warning: Table {} already exists in database, will ' 'add to existing tables instead of creating ' 'new.'.format(table)) else: self.conn.commit()
Creates database tables in sqlite lookup db
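create_tables reads the column definitions from a module-level mslookup_tables mapping that is not shown here. A self-contained sketch against an in-memory SQLite database, with a hypothetical mapping standing in:

import sqlite3

# Hypothetical stand-in for the module's mslookup_tables (assumption).
mslookup_tables = {
    'proteins': ['protein_acc TEXT', 'sequence TEXT'],
    'psms': ['psm_id INTEGER PRIMARY KEY', 'protein_acc TEXT'],
}

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
for table, columns in mslookup_tables.items():
    try:
        cursor.execute('CREATE TABLE {0}({1})'.format(
            table, ', '.join(columns)))
    except sqlite3.OperationalError as error:
        print('Warning: table {} already exists ({})'.format(table, error))
    else:
        conn.commit()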
def invoke(*args, **kwargs): """Invokes a command callback in exactly the way it expects. There are two ways to invoke this method: 1. the first argument can be a callback and all other arguments and keyword arguments are forwarded directly to the function. 2. the first argument is a click command object. In that case all arguments are forwarded as well but proper click parameters (options and click arguments) must be keyword arguments and Click will fill in defaults. Note that before Click 3.2 keyword arguments were not properly filled in against the intention of this code and no context was created. For more information about this change and why it was done in a bugfix release see :ref:`upgrade-to-3.2`. """ self, callback = args[:2] ctx = self # It's also possible to invoke another command which might or # might not have a callback. In that case we also fill # in defaults and make a new context for this command. if isinstance(callback, Command): other_cmd = callback callback = other_cmd.callback ctx = Context(other_cmd, info_name=other_cmd.name, parent=self) if callback is None: raise TypeError('The given command does not have a ' 'callback that can be invoked.') for param in other_cmd.params: if param.name not in kwargs and param.expose_value: kwargs[param.name] = param.get_default(ctx) args = args[2:] with augment_usage_errors(self): with ctx: return callback(*args, **kwargs)
Invokes a command callback in exactly the way it expects. There are two ways to invoke this method: 1. the first argument can be a callback and all other arguments and keyword arguments are forwarded directly to the function. 2. the first argument is a click command object. In that case all arguments are forwarded as well but proper click parameters (options and click arguments) must be keyword arguments and Click will fill in defaults. Note that before Click 3.2 keyword arguments were not properly filled in against the intention of this code and no context was created. For more information about this change and why it was done in a bugfix release see :ref:`upgrade-to-3.2`.
def _command_line(): # pragma: no cover pylint: disable=too-many-branches,too-many-statements """ Provide the command line interface. """ if __name__ == "PyFunceble": # We initiate the end of the coloration at the end of each line. initiate(autoreset=True) # We load the configuration and the directory structure. load_config(True) try: # The following handle the command line argument. try: PARSER = argparse.ArgumentParser( epilog="Crafted with %s by %s" % ( Fore.RED + "♥" + Fore.RESET, Style.BRIGHT + Fore.CYAN + "Nissar Chababy (Funilrys) " + Style.RESET_ALL + "with the help of " + Style.BRIGHT + Fore.GREEN + "https://pyfunceble.rtfd.io/en/master/contributors.html " + Style.RESET_ALL + "&& " + Style.BRIGHT + Fore.GREEN + "https://pyfunceble.rtfd.io/en/master/special-thanks.html", ), add_help=False, ) CURRENT_VALUE_FORMAT = ( Fore.YELLOW + Style.BRIGHT + "Configured value: " + Fore.BLUE ) PARSER.add_argument( "-ad", "--adblock", action="store_true", help="Switch the decoding of the adblock format. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["adblock"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-a", "--all", action="store_false", help="Output all available information on the screen. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["less"]) + Style.RESET_ALL ), ) PARSER.add_argument( "" "-c", "--auto-continue", "--continue", action="store_true", help="Switch the value of the auto continue mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["auto_continue"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--autosave-minutes", type=int, help="Update the minimum of minutes before we start " "committing to upstream under Travis CI. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["travis_autosave_minutes"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--clean", action="store_true", help="Clean all files under output." ) PARSER.add_argument( "--clean-all", action="store_true", help="Clean all files under output and all file generated by PyFunceble.", ) PARSER.add_argument( "--cmd", type=str, help="Pass a command to run before each commit " "(except the final one) under the Travis mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["command_before_end"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--cmd-before-end", type=str, help="Pass a command to run before the results " "(final) commit under the Travis mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["command_before_end"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--commit-autosave-message", type=str, help="Replace the default autosave commit message. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["travis_autosave_commit"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--commit-results-message", type=str, help="Replace the default results (final) commit message. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["travis_autosave_final_commit"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-d", "--domain", type=str, help="Set and test the given domain." ) PARSER.add_argument( "-db", "--database", action="store_true", help="Switch the value of the usage of a database to store " "inactive domains of the currently tested list. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["inactive_database"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-dbr", "--days-between-db-retest", type=int, help="Set the numbers of days between each retest of domains present " "into inactive-db.json. 
%s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["days_between_db_retest"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--debug", action="store_true", help="Switch the value of the debug mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["debug"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--directory-structure", action="store_true", help="Generate the directory and files that are needed and which does " "not exist in the current directory.", ) PARSER.add_argument( "-ex", "--execution", action="store_true", help="Switch the default value of the execution time showing. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["show_execution_time"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-f", "--file", type=str, help="Read the given file and test all domains inside it. " "If a URL is given we download and test the content of the given URL.", # pylint: disable=line-too-long ) PARSER.add_argument( "--filter", type=str, help="Domain to filter (regex)." ) PARSER.add_argument( "--help", action="help", default=argparse.SUPPRESS, help="Show this help message and exit.", ) PARSER.add_argument( "--hierarchical", action="store_true", help="Switch the value of the hierarchical sorting of the tested file. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["hierarchical_sorting"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-h", "--host", action="store_true", help="Switch the value of the generation of hosts file. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["generate_hosts"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--http", action="store_true", help="Switch the value of the usage of HTTP code. %s" % ( CURRENT_VALUE_FORMAT + repr(HTTP_CODE["active"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--iana", action="store_true", help="Update/Generate `iana-domains-db.json`.", ) PARSER.add_argument( "--idna", action="store_true", help="Switch the value of the IDNA conversion. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["idna_conversion"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-ip", type=str, help="Change the IP to print in the hosts files with the given one. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["custom_ip"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--json", action="store_true", help="Switch the value of the generation " "of the JSON formatted list of domains. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["generate_json"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--less", action="store_true", help="Output less informations on screen. %s" % ( CURRENT_VALUE_FORMAT + repr(Core.switch("less")) + Style.RESET_ALL ), ) PARSER.add_argument( "--local", action="store_true", help="Switch the value of the local network testing. %s" % ( CURRENT_VALUE_FORMAT + repr(Core.switch("local")) + Style.RESET_ALL ), ) PARSER.add_argument( "--link", type=str, help="Download and test the given file." ) PARSER.add_argument( "-m", "--mining", action="store_true", help="Switch the value of the mining subsystem usage. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["mining"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-n", "--no-files", action="store_true", help="Switch the value of the production of output files. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["no_files"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-nl", "--no-logs", action="store_true", help="Switch the value of the production of logs files " "in the case we encounter some errors. 
%s" % ( CURRENT_VALUE_FORMAT + repr(not CONFIGURATION["logs"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-ns", "--no-special", action="store_true", help="Switch the value of the usage of the SPECIAL rules. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["no_special"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-nu", "--no-unified", action="store_true", help="Switch the value of the production unified logs " "under the output directory. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["unified"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-nw", "--no-whois", action="store_true", help="Switch the value the usage of whois to test domain's status. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["no_whois"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-p", "--percentage", action="store_true", help="Switch the value of the percentage output mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["show_percentage"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--plain", action="store_true", help="Switch the value of the generation " "of the plain list of domains. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["plain_list_domain"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--production", action="store_true", help="Prepare the repository for production.", ) PARSER.add_argument( "-psl", "--public-suffix", action="store_true", help="Update/Generate `public-suffix.json`.", ) PARSER.add_argument( "-q", "--quiet", action="store_true", help="Run the script in quiet mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["quiet"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--share-logs", action="store_true", help="Switch the value of the sharing of logs. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["share_logs"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-s", "--simple", action="store_true", help="Switch the value of the simple output mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["simple"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--split", action="store_true", help="Switch the value of the split of the generated output files. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["inactive_database"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--syntax", action="store_true", help="Switch the value of the syntax test mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["syntax"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-t", "--timeout", type=int, default=3, help="Switch the value of the timeout. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["seconds_before_http_timeout"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--travis", action="store_true", help="Switch the value of the Travis mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["travis"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--travis-branch", type=str, default="master", help="Switch the branch name where we are going to push. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["travis_branch"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-u", "--url", type=str, help="Analyze the given URL." ) PARSER.add_argument( "-uf", "--url-file", type=str, help="Read and test the list of URL of the given file. 
" "If a URL is given we download and test the content of the given URL.", # pylint: disable=line-too-long ) PARSER.add_argument( "-ua", "--user-agent", type=str, help="Set the user-agent to use and set every time we " "interact with everything which is not our logs sharing system.", # pylint: disable=line-too-long ) PARSER.add_argument( "-v", "--version", help="Show the version of PyFunceble and exit.", action="version", version="%(prog)s " + VERSION, ) PARSER.add_argument( "-vsc", "--verify-ssl-certificate", action="store_true", help="Switch the value of the verification of the " "SSL/TLS certificate when testing for URL. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["verify_ssl_certificate"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-wdb", "--whois-database", action="store_true", help="Switch the value of the usage of a database to store " "whois data in order to avoid whois servers rate limit. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["whois_database"]) + Style.RESET_ALL ), ) ARGS = PARSER.parse_args() if ARGS.less: CONFIGURATION.update({"less": ARGS.less}) elif not ARGS.all: CONFIGURATION.update({"less": ARGS.all}) if ARGS.adblock: CONFIGURATION.update({"adblock": Core.switch("adblock")}) if ARGS.auto_continue: CONFIGURATION.update( {"auto_continue": Core.switch("auto_continue")} ) if ARGS.autosave_minutes: CONFIGURATION.update( {"travis_autosave_minutes": ARGS.autosave_minutes} ) if ARGS.clean: Clean(None) if ARGS.clean_all: Clean(None, ARGS.clean_all) if ARGS.cmd: CONFIGURATION.update({"command": ARGS.cmd}) if ARGS.cmd_before_end: CONFIGURATION.update({"command_before_end": ARGS.cmd_before_end}) if ARGS.commit_autosave_message: CONFIGURATION.update( {"travis_autosave_commit": ARGS.commit_autosave_message} ) if ARGS.commit_results_message: CONFIGURATION.update( {"travis_autosave_final_commit": ARGS.commit_results_message} ) if ARGS.database: CONFIGURATION.update( {"inactive_database": Core.switch("inactive_database")} ) if ARGS.days_between_db_retest: CONFIGURATION.update( {"days_between_db_retest": ARGS.days_between_db_retest} ) if ARGS.debug: CONFIGURATION.update({"debug": Core.switch("debug")}) if ARGS.directory_structure: DirectoryStructure() if ARGS.execution: CONFIGURATION.update( {"show_execution_time": Core.switch("show_execution_time")} ) if ARGS.filter: CONFIGURATION.update({"filter": ARGS.filter}) if ARGS.hierarchical: CONFIGURATION.update( {"hierarchical_sorting": Core.switch("hierarchical_sorting")} ) if ARGS.host: CONFIGURATION.update( {"generate_hosts": Core.switch("generate_hosts")} ) if ARGS.http: HTTP_CODE.update({"active": Core.switch(HTTP_CODE["active"], True)}) if ARGS.iana: IANA().update() if ARGS.idna: CONFIGURATION.update( {"idna_conversion": Core.switch("idna_conversion")} ) if ARGS.ip: CONFIGURATION.update({"custom_ip": ARGS.ip}) if ARGS.json: CONFIGURATION.update( {"generate_json": Core.switch("generate_json")} ) if ARGS.local: CONFIGURATION.update({"local": Core.switch("local")}) if ARGS.mining: CONFIGURATION.update({"mining": Core.switch("mining")}) if ARGS.no_files: CONFIGURATION.update({"no_files": Core.switch("no_files")}) if ARGS.no_logs: CONFIGURATION.update({"logs": Core.switch("logs")}) if ARGS.no_special: CONFIGURATION.update({"no_special": Core.switch("no_special")}) if ARGS.no_unified: CONFIGURATION.update({"unified": Core.switch("unified")}) if ARGS.no_whois: CONFIGURATION.update({"no_whois": Core.switch("no_whois")}) if ARGS.percentage: CONFIGURATION.update( {"show_percentage": Core.switch("show_percentage")} ) if ARGS.plain: 
CONFIGURATION.update( {"plain_list_domain": Core.switch("plain_list_domain")} ) if ARGS.production: Production() if ARGS.public_suffix: PublicSuffix().update() if ARGS.quiet: CONFIGURATION.update({"quiet": Core.switch("quiet")}) if ARGS.share_logs: CONFIGURATION.update({"share_logs": Core.switch("share_logs")}) if ARGS.simple: CONFIGURATION.update( {"simple": Core.switch("simple"), "quiet": Core.switch("quiet")} ) if ARGS.split: CONFIGURATION.update({"split": Core.switch("split")}) if ARGS.syntax: CONFIGURATION.update({"syntax": Core.switch("syntax")}) if ARGS.timeout and ARGS.timeout % 3 == 0: CONFIGURATION.update({"seconds_before_http_timeout": ARGS.timeout}) if ARGS.travis: CONFIGURATION.update({"travis": Core.switch("travis")}) if ARGS.travis_branch: CONFIGURATION.update({"travis_branch": ARGS.travis_branch}) if ARGS.user_agent: CONFIGURATION.update({"user_agent": ARGS.user_agent}) if ARGS.verify_ssl_certificate: CONFIGURATION.update( {"verify_ssl_certificate": ARGS.verify_ssl_certificate} ) if ARGS.whois_database: CONFIGURATION.update( {"whois_database": Core.switch("whois_database")} ) if not CONFIGURATION["quiet"]: Core.colorify_logo(home=True) # We compare the versions (upstream and local) and in between. Version().compare() # We call our Core which will handle all case depending of the configuration or # the used command line arguments. Core( domain_or_ip_to_test=ARGS.domain, file_path=ARGS.file, url_to_test=ARGS.url, url_file=ARGS.url_file, link_to_test=ARGS.link, ) except KeyError as e: if not Version(True).is_cloned(): # We are not into the cloned version. # We merge the local with the upstream configuration. Merge(CURRENT_DIRECTORY) else: # We are in the cloned version. # We raise the exception. # # Note: The purpose of this is to avoid having # to search for a mistake while developing. raise e except KeyboardInterrupt: stay_safe()
Provide the command line interface.
def present(name, type, url, access='proxy', user='', password='', database='', basic_auth=False, basic_auth_user='', basic_auth_password='', is_default=False, json_data=None, profile='grafana'): ''' Ensure that a data source is present. name Name of the data source. type Which type of data source it is ('graphite', 'influxdb' etc.). url The URL to the data source API. user Optional - user to authenticate with the data source password Optional - password to authenticate with the data source basic_auth Optional - set to True to use HTTP basic auth to authenticate with the data source. basic_auth_user Optional - HTTP basic auth username. basic_auth_password Optional - HTTP basic auth password. is_default Default: False ''' if isinstance(profile, string_types): profile = __salt__['config.option'](profile) ret = {'name': name, 'result': None, 'comment': None, 'changes': {}} datasource = _get_datasource(profile, name) data = _get_json_data(name, type, url, access, user, password, database, basic_auth, basic_auth_user, basic_auth_password, is_default, json_data) if datasource: requests.put( _get_url(profile, datasource['id']), data, headers=_get_headers(profile), timeout=profile.get('grafana_timeout', 3), ) ret['result'] = True ret['changes'] = _diff(datasource, data) if ret['changes']['new'] or ret['changes']['old']: ret['comment'] = 'Data source {0} updated'.format(name) else: ret['changes'] = {} ret['comment'] = 'Data source {0} already up-to-date'.format(name) else: requests.post( '{0}/api/datasources'.format(profile['grafana_url']), data, headers=_get_headers(profile), timeout=profile.get('grafana_timeout', 3), ) ret['result'] = True ret['comment'] = 'New data source {0} added'.format(name) ret['changes'] = data return ret
Ensure that a data source is present. name Name of the data source. type Which type of data source it is ('graphite', 'influxdb' etc.). url The URL to the data source API. user Optional - user to authenticate with the data source password Optional - password to authenticate with the data source basic_auth Optional - set to True to use HTTP basic auth to authenticate with the data source. basic_auth_user Optional - HTTP basic auth username. basic_auth_password Optional - HTTP basic auth password. is_default Default: False
def mousePressEvent(self, event):
    """Marshals behaviour depending on the location of the mouse click"""
    if event.x() < 50:
        super(PlotMenuBar, self).mousePressEvent(event)
    else:
        # ignore to allow proper functioning of float
        event.ignore()
Marshals behaviour depending on the location of the mouse click
def mkopen(p, *args, **kwargs):
    """
    A wrapper for the open() builtin which makes parent directories if needed.
    """
    dirname = os.path.dirname(p)
    mkdir(dirname)
    return open(p, *args, **kwargs)
A wrapper for the open() builtin which makes parent directories if needed.
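mkdir here is a module helper (not shown); with the standard library alone the same effect is os.makedirs(..., exist_ok=True) before open. A small Python 3 usage sketch in a temporary directory:

import os
import tempfile

base = tempfile.mkdtemp()
path = os.path.join(base, 'a', 'b', 'notes.txt')

# What mkopen(path, 'w') does, spelled out: parents first, then open.
os.makedirs(os.path.dirname(path), exist_ok=True)
with open(path, 'w') as fh:
    fh.write('hello')
print(os.path.exists(path))  # True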
def open(self, value, nt=None, wrap=None, unwrap=None):
    """Mark the PV as opened and provide its initial value.

    This initial value is later updated with post().

    :param value: A Value, or appropriate object (see nt= and wrap= of the constructor).

    Any clients which began connecting while this PV was in the closed state
    will complete connecting.

    Only those fields of the value which are marked as changed will be stored.
    """
    self._wrap = wrap or (nt and nt.wrap) or self._wrap
    self._unwrap = unwrap or (nt and nt.unwrap) or self._unwrap
    _SharedPV.open(self, self._wrap(value))
Mark the PV as opened and provide its initial value. This initial value is later updated with post(). :param value: A Value, or appropriate object (see nt= and wrap= of the constructor). Any clients which began connecting while this PV was in the closed state will complete connecting. Only those fields of the value which are marked as changed will be stored.
def on_origin(self, *args): """Make sure to redraw whenever the origin moves.""" if self.origin is None: Clock.schedule_once(self.on_origin, 0) return self.origin.bind( pos=self._trigger_repoint, size=self._trigger_repoint )
Make sure to redraw whenever the origin moves.
def insert_object_into_db_pk_known(self, obj: Any, table: str,
                                   fieldlist: Sequence[str]) -> None:
    """Inserts object into database table, with PK (first field)
    already known."""
    pkvalue = getattr(obj, fieldlist[0])
    if pkvalue is None:
        raise AssertionError("insert_object_into_db_pk_known called "
                             "without PK")
    valuelist = []
    for f in fieldlist:
        valuelist.append(getattr(obj, f))
    self.db_exec(
        get_sql_insert(table, fieldlist, self.get_delims()),
        *valuelist
    )
Inserts object into database table, with PK (first field) already known.
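get_sql_insert and get_delims are defined elsewhere; presumably the former renders a parameterized INSERT over the field list. A hedged sketch of that shape, assuming '?' placeholders and no identifier quoting (an assumption, not the library's actual output):

def sketch_sql_insert(table, fieldlist):
    # Hypothetical approximation of get_sql_insert.
    placeholders = ', '.join('?' for _ in fieldlist)
    return 'INSERT INTO {} ({}) VALUES ({})'.format(
        table, ', '.join(fieldlist), placeholders)

print(sketch_sql_insert('patient', ['pk', 'name', 'dob']))
# INSERT INTO patient (pk, name, dob) VALUES (?, ?, ?)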
def transform(self):
    """
    Get the (4, 4) homogeneous transformation from the world frame
    to this camera object.

    Returns
    ------------
    transform : (4, 4) float
      Transform from world to camera
    """
    # no scene set
    if self._scene is None:
        # no transform saved locally
        if not hasattr(self, '_transform') or self._transform is None:
            return np.eye(4)
        # transform saved locally
        return self._transform
    # get the transform from the scene
    return self._scene.graph[self.name][0]
Get the (4, 4) homogeneous transformation from the world frame to this camera object. Returns ------------ transform : (4, 4) float Transform from world to camera
async def handle_action(self, action: str, request_id: str, **kwargs): """ run the action. """ try: await self.check_permissions(action, **kwargs) if action not in self.available_actions: raise MethodNotAllowed(method=action) method_name = self.available_actions[action] method = getattr(self, method_name) reply = partial(self.reply, action=action, request_id=request_id) # the @action decorator will wrap non-async action into async ones. response = await method( request_id=request_id, action=action, **kwargs ) if isinstance(response, tuple): data, status = response await reply( data=data, status=status ) except Exception as exc: await self.handle_exception( exc, action=action, request_id=request_id )
run the action.
def wanted_labels(self, labels): """ Specify only WANTED labels to minimize get_labels() requests Args: - labels: <list> of wanted labels. Example: page.wanted_labels(['P18', 'P31']) """ if not isinstance(labels, list): raise ValueError("Input labels must be a list.") self.user_labels = labels
Specify only WANTED labels to minimize get_labels() requests Args: - labels: <list> of wanted labels. Example: page.wanted_labels(['P18', 'P31'])
def encrypt_assertion(self, statement, enc_key, template, key_type='des-192', node_xpath=None, node_id=None): """ Will encrypt an assertion :param statement: A XML document that contains the assertion to encrypt :param enc_key: File name of a file containing the encryption key :param template: A template for the encryption part to be added. :param key_type: The type of session key to use. :return: The encrypted text """ if six.PY2: _str = unicode else: _str = str if isinstance(statement, SamlBase): statement = pre_encrypt_assertion(statement) _, fil = make_temp( _str(statement), decode=False, delete=self._xmlsec_delete_tmpfiles ) _, tmpl = make_temp(_str(template), decode=False) if not node_xpath: node_xpath = ASSERT_XPATH com_list = [ self.xmlsec, '--encrypt', '--pubkey-cert-pem', enc_key, '--session-key', key_type, '--xml-data', fil, '--node-xpath', node_xpath, ] if node_id: com_list.extend(['--node-id', node_id]) try: (_stdout, _stderr, output) = self._run_xmlsec(com_list, [tmpl]) except XmlsecError as e: six.raise_from(EncryptError(com_list), e) return output.decode('utf-8')
Will encrypt an assertion :param statement: A XML document that contains the assertion to encrypt :param enc_key: File name of a file containing the encryption key :param template: A template for the encryption part to be added. :param key_type: The type of session key to use. :return: The encrypted text
def run(self, n_iterations=1, min_n_workers=1, iteration_kwargs=None):
    """
    run n_iterations of SuccessiveHalving

    Parameters
    ----------
    n_iterations: int
        number of iterations to be performed in this run
    min_n_workers: int
        minimum number of workers before starting the run
    iteration_kwargs: dict
        additional keyword arguments passed on to get_next_iteration
    """
    if iteration_kwargs is None:
        iteration_kwargs = {}

    self.wait_for_workers(min_n_workers)

    iteration_kwargs.update({'result_logger': self.result_logger})

    if self.time_ref is None:
        self.time_ref = time.time()
        self.config['time_ref'] = self.time_ref
        self.logger.info('HBMASTER: starting run at %s' % (str(self.time_ref)))

    self.thread_cond.acquire()
    while True:
        self._queue_wait()

        next_run = None
        # find a new run to schedule
        for i in self.active_iterations():
            next_run = self.iterations[i].get_next_run()
            if next_run is not None:
                break

        if next_run is not None:
            self.logger.debug('HBMASTER: schedule new run for iteration %i' % i)
            self._submit_job(*next_run)
            continue
        else:
            if n_iterations > 0:
                # we might be able to start the next iteration
                self.iterations.append(self.get_next_iteration(len(self.iterations), iteration_kwargs))
                n_iterations -= 1
                continue

        # at this point there is no immediate run that can be scheduled,
        # so wait for some job to finish if there are active iterations
        if self.active_iterations():
            self.thread_cond.wait()
        else:
            break

    self.thread_cond.release()

    for i in self.warmstart_iteration:
        i.fix_timestamps(self.time_ref)

    ws_data = [i.data for i in self.warmstart_iteration]

    return Result([copy.deepcopy(i.data) for i in self.iterations] + ws_data, self.config)
run n_iterations of SuccessiveHalving Parameters ---------- n_iterations: int number of iterations to be performed in this run min_n_workers: int minimum number of workers before starting the run iteration_kwargs: dict additional keyword arguments passed on to get_next_iteration
def _date_time_match(cron, **kwargs): ''' Returns true if the minute, hour, etc. params match their counterparts from the dict returned from list_tab(). ''' return all([kwargs.get(x) is None or cron[x] == six.text_type(kwargs[x]) or (six.text_type(kwargs[x]).lower() == 'random' and cron[x] != '*') for x in ('minute', 'hour', 'daymonth', 'month', 'dayweek')])
Returns true if the minute, hour, etc. params match their counterparts from the dict returned from list_tab().
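A standalone illustration of the matching rule, with str standing in for six.text_type and a crontab-style entry dict:

cron = {'minute': '5', 'hour': '*', 'daymonth': '1',
        'month': '*', 'dayweek': '*'}

def matches(cron, **kwargs):
    # Same predicate as _date_time_match, six.text_type replaced by str.
    return all(kwargs.get(x) is None
               or cron[x] == str(kwargs[x])
               or (str(kwargs[x]).lower() == 'random' and cron[x] != '*')
               for x in ('minute', 'hour', 'daymonth', 'month', 'dayweek'))

print(matches(cron, minute=5, daymonth=1))  # True: unspecified fields pass
print(matches(cron, hour='random'))         # False: hour is '*'
print(matches(cron, minute='random'))       # True: minute is '5', not '*'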
def user_auth_link(self, redirect_uri, scope='', state='', avoid_linking=False): """Generates a URL to send the user for OAuth 2.0 :param string redirect_uri: URL to redirect the user to after auth. :param string scope: The scope of the privileges you want the eventual access_token to grant. :param string state: A value that will be returned to you unaltered along with the user's authorization request decision. (The OAuth 2.0 RFC recommends using this to prevent cross-site request forgery.) :param bool avoid_linking: Avoid linking calendar accounts together under one set of credentials. (Optional, default: false). :return: authorization link :rtype: ``string`` """ if not scope: scope = ' '.join(settings.DEFAULT_OAUTH_SCOPE) self.auth.update(redirect_uri=redirect_uri) url = '%s/oauth/authorize' % self.app_base_url params = { 'response_type': 'code', 'client_id': self.auth.client_id, 'redirect_uri': redirect_uri, 'scope': scope, 'state': state, 'avoid_linking': avoid_linking, } urlencoded_params = urlencode(params) return "{url}?{params}".format(url=url, params=urlencoded_params)
Generates a URL to send the user for OAuth 2.0 :param string redirect_uri: URL to redirect the user to after auth. :param string scope: The scope of the privileges you want the eventual access_token to grant. :param string state: A value that will be returned to you unaltered along with the user's authorization request decision. (The OAuth 2.0 RFC recommends using this to prevent cross-site request forgery.) :param bool avoid_linking: Avoid linking calendar accounts together under one set of credentials. (Optional, default: false). :return: authorization link :rtype: ``string``
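The generated link is a plain query string over the authorize endpoint. A Python 3 sketch with placeholder values (the client_id and base URL below are illustrative, not real credentials or endpoints):

from urllib.parse import urlencode

app_base_url = 'https://app.example.com'   # placeholder
params = {
    'response_type': 'code',
    'client_id': 'abc123',                 # placeholder
    'redirect_uri': 'https://example.com/callback',
    'scope': 'read_events',
    'state': 'xyzzy',
    'avoid_linking': False,
}
print('%s/oauth/authorize?%s' % (app_base_url, urlencode(params)))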
def command(sock, dbname, spec, slave_ok, is_mongos, read_preference, codec_options, session, client, check=True, allowable_errors=None, address=None, check_keys=False, listeners=None, max_bson_size=None, read_concern=None, parse_write_concern_error=False, collation=None, compression_ctx=None, use_op_msg=False, unacknowledged=False, user_fields=None): """Execute a command over the socket, or raise socket.error. :Parameters: - `sock`: a raw socket instance - `dbname`: name of the database on which to run the command - `spec`: a command document as an ordered dict type, eg SON. - `slave_ok`: whether to set the SlaveOkay wire protocol bit - `is_mongos`: are we connected to a mongos? - `read_preference`: a read preference - `codec_options`: a CodecOptions instance - `session`: optional ClientSession instance. - `client`: optional MongoClient instance for updating $clusterTime. - `check`: raise OperationFailure if there are errors - `allowable_errors`: errors to ignore if `check` is True - `address`: the (host, port) of `sock` - `check_keys`: if True, check `spec` for invalid keys - `listeners`: An instance of :class:`~pymongo.monitoring.EventListeners` - `max_bson_size`: The maximum encoded bson size for this server - `read_concern`: The read concern for this command. - `parse_write_concern_error`: Whether to parse the ``writeConcernError`` field in the command response. - `collation`: The collation for this command. - `compression_ctx`: optional compression Context. - `use_op_msg`: True if we should use OP_MSG. - `unacknowledged`: True if this is an unacknowledged command. - `user_fields` (optional): Response fields that should be decoded using the TypeDecoders from codec_options, passed to bson._decode_all_selective. """ name = next(iter(spec)) ns = dbname + '.$cmd' flags = 4 if slave_ok else 0 # Publish the original command document, perhaps with lsid and $clusterTime. orig = spec if is_mongos and not use_op_msg: spec = message._maybe_add_read_preference(spec, read_preference) if read_concern and not (session and session._in_transaction): if read_concern.level: spec['readConcern'] = read_concern.document if (session and session.options.causal_consistency and session.operation_time is not None): spec.setdefault( 'readConcern', {})['afterClusterTime'] = session.operation_time if collation is not None: spec['collation'] = collation publish = listeners is not None and listeners.enabled_for_commands if publish: start = datetime.datetime.now() if compression_ctx and name.lower() in _NO_COMPRESSION: compression_ctx = None if use_op_msg: flags = 2 if unacknowledged else 0 request_id, msg, size, max_doc_size = message._op_msg( flags, spec, dbname, read_preference, slave_ok, check_keys, codec_options, ctx=compression_ctx) # If this is an unacknowledged write then make sure the encoded doc(s) # are small enough, otherwise rely on the server to return an error. 
if (unacknowledged and max_bson_size is not None and max_doc_size > max_bson_size): message._raise_document_too_large(name, size, max_bson_size) else: request_id, msg, size = message.query( flags, ns, 0, -1, spec, None, codec_options, check_keys, compression_ctx) if (max_bson_size is not None and size > max_bson_size + message._COMMAND_OVERHEAD): message._raise_document_too_large( name, size, max_bson_size + message._COMMAND_OVERHEAD) if publish: encoding_duration = datetime.datetime.now() - start listeners.publish_command_start(orig, dbname, request_id, address) start = datetime.datetime.now() try: sock.sendall(msg) if use_op_msg and unacknowledged: # Unacknowledged, fake a successful command response. response_doc = {"ok": 1} else: reply = receive_message(sock, request_id) unpacked_docs = reply.unpack_response( codec_options=codec_options, user_fields=user_fields) response_doc = unpacked_docs[0] if client: client._process_response(response_doc, session) if check: helpers._check_command_response( response_doc, None, allowable_errors, parse_write_concern_error=parse_write_concern_error) except Exception as exc: if publish: duration = (datetime.datetime.now() - start) + encoding_duration if isinstance(exc, (NotMasterError, OperationFailure)): failure = exc.details else: failure = message._convert_exception(exc) listeners.publish_command_failure( duration, failure, name, request_id, address) raise if publish: duration = (datetime.datetime.now() - start) + encoding_duration listeners.publish_command_success( duration, response_doc, name, request_id, address) return response_doc
Execute a command over the socket, or raise socket.error. :Parameters: - `sock`: a raw socket instance - `dbname`: name of the database on which to run the command - `spec`: a command document as an ordered dict type, eg SON. - `slave_ok`: whether to set the SlaveOkay wire protocol bit - `is_mongos`: are we connected to a mongos? - `read_preference`: a read preference - `codec_options`: a CodecOptions instance - `session`: optional ClientSession instance. - `client`: optional MongoClient instance for updating $clusterTime. - `check`: raise OperationFailure if there are errors - `allowable_errors`: errors to ignore if `check` is True - `address`: the (host, port) of `sock` - `check_keys`: if True, check `spec` for invalid keys - `listeners`: An instance of :class:`~pymongo.monitoring.EventListeners` - `max_bson_size`: The maximum encoded bson size for this server - `read_concern`: The read concern for this command. - `parse_write_concern_error`: Whether to parse the ``writeConcernError`` field in the command response. - `collation`: The collation for this command. - `compression_ctx`: optional compression Context. - `use_op_msg`: True if we should use OP_MSG. - `unacknowledged`: True if this is an unacknowledged command. - `user_fields` (optional): Response fields that should be decoded using the TypeDecoders from codec_options, passed to bson._decode_all_selective.
def convex_conj(self): """The convex conjugate functional. Convex conjugate distributes over separable sums, so the result is simply the separable sum of the convex conjugates. """ convex_conjs = [func.convex_conj for func in self.functionals] return SeparableSum(*convex_conjs)
The convex conjugate functional. Convex conjugate distributes over separable sums, so the result is simply the separable sum of the convex conjugates.
def remove_ectopy(tachogram_data, tachogram_time):
    """
    -----
    Brief
    -----
    Function for removing ectopic beats.

    -----------
    Description
    -----------
    Ectopic beats are beats that originate in cells that do not correspond to
    the expected pacemaker cells. These beats are identifiable in ECG signals
    by abnormal rhythms. This function removes ectopic beats by defining time
    thresholds that consecutive heartbeats should comply with.

    ----------
    Parameters
    ----------
    tachogram_data : list
        Y Axis of tachogram.
    tachogram_time : list
        X Axis of tachogram.

    Returns
    -------
    out : list, list
        List of tachogram samples. List of instants where each cardiac cycle
        ends.

    Source
    ------
    "Comparison of methods for removal of ectopy in measurement of heart rate
    variability" by N. Lippman, K. M. Stein and B. B. Lerman.
    """
    # If the i RR interval differs from i-1 by more than 20 % then it will be
    # removed from analysis.
    remove_margin = 0.20
    finish_ectopy_remove = False
    signal = list(tachogram_data)
    time = list(tachogram_time)

    # Sample by sample analysis.
    beat = 1
    while finish_ectopy_remove is False:
        max_thresh = signal[beat - 1] + remove_margin * signal[beat - 1]
        min_thresh = signal[beat - 1] - remove_margin * signal[beat - 1]
        if signal[beat] > max_thresh or signal[beat] < min_thresh:
            # To remove the influence of the ectopic beat we need to exclude
            # the RR intervals "before" and "after" the ectopic beat.
            # [NB <RRi> NB <RRi+1> EB <RRi+2> NB <RRi+3> NB...] -->
            # --> [NB <RRi> NB cut NB <RRi+3> NB...]
            signal.pop(beat)
            signal.pop(beat)
            time.pop(beat)
            time.pop(beat)
            # Advance "Pointer".
            beat += 1
        else:
            # Advance "Pointer".
            beat += 1

        # Verification if the cycle should or not end.
        if beat >= len(signal):
            finish_ectopy_remove = True

    return signal, time
----- Brief ----- Function for removing ectopic beats. ----------- Description ----------- Ectopic beats are beats that originate in cells that do not correspond to the expected pacemaker cells. These beats are identifiable in ECG signals by abnormal rhythms. This function removes ectopic beats by defining time thresholds that consecutive heartbeats should comply with. ---------- Parameters ---------- tachogram_data : list Y Axis of tachogram. tachogram_time : list X Axis of tachogram. Returns ------- out : list, list List of tachogram samples. List of instants where each cardiac cycle ends. Source ------ "Comparison of methods for removal of ectopy in measurement of heart rate variability" by N. Lippman, K. M. Stein and B. B. Lerman.
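The acceptance band is plus or minus 20 % of the previous interval, and a flagged interval is removed together with its successor (the two pops at the same index). A small numeric illustration of the threshold arithmetic:

rr_prev = 0.80  # previous RR interval, seconds
margin = 0.20
lo, hi = rr_prev * (1 - margin), rr_prev * (1 + margin)
print(round(lo, 2), round(hi, 2))  # 0.64 0.96 -- a 0.45 s premature beat
                                   # falls outside and is treated as ectopic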
def subtract_bg(samplename, bgname, factor=1, distance=None, disttolerance=2, subname=None, qrange=(), graph_extension='png', graph_dpi=80): """Subtract background from measurements. Inputs: samplename: the name of the sample bgname: the name of the background measurements. Alternatively, it can be a numeric value (float or ErrorValue), which will be subtracted. If None, this constant will be determined by integrating the scattering curve in the range given by qrange. factor: the background curve will be multiplied by this distance: if None, do the subtraction for all sample-to-detector distances. Otherwise give here the value of the sample-to-detector distance. qrange: a tuple (qmin, qmax) disttolerance: the tolerance in which two distances are considered equal. subname: the sample name of the background-corrected curve. The default is samplename + '-' + bgname """ ip = get_ipython() data1d = ip.user_ns['_data1d'] data2d = ip.user_ns['_data2d'] if 'subtractedsamplenames' not in ip.user_ns: ip.user_ns['subtractedsamplenames'] = set() subtractedsamplenames = ip.user_ns['subtractedsamplenames'] if subname is None: if isinstance(bgname, str): subname = samplename + '-' + bgname else: subname = samplename + '-const' if distance is None: dists = data1d[samplename] else: dists = [d for d in data1d[samplename] if abs(d - distance) < disttolerance] for dist in dists: if isinstance(bgname, str): if not disttolerance: if dist not in data1d[bgname]: print( 'Warning: Missing distance %g for background measurement (samplename: %s, background samplename: %s)' % ( dist, samplename, bgname)) continue else: bgdist = dist else: bgdist = sorted([(d, r) for (d, r) in [(d, np.abs(d - dist)) for d in list(data1d[bgname].keys())] if r <= disttolerance], key=lambda x: x[1])[0][0] if subname not in data1d: data1d[subname] = {} if subname not in data2d: data2d[subname] = {} if subname not in ip.user_ns['_headers_sample']: ip.user_ns['_headers_sample'][subname] = {} data1_s = data1d[samplename][dist] data2_s = data2d[samplename][dist] if isinstance(bgname, str): data1_bg = data1d[bgname][bgdist] data2_bg = data2d[bgname][bgdist] if factor is None: factor = data1_s.trim(*qrange).momentum(0) / data1_bg.trim(*qrange).momentum(0) elif bgname is None: data1_bg = data1_s.trim(*qrange).momentum(0) data2_bg = data1_bg else: data1_bg = bgname data2_bg = bgname if factor is None: factor = 1 data1d[subname][dist] = data1_s - factor * data1_bg data2d[subname][dist] = data2_s - factor * data2_bg data1d[subname][dist].save( os.path.join(ip.user_ns['saveto_dir'], subname + '_' + ('%.2f' % dist).replace('.', '_') + '.txt')) ip.user_ns['_headers_sample'][subname][dist] = ip.user_ns['_headers_sample'][samplename][ dist] # ugly hack, I have no better idea. plt.figure() plotsascurve(samplename, dist=dist) if isinstance(bgname, str): plotsascurve(bgname, dist=dist, factor=factor) plotsascurve(subname, dist=dist) plt.savefig(os.path.join(ip.user_ns['auximages_dir'], 'subtractbg_' + samplename + '.' + graph_extension), dpi=graph_dpi) subtractedsamplenames.add(subname)
Subtract background from measurements. Inputs: samplename: the name of the sample bgname: the name of the background measurements. Alternatively, it can be a numeric value (float or ErrorValue), which will be subtracted. If None, this constant will be determined by integrating the scattering curve in the range given by qrange. factor: the background curve will be multiplied by this distance: if None, do the subtraction for all sample-to-detector distances. Otherwise give here the value of the sample-to-detector distance. qrange: a tuple (qmin, qmax) disttolerance: the tolerance in which two distances are considered equal. subname: the sample name of the background-corrected curve. The default is samplename + '-' + bgname
def back_bfs(self, start, end=None): """ Returns a list of nodes in some backward BFS order. Starting from the start node the breadth first search proceeds along incoming edges. """ return [node for node, step in self._iterbfs(start, end, forward=False)]
Returns a list of nodes in some backward BFS order. Starting from the start node the breadth first search proceeds along incoming edges.
def _apply_cell_filters(self, context): """ Applies the field restrictions based on the return value of the context's "has_permission()" method. Stores them on self._unpermitted_fields. Returns: List of unpermitted fields names. """ self.setattrs(_is_unpermitted_fields_set=True) for perm, fields in self.Meta.field_permissions.items(): if not context.has_permission(perm): self._unpermitted_fields.extend(fields) return self._unpermitted_fields
Applies the field restrictions based on the return value of the context's "has_permission()" method. Stores them on self._unpermitted_fields. Returns: List of unpermitted fields names.
def confidenceInterval(self, alpha=0.6827, steps=1.e5, plot=False): """ Compute two-sided confidence interval by taking x-values corresponding to the largest PDF-values first. """ x_dense, y_dense = self.densify() y_dense -= np.max(y_dense) # Numeric stability f = scipy.interpolate.interp1d(x_dense, y_dense, kind='linear') x = np.linspace(0., np.max(x_dense), steps) # ADW: Why does this start at 0, which often outside the input range? # Wouldn't starting at xmin be better: #x = np.linspace(np.min(x_dense), np.max(x_dense), steps) pdf = np.exp(f(x) / 2.) cut = (pdf / np.max(pdf)) > 1.e-10 x = x[cut] pdf = pdf[cut] sorted_pdf_indices = np.argsort(pdf)[::-1] # Indices of PDF in descending value cdf = np.cumsum(pdf[sorted_pdf_indices]) cdf /= cdf[-1] sorted_pdf_index_max = np.argmin((cdf - alpha)**2) x_select = x[sorted_pdf_indices[0: sorted_pdf_index_max]] return np.min(x_select), np.max(x_select)
Compute two-sided confidence interval by taking x-values corresponding to the largest PDF-values first.
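The interval is a highest-density region: grid points are sorted by PDF value, accumulated until the mass reaches alpha, and the min/max of the selected x values bound the interval. A self-contained numpy sketch on a unit Gaussian, where the 68.27 % region should come out near plus/minus one sigma:

import numpy as np

x = np.linspace(-5.0, 5.0, 100001)
pdf = np.exp(-0.5 * x**2)              # unnormalized Gaussian, sigma = 1

order = np.argsort(pdf)[::-1]          # densest grid points first
cdf = np.cumsum(pdf[order])
cdf /= cdf[-1]
keep = order[:np.argmin((cdf - 0.6827) ** 2)]
print(x[keep].min(), x[keep].max())    # approximately -1.0 1.0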
def _ParseDocstring(function):
    """Parses the function's docstring into a dictionary of type checks."""
    if not function.__doc__:
        return {}
    type_check_dict = {}
    for match in param_regexp.finditer(function.__doc__):
        param_str = match.group(1).strip()
        param_splitted = param_str.split(" ")
        if len(param_splitted) >= 2:
            type_str = " ".join(param_splitted[:-1])
            name = param_splitted[-1]
            type_check_dict[name] = type_str
    for match in returns_regexp.finditer(function.__doc__):
        type_check_dict["returns"] = match.group(1)
    for match in type_regexp.finditer(function.__doc__):
        name = match.group(1)
        type_str = match.group(2)
        type_check_dict[name] = type_str
    for match in rtype_regexp.finditer(function.__doc__):
        type_check_dict["returns"] = match.group(1)
    return type_check_dict
Parses the function's docstring into a dictionary of type checks.
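param_regexp, returns_regexp, type_regexp and rtype_regexp are module globals that are not shown; presumably they target Sphinx-style fields. A sketch with hypothetical patterns (an assumption about their shape, not the module's actual regexes) reproducing the ":param <type> <name>:" split:

import re

param_regexp = re.compile(r":param\s+(.+?):")   # hypothetical
rtype_regexp = re.compile(r":rtype:\s*(\S+)")   # hypothetical

doc = """Add two values.

:param int a: first operand
:param int b: second operand
:rtype: int
"""

checks = {}
for m in param_regexp.finditer(doc):
    parts = m.group(1).strip().split(" ")
    if len(parts) >= 2:                 # "<type...> <name>"
        checks[parts[-1]] = " ".join(parts[:-1])
for m in rtype_regexp.finditer(doc):
    checks["returns"] = m.group(1)
print(checks)  # {'a': 'int', 'b': 'int', 'returns': 'int'}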
def get_table(self, dataset, table, project_id=None): """ Retrieve a table if it exists, otherwise return an empty dict. Parameters ---------- dataset : str The dataset that the table is in table : str The name of the table project_id: str, optional The project that the table is in Returns ------- dict Containing the table object if it exists, else empty """ project_id = self._get_project_id(project_id) try: table = self.bigquery.tables().get( projectId=project_id, datasetId=dataset, tableId=table).execute(num_retries=self.num_retries) except HttpError: table = {} return table
Retrieve a table if it exists, otherwise return an empty dict. Parameters ---------- dataset : str The dataset that the table is in table : str The name of the table project_id: str, optional The project that the table is in Returns ------- dict Containing the table object if it exists, else empty
def _compute_examples(self): """ Populates the ``_examples`` instance attribute by computing full examples for each label in ``_raw_examples``. The logic in this method is separate from :meth:`_add_example` because this method requires that every type have ``_raw_examples`` assigned for resolving example references. """ for label in self._raw_examples: self._examples[label] = self._compute_example(label) # Add examples for each void union member. for field in self.all_fields: dt, _ = unwrap_nullable(field.data_type) if is_void_type(dt): self._examples[field.name] = \ Example( field.name, None, OrderedDict([('.tag', field.name)]))
Populates the ``_examples`` instance attribute by computing full examples for each label in ``_raw_examples``. The logic in this method is separate from :meth:`_add_example` because this method requires that every type have ``_raw_examples`` assigned for resolving example references.