Dataset columns: signature (string, 8–3.44k chars), body (string, 0–1.41M chars), docstring (string, 1–122k chars), id (string, 5–17 chars)
@abstractmethod<EOL><INDENT>def set(self, paths: Union[str, Iterable[str]], access_controls: Union[AccessControl, Iterable[AccessControl]]):<DEDENT>
Sets the access controls associated with a given path or collection of paths to those given. :param paths: the paths to set the access controls for :param access_controls: the access controls to set
f14287:c1:m2
@abstractmethod<EOL><INDENT>def revoke(self, paths: Union[str, Iterable[str]], users: Union[str, Iterable[str], User, Iterable[User]]):<DEDENT>
Revokes all access controls for the given users that are associated with the given path or collection of paths. :param paths: the paths to remove access controls on :param users: the users to revoke access controls for. A user may be represented as a `User` object or as a string in the form "name#zone"
f14287:c1:m3
@abstractmethod<EOL><INDENT>def revoke_all(self, paths: Union[str, Iterable[str]]):<DEDENT>
Removes all access controls associated with the given path or collection of paths. :param paths: the paths to remove all access controls on (i.e. they are made accessible to no one)
f14287:c1:m4
@abstractmethod<EOL><INDENT>def add_or_replace(self, paths: Union[str, Iterable[str]],<EOL>access_controls: Union[AccessControl, Iterable[AccessControl]], recursive: bool=False):<DEDENT>
See `AccessControlMapper.add`. :param paths: see `AccessControlMapper.add` :param access_controls: see `AccessControlMapper.add` :param recursive: whether the access control list should be changed recursively for all nested collections
f14287:c2:m0
@abstractmethod<EOL><INDENT>def set(self, paths: Union[str, Iterable[str]], access_controls: Union[AccessControl, Iterable[AccessControl]],<EOL>recursive: bool=False):<DEDENT>
See `AccessControlMapper.set`. :param paths: see `AccessControlMapper.set` :param access_controls: see `AccessControlMapper.set` :param recursive: whether the access control list should be changed recursively for all nested collections
f14287:c2:m1
@abstractmethod<EOL><INDENT>def revoke(self, paths: Union[str, Iterable[str]], users: Union[str, Iterable[str], User, Iterable[User]],<EOL>recursive: bool=False):<DEDENT>
See `AccessControlMapper.revoke`. :param paths: see `AccessControlMapper.revoke` :param users: see `AccessControlMapper.revoke` :param recursive: whether the access control list should be changed recursively for all nested collections
f14287:c2:m2
@abstractmethod<EOL><INDENT>def revoke_all(self, paths: Union[str, Iterable[str]], recursive: bool=False):<DEDENT>
See `AccessControlMapper.revoke_all`. :param paths: see `AccessControlMapper.revoke_all` :param recursive: whether the access control list should be changed recursively for all nested collections
f14287:c2:m3
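The recursive mapper methods above share the non-recursive set/revoke/revoke_all semantics. A minimal in-memory sketch (the class name and dict-based store are assumptions, not the real iRODS-backed implementation) illustrates those semantics:

```python
from typing import Dict, Set, Tuple


def _as_list(value):
    """Normalise a single value or an iterable of values into a list."""
    if isinstance(value, str):
        return [value]
    try:
        return list(value)
    except TypeError:
        return [value]


class InMemoryAccessControlMapper:
    """Toy mapper: maps each path to a set of (user, level) access controls."""

    def __init__(self):
        self._acls: Dict[str, Set[Tuple[str, str]]] = {}

    def set(self, paths, access_controls):
        # Replace the full access control list for each path.
        acls = set(_as_list(access_controls))
        for path in _as_list(paths):
            self._acls[path] = set(acls)

    def revoke(self, paths, users):
        # Remove only the controls belonging to the given users.
        targets = set(_as_list(users))
        for path in _as_list(paths):
            self._acls[path] = {
                (user, level) for user, level in self._acls.get(path, set())
                if user not in targets
            }

    def revoke_all(self, paths):
        # The path becomes accessible to no one.
        for path in _as_list(paths):
            self._acls[path] = set()
```

With this sketch, `set` replaces the whole list, `revoke` filters by user, and `revoke_all` clears it entirely.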
@abstractproperty<EOL><INDENT>def metadata(self) -> IrodsMetadataMapper[EntityType]:<DEDENT>
Property to access a mapper for metadata that can be associated with the iRODS entity that this mapper deals with. :return: mapper for metadata
f14287:c3:m0
@abstractproperty<EOL><INDENT>def access_control(self) -> AccessControlMapper:<DEDENT>
Property to access a mapper for access controls related to the entities that this mapper deals with. :return: the entity access control mapper
f14287:c3:m1
@abstractmethod<EOL><INDENT>def get_by_metadata(self, metadata_search_criteria: Union[SearchCriterion, Iterable[SearchCriterion]],<EOL>load_metadata: bool=True, zone: str=None) -> Sequence[EntityType]:<DEDENT>
Gets entities from iRODS that have metadata that matches the given search criteria. :param metadata_search_criteria: the metadata search criteria :param load_metadata: whether metadata associated to the entities should be loaded :param zone: limit query to specific zone in iRODS :return: the matched entities in iRODS
f14287:c3:m2
@abstractmethod<EOL><INDENT>def get_by_path(self, paths: Union[str, Iterable[str]], load_metadata: bool=True)-> Union[EntityType, Sequence[EntityType]]:<DEDENT>
Gets the entity or entities with the given path or paths from iRODS. If one or more of the entities does not exist, a `FileNotFound` exception will be raised. :param paths: the paths of the entities to get from iRODS :param load_metadata: whether metadata associated to the entities should be loaded :return: the single entity retrieved from iRODS if a single path is given, else a sequence of retrieved entities
f14287:c3:m3
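The `get_by_path` contract described above (a single entity for a single path, a sequence for an iterable of paths, and an error for missing paths) can be sketched with a plain dict as a stand-in store; the helper and store here are hypothetical:

```python
from typing import Iterable, List, Union


def get_by_path(store: dict, paths: Union[str, Iterable[str]]) -> Union[str, List[str]]:
    """Return one entity for a str path, a list of entities for an iterable.

    Raises FileNotFoundError for any missing path, mirroring the
    `FileNotFound` behaviour described in the docstring.
    """
    single = isinstance(paths, str)
    path_list = [paths] if single else list(paths)
    entities = []
    for path in path_list:
        if path not in store:
            raise FileNotFoundError(path)
        entities.append(store[path])
    # Single path in -> single entity out; iterable in -> sequence out.
    return entities[0] if single else entities
```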
@abstractmethod<EOL><INDENT>def get_all_in_collection(self, collection_paths: Union[str, Iterable[str]], load_metadata: bool = True)-> Sequence[EntityType]:<DEDENT>
Gets entities contained within the given iRODS collections. If one or more of the collection_paths does not exist, a `FileNotFound` exception will be raised. :param collection_paths: the collection(s) to get the entities from :param load_metadata: whether metadata associated to the entities should be loaded :return: the entities loaded from iRODS
f14287:c3:m4
@abstractproperty<EOL><INDENT>def access_control(self) -> CollectionAccessControlMapper:<DEDENT>
Mapper for access controls related to the collections that this mapper deals with. :return: the collection access control mapper
f14287:c5:m0
@abstractmethod<EOL><INDENT>def _get_with_prepared_specific_query(self, specific_query: PreparedSpecificQuery, zone: str=None)-> Sequence[CustomObjectType]:<DEDENT>
Gets an object from iRODS using a specific query. :param specific_query: the specific query to use :param zone: limit query to specific zone in iRODS :return: Python model of the object returned from iRODS using the specific query
f14287:c6:m0
@abstractmethod<EOL><INDENT>def get_all(self) -> Sequence[SpecificQuery]:<DEDENT>
Gets all of the specific queries installed on the iRODS server. :return: all of the installed queries
f14287:c7:m0
def get_history_by_flight_number(self, flight_number, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = FLT_BASE.format(flight_number, str(self.AUTH_TOKEN), page, limit)<EOL>return self._fr24.get_data(url)<EOL>
Fetch the history of a flight by its number. This method can be used to get the history of a flight route by the number. It checks the user authentication and returns the data accordingly. Args: flight_number (str): The flight number, e.g. AI101 page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A list of dicts with the data; one dict for each row of data from flightradar24 Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_history_by_flight_number('AI101') f.get_history_by_flight_number('AI101',page=1,limit=10)
f14296:c0:m1
def get_history_by_tail_number(self, tail_number, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = REG_BASE.format(tail_number, str(self.AUTH_TOKEN), page, limit)<EOL>return self._fr24.get_data(url, True)<EOL>
Fetch the history of a particular aircraft by its tail number. This method can be used to get the history of a particular aircraft by its tail number. It checks the user authentication and returns the data accordingly. Args: tail_number (str): The tail number, e.g. VT-ANL page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A list of dicts with the data; one dict for each row of data from flightradar24 Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_history_by_tail_number('VT-ANL') f.get_history_by_tail_number('VT-ANL',page=1,limit=10)
f14296:c0:m2
def get_countries(self):
return self._fr24.get_countries_data()<EOL>
Returns a list of all countries. This can be used to get the country name/code as it is known on flightradar24
f14296:c0:m3
def get_airports(self, country):
url = AIRPORT_BASE.format(country.replace("<STR_LIT:U+0020>", "<STR_LIT:->"))<EOL>return self._fr24.get_airports_data(url)<EOL>
Returns a list of all the airports. For a given country this returns a list of dicts, one for each airport, with information like the IATA code of the airport etc Args: country (str): The country for which the airports will be fetched Example:: from pyflightdata import FlightData f=FlightData() f.get_airports('India')
f14296:c0:m4
def get_info_by_tail_number(self, tail_number, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = REG_BASE.format(tail_number, str(self.AUTH_TOKEN), page, limit)<EOL>return self._fr24.get_aircraft_data(url)<EOL>
Fetch the details of a particular aircraft by its tail number. This method can be used to get the details of a particular aircraft by its tail number. Details include the serial number, age etc along with links to the images of the aircraft. It checks the user authentication and returns the data accordingly. Args: tail_number (str): The tail number, e.g. VT-ANL page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A list of dicts with the data; one dict for each row of data from flightradar24 Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_info_by_tail_number('VT-ANL') f.get_info_by_tail_number('VT-ANL',page=1,limit=10)
f14296:c0:m5
def get_airlines(self):
url = AIRLINE_BASE.format('<STR_LIT>')<EOL>return self._fr24.get_airlines_data(url)<EOL>
Returns a list of all the airlines in the world that are known on flightradar24. The return value is a list of dicts, one for each airline, with details like the airline code on flightradar24, call sign, codes etc. The airline code can be used to get the fleet and the flights from flightradar24
f14296:c0:m6
def get_fleet(self, airline_key):
url = AIRLINE_FLEET_BASE.format(airline_key)<EOL>return self._fr24.get_airline_fleet_data(url, self.AUTH_TOKEN != '<STR_LIT>')<EOL>
Get the fleet for a particular airline. Given an airline code from the get_airlines() method output, this method returns the fleet for the airline. Args: airline_key (str): The code for the airline on flightradar24 Returns: A list of dicts, one for each aircraft in the airline's fleet Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_fleet('ai-aic')
f14296:c0:m7
def get_flights(self, search_key):
<EOL>url = AIRLINE_FLT_BASE.format(search_key, <NUM_LIT:100>)<EOL>return self._fr24.get_airline_flight_data(url)<EOL>
Get the flights for a particular airline. Given a full or partial flight number string, this method returns the first 100 flights matching that string. Please note this method was different in earlier versions. The older versions took an airline code and returned all scheduled flights for that airline Args: search_key (str): Full or partial flight number for any airline e.g. MI47 to get all SilkAir flights starting with MI47 Returns: A list of dicts, one for each scheduled flight in the airlines network Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_flights('MI47')
f14296:c0:m8
def get_flights_from_to(self, origin, destination):
<EOL>url = AIRLINE_FLT_BASE_POINTS.format(origin, destination)<EOL>return self._fr24.get_airline_flight_data(url, by_airports=True)<EOL>
Get the flights for a particular origin and destination. Given an origin and destination this method returns the upcoming scheduled flights between these two points. The data returned has the airline, airport and schedule information - this is subject to change in future. Args: origin (str): The origin airport code destination (str): The destination airport code Returns: A list of dicts, one for each scheduled flight between the two points. Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_flights_from_to('SIN','HYD')
f14296:c0:m9
def get_airport_weather(self, iata, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = AIRPORT_DATA_BASE.format(iata, str(self.AUTH_TOKEN), page, limit)<EOL>weather = self._fr24.get_airport_weather(url)<EOL>mi = weather['<STR_LIT>']['<STR_LIT>']['<STR_LIT>']<EOL>if (mi is not None) and (mi != "<STR_LIT:None>"):<EOL><INDENT>mi = float(mi)<EOL>km = mi * <NUM_LIT><EOL>weather['<STR_LIT>']['<STR_LIT>']['<STR_LIT>'] = km<EOL><DEDENT>return weather<EOL>
Retrieve the weather at an airport. Given the IATA code of an airport, this method returns the weather information. Args: iata (str): The IATA code for an airport, e.g. HYD page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A list of dicts with the data; one dict for each row of data from flightradar24 Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_airport_weather('HYD') f.get_airport_weather('HYD',page=1,limit=10)
f14296:c0:m10
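The body of get_airport_weather converts a visibility figure from miles to kilometres before returning the data; the multiplier is masked in this extract, so the sketch below assumes the standard conversion factor:

```python
def visibility_miles_to_km(mi):
    """Convert a visibility value in miles to kilometres.

    Mirrors the guard in get_airport_weather: None or the string "None"
    passes through untouched. The 1.60934 factor is the standard
    miles-to-kilometres conversion, assumed here because the literal
    is masked in the extract.
    """
    if mi is None or mi == "None":
        return None
    return float(mi) * 1.60934
```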
def get_airport_metars(self, iata, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = AIRPORT_DATA_BASE.format(iata, str(self.AUTH_TOKEN), page, limit)<EOL>w = self._fr24.get_airport_weather(url)<EOL>return w['<STR_LIT>']<EOL>
Retrieve the metar data at the current time. Given the IATA code of an airport, this method returns the metar information. Args: iata (str): The IATA code for an airport, e.g. HYD page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: The metar data for the airport Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_airport_metars('HYD')
f14296:c0:m11
def get_airport_metars_hist(self, iata):
url = AIRPORT_BASE.format(iata) + "<STR_LIT>"<EOL>return self._fr24.get_airport_metars_hist(url)<EOL>
Retrieve the metar data for the past 72 hours. The data will not be parsed to a readable format. Given the IATA code of an airport, this method returns the metar information for the last 72 hours. Args: iata (str): The IATA code for an airport, e.g. HYD Returns: The metar data for the airport Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_airport_metars_hist('HYD')
f14296:c0:m12
def get_airport_stats(self, iata, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = AIRPORT_DATA_BASE.format(iata, str(self.AUTH_TOKEN), page, limit)<EOL>return self._fr24.get_airport_stats(url)<EOL>
Retrieve the performance statistics at an airport. Given the IATA code of an airport, this method returns the performance statistics for the airport. Args: iata (str): The IATA code for an airport, e.g. HYD page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A list of dicts with the data; one dict for each row of data from flightradar24 Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_airport_stats('HYD') f.get_airport_stats('HYD',page=1,limit=10)
f14296:c0:m13
def get_airport_details(self, iata, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = AIRPORT_DATA_BASE.format(iata, str(self.AUTH_TOKEN), page, limit)<EOL>details = self._fr24.get_airport_details(url)<EOL>weather = self._fr24.get_airport_weather(url)<EOL>details['<STR_LIT>']['<STR_LIT>'] = weather['<STR_LIT>']<EOL>return details<EOL>
Retrieve the details of an airport. Given the IATA code of an airport, this method returns detailed information like latitude/longitude, full name, URL, codes etc. Args: iata (str): The IATA code for an airport, e.g. HYD page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A list of dicts with the data; one dict for each row of data from flightradar24 Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_airport_details('HYD') f.get_airport_details('HYD',page=1,limit=10)
f14296:c0:m14
def get_airport_reviews(self, iata, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = AIRPORT_DATA_BASE.format(iata, str(self.AUTH_TOKEN), page, limit)<EOL>return self._fr24.get_airport_reviews(url)<EOL>
Retrieve the passenger reviews of an airport. Given the IATA code of an airport, this method returns the passenger reviews of an airport. Args: iata (str): The IATA code for an airport, e.g. HYD page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A list of dicts with the data; one dict for each row of data from flightradar24 Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_airport_reviews('HYD') f.get_airport_reviews('HYD',page=1,limit=10)
f14296:c0:m15
def get_airport_arrivals(self, iata, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = AIRPORT_DATA_BASE.format(iata, str(self.AUTH_TOKEN), page, limit)<EOL>return self._fr24.get_airport_arrivals(url)<EOL>
Retrieve the arrivals at an airport. Given the IATA code of an airport, this method returns the arrivals information. Args: iata (str): The IATA code for an airport, e.g. HYD page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A list of dicts with the data; one dict for each row of data from flightradar24 Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_airport_arrivals('HYD') f.get_airport_arrivals('HYD',page=1,limit=10)
f14296:c0:m16
def get_airport_departures(self, iata, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = AIRPORT_DATA_BASE.format(iata, str(self.AUTH_TOKEN), page, limit)<EOL>return self._fr24.get_airport_departures(url)<EOL>
Retrieve the departures at an airport. Given the IATA code of an airport, this method returns the departures information. Args: iata (str): The IATA code for an airport, e.g. HYD page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A list of dicts with the data; one dict for each row of data from flightradar24 Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_airport_departures('HYD') f.get_airport_departures('HYD',page=1,limit=10)
f14296:c0:m17
def get_airport_onground(self, iata, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = AIRPORT_DATA_BASE.format(iata, str(self.AUTH_TOKEN), page, limit)<EOL>return self._fr24.get_airport_onground(url)<EOL>
Retrieve the aircraft on ground at an airport. Given the IATA code of an airport, this method returns the aircraft on the ground at the airport. Args: iata (str): The IATA code for an airport, e.g. HYD page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A list of dicts with the data; one dict for each row of data from flightradar24 Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_airport_onground('HYD') f.get_airport_onground('HYD',page=1,limit=10)
f14296:c0:m18
def get_images_by_tail_number(self, tail_number, page=<NUM_LIT:1>, limit=<NUM_LIT:100>):
url = REG_BASE.format(tail_number, str(self.AUTH_TOKEN), page, limit)<EOL>return self._fr24.get_aircraft_image_data(url)<EOL>
Fetch the images of a particular aircraft by its tail number. This method can be used to get the images of the aircraft. The images are in 3 sizes and you can use whichever suits your need. Args: tail_number (str): The tail number, e.g. VT-ANL page (int): Optional page number; for users who are on a plan with flightradar24 they can pass in higher page numbers to get more data limit (int): Optional limit on number of records returned Returns: A dict with the images of the aircraft in various sizes Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_images_by_tail_number('VT-ANL') f.get_images_by_tail_number('VT-ANL',page=1,limit=10)
f14296:c0:m19
def login(self, email, password):
response = FlightData.session.post(<EOL>url=LOGIN_URL,<EOL>data={<EOL>'<STR_LIT:email>': email,<EOL>'<STR_LIT:password>': password,<EOL>'<STR_LIT>': '<STR_LIT:true>',<EOL>'<STR_LIT:type>': '<STR_LIT>'<EOL>},<EOL>headers={<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>'<EOL>}<EOL>)<EOL>response = self._fr24.json_loads_byteified(<EOL>response.content) if response.status_code == <NUM_LIT:200> else None<EOL>if response:<EOL><INDENT>token = response['<STR_LIT>']['<STR_LIT>']<EOL>self.AUTH_TOKEN = token<EOL><DEDENT>
Login to the flightradar24 session. The API currently uses flightradar24 as the primary data source. The site provides different levels of data based on user plans. For users who have signed up for a plan, this method allows logging in with the credentials from flightradar24. The API obtains a token that is passed on all requests; this obtains the data as per the plan limits. Args: email (str): The email ID which is used to login to flightradar24 password (str): The password for the user ID Example:: from pyflightdata import FlightData f=FlightData() f.login(myemail,mypassword)
f14296:c0:m20
def logout(self):
self.AUTH_TOKEN = '<STR_LIT>'<EOL>
Logout from the flightradar24 session. This will reset the user token that was retrieved earlier; the API will return data visible to unauthenticated users
f14296:c0:m21
def is_authenticated(self):
return not self.AUTH_TOKEN == '<STR_LIT>'<EOL>
Simple method to check if the user is authenticated to flightradar24
f14296:c0:m22
def decode_metar(self, metar):
try:<EOL><INDENT>from metar import Metar<EOL><DEDENT>except:<EOL><INDENT>return "<STR_LIT>"<EOL><DEDENT>m = Metar.Metar(metar)<EOL>return m.string()<EOL>
Simple method that decodes a given metar string. Args: metar (str): The metar data Returns: The metar data in readable format Example:: from pyflightdata import FlightData f=FlightData() f.decode_metar('WSSS 181030Z 04009KT 010V080 9999 FEW018TCU BKN300 29/22 Q1007 NOSIG')
f14296:c0:m23
def _check_synonyms(self, term):
for s in term.synonyms:<EOL><INDENT>self.assertIsInstance(s, pronto.synonym.Synonym)<EOL><DEDENT>
Check that each Term synonym is stored as a Synonym object
f14300:c0:m2
def _check_utf8(self, m, t, i):
for term in six.itervalues(t):<EOL><INDENT>self.assertIsInstance(term.id, six.text_type)<EOL>self.assertIsInstance(term.name, six.text_type)<EOL>self.assertIsInstance(term.desc, six.text_type)<EOL><DEDENT>for importpath in i:<EOL><INDENT>self.assertIsInstance(importpath, six.text_type)<EOL><DEDENT>for k,v in six.iteritems(m):<EOL><INDENT>self.assertIsInstance(k, six.text_type)<EOL>for x in v:<EOL><INDENT>self.assertIsInstance(x, six.text_type)<EOL><DEDENT><DEDENT>
Check that inner properties are stored as UTF-8 text
f14300:c0:m3
def _check_len(self, t, exp_len):
self.assertEqual(len(t), exp_len)<EOL>
Check the terms dict length matches the expected length
f14300:c0:m4
def __init__(self, name, desc, scope=None):
self.name = name<EOL>self.desc = desc<EOL>if scope in {'<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>', None}:<EOL><INDENT>self.scope = scope<EOL><DEDENT>elif scope in {six.b('<STR_LIT>'), six.b('<STR_LIT>'), six.b('<STR_LIT>'), six.b('<STR_LIT>')}:<EOL><INDENT>self.scope = scope.decode('<STR_LIT:utf-8>')<EOL><DEDENT>else:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>self._register()<EOL>
Create a new synonym type. Arguments: name (str): the name of the synonym type. desc (str): the description of the synonym type. scope (str, optional): the scope modifier.
f14311:c0:m0
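The scope handling in this constructor (accept str, bytes, or None; decode bytes; reject anything outside the allowed set) can be sketched as a standalone helper. The concrete scope strings are masked in this extract, so EXACT/BROAD/NARROW/RELATED are assumed from the Synonym docstring:

```python
# Assumed from the Synonym docstring; the literals are masked in the extract.
VALID_SCOPES = {"EXACT", "BROAD", "NARROW", "RELATED"}


def normalize_scope(scope):
    """Decode a bytes scope and validate it, mirroring SynonymType.__init__.

    Returns the scope as text (or None); raises ValueError for anything
    outside the allowed set.
    """
    if isinstance(scope, bytes):
        scope = scope.decode("utf-8")
    if scope is not None and scope not in VALID_SCOPES:
        raise ValueError(
            "scope must be None or one of {!r}".format(sorted(VALID_SCOPES))
        )
    return scope
```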
@property<EOL><INDENT>def obo(self):<DEDENT>
return '<STR_LIT:U+0020>'.join(['<STR_LIT>', self.name,<EOL>'<STR_LIT>'.format(self.desc),<EOL>self.scope or '<STR_LIT>']).strip()<EOL>
str: the synonym type serialized in obo format.
f14311:c0:m3
def __init__(self, desc, scope=None, syn_type=None, xref=None):
if isinstance(desc, six.binary_type):<EOL><INDENT>self.desc = desc.decode('<STR_LIT:utf-8>')<EOL><DEDENT>elif isinstance(desc, six.text_type):<EOL><INDENT>self.desc = desc<EOL><DEDENT>else:<EOL><INDENT>raise ValueError("<STR_LIT>".format(type(desc).__name__))<EOL><DEDENT>if isinstance(scope, six.binary_type):<EOL><INDENT>self.scope = scope.decode('<STR_LIT:utf-8>')<EOL><DEDENT>elif isinstance(scope, six.text_type):<EOL><INDENT>self.scope = scope<EOL><DEDENT>elif scope is None:<EOL><INDENT>self.scope = "<STR_LIT>"<EOL><DEDENT>if syn_type is not None:<EOL><INDENT>try:<EOL><INDENT>self.syn_type = SynonymType._instances[syn_type]<EOL>self.scope = self.syn_type.scope or self.scope or '<STR_LIT>'<EOL><DEDENT>except KeyError as e:<EOL><INDENT>raise ValueError("<STR_LIT>".format(syn_type))<EOL><DEDENT><DEDENT>else:<EOL><INDENT>self.syn_type = None<EOL><DEDENT>if self.scope not in {'<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>', None}:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>self.xref = xref or []<EOL>
Create a new synonym. Arguments: desc (str): a description of the synonym. scope (str, optional): the scope of the synonym (either EXACT, BROAD, NARROW or RELATED). syn_type (SynonymType, optional): the type of synonym if relying on a synonym type defined in the *Typedef* section of the ontology. xref (list, optional): a list of cross-references for the synonym.
f14311:c1:m0
@property<EOL><INDENT>def obo(self):<DEDENT>
return '<STR_LIT>'.format(<EOL>self.desc,<EOL>'<STR_LIT:U+0020>'.join([self.scope, self.syn_type.name])if self.syn_type else self.scope,<EOL>'<STR_LIT:U+002CU+0020>'.join(self.xref)<EOL>)<EOL>
str: the synonym serialized in obo format.
f14311:c1:m2
def __init__(self, id, name='<STR_LIT>', desc='<STR_LIT>', relations=None, synonyms=None, other=None):
if not isinstance(id, six.text_type):<EOL><INDENT>id = id.decode('<STR_LIT:utf-8>')<EOL><DEDENT>if isinstance(desc, six.binary_type):<EOL><INDENT>desc = desc.decode('<STR_LIT:utf-8>')<EOL><DEDENT>if not isinstance(desc, Description):<EOL><INDENT>desc = Description(desc)<EOL><DEDENT>if not isinstance(name, six.text_type):<EOL><INDENT>name = name.decode('<STR_LIT:utf-8>')<EOL><DEDENT>self.id = id<EOL>self.name = name<EOL>self.desc = desc<EOL>self.relations = relations or {}<EOL>self.other = other or {}<EOL>self.synonyms = synonyms or set()<EOL>self._rchildren = {}<EOL>self._rparents = {}<EOL>self._children = None<EOL>self._parents = None<EOL>
Create a new Term. Arguments: id (str): the Term id (e.g. "MS:1000031") name (str): the name of the Term in human language desc (str): a description of the Term relations (dict, optional): a dictionary containing the other terms the Term is in a relationship with. other (dict, optional): other information about the term synonyms (set, optional): a list containing :obj:`pronto.synonym.Synonym` objects relating to the term. Example: >>> new_term = Term('TR:001', 'new term', 'a new term') >>> linked_term = Term('TR:002', 'other new', 'another term', ... { Relationship('is_a'): 'TR:001'})
f14312:c0:m0
@property<EOL><INDENT>def parents(self):<DEDENT>
if self._parents is None:<EOL><INDENT>bottomups = tuple(Relationship.bottomup())<EOL>self._parents = TermList()<EOL>self._parents.extend(<EOL>[ other<EOL>for rship,others in six.iteritems(self.relations)<EOL>for other in others<EOL>if rship in bottomups<EOL>]<EOL>)<EOL><DEDENT>return self._parents<EOL>
~TermList: The direct parents of the `Term`.
f14312:c0:m6
@property<EOL><INDENT>def children(self):<DEDENT>
if self._children is None:<EOL><INDENT>topdowns = tuple(Relationship.topdown())<EOL>self._children = TermList()<EOL>self._children.extend(<EOL>[ other<EOL>for rship,others in six.iteritems(self.relations)<EOL>for other in others<EOL>if rship in topdowns<EOL>]<EOL>)<EOL><DEDENT>return self._children<EOL>
~TermList: The direct children of the `Term`.
f14312:c0:m7
@property<EOL><INDENT>@output_str<EOL>def obo(self):<DEDENT>
def add_tags(stanza_list, tags):<EOL><INDENT>for tag in tags:<EOL><INDENT>if tag in self.other:<EOL><INDENT>if isinstance(self.other[tag], list):<EOL><INDENT>for attribute in self.other[tag]:<EOL><INDENT>stanza_list.append("<STR_LIT>".format(tag, attribute))<EOL><DEDENT><DEDENT>else:<EOL><INDENT>stanza_list.append("<STR_LIT>".format(tag, self.other[tag]))<EOL><DEDENT><DEDENT><DEDENT><DEDENT>stanza_list = ["<STR_LIT>"]<EOL>stanza_list.append("<STR_LIT>".format(self.id))<EOL>if self.name is not None:<EOL><INDENT>stanza_list.append("<STR_LIT>".format(self.name))<EOL><DEDENT>else:<EOL><INDENT>stanza_list.append("<STR_LIT>")<EOL><DEDENT>add_tags(stanza_list, ['<STR_LIT>', '<STR_LIT>'])<EOL>if self.desc:<EOL><INDENT>stanza_list.append(self.desc.obo)<EOL><DEDENT>add_tags(stanza_list, ['<STR_LIT>', '<STR_LIT>'])<EOL>for synonym in sorted(self.synonyms, key=str):<EOL><INDENT>stanza_list.append(synonym.obo)<EOL><DEDENT>add_tags(stanza_list, ['<STR_LIT>'])<EOL>if Relationship('<STR_LIT>') in self.relations:<EOL><INDENT>for companion in self.relations[Relationship('<STR_LIT>')]:<EOL><INDENT>stanza_list.append("<STR_LIT>".format(companion.id, companion.name))<EOL><DEDENT><DEDENT>add_tags(stanza_list, ['<STR_LIT>', '<STR_LIT>', '<STR_LIT>'])<EOL>for relation in self.relations:<EOL><INDENT>if relation.direction=="<STR_LIT>" and relation is not Relationship('<STR_LIT>'):<EOL><INDENT>stanza_list.extend(<EOL>"<STR_LIT>".format(<EOL>relation.obo_name, companion.id, companion.name<EOL>) for companion in self.relations[relation]<EOL>)<EOL><DEDENT><DEDENT>add_tags(stanza_list, ['<STR_LIT>', '<STR_LIT>', '<STR_LIT>',<EOL>'<STR_LIT>', '<STR_LIT>', '<STR_LIT>'])<EOL>return "<STR_LIT:\n>".join(stanza_list)<EOL>
str: the `Term` serialized in an Obo ``[Term]`` stanza. Note: The following guide was used: ftp://ftp.geneontology.org/pub/go/www/GO.format.obo-1_4.shtml
f14312:c0:m8
@property<EOL><INDENT>def __deref__(self):<DEDENT>
return {<EOL>'<STR_LIT:id>': self.id,<EOL>'<STR_LIT:name>': self.name,<EOL>'<STR_LIT>': self.other,<EOL>'<STR_LIT>': self.desc,<EOL>'<STR_LIT>': {k.obo_name:v.id for k,v in six.iteritems(self.relations)}<EOL>}<EOL>
dict: the `Term` as a `dict` without circular references.
f14312:c0:m9
def _empty_cache(self):
self._children, self._parents = None, None<EOL>self._rchildren, self._rparents = {}, {}<EOL>
Empty the cache of the Term's memoized functions.
f14312:c0:m12
def rchildren(self, level=-<NUM_LIT:1>, intermediate=True):
try:<EOL><INDENT>return self._rchildren[(level, intermediate)]<EOL><DEDENT>except KeyError:<EOL><INDENT>rchildren = []<EOL>if self.children and level:<EOL><INDENT>if intermediate or level==<NUM_LIT:1>:<EOL><INDENT>rchildren.extend(self.children)<EOL><DEDENT>for child in self.children:<EOL><INDENT>rchildren.extend(child.rchildren(level=level-<NUM_LIT:1>,<EOL>intermediate=intermediate))<EOL><DEDENT><DEDENT>rchildren = TermList(unique_everseen(rchildren))<EOL>self._rchildren[(level, intermediate)] = rchildren<EOL>return rchildren<EOL><DEDENT>
Create a recursive list of children. Parameters: level (int): The depth level to fetch children down to (default is -1, i.e. unlimited depth) intermediate (bool): Also include the intermediate children (default is True) Returns: :obj:`pronto.TermList`: The recursive children of the Term following the parameters
f14312:c0:m13
def rparents(self, level=-<NUM_LIT:1>, intermediate=True):
try:<EOL><INDENT>return self._rparents[(level, intermediate)]<EOL><DEDENT>except KeyError:<EOL><INDENT>rparents = []<EOL>if self.parents and level:<EOL><INDENT>if intermediate or level==<NUM_LIT:1>:<EOL><INDENT>rparents.extend(self.parents)<EOL><DEDENT>for parent in self.parents:<EOL><INDENT>rparents.extend(parent.rparents(level=level-<NUM_LIT:1>,<EOL>intermediate=intermediate))<EOL><DEDENT><DEDENT>rparents = TermList(unique_everseen(rparents))<EOL>self._rparents[(level, intermediate)] = rparents<EOL>return rparents<EOL><DEDENT>
Create a recursive list of parents. Note that the ``intermediate`` parameter can be used to include every parent in the returned list, not only the most nested ones. Parameters: level (int): The depth level to continue fetching parents from (default is -1, meaning unlimited depth) intermediate (bool): Also include the intermediate parents (default is True) Returns: :obj:`pronto.TermList`: The recursive parents of the Term, following the given parameters
f14312:c0:m14
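The `(level, intermediate)` memoization used by `rchildren` and `rparents` can be sketched with a minimal stand-in class. The `Node` name and structure below are illustrative, not part of pronto, and deduplication via `unique_everseen` is omitted for brevity:

```python
class Node(object):
    """Minimal stand-in illustrating the (level, intermediate) cache."""

    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self._rchildren = {}  # cache keyed on (level, intermediate)

    def rchildren(self, level=-1, intermediate=True):
        try:
            return self._rchildren[(level, intermediate)]
        except KeyError:
            result = []
            if self.children and level:
                # keep this level's children when intermediates are wanted,
                # or when this is the last level to be explored
                if intermediate or level == 1:
                    result.extend(self.children)
                for child in self.children:
                    result.extend(child.rchildren(level=level - 1,
                                                  intermediate=intermediate))
            self._rchildren[(level, intermediate)] = result
            return result

leaf = Node('leaf')
mid = Node('mid', [leaf])
root = Node('root', [mid])
```

A second call with the same arguments returns the exact cached list object, which is why `_empty_cache` must be called after manual edits to the hierarchy.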
def __init__(self, elements=None):
super(TermList, self).__init__()<EOL>self._contents = set()<EOL>try:<EOL><INDENT>for t in elements or []:<EOL><INDENT>super(TermList, self).append(t)<EOL>self._contents.add(t.id)<EOL><DEDENT><DEDENT>except AttributeError:<EOL><INDENT>raise TypeError('<STR_LIT>')<EOL><DEDENT>
Create a new `TermList`. Arguments: elements (collections.Iterable, optional): an Iterable that yields `Term` objects. Raises: TypeError: when the given ``elements`` are not instances of `Term`.
f14312:c1:m0
@property<EOL><INDENT>def children(self):<DEDENT>
return TermList(unique_everseen(<EOL>y for x in self for y in x.children<EOL>))<EOL>
~TermList: the children of all the terms in the list.
f14312:c1:m5
@property<EOL><INDENT>def parents(self):<DEDENT>
return TermList(unique_everseen(<EOL>y for x in self for y in x.parents<EOL>))<EOL>
~TermList: the parents of all the terms in the list.
f14312:c1:m6
@property<EOL><INDENT>def id(self):<DEDENT>
return [x.id for x in self]<EOL>
list: a list of id corresponding to each term of the list.
f14312:c1:m7
@property<EOL><INDENT>def name(self):<DEDENT>
return [x.name for x in self]<EOL>
list: the name of all the terms in the list.
f14312:c1:m8
@property<EOL><INDENT>def desc(self):<DEDENT>
return [x.desc for x in self]<EOL>
list: the description of all the terms in the list.
f14312:c1:m9
@property<EOL><INDENT>def other(self):<DEDENT>
return [x.other for x in self]<EOL>
list: the "other" property of all the terms in the list.
f14312:c1:m10
@property<EOL><INDENT>def obo(self):<DEDENT>
return [x.obo for x in self]<EOL>
list: all the terms in the term list serialized in obo.
f14312:c1:m11
def __contains__(self, term):
try:<EOL><INDENT>_id = term.id<EOL><DEDENT>except AttributeError:<EOL><INDENT>_id = term<EOL><DEDENT>return _id in self._contents<EOL>
Check if the TermList contains a term. This method allows checking for the presence of a Term in a TermList based on either a Term object or a term accession number. Example: >>> from pronto import * >>> nmr = Ontology('tests/resources/nmrCV.owl') >>> 'NMR:1000122' in nmr['NMR:1000031'].children True >>> nmr['NMR:1000122'] in nmr['NMR:1000031'].children True
f14312:c1:m14
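The constant-time membership check above works because the list keeps a side set of accession ids in sync with its contents. A minimal sketch of that idea (the `IdList`/`Thing` names are illustrative, and the type checking of the real `TermList` is replaced by a permissive `getattr`):

```python
class IdList(list):
    """A list of objects carrying an `.id`, supporting `in` by object or by id."""

    def __init__(self, elements=None):
        super(IdList, self).__init__()
        self._contents = set()
        for element in elements or []:
            self.append(element)

    def append(self, element):
        super(IdList, self).append(element)
        self._contents.add(element.id)

    def __contains__(self, item):
        # accept either an object carrying an `.id` or a raw accession string
        return getattr(item, 'id', item) in self._contents

class Thing(object):
    def __init__(self, id):
        self.id = id

things = IdList([Thing('A:1'), Thing('A:2')])
```

Both `'A:1' in things` and `Thing('A:1') in things` resolve through the set, avoiding a linear scan of the list.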
def __init__(self, handle=None, imports=True, import_depth=-<NUM_LIT:1>, timeout=<NUM_LIT:2>, parser=None):
self.meta = {}<EOL>self.terms = {}<EOL>self.imports = ()<EOL>self._parsed_by = None<EOL>if handle is None:<EOL><INDENT>self.path = None<EOL><DEDENT>elif hasattr(handle, '<STR_LIT>'):<EOL><INDENT>self.path = getattr(handle, '<STR_LIT:name>', None)or getattr(handle, '<STR_LIT:url>', None)or getattr(handle, '<STR_LIT>', lambda: None)()<EOL>self.parse(handle, parser)<EOL><DEDENT>elif isinstance(handle, six.string_types):<EOL><INDENT>self.path = handle<EOL>with self._get_handle(handle, timeout) as handle:<EOL><INDENT>self.parse(handle, parser)<EOL><DEDENT><DEDENT>else:<EOL><INDENT>actual = type(handle).__name__<EOL>raise TypeError("<STR_LIT>"<EOL>"<STR_LIT>".format(actual))<EOL><DEDENT>if handle is not None and self._parsed_by is None:<EOL><INDENT>raise ValueError("<STR_LIT>".format(handle))<EOL><DEDENT>self.adopt()<EOL>self.resolve_imports(imports, import_depth, parser)<EOL>self.reference()<EOL>
Create an `Ontology` instance from a file handle or a path. Arguments: handle (io.IOBase or str): the location of the file (either a path on the local filesystem, or an FTP or HTTP URL), a readable file handle containing an ontology, or `None` to create a new ontology from scratch. imports (bool, optional): if `True` (the default), embed the ontology imports into the returned instance. import_depth (int, optional): The depth up to which the imports should be resolved. Setting this to 0 is equivalent to setting ``imports`` to `False`. Leave as default (-1) to handle all the imports. timeout (int, optional): The timeout in seconds for network operations. parser (~pronto.parser.BaseParser, optional): A parser instance to use. Leave to `None` to autodetect.
f14315:c0:m0
def __contains__(self, item):
if isinstance(item, six.string_types):<EOL><INDENT>return item in self.terms<EOL><DEDENT>elif isinstance(item, Term):<EOL><INDENT>return item.id in self.terms<EOL><DEDENT>else:<EOL><INDENT>raise TypeError("<STR_LIT>"<EOL>"<STR_LIT>".format(type(item)))<EOL><DEDENT>
Check if the ontology contains a term. It is possible to check if an Ontology contains a Term using an id or a Term instance. Raises: TypeError: if the argument (or left operand) is neither a string nor a Term. Example: >>> 'CL:0002404' in cl True >>> from pronto import Term >>> Term('TST:001', 'tst') in cl False
f14315:c0:m2
def __iter__(self):
return six.itervalues(self.terms)<EOL>
Return an iterator over the Terms of the Ontology. For convenience of implementation, the returned instance is actually a generator that yields each term of the ontology, sorted in the definition order in the ontology file. Example: >>> for k in uo: ... if 'basepair' in k.name: ... print(k) <UO:0000328: kilobasepair> <UO:0000329: megabasepair> <UO:0000330: gigabasepair>
f14315:c0:m3
def __getitem__(self, item):
return self.terms[item]<EOL>
Get a term in the `Ontology`. This method is overloaded to allow accessing any Term of the `Ontology` using the Python dictionary syntax. Example: >>> cl['CL:0002380'] <CL:0002380: oospore> >>> cl['CL:0002380'].relations {Relationship('is_a'): [<CL:0000605: fungal asexual spore>]}
f14315:c0:m4
def __len__(self):
return len(self.terms)<EOL>
Return the number of terms in the Ontology.
f14315:c0:m5
def parse(self, stream, parser=None):
force, parsers = self._get_parsers(parser)<EOL>try:<EOL><INDENT>stream.seek(<NUM_LIT:0>)<EOL>lookup = stream.read(<NUM_LIT>)<EOL>stream.seek(<NUM_LIT:0>)<EOL><DEDENT>except (io.UnsupportedOperation, AttributeError):<EOL><INDENT>lookup = None<EOL><DEDENT>for p in parsers:<EOL><INDENT>if p.hook(path=self.path, force=force, lookup=lookup):<EOL><INDENT>self.meta, self.terms, self.imports, self.typedefs = p.parse(stream)<EOL>self._parsed_by = p.__name__<EOL>break<EOL><DEDENT><DEDENT>
Parse the given file using available `BaseParser` instances. Raises: TypeError: when the parser argument is not a string or None. ValueError: when the parser argument is a string that does not name a `BaseParser`.
f14315:c0:m8
def _get_parsers(self, name):
parserlist = BaseParser.__subclasses__()<EOL>forced = name is None<EOL>if isinstance(name, (six.text_type, six.binary_type)):<EOL><INDENT>parserlist = [p for p in parserlist if p.__name__ == name]<EOL>if not parserlist:<EOL><INDENT>raise ValueError("<STR_LIT>".format(name))<EOL><DEDENT><DEDENT>elif name is not None:<EOL><INDENT>raise TypeError("<STR_LIT>".format(<EOL>types="<STR_LIT>".join([six.text_type.__name__, six.binary_type.__name__]),<EOL>actual=type(name).__name__,<EOL>))<EOL><DEDENT>return not forced, parserlist<EOL>
Return the appropriate parser asked by the user. Todo: Change `Ontology._get_parsers` behaviour to look for parsers through a setuptools entrypoint instead of mere subclasses.
f14315:c0:m9
def adopt(self):
valid_relationships = set(Relationship._instances.keys())<EOL>relationships = [<EOL>(parent, relation.complement(), term.id)<EOL>for term in six.itervalues(self.terms)<EOL>for relation in term.relations<EOL>for parent in term.relations[relation]<EOL>if relation.complementary<EOL>and relation.complementary in valid_relationships<EOL>]<EOL>relationships.sort(key=operator.itemgetter(<NUM_LIT:2>))<EOL>for parent, rel, child in relationships:<EOL><INDENT>if rel is None:<EOL><INDENT>break<EOL><DEDENT>try:<EOL><INDENT>parent = parent.id<EOL><DEDENT>except AttributeError:<EOL><INDENT>pass<EOL><DEDENT>if parent in self.terms:<EOL><INDENT>try:<EOL><INDENT>if child not in self.terms[parent].relations[rel]:<EOL><INDENT>self.terms[parent].relations[rel].append(child)<EOL><DEDENT><DEDENT>except KeyError:<EOL><INDENT>self[parent].relations[rel] = [child]<EOL><DEDENT><DEDENT><DEDENT>del relationships<EOL>
Make terms aware of their children. This is done automatically when using the `~Ontology.merge` and `~Ontology.include` methods as well as the `~Ontology.__init__` method, but it should be called in case of manual editing of the parents or children of a `Term`.
f14315:c0:m10
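The inversion `adopt` performs, turning each complementary relation found on a child (e.g. `part_of`) into the reciprocal entry on the parent (`has_part`), can be sketched independently of pronto's classes. The `COMPLEMENT` table and the dict-of-dicts shape are illustrative assumptions, not the library's actual data model:

```python
# illustrative subset of complementary relation names
COMPLEMENT = {'part_of': 'has_part', 'is_a': 'can_be'}

def adopt(relations):
    """Given {child_id: {relname: [parent_ids]}}, add the reciprocal entries."""
    for child, rels in list(relations.items()):
        for relname, parents in list(rels.items()):
            reciprocal = COMPLEMENT.get(relname)
            if reciprocal is None:
                continue  # relation has no known complement
            for parent in parents:
                entries = relations.setdefault(parent, {}).setdefault(reciprocal, [])
                if child not in entries:  # avoid duplicate edges
                    entries.append(child)
    return relations

rels = {'ONT:002': {'part_of': ['ONT:001']}}
adopt(rels)
```

After the call, `ONT:001` knows about its `has_part` child even though only the child declared the relation, which is exactly what makes `Term.children` work.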
def reference(self):
for termkey, termval in six.iteritems(self.terms):<EOL><INDENT>termval.relations.update(<EOL>(relkey, TermList(<EOL>(self.terms.get(x) or Term(x, '<STR_LIT>', '<STR_LIT>')<EOL>if not isinstance(x, Term) else x) for x in relval<EOL>)) for relkey, relval in six.iteritems(termval.relations)<EOL>)<EOL><DEDENT>
Make relations point to ontology terms instead of term ids. This is done automatically when using the :obj:`merge` and :obj:`include` methods as well as the :obj:`__init__` method, but it should be called in case of manual changes of the relationships of a Term.
f14315:c0:m11
def resolve_imports(self, imports, import_depth, parser=None):
if imports and import_depth:<EOL><INDENT>for i in list(self.imports):<EOL><INDENT>try:<EOL><INDENT>if os.path.exists(i) or i.startswith(('<STR_LIT:http>', '<STR_LIT>')):<EOL><INDENT>self.merge(Ontology(i, import_depth=import_depth-<NUM_LIT:1>, parser=parser))<EOL><DEDENT>else: <EOL><INDENT>self.merge(Ontology( os.path.join(os.path.dirname(self.path), i),<EOL>import_depth=import_depth-<NUM_LIT:1>, parser=parser))<EOL><DEDENT><DEDENT>except (IOError, OSError, URLError, HTTPError, _etree.ParseError) as e:<EOL><INDENT>warnings.warn("<STR_LIT>"<EOL>"<STR_LIT:{}>".format(type(e).__name__, i),<EOL>ProntoWarning)<EOL><DEDENT><DEDENT><DEDENT>
Import required ontologies.
f14315:c0:m12
def include(self, *terms):
ref_needed = False<EOL>for term in terms:<EOL><INDENT>if isinstance(term, TermList):<EOL><INDENT>ref_needed = ref_needed or self._include_term_list(term)<EOL><DEDENT>elif isinstance(term, Term):<EOL><INDENT>ref_needed = ref_needed or self._include_term(term)<EOL><DEDENT>else:<EOL><INDENT>raise TypeError('<STR_LIT>')<EOL><DEDENT><DEDENT>self.adopt()<EOL>self.reference()<EOL>
Add new terms to the current ontology. Raises: TypeError: when an argument is neither a TermList nor a Term. Note: This will also recursively include terms found in the term's relations dictionary, but it is considered bad practice to do so. If you want to create your own ontology, you should only add an ID (such as 'ONT:001') to your terms' relations, and let the Ontology link terms with each other. Examples: Create a new ontology from scratch >>> from pronto import Term, Relationship >>> t1 = Term('ONT:001','my 1st term', ... 'this is my first term') >>> t2 = Term('ONT:002', 'my 2nd term', ... 'this is my second term', ... {Relationship('part_of'): ['ONT:001']}) >>> ont = Ontology() >>> ont.include(t1, t2) >>> >>> 'ONT:002' in ont True >>> ont['ONT:001'].children [<ONT:002: my 2nd term>]
f14315:c0:m13
def merge(self, other):
if not isinstance(other, Ontology):<EOL><INDENT>raise TypeError("<STR_LIT>"<EOL>"<STR_LIT>".format(type(other)))<EOL><DEDENT>self.terms.update(other.terms)<EOL>self._empty_cache()<EOL>self.adopt()<EOL>self.reference()<EOL>
Merge another ontology into the current one. Raises: TypeError: When argument is not an Ontology object. Example: >>> from pronto import Ontology >>> nmr = Ontology('tests/resources/nmrCV.owl', False) >>> po = Ontology('tests/resources/po.obo.gz', False) >>> 'NMR:1000271' in nmr True >>> 'NMR:1000271' in po False >>> po.merge(nmr) >>> 'NMR:1000271' in po True
f14315:c0:m14
def _include_term_list(self, termlist):
ref_needed = False<EOL>for term in termlist:<EOL><INDENT>ref_needed = ref_needed or self._include_term(term)<EOL><DEDENT>return ref_needed<EOL>
Add terms from a TermList to the ontology.
f14315:c0:m16
def _include_term(self, term):
ref_needed = False<EOL>if term.relations:<EOL><INDENT>for k,v in six.iteritems(term.relations):<EOL><INDENT>for i,t in enumerate(v):<EOL><INDENT>try:<EOL><INDENT>if t.id not in self:<EOL><INDENT>self._include_term(t)<EOL><DEDENT>v[i] = t.id<EOL><DEDENT>except AttributeError:<EOL><INDENT>pass<EOL><DEDENT>ref_needed = True<EOL><DEDENT><DEDENT><DEDENT>self.terms[term.id] = term<EOL>return ref_needed<EOL>
Add a single term to the current ontology. It is needed to dereference any term in the term's relationships and then to build the references again, to make sure the other terms referenced in the term's relations are the ones contained in the ontology (so that changes to one term in the ontology are applied to every other term related to it).
f14315:c0:m17
def _empty_cache(self, termlist=None):
if termlist is None:<EOL><INDENT>for term in six.itervalues(self.terms):<EOL><INDENT>term._empty_cache()<EOL><DEDENT><DEDENT>else:<EOL><INDENT>for term in termlist:<EOL><INDENT>try:<EOL><INDENT>self.terms[term.id]._empty_cache()<EOL><DEDENT>except AttributeError:<EOL><INDENT>self.terms[term]._empty_cache()<EOL><DEDENT><DEDENT><DEDENT>
Empty the cache associated with each `Term` instance. This method is called when merging Ontologies or including new terms in the Ontology, to make sure the cache of each term is cleaned and to avoid returning stale memoized values (such as the TermLists returned by Term.rchildren(), which are memoized for performance reasons).
f14315:c0:m18
@output_str<EOL><INDENT>def _obo_meta(self):<DEDENT>
metatags = (<EOL>"<STR_LIT>", "<STR_LIT>", "<STR_LIT:date>", "<STR_LIT>",<EOL>"<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>",<EOL>"<STR_LIT>", "<STR_LIT>", "<STR_LIT>",<EOL>"<STR_LIT>", "<STR_LIT>",<EOL>"<STR_LIT>", "<STR_LIT>", "<STR_LIT>"<EOL>)<EOL>meta = self.meta.copy()<EOL>meta['<STR_LIT>'] = ['<STR_LIT>'.format(__version__)]<EOL>meta['<STR_LIT:date>'] = [datetime.datetime.now().strftime('<STR_LIT>')]<EOL>obo_meta = "<STR_LIT:\n>".join(<EOL>[ <EOL>x.obo if hasattr(x, '<STR_LIT>')else "<STR_LIT>".format(k,x)<EOL>for k in metatags[:-<NUM_LIT:1>]<EOL>for x in meta.get(k, ())<EOL>] + [ <EOL>"<STR_LIT>".format(k, x)<EOL>for k,v in sorted(six.iteritems(meta), key=operator.itemgetter(<NUM_LIT:0>))<EOL>for x in v<EOL>if k not in metatags<EOL>] + ( ["<STR_LIT>".format(x) for x in meta["<STR_LIT>"]]<EOL>if "<STR_LIT>" in meta<EOL>else ["<STR_LIT>".format(meta["<STR_LIT>"][<NUM_LIT:0>].lower())]<EOL>if "<STR_LIT>" in meta<EOL>else [])<EOL>)<EOL>return obo_meta<EOL>
Generate the obo metadata header and updates metadata. When called, this method will create appropriate values for the ``auto-generated-by`` and ``date`` fields. Note: Generated following specs of the unofficial format guide: ftp://ftp.geneontology.org/pub/go/www/GO.format.obo-1_4.shtml
f14315:c0:m19
@property<EOL><INDENT>def json(self):<DEDENT>
return json.dumps(self.terms, indent=<NUM_LIT:4>, sort_keys=True,<EOL>default=operator.attrgetter("<STR_LIT>"))<EOL>
str: the ontology serialized in json format.
f14315:c0:m20
@property<EOL><INDENT>def obo(self):<DEDENT>
meta = self._obo_meta()<EOL>meta = [meta] if meta else []<EOL>newline = "<STR_LIT>" if six.PY3 else "<STR_LIT>".encode('<STR_LIT:utf-8>')<EOL>try: <EOL><INDENT>return newline.join( meta + [<EOL>r.obo for r in self.typedefs<EOL>] + [<EOL>t.obo for t in self<EOL>if t.id.startswith(self.meta['<STR_LIT>'][<NUM_LIT:0>])<EOL>])<EOL><DEDENT>except KeyError:<EOL><INDENT>return newline.join( meta + [<EOL>r.obo for r in self.typedefs<EOL>] + [<EOL>t.obo for t in self<EOL>])<EOL><DEDENT>
str: the ontology serialized in obo format.
f14315:c0:m21
def __init__(self, obo_name, symmetry=None, transitivity=None,<EOL>reflexivity=None, complementary=None, prefix=None,<EOL>direction=None, comment=None, aliases=None):
if obo_name not in self._instances:<EOL><INDENT>if not isinstance(obo_name, six.text_type):<EOL><INDENT>obo_name = obo_name.decode('<STR_LIT:utf-8>')<EOL><DEDENT>if complementary is not None and not isinstance(complementary, six.text_type):<EOL><INDENT>complementary = complementary.decode('<STR_LIT:utf-8>')<EOL><DEDENT>if prefix is not None and not isinstance(prefix, six.text_type):<EOL><INDENT>prefix = prefix.decode('<STR_LIT:utf-8>')<EOL><DEDENT>if direction is not None and not isinstance(direction, six.text_type):<EOL><INDENT>direction = direction.decode('<STR_LIT:utf-8>')<EOL><DEDENT>if comment is not None and not isinstance(comment, six.text_type):<EOL><INDENT>comment = comment.decode('<STR_LIT:utf-8>')<EOL><DEDENT>self.obo_name = obo_name<EOL>self.symmetry = symmetry<EOL>self.transitivity = transitivity<EOL>self.reflexivity = reflexivity<EOL>self.complementary = complementary or '<STR_LIT>'<EOL>self.prefix = prefix or '<STR_LIT>'<EOL>self.direction = direction or '<STR_LIT>'<EOL>self.comment = comment or '<STR_LIT>'<EOL>if aliases is not None:<EOL><INDENT>self.aliases = [alias.decode('<STR_LIT:utf-8>') if not isinstance(alias, six.text_type) else alias<EOL>for alias in aliases]<EOL><DEDENT>else:<EOL><INDENT>self.aliases = []<EOL><DEDENT>self._instances[obo_name] = self<EOL>for alias in self.aliases:<EOL><INDENT>self._instances[alias] = self<EOL><DEDENT><DEDENT>
Instantiate a new relationship. Arguments: obo_name (str): the name of the relationship as it appears in obo files (such as is_a, has_part, etc.) symmetry (bool or None): the symmetry of the relationship. transitivity (bool or None): the transitivity of the relationship. reflexivity (bool or None): the reflexivity of the relationship. complementary (string or None): if any, the obo_name of the complementary relationship. direction (string, optional): if any, the direction of the relationship (can be 'topdown', 'bottomup', 'horizontal'). A relationship whose direction is 'topdown' is treated as defining children when accessing `Term.children`. comment (string, optional): comments about the relationship. aliases (list, optional): a list of names that are synonyms of the obo name of this relationship. Note: For symmetry, transitivity and reflexivity, the allowed values are the following: * `True` for reflexive, transitive, symmetric * `False` for areflexive, atransitive, asymmetric * `None` for non-reflexive, non-transitive, non-symmetric
f14316:c0:m0
def complement(self):
if self.complementary:<EOL><INDENT>try:<EOL><INDENT>return self._instances[self.complementary]<EOL><DEDENT>except KeyError:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT><DEDENT>else:<EOL><INDENT>return None<EOL><DEDENT>
Return the complementary relationship of self. Raises: ValueError: if the relationship has a complementary which was not defined. Returns: complementary (Relationship): the complementary relationship. Example: >>> from pronto.relationship import Relationship >>> print(Relationship('has_part').complement()) Relationship('part_of') >>> print(Relationship('has_units').complement()) None
f14316:c0:m1
@output_str<EOL><INDENT>def __repr__(self):<DEDENT>
return "<STR_LIT>".format(self.obo_name)<EOL>
Return a string representation of the relationship.
f14316:c0:m2
def __new__(cls, obo_name, *args, **kwargs):
if obo_name in cls._instances:<EOL><INDENT>return cls._instances[obo_name]<EOL><DEDENT>else:<EOL><INDENT>return super(Relationship, cls).__new__(cls)<EOL><DEDENT>
Create a relationship or return an already existing one. This allows the following: >>> Relationship('has_part').direction u'topdown' The Python syntax is overloaded: what looks like an object initialization in fact retrieves an existing object with all its properties already set. The Relationship class behaves like a factory of its own objects! Todo: * Add a warning for unknown relationships (the goal being to instantiate every known ontology relationship and even allow instantiation of file-defined relationships).
f14316:c0:m3
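The factory behaviour described above, where `Relationship('is_a')` always yields the same object, follows the interned-instance pattern: `__new__` consults a class-level registry before creating anything, and `__init__` guards against re-initialising an existing instance. A simplified sketch (the `Interned` class is illustrative, not pronto's actual class):

```python
class Interned(object):
    """Return the existing instance for a given name, or create a new one."""

    _instances = {}

    def __new__(cls, name, **kwargs):
        # short-circuit creation if an instance with this name already exists
        if name in cls._instances:
            return cls._instances[name]
        return super(Interned, cls).__new__(cls)

    def __init__(self, name, direction=None):
        # only initialise (and register) genuinely new instances
        if name not in self._instances:
            self.name = name
            self.direction = direction
            self._instances[name] = self

is_a = Interned('is_a', direction='bottomup')
again = Interned('is_a')
```

Note that Python still calls `__init__` on the recycled instance, which is why the same `name not in self._instances` guard appears in both methods.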
@classmethod<EOL><INDENT>def topdown(cls):<DEDENT>
return tuple(unique_everseen(r for r in cls._instances.values() if r.direction=='<STR_LIT>'))<EOL>
Get all topdown `Relationship` instances. Returns: :obj:`generator` Example: >>> from pronto import Relationship >>> for r in Relationship.topdown(): ... print(r) Relationship('can_be') Relationship('has_part')
f14316:c0:m4
@classmethod<EOL><INDENT>def bottomup(cls):<DEDENT>
return tuple(unique_everseen(r for r in cls._instances.values() if r.direction=='<STR_LIT>'))<EOL>
Get all bottomup `Relationship` instances. Example: >>> from pronto import Relationship >>> for r in Relationship.bottomup(): ... print(r) Relationship('is_a') Relationship('part_of') Relationship('develops_from')
f14316:c0:m5
@property<EOL><INDENT>@output_str<EOL>def obo(self):<DEDENT>
lines = [<EOL>"<STR_LIT>",<EOL>"<STR_LIT>".format(self.obo_name),<EOL>"<STR_LIT>".format(self.obo_name)<EOL>]<EOL>if self.complementary is not None:<EOL><INDENT>lines.append("<STR_LIT>".format(self.complementary))<EOL><DEDENT>if self.symmetry is not None:<EOL><INDENT>lines.append("<STR_LIT>".format(self.symmetry).lower())<EOL><DEDENT>if self.transitivity is not None:<EOL><INDENT>lines.append("<STR_LIT>".format(self.transitivity).lower())<EOL><DEDENT>if self.reflexivity is not None:<EOL><INDENT>lines.append("<STR_LIT>".format(self.reflexivity).lower())<EOL><DEDENT>if self.comment:<EOL><INDENT>lines.append("<STR_LIT>".format(self.comment))<EOL><DEDENT>return "<STR_LIT:\n>".join(lines)<EOL>
str: the `Relationship` serialized in an ``[Typedef]`` stanza. Note: The following guide was used: ftp://ftp.geneontology.org/pub/go/www/GO.format.obo-1_4.shtml
f14316:c0:m8
def unique_everseen(iterable):
<EOL>seen = set()<EOL>seen_add = seen.add<EOL>for element in six.moves.filterfalse(seen.__contains__, iterable):<EOL><INDENT>seen_add(element)<EOL>yield element<EOL><DEDENT>
List unique elements, preserving order. Remember all elements ever seen.
f14317:m0
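`unique_everseen` follows the classic itertools recipe; for reference, here is a Python 3 version without the `six` compatibility shim:

```python
from itertools import filterfalse

def unique_everseen(iterable):
    """Yield unique elements, preserving order; remember all elements ever seen."""
    seen = set()
    seen_add = seen.add  # bind the method once to avoid repeated lookups
    for element in filterfalse(seen.__contains__, iterable):
        seen_add(element)
        yield element
```

This is the helper that deduplicates the `TermList` objects returned by `rchildren`, `rparents` and the `Relationship.topdown`/`bottomup` classmethods.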
def output_str(f):
if six.PY2:<EOL><INDENT>def new_f(*args, **kwargs):<EOL><INDENT>return f(*args, **kwargs).encode("<STR_LIT:utf-8>")<EOL><DEDENT><DEDENT>else:<EOL><INDENT>new_f = f<EOL><DEDENT>return new_f<EOL>
Create a function that always returns instances of `str`. This decorator is useful when the returned string is to be used with libraries that do not support `unicode` in Python 2, but work fine with Python 3 `str` objects.
f14317:m1
def nowarnings(func):
@functools.wraps(func)<EOL>def new_func(*args, **kwargs):<EOL><INDENT>with warnings.catch_warnings():<EOL><INDENT>warnings.simplefilter('<STR_LIT:ignore>')<EOL>return func(*args, **kwargs)<EOL><DEDENT><DEDENT>return new_func<EOL>
Create a function wrapped in a context that ignores warnings.
f14317:m2
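`nowarnings` simply runs the wrapped call inside a `warnings.catch_warnings` context with an `'ignore'` filter, so warnings raised during the call never escape. A self-contained sketch of the same decorator (the `noisy` function is an illustrative example, not part of the library):

```python
import functools
import warnings

def nowarnings(func):
    """Wrap `func` so that any warning raised during the call is silenced."""
    @functools.wraps(func)
    def new_func(*args, **kwargs):
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')
            return func(*args, **kwargs)
    return new_func

@nowarnings
def noisy():
    warnings.warn('this warning is suppressed')
    return 42
```

`functools.wraps` preserves the wrapped function's name and docstring, which keeps introspection and doctest discovery working.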
@classmethod<EOL><INDENT>@abc.abstractmethod<EOL>def hook(cls, force=False, path=None, lookup=None):<DEDENT>
Test whether this parser should be used. The current behaviour relies on the filename, the file extension, and a look-ahead at a small buffer of the file object.
f14318:c0:m0
@classmethod<EOL><INDENT>@abc.abstractmethod<EOL>def parse(self, stream):<DEDENT>
Parse the ontology file. Parameters: stream (io.StringIO): A stream of ontological data. Returns: (dict, dict, list): a tuple of the metadata, the terms, and the imports.
f14318:c0:m1
@classmethod<EOL><INDENT>def hook(cls, force=False, path=None, lookup=None):<DEDENT>
if force:<EOL><INDENT>return True<EOL><DEDENT>elif path is not None and path.endswith(cls.extensions):<EOL><INDENT>return True<EOL><DEDENT>elif lookup is not None and lookup.startswith(b'<STR_LIT>'):<EOL><INDENT>return True<EOL><DEDENT>return False<EOL>
Test whether this parser should be used. The current behaviour relies on the filename, the file extension, and a look-ahead at a small buffer of the file object.
f14319:c0:m0
@staticmethod<EOL><INDENT>def _check_section(line, section):<DEDENT>
if "<STR_LIT>" in line:<EOL><INDENT>section = OboSection.term<EOL><DEDENT>elif "<STR_LIT>" in line:<EOL><INDENT>section = OboSection.typedef<EOL><DEDENT>return section<EOL>
Update the section being parsed. The parser starts in the `OboSection.meta` section but once it reaches the first ``[Typedef]``, it will enter the `OboSection.typedef` section, and/or when it reaches the first ``[Term]``, it will enter the `OboSection.term` section.
f14319:c0:m3
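`_check_section` advances the parser state when it meets a new stanza header of the OBO flat-file format. The idea in isolation, assuming the literal headers are the standard ``[Term]`` and ``[Typedef]`` stanzas (the string literals are elided in the body above, so this is an informed assumption):

```python
import enum

class OboSection(enum.Enum):
    """Sections of an OBO document, in the order the parser meets them."""
    meta = 1
    typedef = 2
    term = 3

def check_section(line, section):
    """Advance the parser section when a new stanza header is met."""
    if '[Term]' in line:
        section = OboSection.term
    elif '[Typedef]' in line:
        section = OboSection.typedef
    return section

section = OboSection.meta
for line in ['format-version: 1.4', '[Term]', 'id: TST:001']:
    section = check_section(line, section)
```

The parser starts in `meta` and never returns to it: once the first stanza header is seen, every following line belongs to a typedef or a term.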
@classmethod<EOL><INDENT>def _parse_metadata(cls, line, meta, parse_remarks=True):<DEDENT>
key, value = line.split('<STR_LIT::>', <NUM_LIT:1>)<EOL>key, value = key.strip(), value.strip()<EOL>if parse_remarks and "<STR_LIT>" in key: <EOL><INDENT>if <NUM_LIT:0><value.find('<STR_LIT>')<<NUM_LIT:20>: <EOL><INDENT>try: <EOL><INDENT>cls._parse_metadata(value, meta, parse_remarks) <EOL><DEDENT>except ValueError: <EOL><INDENT>pass <EOL><DEDENT><DEDENT><DEDENT>else:<EOL><INDENT>meta[key].append(value)<EOL>try:<EOL><INDENT>syn_type_def = []<EOL>for m in meta['<STR_LIT>']:<EOL><INDENT>if not isinstance(m, SynonymType):<EOL><INDENT>x = SynonymType.from_obo(m)<EOL>syn_type_def.append(x)<EOL><DEDENT>else:<EOL><INDENT>syn_type_def.append(m)<EOL><DEDENT><DEDENT><DEDENT>except KeyError:<EOL><INDENT>pass<EOL><DEDENT>else:<EOL><INDENT>meta['<STR_LIT>'] = syn_type_def<EOL><DEDENT><DEDENT>
Parse a metadata line. The metadata is organized as a ``key: value`` statement which is split into the proper key and the proper value. Arguments: line (str): the line containing the metadata parse_remarks (bool, optional): set to `False` to avoid parsing the remarks. Note: If the line follows the schema ``remark: key: value``, the function will attempt to extract the proper key/value instead of leaving everything inside the remark key. This may cause issues when the line is identified as such even though the remark is simply a sentence containing a colon, such as ``remark: 090506 "Attribute"`` in Term deleted and new entries: Scan Type [...]" (found in imagingMS.obo). To prevent the splitting from happening, the text on the left of the colon must be less than *20 chars long*.
f14319:c0:m4
@staticmethod<EOL><INDENT>def _parse_typedef(line, _rawtypedef):<DEDENT>
if "<STR_LIT>" in line:<EOL><INDENT>_rawtypedef.append(collections.defaultdict(list))<EOL><DEDENT>else:<EOL><INDENT>key, value = line.split('<STR_LIT::>', <NUM_LIT:1>)<EOL>_rawtypedef[-<NUM_LIT:1>][key.strip()].append(value.strip())<EOL><DEDENT>
Parse a typedef line. The typedef is organized as a succession of ``key: value`` pairs that are extracted into the same dictionary until a new header is encountered. Arguments: line (str): the line containing a typedef statement
f14319:c0:m5
@staticmethod<EOL><INDENT>def _parse_term(_rawterms):<DEDENT>
line = yield<EOL>_rawterms.append(collections.defaultdict(list))<EOL>while True:<EOL><INDENT>line = yield<EOL>if "<STR_LIT>" in line:<EOL><INDENT>_rawterms.append(collections.defaultdict(list))<EOL><DEDENT>else:<EOL><INDENT>key, value = line.split('<STR_LIT::>', <NUM_LIT:1>)<EOL>_rawterms[-<NUM_LIT:1>][key.strip()].append(value.strip())<EOL><DEDENT><DEDENT>
Parse a term line. The term is organized as a succession of ``key: value`` pairs that are extracted into the same dictionary until a new header is encountered. Arguments: line (str): the line containing a term statement
f14319:c0:m6
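`_parse_term` is written as a coroutine: it is primed once with `next`, then fed lines via `send`, accumulating `key: value` pairs into the most recent raw-term dict and opening a fresh dict at every stanza header. A standalone sketch of the pattern, assuming the elided header literal is ``[Term]``:

```python
import collections

def parse_term(rawterms):
    """Coroutine collecting `key: value` lines into one dict per [Term] stanza."""
    line = yield  # priming yield; the first sent line is the stanza header
    rawterms.append(collections.defaultdict(list))
    while True:
        line = yield
        if '[Term]' in line:
            # a new stanza starts: open a fresh accumulator
            rawterms.append(collections.defaultdict(list))
        else:
            key, value = line.split(':', 1)
            rawterms[-1][key.strip()].append(value.strip())

rawterms = []
parser = parse_term(rawterms)
next(parser)  # prime the coroutine up to the first yield
for line in ['[Term]', 'id: TST:001', 'name: test term']:
    parser.send(line)
```

Splitting on the first colon only (`split(':', 1)`) is what keeps values containing colons, such as URLs, intact.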
@staticmethod<EOL><INDENT>def _classify(_rawtypedef, _rawterms):<DEDENT>
terms = collections.OrderedDict()<EOL>_cached_synonyms = {}<EOL>typedefs = [<EOL>Relationship._from_obo_dict( <EOL>{k:v for k,lv in six.iteritems(_typedef) for v in lv}<EOL>)<EOL>for _typedef in _rawtypedef<EOL>]<EOL>for _term in _rawterms:<EOL><INDENT>synonyms = set()<EOL>_id = _term['<STR_LIT:id>'][<NUM_LIT:0>]<EOL>_name = _term.pop('<STR_LIT:name>', ('<STR_LIT>',))[<NUM_LIT:0>]<EOL>_desc = _term.pop('<STR_LIT>', ('<STR_LIT>',))[<NUM_LIT:0>]<EOL>_relations = collections.defaultdict(list)<EOL>try:<EOL><INDENT>for other in _term.get('<STR_LIT>', ()):<EOL><INDENT>_relations[Relationship('<STR_LIT>')].append(other.split('<STR_LIT:!>')[<NUM_LIT:0>].strip())<EOL><DEDENT><DEDENT>except IndexError:<EOL><INDENT>pass<EOL><DEDENT>try:<EOL><INDENT>for relname, other in ( x.split('<STR_LIT:U+0020>', <NUM_LIT:1>) for x in _term.pop('<STR_LIT>', ())):<EOL><INDENT>_relations[Relationship(relname)].append(other.split('<STR_LIT:!>')[<NUM_LIT:0>].strip())<EOL><DEDENT><DEDENT>except IndexError:<EOL><INDENT>pass<EOL><DEDENT>for key, scope in six.iteritems(_obo_synonyms_map):<EOL><INDENT>for obo_header in _term.pop(key, ()):<EOL><INDENT>try:<EOL><INDENT>s = _cached_synonyms[obo_header]<EOL><DEDENT>except KeyError:<EOL><INDENT>s = Synonym.from_obo(obo_header, scope)<EOL>_cached_synonyms[obo_header] = s<EOL><DEDENT>finally:<EOL><INDENT>synonyms.add(s)<EOL><DEDENT><DEDENT><DEDENT>desc = Description.from_obo(_desc) if _desc else Description("<STR_LIT>")<EOL>terms[_id] = Term(_id, _name, desc, dict(_relations), synonyms, dict(_term))<EOL><DEDENT>return terms, typedefs<EOL>
Create proper objects out of extracted dictionaries. New Relationship objects are instantiated with the help of the `Relationship._from_obo_dict` alternate constructor. New `Term` objects are instantiated by manually extracting id, name, desc and relationships out of the ``_rawterm`` dictionary, and then calling the default constructor.
f14319:c0:m7
@staticmethod<EOL><INDENT>def _get_basename(tag):<DEDENT>
return tag.split('<STR_LIT:}>', <NUM_LIT:1>)[-<NUM_LIT:1>]<EOL>
Remove the namespace part of the tag.
f14321:c0:m3
@staticmethod<EOL><INDENT>def _get_id_from_url(url):<DEDENT>
_id = url.split('<STR_LIT:#>' if '<STR_LIT:#>' in url else '<STR_LIT:/>')[-<NUM_LIT:1>]<EOL>return _id.replace('<STR_LIT:_>', '<STR_LIT::>')<EOL>
Extract the ID of a term from an XML URL.
f14321:c0:m4
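`_get_id_from_url` derives a term accession from the last fragment (after `#`) or path segment of an OWL class URL, then turns the underscore into the conventional colon separator. A standalone equivalent, based on the splitting logic visible in the body above:

```python
def get_id_from_url(url):
    """Extract a term id such as 'GO:0008150' from an OWL class URL."""
    # split on the fragment marker if present, otherwise on the path separator
    _id = url.split('#' if '#' in url else '/')[-1]
    return _id.replace('_', ':')
```

This matches the two URL styles commonly found in OWL ontologies: OBO PURLs like `http://purl.obolibrary.org/obo/GO_0008150` and fragment-based IRIs like `http://example.org/onto#NMR_1000031`.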