@classmethod
def delete_statement(cls, prop_nr):
    """
    This serves as an alternative constructor for WDBaseDataType, whose sole purpose is to hold a WD property
    number and an empty string value in order to indicate that the whole statement with this property number
    of a WD item should be deleted.
    :param prop_nr: A WD property number as string
    :return: An instance of WDBaseDataType
    """
    return cls(value='<STR_LIT>', snak_type='value', data_type='<STR_LIT>', is_reference=False,
               is_qualifier=False, references=[], qualifiers=[], rank='<STR_LIT>', prop_nr=prop_nr,
               check_qualifier_equality=True)
# id: f10941:c2:m26
def equals(self, that, include_ref=False, fref=None):
    """
    Tests for equality of two statements. If comparing references, the order of the arguments matters:
    self is the current statement, `that` is the new statement. Allows passing in a function `fref` to use
    to compare the references; the default is equality. fref accepts two arguments, 'oldrefs' and 'newrefs',
    each of which is a list of references, where each reference is a list of statements.
    """
    if not include_ref:
        return self == that
    if include_ref and self != that:
        return False
    if include_ref and fref is None:
        fref = WDBaseDataType.refs_equal
    return fref(self, that)
# id: f10941:c2:m27
@staticmethod
def refs_equal(olditem, newitem):
    """
    Tests for exactly identical references.
    """
    oldrefs = olditem.references
    newrefs = newitem.references
    ref_equal = lambda oldref, newref: len(oldref) == len(newref) and all(x in oldref for x in newref)
    return len(oldrefs) == len(newrefs) and all(
        any(ref_equal(oldref, newref) for oldref in oldrefs) for newref in newrefs)
# id: f10941:c2:m28
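The reference comparison above treats each reference block as an unordered collection: every new reference must match some old reference of the same length and contents. A minimal standalone sketch, using plain hashable values in place of statement objects (an assumption for illustration; the real code compares WDBaseDataType instances):

```python
# Standalone sketch of the reference-equality logic above, with plain
# values standing in for statement objects.

def ref_equal(oldref, newref):
    # Two references match if they have the same length and every
    # statement of the new reference appears in the old one.
    return len(oldref) == len(newref) and all(x in oldref for x in newref)

def refs_equal(oldrefs, newrefs):
    # Reference blocks are equal if the counts agree and every new
    # reference matches some old reference (order does not matter).
    return len(oldrefs) == len(newrefs) and all(
        any(ref_equal(oldref, newref) for oldref in oldrefs)
        for newref in newrefs)

print(refs_equal([['a', 'b'], ['c']], [['c'], ['b', 'a']]))  # order-insensitive match
```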
def __init__(self, value, prop_nr, is_reference=False, is_qualifier=False, snak_type='value', references=None,
             qualifiers=None, rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param value: The string to be used as the value
    :type value: str
    :param prop_nr: The WD property ID for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    super(WDString, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE,
                                   is_reference=is_reference, is_qualifier=is_qualifier, references=references,
                                   qualifiers=qualifiers, rank=rank, prop_nr=prop_nr,
                                   check_qualifier_equality=check_qualifier_equality)
    self.set_value(value=value)
# id: f10941:c3:m0
def __init__(self, value, prop_nr, is_reference=False, is_qualifier=False, snak_type='value', references=None,
             qualifiers=None, rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param value: The string to be used as the value
    :type value: str
    :param prop_nr: The WD property ID for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    super(WDMath, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE, is_reference=is_reference,
                                 is_qualifier=is_qualifier, references=references, qualifiers=qualifiers,
                                 rank=rank, prop_nr=prop_nr, check_qualifier_equality=check_qualifier_equality)
    self.set_value(value=value)
# id: f10941:c4:m0
def __init__(self, value, prop_nr, is_reference=False, is_qualifier=False, snak_type='value', references=None,
             qualifiers=None, rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param value: The string to be used as the value
    :type value: str
    :param prop_nr: The WD property ID for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    super(WDExternalID, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE,
                                       is_reference=is_reference, is_qualifier=is_qualifier, references=references,
                                       qualifiers=qualifiers, rank=rank, prop_nr=prop_nr,
                                       check_qualifier_equality=check_qualifier_equality)
    self.set_value(value=value)
# id: f10941:c5:m0
def __init__(self, value, prop_nr, is_reference=False, is_qualifier=False, snak_type='value', references=None,
             qualifiers=None, rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param value: The WD item ID to serve as the value
    :type value: str with a 'Q' prefix followed by several digits, or only the digits without the 'Q' prefix
    :param prop_nr: The WD property ID for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    super(WDItemID, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE,
                                   is_reference=is_reference, is_qualifier=is_qualifier, references=references,
                                   qualifiers=qualifiers, rank=rank, prop_nr=prop_nr,
                                   check_qualifier_equality=check_qualifier_equality)
    self.set_value(value=value)
# id: f10941:c6:m0
def __init__(self, value, prop_nr, is_reference=False, is_qualifier=False, snak_type='value', references=None,
             qualifiers=None, rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param value: The WD property number to serve as a value
    :type value: str with a 'P' prefix followed by several digits, or only the digits without the 'P' prefix
    :param prop_nr: The WD property number for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    super(WDProperty, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE,
                                     is_reference=is_reference, is_qualifier=is_qualifier, references=references,
                                     qualifiers=qualifiers, rank=rank, prop_nr=prop_nr,
                                     check_qualifier_equality=check_qualifier_equality)
    self.set_value(value=value)
# id: f10941:c7:m0
def __init__(self, time, prop_nr, precision=11, timezone=0, calendarmodel='<STR_LIT>',
             is_reference=False, is_qualifier=False, snak_type='value', references=None, qualifiers=None,
             rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param time: A time representation string in the following format: '+%Y-%m-%dT%H:%M:%SZ'
    :type time: str in the format '+%Y-%m-%dT%H:%M:%SZ', e.g. '+2001-12-31T12:01:13Z'
    :param prop_nr: The WD property number for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param precision: Precision value for dates and time as specified in the WD data model
        (https://www.mediawiki.org/wiki/Wikibase/DataModel#Dates_and_times)
    :type precision: int
    :param timezone: The timezone which applies to the date and time as specified in the WD data model
    :type timezone: int
    :param calendarmodel: The calendar model used for the date. URL to the WD calendar model item.
    :type calendarmodel: str
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    value = (time, timezone, precision, calendarmodel)
    super(WDTime, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE, is_reference=is_reference,
                                 is_qualifier=is_qualifier, references=references, qualifiers=qualifiers, rank=rank,
                                 prop_nr=prop_nr, check_qualifier_equality=check_qualifier_equality)
    self.set_value(value=value)
# id: f10941:c8:m0
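The expected time format above includes a leading '+', which plain strftime does not emit. A small sketch of building such a timestamp from a datetime (to_wd_time is a hypothetical helper name, not part of the library):

```python
from datetime import datetime

# Build a WDTime-style timestamp string from a datetime. The leading '+'
# is required by the format '+%Y-%m-%dT%H:%M:%SZ' documented above.
def to_wd_time(dt):
    return '+' + dt.strftime('%Y-%m-%dT%H:%M:%SZ')

stamp = to_wd_time(datetime(2001, 12, 31, 12, 1, 13))
# → '+2001-12-31T12:01:13Z', matching the docstring's example
```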
def __init__(self, value, prop_nr, is_reference=False, is_qualifier=False, snak_type='value', references=None,
             qualifiers=None, rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param value: The URL to be used as the value
    :type value: str
    :param prop_nr: The WD property ID for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    super(WDUrl, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE, is_reference=is_reference,
                                is_qualifier=is_qualifier, references=references, qualifiers=qualifiers, rank=rank,
                                prop_nr=prop_nr, check_qualifier_equality=check_qualifier_equality)
    self.set_value(value)
# id: f10941:c9:m0
def __init__(self, value, prop_nr, language='<STR_LIT>', is_reference=False, is_qualifier=False, snak_type='value',
             references=None, qualifiers=None, rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param value: The language-specific string to be used as the value
    :type value: str
    :param prop_nr: The WD property ID for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param language: Specifies the WD language the value belongs to
    :type language: str
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    self.language = language
    value = (value, language)
    super(WDMonolingualText, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE,
                                            is_reference=is_reference, is_qualifier=is_qualifier,
                                            references=references, qualifiers=qualifiers, rank=rank,
                                            prop_nr=prop_nr, check_qualifier_equality=check_qualifier_equality)
    self.set_value(value)
# id: f10941:c10:m0
def __init__(self, value, prop_nr, upper_bound=None, lower_bound=None, unit='1', is_reference=False,
             is_qualifier=False, snak_type='value', references=None, qualifiers=None, rank='<STR_LIT>',
             check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param value: The quantity value
    :type value: float, str
    :param prop_nr: The WD property ID for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param upper_bound: Upper bound of the value if it exists, e.g. for standard deviations
    :type upper_bound: float, str
    :param lower_bound: Lower bound of the value if it exists, e.g. for standard deviations
    :type lower_bound: float, str
    :param unit: The WD unit item URL a certain quantity has been measured in
        (https://www.wikidata.org/wiki/Wikidata:Units). The default is dimensionless, represented by a '1'.
    :type unit: str
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    v = (value, unit, upper_bound, lower_bound)
    super(WDQuantity, self).__init__(value=v, snak_type=snak_type, data_type=self.DTYPE,
                                     is_reference=is_reference, is_qualifier=is_qualifier, references=references,
                                     qualifiers=qualifiers, rank=rank, prop_nr=prop_nr,
                                     check_qualifier_equality=check_qualifier_equality)
    self.set_value(v)
# id: f10941:c11:m0
def __init__(self, value, prop_nr, is_reference=False, is_qualifier=False, snak_type='value', references=None,
             qualifiers=None, rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param value: The media file name from Wikimedia Commons to be used as the value
    :type value: str
    :param prop_nr: The WD property ID for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    super(WDCommonsMedia, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE,
                                         is_reference=is_reference, is_qualifier=is_qualifier,
                                         references=references, qualifiers=qualifiers, rank=rank, prop_nr=prop_nr,
                                         check_qualifier_equality=check_qualifier_equality)
    self.set_value(value)
# id: f10941:c12:m0
def __init__(self, latitude, longitude, precision, prop_nr, globe='<STR_LIT>',
             is_reference=False, is_qualifier=False,
             snak_type='value', references=None, qualifiers=None, rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param latitude: Latitude in decimal format
    :type latitude: float
    :param longitude: Longitude in decimal format
    :type longitude: float
    :param precision: Precision of the position measurement
    :type precision: float
    :param prop_nr: The WD property ID for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    value = (latitude, longitude, precision)
    self.latitude, self.longitude, self.precision = value
    self.globe = globe
    super(WDGlobeCoordinate, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE,
                                            is_reference=is_reference, is_qualifier=is_qualifier,
                                            references=references, qualifiers=qualifiers, rank=rank,
                                            prop_nr=prop_nr, check_qualifier_equality=check_qualifier_equality)
    self.set_value(value)
# id: f10941:c13:m0
def __init__(self, value, prop_nr, is_reference=False, is_qualifier=False, snak_type='value', references=None,
             qualifiers=None, rank='<STR_LIT>', check_qualifier_equality=True):
    """
    Constructor, calls the superclass WDBaseDataType
    :param value: The GeoShape map file name in Wikimedia Commons to be linked
    :type value: str
    :param prop_nr: The WD property ID for this claim
    :type prop_nr: str with a 'P' prefix followed by digits
    :param is_reference: Whether this snak is a reference
    :type is_reference: boolean
    :param is_qualifier: Whether this snak is a qualifier
    :type is_qualifier: boolean
    :param snak_type: The snak type, either 'value', 'somevalue' or 'novalue'
    :type snak_type: str
    :param references: List with reference objects
    :type references: A WD data type with subclass of WDBaseDataType
    :param qualifiers: List with qualifier objects
    :type qualifiers: A WD data type with subclass of WDBaseDataType
    :param rank: WD rank of a snak with value 'preferred', 'normal' or 'deprecated'
    :type rank: str
    """
    super(WDGeoShape, self).__init__(value=value, snak_type=snak_type, data_type=self.DTYPE,
                                     is_reference=is_reference, is_qualifier=is_qualifier, references=references,
                                     qualifiers=qualifiers, rank=rank, prop_nr=prop_nr,
                                     check_qualifier_equality=check_qualifier_equality)
    self.set_value(value=value)
# id: f10941:c14:m0
def __init__(self, wd_error_message):
    """
    Base class for Wikidata error handling
    :param wd_error_message: The error message returned by the WD API
    :type wd_error_message: A Python json representation dictionary of the error message
    :return:
    """
    self.wd_error_msg = wd_error_message
# id: f10941:c15:m0
def __init__(self, wd_error_message):
    """
    This class handles errors returned from the WD API due to an attempt to create an item which has the same
    label and description as an existing item in a certain language.
    :param wd_error_message: A WD API error message containing
        'wikibase-validator-label-with-description-conflict' as the message name.
    :type wd_error_message: A Python json representation dictionary of the error message
    :return:
    """
    self.wd_error_msg = wd_error_message
# id: f10941:c16:m0
def get_language(self):
    """
    :return: Returns a 2-letter Wikidata language string, indicating the language which triggered the error
    """
    return self.wd_error_msg['error']['<STR_LIT>'][0]['<STR_LIT>'][1]
# id: f10941:c16:m1
def get_conflicting_item_qid(self):
    """
    :return: Returns the QID string of the item which has the same label and description as the one which
        should be set.
    """
    qid_string = self.wd_error_msg['error']['<STR_LIT>'][0]['<STR_LIT>'][2]
    return qid_string.split('|')[0][2:]
# id: f10941:c16:m2
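The slicing in get_conflicting_item_qid suggests the API parameter is a wikilink-style string such as '[[Q42|Q42]]'; that input shape is an assumption inferred from `split('|')[0][2:]`, not stated in the source. A standalone sketch of the extraction:

```python
# Sketch of the QID extraction above. The example input '[[Q42|Q42]]' is
# an assumed shape, inferred from the slicing logic, not confirmed here.
def extract_qid(qid_string):
    # '[[Q42|Q42]]' -> split on '|' gives '[[Q42' -> drop the leading '[['
    return qid_string.split('|')[0][2:]

print(extract_qid('[[Q42|Q42]]'))  # extracts the bare QID
```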
def check_json_decode_error(e):
    """
    Check if the error message is "Expecting value: line 1 column 1 (char 0)";
    if not, it's a real error and we shouldn't retry.
    :param e:
    :return:
    """
    return type(e) == JSONDecodeError and str(e) != "<STR_LIT>"
# id: f10942:m2
def expo(base=2, factor=1, max_value=None):
    """
    Generator for exponential decay.

    Args:
        base: The mathematical base of the exponentiation operation.
        factor: Factor to multiply the exponentiation by.
        max_value: The maximum value to yield. Once the value in the true
            exponential sequence exceeds this, the value of max_value will
            forever after be yielded.
    """
    n = 0
    while True:
        a = factor * base ** n
        if max_value is None or a < max_value:
            yield a
            n += 1
        else:
            yield max_value
# id: f10943:m0
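The capping behavior of expo is easiest to see by taking a few values from the generator; this restates expo standalone so the snippet runs on its own:

```python
import itertools

# Standalone restatement of the expo() wait generator above.
def expo(base=2, factor=1, max_value=None):
    n = 0
    while True:
        a = factor * base ** n
        if max_value is None or a < max_value:
            yield a
            n += 1
        else:
            yield max_value

waits = list(itertools.islice(expo(base=2, max_value=10), 6))
# → [1, 2, 4, 8, 10, 10]: once 16 would exceed max_value, the
#   generator yields max_value forever after.
```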
def fibo(max_value=None):
    """
    Generator for Fibonacci decay.

    Args:
        max_value: The maximum value to yield. Once the value in the true
            Fibonacci sequence exceeds this, the value of max_value will
            forever after be yielded.
    """
    a = 1
    b = 1
    while True:
        if max_value is None or a < max_value:
            yield a
            a, b = b, a + b
        else:
            yield max_value
# id: f10943:m1
def constant(interval=1):
    """
    Generator for constant intervals.

    Args:
        interval: The constant value in seconds to yield.
    """
    while True:
        yield interval
# id: f10943:m2
def random_jitter(value):
    """
    Jitter the value by a random amount, adding up to 1 second of additional
    time to the original value. Prior to backoff version 1.2 this was the
    default jitter behavior.

    Args:
        value: The unadulterated backoff value.
    """
    return value + random.random()
# id: f10943:m3
def full_jitter(value):
    """
    Jitter the value across the full range (0 to value).

    This corresponds to the "Full Jitter" algorithm specified in the AWS
    blog post on the performance of various jitter algorithms.
    (http://www.awsarchitectureblog.com/2015/03/backoff.html)

    Args:
        value: The unadulterated backoff value.
    """
    return random.uniform(0, value)
# id: f10943:m4
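Full jitter replaces the deterministic wait with a uniform draw over [0, value], which spreads concurrent clients apart. A quick standalone check of that range property:

```python
import random

# Standalone restatement of full_jitter: a uniform draw over [0, value].
def full_jitter(value):
    return random.uniform(0, value)

random.seed(0)  # deterministic for the demonstration
samples = [full_jitter(8) for _ in range(1000)]
# Every jittered wait stays within the full range [0, 8].
assert all(0 <= s <= 8 for s in samples)
```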
def on_predicate(wait_gen,
                 predicate=operator.not_,
                 max_tries=None,
                 jitter=full_jitter,
                 on_success=None,
                 on_backoff=None,
                 on_giveup=None,
                 **wait_gen_kwargs):
    """
    Returns a decorator for backoff and retry triggered by a predicate.

    Args:
        wait_gen: A generator yielding successive wait times in seconds.
        predicate: A function which, when called on the return value of the
            target function, will trigger backoff when considered truthy.
            If not specified, the default behavior is to back off on falsey
            return values.
        max_tries: The maximum number of attempts to make before giving up.
            In the case of failure, the result of the last attempt will be
            returned. The default value of None means there is no limit to
            the number of tries.
        jitter: A function of the value yielded by wait_gen returning the
            actual time to wait. This distributes wait times stochastically
            in order to avoid timing collisions across concurrent clients.
            Wait times are jittered by default using the full_jitter
            function. Jittering may be disabled altogether by passing
            jitter=None.
        on_success: Callable (or iterable of callables) with a unary
            signature to be called in the event of success. The parameter
            is a dict containing details about the invocation.
        on_backoff: Callable (or iterable of callables) with a unary
            signature to be called in the event of a backoff. The parameter
            is a dict containing details about the invocation.
        on_giveup: Callable (or iterable of callables) with a unary
            signature to be called in the event that max_tries is exceeded.
            The parameter is a dict containing details about the invocation.
        **wait_gen_kwargs: Any additional keyword args specified will be
            passed to wait_gen when it is initialized. Any callable args
            will first be evaluated and their return values passed. This
            is useful for runtime configuration.
    """
    success_hdlrs = _handlers(on_success)
    backoff_hdlrs = _handlers(on_backoff, _log_backoff)
    giveup_hdlrs = _handlers(on_giveup, _log_giveup)

    def decorate(target):
        @functools.wraps(target)
        def retry(*args, **kwargs):
            max_tries_ = _maybe_call(max_tries)
            wait = wait_gen(**dict((k, _maybe_call(v))
                                   for k, v in wait_gen_kwargs.items()))
            tries = 0
            while True:
                tries += 1
                ret = target(*args, **kwargs)
                if predicate(ret):
                    if tries == max_tries_:
                        for hdlr in giveup_hdlrs:
                            hdlr({'target': target,
                                  'args': args,
                                  '<STR_LIT>': kwargs,
                                  '<STR_LIT>': tries,
                                  'value': ret})
                        break
                    value = next(wait)
                    try:
                        if jitter is not None:
                            seconds = jitter(value)
                        else:
                            seconds = value
                    except TypeError:
                        seconds = value + jitter()
                    for hdlr in backoff_hdlrs:
                        hdlr({'target': target,
                              'args': args,
                              '<STR_LIT>': kwargs,
                              '<STR_LIT>': tries,
                              'value': ret,
                              '<STR_LIT>': seconds})
                    time.sleep(seconds)
                    continue
                else:
                    for hdlr in success_hdlrs:
                        hdlr({'target': target,
                              'args': args,
                              '<STR_LIT>': kwargs,
                              '<STR_LIT>': tries,
                              'value': ret})
                    break
            return ret
        return retry
    return decorate
# id: f10943:m5
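Stripped of handler plumbing and jitter, the retry loop inside on_predicate reduces to: call the target, back off while the result is rejected, and stop after max_tries. A minimal standalone sketch of that core (retry_until and no_wait are illustrative names, not part of the module; note the sketch treats a truthy result as success, the inverse of on_predicate's backoff predicate):

```python
import time

# Minimal sketch of the retry loop inside on_predicate. Handler and
# jitter plumbing from the original are deliberately omitted.
def retry_until(target, wait_gen, predicate=bool, max_tries=None):
    wait = wait_gen()
    tries = 0
    while True:
        tries += 1
        ret = target()
        if predicate(ret) or tries == max_tries:
            return ret          # success, or last attempt's result
        time.sleep(next(wait))  # back off before retrying

calls = []
def poll():
    calls.append(1)
    return len(calls) >= 3      # falsey until the third call

def no_wait():                  # zero-delay waits, for demonstration
    while True:
        yield 0

result = retry_until(poll, no_wait, max_tries=5)
# → True on the third attempt
```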
def on_exception(wait_gen,
                 exception,
                 max_tries=None,
                 jitter=full_jitter,
                 giveup=lambda e: False,
                 on_success=None,
                 on_backoff=None,
                 on_giveup=None,
                 **wait_gen_kwargs):
    """
    Returns a decorator for backoff and retry triggered by an exception.

    Args:
        wait_gen: A generator yielding successive wait times in seconds.
        exception: An exception type (or tuple of types) which triggers
            backoff.
        max_tries: The maximum number of attempts to make before giving up.
            Once exhausted, the exception will be allowed to escape. The
            default value of None means there is no limit to the number of
            tries. If a callable is passed, it will be evaluated at runtime
            and its return value used.
        jitter: A function of the value yielded by wait_gen returning the
            actual time to wait. This distributes wait times stochastically
            in order to avoid timing collisions across concurrent clients.
            Wait times are jittered by default using the full_jitter
            function. Jittering may be disabled altogether by passing
            jitter=None.
        giveup: Function accepting an exception instance and returning
            whether or not to give up. Optional. The default is to always
            continue.
        on_success: Callable (or iterable of callables) with a unary
            signature to be called in the event of success. The parameter
            is a dict containing details about the invocation.
        on_backoff: Callable (or iterable of callables) with a unary
            signature to be called in the event of a backoff. The parameter
            is a dict containing details about the invocation.
        on_giveup: Callable (or iterable of callables) with a unary
            signature to be called in the event that max_tries is exceeded.
            The parameter is a dict containing details about the invocation.
        **wait_gen_kwargs: Any additional keyword args specified will be
            passed to wait_gen when it is initialized. Any callable args
            will first be evaluated and their return values passed. This
            is useful for runtime configuration.
    """
    success_hdlrs = _handlers(on_success)
    backoff_hdlrs = _handlers(on_backoff, _log_backoff)
    giveup_hdlrs = _handlers(on_giveup, _log_giveup)

    def decorate(target):
        @functools.wraps(target)
        def retry(*args, **kwargs):
            max_tries_ = _maybe_call(max_tries)
            wait = wait_gen(**dict((k, _maybe_call(v))
                                   for k, v in wait_gen_kwargs.items()))
            tries = 0
            while True:
                try:
                    tries += 1
                    ret = target(*args, **kwargs)
                except exception as e:
                    if giveup(e) or tries == max_tries_:
                        for hdlr in giveup_hdlrs:
                            hdlr({'target': target,
                                  'args': args,
                                  '<STR_LIT>': kwargs,
                                  '<STR_LIT>': tries})
                        raise
                    value = next(wait)
                    try:
                        if jitter is not None:
                            seconds = jitter(value)
                        else:
                            seconds = value
                    except TypeError:
                        seconds = value + jitter()
                    for hdlr in backoff_hdlrs:
                        hdlr({'target': target,
                              'args': args,
                              '<STR_LIT>': kwargs,
                              '<STR_LIT>': tries,
                              '<STR_LIT>': seconds})
                    time.sleep(seconds)
                else:
                    for hdlr in success_hdlrs:
                        hdlr({'target': target,
                              'args': args,
                              '<STR_LIT>': kwargs,
                              '<STR_LIT>': tries})
                    return ret
        return retry
    return decorate
# id: f10943:m6
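The exception-driven variant follows the same loop shape: re-invoke the target while it raises, and re-raise once giveup(e) is true or max_tries is exhausted. A minimal standalone sketch of that core (retry_on_exception, flaky, and no_wait are illustrative names, not part of the module):

```python
import time

# Minimal sketch of the retry loop inside on_exception. Handler and
# jitter plumbing from the original are deliberately omitted.
def retry_on_exception(target, wait_gen, exception, max_tries=None,
                       giveup=lambda e: False):
    wait = wait_gen()
    tries = 0
    while True:
        try:
            tries += 1
            return target()
        except exception as e:
            if giveup(e) or tries == max_tries:
                raise               # let the exception escape
            time.sleep(next(wait))  # back off before retrying

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError('transient failure')
    return 'ok'

def no_wait():                      # zero-delay waits, for demonstration
    while True:
        yield 0

result = retry_on_exception(flaky, no_wait, ValueError, max_tries=5)
# → 'ok' on the third call
```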
def init_language_data(self, lang, lang_data_type):
    """
    Initialize language data store
    :param lang: language code
    :param lang_data_type: 'label', 'description' or 'aliases'
    :return: None
    """
    if lang not in self.loaded_langs:
        self.loaded_langs[lang] = {}
    if lang_data_type not in self.loaded_langs[lang]:
        result = self._query_lang(lang=lang, lang_data_type=lang_data_type)
        data = self._process_lang(result)
        self.loaded_langs[lang].update({lang_data_type: data})
# id: f10954:c0:m3
def get_language_data(self, qid, lang, lang_data_type):
    """
    Get language data for the specified qid
    :param qid:
    :param lang: language code
    :param lang_data_type: 'label', 'description' or 'aliases'
    :return: list of strings. If nothing is found:
        if lang_data_type == 'label', returns ['']
        if lang_data_type == 'description', returns ['']
        if lang_data_type == 'aliases', returns []
    """
    self.init_language_data(lang, lang_data_type)
    current_lang_data = self.loaded_langs[lang][lang_data_type]
    all_lang_strings = current_lang_data.get(qid, [])
    if not all_lang_strings and lang_data_type in {'label', 'description'}:
        all_lang_strings = ['<STR_LIT>']
    return all_lang_strings
# id: f10954:c0:m4
def check_language_data(self, qid, lang_data, lang, lang_data_type):
    """
    Method to check if certain language data exists as a label, description or aliases
    :param lang_data: list of string values to check
    :type lang_data: list
    :param lang: language code
    :type lang: str
    :param lang_data_type: What kind of data is it? 'label', 'description' or 'aliases'?
    :return:
    """
    all_lang_strings = set(x.strip().lower() for x in self.get_language_data(qid, lang, lang_data_type))
    for s in lang_data:
        if s.strip().lower() not in all_lang_strings:
            print('<STR_LIT>'.format(lang_data_type, s))
            return True
    return False
# id: f10954:c0:m5
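The comparison above is case- and whitespace-insensitive: existing strings are normalized once into a set, then each candidate is normalized and checked for membership. A small standalone sketch of that check (needs_write is an illustrative name, not part of the class):

```python
# Sketch of the normalized membership test used by check_language_data.
existing = {'Douglas Adams', 'douglas adams '}
all_lang_strings = set(x.strip().lower() for x in existing)

def needs_write(lang_data):
    # True as soon as any candidate string is not already present.
    return any(s.strip().lower() not in all_lang_strings for s in lang_data)

print(needs_write(['DOUGLAS ADAMS']))  # already present after normalization
print(needs_write(['D. Adams']))       # genuinely new value
```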
def format_query_results(self, r, prop_nr):
prop_dt = self.get_prop_datatype(prop_nr)<EOL>for i in r:<EOL><INDENT>for value in {'<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>'}:<EOL><INDENT>if value in i:<EOL><INDENT>i[value] = i[value]['<STR_LIT:value>'].split('<STR_LIT:/>')[-<NUM_LIT:1>]<EOL><DEDENT><DEDENT>for value in {'<STR_LIT:v>', '<STR_LIT>', '<STR_LIT>'}:<EOL><INDENT>if value in i:<EOL><INDENT>if i[value].get("<STR_LIT>") == '<STR_LIT>' and noti[value]['<STR_LIT:value>'][<NUM_LIT:0>] in '<STR_LIT>':<EOL><INDENT>i[value]['<STR_LIT:value>'] = '<STR_LIT:+>' + i[value]['<STR_LIT:value>']<EOL><DEDENT><DEDENT><DEDENT>if '<STR_LIT:v>' in i:<EOL><INDENT>if i['<STR_LIT:v>']['<STR_LIT:type>'] == '<STR_LIT>' and prop_dt == '<STR_LIT>':<EOL><INDENT>i['<STR_LIT:v>'] = i['<STR_LIT:v>']['<STR_LIT:value>'].split('<STR_LIT:/>')[-<NUM_LIT:1>]<EOL><DEDENT>else:<EOL><INDENT>i['<STR_LIT:v>'] = i['<STR_LIT:v>']['<STR_LIT:value>']<EOL><DEDENT>if type(i['<STR_LIT:v>']) is not dict:<EOL><INDENT>self.rev_lookup[i['<STR_LIT:v>']].add(i['<STR_LIT>'])<EOL><DEDENT><DEDENT>if '<STR_LIT>' in i:<EOL><INDENT>qual_prop_dt = self.get_prop_datatype(prop_nr=i['<STR_LIT>'])<EOL>if i['<STR_LIT>']['<STR_LIT:type>'] == '<STR_LIT>' and qual_prop_dt == '<STR_LIT>':<EOL><INDENT>i['<STR_LIT>'] = i['<STR_LIT>']['<STR_LIT:value>'].split('<STR_LIT:/>')[-<NUM_LIT:1>]<EOL><DEDENT>else:<EOL><INDENT>i['<STR_LIT>'] = i['<STR_LIT>']['<STR_LIT:value>']<EOL><DEDENT><DEDENT>if '<STR_LIT>' in i:<EOL><INDENT>ref_prop_dt = self.get_prop_datatype(prop_nr=i['<STR_LIT>'])<EOL>if i['<STR_LIT>']['<STR_LIT:type>'] == '<STR_LIT>' and ref_prop_dt == '<STR_LIT>':<EOL><INDENT>i['<STR_LIT>'] = i['<STR_LIT>']['<STR_LIT:value>'].split('<STR_LIT:/>')[-<NUM_LIT:1>]<EOL><DEDENT>else:<EOL><INDENT>i['<STR_LIT>'] = i['<STR_LIT>']['<STR_LIT:value>']<EOL><DEDENT><DEDENT><DEDENT>
`r` is the results of the sparql query in _query_data and is modified in place `prop_nr` is needed to get the property datatype to determine how to format the value `r` is a list of dicts. The keys are: item: the subject. the item this statement is on v: the object. The value for this statement sid: statement ID pq: qualifier property qval: qualifier value ref: reference ID pr: reference property rval: reference value
f10954:c0:m7
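The body above is stored with tokenized literals, which obscures the normalization it performs on SPARQL JSON bindings. The sketch below reconstructs that logic with plain dicts: URIs are trimmed to their final path segment, dateTime literals get a leading '+' (the Wikibase convention), and plain string values feed a reverse-lookup index. The key names (`item`, `v`, etc.) and the dateTime datatype URI are assumptions, not taken from the tokenized source.

```python
from collections import defaultdict

def normalize_binding(binding, rev_lookup):
    # Trim entity/statement URIs down to their last path segment (e.g. Q42).
    for key in ('item', 'sid', 'pq', 'ref', 'pr'):
        if key in binding:
            binding[key] = binding[key]['value'].split('/')[-1]
    if 'v' in binding:
        val = binding['v']
        # Wikibase time values carry a leading '+' or '-'; add one if missing.
        if val.get('datatype') == 'http://www.w3.org/2001/XMLSchema#dateTime' \
                and not val['value'][0] in '+-':
            val['value'] = '+' + val['value']
        if val['type'] == 'uri':
            binding['v'] = val['value'].split('/')[-1]
        else:
            binding['v'] = val['value']
        # Index value -> set of items for fast reverse lookup.
        if not isinstance(binding['v'], dict):
            rev_lookup[binding['v']].add(binding['item'])
    return binding

rev = defaultdict(set)
row = {
    'item': {'type': 'uri', 'value': 'http://www.wikidata.org/entity/Q42'},
    'v': {'type': 'literal', 'value': '2020-01-01T00:00:00Z',
          'datatype': 'http://www.w3.org/2001/XMLSchema#dateTime'},
}
normalize_binding(row, rev)
```

After the call, `row` holds flat strings and `rev` maps the normalized value back to the item QID.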
def _query_lang(self, lang, lang_data_type):
lang_data_type_dict = {<EOL>'<STR_LIT:label>': '<STR_LIT>',<EOL>'<STR_LIT:description>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>'<EOL>}<EOL>query = '''<STR_LIT>'''.format(self.base_filter_string, lang_data_type_dict[lang_data_type], lang)<EOL>if self.debug:<EOL><INDENT>print(query)<EOL><DEDENT>return self.engine.execute_sparql_query(query=query, endpoint=self.sparql_endpoint_url)['<STR_LIT>']['<STR_LIT>']<EOL>
:param lang: :param lang_data_type: :return:
f10954:c0:m11
def clear(self):
self.prop_dt_map = dict()<EOL>self.prop_data = dict()<EOL>self.rev_lookup = defaultdict(set)<EOL>
convenience function to empty this fastrun container
f10954:c0:m14
def __init__(self, title, description, edition, edition_of_wdid, archive_url=None,<EOL>pub_date=None, date_precision=<NUM_LIT:11>, mediawiki_api_url='<STR_LIT>',<EOL>sparql_endpoint_url='<STR_LIT>'):
self.title = title<EOL>self.description = description<EOL>self.edition = str(edition)<EOL>self.archive_url = archive_url<EOL>if isinstance(pub_date, datetime.date):<EOL><INDENT>self.pub_date = pub_date.strftime('<STR_LIT>')<EOL><DEDENT>else:<EOL><INDENT>self.pub_date = pub_date<EOL><DEDENT>self.date_precision = date_precision<EOL>self.edition_of_qid = edition_of_wdid<EOL>self.sparql_endpoint_url = sparql_endpoint_url<EOL>self.mediawiki_api_url = mediawiki_api_url<EOL>self.helper = WikibaseHelper(sparql_endpoint_url)<EOL>self.statements = None<EOL>
:param title: title of release item :type title: str :param description: description of release item :type description: str :param edition: edition number or unique identifier for the release :type edition: str :param edition_of_wdid: wikidata qid of database this release is a release of :type edition_of_wdid: str :param archive_url: (optional) :type archive_url: str :param pub_date: (optional) Datetime will be converted to str :type pub_date: str or datetime :param date_precision: (optional) passed to PBB_Core.WDTime as is. default is 11 (day) :type date_precision: int
f10955:c0:m0
def get_pid(self, uri):
<EOL>if (self.sparql_endpoint_url == '<STR_LIT>' and<EOL>(uri.startswith("<STR_LIT:P>") or uri.startswith("<STR_LIT>"))):<EOL><INDENT>return uri<EOL><DEDENT>if uri.startswith("<STR_LIT:P>"):<EOL><INDENT>uri = "<STR_LIT>" + uri<EOL><DEDENT>return self.URI_PID[uri]<EOL>
Get the pid for the property in this wikibase instance ( the one at `sparql_endpoint_url` ), that corresponds to (i.e. has the equivalent property) `uri`
f10956:c0:m2
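The docstring describes a pass-through for Wikidata itself plus a URI-to-local-PID lookup. A minimal sketch of that branching, assuming a `URI_PID` dict keyed on equivalent-property URIs (the Wikidata endpoint URL and entity prefix are assumed constants):

```python
WIKIDATA_SPARQL = 'https://query.wikidata.org/sparql'  # assumed endpoint URL

def get_pid(uri_pid, sparql_endpoint_url, uri):
    # When the target wikibase IS Wikidata, a bare PID maps to itself.
    if sparql_endpoint_url == WIKIDATA_SPARQL and uri.startswith('P'):
        return uri
    # Bare PIDs are expanded to full Wikidata URIs before lookup.
    if uri.startswith('P'):
        uri = 'http://www.wikidata.org/entity/' + uri
    return uri_pid[uri]
```

A `KeyError` here means the local wikibase has no property marked equivalent to the given Wikidata property.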
def prop2qid(self, prop, value):
equiv_class_pid = self.URI_PID['<STR_LIT>']<EOL>query = """<STR_LIT>"""<EOL>query = query.format(prop=prop, value=value, equiv_class_pid=equiv_class_pid)<EOL>results = wdi_core.WDItemEngine.execute_sparql_query(query, endpoint=self.sparql_endpoint_url)<EOL>result = results['<STR_LIT>']['<STR_LIT>']<EOL>if len(result) == <NUM_LIT:0>:<EOL><INDENT>return None<EOL><DEDENT>elif len(result) > <NUM_LIT:1>:<EOL><INDENT>raise ValueError("<STR_LIT>".format(prop, value, result))<EOL><DEDENT>else:<EOL><INDENT>return result[<NUM_LIT:0>]['<STR_LIT>']['<STR_LIT:value>'].split("<STR_LIT:/>")[-<NUM_LIT:1>]<EOL><DEDENT>
Lookup the local item QID for a Wikidata item that has a certain `prop` -> `value` in the case where the local item has a `equivalent item` statement to that wikidata item Example: In my wikibase, I have CDK2 (Q79363) with the only statement: equivalent class -> http://www.wikidata.org/entity/Q14911732 Calling prop2qid("P351", "1017") will return the local QID (Q79363) :param prop: :param value: :return:
f10956:c0:m4
def id_mapper(self, prop, filters=None, return_as_set=False):
if filters:<EOL><INDENT>filter_str = "<STR_LIT:\n>".join("<STR_LIT>".format(x[<NUM_LIT:0>], x[<NUM_LIT:1>]) for x in filters)<EOL><DEDENT>else:<EOL><INDENT>filter_str = "<STR_LIT>"<EOL><DEDENT>query = """<STR_LIT>""".format(prop=prop, filter_str=filter_str)<EOL>results = wdi_core.WDItemEngine.execute_sparql_query(query, endpoint=self.sparql_endpoint_url)['<STR_LIT>']['<STR_LIT>']<EOL>results = [{k: v['<STR_LIT:value>'] for k, v in x.items()} for x in results]<EOL>for r in results:<EOL><INDENT>r['<STR_LIT>'] = r['<STR_LIT>'].split('<STR_LIT:/>')[-<NUM_LIT:1>]<EOL><DEDENT>if not results:<EOL><INDENT>return None<EOL><DEDENT>id_qid = defaultdict(set)<EOL>for r in results:<EOL><INDENT>id_qid[r['<STR_LIT>']].add(r['<STR_LIT>'])<EOL><DEDENT>if return_as_set:<EOL><INDENT>return dict(id_qid)<EOL><DEDENT>else:<EOL><INDENT>return {x['<STR_LIT>']: x['<STR_LIT>'] for x in results}<EOL><DEDENT>
# see wdi_helpers.id_mapper for help on usage # QIDs returned are from the wikibase specified in self.sparql_endpoint_url # see WikibaseHelper.prop2qid for more details Example: Get all human genes: prop = "P351" filters = [("P703", "Q15978631")] h.id_mapper(prop, filters)
f10956:c0:m5
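The tail of `id_mapper` collapses SPARQL rows into an external-id → QID mapping. This sketch isolates that grouping step: each row is assumed to carry `id` and `item` keys; with `return_as_set=True` duplicates are preserved as sets, otherwise later rows silently win.

```python
from collections import defaultdict

def group_mappings(results, return_as_set=False):
    # Collect every QID seen for each external ID.
    id_qid = defaultdict(set)
    for r in results:
        id_qid[r['id']].add(r['item'])
    if return_as_set:
        return dict(id_qid)
    # Flat dict: for duplicate IDs, the last row encountered wins.
    return {r['id']: r['item'] for r in results}

rows = [{'id': 'A0KH68', 'item': 'Q1'},
        {'id': 'B023F44', 'item': 'Q2'},
        {'id': 'B023F44', 'item': 'Q3'}]
```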
def __init__(self, title=None, instance_of=None, subtitle=None, authors=list(),<EOL>publication_date=None, original_language_of_work=None,<EOL>published_in_issn=None, published_in_isbn=None,<EOL>volume=None, issue=None, pages=None, number_of_pages=None, cites=None,<EOL>editor=None, license=None, full_work_available_at=None, language_of_work_or_name=None,<EOL>main_subject=None, commons_category=None, sponsor=None, data_available_at=None,<EOL>ids=None, ref_url=None, source=None):
<EOL>self.warnings = []<EOL>self._instance_of = instance_of<EOL>self.instance_of_qid = None<EOL>self._published_in_isbn = published_in_isbn<EOL>self._published_in_issn = published_in_issn<EOL>self.published_in_qid = None<EOL>self._authors = authors<EOL>self.title = title<EOL>self.publication_date = publication_date<EOL>self.volume = volume<EOL>self.issue = issue<EOL>self.pages = pages<EOL>self.ids = ids<EOL>self.source = source<EOL>self.ref_url = ref_url<EOL>self.subtitle = subtitle<EOL>self.original_language_of_work = original_language_of_work<EOL>self.cites = cites<EOL>self.editor = editor<EOL>self.license = license<EOL>self.full_work_available_at = full_work_available_at<EOL>self.language_of_work_or_name = language_of_work_or_name<EOL>self.main_subject = main_subject<EOL>self.commons_category = commons_category<EOL>self.sponsor = sponsor<EOL>self.data_available_at = data_available_at<EOL>self.number_of_pages = number_of_pages<EOL>self.reference = None<EOL>self.statements = []<EOL>
:param title: :type title: str :param instance_of: one of `INSTANCE_OF` :type instance_of: str :param authors: authors is a list of dicts, containing the following keys: full_name, orcid (optional) example: {'full_name': "Andrew I. Su", 'orcid': "0000-0002-9859-4104"} If author name can't be parsed, use value None. i.e. {'full_name': None} :type authors: list :param publication_date: :type publication_date: datetime.datetime :param published_in_issn: The issn# for the journal :type published_in_issn: str or list :param published_in_isbn: The isbn# :type published_in_isbn: str or list :param volume: :type volume: str :param issue: :type issue: str :param pages: :type pages: str :param ids: may contain the following keys: doi, pmid, pmcid, article_id, arxiv_id, bibcode, zoobank_pub_id, jstor_article_id, ssrn_id, nioshtic2_id, dialnet_article, opencitations_id, acmdl_id, publons_id example: {'doi': 'xxx', 'pmid': '1234'} :type ids: dict :param ref_url: Ref url (for the api call) :type ref_url: str :param source: One of {'crossref', 'europepmc'} :type source: str :param subtitle: Not implemented :param original_language_of_work: Not implemented :param number_of_pages: Not implemented :param cites: Not implemented :param editor: Not implemented :param license: Not implemented :param full_work_available_at: Not implemented :param language_of_work_or_name: Not implemented :param main_subject: Not implemented :param commons_category: Not implemented :param sponsor: Not implemented :param data_available_at: Not implemented
f10957:c1:m0
def __init__(self, ext_id, id_type, source):
assert source in self.SOURCE_FUNCT<EOL>self.f = self.SOURCE_FUNCT[source]<EOL>self.e = None<EOL>try:<EOL><INDENT>self.p = self.f(ext_id, id_type=id_type)<EOL><DEDENT>except Exception as e:<EOL><INDENT>self.p = None<EOL>self.e = e<EOL><DEDENT>
PublicationHelper: Helper to create wikidata items about literature Supported data sources and (ID types): crossref (doi), europepmc (pmid, pmcid, doi) :param ext_id: the external ID to use :type ext_id: str :param id_type: one of {'pmid', 'pmcid', 'doi'} :type id_type: str :param source: one of {'crossref', 'europepmc'} :type source: str
f10957:c2:m0
def get_or_create(self, login):
if self.p:<EOL><INDENT>try:<EOL><INDENT>return self.p.get_or_create(login)<EOL><DEDENT>except Exception as e:<EOL><INDENT>return None, self.p.warnings, e<EOL><DEDENT><DEDENT>else:<EOL><INDENT>return None, [], self.e<EOL><DEDENT>
Get the qid of the item by its external id or create if doesn't exist :param login: WDLogin item :return: tuple of (qid, list of warnings (strings), success (True if success, returns the Exception otherwise))
f10957:c2:m1
def try_write(wd_item, record_id, record_prop, login, edit_summary='<STR_LIT>', write=True):
if wd_item.require_write:<EOL><INDENT>if wd_item.create_new_item:<EOL><INDENT>msg = "<STR_LIT>"<EOL><DEDENT>else:<EOL><INDENT>msg = "<STR_LIT>"<EOL><DEDENT><DEDENT>else:<EOL><INDENT>msg = "<STR_LIT>"<EOL><DEDENT>try:<EOL><INDENT>if write:<EOL><INDENT>wd_item.write(login=login, edit_summary=edit_summary)<EOL><DEDENT>wdi_core.WDItemEngine.log("<STR_LIT>", format_msg(record_id, record_prop, wd_item.wd_item_id, msg) + "<STR_LIT:;>" + str(<EOL>wd_item.lastrevid))<EOL><DEDENT>except wdi_core.WDApiError as e:<EOL><INDENT>print(e)<EOL>wdi_core.WDItemEngine.log("<STR_LIT>",<EOL>format_msg(record_id, record_prop, wd_item.wd_item_id, json.dumps(e.wd_error_msg),<EOL>type(e)))<EOL>return e<EOL><DEDENT>except Exception as e:<EOL><INDENT>print(e)<EOL>wdi_core.WDItemEngine.log("<STR_LIT>", format_msg(record_id, record_prop, wd_item.wd_item_id, str(e), type(e)))<EOL>return e<EOL><DEDENT>return True<EOL>
Write a PBB_core item. Log if item was created, updated, or skipped. Catch and log all errors. :param wd_item: A wikidata item that will be written :type wd_item: PBB_Core.WDItemEngine :param record_id: An external identifier, to be used for logging :type record_id: str :param record_prop: Property of the external identifier :type record_prop: str :param login: PBB_core login instance :type login: PBB_login.WDLogin :param edit_summary: passed directly to wd_item.write :type edit_summary: str :param write: If `False`, do not actually perform the write. Action will be logged as if the write had occurred :type write: bool :return: True if write did not throw an exception, returns the exception otherwise
f10958:m2
def format_msg(external_id, external_id_prop, wdid, msg, msg_type=None, delimiter="<STR_LIT:;>"):
fmt = ('<STR_LIT:{}>' + delimiter) * <NUM_LIT:4> + '<STR_LIT:{}>' <EOL>d = {'<STR_LIT>': external_id,<EOL>'<STR_LIT>': external_id_prop,<EOL>'<STR_LIT>': wdid,<EOL>'<STR_LIT>': msg,<EOL>'<STR_LIT>': msg_type}<EOL>for k, v in d.items():<EOL><INDENT>if isinstance(v, str) and delimiter in v and '<STR_LIT:">' in v:<EOL><INDENT>v = v.replace('<STR_LIT:">', "<STR_LIT:'>")<EOL><DEDENT>if isinstance(v, str) and delimiter in v:<EOL><INDENT>d[k] = '<STR_LIT:">' + v + '<STR_LIT:">'<EOL><DEDENT><DEDENT>s = fmt.format(d['<STR_LIT>'], d['<STR_LIT>'], d['<STR_LIT>'], d['<STR_LIT>'], d['<STR_LIT>'])<EOL>return s<EOL>
Format message for logging :return: str
f10958:m3
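`format_msg` emits a delimiter-separated log line and quotes any field containing the delimiter. A de-tokenized sketch of that quoting rule (field names here are illustrative; embedded double quotes are downgraded to single quotes before wrapping):

```python
def format_msg(external_id, external_id_prop, wdid, msg, msg_type=None, delimiter=';'):
    # Four delimited fields plus a trailing message-type column.
    fmt = ('{}' + delimiter) * 4 + '{}'
    d = {'id': external_id, 'prop': external_id_prop, 'wdid': wdid,
         'msg': msg, 'type': msg_type}
    for k, v in d.items():
        # A field with both the delimiter and quotes would break CSV-style
        # parsing, so demote its double quotes first.
        if isinstance(v, str) and delimiter in v and '"' in v:
            v = v.replace('"', "'")
        if isinstance(v, str) and delimiter in v:
            d[k] = '"' + v + '"'
    return fmt.format(d['id'], d['prop'], d['wdid'], d['msg'], d['type'])
```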
def prop2qid(prop, value, endpoint='<STR_LIT>'):
arguments = '<STR_LIT>'.format(prop, value)<EOL>query = '<STR_LIT>'.format(arguments)<EOL>results = wdi_core.WDItemEngine.execute_sparql_query(query, endpoint=endpoint)<EOL>result = results['<STR_LIT>']['<STR_LIT>']<EOL>if len(result) == <NUM_LIT:0>:<EOL><INDENT>return None<EOL><DEDENT>elif len(result) > <NUM_LIT:1>:<EOL><INDENT>raise ValueError("<STR_LIT>".format(prop, value, result))<EOL><DEDENT>else:<EOL><INDENT>return result[<NUM_LIT:0>]['<STR_LIT>']['<STR_LIT:value>'].split("<STR_LIT:/>")[-<NUM_LIT:1>]<EOL><DEDENT>
Lookup a wikidata item ID from a property and string value. For example, get the item QID for the item with the entrez gene id (P351): "899959" >>> prop2qid('P351','899959') :param prop: property :type prop: str :param value: value of property :type value: str :return: wdid as string or None
f10958:m4
def id_mapper(prop, filters=None, raise_on_duplicate=False, return_as_set=False, prefer_exact_match=False,<EOL>endpoint='<STR_LIT>'):
query = "<STR_LIT>"<EOL>query += "<STR_LIT>".format(prop, prop)<EOL>query += "<STR_LIT>"<EOL>if filters:<EOL><INDENT>for f in filters:<EOL><INDENT>query += "<STR_LIT>".format(f[<NUM_LIT:0>], f[<NUM_LIT:1>])<EOL><DEDENT><DEDENT>query = query + "<STR_LIT:}>"<EOL>results = wdi_core.WDItemEngine.execute_sparql_query(query, endpoint=endpoint)['<STR_LIT>']['<STR_LIT>']<EOL>results = [{k: v['<STR_LIT:value>'] for k, v in x.items()} for x in results]<EOL>for r in results:<EOL><INDENT>r['<STR_LIT>'] = r['<STR_LIT>'].split('<STR_LIT:/>')[-<NUM_LIT:1>]<EOL>if '<STR_LIT>' in r:<EOL><INDENT>r['<STR_LIT>'] = r['<STR_LIT>'].split('<STR_LIT:/>')[-<NUM_LIT:1>]<EOL><DEDENT><DEDENT>if not results:<EOL><INDENT>return None<EOL><DEDENT>if prefer_exact_match:<EOL><INDENT>df = pd.DataFrame(results)<EOL>if '<STR_LIT>' not in df:<EOL><INDENT>df['<STR_LIT>'] = '<STR_LIT>'<EOL><DEDENT>df.mrt = df.mrt.fillna('<STR_LIT>')<EOL>df['<STR_LIT>'] = True<EOL>df.sort_values("<STR_LIT>", inplace=True)<EOL>dupe_df = df[df.duplicated(subset=["<STR_LIT>"], keep=False)]<EOL>for item, subdf in dupe_df.groupby("<STR_LIT>"):<EOL><INDENT>if sum(subdf.mrt == RELATIONS['<STR_LIT>']) == <NUM_LIT:1>:<EOL><INDENT>df.loc[(df.item == item) & (df.mrt != RELATIONS['<STR_LIT>']), '<STR_LIT>'] = False<EOL><DEDENT><DEDENT>df.sort_values("<STR_LIT:id>", inplace=True)<EOL>dupe_df = df[df.duplicated(subset=["<STR_LIT:id>"], keep=False)]<EOL>for ext_id, subdf in dupe_df.groupby("<STR_LIT:id>"):<EOL><INDENT>if sum(subdf.mrt == RELATIONS['<STR_LIT>']) == <NUM_LIT:1>:<EOL><INDENT>df.loc[(df.id == ext_id) & (df.mrt != RELATIONS['<STR_LIT>']), '<STR_LIT>'] = False<EOL><DEDENT><DEDENT>df = df[df.keep]<EOL>results = df.to_dict("<STR_LIT>")<EOL><DEDENT>id_qid = defaultdict(set)<EOL>for r in results:<EOL><INDENT>id_qid[r['<STR_LIT:id>']].add(r['<STR_LIT>'])<EOL><DEDENT>dupe = {k: v for k, v in id_qid.items() if len(v) > <NUM_LIT:1>}<EOL>if raise_on_duplicate and dupe:<EOL><INDENT>raise ValueError("<STR_LIT>".format(dupe))<EOL><DEDENT>if return_as_set:<EOL><INDENT>return dict(id_qid)<EOL><DEDENT>else:<EOL><INDENT>return {x['<STR_LIT:id>']: x['<STR_LIT>'] for x in results}<EOL><DEDENT>
Get all wikidata ID <-> prop <-> value mappings Example: id_mapper("P352") -> { 'A0KH68': 'Q23429083', 'Q5ZWJ4': 'Q22334494', 'Q53WF2': 'Q21766762', .... } Optional filters can filter query results. Example (get all uniprot to wdid, where taxon is human): id_mapper("P352",(("P703", "Q15978631"),)) :param prop: wikidata property :type prop: str :param filters: list of tuples, where the first item is a property, second is a value :param raise_on_duplicate: If an ID is found on more than one wikidata item, what action to take? This is equivalent to the Distinct values constraint. e.g.: http://tinyurl.com/ztpncyb Note that a wikidata item can have more than one ID. This is not checked for. True: raise ValueError. False: only one of the values is kept if there are duplicates (unless return_as_set is True) :type raise_on_duplicate: bool :param return_as_set: If True, all values in the returned dict will be a set of strings :type return_as_set: bool :param prefer_exact_match: If True, the mapping relation type qualifier will be queried. If an ID mapping has multiple values, the ones marked as an 'exactMatch' will be returned while the others are discarded. If none have an exactMatch qualifier, all will be returned. If multiple have 'exactMatch', they will not be discarded. https://www.wikidata.org/wiki/Property:P4390 :type prefer_exact_match: bool If `raise_on_duplicate` is False and `return_as_set` is True, the following can be returned: { 'A0KH68': {'Q23429083'}, 'B023F44': {'Q237623', 'Q839742'} } :return: dict
f10958:m5
def get_values(pid, values, endpoint='<STR_LIT>'):
chunks = chunked(values, <NUM_LIT:100>)<EOL>d = dict()<EOL>for chunk in tqdm(chunks, total=round(len(values) / <NUM_LIT:100>)):<EOL><INDENT>value_quotes = '<STR_LIT:">' + '<STR_LIT>'.join(map(str, chunk)) + '<STR_LIT:">'<EOL>query = """<STR_LIT>""".replace("<STR_LIT>", value_quotes).replace("<STR_LIT>", pid)<EOL>results = wdi_core.WDItemEngine.execute_sparql_query(query, endpoint=endpoint)['<STR_LIT>']['<STR_LIT>']<EOL>dl = [{k: v['<STR_LIT:value>'] for k, v in item.items()} for item in results]<EOL>d.update({x['<STR_LIT:x>']: x['<STR_LIT>'].replace("<STR_LIT>", "<STR_LIT>") for x in dl})<EOL><DEDENT>return d<EOL>
This is a basic version of id_mapper, but restrict to values in `values`. Missing IDs are ignored :param pid: PID :param values: list of strings :param endpoint: sparql endpoint url :return: Example: Get the QIDs for the items with these PMIDs: get_values("P698", ["9719382", "9729004", "16384941"]) -> {'16384941': 'Q24642869', '9719382': 'Q33681179'}
f10958:m6
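`get_values` batches IDs into chunks of 100 and interpolates each chunk into a SPARQL `VALUES` clause. A sketch of those two steps, with a minimal stand-in for the `chunked` helper (the query shape and variable names are assumptions, not the tokenized original):

```python
def chunked(iterable, n):
    # Minimal stand-in for more_itertools.chunked: split into lists of n.
    items = list(iterable)
    return [items[i:i + n] for i in range(0, len(items), n)]

def build_values_clause(pid, chunk):
    # Quote each ID and join with spaces, SPARQL VALUES style.
    quoted = '"' + '" "'.join(map(str, chunk)) + '"'
    return 'VALUES ?id {{ {} }} . ?item wdt:{} ?id .'.format(quoted, pid)
```

Each clause restricts the query to that batch; missing IDs simply produce no rows, which is why the function silently ignores them.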
def set_mrt(self, s, mrt: str):
valid_mrts_abv = self.ABV_MRT.keys()<EOL>valid_mrts_uri = self.ABV_MRT.values()<EOL>if mrt in valid_mrts_abv:<EOL><INDENT>mrt_uri = self.ABV_MRT[mrt]<EOL><DEDENT>elif mrt in valid_mrts_uri:<EOL><INDENT>mrt_uri = mrt<EOL><DEDENT>else:<EOL><INDENT>raise ValueError("<STR_LIT>".format(valid_mrts_abv, mrt))<EOL><DEDENT>mrt_qid = self.mrt_qids[mrt_uri]<EOL>q = wdi_core.WDItemID(mrt_qid, self.mrt_pid, is_qualifier=True)<EOL>s.qualifiers.append(q)<EOL>return s<EOL>
accepts a statement and adds a qualifier setting the mrt; modifies s in place :param s: a WDBaseDataType statement :param mrt: one of {'close', 'broad', 'exact', 'related', 'narrow'} :return: s
f10959:c0:m2
def update_retrieved_if_new_multiple_refs(olditem, newitem, days=<NUM_LIT>, retrieved_pid='<STR_LIT>'):
def is_equal_not_retrieved(oldref, newref):<EOL><INDENT>"""<STR_LIT>"""<EOL>if len(oldref) != len(newref):<EOL><INDENT>return False<EOL><DEDENT>oldref_minus_retrieved = [x for x in oldref if x.get_prop_nr() != retrieved_pid]<EOL>newref_minus_retrieved = [x for x in newref if x.get_prop_nr() != retrieved_pid]<EOL>if not all(x in oldref_minus_retrieved for x in newref_minus_retrieved):<EOL><INDENT>return False<EOL><DEDENT>oldref_retrieved = [x for x in oldref if x.get_prop_nr() == retrieved_pid]<EOL>newref_retrieved = [x for x in newref if x.get_prop_nr() == retrieved_pid]<EOL>if (len(newref_retrieved) != len(oldref_retrieved)):<EOL><INDENT>return False<EOL><DEDENT>return True<EOL><DEDENT>def ref_overwrite(oldref, newref, days):<EOL><INDENT>"""<STR_LIT>"""<EOL>if len(oldref) != len(newref):<EOL><INDENT>return True<EOL><DEDENT>oldref_minus_retrieved = [x for x in oldref if x.get_prop_nr() != retrieved_pid]<EOL>newref_minus_retrieved = [x for x in newref if x.get_prop_nr() != retrieved_pid]<EOL>if not all(x in oldref_minus_retrieved for x in newref_minus_retrieved):<EOL><INDENT>return True<EOL><DEDENT>oldref_retrieved = [x for x in oldref if x.get_prop_nr() == retrieved_pid]<EOL>newref_retrieved = [x for x in newref if x.get_prop_nr() == retrieved_pid]<EOL>if (len(newref_retrieved) != len(oldref_retrieved)) or not (<EOL>len(newref_retrieved) == len(oldref_retrieved) == <NUM_LIT:1>):<EOL><INDENT>return True<EOL><DEDENT>datefmt = '<STR_LIT>'<EOL>retold = list([datetime.strptime(r.get_value()[<NUM_LIT:0>], datefmt) for r in oldref if r.get_prop_nr() == retrieved_pid])[<NUM_LIT:0>]<EOL>retnew = list([datetime.strptime(r.get_value()[<NUM_LIT:0>], datefmt) for r in newref if r.get_prop_nr() == retrieved_pid])[<NUM_LIT:0>]<EOL>return (retnew - retold).days >= days<EOL><DEDENT>newrefs = newitem.references<EOL>oldrefs = olditem.references<EOL>found_mate = [False] * len(newrefs)<EOL>for new_n, newref in enumerate(newrefs):<EOL><INDENT>for old_n, oldref in enumerate(oldrefs):<EOL><INDENT>if is_equal_not_retrieved(oldref, newref):<EOL><INDENT>found_mate[new_n] = True<EOL>if ref_overwrite(oldref, newref, days):<EOL><INDENT>oldrefs[old_n] = newref<EOL><DEDENT><DEDENT><DEDENT><DEDENT>for f_idx, f in enumerate(found_mate):<EOL><INDENT>if not f:<EOL><INDENT>oldrefs.append(newrefs[f_idx])<EOL><DEDENT><DEDENT>
# modifies olditem in place # any ref that does not exactly match the new proposed reference (not including retrieved) is kept
f10964:m0
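The heart of `ref_overwrite` is a date comparison: the reference is only refreshed when the new 'retrieved' timestamp is at least `days` newer than the old one. A self-contained sketch of just that test (the `'+%Y-%m-%dT%H:%M:%SZ'` format string mirrors the Wikibase time convention but is an assumption here, since the original is tokenized):

```python
from datetime import datetime

def retrieved_overwrite(old_retrieved, new_retrieved, days=180):
    # Wikibase time strings carry a leading '+', e.g. '+2020-01-01T00:00:00Z'.
    datefmt = '+%Y-%m-%dT%H:%M:%SZ'
    retold = datetime.strptime(old_retrieved, datefmt)
    retnew = datetime.strptime(new_retrieved, datefmt)
    # Only overwrite when the new timestamp is sufficiently newer.
    return (retnew - retold).days >= days
```

This throttling avoids churning item history with edits whose only change is a slightly newer retrieved date.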
def update_retrieved_if_new(olditem, newitem, days=<NUM_LIT>, retrieved_pid='<STR_LIT>'):
def ref_overwrite(oldref, newref, days):<EOL><INDENT>"""<STR_LIT>"""<EOL>if len(oldref) != len(newref):<EOL><INDENT>return True<EOL><DEDENT>oldref_minus_retrieved = [x for x in oldref if x.get_prop_nr() != retrieved_pid]<EOL>newref_minus_retrieved = [x for x in newref if x.get_prop_nr() != retrieved_pid]<EOL>if not all(x in oldref_minus_retrieved for x in newref_minus_retrieved):<EOL><INDENT>return True<EOL><DEDENT>oldref_retrieved = [x for x in oldref if x.get_prop_nr() == retrieved_pid]<EOL>newref_retrieved = [x for x in newref if x.get_prop_nr() == retrieved_pid]<EOL>if (len(newref_retrieved) != len(oldref_retrieved)) or not (<EOL>len(newref_retrieved) == len(oldref_retrieved) == <NUM_LIT:1>):<EOL><INDENT>return True<EOL><DEDENT>datefmt = '<STR_LIT>'<EOL>retold = list([datetime.strptime(r.get_value()[<NUM_LIT:0>], datefmt) for r in oldref if r.get_prop_nr() == retrieved_pid])[<NUM_LIT:0>]<EOL>retnew = list([datetime.strptime(r.get_value()[<NUM_LIT:0>], datefmt) for r in newref if r.get_prop_nr() == retrieved_pid])[<NUM_LIT:0>]<EOL>return (retnew - retold).days >= days<EOL><DEDENT>newrefs = newitem.references<EOL>oldrefs = olditem.references<EOL>if not (len(newrefs) == len(oldrefs) == <NUM_LIT:1>):<EOL><INDENT>olditem.references = copy.deepcopy(newitem.references)<EOL>return None<EOL><DEDENT>overwrite = ref_overwrite(oldrefs[<NUM_LIT:0>], newrefs[<NUM_LIT:0>], days)<EOL>if overwrite:<EOL><INDENT>olditem.references = newrefs<EOL><DEDENT>else:<EOL><INDENT>pass<EOL><DEDENT>
# modifies olditem in place
f10967:m0
@wdi_backoff()<EOL><INDENT>def __init__(self, user=None, pwd=None, mediawiki_api_url='<STR_LIT>',<EOL>token_renew_period=<NUM_LIT>, use_clientlogin=False,<EOL>consumer_key=None, consumer_secret=None, callback_url='<STR_LIT>', user_agent=None):<DEDENT>
self.base_url = mediawiki_api_url<EOL>print(self.base_url)<EOL>self.s = requests.Session()<EOL>self.edit_token = '<STR_LIT>'<EOL>self.instantiation_time = time.time()<EOL>self.token_renew_period = token_renew_period<EOL>self.mw_url = "<STR_LIT>"<EOL>self.consumer_key = consumer_key<EOL>self.consumer_secret = consumer_secret<EOL>self.response_qs = None<EOL>self.callback_url = callback_url<EOL>if user_agent:<EOL><INDENT>self.user_agent = user_agent<EOL><DEDENT>else:<EOL><INDENT>if user and user.lower() not in config['<STR_LIT>'].lower():<EOL><INDENT>config['<STR_LIT>'] += "<STR_LIT>".format(user)<EOL><DEDENT>self.user_agent = config['<STR_LIT>']<EOL><DEDENT>self.s.headers.update({<EOL>'<STR_LIT>': self.user_agent<EOL>})<EOL>if self.consumer_key and self.consumer_secret:<EOL><INDENT>self.consumer_token = ConsumerToken(self.consumer_key, self.consumer_secret)<EOL>self.handshaker = Handshaker(self.mw_url, self.consumer_token, callback=self.callback_url,<EOL>user_agent=self.user_agent)<EOL>self.redirect, self.request_token = self.handshaker.initiate(callback=self.callback_url)<EOL><DEDENT>elif use_clientlogin:<EOL><INDENT>params = {<EOL>'<STR_LIT:action>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>'<EOL>}<EOL>self.s.get(self.base_url, params=params)<EOL>params2 = {<EOL>'<STR_LIT:action>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT:type>': '<STR_LIT>'<EOL>}<EOL>login_token = self.s.get(self.base_url, params=params2).json()['<STR_LIT>']['<STR_LIT>']['<STR_LIT>']<EOL>data = {<EOL>'<STR_LIT:action>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT:username>': user,<EOL>'<STR_LIT:password>': pwd,<EOL>'<STR_LIT>': login_token,<EOL>'<STR_LIT>': '<STR_LIT>'<EOL>}<EOL>login_result = self.s.post(self.base_url, data=data).json()<EOL>print(login_result)<EOL>if login_result['<STR_LIT>']['<STR_LIT:status>'] == '<STR_LIT>':<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>self.generate_edit_credentials()<EOL><DEDENT>else:<EOL><INDENT>params = {<EOL>'<STR_LIT:action>': '<STR_LIT>',<EOL>'<STR_LIT>': user,<EOL>'<STR_LIT>': pwd,<EOL>'<STR_LIT>': '<STR_LIT>'<EOL>}<EOL>login_token = self.s.post(self.base_url, data=params).json()['<STR_LIT>']['<STR_LIT>']<EOL>params.update({'<STR_LIT>': login_token})<EOL>r = self.s.post(self.base_url, data=params).json()<EOL>if r['<STR_LIT>']['<STR_LIT:result>'] != '<STR_LIT>':<EOL><INDENT>print('<STR_LIT>', r['<STR_LIT>']['<STR_LIT>'])<EOL>raise ValueError('<STR_LIT>')<EOL><DEDENT>else:<EOL><INDENT>print('<STR_LIT>', r['<STR_LIT>']['<STR_LIT>'])<EOL><DEDENT>self.generate_edit_credentials()<EOL><DEDENT>
This class handles several types of login procedures. Either use user and pwd authentication or OAuth. Wikidata clientlogin can also be used. If using one method, do NOT pass parameters for another method. :param user: the username which should be used for the login :param pwd: the password which should be used for the login :param token_renew_period: Seconds after which a new token should be requested from the Wikidata server :type token_renew_period: int :param use_clientlogin: use authmanager based login method instead of standard login. For 3rd party data consumer, e.g. web clients :type use_clientlogin: bool :param consumer_key: The consumer key for OAuth :type consumer_key: str :param consumer_secret: The consumer secret for OAuth :type consumer_secret: str :param callback_url: URL which should be used as the callback URL :type callback_url: str :param user_agent: UA string to use for API requests. :type user_agent: str :return: None
f10968:c0:m0
def generate_edit_credentials(self):
params = {<EOL>'<STR_LIT:action>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': '<STR_LIT>'<EOL>}<EOL>response = self.s.get(self.base_url, params=params)<EOL>self.edit_token = response.json()['<STR_LIT>']['<STR_LIT>']['<STR_LIT>']<EOL>return self.s.cookies<EOL>
request an edit token and update the cookie_jar in order to add the session cookie :return: Returns a json with all relevant cookies, aka cookie jar
f10968:c0:m1
def get_edit_cookie(self):
if (time.time() - self.instantiation_time) > self.token_renew_period:<EOL><INDENT>self.generate_edit_credentials()<EOL>self.instantiation_time = time.time()<EOL><DEDENT>return self.s.cookies<EOL>
Can be called in order to retrieve the cookies from an instance of WDLogin :return: Returns a json with all relevant cookies, aka cookie jar
f10968:c0:m2
def get_edit_token(self):
if not self.edit_token or (time.time() - self.instantiation_time) > self.token_renew_period:<EOL><INDENT>self.generate_edit_credentials()<EOL>self.instantiation_time = time.time()<EOL><DEDENT>return self.edit_token<EOL>
Can be called in order to retrieve the edit token from an instance of WDLogin :return: returns the edit token
f10968:c0:m3
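`get_edit_token` and `get_edit_cookie` share a renew-on-expiry pattern: a cached credential is reused until `token_renew_period` seconds have passed, then regenerated. A generic sketch of that pattern, decoupled from the MediaWiki API (the `fetch` callable stands in for `generate_edit_credentials` and is an assumption of this sketch):

```python
import time

class TokenCache:
    def __init__(self, fetch, token_renew_period=1800):
        self.fetch = fetch                      # callable returning a fresh token
        self.token_renew_period = token_renew_period
        self.edit_token = None
        self.instantiation_time = time.time()

    def get_edit_token(self):
        age = time.time() - self.instantiation_time
        # Refresh when no token exists yet or the cached one is stale.
        if not self.edit_token or age > self.token_renew_period:
            self.edit_token = self.fetch()
            self.instantiation_time = time.time()
        return self.edit_token

calls = []
cache = TokenCache(lambda: calls.append(1) or 'tok%d' % len(calls))
```

Repeated calls within the renewal window hit the cache and never re-contact the server.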
def get_session(self):
return self.s<EOL>
returns the requests session object used for the login. :return: Object of type requests.Session()
f10968:c0:m4
def continue_oauth(self, oauth_callback_data=None):
self.response_qs = oauth_callback_data<EOL>if not self.response_qs:<EOL><INDENT>webbrowser.open(self.redirect)<EOL>self.response_qs = input("<STR_LIT>")<EOL><DEDENT>response_qs = self.response_qs.split(b'<STR_LIT:?>')[-<NUM_LIT:1>]<EOL>access_token = self.handshaker.complete(self.request_token, response_qs)<EOL>auth1 = OAuth1(self.consumer_token.key,<EOL>client_secret=self.consumer_token.secret,<EOL>resource_owner_key=access_token.key,<EOL>resource_owner_secret=access_token.secret)<EOL>self.s.auth = auth1<EOL>self.generate_edit_credentials()<EOL>
Continuation of OAuth procedure. Method must be explicitly called in order to complete OAuth. This allows external entities, e.g. websites, to provide tokens through callback URLs directly. :param oauth_callback_data: The callback URL received to a Web app :type oauth_callback_data: bytes :return:
f10968:c0:m5
@register.tag('<STR_LIT>')<EOL>def smart_if(parser, token):
bits = token.split_contents()[<NUM_LIT:1>:]<EOL>var = TemplateIfParser(parser, bits).parse()<EOL>nodelist_true = parser.parse(('<STR_LIT>', '<STR_LIT>'))<EOL>token = parser.next_token()<EOL>if token.contents == '<STR_LIT>':<EOL><INDENT>nodelist_false = parser.parse(('<STR_LIT>',))<EOL>parser.delete_first_token()<EOL><DEDENT>else:<EOL><INDENT>nodelist_false = None<EOL><DEDENT>return SmartIfNode(var, nodelist_true, nodelist_false)<EOL>
A smarter {% if %} tag for django templates. While retaining current Django functionality, it also handles equality, greater than and less than operators. Some common case examples:: {% if articles|length >= 5 %}...{% endif %} {% if "ifnotequal tag" != "beautiful" %}...{% endif %} Arguments and operators _must_ have a space between them, so ``{% if 1>2 %}`` is not a valid smart if tag. All supported operators are: ``or``, ``and``, ``in``, ``=`` (or ``==``), ``!=``, ``>``, ``>=``, ``<`` and ``<=``.
f10973:m0
def assertCalc(self, calc, context=None):
context = context or {}<EOL>self.assertTrue(calc.resolve(context))<EOL>calc.negate = not calc.negate<EOL>self.assertFalse(calc.resolve(context))<EOL>
Test a calculation is True, also checking the inverse "negate" case.
f10973:c8:m1
def assertCalcFalse(self, calc, context=None):
context = context or {}<EOL>self.assertFalse(calc.resolve(context))<EOL>calc.negate = not calc.negate<EOL>self.assertTrue(calc.resolve(context))<EOL>
Test a calculation is False, also checking the inverse "negate" case.
f10973:c8:m2
def fburl(parser, token):
bits = token.contents.split('<STR_LIT:U+0020>')<EOL>if len(bits) < <NUM_LIT:2>:<EOL><INDENT>raise template.TemplateSyntaxError("<STR_LIT>"<EOL>"<STR_LIT>" % bits[<NUM_LIT:0>])<EOL><DEDENT>viewname = bits[<NUM_LIT:1>]<EOL>args = []<EOL>kwargs = {}<EOL>asvar = None<EOL>if len(bits) > <NUM_LIT:2>:<EOL><INDENT>bits = iter(bits[<NUM_LIT:2>:])<EOL>for bit in bits:<EOL><INDENT>if bit == '<STR_LIT>':<EOL><INDENT>asvar = next(bits)<EOL>break<EOL><DEDENT>else:<EOL><INDENT>for arg in bit.split("<STR_LIT:U+002C>"):<EOL><INDENT>if '<STR_LIT:=>' in arg:<EOL><INDENT>k, v = arg.split('<STR_LIT:=>', <NUM_LIT:1>)<EOL>k = k.strip()<EOL>kwargs[k] = parser.compile_filter(v)<EOL><DEDENT>elif arg:<EOL><INDENT>args.append(parser.compile_filter(arg))<EOL><DEDENT><DEDENT><DEDENT><DEDENT><DEDENT>return URLNode(viewname, args, kwargs, asvar)<EOL>
Returns an absolute URL matching given view with its parameters. This is a way to define links that aren't tied to a particular URL configuration:: {% url path.to.some_view arg1,arg2,name1=value1 %} The first argument is a path to a view. It can be an absolute python path or just ``app_name.view_name`` without the project name if the view is located inside the project. Other arguments are comma-separated values that will be filled in place of positional and keyword arguments in the URL. All arguments for the URL should be present. For example if you have a view ``app_name.client`` taking client's id and the corresponding line in a URLconf looks like this:: ('^client/(\d+)/$', 'app_name.client') and this app's URLconf is included into the project's URLconf under some path:: ('^clients/', include('project_name.app_name.urls')) then in a template you can create a link for a certain client like this:: {% url app_name.client client.id %} The URL will look like ``/clients/client/123/``.
f10974:m0
@register.filter<EOL>def partition(thelist, n):
try:<EOL><INDENT>n = int(n)<EOL>thelist = list(thelist)<EOL><DEDENT>except (ValueError, TypeError):<EOL><INDENT>return [thelist]<EOL><DEDENT>p = len(thelist) / n<EOL>return [thelist[p*i:p*(i+<NUM_LIT:1>)] for i in range(n - <NUM_LIT:1>)] + [thelist[p*(i+<NUM_LIT:1>):]]<EOL>
Break a list into ``n`` pieces. The last list may be larger than the rest if the list doesn't break cleanly. That is:: >>> l = range(10) >>> partition(l, 2) [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]] >>> partition(l, 3) [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]] >>> partition(l, 4) [[0, 1], [2, 3], [4, 5], [6, 7, 8, 9]] >>> partition(l, 5) [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
f10975:m0
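The body above depends on Python 2's integer division (`/`) and on the list-comprehension variable `i` leaking into the trailing slice, neither of which works in Python 3. A self-contained Python 3 sketch of the same behavior (matching the doctests, detached from Django's filter registration) might look like:

```python
def partition(thelist, n):
    """Break a list into ``n`` pieces; the last piece absorbs any remainder."""
    try:
        n = int(n)
        thelist = list(thelist)
    except (ValueError, TypeError):
        return [thelist]
    p = len(thelist) // n  # floor division: size of each full piece
    # n - 1 equal pieces, then everything left over in the final piece
    return ([thelist[p * i:p * (i + 1)] for i in range(n - 1)]
            + [thelist[p * (n - 1):]])
```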
@register.filter<EOL>def partition_horizontal(thelist, n):
try:<EOL><INDENT>n = int(n)<EOL>thelist = list(thelist)<EOL><DEDENT>except (ValueError, TypeError):<EOL><INDENT>return [thelist]<EOL><DEDENT>newlists = [list() for i in range(int(ceil(len(thelist) / float(n))))]<EOL>for i, val in enumerate(thelist):<EOL><INDENT>newlists[i/n].append(val)<EOL><DEDENT>return newlists<EOL>
Break a list into ``n`` pieces, but "horizontally." That is, ``partition_horizontal(range(10), 3)`` gives:: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]] Clear as mud?
f10975:m1
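A standalone Python 3 sketch of the "horizontal" variant; the source's Python 2 `i/n` row index becomes explicit floor division:

```python
from math import ceil

def partition_horizontal(thelist, n):
    """Break a list into rows of ``n`` items each, filled left to right."""
    try:
        n = int(n)
        thelist = list(thelist)
    except (ValueError, TypeError):
        return [thelist]
    # one row per group of n items; the last row may be short
    newlists = [[] for _ in range(ceil(len(thelist) / n))]
    for i, val in enumerate(thelist):
        newlists[i // n].append(val)  # row index via floor division
    return newlists
```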
@register.filter<EOL>def partition_horizontal_twice(thelist, numbers):
n, n2 = numbers.split('<STR_LIT:U+002C>')<EOL>try:<EOL><INDENT>n = int(n)<EOL>n2 = int(n2)<EOL>thelist = list(thelist)<EOL><DEDENT>except (ValueError, TypeError):<EOL><INDENT>return [thelist]<EOL><DEDENT>newlists = []<EOL>while thelist:<EOL><INDENT>newlists.append(thelist[:n])<EOL>thelist = thelist[n:]<EOL>newlists.append(thelist[:n2])<EOL>thelist = thelist[n2:]<EOL><DEDENT>return newlists<EOL>
numbers is split on a comma into n and n2. Break a list into pieces, each piece alternating between n and n2 items long. ``partition_horizontal_twice(range(14), "3,4")`` gives:: [[0, 1, 2], [3, 4, 5, 6], [7, 8, 9], [10, 11, 12, 13]] Clear as mud?
f10975:m2
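A self-contained sketch of the alternating split. One small tweak over the original loop: it guards the second slice so an exhausted list doesn't append a trailing empty piece.

```python
def partition_horizontal_twice(thelist, numbers):
    """Break a list into pieces alternating between n and n2 items long.

    `numbers` is a string like "3,4".
    """
    n, n2 = numbers.split(',')
    try:
        n, n2 = int(n), int(n2)
        thelist = list(thelist)
    except (ValueError, TypeError):
        return [thelist]
    newlists = []
    while thelist:
        newlists.append(thelist[:n])
        thelist = thelist[n:]
        if thelist:  # avoid appending a trailing empty n2 piece
            newlists.append(thelist[:n2])
            thelist = thelist[n2:]
    return newlists
```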
def custom_server_error(request, template_name='<STR_LIT>', admin_template_name='<STR_LIT>'):
trace = None<EOL>if request.user.is_authenticated() and (request.user.is_staff or request.user.is_superuser):<EOL><INDENT>try:<EOL><INDENT>import traceback, sys<EOL>trace = traceback.format_exception(*(sys.exc_info()))<EOL>if not request.user.is_superuser and trace:<EOL><INDENT>trace = trace[-<NUM_LIT:1>:]<EOL><DEDENT>trace = '<STR_LIT:\n>'.join(trace)<EOL><DEDENT>except:<EOL><INDENT>pass<EOL><DEDENT><DEDENT>if request.path.startswith('<STR_LIT>' % admin.site.name):<EOL><INDENT>template_name = admin_template_name<EOL><DEDENT>t = loader.get_template(template_name) <EOL>return http.HttpResponseServerError(t.render(Context({'<STR_LIT>': trace})))<EOL>
500 error handler. Displays a full traceback for superusers and the first line of the traceback for staff members. Templates: `500.html` or `500A.html` (admin) Context: trace Holds the traceback information for debugging.
f10976:m0
def add(self, iterable):
item1 = item2 = MarkovChain.START<EOL>for item3 in iterable:<EOL><INDENT>self[(item1, item2)].add_side(item3)<EOL>item1 = item2<EOL>item2 = item3<EOL><DEDENT>self[(item1, item2)].add_side(MarkovChain.END)<EOL>
Insert an iterable (pattern) item into the markov chain. The order of the pattern will define more of the chain.
f10979:c1:m1
def random_output(self, max=<NUM_LIT:100>):
output = []<EOL>item1 = item2 = MarkovChain.START<EOL>for i in range(max-<NUM_LIT:3>):<EOL><INDENT>item3 = self[(item1, item2)].roll()<EOL>if item3 is MarkovChain.END:<EOL><INDENT>break<EOL><DEDENT>output.append(item3)<EOL>item1 = item2<EOL>item2 = item3<EOL><DEDENT>return output<EOL>
Generate a list of elements from the markov chain. The `max` value is in place in order to prevent excessive iteration.
f10979:c1:m2
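The two methods above rely on `MarkovChain.START`/`END` sentinels and on `add_side()`/`roll()` weighted-choice helpers that are not shown here. A minimal self-contained sketch of the same second-order chain, substituting plain successor lists and `random.choice` for the die helpers (those substitutions are assumptions, not the source's API):

```python
import random
from collections import defaultdict

START, END = object(), object()  # sentinel markers for chain boundaries

class MarkovChain(defaultdict):
    def __init__(self):
        # maps a (item1, item2) state to the list of observed next items
        super().__init__(list)

    def add(self, iterable):
        item1 = item2 = START
        for item3 in iterable:
            self[(item1, item2)].append(item3)
            item1, item2 = item2, item3
        self[(item1, item2)].append(END)

    def random_output(self, max=100):
        output = []
        item1 = item2 = START
        for _ in range(max - 3):  # cap iteration to prevent runaway walks
            item3 = random.choice(self[(item1, item2)])
            if item3 is END:
                break
            output.append(item3)
            item1, item2 = item2, item3
        return output

chain = MarkovChain()
chain.add(["the", "cat", "sat"])
output = chain.random_output()  # one training pattern -> deterministic walk
```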
def _invalidate_cache(self, instance):
cache.set(instance.cache_key, None, <NUM_LIT:5>)<EOL>
Explicitly set a None value instead of just deleting so we don't have any race conditions where: Thread 1 -> Cache miss, get object from DB Thread 2 -> Object saved, deleted from cache Thread 1 -> Store (stale) object fetched from DB in cache Five seconds should be more than enough time to prevent this from happening for a web app.
f10980:c0:m3
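The sentinel pattern described above can be sketched without Django. `SimpleCache` below is a hypothetical stand-in for Django's cache backend, keeping only `set(key, value, timeout)` and `get(key)`; the point is that for five seconds after invalidation, readers see an explicit miss rather than a stale row:

```python
import time

class SimpleCache:
    """Illustrative in-memory cache with per-key expiry (not Django's API)."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, timeout):
        self._store[key] = (value, time.time() + timeout)

    def get(self, key, default=None):
        value, expires = self._store.get(key, (default, 0))
        return value if time.time() < expires else default

cache = SimpleCache()

def invalidate(cache_key):
    # Store None for 5 seconds instead of deleting the key, so a racing
    # reader that fetched a stale row from the DB still sees a cache miss.
    cache.set(cache_key, None, 5)

cache.set("user:1", {"name": "ada"}, 60)
invalidate("user:1")
```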
def get(self, *args, **kwargs):
if self.query.where:<EOL><INDENT>return super(CachingQuerySet, self).get(*args, **kwargs)<EOL><DEDENT>if len(kwargs) == <NUM_LIT:1>:<EOL><INDENT>k = kwargs.keys()[<NUM_LIT:0>]<EOL>if k in ('<STR_LIT>', '<STR_LIT>', '<STR_LIT:%s>' % self.model._meta.pk.attname, <EOL>'<STR_LIT>' % self.model._meta.pk.attname):<EOL><INDENT>obj = cache.get(self.model._cache_key(pk=kwargs.values()[<NUM_LIT:0>]))<EOL>if obj is not None:<EOL><INDENT>obj.from_cache = True<EOL>return obj<EOL><DEDENT><DEDENT><DEDENT>return super(CachingQuerySet, self).get(*args, **kwargs)<EOL>
Checks the cache to see if there's a cached entry for this pk. If not, fetches using super then stores the result in cache. Most of the logic here was gathered from a careful reading of ``django.db.models.sql.query.add_filter``
f10980:c1:m1
def generate_requirements(output_path=None):
from django.conf import settings<EOL>reqs = set()<EOL>for app in settings.INSTALLED_APPS:<EOL><INDENT>if app in list(mapping.keys()):<EOL><INDENT>reqs |= set(mapping[app])<EOL><DEDENT><DEDENT>if output_path is None:<EOL><INDENT>print("<STR_LIT>")<EOL>for item in reqs:<EOL><INDENT>print(item)<EOL><DEDENT><DEDENT>else:<EOL><INDENT>try:<EOL><INDENT>out_file = open(output_path, '<STR_LIT:w>')<EOL>out_file.write("<STR_LIT>")<EOL>for item in reqs:<EOL><INDENT>out_file.write("<STR_LIT>" % item)<EOL><DEDENT><DEDENT>finally:<EOL><INDENT>out_file.close()<EOL><DEDENT><DEDENT>
Loop through the INSTALLED_APPS and create a set of requirements for pip. If ``output_path`` is ``None``, write to standard out; otherwise write to the path.
f10986:m0
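The core of the function above is a set union over an app-to-requirements mapping. A standalone sketch, with a hypothetical `MAPPING` (the real mapping's contents are not shown in the source):

```python
# Hypothetical mapping from installed app label -> pip requirement lines
MAPPING = {
    "django.contrib.admin": ["Django>=1.2"],
    "registration": ["django-registration"],
}

def gather_requirements(installed_apps):
    reqs = set()
    for app in installed_apps:
        if app in MAPPING:
            reqs |= set(MAPPING[app])  # union in this app's requirements
    return reqs
```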
def clean_email(self):
if User.objects.filter(email__iexact=self.cleaned_data['<STR_LIT:email>']):<EOL><INDENT>raise forms.ValidationError(_(<EOL>"<STR_LIT>"))<EOL><DEDENT>return self.cleaned_data['<STR_LIT:email>']<EOL>
Validate that the supplied email address is unique for the site.
f10989:c0:m0
def handle_expired_accounts():
ACTIVATED = RegistrationProfile.ACTIVATED<EOL>expiration_date = datetime.timedelta(days=settings.ACCOUNT_ACTIVATION_DAYS)<EOL>to_delete = []<EOL>print("<STR_LIT>" % str(RegistrationProfile.objects.all().count()))<EOL>for profile in RegistrationProfile.objects.all():<EOL><INDENT>print("<STR_LIT>" % profile.user)<EOL>if profile.activation_key == ACTIVATED:<EOL><INDENT>print("<STR_LIT>")<EOL>to_delete.append(profile.pk)<EOL>continue<EOL><DEDENT>if profile.user.is_active and profile.user.date_joined + expiration_date <= datetime.datetime.now():<EOL><INDENT>print("<STR_LIT>")<EOL>user = profile.user<EOL>user.is_active = False<EOL>site = Site.objects.get_current()<EOL>ctx_dict = { '<STR_LIT>': site, <EOL>'<STR_LIT>': profile.activation_key}<EOL>subject = render_to_string(<EOL>'<STR_LIT>',<EOL>ctx_dict)<EOL>subject = '<STR_LIT>'.join(subject.splitlines())<EOL>message = render_to_string(<EOL>'<STR_LIT>',<EOL>ctx_dict)<EOL>user.email_user(subject, message, settings.DEFAULT_FROM_EMAIL)<EOL>user.save()<EOL><DEDENT><DEDENT>print("<STR_LIT>" % str(len(to_delete)))<EOL>RegistrationProfile.objects.filter(pk__in=to_delete).delete()<EOL>
Check for expired accounts.
f10990:m0
def activate(self, request, activation_key):
if SHA1_RE.search(activation_key):<EOL><INDENT>try:<EOL><INDENT>profile = RegistrationProfile.objects.get(activation_key=activation_key)<EOL><DEDENT>except RegistrationProfile.DoesNotExist:<EOL><INDENT>return False<EOL><DEDENT>user = profile.user<EOL>user.is_active = True<EOL>user.save()<EOL>profile.activation_key = RegistrationProfile.ACTIVATED<EOL>profile.save()<EOL>return user<EOL><DEDENT>return False<EOL>
Override default activation process. This will activate the user even if it has passed its expiration date.
f10990:c1:m0
def register(self, request, **kwargs):
if Site._meta.installed:<EOL><INDENT>site = Site.objects.get_current()<EOL><DEDENT>else:<EOL><INDENT>site = RequestSite(request)<EOL><DEDENT>email = kwargs['<STR_LIT:email>']<EOL>password = User.objects.make_random_password()<EOL>username = sha_constructor(str(email)).hexdigest()[:<NUM_LIT:30>]<EOL>incr = <NUM_LIT:0><EOL>while User.objects.filter(username=username).count() > <NUM_LIT:0>:<EOL><INDENT>incr += <NUM_LIT:1><EOL>username = sha_constructor(str(email + str(incr))).hexdigest()[:<NUM_LIT:30>]<EOL><DEDENT>new_user = User.objects.create_user(username, email, password)<EOL>new_user.save()<EOL>registration_profile = RegistrationProfile.objects.create_profile(<EOL>new_user)<EOL>auth_user = authenticate(username=username, password=password)<EOL>login(request, auth_user)<EOL>request.session.set_expiry(<NUM_LIT:0>)<EOL>if hasattr(settings, '<STR_LIT>') and getattr(settings, '<STR_LIT>'):<EOL><INDENT>app_label, model_name = settings.AUTH_PROFILE_MODULE.split('<STR_LIT:.>')<EOL>model = models.get_model(app_label, model_name)<EOL>try:<EOL><INDENT>profile = new_user.get_profile()<EOL><DEDENT>except model.DoesNotExist:<EOL><INDENT>profile = model(user=new_user)<EOL>profile.save() <EOL><DEDENT><DEDENT>self.send_activation_email(<EOL>new_user, registration_profile, password, site) <EOL>signals.user_registered.send(sender=self.__class__,<EOL>user=new_user,<EOL>request=request)<EOL>return new_user<EOL>
Create and immediately log in a new user. Only an email address is required to register; the username is generated automatically and a random password is generated and emailed to the user. Activation is still required to use the account after the specified number of days.
f10990:c1:m1
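The username-derivation loop in `register` hashes the email, truncates to 30 characters, and appends an incrementing suffix to the hash input on collision. A standalone sketch, with a `taken` set standing in for the `User.objects.filter(username=...)` database check (that substitution is an assumption):

```python
import hashlib

def generate_username(email, taken):
    """Derive a 30-char username from the email's SHA-1 hex digest,
    re-hashing with an incrementing suffix until it is unique."""
    username = hashlib.sha1(email.encode()).hexdigest()[:30]
    incr = 0
    while username in taken:
        incr += 1
        username = hashlib.sha1((email + str(incr)).encode()).hexdigest()[:30]
    return username

taken = set()
first = generate_username("someone@example.com", taken)
taken.add(first)
second = generate_username("someone@example.com", taken)
```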
def send_activation_email(self, user, profile, password, site):
ctx_dict = { '<STR_LIT:password>': password, <EOL>'<STR_LIT>': site, <EOL>'<STR_LIT>': profile.activation_key,<EOL>'<STR_LIT>': settings.ACCOUNT_ACTIVATION_DAYS}<EOL>subject = render_to_string(<EOL>'<STR_LIT>',<EOL>ctx_dict)<EOL>subject = '<STR_LIT>'.join(subject.splitlines())<EOL>message = render_to_string('<STR_LIT>',<EOL>ctx_dict)<EOL>try:<EOL><INDENT>user.email_user(subject, message, settings.DEFAULT_FROM_EMAIL)<EOL><DEDENT>except:<EOL><INDENT>pass<EOL><DEDENT>
Custom send-email method to supply the activation link and the newly generated password.
f10990:c1:m2
def registration_allowed(self, request):
return getattr(settings, '<STR_LIT>', True)<EOL>
Indicate whether account registration is currently permitted, based on the value of the setting ``REGISTRATION_OPEN``. This is determined as follows: * If ``REGISTRATION_OPEN`` is not specified in settings, or is set to ``True``, registration is permitted. * If ``REGISTRATION_OPEN`` is both specified and set to ``False``, registration is not permitted.
f10990:c1:m3
def post_registration_redirect(self, request, user):
next_url = "<STR_LIT>"<EOL>if "<STR_LIT>" in request.GET or "<STR_LIT>" in request.POST:<EOL><INDENT>next_url = request.GET.get("<STR_LIT>", None) or request.POST.get("<STR_LIT>", None) or "<STR_LIT:/>"<EOL><DEDENT>return (next_url, (), {})<EOL>
After registration, redirect to the home page or supplied "next" query string or hidden field value.
f10990:c1:m5
def get_url(self, url_or_dict):
if isinstance(url_or_dict, basestring):<EOL><INDENT>url_or_dict = {'<STR_LIT>': url_or_dict}<EOL><DEDENT>try:<EOL><INDENT>return reverse(**url_or_dict)<EOL><DEDENT>except NoReverseMatch:<EOL><INDENT>if MENU_DEBUG:<EOL><INDENT>print >>stderr,'<STR_LIT>' % url_or_dict<EOL><DEDENT><DEDENT>
Returns the reversed url given a string or dict and prints errors if MENU_DEBUG is enabled
f10995:c0:m0
def diff(a, b, segmenter=None):
a, b = list(a), list(b)<EOL>segmenter = segmenter or SEGMENTER<EOL>a_segments = segmenter.segment(a)<EOL>b_segments = segmenter.segment(b)<EOL>return diff_segments(a_segments, b_segments)<EOL>
Performs a diff comparison between two sequences of tokens (`a` and `b`) using `segmenter` to cluster and match :class:`deltas.MatchableSegment`. :Example: >>> from deltas import segment_matcher, text_split >>> >>> a = text_split.tokenize("This is some text. This is some other text.") >>> b = text_split.tokenize("This is some other text. This is some text.") >>> operations = segment_matcher.diff(a, b) >>> >>> for op in operations: ... print(op.name, repr(''.join(a[op.a1:op.a2])), ... repr(''.join(b[op.b1:op.b2]))) ... equal 'This is some other text.' 'This is some other text.' insert '' ' ' equal 'This is some text.' 'This is some text.' delete ' ' '' :Parameters: a : `list`(:class:`deltas.tokenizers.Token`) Initial sequence b : `list`(:class:`deltas.tokenizers.Token`) Changed sequence segmenter : :class:`deltas.Segmenter` A segmenter to use on the tokens. :Returns: An `iterable` of operations.
f11008:m0
def diff_segments(a_segments, b_segments):
<EOL>a_segment_tokens, b_segment_tokens = _cluster_matching_segments(a_segments,<EOL>b_segments)<EOL>clustered_ops = sequence_matcher.diff(a_segment_tokens, b_segment_tokens)<EOL>return (op for op in SegmentOperationsExpander(clustered_ops,<EOL>a_segment_tokens,<EOL>b_segment_tokens).expand())<EOL>
Performs a diff comparison between two pre-clustered :class:`deltas.Segment` trees. In most cases, segmentation takes 100X more time than actually performing the diff. :Parameters: a_segments : :class:`deltas.Segment` An initial sequence b_segments : :class:`deltas.Segment` A changed sequence :Returns: An `iterable` of operations.
f11008:m1
def process(texts, *args, **kwargs):
processor = SegmentMatcher.Processor(*args, **kwargs)<EOL>for text in texts:<EOL><INDENT>yield processor.process(text)<EOL><DEDENT>
Processes a single sequence of texts with a :class:`~deltas.SegmentMatcher`. :Parameters: texts : `iterable`(`str`) sequence of texts args : `tuple` passed to :class:`~deltas.SegmentMatcher`'s constructor kwargs : `dict` passed to :class:`~deltas.SegmentMatcher`'s constructor
f11008:m2
def _get_matchable_segments(segments):
for subsegment in segments:<EOL><INDENT>if isinstance(subsegment, Token):<EOL><INDENT>break <EOL><DEDENT>if isinstance(subsegment, Segment):<EOL><INDENT>if isinstance(subsegment, MatchableSegment):<EOL><INDENT>yield subsegment<EOL><DEDENT>for matchable_subsegment in _get_matchable_segments(subsegment):<EOL><INDENT>yield matchable_subsegment<EOL><DEDENT><DEDENT><DEDENT>
Performs a depth-first search of the segment tree to get all matchable segments.
f11008:m5
def processor(self, *args, **kwargs):
return self.Processor(self.tokenizer, self.segmenter, *args, **kwargs)<EOL>
Constructs and configures a processor to process versions of a text.
f11008:c0:m1
def processor(self):
raise NotImplementedError()<EOL>
Configures and returns a new :class:`~deltas.DiffEngine.Processor`
f11013:c0:m0
@classmethod<EOL><INDENT>def from_config(cls, config, name, section_key="<STR_LIT>"):<DEDENT>
section = config[section_key][name]<EOL>if '<STR_LIT>' in section:<EOL><INDENT>return yamlconf.import_module(section['<STR_LIT>'])<EOL><DEDENT>else:<EOL><INDENT>Engine = yamlconf.import_module(section['<STR_LIT:class>'])<EOL>return Engine.from_config(config, name, section_key=section_key)<EOL><DEDENT>
Constructs a :class:`deltas.DiffEngine` from a configuration doc.
f11013:c0:m1
def diff(a, b):
a, b = list(a), list(b)<EOL>opcodes = SM(None, a, b).get_opcodes()<EOL>return parse_opcodes(opcodes)<EOL>
Performs a longest common substring diff. :Parameters: a : sequence of `comparable` Initial sequence b : sequence of `comparable` Changed sequence :Returns: An `iterable` of operations.
f11014:m0
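Assuming `SM` aliases `difflib.SequenceMatcher` (which the call signature suggests), the function wraps its opcodes and hands them to `parse_opcodes` for conversion into operation objects. A minimal sketch showing the raw opcodes that conversion starts from:

```python
from difflib import SequenceMatcher

def diff(a, b):
    """Longest-common-subsequence diff as (tag, a1, a2, b1, b2) opcodes."""
    a, b = list(a), list(b)
    # junk detector disabled (None), matching the source's SM(None, a, b)
    return SequenceMatcher(None, a, b).get_opcodes()

ops = diff("abcd", "abed")
```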
def tokens(self):
for subsegment_or_token in self:<EOL><INDENT>if isinstance(subsegment_or_token, Segment):<EOL><INDENT>subsegment = subsegment_or_token<EOL>for token in subsegment.tokens():<EOL><INDENT>yield token<EOL><DEDENT><DEDENT>else:<EOL><INDENT>token = subsegment_or_token<EOL>yield token<EOL><DEDENT><DEDENT>
`generator` : the tokens in this segment
f11016:c0:m3
@property<EOL><INDENT>def end(self):<DEDENT>
return self.start + sum(<NUM_LIT:1> for _ in self.tokens())<EOL>
The index of the last :class:`deltas.Token` in the segment.
f11016:c0:m4
def segment(self, tokens):
look_ahead = LookAhead(tokens)<EOL>segments = Segment()<EOL>while not look_ahead.empty():<EOL><INDENT>if look_ahead.peek().type not in self.whitespace: <EOL><INDENT>paragraph = MatchableSegment(look_ahead.i)<EOL>while not look_ahead.empty() and look_ahead.peek().type not in self.paragraph_end:<EOL><INDENT>if look_ahead.peek().type == "<STR_LIT>": <EOL><INDENT>tab_depth = <NUM_LIT:1><EOL>sentence = MatchableSegment(<EOL>look_ahead.i, [next(look_ahead)])<EOL>while not look_ahead.empty() and tab_depth > <NUM_LIT:0>:<EOL><INDENT>tab_depth += look_ahead.peek().type == "<STR_LIT>"<EOL>tab_depth -= look_ahead.peek().type == "<STR_LIT>"<EOL>sentence.append(next(look_ahead))<EOL><DEDENT>paragraph.append(sentence)<EOL><DEDENT>elif look_ahead.peek().type not in self.whitespace: <EOL><INDENT>sentence = MatchableSegment(<EOL>look_ahead.i, [next(look_ahead)])<EOL>sub_depth = int(sentence[<NUM_LIT:0>].type in SUB_OPEN)<EOL>while not look_ahead.empty():<EOL><INDENT>sub_depth += look_ahead.peek().type in SUB_OPEN<EOL>sub_depth -= look_ahead.peek().type in SUB_CLOSE<EOL>sentence.append(next(look_ahead))<EOL>if sentence[-<NUM_LIT:1>].type in self.sentence_end and sub_depth <= <NUM_LIT:0>:<EOL><INDENT>non_whitespace = sum(s.type not in self.whitespace for s in sentence)<EOL>if non_whitespace >= self.min_sentence:<EOL><INDENT>break<EOL><DEDENT><DEDENT><DEDENT>paragraph.append(sentence)<EOL><DEDENT>else: <EOL><INDENT>whitespace = Segment(look_ahead.i, [next(look_ahead)])<EOL>paragraph.append(whitespace)<EOL><DEDENT><DEDENT>segments.append(paragraph)<EOL><DEDENT>else: <EOL><INDENT>whitespace = Segment(look_ahead.i, [next(look_ahead)])<EOL>segments.append(whitespace)<EOL><DEDENT><DEDENT>return segments<EOL>
Segments a sequence of tokens into a sequence of segments. :Parameters: tokens : `list` ( :class:`~deltas.Token` )
f11019:c0:m1
def segment(self, tokens):
raise NotImplementedError()<EOL>
Segments a sequence of :class:`~deltas.Token` into a `iterable` of :class:`~deltas.Segment`
f11022:c0:m1
@classmethod<EOL><INDENT>def from_config(cls, config, name, section_key="<STR_LIT>"):<DEDENT>
section = config[section_key][name]<EOL>segmenter_class_path = section['<STR_LIT:class>']<EOL>Segmenter = yamlconf.import_module(segmenter_class_path)<EOL>return Segmenter.from_config(config, name, section_key=section_key)<EOL>
Constructs a segmenter from a configuration doc.
f11022:c0:m2
def tokenize(self, text, token_class=Token):
raise NotImplementedError()<EOL>
Tokenizes a text.
f11025:c0:m0
def _tokenize(self, text, token_class=None):
token_class = token_class or Token<EOL>tokens = {}<EOL>for i, match in enumerate(self.regex.finditer(text)):<EOL><INDENT>value = match.group(<NUM_LIT:0>)<EOL>try:<EOL><INDENT>token = tokens[value]<EOL><DEDENT>except KeyError:<EOL><INDENT>type = match.lastgroup<EOL>token = token_class(value, type=type)<EOL>tokens[value] = token<EOL><DEDENT>yield token<EOL><DEDENT>
Tokenizes a text :Returns: A `generator` of tokens
f11025:c1:m2
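The tokenizer's trick is a single alternation regex of named groups, where `match.lastgroup` names the group that matched and therefore supplies the token type; the source additionally interns tokens in a dict so identical values share one object. A runnable sketch with a hypothetical three-entry lexicon (the real lexicon is not shown):

```python
import re

# Hypothetical lexicon: each (name, pattern) pair becomes a named group,
# and the group name becomes the token's type.
LEXICON = [("word", r"[A-Za-z]+"), ("number", r"[0-9]+"), ("whitespace", r"\s+")]
regex = re.compile("|".join("(?P<{0}>{1})".format(n, p) for n, p in LEXICON))

def tokenize(text):
    for match in regex.finditer(text):
        # match.lastgroup is the name of the alternative that matched
        yield match.group(0), match.lastgroup

tokens = list(tokenize("deltas 101"))
```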
def tokens(self):
yield self<EOL>
Returns an iterator of *self*. This method reflects the behavior of :meth:`deltas.Segment.tokens`
f11029:c0:m1
def apply(operations, a_tokens, b_tokens):
for operation in operations:<EOL><INDENT>if isinstance(operation, Equal):<EOL><INDENT>for t in a_tokens[operation.a1:operation.a2]: yield t<EOL><DEDENT>elif isinstance(operation, Insert):<EOL><INDENT>for t in b_tokens[operation.b1:operation.b2]: yield t<EOL><DEDENT>elif isinstance(operation, Delete):<EOL><INDENT>pass<EOL><DEDENT>else:<EOL><INDENT>raise TypeError("<STR_LIT>" + "<STR_LIT>".format(type(operation)))<EOL><DEDENT><DEDENT>
Applies a sequences of operations to tokens -- copies tokens from `a_tokens` and `b_tokens` according to `operations`. :Parameters: operations : sequence of :~class:`deltas.Operation` Operations to perform a_tokens : list of `comparable` Starting sequence of comparable tokens b_tokens : list of `comparable` Ending list of comparable tokens :Returns: A new list of tokens
f11030:m0
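The apply step reconstructs the changed text by copying `Equal` spans from `a_tokens`, `Insert` spans from `b_tokens`, and skipping `Delete` spans. A self-contained sketch, with namedtuples standing in for deltas' operation classes (the four-field shape is assumed from the slicing above):

```python
from collections import namedtuple

# Minimal stand-ins for deltas' operation types
Equal = namedtuple("Equal", ["a1", "a2", "b1", "b2"])
Insert = namedtuple("Insert", ["a1", "a2", "b1", "b2"])
Delete = namedtuple("Delete", ["a1", "a2", "b1", "b2"])

def apply(operations, a_tokens, b_tokens):
    for op in operations:
        if isinstance(op, Equal):
            yield from a_tokens[op.a1:op.a2]  # unchanged span, copied from a
        elif isinstance(op, Insert):
            yield from b_tokens[op.b1:op.b2]  # inserted span, copied from b
        elif isinstance(op, Delete):
            pass                              # deleted spans emit nothing
        else:
            raise TypeError("Unknown operation type: {0}".format(type(op)))

a = ["foo", " ", "bar"]
b = ["foo", " ", "baz"]
ops = [Equal(0, 2, 0, 2), Delete(2, 3, 2, 2), Insert(3, 3, 2, 3)]
result = list(apply(ops, a, b))  # reconstructs b from a plus the operations
```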
def _process_json(data):
requests = []<EOL>for item in data:<EOL><INDENT>committee = GradCommittee()<EOL>committee.status = item.get('<STR_LIT:status>')<EOL>committee.committee_type = item.get('<STR_LIT>')<EOL>committee.dept = item.get('<STR_LIT>')<EOL>committee.degree_title = item.get('<STR_LIT>')<EOL>committee.degree_type = item.get('<STR_LIT>')<EOL>committee.major_full_name = item.get('<STR_LIT>')<EOL>committee.start_date = parse_datetime(item.get('<STR_LIT>'))<EOL>committee.end_date = parse_datetime(item.get('<STR_LIT>'))<EOL>for member in item.get('<STR_LIT>'):<EOL><INDENT>if member.get('<STR_LIT:status>') == "<STR_LIT>":<EOL><INDENT>continue<EOL><DEDENT>com_mem = GradCommitteeMember()<EOL>com_mem.first_name = member.get('<STR_LIT>')<EOL>com_mem.last_name = member.get('<STR_LIT>')<EOL>if member.get('<STR_LIT>') and len(member.get('<STR_LIT>')):<EOL><INDENT>com_mem.member_type = member.get('<STR_LIT>').lower()<EOL><DEDENT>if member.get('<STR_LIT>') and len(member.get('<STR_LIT>')):<EOL><INDENT>com_mem.reading_type = member.get('<STR_LIT>').lower()<EOL><DEDENT>com_mem.dept = member.get('<STR_LIT>')<EOL>com_mem.email = member.get('<STR_LIT:email>')<EOL>com_mem.status = member.get('<STR_LIT:status>')<EOL>committee.members.append(com_mem)<EOL><DEDENT>requests.append(committee)<EOL><DEDENT>return requests<EOL>
return a list of GradCommittee objects.
f11043:m1
def _process_json(data):
requests = []<EOL>for item in data:<EOL><INDENT>petition = GradPetition()<EOL>petition.description = item.get('<STR_LIT:description>')<EOL>petition.submit_date = parse_datetime(item.get('<STR_LIT>'))<EOL>petition.decision_date = parse_datetime(item.get('<STR_LIT>'))<EOL>if item.get('<STR_LIT>') and len(item.get('<STR_LIT>')):<EOL><INDENT>petition.dept_recommend = item.get('<STR_LIT>').lower()<EOL><DEDENT>if item.get('<STR_LIT>') and len(item.get('<STR_LIT>')):<EOL><INDENT>petition.gradschool_decision = item.get('<STR_LIT>').lower()<EOL><DEDENT>requests.append(petition)<EOL><DEDENT>return requests<EOL>
return a list of GradPetition objects.
f11045:m1
def _process_json(data):
requests = []<EOL>for item in data:<EOL><INDENT>leave = GradLeave()<EOL>leave.reason = item.get('<STR_LIT>')<EOL>leave.submit_date = parse_datetime(item.get('<STR_LIT>'))<EOL>if item.get('<STR_LIT:status>') and len(item.get('<STR_LIT:status>')):<EOL><INDENT>leave.status = item.get('<STR_LIT:status>').lower()<EOL><DEDENT>for quarter in item.get('<STR_LIT>'):<EOL><INDENT>term = GradTerm()<EOL>term.quarter = quarter.get('<STR_LIT>').lower()<EOL>term.year = quarter.get('<STR_LIT>')<EOL>leave.terms.append(term)<EOL><DEDENT>requests.append(leave)<EOL><DEDENT>return requests<EOL>
return a list of GradLeave objects.
f11047:m1
def is_status_await(self):
return self.status.startswith(self.AWAITING_STATUS_PREFIX)<EOL>
return true if status is: Awaiting Dept Action, Awaiting Dept Action (Final Exam), Awaiting Dept Action (General Exam)
f11048:c1:m3
def _process_json(json_data):
requests = []<EOL>for item in json_data:<EOL><INDENT>degree = GradDegree()<EOL>degree.degree_title = item["<STR_LIT>"]<EOL>degree.exam_place = item["<STR_LIT>"]<EOL>degree.exam_date = parse_datetime(item.get("<STR_LIT>"))<EOL>degree.req_type = item["<STR_LIT>"]<EOL>degree.major_full_name = item["<STR_LIT>"]<EOL>degree.submit_date = parse_datetime(item.get("<STR_LIT>"))<EOL>degree.decision_date = parse_datetime(item.get('<STR_LIT>'))<EOL>degree.status = item["<STR_LIT:status>"]<EOL>degree.target_award_year = item["<STR_LIT>"]<EOL>if item.get("<STR_LIT>") and len(item.get("<STR_LIT>")):<EOL><INDENT>degree.target_award_quarter = item["<STR_LIT>"].lower()<EOL><DEDENT>requests.append(degree)<EOL><DEDENT>return requests<EOL>
return a list of GradDegree objects.
f11049:m1
def timeline(self, request, drip_id, into_past, into_future):
from django.shortcuts import render, get_object_or_404<EOL>drip = get_object_or_404(Drip, id=drip_id)<EOL>shifted_drips = []<EOL>seen_users = set()<EOL>for shifted_drip in drip.drip.walk(into_past=int(into_past), into_future=int(into_future)+<NUM_LIT:1>):<EOL><INDENT>shifted_drip.prune()<EOL>shifted_drips.append({<EOL>'<STR_LIT>': shifted_drip,<EOL>'<STR_LIT>': shifted_drip.get_queryset().exclude(id__in=seen_users)<EOL>})<EOL>seen_users.update(shifted_drip.get_queryset().values_list('<STR_LIT:id>', flat=True))<EOL><DEDENT>return render(request, '<STR_LIT>', locals())<EOL>
Return a list of people who should get emails.
f11052:c2:m0
def now(self):
return conditional_now() + self.timedelta(**self.now_shift_kwargs)<EOL>
This allows us to override what we consider "now", making it easy to build timelines of who gets what when.
f11053:c1:m1
def timedelta(self, *a, **kw):
from datetime import timedelta<EOL>return timedelta(*a, **kw)<EOL>
If needed, this allows us the ability to manipulate the slicing of time.
f11053:c1:m2