def graphql_to_sql(schema, graphql_query, parameters, compiler_metadata,
                   type_equivalence_hints=None):
compilation_result = compile_graphql_to_sql(
    schema, graphql_query, compiler_metadata, type_equivalence_hints=type_equivalence_hints)
return compilation_result._replace(
    query=insert_arguments_into_query(compilation_result, parameters))
Compile the GraphQL input using the schema into a SQL query and associated metadata.

Args:
    schema: GraphQL schema object describing the schema of the graph to be queried
    graphql_query: the GraphQL query to compile to SQL, as a string
    parameters: dict, mapping argument name to its value, for every parameter the query expects.
    compiler_metadata: SqlMetadata object, provides SQLAlchemy specific backend information
    type_equivalence_hints: optional dict of GraphQL interface or type -> GraphQL union.
        Used as a workaround for GraphQL's lack of support for inheritance across "types"
        (i.e. non-interfaces), as well as a workaround for Gremlin's total lack of
        inheritance-awareness. The key-value pairs in the dict specify that the "key" type
        is equivalent to the "value" type, i.e. that the GraphQL type or interface in the
        key is the most-derived common supertype of every GraphQL type in the "value"
        GraphQL union. Recursive expansion of type equivalence hints is not performed, and
        only type-level correctness of this argument is enforced. See README.md for more
        details on everything this parameter does.
        ***** Be very careful with this option, as bad input here will lead to
        incorrect output queries being generated. *****

Returns:
    a CompilationResult object, containing:
        - query: string, the resulting compiled and parameterized query string
        - language: string, specifying the language to which the query was compiled
        - output_metadata: dict, output name -> OutputMetadata namedtuple object
        - input_metadata: dict, name of input variables -> inferred GraphQL type, based on use
f12696:m1
def graphql_to_gremlin(schema, graphql_query, parameters, type_equivalence_hints=None):
compilation_result = compile_graphql_to_gremlin(
    schema, graphql_query, type_equivalence_hints=type_equivalence_hints)
return compilation_result._replace(
    query=insert_arguments_into_query(compilation_result, parameters))
Compile the GraphQL input using the schema into a Gremlin query and associated metadata.

Args:
    schema: GraphQL schema object describing the schema of the graph to be queried
    graphql_query: the GraphQL query to compile to Gremlin, as a string
    parameters: dict, mapping argument name to its value, for every parameter the query expects.
    type_equivalence_hints: optional dict of GraphQL interface or type -> GraphQL union.
        Used as a workaround for GraphQL's lack of support for inheritance across "types"
        (i.e. non-interfaces), as well as a workaround for Gremlin's total lack of
        inheritance-awareness. The key-value pairs in the dict specify that the "key" type
        is equivalent to the "value" type, i.e. that the GraphQL type or interface in the
        key is the most-derived common supertype of every GraphQL type in the "value"
        GraphQL union. Recursive expansion of type equivalence hints is not performed, and
        only type-level correctness of this argument is enforced. See README.md for more
        details on everything this parameter does.
        ***** Be very careful with this option, as bad input here will lead to
        incorrect output queries being generated. *****

Returns:
    a CompilationResult object, containing:
        - query: string, the resulting compiled and parameterized query string
        - language: string, specifying the language to which the query was compiled
        - output_metadata: dict, output name -> OutputMetadata namedtuple object
        - input_metadata: dict, name of input variables -> inferred GraphQL type, based on use
f12696:m2
def get_graphql_schema_from_orientdb_schema_data(schema_data, class_to_field_type_overrides=None,
                                                 hidden_classes=None):
if class_to_field_type_overrides is None:
    class_to_field_type_overrides = dict()
if hidden_classes is None:
    hidden_classes = set()
schema_graph = SchemaGraph(schema_data)
return get_graphql_schema_from_schema_graph(schema_graph, class_to_field_type_overrides,
                                            hidden_classes)
Construct a GraphQL schema from an OrientDB schema.

Args:
    schema_data: list of dicts describing the classes in the OrientDB schema. The following
        format is the way the data is structured in OrientDB 2. See the README.md file for
        an example of how to query this data. Each dict has the following string fields:
            - name: string, the name of the class.
            - superClasses (optional): list of strings, the names of the class's superclasses.
            - superClass (optional): string, the name of the class's superclass. May be used
              instead of superClasses if there is only one superclass. Used for backwards
              compatibility with OrientDB.
            - customFields (optional): dict, string -> string, data defined on the class
              instead of instances of the class.
            - abstract: bool, true if the class is abstract.
            - properties: list of dicts, describing the class's properties. Each property
              dictionary has the following string fields:
                - name: string, the name of the property.
                - type: int, builtin OrientDB type ID of the property. See
                  schema_properties.py for the mapping.
                - linkedType (optional): int, if the property is a collection of builtin
                  OrientDB objects, then it indicates their type ID.
                - linkedClass (optional): string, if the property is a collection of class
                  instances, then it indicates the name of the class. If the class is an
                  edge class, and the field name is either 'in' or 'out', then it describes
                  the name of an endpoint of the edge.
                - defaultValue: string, the textual representation of the default value for
                  the property, as returned by OrientDB's schema introspection code, e.g.
                  '{}' for the embedded set type. Note that if the property is a collection
                  type, it must have a default value.
    class_to_field_type_overrides: optional dict, class name -> {field name -> field type},
        (string -> {string -> GraphQLType}). Used to override the type of a field in the
        class where it's first defined and in all the class's subclasses.
    hidden_classes: optional set of strings, classes not to include in the GraphQL schema.

Returns:
    tuple of (GraphQL schema object, GraphQL type equivalence hints dict). The tuple is of
    type (GraphQLSchema, {GraphQLObjectType -> GraphQLUnionType}).
f12696:m3
def read_file(filename):
here = os.path.abspath(os.path.dirname(__file__))
with codecs.open(os.path.join(here, '<STR_LIT>', filename), '<STR_LIT:r>') as f:
    return f.read()
Read package file as text to get name and version
f12698:m0
def find_version():
version_file = read_file('<STR_LIT>')
version_match = re.search(r'<STR_LIT>',
                          version_file, re.M)
if version_match:
    return version_match.group(<NUM_LIT:1>)
raise RuntimeError('<STR_LIT>')
Only define version in one place
f12698:m1
def find_name():
name_file = read_file('<STR_LIT>')
name_match = re.search(r'<STR_LIT>',
                       name_file, re.M)
if name_match:
    return name_match.group(<NUM_LIT:1>)
raise RuntimeError('<STR_LIT>')
Only define name in one place
f12698:m2
def find_long_description():
return read_file('<STR_LIT>')
Return the content of the README.md file.
f12698:m3
def _create_animal_fed_at_random_event_statement(from_name):
event_name = random.choice(FEEDING_EVENT_NAMES_LIST)
return create_edge_statement('<STR_LIT>', '<STR_LIT>', from_name, '<STR_LIT>', event_name)
Return a SQL statement to create an Animal_FedAt edge connected to a random Event.
f12699:m0
def _get_animal_aliases(animal_name, parent_names):
base_name, _ = extract_base_name_and_label(animal_name)
random_aliases = [base_name + '<STR_LIT:_>' + str(random.randint(<NUM_LIT:0>, <NUM_LIT:9>)) for _ in range(NUM_ALIASES)]
if len(parent_names) > <NUM_LIT:2>:
    return random_aliases + random.sample(parent_names, <NUM_LIT:2>)
else:
    return random_aliases
Return list of animal aliases.
f12699:m1
def _create_animal_statement(animal_name, parent_names):
field_name_to_value = {
    '<STR_LIT:name>': animal_name,
    '<STR_LIT>': get_uuid(),
    '<STR_LIT>': random.choice(ANIMAL_COLOR_LIST),
    '<STR_LIT>': get_random_date(),
    '<STR_LIT>': get_random_net_worth(),
    '<STR_LIT>': _get_animal_aliases(animal_name, parent_names),
}
return create_vertex_statement('<STR_LIT>', field_name_to_value)
Return a SQL statement to create an Animal vertex.
f12699:m2
def _create_animal_parent_of_statement(from_name, to_name):
return create_edge_statement('<STR_LIT>', '<STR_LIT>', from_name, '<STR_LIT>', to_name)
Return a SQL statement to create an Animal_ParentOf edge.
f12699:m3
def _detect_subtype(entity_name):
if entity_name in SPECIES_LIST:
    return '<STR_LIT>'
elif entity_name in FOOD_LIST:
    return '<STR_LIT>'
elif extract_base_name_and_label(entity_name)[<NUM_LIT:0>] in SPECIES_LIST:
    return '<STR_LIT>'
else:
    raise AssertionError(u'<STR_LIT>'
                         u'<STR_LIT>'.format(entity_name))
Detect and return the type of Entity from its name.
f12699:m4
def _create_entity_related_statement(from_name, to_name):
from_class = _detect_subtype(from_name)
to_class = _detect_subtype(to_name)
return create_edge_statement('<STR_LIT>', from_class, from_name, to_class, to_name)
Return a SQL statement to create an Entity_Related edge.
f12699:m5
def _create_animal_of_species_statement(from_name, to_name):
return create_edge_statement('<STR_LIT>', '<STR_LIT>', from_name, '<STR_LIT>', to_name)
Return a SQL statement to create an Animal_OfSpecies edge.
f12699:m6
def _get_initial_animal_generators(species_name, current_animal_names):
command_list = []
for index in range(NUM_INITIAL_ANIMALS):
    animal_name = create_name(species_name, str(index))
    current_animal_names.append(animal_name)
    command_list.append(_create_animal_statement(animal_name, []))
return command_list
Return a list of SQL statements to create initial animals for a given species.
f12699:m7
def _get_new_parents(current_animal_names, previous_parent_sets, num_parents):
while True:
    new_parent_names = frozenset(random.sample(current_animal_names, num_parents))
    if new_parent_names not in previous_parent_sets:
        return new_parent_names
Return a set of `num_parents` parent names that is not present in `previous_parent_sets`.
f12699:m8
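The loop above is plain rejection sampling: keep drawing parent combinations until one has not been seen before. A self-contained sketch with illustrative names and constants (the real name lists and parent counts come from the module's config):

```python
import random

def get_new_parents(animal_names, previous_parent_sets, num_parents):
    # Keep sampling until we draw a parent combination not seen before.
    # frozenset makes the combination hashable and order-insensitive.
    while True:
        candidate = frozenset(random.sample(animal_names, num_parents))
        if candidate not in previous_parent_sets:
            return candidate

random.seed(0)
names = ['cat_0', 'cat_1', 'cat_2', 'cat_3']
seen = set()
for _ in range(5):  # 4 choose 2 = 6 possible pairs, so 5 draws always succeed
    parents = get_new_parents(names, seen, 2)
    seen.add(parents)
```

Rejection sampling is simple but only terminates quickly while the pool of unseen combinations stays large relative to the number of draws.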
def _get_animal_entity_related_commands(all_animal_names):
command_list = []
num_samples = int(NUM_ENTITY_RELATED_COMMANDS_MULTIPLIER * len(all_animal_names))
in_neighbor_samples = random.sample(all_animal_names, num_samples)
out_neighbor_samples = random.sample(all_animal_names, num_samples)
for from_name, to_name in zip(in_neighbor_samples, out_neighbor_samples):
    command_list.append(_create_entity_related_statement(from_name, to_name))
    food_name = random.choice(FOOD_LIST)
    command_list.append(_create_entity_related_statement(from_name, food_name))
    command_list.append(_create_entity_related_statement(food_name, to_name))
return command_list
Return a list of commands to create EntityRelated edges between random Animals and Foods.
f12699:m9
def get_animal_generation_commands():
command_list = []
species_to_names = {}
previous_parent_sets = set()
all_animal_names = []
for species_name in SPECIES_LIST:
    current_animal_names = []
    species_to_names[species_name] = current_animal_names
    command_list.extend(_get_initial_animal_generators(species_name, current_animal_names))
    for _ in range(NUM_GENERATIONS):
        new_parent_names = _get_new_parents(
            current_animal_names, previous_parent_sets, NUM_PARENTS)
        previous_parent_sets.add(new_parent_names)
        parent_indices = sorted([
            index
            for _, index in [
                extract_base_name_and_label(parent_name)
                for parent_name in new_parent_names
            ]
        ])
        new_label = '<STR_LIT:(>' + '<STR_LIT:_>'.join(parent_indices) + '<STR_LIT:)>'
        new_animal_name = create_name(species_name, new_label)
        current_animal_names.append(new_animal_name)
        command_list.append(_create_animal_statement(new_animal_name, sorted(new_parent_names)))
        for parent_name in sorted(new_parent_names):
            new_edge = _create_animal_parent_of_statement(new_animal_name, parent_name)
            command_list.append(new_edge)
    for animal_name in current_animal_names:
        command_list.append(_create_animal_of_species_statement(animal_name, species_name))
        if random.random() > ANIMAL_FED_AT_EXISTENCE_THRESHOLD:
            command_list.append(_create_animal_fed_at_random_event_statement(animal_name))
        command_list.append(_create_entity_related_statement(animal_name, animal_name))
        command_list.append(_create_entity_related_statement(animal_name, species_name))
        command_list.append(_create_entity_related_statement(species_name, animal_name))
    all_animal_names.extend(current_animal_names)
return command_list + _get_animal_entity_related_commands(all_animal_names)
Return a list of SQL statements to create animal vertices and their corresponding edges.
f12699:m10
def read_file(filename):
top_level_directory = path.dirname(path.dirname(path.dirname(path.abspath(__file__))))
with codecs.open(path.join(top_level_directory, '<STR_LIT>', filename), '<STR_LIT:r>') as f:
    return f.read()
Read and return text from the file specified by `filename`, in the project root directory.
f12700:m0
def find_version():
version_file = read_file('<STR_LIT>')
version_match = re.search(r'<STR_LIT>', version_file, re.M)
if version_match:
    return version_match.group(<NUM_LIT:1>)
raise RuntimeError('<STR_LIT>')
Return current version of package.
f12700:m1
def main():
random.seed(<NUM_LIT:0>)
module_path = path.relpath(__file__)
current_datetime = datetime.datetime.now().isoformat()
log_message = ('<STR_LIT>'
               '<STR_LIT>'
               '<STR_LIT>')
sys.stdout.write(
    log_message.format(path=module_path, datetime=current_datetime, version=find_version()))
sql_command_generators = [
    get_event_generation_commands,
    get_species_generation_commands,
    get_animal_generation_commands,
]
for sql_command_generator in sql_command_generators:
    sql_command_list = sql_command_generator()
    sys.stdout.write('<STR_LIT:\n>'.join(sql_command_list))
    sys.stdout.write('<STR_LIT:\n>')
Print a list of SQL commands to generate the testing database.
f12700:m2
def get_uuid():
return str(UUID(int=random.randint(<NUM_LIT:0>, <NUM_LIT:2>**<NUM_LIT> - <NUM_LIT:1>)))
Return a pseudorandom uuid.
f12701:m0
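Building a UUID from a seeded `random` integer makes the generated ids reproducible across runs, which is what a deterministic test-database generator needs. A sketch of the idea (the elided integer bound is assumed here to be 128 bits, the width of a UUID):

```python
import random
from uuid import UUID

def get_uuid():
    # Draw a pseudorandom 128-bit integer and format it as a UUID,
    # so a seeded random module yields the same id sequence every run.
    return str(UUID(int=random.randint(0, 2**128 - 1)))

random.seed(0)
first = get_uuid()
random.seed(0)
second = get_uuid()
```

This trades the uniqueness guarantees of `uuid.uuid4()` for reproducibility, which is the right trade-off for generating fixture data.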
def get_random_net_worth():
return Decimal(int(<NUM_LIT> * random.random()) / <NUM_LIT>)
Return a pseudorandom net worth.
f12701:m1
def get_random_limbs():
return random.randint(<NUM_LIT:2>, <NUM_LIT:10>)
Return a pseudorandom number of limbs.
f12701:m2
def get_random_date():
random_year = random.randint(<NUM_LIT>, <NUM_LIT>)
random_month = random.randint(<NUM_LIT:1>, <NUM_LIT:12>)
random_day = random.randint(<NUM_LIT:1>, <NUM_LIT>)
return datetime.date(random_year, random_month, random_day)
Return a pseudorandom date.
f12701:m3
def select_vertex_statement(vertex_type, name):
template = '<STR_LIT>'
args = {'<STR_LIT>': vertex_type, '<STR_LIT:name>': name}
return template.format(**args)
Return a SQL statement to select a vertex of given type by its `name` field.
f12701:m4
def set_statement(field_name, field_value):
if not isinstance(field_name, six.string_types):
    raise AssertionError(u'<STR_LIT>'.format(field_name))
field_value_representation = repr(field_value)
if isinstance(field_value, datetime.date):
    field_value_representation = '<STR_LIT>' + field_value.isoformat() + '<STR_LIT>'
template = '<STR_LIT>'
return template.format(field_name, field_value_representation)
Return a SQL clause (used in creating a vertex) to set a field to a value.
f12701:m5
def create_vertex_statement(vertex_type, field_name_to_value):
statement = CREATE_VERTEX + vertex_type
set_field_clauses = [
    set_statement(field_name, field_name_to_value[field_name])
    for field_name in sorted(six.iterkeys(field_name_to_value))
]
statement += '<STR_LIT>' + '<STR_LIT:U+002CU+0020>'.join(set_field_clauses)
return statement
Return a SQL statement to create a vertex.
f12701:m6
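Sorting the field names before joining the SET clauses makes the emitted SQL deterministic, so the generator produces byte-identical output on every run. The core of that clause-building can be sketched with illustrative formatting (the library's actual templates are elided above):

```python
def build_set_clause(field_name_to_value):
    # Sort keys so repeated runs emit fields in a stable order.
    return ', '.join(
        '{} = {!r}'.format(field_name, field_name_to_value[field_name])
        for field_name in sorted(field_name_to_value))

clause = build_set_clause({'net_worth': 100, 'name': 'cat', 'color': 'red'})
# fields appear alphabetically regardless of dict insertion order
```

Deterministic output matters here because the generated statements are checked into (or diffed against) a fixture file; unordered dict iteration would produce spurious diffs.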
def create_edge_statement(edge_name, from_class, from_name, to_class, to_name):
statement = CREATE_EDGE + edge_name + '<STR_LIT>'
from_select = select_vertex_statement(from_class, from_name)
to_select = select_vertex_statement(to_class, to_name)
return statement.format(from_select, to_select)
Return a SQL statement to create an edge.
f12701:m7
def create_name(base_name, label):
return base_name + SEPARATOR + label
Return a name formed by joining a base name with a label.
f12701:m8
def extract_base_name_and_label(name):
if not isinstance(name, six.string_types):
    raise AssertionError(u'<STR_LIT>'.format(name))
split_name = name.split(SEPARATOR)
if len(split_name) != <NUM_LIT:2>:
    raise AssertionError(u'<STR_LIT>'
                         .format(SEPARATOR, name))
return split_name
Extract and return a pair of (base_name, label) from a given name field.
f12701:m9
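The two helpers above are inverses as long as SEPARATOR never occurs inside a base name or label. A round-trip sketch (the real SEPARATOR literal is elided; `'__'` here is a hypothetical stand-in):

```python
SEPARATOR = '__'  # hypothetical separator; the real literal is elided above

def create_name(base_name, label):
    return base_name + SEPARATOR + label

def extract_base_name_and_label(name):
    # The name must contain exactly one separator for the split to be unambiguous.
    split_name = name.split(SEPARATOR)
    if len(split_name) != 2:
        raise AssertionError('Expected exactly one {!r} in {!r}'.format(SEPARATOR, name))
    return split_name

name = create_name('cat', '(1_2)')
base, label = extract_base_name_and_label(name)
```

The exactly-one-separator assertion is what makes the encoding reversible; a single `_` inside labels is fine because it is not the separator.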
def _create_food_statement(food_name):
field_name_to_value = {'<STR_LIT:name>': food_name, '<STR_LIT>': get_uuid()}
return create_vertex_statement('<STR_LIT>', field_name_to_value)
Return a SQL statement to create a Food vertex.
f12702:m0
def _create_species_statement(species_name):
field_name_to_value = {'<STR_LIT:name>': species_name, '<STR_LIT>': get_random_limbs(), '<STR_LIT>': get_uuid()}
return create_vertex_statement('<STR_LIT>', field_name_to_value)
Return a SQL statement to create a Species vertex.
f12702:m1
def _create_species_eats_statement(from_name, to_name):
if to_name in SPECIES_LIST:
    to_class = '<STR_LIT>'
elif to_name in FOOD_LIST:
    to_class = '<STR_LIT>'
else:
    raise AssertionError(u'<STR_LIT>'.format(to_name))
return create_edge_statement('<STR_LIT>', '<STR_LIT>', from_name, to_class, to_name)
Return a SQL statement to create a Species_Eats edge.
f12702:m2
def get_species_generation_commands():
command_list = []
for food_name in FOOD_LIST:
    command_list.append(_create_food_statement(food_name))
for species_name in SPECIES_LIST:
    command_list.append(_create_species_statement(species_name))
for species_name in SPECIES_LIST:
    for food_or_species_name in random.sample(SPECIES_LIST + FOOD_LIST, NUM_FOODS):
        command_list.append(_create_species_eats_statement(species_name, food_or_species_name))
return command_list
Return a list of SQL statements to create all Food and Species vertices, plus Species_Eats edges.
f12702:m3
def _create_feeding_event_statement(event_name):
field_name_to_value = {'<STR_LIT:name>': event_name, '<STR_LIT>': get_random_date(), '<STR_LIT>': get_uuid()}
return create_vertex_statement('<STR_LIT>', field_name_to_value)
Return a SQL statement to create a FeedingEvent vertex.
f12703:m0
def get_event_generation_commands():
command_list = []
for event_name in FEEDING_EVENT_NAMES_LIST:
    command_list.append(_create_feeding_event_statement(event_name))
return command_list
Return a list of SQL statements to create all event vertices.
f12703:m1
def rel(path, parent=None, par=False):
try:
    res = os.path.relpath(path, parent)
except ValueError:
    if not par:
        return abs(path)
    raise
else:
    if not par and not issub(res):
        return abs(path)
    return res
Takes *path* and computes the relative path from *parent*. If *parent* is omitted, the current working directory is used. If *par* is #True, a relative path is always created when possible. Otherwise, a relative path is only returned if *path* lives inside the *parent* directory.
f12717:m2
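The fallback logic wraps `os.path.relpath()`, which raises `ValueError` when no relative path exists (e.g. across Windows drive letters) and otherwise happily climbs out of *parent* with `..` components. A stdlib-only illustration of those two outcomes, assuming POSIX paths:

```python
import os

# Path inside the parent: relpath stays inside it.
inside = os.path.relpath('/home/user/project/src', '/home/user/project')

# Path outside the parent: relpath climbs out via '..' components,
# which is exactly the case issub() is used to detect above.
outside = os.path.relpath('/etc/hosts', '/home/user/project')
```

`rel()` adds the policy on top: unless *par* is true, any result that escapes the parent is replaced with the absolute path instead.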
def issub(path):
if isabs(path):
    return False
if (path.startswith(curdir + sep) or path.startswith(pardir + sep)
        or path == curdir or path == pardir):
    return False
return True
Returns #True if *path* is a relative path that does not point outside of, and is not equal to, its parent directory (thus, this function will also return False for a path like `./`).
f12717:m4
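The intended classification can be sketched as follows (a reimplementation for illustration, assuming POSIX separators; the real module pulls `curdir`/`pardir`/`sep` from `os`):

```python
import os.path
from os import curdir, pardir, sep

def issub(path):
    # Relative paths that do not begin with './' or '../' (and are not
    # exactly '.' or '..') are considered to stay inside their parent.
    if os.path.isabs(path):
        return False
    if (path.startswith(curdir + sep) or path.startswith(pardir + sep)
            or path == curdir or path == pardir):
        return False
    return True

checks = {'src/main.py': True, './src': False, '../other': False,
          '/etc/hosts': False, '.': False, '..': False}
```

Note the deliberately strict treatment of `./src`: even though it resolves inside the parent, the explicit `./` prefix makes it fail the check.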
def isglob(path):
return '<STR_LIT:*>' in path or '<STR_LIT:?>' in path
Checks if *path* is a glob pattern. Returns #True if it is, #False if not.
f12717:m5
def glob(patterns, parent=None, excludes=None, include_dotfiles=False,
         ignore_false_excludes=False):
if not glob2:
    raise glob2_ext
if isinstance(patterns, str):
    patterns = [patterns]
if not parent:
    parent = os.getcwd()
result = []
for pattern in patterns:
    if not os.path.isabs(pattern):
        pattern = os.path.join(parent, pattern)
    result += glob2.glob(canonical(pattern))
for pattern in (excludes or ()):
    if not os.path.isabs(pattern):
        pattern = os.path.join(parent, pattern)
    pattern = canonical(pattern)
    if not isglob(pattern):
        try:
            result.remove(pattern)
        except ValueError as exc:
            if not ignore_false_excludes:
                raise ValueError('<STR_LIT>'.format(exc, pattern))
    else:
        for item in glob2.glob(pattern):
            try:
                result.remove(item)
            except ValueError as exc:
                if not ignore_false_excludes:
                    raise ValueError('<STR_LIT>'.format(exc, pattern))
return result
Wrapper for #glob2.glob() that accepts an arbitrary number of patterns and matches them.
The paths are normalized with #norm(). Relative patterns are automatically joined with
*parent*. If the parameter is omitted, it defaults to the current working directory.

If *excludes* is specified, it must be a string or a list of strings that is/contains
glob patterns or filenames to be removed from the result before returning.

> Every file listed in *excludes* will only remove **one** match from
> the result list that was generated from *patterns*. Thus, if you
> want to exclude some files with a pattern except for a specific file
> that would also match that pattern, simply list that file another
> time in the *patterns*.

# Parameters
patterns (list of str): A list of glob patterns or filenames.
parent (str): The parent directory for relative paths.
excludes (list of str): A list of glob patterns or filenames.
include_dotfiles (bool): If True, `*` and `**` can also capture file or directory
  names starting with a dot.
ignore_false_excludes (bool): False by default. If False, items listed in *excludes*
  that could not be removed from the result raise a #ValueError.

# Returns
list of str: A list of filenames.
f12717:m6
def addtobase(subject, base_suffix):
if not base_suffix:
    return subject
base, ext = os.path.splitext(subject)
return base + base_suffix + ext
Adds the string *base_suffix* to the basename of *subject*.
f12717:m7
def addprefix(subject, prefix):
if not prefix:
    return subject
dir_, base = split(subject)
if callable(prefix):
    base = prefix(base)
else:
    base = prefix + base
return join(dir_, base)
Adds the specified *prefix* to the last path element in *subject*. If *prefix* is a callable, it must accept exactly one argument, which is the last path element, and return a modified value.
f12717:m8
def addsuffix(subject, suffix, replace=False):
if not suffix and not replace:
    return subject
if replace:
    subject = rmvsuffix(subject)
if suffix and callable(suffix):
    subject = suffix(subject)
elif suffix:
    subject += suffix
return subject
Adds the specified *suffix* to the *subject*. If *replace* is True, the old suffix will be removed first. If *suffix* is callable, it must accept exactly one argument and return a modified value.
f12717:m9
def setsuffix(subject, suffix):
return addsuffix(subject, suffix, replace=True)
Synonymous with passing True for the *replace* parameter of #addsuffix().
f12717:m10
def rmvsuffix(subject):
index = subject.rfind('<STR_LIT:.>')
if index > subject.replace('<STR_LIT:\\>', '<STR_LIT:/>').rfind('<STR_LIT:/>'):
    subject = subject[:index]
return subject
Remove the suffix from *subject*.
f12717:m11
def getsuffix(subject):
index = subject.rfind('<STR_LIT:.>')
if index > subject.replace('<STR_LIT:\\>', '<STR_LIT:/>').rfind('<STR_LIT:/>'):
    return subject[index+<NUM_LIT:1>:]
return None
Returns the suffix of a filename. If the file has no suffix, returns None. Can return an empty string if the filename ends with a period.
f12717:m12
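Both suffix helpers rely on the same guard: a dot only counts as a suffix separator if it appears after the last path separator, so dotted directory names are ignored. A filled-in sketch (concrete `.`/`/`/`\\` literals substituted for the elided ones, which is what the comparison logic implies):

```python
def rmvsuffix(subject):
    # Only strip a dot that appears after the last path separator,
    # so 'a.dir/readme' keeps its name intact.
    index = subject.rfind('.')
    if index > subject.replace('\\', '/').rfind('/'):
        subject = subject[:index]
    return subject

def getsuffix(subject):
    index = subject.rfind('.')
    if index > subject.replace('\\', '/').rfind('/'):
        return subject[index + 1:]
    return None  # no suffix at all
```

Normalizing backslashes to forward slashes first lets a single `rfind('/')` handle both Windows and POSIX separators.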
def makedirs(path, exist_ok=True):
try:
    os.makedirs(path)
except OSError as exc:
    if exist_ok and exc.errno == errno.EEXIST:
        return
    raise
Like #os.makedirs(), with *exist_ok* defaulting to #True.
f12717:m13
def chmod_update(flags, modstring):
mapping = {
    '<STR_LIT:r>': (_stat.S_IRUSR, _stat.S_IRGRP, _stat.S_IROTH),
    '<STR_LIT:w>': (_stat.S_IWUSR, _stat.S_IWGRP, _stat.S_IWOTH),
    '<STR_LIT:x>': (_stat.S_IXUSR, _stat.S_IXGRP, _stat.S_IXOTH)
}
target, direction = '<STR_LIT:a>', None
for c in modstring:
    if c in '<STR_LIT>':
        direction = c
        continue
    if c in '<STR_LIT>':
        target = c
        direction = None
        continue
    if c in '<STR_LIT>' and direction in '<STR_LIT>':
        if target == '<STR_LIT:a>':
            mask = functools.reduce(operator.or_, mapping[c])
        else:
            mask = mapping[c]['<STR_LIT>'.index(target)]
        if direction == '<STR_LIT:->':
            flags &= ~mask
        else:
            flags |= mask
        continue
    raise ValueError('<STR_LIT>'.format(modstring))
return flags
Modifies *flags* according to *modstring*.
f12717:m14
def chmod_repr(flags):
template = '<STR_LIT>'
order = (_stat.S_IRUSR, _stat.S_IWUSR, _stat.S_IXUSR,
         _stat.S_IRGRP, _stat.S_IWGRP, _stat.S_IXGRP,
         _stat.S_IROTH, _stat.S_IWOTH, _stat.S_IXOTH)
return '<STR_LIT>'.join(template[i] if flags & x else '<STR_LIT:->'
                        for i, x in enumerate(order))
Returns a string representation of the access flags *flags*.
f12717:m15
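Given the user/group/other bit ordering above, the elided template is almost certainly the conventional `'rwxrwxrwx'`; filling that in gives a runnable sketch:

```python
import stat as _stat

def chmod_repr(flags):
    # One template character per permission bit, in usr/grp/oth order;
    # a dash marks an unset bit, mirroring `ls -l` output.
    template = 'rwxrwxrwx'  # assumed value of the elided literal
    order = (_stat.S_IRUSR, _stat.S_IWUSR, _stat.S_IXUSR,
             _stat.S_IRGRP, _stat.S_IWGRP, _stat.S_IXGRP,
             _stat.S_IROTH, _stat.S_IWOTH, _stat.S_IXOTH)
    return ''.join(template[i] if flags & x else '-'
                   for i, x in enumerate(order))
```

Pairing each bit with its template position avoids any bit arithmetic beyond a single mask test per character.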
def compare_timestamp(src, dst):
try:
    dst_time = os.path.getmtime(dst)
except OSError as exc:
    if exc.errno == errno.ENOENT:
        return True  # dst does not exist, so it is out of date
    raise  # re-raise other errors instead of falling through with dst_time unbound
src_time = os.path.getmtime(src)
return src_time > dst_time
Compares the timestamps of file *src* and *dst*, returning #True if the *dst* is out of date or does not exist. Raises an #OSError if the *src* file does not exist.
f12717:m17
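This is the classic make-style staleness check: a missing destination is always out of date, otherwise modification times are compared. A self-contained sketch (with the non-ENOENT errors re-raised, as the docstring requires):

```python
import errno
import os
import tempfile
import time

def compare_timestamp(src, dst):
    try:
        dst_time = os.path.getmtime(dst)
    except OSError as exc:
        if exc.errno == errno.ENOENT:
            return True  # dst missing -> considered out of date
        raise
    return os.path.getmtime(src) > dst_time

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, 'input.txt')
    dst = os.path.join(d, 'output.txt')
    open(src, 'w').close()
    missing_dst = compare_timestamp(src, dst)   # dst absent
    open(dst, 'w').close()
    future = time.time() + 60
    os.utime(dst, (future, future))             # make dst strictly newer
    fresh_dst = compare_timestamp(src, dst)
```

Note the strict `>` comparison: equal timestamps count as up to date, which matters on filesystems with coarse mtime resolution.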
def dynamic_exec(code, resolve, assign=None, delete=None, automatic_builtins=True,
                 filename=None, module_name=None, _type='<STR_LIT>'):
parse_filename = filename or '<STR_LIT>'
ast_node = transform(ast.parse(code, parse_filename, mode=_type))
code = compile(ast_node, parse_filename, _type)
if hasattr(resolve, '<STR_LIT>'):
    if assign is not None:
        raise TypeError('<STR_LIT>')
    if delete is not None:
        raise TypeError('<STR_LIT>')
    input_mapping = resolve
    def resolve(x):
        try:
            return input_mapping[x]
        except KeyError:
            raise NameError(x)
    assign = input_mapping.__setitem__
    delete = input_mapping.__delitem__
else:
    input_mapping = False

class DynamicMapping(object):
    _data = {}
    _deleted = set()

    def __repr__(self):
        if input_mapping:
            return '<STR_LIT>'.format(input_mapping)
        else:
            return '<STR_LIT>'.format(resolve, assign)

    def __getitem__(self, key):
        if key in self._deleted:
            raise NameError(key)
        if assign is None:
            try:
                return self._data[key]
            except KeyError:
                pass
        try:
            return resolve(key)
        except NameError as exc:
            if automatic_builtins and not key.startswith('<STR_LIT:_>'):
                try:
                    return getattr(builtins, key)
                except AttributeError:
                    pass
            raise exc

    def __setitem__(self, key, value):
        self._deleted.discard(key)
        if assign is None:
            self._data[key] = value
        else:
            assign(key, value)

    def __delitem__(self, key):
        if delete is None:
            self._deleted.add(key)
        else:
            delete(key)

    def get(self, key, default=None):
        try:
            return self[key]
        except NameError:
            return default

mapping = DynamicMapping()
globals_ = {'<STR_LIT>': mapping}
if filename:
    mapping['<STR_LIT>'] = filename
    globals_['<STR_LIT>'] = filename
if module_name:
    mapping['<STR_LIT>'] = module_name
    globals_['<STR_LIT>'] = module_name
return (exec_ if _type == '<STR_LIT>' else eval)(code, globals_)
Transforms the Python source code *code* and evaluates it so that the *resolve* and *assign* functions are called respectively when a global variable is accessed or assigned.

If *resolve* is a mapping, *assign* must be omitted. #KeyError#s raised by the mapping are automatically converted to #NameError#s. Otherwise, *resolve* and *assign* must be callables that have the same interface as `__getitem__()` and `__setitem__()`. If *assign* is omitted in that case, assignments will be redirected to a separate dictionary, and keys in that dictionary will be checked before continuing with the *resolve* callback.
f12719:m2
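The library achieves dynamic name resolution by rewriting the AST so that name accesses become subscripts on a mapping object. A much simpler illustration of the underlying idea, using a dict subclass as the globals of `eval()` (this relies on CPython consulting `__missing__` for non-exact-dict globals at top level; it is not the library's actual mechanism):

```python
class ResolvingGlobals(dict):
    # For dict subclasses, top-level name lookups go through __getitem__,
    # so __missing__ lets us resolve unknown names on the fly.
    def __init__(self, resolve):
        super().__init__()
        self.resolve = resolve

    def __missing__(self, key):
        return self.resolve(key)

env = ResolvingGlobals(lambda name: name.upper())
result = eval('foo + "_" + bar', env)  # 'foo' and 'bar' resolved dynamically
```

The AST-rewriting approach in the module above is more robust: it also intercepts assignments, deletions, and names inside nested functions, which the dict-subclass trick does not cover.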
def __get_subscript(self, name, ctx=None):
assert isinstance(name, string_types), name
return ast.Subscript(
    value=ast.Name(id=self.data_var, ctx=ast.Load()),
    slice=ast.Index(value=ast.Str(s=name)),
    ctx=ctx)
Returns `<data_var>["<name>"]`
f12719:c0:m6
def __get_subscript_assign(self, name):
return ast.Assign(
    targets=[self.__get_subscript(name, ast.Store())],
    value=ast.Name(id=name, ctx=ast.Load()))
Returns `<data_var>["<name>"] = <name>`.
f12719:c0:m7
def __get_subscript_delete(self, name):
return ast.Delete(targets=[self.__get_subscript(name, ast.Del())])
Returns `del <data_var>["<name>"]`.
f12719:c0:m8
def __visit_target(self, node):
if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
    self.__add_variable(node.id)
elif isinstance(node, (ast.Tuple, ast.List)):
    for x in node.elts:
        self.__visit_target(x)
Call this method to visit assignment targets and to add local variables to the current stack frame. Used in #visit_Assign() and #__visit_comprehension().
f12719:c0:m9
def run_as_admin(command, cwd=None, environ=None):
if isinstance(command, str):<EOL><INDENT>command = shlex.split(command)<EOL><DEDENT>if os.name == '<STR_LIT>':<EOL><INDENT>return _run_as_admin_windows(command, cwd, environ)<EOL><DEDENT>elif os.name == '<STR_LIT>':<EOL><INDENT>command = ['<STR_LIT>', '<STR_LIT>'] + list(command)<EOL>sys.exit(subprocess.call(command))<EOL><DEDENT>else:<EOL><INDENT>raise RuntimeError('<STR_LIT>'.format(os.name))<EOL><DEDENT>
Runs a command as an admin in the specified *cwd* and *environ*. On Windows, this creates a temporary directory where this information is stored temporarily so that the new process can launch the proper subprocess.
f12722:m3
def parse_config(filename):
tag = None<EOL>branch = None<EOL>message = '<STR_LIT>'<EOL>upgrades = {}<EOL>subs = {}<EOL>with open(filename) as fp:<EOL><INDENT>for i, line in enumerate(fp):<EOL><INDENT>line = line.strip()<EOL>if not line or line.startswith('<STR_LIT:#>'): continue<EOL>key, sep, value = line.partition('<STR_LIT:U+0020>')<EOL>if not key or not value:<EOL><INDENT>raise ValueError('<STR_LIT>'.format(i+<NUM_LIT:1>))<EOL><DEDENT>if key == '<STR_LIT>':<EOL><INDENT>tag = value.strip()<EOL><DEDENT>elif key == '<STR_LIT>':<EOL><INDENT>branch = value.strip()<EOL><DEDENT>elif key == '<STR_LIT:message>':<EOL><INDENT>message = value.strip()<EOL><DEDENT>elif key == '<STR_LIT>':<EOL><INDENT>filename, sep, pattern = value.partition('<STR_LIT::>')<EOL>if not filename or not sep or not pattern or '<STR_LIT>' not in pattern:<EOL><INDENT>raise ValueError('<STR_LIT>'.format(i+<NUM_LIT:1>))<EOL><DEDENT>upgrade = upgrades.setdefault(filename, [])<EOL>upgrade.append(pattern)<EOL><DEDENT>elif key == '<STR_LIT>':<EOL><INDENT>filename, sep, pattern = value.partition('<STR_LIT::>')<EOL>pattern = pattern.partition('<STR_LIT::>')[::<NUM_LIT:2>]<EOL>if not pattern[<NUM_LIT:0>] or not pattern[<NUM_LIT:1>]:<EOL><INDENT>raise ValueError('<STR_LIT>'.format(i+<NUM_LIT:1>))<EOL><DEDENT>subs.setdefault(filename, []).append(pattern)<EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>'.format(key, i+<NUM_LIT:1>))<EOL><DEDENT><DEDENT><DEDENT>return Config(tag, branch, message, upgrades, subs)<EOL>
Parses a versionupgrade configuration file. Example: tag v{VERSION} branch v{VERSION} message Prepare {VERSION} release upgrade setup.py: version = '{VERSION}' upgrade __init__.py:__version__ = '{VERSION}' sub docs/changelog/v{VERSION}.md:# v{VERSION} (unreleased):# v{VERSION} ({DATE}) Available commands: - tag: Create a Git tag with the specified name. - branch: Create a Git branch with the specified name. - message: The commit message for upgraded version numbers. - upgrade: Upgrade the version number in the file matching the pattern. The same file may be listed multiple times. The pattern may actually be a regular expression and will be searched in every line of the file. - sub: Specify a file where the part of the file matching the first string will be replaced by the second string. Returns a #Config object.
f12725:m0
def match_version_pattern(filename, pattern):
if "<STR_LIT>" not in pattern:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>pattern = pattern.replace('<STR_LIT>', '<STR_LIT>')<EOL>expr = re.compile(pattern)<EOL>with open(filename) as fp:<EOL><INDENT>lines = fp.read().split('<STR_LIT:\n>')<EOL><DEDENT>for i, line in enumerate(lines):<EOL><INDENT>match = expr.search(line)<EOL>if match:<EOL><INDENT>return Match(filename, lines, line_index=i,<EOL>version=Version(match.group('<STR_LIT:v>')), span=match.span('<STR_LIT:v>'))<EOL><DEDENT><DEDENT>return None<EOL>
Matches a single version upgrade pattern in the specified *filename* and returns the match information. Returns a #Match object or #None if the *pattern* did not match.
f12725:m1
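The placeholder substitution described above can be sketched as follows. The `Match` namedtuple and the `{VERSION}` placeholder syntax are assumptions based on the config examples earlier in this file, and the sketch scans a list of lines instead of opening a file:

```python
import re
from collections import namedtuple

# Hypothetical stand-in for the module's Match type.
Match = namedtuple('Match', 'lines line_index version span')

def match_version_pattern(lines, pattern):
    # Assumed placeholder syntax: "{VERSION}" in the pattern is swapped
    # for a named group capturing a dotted version number.
    expr = re.compile(pattern.replace('{VERSION}', r'(?P<v>[\d.]+)'))
    for i, line in enumerate(lines):
        match = expr.search(line)
        if match:
            return Match(lines, i, match.group('v'), match.span('v'))
    return None

found = match_version_pattern(["name = 'pkg'", "version = '1.2.3'"],
                              r"version = '{VERSION}'")
```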
def get_changed_files(include_staged=False):
process = subprocess.Popen(['<STR_LIT>', '<STR_LIT:status>', '<STR_LIT>'],<EOL>stdout=subprocess.PIPE, stderr=subprocess.STDOUT)<EOL>stdout, __ = process.communicate()<EOL>if process.returncode != <NUM_LIT:0>:<EOL><INDENT>raise ValueError(stdout)<EOL><DEDENT>files = []<EOL>for line in stdout.decode().split('<STR_LIT:\n>'):<EOL><INDENT>if not line or line.startswith('<STR_LIT:#>'): continue<EOL>assert line[<NUM_LIT:2>] == '<STR_LIT:U+0020>'<EOL>if not include_staged and line[<NUM_LIT:1>] == '<STR_LIT:U+0020>': continue<EOL>files.append(line[<NUM_LIT:3>:])<EOL><DEDENT>return files<EOL>
Returns a list of the files that changed in the Git repository. This is used to check if the files that are supposed to be upgraded have changed. If so, the upgrade will be prevented.
f12725:m2
def parse(ignore_file='<STR_LIT>', git_dir='<STR_LIT>', additional_files=(),<EOL>global_=True, root_dir=None, defaults=True):
result = IgnoreListCollection()<EOL>if root_dir is None:<EOL><INDENT>if git_dir is None:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>root_dir = os.path.dirname(os.path.abspath(git_dir))<EOL><DEDENT>def parse(filename, root=None):<EOL><INDENT>if os.path.isfile(filename):<EOL><INDENT>if root is None:<EOL><INDENT>root = os.path.dirname(os.path.abspath(filename))<EOL><DEDENT>with open(filename) as fp:<EOL><INDENT>result.parse(fp, root)<EOL><DEDENT><DEDENT><DEDENT>result.append(IgnoreList(root_dir))<EOL>if ignore_file is not None:<EOL><INDENT>parse(ignore_file)<EOL><DEDENT>for filename in additional_files:<EOL><INDENT>parse(filename)<EOL><DEDENT>if git_dir is not None:<EOL><INDENT>parse(os.path.join(git_dir, '<STR_LIT:info>', '<STR_LIT>'), root_dir)<EOL><DEDENT>if global_:<EOL><INDENT>parse(os.path.expanduser('<STR_LIT>'), root_dir)<EOL><DEDENT>if defaults:<EOL><INDENT>result.append(get_defaults(root_dir))<EOL><DEDENT>return result<EOL>
Collects a list of all ignore patterns configured in a local Git repository as specified in the Git documentation. See https://git-scm.com/docs/gitignore#_description The returned #IgnoreListCollection is guaranteed to contain at least one #IgnoreList with #IgnoreList.root pointing to the specified *root_dir* (which defaults to the parent directory of *git_dir*) as the first element.
f12726:m0
def get_defaults(root):
defaults = IgnoreList(root)<EOL>defaults.parse([<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>'<EOL>])<EOL>return defaults<EOL>
Returns a default #IgnoreList which excludes common SCM files and directories.
f12726:m1
def walk(patterns, dirname):
join = os.path.join<EOL>for root, dirs, files in os.walk(dirname, topdown=True):<EOL><INDENT>dirs[:] = [d for d in dirs if patterns.match(join(root, d), True) != MATCH_IGNORE]<EOL>files[:] = [f for f in files if patterns.match(join(root, f), False) != MATCH_IGNORE]<EOL>yield root, dirs, files<EOL><DEDENT>
Like #os.walk(), but filters the files and directories that are excluded by the specified *patterns*. # Arguments patterns (IgnoreList, IgnoreListCollection): Can also be any object that implements the #IgnoreList.match() interface. dirname (str): The directory to walk.
f12726:m2
def parse(self, lines):
if isinstance(lines, str):<EOL><INDENT>lines = lines.split('<STR_LIT:\n>')<EOL><DEDENT>sub = _re.sub<EOL>for line in lines:<EOL><INDENT>if line.endswith('<STR_LIT:\n>'):<EOL><INDENT>line = line[:-<NUM_LIT:1>]<EOL><DEDENT>line = line.lstrip()<EOL>if not line.startswith('<STR_LIT:#>'):<EOL><INDENT>invert = False<EOL>if line.startswith('<STR_LIT:!>'):<EOL><INDENT>line = line[<NUM_LIT:1>:]<EOL>invert = True<EOL><DEDENT>while line.endswith('<STR_LIT:U+0020>') and line[-<NUM_LIT:2>:] != '<STR_LIT>':<EOL><INDENT>line = line[:-<NUM_LIT:1>]<EOL><DEDENT>line = sub(r'<STR_LIT>', r'<STR_LIT>', line)<EOL>if '<STR_LIT:/>' in line and not line.startswith('<STR_LIT:/>'):<EOL><INDENT>line = '<STR_LIT:/>' + line<EOL><DEDENT>self.patterns.append(Pattern(line, invert))<EOL><DEDENT><DEDENT>
Parses the `.gitignore` file represented by the *lines*.
f12726:c1:m3
def match(self, filename, isdir):
fnmatch = _fnmatch.fnmatch<EOL>ignored = False<EOL>filename = self.convert_path(filename)<EOL>basename = os.path.basename(filename)<EOL>for pattern in self.patterns:<EOL><INDENT>if pattern.dir_only and not isdir:<EOL><INDENT>continue<EOL><DEDENT>if (not ignored or pattern.invert) and pattern.match(filename):<EOL><INDENT>if pattern.invert: <EOL><INDENT>return MATCH_INCLUDE<EOL><DEDENT>ignored = True<EOL><DEDENT><DEDENT>if ignored:<EOL><INDENT>return MATCH_IGNORE<EOL><DEDENT>else:<EOL><INDENT>return MATCH_DEFAULT<EOL><DEDENT>
Match the specified *filename*. If *isdir* is False, directory-only patterns will be ignored. Returns one of - #MATCH_DEFAULT - #MATCH_IGNORE - #MATCH_INCLUDE
f12726:c1:m5
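The matching rule above (a negated pattern immediately re-includes a file, while any other matching pattern marks it ignored) can be sketched with plain `fnmatch` globs. The tuple-based pattern representation here is a simplification, not the module's actual `Pattern` class:

```python
import fnmatch

MATCH_DEFAULT, MATCH_IGNORE, MATCH_INCLUDE = 0, 1, 2

def match(patterns, filename, isdir=False):
    # patterns: list of (glob, invert, dir_only) tuples (simplified model).
    ignored = False
    for glob, invert, dir_only in patterns:
        if dir_only and not isdir:
            continue
        if (not ignored or invert) and fnmatch.fnmatch(filename, glob):
            if invert:
                return MATCH_INCLUDE   # a "!pattern" re-includes the file
            ignored = True
    return MATCH_IGNORE if ignored else MATCH_DEFAULT

r1 = match([('*.pyc', False, False)], 'a.pyc')
r2 = match([('*.pyc', False, False), ('keep.pyc', True, False)], 'keep.pyc')
r3 = match([('build', False, True)], 'build', isdir=False)
```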
def parse(self, lines, root):
lst = IgnoreList(root)<EOL>lst.parse(lines)<EOL>self.append(lst)<EOL>
Shortcut for #IgnoreList.parse() and #IgnoreListCollection.append().
f12726:c2:m0
def match(self, filename, isdir=False):
for lst in self:<EOL><INDENT>result = lst.match(filename, isdir)<EOL>if result != MATCH_DEFAULT:<EOL><INDENT>return result<EOL><DEDENT><DEDENT>return MATCH_DEFAULT<EOL>
Match all the #IgnoreList#s in this collection. Returns one of - #MATCH_DEFAULT - #MATCH_IGNORE - #MATCH_INCLUDE
f12726:c2:m1
def get_staticmethod_func(cm):
if hasattr(cm, '<STR_LIT>'):<EOL><INDENT>return cm.__func__<EOL><DEDENT>else:<EOL><INDENT>return cm.__get__(int)<EOL><DEDENT>
Returns the function wrapped by the #staticmethod *cm*.
f12728:m0
def unpack_text_io_wrapper(fp, encoding):
if isinstance(fp, io.TextIOWrapper):<EOL><INDENT>if fp.writable() and encoding is not None and fp.encoding != encoding:<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise RuntimeError(msg.format(fp.encoding, encoding))<EOL><DEDENT>if encoding is None:<EOL><INDENT>encoding = fp.encoding<EOL><DEDENT>fp = fp.buffer<EOL><DEDENT>return fp, encoding<EOL>
If *fp* is a #io.TextIOWrapper object, this function returns the underlying binary stream and the encoding of the IO-wrapper object. If *encoding* is not None and does not match with the encoding specified in the IO-wrapper, a #RuntimeError is raised.
f12730:m0
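A minimal sketch of the unwrapping logic, assuming only the behavior the docstring describes and omitting the writability check:

```python
import io

def unwrap_text_io(fp, encoding=None):
    # If *fp* wraps a binary stream, return the underlying buffer and
    # the effective encoding; otherwise pass both through unchanged.
    if isinstance(fp, io.TextIOWrapper):
        if encoding is not None and fp.encoding != encoding:
            raise RuntimeError('encoding mismatch: %s vs %s'
                               % (fp.encoding, encoding))
        return fp.buffer, fp.encoding
    return fp, encoding

raw = io.BytesIO()
wrapped = io.TextIOWrapper(raw, encoding='utf-8')
buf, enc = unwrap_text_io(wrapped)

plain = io.BytesIO()
buf2, enc2 = unwrap_text_io(plain, 'ascii')   # non-wrapper passes through
```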
def __call__(self, value):
if self.type is None:<EOL><INDENT>return value<EOL><DEDENT>if self.null and value is None:<EOL><INDENT>return value<EOL><DEDENT>if isinstance(value, self.type):<EOL><INDENT>return value<EOL><DEDENT>func = self.adapter or self.type<EOL>try:<EOL><INDENT>return func(value)<EOL><DEDENT>except (ValueError, TypeError) as exc:<EOL><INDENT>raise ValidationError(self, value, exc)<EOL><DEDENT>
Calls the #adapter function of the field. If the adapter is not defined, the #type type/function is called instead. If *value* is already an instance of the field's #type, *value* is returned as-is.
f12731:c0:m2
def check_type(self, value):
if self.null and value is None:<EOL><INDENT>return<EOL><DEDENT>if self.type is not None and not isinstance(value, self.type):<EOL><INDENT>msg = '<STR_LIT>'<EOL>raise TypeError(msg.format(self.full_name, self.type.__name__))<EOL><DEDENT>
Raises a #TypeError if *value* is not an instance of the field's #type.
f12731:c0:m3
def get_default(self):
if self.default is not NotImplemented:<EOL><INDENT>return self.default<EOL><DEDENT>elif self.default_factory is not None:<EOL><INDENT>return self.default_factory()<EOL><DEDENT>else:<EOL><INDENT>raise RuntimeError('<STR_LIT>'.format(self.full_name))<EOL><DEDENT>
Return the default value of the field: either #default or the return value of #default_factory. Raises a #RuntimeError if the field has no default value.
f12731:c0:m4
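The resolution order (explicit #default first, then #default_factory, otherwise an error) can be sketched as follows; this `Field` is a minimal stand-in, not the real class:

```python
class Field:
    # Minimal sketch of default/default_factory resolution.
    def __init__(self, default=NotImplemented, default_factory=None):
        self.default = default
        self.default_factory = default_factory

    def get_default(self):
        if self.default is not NotImplemented:
            return self.default
        if self.default_factory is not None:
            return self.default_factory()  # fresh value per call (e.g. list)
        raise RuntimeError('field has no default value')

a = Field(default=42).get_default()
b = Field(default_factory=list).get_default()
```

Using a factory rather than a shared mutable default gives each call its own object, the same reason `dataclasses` distinguishes `default` from `default_factory`.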
@property<EOL><INDENT>def has_default(self):<DEDENT>
return self.default is not NotImplemented or self.default_factory is not None<EOL>
Returns True if this field supplies a default value, False if not.
f12731:c0:m5
@property<EOL><INDENT>def full_name(self):<DEDENT>
entity = self.entity.__name__ if self.entity is not None else None<EOL>name = self.name if self.name is not None else None<EOL>if entity and name:<EOL><INDENT>return entity + '<STR_LIT:.>' + name<EOL><DEDENT>elif entity:<EOL><INDENT>return entity + '<STR_LIT>'<EOL><DEDENT>elif name:<EOL><INDENT>return '<STR_LIT>' + name<EOL><DEDENT>else:<EOL><INDENT>return '<STR_LIT>'<EOL><DEDENT>
The full name of the field. This is the name of the field's entity concatenated with the field's name. If the field is unnamed or not bound to an entity, the respective part of the result contains None.
f12731:c0:m6
@property<EOL><INDENT>def type_name(self):<DEDENT>
res = self.type.__name__<EOL>if self.type.__module__ not in ('<STR_LIT>', '<STR_LIT>'):<EOL><INDENT>res = self.type.__module__ + '<STR_LIT:.>' + res<EOL><DEDENT>return res<EOL>
Returns the full type identifier of the field.
f12731:c0:m7
def __iter__(self):
values = sorted(self._values.values(), key=lambda x: x.value)<EOL>return iter(values)<EOL>
Iterator over value-sorted enumeration values.
f12733:c2:m1
def __new__(cls, value, _allow_fallback=True):
<EOL>if isinstance(value, compat.integer_types):<EOL><INDENT>try:<EOL><INDENT>value = cls._values[value]<EOL><DEDENT>except KeyError:<EOL><INDENT>if _allow_fallback and cls.__fallback__ is not None:<EOL><INDENT>return cls.__fallback__<EOL><DEDENT>raise NoSuchEnumerationValue(cls.__name__, value)<EOL><DEDENT><DEDENT>elif isinstance(value, string_types):<EOL><INDENT>try:<EOL><INDENT>new_value = getattr(cls, value)<EOL>if type(new_value) != cls:<EOL><INDENT>raise AttributeError<EOL><DEDENT><DEDENT>except AttributeError:<EOL><INDENT>raise NoSuchEnumerationValue(cls.__name__, value)<EOL><DEDENT>value = new_value<EOL><DEDENT>if type(value) == cls:<EOL><INDENT>return value<EOL><DEDENT>raise TypeError('<STR_LIT>' % (cls.__name__, type(value).__name__))<EOL>
Creates a new instance of the Enumeration. *value* must be the integral value or the name of one of the existing enumeration values, or an existing instance of the enumeration. #NoSuchEnumerationValue is raised in any other case. If a fallback was defined, it is returned only if *value* is an integer, not if it is a string.
f12733:c3:m0
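The lookup rules (integers resolve through a value map with an optional fallback, strings resolve by attribute name, instances pass through) can be sketched without the metaclass machinery; this `Color` model is illustrative only:

```python
class Color:
    # Minimal model of the lookup rules described above.
    _values = {}
    __fallback__ = None

    def __init__(self, name, value):
        self.name, self.value = name, value
        Color._values[value] = self

    @classmethod
    def resolve(cls, value):
        if isinstance(value, int):
            try:
                return cls._values[value]
            except KeyError:
                if cls.__fallback__ is not None:
                    return cls.__fallback__   # fallback applies to ints only
                raise
        if isinstance(value, str):
            return getattr(cls, value)        # raises for unknown names
        if isinstance(value, cls):
            return value
        raise TypeError(type(value).__name__)

Color.RED = Color('RED', 1)
Color.BLUE = Color('BLUE', 2)
Color.__fallback__ = Color.RED
```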
def register_opener(suffix, opener=None):
if opener is None:<EOL><INDENT>def decorator(func):<EOL><INDENT>register_opener(suffix, func)<EOL>return func<EOL><DEDENT>return decorator<EOL><DEDENT>if suffix in openers:<EOL><INDENT>raise ValueError('<STR_LIT>'.format(suffix))<EOL><DEDENT>openers[suffix] = opener<EOL>
Register a callback that opens an archive with the specified *suffix*. The object returned by the *opener* must implement the #tarfile.TarFile interface, more specifically the following methods: - `add(filename, arcname) -> None` - `getnames() -> list of str` - `getmember(filename) -> TarInfo` - `extractfile(filename) -> file obj` This function can be used as a decorator when *opener* is not provided. The opener must accept the following arguments: %%arglist file (file-like): A file-like object to read the archive data from. mode (str): The mode to open the file in. Valid values are `'w'`, `'r'` and `'a'`. options (dict): A dictionary with possibly additional arguments.
f12737:m0
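The register-or-decorate pattern used here is a common registry idiom: called with only a suffix, the function returns a decorator that re-enters itself. A self-contained sketch with a stand-in opener:

```python
openers = {}

def register_opener(suffix, opener=None):
    # Usable directly or as a decorator when *opener* is omitted.
    if opener is None:
        def decorator(func):
            register_opener(suffix, func)
            return func
        return decorator
    if suffix in openers:
        raise ValueError('suffix already registered: %r' % suffix)
    openers[suffix] = opener

@register_opener('.tar.gz')
def open_targz(file, mode, options):
    return ('targz', mode)   # stand-in for a TarFile-like object

register_opener('.zip', lambda file, mode, options: ('zip', mode))
```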
def get_opener(filename):
for suffix, opener in openers.items():<EOL><INDENT>if filename.endswith(suffix):<EOL><INDENT>return suffix, opener<EOL><DEDENT><DEDENT>raise UnknownArchive(filename)<EOL>
Finds a matching opener that is registered with #register_opener() and returns a tuple `(suffix, opener)`. If there is no opener that can handle this filename, #UnknownArchive is raised.
f12737:m1
def open(filename=None, file=None, mode='<STR_LIT:r>', suffix=None, options=None):
if mode not in ('<STR_LIT:r>', '<STR_LIT:w>', '<STR_LIT:a>'):<EOL><INDENT>raise ValueError("<STR_LIT>".format(mode))<EOL><DEDENT>if suffix is None:<EOL><INDENT>suffix, opener = get_opener(filename)<EOL>if file is not None:<EOL><INDENT>filename = None <EOL><DEDENT><DEDENT>else:<EOL><INDENT>if file is not None and filename is not None:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>try:<EOL><INDENT>opener = openers[suffix]<EOL><DEDENT>except KeyError:<EOL><INDENT>raise UnknownArchive(suffix)<EOL><DEDENT><DEDENT>if options is None:<EOL><INDENT>options = {}<EOL><DEDENT>if file is not None:<EOL><INDENT>if mode in '<STR_LIT>' and not hasattr(file, '<STR_LIT>'):<EOL><INDENT>raise TypeError("<STR_LIT>", file)<EOL><DEDENT>if mode == '<STR_LIT:r>' and not hasattr(file, '<STR_LIT>'):<EOL><INDENT>raise TypeError("<STR_LIT>", file)<EOL><DEDENT><DEDENT>if [filename, file].count(None) != <NUM_LIT:1>:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>if filename is not None:<EOL><INDENT>file = builtins.open(filename, mode + '<STR_LIT:b>')<EOL><DEDENT>try:<EOL><INDENT>return opener(file, mode, options)<EOL><DEDENT>except:<EOL><INDENT>if filename is not None:<EOL><INDENT>file.close()<EOL><DEDENT>raise<EOL><DEDENT>
Opens the archive at the specified *filename* or from the file-like object *file* using the appropriate opener. A specific opener can be specified by passing the *suffix* argument. # Parameters filename (str): A filename to open the archive from. file (file-like): A file-like object as source/destination. mode (str): The mode to open the archive in. suffix (str): Possible override for the *filename* suffix. Must be specified when *file* is passed instead of *filename*. options (dict): A dictionary that will be passed to the opener with which additional options can be specified. return (archive-like): An object that represents the archive and follows the interface of the #tarfile.TarFile class.
f12737:m2
def extract(archive, directory, suffix=None, unpack_single_dir=False,<EOL>check_extract_file=None, progress_callback=None, default_mode='<STR_LIT>'):
if isinstance(archive, str):<EOL><INDENT>with open(archive, suffix=suffix) as archive:<EOL><INDENT>return extract(archive, directory, None, unpack_single_dir,<EOL>check_extract_file, progress_callback, default_mode)<EOL><DEDENT><DEDENT>if isinstance(default_mode, str):<EOL><INDENT>default_mode = int(default_mode, <NUM_LIT:8>)<EOL><DEDENT>if progress_callback:<EOL><INDENT>progress_callback(-<NUM_LIT:1>, <NUM_LIT:0>, None)<EOL><DEDENT>names = archive.getnames()<EOL>toplevel_dirs = set()<EOL>for name in names:<EOL><INDENT>parts = name.split('<STR_LIT:/>')<EOL>if len(parts) > <NUM_LIT:1>:<EOL><INDENT>toplevel_dirs.add(parts[<NUM_LIT:0>])<EOL><DEDENT><DEDENT>if unpack_single_dir and len(toplevel_dirs) == <NUM_LIT:1>:<EOL><INDENT>stripdir = next(iter(toplevel_dirs)) + '<STR_LIT:/>'<EOL><DEDENT>else:<EOL><INDENT>stripdir = None<EOL><DEDENT>for index, name in enumerate(names):<EOL><INDENT>if progress_callback:<EOL><INDENT>progress_callback(index + <NUM_LIT:1>, len(names), name)<EOL><DEDENT>if name.startswith('<STR_LIT:..>') or name.startswith('<STR_LIT:/>') or os.path.isabs(name):<EOL><INDENT>continue<EOL><DEDENT>if check_extract_file and not check_extract_file(name):<EOL><INDENT>continue<EOL><DEDENT>if name.endswith('<STR_LIT:/>'):<EOL><INDENT>continue<EOL><DEDENT>if stripdir:<EOL><INDENT>filename = name[len(stripdir):]<EOL>if not filename:<EOL><INDENT>continue<EOL><DEDENT><DEDENT>else:<EOL><INDENT>filename = name<EOL><DEDENT>info = archive.getmember(name)<EOL>src = archive.extractfile(name)<EOL>if not src:<EOL><INDENT>continue<EOL><DEDENT>try:<EOL><INDENT>filename = os.path.join(directory, filename)<EOL>dirname = os.path.dirname(filename)<EOL>if not os.path.exists(dirname):<EOL><INDENT>os.makedirs(dirname)<EOL><DEDENT>with builtins.open(filename, '<STR_LIT:wb>') as dst:<EOL><INDENT>shutil.copyfileobj(src, dst)<EOL><DEDENT>os.chmod(filename, info.mode or default_mode)<EOL>os.utime(filename, (-<NUM_LIT:1>, 
info.mtime))<EOL><DEDENT>finally:<EOL><INDENT>src.close()<EOL><DEDENT><DEDENT>if progress_callback:<EOL><INDENT>progress_callback(len(names), len(names), None)<EOL><DEDENT>
Extract the contents of *archive* to the specified *directory*. This function ensures that no file is extracted outside of the target directory (which can theoretically happen if the arcname is not relative or points to a parent directory). # Parameters archive (str, archive-like): The filename of an archive or an already opened archive. directory (str): Path to the directory to unpack the contents to. unpack_single_dir (bool): If this is True and if the archive contains only a single top-level directory, its contents will be placed directly into the target *directory*.
f12737:m3
def seek(self, offset, mode='<STR_LIT>', renew=False):
mapping = {os.SEEK_SET: '<STR_LIT>', os.SEEK_CUR: '<STR_LIT>', os.SEEK_END: '<STR_LIT:end>'}<EOL>mode = mapping.get(mode, mode)<EOL>if mode not in ('<STR_LIT>', '<STR_LIT>', '<STR_LIT:end>'):<EOL><INDENT>raise ValueError('<STR_LIT>'.format(mode))<EOL><DEDENT>if mode == '<STR_LIT:end>':<EOL><INDENT>offset = len(self.text) + offset<EOL>mode = '<STR_LIT>'<EOL><DEDENT>elif mode == '<STR_LIT>':<EOL><INDENT>offset = self.index + offset<EOL>mode = '<STR_LIT>'<EOL><DEDENT>assert mode == '<STR_LIT>'<EOL>if offset < <NUM_LIT:0>:<EOL><INDENT>offset = <NUM_LIT:0><EOL><DEDENT>elif offset > len(self.text):<EOL><INDENT>offset = len(self.text) + <NUM_LIT:1><EOL><DEDENT>if self.index == offset:<EOL><INDENT>return<EOL><DEDENT>if offset <= abs(self.index - offset):<EOL><INDENT>text, index, lineno, colno = self.text, <NUM_LIT:0>, <NUM_LIT:1>, <NUM_LIT:0><EOL>while index != offset:<EOL><INDENT>nli = text.find('<STR_LIT:\n>', index)<EOL>if nli >= offset or nli < <NUM_LIT:0>:<EOL><INDENT>colno = offset - index<EOL>index = offset<EOL>break<EOL><DEDENT>else:<EOL><INDENT>colno = <NUM_LIT:0><EOL>lineno += <NUM_LIT:1><EOL>index = nli + <NUM_LIT:1><EOL><DEDENT><DEDENT><DEDENT>else:<EOL><INDENT>text, index, lineno, colno = self.text, self.index, self.lineno, self.colno<EOL>if offset < index: <EOL><INDENT>while index != offset:<EOL><INDENT>nli = text.rfind('<STR_LIT:\n>', <NUM_LIT:0>, index)<EOL>if nli < <NUM_LIT:0> or nli <= offset:<EOL><INDENT>if text[offset] == '<STR_LIT:\n>':<EOL><INDENT>assert (offset - nli) == <NUM_LIT:0>, (offset, nli)<EOL>nli = text.rfind('<STR_LIT:\n>', <NUM_LIT:0>, index-<NUM_LIT:1>)<EOL>lineno -= <NUM_LIT:1><EOL><DEDENT>colno = offset - nli - <NUM_LIT:1><EOL>index = offset<EOL>break<EOL><DEDENT>else:<EOL><INDENT>lineno -= <NUM_LIT:1><EOL>index = nli - <NUM_LIT:1><EOL><DEDENT><DEDENT><DEDENT>else: <EOL><INDENT>while index != offset:<EOL><INDENT>nli = text.find('<STR_LIT:\n>', index)<EOL>if nli < <NUM_LIT:0> or nli >= offset:<EOL><INDENT>colno = offset - index<EOL>index 
= offset<EOL><DEDENT>else:<EOL><INDENT>lineno += <NUM_LIT:1><EOL>index = nli + <NUM_LIT:1><EOL><DEDENT><DEDENT><DEDENT><DEDENT>assert lineno >= <NUM_LIT:1><EOL>assert colno >= <NUM_LIT:0><EOL>assert index == offset<EOL>self.index, self.lineno, self.colno = index, lineno, colno<EOL>
Moves the cursor of the Scanner to or by *offset* depending on the *mode*. It is similar to a file's `seek()` function, however the *mode* parameter also accepts the string-mode values `'set'`, `'cur'` and `'end'`. Note that even for the `'end'` mode, the *offset* must be negative to actually reach back up from the end of the file. If *renew* is set to True, the line and column counting will always begin from the start of the file. Keep in mind that this can be very slow because it has to go through each and every character until the desired position is reached. Otherwise, if *renew* is set to False, the scanner decides whether counting from the start or from the current cursor position is shorter.
f12738:c0:m5
def next(self):
char = self.char<EOL>if char == '<STR_LIT:\n>':<EOL><INDENT>self.lineno += <NUM_LIT:1><EOL>self.colno = <NUM_LIT:0><EOL><DEDENT>else:<EOL><INDENT>self.colno += <NUM_LIT:1><EOL><DEDENT>self.index += <NUM_LIT:1><EOL>return self.char<EOL>
Move on to the next character in the text.
f12738:c0:m6
def readline(self):
start = end = self.index<EOL>while end < len(self.text):<EOL><INDENT>if self.text[end] == '<STR_LIT:\n>':<EOL><INDENT>end += <NUM_LIT:1><EOL>break<EOL><DEDENT>end += <NUM_LIT:1><EOL><DEDENT>result = self.text[start:end]<EOL>self.index = end<EOL>if result.endswith('<STR_LIT:\n>'):<EOL><INDENT>self.colno = <NUM_LIT:0><EOL>self.lineno += <NUM_LIT:1><EOL><DEDENT>else:<EOL><INDENT>self.colno += end - start<EOL><DEDENT>return result<EOL>
Reads a full line from the scanner and returns it.
f12738:c0:m7
def match(self, regex, flags=<NUM_LIT:0>):
if isinstance(regex, str):<EOL><INDENT>regex = re.compile(regex, flags)<EOL><DEDENT>match = regex.match(self.text, self.index)<EOL>if not match:<EOL><INDENT>return None<EOL><DEDENT>start, end = match.start(), match.end()<EOL>lines = self.text.count('<STR_LIT:\n>', start, end)<EOL>self.index = end<EOL>if lines:<EOL><INDENT>self.colno = end - self.text.rfind('<STR_LIT:\n>', start, end) - <NUM_LIT:1><EOL>self.lineno += lines<EOL><DEDENT>else:<EOL><INDENT>self.colno += end - start<EOL><DEDENT>return match<EOL>
Matches the specified *regex* from the current character of the *scanner* and returns the result. The Scanner's column and line numbers are updated accordingly. # Arguments regex (str, Pattern): The regex to match. flags (int): The flags to use when compiling the pattern.
f12738:c0:m8
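The line/column bookkeeping after a multi-line match works by counting the newlines inside the matched span and measuring the distance from the last one. A minimal sketch of just that logic:

```python
import re

class Scanner:
    # Minimal sketch: regex matching with line/column bookkeeping.
    def __init__(self, text):
        self.text, self.index, self.lineno, self.colno = text, 0, 1, 0

    def match(self, regex, flags=0):
        if isinstance(regex, str):
            regex = re.compile(regex, flags)
        m = regex.match(self.text, self.index)
        if not m:
            return None
        start, end = m.start(), m.end()
        lines = self.text.count('\n', start, end)
        self.index = end
        if lines:
            # column restarts after the last newline inside the match
            self.colno = end - self.text.rfind('\n', start, end) - 1
            self.lineno += lines
        else:
            self.colno += end - start
        return m

s = Scanner('abc\ndef')
m1 = s.match(r'abc\n')   # crosses a newline: lineno bumps, colno resets
m2 = s.match(r'de')      # same line: colno advances by the match length
```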
def getmatch(self, regex, group=<NUM_LIT:0>, flags=<NUM_LIT:0>):
match = self.match(regex, flags)<EOL>if match:<EOL><INDENT>return match.group(group)<EOL><DEDENT>return None<EOL>
The same as #Scanner.match(), but returns the captured group rather than the regex match object, or None if the pattern didn't match.
f12738:c0:m9
def restore(self, cursor):
if not isinstance(cursor, Cursor):<EOL><INDENT>raise TypeError('<STR_LIT>', type(cursor))<EOL><DEDENT>self.index, self.lineno, self.colno = cursor<EOL>
Moves the scanner back (or forward) to the specified cursor location.
f12738:c0:m10
def update(self):
self.rules_map = {}<EOL>self.skippable_rules = []<EOL>for rule in self.rules:<EOL><INDENT>if not isinstance(rule, Rule):<EOL><INDENT>raise TypeError('<STR_LIT>', type(rule))<EOL><DEDENT>self.rules_map.setdefault(rule.name, []).append(rule)<EOL>if rule.skip:<EOL><INDENT>self.skippable_rules.append(rule)<EOL><DEDENT><DEDENT>
Updates the #rules_map dictionary and #skippable_rules list based on the #rules list. Must be called after #rules or any of its items have been modified. The same rule name may appear multiple times. # Raises TypeError: if an item in the `rules` list is not a rule.
f12738:c1:m4
def expect(self, *names):
if not names:<EOL><INDENT>return<EOL><DEDENT>if not self.token or self.token.type not in names:<EOL><INDENT>raise UnexpectedTokenError(names, self.token)<EOL><DEDENT>
Checks if the current #token#s type name matches with any of the specified *names*. This is useful for asserting multiple valid token types at a specific point in the parsing process. # Arguments names (str): One or more token type names. If zero are passed, nothing happens. # Raises UnexpectedTokenError: If the current #token#s type name does not match with any of the specified *names*.
f12738:c1:m5
def accept(self, *names, **kwargs):
return self.next(*names, as_accept=True, **kwargs)<EOL>
Extracts a token of one of the specified rule names and doesn't error if unsuccessful. Skippable tokens might still be skipped by this method. # Arguments names (str): One or more token names that are accepted. kwargs: Additional keyword arguments for #next(). # Raises ValueError: if a rule with the specified name doesn't exist.
f12738:c1:m6
def next(self, *expectation, **kwargs):
as_accept = kwargs.pop('<STR_LIT>', False)<EOL>weighted = kwargs.pop('<STR_LIT>', False)<EOL>for key in kwargs:<EOL><INDENT>raise TypeError('<STR_LIT>'.format(key))<EOL><DEDENT>if self.token and self.token.type == eof:<EOL><INDENT>if not as_accept and expectation and eof not in expectation:<EOL><INDENT>raise UnexpectedTokenError(expectation, self.token)<EOL><DEDENT>elif as_accept and eof in expectation:<EOL><INDENT>return self.token<EOL><DEDENT>elif as_accept:<EOL><INDENT>return None<EOL><DEDENT>return self.token<EOL><DEDENT>token = None<EOL>while token is None:<EOL><INDENT>cursor = self.scanner.cursor<EOL>if not self.scanner:<EOL><INDENT>token = Token(eof, cursor, None, None)<EOL>break<EOL><DEDENT>value = None<EOL>if weighted:<EOL><INDENT>for rule_name in expectation:<EOL><INDENT>if rule_name == eof:<EOL><INDENT>continue<EOL><DEDENT>rules = self.rules_map.get(rule_name)<EOL>if rules is None:<EOL><INDENT>raise ValueError('<STR_LIT>', rule_name)<EOL><DEDENT>for rule in rules:<EOL><INDENT>value = rule.tokenize(self.scanner)<EOL>if value:<EOL><INDENT>break<EOL><DEDENT><DEDENT>if value:<EOL><INDENT>break<EOL><DEDENT>self.scanner.restore(cursor)<EOL><DEDENT><DEDENT>if not value:<EOL><INDENT>if as_accept and weighted:<EOL><INDENT>check_rules = self.skippable_rules<EOL><DEDENT>else:<EOL><INDENT>check_rules = self.rules<EOL><DEDENT>for rule in check_rules:<EOL><INDENT>if weighted and expectation and rule.name in expectation:<EOL><INDENT>continue<EOL><DEDENT>value = rule.tokenize(self.scanner)<EOL>if value:<EOL><INDENT>break<EOL><DEDENT>self.scanner.restore(cursor)<EOL><DEDENT><DEDENT>if not value:<EOL><INDENT>if as_accept:<EOL><INDENT>return None<EOL><DEDENT>token = Token(None, cursor, self.scanner.char, None)<EOL><DEDENT>else:<EOL><INDENT>assert rule, "<STR_LIT>"<EOL>if type(value) is not Token:<EOL><INDENT>if isinstance(value, tuple):<EOL><INDENT>value, string_repr = value<EOL><DEDENT>else:<EOL><INDENT>string_repr = None<EOL><DEDENT>value = Token(rule.name, cursor, 
value, string_repr)<EOL><DEDENT>token = value<EOL>expected = rule.name in expectation<EOL>if not expected and rule.skip:<EOL><INDENT>token = None<EOL><DEDENT>elif not expected and as_accept:<EOL><INDENT>self.scanner.restore(cursor)<EOL>return None<EOL><DEDENT><DEDENT><DEDENT>self.token = token<EOL>if as_accept and token and token.type == eof:<EOL><INDENT>if eof in expectation:<EOL><INDENT>return token<EOL><DEDENT>return None<EOL><DEDENT>if token.type is None:<EOL><INDENT>raise TokenizationError(token)<EOL><DEDENT>if not as_accept and expectation and token.type not in expectation:<EOL><INDENT>raise UnexpectedTokenError(expectation, token)<EOL><DEDENT>assert not as_accept or (token and token.type in expectation)<EOL>return token<EOL>
Parses the next token from the input and returns it. The new token can be accessed from the #token attribute after the method was called. If one or more arguments are specified, they must be rule names that are to be expected at the current position. They will be attempted to be matched first (in the specified order). If the expectation could not be met, an #UnexpectedTokenError is raised. An expected Token will not be skipped, even if its rule defines it so. # Arguments expectation (str): The name of one or more rules that are expected from the current position of the parser. If empty, the first matching token of ALL rules will be returned. In this case, skippable tokens will be skipped. as_accept (bool): If passed True, this method behaves the same as the #accept() method. The default value is #False. weighted (bool): If passed True, the tokens specified with *expectation* are checked first, effectively giving them a higher priority than they would have from their order in the #rules list. The default value is #False. # Raises ValueError: if an expectation doesn't match with a rule name. UnexpectedTokenError: if an expectation is given and the expectation wasn't fulfilled. Only when *as_accept* is set to #False. TokenizationError: if a token could not be generated from the current position of the Scanner.
f12738:c1:m7
def tokenize(self, scanner):
raise NotImplementedError<EOL>
Attempt to extract a token from the position of the *scanner* and return it. If a non-#Token instance is returned, it will be used as the tokens value. Any value that evaluates to #False will make the Lexer assume that the rule couldn't capture a Token. The #Token.value must not necessarily be a string though, it can be any data type or even a complex datatype, only the user must know about it and handle the token in a special manner.
f12738:c2:m1
def skip(stackframe=<NUM_LIT:1>):
def trace(frame, event, args):<EOL><INDENT>raise ContextSkipped<EOL><DEDENT>sys.settrace(lambda *args, **kwargs: None)<EOL>frame = sys._getframe(stackframe + <NUM_LIT:1>)<EOL>frame.f_trace = trace<EOL>
Must be called from within `__enter__()`. Performs some magic to have a #ContextSkipped exception be raised the moment the with context is entered. The #ContextSkipped must then be handled in `__exit__()` to suppress the propagation of the exception. > Important: This function does not raise an exception by itself, thus > the `__enter__()` method will continue to execute after using this function.
f12743:m0
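The trick relies on CPython frame tracing: enabling a global trace function and installing a local `f_trace` on the caller's frame makes the trace callback fire, and raise #ContextSkipped, on the first line of the `with` body. A self-contained sketch of the pattern, with a hypothetical `skippable` context manager that suppresses the exception in `__exit__()` (names other than `skip` are illustrative):

```python
import sys

class ContextSkipped(Exception):
    pass

def skip(stackframe=1):
    # Install a local trace that raises as soon as the frame that
    # entered the with-statement executes its next line.
    def trace(frame, event, args):
        raise ContextSkipped
    sys.settrace(lambda *args, **kwargs: None)  # tracing must be globally enabled
    frame = sys._getframe(stackframe + 1)
    frame.f_trace = trace

class skippable:
    # Hypothetical context manager demonstrating the skip() pattern.
    def __init__(self, do_skip):
        self.do_skip = do_skip

    def __enter__(self):
        if self.do_skip:
            skip()
        return self

    def __exit__(self, exc_type, exc_value, tb):
        sys.settrace(None)  # always restore global tracing
        return exc_type is ContextSkipped  # suppress only our own exception

ran = []
with skippable(do_skip=True):
    ran.append('body')  # never executes: ContextSkipped fires first
```

This is the classic "withhacks" technique; it depends on CPython's tracing semantics and should not be relied on across interpreters.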
def get_staticmethod_func(cm):
if hasattr(cm, '<STR_LIT>'):<EOL><INDENT>return cm.__func__<EOL><DEDENT>else:<EOL><INDENT>return cm.__get__(int)<EOL><DEDENT>
Returns the function wrapped by the #staticmethod *cm*.
f12744:m0
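For illustration, here is how the helper behaves on a raw #staticmethod object pulled from a class `__dict__`. The anonymized attribute check in the original is assumed here to be `'__func__'`; that assumption is marked in the code.

```python
def get_staticmethod_func(cm):
    # Assumption: the anonymized attribute name in the original is '__func__'.
    if hasattr(cm, '__func__'):
        return cm.__func__
    else:
        # Fallback for interpreters without __func__: bind against an
        # arbitrary type to unwrap the underlying function.
        return cm.__get__(int)

class Example:
    @staticmethod
    def double(x):
        return x * 2

raw = Example.__dict__['double']   # the raw staticmethod wrapper, not callable pre-3.10
func = get_staticmethod_func(raw)  # the plain function it wraps
```

Accessing `Example.double` would already give the unwrapped function; the helper is only needed when you hold the raw descriptor.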
def mro_resolve(name, bases, dict):
if name in dict:<EOL><INDENT>return dict[name]<EOL><DEDENT>for base in bases:<EOL><INDENT>if hasattr(base, name):<EOL><INDENT>return getattr(base, name)<EOL><DEDENT>try:<EOL><INDENT>return mro_resolve(name, base.__bases__, {})<EOL><DEDENT>except KeyError:<EOL><INDENT>pass<EOL><DEDENT><DEDENT>raise KeyError(name)<EOL>
Given a tuple of base classes and a dictionary that takes precedence over any value in the bases, finds a value with the specified *name* and returns it. Raises #KeyError if the value cannot be found.
f12744:m1
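A small usage sketch of the lookup order (the dict wins over the bases). The helper is reproduced here with the `dict` parameter renamed to avoid shadowing the builtin:

```python
def mro_resolve(name, bases, dict_):
    # The explicit dict takes precedence over anything found in the bases.
    if name in dict_:
        return dict_[name]
    for base in bases:
        if hasattr(base, name):
            return getattr(base, name)
        try:
            return mro_resolve(name, base.__bases__, {})
        except KeyError:
            pass
    raise KeyError(name)

class A:
    x = 1

class B(A):
    pass

direct = mro_resolve('x', (B,), {'x': 5})   # the dict wins
inherited = mro_resolve('x', (B,), {})      # found via the bases
```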
def getter(name, key=None):
if not key:<EOL><INDENT>key = lambda x: x<EOL><DEDENT>def wrapper(self):<EOL><INDENT>return key(getattr(self, name))<EOL><DEDENT>wrapper.__name__ = wrapper.__qualname__ = name<EOL>return property(wrapper)<EOL>
Creates a read-only property for the attribute name *name*. If a *key* function is provided, it can be used to post-process the value of the attribute.
f12745:m0
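Usage is straightforward; the optional *key* lets you derive a value from the stored attribute:

```python
def getter(name, key=None):
    if not key:
        key = lambda x: x
    def wrapper(self):
        # Read the attribute and optionally post-process it.
        return key(getattr(self, name))
    wrapper.__name__ = wrapper.__qualname__ = name
    return property(wrapper)

class Point:
    def __init__(self, x):
        self._x = x
    x = getter('_x')                           # plain read-only view
    x_squared = getter('_x', key=lambda v: v * v)  # derived read-only view

p = Point(3)
```

Assigning to `p.x` raises #AttributeError, since only a getter is installed on the property.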
def _build_opstackd():
def _call_function_argc(argc):<EOL><INDENT>func_obj = <NUM_LIT:1><EOL>args_pos = (argc & <NUM_LIT>)<EOL>args_kw = ((argc >> <NUM_LIT:8>) & <NUM_LIT>) * <NUM_LIT:2><EOL>return func_obj + args_pos + args_kw<EOL><DEDENT>def _make_function_argc(argc):<EOL><INDENT>args_pos = (argc & <NUM_LIT>)<EOL>args_kw = ((argc >> <NUM_LIT:8>) & <NUM_LIT>) * <NUM_LIT:2><EOL>annotations = (argc >> <NUM_LIT>)<EOL>annotations_names = <NUM_LIT:1> if annotations else <NUM_LIT:0><EOL>code_obj = <NUM_LIT:1><EOL>qualname = <NUM_LIT:1><EOL>return args_pos + args_kw + annotations + annotations_names + code_obj + qualname<EOL><DEDENT>result = {<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': -<NUM_LIT:1>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:1>,<EOL>'<STR_LIT>': <NUM_LIT:2>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': -<NUM_LIT:1>,<EOL>'<STR_LIT>': <NUM_LIT:0>, <EOL>'<STR_LIT>': <NUM_LIT:0>, <EOL>'<STR_LIT>': -<NUM_LIT:1>, <EOL>'<STR_LIT>': -<NUM_LIT:1>, <EOL>'<STR_LIT>': -<NUM_LIT:2>, <EOL>'<STR_LIT>': -<NUM_LIT:1>, <EOL>'<STR_LIT>': -<NUM_LIT:1>,<EOL>'<STR_LIT>': -<NUM_LIT:1>,<EOL>'<STR_LIT>': -<NUM_LIT:1>,<EOL>'<STR_LIT>': -<NUM_LIT:1>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': lambda op: op.arg,<EOL>'<STR_LIT>': lambda op: (op.arg & <NUM_LIT>) - (op.arg >> <NUM_LIT:8> & <NUM_LIT>), <EOL>'<STR_LIT>': -<NUM_LIT:2>,<EOL>'<STR_LIT>': -<NUM_LIT:1>,<EOL>'<STR_LIT>': -<NUM_LIT:1>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:1>,<EOL>'<STR_LIT>': <NUM_LIT:1>,<EOL>'<STR_LIT>': lambda op: <NUM_LIT:1> - op.arg,<EOL>'<STR_LIT>': lambda op: <NUM_LIT:1> - op.arg,<EOL>'<STR_LIT>': lambda op: <NUM_LIT:1> - op.arg,<EOL>'<STR_LIT>': lambda op: <NUM_LIT:1> - op.arg,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:1>, <EOL>'<STR_LIT>': <NUM_LIT:1>,<EOL>'<STR_LIT>': <NUM_LIT:1>,<EOL>'<STR_LIT>': -<NUM_LIT:1>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:1>,<EOL>'<STR_LIT>': <NUM_LIT:1>,<EOL>'<STR_LIT>': 
-<NUM_LIT:1>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': lambda op: -op.arg,<EOL>'<STR_LIT>': lambda op: <NUM_LIT:1> - _call_function_argc(op.arg),<EOL>'<STR_LIT>': lambda op: <NUM_LIT:1> - _make_function_argc(op.arg),<EOL>'<STR_LIT>': lambda op: <NUM_LIT:1> - op.arg,<EOL>'<STR_LIT>': lambda op: <NUM_LIT:1> - _call_function_argc(op.arg),<EOL>}<EOL>if sys.version >= '<STR_LIT>':<EOL><INDENT>result.update({<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>'<STR_LIT>': <NUM_LIT:0>,<EOL>})<EOL><DEDENT>if sys.version <= '<STR_LIT>':<EOL><INDENT>result.update({<EOL>'<STR_LIT>': lambda op: <NUM_LIT:1> - _call_function_argc(op.arg),<EOL>'<STR_LIT>': lambda op: <NUM_LIT:1> - _call_function_argc(op.arg),<EOL>})<EOL><DEDENT>for code in dis.opmap.keys():<EOL><INDENT>if code.startswith('<STR_LIT>'):<EOL><INDENT>result[code] = <NUM_LIT:0><EOL><DEDENT>elif code.startswith('<STR_LIT>') or code.startswith('<STR_LIT>'):<EOL><INDENT>result[code] = -<NUM_LIT:1><EOL><DEDENT><DEDENT>return result<EOL>
Builds a dictionary that maps the name of an op-code to the number of elements it adds to the stack when executed. For some opcodes, the dictionary may contain a function which requires the #dis.Instruction object to determine the actual value. The dictionary mostly contains information for instructions used in expressions.
f12746:m0
def get_stackdelta(op):
res = opstackd[op.opname]<EOL>if callable(res):<EOL><INDENT>res = res(op)<EOL><DEDENT>return res<EOL>
Returns the number of elements that the instruction *op* adds to the stack. # Arguments op (dis.Instruction): The instruction to retrieve the stackdelta value for. # Raises KeyError: If the instruction *op* is not supported.
f12746:m1
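Because the table's keys are anonymized above, the example below uses a tiny hypothetical `opstackd` and a namedtuple standing in for #dis.Instruction; the lookup logic mirrors `get_stackdelta()` exactly. Real per-opcode deltas vary across Python versions.

```python
from collections import namedtuple

# Hypothetical miniature of the real table. A plain int is a fixed delta;
# a callable computes the delta from the instruction's argument.
opstackd = {
    'LOAD_CONST': 1,
    'POP_TOP': -1,
    'BUILD_LIST': lambda op: 1 - op.arg,  # pops op.arg items, pushes one list
}

def get_stackdelta(op):
    res = opstackd[op.opname]
    if callable(res):
        res = res(op)
    return res

# Fake instructions so the example is independent of the running interpreter.
Op = namedtuple('Op', 'opname arg')
```

For instance, `BUILD_LIST` with `arg == 3` pops three items and pushes one list, a net delta of -2.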
def get_assigned_name(frame):
SEARCHING, MATCHED = <NUM_LIT:1>, <NUM_LIT:2><EOL>state = SEARCHING<EOL>result = '<STR_LIT>'<EOL>stacksize = <NUM_LIT:0><EOL>for op in dis.get_instructions(frame.f_code):<EOL><INDENT>if state == SEARCHING and op.offset == frame.f_lasti:<EOL><INDENT>if not op.opname.startswith('<STR_LIT>'):<EOL><INDENT>raise RuntimeError('<STR_LIT>')<EOL><DEDENT>state = MATCHED<EOL>stacksize = <NUM_LIT:1><EOL><DEDENT>elif state == MATCHED:<EOL><INDENT>try:<EOL><INDENT>stacksize += get_stackdelta(op)<EOL><DEDENT>except KeyError:<EOL><INDENT>raise RuntimeError('<STR_LIT>'<EOL>'<STR_LIT>'.format(op.opname))<EOL><DEDENT>if stacksize == <NUM_LIT:0>:<EOL><INDENT>if op.opname not in ('<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>'):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>return result + op.argval<EOL><DEDENT>elif stacksize < <NUM_LIT:0>:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if op.opname.startswith('<STR_LIT>'):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>elif op.opname == '<STR_LIT>':<EOL><INDENT>result += op.argval + '<STR_LIT:.>'<EOL><DEDENT><DEDENT><DEDENT>if not result:<EOL><INDENT>raise RuntimeError('<STR_LIT>')<EOL><DEDENT>assert False<EOL>
Checks the bytecode of *frame* to find the name of the variable a result is being assigned to and returns that name. Returns the full left operand of the assignment. Raises a #ValueError if the variable name could not be retrieved from the bytecode (e.g. if sequence unpacking is on the left side of the assignment). > **Known Limitations**: The call to this function must be the first part > of the expression being assigned. For example, > `foo = [get_assigned_name(get_frame())] + [42]` works, > but `foo = [42, get_assigned_name(get_frame())]` does not! ```python >>> var = get_assigned_name(sys._getframe()) >>> assert var == 'var' ``` __Available in Python 3.4, 3.5__
f12746:m2
@dualmethod<EOL><INDENT>def call(cls, iterable, *a, **kw):<DEDENT>
return cls(x(*a, **kw) for x in iterable)<EOL>
Calls every item in *iterable* with the specified arguments and returns a new instance of the class containing the results.
f12747:c0:m3
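A sketch of the idea using plain #classmethod; the real code uses a dualmethod decorator, which additionally allows calling `call()` on an instance, a behavior not reproduced here. `Callers` is an illustrative name.

```python
class Callers(list):
    @classmethod
    def call(cls, iterable, *a, **kw):
        # Call every item with the given arguments and collect the results
        # into a new instance of this class.
        return cls(x(*a, **kw) for x in iterable)

funcs = [lambda n: n + 1, lambda n: n * 2]
result = Callers.call(funcs, 5)
```

Since `Callers` subclasses #list, the result compares equal to a plain list of the collected return values.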