Dataset columns: signature (string, 8–3.44k chars), body (string, 0–1.41M chars), docstring (string, 1–122k chars), id (string, 5–17 chars).
@staticmethod
def parse(readDataInstance):
    dbgDir = ImageDebugDirectory()
    dbgDir.characteristics.value = readDataInstance.readDword()
    dbgDir.timeDateStamp.value = readDataInstance.readDword()
    dbgDir.majorVersion.value = readDataInstance.readWord()
    dbgDir.minorVersion.value = readDataInstance.readWord()
    dbgDir.type.value = readDataInstance.readDword()
    dbgDir.sizeOfData.value = readDataInstance.readDword()
    dbgDir.addressOfData.value = readDataInstance.readDword()
    dbgDir.pointerToRawData.value = readDataInstance.readDword()
    return dbgDir
Returns a new L{ImageDebugDirectory} object. @type readDataInstance: L{ReadData} @param readDataInstance: A new L{ReadData} object with data to be parsed as a L{ImageDebugDirectory} object. @rtype: L{ImageDebugDirectory} @return: A new L{ImageDebugDirectory} object.
f11794:c10:m2
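The fields read above mirror the fixed 28-byte C{IMAGE_DEBUG_DIRECTORY} layout from the PE specification (two DWORDs, two WORDs, then four DWORDs). A minimal standalone sketch of the same parse using only the standard C{struct} module; the helper name C{parse_debug_directory} is hypothetical, but the byte layout is the documented PE one:

```python
import struct

# IMAGE_DEBUG_DIRECTORY: DWORD, DWORD, WORD, WORD, DWORD, DWORD, DWORD, DWORD
_FMT = "<IIHHIIII"  # little-endian, 28 bytes total

def parse_debug_directory(data):
    """Unpack one IMAGE_DEBUG_DIRECTORY entry from raw bytes into a dict."""
    fields = struct.unpack_from(_FMT, data)
    keys = ("characteristics", "timeDateStamp", "majorVersion", "minorVersion",
            "type", "sizeOfData", "addressOfData", "pointerToRawData")
    return dict(zip(keys, fields))
```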
def __init__(self, shouldPack=True):
    self.shouldPack = shouldPack
Array of L{ImageDebugDirectory} objects. @type shouldPack: bool @param shouldPack: (Optional) If set to C{True}, the object will be packed. If set to C{False}, the object won't be packed.
f11794:c11:m0
def getType(self):
    return consts.IMAGE_DEBUG_DIRECTORIES
Returns L{consts.IMAGE_DEBUG_DIRECTORIES}.
f11794:c11:m2
@staticmethod
def parse(readDataInstance, nDebugEntries):
    dbgEntries = ImageDebugDirectories()
    dataLength = len(readDataInstance)
    toRead = nDebugEntries * consts.SIZEOF_IMAGE_DEBUG_ENTRY32
    if dataLength >= toRead:
        for i in range(nDebugEntries):
            dbgEntry = ImageDebugDirectory.parse(readDataInstance)
            dbgEntries.append(dbgEntry)
    else:
        raise excep.DataLengthException("<STR_LIT>")
    return dbgEntries
Returns a new L{ImageDebugDirectories} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{ImageDebugDirectories} object. @type nDebugEntries: int @param nDebugEntries: Number of L{ImageDebugDirectory} objects in the C{readDataInstance} object. @rtype: L{ImageDebugDirectories} @return: A new L{ImageDebugDirectories} object. @raise DataLengthException: If there is not enough data to read in the C{readDataInstance} object.
f11794:c11:m3
def __init__(self, shouldPack=True):
    baseclasses.BaseStructClass.__init__(self, shouldPack)
    self.moduleName = datatypes.String("<STR_LIT>")
    self.numberOfImports = datatypes.DWORD(0)
    self._attrsList = ["<STR_LIT>", "<STR_LIT>"]
Class used to store metadata from the L{ImageImportDescriptor} object. @type shouldPack: bool @param shouldPack: (Optional) If set to C{True}, the object will be packed. If set to C{False}, the object won't be packed.
f11794:c12:m0
def getType(self):
    return consts.IID_METADATA
Returns L{consts.IID_METADATA}.
f11794:c12:m1
def __init__(self, shouldPack=True):
    baseclasses.BaseStructClass.__init__(self, shouldPack)
    self.metaData = ImageImportDescriptorMetaData()
    self.originalFirstThunk = datatypes.DWORD(0)
    self.timeDateStamp = datatypes.DWORD(0)
    self.forwarderChain = datatypes.DWORD(0)
    self.name = datatypes.DWORD(0)
    self.firstThunk = datatypes.DWORD(0)
    self.iat = ImportAddressTable()
    self._attrsList = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "name", "<STR_LIT>"]
Class representation of a C{IMAGE_IMPORT_DESCRIPTOR} structure. @see: Figure 5 U{http://msdn.microsoft.com/es-ar/magazine/bb985996%28en-us%29.aspx} @type shouldPack: bool @param shouldPack: (Optional) If set to C{True}, the object will be packed. If set to C{False}, the object won't be packed.
f11794:c13:m0
@staticmethod
def parse(readDataInstance):
    iid = ImageImportDescriptorEntry()
    iid.originalFirstThunk.value = readDataInstance.readDword()
    iid.timeDateStamp.value = readDataInstance.readDword()
    iid.forwarderChain.value = readDataInstance.readDword()
    iid.name.value = readDataInstance.readDword()
    iid.firstThunk.value = readDataInstance.readDword()
    return iid
Returns a new L{ImageImportDescriptorEntry} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{ImageImportDescriptorEntry}. @rtype: L{ImageImportDescriptorEntry} @return: A new L{ImageImportDescriptorEntry} object.
f11794:c13:m1
def getType(self):
    return consts.IMAGE_IMPORT_DESCRIPTOR_ENTRY
Returns L{consts.IMAGE_IMPORT_DESCRIPTOR_ENTRY}.
f11794:c13:m2
def __init__(self, shouldPack=True):
    self.shouldPack = shouldPack
Array of L{ImageImportDescriptorEntry} objects. @type shouldPack: bool @param shouldPack: (Optional) If set to C{True}, the object will be packed. If set to C{False}, the object won't be packed.
f11794:c14:m0
def getType(self):
    return consts.IMAGE_IMPORT_DESCRIPTOR
Returns L{consts.IMAGE_IMPORT_DESCRIPTOR}.
f11794:c14:m2
@staticmethod
def parse(readDataInstance, nEntries):
    importEntries = ImageImportDescriptor()
    dataLength = len(readDataInstance)
    toRead = nEntries * consts.SIZEOF_IMAGE_IMPORT_ENTRY32
    if dataLength >= toRead:
        for i in range(nEntries):
            importEntry = ImageImportDescriptorEntry.parse(readDataInstance)
            importEntries.append(importEntry)
    else:
        raise excep.DataLengthException("<STR_LIT>")
    return importEntries
Returns a new L{ImageImportDescriptor} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{ImageImportDescriptor} object. @type nEntries: int @param nEntries: The number of L{ImageImportDescriptorEntry} objects in the C{readDataInstance} object. @rtype: L{ImageImportDescriptor} @return: A new L{ImageImportDescriptor} object. @raise DataLengthException: If there is not enough data to read.
f11794:c14:m3
def __init__(self, shouldPack=True):
    baseclasses.BaseStructClass.__init__(self, shouldPack)
    self.firstThunk = datatypes.DWORD(0)
    self.originalFirstThunk = datatypes.DWORD(0)
    self.hint = datatypes.WORD(0)
    self.name = datatypes.String("<STR_LIT>")
    self._attrsList = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "name"]
A class representation of a C{} structure. @type shouldPack: bool @param shouldPack: (Optional) If set to C{True}, the object will be packed. If set to C{False}, the object won't be packed.
f11794:c15:m0
def getType(self):
    return consts.IMPORT_ADDRESS_TABLE_ENTRY
Returns L{consts.IMPORT_ADDRESS_TABLE_ENTRY}.
f11794:c15:m1
def __init__(self, shouldPack=True):
    baseclasses.BaseStructClass.__init__(self, shouldPack)
    self.firstThunk = datatypes.QWORD(0)
    self.originalFirstThunk = datatypes.QWORD(0)
    self.hint = datatypes.WORD(0)
    self.name = datatypes.String("<STR_LIT>")
    self._attrsList = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "name"]
A class representation of a C{} structure. @type shouldPack: bool @param shouldPack: (Optional) If set to C{True}, the object will be packed. If set to C{False}, the object won't be packed.
f11794:c16:m0
def getType(self):
    return consts.IMPORT_ADDRESS_TABLE_ENTRY64
Returns L{consts.IMPORT_ADDRESS_TABLE_ENTRY64}.
f11794:c16:m1
def __init__(self, shouldPack=True):
    baseclasses.BaseStructClass.__init__(self, shouldPack)
    self.ordinal = datatypes.DWORD(0)
    self.functionRva = datatypes.DWORD(0)
    self.nameOrdinal = datatypes.WORD(0)
    self.nameRva = datatypes.DWORD(0)
    self.name = datatypes.String("<STR_LIT>")
    self._attrsList = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "name"]
A class representation of a C{} structure. @type shouldPack: bool @param shouldPack: (Optional) If set to C{True}, the object will be packed. If set to C{False}, the object won't be packed.
f11794:c19:m0
def getType(self):
    return consts.EXPORT_TABLE_ENTRY
Returns L{consts.EXPORT_TABLE_ENTRY}.
f11794:c19:m2
@staticmethod
def parse(readDataInstance):
    exportEntry = ExportTableEntry()
    exportEntry.functionRva.value = readDataInstance.readDword()
    exportEntry.nameOrdinal.value = readDataInstance.readWord()
    exportEntry.nameRva.value = readDataInstance.readDword()
    exportEntry.name.value = readDataInstance.readString()
    return exportEntry
Returns a new L{ExportTableEntry} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{ExportTableEntry} object. @rtype: L{ExportTableEntry} @return: A new L{ExportTableEntry} object.
f11794:c19:m3
def __init__(self, shouldPack=True):
    baseclasses.BaseStructClass.__init__(self, shouldPack)
    self.exportTable = ExportTable()
    self.characteristics = datatypes.DWORD(0)
    self.timeDateStamp = datatypes.DWORD(0)
    self.majorVersion = datatypes.WORD(0)
    self.minorVersion = datatypes.WORD(0)
    self.name = datatypes.DWORD(0)
    self.base = datatypes.DWORD(0)
    self.numberOfFunctions = datatypes.DWORD(0)
    self.numberOfNames = datatypes.DWORD(0)
    self.addressOfFunctions = datatypes.DWORD(0)
    self.addressOfNames = datatypes.DWORD(0)
    self.addressOfNameOrdinals = datatypes.DWORD(0)
    self._attrsList = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "name", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>"]
Class representation of a C{IMAGE_EXPORT_DIRECTORY} structure. @see: Figure 2 U{http://msdn.microsoft.com/en-us/magazine/bb985996.aspx} @type shouldPack: bool @param shouldPack: (Optional) If set to C{True}, the object will be packed. If set to C{False}, the object won't be packed.
f11794:c20:m0
def getType(self):
    return consts.EXPORT_DIRECTORY
Returns L{consts.EXPORT_DIRECTORY}.
f11794:c20:m1
@staticmethod
def parse(readDataInstance):
    et = ImageExportTable()
    et.characteristics.value = readDataInstance.readDword()
    et.timeDateStamp.value = readDataInstance.readDword()
    et.majorVersion.value = readDataInstance.readWord()
    et.minorVersion.value = readDataInstance.readWord()
    et.name.value = readDataInstance.readDword()
    et.base.value = readDataInstance.readDword()
    et.numberOfFunctions.value = readDataInstance.readDword()
    et.numberOfNames.value = readDataInstance.readDword()
    et.addressOfFunctions.value = readDataInstance.readDword()
    et.addressOfNames.value = readDataInstance.readDword()
    et.addressOfNameOrdinals.value = readDataInstance.readDword()
    return et
Returns a new L{ImageExportTable} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{ImageExportTable} object. @rtype: L{ImageExportTable} @return: A new L{ImageExportTable} object.
f11794:c20:m2
def __init__(self, shouldPack=True):
    baseclasses.BaseStructClass.__init__(self, shouldPack)
    self.directory = NetDirectory()
    self.netMetaDataHeader = NetMetaDataHeader()
    self.netMetaDataStreams = NetMetaDataStreams()
    self._attrsList = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>"]
A class to abstract data from the .NET PE format. @type shouldPack: bool @param shouldPack: (Optional) If set to C{True}, the object will be packed. If set to C{False}, the object won't be packed.
f11794:c21:m0
@staticmethod
def parse(readDataInstance):
    nd = NETDirectory()
    nd.directory = NetDirectory.parse(readDataInstance)
    nd.netMetaDataHeader = NetMetaDataHeader.parse(readDataInstance)
    nd.netMetaDataStreams = NetMetaDataStreams.parse(readDataInstance)
    return nd
Returns a new L{NETDirectory} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{NETDirectory} object. @rtype: L{NETDirectory} @return: A new L{NETDirectory} object.
f11794:c21:m1
def getType(self):
    return consts.NET_DIRECTORY
Returns L{consts.NET_DIRECTORY}.
f11794:c21:m2
def __init__(self, shouldPack=True):
    baseclasses.BaseStructClass.__init__(self, shouldPack)
    self.cb = datatypes.DWORD(0)
    self.majorRuntimeVersion = datatypes.WORD(0)
    self.minorRuntimeVersion = datatypes.WORD(0)
    self.metaData = datadirs.Directory()
    self.flags = datatypes.DWORD(0)
    self.entryPointToken = datatypes.DWORD(0)
    self.resources = datadirs.Directory()
    self.strongNameSignature = datadirs.Directory()
    self.codeManagerTable = datadirs.Directory()
    self.vTableFixups = datadirs.Directory()
    self.exportAddressTableJumps = datadirs.Directory()
    self.managedNativeHeader = datadirs.Directory()
    self._attrsList = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>"]
A class representation of the C{IMAGE_COR20_HEADER} structure. @see: U{http://www.ntcore.com/files/dotnetformat.htm} @type shouldPack: bool @param shouldPack: (Optional) If set to C{True}, the object will be packed. If set to C{False}, the object won't be packed.
f11794:c22:m0
def getType(self):
    return consts.IMAGE_COR20_HEADER
Returns L{consts.IMAGE_COR20_HEADER}.
f11794:c22:m1
@staticmethod
def parse(readDataInstance):
    nd = NetDirectory()
    nd.cb.value = readDataInstance.readDword()
    nd.majorRuntimeVersion.value = readDataInstance.readWord()
    nd.minorRuntimeVersion.value = readDataInstance.readWord()
    nd.metaData.rva.value = readDataInstance.readDword()
    nd.metaData.size.value = readDataInstance.readDword()
    nd.metaData.name.value = "<STR_LIT>"
    nd.flags.value = readDataInstance.readDword()
    nd.entryPointToken.value = readDataInstance.readDword()
    nd.resources.rva.value = readDataInstance.readDword()
    nd.resources.size.value = readDataInstance.readDword()
    nd.resources.name.value = "<STR_LIT>"
    nd.strongNameSignature.rva.value = readDataInstance.readDword()
    nd.strongNameSignature.size.value = readDataInstance.readDword()
    nd.strongNameSignature.name.value = "<STR_LIT>"
    nd.codeManagerTable.rva.value = readDataInstance.readDword()
    nd.codeManagerTable.size.value = readDataInstance.readDword()
    nd.codeManagerTable.name.value = "<STR_LIT>"
    nd.vTableFixups.rva.value = readDataInstance.readDword()
    nd.vTableFixups.size.value = readDataInstance.readDword()
    nd.vTableFixups.name.value = "<STR_LIT>"
    nd.exportAddressTableJumps.rva.value = readDataInstance.readDword()
    nd.exportAddressTableJumps.size.value = readDataInstance.readDword()
    nd.exportAddressTableJumps.name.value = "<STR_LIT>"
    nd.managedNativeHeader.rva.value = readDataInstance.readDword()
    nd.managedNativeHeader.size.value = readDataInstance.readDword()
    nd.managedNativeHeader.name.value = "<STR_LIT>"
    return nd
Returns a new L{NetDirectory} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{NetDirectory} object. @rtype: L{NetDirectory} @return: A new L{NetDirectory} object.
f11794:c22:m2
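The parse above walks the 72-byte C{IMAGE_COR20_HEADER}: size and runtime version, then a flags/entry-point pair interleaved between seven RVA/size directory pairs. A standalone sketch using only C{struct}, assuming the layout from the linked ntcore write-up; the helper name C{parse_cor20_header} and the returned dict shape are illustrative, not the library's API:

```python
import struct

def parse_cor20_header(data):
    """Unpack an IMAGE_COR20_HEADER into plain Python values."""
    cb, major, minor = struct.unpack_from("<IHH", data, 0)
    meta_rva, meta_size, flags, entry_point = struct.unpack_from("<IIII", data, 8)
    directories = {"metaData": (meta_rva, meta_size)}
    # Remaining six RVA/size pairs follow EntryPointToken, 8 bytes each.
    off = 24
    for name in ("resources", "strongNameSignature", "codeManagerTable",
                 "vTableFixups", "exportAddressTableJumps", "managedNativeHeader"):
        rva, size = struct.unpack_from("<II", data, off)
        directories[name] = (rva, size)
        off += 8
    return {"cb": cb, "majorRuntimeVersion": major, "minorRuntimeVersion": minor,
            "flags": flags, "entryPointToken": entry_point, "directories": directories}
```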
def getType(self):
    return consts.NET_METADATA_HEADER
Returns L{consts.NET_METADATA_HEADER}.
f11794:c23:m1
@staticmethod
def parse(readDataInstance):
    nmh = NetMetaDataHeader()
    nmh.signature.value = readDataInstance.readDword()
    nmh.majorVersion.value = readDataInstance.readWord()
    nmh.minorVersion.value = readDataInstance.readWord()
    nmh.reserved.value = readDataInstance.readDword()
    nmh.versionLength.value = readDataInstance.readDword()
    nmh.versionString.value = readDataInstance.readAlignedString()
    nmh.flags.value = readDataInstance.readWord()
    nmh.numberOfStreams.value = readDataInstance.readWord()
    return nmh
Returns a new L{NetMetaDataHeader} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{NetMetaDataHeader} object. @rtype: L{NetMetaDataHeader} @return: A new L{NetMetaDataHeader} object.
f11794:c23:m2
def getType(self):
    return consts.NET_METADATA_STREAM_ENTRY
Returns L{consts.NET_METADATA_STREAM_ENTRY}.
f11794:c24:m1
@staticmethod
def parse(readDataInstance):
    n = NetMetaDataStreamEntry()
    n.offset.value = readDataInstance.readDword()
    n.size.value = readDataInstance.readDword()
    n.name.value = readDataInstance.readAlignedString()
    return n
Returns a new L{NetMetaDataStreamEntry} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{NetMetaDataStreamEntry}. @rtype: L{NetMetaDataStreamEntry} @return: A new L{NetMetaDataStreamEntry} object.
f11794:c24:m2
def getType(self):
    return consts.NET_METADATA_STREAMS
Returns L{consts.NET_METADATA_STREAMS}.
f11794:c25:m4
@staticmethod
def parse(readDataInstance, nStreams):
    streams = NetMetaDataStreams()
    for i in range(nStreams):
        streamEntry = NetMetaDataStreamEntry()
        streamEntry.offset.value = readDataInstance.readDword()
        streamEntry.size.value = readDataInstance.readDword()
        streamEntry.name.value = readDataInstance.readAlignedString()
        streams.update({i: streamEntry, streamEntry.name.value: streamEntry})
    return streams
Returns a new L{NetMetaDataStreams} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{NetMetaDataStreams} object. @type nStreams: int @param nStreams: The number of L{NetMetaDataStreamEntry} objects in the C{readDataInstance} object. @rtype: L{NetMetaDataStreams} @return: A new L{NetMetaDataStreams} object.
f11794:c25:m5
def getType(self):
    return consts.NET_METADATA_TABLE_HEADER
Returns L{consts.NET_METADATA_TABLE_HEADER}.
f11794:c26:m1
@staticmethod
def parse(readDataInstance):
    th = NetMetaDataTableHeader()
    th.reserved_1.value = readDataInstance.readDword()
    th.majorVersion.value = readDataInstance.readByte()
    th.minorVersion.value = readDataInstance.readByte()
    th.heapOffsetSizes.value = readDataInstance.readByte()
    th.reserved_2.value = readDataInstance.readByte()
    th.maskValid.value = readDataInstance.readQword()
    th.maskSorted.value = readDataInstance.readQword()
    return th
Returns a new L{NetMetaDataTableHeader} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{NetMetaDataTableHeader} object. @rtype: L{NetMetaDataTableHeader} @return: A new L{NetMetaDataTableHeader} object.
f11794:c26:m2
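The C{maskValid} QWORD read above carries one bit per possible metadata table; later parsing reads one row-count DWORD for each set bit. A tiny helper sketch (the name C{present_tables} is hypothetical) that recovers which table indices are flagged present:

```python
def present_tables(mask_valid):
    """Return the indices (0-63) of metadata tables whose bit is set in maskValid."""
    return [i for i in range(64) if (mask_valid >> i) & 1]
```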
def __init__(self, shouldPack=True):
    baseclasses.BaseStructClass.__init__(self, shouldPack)
    self.netMetaDataTableHeader = NetMetaDataTableHeader()
    self.tables = None
    self._attrsList = ["<STR_LIT>", "<STR_LIT>"]
NetMetaDataTables object. @todo: Parse every table in this struct and store them in the C{self.tables} attribute.
f11794:c27:m0
def getType(self):
    return consts.NET_METADATA_TABLES
Returns L{consts.NET_METADATA_TABLES}.
f11794:c27:m1
@staticmethod
def parse(readDataInstance, netMetaDataStreams):
    dt = NetMetaDataTables()
    dt.netMetaDataTableHeader = NetMetaDataTableHeader.parse(readDataInstance)
    dt.tables = {}
    metadataTableDefinitions = dotnet.MetadataTableDefinitions(dt, netMetaDataStreams)
    for i in xrange(64):
        dt.tables[i] = {"<STR_LIT>": 0}
        if dt.netMetaDataTableHeader.maskValid.value >> i & 1:
            dt.tables[i]["<STR_LIT>"] = readDataInstance.readDword()
        if i in dotnet.MetadataTableNames:
            dt.tables[dotnet.MetadataTableNames[i]] = dt.tables[i]
    for i in xrange(64):
        dt.tables[i]["data"] = []
        for j in range(dt.tables[i]["<STR_LIT>"]):
            row = None
            if i in metadataTableDefinitions:
                row = readDataInstance.readFields(metadataTableDefinitions[i])
            dt.tables[i]["data"].append(row)
    for i in xrange(64):
        if i in dotnet.MetadataTableNames:
            dt.tables[dotnet.MetadataTableNames[i]] = dt.tables[i]["data"]
        dt.tables[i] = dt.tables[i]["data"]
    return dt
Returns a new L{NetMetaDataTables} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{NetMetaDataTables} object. @rtype: L{NetMetaDataTables} @return: A new L{NetMetaDataTables} object.
f11794:c27:m2
def __init__(self, shouldPack=True):
    baseclasses.BaseStructClass.__init__(self, shouldPack)
    self.signature = datatypes.DWORD(0)
    self.readerCount = datatypes.DWORD(0)
    self.readerTypeLength = datatypes.DWORD(0)
    self.version = datatypes.DWORD(0)
    self.resourceCount = datatypes.DWORD(0)
    self.resourceTypeCount = datatypes.DWORD(0)
    self.resourceTypes = None
    self.resourceHashes = None
    self.resourceNameOffsets = None
    self.dataSectionOffset = datatypes.DWORD(0)
    self.resourceNames = None
    self.resourceOffsets = None
    self.info = None
    self._attrsList = ["<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "version", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "<STR_LIT>", "info"]
NetResources object. @todo: Parse every resource in this struct and store them in the C{self.resources} attribute.
f11794:c28:m0
def getType(self):
    return consts.NET_RESOURCES
Returns L{consts.NET_RESOURCES}.
f11794:c28:m3
@staticmethod
def parse(readDataInstance):
    r = NetResources()
    r.signature = readDataInstance.readDword()
    if r.signature != <NUM_LIT>:
        return r
    r.readerCount = readDataInstance.readDword()
    r.readerTypeLength = readDataInstance.readDword()
    r.readerType = utils.ReadData(readDataInstance.read(r.readerTypeLength)).readDotNetBlob()
    r.version = readDataInstance.readDword()
    r.resourceCount = readDataInstance.readDword()
    r.resourceTypeCount = readDataInstance.readDword()
    r.resourceTypes = []
    for i in xrange(r.resourceTypeCount):
        r.resourceTypes.append(readDataInstance.readDotNetBlob())
    readDataInstance.skipBytes(8 - readDataInstance.tell() & <NUM_LIT>)
    r.resourceHashes = []
    for i in xrange(r.resourceCount):
        r.resourceHashes.append(readDataInstance.readDword())
    r.resourceNameOffsets = []
    for i in xrange(r.resourceCount):
        r.resourceNameOffsets.append(readDataInstance.readDword())
    r.dataSectionOffset = readDataInstance.readDword()
    r.resourceNames = []
    r.resourceOffsets = []
    base = readDataInstance.tell()
    for i in xrange(r.resourceCount):
        readDataInstance.setOffset(base + r.resourceNameOffsets[i])
        r.resourceNames.append(readDataInstance.readDotNetUnicodeString())
        r.resourceOffsets.append(readDataInstance.readDword())
    r.info = {}
    for i in xrange(r.resourceCount):
        readDataInstance.setOffset(r.dataSectionOffset + r.resourceOffsets[i])
        r.info[i] = readDataInstance.read(len(readDataInstance))
        r.info[r.resourceNames[i]] = r.info[i]
    return r
Returns a new L{NetResources} object. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} object with data to be parsed as a L{NetResources} object. @rtype: L{NetResources} @return: A new L{NetResources} object.
f11794:c28:m4
def __init__(self, shouldPack=True):
    self.name = datatypes.String("<STR_LIT>")
    self.rva = datatypes.DWORD(0)
    self.size = datatypes.DWORD(0)
    self.info = None
    self.shouldPack = shouldPack
Class representation of the C{IMAGE_DATA_DIRECTORY} structure. @see: U{http://msdn.microsoft.com/es-es/library/windows/desktop/ms680305%28v=vs.85%29.aspx} @type shouldPack: bool @param shouldPack: If set to C{True} the L{Directory} object will be packed. If set to C{False} the object won't be packed.
f11795:c0:m0
@staticmethod
def parse(readDataInstance):
    d = Directory()
    d.rva.value = readDataInstance.readDword()
    d.size.value = readDataInstance.readDword()
    return d
Returns a L{Directory}-like object. @type readDataInstance: L{ReadData} @param readDataInstance: L{ReadData} object to read from. @rtype: L{Directory} @return: L{Directory} object.
f11795:c0:m4
def getType(self):
    return consts.IMAGE_DATA_DIRECTORY
Returns a value that identifies the L{Directory} object.
f11795:c0:m5
def __init__(self, shouldPack=True):
    self.shouldPack = shouldPack
    for i in range(consts.IMAGE_NUMBEROF_DIRECTORY_ENTRIES):
        dir = Directory()
        dir.name.value = dirs[i]
        self.append(dir)
Array of L{Directory} objects. @type shouldPack: bool @param shouldPack: If set to C{True} the L{DataDirectory} object will be packed. If set to C{False} the object won't be packed.
f11795:c1:m0
@staticmethod
def parse(readDataInstance):
    if len(readDataInstance) == consts.IMAGE_NUMBEROF_DIRECTORY_ENTRIES * 8:
        newDataDirectory = DataDirectory()
        for i in range(consts.IMAGE_NUMBEROF_DIRECTORY_ENTRIES):
            newDataDirectory[i].name.value = dirs[i]
            newDataDirectory[i].rva.value = readDataInstance.readDword()
            newDataDirectory[i].size.value = readDataInstance.readDword()
    else:
        raise excep.DirectoryEntriesLengthException("<STR_LIT>")
    return newDataDirectory
Returns a L{DataDirectory}-like object. @type readDataInstance: L{ReadData} @param readDataInstance: L{ReadData} object to read from. @rtype: L{DataDirectory} @return: The L{DataDirectory} object containing L{consts.IMAGE_NUMBEROF_DIRECTORY_ENTRIES} L{Directory} objects. @raise DirectoryEntriesLengthException: The L{ReadData} instance has an incorrect number of L{Directory} objects.
f11795:c1:m2
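Each C{IMAGE_DATA_DIRECTORY} entry parsed above is just an RVA/size pair of DWORDs, and a standard PE optional header carries 16 of them. A standalone sketch of the same length check and parse using only C{struct}; the helper name and the hard-coded constant 16 stand in for the library's C{consts.IMAGE_NUMBEROF_DIRECTORY_ENTRIES}:

```python
import struct

NUM_DIRECTORY_ENTRIES = 16  # IMAGE_NUMBEROF_DIRECTORY_ENTRIES in the PE spec

def parse_data_directories(data):
    """Unpack 16 (rva, size) pairs; reject blobs of the wrong length."""
    if len(data) != NUM_DIRECTORY_ENTRIES * 8:
        raise ValueError("expected exactly %d bytes" % (NUM_DIRECTORY_ENTRIES * 8))
    return [struct.unpack_from("<II", data, i * 8) for i in range(NUM_DIRECTORY_ENTRIES)]
```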
def _generate_configs_from_default(self, overrides=None):
    config = DEFAULT_CONFIG.copy()
    if not overrides:
        overrides = {}
    for k, v in overrides.items():
        config[k] = v
    return config
Generate configs by inheriting from defaults
f11800:c0:m1
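The method above is the usual copy-the-defaults-then-apply-overrides pattern, which leaves the shared default dict untouched. A standalone sketch; the C{DEFAULT_CONFIG} keys here are hypothetical placeholders, not the project's actual config:

```python
DEFAULT_CONFIG = {"header_row": 1, "delimiter": ","}  # hypothetical defaults

def generate_config(overrides=None):
    """Return a fresh config: defaults first, then any caller overrides on top."""
    config = DEFAULT_CONFIG.copy()   # copy so the module-level defaults stay pristine
    config.update(overrides or {})
    return config
```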
def read_ical(self, ical_file_location):
    with open(ical_file_location, 'r') as ical_file:
        data = ical_file.read()
    self.cal = Calendar.from_ical(data)
    return self.cal
Read the ical file
f11800:c0:m2
def read_csv(self, csv_location, csv_configs=None):
    csv_configs = self._generate_configs_from_default(csv_configs)
    with open(csv_location, 'r') as csv_file:
        csv_reader = csv.reader(csv_file)
        self.csv_data = list(csv_reader)
    self.csv_data = self.csv_data[csv_configs['<STR_LIT>']:]
    return self.csv_data
Read the csv file
f11800:c0:m3
def make_ical(self, csv_configs=None):
    csv_configs = self._generate_configs_from_default(csv_configs)
    self.cal = Calendar()
    for row in self.csv_data:
        event = Event()
        event.add('<STR_LIT>', row[csv_configs['<STR_LIT>']])
        event.add('<STR_LIT>', row[csv_configs['<STR_LIT>']])
        event.add('<STR_LIT>', row[csv_configs['<STR_LIT>']])
        event.add('description', row[csv_configs['<STR_LIT>']])
        event.add('location', row[csv_configs['<STR_LIT>']])
        self.cal.add_component(event)
    return self.cal
Make iCal entries
f11800:c0:m4
def make_csv(self):
    for event in self.cal.subcomponents:
        if event.name != '<STR_LIT>':
            continue
        row = [
            event.get('<STR_LIT>'),
            event.get('<STR_LIT>').dt,
            event.get('<STR_LIT>').dt,
            event.get('<STR_LIT>'),
            event.get('<STR_LIT>'),
        ]
        row = [str(x) for x in row]
        self.csv_data.append(row)
Make CSV
f11800:c0:m5
def save_ical(self, ical_location):
    data = self.cal.to_ical()
    with open(ical_location, 'w') as ical_file:
        ical_file.write(data.decode('utf-8'))
Save the calendar instance to a file
f11800:c0:m6
def save_csv(self, csv_location):
    with open(csv_location, 'w') as csv_handle:
        writer = csv.writer(csv_handle)
        for row in self.csv_data:
            writer.writerow(row)
Save the csv to a file
f11800:c0:m7
def diff(x, y, x_only=False, y_only=False):
    if len(x) == 0 and len(y) > 0:
        return y
    elif len(y) == 0 and len(x) > 0:
        return x
    elif len(y) == 0 and len(x) == 0:
        return []
    if isinstance(x, dict):
        x = list(x.items())
    if isinstance(y, dict):
        y = list(y.items())
    try:
        input_type = type(x[0])
    except IndexError:
        input_type = type(y[0])
    if input_type not in (str, int, float):
        first_set = set(map(tuple, x))
        secnd_set = set(map(tuple, y))
    else:
        first_set = set(x)
        secnd_set = set(y)
    longest = first_set if len(first_set) > len(secnd_set) else secnd_set
    shortest = secnd_set if len(first_set) > len(secnd_set) else first_set
    uniques = {i for i in longest if i not in shortest}
    for i in shortest:
        if i not in longest:
            uniques.add(i)
    if x_only:
        return [input_type(i) for i in uniques if input_type(i) in x]
    elif y_only:
        return [input_type(i) for i in uniques if input_type(i) in y]
    else:
        return [input_type(i) for i in uniques]
Retrieve a unique list of elements that do not exist in both x and y. Capable of parsing one-dimensional (flat) and two-dimensional (lists of lists) lists. :param x: list #1 :param y: list #2 :param x_only: Return only unique values from x :param y_only: Return only unique values from y :return: list of unique values
f11809:m0
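At its core the function above computes a symmetric set difference, hashing inner lists as tuples so that lists of lists work too. A compact sketch of that core idea (helper name C{symmetric_diff} is hypothetical; this ignores the dict and x_only/y_only handling of the original):

```python
def symmetric_diff(x, y):
    """Set-style symmetric difference that also accepts lists of lists."""
    def as_set(seq):
        # Inner lists are unhashable, so freeze them as tuples for set membership.
        return {tuple(i) if isinstance(i, list) else i for i in seq}
    xs, ys = as_set(x), as_set(y)
    # Thaw tuples back into lists so nested inputs round-trip to their original shape.
    return [list(i) if isinstance(i, tuple) else i for i in xs ^ ys]
```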
def differentiate(x, y):
    return diff(x, y)
Wrapper function for legacy imports of differentiate.
f11809:m1
def get_version(package_name, version_file='<STR_LIT>'):
    filename = os.path.join(os.path.dirname(__file__), package_name, version_file)
    with open(filename, 'rb') as fp:
        return fp.read().decode('utf8').split('=')[1].strip("<STR_LIT>")
Retrieve the package version from a version file in the package root.
f11812:m0
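The function above splits a version file on C{=} and strips the quoting, i.e. it expects content like C{__version__ = '1.2.3'}. A standalone sketch of just the parsing step, separated from the file I/O (the variable-assignment format is an assumption; C{parse_version} is a hypothetical name):

```python
def parse_version(text):
    """Extract the quoted version from a line like: __version__ = '1.2.3'"""
    return text.split("=")[1].strip().strip("'\"")
```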
def clean(self):
    if self.config.clean:
        logger.info('<STR_LIT>')
        self.execute(self.config.clean)
Clean the workspace
f11825:c0:m6
def publish(self):
    if self.config.publish:
        logger.info('<STR_LIT>')
        self.execute(self.config.publish)
Publish the current release to PyPI
f11825:c0:m9
def ansi(color, text):
    code = COLOR_CODES[color]
    return '<STR_LIT>'.format(code, text, RESET_TERM)
Wrap text in an ansi escape sequence
f11826:m0
def validate(self):
Override this method to implement initial validation
f11827:c0:m1
def check_output(*args, **kwargs):
    if hasattr(subprocess, '<STR_LIT>'):
        return subprocess.check_output(stderr=subprocess.STDOUT, universal_newlines=True,
                                       *args, **kwargs)
    else:
        process = subprocess.Popen(*args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                                   universal_newlines=True, **kwargs)
        output, _ = process.communicate()
        retcode = process.poll()
        if retcode:
            error = subprocess.CalledProcessError(retcode, args[0])
            error.output = output
            raise error
        return output
Compatibility wrapper for Python 2.6, which is missing subprocess.check_output
f11829:m0
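The fallback branch reimplements `check_output` with `Popen`; the same fallback as a standalone sketch, exercised with a trivial command run through the current interpreter:

```python
import subprocess
import sys

def check_output_compat(*args, **kwargs):
    """Popen-based fallback mirroring subprocess.check_output for
    interpreters that predate it (Python < 2.7): capture stdout and
    stderr together and raise CalledProcessError on a nonzero exit."""
    process = subprocess.Popen(*args, stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT,
                               universal_newlines=True, **kwargs)
    output, _ = process.communicate()
    retcode = process.poll()
    if retcode:
        error = subprocess.CalledProcessError(retcode, args[0])
        error.output = output
        raise error
    return output

# run a trivial command through the wrapper
out = check_output_compat([sys.executable, '-c', 'print("hello")'])
print(out.strip())  # hello
```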
def execute(self, command):
execute(command, verbose=self.verbose)<EOL>
Execute a command
f11830:c0:m1
def validate(self, dryrun=False):
raise NotImplementedError<EOL>
Ensure the working dir is a repository and there are no modified files
f11830:c0:m2
def commit(self, message):
raise NotImplementedError<EOL>
Commit all modified files
f11830:c0:m3
def tag(self, name, annotation=None):
raise NotImplementedError<EOL>
Create a tag
f11830:c0:m4
def push(self):
raise NotImplementedError<EOL>
Push changes to remote repository
f11830:c0:m5
def color(code):
return lambda t: '<STR_LIT>'.format(code, t)<EOL>
A simple ANSI color wrapper factory
f11832:m0
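The color-code and format-string literals in `ansi`/`color` are elided above; a minimal sketch assuming the standard ANSI SGR escape format (`\033[{code}m ... \033[0m`) and the standard foreground color codes:

```python
RESET = '\033[0m'

def color(code):
    """Factory returning a function that wraps text in an ANSI SGR
    escape sequence. Standard foreground codes: 31=red, 32=green,
    34=blue, 35=magenta/purple, 36=cyan, 37=white."""
    return lambda text: '\033[{}m{}{}'.format(code, text, RESET)

red, green, blue = color(31), color(32), color(34)
print(repr(red('error')))  # "'\\x1b[31merror\\x1b[0m'"
```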
def header(text):
print('<STR_LIT:U+0020>'.join((blue('<STR_LIT>'), cyan(text))))<EOL>sys.stdout.flush()<EOL>
Display a header
f11832:m1
def info(text, *args, **kwargs):
text = text.format(*args, **kwargs)<EOL>print('<STR_LIT:U+0020>'.join((purple('<STR_LIT>'), text)))<EOL>sys.stdout.flush()<EOL>
Display information
f11832:m2
def success(text):
print('<STR_LIT:U+0020>'.join((green('<STR_LIT>'), white(text))))<EOL>sys.stdout.flush()<EOL>
Display a success message
f11832:m3
def error(text):
print(red('<STR_LIT>'.format(text)))<EOL>sys.stdout.flush()<EOL>
Display an error message
f11832:m4
@task<EOL>def clean(ctx):
header(clean.__doc__)<EOL>with ctx.cd(ROOT):<EOL><INDENT>for pattern in CLEAN_PATTERNS:<EOL><INDENT>info(pattern)<EOL>ctx.run('<STR_LIT>'.format('<STR_LIT:U+0020>'.join(CLEAN_PATTERNS)))<EOL><DEDENT><DEDENT>
Cleanup all build artifacts
f11832:m6
@task<EOL>def deps(ctx):
header(deps.__doc__)<EOL>with ctx.cd(ROOT):<EOL><INDENT>ctx.run('<STR_LIT>', pty=True)<EOL><DEDENT>
Install or update development dependencies
f11832:m7
@task<EOL>def cover(ctx, report=False, verbose=False):
header(cover.__doc__)<EOL>cmd = [<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>]<EOL>if verbose:<EOL><INDENT>cmd.append('<STR_LIT>')<EOL><DEDENT>if report:<EOL><INDENT>cmd += [<EOL>'<STR_LIT>'.format(ROOT),<EOL>'<STR_LIT>'.format(ROOT),<EOL>'<STR_LIT>'<EOL>]<EOL><DEDENT>with ctx.cd(ROOT):<EOL><INDENT>ctx.run('<STR_LIT:U+0020>'.join(cmd), pty=True)<EOL><DEDENT>
Run tests suite with coverage
f11832:m9
@task<EOL>def qa(ctx):
header(qa.__doc__)<EOL>with ctx.cd(ROOT):<EOL><INDENT>info('<STR_LIT>')<EOL>flake8_results = ctx.run('<STR_LIT>', pty=True, warn=True)<EOL>if flake8_results.failed:<EOL><INDENT>error('<STR_LIT>')<EOL><DEDENT>else:<EOL><INDENT>success('<STR_LIT>')<EOL><DEDENT>info('<STR_LIT>')<EOL>readme_results = ctx.run('<STR_LIT>', pty=True, warn=True, hide=True)<EOL>if readme_results.failed:<EOL><INDENT>print(readme_results.stdout)<EOL>error('<STR_LIT>')<EOL><DEDENT>else:<EOL><INDENT>success('<STR_LIT>')<EOL><DEDENT><DEDENT>if flake8_results.failed or readme_results.failed:<EOL><INDENT>exit('<STR_LIT>', flake8_results.return_code or readme_results.return_code)<EOL><DEDENT>success('<STR_LIT>')<EOL>
Run a quality report
f11832:m10
@task<EOL>def tox(ctx):
header(tox.__doc__)<EOL>ctx.run('<STR_LIT>', pty=True)<EOL>
Run tests in all Python versions
f11832:m11
@task<EOL>def doc(ctx):
header(doc.__doc__)<EOL>with ctx.cd(os.path.join(ROOT, '<STR_LIT>')):<EOL><INDENT>ctx.run('<STR_LIT>', pty=True)<EOL><DEDENT>success('<STR_LIT>')<EOL>
Build the documentation
f11832:m12
@task<EOL>def completion(ctx):
header(completion.__doc__)<EOL>with ctx.cd(ROOT):<EOL><INDENT>ctx.run('<STR_LIT>', pty=True)<EOL><DEDENT>success('<STR_LIT>')<EOL>
Generate bash completion script
f11832:m13
@task<EOL>def dist(ctx):
header(dist.__doc__)<EOL>with ctx.cd(ROOT):<EOL><INDENT>ctx.run('<STR_LIT>', pty=True)<EOL><DEDENT>success('<STR_LIT>')<EOL>
Package for distribution
f11832:m14
def rst(filename):
return io.open(filename).read()<EOL>
Load rst file and sanitize it for PyPI. Remove unsupported GitHub tags: - code-block directive - Travis CI build badge
f11833:m0
def pip(name):
with io.open(os.path.join('<STR_LIT>', '<STR_LIT>'.format(name))) as f:<EOL><INDENT>return f.readlines()<EOL><DEDENT>
Parse requirements file
f11833:m1
def add_asset(self, asset='<STR_LIT>', amount=<NUM_LIT:0>, timestamp=datetime.utcnow()):
if amount < <NUM_LIT:0>:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>'.format(amount))<EOL><DEDENT>if asset not in self.assets:<EOL><INDENT>self.assets[asset] = amount<EOL><DEDENT>else:<EOL><INDENT>self.assets[asset] += amount<EOL><DEDENT>self.history.append({<EOL>'<STR_LIT>': str(timestamp),<EOL>'<STR_LIT>': asset,<EOL>'<STR_LIT>': +amount<EOL>})<EOL>
Adds the given amount of an asset to this portfolio. :param asset: the asset to add to the portfolio :param amount: the amount of the asset to add :param timestamp: datetime obj indicating the time the asset was added
f11838:c0:m1
def get_value(self, timestamp=datetime.utcnow(), asset=None):
value = <NUM_LIT:0><EOL>backdated_assets = self.assets.copy()<EOL>for trade in list(reversed(self.history)):<EOL><INDENT>if dateutil.parser.parse(trade['<STR_LIT>']) > timestamp:<EOL><INDENT>backdated_assets[trade['<STR_LIT>']] -= trade['<STR_LIT>']<EOL>if backdated_assets[trade['<STR_LIT>']] == <NUM_LIT:0>:<EOL><INDENT>del backdated_assets[trade['<STR_LIT>']]<EOL><DEDENT><DEDENT><DEDENT>if asset:<EOL><INDENT>if asset not in backdated_assets:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>.format(asset))<EOL><DEDENT>if asset != '<STR_LIT>':<EOL><INDENT>amount = backdated_assets[asset]<EOL>price = self.manager.get_price(asset, timestamp.date())<EOL>value = price * amount<EOL><DEDENT>else:<EOL><INDENT>return backdated_assets['<STR_LIT>']<EOL><DEDENT><DEDENT>else:<EOL><INDENT>for backdated_asset in backdated_assets:<EOL><INDENT>amount = backdated_assets[backdated_asset]<EOL>if backdated_asset != '<STR_LIT>':<EOL><INDENT>price = self.manager.get_price(<EOL>backdated_asset,<EOL>timestamp.date()<EOL>)<EOL>value += price * amount<EOL><DEDENT>else:<EOL><INDENT>value += amount<EOL><DEDENT><DEDENT><DEDENT>return value<EOL>
Get the value of the portfolio at a given time. :param asset: gets the value of a given asset in this portfolio if specified; if None, returns the portfolio's value :param timestamp: a datetime obj to check the portfolio's value at :returns: the value of the portfolio
f11838:c0:m2
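`get_value` reconstructs holdings by rewinding the signed-amount history; note also that defaults like `timestamp=datetime.utcnow()` in these signatures are evaluated once at import time, a common Python pitfall, so callers should pass an explicit timestamp. A minimal sketch of the same ledger-replay idea, replaying forward instead of rewinding (the entry keys here are hypothetical simplifications of the portfolio's history records):

```python
from datetime import datetime
import collections

def holdings_at(history, when):
    """Reconstruct asset holdings at `when` by summing every signed
    ledger entry dated at or before that moment. Each entry is
    {'timestamp': datetime, 'asset': str, 'amount': float}, with
    positive amounts for additions and negative for removals."""
    held = collections.defaultdict(float)
    for entry in history:
        if entry['timestamp'] <= when:
            held[entry['asset']] += entry['amount']
    # drop positions that have been fully sold off
    return {asset: amount for asset, amount in held.items() if amount != 0}

history = [
    {'timestamp': datetime(2018, 1, 1), 'asset': 'BTC', 'amount': 2.0},
    {'timestamp': datetime(2018, 2, 1), 'asset': 'BTC', 'amount': -1.5},
]
print(holdings_at(history, datetime(2018, 1, 15)))  # {'BTC': 2.0}
print(holdings_at(history, datetime(2018, 3, 1)))   # {'BTC': 0.5}
```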
def get_historical_value(<EOL>self,<EOL>start,<EOL>end=datetime.utcnow(),<EOL>freq='<STR_LIT:D>',<EOL>date_format='<STR_LIT>',<EOL>chart=False,<EOL>filename='<STR_LIT>'<EOL>):
date_range = pd.date_range(start, end, freq=freq)<EOL>to_remove = []<EOL>while len(date_range) > <NUM_LIT>:<EOL><INDENT>for index, date in enumerate(date_range):<EOL><INDENT>if index % <NUM_LIT:2> == <NUM_LIT:0> and index != <NUM_LIT:0>:<EOL><INDENT>to_remove.append(date)<EOL><DEDENT><DEDENT>date_range = date_range.drop(to_remove)<EOL>to_remove = []<EOL><DEDENT>values = []<EOL>for date in date_range:<EOL><INDENT>values.append(self.get_value(date))<EOL><DEDENT>time_series = pd.DataFrame(index=date_range, data={'<STR_LIT>': values})<EOL>if chart:<EOL><INDENT>axes = time_series.plot(rot=<NUM_LIT>)<EOL>axes.set_xlabel('<STR_LIT>')<EOL>axes.set_ylabel('<STR_LIT>')<EOL>plt.savefig(filename)<EOL><DEDENT>else:<EOL><INDENT>dates = time_series.index.strftime(date_format).tolist()<EOL>return {'<STR_LIT>': dates, '<STR_LIT>': values}<EOL><DEDENT>
Display a chart of this portfolio's value during the specified timeframe. :param start: datetime obj left bound of the time interval :param end: datetime obj right bound of the time interval :param freq: a time frequency within the interval :param date_format: the format of the date/x-axis labels :param chart: whether to display a chart or return data :returns: a dict of historical value data if chart is false
f11838:c0:m3
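The while-loop in `get_historical_value` repeatedly thins the date range, dropping every even-indexed element except the first, until it falls under a threshold (the numeric literal is elided above, so a hypothetical `max_points` parameter stands in for it). The same thinning as a standalone sketch:

```python
def downsample(points, max_points):
    """Repeatedly keep only index 0 and the odd-indexed elements
    until at most max_points remain, mirroring the
    index % 2 == 0 and index != 0 removal in the original loop."""
    while len(points) > max_points:
        points = [p for i, p in enumerate(points) if i == 0 or i % 2 == 1]
    return points

series = list(range(100))
print(len(downsample(series, 60)))  # 51: index 0 plus the 50 odd indices
```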
def remove_asset(self, asset='<STR_LIT>', amount=<NUM_LIT:0>, timestamp=datetime.utcnow()):
if amount < <NUM_LIT:0>:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>'.format(amount))<EOL><DEDENT>if self.get_value(timestamp, asset) < amount:<EOL><INDENT>raise ValueError('<STR_LIT>'<EOL>'<STR_LIT>'.format(amount, self.assets[asset]))<EOL><DEDENT>self.assets[asset] -= amount<EOL>self.history.append({<EOL>'<STR_LIT>': str(timestamp),<EOL>'<STR_LIT>': asset,<EOL>'<STR_LIT>': -amount<EOL>})<EOL>
Removes the given amount of an asset from this portfolio. :param asset: the asset to remove from the portfolio :param amount: the amount of the asset to remove :param timestamp: datetime obj indicating the time the asset was removed
f11838:c0:m4
def trade_asset(<EOL>self,<EOL>amount,<EOL>from_asset,<EOL>to_asset,<EOL>timestamp=datetime.utcnow()<EOL>):
if to_asset == '<STR_LIT>':<EOL><INDENT>price = <NUM_LIT:1>/self.manager.get_price(from_asset, timestamp.date())<EOL><DEDENT>else:<EOL><INDENT>price = self.manager.get_price(to_asset, timestamp.date())<EOL><DEDENT>self.remove_asset(from_asset, amount, timestamp)<EOL>self.add_asset(to_asset, amount * <NUM_LIT:1>/price, timestamp)<EOL>
Exchanges one asset for another. If it's a backdated trade, the historical exchange rate is used. :param amount: the amount of the asset to trade :param from_asset: the asset you are selling :param to_asset: the asset you are buying :param timestamp: datetime obj indicating the time the asset was traded
f11838:c0:m5
def retrieve_data(self):
<EOL>df = self.manager.get_historic_data(self.start.date(), self.end.date())<EOL>df.replace(<NUM_LIT:0>, np.nan, inplace=True)<EOL>return df<EOL>
Retrieves data as a DataFrame.
f11839:c0:m1
def get_min_risk(self, weights, cov_matrix):
def func(weights):<EOL><INDENT>"""<STR_LIT>"""<EOL>return np.matmul(np.matmul(weights.transpose(), cov_matrix), weights)<EOL><DEDENT>def func_deriv(weights):<EOL><INDENT>"""<STR_LIT>"""<EOL>return (<EOL>np.matmul(weights.transpose(), cov_matrix.transpose()) +<EOL>np.matmul(weights.transpose(), cov_matrix)<EOL>)<EOL><DEDENT>constraints = ({'<STR_LIT:type>': '<STR_LIT>', '<STR_LIT>': lambda weights: (weights.sum() - <NUM_LIT:1>)})<EOL>solution = self.solve_minimize(func, weights, constraints, func_deriv=func_deriv)<EOL>allocation = solution.x<EOL>return allocation<EOL>
Minimizes the variance of a portfolio.
f11839:c0:m2
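`get_min_risk` minimizes wᵀΣw subject to Σwᵢ = 1 numerically via SLSQP; for two assets the same problem has a closed form, which makes a useful sanity check (illustrative only, not the solver path used above):

```python
def min_variance_weights(var1, var2, cov12):
    """Closed-form minimum-variance weights for two assets:
    w1 = (var2 - cov12) / (var1 + var2 - 2*cov12), w2 = 1 - w1.
    Obtained by minimising w1^2*var1 + w2^2*var2 + 2*w1*w2*cov12
    subject to w1 + w2 = 1 (set the derivative in w1 to zero)."""
    w1 = (var2 - cov12) / (var1 + var2 - 2 * cov12)
    return w1, 1 - w1

# uncorrelated assets with equal variance split evenly
print(min_variance_weights(1.0, 1.0, 0.0))  # (0.5, 0.5)
```

The lower-variance asset gets the larger weight: with variances 1 and 4 (uncorrelated), the first asset receives 80% of the allocation.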
def get_max_return(self, weights, returns):
def func(weights):<EOL><INDENT>"""<STR_LIT>"""<EOL>return np.dot(weights, returns.values) * -<NUM_LIT:1><EOL><DEDENT>constraints = ({'<STR_LIT:type>': '<STR_LIT>', '<STR_LIT>': lambda weights: (weights.sum() - <NUM_LIT:1>)})<EOL>solution = self.solve_minimize(func, weights, constraints)<EOL>max_return = solution.fun * -<NUM_LIT:1><EOL>return max_return<EOL>
Maximizes the returns of a portfolio.
f11839:c0:m3
def efficient_frontier(<EOL>self,<EOL>returns,<EOL>cov_matrix,<EOL>min_return,<EOL>max_return,<EOL>count<EOL>):
columns = [coin for coin in self.SUPPORTED_COINS]<EOL>values = pd.DataFrame(columns=columns)<EOL>weights = [<NUM_LIT:1>/len(self.SUPPORTED_COINS)] * len(self.SUPPORTED_COINS)<EOL>def func(weights):<EOL><INDENT>"""<STR_LIT>"""<EOL>return np.matmul(np.matmul(weights.transpose(), cov_matrix), weights)<EOL><DEDENT>def func_deriv(weights):<EOL><INDENT>"""<STR_LIT>"""<EOL>return (<EOL>np.matmul(weights.transpose(), cov_matrix.transpose()) +<EOL>np.matmul(weights.transpose(), cov_matrix)<EOL>)<EOL><DEDENT>for point in np.linspace(min_return, max_return, count):<EOL><INDENT>constraints = (<EOL>{'<STR_LIT:type>': '<STR_LIT>', '<STR_LIT>': lambda weights: (weights.sum() - <NUM_LIT:1>)},<EOL>{'<STR_LIT:type>': '<STR_LIT>', '<STR_LIT>': lambda weights, i=point: (<EOL>np.dot(weights, returns.values) - i<EOL>)}<EOL>)<EOL>solution = self.solve_minimize(func, weights, constraints, func_deriv=func_deriv)<EOL>columns = {}<EOL>for index, coin in enumerate(self.SUPPORTED_COINS):<EOL><INDENT>columns[coin] = math.floor(solution.x[index] * <NUM_LIT:100> * <NUM_LIT:100>) / <NUM_LIT:100><EOL><DEDENT>values = values.append(columns, ignore_index=True)<EOL><DEDENT>return values<EOL>
Returns a DataFrame of efficient portfolio allocations for `count` risk indices.
f11839:c0:m4
def solve_minimize(<EOL>self,<EOL>func,<EOL>weights,<EOL>constraints,<EOL>lower_bound=<NUM_LIT:0.0>,<EOL>upper_bound=<NUM_LIT:1.0>,<EOL>func_deriv=False<EOL>):
bounds = ((lower_bound, upper_bound), ) * len(self.SUPPORTED_COINS)<EOL>return minimize(<EOL>fun=func, x0=weights, jac=func_deriv, bounds=bounds,<EOL>constraints=constraints, method='<STR_LIT>', options={'<STR_LIT>': False}<EOL>)<EOL>
Returns the solution to a minimization problem.
f11839:c0:m5
def allocate(self):
df = self.manager.get_historic_data()[self.SUPPORTED_COINS]<EOL>change_columns = []<EOL>for column in df:<EOL><INDENT>if column in self.SUPPORTED_COINS:<EOL><INDENT>change_column = '<STR_LIT>'.format(column)<EOL>values = pd.Series(<EOL>(df[column].shift(-<NUM_LIT:1>) - df[column]) /<EOL>-df[column].shift(-<NUM_LIT:1>)<EOL>).values<EOL>df[change_column] = values<EOL>change_columns.append(change_column)<EOL><DEDENT><DEDENT>columns = change_columns<EOL>risks = df[columns].apply(np.nanvar, axis=<NUM_LIT:0>)<EOL>returns = df[columns].apply(np.nanmean, axis=<NUM_LIT:0>)<EOL>cov_matrix = df[columns].cov()<EOL>cov_matrix.values[[np.arange(len(self.SUPPORTED_COINS))] * <NUM_LIT:2>] = df[columns].apply(np.nanvar, axis=<NUM_LIT:0>)<EOL>weights = np.array([<NUM_LIT:1>/len(self.SUPPORTED_COINS)] * len(self.SUPPORTED_COINS)).reshape(len(self.SUPPORTED_COINS), <NUM_LIT:1>)<EOL>min_risk = self.get_min_risk(weights, cov_matrix)<EOL>min_return = np.dot(min_risk, returns.values)<EOL>max_return = self.get_max_return(weights, returns)<EOL>frontier = self.efficient_frontier(<EOL>returns, cov_matrix, min_return, max_return, <NUM_LIT:6><EOL>)<EOL>return frontier<EOL>
Returns the efficient frontier of portfolio allocations as a DataFrame.
f11839:c0:m6
@property<EOL><INDENT>def base_point(self):<DEDENT>
return JacobianPoint(self, self.Gx, self.Gy)<EOL>
Returns the base point for this curve. Returns: JacobianPoint: The base point.
f11841:c0:m1
def inverse(self, N):
if N == <NUM_LIT:0>:<EOL><INDENT>return <NUM_LIT:0><EOL><DEDENT>lm, hm = <NUM_LIT:1>, <NUM_LIT:0><EOL>low, high = N % self.P, self.P<EOL>while low > <NUM_LIT:1>:<EOL><INDENT>r = high//low<EOL>nm, new = hm - lm * r, high - low * r<EOL>lm, low, hm, high = nm, new, lm, low<EOL><DEDENT>return lm % self.P<EOL>
Returns the modular inverse of an integer with respect to the field characteristic, P. Use the Extended Euclidean Algorithm: https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
f11841:c0:m2
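The loop in `inverse` is the iterative extended Euclidean algorithm; a standalone sketch that can be checked against the defining property a · a⁻¹ ≡ 1 (mod p):

```python
def mod_inverse(a, p):
    """Modular inverse via the extended Euclidean algorithm:
    returns x such that (a * x) % p == 1, or 0 when a == 0
    (matching the original's convention)."""
    if a == 0:
        return 0
    lm, hm = 1, 0
    low, high = a % p, p
    while low > 1:
        ratio = high // low
        lm, low, hm, high = hm - lm * ratio, high - low * ratio, lm, low
    return lm % p

P = 2**256 - 2**32 - 977  # the secp256k1 field prime
x = 123456789
print((x * mod_inverse(x, P)) % P)  # 1
```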
def is_on_curve(self, point):
X, Y = point.X, point.Y<EOL>return (<EOL>pow(Y, <NUM_LIT:2>, self.P) - pow(X, <NUM_LIT:3>, self.P) - self.a * X - self.b<EOL>) % self.P == <NUM_LIT:0><EOL>
Checks whether a point is on the curve. Args: point (AffinePoint): Point to be checked. Returns: bool: True if point is on the curve, False otherwise.
f11841:c0:m3
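For secp256k1, the curve this class targets (a = 0, b = 7), the check is y² ≡ x³ + 7 (mod P); a self-contained sketch verified against the curve's standard generator point:

```python
# secp256k1 domain parameters (a = 0, b = 7)
P = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def is_on_curve(x, y, a=0, b=7):
    """True when the point satisfies y^2 = x^3 + a*x + b (mod P)."""
    return (pow(y, 2, P) - pow(x, 3, P) - a * x - b) % P == 0

print(is_on_curve(Gx, Gy))       # True
print(is_on_curve(Gx, Gy + 1))   # False
```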
def generate_private_key(self):
random_string = base64.b64encode(os.urandom(<NUM_LIT>)).decode('<STR_LIT:utf-8>')<EOL>binary_data = bytes(random_string, '<STR_LIT:utf-8>')<EOL>hash_object = hashlib.sha256(binary_data)<EOL>message_digest_bin = hash_object.digest()<EOL>message_digest_hex = binascii.hexlify(message_digest_bin)<EOL>return message_digest_hex<EOL>
Generates a private key from OS-provided randomness. SHA-256 is a member of the SHA-2 cryptographic hash functions designed by the NSA. SHA stands for Secure Hash Algorithm. The random data is converted to bytes and hashed with SHA-256. The binary output is converted to a hex representation. Returns: bytes: The hexadecimal representation of the hashed binary data.
f11841:c1:m1
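The method above hashes OS randomness down to a 256-bit value (the `os.urandom` byte count is elided; 32 bytes is assumed here). A minimal standalone sketch using only the standard library:

```python
import binascii
import hashlib
import os

def generate_private_key():
    """Hash OS-provided randomness (32 bytes assumed) with SHA-256
    and return the hex digest as bytes. Any uniformly random 256-bit
    value below the curve order is a usable secp256k1 private key."""
    digest = hashlib.sha256(os.urandom(32)).digest()
    return binascii.hexlify(digest)

key = generate_private_key()
print(len(key))  # 64 hex characters
```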
def generate_public_key(self):
private_key = int(self.private_key, <NUM_LIT:16>)<EOL>if private_key >= self.N:<EOL><INDENT>raise Exception('<STR_LIT>')<EOL><DEDENT>G = JacobianPoint(self.Gx, self.Gy, <NUM_LIT:1>)<EOL>public_key = G * private_key<EOL>x_hex = '<STR_LIT>'.format(public_key.X, <NUM_LIT:64>)<EOL>y_hex = '<STR_LIT>'.format(public_key.Y, <NUM_LIT:64>)<EOL>return '<STR_LIT>' + x_hex + y_hex<EOL>
Generates a public key from the hex-encoded private key using elliptic curve cryptography. The private key is multiplied by a predetermined point on the elliptic curve called the generator point, G, resulting in the corresponding public key. The generator point is always the same for all Bitcoin users. Jacobian coordinates are used to represent the elliptic curve point G. https://en.wikibooks.org/wiki/Cryptography/Prime_Curve/Jacobian_Coordinates The exponentiation by squaring (also known as double-and-add) method is used for the elliptic curve multiplication that results in the public key. https://en.wikipedia.org/wiki/Exponentiation_by_squaring Bitcoin public keys are 65 bytes. The first byte is 0x04, the next 32 bytes correspond to the X coordinate, and the last 32 bytes correspond to the Y coordinate. They are typically encoded as a 130-character hex string. Args: private_key (bytes): UTF-8 encoded hexadecimal Returns: str: The public key in hexadecimal representation.
f11841:c1:m2
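The double-and-add multiplication the docstring describes can be sketched in affine coordinates (the actual code uses Jacobian coordinates to avoid the per-step modular inversions; this sketch also ignores the point at infinity, which cannot arise for a generic scalar below the group order):

```python
P = 2**256 - 2**32 - 977  # secp256k1 field prime
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def inv(a, p=P):
    return pow(a, p - 2, p)  # Fermat inverse (p is prime)

def add(p1, p2):
    """Affine addition on y^2 = x^3 + 7: tangent slope when
    doubling, chord slope otherwise (infinity not handled)."""
    (x1, y1), (x2, y2) = p1, p2
    if p1 == p2:
        lam = 3 * x1 * x1 * inv(2 * y1) % P    # tangent (a = 0)
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P     # chord
    x3 = (lam * lam - x1 - x2) % P
    y3 = (lam * (x1 - x3) - y1) % P
    return x3, y3

def scalar_mult(k, point):
    """Double-and-add: walk k's bits high-to-low, doubling at each
    step and adding the base point whenever the bit is set."""
    result = None
    for bit in bin(k)[2:]:
        if result is not None:
            result = add(result, result)
        if bit == '1':
            result = point if result is None else add(result, point)
    return result

pub = scalar_mult(0xC0FFEE, (Gx, Gy))
# the resulting public point still satisfies the curve equation
print((pub[1] ** 2 - pub[0] ** 3 - 7) % P == 0)  # True
```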
def generate_address(self):
binary_pubkey = binascii.unhexlify(self.public_key)<EOL>binary_digest_sha256 = hashlib.sha256(binary_pubkey).digest()<EOL>binary_digest_ripemd160 = hashlib.new('<STR_LIT>', binary_digest_sha256).digest()<EOL>binary_version_byte = bytes([<NUM_LIT:0>])<EOL>binary_with_version_key = binary_version_byte + binary_digest_ripemd160<EOL>checksum_intermed = hashlib.sha256(binary_with_version_key).digest()<EOL>checksum_intermed = hashlib.sha256(checksum_intermed).digest()<EOL>checksum = checksum_intermed[:<NUM_LIT:4>]<EOL>binary_address = binary_digest_ripemd160 + checksum<EOL>leading_zero_bytes = <NUM_LIT:0><EOL>for char in binary_address:<EOL><INDENT>if char == <NUM_LIT:0>:<EOL><INDENT>leading_zero_bytes += <NUM_LIT:1><EOL><DEDENT><DEDENT>inp = binary_address + checksum<EOL>result = <NUM_LIT:0><EOL>while len(inp) > <NUM_LIT:0>:<EOL><INDENT>result *= <NUM_LIT><EOL>result += inp[<NUM_LIT:0>]<EOL>inp = inp[<NUM_LIT:1>:]<EOL><DEDENT>result_bytes = bytes()<EOL>while result > <NUM_LIT:0>:<EOL><INDENT>curcode = '<STR_LIT>'[result % <NUM_LIT>]<EOL>result_bytes = bytes([ord(curcode)]) + result_bytes<EOL>result //= <NUM_LIT><EOL><DEDENT>pad_size = <NUM_LIT:0> - len(result_bytes)<EOL>padding_element = b'<STR_LIT:1>'<EOL>if pad_size > <NUM_LIT:0>:<EOL><INDENT>result_bytes = padding_element * pad_size + result_bytes<EOL><DEDENT>result = '<STR_LIT>'.join([chr(y) for y in result_bytes])<EOL>address = '<STR_LIT:1>' * leading_zero_bytes + result<EOL>return address<EOL>
Creates a Bitcoin address from the public key. Details of the steps for creating the address are outlined in this link: https://en.bitcoin.it/wiki/Technical_background_of_version_1_Bitcoin_addresses The last step is Base58Check encoding, which is similar to Base64 encoding but slightly different, producing a more human-readable string where '1' and 'l' won't get confused. More on Base58Check encoding here: https://en.bitcoin.it/wiki/Base58Check_encoding
f11841:c1:m3
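The manual digit loop above can be condensed with `int.from_bytes`; a sketch of the Base58Check step using the standard Bitcoin alphabet (the 20-byte hash in the demo is a SHA-256 truncation standing in for RIPEMD-160, since the `'ripemd160'` algorithm is not guaranteed to be available in every `hashlib` build):

```python
import hashlib

# the standard Bitcoin Base58 alphabet (no 0, O, I, or l)
ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def base58check(payload):
    """Base58Check: append a 4-byte double-SHA256 checksum, encode
    the whole thing in base 58, and preserve each leading zero byte
    as a leading '1' character."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, 'big')
    encoded = ''
    while n > 0:
        n, rem = divmod(n, 58)
        encoded = ALPHABET[rem] + encoded
    pad = len(data) - len(data.lstrip(b'\x00'))
    return '1' * pad + encoded

# version byte 0x00 plus a stand-in 20-byte hash160
addr = base58check(bytes([0]) + hashlib.sha256(b'demo').digest()[:20])
print(addr[0])  # '1': the 0x00 version byte maps to a leading '1'
```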
def double(self):
X1, Y1, Z1 = self.X, self.Y, self.Z<EOL>if Y1 == <NUM_LIT:0>:<EOL><INDENT>return POINT_AT_INFINITY<EOL><DEDENT>S = (<NUM_LIT:4> * X1 * Y1 ** <NUM_LIT:2>) % self.P<EOL>M = (<NUM_LIT:3> * X1 ** <NUM_LIT:2> + self.a * Z1 ** <NUM_LIT:4>) % self.P<EOL>X3 = (M ** <NUM_LIT:2> - <NUM_LIT:2> * S) % self.P<EOL>Y3 = (M * (S - X3) - <NUM_LIT:8> * Y1 ** <NUM_LIT:4>) % self.P<EOL>Z3 = (<NUM_LIT:2> * Y1 * Z1) % self.P<EOL>return JacobianPoint(X3, Y3, Z3)<EOL>
Doubles this point. Returns: JacobianPoint: The point corresponding to `2 * self`.
f11841:c2:m5
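The doubling formulas above (S = 4XY², M = 3X² + aZ⁴, X₃ = M² − 2S, Y₃ = M(S − X₃) − 8Y⁴, Z₃ = 2YZ) can be checked by doubling the secp256k1 generator and mapping back to affine coordinates via (X/Z², Y/Z³):

```python
P = 2**256 - 2**32 - 977  # secp256k1 field prime (a = 0, b = 7)
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def jacobian_double(X1, Y1, Z1):
    """One doubling step in Jacobian coordinates, as in the method
    above; the a*Z^4 term of M vanishes because a = 0 here."""
    S = (4 * X1 * Y1 * Y1) % P
    M = (3 * X1 * X1) % P
    X3 = (M * M - 2 * S) % P
    Y3 = (M * (S - X3) - 8 * pow(Y1, 4, P)) % P
    Z3 = (2 * Y1 * Z1) % P
    return X3, Y3, Z3

def to_affine(X, Y, Z):
    """Map Jacobian (X, Y, Z) back to affine (X/Z^2, Y/Z^3)."""
    z_inv = pow(Z, P - 2, P)  # Fermat inverse
    return (X * pow(z_inv, 2, P)) % P, (Y * pow(z_inv, 3, P)) % P

x2, y2 = to_affine(*jacobian_double(Gx, Gy, 1))
# 2G still satisfies y^2 = x^3 + 7 (mod P)
print((y2 * y2 - pow(x2, 3, P) - 7) % P == 0)  # True
```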