desc: string, lengths 3–26.7k
decl: string, lengths 11–7.89k
bodies: string, lengths 8–553k
'Upload a file to a key into a bucket on GS, using the GS resumable upload protocol. :type key: :class:`boto.s3.key.Key` or subclass :param key: The Key object to which data is to be uploaded :type fp: file-like object :param fp: The file pointer to upload :type headers: dict :param headers: The headers to pass along with ...
def send_file(self, key, fp, headers, cb=None, num_cb=10, hash_algs=None):
if not headers:
    headers = {}
CT = 'Content-Type'
if CT in headers and headers[CT] is None:
    del headers[CT]
headers['User-Agent'] = UserAgent
if isinstance(fp, KeyFile):
    file_length = fp.getkey().size
else:
    fp.seek(0, os.SEEK_END)
    file_length = fp.t...
'Creates a new bucket. By default it\'s located in the USA. You can pass Location.EU to create a bucket in the EU. You can also pass a LocationConstraint for where the bucket should be located, and a StorageClass describing how the data should be stored. :type bucket_name: string :param bucket_name: The name of the new b...
def create_bucket(self, bucket_name, headers=None, location=Location.DEFAULT, policy=None, storage_class='STANDARD'):
check_lowercase_bucketname(bucket_name)
if policy:
    if headers:
        headers[self.provider.acl_header] = policy
    else:
        headers = {self.provider.acl_header: policy}
if not location:
    location = Location.DEFAULT
location_elem = ('<LocationConstraint>%s</LocationCo...
'Retrieves a bucket by name. If the bucket does not exist, an ``S3ResponseError`` will be raised. If you are unsure if the bucket exists or not, you can use the ``S3Connection.lookup`` method, which will either return a valid bucket or ``None``. :type bucket_name: string :param bucket_name: The name of the bucket :type...
def get_bucket(self, bucket_name, validate=True, headers=None):
bucket = self.bucket_class(self, bucket_name)
if validate:
    bucket.get_all_keys(headers, maxkeys=0)
return bucket
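The docstring above contrasts `get_bucket` (raises on a missing bucket) with `lookup` (returns `None`). The sketch below mirrors that relationship with stub classes; `FakeBucket`, `FakeConnection`, and the in-memory `existing` set are hypothetical stand-ins, not boto's real implementation, but the `validate`/`maxkeys=0` probe follows the body shown above.

```python
class S3ResponseError(Exception):
    pass

class FakeBucket:
    def __init__(self, conn, name):
        self.connection, self.name = conn, name

    def get_all_keys(self, headers=None, maxkeys=0):
        # A zero-key listing: a cheap round trip whose only purpose is
        # to raise if the bucket does not exist.
        if self.name not in self.connection.existing:
            raise S3ResponseError('404 NoSuchBucket: %s' % self.name)
        return []

class FakeConnection:
    bucket_class = FakeBucket

    def __init__(self, existing):
        self.existing = existing

    def get_bucket(self, bucket_name, validate=True, headers=None):
        bucket = self.bucket_class(self, bucket_name)
        if validate:
            bucket.get_all_keys(headers, maxkeys=0)
        return bucket

    def lookup(self, bucket_name, headers=None):
        # The docstring's suggested alternative: None instead of an exception.
        try:
            return self.get_bucket(bucket_name, headers=headers)
        except S3ResponseError:
            return None

conn = FakeConnection({'my-bucket'})
print(conn.lookup('my-bucket') is not None, conn.lookup('missing'))  # True None
```

Passing `validate=False` skips the probe entirely, trading an extra request for the risk of errors surfacing later.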
'Open this key for reading :type headers: dict :param headers: Headers to pass in the web request :type query_args: string :param query_args: Arguments to pass in the query string (ie, \'torrent\') :type override_num_retries: int :param override_num_retries: If not None will override configured num_retries parameter fo...
def open_read(self, headers=None, query_args='', override_num_retries=None, response_headers=None):
if self.generation:
    if query_args:
        query_args += '&'
    query_args += 'generation=%s' % self.generation
super(Key, self).open_read(headers=headers, query_args=query_args, override_num_retries=override_num_retries, response_headers=response_headers)
'Retrieve an object from GCS using the name of the Key object as the key in GCS. Write the contents of the object to the file pointed to by \'fp\'. :type fp: file-like object :param fp: :type headers: dict :param headers: additional HTTP headers that will be sent with the GET request. :type cb: function :param cb: a c...
def get_contents_to_file(self, fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None, hash_algs=None):
if self.bucket is not None:
    if res_download_handler:
        res_download_handler.get_file(self, fp, headers, cb, num_cb, torrent=torrent, version_id=version_id, hash_algs=hash_algs)
    else:
        self.get_file(fp, headers, cb, num_cb, torrent=torrent, version_id=version_id, response_heade...
':type fp: file :param fp: File pointer to the file to hash. The file pointer will be reset to the same position before the method returns. :type algorithm: zero-argument constructor for hash objects that implements update() and digest() (e.g. hashlib.md5) :type size: int :param size: (optional) The Maximum number of b...
def compute_hash(self, fp, algorithm, size=None):
hex_digest, b64_digest, data_size = compute_hash(fp, size=size, hash_algorithm=algorithm)
self.size = data_size
return (hex_digest, b64_digest)
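The documented contract is that the file pointer is restored to its original position after hashing. A minimal standalone sketch of that behaviour (boto's real helper, `boto.utils.compute_hash`, additionally reads in chunks; the default `algorithm=hashlib.md5` here is just for the demo):

```python
import base64
import hashlib
import io

def compute_hash(fp, algorithm=hashlib.md5, size=None):
    # Hash the readable bytes, then seek back so the caller's position
    # is unchanged, as the docstring promises.
    start = fp.tell()
    data = fp.read() if size is None else fp.read(size)
    digest = algorithm(data)
    fp.seek(start)
    return (digest.hexdigest(),
            base64.b64encode(digest.digest()).decode('ascii'),
            len(data))

fp = io.BytesIO(b'hello world')
hex_digest, b64_digest, n = compute_hash(fp)
print(hex_digest)  # 5eb63bbbe01eeed093cb22bb8f5acdc3
```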
'Upload a file to GCS. :type fp: file :param fp: The file pointer to upload. The file pointer must point at the offset from which you wish to upload. ie. if uploading the full file, it should point at the start of the file. Normally when a file is opened for reading, the fp will point at the first byte. See the bytes p...
def send_file(self, fp, headers=None, cb=None, num_cb=10, query_args=None, chunked_transfer=False, size=None, hash_algs=None):
self._send_file_internal(fp, headers=headers, cb=cb, num_cb=num_cb, query_args=query_args, chunked_transfer=chunked_transfer, size=size, hash_algs=hash_algs)
'Convenience method that provides a quick way to add an email grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT\'s the new ACL back to GS. :type permission: string :param permission: The permission being granted. Should ...
def add_email_grant(self, permission, email_address):
acl = self.get_acl()
acl.add_email_grant(permission, email_address)
self.set_acl(acl)
'Convenience method that provides a quick way to add a canonical user grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT\'s the new ACL back to GS. :type permission: string :param permission: The permission being granted....
def add_user_grant(self, permission, user_id):
acl = self.get_acl()
acl.add_user_grant(permission, user_id)
self.set_acl(acl)
'Convenience method that provides a quick way to add an email group grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT\'s the new ACL back to GS. :type permission: string :param permission: The permission being granted. S...
def add_group_email_grant(self, permission, email_address, headers=None):
acl = self.get_acl(headers=headers)
acl.add_group_email_grant(permission, email_address)
self.set_acl(acl, headers=headers)
'Convenience method that provides a quick way to add a canonical group grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT\'s the new ACL back to GS. :type permission: string :param permission: The permission being granted...
def add_group_grant(self, permission, group_id):
acl = self.get_acl()
acl.add_group_grant(permission, group_id)
self.set_acl(acl)
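All four grant helpers above follow the same read-modify-write pattern: GET the current ACL, append one grant, PUT the whole ACL back. A sketch with stub classes (`FakeACL` and `FakeKey` are hypothetical; boto's real ACL object lives in `boto.gs.acl`):

```python
class FakeACL:
    """Minimal stand-in for an ACL object: an ordered list of grants."""
    def __init__(self):
        self.grants = []

    def add_email_grant(self, permission, email_address):
        self.grants.append(('UserByEmail', permission, email_address))

class FakeKey:
    def __init__(self):
        self._acl = FakeACL()

    def get_acl(self):
        return self._acl

    def set_acl(self, acl):
        self._acl = acl

    def add_email_grant(self, permission, email_address):
        # Read-modify-write: fetch, append one grant, put back.
        acl = self.get_acl()
        acl.add_email_grant(permission, email_address)
        self.set_acl(acl)

key = FakeKey()
key.add_email_grant('READ', 'someone@example.com')
print(key.get_acl().grants)
```

Note the pattern is not atomic: two concurrent callers can each fetch the same ACL and one PUT will silently overwrite the other's grant.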
'Store an object in GS using the name of the Key object as the key in GS and the contents of the file pointed to by \'fp\' as the contents. :type fp: file :param fp: The file whose contents are to be uploaded. :type headers: dict :param headers: (optional) Additional HTTP headers to be sent with the PUT request. :type ...
def set_contents_from_file(self, fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, res_upload_handler=None, size=None, rewind=False, if_generation=None):
provider = self.bucket.connection.provider
if res_upload_handler and size:
    raise BotoClientError('"size" param not supported for resumable uploads.')
headers = headers or {}
if policy:
    headers[provider.acl_header] = policy
if rewind:
    fp.seek(0, os.SEEK_S...
'Store an object in GS using the name of the Key object as the key in GS and the contents of the file named by \'filename\'. See set_contents_from_file method for details about the parameters. :type filename: string :param filename: The name of the file that you want to put onto GS. :type headers: dict :param headers: ...
def set_contents_from_filename(self, filename, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=None, res_upload_handler=None, if_generation=None):
self.local_hashes = {}
with open(filename, 'rb') as fp:
    self.set_contents_from_file(fp, headers, replace, cb, num_cb, policy, md5, res_upload_handler, if_generation=if_generation)
'Store an object in GCS using the name of the Key object as the key in GCS and the string \'s\' as the contents. See set_contents_from_file method for details about the parameters. :type headers: dict :param headers: Additional headers to pass along with the request to AWS. :type replace: bool :param replace: If True, ...
def set_contents_from_string(self, s, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, if_generation=None):
self.md5 = None
self.base64md5 = None
fp = StringIO(get_utf8_value(s))
r = self.set_contents_from_file(fp, headers, replace, cb, num_cb, policy, md5, if_generation=if_generation)
fp.close()
return r
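The body above reuses the file-based upload path by encoding the string to UTF-8 and wrapping it in an in-memory file object. A standalone sketch of that adapter (`set_contents_from_string` and `upload_fn` here are hypothetical stand-ins, not boto's bound methods):

```python
import io

def set_contents_from_string(upload_fn, s):
    # Encode text to UTF-8 bytes, wrap in an in-memory file object, and
    # hand it to a file-based upload path; upload_fn stands in for a
    # bound set_contents_from_file.
    data = s.encode('utf-8') if isinstance(s, str) else s
    fp = io.BytesIO(data)
    try:
        return upload_fn(fp)
    finally:
        fp.close()

received = []
result = set_contents_from_string(
    lambda fp: (received.append(fp.read()), len(received[0]))[1], 'héllo')
print(result)  # 6: 'é' is two bytes in UTF-8
```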
'Store an object using the name of the Key object as the key in cloud and the contents of the data stream pointed to by \'fp\' as the contents. The stream object is not seekable and total size is not known. This has the implication that we can\'t specify the Content-Length and Content-MD5 in the header. So for huge uploa...
def set_contents_from_stream(self, *args, **kwargs):
if_generation = kwargs.pop('if_generation', None)
if if_generation is not None:
    headers = kwargs.get('headers', {})
    headers['x-goog-if-generation-match'] = str(if_generation)
    kwargs['headers'] = headers
super(Key, self).set_contents_from_stream(*args, **kwargs)
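The override above pops the GCS-specific `if_generation` kwarg and folds it into the headers dict before delegating to the generic parent method. That kwarg-rewriting step in isolation (`add_generation_match` is a hypothetical helper extracted for illustration):

```python
def add_generation_match(kwargs):
    # Pop the GCS-only kwarg and translate it into the precondition
    # header the parent (provider-agnostic) call will send.
    if_generation = kwargs.pop('if_generation', None)
    if if_generation is not None:
        headers = kwargs.get('headers', {})
        headers['x-goog-if-generation-match'] = str(if_generation)
        kwargs['headers'] = headers
    return kwargs

print(add_generation_match({'if_generation': 5}))
# {'headers': {'x-goog-if-generation-match': '5'}}
```

Setting `if_generation=0` makes the upload succeed only if the object does not yet exist, which the docstring calls out as the create-only case.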
'Sets the ACL for this object. :type acl_or_str: string or :class:`boto.gs.acl.ACL` :param acl_or_str: A canned ACL string (see :data:`~.gs.acl.CannedACLStrings`) or an ACL object. :type headers: dict :param headers: Additional headers to set during the request. :type generation: int :param generation: If specified, se...
def set_acl(self, acl_or_str, headers=None, generation=None, if_generation=None, if_metageneration=None):
if self.bucket is not None:
    self.bucket.set_acl(acl_or_str, self.name, headers=headers, generation=generation, if_generation=if_generation, if_metageneration=if_metageneration)
'Returns the ACL of this object. :param dict headers: Additional headers to set during the request. :param int generation: If specified, gets the ACL for a specific generation of a versioned object. If not specified, the current version is returned. :rtype: :class:`.gs.acl.ACL`'
def get_acl(self, headers=None, generation=None):
if self.bucket is not None:
    return self.bucket.get_acl(self.name, headers=headers, generation=generation)
'Returns the ACL string of this object. :param dict headers: Additional headers to set during the request. :param int generation: If specified, gets the ACL for a specific generation of a versioned object. If not specified, the current version is returned. :rtype: str'
def get_xml_acl(self, headers=None, generation=None):
if self.bucket is not None:
    return self.bucket.get_xml_acl(self.name, headers=headers, generation=generation)
'Sets this object\'s ACL to an XML string. :type acl_str: string :param acl_str: A string containing the ACL XML. :type headers: dict :param headers: Additional headers to set during the request. :type generation: int :param generation: If specified, sets the ACL for a specific generation of a versioned object. If not...
def set_xml_acl(self, acl_str, headers=None, generation=None, if_generation=None, if_metageneration=None):
if self.bucket is not None:
    return self.bucket.set_xml_acl(acl_str, self.name, headers=headers, generation=generation, if_generation=if_generation, if_metageneration=if_metageneration)
'Sets this object\'s ACL using a predefined (canned) value. :type acl_str: string :param acl_str: A canned ACL string. See :data:`~.gs.acl.CannedACLStrings`. :type headers: dict :param headers: Additional headers to set during the request. :type generation: int :param generation: If specified, sets the ACL for a speci...
def set_canned_acl(self, acl_str, headers=None, generation=None, if_generation=None, if_metageneration=None):
if self.bucket is not None:
    return self.bucket.set_canned_acl(acl_str, self.name, headers=headers, generation=generation, if_generation=if_generation, if_metageneration=if_metageneration)
'Create a new object from a sequence of existing objects. The content of the object representing this Key will be the concatenation of the given object sequence. For more detail, visit https://developers.google.com/storage/docs/composite-objects :type components: list of Keys :param components: List of gs.Keys representi...
def compose(self, components, content_type=None, headers=None):
compose_req = []
for key in components:
    if key.bucket.name != self.bucket.name:
        raise BotoClientError('GCS does not support inter-bucket composing')
    generation_tag = ''
    if key.generation:
        generation_tag = '<Generation>%s</Generation>' % str(key.g...
'Verify parse level for a given tag.'
def validateParseLevel(self, tag, level):
if self.parse_level != level:
    raise InvalidCorsError('Invalid tag %s at parse level %d: ' % (tag, self.parse_level))
'SAX XML logic for parsing new element found.'
def startElement(self, name, attrs, connection):
if name == CORS_CONFIG:
    self.validateParseLevel(name, 0)
    self.parse_level += 1
elif name == CORS:
    self.validateParseLevel(name, 1)
    self.parse_level += 1
elif name in self.legal_collections:
    self.validateParseLevel(name, 2)
    self.parse_level += 1
    s...
'SAX XML logic for parsing new element found.'
def endElement(self, name, value, connection):
if name == CORS_CONFIG:
    self.validateParseLevel(name, 1)
    self.parse_level -= 1
elif name == CORS:
    self.validateParseLevel(name, 2)
    self.parse_level -= 1
    self.cors.append(self.collections)
    self.collections = []
elif name in self.legal_collections:
    ...
'Convert CORS object into XML string representation.'
def to_xml(self):
s = '<' + CORS_CONFIG + '>'
for collections in self.cors:
    s += '<' + CORS + '>'
    for collection, elements_or_value in collections:
        assert collection is not None
        s += '<' + collection + '>'
        if isinstance(elements_or_value, str):
            s...
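The CORS handler above validates structure by tracking a nesting depth: each element type is only legal at one `parse_level`, incremented on start tags and decremented on end tags. A self-contained sketch of that bookkeeping (`DepthChecker` and its two-element table are illustrative, not boto's class):

```python
class InvalidCorsError(Exception):
    pass

class DepthChecker:
    """Toy version of the parse-level validation above, covering only
    the two outer elements; unknown names raise KeyError."""
    def __init__(self):
        self.parse_level = 0

    def validate(self, tag, level):
        if self.parse_level != level:
            raise InvalidCorsError('Invalid tag %s at parse level %d'
                                   % (tag, self.parse_level))

    def start(self, name):
        self.validate(name, {'CorsConfig': 0, 'Cors': 1}[name])
        self.parse_level += 1

    def end(self, name):
        self.validate(name, {'CorsConfig': 1, 'Cors': 2}[name])
        self.parse_level -= 1

c = DepthChecker()
for tag in ('CorsConfig', 'Cors'):
    c.start(tag)
for tag in ('Cors', 'CorsConfig'):
    c.end(tag)
print(c.parse_level)  # 0: a well-nested document returns to depth zero
```

Mis-nested input (e.g. a `Cors` element outside `CorsConfig`) fails the depth check immediately instead of producing a malformed configuration.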
'Creates a hosted connection on an interconnect. Allocates a VLAN number and a specified amount of bandwidth for use by a hosted connection on the given interconnect. :type bandwidth: string :param bandwidth: Bandwidth of the connection. Example: "500Mbps" Default: None :type connection_name: string :param connection...
def allocate_connection_on_interconnect(self, bandwidth, connection_name, owner_account, interconnect_id, vlan):
params = {
    'bandwidth': bandwidth,
    'connectionName': connection_name,
    'ownerAccount': owner_account,
    'interconnectId': interconnect_id,
    'vlan': vlan,
}
return self.make_request(action='AllocateConnectionOnInterconnect', body=json.dumps(params))
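Every Direct Connect method in this section follows the same shape: build a dict of camelCase parameters (omitting the optional ones that are `None`) and serialize it to a JSON body, with the action name carried separately (boto's JSON-style services generally put it in a request header). `build_request` below is a hypothetical helper that just pairs the two:

```python
import json

def build_request(action, **params):
    # Drop unset optional parameters so they are omitted from the body,
    # mirroring the `if x is not None: params[...] = x` pattern above.
    body = json.dumps({k: v for k, v in params.items() if v is not None})
    return action, body

action, body = build_request('AllocateConnectionOnInterconnect',
                             bandwidth='500Mbps',
                             connectionName='conn-1',
                             ownerAccount='123456789012',
                             interconnectId='dxcon-abc123',
                             vlan=101)
print(action)
print(json.loads(body)['vlan'])  # 101
```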
'Provisions a private virtual interface to be owned by a different customer. The owner of a connection calls this function to provision a private virtual interface which will be owned by another AWS customer. Virtual interfaces created using this function must be confirmed by the virtual interface owner by calling Conf...
def allocate_private_virtual_interface(self, connection_id, owner_account, new_private_virtual_interface_allocation):
params = {
    'connectionId': connection_id,
    'ownerAccount': owner_account,
    'newPrivateVirtualInterfaceAllocation': new_private_virtual_interface_allocation,
}
return self.make_request(action='AllocatePrivateVirtualInterface', body=json.dumps(params))
'Provisions a public virtual interface to be owned by a different customer. The owner of a connection calls this function to provision a public virtual interface which will be owned by another AWS customer. Virtual interfaces created using this function must be confirmed by the virtual interface owner by calling Confir...
def allocate_public_virtual_interface(self, connection_id, owner_account, new_public_virtual_interface_allocation):
params = {
    'connectionId': connection_id,
    'ownerAccount': owner_account,
    'newPublicVirtualInterfaceAllocation': new_public_virtual_interface_allocation,
}
return self.make_request(action='AllocatePublicVirtualInterface', body=json.dumps(params))
'Confirm the creation of a hosted connection on an interconnect. Upon creation, the hosted connection is initially in the \'Ordering\' state, and will remain in this state until the owner calls ConfirmConnection to confirm creation of the hosted connection. :type connection_id: string :param connection_id: ID of the co...
def confirm_connection(self, connection_id):
params = {'connectionId': connection_id}
return self.make_request(action='ConfirmConnection', body=json.dumps(params))
'Accept ownership of a private virtual interface created by another customer. After the virtual interface owner calls this function, the virtual interface will be created and attached to the given virtual private gateway, and will be available for handling traffic. :type virtual_interface_id: string :param virtual_inte...
def confirm_private_virtual_interface(self, virtual_interface_id, virtual_gateway_id):
params = {'virtualInterfaceId': virtual_interface_id, 'virtualGatewayId': virtual_gateway_id}
return self.make_request(action='ConfirmPrivateVirtualInterface', body=json.dumps(params))
'Accept ownership of a public virtual interface created by another customer. After the virtual interface owner calls this function, the specified virtual interface will be created and made available for handling traffic. :type virtual_interface_id: string :param virtual_interface_id: ID of the virtual interface. Exampl...
def confirm_public_virtual_interface(self, virtual_interface_id):
params = {'virtualInterfaceId': virtual_interface_id}
return self.make_request(action='ConfirmPublicVirtualInterface', body=json.dumps(params))
'Creates a new connection between the customer network and a specific AWS Direct Connect location. A connection links your internal network to an AWS Direct Connect location over a standard 1 gigabit or 10 gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct C...
def create_connection(self, location, bandwidth, connection_name):
params = {'location': location, 'bandwidth': bandwidth, 'connectionName': connection_name}
return self.make_request(action='CreateConnection', body=json.dumps(params))
'Creates a new interconnect between an AWS Direct Connect partner\'s network and a specific AWS Direct Connect location. An interconnect is a connection which is capable of hosting other connections. The AWS Direct Connect partner can use an interconnect to provide sub-1Gbps AWS Direct Connect service to tier 2 customer...
def create_interconnect(self, interconnect_name, bandwidth, location):
params = {'interconnectName': interconnect_name, 'bandwidth': bandwidth, 'location': location}
return self.make_request(action='CreateInterconnect', body=json.dumps(params))
'Creates a new private virtual interface. A virtual interface is the VLAN that transports AWS Direct Connect traffic. A private virtual interface supports sending traffic to a single virtual private cloud (VPC). :type connection_id: string :param connection_id: ID of the connection. Example: dxcon-fg5678gh Default: Non...
def create_private_virtual_interface(self, connection_id, new_private_virtual_interface):
params = {'connectionId': connection_id, 'newPrivateVirtualInterface': new_private_virtual_interface}
return self.make_request(action='CreatePrivateVirtualInterface', body=json.dumps(params))
'Creates a new public virtual interface. A virtual interface is the VLAN that transports AWS Direct Connect traffic. A public virtual interface supports sending traffic to public services of AWS such as Amazon Simple Storage Service (Amazon S3). :type connection_id: string :param connection_id: ID of the connection. Ex...
def create_public_virtual_interface(self, connection_id, new_public_virtual_interface):
params = {'connectionId': connection_id, 'newPublicVirtualInterface': new_public_virtual_interface}
return self.make_request(action='CreatePublicVirtualInterface', body=json.dumps(params))
'Deletes the connection. Deleting a connection only stops the AWS Direct Connect port hour and data transfer charges. You need to cancel separately with the providers any services or charges for cross-connects or network circuits that connect you to the AWS Direct Connect location. :type connection_id: string :param co...
def delete_connection(self, connection_id):
params = {'connectionId': connection_id}
return self.make_request(action='DeleteConnection', body=json.dumps(params))
'Deletes the specified interconnect. :type interconnect_id: string :param interconnect_id: The ID of the interconnect. Example: dxcon-abc123'
def delete_interconnect(self, interconnect_id):
params = {'interconnectId': interconnect_id}
return self.make_request(action='DeleteInterconnect', body=json.dumps(params))
'Deletes a virtual interface. :type virtual_interface_id: string :param virtual_interface_id: ID of the virtual interface. Example: dxvif-123dfg56 Default: None'
def delete_virtual_interface(self, virtual_interface_id):
params = {'virtualInterfaceId': virtual_interface_id}
return self.make_request(action='DeleteVirtualInterface', body=json.dumps(params))
'Displays all connections in this region. If a connection ID is provided, the call returns only that particular connection. :type connection_id: string :param connection_id: ID of the connection. Example: dxcon-fg5678gh Default: None'
def describe_connections(self, connection_id=None):
params = {}
if connection_id is not None:
    params['connectionId'] = connection_id
return self.make_request(action='DescribeConnections', body=json.dumps(params))
'Return a list of connections that have been provisioned on the given interconnect. :type interconnect_id: string :param interconnect_id: ID of the interconnect on which the connections are provisioned. Example: dxcon-abc123 Default: None'
def describe_connections_on_interconnect(self, interconnect_id):
params = {'interconnectId': interconnect_id}
return self.make_request(action='DescribeConnectionsOnInterconnect', body=json.dumps(params))
'Returns a list of interconnects owned by the AWS account. If an interconnect ID is provided, it will only return this particular interconnect. :type interconnect_id: string :param interconnect_id: The ID of the interconnect. Example: dxcon-abc123'
def describe_interconnects(self, interconnect_id=None):
params = {}
if interconnect_id is not None:
    params['interconnectId'] = interconnect_id
return self.make_request(action='DescribeInterconnects', body=json.dumps(params))
'Returns the list of AWS Direct Connect locations in the current AWS region. These are the locations that may be selected when calling CreateConnection or CreateInterconnect.'
def describe_locations(self):
params = {}
return self.make_request(action='DescribeLocations', body=json.dumps(params))
'Returns a list of virtual private gateways owned by the AWS account. You can create one or more AWS Direct Connect private virtual interfaces linking to a virtual private gateway. A virtual private gateway can be managed via Amazon Virtual Private Cloud (VPC) console or the `EC2 CreateVpnGateway`_ action.'
def describe_virtual_gateways(self):
params = {}
return self.make_request(action='DescribeVirtualGateways', body=json.dumps(params))
'Displays all virtual interfaces for an AWS account. Virtual interfaces deleted fewer than 15 minutes before DescribeVirtualInterfaces is called are also returned. If a connection ID is included then only virtual interfaces associated with this connection will be returned. If a virtual interface ID is included then onl...
def describe_virtual_interfaces(self, connection_id=None, virtual_interface_id=None):
params = {}
if connection_id is not None:
    params['connectionId'] = connection_id
if virtual_interface_id is not None:
    params['virtualInterfaceId'] = virtual_interface_id
return self.make_request(action='DescribeVirtualInterfaces', body=json.dumps(params))
'Sets up a new in-memory ``Table``. This is useful if the table already exists within DynamoDB & you simply want to use it for additional interactions. The only required parameter is the ``table_name``. However, under the hood, the object will call ``describe_table`` to determine the schema/indexes/throughput. You can ...
def __init__(self, table_name, schema=None, throughput=None, indexes=None, global_indexes=None, connection=None):
self.table_name = table_name
self.connection = connection
self.throughput = {'read': 5, 'write': 5}
self.schema = schema
self.indexes = indexes
self.global_indexes = global_indexes
if self.connection is None:
    self.connection = DynamoDBConnection()
if throughput is not None:
    ...
'Creates a new table in DynamoDB & returns an in-memory ``Table`` object. This will setup a brand new table within DynamoDB. The ``table_name`` must be unique for your AWS account. The ``schema`` is also required to define the key structure of the table. **IMPORTANT** - You should consider the usage pattern of your tab...
@classmethod
def create(cls, table_name, schema, throughput=None, indexes=None, global_indexes=None, connection=None):
table = cls(table_name=table_name, connection=connection)
table.schema = schema
if throughput is not None:
    table.throughput = throughput
if indexes is not None:
    table.indexes = indexes
if global_indexes is not None:
    table.global_indexes = global_indexes
raw_schema =...
'Given a raw schema structure back from a DynamoDB response, parse out & build the high-level Python objects that represent them.'
def _introspect_schema(self, raw_schema, raw_attributes=None):
schema = []
sane_attributes = {}
if raw_attributes:
    for field in raw_attributes:
        sane_attributes[field['AttributeName']] = field['AttributeType']
for field in raw_schema:
    data_type = sane_attributes.get(field['AttributeName'], STRING)
    if field['KeyType'] == 'HASH':
        ...
'Given a raw index/global index structure back from a DynamoDB response, parse out & build the high-level Python objects that represent them.'
def _introspect_all_indexes(self, raw_indexes, map_indexes_projection):
indexes = []
for field in raw_indexes:
    index_klass = map_indexes_projection.get('ALL')
    kwargs = {'parts': []}
    if field['Projection']['ProjectionType'] == 'ALL':
        index_klass = map_indexes_projection.get('ALL')
    elif field['Projection']['ProjectionType'] == 'KEYS_ONLY...
'Given a raw index structure back from a DynamoDB response, parse out & build the high-level Python objects that represent them.'
def _introspect_indexes(self, raw_indexes):
return self._introspect_all_indexes(raw_indexes, self._PROJECTION_TYPE_TO_INDEX.get('local_indexes'))
'Given a raw global index structure back from a DynamoDB response, parse out & build the high-level Python objects that represent them.'
def _introspect_global_indexes(self, raw_global_indexes):
return self._introspect_all_indexes(raw_global_indexes, self._PROJECTION_TYPE_TO_INDEX.get('global_indexes'))
'Describes the current structure of the table in DynamoDB. This information will be used to update the ``schema``, ``indexes``, ``global_indexes`` and ``throughput`` information on the ``Table``. Some calls, such as those involving creating keys or querying, will require this information to be populated. It also return...
def describe(self):
result = self.connection.describe_table(self.table_name)
raw_throughput = result['Table']['ProvisionedThroughput']
self.throughput['read'] = int(raw_throughput['ReadCapacityUnits'])
self.throughput['write'] = int(raw_throughput['WriteCapacityUnits'])
if not self.schema:
    raw_schema = result...
'Updates table attributes and global indexes in DynamoDB. Optionally accepts a ``throughput`` parameter, which should be a dictionary. If provided, it should specify a ``read`` & ``write`` key, both of which should have an integer value associated with them. Optionally accepts a ``global_indexes`` parameter, which shou...
def update(self, throughput=None, global_indexes=None):
data = None
if throughput:
    self.throughput = throughput
    data = {'ReadCapacityUnits': int(self.throughput['read']), 'WriteCapacityUnits': int(self.throughput['write'])}
gsi_data = None
if global_indexes:
    gsi_data = []
    for gsi_name, gsi_throughput in global_indexes.items(...
'Creates a global index in DynamoDB after the table has been created. Requires a ``global_indexes`` parameter, which should be a ``GlobalBaseIndexField`` subclass representing the desired index. To update ``global_indexes`` information on the ``Table``, you\'ll need to call ``Table.describe``. Returns ``True`` on succe...
def create_global_secondary_index(self, global_index):
if global_index:
    gsi_data = []
    gsi_data_attr_def = []
    gsi_data.append({'Create': global_index.schema()})
    for attr_def in global_index.parts:
        gsi_data_attr_def.append(attr_def.definition())
    self.connection.update_table(self.table_name, global_secondary_index_update...
'Deletes a global index in DynamoDB after the table has been created. Requires a ``global_index_name`` parameter, which should be a simple string of the name of the global secondary index. To update ``global_indexes`` information on the ``Table``, you\'ll need to call ``Table.describe``. Returns ``True`` on success. Ex...
def delete_global_secondary_index(self, global_index_name):
if global_index_name:
    gsi_data = [{'Delete': {'IndexName': global_index_name}}]
    self.connection.update_table(self.table_name, global_secondary_index_updates=gsi_data)
    return True
else:
    msg = 'You need to provide the global index name to delete_global_se...
'Updates a global index(es) in DynamoDB after the table has been created. Requires a ``global_indexes`` parameter, which should be a dictionary. If provided, it should specify the index name, which is also a dict containing a ``read`` & ``write`` key, both of which should have an integer value associated with them. To ...
def update_global_secondary_index(self, global_indexes):
if global_indexes:
    gsi_data = []
    for gsi_name, gsi_throughput in global_indexes.items():
        gsi_data.append({'Update': {'IndexName': gsi_name, 'ProvisionedThroughput': {'ReadCapacityUnits': int(gsi_throughput['read']), 'WriteCapacityUnits': int(gsi_throughput['write'])}}})
    self.co...
'Deletes a table in DynamoDB. **IMPORTANT** - Be careful when using this method, there is no undo. Returns ``True`` on success. Example:: >>> users.delete() True'
def delete(self):
self.connection.delete_table(self.table_name)
return True
'Given a flat Python dictionary of keys/values, converts it into the nested dictionary DynamoDB expects. Converts:: \'username\': \'john\', \'tags\': [1, 2, 5], ...to...:: \'username\': {\'S\': \'john\'}, \'tags\': {\'NS\': [\'1\', \'2\', \'5\']},'
def _encode_keys(self, keys):
raw_key = {}
for key, value in keys.items():
    raw_key[key] = self._dynamizer.encode(value)
return raw_key
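The docstring above shows the target wire format: each plain value becomes a one-entry dict keyed by a DynamoDB type descriptor (`'S'` for strings, `'NS'` for number sets). A toy encoder covering exactly the docstring's two cases (`encode_value`/`encode_keys` are illustrative; boto's real `Dynamizer` handles many more types):

```python
def encode_value(value):
    # Strings -> {'S': ...}; collections of numbers -> {'NS': [...]}
    # with stringified members, as in the docstring's example.
    if isinstance(value, str):
        return {'S': value}
    if isinstance(value, (list, set)) and \
            all(isinstance(v, (int, float)) for v in value):
        return {'NS': [str(v) for v in sorted(value)]}
    raise TypeError('unsupported type: %r' % (value,))

def encode_keys(keys):
    return {k: encode_value(v) for k, v in keys.items()}

print(encode_keys({'username': 'john', 'tags': [1, 2, 5]}))
# {'username': {'S': 'john'}, 'tags': {'NS': ['1', '2', '5']}}
```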
'Fetches an item (record) from a table in DynamoDB. To specify the key of the item you\'d like to get, you can specify the key attributes as kwargs. Optionally accepts a ``consistent`` parameter, which should be a boolean. If you provide ``True``, it will perform a consistent (but more expensive) read from DynamoDB. (D...
def get_item(self, consistent=False, attributes=None, **kwargs):
raw_key = self._encode_keys(kwargs)
item_data = self.connection.get_item(self.table_name, raw_key, attributes_to_get=attributes, consistent_read=consistent)
if 'Item' not in item_data:
    raise exceptions.ItemNotFound("Item %s couldn't be found." % kwargs)
item = Item(self)
item...
'Return whether an item (record) exists within a table in DynamoDB. To specify the key of the item you\'d like to get, you can specify the key attributes as kwargs. Optionally accepts a ``consistent`` parameter, which should be a boolean. If you provide ``True``, it will perform a consistent (but more expensive) read f...
def has_item(self, **kwargs):
try:
    self.get_item(**kwargs)
except (JSONResponseError, exceptions.ItemNotFound):
    return False
return True
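`has_item` derives existence from whether `get_item` raises, which means it costs a full read of the item. The pattern with stub classes (`FakeTable` and the dict-backed store are hypothetical stand-ins for the real table and connection):

```python
class ItemNotFound(Exception):
    pass

class FakeTable:
    """Stub table: existence is inferred from a get_item that raises."""
    def __init__(self, items):
        self._items = items

    def get_item(self, **kwargs):
        key = kwargs['username']
        if key not in self._items:
            raise ItemNotFound("Item %s couldn't be found." % kwargs)
        return self._items[key]

    def has_item(self, **kwargs):
        # Exception-to-boolean translation, as in the body above.
        try:
            self.get_item(**kwargs)
        except ItemNotFound:
            return False
        return True

t = FakeTable({'john': {'username': 'john'}})
print(t.has_item(username='john'), t.has_item(username='jane'))  # True False
```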
'Look up an entry in DynamoDB. This is mostly backwards compatible with boto.dynamodb. Unlike get_item, it takes hash_key and range_key first, although you may still specify keyword arguments instead. Also unlike the get_item command, if the returned item has no keys (i.e., it does not exist in DynamoDB), a None result...
def lookup(self, *args, **kwargs):
if not self.schema:
    self.describe()
for x, arg in enumerate(args):
    kwargs[self.schema[x].name] = arg
ret = self.get_item(**kwargs)
if not ret.keys():
    return None
return ret
'Returns a new, blank item This is mostly for consistency with boto.dynamodb'
def new_item(self, *args):
if not self.schema:
    self.describe()
data = {}
for x, arg in enumerate(args):
    data[self.schema[x].name] = arg
return Item(self, data=data)
'Saves an entire item to DynamoDB. By default, if any part of the ``Item``\'s original data doesn\'t match what\'s currently in DynamoDB, this request will fail. This prevents other processes from updating the data in between when you read the item & when your request to update the item\'s data is processed, which woul...
def put_item(self, data, overwrite=False):
item = Item(self, data=data)
return item.save(overwrite=overwrite)
'The internal variant of ``put_item`` (full data). This is used by the ``Item`` objects, since that operation is represented at the table-level by the API, but conceptually maps better to telling an individual ``Item`` to save itself.'
def _put_item(self, item_data, expects=None):
kwargs = {}
if expects is not None:
    kwargs['expected'] = expects
self.connection.put_item(self.table_name, item_data, **kwargs)
return True
'The internal variant of ``put_item`` (partial data). This is used by the ``Item`` objects, since that operation is represented at the table-level by the API, but conceptually maps better to telling an individual ``Item`` to save itself.'
def _update_item(self, key, item_data, expects=None):
raw_key = self._encode_keys(key)
kwargs = {}
if expects is not None:
    kwargs['expected'] = expects
self.connection.update_item(self.table_name, raw_key, item_data, **kwargs)
return True
'Deletes a single item. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value. Conditional deletes are useful for only deleting items if specific conditions are met. If those conditions are met, DynamoDB performs the delete. Otherwise, the item is no...
def delete_item(self, expected=None, conditional_operator=None, **kwargs):
expected = self._build_filters(expected, using=FILTER_OPERATORS)
raw_key = self._encode_keys(kwargs)
try:
    self.connection.delete_item(self.table_name, raw_key, expected=expected,
                                conditional_operator=conditional_operator)
except exceptions.ConditionalCheckFailedException:
    return False
...
'Returns the fields necessary to make a key for a table. If the ``Table`` does not already have a populated ``schema``, this will request it via a ``Table.describe`` call. Returns a list of fieldnames (strings). Example:: # A simple hash key. >>> users.get_key_fields() [\'username\'] # A complex hash+range key. >>> use...
def get_key_fields(self):
if not self.schema:
    self.describe()
return [field.name for field in self.schema]
'Allows the batching of writes to DynamoDB. Since each write/delete call to DynamoDB has a cost associated with it, when loading lots of data, it makes sense to batch them, creating as few calls as possible. This returns a context manager that will transparently handle creating these batches. The object you get back li...
def batch_write(self):
return BatchTable(self)
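The context-manager batching idea behind `batch_write` can be sketched with a small stand-in class. `BatchSketch` below is illustrative, not boto's `BatchTable`: it collects writes and flushes in chunks (DynamoDB's BatchWriteItem caps a request at 25 puts/deletes, so that is the flush size boto uses):

```python
class BatchSketch(object):
    def __init__(self, flush_size=25):
        self.flush_size = flush_size
        self.pending = []
        self.flushed = []

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Leaving the ``with`` block flushes any trailing partial chunk.
        if self.pending:
            self.flush()
        return False

    def put_item(self, data):
        self.pending.append(('put', data))
        if len(self.pending) >= self.flush_size:
            self.flush()

    def flush(self):
        # In boto, each flush becomes one BatchWriteItem request.
        self.flushed.append(list(self.pending))
        self.pending = []

with BatchSketch(flush_size=2) as batch:
    batch.put_item({'username': 'a'})
    batch.put_item({'username': 'b'})
    batch.put_item({'username': 'c'})
# Two flushes: one full chunk of 2, one trailing chunk of 1.
```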
'An internal method for taking query/scan-style ``**kwargs`` & turning them into the raw structure DynamoDB expects for filtering.'
def _build_filters(self, filter_kwargs, using=QUERY_OPERATORS):
if filter_kwargs is None:
    return
filters = {}
for field_and_op, value in filter_kwargs.items():
    field_bits = field_and_op.split('__')
    fieldname = '__'.join(field_bits[:-1])
    try:
        op = using[field_bits[-1]]
    except KeyError:
        raise exceptio...
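The `field__operator` parsing at the heart of `_build_filters` can be demonstrated in isolation. The operator table below is a trimmed-down illustration, not boto's full `QUERY_OPERATORS` mapping:

```python
# Illustrative subset: suffix after the last '__' selects the DynamoDB operator.
QUERY_OPERATORS = {'eq': 'EQ', 'lte': 'LE', 'gte': 'GE',
                   'beginswith': 'BEGINS_WITH'}

def parse_filter(field_and_op, using=QUERY_OPERATORS):
    # 'username__eq' -> ('username', 'EQ'); everything before the final
    # '__' chunk is rejoined as the field name, so field names containing
    # '__' still work.
    field_bits = field_and_op.split('__')
    fieldname = '__'.join(field_bits[:-1])
    try:
        op = using[field_bits[-1]]
    except KeyError:
        raise ValueError('Unknown operator: %s' % field_bits[-1])
    return fieldname, op

print(parse_filter('username__eq'))      # ('username', 'EQ')
print(parse_filter('date_joined__gte'))  # ('date_joined', 'GE')
```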
'**WARNING:** This method is provided **strictly** for backward-compatibility. It returns results in an incorrect order. If you are writing new code, please use ``Table.query_2``.'
def query(self, limit=None, index=None, reverse=False, consistent=False, attributes=None, max_page_size=None, **filter_kwargs):
reverse = not reverse
return self.query_2(limit=limit, index=index, reverse=reverse,
                    consistent=consistent, attributes=attributes,
                    max_page_size=max_page_size, **filter_kwargs)
'Queries for a set of matching items in a DynamoDB table. Queries can be performed against a hash key, a hash+range key or against any data stored in your local secondary indexes. Query filters can be used to filter on arbitrary fields. **Note** - You can not query against arbitrary fields within the data stored in Dyn...
def query_2(self, limit=None, index=None, reverse=False, consistent=False, attributes=None, max_page_size=None, query_filter=None, conditional_operator=None, **filter_kwargs):
if self.schema:
    if len(self.schema) == 1:
        if len(filter_kwargs) <= 1:
            if not self.global_indexes or not len(self.global_indexes):
                raise exceptions.QueryError('You must specify more than one key to filter on.')
if (attri...
'Queries the exact count of matching items in a DynamoDB table. Queries can be performed against a hash key, a hash+range key or against any data stored in your local secondary indexes. Query filters can be used to filter on arbitrary fields. To specify the filters of the items you\'d like to get, you can specify the f...
def query_count(self, index=None, consistent=False, conditional_operator=None, query_filter=None, scan_index_forward=True, limit=None, exclusive_start_key=None, **filter_kwargs):
key_conditions = self._build_filters(filter_kwargs, using=QUERY_OPERATORS)
built_query_filter = self._build_filters(query_filter, using=FILTER_OPERATORS)
count_buffer = 0
last_evaluated_key = exclusive_start_key
while True:
    raw_results = self.connection.query(self.table_name, index_name=inde...
'The internal method that performs the actual queries. Used extensively by ``ResultSet`` to perform each (paginated) request.'
def _query(self, limit=None, index=None, reverse=False, consistent=False, exclusive_start_key=None, select=None, attributes_to_get=None, query_filter=None, conditional_operator=None, **filter_kwargs):
kwargs = {
    'limit': limit,
    'index_name': index,
    'consistent_read': consistent,
    'select': select,
    'attributes_to_get': attributes_to_get,
    'conditional_operator': conditional_operator,
}
if reverse:
    kwargs['scan_index_forward'] = False
if exclusive_start_key:
    kwargs['exclusive_start_key'] = {}
    ...
'Scans across all items within a DynamoDB table. Scans can be performed against a hash key or a hash+range key. You can additionally filter the results after the table has been read but before the response is returned by using query filters. To specify the filters of the items you\'d like to get, you can specify the fi...
def scan(self, limit=None, segment=None, total_segments=None, max_page_size=None, attributes=None, conditional_operator=None, **filter_kwargs):
results = ResultSet(max_page_size=max_page_size)
kwargs = filter_kwargs.copy()
kwargs.update({
    'limit': limit,
    'segment': segment,
    'total_segments': total_segments,
    'attributes': attributes,
    'conditional_operator': conditional_operator,
})
results.to_call(self._scan, **kwargs)
return results
'The internal method that performs the actual scan. Used extensively by ``ResultSet`` to perform each (paginated) request.'
def _scan(self, limit=None, exclusive_start_key=None, segment=None, total_segments=None, attributes=None, conditional_operator=None, **filter_kwargs):
kwargs = {
    'limit': limit,
    'segment': segment,
    'total_segments': total_segments,
    'attributes_to_get': attributes,
    'conditional_operator': conditional_operator,
}
if exclusive_start_key:
    kwargs['exclusive_start_key'] = {}
    for key, value in exclusive_start_key.items():
        kwargs['exclusive...
'Fetches many specific items in batch from a table. Requires a ``keys`` parameter, which should be a list of dictionaries. Each dictionary should consist of the keys values to specify. Optionally accepts a ``consistent`` parameter, which should be a boolean. If you provide ``True``, a strongly consistent read will be u...
def batch_get(self, keys, consistent=False, attributes=None):
results = BatchGetResultSet(keys=keys, max_batch_get=self.max_batch_get)
results.to_call(self._batch_get, consistent=consistent, attributes=attributes)
return results
'The internal method that performs the actual batch get. Used extensively by ``BatchGetResultSet`` to perform each (paginated) request.'
def _batch_get(self, keys, consistent=False, attributes=None):
items = {self.table_name: {'Keys': []}}
if consistent:
    items[self.table_name]['ConsistentRead'] = True
if attributes is not None:
    items[self.table_name]['AttributesToGet'] = attributes
for key_data in keys:
    raw_key = {}
    for key, value in key_data.items():
        ...
'Returns a (very) eventually consistent count of the number of items in a table. Lag time is about 6 hours, so don\'t expect a high degree of accuracy. Example:: >>> users.count() 6'
def count(self):
info = self.describe()
return info['Table'].get('ItemCount', 0)
'Creates a Python schema field, to represent the data to pass to DynamoDB. Requires a ``name`` parameter, which should be a string name of the field. Optionally accepts a ``data_type`` parameter, which should be a constant from ``boto.dynamodb2.types``. (Default: ``STRING``)'
def __init__(self, name, data_type=STRING):
self.name = name
self.data_type = data_type
'Returns the attribute definition structure DynamoDB expects. Example:: >>> field.definition() \'AttributeName\': \'username\', \'AttributeType\': \'S\','
def definition(self):
return {'AttributeName': self.name, 'AttributeType': self.data_type}
'Returns the schema structure DynamoDB expects. Example:: >>> field.schema() \'AttributeName\': \'username\', \'KeyType\': \'HASH\','
def schema(self):
return {'AttributeName': self.name, 'KeyType': self.attr_type}
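The two structures above (`definition()` and `schema()`) can be exercised with self-contained stand-ins for boto's field types. The class names below mirror boto's `HashKey`/`RangeKey` but are re-sketched here for illustration:

```python
class BaseKey(object):
    attr_type = None  # set by subclasses: 'HASH' or 'RANGE'

    def __init__(self, name, data_type='S'):
        self.name = name
        self.data_type = data_type

    def definition(self):
        # Feeds CreateTable's AttributeDefinitions.
        return {'AttributeName': self.name, 'AttributeType': self.data_type}

    def schema(self):
        # Feeds CreateTable's KeySchema.
        return {'AttributeName': self.name, 'KeyType': self.attr_type}

class HashKey(BaseKey):
    attr_type = 'HASH'

class RangeKey(BaseKey):
    attr_type = 'RANGE'

parts = [HashKey('username'), RangeKey('date_joined', data_type='N')]
print([p.schema() for p in parts])
```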
'Returns the attribute definition structure DynamoDB expects. Example:: >>> index.definition() \'AttributeName\': \'username\', \'AttributeType\': \'S\','
def definition(self):
definition = []
for part in self.parts:
    definition.append({
        'AttributeName': part.name,
        'AttributeType': part.data_type,
    })
return definition
'Returns the schema structure DynamoDB expects. Example:: >>> index.schema() \'IndexName\': \'LastNameIndex\', \'KeySchema\': [ \'AttributeName\': \'username\', \'KeyType\': \'HASH\', \'Projection\': { \'ProjectionType\': \'KEYS_ONLY\','
def schema(self):
key_schema = []
for part in self.parts:
    key_schema.append(part.schema())
return {
    'IndexName': self.name,
    'KeySchema': key_schema,
    'Projection': {'ProjectionType': self.projection_type},
}
'Returns the schema structure DynamoDB expects. Example:: >>> index.schema() \'IndexName\': \'LastNameIndex\', \'KeySchema\': [ \'AttributeName\': \'username\', \'KeyType\': \'HASH\', \'Projection\': { \'ProjectionType\': \'KEYS_ONLY\', \'ProvisionedThroughput\': { \'ReadCapacityUnits\': 5, \'WriteCapacityUnits\': 5'
def schema(self):
schema_data = super(GlobalBaseIndexField, self).schema()
schema_data['ProvisionedThroughput'] = {
    'ReadCapacityUnits': int(self.throughput['read']),
    'WriteCapacityUnits': int(self.throughput['write']),
}
return schema_data
'The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key. A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem will return a partial result if the response size limit is exceeded, the...
def batch_get_item(self, request_items, return_consumed_capacity=None):
params = {'RequestItems': request_items}
if return_consumed_capacity is not None:
    params['ReturnConsumedCapacity'] = return_consumed_capacity
return self.make_request(action='BatchGetItem', body=json.dumps(params))
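The `request_items` payload this method serializes has a per-table shape worth seeing written out. The table and attribute names below are made up for illustration; only the structure matters:

```python
import json

# One entry per table: a list of fully-encoded keys, plus optional
# per-table settings such as ConsistentRead.
request_items = {
    'users': {
        'Keys': [
            {'username': {'S': 'johndoe'}},
            {'username': {'S': 'jane'}},
        ],
        'ConsistentRead': True,
    }
}
params = {'RequestItems': request_items,
          'ReturnConsumedCapacity': 'TOTAL'}
body = json.dumps(params)
print(body)
```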
'The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB. BatchWriteItem cannot update items. To update items, use t...
def batch_write_item(self, request_items, return_consumed_capacity=None, return_item_collection_metrics=None):
params = {'RequestItems': request_items}
if return_consumed_capacity is not None:
    params['ReturnConsumedCapacity'] = return_consumed_capacity
if return_item_collection_metrics is not None:
    params['ReturnItemCollectionMetrics'] = return_item_collection_metrics
return self.make_request...
'The CreateTable operation adds a new table to your account. In an AWS account, table names must be unique within each region. That is, you can have two tables with the same name if you create the tables in different regions. CreateTable is an asynchronous operation. Upon receiving a CreateTable request, DynamoDB immediate...
def create_table(self, attribute_definitions, table_name, key_schema, provisioned_throughput, local_secondary_indexes=None, global_secondary_indexes=None):
params = {
    'AttributeDefinitions': attribute_definitions,
    'TableName': table_name,
    'KeySchema': key_schema,
    'ProvisionedThroughput': provisioned_throughput,
}
if local_secondary_indexes is not None:
    params['LocalSecondaryIndexes'] = local_secondary_indexes
if global_secondary_indexes is not None:
    ...
'Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value. In addition to deleting an item, you can also return the item\'s attribute values in the same operation, using the ReturnValues parameter. Unless...
def delete_item(self, table_name, key, expected=None, conditional_operator=None, return_values=None, return_consumed_capacity=None, return_item_collection_metrics=None, condition_expression=None, expression_attribute_names=None, expression_attribute_values=None):
params = {'TableName': table_name, 'Key': key}
if expected is not None:
    params['Expected'] = expected
if conditional_operator is not None:
    params['ConditionalOperator'] = conditional_operator
if return_values is not None:
    params['ReturnValues'] = return_values
if (retur...
'The DeleteTable operation deletes a table and all of its items. After a DeleteTable request, the specified table is in the `DELETING` state until DynamoDB completes the deletion. If the table is in the `ACTIVE` state, you can delete it. If a table is in `CREATING` or `UPDATING` states, then DynamoDB returns a Resource...
def delete_table(self, table_name):
params = {'TableName': table_name}
return self.make_request(action='DeleteTable', body=json.dumps(params))
'Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table. If you issue a DescribeTable request immediately after a CreateTable request, DynamoDB might return a ResourceNotFoundException. This is because DescribeTable uses ...
def describe_table(self, table_name):
params = {'TableName': table_name}
return self.make_request(action='DescribeTable', body=json.dumps(params))
'The GetItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem does not return any data. GetItem provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead to `True`. Although a strongl...
def get_item(self, table_name, key, attributes_to_get=None, consistent_read=None, return_consumed_capacity=None, projection_expression=None, expression_attribute_names=None):
params = {'TableName': table_name, 'Key': key}
if attributes_to_get is not None:
    params['AttributesToGet'] = attributes_to_get
if consistent_read is not None:
    params['ConsistentRead'] = consistent_read
if return_consumed_capacity is not None:
    params['ReturnConsumedCapacity'...
'Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names. :type exclusive_start_table_name: string :param exclusive_start_table_name: The first table name that this operation will evaluate. Use the v...
def list_tables(self, exclusive_start_table_name=None, limit=None):
params = {}
if exclusive_start_table_name is not None:
    params['ExclusiveStartTableName'] = exclusive_start_table_name
if limit is not None:
    params['Limit'] = limit
return self.make_request(action='ListTables', body=json.dumps(params))
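Since ListTables pages at 100 names per response, callers typically loop, feeding each response's `LastEvaluatedTableName` back in as `ExclusiveStartTableName`. The sketch below shows that loop against a stub standing in for the DynamoDB connection (`list_tables_stub` and the page data are invented for illustration):

```python
# Canned responses: first page ends with a continuation marker, second does not.
PAGES = {
    None: {'TableNames': ['alpha', 'beta'], 'LastEvaluatedTableName': 'beta'},
    'beta': {'TableNames': ['gamma']},
}

def list_tables_stub(exclusive_start_table_name=None, limit=None):
    return PAGES[exclusive_start_table_name]

def all_table_names():
    names, start = [], None
    while True:
        page = list_tables_stub(exclusive_start_table_name=start)
        names.extend(page['TableNames'])
        # Absence of LastEvaluatedTableName means the listing is complete.
        start = page.get('LastEvaluatedTableName')
        if start is None:
            return names

print(all_table_names())  # ['alpha', 'beta', 'gamma']
```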
'Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item. You can perform a conditional put operation (add a new item if one with the specified primary key doesn\'t exi...
def put_item(self, table_name, item, expected=None, return_values=None, return_consumed_capacity=None, return_item_collection_metrics=None, conditional_operator=None, condition_expression=None, expression_attribute_names=None, expression_attribute_values=None):
params = {'TableName': table_name, 'Item': item}
if expected is not None:
    params['Expected'] = expected
if return_values is not None:
    params['ReturnValues'] = return_values
if return_consumed_capacity is not None:
    params['ReturnConsumedCapacity'] = return_consumed_capacity
...
'A Query operation directly accesses items from a table using the table primary key, or from an index using the index key. You must provide a specific hash key value. You can narrow the scope of the query by using comparison operators on the range key value, or on the index key. You can use the ScanIndexForward paramet...
def query(self, table_name, key_conditions, index_name=None, select=None, attributes_to_get=None, limit=None, consistent_read=None, query_filter=None, conditional_operator=None, scan_index_forward=None, exclusive_start_key=None, return_consumed_capacity=None, projection_expression=None, filter_expression=None, expressi...
params = {'TableName': table_name, 'KeyConditions': key_conditions}
if index_name is not None:
    params['IndexName'] = index_name
if select is not None:
    params['Select'] = select
if attributes_to_get is not None:
    params['AttributesToGet'] = attributes_to_get
if (limit is ...
'The Scan operation returns one or more items and item attributes by accessing every item in the table. To have DynamoDB return fewer items, you can provide a ScanFilter operation. If the total number of scanned items exceeds the maximum data set size limit of 1 MB, the scan stops and results are returned to the user a...
def scan(self, table_name, attributes_to_get=None, limit=None, select=None, scan_filter=None, conditional_operator=None, exclusive_start_key=None, return_consumed_capacity=None, total_segments=None, segment=None, projection_expression=None, filter_expression=None, expression_attribute_names=None, expression_attribute_v...
params = {'TableName': table_name}
if attributes_to_get is not None:
    params['AttributesToGet'] = attributes_to_get
if limit is not None:
    params['Limit'] = limit
if select is not None:
    params['Select'] = select
if scan_filter is not None:
    params['ScanFilter'] =...
'Edits an existing item\'s attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update (insert a new attribute name-value pair if it doesn\'t exist, or replace an existing name-value pair if it has certain expected att...
def update_item(self, table_name, key, attribute_updates=None, expected=None, conditional_operator=None, return_values=None, return_consumed_capacity=None, return_item_collection_metrics=None, update_expression=None, condition_expression=None, expression_attribute_names=None, expression_attribute_values=None):
params = {'TableName': table_name, 'Key': key}
if attribute_updates is not None:
    params['AttributeUpdates'] = attribute_updates
if expected is not None:
    params['Expected'] = expected
if conditional_operator is not None:
    params['ConditionalOperator'] = conditional_operator
...
'Updates the provisioned throughput for the given table, or manages the global secondary indexes on the table. You can increase or decrease the table\'s provisioned throughput values within the maximums and minimums listed in the `Limits`_ section in the Amazon DynamoDB Developer Guide . In addition, you can use Update...
def update_table(self, table_name, provisioned_throughput=None, global_secondary_index_updates=None, attribute_definitions=None):
params = {'TableName': table_name}
if attribute_definitions is not None:
    params['AttributeDefinitions'] = attribute_definitions
if provisioned_throughput is not None:
    params['ProvisionedThroughput'] = provisioned_throughput
if global_secondary_index_updates is not None:
    par...
'Constructs an (unsaved) ``Item`` instance. To persist the data in DynamoDB, you\'ll need to call ``Item.save`` (or ``Item.partial_save``) on the instance. Requires a ``table`` parameter, which should be a ``Table`` instance. This is required, as DynamoDB\'s API is focused around all operations being table-level. It\'...
def __init__(self, table, data=None, loaded=False):
self.table = table
self._loaded = loaded
self._orig_data = {}
self._data = data
self._dynamizer = table._dynamizer
if isinstance(self._data, Item):
    self._data = self._data._data
if self._data is None:
    self._data = {}
if self._loaded:
    self._orig_data = deepcopy(s...
'Checks the ``_orig_data`` against the ``_data`` to determine what changes to the data are present. Returns a dictionary containing the keys ``adds``, ``changes`` & ``deletes``, containing the updated data.'
def _determine_alterations(self):
alterations = {'adds': {}, 'changes': {}, 'deletes': []}
orig_keys = set(self._orig_data.keys())
data_keys = set(self._data.keys())
for key in orig_keys.intersection(data_keys):
    if self._data[key] != self._orig_data[key]:
        if self._is_storable(self._data[key]):
            alter...
'Returns whether or not the data has changed on the ``Item``. Optionally accepts a ``data`` argument, which accepts the output from ``self._determine_alterations()`` if you\'ve already called it. Typically unnecessary to do. Default is ``None``. Example: >>> user.needs_save() False >>> user[\'first_name\'] = \'Johann\'...
def needs_save(self, data=None):
if data is None:
    data = self._determine_alterations()
needs_save = False
for kind in ['adds', 'changes', 'deletes']:
    if len(data[kind]):
        needs_save = True
        break
return needs_save
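The dirty-tracking behind `needs_save` amounts to a three-way diff of the snapshot taken at load time against the current data. A self-contained sketch over plain dicts (the real `Item` also filters values through `_is_storable`, omitted here):

```python
def determine_alterations(orig, current):
    # Classify every key as added, changed, or deleted relative to the
    # original snapshot.
    alterations = {'adds': {}, 'changes': {}, 'deletes': []}
    orig_keys, data_keys = set(orig), set(current)
    for key in orig_keys & data_keys:
        if current[key] != orig[key]:
            alterations['changes'][key] = current[key]
    for key in data_keys - orig_keys:
        alterations['adds'][key] = current[key]
    for key in orig_keys - data_keys:
        alterations['deletes'].append(key)
    return alterations

def needs_save(orig, current):
    data = determine_alterations(orig, current)
    return any(len(data[kind]) for kind in ('adds', 'changes', 'deletes'))

orig = {'username': 'john', 'first_name': 'John'}
print(needs_save(orig, dict(orig)))                                    # False
print(needs_save(orig, {'username': 'john', 'first_name': 'Johann'}))  # True
```

`mark_clean` then just resets the snapshot (`_orig_data = deepcopy(_data)`), which makes the diff empty again.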
'Marks an ``Item`` instance as no longer needing to be saved. Example: >>> user.needs_save() False >>> user[\'first_name\'] = \'Johann\' >>> user.needs_save() True >>> user.mark_clean() >>> user.needs_save() False'
def mark_clean(self):
self._orig_data = deepcopy(self._data)