signature stringlengths 8 3.44k | body stringlengths 0 1.41M | docstring stringlengths 1 122k | id stringlengths 5 17 |
|---|---|---|---|
def getArtifact(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Artifact from Run
Get artifact by `<name>` from a specific run.
**Public Artifacts**, in order to get an artifact you need the scope
`queue:get-artifact:<name>`, where `<name>` is the name of the artifact.
But if the artifact `name` starts with `public/`, authentication and
authorization are not necessary to fetch the artifact.
**API Clients**, this method will redirect you to the artifact if it is
stored externally. Either way, the response may not be JSON. So API
client users might want to generate a signed URL for this end-point and
use that URL with an HTTP client that can handle responses correctly.
**Downloading artifacts**
There are some special considerations for HTTP clients which download
artifacts. This API endpoint is designed to be compatible with an HTTP 1.1
compliant client, but has extra features to ensure the download is valid.
It is strongly recommended that consumers use either taskcluster-lib-artifact (JS),
taskcluster-lib-artifact-go (Go) or the CLI written in Go to interact with
artifacts.
In order to download an artifact the following must be done:
1. Obtain the queue URL. Building a signed URL with a Taskcluster client is
recommended
1. Make a GET request which does not follow redirects
1. In all cases, if specified, the
x-taskcluster-location-{content,transfer}-{sha256,length} values must be
validated to be equal to the Content-Length and SHA256 checksum of the
final artifact downloaded, as well as of any intermediate redirects
1. If this response is a 500-series error, retry using an exponential
backoff. No more than 5 retries should be attempted
1. If this response is a 400-series error, treat it appropriately for
your context. This might be an error in responding to this request or
an Error storage type body. This request should not be retried.
1. If this response is a 200-series response, the response body is the artifact.
If the x-taskcluster-location-{content,transfer}-{sha256,length} and
x-taskcluster-location-content-encoding are specified, they should match
this response body
1. If the response type is a 300-series redirect, the artifact will be at the
location specified by the `Location` header. There are multiple artifact storage
types which use a 300-series redirect.
1. For all redirects followed, the user must verify that the content-sha256, content-length,
transfer-sha256, transfer-length and content-encoding match every further request. The final
artifact must also be validated against the values specified in the original queue response
1. Caching of requests with an x-taskcluster-artifact-storage-type value of `reference`
must not occur
1. A request which has x-taskcluster-artifact-storage-type value of `blob` and does not
have x-taskcluster-location-content-sha256 or x-taskcluster-location-content-length
must be treated as an error
**Headers**
The following important headers are set on the response to this method:
* location: the url of the artifact if a redirect is to be performed
* x-taskcluster-artifact-storage-type: the storage type. Example: blob, s3, error
The following important headers are set on responses to this method for Blob artifacts
* x-taskcluster-location-content-sha256: the SHA256 of the artifact
*after* any content-encoding is undone. Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-content-length: the number of bytes *after* any content-encoding
is undone
* x-taskcluster-location-transfer-sha256: the SHA256 of the artifact
*before* any content-encoding is undone. This is the SHA256 of what is sent over
the wire. Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-transfer-length: the number of bytes *before* any content-encoding
is undone; this is the size of what is sent over the wire
* x-taskcluster-location-content-encoding: the content-encoding used. It will either
be `gzip` or `identity` right now. This is hardcoded to a value set when the artifact
was created and no content-negotiation occurs
* x-taskcluster-location-content-type: the content-type of the artifact
**Caching**, artifacts may be cached in data centers closer to the
workers in order to reduce bandwidth costs. This can lead to longer
response times. Caching can be skipped by setting the header
`x-taskcluster-skip-cache: true`; this should only be used for resources
where request volume is known to be low and caching is not useful.
(This feature may be disabled in the future; use it sparingly!)
This method is ``stable`` | f15732:c0:m18 |
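The validation step in the download procedure above can be sketched in a few lines. This helper is not part of the client; it only shows checking a downloaded body against the `x-taskcluster-location-content-{sha256,length}` headers named in the documentation.

```python
# A minimal sketch of validating a downloaded artifact body against the
# x-taskcluster-location-content-* headers described above. The helper
# name `validate_artifact` is hypothetical, not part of the client.
import hashlib

def validate_artifact(body: bytes, headers: dict) -> None:
    """Raise ValueError if the body does not match the declared headers."""
    expected_len = headers.get("x-taskcluster-location-content-length")
    if expected_len is not None and int(expected_len) != len(body):
        raise ValueError("content-length mismatch")
    expected_sha = headers.get("x-taskcluster-location-content-sha256")
    if expected_sha is not None:
        actual = hashlib.sha256(body).hexdigest()
        if actual != expected_sha.lower():
            raise ValueError("content-sha256 mismatch")

body = b"hello artifact"
headers = {
    "x-taskcluster-location-content-length": str(len(body)),
    "x-taskcluster-location-content-sha256": hashlib.sha256(body).hexdigest(),
}
validate_artifact(body, headers)  # no exception: body matches headers
```

The same check must be applied to each intermediate redirect, per step 3 above.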
def listArtifacts(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Artifacts from Run
Returns a list of artifacts and associated meta-data for a given run.
As a task may have many artifacts, paging may be necessary. If this
end-point returns a `continuationToken`, you should call the end-point
again with the `continuationToken` as the query-string option
`continuationToken`.
By default this end-point will list up to 1000 artifacts in a single page;
you may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-artifacts-response.json#``
This method is ``experimental`` | f15732:c0:m20 |
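The `continuationToken` paging loop described above (and used by several other end-points in this reference) can be sketched as follows. The response shape is taken from the documentation; the stub service here stands in for the real `listArtifacts` call.

```python
# A sketch of the continuationToken paging loop. `list_page` stands in for
# a client method such as listArtifacts; its response shape (artifacts list
# plus optional continuationToken) follows the documentation above.
def list_all_artifacts(list_page, task_id, run_id, limit=1000):
    """Collect artifacts across pages, following continuationToken."""
    artifacts, query = [], {"limit": limit}
    while True:
        page = list_page(task_id, run_id, query=query)
        artifacts.extend(page["artifacts"])
        token = page.get("continuationToken")
        if not token:
            return artifacts
        query["continuationToken"] = token

# Stub service returning two pages of fake artifacts.
def fake_list(task_id, run_id, query=None):
    if query.get("continuationToken") is None:
        return {"artifacts": [{"name": "public/a"}], "continuationToken": "t1"}
    return {"artifacts": [{"name": "public/b"}]}

names = [a["name"] for a in list_all_artifacts(fake_list, "task", 0)]
# names == ["public/a", "public/b"]
```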
def listProvisioners(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a list of all active provisioners
Get all active provisioners.
The term "provisioner" is taken broadly to mean anything with a provisionerId.
This does not necessarily mean there is an associated service performing any
provisioning activity.
The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 provisioners in a single
page. You may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-provisioners-response.json#``
This method is ``experimental`` | f15732:c0:m22 |
def getProvisioner(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get an active provisioner
Get an active provisioner.
The term "provisioner" is taken broadly to mean anything with a provisionerId.
This does not necessarily mean there is an associated service performing any
provisioning activity.
This method gives output: ``v1/provisioner-response.json#``
This method is ``experimental`` | f15732:c0:m23 |
def declareProvisioner(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Update a provisioner
Declare a provisioner, supplying some details about it.
`declareProvisioner` allows updating one or more properties of a provisioner as long as the required scopes are
possessed. For example, a request to update the `aws-provisioner-v1`
provisioner with a body `{description: 'This provisioner is great'}` would require you to have the scope
`queue:declare-provisioner:aws-provisioner-v1#description`.
The term "provisioner" is taken broadly to mean anything with a provisionerId.
This does not necessarily mean there is an associated service performing any
provisioning activity.
This method takes input: ``v1/update-provisioner-request.json#``
This method gives output: ``v1/provisioner-response.json#``
This method is ``experimental`` | f15732:c0:m24 |
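The scope rule quoted above maps each property in the update payload to `queue:declare-provisioner:<provisionerId>#<property>`. This small helper, which is not part of the client, just makes that mapping explicit for the documented example.

```python
# Illustrative only: derive the scopes required to declare the given
# provisioner properties, per the rule stated in the documentation above.
def required_scopes(provisioner_id, payload):
    return ["queue:declare-provisioner:%s#%s" % (provisioner_id, prop)
            for prop in sorted(payload)]

scopes = required_scopes("aws-provisioner-v1",
                         {"description": "This provisioner is great"})
# scopes == ["queue:declare-provisioner:aws-provisioner-v1#description"]
```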
def pendingTasks(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Number of Pending Tasks
Get an approximate number of pending tasks for the given `provisionerId`
and `workerType`.
The underlying Azure Storage Queues only promises to give us an estimate.
Furthermore, we cache the result in memory for 20 seconds. So consumers
should by no means expect this to be an accurate number.
It is, however, a solid estimate of the number of pending tasks.
This method gives output: ``v1/pending-tasks-response.json#``
This method is ``stable`` | f15732:c0:m25 |
def listWorkerTypes(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a list of all active worker-types
Get all active worker-types for the given provisioner.
The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 worker-types in a single
page. You may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-workertypes-response.json#``
This method is ``experimental`` | f15732:c0:m26 |
def getWorkerType(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a worker-type
Get a worker-type from a provisioner.
This method gives output: ``v1/workertype-response.json#``
This method is ``experimental`` | f15732:c0:m27 |
def declareWorkerType(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Update a worker-type
Declare a workerType, supplying some details about it.
`declareWorkerType` allows updating one or more properties of a worker-type as long as the required scopes are
possessed. For example, a request to update the `gecko-b-1-w2008` worker-type within the `aws-provisioner-v1`
provisioner with a body `{description: 'This worker type is great'}` would require you to have the scope
`queue:declare-worker-type:aws-provisioner-v1/gecko-b-1-w2008#description`.
This method takes input: ``v1/update-workertype-request.json#``
This method gives output: ``v1/workertype-response.json#``
This method is ``experimental`` | f15732:c0:m28 |
def listWorkers(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a list of all active workers of a workerType
Get a list of all active workers of a workerType.
`listWorkers` allows a response to be filtered by quarantined and non-quarantined workers.
To filter the query, you should call the end-point with `quarantined` as a query-string option with a
true or false value.
The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 workers in a single
page. You may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-workers-response.json#``
This method is ``experimental`` | f15732:c0:m29 |
def getWorker(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a worker
Get a worker from a worker-type.
This method gives output: ``v1/worker-response.json#``
This method is ``experimental`` | f15732:c0:m30 |
def quarantineWorker(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Quarantine a worker
Quarantine a worker
This method takes input: ``v1/quarantine-worker-request.json#``
This method gives output: ``v1/worker-response.json#``
This method is ``experimental`` | f15732:c0:m31 |
def declareWorker(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Declare a worker
Declare a worker, supplying some details about it.
`declareWorker` allows updating one or more properties of a worker as long as the required scopes are
possessed.
This method takes input: ``v1/update-worker-request.json#``
This method gives output: ``v1/worker-response.json#``
This method is ``experimental`` | f15732:c0:m32 |
def ping(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.
This method is ``stable`` | f15733:c0:m0 |
def githubWebHookConsumer(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Consume GitHub WebHook
Capture a GitHub event and publish it via pulse, if it's a push,
release or pull request.
This method is ``experimental`` | f15733:c0:m1 |
def builds(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List of Builds
A paginated list of builds that have been run in
Taskcluster. Can be filtered on various git-specific
fields.
This method gives output: ``v1/build-list.json#``
This method is ``experimental`` | f15733:c0:m2 |
def badge(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Latest Build Status Badge
Checks the status of the latest build of a given branch
and returns the corresponding badge SVG.
This method is ``experimental`` | f15733:c0:m3 |
def repository(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Repository Info
Returns any repository metadata that is
useful within Taskcluster related services.
This method gives output: ``v1/repository.json#``
This method is ``experimental`` | f15733:c0:m4 |
def createStatus(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Post a status against a given changeset
For a given changeset (SHA) of a repository, this will attach a "commit status"
on GitHub. These statuses are links displayed next to each revision.
The status is either OK (green check) or FAILURE (red cross),
made of a custom title and link.
This method takes input: ``v1/create-status.json#``
This method is ``experimental`` | f15733:c0:m6 |
def createComment(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Post a comment on a given GitHub Issue or Pull Request
For a given Issue or Pull Request of a repository, this will write a new message.
This method takes input: ``v1/create-comment.json#``
This method is ``experimental`` | f15733:c0:m7 |
def workerTypeCreated(self, *args, **kwargs): | ref = {<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>'<STR_LIT>': [<EOL>{<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': True,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>],<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>}<EOL>return self._makeTopicExchange(ref, *args, **kwargs)<EOL> | WorkerType Created Message
When a new `workerType` is created a message will be published to this
exchange.
This exchange outputs: ``http://schemas.taskcluster.net/aws-provisioner/v1/worker-type-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* workerType: WorkerType that this message concerns. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | f15734:c0:m0 |
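The routing keys listed above (this exchange and the two that follow use the same set) compose into a dot-separated binding pattern: the constant `primary`, the `workerType` you filter on, and `#` for the reserved tail. The key order is taken from the documentation; the helper is illustrative, not part of the client.

```python
# Illustrative: build a binding pattern for the workerType exchanges from
# the documented routing keys (routingKeyKind, workerType, reserved).
def worker_type_binding(worker_type="*"):
    return ".".join(["primary", worker_type, "#"])

worker_type_binding("gecko-b-1-w2008")  # "primary.gecko-b-1-w2008.#"
worker_type_binding()                   # "primary.*.#" matches any workerType
```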
def workerTypeUpdated(self, *args, **kwargs): | ref = {<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>'<STR_LIT>': [<EOL>{<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': True,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>],<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>}<EOL>return self._makeTopicExchange(ref, *args, **kwargs)<EOL> | WorkerType Updated Message
When a `workerType` is updated a message will be published to this
exchange.
This exchange outputs: ``http://schemas.taskcluster.net/aws-provisioner/v1/worker-type-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* workerType: WorkerType that this message concerns. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | f15734:c0:m1 |
def workerTypeRemoved(self, *args, **kwargs): | ref = {<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>'<STR_LIT>': [<EOL>{<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': True,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>],<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>}<EOL>return self._makeTopicExchange(ref, *args, **kwargs)<EOL> | WorkerType Removed Message
When a `workerType` is removed a message will be published to this
exchange.
This exchange outputs: ``http://schemas.taskcluster.net/aws-provisioner/v1/worker-type-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* workerType: WorkerType that this message concerns. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | f15734:c0:m2 |
def ping(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.
This method is ``stable`` | f15735:c0:m0 |
def findTask(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Find Indexed Task
Find a task by index path, returning the highest-rank task with that path. If no
task exists for the given path, this API end-point will respond with a 404 status.
This method gives output: ``v1/indexed-task-response.json#``
This method is ``stable`` | f15735:c0:m1 |
def listNamespaces(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List Namespaces
List the namespaces immediately under a given namespace.
This endpoint
lists up to 1000 namespaces. If more namespaces are present, a
`continuationToken` will be returned, which can be given in the next
request. For the initial request, the payload should be an empty JSON
object.
This method gives output: ``v1/list-namespaces-response.json#``
This method is ``stable`` | f15735:c0:m2 |
def listTasks(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List Tasks
List the tasks immediately under a given namespace.
This endpoint
lists up to 1000 tasks. If more tasks are present, a
`continuationToken` will be returned, which can be given in the next
request. For the initial request, the payload should be an empty JSON
object.
**Remark**, this end-point is designed for humans browsing for tasks, not
for services, as that makes little sense.
This method gives output: ``v1/list-tasks-response.json#``
This method is ``stable`` | f15735:c0:m3 |
def insertTask(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Insert Task into Index
Insert a task into the index. If the new rank is less than the existing rank
at the given index path, the task is not indexed but the response is still 200 OK.
Please see the introduction above for information
about indexing successfully completed tasks automatically using custom routes.
This method takes input: ``v1/insert-task-request.json#``
This method gives output: ``v1/indexed-task-response.json#``
This method is ``stable`` | f15735:c0:m4 |
def findArtifactFromTask(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Artifact From Indexed Task
Find a task by index path and redirect to the artifact on the most recent
run with the given `name`.
Note that multiple calls to this endpoint may return artifacts from different tasks
if a new task is inserted into the index between calls. Avoid using this method as
a stable link to multiple, connected files if the index path does not contain a
unique identifier. For example, the following two links may return unrelated files:
* https://tc.example.com/api/index/v1/task/some-app.win64.latest.installer/artifacts/public/installer.exe
* https://tc.example.com/api/index/v1/task/some-app.win64.latest.installer/artifacts/public/debug-symbols.zip
This problem can be remedied by including the revision in the index path or by bundling both
installer and debug symbols into a single artifact.
If no task exists for the given index path, this API end-point responds with 404.
This method is ``stable`` | f15735:c0:m5 |
def createSession(*args, **kwargs): | return requests.Session(*args, **kwargs)<EOL> | Create a new requests session. This passes through all positional and
keyword arguments to the requests.Session() constructor | f15737:m0 |
def createTemporaryCredentials(clientId, accessToken, start, expiry, scopes, name=None): | for scope in scopes:<EOL><INDENT>if not isinstance(scope, six.string_types):<EOL><INDENT>raise exceptions.TaskclusterFailure('<STR_LIT>')<EOL><DEDENT><DEDENT>if expiry - start > datetime.timedelta(days=<NUM_LIT>):<EOL><INDENT>raise exceptions.TaskclusterFailure('<STR_LIT>')<EOL><DEDENT>cert = dict(<EOL>version=<NUM_LIT:1>,<EOL>scopes=scopes,<EOL>start=calendar.timegm(start.utctimetuple()) * <NUM_LIT:1000>,<EOL>expiry=calendar.timegm(expiry.utctimetuple()) * <NUM_LIT:1000>,<EOL>seed=utils.slugId().encode('<STR_LIT:ascii>') + utils.slugId().encode('<STR_LIT:ascii>'),<EOL>)<EOL>if name:<EOL><INDENT>cert['<STR_LIT>'] = utils.toStr(clientId)<EOL><DEDENT>sig = ['<STR_LIT>' + utils.toStr(cert['<STR_LIT:version>'])]<EOL>if name:<EOL><INDENT>sig.extend([<EOL>'<STR_LIT>' + utils.toStr(name),<EOL>'<STR_LIT>' + utils.toStr(clientId),<EOL>])<EOL><DEDENT>sig.extend([<EOL>'<STR_LIT>' + utils.toStr(cert['<STR_LIT>']),<EOL>'<STR_LIT>' + utils.toStr(cert['<STR_LIT:start>']),<EOL>'<STR_LIT>' + utils.toStr(cert['<STR_LIT>']),<EOL>'<STR_LIT>'<EOL>] + scopes)<EOL>sigStr = '<STR_LIT:\n>'.join(sig).encode()<EOL>if isinstance(accessToken, six.text_type):<EOL><INDENT>accessToken = accessToken.encode()<EOL><DEDENT>sig = hmac.new(accessToken, sigStr, hashlib.sha256).digest()<EOL>cert['<STR_LIT>'] = utils.encodeStringForB64Header(sig)<EOL>newToken = hmac.new(accessToken, cert['<STR_LIT>'], hashlib.sha256).digest()<EOL>newToken = utils.makeB64UrlSafe(utils.encodeStringForB64Header(newToken)).replace(b'<STR_LIT:=>', b'<STR_LIT>')<EOL>return {<EOL>'<STR_LIT>': name or clientId,<EOL>'<STR_LIT>': newToken,<EOL>'<STR_LIT>': utils.dumpJson(cert),<EOL>}<EOL> | Create a set of temporary credentials
Callers should not apply any clock skew; clock drift is accounted for by
the auth service.
clientId: the issuing clientId
accessToken: the issuer's accessToken
start: start time of credentials (datetime.datetime)
expiry: expiration time of credentials, (datetime.datetime)
scopes: list of scopes granted
name: credential name (optional)
Returns a dictionary in the form:
{ 'clientId': str, 'accessToken: str, 'certificate': str} | f15737:m2 |
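The body above elides its string literals, so the exact signature text cannot be recovered from this dump; the following self-contained sketch only approximates the scheme it implements: build a certificate, HMAC-sign it with the issuer's accessToken, then derive the temporary accessToken from the certificate's seed. The signature-line layout and the fixed demo seed are assumptions (the real seed is generated from two slugIds).

```python
# Simplified sketch of temporary-credential signing, assuming a signature
# text of version line plus scopes; the real layout is elided above.
import base64
import calendar
import datetime
import hashlib
import hmac
import json

def sign_temp_credentials(client_id, access_token, start, expiry, scopes):
    cert = {
        "version": 1,
        "scopes": scopes,
        "start": calendar.timegm(start.utctimetuple()) * 1000,
        "expiry": calendar.timegm(expiry.utctimetuple()) * 1000,
        "seed": "fixedseedfordemo" * 2,  # real code: two random slugIds
    }
    sig_text = "\n".join(["version:1"] + scopes).encode()
    key = access_token.encode()
    cert["signature"] = base64.b64encode(
        hmac.new(key, sig_text, hashlib.sha256).digest()).decode()
    # The temporary accessToken is an HMAC over the seed, base64url, no '='.
    token = hmac.new(key, cert["seed"].encode(), hashlib.sha256).digest()
    new_token = base64.urlsafe_b64encode(token).decode().rstrip("=")
    return {"clientId": client_id,
            "accessToken": new_token,
            "certificate": json.dumps(cert)}

start = datetime.datetime(2024, 1, 1)
creds = sign_temp_credentials("issuer", "secret", start,
                              start + datetime.timedelta(hours=1),
                              ["queue:get-artifact:public/*"])
```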
def _createSession(self): | return createSession()<EOL> | Create a requests session.
Helper method which can be overridden by child classes. | f15737:c0:m1 |
def makeHawkExt(self): | o = self.options<EOL>c = o.get('<STR_LIT>', {})<EOL>if c.get('<STR_LIT>') and c.get('<STR_LIT>'):<EOL><INDENT>ext = {}<EOL>cert = c.get('<STR_LIT>')<EOL>if cert:<EOL><INDENT>if six.PY3 and isinstance(cert, six.binary_type):<EOL><INDENT>cert = cert.decode()<EOL><DEDENT>if isinstance(cert, six.string_types):<EOL><INDENT>cert = json.loads(cert)<EOL><DEDENT>ext['<STR_LIT>'] = cert<EOL><DEDENT>if '<STR_LIT>' in o:<EOL><INDENT>ext['<STR_LIT>'] = o['<STR_LIT>']<EOL><DEDENT>return utils.makeB64UrlSafe(utils.encodeStringForB64Header(utils.dumpJson(ext)).strip())<EOL><DEDENT>else:<EOL><INDENT>return {}<EOL><DEDENT> | Make an 'ext' for Hawk authentication | f15737:c0:m2 |
def buildSignedUrl(self, methodName, *args, **kwargs): | if '<STR_LIT>' in kwargs:<EOL><INDENT>expiration = kwargs['<STR_LIT>']<EOL>del kwargs['<STR_LIT>']<EOL><DEDENT>else:<EOL><INDENT>expiration = self.options['<STR_LIT>']<EOL><DEDENT>expiration = int(time.time() + expiration) <EOL>requestUrl = self.buildUrl(methodName, *args, **kwargs)<EOL>if not self._hasCredentials():<EOL><INDENT>raise exceptions.TaskclusterAuthFailure('<STR_LIT>')<EOL><DEDENT>clientId = utils.toStr(self.options['<STR_LIT>']['<STR_LIT>'])<EOL>accessToken = utils.toStr(self.options['<STR_LIT>']['<STR_LIT>'])<EOL>def genBewit():<EOL><INDENT>resource = mohawk.base.Resource(<EOL>credentials={<EOL>'<STR_LIT:id>': clientId,<EOL>'<STR_LIT:key>': accessToken,<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>},<EOL>method='<STR_LIT:GET>',<EOL>ext=utils.toStr(self.makeHawkExt()),<EOL>url=requestUrl,<EOL>timestamp=expiration,<EOL>nonce='<STR_LIT>',<EOL>)<EOL>bewit = mohawk.bewit.get_bewit(resource)<EOL>return bewit.rstrip('<STR_LIT:=>')<EOL><DEDENT>bewit = genBewit()<EOL>if not bewit:<EOL><INDENT>raise exceptions.TaskclusterFailure('<STR_LIT>')<EOL><DEDENT>u = urllib.parse.urlparse(requestUrl)<EOL>qs = u.query<EOL>if qs:<EOL><INDENT>qs += '<STR_LIT:&>'<EOL><DEDENT>qs += '<STR_LIT>' % bewit<EOL>return urllib.parse.urlunparse((<EOL>u.scheme,<EOL>u.netloc,<EOL>u.path,<EOL>u.params,<EOL>qs,<EOL>u.fragment,<EOL>))<EOL> | Build a signed URL. This URL contains the credentials needed to access
a resource. | f15737:c0:m5 |
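The tail of `buildSignedUrl` appends the bewit to the URL's query string while preserving any existing query parameters. A stdlib-only sketch of that step (the bewit itself would come from mohawk and is stubbed here as a plain string; the parameter name `bewit` follows the Hawk convention):

```python
# Append a Hawk bewit to a URL as a query-string parameter, keeping any
# existing query intact, as the end of buildSignedUrl does.
from urllib.parse import urlparse, urlunparse

def append_bewit(url, bewit):
    u = urlparse(url)
    qs = u.query + "&" if u.query else ""
    qs += "bewit=" + bewit
    return urlunparse((u.scheme, u.netloc, u.path, u.params, qs, u.fragment))

signed = append_bewit("https://queue.example.com/v1/task/abc?x=1", "DEADBEEF")
# signed == "https://queue.example.com/v1/task/abc?x=1&bewit=DEADBEEF"
```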
def _constructUrl(self, route): | return liburls.api(<EOL>self.options['<STR_LIT>'],<EOL>self.serviceName,<EOL>self.apiVersion,<EOL>route.rstrip('<STR_LIT:/>'))<EOL> | Construct a URL for the given route on this service, based on the
rootUrl | f15737:c0:m6 |
def _makeApiCall(self, entry, *args, **kwargs): | x = self._processArgs(entry, *args, **kwargs)<EOL>routeParams, payload, query, paginationHandler, paginationLimit = x<EOL>route = self._subArgsInRoute(entry, routeParams)<EOL>if paginationLimit and '<STR_LIT>' in entry.get('<STR_LIT>', []):<EOL><INDENT>query['<STR_LIT>'] = paginationLimit<EOL><DEDENT>if query:<EOL><INDENT>_route = route + '<STR_LIT:?>' + urllib.parse.urlencode(query)<EOL><DEDENT>else:<EOL><INDENT>_route = route<EOL><DEDENT>response = self._makeHttpRequest(entry['<STR_LIT>'], _route, payload)<EOL>if paginationHandler:<EOL><INDENT>paginationHandler(response)<EOL>while response.get('<STR_LIT>'):<EOL><INDENT>query['<STR_LIT>'] = response['<STR_LIT>']<EOL>_route = route + '<STR_LIT:?>' + urllib.parse.urlencode(query)<EOL>response = self._makeHttpRequest(entry['<STR_LIT>'], _route, payload)<EOL>paginationHandler(response)<EOL><DEDENT><DEDENT>else:<EOL><INDENT>return response<EOL><DEDENT> | This function is used to dispatch calls to other functions
for a given API Reference entry | f15737:c0:m7 |
def _processArgs(self, entry, *_args, **_kwargs): | <EOL>args = list(_args)<EOL>kwargs = copy.deepcopy(_kwargs)<EOL>reqArgs = entry['<STR_LIT:args>']<EOL>routeParams = {}<EOL>query = {}<EOL>payload = None<EOL>kwApiArgs = {}<EOL>paginationHandler = None<EOL>paginationLimit = None<EOL>if len(kwargs) == <NUM_LIT:0>:<EOL><INDENT>if '<STR_LIT:input>' in entry and len(args) == len(reqArgs) + <NUM_LIT:1>:<EOL><INDENT>payload = args.pop()<EOL><DEDENT>if len(args) != len(reqArgs):<EOL><INDENT>log.debug(args)<EOL>log.debug(reqArgs)<EOL>raise exceptions.TaskclusterFailure('<STR_LIT>')<EOL><DEDENT>log.debug('<STR_LIT>')<EOL><DEDENT>else:<EOL><INDENT>isFlatKwargs = True<EOL>if len(kwargs) == len(reqArgs):<EOL><INDENT>for arg in reqArgs:<EOL><INDENT>if not kwargs.get(arg, False):<EOL><INDENT>isFlatKwargs = False<EOL>break<EOL><DEDENT><DEDENT>if '<STR_LIT:input>' in entry and len(args) != <NUM_LIT:1>:<EOL><INDENT>isFlatKwargs = False<EOL><DEDENT>if '<STR_LIT:input>' not in entry and len(args) != <NUM_LIT:0>:<EOL><INDENT>isFlatKwargs = False<EOL><DEDENT>else:<EOL><INDENT>pass <EOL><DEDENT><DEDENT>else:<EOL><INDENT>isFlatKwargs = False<EOL><DEDENT>if isFlatKwargs:<EOL><INDENT>if '<STR_LIT:input>' in entry:<EOL><INDENT>payload = args.pop()<EOL><DEDENT>kwApiArgs = kwargs<EOL>log.debug('<STR_LIT>')<EOL>warnings.warn(<EOL>"<STR_LIT>",<EOL>PendingDeprecationWarning<EOL>)<EOL><DEDENT>else:<EOL><INDENT>kwApiArgs = kwargs.get('<STR_LIT>', {})<EOL>payload = kwargs.get('<STR_LIT>', None)<EOL>query = kwargs.get('<STR_LIT>', {})<EOL>paginationHandler = kwargs.get('<STR_LIT>', None)<EOL>paginationLimit = kwargs.get('<STR_LIT>', None)<EOL>log.debug('<STR_LIT>')<EOL><DEDENT><DEDENT>if '<STR_LIT:input>' in entry and isinstance(payload, type(None)):<EOL><INDENT>raise exceptions.TaskclusterFailure('<STR_LIT>')<EOL><DEDENT>for arg in args:<EOL><INDENT>if not isinstance(arg, six.string_types) and not isinstance(arg, int):<EOL><INDENT>raise exceptions.TaskclusterFailure(<EOL>'<STR_LIT>' % (arg, 
entry['<STR_LIT:name>']))<EOL><DEDENT><DEDENT>for name, arg in six.iteritems(kwApiArgs):<EOL><INDENT>if not isinstance(arg, six.string_types) and not isinstance(arg, int):<EOL><INDENT>raise exceptions.TaskclusterFailure(<EOL>'<STR_LIT>' % (name, arg, entry['<STR_LIT:name>']))<EOL><DEDENT><DEDENT>if len(args) > <NUM_LIT:0> and len(kwApiArgs) > <NUM_LIT:0>:<EOL><INDENT>raise exceptions.TaskclusterFailure('<STR_LIT>')<EOL><DEDENT>if len(reqArgs) > len(args) + len(kwApiArgs):<EOL><INDENT>raise exceptions.TaskclusterFailure(<EOL>'<STR_LIT>' % (<EOL>entry['<STR_LIT:name>'], len(reqArgs), len(args) + len(kwApiArgs)))<EOL><DEDENT>if len(args) > len(reqArgs):<EOL><INDENT>raise exceptions.TaskclusterFailure('<STR_LIT>',<EOL>entry['<STR_LIT:name>'])<EOL><DEDENT>i = <NUM_LIT:0><EOL>for arg in args:<EOL><INDENT>log.debug('<STR_LIT>', arg)<EOL>routeParams[reqArgs[i]] = arg<EOL>i += <NUM_LIT:1><EOL><DEDENT>log.debug('<STR_LIT>', routeParams)<EOL>routeParams.update(kwApiArgs)<EOL>log.debug('<STR_LIT>', routeParams)<EOL>if len(reqArgs) != len(routeParams):<EOL><INDENT>errMsg = '<STR_LIT>' % (<EOL>entry['<STR_LIT:name>'],<EOL>'<STR_LIT:U+002C>'.join(reqArgs),<EOL>routeParams.keys())<EOL>log.error(errMsg)<EOL>raise exceptions.TaskclusterFailure(errMsg)<EOL><DEDENT>for reqArg in reqArgs:<EOL><INDENT>if reqArg not in routeParams:<EOL><INDENT>errMsg = '<STR_LIT>' % (<EOL>entry['<STR_LIT:name>'], reqArg)<EOL>log.error(errMsg)<EOL>raise exceptions.TaskclusterFailure(errMsg)<EOL><DEDENT><DEDENT>return routeParams, payload, query, paginationHandler, paginationLimit<EOL> | Given an entry, positional and keyword arguments, figure out what
the query-string options, payload and API arguments are. | f15737:c0:m8 |
def _subArgsInRoute(self, entry, args): | route = entry['<STR_LIT>']<EOL>for arg, val in six.iteritems(args):<EOL><INDENT>toReplace = "<STR_LIT>" % arg<EOL>if toReplace not in route:<EOL><INDENT>raise exceptions.TaskclusterFailure(<EOL>'<STR_LIT>' % (arg, entry['<STR_LIT:name>']))<EOL><DEDENT>val = urllib.parse.quote(str(val).encode("<STR_LIT:utf-8>"), '<STR_LIT>')<EOL>route = route.replace("<STR_LIT>" % arg, val)<EOL><DEDENT>return route.lstrip('<STR_LIT:/>')<EOL> | Given a route like "/task/<taskId>/artifacts" and a mapping like
{"taskId": "12345"}, return a string like "/task/12345/artifacts" | f15737:c0:m9 |
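The substitution `_subArgsInRoute` performs can be sketched directly from its docstring's own example: each `<name>` placeholder in the route is replaced with the URL-quoted value, and the leading slash is stripped. The `safe` characters passed to `quote` are elided in the dump, so an empty set is assumed here.

```python
# Sketch of route-parameter substitution, per the docstring example above.
from urllib.parse import quote

def sub_args_in_route(route, args):
    for name, val in args.items():
        placeholder = "<%s>" % name
        if placeholder not in route:
            raise ValueError("unknown arg: %s" % name)
        route = route.replace(placeholder, quote(str(val), ""))
    return route.lstrip("/")

sub_args_in_route("/task/<taskId>/artifacts", {"taskId": "12345"})
# "task/12345/artifacts"
```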
def _hasCredentials(self): | cred = self.options.get('<STR_LIT>')<EOL>return (<EOL>cred and<EOL>'<STR_LIT>' in cred and<EOL>'<STR_LIT>' in cred and<EOL>cred['<STR_LIT>'] and<EOL>cred['<STR_LIT>']<EOL>)<EOL> | Return True if credentials are given | f15737:c0:m10 |
def _makeHttpRequest(self, method, route, payload): | url = self._constructUrl(route)<EOL>log.debug('<STR_LIT>', url)<EOL>hawkExt = self.makeHawkExt()<EOL>if payload is not None:<EOL><INDENT>payload = utils.dumpJson(payload)<EOL><DEDENT>retry = -<NUM_LIT:1> <EOL>retries = self.options['<STR_LIT>']<EOL>while retry < retries:<EOL><INDENT>retry += <NUM_LIT:1><EOL>if retry > <NUM_LIT:0>:<EOL><INDENT>time.sleep(utils.calculateSleepTime(retry))<EOL><DEDENT>if self._hasCredentials():<EOL><INDENT>sender = mohawk.Sender(<EOL>credentials={<EOL>'<STR_LIT:id>': self.options['<STR_LIT>']['<STR_LIT>'],<EOL>'<STR_LIT:key>': self.options['<STR_LIT>']['<STR_LIT>'],<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>},<EOL>ext=hawkExt if hawkExt else {},<EOL>url=url,<EOL>content=payload if payload else '<STR_LIT>',<EOL>content_type='<STR_LIT:application/json>' if payload else '<STR_LIT>',<EOL>method=method,<EOL>)<EOL>headers = {'<STR_LIT>': sender.request_header}<EOL><DEDENT>else:<EOL><INDENT>log.debug('<STR_LIT>')<EOL>headers = {}<EOL><DEDENT>if payload:<EOL><INDENT>headers['<STR_LIT:Content-Type>'] = '<STR_LIT:application/json>'<EOL><DEDENT>log.debug('<STR_LIT>', retry)<EOL>try:<EOL><INDENT>response = utils.makeSingleHttpRequest(method, url, payload, headers)<EOL><DEDENT>except requests.exceptions.RequestException as rerr:<EOL><INDENT>if retry < retries:<EOL><INDENT>log.warn('<STR_LIT>' % rerr)<EOL>continue<EOL><DEDENT>raise exceptions.TaskclusterConnectionError(<EOL>"<STR_LIT>",<EOL>superExc=rerr<EOL>)<EOL><DEDENT>status = response.status_code<EOL>if status == <NUM_LIT>:<EOL><INDENT>return None<EOL><DEDENT>if <NUM_LIT> <= status and status < <NUM_LIT> and retry < retries:<EOL><INDENT>log.warn('<STR_LIT>' % status)<EOL>continue<EOL><DEDENT>if status < <NUM_LIT:200> or status >= <NUM_LIT>:<EOL><INDENT>data = {}<EOL>try:<EOL><INDENT>data = response.json()<EOL><DEDENT>except Exception:<EOL><INDENT>pass <EOL><DEDENT>message = "<STR_LIT>"<EOL>if isinstance(data, dict):<EOL><INDENT>message = 
data.get('<STR_LIT:message>')<EOL><DEDENT>else:<EOL><INDENT>if status == <NUM_LIT>:<EOL><INDENT>message = "<STR_LIT>"<EOL><DEDENT>elif status == <NUM_LIT>:<EOL><INDENT>message = "<STR_LIT>"<EOL><DEDENT><DEDENT>if status == <NUM_LIT>:<EOL><INDENT>raise exceptions.TaskclusterAuthFailure(<EOL>message,<EOL>status_code=status,<EOL>body=data,<EOL>superExc=None<EOL>)<EOL><DEDENT>raise exceptions.TaskclusterRestFailure(<EOL>message,<EOL>status_code=status,<EOL>body=data,<EOL>superExc=None<EOL>)<EOL><DEDENT>try:<EOL><INDENT>return response.json()<EOL><DEDENT>except ValueError:<EOL><INDENT>return {"<STR_LIT>": response}<EOL><DEDENT><DEDENT>assert False, "<STR_LIT>"<EOL> | Make an HTTP Request for the API endpoint. This method wraps
the retry-on-failure logic and passes off the actual work
of doing an HTTP request to another method. | f15737:c0:m11 |
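The retry loop above sleeps between attempts via `utils.calculateSleepTime`. A minimal sketch of such a backoff helper follows; the constants and jitter range here are assumptions, not the actual implementation:

```python
import random

def calculate_sleep_time(attempt, delay_factor=0.1, max_delay=30):
    """Exponential backoff with jitter, as used between HTTP retries.

    A sketch only: the base delay, cap, and jitter range are assumptions
    about what utils.calculateSleepTime does, not its real constants.
    """
    if attempt <= 0:
        return 0
    # Double the base delay on each retry and cap it, then apply jitter
    # so that many clients retrying at once do not synchronize.
    delay = min(delay_factor * (2 ** (attempt - 1)), max_delay)
    return delay * random.uniform(0.5, 1.0)
```

With these numbers the first retry waits 0.05-0.1 seconds and later retries never exceed 30 seconds.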
def ping(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.
This method is ``stable`` | f15738:c0:m0 |
def listClients(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List Clients
Get a list of all clients. With `prefix`, only clients for which
it is a prefix of the clientId are returned.
By default this end-point will try to return up to 1000 clients in one
request, but it **may return fewer, even none**.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listClients` with the last `continuationToken` until you
get a result without a `continuationToken`.
This method gives output: ``v1/list-clients-response.json#``
This method is ``stable`` | f15738:c0:m1 |
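The continuation-token contract described above (keep calling until no `continuationToken` is returned) can be driven by a small helper; `listClients` stands here for any paginated method on this client, and the response field names follow the docs above:

```python
def list_all_clients(auth, prefix=None):
    """Follow continuationToken until it is absent, collecting results.

    `auth` is assumed to expose listClients(query=...) returning a dict
    with 'clients' and an optional 'continuationToken', per the docs.
    """
    clients, query = [], {}
    if prefix is not None:
        query['prefix'] = prefix
    while True:
        result = auth.listClients(query=query)
        clients.extend(result.get('clients', []))
        token = result.get('continuationToken')
        if not token:
            # Only the absence of a token guarantees we have seen everything.
            return clients
        query['continuationToken'] = token
```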
def client(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Client
Get information about a single client.
This method gives output: ``v1/get-client-response.json#``
This method is ``stable`` | f15738:c0:m2 |
def createClient(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Create Client
Create a new client and get the `accessToken` for this client.
You should store the `accessToken` from this API call as there is no
other way to retrieve it.
If you lose the `accessToken` you can call `resetAccessToken` to reset
it, and a new `accessToken` will be returned, but you cannot retrieve the
current `accessToken`.
If a client with the same `clientId` already exists this operation will
fail. Use `updateClient` if you wish to update an existing client.
The caller's scopes must satisfy `scopes`.
This method takes input: ``v1/create-client-request.json#``
This method gives output: ``v1/create-client-response.json#``
This method is ``stable`` | f15738:c0:m3 |
def resetAccessToken(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Reset `accessToken`
Reset a client's `accessToken`; this will revoke the existing
`accessToken`, generate a new `accessToken` and return it from this
call.
There is no way to retrieve an existing `accessToken`, so if you lose it
you must reset the accessToken to acquire it again.
This method gives output: ``v1/create-client-response.json#``
This method is ``stable`` | f15738:c0:m4 |
def updateClient(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Update Client
Update an existing client. The `clientId` and `accessToken` cannot be
updated, but `scopes` can be modified. The caller's scopes must
satisfy all scopes being added to the client in the update operation.
If no scopes are given in the request, the client's scopes remain
unchanged.
This method takes input: ``v1/create-client-request.json#``
This method gives output: ``v1/get-client-response.json#``
This method is ``stable`` | f15738:c0:m5 |
def enableClient(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Enable Client
Enable a client that was disabled with `disableClient`. If the client
is already enabled, this does nothing.
This is typically used by identity providers to re-enable clients that
had been disabled when the corresponding identity's scopes changed.
This method gives output: ``v1/get-client-response.json#``
This method is ``stable`` | f15738:c0:m6 |
def disableClient(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Disable Client
Disable a client. If the client is already disabled, this does nothing.
This is typically used by identity providers to disable clients when the
corresponding identity's scopes no longer satisfy the client's scopes.
This method gives output: ``v1/get-client-response.json#``
This method is ``stable`` | f15738:c0:m7 |
def deleteClient(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Delete Client
Delete a client, please note that any roles related to this client must
be deleted independently.
This method is ``stable`` | f15738:c0:m8 |
def listRoles(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List Roles
Get a list of all roles, each role object also includes the list of
scopes it expands to.
This method gives output: ``v1/list-roles-response.json#``
This method is ``stable`` | f15738:c0:m9 |
def listRoleIds(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List Role IDs
If no limit is given, the roleIds of all roles are returned. Since this
list may become long, callers can use the `limit` and `continuationToken`
query arguments to page through the responses.
This method gives output: ``v1/list-role-ids-response.json#``
This method is ``stable`` | f15738:c0:m10 |
def listRoles2(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List Roles
If no limit is given, all roles are returned. Since this
list may become long, callers can use the `limit` and `continuationToken`
query arguments to page through the responses.
This method gives output: ``v1/list-roles2-response.json#``
This method is ``stable`` | f15738:c0:m11 |
def role(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Role
Get information about a single role, including the set of scopes that the
role expands to.
This method gives output: ``v1/get-role-response.json#``
This method is ``stable`` | f15738:c0:m12 |
def createRole(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Create Role
Create a new role.
The caller's scopes must satisfy the new role's scopes.
If there already exists a role with the same `roleId` this operation
will fail. Use `updateRole` to modify an existing role.
Creation of a role that will generate an infinite expansion will result
in an error response.
This method takes input: ``v1/create-role-request.json#``
This method gives output: ``v1/get-role-response.json#``
This method is ``stable`` | f15738:c0:m13 |
def updateRole(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Update Role
Update an existing role.
The caller's scopes must satisfy all of the new scopes being added, but
need not satisfy all of the role's existing scopes.
An update of a role that will generate an infinite expansion will result
in an error response.
This method takes input: ``v1/create-role-request.json#``
This method gives output: ``v1/get-role-response.json#``
This method is ``stable`` | f15738:c0:m14 |
def deleteRole(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Delete Role
Delete a role. This operation will succeed regardless of whether or not
the role exists.
This method is ``stable`` | f15738:c0:m15 |
def expandScopesGet(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Expand Scopes
Return an expanded copy of the given scopeset, with scopes implied by any
roles included.
This call uses the GET method with an HTTP body. It remains only for
backward compatibility.
This method takes input: ``v1/scopeset.json#``
This method gives output: ``v1/scopeset.json#``
This method is ``deprecated`` | f15738:c0:m16 |
def expandScopes(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Expand Scopes
Return an expanded copy of the given scopeset, with scopes implied by any
roles included.
This method takes input: ``v1/scopeset.json#``
This method gives output: ``v1/scopeset.json#``
This method is ``stable`` | f15738:c0:m17 |
def currentScopes(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Current Scopes
Return the expanded scopes available in the request, taking into account all sources
of scopes and scope restrictions (temporary credentials, assumeScopes, client scopes,
and roles).
This method gives output: ``v1/scopeset.json#``
This method is ``stable`` | f15738:c0:m18 |
def awsS3Credentials(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Temporary Read/Write Credentials S3
Get temporary AWS credentials for `read-write` or `read-only` access to
a given `bucket` and `prefix` within that bucket.
The `level` parameter can be `read-write` or `read-only` and determines
which type of credentials are returned. Please note that the `level`
parameter is required in the scope guarding access. The bucket name must
not contain `.`, as recommended by Amazon.
This method can only allow access to a whitelisted set of buckets. To add
a bucket to that whitelist, contact the Taskcluster team, who will add it to
the appropriate IAM policy. If the bucket is in a different AWS account, you
will also need to add a bucket policy allowing access from the Taskcluster
account. That policy should look like this:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "allow-taskcluster-auth-to-delegate-access",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::692406183521:root"
},
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::<bucket>",
"arn:aws:s3:::<bucket>/*"
]
}
]
}
```
The credentials are set to expire after an hour, but this behavior is
subject to change. Hence, you should always read the `expires` property
from the response, if you intend to maintain active credentials in your
application.
Please note that your `prefix` may not start with slash `/`. Such a prefix
is allowed on S3, but we forbid it here to discourage bad behavior.
Also note that if your `prefix` doesn't end in a slash `/`, the STS
credentials may allow access to unexpected keys, as S3 does not treat
slashes specially. For example, a prefix of `my-folder` will allow
access to `my-folder/file.txt` as expected, but also to `my-folder.txt`,
which may not be intended.
Finally, note that the `PutObjectAcl` call is not allowed. Passing a canned
ACL other than `private` to `PutObject` is treated as a `PutObjectAcl` call, and
will result in an access-denied error from AWS. This limitation is due to a
security flaw in Amazon S3 which might otherwise allow indefinite access to
uploaded objects.
**EC2 metadata compatibility**, if the querystring parameter
`?format=iam-role-compat` is given, the response will be compatible
with the JSON exposed by the EC2 metadata service. This aims to ease
compatibility for libraries and tools built to auto-refresh credentials.
For details on the format returned by EC2 metadata service see:
[EC2 User Guide](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials).
This method gives output: ``v1/aws-s3-credentials-response.json#``
This method is ``stable`` | f15738:c0:m19 |
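Because the one-hour lifetime is subject to change, the response's `expires` field should drive refresh. A sketch of that check, assuming `expires` is an ISO 8601 UTC timestamp such as `2017-01-01T12:00:00.000Z` (the exact format is an assumption):

```python
from datetime import datetime, timedelta

def credentials_need_refresh(response, margin_seconds=300):
    """True when an awsS3Credentials response should be refreshed.

    Refreshes a few minutes early so requests in flight do not race
    expiry; the timestamp format parsed here is an assumption.
    """
    expires = datetime.strptime(response['expires'], '%Y-%m-%dT%H:%M:%S.%fZ')
    return datetime.utcnow() + timedelta(seconds=margin_seconds) >= expires
```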
def azureAccounts(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List Accounts Managed by Auth
Retrieve a list of all Azure accounts managed by Taskcluster Auth.
This method gives output: ``v1/azure-account-list-response.json#``
This method is ``stable`` | f15738:c0:m20 |
def azureTables(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List Tables in an Account Managed by Auth
Retrieve a list of all tables in an account.
This method gives output: ``v1/azure-table-list-response.json#``
This method is ``stable`` | f15738:c0:m21 |
def azureTableSAS(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Shared-Access-Signature for Azure Table
Get a shared access signature (SAS) string for use with a specific Azure
Table Storage table.
The `level` parameter can be `read-write` or `read-only` and determines
which type of credentials are returned. If level is read-write, it will create the
table if it doesn't already exist.
This method gives output: ``v1/azure-table-access-response.json#``
This method is ``stable`` | f15738:c0:m22 |
def azureContainers(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List containers in an Account Managed by Auth
Retrieve a list of all containers in an account.
This method gives output: ``v1/azure-container-list-response.json#``
This method is ``stable`` | f15738:c0:m23 |
def azureContainerSAS(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Shared-Access-Signature for Azure Container
Get a shared access signature (SAS) string for use with a specific Azure
Blob Storage container.
The `level` parameter can be `read-write` or `read-only` and determines
which type of credentials are returned. If level is read-write, it will create the
container if it doesn't already exist.
This method gives output: ``v1/azure-container-response.json#``
This method is ``stable`` | f15738:c0:m24 |
def sentryDSN(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get DSN for Sentry Project
Get temporary DSN (access credentials) for a sentry project.
The credentials returned can be used with any Sentry client for up to
24 hours, after which the credentials will be automatically disabled.
If the project doesn't exist it will be created, and assigned to the
initial team configured for this component. Contact a Sentry admin
to have the project transferred to a team you have access to, if needed.
This method gives output: ``v1/sentry-dsn-response.json#``
This method is ``stable`` | f15738:c0:m25 |
def statsumToken(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Token for Statsum Project
Get temporary `token` and `baseUrl` for sending metrics to statsum.
The token is valid for 24 hours, clients should refresh after expiration.
This method gives output: ``v1/statsum-token-response.json#``
This method is ``stable`` | f15738:c0:m26 |
def websocktunnelToken(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a client token for the Websocktunnel service
Get a temporary token suitable for use connecting to a
[websocktunnel](https://github.com/taskcluster/websocktunnel) server.
The resulting token will only be accepted by servers with a matching audience
value. Reaching such a server is the caller's responsibility. In general,
a server URL or set of URLs should be provided to the caller as configuration
along with the audience value.
The token is valid for a limited time (on the scale of hours). Callers should
refresh it before expiration.
This method gives output: ``v1/websocktunnel-token-response.json#``
This method is ``stable`` | f15738:c0:m27 |
def authenticateHawk(self, *args, **kwargs): | return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Authenticate Hawk Request
Validate the request signature given on input and return list of scopes
that the authenticating client has.
This method is used by other services that wish to rely on Taskcluster
credentials for authentication. This way we can use Hawk without having
the secret credentials leave this service.
This method takes input: ``v1/authenticate-hawk-request.json#``
This method gives output: ``v1/authenticate-hawk-response.json#``
This method is ``stable`` | f15738:c0:m28 |
async def ping(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Ping Server
Respond without doing anything.
This endpoint is used to check that the service is up.
This method is ``stable`` | f15739:c0:m0 |
async def task(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Task Definition
This end-point will return the task-definition. Notice that the task
definition may have been modified by the queue; if an optional property is
not specified the queue may provide a default value.
This method gives output: ``v1/task.json#``
This method is ``stable`` | f15739:c0:m1 |
async def status(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT:status>"], *args, **kwargs)<EOL> | Get task status
Get task status structure from `taskId`
This method gives output: ``v1/task-status-response.json#``
This method is ``stable`` | f15739:c0:m2 |
async def listTaskGroup(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List Task Group
List tasks sharing the same `taskGroupId`.
As a task-group may contain an unbounded number of tasks, this end-point
may return a `continuationToken`. To continue listing tasks you must call
the `listTaskGroup` again with the `continuationToken` as the
query-string option `continuationToken`.
By default this end-point will try to return up to 1000 members in one
request, but it **may return fewer**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listTaskGroup` with the last `continuationToken` until you
get a result without a `continuationToken`.
If you are not interested in listing all the members at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/list-task-group-response.json#``
This method is ``stable`` | f15739:c0:m3 |
async def listDependentTasks(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | List Dependent Tasks
List tasks that depend on the given `taskId`.
As many tasks from different task-groups may depend on a single task,
this end-point may return a `continuationToken`. To continue listing
tasks you must call `listDependentTasks` again with the
`continuationToken` as the query-string option `continuationToken`.
By default this end-point will try to return up to 1000 tasks in one
request, but it **may return fewer**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listDependentTasks` with the last `continuationToken` until
you get a result without a `continuationToken`.
If you are not interested in listing all the tasks at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/list-dependent-tasks-response.json#``
This method is ``stable`` | f15739:c0:m4 |
async def createTask(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Create New Task
Create a new task, this is an **idempotent** operation, so repeat it if
you get an internal server error or the network connection is dropped.
**Task `deadline`**: the deadline property can be no more than 5 days
into the future. This is to limit the amount of pending tasks not being
taken care of. Ideally, you should use a much shorter deadline.
**Task expiration**: the `expires` property must be greater than the
task `deadline`. If not provided it will default to `deadline` + one
year. Notice that artifacts created by the task must expire before the task.
**Task specific routing-keys**: using the `task.routes` property you may
define task specific routing-keys. If a task has a task specific
routing-key: `<route>`, then when the AMQP message about the task is
published, the message will be CC'ed with the routing-key:
`route.<route>`. This is useful if you want another component to listen
for completed tasks you have posted. The caller must have scope
`queue:route:<route>` for each route.
**Dependencies**: any tasks referenced in `task.dependencies` must have
already been created at the time of this call.
**Scopes**: Note that the scopes required to complete this API call depend
on the content of the `scopes`, `routes`, `schedulerId`, `priority`,
`provisionerId`, and `workerType` properties of the task definition.
**Legacy Scopes**: The `queue:create-task:..` scope without a priority and
the `queue:define-task:..` and `queue:task-group-id:..` scopes are considered
legacy and should not be used. Note that the new, non-legacy scopes require
a `queue:scheduler-id:..` scope as well as scopes for the proper priority.
This method takes input: ``v1/create-task-request.json#``
This method gives output: ``v1/task-status-response.json#``
This method is ``stable`` | f15739:c0:m5 |
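The deadline and expiry rules above (deadline at most 5 days out, `expires` after `deadline`, defaulting to deadline plus one year) can be made concrete in a small builder. Field names follow `v1/create-task-request.json`; the payload contents are worker-specific and hypothetical here:

```python
from datetime import datetime, timedelta

def make_task(provisioner_id, worker_type, payload, deadline_hours=2):
    """Sketch of a minimal task definition for queue.createTask."""
    def iso(dt):
        return dt.strftime('%Y-%m-%dT%H:%M:%S.%fZ')
    now = datetime.utcnow()
    deadline = now + timedelta(hours=deadline_hours)  # must be <= 5 days out
    return {
        'provisionerId': provisioner_id,
        'workerType': worker_type,
        'created': iso(now),
        'deadline': iso(deadline),
        # expires must be greater than deadline; mirror the documented
        # default of deadline + one year.
        'expires': iso(deadline + timedelta(days=365)),
        'payload': payload,
    }
```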
async def defineTask(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Define Task
**Deprecated**, this is the same as `createTask` with a **self-dependency**.
This is only present for legacy support.
This method takes input: ``v1/create-task-request.json#``
This method gives output: ``v1/task-status-response.json#``
This method is ``deprecated`` | f15739:c0:m6 |
async def scheduleTask(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Schedule Defined Task
scheduleTask will schedule a task to be executed, even if it has
unresolved dependencies. A task would otherwise only be scheduled if
its dependencies were resolved.
This is useful if you have defined a task that depends on itself or on
some other task that has not been resolved, but you wish the task to be
scheduled immediately.
This will announce the task as pending and workers will be allowed to
claim it and resolve the task.
**Note** this operation is **idempotent** and will not fail or complain
if called with a `taskId` that is already scheduled, or even resolved.
To reschedule a task previously resolved, use `rerunTask`.
This method gives output: ``v1/task-status-response.json#``
This method is ``stable`` | f15739:c0:m7 |
async def rerunTask(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Rerun a Resolved Task
This method _reruns_ a previously resolved task, even if it was
_completed_. This is useful if your task completes unsuccessfully, and
you just want to run it from scratch again. This will also reset the
number of `retries` allowed.
This method is deprecated in favour of creating a new task with the same
task definition (but with a new taskId).
Remember that `retries` in the task status counts the number of runs that
the queue has started because the worker stopped responding, for example
because a spot node died.
**Remark** this operation is idempotent, if you try to rerun a task that
is not either `failed` or `completed`, this operation will just return
the current task status.
This method gives output: ``v1/task-status-response.json#``
This method is ``deprecated`` | f15739:c0:m8 |
async def cancelTask(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Cancel Task
This method will cancel a task that is either `unscheduled`, `pending` or
`running`. It will resolve the current run as `exception` with
`reasonResolved` set to `canceled`. If the task isn't scheduled yet, i.e.
it doesn't have any runs, an initial run will be added and resolved as
described above. Hence, after canceling a task, it cannot be scheduled
with `queue.scheduleTask`, but a new run can be created with
`queue.rerun`. These semantics are equivalent to calling
`queue.scheduleTask` immediately followed by `queue.cancelTask`.
**Remark** this operation is idempotent, if you try to cancel a task that
isn't `unscheduled`, `pending` or `running`, this operation will just
return the current task status.
This method gives output: ``v1/task-status-response.json#``
This method is ``stable`` | f15739:c0:m9 |
async def claimWork(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Claim Work
Claim pending task(s) for the given `provisionerId`/`workerType` queue.
If any work is available (even if fewer than the requested number of
tasks), this will return immediately. Otherwise, it will block for tens of
seconds waiting for work. If no work appears, it will return an empty
list of tasks. Callers should sleep a short while (to avoid denial of
service in an error condition) and call the endpoint again. This is a
simple implementation of "long polling".
This method takes input: ``v1/claim-work-request.json#``
This method gives output: ``v1/claim-work-response.json#``
This method is ``stable`` | f15739:c0:m10 |
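The long-polling contract above suggests a worker loop like the following sketch; `handle_claim`, the worker identifiers, and the payload shape are assumptions, and the short sleep only guards against tight loops when no work is returned:

```python
import time

def work_loop(queue, provisioner_id, worker_type, handle_claim,
              max_iterations=None, idle_sleep=1):
    """Sketch of a claimWork polling loop ("long polling").

    claimWork itself blocks for tens of seconds when idle, so the extra
    sleep is only a safety net against error conditions.
    """
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        iterations += 1
        result = queue.claimWork(provisioner_id, worker_type, {
            'tasks': 4,                 # claim up to four tasks at once
            'workerGroup': 'my-group',  # hypothetical worker identity
            'workerId': 'my-worker-1',
        })
        tasks = result.get('tasks', [])
        if not tasks:
            time.sleep(idle_sleep)
            continue
        for claim in tasks:
            handle_claim(claim)
```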
async def claimTask(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Claim Task
claim a task - never documented
This method takes input: ``v1/task-claim-request.json#``
This method gives output: ``v1/task-claim-response.json#``
This method is ``deprecated`` | f15739:c0:m11 |
async def reclaimTask(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Reclaim task
Refresh the claim for a specific `runId` for a given `taskId`. This updates
the `takenUntil` property and returns a new set of temporary credentials
for performing requests on behalf of the task. These credentials should
be used in-place of the credentials returned by `claimWork`.
The `reclaimTask` request serves to:
* Postpone `takenUntil` preventing the queue from resolving
`claim-expired`,
* Refresh temporary credentials used for processing the task, and
* Abort execution if the task/run has been resolved.
If the `takenUntil` timestamp is exceeded the queue will resolve the run
as _exception_ with reason `claim-expired`, and proceed to retry the
task. This ensures that tasks are retried, even if workers disappear
without warning.
If the task is resolved, this end-point will return `409` reporting
`RequestConflict`. This typically happens if the task has been canceled
or the `task.deadline` has been exceeded. If reclaiming fails, workers
should abort the task and forget about the given `runId`. There is no
need to resolve the run or upload artifacts.
This method gives output: ``v1/task-reclaim-response.json#``
This method is ``stable`` | f15739:c0:m12 |
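The 409 behaviour described above maps naturally onto a small helper: reclaim, and treat a conflict as "the run is resolved, abort". The exception attribute checked here mirrors `TaskclusterRestFailure` from the request code earlier; this is a sketch, not the official client API:

```python
def try_reclaim(queue, task_id, run_id):
    """Reclaim a run; return the fresh claim, or None when the queue
    answers 409 and the worker should abort and forget the runId.
    """
    try:
        return queue.reclaimTask(task_id, run_id)
    except Exception as err:
        # TaskclusterRestFailure carries the HTTP status as status_code.
        if getattr(err, 'status_code', None) == 409:
            return None  # resolved: no need to resolve the run or upload
        raise
```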
async def reportCompleted(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Report Run Completed
Report a task completed, resolving the run as `completed`.
This method gives output: ``v1/task-status-response.json#``
This method is ``stable`` | f15739:c0:m13 |
async def reportFailed(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Report Run Failed
Report a run failed, resolving the run as `failed`. Use this to resolve
a run that failed because the task specific code behaved unexpectedly.
For example the task exited non-zero, or didn't produce expected output.
Do not use this if the task couldn't be run because of a malformed
payload or another unexpected condition. In these cases we have a task
exception, which should be reported with `reportException`.
This method gives output: ``v1/task-status-response.json#``
This method is ``stable`` | f15739:c0:m14 |
async def reportException(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Report Task Exception
Resolve a run as _exception_. Generally, you will want to report tasks as
failed instead of exception. You should `reportException` if,
* The `task.payload` is invalid,
* Non-existent resources are referenced,
* Declared actions cannot be executed due to unavailable resources,
* The worker had to shutdown prematurely,
* The worker experienced an unknown error, or,
* The task explicitly requested a retry.
Do not use this to signal that some user-specified code crashed for any
reason specific to this code. If user-specific code hits a resource that
is temporarily unavailable, the worker should report the task as _failed_.
This method takes input: ``v1/task-exception-request.json#``
This method gives output: ``v1/task-status-response.json#``
This method is ``stable`` | f15739:c0:m15 |
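The division of labour between `reportCompleted`, `reportFailed`, and `reportException` can be captured in a dispatcher. The reason strings follow the exception-request schema (`malformed-payload`, `worker-shutdown`, `internal-error`, and so on) but are listed here from memory, so treat them as assumptions:

```python
def report_outcome(queue, task_id, run_id, outcome, reason='internal-error'):
    """Resolve a run via the endpoint matching its outcome.

    outcome: 'completed', 'failed' (task code ran but misbehaved), or
    anything else, which is treated as an exception (invalid payload,
    worker shutdown, and so on).
    """
    if outcome == 'completed':
        return queue.reportCompleted(task_id, run_id)
    if outcome == 'failed':
        return queue.reportFailed(task_id, run_id)
    return queue.reportException(task_id, run_id, {'reason': reason})
```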
async def createArtifact(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Create Artifact
This API end-point creates an artifact for a specific run of a task. This
should **only** be used by a worker currently operating on this task, or
from a process running within the task (i.e. on the worker).
All artifacts must specify when they `expires`, the queue will
automatically take care of deleting artifacts past their
expiration point. This feature makes it feasible to upload large
intermediate artifacts from data processing applications, as the
artifacts can be set to expire a few days later.
We currently support 3 different `storageType`s; each storage type has
slightly different features and in some cases different semantics.
We also have 2 deprecated `storageType`s which are only maintained for
backwards compatibility and should not be used in new implementations.
**Blob artifacts**, are useful for storing large files. Currently, these
are all stored in S3 but there are facilities for adding support for other
backends in future. A call for this type of artifact must provide information
about the file which will be uploaded. This includes sha256 sums and sizes.
This method will return a list of general form HTTP requests which are signed
by AWS S3 credentials managed by the Queue. Once these requests are completed
the list of `ETag` values returned by the requests must be passed to the
queue's `completeArtifact` method.
**S3 artifacts** (DEPRECATED) are useful for static files which will be
stored on S3. When creating an S3 artifact the queue will return a
pre-signed URL to which you can do a `PUT` request to upload your
artifact. Note that `PUT` request **must** specify the `content-length`
header and **must** give the `content-type` header the same value as in
the request to `createArtifact`.
**Azure artifacts** (DEPRECATED) are stored in the _Azure Blob Storage_ service,
which given the consistency guarantees and API interface offered by Azure
is more suitable for artifacts that will be modified during the execution
of the task. For example docker-worker has a feature that persists the
task log to Azure Blob Storage every few seconds creating a somewhat
live log. A request to create an Azure artifact will return a URL
featuring a [Shared-Access-Signature](http://msdn.microsoft.com/en-us/library/azure/dn140256.aspx),
refer to MSDN for further information on how to use these.
**Warning: azure artifact is currently an experimental feature subject
to changes and data-drops.**
**Reference artifacts** only consist of meta-data which the queue will
store for you. These artifacts really only have a `url` property and
when the artifact is requested the client will be redirect the URL
provided with a `303` (See Other) redirect. Please note that we cannot
delete artifacts you upload to other service, we can only delete the
reference to the artifact, when it expires.
**Error artifacts** consist only of metadata which the queue will
store for you. These artifacts are only meant to indicate that the
worker or the task failed to generate a specific artifact that it
would otherwise have uploaded. For example, docker-worker will upload an
error artifact if the file it was supposed to upload doesn't exist or
turns out to be a directory. Clients requesting an error artifact will
get a `424` (Failed Dependency) response. This is mainly designed to
ensure that dependent tasks can distinguish between artifacts that were
supposed to be generated and artifacts for which the name is misspelled.
**Artifact immutability**, generally speaking you cannot overwrite an
artifact once it is created. But if you repeat the request with the same
properties the request will succeed, as the operation is idempotent.
This is useful if you need to refresh a signed URL while uploading.
Do not abuse this to overwrite artifacts created by another entity,
such as a worker host overwriting an artifact created by worker code.
As a special case the `url` property on _reference artifacts_ can be
updated. You should only use this to update the `url` property for
reference artifacts your process has created.
This method takes input: ``v1/post-artifact-request.json#``
This method gives output: ``v1/post-artifact-response.json#``
This method is ``stable`` | f15739:c0:m16 |
async def completeArtifact(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Complete Artifact
This endpoint finalises an upload done through the blob `storageType`.
The queue will ensure that the task/run is still allowing artifacts
to be uploaded. For single-part S3 blob artifacts, this endpoint
will simply ensure the artifact is present in S3. For multipart S3
artifacts, the endpoint will perform the commit step of the multipart
upload flow. As the final step for both multi and single part artifacts,
the `present` entity field will be set to `true` to reflect that the
artifact is now present and a message published to pulse. NOTE: This
endpoint *must* be called for all artifacts of storageType 'blob'
This method takes input: ``v1/put-artifact-request.json#``
This method is ``experimental`` | f15739:c0:m17 |
async def getArtifact(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Artifact from Run
Get artifact by `<name>` from a specific run.
**Public Artifacts**, in order to get an artifact you need the scope
`queue:get-artifact:<name>`, where `<name>` is the name of the artifact.
But if the artifact `name` starts with `public/`, authentication and
authorization are not necessary to fetch the artifact.
**API Clients**, this method will redirect you to the artifact, if it is
stored externally. Either way, the response may not be JSON. So API
client users might want to generate a signed URL for this end-point and
use that URL with an HTTP client that can handle responses correctly.
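For clients that assemble the request themselves, the artifact endpoint path can be sketched as below. The route shape `task/<taskId>/runs/<runId>/artifacts/<name>` is assumed from the Queue's v1 API, and the root URL in the example is illustrative; signing is done by a taskcluster client and is not shown:

```python
from urllib.parse import quote

def artifact_url(root_url, task_id, run_id, name):
    """Build the (unsigned) getArtifact URL. Artifact names may contain
    slashes, which are left unescaped so that a name such as
    public/logs/live.log keeps its path structure."""
    return "{}/task/{}/runs/{}/artifacts/{}".format(
        root_url.rstrip("/"), task_id, run_id, quote(name, safe="/"))
```

A signed variant of such a URL can then be handed to any HTTP client that handles redirects and non-JSON bodies correctly, as the paragraph above recommends.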
**Downloading artifacts**
There are some special considerations for HTTP clients which download
artifacts. This API endpoint is designed to be compatible with an HTTP 1.1
compliant client, but has extra features to ensure the download is valid.
It is strongly recommended that consumers use either taskcluster-lib-artifact (JS),
taskcluster-lib-artifact-go (Go) or the CLI written in Go to interact with
artifacts.
In order to download an artifact the following must be done:
1. Obtain the queue URL. Building a signed URL with a taskcluster client is
recommended
1. Make a GET request which does not follow redirects
1. In all cases, if specified, the
x-taskcluster-location-{content,transfer}-{sha256,length} values must be
validated to be equal to the Content-Length and SHA256 checksum of the
final artifact downloaded, as well as of any intermediate redirects
1. If this response is a 500-series error, retry using an exponential
backoff. No more than 5 retries should be attempted
1. If this response is a 400-series error, treat it appropriately for
your context. This might be an error in responding to this request or
an Error storage type body. This request should not be retried.
1. If this response is a 200-series response, the response body is the artifact.
If the x-taskcluster-location-{content,transfer}-{sha256,length} and
x-taskcluster-location-content-encoding are specified, they should match
this response body
1. If the response type is a 300-series redirect, the artifact will be at the
location specified by the `Location` header. There are multiple artifact storage
types which use a 300-series redirect.
1. For all redirects followed, the user must verify that the content-sha256, content-length,
transfer-sha256, transfer-length and content-encoding match every further request. The final
artifact must also be validated against the values specified in the original queue response
1. Caching of requests with an x-taskcluster-artifact-storage-type value of `reference`
must not occur
1. A request which has x-taskcluster-artifact-storage-type value of `blob` and does not
have x-taskcluster-location-content-sha256 or x-taskcluster-location-content-length
must be treated as an error
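The validation rules above can be sketched as a pure check over the response headers and body. This sketch assumes the `identity` content-encoding case, in which the transfer and content values coincide; absent headers are skipped per the "if specified" rule:

```python
import hashlib

def validate_download(headers, body):
    """Check a downloaded body against the x-taskcluster-location-* headers.
    Returns True only if every header that is present matches the body;
    headers that are not present are simply not checked."""
    checks = {
        "x-taskcluster-location-content-sha256": hashlib.sha256(body).hexdigest(),
        "x-taskcluster-location-content-length": str(len(body)),
    }
    return all(headers[h] == v for h, v in checks.items() if h in headers)
```

For a `gzip`-encoded transfer the transfer-* headers would be checked against the raw wire bytes and the content-* headers against the decoded bytes, which this identity-only sketch does not attempt.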
**Headers**
The following important headers are set on the response to this method:
* location: the url of the artifact if a redirect is to be performed
* x-taskcluster-artifact-storage-type: the storage type. Example: blob, s3, error
The following important headers are set on responses to this method for Blob artifacts
* x-taskcluster-location-content-sha256: the SHA256 of the artifact
*after* any content-encoding is undone. Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-content-length: the number of bytes *after* any content-encoding
is undone
* x-taskcluster-location-transfer-sha256: the SHA256 of the artifact
*before* any content-encoding is undone. This is the SHA256 of what is sent over
the wire. Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-transfer-length: the number of bytes *before* any content-encoding
is undone, i.e. the number of bytes sent over the wire
* x-taskcluster-location-content-encoding: the content-encoding used. It will either
be `gzip` or `identity` right now. This is hardcoded to a value set when the artifact
was created and no content-negotiation occurs
* x-taskcluster-location-content-type: the content-type of the artifact
**Caching**, artifacts may be cached in data centers closer to the
workers in order to reduce bandwidth costs. This can lead to longer
response times. Caching can be skipped by setting the header
`x-taskcluster-skip-cache: true`; this should only be used for resources
where request volume is known to be low and caching is not useful.
(This feature may be disabled in the future; use it sparingly!)
This method is ``stable`` | f15739:c0:m18 |
async def listArtifacts(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Artifacts from Run
Returns a list of artifacts and associated metadata for a given run.
As a task may have many artifacts, paging may be necessary. If this
end-point returns a `continuationToken`, you should call the end-point
again with the `continuationToken` as the query-string option
`continuationToken`.
By default this end-point will list up to 1000 artifacts in a single page;
you may limit this with the query-string parameter `limit`.
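The continuation-token loop described above works the same against any paged end-point. In this sketch, `fetch_page` is a stand-in for a call such as `listArtifacts` invoked with the `continuationToken` query-string option:

```python
def list_all(fetch_page):
    """Drain a paged end-point: keep passing back `continuationToken`
    until a response omits it, accumulating the `artifacts` lists."""
    items, token = [], None
    while True:
        page = fetch_page(token)
        items.extend(page["artifacts"])
        token = page.get("continuationToken")
        if token is None:
            return items
```

The `limit` parameter only bounds the size of each page, so a loop like this is still needed to see every artifact of a run with many artifacts.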
This method gives output: ``v1/list-artifacts-response.json#``
This method is ``experimental`` | f15739:c0:m20 |
async def listProvisioners(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a list of all active provisioners
Get all active provisioners.
The term "provisioner" is taken broadly to mean anything with a provisionerId.
This does not necessarily mean there is an associated service performing any
provisioning activity.
The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 provisioners in a single
page. You may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-provisioners-response.json#``
This method is ``experimental`` | f15739:c0:m22 |
async def getProvisioner(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get an active provisioner
Get an active provisioner.
The term "provisioner" is taken broadly to mean anything with a provisionerId.
This does not necessarily mean there is an associated service performing any
provisioning activity.
This method gives output: ``v1/provisioner-response.json#``
This method is ``experimental`` | f15739:c0:m23 |
async def declareProvisioner(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Update a provisioner
Declare a provisioner, supplying some details about it.
`declareProvisioner` allows updating one or more properties of a provisioner as long as the required scopes are
possessed. For example, a request to update the `aws-provisioner-v1`
provisioner with a body `{description: 'This provisioner is great'}` would require you to have the scope
`queue:declare-provisioner:aws-provisioner-v1#description`.
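The scope pattern in the example above can be derived mechanically from the properties being updated. This sketch only builds the scope strings; it does not perform any authorization itself:

```python
def declare_provisioner_scopes(provisioner_id, update):
    """One `queue:declare-provisioner:<provisionerId>#<property>` scope is
    required per property in the update body, per the example above."""
    return sorted(
        "queue:declare-provisioner:{}#{}".format(provisioner_id, prop)
        for prop in update)
```

The analogous `declareWorkerType` scopes follow the same shape with a `<provisionerId>/<workerType>` prefix, as its own docstring shows.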
The term "provisioner" is taken broadly to mean anything with a provisionerId.
This does not necessarily mean there is an associated service performing any
provisioning activity.
This method takes input: ``v1/update-provisioner-request.json#``
This method gives output: ``v1/provisioner-response.json#``
This method is ``experimental`` | f15739:c0:m24 |
async def pendingTasks(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get Number of Pending Tasks
Get an approximate number of pending tasks for the given `provisionerId`
and `workerType`.
The underlying Azure Storage Queues only promise to give us an estimate.
Furthermore, we cache the result in memory for 20 seconds, so consumers
should by no means expect this to be an accurate number.
It is, however, a solid estimate of the number of pending tasks.
This method gives output: ``v1/pending-tasks-response.json#``
This method is ``stable`` | f15739:c0:m25 |
async def listWorkerTypes(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a list of all active worker-types
Get all active worker-types for the given provisioner.
The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 worker-types in a single
page. You may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-workertypes-response.json#``
This method is ``experimental`` | f15739:c0:m26 |
async def getWorkerType(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a worker-type
Get a worker-type from a provisioner.
This method gives output: ``v1/workertype-response.json#``
This method is ``experimental`` | f15739:c0:m27 |
async def declareWorkerType(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Update a worker-type
Declare a workerType, supplying some details about it.
`declareWorkerType` allows updating one or more properties of a worker-type as long as the required scopes are
possessed. For example, a request to update the `gecko-b-1-w2008` worker-type within the `aws-provisioner-v1`
provisioner with a body `{description: 'This worker type is great'}` would require you to have the scope
`queue:declare-worker-type:aws-provisioner-v1/gecko-b-1-w2008#description`.
This method takes input: ``v1/update-workertype-request.json#``
This method gives output: ``v1/workertype-response.json#``
This method is ``experimental`` | f15739:c0:m28 |
async def listWorkers(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a list of all active workers of a workerType
Get a list of all active workers of a workerType.
`listWorkers` allows the response to be filtered by quarantined and non-quarantined workers.
To filter the query, call the end-point with `quarantined` as a query-string option with a
true or false value.
The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 workers in a single
page. You may limit this with the query-string parameter `limit`.
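Assembling the query-string options named above (`quarantined`, `continuationToken`, `limit`) can be sketched as below; the parameter names come from the text, while the helper itself is hypothetical:

```python
def list_workers_query(quarantined=None, continuation_token=None, limit=None):
    """Build the query-string dict for listWorkers, omitting unset options.
    `quarantined` is serialized as the literal strings "true"/"false"."""
    query = {}
    if quarantined is not None:
        query["quarantined"] = "true" if quarantined else "false"
    if continuation_token is not None:
        query["continuationToken"] = continuation_token
    if limit is not None:
        query["limit"] = str(limit)
    return query
```

Omitting `quarantined` entirely returns workers in both states, which is why the helper distinguishes "unset" from `False`.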
This method gives output: ``v1/list-workers-response.json#``
This method is ``experimental`` | f15739:c0:m29 |
async def getWorker(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Get a worker-type
Get a worker from a worker-type.
This method gives output: ``v1/worker-response.json#``
This method is ``experimental`` | f15739:c0:m30 |
async def quarantineWorker(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Quarantine a worker
Quarantine a worker
This method takes input: ``v1/quarantine-worker-request.json#``
This method gives output: ``v1/worker-response.json#``
This method is ``experimental`` | f15739:c0:m31 |
async def declareWorker(self, *args, **kwargs): | return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL> | Declare a worker
Declare a worker, supplying some details about it.
`declareWorker` allows updating one or more properties of a worker as long as the required scopes are
possessed.
This method takes input: ``v1/update-worker-request.json#``
This method gives output: ``v1/worker-response.json#``
This method is ``experimental`` | f15739:c0:m32 |