async def backendStatus(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Backend Status This endpoint shows the last time the provisioner checked in. A check-in is done through the Dead Man's Snitch API at the conclusion of a provisioning iteration, and is used to tell whether the background provisioning process is still running. **Warning** this api end-point is **not stable**. This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/backend-status-response.json#`` This method is ``experimental``
f15750:c0:m13
async def ping(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Ping Server Respond without doing anything. This endpoint is used to check that the service is up. This method is ``stable``
f15750:c0:m14
async def ping(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Ping Server Respond without doing anything. This endpoint is used to check that the service is up. This method is ``stable``
f15751:c0:m0
async def purgeCache(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Purge Worker Cache Publish a purge-cache message to purge caches named `cacheName` with `provisionerId` and `workerType` in the routing-key. Workers should be listening for this message and purge caches when they see it. This method takes input: ``v1/purge-cache-request.json#`` This method is ``stable``
f15751:c0:m1
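The purge-cache message above is routed by `provisionerId` and `workerType`. As a hedged sketch only (the helper name and the leading `primary` kind segment are assumptions, not the service's documented key format), a worker could assemble the binding key for its own messages like this:

```python
def purge_cache_routing_key(provisioner_id, worker_type):
    """Join the identifiers into a dot-separated AMQP topic key.

    The 'primary' prefix mirrors the routingKeyKind convention used by
    other exchanges in this document; treat it as an assumption here.
    """
    return ".".join(["primary", provisioner_id, worker_type])
```

A worker would bind its queue with this key so it only sees purge messages for its own `provisionerId`/`workerType` pair.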
async def allPurgeRequests(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
All Open Purge Requests This is useful mostly for administrators to view the set of open purge requests. It should not be used by workers; they should use the purgeRequests endpoint that is specific to their workerType and provisionerId. This method gives output: ``v1/all-purge-cache-request-list.json#`` This method is ``stable``
f15751:c0:m2
async def purgeRequests(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Open Purge Requests for a provisionerId/workerType pair Lists the caches that should be purged if they are from before a certain time. This is safe to be used in automation from workers. This method gives output: ``v1/purge-cache-request-list.json#`` This method is ``stable``
f15751:c0:m3
async def ping(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Ping Server Respond without doing anything. This endpoint is used to check that the service is up. This method is ``stable``
f15752:c0:m0
async def email(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT:email>"], *args, **kwargs)
Send an Email Send an email to `address`. The content is markdown and will be rendered to HTML, but both the HTML and raw markdown text will be sent in the email. If a link is included, it will be rendered to a nice button in the HTML version of the email. This method takes input: ``v1/email-request.json#`` This method is ``experimental``
f15752:c0:m1
async def pulse(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Publish a Pulse Message Publish a message on pulse with the given `routingKey`. This method takes input: ``v1/pulse-request.json#`` This method is ``experimental``
f15752:c0:m2
async def irc(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Post IRC Message Post a message on IRC to a specific channel or user, or a specific user on a specific channel. Success of this API method does not imply the message was successfully posted. This API method merely inserts the IRC message into a queue that will be processed by a background process. This allows us to re-send the message in the face of connection issues. However, if the user isn't online the message will be dropped without error. We may improve this behavior in the future; for now, just keep in mind that IRC is a best-effort service. This method takes input: ``v1/irc-request.json#`` This method is ``experimental``
f15752:c0:m3
async def addDenylistAddress(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Denylist Given Address Add the given address to the notification denylist. The address can be any of the three supported address types, namely pulse, email, or IRC (user or channel). Addresses in the denylist will be ignored by the notification service. This method takes input: ``v1/notification-address.json#`` This method is ``experimental``
f15752:c0:m4
async def deleteDenylistAddress(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Delete Denylisted Address Delete the specified address from the notification denylist. This method takes input: ``v1/notification-address.json#`` This method is ``experimental``
f15752:c0:m5
async def list(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT:list>"], *args, **kwargs)
List Denylisted Notifications Lists all the denylisted addresses. By default this end-point will try to return up to 1000 addresses in one request. But it **may return fewer**, even if more addresses are available. It may also return a `continuationToken` even though there are no more results. However, you can only be sure to have seen all results if you keep calling `list` with the last `continuationToken` until you get a result without a `continuationToken`. If you are not interested in listing all the addresses at once, you may use the query-string option `limit` to return fewer. This method gives output: ``v1/notification-address-list.json#`` This method is ``experimental``
f15752:c0:m6
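The `continuationToken` protocol described above can be sketched as a generic loop. Here `fetch_page` is a hypothetical stand-in for a `list` call with query-string parameters, and the `addresses` result key is an assumption based on the response schema name:

```python
def fetch_all(fetch_page, limit=1000):
    """Collect every item by following continuationToken until it is absent.

    fetch_page(continuationToken=None, limit=...) is expected to return a
    dict with an 'addresses' list and, possibly, a 'continuationToken'.
    """
    items = []
    token = None
    while True:
        page = fetch_page(continuationToken=token, limit=limit)
        items.extend(page.get("addresses", []))
        token = page.get("continuationToken")
        if not token:
            # No token in the response: only now are we sure we saw everything.
            return items
```

Note the loop must keep calling until a response arrives *without* a token; a token may be returned even when no further results exist.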
async def ping(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Ping Server Respond without doing anything. This endpoint is used to check that the service is up. This method is ``stable``
f15753:c0:m0
async def oidcCredentials(self, *args, **kwargs):
    return await self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)
Get Taskcluster credentials given a suitable `access_token` Given an OIDC `access_token` from a trusted OpenID provider, return a set of Taskcluster credentials for use on behalf of the identified user. This method is typically not called with a Taskcluster client library and does not accept Hawk credentials. The `access_token` should be given in an `Authorization` header: ``` Authorization: Bearer abc.xyz ``` The `access_token` is first verified against the named :provider, then passed to the provider's APIBuilder to retrieve a user profile. That profile is then used to generate Taskcluster credentials appropriate to the user. Note that the resulting credentials may or may not include a `certificate` property; callers should be prepared for either alternative. The given credentials will expire in a relatively short time. Callers should monitor this expiration and refresh the credentials, by calling this endpoint again, once they have expired. This method gives output: ``v1/oidc-credentials-response.json#`` This method is ``experimental``
f15753:c0:m1
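The endpoint above authenticates with a plain `Authorization: Bearer` header rather than Hawk, and the returned credentials are short-lived, so callers need both pieces. A minimal sketch of both halves; the five-minute refresh margin and the helper names are assumptions, not part of the API:

```python
import datetime

def bearer_headers(access_token):
    """Headers for an oidcCredentials call: a plain Bearer token,
    not a Hawk signature."""
    return {"Authorization": "Bearer " + access_token}

def needs_refresh(expires, now=None, margin_seconds=300):
    """True when the short-lived credentials are within margin_seconds
    of expiring, so the caller should hit the endpoint again."""
    now = now or datetime.datetime.utcnow()
    return (expires - now).total_seconds() < margin_seconds
```

A caller would check `needs_refresh(...)` before each batch of API calls and re-fetch credentials when it returns `True`.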
def createSession(*args, **kwargs):
    return asyncutils.createSession(*args, **kwargs)
Create a new aiohttp session. This passes through all positional and keyword arguments to the asyncutils.createSession() constructor. In client code it is preferred to do something like ``async with createSession(...) as session: queue = Queue(session=session); await queue.ping()`` or ``async with createSession(...) as session: async with Queue(session=session) as queue: await queue.ping()``.
f15754:m0
def _createSession(self):
    return None
If self.session isn't set, don't create one implicitly. To avoid `session.close()` warnings at the end of tasks, and various strongly-worded aiohttp warnings about using `async with`, we set `self.session` to `None` if no session is passed in to `__init__`. The `asyncutils` functions will then create a new session per call.
f15754:c0:m1
async def _makeApiCall(self, entry, *args, **kwargs):
    x = self._processArgs(entry, *args, **kwargs)
    routeParams, payload, query, paginationHandler, paginationLimit = x
    route = self._subArgsInRoute(entry, routeParams)
    if paginationLimit and '<STR_LIT>' in entry.get('<STR_LIT>', []):
        query['<STR_LIT>'] = paginationLimit
    if query:
        _route = route + '<STR_LIT:?>' + urllib.parse.urlencode(query)
    else:
        _route = route
    response = await self._makeHttpRequest(entry['<STR_LIT>'], _route, payload)
    if paginationHandler:
        paginationHandler(response)
        while response.get('<STR_LIT>'):
            query['<STR_LIT>'] = response['<STR_LIT>']
            _route = route + '<STR_LIT:?>' + urllib.parse.urlencode(query)
            response = await self._makeHttpRequest(entry['<STR_LIT>'], _route, payload)
            paginationHandler(response)
    else:
        return response
This function dispatches a call to the appropriate handler for a given API Reference entry.
f15754:c0:m2
async def _makeHttpRequest(self, method, route, payload):
    url = self._constructUrl(route)
    log.debug('<STR_LIT>', url)
    hawkExt = self.makeHawkExt()
    if payload is not None:
        payload = utils.dumpJson(payload)
    retry = -<NUM_LIT:1>
    retries = self.options['<STR_LIT>']
    while retry < retries:
        retry += <NUM_LIT:1>
        if retry > <NUM_LIT:0>:
            snooze = float(retry * retry) / <NUM_LIT>
            log.info('<STR_LIT>', snooze)
            await asyncio.sleep(utils.calculateSleepTime(retry))
        if self._hasCredentials():
            sender = mohawk.Sender(
                credentials={
                    '<STR_LIT:id>': self.options['<STR_LIT>']['<STR_LIT>'],
                    '<STR_LIT:key>': self.options['<STR_LIT>']['<STR_LIT>'],
                    '<STR_LIT>': '<STR_LIT>',
                },
                ext=hawkExt if hawkExt else {},
                url=url,
                content=payload if payload else '<STR_LIT>',
                content_type='<STR_LIT:application/json>' if payload else '<STR_LIT>',
                method=method,
            )
            headers = {'<STR_LIT>': sender.request_header}
        else:
            log.debug('<STR_LIT>')
            headers = {}
        if payload:
            headers['<STR_LIT:Content-Type>'] = '<STR_LIT:application/json>'
        log.debug('<STR_LIT>', retry)
        try:
            response = await asyncutils.makeSingleHttpRequest(
                method, url, payload, headers, session=self.session
            )
        except aiohttp.ClientError as rerr:
            if retry < retries:
                log.warn('<STR_LIT>' % rerr)
                continue
            raise exceptions.TaskclusterConnectionError(
                "<STR_LIT>",
                superExc=rerr
            )
        status = response.status
        if status == <NUM_LIT>:
            return None
        if <NUM_LIT> <= status and status < <NUM_LIT> and retry < retries:
            log.warn('<STR_LIT>' % status)
            continue
        if status < <NUM_LIT:200> or status >= <NUM_LIT>:
            data = {}
            try:
                data = await response.json()
            except Exception:
                pass
            message = "<STR_LIT>"
            if isinstance(data, dict):
                message = data.get('<STR_LIT:message>')
            else:
                if status == <NUM_LIT>:
                    message = "<STR_LIT>"
                elif status == <NUM_LIT>:
                    message = "<STR_LIT>"
                else:
                    message = "<STR_LIT>" % (str(status), str(data)[:<NUM_LIT>])
            if status == <NUM_LIT>:
                raise exceptions.TaskclusterAuthFailure(
                    message,
                    status_code=status,
                    body=data,
                    superExc=None
                )
            raise exceptions.TaskclusterRestFailure(
                message,
                status_code=status,
                body=data,
                superExc=None
            )
        try:
            await response.release()
            return await response.json()
        except (ValueError, aiohttp.client_exceptions.ContentTypeError):
            return {"<STR_LIT>": response}
    assert False, "<STR_LIT>"
Make an HTTP request to the API endpoint. This method wraps the retry-on-failure logic and delegates the actual HTTP request to another method.
f15754:c0:m3
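The retry loop above sleeps between attempts with a delay derived from the retry count (the body hints at a quadratic `retry * retry` term). A sketch of such a backoff with jitter; the base, cap, and jitter constants here are assumptions, not the library's actual values:

```python
import random

def sleep_time(retry, base=0.1, max_delay=30.0, jitter=0.25):
    """Quadratic backoff: delay grows with retry**2, gets up to
    `jitter` fraction of random padding, and is capped at max_delay.
    `retry` counts from 1 (the first re-attempt)."""
    delay = base * retry * retry
    delay += delay * jitter * random.random()  # spread out thundering herds
    return min(delay, max_delay)
```

The jitter keeps many clients that failed at the same moment from retrying in lockstep; the cap bounds the worst-case wait.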
def clientCreated(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': True,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Client Created Messages Message that a new client has been created. This exchange outputs: ``v1/client-message.json#`` This exchange takes the following keys: * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
f15755:c0:m0
def clientUpdated(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': True,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Client Updated Messages Message that a client has been updated. This exchange outputs: ``v1/client-message.json#`` This exchange takes the following keys: * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
f15755:c0:m1
def clientDeleted(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': True,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Client Deleted Messages Message that a client has been deleted. This exchange outputs: ``v1/client-message.json#`` This exchange takes the following keys: * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
f15755:c0:m2
def roleCreated(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': True,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Role Created Messages Message that a new role has been created. This exchange outputs: ``v1/role-message.json#`` This exchange takes the following keys: * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
f15755:c0:m3
def roleUpdated(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': True,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Role Updated Messages Message that a role has been updated. This exchange outputs: ``v1/role-message.json#`` This exchange takes the following keys: * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
f15755:c0:m4
def roleDeleted(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': True,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Role Deleted Messages Message that a role has been deleted. This exchange outputs: ``v1/role-message.json#`` This exchange takes the following keys: * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
f15755:c0:m5
def pullRequest(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': '<STR_LIT>',
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT:action>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
GitHub Pull Request Event When a GitHub pull request event is posted it will be broadcast on this exchange with the designated `organization` and `repository` in the routing-key along with event specific metadata in the payload. This exchange outputs: ``v1/github-pull-request-message.json#`` This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key. (required) * organization: The GitHub `organization` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped. (required) * repository: The GitHub `repository` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped. (required) * action: The GitHub `action` which triggered the event. For possible values, see the payload's `action` property. (required)
f15756:c0:m0
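The routing-key escaping described above (periods become `%`, and other special characters aside from `-` and `_` are stripped) can be sketched as a small helper. This is an illustration of the documented rule, not the service's actual implementation:

```python
import re

def escape_github_name(name):
    """Escape an organization or repository name for the routing key:
    '.' becomes '%', and anything that is not alphanumeric, '-', '_',
    or '%' is stripped."""
    name = name.replace(".", "%")
    return re.sub(r"[^A-Za-z0-9%_-]", "", name)
```

A consumer binding to this exchange must apply the same escaping to the org/repo it cares about, or its binding key will never match.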
def push(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': '<STR_LIT>',
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
GitHub push Event When a GitHub push event is posted it will be broadcast on this exchange with the designated `organization` and `repository` in the routing-key along with event specific metadata in the payload. This exchange outputs: ``v1/github-push-message.json#`` This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key. (required) * organization: The GitHub `organization` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped. (required) * repository: The GitHub `repository` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped. (required)
f15756:c0:m1
def release(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': '<STR_LIT>',
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
GitHub release Event When a GitHub release event is posted it will be broadcast on this exchange with the designated `organization` and `repository` in the routing-key along with event specific metadata in the payload. This exchange outputs: ``v1/github-release-message.json#`` This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key. (required) * organization: The GitHub `organization` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped. (required) * repository: The GitHub `repository` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped. (required)
f15756:c0:m2
def taskGroupCreationRequested(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': '<STR_LIT>',
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Task Group Creation Requested tc-gh has requested that the Queue service create all the tasks in a group. This message signals that the taskCreate API has been called for every task in the task group, for this particular repo and this particular organization; it is currently used for creating initial status indicators in the GitHub UI using the Statuses API. This exchange can also be bound to RabbitMQ queues by custom routes; for that, pass in the array of routes as a second argument to the publish method. Currently, we use the statuses routes to bind the handler that creates the initial status. This exchange outputs: ``v1/task-group-creation-requested.json#`` This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `"primary"` for the formalized routing key. (required) * organization: The GitHub `organization` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped. (required) * repository: The GitHub `repository` which had an event. All periods have been replaced by % - such that foo.bar becomes foo%bar - and all other special characters aside from - and _ have been stripped. (required)
f15756:c0:m3
def taskDefined(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': '<STR_LIT>',
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': True,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Task Defined Messages When a task is created or just defined a message is posted to this exchange. This message exchange is mainly useful when tasks are scheduled by a scheduler that uses `defineTask`, as this does not make the task `pending` and thus no `taskPending` message is published. Please note that messages are also published on this exchange if defined using `createTask`. This exchange outputs: ``v1/task-defined-message.json#`` This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required) * taskId: `taskId` for the task this message concerns (required) * runId: `runId` of latest run for the task, `_` if no run exists for the task. * workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task. * workerId: `workerId` of latest run for the task, `_` if no run exists for the task. * provisionerId: `provisionerId` this task is targeted at. (required) * workerType: `workerType` this task must run on. (required) * schedulerId: `schedulerId` this task was created by. (required) * taskGroupId: `taskGroupId` this task was created in. (required) * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
f15757:c0:m0
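Routing keys on this exchange fill absent fields (such as `runId` before any run exists) with `_`, and consumers are told to match the trailing `reserved` entry with `#`. A small AMQP-style topic matcher, sketched here to show how such binding patterns behave (this is an illustration, not the library's code):

```python
def topic_matches(pattern, key):
    """Minimal AMQP topic matching: '*' matches exactly one word,
    and a trailing '#' matches zero or more remaining words.
    (Only a trailing '#' is handled, which is the common case here.)"""
    p = pattern.split(".")
    k = key.split(".")
    if p and p[-1] == "#":
        p = p[:-1]          # drop the wildcard tail...
        k = k[:len(p)]      # ...and ignore everything it would absorb
    if len(p) != len(k):
        return False
    return all(pw in ("*", kw) for pw, kw in zip(p, k))
```

For example, a binding like `primary.<taskId>.#` sees every message for one task regardless of run, worker, or the reserved tail.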
def taskPending(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': '<STR_LIT>',
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': True,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Task Pending Messages When a task becomes `pending` a message is posted to this exchange. This is useful for workers who don't want to constantly poll the queue for new tasks. The queue remains the authority for task states and claims, but using this exchange workers should be able to distribute work efficiently, and they can reduce their polling interval significantly without affecting general responsiveness. This exchange outputs: ``v1/task-pending-message.json#`` This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required) * taskId: `taskId` for the task this message concerns (required) * runId: `runId` of latest run for the task, `_` if no run exists for the task. (required) * workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task. * workerId: `workerId` of latest run for the task, `_` if no run exists for the task. * provisionerId: `provisionerId` this task is targeted at. (required) * workerType: `workerType` this task must run on. (required) * schedulerId: `schedulerId` this task was created by. (required) * taskGroupId: `taskGroupId` this task was created in. (required) * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
f15757:c0:m1
def taskRunning(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': '<STR_LIT>',
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': True,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Task Running Messages Whenever a task is claimed by a worker, a run is started on the worker and a message is posted on this exchange. This exchange outputs: ``v1/task-running-message.json#`` This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required) * taskId: `taskId` for the task this message concerns (required) * runId: `runId` of latest run for the task, `_` if no run exists for the task. (required) * workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task. (required) * workerId: `workerId` of latest run for the task, `_` if no run exists for the task. (required) * provisionerId: `provisionerId` this task is targeted at. (required) * workerType: `workerType` this task must run on. (required) * schedulerId: `schedulerId` this task was created by. (required) * taskGroupId: `taskGroupId` this task was created in. (required) * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
f15757:c0:m2
def artifactCreated(self, *args, **kwargs):
    ref = {
        '<STR_LIT>': '<STR_LIT>',
        '<STR_LIT:name>': '<STR_LIT>',
        '<STR_LIT>': [
            {
                '<STR_LIT>': '<STR_LIT>',
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': False,
                '<STR_LIT:name>': '<STR_LIT>',
            },
            {
                '<STR_LIT>': True,
                '<STR_LIT:name>': '<STR_LIT>',
            },
        ],
        '<STR_LIT>': '<STR_LIT>',
    }
    return self._makeTopicExchange(ref, *args, **kwargs)
Artifact Creation Messages Whenever the `createArtifact` end-point is called, the queue will create a record of the artifact and post a message on this exchange. All of this happens before the queue returns a signed URL for the caller to upload the actual artifact with (depending on `storageType`). This means that the actual artifact is rarely available when this message is posted, but it is not unreasonable to assume that the artifact will become available at some point later. Most signatures will expire in 30 minutes or so, forcing the uploader to call `createArtifact` with the same payload again in order to continue uploading the artifact. However, in most cases (especially for small artifacts) it's very reasonable to assume the artifact will be available within a few minutes. This property means that this exchange is mostly useful for tools monitoring task evaluation. One could also use it to count the number of artifacts per task, or to _index_ artifacts, though in most cases it'll be smarter to index artifacts after the task in question has completed successfully. This exchange outputs: ``v1/artifact-created-message.json#`` This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required) * taskId: `taskId` for the task this message concerns (required) * runId: `runId` of latest run for the task, `_` if no run exists for the task. (required) * workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task. (required) * workerId: `workerId` of latest run for the task, `_` if no run exists for the task. (required) * provisionerId: `provisionerId` this task is targeted at. (required) * workerType: `workerType` this task must run on. (required) * schedulerId: `schedulerId` this task was created by. (required) * taskGroupId: `taskGroupId` this task was created in. (required) * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as automatically done by our tooling if not specified.
f15757:c0:m3
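The routing keys listed above combine into a dotted AMQP topic pattern when binding to an exchange: unspecified fields match anything with `*`, and the trailing reserved slot is matched with `#`. A minimal sketch of building such a pattern; the helper and its field list follow the docstring's order, but the function itself is hypothetical, not part of the client:

```python
# Build an AMQP topic routing-key pattern for the artifact-created exchange.
# Field order follows the docstring; fields not given match anything ("*").
ROUTING_FIELDS = [
    "routingKeyKind", "taskId", "runId", "workerGroup", "workerId",
    "provisionerId", "workerType", "schedulerId", "taskGroupId",
]

def routing_key_pattern(**given):
    parts = [given.get(f, "*") for f in ROUTING_FIELDS]
    # routingKeyKind is always 'primary' for the formalized routing key
    parts[0] = given.get("routingKeyKind", "primary")
    parts.append("#")  # reserved slot: always match future entries
    return ".".join(parts)

pattern = routing_key_pattern(taskId="abc123", workerType="gecko-t-linux")
```

Binding with the resulting pattern receives every artifact-created message for the given `taskId` and `workerType`, regardless of the other routing-key fields.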
def taskCompleted(self, *args, **kwargs):
ref = {<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>'<STR_LIT>': [<EOL>{<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': True,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>],<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>}<EOL>return self._makeTopicExchange(ref, *args, **kwargs)<EOL>
Task Completed Messages When a task is successfully completed by a worker, a message is posted to this exchange. This message is routed using the `runId`, `workerGroup` and `workerId` that completed the task. But information about additional runs is also available from the task status structure. This exchange outputs: ``v1/task-completed-message.json#``This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required) * taskId: `taskId` for the task this message concerns (required) * runId: `runId` of latest run for the task, `_` if no run exists for the task. (required) * workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task. (required) * workerId: `workerId` of latest run for the task, `_` if no run exists for the task. (required) * provisionerId: `provisionerId` this task is targeted at. (required) * workerType: `workerType` this task must run on. (required) * schedulerId: `schedulerId` this task was created by. (required) * taskGroupId: `taskGroupId` this task was created in. (required) * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
f15757:c0:m4
def taskFailed(self, *args, **kwargs):
ref = {<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>'<STR_LIT>': [<EOL>{<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': True,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>],<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>}<EOL>return self._makeTopicExchange(ref, *args, **kwargs)<EOL>
Task Failed Messages When a task ran but failed to complete successfully, a message is posted to this exchange. That is, the worker ran the task-specific code, but that code exited non-zero. This exchange outputs: ``v1/task-failed-message.json#``This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required) * taskId: `taskId` for the task this message concerns (required) * runId: `runId` of latest run for the task, `_` if no run exists for the task. * workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task. * workerId: `workerId` of latest run for the task, `_` if no run exists for the task. * provisionerId: `provisionerId` this task is targeted at. (required) * workerType: `workerType` this task must run on. (required) * schedulerId: `schedulerId` this task was created by. (required) * taskGroupId: `taskGroupId` this task was created in. (required) * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
f15757:c0:m5
def taskException(self, *args, **kwargs):
ref = {<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>'<STR_LIT>': [<EOL>{<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': True,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>],<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>}<EOL>return self._makeTopicExchange(ref, *args, **kwargs)<EOL>
Task Exception Messages Whenever Taskcluster fails to run a task, a message is posted to this exchange. This happens if the task isn't completed before its `deadline`, all retries failed (i.e. workers stopped responding), the task was canceled by another entity, or the task carried a malformed payload. The specific _reason_ is evident from the task status structure; refer to the `reasonResolved` property for the last run. This exchange outputs: ``v1/task-exception-message.json#``This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required) * taskId: `taskId` for the task this message concerns (required) * runId: `runId` of latest run for the task, `_` if no run exists for the task. * workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task. * workerId: `workerId` of latest run for the task, `_` if no run exists for the task. * provisionerId: `provisionerId` this task is targeted at. (required) * workerType: `workerType` this task must run on. (required) * schedulerId: `schedulerId` this task was created by. (required) * taskGroupId: `taskGroupId` this task was created in. (required) * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
f15757:c0:m6
def taskGroupResolved(self, *args, **kwargs):
ref = {<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>'<STR_LIT>': [<EOL>{<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': True,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>],<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>}<EOL>return self._makeTopicExchange(ref, *args, **kwargs)<EOL>
Task Group Resolved Messages A message is published on task-group-resolved whenever all submitted tasks (whether scheduled or unscheduled) for a given task group have been resolved, regardless of whether they resolved as successful or not. A task group may be resolved multiple times, since new tasks may be submitted against an already resolved task group. This exchange outputs: ``v1/task-group-resolved.json#``This exchange takes the following keys: * routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required) * taskGroupId: `taskGroupId` for the task-group this message concerns (required) * schedulerId: `schedulerId` for the task-group this message concerns (required) * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
f15757:c0:m7
def ping(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Ping Server Respond without doing anything. This endpoint is used to check that the service is up. This method is ``stable``
f15758:c0:m0
def listHookGroups(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
List hook groups This endpoint will return a list of all hook groups with at least one hook. This method gives output: ``v1/list-hook-groups-response.json#`` This method is ``stable``
f15758:c0:m1
def listHooks(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
List hooks in a given group This endpoint will return a list of all the hook definitions within a given hook group. This method gives output: ``v1/list-hooks-response.json#`` This method is ``stable``
f15758:c0:m2
def hook(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Get hook definition This endpoint will return the hook definition for the given `hookGroupId` and `hookId`. This method gives output: ``v1/hook-definition.json#`` This method is ``stable``
f15758:c0:m3
def getHookStatus(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Get hook status This endpoint will return the current status of the hook. This represents a snapshot in time and may vary from one call to the next. This method is deprecated in favor of listLastFires. This method gives output: ``v1/hook-status.json#`` This method is ``deprecated``
f15758:c0:m4
def createHook(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Create a hook This endpoint will create a new hook. The caller's credentials must include the role that will be used to create the task. That role must satisfy task.scopes as well as the necessary scopes to add the task to the queue. This method takes input: ``v1/create-hook-request.json#`` This method gives output: ``v1/hook-definition.json#`` This method is ``stable``
f15758:c0:m5
def updateHook(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Update a hook This endpoint will update an existing hook. All fields except `hookGroupId` and `hookId` can be modified. This method takes input: ``v1/create-hook-request.json#`` This method gives output: ``v1/hook-definition.json#`` This method is ``stable``
f15758:c0:m6
def removeHook(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Delete a hook This endpoint will remove a hook definition. This method is ``stable``
f15758:c0:m7
def triggerHook(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Trigger a hook This endpoint will trigger the creation of a task from a hook definition. The HTTP payload must match the hook's `triggerSchema`. If it does, it is provided as the `payload` property of the JSON-e context used to render the task template. This method takes input: ``v1/trigger-hook.json#`` This method gives output: ``v1/trigger-hook-response.json#`` This method is ``stable``
f15758:c0:m8
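Before calling `triggerHook`, a client can pre-check its payload against the hook's `triggerSchema` to fail fast. A toy sketch of such a pre-flight check; this minimal validator covers only required keys and string types, and is not a substitute for the full JSON-schema validation the service performs:

```python
# Toy pre-flight check mirroring a hook's `triggerSchema` (required keys and
# string-typed properties only -- an assumption-laden sketch, not real
# JSON-schema validation).
def check_trigger_payload(payload, schema):
    """Return a list of problems; an empty list means the payload looks OK."""
    problems = []
    for key in schema.get("required", []):
        if key not in payload:
            problems.append("missing required property: %s" % key)
    for key, spec in schema.get("properties", {}).items():
        if key in payload and spec.get("type") == "string" \
                and not isinstance(payload[key], str):
            problems.append("property %s should be a string" % key)
    return problems

schema = {"required": ["branch"], "properties": {"branch": {"type": "string"}}}
```

A payload such as `{"branch": "main"}` passes; a payload missing `branch`, or supplying it as a non-string, is rejected locally before any HTTP round-trip.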
def getTriggerToken(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Get a trigger token Retrieve a unique secret token for triggering the specified hook. This token can be deactivated with `resetTriggerToken`. This method gives output: ``v1/trigger-token-response.json#`` This method is ``stable``
f15758:c0:m9
def resetTriggerToken(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Reset a trigger token Reset the token for triggering a given hook. This invalidates any token previously issued via `getTriggerToken` and replaces it with a new token. This method gives output: ``v1/trigger-token-response.json#`` This method is ``stable``
f15758:c0:m10
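The trigger-token lifecycle described by `getTriggerToken` and `resetTriggerToken` can be sketched with an in-memory model: one current token per hook, and a reset replaces it so the old token stops working. Method names mirror the API, but the storage model here is an assumption for illustration only:

```python
import secrets as pysecrets  # aliased to avoid clashing with the Secrets service

# In-memory sketch of the trigger-token lifecycle: getTriggerToken hands out
# the current token, resetTriggerToken replaces it, and only the current
# token is accepted when triggering.
class TriggerTokens:
    def __init__(self):
        self._tokens = {}

    def get_trigger_token(self, hook_id):
        # Stable until reset: repeated calls return the same token.
        return self._tokens.setdefault(hook_id, pysecrets.token_hex(16))

    def reset_trigger_token(self, hook_id):
        self._tokens[hook_id] = pysecrets.token_hex(16)
        return self._tokens[hook_id]

    def is_valid(self, hook_id, token):
        return self._tokens.get(hook_id) == token
```

After a reset, any party holding the old token (e.g. a webhook configured in a third-party service) must be updated with the new one.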
def triggerHookWithToken(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Trigger a hook with a token This endpoint triggers a defined hook with a valid token. The HTTP payload must match the hook's `triggerSchema`. If it does, it is provided as the `payload` property of the JSON-e context used to render the task template. This method takes input: ``v1/trigger-hook.json#`` This method gives output: ``v1/trigger-hook-response.json#`` This method is ``stable``
f15758:c0:m11
def listLastFires(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Get information about recent hook fires This endpoint will return information about the last few times this hook has been fired, including whether the hook was fired successfully or not. This method gives output: ``v1/list-lastFires-response.json#`` This method is ``experimental``
f15758:c0:m12
def ping(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Ping Server Respond without doing anything. This endpoint is used to check that the service is up. This method is ``stable``
f15759:c0:m0
def listWorkerTypes(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
See the list of worker types which are known to be managed This method is only for debugging the ec2-manager This method gives output: ``v1/list-worker-types.json#`` This method is ``experimental``
f15759:c0:m1
def runInstance(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Run an instance Request an instance of a worker type This method takes input: ``v1/run-instance-request.json#`` This method is ``experimental``
f15759:c0:m2
def terminateWorkerType(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Terminate all resources from a worker type Terminate all instances for this worker type This method is ``experimental``
f15759:c0:m3
def workerTypeStats(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Look up the resource stats for a workerType Return an object which has a generic state description. This only contains counts of instances This method gives output: ``v1/worker-type-resources.json#`` This method is ``experimental``
f15759:c0:m4
def workerTypeHealth(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Look up the resource health for a workerType Return a view of the health of a given worker type This method gives output: ``v1/health.json#`` This method is ``experimental``
f15759:c0:m5
def workerTypeErrors(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Look up the most recent errors of a workerType Return a list of the most recent errors encountered by a worker type This method gives output: ``v1/errors.json#`` This method is ``experimental``
f15759:c0:m6
def workerTypeState(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Look up the resource state for a workerType Return state information for a given worker type This method gives output: ``v1/worker-type-state.json#`` This method is ``experimental``
f15759:c0:m7
def ensureKeyPair(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Ensure a KeyPair for a given worker type exists Idempotently ensure that a keypair of a given name exists This method takes input: ``v1/create-key-pair.json#`` This method is ``experimental``
f15759:c0:m8
def removeKeyPair(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Ensure a KeyPair for a given worker type does not exist Ensure that a keypair of a given name does not exist. This method is ``experimental``
f15759:c0:m9
def terminateInstance(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Terminate an instance Terminate an instance in a specified region This method is ``experimental``
f15759:c0:m10
def getPrices(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Request prices for EC2 Return a list of possible prices for EC2 This method gives output: ``v1/prices.json#`` This method is ``experimental``
f15759:c0:m11
def getSpecificPrices(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Request prices for EC2 Return a list of possible prices for EC2 This method takes input: ``v1/prices-request.json#`` This method gives output: ``v1/prices.json#`` This method is ``experimental``
f15759:c0:m12
def getHealth(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Get EC2 account health metrics Give some basic stats on the health of our EC2 account This method gives output: ``v1/health.json#`` This method is ``experimental``
f15759:c0:m13
def getRecentErrors(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Look up the most recent errors in the provisioner across all worker types Return a list of recent errors encountered This method gives output: ``v1/errors.json#`` This method is ``experimental``
f15759:c0:m14
def regions(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
See the list of regions managed by this ec2-manager This method is only for debugging the ec2-manager This method is ``experimental``
f15759:c0:m15
def amiUsage(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
See the list of AMIs and their usage List AMIs and their usage by returning a list of objects in the form: { region: string, volumetype: string, lastused: timestamp } This method is ``experimental``
f15759:c0:m16
def ebsUsage(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
See the current EBS volume usage list Lists current EBS volume usage by returning a list of objects that are uniquely defined by {region, volumetype, state} in the form: { region: string, volumetype: string, state: string, totalcount: integer, totalgb: integer, touched: timestamp (last time that information was updated), } This method is ``experimental``
f15759:c0:m17
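The `ebsUsage` output shape above is an aggregation: one row per unique `(region, volumetype, state)` triple, with counts and total gigabytes rolled up. A sketch of that roll-up over hypothetical raw volume records; only the output field names come from the docstring, the input record shape is an assumption:

```python
from collections import defaultdict
from datetime import datetime, timezone

# Roll raw volume records up into one row per unique (region, volumetype,
# state), matching the shape the ebsUsage docstring describes.
def summarize_ebs(volumes):
    rows = defaultdict(lambda: {"totalcount": 0, "totalgb": 0})
    for v in volumes:
        key = (v["region"], v["volumetype"], v["state"])
        rows[key]["totalcount"] += 1
        rows[key]["totalgb"] += v["gb"]
    touched = datetime.now(timezone.utc).isoformat()  # last update time
    return [
        {"region": r, "volumetype": t, "state": s, "touched": touched, **agg}
        for (r, t, s), agg in rows.items()
    ]
```

Two in-use `gp2` volumes in the same region collapse into a single row with `totalcount` 2 and their sizes summed into `totalgb`.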
def dbpoolStats(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Statistics on the Database client pool This method is only for debugging the ec2-manager This method is ``experimental``
f15759:c0:m18
def allState(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
List out the entire internal state This method is only for debugging the ec2-manager This method is ``experimental``
f15759:c0:m19
def sqsStats(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Statistics on the SQS queues This method is only for debugging the ec2-manager This method is ``experimental``
f15759:c0:m20
def purgeQueues(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Purge the SQS queues This method is only for debugging the ec2-manager This method is ``experimental``
f15759:c0:m21
def jobs(self, *args, **kwargs):
ref = {<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>'<STR_LIT>': [<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': False,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>{<EOL>'<STR_LIT>': True,<EOL>'<STR_LIT:name>': '<STR_LIT>',<EOL>},<EOL>],<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>}<EOL>return self._makeTopicExchange(ref, *args, **kwargs)<EOL>
Job Messages When a task run is scheduled or resolved, a message is posted to this exchange in a Treeherder consumable format. This exchange outputs: ``v1/pulse-job.json#``This exchange takes the following keys: * destination: destination (required) * project: project (required) * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
f15760:c0:m0
def ping(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Ping Server Respond without doing anything. This endpoint is used to check that the service is up. This method is ``stable``
f15762:c0:m0
def set(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Set Secret Set the secret associated with some key. If the secret already exists, it is updated instead. This method takes input: ``v1/secret.json#`` This method is ``stable``
f15762:c0:m1
def remove(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Delete Secret Delete the secret associated with some key. This method is ``stable``
f15762:c0:m2
def get(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Read Secret Read the secret associated with some key. If the secret has recently expired, the response code 410 is returned. If the caller lacks the scope necessary to get the secret, the call will fail with a 403 code regardless of whether the secret exists. This method gives output: ``v1/secret.json#`` This method is ``stable``
f15762:c0:m3
def list(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT:list>"], *args, **kwargs)<EOL>
List Secrets List the names of all secrets. By default this end-point will try to return up to 1000 secret names in one request. But it **may return fewer**, even if more secrets are available. It may also return a `continuationToken` even though there are no more results. However, you can only be sure to have seen all results if you keep calling `list` with the last `continuationToken` until you get a result without a `continuationToken`. If you are not interested in listing all the secrets at once, you may use the query-string option `limit` to return fewer. This method gives output: ``v1/secret-list.json#`` This method is ``stable``
f15762:c0:m4
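The continuation-token protocol the `list` docstring describes is a simple loop: keep passing the last `continuationToken` until a response arrives without one. A sketch of that loop, assuming the response carries names under a `secrets` key; `fetch_page` is a stand-in for the actual API call:

```python
# Generic continuation-token pagination loop: pass the last continuationToken
# back on each call until the response no longer contains one.
def list_all_secret_names(fetch_page):
    names, token = [], None
    while True:
        resp = fetch_page(continuationToken=token)
        names.extend(resp["secrets"])          # assumed response field name
        token = resp.get("continuationToken")  # absent on the final page
        if token is None:
            return names
```

The same pattern applies to any paginated Taskcluster listing endpoint that returns a `continuationToken`.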
def listWorkerTypeSummaries(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
List worker types with details Return a list of worker types, including some summary information about current capacity for each. While this list includes all defined worker types, there may be running EC2 instances for deleted worker types that are not included here. The list is unordered. This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/list-worker-types-summaries-response.json#`` This method is ``stable``
f15763:c0:m0
def createWorkerType(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Create new Worker Type Create a worker type. A worker type contains all the configuration needed for the provisioner to manage the instances. Each worker type knows which regions and which instance types are allowed for that worker type. Remember that Capacity is the number of concurrent tasks that can be run on a given EC2 resource and that Utility is the relative performance rate between different instance types. There is no way to configure different regions to have different sets of instance types, so ensure that all instance types are available in all regions. This function is idempotent. Once a worker type is in the provisioner, a background process will begin creating instances for it based on its capacity bounds and its pending task count from the Queue. It is the worker's responsibility to shut itself down. The provisioner has a limit (currently 96 hours) for all instances to prevent zombie instances from running indefinitely. The provisioner will ensure that all instances created are tagged with AWS resource tags containing the provisioner id and the worker type. If provided, the secrets in the global, region and instance type sections are available using the secrets api. If specified, the scopes provided will be used to generate a set of temporary credentials available with the other secrets. This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#`` This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#`` This method is ``stable``
f15763:c0:m1
def updateWorkerType(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Update Worker Type Provide a new copy of a worker type to replace the existing one. This will overwrite the existing worker type definition if there is already a worker type of that name. This method will return a 200 response along with a copy of the worker type definition created. Note that if you are using the result of a GET on the worker-type end point, you will need to delete the lastModified and workerType keys from the object returned, since those fields are not allowed in the request body for this method. Otherwise, all input requirements and actions are the same as the create method. This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#`` This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#`` This method is ``stable``
f15763:c0:m2
def workerTypeLastModified(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Get Worker Type Last Modified Time This method is provided to allow workers to see when they were last modified. The value provided through UserData can be compared against this value to see if changes have been made. If the worker type definition has not been changed, the date should be identical, as it is the same stored value. This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-last-modified.json#`` This method is ``stable``
f15763:c0:m3
def workerType(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Get Worker Type Retrieve a copy of the requested worker type definition. This copy contains a lastModified field as well as the worker type name. As such, it will require manipulation to be able to use the results of this method to submit data to the update method. This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#`` This method is ``stable``
f15763:c0:m4
def removeWorkerType(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Delete Worker Type Delete a worker type definition. This method will only delete the worker type definition from the storage table. The actual deletion will be handled by a background worker. As soon as this method is called for a worker type, the background worker will immediately submit requests to cancel all spot requests for this worker type as well as killing all instances regardless of their state. If you want to gracefully remove a worker type, you must either ensure that no tasks are created with that worker type name, or you could theoretically set maxCapacity to 0, though this is not a supported or tested action. This method is ``stable``
f15763:c0:m5
def listWorkerTypes(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
List Worker Types Return a list of string worker type names. These are the names of all managed worker types known to the provisioner. This does not include worker types which are leftovers from a deleted worker type definition but are still running in AWS. This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/list-worker-types-response.json#`` This method is ``stable``
f15763:c0:m6
def createSecret(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Create new Secret Insert a secret into the secret storage. The supplied secrets will be provided verbatim via `getSecret`, while the supplied scopes will be converted into credentials by `getSecret`. This method is not ordinarily used in production; instead, the provisioner creates a new secret directly for each spot bid. This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-secret-request.json#`` This method is ``stable``
f15763:c0:m7
def getSecret(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Get a Secret Retrieve a secret from storage. The result contains any passwords or other restricted information verbatim as well as a temporary credential based on the scopes specified when the secret was created. It is important that this secret is deleted by the consumer (`removeSecret`), or else the secrets will be visible to any process which can access the user data associated with the instance. This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-secret-response.json#`` This method is ``stable``
f15763:c0:m8
def instanceStarted(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Report an instance starting An instance will report in by giving its instance id as well as its security token. The token is checked to ensure it matches a real token that exists, so that random machines do not check in. We could generate a different token but that seems like overkill. This method is ``stable``
f15763:c0:m9
def removeSecret(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Remove a Secret Remove a secret. After this call, a call to `getSecret` with the given token will return no information. It is very important that the consumer of a secret delete the secret from storage before handing over control to untrusted processes to prevent credential and/or secret leakage. This method is ``stable``
f15763:c0:m10
def getLaunchSpecs(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Get All Launch Specifications for WorkerType This method returns a preview of all possible launch specifications that this worker type definition could submit to EC2. It is used to test worker types, nothing more. **This API end-point is experimental and may be subject to change without warning.** This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-launch-specs-response.json#`` This method is ``experimental``
f15763:c0:m11
def state(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT:state>"], *args, **kwargs)<EOL>
Get AWS State for a worker type Return the state of a given workertype as stored by the provisioner. This state is stored as two lists: one for running instances and one for pending requests. The `summary` property contains an updated summary similar to that returned from `listWorkerTypeSummaries`. This method is ``stable``
f15763:c0:m12
def backendStatus(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Backend Status This endpoint is used to show the last time the provisioner checked in. A check-in is done through the Dead Man's Snitch API. It is done at the conclusion of a provisioning iteration and is used to tell whether the background provisioning process is still running. **Warning** this api end-point is **not stable**. This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/backend-status-response.json#`` This method is ``experimental``
f15763:c0:m13
def ping(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Ping Server Respond without doing anything. This endpoint is used to check that the service is up. This method is ``stable``
f15763:c0:m14
def ping(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Ping Server Respond without doing anything. This endpoint is used to check that the service is up. This method is ``stable``
f15764:c0:m0
def purgeCache(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Purge Worker Cache Publish a purge-cache message to purge caches named `cacheName` with `provisionerId` and `workerType` in the routing-key. Workers should be listening for this message and purge caches when they see it. This method takes input: ``v1/purge-cache-request.json#`` This method is ``stable``
f15764:c0:m1
def allPurgeRequests(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
All Open Purge Requests This is useful mostly for administrators to view the set of open purge requests. It should not be used by workers; they should use the purgeRequests endpoint that is specific to their workerType and provisionerId. This method gives output: ``v1/all-purge-cache-request-list.json#`` This method is ``stable``
f15764:c0:m2
def purgeRequests(self, *args, **kwargs):
return self._makeApiCall(self.funcinfo["<STR_LIT>"], *args, **kwargs)<EOL>
Open Purge Requests for a provisionerId/workerType pair List of caches that need to be purged if they are from before a certain time. This is safe to be used in automation from workers. This method gives output: ``v1/purge-cache-request-list.json#`` This method is ``stable``
f15764:c0:m3
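The purgeRequests docstring above describes a worker-side loop: fetch the open purge requests for your provisionerId/workerType pair and drop any local cache created before the request's cut-off time. A minimal sketch of that comparison, assuming the request entries carry `cacheName` and `before` fields as in the purge-cache schema, and that `local_caches` (a hypothetical name) maps cache name to creation time:

```python
import datetime

def caches_to_purge(requests, local_caches):
    """Return names of local caches older than their purge cut-off.

    `requests` is the parsed purgeRequests response (assumed shape:
    {'requests': [{'cacheName': ..., 'before': ...}, ...]});
    `local_caches` maps cache name -> creation datetime (illustrative).
    """
    stale = []
    for req in requests.get('requests', []):
        created = local_caches.get(req['cacheName'])
        # Purge only caches that existed before the requested cut-off.
        if created is not None and created < req['before']:
            stale.append(req['cacheName'])
    return stale
```

This is safe to run from workers because, unlike allPurgeRequests, the purgeRequests endpoint is already scoped to one provisionerId/workerType pair.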
def calculateSleepTime(attempt):
if attempt <= <NUM_LIT:0>:<EOL><INDENT>return <NUM_LIT:0><EOL><DEDENT>delay = float(<NUM_LIT:2> ** (attempt - <NUM_LIT:1>)) * float(DELAY_FACTOR)<EOL>delay = delay * (RANDOMIZATION_FACTOR * (random.random() * <NUM_LIT:2> - <NUM_LIT:1>) + <NUM_LIT:1>)<EOL>return min(delay, MAX_DELAY)<EOL>
From the go client https://github.com/taskcluster/go-got/blob/031f55c/backoff.go#L24-L29
f15765:m0
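De-tokenized, the calculateSleepTime record above is exponential backoff with random jitter, capped at a maximum delay. A runnable sketch follows; the constant values are illustrative assumptions, since the real `DELAY_FACTOR`, `RANDOMIZATION_FACTOR`, and `MAX_DELAY` live elsewhere in the client:

```python
import random

# Illustrative values -- the real constants are defined by the client.
DELAY_FACTOR = 0.1           # base delay in seconds (assumed)
RANDOMIZATION_FACTOR = 0.25  # +/-25% jitter (assumed)
MAX_DELAY = 30               # cap in seconds (assumed)

def calculate_sleep_time(attempt):
    """Exponential backoff with jitter, as in the go-got backoff code."""
    if attempt <= 0:
        return 0
    # 2^(attempt-1) * DELAY_FACTOR: doubles the base delay each retry.
    delay = float(2 ** (attempt - 1)) * float(DELAY_FACTOR)
    # Scale by a random factor in [1 - RF, 1 + RF] to spread out retries.
    delay = delay * (RANDOMIZATION_FACTOR * (random.random() * 2 - 1) + 1)
    return min(delay, MAX_DELAY)
```

The jitter matters when many clients retry at once: without it, failed requests re-arrive in synchronized waves.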
def fromNow(offset, dateObj=None):
<EOL>future = True<EOL>offset = offset.lstrip()<EOL>if offset.startswith('<STR_LIT:->'):<EOL><INDENT>future = False<EOL>offset = offset[<NUM_LIT:1>:].lstrip()<EOL><DEDENT>if offset.startswith('<STR_LIT:+>'):<EOL><INDENT>offset = offset[<NUM_LIT:1>:].lstrip()<EOL><DEDENT>m = r.match(offset)<EOL>if m is None:<EOL><INDENT>raise ValueError("<STR_LIT>" % offset)<EOL><DEDENT>days = <NUM_LIT:0><EOL>hours = <NUM_LIT:0><EOL>minutes = <NUM_LIT:0><EOL>seconds = <NUM_LIT:0><EOL>if m.group('<STR_LIT>'):<EOL><INDENT>years = int(m.group('<STR_LIT>'))<EOL>days += <NUM_LIT> * years<EOL><DEDENT>if m.group('<STR_LIT>'):<EOL><INDENT>months = int(m.group('<STR_LIT>'))<EOL>days += <NUM_LIT:30> * months<EOL><DEDENT>days += int(m.group('<STR_LIT>') or <NUM_LIT:0>)<EOL>hours += int(m.group('<STR_LIT>') or <NUM_LIT:0>)<EOL>minutes += int(m.group('<STR_LIT>') or <NUM_LIT:0>)<EOL>seconds += int(m.group('<STR_LIT>') or <NUM_LIT:0>)<EOL>delta = datetime.timedelta(<EOL>weeks=int(m.group('<STR_LIT>') or <NUM_LIT:0>),<EOL>days=days,<EOL>hours=hours,<EOL>minutes=minutes,<EOL>seconds=seconds,<EOL>)<EOL>if not dateObj:<EOL><INDENT>dateObj = datetime.datetime.utcnow()<EOL><DEDENT>return dateObj + delta if future else dateObj - delta<EOL>
Generate a `datetime.datetime` instance which is offset using a string. See the README.md for a full example; for instance, an offset of '1 day' yields a datetime object one day in the future.
f15765:m2
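The fromNow record above strips a leading sign, matches the offset string against a units regex, folds years and months into days (365 and 30 days respectively), and adds or subtracts the resulting timedelta. A simplified sketch of the same flow, assuming a reduced regex that only handles days, hours, and minutes (the real client also accepts years, months, weeks, and seconds):

```python
import datetime
import re

# Reduced offset grammar for illustration: '1 day', '2 hours 30 min', etc.
OFFSET_RE = re.compile(
    r'^(\s*(?P<days>\d+)\s*d(ays?)?)?'
    r'(\s*(?P<hours>\d+)\s*h(ours?)?)?'
    r'(\s*(?P<minutes>\d+)\s*m(in(utes?)?)?)?\s*$'
)

def from_now(offset, date_obj=None):
    """Offset a datetime by a human-readable string, as fromNow() does."""
    future = True
    offset = offset.lstrip()
    if offset.startswith('-'):       # '- 1 day' means one day in the past
        future = False
        offset = offset[1:].lstrip()
    if offset.startswith('+'):       # an explicit '+' is simply skipped
        offset = offset[1:].lstrip()
    m = OFFSET_RE.match(offset)
    if m is None:
        raise ValueError("offset string: '%s' does not parse" % offset)
    delta = datetime.timedelta(
        days=int(m.group('days') or 0),
        hours=int(m.group('hours') or 0),
        minutes=int(m.group('minutes') or 0),
    )
    if date_obj is None:
        date_obj = datetime.datetime.utcnow()
    return date_obj + delta if future else date_obj - delta
```

Passing an explicit `date_obj` makes the function deterministic, which is how the examples below pin the expected results.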
def fromNowJSON(offset):
return stringDate(fromNow(offset))<EOL>
Like fromNow(), but returns the result as a taskcluster-JSON compatible date string
f15765:m3