| code (string) | signature (string) | docstring (string) | loss_without_docstring (float64) | loss_with_docstring (float64) | factor (float64) |
|---|---|---|---|---|---|
return self._makeApiCall(self.funcinfo["pendingTasks"], *args, **kwargs)
|
def pendingTasks(self, *args, **kwargs)
|
Get Number of Pending Tasks
Get an approximate number of pending tasks for the given `provisionerId`
and `workerType`.
The underlying Azure Storage Queues only promise to give us an estimate.
Furthermore, we cache the result in memory for 20 seconds, so consumers
should by no means expect this to be an accurate number.
It is, however, a solid estimate of the number of pending tasks.
This method gives output: ``v1/pending-tasks-response.json#``
This method is ``stable``
| 12.7616
| 17.367836
| 0.734784
|
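A minimal usage sketch for the `pendingTasks` row above, assuming the Taskcluster Python client; the `rootUrl`, provisionerId, and workerType are hypothetical, and the response field name follows `v1/pending-tasks-response.json#`:

```python
import taskcluster

# No credentials are needed for this read-only estimate on public deployments.
queue = taskcluster.Queue({'rootUrl': 'https://tc.example.com'})

# Approximate count; the service caches the value for ~20 seconds.
result = queue.pendingTasks('aws-provisioner-v1', 'tutorial')
print(result['pendingTasks'])
```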
return self._makeApiCall(self.funcinfo["quarantineWorker"], *args, **kwargs)
|
def quarantineWorker(self, *args, **kwargs)
|
Quarantine a worker
Quarantine a worker
This method takes input: ``v1/quarantine-worker-request.json#``
This method gives output: ``v1/worker-response.json#``
This method is ``experimental``
| 10.081695
| 11.330914
| 0.889751
|
return self._makeApiCall(self.funcinfo["declareWorker"], *args, **kwargs)
|
def declareWorker(self, *args, **kwargs)
|
Declare a worker
Declare a worker, supplying some details about it.
`declareWorker` allows updating one or more properties of a worker as long as the required scopes are
possessed.
This method takes input: ``v1/update-worker-request.json#``
This method gives output: ``v1/worker-response.json#``
This method is ``experimental``
| 15.898732
| 21.799057
| 0.729331
|
return await self._makeApiCall(self.funcinfo["findTask"], *args, **kwargs)
|
async def findTask(self, *args, **kwargs)
|
Find Indexed Task
Find a task by index path, returning the highest-rank task with that path. If no
task exists for the given path, this API end-point will respond with a 404 status.
This method gives output: ``v1/indexed-task-response.json#``
This method is ``stable``
| 12.112565
| 21.849869
| 0.554354
|
return await self._makeApiCall(self.funcinfo["listNamespaces"], *args, **kwargs)
|
async def listNamespaces(self, *args, **kwargs)
|
List Namespaces
List the namespaces immediately under a given namespace.
This endpoint
lists up to 1000 namespaces. If more namespaces are present, a
`continuationToken` will be returned, which can be given in the next
request. For the initial request, the payload should be an empty JSON
object.
This method gives output: ``v1/list-namespaces-response.json#``
This method is ``stable``
| 13.301975
| 21.032631
| 0.632445
|
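A hedged sketch of draining the continuation token for the async method above, assuming the `Index` client from `taskcluster.aio`; the `rootUrl`, namespace path, and `namespaces` response field are assumptions:

```python
import asyncio
from taskcluster.aio import Index

async def walk_namespaces(root_url, path):
    index = Index({'rootUrl': root_url})
    payload = {}  # initial request: an empty JSON object, per the docstring
    while True:
        result = await index.listNamespaces(path, payload)
        for ns in result['namespaces']:  # field name assumed from the response schema
            print(ns['namespace'])
        token = result.get('continuationToken')
        if not token:
            break
        payload = {'continuationToken': token}

asyncio.run(walk_namespaces('https://tc.example.com', 'project'))
```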
return await self._makeApiCall(self.funcinfo["listTasks"], *args, **kwargs)
|
async def listTasks(self, *args, **kwargs)
|
List Tasks
List the tasks immediately under a given namespace.
This endpoint
lists up to 1000 tasks. If more tasks are present, a
`continuationToken` will be returned, which can be given in the next
request. For the initial request, the payload should be an empty JSON
object.
**Remark**: this end-point is designed for humans browsing for tasks, not
services, as automated browsing makes little sense.
This method gives output: ``v1/list-tasks-response.json#``
This method is ``stable``
| 12.340103
| 21.853516
| 0.564674
|
return await self._makeApiCall(self.funcinfo["insertTask"], *args, **kwargs)
|
async def insertTask(self, *args, **kwargs)
|
Insert Task into Index
Insert a task into the index. If the new rank is less than the existing rank
at the given index path, the task is not indexed but the response is still 200 OK.
Please see the introduction above for information
about indexing successfully completed tasks automatically using custom routes.
This method takes input: ``v1/insert-task-request.json#``
This method gives output: ``v1/indexed-task-response.json#``
This method is ``stable``
| 13.219021
| 24.477274
| 0.540053
|
return await self._makeApiCall(self.funcinfo["findArtifactFromTask"], *args, **kwargs)
|
async def findArtifactFromTask(self, *args, **kwargs)
|
Get Artifact From Indexed Task
Find a task by index path and redirect to the artifact on the most recent
run with the given `name`.
Note that multiple calls to this endpoint may return artifacts from different tasks
if a new task is inserted into the index between calls. Avoid using this method as
a stable link to multiple, connected files if the index path does not contain a
unique identifier. For example, the following two links may return unrelated files:
* https://tc.example.com/api/index/v1/task/some-app.win64.latest.installer/artifacts/public/installer.exe
* https://tc.example.com/api/index/v1/task/some-app.win64.latest.installer/artifacts/public/debug-symbols.zip
This problem can be remedied by including the revision in the index path or by bundling both
installer and debug symbols into a single artifact.
If no task exists for the given index path, this API end-point responds with 404.
This method is ``stable``
| 10.563093
| 19.064034
| 0.554085
|
return await self._makeApiCall(self.funcinfo["allPurgeRequests"], *args, **kwargs)
|
async def allPurgeRequests(self, *args, **kwargs)
|
All Open Purge Requests
This is useful mostly for administrators to view
the set of open purge requests. It should not
be used by workers. They should use the purgeRequests
endpoint that is specific to their workerType and
provisionerId.
This method gives output: ``v1/all-purge-cache-request-list.json#``
This method is ``stable``
| 12.733225
| 18.592018
| 0.684876
|
return await self._makeApiCall(self.funcinfo["purgeRequests"], *args, **kwargs)
|
async def purgeRequests(self, *args, **kwargs)
|
Open Purge Requests for a provisionerId/workerType pair
List of caches that need to be purged if they are from before
a certain time. This is safe to be used in automation from
workers.
This method gives output: ``v1/purge-cache-request-list.json#``
This method is ``stable``
| 13.593371
| 21.673969
| 0.627175
|
escape_parts = re.compile('\x01?\x1b\\[([0-9;]*)m\x02?')
chunks = escape_parts.split(text)
i = 0
for chunk in chunks:
if chunk != '':
if i % 2 == 0:
self.stream.write(chunk)
else:
c = chunk.split(';')
r = Magic.rdisplay(c)
self.display(**r) #see caveat 0
self.flush()
i += 1
|
def write(self, text)
|
Parses text and prints proper output to the terminal
This method will extract escape codes from the text and
handle them as well as possible for whichever platform
is being used. At the moment only the display escape codes
are supported.
| 5.529374
| 5.145142
| 1.074679
|
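The even/odd indexing in `write` above relies on `re.split` interleaving literal text with the captured code groups. A standalone illustration of that split (not tied to any class):

```python
import re

escape_parts = re.compile('\x01?\x1b\\[([0-9;]*)m\x02?')
chunks = escape_parts.split('plain \x1b[1;34mcolored\x1b[0m tail')
print(chunks)
# ['plain ', '1;34', 'colored', '0', ' tail']
# Even indices are literal text; odd indices are the ';'-separated codes
# that Magic.rdisplay() turns into display() keyword arguments.
```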
return NotImplemented
fno = stdout.fileno()
mode = self.termios.tcgetattr(fno)
try:
self.tty.setraw(fno, self.termios.TCSANOW)
ch = self.read(1)
finally:
self.termios.tcsetattr(fno, self.termios.TCSANOW, mode)
return ch
|
def getch(self)
|
Don't use this yet
It doesn't belong here but I haven't yet thought about a proper
way to implement this feature and the features that will depend on
it.
| 3.380006
| 3.280955
| 1.03019
|
codes, fg, bg = Magic.displayformat(codes, fg, bg)
self.stream.write(Magic.display(codes, fg, bg))
self.flush()
|
def display(self, codes=[], fg=None, bg=None)
|
Displays the codes using ANSI escapes
| 6.641467
| 6.687409
| 0.99313
|
for d in range(distance):
self.stream.write(self._get_cap('move '+place))
self.flush()
|
def move(self, place, distance = 1)
|
see doc in Term class
| 13.557664
| 11.8986
| 1.139434
|
if scope == 'line':
self.clear('beginning of line')
self.clear('end of line')
else: self.stream.write(self._get_cap('clear '+scope))
self.flush()
|
def clear(self, scope = 'screen')
|
see doc in Term class
| 8.098363
| 7.000919
| 1.156757
|
self.curses.setupterm()
return self.curses.tigetnum('cols'), self.curses.tigetnum('lines')
|
def get_size(self)
|
see doc in Term class
| 4.686799
| 3.652941
| 1.283021
|
codes, fg, bg = Magic.displayformat(codes, fg, bg)
color = 0
for c in codes:
try:
f = getattr(self, '_display_' + c)
out = f()
if out: color |= out
except AttributeError:
pass
cfg, cfgi, cbg, cbgi = self._split_attributes(
self._get_console_info()['attributes'])
if self.reverse_input:
cfg, cbg = (cbg // 0x10), (cfg * 0x10)
cfgi, cbgi = (cbgi // 0x10), (cfgi * 0x10)
if fg != None:
color |= self.FG[fg]
self.real_fg = self.FG[fg]
else: color |= cfg
if bg != None:
color |= self.BG[bg]
else: color |= cbg
color |= (cfgi | cbgi)
fg, fgi, bg, bgi = self._split_attributes(color)
if self.dim_output:
# intense black
fg = 0
fgi = self.FG_INTENSITY
if self.reverse_output:
fg, bg = (bg // 0x10), (fg * 0x10)
fgi, bgi = (bgi // 0x10), (fgi * 0x10)
self.reverse_input = True
if self.hidden_output:
fg = (bg // 0x10)
fgi = (bgi // 0x10)
self._set_attributes(fg | fgi | bg | bgi)
|
def display(self, codes=[], fg=None, bg=None)
|
Displays codes using Windows kernel calls
| 3.549023
| 3.530451
| 1.00526
|
attr = self._get_console_info()
cols = attr['window']['right'] - attr['window']['left'] + 1
lines = attr['window']['bottom'] - attr['window']['top'] + 1
return cols, lines
|
def get_size(self)
|
see doc in Term class
| 4.35913
| 4.066217
| 1.072036
|
fg = attrs & self.FG_ALL
fgi = attrs & self.FG_INTENSITY
bg = attrs & self.BG_ALL
bgi = attrs & self.BG_INTENSITY
return fg, fgi, bg, bgi
|
def _split_attributes(self, attrs)
|
Split attribute code
Takes an attribute code and returns a tuple containing
foreground (fg), foreground intensity (fgi), background (bg), and
background intensity (bgi)
Attributes can be joined using ``fg | fgi | bg | bgi``
| 3.551277
| 2.256175
| 1.574025
|
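A worked example of the masking above. The mask constants here are hypothetical (the real values come from the Windows console attribute layout), chosen so the four components recombine with bitwise OR as the docstring says:

```python
# Hypothetical wincon-style masks: low nibble = foreground, high nibble = background.
FG_ALL, FG_INTENSITY = 0x07, 0x08
BG_ALL, BG_INTENSITY = 0x70, 0x80

attrs = 0xCA
fg  = attrs & FG_ALL        # 0x02
fgi = attrs & FG_INTENSITY  # 0x08 -> bright foreground
bg  = attrs & BG_ALL        # 0x40
bgi = attrs & BG_INTENSITY  # 0x80 -> bright background
assert fg | fgi | bg | bgi == attrs  # components rejoin with bitwise OR
```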
x, y = self._get_position()
if place == 'up':
y -= distance
elif place == 'down':
for i in range(distance): print
nx, ny = self._get_position()
y = ny
self.move('beginning of line')
elif place == 'left':
x -= distance
elif place == 'right':
x += distance
elif place == 'beginning of line':
x = 0
elif place == 'beginning of screen':
x = 0
y = self._get_console_info()['window']['top']
else:
raise ValueError("invalid place to move")
self._set_position((x, y))
|
def move(self, place, distance = 1)
|
see doc in Term class
| 3.486785
| 3.366531
| 1.03572
|
#TODO: clear attributes too
if scope == 'screen':
bos = (0, self._get_console_info()['window']['top'])
cols, lines = self.get_size()
length = cols * lines
self._clear_console(length, bos)
self.move('beginning of screen')
elif scope == 'beginning of line':
pass
elif scope == 'end of line':
curx, cury = self._get_position()
cols, lines = self.get_size()
coord = (curx, cury)
length = cols - curx
self._clear_console(length, coord)
elif scope == 'end of screen':
curx, cury = self._get_position()
coord = (curx, cury)
cols, lines = self.get_size()
length = (lines - cury) * cols - curx
self._clear_console(length, coord)
elif scope == 'line':
curx, cury = self._get_position()
coord = (0, cury)
cols, lines = self.get_size()
self._clear_console(cols, coord)
self._set_position((curx, cury))
elif scope == 'left':
self.move('left')
self.write(' ')
elif scope == 'right':
self.write(' ')
self.move('left')
else:
raise ValueError("invalid scope to clear")
|
def clear(self, scope = 'screen')
|
see doc in Term class
According to http://support.microsoft.com/kb/99261 the best way
to clear the console is to write out empty spaces
| 2.816714
| 2.729513
| 1.031947
|
#TODO: unicode support
strbuffer = self.ctypes.create_string_buffer(1024)
size = self.ctypes.c_short(1024)
#unicode versions are (Get|Set)ConsoleTitleW
self.ctypes.windll.kernel32.GetConsoleTitleA(strbuffer, size)
return strbuffer.value
|
def _get_title(self)
|
According to http://support.microsoft.com/kb/124103 the buffer
size is 1024
Does not support unicode, only ANSI
| 8.247613
| 5.799774
| 1.422058
|
x, y = coord
return self.ctypes.c_int(y << 16 | x)
|
def _get_coord(self, coord)
|
It's a hack, see fixcoord in pyreadline's console.py (revision
1289)
| 10.376822
| 6.934254
| 1.496458
|
if isinstance(codes, basestring):
codes = [codes]
else:
codes = list(codes)
for code in codes:
if code not in Magic.DISPLAY.keys():
raise ValueError("'%s' not a valid display value" % code)
for color in (fg, bg):
if color != None:
if color not in Magic.COLORS.keys():
raise ValueError("'%s' not a valid color" % color)
return [codes, fg, bg]
|
def displayformat(codes=[], fg=None, bg=None)
|
Makes sure all arguments are valid
| 2.742549
| 2.583107
| 1.061725
|
dcodes = []
fg = bg = None
for code in codes:
code = int(code)
offset = code // 10
decimal = code % 10
if offset == 3 and decimal in Magic.COLORS.values(): fg = decimal
elif offset == 4 and decimal in Magic.COLORS.values(): bg = decimal
elif code in Magic.DISPLAY.values(): dcodes.append(code)
else: pass # drop unhandled values
r = {}
if len(dcodes): r['codes'] = [Magic.rDISPLAY[c] for c in dcodes]
if fg != None: r['fg'] = Magic.rCOLORS[fg]
if bg != None: r['bg'] = Magic.rCOLORS[bg]
return r
|
def rdisplay(codes)
|
Reads a list of codes and generates dict
>>> Magic.rdisplay([])
{}
>>> result = Magic.rdisplay([1,2,34,46])
>>> sorted(result.keys())
['bg', 'codes', 'fg']
>>> sorted(result['codes'])
['bright', 'dim']
>>> result['bg']
'cyan'
>>> result['fg']
'blue'
| 3.809299
| 3.471942
| 1.097167
|
return self._makeApiCall(self.funcinfo["listClients"], *args, **kwargs)
|
def listClients(self, *args, **kwargs)
|
List Clients
Get a list of all clients. With `prefix`, only clients for which
it is a prefix of the clientId are returned.
By default this end-point will try to return up to 1000 clients in one
request. But it **may return less, even none**.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listClients` with the last `continuationToken` until you
get a result without a `continuationToken`.
This method gives output: ``v1/list-clients-response.json#``
This method is ``stable``
| 11.999548
| 16.51792
| 0.726456
|
return self._makeApiCall(self.funcinfo["client"], *args, **kwargs)
|
def client(self, *args, **kwargs)
|
Get Client
Get information about a single client.
This method gives output: ``v1/get-client-response.json#``
This method is ``stable``
| 16.800453
| 20.783997
| 0.808336
|
return self._makeApiCall(self.funcinfo["createClient"], *args, **kwargs)
|
def createClient(self, *args, **kwargs)
|
Create Client
Create a new client and get the `accessToken` for this client.
You should store the `accessToken` from this API call as there is no
other way to retrieve it.
If you lose the `accessToken` you can call `resetAccessToken` to reset
it, and a new `accessToken` will be returned, but you cannot retrieve the
current `accessToken`.
If a client with the same `clientId` already exists this operation will
fail. Use `updateClient` if you wish to update an existing client.
The caller's scopes must satisfy `scopes`.
This method takes input: ``v1/create-client-request.json#``
This method gives output: ``v1/create-client-response.json#``
This method is ``stable``
| 13.74135
| 17.736021
| 0.774771
|
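A sketch of the create-and-store flow described above, assuming the synchronous Taskcluster `Auth` client; the `rootUrl`, clientIds, scope, and persistence helper are hypothetical:

```python
import datetime
import taskcluster

auth = taskcluster.Auth({
    'rootUrl': 'https://tc.example.com',
    'credentials': {'clientId': 'issuer', 'accessToken': '...'},
})

result = auth.createClient('project/my-service', {
    'description': 'client for my-service',
    'expires': datetime.datetime.utcnow() + datetime.timedelta(days=365),
    'scopes': ['queue:create-task:aws-provisioner-v1/tutorial'],
})

# Persist the token immediately: it cannot be retrieved later, only reset.
store_access_token(result['accessToken'])  # hypothetical persistence helper
```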
return self._makeApiCall(self.funcinfo["updateClient"], *args, **kwargs)
|
def updateClient(self, *args, **kwargs)
|
Update Client
Update an existing client. The `clientId` and `accessToken` cannot be
updated, but `scopes` can be modified. The caller's scopes must
satisfy all scopes being added to the client in the update operation.
If no scopes are given in the request, the client's scopes remain
unchanged.
This method takes input: ``v1/create-client-request.json#``
This method gives output: ``v1/get-client-response.json#``
This method is ``stable``
| 13.304675
| 16.72472
| 0.79551
|
return self._makeApiCall(self.funcinfo["enableClient"], *args, **kwargs)
|
def enableClient(self, *args, **kwargs)
|
Enable Client
Enable a client that was disabled with `disableClient`. If the client
is already enabled, this does nothing.
This is typically used by identity providers to re-enable clients that
had been disabled when the corresponding identity's scopes changed.
This method gives output: ``v1/get-client-response.json#``
This method is ``stable``
| 14.429929
| 18.101351
| 0.797174
|
return self._makeApiCall(self.funcinfo["disableClient"], *args, **kwargs)
|
def disableClient(self, *args, **kwargs)
|
Disable Client
Disable a client. If the client is already disabled, this does nothing.
This is typically used by identity providers to disable clients when the
corresponding identity's scopes no longer satisfy the client's scopes.
This method gives output: ``v1/get-client-response.json#``
This method is ``stable``
| 13.950239
| 20.716976
| 0.673372
|
return self._makeApiCall(self.funcinfo["deleteClient"], *args, **kwargs)
|
def deleteClient(self, *args, **kwargs)
|
Delete Client
Delete a client, please note that any roles related to this client must
be deleted independently.
This method is ``stable``
| 13.176303
| 18.956249
| 0.69509
|
return self._makeApiCall(self.funcinfo["listRoles"], *args, **kwargs)
|
def listRoles(self, *args, **kwargs)
|
List Roles
Get a list of all roles, each role object also includes the list of
scopes it expands to.
This method gives output: ``v1/list-roles-response.json#``
This method is ``stable``
| 11.478325
| 14.675297
| 0.782153
|
return self._makeApiCall(self.funcinfo["listRoleIds"], *args, **kwargs)
|
def listRoleIds(self, *args, **kwargs)
|
List Role IDs
If no limit is given, the roleIds of all roles are returned. Since this
list may become long, callers can use the `limit` and `continuationToken`
query arguments to page through the responses.
This method gives output: ``v1/list-role-ids-response.json#``
This method is ``stable``
| 10.360328
| 13.558514
| 0.76412
|
return self._makeApiCall(self.funcinfo["listRoles2"], *args, **kwargs)
|
def listRoles2(self, *args, **kwargs)
|
List Roles
If no limit is given, all roles are returned. Since this
list may become long, callers can use the `limit` and `continuationToken`
query arguments to page through the responses.
This method gives output: ``v1/list-roles2-response.json#``
This method is ``stable``
| 10.348824
| 13.67588
| 0.756721
|
return self._makeApiCall(self.funcinfo["createRole"], *args, **kwargs)
|
def createRole(self, *args, **kwargs)
|
Create Role
Create a new role.
The caller's scopes must satisfy the new role's scopes.
If there already exists a role with the same `roleId` this operation
will fail. Use `updateRole` to modify an existing role.
Creation of a role that will generate an infinite expansion will result
in an error response.
This method takes input: ``v1/create-role-request.json#``
This method gives output: ``v1/get-role-response.json#``
This method is ``stable``
| 13.100797
| 19.590164
| 0.668744
|
return self._makeApiCall(self.funcinfo["updateRole"], *args, **kwargs)
|
def updateRole(self, *args, **kwargs)
|
Update Role
Update an existing role.
The caller's scopes must satisfy all of the new scopes being added, but
need not satisfy all of the role's existing scopes.
An update of a role that will generate an infinite expansion will result
in an error response.
This method takes input: ``v1/create-role-request.json#``
This method gives output: ``v1/get-role-response.json#``
This method is ``stable``
| 13.099588
| 18.390396
| 0.712306
|
return self._makeApiCall(self.funcinfo["deleteRole"], *args, **kwargs)
|
def deleteRole(self, *args, **kwargs)
|
Delete Role
Delete a role. This operation will succeed regardless of whether or not
the role exists.
This method is ``stable``
| 12.457632
| 19.543098
| 0.637444
|
return self._makeApiCall(self.funcinfo["expandScopesGet"], *args, **kwargs)
|
def expandScopesGet(self, *args, **kwargs)
|
Expand Scopes
Return an expanded copy of the given scopeset, with scopes implied by any
roles included.
This call uses the GET method with an HTTP body. It remains only for
backward compatibility.
This method takes input: ``v1/scopeset.json#``
This method gives output: ``v1/scopeset.json#``
This method is ``deprecated``
| 12.257007
| 14.587184
| 0.840259
|
return self._makeApiCall(self.funcinfo["expandScopes"], *args, **kwargs)
|
def expandScopes(self, *args, **kwargs)
|
Expand Scopes
Return an expanded copy of the given scopeset, with scopes implied by any
roles included.
This method takes input: ``v1/scopeset.json#``
This method gives output: ``v1/scopeset.json#``
This method is ``stable``
| 13.771347
| 17.121534
| 0.804329
|
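A short sketch for the `expandScopes` row above; input and output both follow `v1/scopeset.json#`. The `rootUrl` and role scope are hypothetical:

```python
import taskcluster

auth = taskcluster.Auth({'rootUrl': 'https://tc.example.com'})
expanded = auth.expandScopes({'scopes': ['assume:repo:github.com/org/repo:branch:main']})
print(expanded['scopes'])  # includes scopes implied by matched roles
```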
return self._makeApiCall(self.funcinfo["currentScopes"], *args, **kwargs)
|
def currentScopes(self, *args, **kwargs)
|
Get Current Scopes
Return the expanded scopes available in the request, taking into account all sources
of scopes and scope restrictions (temporary credentials, assumeScopes, client scopes,
and roles).
This method gives output: ``v1/scopeset.json#``
This method is ``stable``
| 13.064919
| 16.650679
| 0.784648
|
return self._makeApiCall(self.funcinfo["awsS3Credentials"], *args, **kwargs)
|
def awsS3Credentials(self, *args, **kwargs)
|
Get Temporary Read/Write Credentials S3
Get temporary AWS credentials for `read-write` or `read-only` access to
a given `bucket` and `prefix` within that bucket.
The `level` parameter can be `read-write` or `read-only` and determines
which type of credentials are returned. Please note that the `level`
parameter is required in the scope guarding access. The bucket name must
not contain `.`, as recommended by Amazon.
This method can only allow access to a whitelisted set of buckets. To add
a bucket to that whitelist, contact the Taskcluster team, who will add it to
the appropriate IAM policy. If the bucket is in a different AWS account, you
will also need to add a bucket policy allowing access from the Taskcluster
account. That policy should look like this:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "allow-taskcluster-auth-to-delegate-access",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::692406183521:root"
},
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::<bucket>",
"arn:aws:s3:::<bucket>/*"
]
}
]
}
```
The credentials are set to expire after an hour, but this behavior is
subject to change. Hence, you should always read the `expires` property
from the response, if you intend to maintain active credentials in your
application.
Please note that your `prefix` may not start with slash `/`. Such a prefix
is allowed on S3, but we forbid it here to discourage bad behavior.
Also note that if your `prefix` doesn't end in a slash `/`, the STS
credentials may allow access to unexpected keys, as S3 does not treat
slashes specially. For example, a prefix of `my-folder` will allow
access to `my-folder/file.txt` as expected, but also to `my-folder.txt`,
which may not be intended.
Finally, note that the `PutObjectAcl` call is not allowed. Passing a canned
ACL other than `private` to `PutObject` is treated as a `PutObjectAcl` call, and
will result in an access-denied error from AWS. This limitation is due to a
security flaw in Amazon S3 which might otherwise allow indefinite access to
uploaded objects.
**EC2 metadata compatibility**, if the querystring parameter
`?format=iam-role-compat` is given, the response will be compatible
with the JSON exposed by the EC2 metadata service. This aims to ease
compatibility for libraries and tools built to auto-refresh credentials.
For details on the format returned by EC2 metadata service see:
[EC2 User Guide](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials).
This method gives output: ``v1/aws-s3-credentials-response.json#``
This method is ``stable``
| 10.357337
| 15.77142
| 0.656716
|
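A hedged sketch of requesting S3 credentials and honoring `expires`, per the caveats above; the bucket, prefix, and response field names (taken from the response schema) are assumptions:

```python
import taskcluster

auth = taskcluster.Auth({
    'rootUrl': 'https://tc.example.com',
    'credentials': {'clientId': '...', 'accessToken': '...'},
})

# Trailing '/' on the prefix avoids the `my-folder.txt` surprise described above.
result = auth.awsS3Credentials('read-only', 'my-bucket', 'my-prefix/')
creds = result['credentials']
print('refresh before:', result['expires'])  # the ~1-hour lifetime is subject to change
```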
return self._makeApiCall(self.funcinfo["azureTables"], *args, **kwargs)
|
def azureTables(self, *args, **kwargs)
|
List Tables in an Account Managed by Auth
Retrieve a list of all tables in an account.
This method gives output: ``v1/azure-table-list-response.json#``
This method is ``stable``
| 13.14632
| 14.876391
| 0.883704
|
return self._makeApiCall(self.funcinfo["azureTableSAS"], *args, **kwargs)
|
def azureTableSAS(self, *args, **kwargs)
|
Get Shared-Access-Signature for Azure Table
Get a shared access signature (SAS) string for use with a specific Azure
Table Storage table.
The `level` parameter can be `read-write` or `read-only` and determines
which type of credentials are returned. If level is read-write, it will create the
table if it doesn't already exist.
This method gives output: ``v1/azure-table-access-response.json#``
This method is ``stable``
| 11.284507
| 15.944201
| 0.70775
|
return self._makeApiCall(self.funcinfo["azureContainers"], *args, **kwargs)
|
def azureContainers(self, *args, **kwargs)
|
List containers in an Account Managed by Auth
Retrieve a list of all containers in an account.
This method gives output: ``v1/azure-container-list-response.json#``
This method is ``stable``
| 12.662032
| 15.326218
| 0.826168
|
return self._makeApiCall(self.funcinfo["azureContainerSAS"], *args, **kwargs)
|
def azureContainerSAS(self, *args, **kwargs)
|
Get Shared-Access-Signature for Azure Container
Get a shared access signature (SAS) string for use with a specific Azure
Blob Storage container.
The `level` parameter can be `read-write` or `read-only` and determines
which type of credentials are returned. If level is read-write, it will create the
container if it doesn't already exist.
This method gives output: ``v1/azure-container-response.json#``
This method is ``stable``
| 12.159605
| 16.181993
| 0.751428
|
return self._makeApiCall(self.funcinfo["sentryDSN"], *args, **kwargs)
|
def sentryDSN(self, *args, **kwargs)
|
Get DSN for Sentry Project
Get temporary DSN (access credentials) for a sentry project.
The credentials returned can be used with any Sentry client for up to
24 hours, after which the credentials will be automatically disabled.
If the project doesn't exist it will be created, and assigned to the
initial team configured for this component. Contact a Sentry admin
to have the project transferred to a team you have access to, if needed.
This method gives output: ``v1/sentry-dsn-response.json#``
This method is ``stable``
| 12.870086
| 21.569767
| 0.596672
|
return self._makeApiCall(self.funcinfo["statsumToken"], *args, **kwargs)
|
def statsumToken(self, *args, **kwargs)
|
Get Token for Statsum Project
Get temporary `token` and `baseUrl` for sending metrics to statsum.
The token is valid for 24 hours, clients should refresh after expiration.
This method gives output: ``v1/statsum-token-response.json#``
This method is ``stable``
| 10.891448
| 13.373541
| 0.814403
|
return self._makeApiCall(self.funcinfo["websocktunnelToken"], *args, **kwargs)
|
def websocktunnelToken(self, *args, **kwargs)
|
Get a client token for the Websocktunnel service
Get a temporary token suitable for use connecting to a
[websocktunnel](https://github.com/taskcluster/websocktunnel) server.
The resulting token will only be accepted by servers with a matching audience
value. Reaching such a server is the caller's responsibility. In general,
a server URL or set of URLs should be provided to the caller as configuration
along with the audience value.
The token is valid for a limited time (on the scale of hours). Callers should
refresh it before expiration.
This method gives output: ``v1/websocktunnel-token-response.json#``
This method is ``stable``
| 11.563721
| 14.896029
| 0.776296
|
width = utils.term.width
printy(bold(title.center(width)).as_utf8)
printy(bold((line * width)[:width]).as_utf8)
|
def h1(title, line=OVERLINE)
|
Prints bold text with line beneath it spanning width of terminal
| 10.662756
| 10.124791
| 1.053133
|
if isinstance(color, basestring):
color = grapefruit.Color.NewFromHtml(color)
if isinstance(color, int):
(r, g, b) = xterm256.xterm_to_rgb(color)
elif hasattr(color, 'rgb'):
(r, g, b) = [int(c * 255.0) for c in color.rgb]
else:
(r, g, b) = color
assert isinstance(r, int) and 0 <= r <= 255
assert isinstance(g, int) and 0 <= g <= 255
assert isinstance(b, int) and 0 <= b <= 255
return (r, g, b)
|
def parse_color(color)
|
r"""Turns a color into an (r, g, b) tuple
>>> parse_color('white')
(255, 255, 255)
>>> parse_color('#ff0000')
(255, 0, 0)
>>> parse_color('#f00')
(255, 0, 0)
>>> parse_color((255, 0, 0))
(255, 0, 0)
>>> from fabulous import grapefruit
>>> parse_color(grapefruit.Color((0.0, 1.0, 0.0)))
(0, 255, 0)
| 2.308157
| 2.117607
| 1.089984
|
(r, g, b) = parse_color(color)
gcolor = grapefruit.Color((r / 255.0, g / 255.0, b / 255.0))
complement = gcolor.ComplementaryColor()
(r, g, b) = [int(c * 255.0) for c in complement.rgb]
return (r, g, b)
|
def complement(color)
|
r"""Calculates polar opposite of color
This isn't guaranteed to look good >_> (especially with brighter, higher
intensity colors.) This will be replaced with a formula that produces
better looking colors in the future.
>>> complement('red')
(0, 255, 76)
>>> complement((0, 100, 175))
(175, 101, 0)
| 3.178391
| 4.015661
| 0.791499
|
width = utils.term.width
printy(bold(title.center(width)))
printy(bold((bar * width)[:width]))
|
def section(title, bar=OVERLINE, strm=sys.stdout)
|
Helper function for testing demo routines
| 11.821371
| 10.792212
| 1.095361
|
ref = {
'exchange': 'worker-type-created',
'name': 'workerTypeCreated',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'http://schemas.taskcluster.net/aws-provisioner/v1/worker-type-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def workerTypeCreated(self, *args, **kwargs)
|
WorkerType Created Message
When a new `workerType` is created a message will be published to this
exchange.
This exchange outputs: ``http://schemas.taskcluster.net/aws-provisioner/v1/worker-type-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* workerType: WorkerType that this message concerns. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. Our tooling does this automatically if not specified.
| 6.513806
| 2.931341
| 2.222125
|
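A sketch of turning the exchange reference above into a queue binding, assuming the generated `AwsProvisionerEvents` client (present in Python clients of this vintage) and that the returned dict carries `exchange` and `routingKeyPattern`:

```python
import taskcluster

events = taskcluster.AwsProvisionerEvents({'rootUrl': 'https://tc.example.com'})

# Pattern for messages about one workerType; unspecified keys like
# 'reserved' default to '#', as the docstring notes.
binding = events.workerTypeCreated(workerType='tutorial')
print(binding['exchange'], binding['routingKeyPattern'])
```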
ref = {
'exchange': 'worker-type-updated',
'name': 'workerTypeUpdated',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'http://schemas.taskcluster.net/aws-provisioner/v1/worker-type-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def workerTypeUpdated(self, *args, **kwargs)
|
WorkerType Updated Message
When a `workerType` is updated a message will be published to this
exchange.
This exchange outputs: ``http://schemas.taskcluster.net/aws-provisioner/v1/worker-type-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* workerType: WorkerType that this message concerns. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. Our tooling does this automatically if not specified.
| 6.766564
| 2.985801
| 2.266247
|
ref = {
'exchange': 'worker-type-removed',
'name': 'workerTypeRemoved',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'http://schemas.taskcluster.net/aws-provisioner/v1/worker-type-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def workerTypeRemoved(self, *args, **kwargs)
|
WorkerType Removed Message
When a `workerType` is removed a message will be published to this
exchange.
This exchange outputs: ``http://schemas.taskcluster.net/aws-provisioner/v1/worker-type-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* workerType: WorkerType that this message concerns. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. Our tooling does this automatically if not specified.
| 6.688625
| 2.948334
| 2.268612
|
ref = {
'exchange': 'jobs',
'name': 'jobs',
'routingKey': [
{
'multipleWords': False,
'name': 'destination',
},
{
'multipleWords': False,
'name': 'project',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/pulse-job.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def jobs(self, *args, **kwargs)
|
Job Messages
When a task run is scheduled or resolved, a message is posted to
this exchange in a Treeherder consumable format.
This exchange outputs: ``v1/pulse-job.json#``. This exchange takes the following keys:
* destination: destination (required)
* project: project (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. Our tooling does this automatically if not specified.
| 6.296947
| 3.711395
| 1.696652
|
for scope in scopes:
if not isinstance(scope, six.string_types):
raise exceptions.TaskclusterFailure('Scope must be string')
# Credentials can only be valid for 31 days. I hope that
# this is validated on the server somehow...
if expiry - start > datetime.timedelta(days=31):
raise exceptions.TaskclusterFailure('Only 31 days allowed')
# We multiply times by 1000 because the auth service is JS and as a result
# uses milliseconds instead of seconds
cert = dict(
version=1,
scopes=scopes,
start=calendar.timegm(start.utctimetuple()) * 1000,
expiry=calendar.timegm(expiry.utctimetuple()) * 1000,
seed=utils.slugId().encode('ascii') + utils.slugId().encode('ascii'),
)
# if this is a named temporary credential, include the issuer in the certificate
if name:
cert['issuer'] = utils.toStr(clientId)
sig = ['version:' + utils.toStr(cert['version'])]
if name:
sig.extend([
'clientId:' + utils.toStr(name),
'issuer:' + utils.toStr(clientId),
])
sig.extend([
'seed:' + utils.toStr(cert['seed']),
'start:' + utils.toStr(cert['start']),
'expiry:' + utils.toStr(cert['expiry']),
'scopes:'
] + scopes)
sigStr = '\n'.join(sig).encode()
if isinstance(accessToken, six.text_type):
accessToken = accessToken.encode()
sig = hmac.new(accessToken, sigStr, hashlib.sha256).digest()
cert['signature'] = utils.encodeStringForB64Header(sig)
newToken = hmac.new(accessToken, cert['seed'], hashlib.sha256).digest()
newToken = utils.makeB64UrlSafe(utils.encodeStringForB64Header(newToken)).replace(b'=', b'')
return {
'clientId': name or clientId,
'accessToken': newToken,
'certificate': utils.dumpJson(cert),
}
|
def createTemporaryCredentials(clientId, accessToken, start, expiry, scopes, name=None)
|
Create a set of temporary credentials
Callers should not apply any clock skew; clock drift is accounted for by
the auth service.
clientId: the issuing clientId
accessToken: the issuer's accessToken
start: start time of credentials (datetime.datetime)
expiry: expiration time of credentials, (datetime.datetime)
scopes: list of scopes granted
name: credential name (optional)
Returns a dictionary in the form:
{'clientId': str, 'accessToken': str, 'certificate': str}
| 3.626096
| 3.585941
| 1.011198
|
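A usage sketch matching the signature above; the issuer identifiers and scope are hypothetical:

```python
import datetime
import taskcluster

now = datetime.datetime.utcnow()
temp = taskcluster.createTemporaryCredentials(
    clientId='my-issuer',
    accessToken='...',
    start=now,
    expiry=now + datetime.timedelta(hours=1),  # must stay within the 31-day cap
    scopes=['queue:create-task:aws-provisioner-v1/tutorial'],
    name='my-issuer/temp-worker',  # optional named credential; adds `issuer` to the cert
)
# temp == {'clientId': ..., 'accessToken': ..., 'certificate': ...}
```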
o = self.options
c = o.get('credentials', {})
if c.get('clientId') and c.get('accessToken'):
ext = {}
cert = c.get('certificate')
if cert:
if six.PY3 and isinstance(cert, six.binary_type):
cert = cert.decode()
if isinstance(cert, six.string_types):
cert = json.loads(cert)
ext['certificate'] = cert
if 'authorizedScopes' in o:
ext['authorizedScopes'] = o['authorizedScopes']
# .encode('base64') inserts a newline, which hawk doesn't
# like but doesn't strip itself
return utils.makeB64UrlSafe(utils.encodeStringForB64Header(utils.dumpJson(ext)).strip())
else:
return {}
|
def makeHawkExt(self)
|
Make an 'ext' for Hawk authentication
| 5.622715
| 5.1856
| 1.084294
|
if 'expiration' in kwargs:
expiration = kwargs['expiration']
del kwargs['expiration']
else:
expiration = self.options['signedUrlExpiration']
expiration = int(time.time() + expiration) # Mainly so that we throw if it's not a number
requestUrl = self.buildUrl(methodName, *args, **kwargs)
if not self._hasCredentials():
raise exceptions.TaskclusterAuthFailure('Invalid Hawk Credentials')
clientId = utils.toStr(self.options['credentials']['clientId'])
accessToken = utils.toStr(self.options['credentials']['accessToken'])
def genBewit():
# We need to fix the output of get_bewit. It returns a url-safe base64
# encoded string, which contains a list of tokens separated by '\'.
# The first one is the clientId, the second is an int, the third is
# url-safe base64 encoded MAC, the fourth is the ext param.
# The problem is that the nested url-safe base64 encoded MAC must be
# base64 (i.e. not url safe) or server-side will complain.
# id + '\\' + exp + '\\' + mac + '\\' + options.ext;
resource = mohawk.base.Resource(
credentials={
'id': clientId,
'key': accessToken,
'algorithm': 'sha256',
},
method='GET',
ext=utils.toStr(self.makeHawkExt()),
url=requestUrl,
timestamp=expiration,
nonce='',
# content='',
# content_type='',
)
bewit = mohawk.bewit.get_bewit(resource)
return bewit.rstrip('=')
bewit = genBewit()
if not bewit:
raise exceptions.TaskclusterFailure('Did not receive a bewit')
u = urllib.parse.urlparse(requestUrl)
qs = u.query
if qs:
qs += '&'
qs += 'bewit=%s' % bewit
return urllib.parse.urlunparse((
u.scheme,
u.netloc,
u.path,
u.params,
qs,
u.fragment,
))
|
def buildSignedUrl(self, methodName, *args, **kwargs)
|
Build a signed URL. This URL contains the credentials needed to access
a resource.
| 4.749628
| 4.705598
| 1.009357
|
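A sketch of building a bewit-signed URL, assuming a generated client (here `Secrets`, hypothetically) whose `get` method maps to an API entry; `expiration` is the kwarg name consumed by the code above:

```python
import taskcluster

secrets = taskcluster.Secrets({
    'rootUrl': 'https://tc.example.com',
    'credentials': {'clientId': '...', 'accessToken': '...'},
})

# The URL itself carries the credentials; anyone holding it can GET for ~300 seconds.
url = secrets.buildSignedUrl('get', 'project/my-secret', expiration=300)
print(url)  # ...?bewit=...
```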
return liburls.api(
self.options['rootUrl'],
self.serviceName,
self.apiVersion,
route.rstrip('/'))
|
def _constructUrl(self, route)
|
Construct a URL for the given route on this service, based on the
rootUrl
| 16.615015
| 14.44633
| 1.15012
|
x = self._processArgs(entry, *args, **kwargs)
routeParams, payload, query, paginationHandler, paginationLimit = x
route = self._subArgsInRoute(entry, routeParams)
# TODO: Check for limit being in the Query of the api ref
if paginationLimit and 'limit' in entry.get('query', []):
query['limit'] = paginationLimit
if query:
_route = route + '?' + urllib.parse.urlencode(query)
else:
_route = route
response = self._makeHttpRequest(entry['method'], _route, payload)
if paginationHandler:
paginationHandler(response)
while response.get('continuationToken'):
query['continuationToken'] = response['continuationToken']
_route = route + '?' + urllib.parse.urlencode(query)
response = self._makeHttpRequest(entry['method'], _route, payload)
paginationHandler(response)
else:
return response
|
def _makeApiCall(self, entry, *args, **kwargs)
|
This function is used to dispatch calls to other functions
for a given API Reference entry
| 4.068024
| 3.885698
| 1.046922
|
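A sketch of the `paginationHandler` path above: with a handler, the loop keeps following `continuationToken` and results arrive through the callback (the method returns `None` on that branch). The client, namespace, and empty initial payload follow the docstrings earlier in this section and are otherwise assumptions:

```python
import taskcluster

index = taskcluster.Index({'rootUrl': 'https://tc.example.com'})

pages = []

def handler(response):
    pages.append(response)  # called once per page, including the first

index.listTasks('project.releases', {}, paginationHandler=handler)
print(len(pages), 'pages collected')
```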
route = entry['route']
for arg, val in six.iteritems(args):
toReplace = "<%s>" % arg
if toReplace not in route:
raise exceptions.TaskclusterFailure(
'Arg %s not found in route for %s' % (arg, entry['name']))
val = urllib.parse.quote(str(val).encode("utf-8"), '')
route = route.replace("<%s>" % arg, val)
return route.lstrip('/')
|
def _subArgsInRoute(self, entry, args)
|
Given a route like "/task/<taskId>/artifacts" and a mapping like
{"taskId": "12345"}, return a string like "/task/12345/artifacts"
| 4.401681
| 4.408571
| 0.998437
|
cred = self.options.get('credentials')
return (
cred and
'clientId' in cred and
'accessToken' in cred and
cred['clientId'] and
cred['accessToken']
)
|
def _hasCredentials(self)
|
Return True if credentials are given
| 4.38803
| 4.242522
| 1.034297
|
url = self._constructUrl(route)
log.debug('Full URL used is: %s', url)
hawkExt = self.makeHawkExt()
# Serialize payload if given
if payload is not None:
payload = utils.dumpJson(payload)
# Do a loop of retries
retry = -1 # incremented at the top of the loop; attempt 1 is retry 0
retries = self.options['maxRetries']
while retry < retries:
retry += 1
# if this isn't the first retry then we sleep
if retry > 0:
time.sleep(utils.calculateSleepTime(retry))
# Construct header
if self._hasCredentials():
sender = mohawk.Sender(
credentials={
'id': self.options['credentials']['clientId'],
'key': self.options['credentials']['accessToken'],
'algorithm': 'sha256',
},
ext=hawkExt if hawkExt else {},
url=url,
content=payload if payload else '',
content_type='application/json' if payload else '',
method=method,
)
headers = {'Authorization': sender.request_header}
else:
log.debug('Not using hawk!')
headers = {}
if payload:
# Set header for JSON if payload is given, note that we serialize
# outside this loop.
headers['Content-Type'] = 'application/json'
log.debug('Making attempt %d', retry)
try:
response = utils.makeSingleHttpRequest(method, url, payload, headers)
except requests.exceptions.RequestException as rerr:
if retry < retries:
log.warn('Retrying because of: %s' % rerr)
continue
# raise a connection exception
raise exceptions.TaskclusterConnectionError(
"Failed to establish connection",
superExc=rerr
)
# Handle non 2xx status code and retry if possible
status = response.status_code
if status == 204:
return None
# Catch retryable errors and go to the beginning of the loop
# to do the retry
if 500 <= status and status < 600 and retry < retries:
log.warn('Retrying because of a %s status code' % status)
continue
# Throw errors for non-retryable errors
if status < 200 or status >= 300:
data = {}
try:
data = response.json()
except Exception:
pass # Ignore JSON errors in error messages
# Find error message
message = "Unknown Server Error"
if isinstance(data, dict):
message = data.get('message')
else:
if status == 401:
message = "Authentication Error"
elif status == 500:
message = "Internal Server Error"
# Raise TaskclusterAuthFailure if this is an auth issue
if status == 401:
raise exceptions.TaskclusterAuthFailure(
message,
status_code=status,
body=data,
superExc=None
)
# Raise TaskclusterRestFailure for all other issues
raise exceptions.TaskclusterRestFailure(
message,
status_code=status,
body=data,
superExc=None
)
# Try to load JSON
try:
return response.json()
except ValueError:
return {"response": response}
# This code-path should be unreachable
assert False, "Error from last retry should have been raised!"
|
def _makeHttpRequest(self, method, route, payload)
|
Make an HTTP Request for the API endpoint. This method wraps
the logic about doing failure retry and passes off the actual work
of doing an HTTP request to another method.
| 4.123868
| 4.103013
| 1.005083
|
url = self._constructUrl(route)
log.debug('Full URL used is: %s', url)
hawkExt = self.makeHawkExt()
# Serialize payload if given
if payload is not None:
payload = utils.dumpJson(payload)
# Do a loop of retries
retry = -1 # incremented at the top of the loop; attempt 1 is retry 0
retries = self.options['maxRetries']
while retry < retries:
retry += 1
# if this isn't the first retry then we sleep
if retry > 0:
snooze = float(retry * retry) / 10.0
log.info('Sleeping %0.2f seconds for exponential backoff', snooze)
await asyncio.sleep(utils.calculateSleepTime(retry))
# Construct header
if self._hasCredentials():
sender = mohawk.Sender(
credentials={
'id': self.options['credentials']['clientId'],
'key': self.options['credentials']['accessToken'],
'algorithm': 'sha256',
},
ext=hawkExt if hawkExt else {},
url=url,
content=payload if payload else '',
content_type='application/json' if payload else '',
method=method,
)
headers = {'Authorization': sender.request_header}
else:
log.debug('Not using hawk!')
headers = {}
if payload:
# Set header for JSON if payload is given, note that we serialize
# outside this loop.
headers['Content-Type'] = 'application/json'
log.debug('Making attempt %d', retry)
try:
response = await asyncutils.makeSingleHttpRequest(
method, url, payload, headers, session=self.session
)
except aiohttp.ClientError as rerr:
if retry < retries:
log.warn('Retrying because of: %s' % rerr)
continue
# raise a connection exception
raise exceptions.TaskclusterConnectionError(
"Failed to establish connection",
superExc=rerr
)
status = response.status
if status == 204:
return None
# Catch retryable errors and go to the beginning of the loop
# to do the retry
if 500 <= status and status < 600 and retry < retries:
log.warn('Retrying because of a %s status code' % status)
continue
# Throw errors for non-retryable errors
if status < 200 or status >= 300:
# Parse messages from errors
data = {}
try:
data = await response.json()
except Exception:
pass # Ignore JSON errors in error messages
# Find error message
message = "Unknown Server Error"
if isinstance(data, dict):
message = data.get('message')
else:
if status == 401:
message = "Authentication Error"
elif status == 500:
message = "Internal Server Error"
else:
message = "Unknown Server Error %s\n%s" % (str(status), str(data)[:1024])
# Raise TaskclusterAuthFailure if this is an auth issue
if status == 401:
raise exceptions.TaskclusterAuthFailure(
message,
status_code=status,
body=data,
superExc=None
)
# Raise TaskclusterRestFailure for all other issues
raise exceptions.TaskclusterRestFailure(
message,
status_code=status,
body=data,
superExc=None
)
# Try to load JSON
try:
await response.release()
return await response.json()
except (ValueError, aiohttp.client_exceptions.ContentTypeError):
return {"response": response}
# This code-path should be unreachable
assert False, "Error from last retry should have been raised!"
|
async def _makeHttpRequest(self, method, route, payload)
|
Make an HTTP Request for the API endpoint. This method wraps
the logic about doing failure retry and passes off the actual work
of doing an HTTP request to another method.
| 4.14269
| 4.112082
| 1.007444
|
return self._makeApiCall(self.funcinfo["listWorkerTypeSummaries"], *args, **kwargs)
|
def listWorkerTypeSummaries(self, *args, **kwargs)
|
List worker types with details
Return a list of worker types, including some summary information about
current capacity for each. While this list includes all defined worker types,
there may be running EC2 instances for deleted worker types that are not
included here. The list is unordered.
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/list-worker-types-summaries-response.json#``
This method is ``stable``
| 9.098146
| 10.789435
| 0.843246
|
return self._makeApiCall(self.funcinfo["workerTypeLastModified"], *args, **kwargs)
|
def workerTypeLastModified(self, *args, **kwargs)
|
Get Worker Type Last Modified Time
This method is provided to allow workers to see when they were
last modified. The value provided through UserData can be
compared against this value to see if changes have been made.
If the worker type definition has not been changed, the date
should be identical, as it is the same stored value.
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-last-modified.json#``
This method is ``stable``
| 12.954634
| 15.249971
| 0.849486
|
return self._makeApiCall(self.funcinfo["removeWorkerType"], *args, **kwargs)
|
def removeWorkerType(self, *args, **kwargs)
|
Delete Worker Type
Delete a worker type definition. This method will only delete
the worker type definition from the storage table. The actual
deletion will be handled by a background worker. As soon as this
method is called for a worker type, the background worker will
immediately submit requests to cancel all spot requests for this
worker type as well as killing all instances regardless of their
state. If you want to gracefully remove a worker type, you must
either ensure that no tasks are created with that worker type name,
or set maxCapacity to 0, though this is
not a supported or tested action.
This method is ``stable``
| 13.615961
| 18.677698
| 0.728996
|
return self._makeApiCall(self.funcinfo["getSecret"], *args, **kwargs)
|
def getSecret(self, *args, **kwargs)
|
Get a Secret
Retrieve a secret from storage. The result contains any passwords or
other restricted information verbatim as well as a temporary credential
based on the scopes specified when the secret was created.
It is important that this secret is deleted by the consumer (`removeSecret`),
or else the secrets will be visible to any process which can access the
user data associated with the instance.
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-secret-response.json#``
This method is ``stable``
| 12.483332
| 17.721292
| 0.704426
|
return self._makeApiCall(self.funcinfo["instanceStarted"], *args, **kwargs)
|
def instanceStarted(self, *args, **kwargs)
|
Report an instance starting
An instance will report in by giving its instance id as well
as its security token. The token is checked to ensure that it
matches a real, existing token, so that random
machines do not check in. We could generate a different token,
but that seems like overkill.
This method is ``stable``
| 19.048504
| 18.193613
| 1.046989
|
return self._makeApiCall(self.funcinfo["state"], *args, **kwargs)
|
def state(self, *args, **kwargs)
|
Get AWS State for a worker type
Return the state of a given workertype as stored by the provisioner.
This state is stored as two lists: one for running instances and
one for pending requests. The `summary` property contains an updated summary
similar to that returned from `listWorkerTypeSummaries`.
This method is ``stable``
| 16.607525
| 19.154484
| 0.867031
|
return self._makeApiCall(self.funcinfo["backendStatus"], *args, **kwargs)
|
def backendStatus(self, *args, **kwargs)
|
Backend Status
This endpoint is used to show the last time the provisioner
checked in. A check-in is done through the Dead Man's Snitch
API. It is done at the conclusion of a provisioning iteration
and used to tell if the background provisioning process is still
running.
**Warning** this api end-point is **not stable**.
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/backend-status-response.json#``
This method is ``experimental``
| 12.228003
| 15.284621
| 0.80002
|
return await self._makeApiCall(self.funcinfo["set"], *args, **kwargs)
|
async def set(self, *args, **kwargs)
|
Set Secret
Set the secret associated with some key. If the secret already exists, it is
updated instead.
This method takes input: ``v1/secret.json#``
This method is ``stable``
| 16.32428
| 29.481861
| 0.553706
|
return await self._makeApiCall(self.funcinfo["remove"], *args, **kwargs)
|
async def remove(self, *args, **kwargs)
|
Delete Secret
Delete the secret associated with some key.
This method is ``stable``
| 16.918133
| 32.53352
| 0.520022
|
return await self._makeApiCall(self.funcinfo["get"], *args, **kwargs)
|
async def get(self, *args, **kwargs)
|
Read Secret
Read the secret associated with some key. If the secret has recently
expired, the response code 410 is returned. If the caller lacks the
scope necessary to get the secret, the call will fail with a 403 code
regardless of whether the secret exists.
This method gives output: ``v1/secret.json#``
This method is ``stable``
| 16.441494
| 26.650238
| 0.616936
|
while True:
stdout.write(prompt_text)
value = stdout.raw_input(prompt_ext)
if value == '': return default
try:
if cast != None: value = cast(value, *castarg, **castkwarg)
except ValueError as details:
if cast in NICE_INPUT_ERRORS: # see comment above this constant
stderr.write(ERROR_MESSAGE % (NICE_INPUT_ERRORS[cast] % details))
else: stderr.write(ERROR_MESSAGE % (DEFAULT_INPUT_ERRORS % str(details)))
continue
return value
|
def input_object(prompt_text, cast = None, default = None,
prompt_ext = ': ', castarg = [], castkwarg = {})
|
Gets input from the command line and validates it.
prompt_text
A string. Used to prompt the user. Do not include a trailing
space.
prompt_ext
Added on to the prompt at the end. At the moment this must not
include any control characters because it is sent directly to
raw_input.
cast
This can be any callable object (class, function, type, etc). It
simply calls the cast with the given arguments and returns the
result. If a ValueError is raised, it
will output an error message and prompt the user again.
Because some builtin python objects don't do casting in the way
that we might like, you can easily write a wrapper function that
looks at the input and returns the appropriate object or exception.
Look in the cast submodule for examples.
If cast is None, then it will do nothing (and you will have a string)
default
The function returns this value if the user types nothing in. This
can be used to cancel the input, so to speak.
castarg, castkwarg
list and dictionary. Extra arguments passed on to the cast.
| 4.334048
| 4.327962
| 1.001406
|
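Two hedged usage sketches for `input_object`: a builtin cast, and a wrapper cast of the kind the docstring recommends when a builtin doesn't validate the way you want:

```python
# Re-prompts until int() accepts the input; empty input returns the default.
age = input_object('Enter your age', cast=int, default=0)

def yes_no(value):
    # Wrapper cast: raise ValueError to trigger the error message and re-prompt.
    v = value.strip().lower()
    if v in ('y', 'yes'): return True
    if v in ('n', 'no'): return False
    raise ValueError("expected yes or no, got '%s'" % value)

confirmed = input_object('Proceed?', cast=yes_no, default=False)
```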
values = list(values)
for i in range(len(values)):
if not isinstance(values[i], dict):
values[i] = {'values': [values[i]]}
try:
import readline, rlcomplete
wordlist = [ str(v) for value in values
for v in value['values']]
completer = rlcomplete.ListCompleter(wordlist, ignorecase)
readline.parse_and_bind("tab: complete")
readline.set_completer(completer.complete)
except ImportError:
pass
valuelist = []
for item in values:
entry = ( display('bright', item.get('fg'), item.get('bg')) +
str(item['values'][0]) + display(['default']) )
if str(item['values'][0]) == str(default): entry = '['+entry+']'
if list_values: entry += ' : ' + item['desc']
valuelist.append(entry)
if list_values: question += os.linesep + os.linesep.join(valuelist) + os.linesep
else: question += ' (' + '/'.join(valuelist) + ')'
return input_object(question, cast = query_cast, default=default,
castarg=[values,ignorecase])
|
def query(question, values, default=None, list_values = False, ignorecase = True )
|
Present a few options
The question argument is a string, nothing magical.
The values argument accepts input in two different forms. The simpler form
(a tuple with strings) looks like:
.. code-block:: python
('Male','Female')
And it will pop up a question asking the user for a gender and requiring
the user to enter either 'male' or 'female' (case doesn't matter unless
you set the third argument to false).
The other form is something like:
.. code-block:: python
({'values':('Male','M'),'fg':'cyan'},
{'values':('Female','F'),'fg':'magenta'})
This will pop up a question with Male/Female (each with appropriate
colouring). Additionally, if the user types in just 'M', it will be
treated as if 'Male' was typed in. The first item in the 'values' tuple
is treated as default and is the one that is returned by the function
if the user chooses one in that group.
In addition the function can handle non-string objects quite fine. It
simply displays the output object.__str__() and compares the user's input
against that. So the code
.. code-block:: python
query("Python rocks? ",(True, False))
will return a bool (True) when the user types in the string 'True' (Of
course there isn't any other reasonable answer than True anyways :P)
``default`` is the value the function returns if the user types nothing in. This
can be used to cancel the input, so to speak.
Using list_values = True will display a list, with descriptions printed out
from the 'desc' keyword
| 3.99449
| 4.119315
| 0.969698
|
if ignorecase: value = value.lower()
for item in answers:
for a in item['values']:
if ignorecase and (value == str(a).lower()):
return item['values'][0]
elif value == a:
return item['values'][0]
raise ValueError("Response '%s' not understood, please try again." % value)
|
def query_cast(value, answers, ignorecase = False)
|
A cast function for query
Answers should look like the values argument in query
| 3.720143
| 4.064571
| 0.915261
|
try:
import readline, rlcomplete
completer = rlcomplete.PathCompleter()
readline.set_completer_delims(completer.delims)
readline.parse_and_bind("tab: complete")
readline.set_completer(completer.complete)
except ImportError:
pass
while True:
f = raw_input(prompt_text)
if f == '': return default
f = os.path.expanduser(f)
if len(f) != 0 and f[0] == os.path.sep:
f = os.path.abspath(f)
try:
return open(f, *filearg, **filekwarg)
except IOError as e:
stderr.write(ERROR_MESSAGE % ("unable to open %s : %s" % (f, e)))
|
def file_chooser(prompt_text = "Enter File: ", default=None, filearg=[], filekwarg={})
|
A simple tool to get a file from the user. Takes keyword arguments
and passes them to open().
If the user enters nothing the function will return the ``default`` value.
Otherwise it continues to prompt the user until it gets a decent response.
filekwarg may contain arguments passed on to ``open()``.
| 2.461095
| 2.503139
| 0.983204
|
return self._makeApiCall(self.funcinfo["getHookStatus"], *args, **kwargs)
|
def getHookStatus(self, *args, **kwargs)
|
Get hook status
This endpoint will return the current status of the hook. This represents a
snapshot in time and may vary from one call to the next.
This method is deprecated in favor of listLastFires.
This method gives output: ``v1/hook-status.json#``
This method is ``deprecated``
| 10.829028
| 14.410114
| 0.751488
|
return self._makeApiCall(self.funcinfo["createHook"], *args, **kwargs)
|
def createHook(self, *args, **kwargs)
|
Create a hook
This endpoint will create a new hook.
The caller's credentials must include the role that will be used to
create the task. That role must satisfy task.scopes as well as the
necessary scopes to add the task to the queue.
This method takes input: ``v1/create-hook-request.json#``
This method gives output: ``v1/hook-definition.json#``
This method is ``stable``
| 14.777387
| 19.089968
| 0.774092
|
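A hedged sketch of calling this method with the Python client (the
hookGroupId/hookId arguments and the payload shape are assumptions; the
authoritative input shape is ``v1/create-hook-request.json#``):
.. code-block:: python

    import taskcluster

    hooks = taskcluster.Hooks({'rootUrl': 'https://tc.example.com'})
    task_definition = {}  # placeholder: a valid task definition dict
    hooks.createHook('my-group', 'nightly-build', {
        'metadata': {
            'name': 'Nightly build',
            'description': 'Triggers the nightly build task',
            'owner': 'dev@example.com',
            'emailOnError': True,
        },
        'schedule': ['0 0 4 * * *'],  # fire daily at 04:00 UTC
        'task': task_definition,
    })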
return self._makeApiCall(self.funcinfo["listLastFires"], *args, **kwargs)
|
def listLastFires(self, *args, **kwargs)
|
Get information about recent hook fires
This endpoint will return information about the last few times this hook has been
fired, including whether the hook was fired successfully or not.
This method gives output: ``v1/list-lastFires-response.json#``
This method is ``experimental``
| 10.836079
| 12.565206
| 0.862388
|
ref = {
'exchange': 'task-defined',
'name': 'taskDefined',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-defined-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def taskDefined(self, *args, **kwargs)
|
Task Defined Messages
When a task is created or just defined, a message is posted to this
exchange.
This message exchange is mainly useful when tasks are scheduled by a
scheduler that uses `defineTask` as this does not make the task
`pending`. Thus, no `taskPending` message is published.
Please note that messages are also published on this exchange if defined
using `createTask`.
This exchange outputs: ``v1/task-defined-message.json#``
This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
| 3.070005
| 1.896406
| 1.618854
|
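To make the routing-key mechanics concrete, here is a hedged sketch of
building a binding (assuming the surrounding class is the Taskcluster
Python client's ``QueueEvents``; keyword arguments fill routing-key slots,
unset single-word slots become ``*`` and the trailing ``reserved`` slot is
matched with ``#``):
.. code-block:: python

    import taskcluster

    events = taskcluster.QueueEvents({'rootUrl': 'https://tc.example.com'})
    binding = events.taskDefined(schedulerId='my-scheduler')
    # binding['exchange'] names the task-defined exchange, and
    # binding['routingKeyPattern'] is 'primary.*.*.*.*.*.*.my-scheduler.*.#'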
ref = {
'exchange': 'task-pending',
'name': 'taskPending',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-pending-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def taskPending(self, *args, **kwargs)
|
Task Pending Messages
When a task becomes `pending`, a message is posted to this exchange.
This is useful for workers who don't want to constantly poll the queue
for new tasks. The queue remains the authority for task states and
claims, but by using this exchange workers can distribute work
efficiently and reduce their polling interval significantly without
affecting general responsiveness.
This exchange outputs: ``v1/task-pending-message.json#``
This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
| 3.061253
| 1.897729
| 1.613114
|
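As a follow-on sketch, a worker that only cares about its own queue can
bind on ``provisionerId``/``workerType`` and consume these messages instead
of polling tightly (names below are illustrative):
.. code-block:: python

    import taskcluster

    events = taskcluster.QueueEvents({'rootUrl': 'https://tc.example.com'})
    binding = events.taskPending(provisionerId='proj-example',
                                 workerType='ci-small')
    # binding['routingKeyPattern'] is
    # 'primary.*.*.*.*.proj-example.ci-small.*.*.#'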
ref = {
'exchange': 'task-running',
'name': 'taskRunning',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-running-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def taskRunning(self, *args, **kwargs)
|
Task Running Messages
Whenever a task is claimed by a worker, a run is started on the worker,
and a message is posted on this exchange.
This exchange outputs: ``v1/task-running-message.json#``
This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
| 3.048222
| 1.88686
| 1.6155
|
ref = {
'exchange': 'artifact-created',
'name': 'artifactCreated',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/artifact-created-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def artifactCreated(self, *args, **kwargs)
|
Artifact Creation Messages
Whenever the `createArtifact` end-point is called, the queue will create
a record of the artifact and post a message on this exchange. All of this
happens before the queue returns a signed URL for the caller to upload
the actual artifact with (depending on `storageType`).
This means that the actual artifact is rarely available when this message
is posted. But it is not unreasonable to assume that the artifact will
become available at some point later. Most signatures will expire in
30 minutes or so, forcing the uploader to call `createArtifact` with
the same payload again in order to continue uploading the artifact.
However, in most cases (especially for small artifacts) it's very
reasonable to assume the artifact will be available within a few minutes.
This property means that this exchange is mostly useful for tools
monitoring task evaluation. One could also use it to count the number of
artifacts per task, or to _index_ artifacts, though in most cases it'll be
smarter to index artifacts after the task in question has completed
successfully.
This exchange outputs: ``v1/artifact-created-message.json#``
This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
| 3.209855
| 1.916553
| 1.674806
|
ref = {
'exchange': 'task-completed',
'name': 'taskCompleted',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-completed-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def taskCompleted(self, *args, **kwargs)
|
Task Completed Messages
When a task is successfully completed by a worker, a message is posted to
this exchange.
This message is routed using the `runId`, `workerGroup` and `workerId`
that completed the task. But information about additional runs is also
available from the task status structure.
This exchange outputs: ``v1/task-completed-message.json#``
This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
| 3.097823
| 1.925077
| 1.609195
|
ref = {
'exchange': 'task-failed',
'name': 'taskFailed',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-failed-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def taskFailed(self, *args, **kwargs)
|
Task Failed Messages
When a task ran but failed to complete successfully, a message is posted
to this exchange. This means the worker ran the task-specific code, but
that code exited non-zero.
This exchange outputs: ``v1/task-failed-message.json#``
This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
| 3.063371
| 1.909644
| 1.604158
|
ref = {
'exchange': 'task-exception',
'name': 'taskException',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-exception-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def taskException(self, *args, **kwargs)
|
Task Exception Messages
Whenever Taskcluster fails to run a task, a message is posted to this
exchange. This happens if the task isn't completed before its `deadline`,
all retries failed (i.e. workers stopped responding), the task was
canceled by another entity, or the task carried a malformed payload.
The specific _reason_ is evident from the task status structure; refer
to the `reasonResolved` property for the last run.
This exchange outputs: ``v1/task-exception-message.json#``
This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
| 3.078043
| 1.890791
| 1.627913
|
ref = {
'exchange': 'task-group-resolved',
'name': 'taskGroupResolved',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-group-resolved.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
def taskGroupResolved(self, *args, **kwargs)
|
Task Group Resolved Messages
A message is published on task-group-resolved whenever all submitted
tasks (whether scheduled or unscheduled) for a given task group have
been resolved, regardless of whether they resolved as successful or
not. A task group may be resolved multiple times, since new tasks may
be submitted against an already resolved task group.
This exchange outputs: ``v1/task-group-resolved.json#``
This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskGroupId: `taskGroupId` for the task-group this message concerns (required)
* schedulerId: `schedulerId` for the task-group this message concerns (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
| 4.761736
| 2.957434
| 1.610091
|
if text != self.text:
self.matches = self.completelist(text)
self.text = text
try:
return self.matches[state]
except IndexError:
return None
|
def complete(self, text, state)
|
The actual completion method
This method is not meant to be overridden. Override the
completelist method instead; it will make your life much easier.
For more detail, see the documentation for ``readline.set_completer``.
| 3.894924
| 3.502329
| 1.112095
|
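A hedged sketch of wiring a completer into ``readline`` (it assumes this
class lives in an importable ``rlcomplete`` module, as the ``file_chooser``
snippet above suggests, and that the constructor takes no arguments):
.. code-block:: python

    import readline
    import rlcomplete  # assumed module name

    class ColourCompleter(rlcomplete.PathCompleter):
        # Override completelist, not complete, as advised above.
        def completelist(self, text):
            colours = ['red', 'green', 'blue']
            return [c for c in colours if c.startswith(text)]

    completer = ColourCompleter()
    readline.set_completer_delims(completer.delims)
    readline.parse_and_bind('tab: complete')
    readline.set_completer(completer.complete)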
if not prefix.startswith('~'):
raise ValueError("prefix must start with ~")
try: import pwd
except ImportError:
try: import winpwd as pwd
except ImportError: return []
return ['~' + u[0] for u in pwd.getpwall() if u[0].startswith(prefix[1:])]
|
def matchuserhome(prefix)
|
Find matches that start with ``prefix``.
For example, if prefix = '~user' this returns a list of possible
matches of the form ['~userspam', '~usereggs'], etc.
matchuserhome('~') returns all users.
| 4.089888
| 3.751215
| 1.090284
|
path = os.path.expanduser(text)
if len(path) == 0 or path[0] != os.path.sep:
path = os.path.join(os.getcwd(), path)
if text == '~':
dpath = dtext = ''
bpath = '~'
files = ['~/']
elif text.startswith('~') and text.find('/', 1) < 0:
return self.matchuserhome(text)
else:
dtext = os.path.dirname(text)
dpath = os.path.dirname(path)
bpath = os.path.basename(path)
files = os.listdir(dpath)
if bpath == '':
matches = [self.buildpath(text, f) for f in files if not f.startswith('.')]
else:
matches = [self.buildpath(dtext, f) for f in files if f.startswith(bpath)]
if len(matches) == 0 and os.path.basename(path) == '..':
files = os.listdir(path)
matches = [os.path.join(text, f) for f in files]
return matches
|
def completelist(self, text)
|
Return a list of potential matches for completion.
N.B. if you want to complete a file in the current working directory
whose name starts with '~', type './~' instead. Paths that start with
'~' are magical and refer to users' home directories.
| 2.928554
| 2.821175
| 1.038062
|
return await self._makeApiCall(self.funcinfo["status"], *args, **kwargs)
|
async def status(self, *args, **kwargs)
|
Get task status
Get task status structure from `taskId`
This method gives output: ``v1/task-status-response.json#``
This method is ``stable``
| 15.748368
| 22.937397
| 0.68658
|
return await self._makeApiCall(self.funcinfo["listTaskGroup"], *args, **kwargs)
|
async def listTaskGroup(self, *args, **kwargs)
|
List Task Group
List tasks sharing the same `taskGroupId`.
As a task-group may contain an unbounded number of tasks, this end-point
may return a `continuationToken`. To continue listing tasks you must call
`listTaskGroup` again with the `continuationToken` as the
query-string option `continuationToken`.
By default this end-point will try to return up to 1000 members in one
request. But it **may return less**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listTaskGroup` with the last `continuationToken` until you
get a result without a `continuationToken`.
If you are not interested in listing all the members at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/list-task-group-response.json#``
This method is ``stable``
| 11.194547
| 19.158709
| 0.584306
|
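A hedged pagination sketch with the async Python client (the client
construction and the ``query`` keyword follow common taskcluster client
conventions and are assumptions here):
.. code-block:: python

    import taskcluster.aio

    async def all_members(task_group_id):
        queue = taskcluster.aio.Queue({'rootUrl': 'https://tc.example.com'})
        tasks, token = [], None
        while True:
            query = {'continuationToken': token} if token else {}
            result = await queue.listTaskGroup(task_group_id, query=query)
            tasks.extend(result['tasks'])
            token = result.get('continuationToken')
            if not token:  # no token means we have seen everything
                return tasks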
return await self._makeApiCall(self.funcinfo["listDependentTasks"], *args, **kwargs)
|
async def listDependentTasks(self, *args, **kwargs)
|
List Dependent Tasks
List tasks that depend on the given `taskId`.
As many tasks from different task-groups may depend on a single task,
this end-point may return a `continuationToken`. To continue listing
tasks you must call `listDependentTasks` again with the
`continuationToken` as the query-string option `continuationToken`.
By default this end-point will try to return up to 1000 tasks in one
request. But it **may return less**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listDependentTasks` with the last `continuationToken` until
you get a result without a `continuationToken`.
If you are not interested in listing all the tasks at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/list-dependent-tasks-response.json#``
This method is ``stable``
| 11.768744
| 19.635559
| 0.599359
|
return await self._makeApiCall(self.funcinfo["createTask"], *args, **kwargs)
|
async def createTask(self, *args, **kwargs)
|
Create New Task
Create a new task. This is an **idempotent** operation, so repeat it if
you get an internal server error or the network connection is dropped.
**Task `deadline`**: the deadline property can be no more than 5 days
into the future. This is to limit the amount of pending tasks not being
taken care of. Ideally, you should use a much shorter deadline.
**Task expiration**: the `expires` property must be greater than the
task `deadline`. If not provided it will default to `deadline` + one
year. Note that artifacts created by the task must expire before the task does.
**Task specific routing-keys**: using the `task.routes` property you may
define task specific routing-keys. If a task has a task specific
routing-key: `<route>`, then when the AMQP message about the task is
published, the message will be CC'ed with the routing-key:
`route.<route>`. This is useful if you want another component to listen
for completed tasks you have posted. The caller must have scope
`queue:route:<route>` for each route.
**Dependencies**: any tasks referenced in `task.dependencies` must have
already been created at the time of this call.
**Scopes**: Note that the scopes required to complete this API call depend
on the content of the `scopes`, `routes`, `schedulerId`, `priority`,
`provisionerId`, and `workerType` properties of the task definition.
**Legacy Scopes**: The `queue:create-task:..` scope without a priority and
the `queue:define-task:..` and `queue:task-group-id:..` scopes are considered
legacy and should not be used. Note that the new, non-legacy scopes require
a `queue:scheduler-id:..` scope as well as scopes for the proper priority.
This method takes input: ``v1/create-task-request.json#``
This method gives output: ``v1/task-status-response.json#``
This method is ``stable``
| 12.782084
| 22.82023
| 0.560121
|
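A hedged sketch of a minimal call (helper names like ``slugId`` and
``fromNowJSON`` follow the Python client's conventions; the payload shape
is governed by ``v1/create-task-request.json#``):
.. code-block:: python

    import taskcluster

    async def submit(queue):
        task_id = taskcluster.slugId()
        # Idempotent: retrying with the same taskId and payload is safe.
        return await queue.createTask(task_id, {
            'provisionerId': 'proj-example',
            'workerType': 'ci-small',
            'created': taskcluster.fromNowJSON('0 seconds'),
            'deadline': taskcluster.fromNowJSON('2 hours'),  # well under 5 days
            'routes': ['index.project.example.latest'],  # needs queue:route:<route>
            'payload': {},
            'metadata': {
                'name': 'example task',
                'description': 'sketch only',
                'owner': 'dev@example.com',
                'source': 'https://example.com/',
            },
        })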
return await self._makeApiCall(self.funcinfo["claimWork"], *args, **kwargs)
|
async def claimWork(self, *args, **kwargs)
|
Claim Work
Claim pending task(s) for the given `provisionerId`/`workerType` queue.
If any work is available (even if fewer than the requested number of
tasks), this will return immediately. Otherwise, it will block for tens
of seconds waiting for work. If no work appears, it will return an empty
list of tasks. Callers should sleep a short while (to avoid denial of
service in an error condition) and call the endpoint again. This is a
simple implementation of "long polling".
This method takes input: ``v1/claim-work-request.json#``
This method gives output: ``v1/claim-work-response.json#``
This method is ``stable``
| 14.032276
| 22.696192
| 0.618266
|
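A hedged long-polling sketch (the payload keys follow
``v1/claim-work-request.json#`` conventions; ``run_task`` is a
hypothetical task runner):
.. code-block:: python

    import asyncio

    async def run_task(claim):
        pass  # hypothetical: execute the claimed task's payload here

    async def work_loop(queue, provisioner_id, worker_type):
        while True:
            result = await queue.claimWork(provisioner_id, worker_type, {
                'workerGroup': 'my-group',
                'workerId': 'worker-1',
                'tasks': 4,  # claim up to 4 tasks at once
            })
            if not result['tasks']:
                # brief sleep to avoid hammering the endpoint on errors
                await asyncio.sleep(5)
                continue
            for claim in result['tasks']:
                await run_task(claim)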
return await self._makeApiCall(self.funcinfo["claimTask"], *args, **kwargs)
|
async def claimTask(self, *args, **kwargs)
|
Claim Task
claim a task - never documented
This method takes input: ``v1/task-claim-request.json#``
This method gives output: ``v1/task-claim-response.json#``
This method is ``deprecated``
| 13.626156
| 18.705124
| 0.728472
|