| _id | title | partition | text | language | meta_information |
|---|---|---|---|---|---|
q10300
|
Hooks.getTriggerToken
|
train
|
async def getTriggerToken(self, *args, **kwargs):
"""
Get a trigger token
Retrieve a unique secret token for triggering the specified hook. This
token can be deactivated with `resetTriggerToken`.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["getTriggerToken"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10301
|
Hooks.resetTriggerToken
|
train
|
async def resetTriggerToken(self, *args, **kwargs):
"""
Reset a trigger token
    Reset the token for triggering a given hook, replacing it with a new
    token. This invalidates any token previously issued via `getTriggerToken`.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["resetTriggerToken"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10302
|
Hooks.triggerHookWithToken
|
train
|
async def triggerHookWithToken(self, *args, **kwargs):
"""
Trigger a hook with a token
This endpoint triggers a defined hook with a valid token.
The HTTP payload must match the hooks `triggerSchema`. If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.
This method takes input: ``v1/trigger-hook.json#``
This method gives output: ``v1/trigger-hook-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["triggerHookWithToken"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10303
|
AwsProvisioner.createWorkerType
|
train
|
async def createWorkerType(self, *args, **kwargs):
"""
Create new Worker Type
Create a worker type. A worker type contains all the configuration
needed for the provisioner to manage the instances. Each worker type
knows which regions and which instance types are allowed for that
worker type. Remember that Capacity is the number of concurrent tasks
that can be run on a given EC2 resource and that Utility is the relative
    performance rate between different instance types. There is no way to
    configure different regions to have different sets of instance types,
    so ensure that all instance types are available in all regions.
    This function is idempotent.
    Once a worker type is in the provisioner, a background process will
    begin creating instances for it based on its capacity bounds and its
    pending task count from the Queue. It is the worker's responsibility
    to shut itself down. The provisioner has a limit (currently 96 hours)
    for all instances to prevent zombie instances from running indefinitely.
    The provisioner will ensure that all instances created are tagged with
    AWS resource tags containing the provisioner id and the worker type.
If provided, the secrets in the global, region and instance type sections
are available using the secrets api. If specified, the scopes provided
will be used to generate a set of temporary credentials available with
the other secrets.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["createWorkerType"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10304
|
AwsProvisioner.updateWorkerType
|
train
|
async def updateWorkerType(self, *args, **kwargs):
"""
Update Worker Type
Provide a new copy of a worker type to replace the existing one.
This will overwrite the existing worker type definition if there
    is already a worker type of that name. This method will return a
    200 response along with a copy of the worker type definition created.
    Note that if you are using the result of a GET on the worker-type
    endpoint, you will need to delete the lastModified and workerType
    keys from the object returned, since those fields are not allowed in
    the request body for this method.
Otherwise, all input requirements and actions are the same as the
create method.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["updateWorkerType"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10305
|
AwsProvisioner.workerType
|
train
|
async def workerType(self, *args, **kwargs):
"""
Get Worker Type
Retrieve a copy of the requested worker type definition.
This copy contains a lastModified field as well as the worker
type name. As such, it will require manipulation to be able to
    use the results of this method to submit data to the update
    method.
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["workerType"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10306
|
AwsProvisioner.createSecret
|
train
|
async def createSecret(self, *args, **kwargs):
"""
Create new Secret
Insert a secret into the secret storage. The supplied secrets will
    be provided verbatim via `getSecret`, while the supplied scopes will
be converted into credentials by `getSecret`.
This method is not ordinarily used in production; instead, the provisioner
creates a new secret directly for each spot bid.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-secret-request.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["createSecret"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10307
|
AwsProvisioner.removeSecret
|
train
|
async def removeSecret(self, *args, **kwargs):
"""
Remove a Secret
Remove a secret. After this call, a call to `getSecret` with the given
token will return no information.
It is very important that the consumer of a
secret delete the secret from storage before handing over control
to untrusted processes to prevent credential and/or secret leakage.
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["removeSecret"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10308
|
AwsProvisioner.getLaunchSpecs
|
train
|
async def getLaunchSpecs(self, *args, **kwargs):
"""
Get All Launch Specifications for WorkerType
This method returns a preview of all possible launch specifications
that this worker type definition could submit to EC2. It is used to
    test worker types, nothing more.
**This API end-point is experimental and may be subject to change without warning.**
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-launch-specs-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["getLaunchSpecs"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10309
|
resolve_font
|
train
|
def resolve_font(name):
"""Turns font names into absolute filenames
This is case sensitive. The extension should be omitted.
For example::
>>> path = resolve_font('NotoSans-Bold')
>>> fontdir = os.path.join(os.path.dirname(__file__), 'fonts')
>>> noto_path = os.path.join(fontdir, 'NotoSans-Bold.ttf')
>>> noto_path = os.path.abspath(noto_path)
>>> assert path == noto_path
Absolute paths are allowed::
>>> resolve_font(noto_path) == noto_path
True
Raises :exc:`FontNotFound` on failure::
>>> try:
... resolve_font('blahahaha')
... assert False
... except FontNotFound:
... pass
"""
if os.path.exists(name):
return os.path.abspath(name)
fonts = get_font_files()
if name in fonts:
return fonts[name]
raise FontNotFound("Can't find %r :'( Try adding it to ~/.fonts" % name)
|
python
|
{
"resource": ""
}
|
q10310
|
get_font_files
|
train
|
def get_font_files():
"""Returns a list of all font files we could find
Returned as a list of dir/files tuples::
get_font_files() -> {'FontName': '/abs/FontName.ttf', ...]
For example::
>>> fonts = get_font_files()
>>> 'NotoSans-Bold' in fonts
True
>>> fonts['NotoSans-Bold'].endswith('/NotoSans-Bold.ttf')
True
"""
roots = [
'/usr/share/fonts/truetype', # where ubuntu puts fonts
'/usr/share/fonts', # where fedora puts fonts
os.path.expanduser('~/.fonts'), # custom user fonts
os.path.abspath(os.path.join(os.path.dirname(__file__), 'fonts')),
]
result = {}
for root in roots:
for path, dirs, names in os.walk(root):
for name in names:
if name.endswith(('.ttf', '.otf')):
result[name[:-4]] = os.path.join(path, name)
return result
|
python
|
{
"resource": ""
}
|
q10311
|
PurgeCache.purgeCache
|
train
|
def purgeCache(self, *args, **kwargs):
"""
Purge Worker Cache
Publish a purge-cache message to purge caches named `cacheName` with
`provisionerId` and `workerType` in the routing-key. Workers should
be listening for this message and purge caches when they see it.
This method takes input: ``v1/purge-cache-request.json#``
This method is ``stable``
"""
return self._makeApiCall(self.funcinfo["purgeCache"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10312
|
is_asdf
|
train
|
def is_asdf(raw):
"""If the password is in the order on keyboard."""
reverse = raw[::-1]
asdf = ''.join(ASDF)
return raw in asdf or reverse in asdf
|
python
|
{
"resource": ""
}
|
q10313
|
is_by_step
|
train
|
def is_by_step(raw):
"""If the password is alphabet step by step."""
# make sure it is unicode
delta = ord(raw[1]) - ord(raw[0])
for i in range(2, len(raw)):
if ord(raw[i]) - ord(raw[i-1]) != delta:
return False
return True
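# Illustrative checks against the function above (fixed-step sequences in
# either direction count as "step by step"):
assert is_by_step('abcdef')   # delta +1
assert is_by_step('aceg')     # delta +2
assert is_by_step('fedcba')   # delta -1
assert not is_by_step('abcxyz')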
|
python
|
{
"resource": ""
}
|
q10314
|
is_common_password
|
train
|
def is_common_password(raw, freq=0):
"""If the password is common used.
10k top passwords: https://xato.net/passwords/more-top-worst-passwords/
"""
frequent = WORDS.get(raw, 0)
if freq:
return frequent > freq
return bool(frequent)
|
python
|
{
"resource": ""
}
|
q10315
|
check
|
train
|
def check(raw, length=8, freq=0, min_types=3, level=STRONG):
"""Check the safety level of the password.
:param raw: raw text password.
:param length: minimal length of the password.
:param freq: minimum frequency.
:param min_types: minimum character family.
:param level: minimum level to validate a password.
"""
raw = to_unicode(raw)
if level > STRONG:
level = STRONG
if len(raw) < length:
return Strength(False, 'terrible', 'password is too short')
if is_asdf(raw) or is_by_step(raw):
return Strength(False, 'simple', 'password has a pattern')
if is_common_password(raw, freq=freq):
return Strength(False, 'simple', 'password is too common')
types = 0
if LOWER.search(raw):
types += 1
if UPPER.search(raw):
types += 1
if NUMBER.search(raw):
types += 1
if MARKS.search(raw):
types += 1
if types < 2:
return Strength(level <= SIMPLE, 'simple', 'password is too simple')
if types < min_types:
return Strength(level <= MEDIUM, 'medium',
'password is good enough, but not strong')
return Strength(True, 'strong', 'password is perfect')
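# Illustrative calls (assumption: the functions above live in a module named
# `safe`; results shown assume the default thresholds):
from safe import check

print(check('abcdefg'))      # too short (default minimum length is 8)
print(check('abcdefgh'))     # stepwise alphabet -> flagged as a pattern
print(check('password123'))  # found in the common-password list
print(check('S0me!PassWd'))  # mixes cases, digits and marks -> strong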
|
python
|
{
"resource": ""
}
|
q10316
|
InfobloxNetMRI._make_request
|
train
|
def _make_request(self, url, method="get", data=None, extra_headers=None):
"""Prepares the request, checks for authentication and retries in case of issues
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict
"""
    attempts = 0
    max_attempts = 2  # one try plus a single retry after re-authentication
    while attempts < max_attempts:
        # Authenticate first if not authenticated already
        if not self._is_authenticated:
            self._authenticate()
        # Make the request and check for authentication errors
        # This allows us to catch session timeouts for long standing connections
        try:
            return self._send_request(url, method, data, extra_headers)
        except HTTPError as e:
            if e.response.status_code == 403 and attempts < max_attempts - 1:
                logger.info("Authenticated session against NetMRI timed out. Retrying.")
                self._is_authenticated = False
                attempts += 1
            else:
                # re-raise other HTTP errors, or a 403 on the final attempt
                # (the original `while attempts < 1` exited before ever retrying)
                raise
|
python
|
{
"resource": ""
}
|
q10317
|
InfobloxNetMRI._send_request
|
train
|
def _send_request(self, url, method="get", data=None, extra_headers=None):
"""Performs a given request and returns a json object
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict
"""
headers = {'Content-type': 'application/json'}
if isinstance(extra_headers, dict):
headers.update(extra_headers)
if not data or "password" not in data:
logger.debug("Sending {method} request to {url} with data {data}".format(
method=method.upper(), url=url, data=data)
)
r = self.session.request(method, url, headers=headers, data=data)
r.raise_for_status()
return r.json()
|
python
|
{
"resource": ""
}
|
q10318
|
InfobloxNetMRI._get_api_version
|
train
|
def _get_api_version(self):
"""Fetches the most recent API version
Returns:
str
"""
url = "{base_url}/api/server_info".format(base_url=self._base_url())
server_info = self._make_request(url=url, method="get")
return server_info["latest_api_version"]
|
python
|
{
"resource": ""
}
|
q10319
|
InfobloxNetMRI._authenticate
|
train
|
def _authenticate(self):
""" Perform an authentication against NetMRI"""
url = "{base_url}/api/authenticate".format(base_url=self._base_url())
data = json.dumps({'username': self.username, "password": self.password})
# Bypass authentication check in make_request by using _send_request
logger.debug("Authenticating against NetMRI")
self._send_request(url, method="post", data=data)
self._is_authenticated = True
|
python
|
{
"resource": ""
}
|
q10320
|
InfobloxNetMRI._controller_name
|
train
|
def _controller_name(self, objtype):
"""Determines the controller name for the object's type
Args:
objtype (str): The object type
Returns:
A string with the controller name
"""
# would be better to use inflect.pluralize here, but would add a dependency
if objtype.endswith('y'):
return objtype[:-1] + 'ies'
if objtype[-1] in 'sx' or objtype[-2:] in ['sh', 'ch']:
return objtype + 'es'
if objtype.endswith('an'):
return objtype[:-2] + 'en'
return objtype + 's'
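# Standalone sketch of the same pluralization rules, to show expected outputs:
def controller_name(objtype):
    if objtype.endswith('y'):
        return objtype[:-1] + 'ies'
    if objtype[-1] in 'sx' or objtype[-2:] in ['sh', 'ch']:
        return objtype + 'es'
    if objtype.endswith('an'):
        return objtype[:-2] + 'en'
    return objtype + 's'

assert controller_name('device') == 'devices'
assert controller_name('discovery') == 'discoveries'
assert controller_name('switch') == 'switches'
# note the 'an' -> 'en' rule is aggressive: 'vlan' would become 'vlen'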
|
python
|
{
"resource": ""
}
|
q10321
|
InfobloxNetMRI._object_url
|
train
|
def _object_url(self, objtype, objid):
"""Generate the URL for the specified object
Args:
objtype (str): The object's type
        objid (int): The object's ID
Returns:
A string containing the URL of the object
"""
return "{base_url}/api/{api_version}/{controller}/{obj_id}".format(
base_url=self._base_url(),
api_version=self.api_version,
controller=self._controller_name(objtype),
obj_id=objid
)
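# Example of the URL this produces (base URL, version and ID are illustrative):
print("{base_url}/api/{api_version}/{controller}/{obj_id}".format(
    base_url="https://netmri.example.com",
    api_version="3.8",
    controller="devices",  # i.e. _controller_name('device')
    obj_id=42,
))  # -> https://netmri.example.com/api/3.8/devices/42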
|
python
|
{
"resource": ""
}
|
q10322
|
InfobloxNetMRI._method_url
|
train
|
def _method_url(self, method_name):
"""Generate the URL for the requested method
Args:
method_name (str): Name of the method
Returns:
A string containing the URL of the method
"""
return "{base_url}/api/{api}/{method}".format(
base_url=self._base_url(),
api=self.api_version,
method=method_name
)
|
python
|
{
"resource": ""
}
|
q10323
|
InfobloxNetMRI.api_request
|
train
|
def api_request(self, method_name, params):
"""Execute an arbitrary method.
Args:
method_name (str): include the controller name: 'devices/search'
params (dict): the method parameters
Returns:
A dict with the response
Raises:
requests.exceptions.HTTPError
"""
url = self._method_url(method_name)
data = json.dumps(params)
return self._make_request(url=url, method="post", data=data)
|
python
|
{
"resource": ""
}
|
q10324
|
InfobloxNetMRI.show
|
train
|
def show(self, objtype, objid):
"""Query for a specific resource by ID
Args:
objtype (str): object type, e.g. 'device', 'interface'
objid (int): object ID (DeviceID, etc.)
Returns:
A dict with that object
Raises:
requests.exceptions.HTTPError
"""
url = self._object_url(objtype, int(objid))
return self._make_request(url, method="get")
|
python
|
{
"resource": ""
}
|
q10325
|
Notify.irc
|
train
|
def irc(self, *args, **kwargs):
"""
Post IRC Message
Post a message on IRC to a specific channel or user, or a specific user
on a specific channel.
Success of this API method does not imply the message was successfully
posted. This API method merely inserts the IRC message into a queue
that will be processed by a background process.
This allows us to re-send the message in face of connection issues.
However, if the user isn't online the message will be dropped without
    error. We may improve this behavior in the future. For now, just keep
in mind that IRC is a best-effort service.
This method takes input: ``v1/irc-request.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["irc"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10326
|
Notify.addDenylistAddress
|
train
|
def addDenylistAddress(self, *args, **kwargs):
"""
Denylist Given Address
Add the given address to the notification denylist. The address
    can be any of the three supported address types, namely pulse, email,
    or IRC (user or channel). Addresses in the denylist will be ignored
by the notification service.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["addDenylistAddress"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10327
|
Notify.deleteDenylistAddress
|
train
|
def deleteDenylistAddress(self, *args, **kwargs):
"""
Delete Denylisted Address
Delete the specified address from the notification denylist.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["deleteDenylistAddress"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10328
|
Notify.list
|
train
|
def list(self, *args, **kwargs):
"""
List Denylisted Notifications
Lists all the denylisted addresses.
By default this end-point will try to return up to 1000 addresses in one
    request. But it **may return less**, even if more addresses are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `list` with the last `continuationToken` until you
get a result without a `continuationToken`.
If you are not interested in listing all the members at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/notification-address-list.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["list"], *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10329
|
AuthEvents.clientCreated
|
train
|
def clientCreated(self, *args, **kwargs):
"""
Client Created Messages
Message that a new client has been created.
    This exchange outputs: ``v1/client-message.json#``. This exchange takes the following keys:
    * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
"""
ref = {
'exchange': 'client-created',
'name': 'clientCreated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10330
|
AuthEvents.clientUpdated
|
train
|
def clientUpdated(self, *args, **kwargs):
"""
Client Updated Messages
    Message that a client has been updated.
    This exchange outputs: ``v1/client-message.json#``. This exchange takes the following keys:
    * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
"""
ref = {
'exchange': 'client-updated',
'name': 'clientUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10331
|
AuthEvents.clientDeleted
|
train
|
def clientDeleted(self, *args, **kwargs):
"""
Client Deleted Messages
    Message that a client has been deleted.
    This exchange outputs: ``v1/client-message.json#``. This exchange takes the following keys:
    * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
"""
ref = {
'exchange': 'client-deleted',
'name': 'clientDeleted',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10332
|
AuthEvents.roleCreated
|
train
|
def roleCreated(self, *args, **kwargs):
"""
Role Created Messages
Message that a new role has been created.
    This exchange outputs: ``v1/role-message.json#``. This exchange takes the following keys:
    * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
"""
ref = {
'exchange': 'role-created',
'name': 'roleCreated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10333
|
AuthEvents.roleUpdated
|
train
|
def roleUpdated(self, *args, **kwargs):
"""
Role Updated Messages
    Message that a role has been updated.
    This exchange outputs: ``v1/role-message.json#``. This exchange takes the following keys:
    * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
"""
ref = {
'exchange': 'role-updated',
'name': 'roleUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10334
|
AuthEvents.roleDeleted
|
train
|
def roleDeleted(self, *args, **kwargs):
"""
Role Deleted Messages
    Message that a role has been deleted.
    This exchange outputs: ``v1/role-message.json#``. This exchange takes the following keys:
    * reserved: Space reserved for future routing-key entries; you should always match this entry with `#`, as is automatically done by our tooling if not specified.
"""
ref = {
'exchange': 'role-deleted',
'name': 'roleDeleted',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10335
|
memoize
|
train
|
def memoize(function):
"""A very simple memoize decorator to optimize pure-ish functions
Don't use this unless you've examined the code and see the
potential risks.
"""
cache = {}
@functools.wraps(function)
def _memoize(*args):
if args in cache:
return cache[args]
result = function(*args)
cache[args] = result
return result
    return _memoize  # return the wrapping function; returning `function` would bypass the cache
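# Usage sketch for the decorator above; memoization turns the naive
# exponential recursion into a linear one (positional, hashable args only):
@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed quickly thanks to the cache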
|
python
|
{
"resource": ""
}
|
q10336
|
TerminalInfo.dimensions
|
train
|
def dimensions(self):
"""Returns terminal dimensions
Don't save this information for long periods of time because
the user might resize their terminal.
:return: Returns ``(width, height)``. If there's no terminal
to be found, we'll just return ``(79, 40)``.
"""
try:
call = fcntl.ioctl(self.termfd, termios.TIOCGWINSZ, "\000" * 8)
except IOError:
return (79, 40)
else:
height, width = struct.unpack("hhhh", call)[:2]
return (width, height)
|
python
|
{
"resource": ""
}
|
q10337
|
sp_msg
|
train
|
def sp_msg(cmd, pipe=None, data=None):
"""Produces skypipe protocol multipart message"""
msg = [SP_HEADER, cmd]
if pipe is not None:
msg.append(pipe)
if data is not None:
msg.append(data)
return msg
|
python
|
{
"resource": ""
}
|
q10338
|
stream_skypipe_output
|
train
|
def stream_skypipe_output(endpoint, name=None):
"""Generator for reading skypipe data"""
name = name or ''
socket = ctx.socket(zmq.DEALER)
socket.connect(endpoint)
try:
socket.send_multipart(sp_msg(SP_CMD_LISTEN, name))
while True:
msg = socket.recv_multipart()
try:
data = parse_skypipe_data_stream(msg, name)
if data:
yield data
            except EOFError:
                return  # PEP 479: raising StopIteration inside a generator is an error
finally:
socket.send_multipart(sp_msg(SP_CMD_UNLISTEN, name))
socket.close()
|
python
|
{
"resource": ""
}
|
q10339
|
parse_skypipe_data_stream
|
train
|
def parse_skypipe_data_stream(msg, for_pipe):
"""May return data from skypipe message or raises EOFError"""
    if len(msg) < 4:
        return  # malformed message; not enough frames to parse
    header = str(msg.pop(0))
command = str(msg.pop(0))
pipe_name = str(msg.pop(0))
data = str(msg.pop(0))
if header != SP_HEADER: return
if pipe_name != for_pipe: return
if command != SP_CMD_DATA: return
if data == SP_DATA_EOF:
raise EOFError()
else:
return data
|
python
|
{
"resource": ""
}
|
q10340
|
skypipe_input_stream
|
train
|
def skypipe_input_stream(endpoint, name=None):
"""Returns a context manager for streaming data into skypipe"""
name = name or ''
class context_manager(object):
def __enter__(self):
self.socket = ctx.socket(zmq.DEALER)
self.socket.connect(endpoint)
return self
def send(self, data):
data_msg = sp_msg(SP_CMD_DATA, name, data)
self.socket.send_multipart(data_msg)
def __exit__(self, *args, **kwargs):
eof_msg = sp_msg(SP_CMD_DATA, name, SP_DATA_EOF)
self.socket.send_multipart(eof_msg)
self.socket.close()
return context_manager()
|
python
|
{
"resource": ""
}
|
q10341
|
stream_stdin_lines
|
train
|
def stream_stdin_lines():
"""Generator for unbuffered line reading from STDIN"""
stdin = os.fdopen(sys.stdin.fileno(), 'r', 0)
while True:
line = stdin.readline()
if line:
yield line
else:
break
|
python
|
{
"resource": ""
}
|
q10342
|
run
|
train
|
def run(endpoint, name=None):
"""Runs the skypipe client"""
try:
if os.isatty(0):
# output mode
for data in stream_skypipe_output(endpoint, name):
sys.stdout.write(data)
sys.stdout.flush()
else:
# input mode
with skypipe_input_stream(endpoint, name) as stream:
for line in stream_stdin_lines():
stream.send(line)
except KeyboardInterrupt:
pass
|
python
|
{
"resource": ""
}
|
q10343
|
DateTimeRange.validate_time_inversion
|
train
|
def validate_time_inversion(self):
"""
Check time inversion of the time range.
    :raises ValueError:
        If |attr_start_datetime| is
        later than |attr_end_datetime|.
    :raises TypeError:
        If either |attr_start_datetime| or |attr_end_datetime|,
        or both, is an inappropriate datetime value.
    :Sample Code:
        .. code:: python
            from datetimerange import DateTimeRange
            time_range = DateTimeRange("2015-03-22T10:10:00+0900", "2015-03-22T10:00:00+0900")
            try:
                time_range.validate_time_inversion()
            except ValueError:
                print("time inversion")
:Output:
.. parsed-literal::
time inversion
"""
if not self.is_set():
# for python2/3 compatibility
raise TypeError
if self.start_datetime > self.end_datetime:
raise ValueError(
"time inversion found: {:s} > {:s}".format(
str(self.start_datetime), str(self.end_datetime)
)
)
|
python
|
{
"resource": ""
}
|
q10344
|
DateTimeRange.set_start_datetime
|
train
|
def set_start_datetime(self, value, timezone=None):
"""
Set the start time of the time range.
:param value: |param_start_datetime|
:type value: |datetime|/|str|
:raises ValueError: If the value is invalid as a |datetime| value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange()
print(time_range)
time_range.set_start_datetime("2015-03-22T10:00:00+0900")
print(time_range)
:Output:
.. parsed-literal::
NaT - NaT
2015-03-22T10:00:00+0900 - NaT
"""
if value is None:
self.__start_datetime = None
return
try:
self.__start_datetime = typepy.type.DateTime(
value, strict_level=typepy.StrictLevel.MIN, timezone=timezone
).convert()
except typepy.TypeConversionError as e:
raise ValueError(e)
|
python
|
{
"resource": ""
}
|
q10345
|
DateTimeRange.set_end_datetime
|
train
|
def set_end_datetime(self, value, timezone=None):
"""
Set the end time of the time range.
:param datetime.datetime/str value: |param_end_datetime|
:raises ValueError: If the value is invalid as a |datetime| value.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
time_range = DateTimeRange()
print(time_range)
time_range.set_end_datetime("2015-03-22T10:10:00+0900")
print(time_range)
:Output:
.. parsed-literal::
NaT - NaT
NaT - 2015-03-22T10:10:00+0900
"""
if value is None:
self.__end_datetime = None
return
try:
self.__end_datetime = typepy.type.DateTime(
value, strict_level=typepy.StrictLevel.MIN, timezone=timezone
).convert()
except typepy.TypeConversionError as e:
raise ValueError(e)
|
python
|
{
"resource": ""
}
|
q10346
|
DateTimeRange.intersection
|
train
|
def intersection(self, x):
"""
    Return a new time range that is the intersection of
    the input and the current time range.
    :param DateTimeRange x:
        Value to compute the intersection with the current time range.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
dtr0 = DateTimeRange("2015-03-22T10:00:00+0900", "2015-03-22T10:10:00+0900")
dtr1 = DateTimeRange("2015-03-22T10:05:00+0900", "2015-03-22T10:15:00+0900")
            print(dtr0.intersection(dtr1))
:Output:
.. parsed-literal::
2015-03-22T10:05:00+0900 - 2015-03-22T10:10:00+0900
"""
self.validate_time_inversion()
x.validate_time_inversion()
if any([x.start_datetime in self, self.start_datetime in x]):
start_datetime = max(self.start_datetime, x.start_datetime)
end_datetime = min(self.end_datetime, x.end_datetime)
else:
start_datetime = None
end_datetime = None
return DateTimeRange(
start_datetime=start_datetime,
end_datetime=end_datetime,
start_time_format=self.start_time_format,
end_time_format=self.end_time_format,
)
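# Usage sketch (assumption: the published datetimerange package API):
from datetimerange import DateTimeRange

dtr0 = DateTimeRange("2015-03-22T10:00:00+0900", "2015-03-22T10:10:00+0900")
dtr2 = DateTimeRange("2015-03-22T11:00:00+0900", "2015-03-22T12:00:00+0900")
# non-overlapping ranges intersect to an unset (NaT - NaT) range
print(dtr0.intersection(dtr2))  # NaT - NaT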
|
python
|
{
"resource": ""
}
|
q10347
|
DateTimeRange.encompass
|
train
|
def encompass(self, x):
"""
    Return a new time range that encompasses
    the input and the current time range.
    :param DateTimeRange x:
        Value to compute the encompassing range with the current time range.
:Sample Code:
.. code:: python
from datetimerange import DateTimeRange
dtr0 = DateTimeRange("2015-03-22T10:00:00+0900", "2015-03-22T10:10:00+0900")
dtr1 = DateTimeRange("2015-03-22T10:05:00+0900", "2015-03-22T10:15:00+0900")
            print(dtr0.encompass(dtr1))
:Output:
.. parsed-literal::
2015-03-22T10:00:00+0900 - 2015-03-22T10:15:00+0900
"""
self.validate_time_inversion()
x.validate_time_inversion()
return DateTimeRange(
start_datetime=min(self.start_datetime, x.start_datetime),
end_datetime=max(self.end_datetime, x.end_datetime),
start_time_format=self.start_time_format,
end_time_format=self.end_time_format,
)
|
python
|
{
"resource": ""
}
|
q10348
|
wait_for
|
train
|
def wait_for(text, finish=None, io=None):
"""Displays dots until returned event is set"""
if finish:
finish.set()
time.sleep(0.1) # threads, sigh
if not io:
io = sys.stdout
finish = threading.Event()
io.write(text)
def _wait():
while not finish.is_set():
io.write('.')
io.flush()
finish.wait(timeout=1)
io.write('\n')
threading.Thread(target=_wait).start()
return finish
|
python
|
{
"resource": ""
}
|
q10349
|
lookup_endpoint
|
train
|
def lookup_endpoint(cli):
"""Looks up the application endpoint from dotcloud"""
url = '/applications/{0}/environment'.format(APPNAME)
environ = cli.user.get(url).item
port = environ['DOTCLOUD_SATELLITE_ZMQ_PORT']
host = socket.gethostbyname(environ['DOTCLOUD_SATELLITE_ZMQ_HOST'])
return "tcp://{0}:{1}".format(host, port)
|
python
|
{
"resource": ""
}
|
q10350
|
setup
|
train
|
def setup(cli):
"""Everything to make skypipe ready to use"""
if not cli.global_config.loaded:
setup_dotcloud_account(cli)
discover_satellite(cli)
cli.success("Skypipe is ready for action")
|
python
|
{
"resource": ""
}
|
q10351
|
discover_satellite
|
train
|
def discover_satellite(cli, deploy=True, timeout=5):
"""Looks to make sure a satellite exists, returns endpoint
First makes sure we have dotcloud account credentials. Then it looks
up the environment for the satellite app. This will contain host and
    port to construct an endpoint. However, if the app doesn't exist, or the
    endpoint does not check out, we call `launch_satellite` to deploy,
which calls `discover_satellite` again when finished. Ultimately we
return a working endpoint. If deploy is False it will not try to
deploy.
"""
if not cli.global_config.loaded:
cli.die("Please setup skypipe by running `skypipe --setup`")
try:
endpoint = lookup_endpoint(cli)
ok = client.check_skypipe_endpoint(endpoint, timeout)
if ok:
return endpoint
else:
return launch_satellite(cli) if deploy else None
except (RESTAPIError, KeyError):
return launch_satellite(cli) if deploy else None
|
python
|
{
"resource": ""
}
|
q10352
|
launch_satellite
|
train
|
def launch_satellite(cli):
"""Deploys a new satellite app over any existing app"""
cli.info("Launching skypipe satellite:")
finish = wait_for(" Pushing to dotCloud")
# destroy any existing satellite
destroy_satellite(cli)
# create new satellite app
url = '/applications'
try:
cli.user.post(url, {
'name': APPNAME,
'flavor': 'sandbox'
})
except RESTAPIError as e:
if e.code == 409:
cli.die('Application "{0}" already exists.'.format(APPNAME))
else:
cli.die('Creating application "{0}" failed: {1}'.format(APPNAME, e))
class args: application = APPNAME
#cli._connect(args)
# push satellite code
protocol = 'rsync'
url = '/applications/{0}/push-endpoints{1}'.format(APPNAME, '')
endpoint = cli._select_endpoint(cli.user.get(url).items, protocol)
class args: path = satellite_path
cli.push_with_rsync(args, endpoint)
# tell dotcloud to deploy, then wait for it to finish
revision = None
clean = False
url = '/applications/{0}/deployments'.format(APPNAME)
response = cli.user.post(url, {'revision': revision, 'clean': clean})
deploy_trace_id = response.trace_id
deploy_id = response.item['deploy_id']
original_stdout = sys.stdout
finish = wait_for(" Waiting for deployment", finish, original_stdout)
try:
sys.stdout = StringIO()
res = cli._stream_deploy_logs(APPNAME, deploy_id,
deploy_trace_id=deploy_trace_id, follow=True)
if res != 0:
return res
except KeyboardInterrupt:
cli.error('You\'ve closed your log stream with Ctrl-C, ' \
'but the deployment is still running in the background.')
cli.error('If you aborted because of an error ' \
'(e.g. the deployment got stuck), please e-mail\n' \
'support@dotcloud.com and mention this trace ID: {0}'
.format(deploy_trace_id))
cli.error('If you want to continue following your deployment, ' \
'try:\n{0}'.format(
cli._fmt_deploy_logs_command(deploy_id)))
cli.die()
except RuntimeError:
# workaround for a bug in the current dotcloud client code
pass
finally:
sys.stdout = original_stdout
finish = wait_for(" Satellite coming online", finish)
endpoint = lookup_endpoint(cli)
ok = client.check_skypipe_endpoint(endpoint, 120)
finish.set()
time.sleep(0.1) # sigh, threads
if ok:
return endpoint
else:
cli.die("Satellite failed to come online")
|
python
|
{
"resource": ""
}
|
q10353
|
Dumper.pg_backup
|
train
|
def pg_backup(self, pg_dump_exe='pg_dump', exclude_schema=None):
"""Call the pg_dump command to create a db backup
Parameters
----------
pg_dump_exe: str
the pg_dump command path
exclude_schema: str[]
list of schemas to be skipped
"""
command = [
pg_dump_exe, '-Fc', '-f', self.file,
'service={}'.format(self.pg_service)
]
    if exclude_schema:
        # extend, not append: each flag must be its own argv element, otherwise
        # subprocess would pass one space-joined string as a single argument
        command.extend("--exclude-schema={}".format(schema) for schema in exclude_schema)
subprocess.check_output(command, stderr=subprocess.STDOUT)
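# With exclude_schema=['audit', 'tmp'] the argv list is built like this
# (the service name and file path below are illustrative):
command = ['pg_dump', '-Fc', '-f', '/backups/db.dump', 'service=my_service']
command.extend("--exclude-schema={}".format(s) for s in ['audit', 'tmp'])
print(command)
# ['pg_dump', '-Fc', '-f', '/backups/db.dump', 'service=my_service',
#  '--exclude-schema=audit', '--exclude-schema=tmp']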
|
python
|
{
"resource": ""
}
|
q10354
|
Dumper.pg_restore
|
train
|
def pg_restore(self, pg_restore_exe='pg_restore', exclude_schema=None):
"""Call the pg_restore command to restore a db backup
Parameters
----------
    pg_restore_exe: str
        the pg_restore command path
    exclude_schema: str[]
        list of schemas to be skipped (requires pg_restore >= 10.0)
    """
command = [
pg_restore_exe, '-d',
'service={}'.format(self.pg_service),
'--no-owner'
]
if exclude_schema:
exclude_schema_available = False
try:
            pg_version = subprocess.check_output([pg_restore_exe, '--version'])
pg_version = str(pg_version).replace('\\n', '').replace("'", '').split(' ')[-1]
exclude_schema_available = LooseVersion(pg_version) >= LooseVersion("10.0")
except subprocess.CalledProcessError as e:
print("*** Could not get pg_restore version:\n", e.stderr)
        if exclude_schema_available:
            # extend, not append: each flag must be its own argv element
            command.extend("--exclude-schema={}".format(schema) for schema in exclude_schema)
command.append(self.file)
try:
subprocess.check_output(command)
except subprocess.CalledProcessError as e:
print("*** pg_restore failed:\n", command, '\n', e.stderr)
|
python
|
{
"resource": ""
}
|
q10355
|
Upgrader.__get_delta_files
|
train
|
def __get_delta_files(self):
"""Search for delta files and return a dict of Delta objects, keyed by directory names."""
files = [(d, f) for d in self.dirs for f in listdir(d) if isfile(join(d, f))]
deltas = OrderedDict()
for d, f in files:
file_ = join(d, f)
if not Delta.is_valid_delta_name(file_):
continue
delta = Delta(file_)
if d not in deltas:
deltas[d] = []
deltas[d].append(delta)
# sort delta objects in each bucket
for d in deltas:
deltas[d].sort(key=lambda x: (x.get_version(), x.get_priority(), x.get_name()))
return deltas
|
python
|
{
"resource": ""
}
|
q10356
|
Upgrader.__run_delta_sql
|
train
|
def __run_delta_sql(self, delta):
"""Execute the delta sql file on the database"""
self.__run_sql_file(delta.get_file())
self.__update_upgrades_table(delta)
|
python
|
{
"resource": ""
}
|
q10357
|
Upgrader.__run_delta_py
|
train
|
def __run_delta_py(self, delta):
"""Execute the delta py file"""
self.__run_py_file(delta.get_file(), delta.get_name())
self.__update_upgrades_table(delta)
|
python
|
{
"resource": ""
}
|
q10358
|
Upgrader.__run_pre_all
|
train
|
def __run_pre_all(self):
"""Execute the pre-all.py and pre-all.sql files if they exist"""
# if the list of delta dirs is [delta1, delta2] the pre scripts of delta2 are
# executed before the pre scripts of delta1
for d in reversed(self.dirs):
pre_all_py_path = os.path.join(d, 'pre-all.py')
if os.path.isfile(pre_all_py_path):
print(' Applying pre-all.py...', end=' ')
self.__run_py_file(pre_all_py_path, 'pre-all')
print('OK')
pre_all_sql_path = os.path.join(d, 'pre-all.sql')
if os.path.isfile(pre_all_sql_path):
print(' Applying pre-all.sql...', end=' ')
self.__run_sql_file(pre_all_sql_path)
print('OK')
|
python
|
{
"resource": ""
}
|
q10359
|
Upgrader.__run_post_all
|
train
|
def __run_post_all(self):
"""Execute the post-all.py and post-all.sql files if they exist"""
# if the list of delta dirs is [delta1, delta2] the post scripts of delta1 are
# executed before the post scripts of delta2
for d in self.dirs:
post_all_py_path = os.path.join(d, 'post-all.py')
if os.path.isfile(post_all_py_path):
print(' Applying post-all.py...', end=' ')
self.__run_py_file(post_all_py_path, 'post-all')
print('OK')
post_all_sql_path = os.path.join(d, 'post-all.sql')
if os.path.isfile(post_all_sql_path):
print(' Applying post-all.sql...', end=' ')
self.__run_sql_file(post_all_sql_path)
print('OK')
|
python
|
{
"resource": ""
}
|
q10360
|
Upgrader.__run_sql_file
|
train
|
def __run_sql_file(self, filepath):
"""Execute the sql file at the passed path
Parameters
----------
filepath: str
the path of the file to execute"""
with open(filepath, 'r') as delta_file:
sql = delta_file.read()
if self.variables:
self.cursor.execute(sql, self.variables)
else:
self.cursor.execute(sql)
self.connection.commit()
|
python
|
{
"resource": ""
}
|
q10361
|
Upgrader.__run_py_file
|
train
|
def __run_py_file(self, filepath, module_name):
"""Execute the python file at the passed path
Parameters
----------
filepath: str
the path of the file to execute
module_name: str
the name of the python module
"""
# Import the module
spec = importlib.util.spec_from_file_location(module_name, filepath)
delta_py = importlib.util.module_from_spec(spec)
spec.loader.exec_module(delta_py)
# Get the python file's directory path
# Note: we add a separator for backward compatibility, as existing DeltaPy subclasses
# may assume that delta_dir ends with a separator
dir_ = dirname(filepath) + os.sep
# Search for subclasses of DeltaPy
for name in dir(delta_py):
obj = getattr(delta_py, name)
if inspect.isclass(obj) and not obj == DeltaPy and issubclass(
obj, DeltaPy):
delta_py_inst = obj(
self.current_db_version(), dir_, self.dirs, self.pg_service,
self.upgrades_table, variables=self.variables)
delta_py_inst.run()
|
python
|
{
"resource": ""
}
|
q10362
|
Delta.is_valid_delta_name
|
train
|
def is_valid_delta_name(file):
"""Return if a file has a valid name
A delta file name can be:
- pre-all.py
- pre-all.sql
- delta_x.x.x_ddmmyyyy.pre.py
- delta_x.x.x_ddmmyyyy.pre.sql
- delta_x.x.x_ddmmyyyy.py
- delta_x.x.x_ddmmyyyy.sql
- delta_x.x.x_ddmmyyyy.post.py
- delta_x.x.x_ddmmyyyy.post.sql
- post-all.py
- post-all.sql
where x.x.x is the version number and _ddmmyyyy is an optional
description, usually representing the date of the delta file
"""
filename = basename(file)
pattern = re.compile(Delta.FILENAME_PATTERN)
if re.match(pattern, filename):
return True
return False
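# A plausible pattern for the names listed above (assumption: the real
# Delta.FILENAME_PATTERN is defined elsewhere; this regex is illustrative):
import re

pattern = re.compile(
    r'^(pre-all\.(py|sql)|post-all\.(py|sql)|'
    r'delta_\d+\.\d+\.\d+(_\w+)?\.((pre|post)\.)?(py|sql))$')
assert pattern.match('delta_1.2.3_21052019.pre.sql')
assert pattern.match('pre-all.py')
assert not pattern.match('notes.txt')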
|
python
|
{
"resource": ""
}
|
q10363
|
Delta.get_checksum
|
train
|
def get_checksum(self):
"""Return the md5 checksum of the delta file."""
with open(self.file, 'rb') as f:
cs = md5(f.read()).hexdigest()
return cs
|
python
|
{
"resource": ""
}
|
q10364
|
Delta.get_type
|
train
|
def get_type(self):
"""Return the type of the delta file.
Returns
-------
type: int
"""
ext = self.match.group(5)
if ext == 'pre.py':
return DeltaType.PRE_PYTHON
elif ext == 'pre.sql':
return DeltaType.PRE_SQL
elif ext == 'py':
return DeltaType.PYTHON
elif ext == 'sql':
return DeltaType.SQL
elif ext == 'post.py':
return DeltaType.POST_PYTHON
elif ext == 'post.sql':
return DeltaType.POST_SQL
|
python
|
{
"resource": ""
}
|
q10365
|
DeltaPy.variable
|
train
|
def variable(self, name: str, default_value=None):
"""
Safely returns the value of the variable given in PUM
Parameters
----------
name
the name of the variable
default_value
the default value for the variable if it does not exist
"""
return self.__variables.get(name, default_value)
|
python
|
{
"resource": ""
}
|
q10366
|
Checker.run_checks
|
train
|
def run_checks(self):
"""Run all the checks functions.
Returns
-------
bool
True if all the checks are true
False otherwise
dict
Dictionary of lists of differences
"""
result = True
differences_dict = {}
if 'tables' not in self.ignore_list:
tmp_result, differences_dict['tables'] = self.check_tables()
result = False if not tmp_result else result
if 'columns' not in self.ignore_list:
tmp_result, differences_dict['columns'] = self.check_columns(
'views' not in self.ignore_list)
result = False if not tmp_result else result
if 'constraints' not in self.ignore_list:
tmp_result, differences_dict['constraints'] = \
self.check_constraints()
result = False if not tmp_result else result
if 'views' not in self.ignore_list:
tmp_result, differences_dict['views'] = self.check_views()
result = False if not tmp_result else result
if 'sequences' not in self.ignore_list:
tmp_result, differences_dict['sequences'] = self.check_sequences()
result = False if not tmp_result else result
if 'indexes' not in self.ignore_list:
tmp_result, differences_dict['indexes'] = self.check_indexes()
result = False if not tmp_result else result
if 'triggers' not in self.ignore_list:
tmp_result, differences_dict['triggers'] = self.check_triggers()
result = False if not tmp_result else result
if 'functions' not in self.ignore_list:
tmp_result, differences_dict['functions'] = self.check_functions()
result = False if not tmp_result else result
if 'rules' not in self.ignore_list:
tmp_result, differences_dict['rules'] = self.check_rules()
result = False if not tmp_result else result
if self.verbose_level == 0:
differences_dict = None
return result, differences_dict
|
python
|
{
"resource": ""
}
|
q10367
|
Checker.__check_equals
|
train
|
def __check_equals(self, query):
"""Check if the query results on the two databases are equals.
Returns
-------
bool
True if the results are the same
False otherwise
list
A list with the differences
"""
self.cur1.execute(query)
records1 = self.cur1.fetchall()
self.cur2.execute(query)
records2 = self.cur2.fetchall()
result = True
differences = []
d = difflib.Differ()
records1 = [str(x) for x in records1]
records2 = [str(x) for x in records2]
for line in d.compare(records1, records2):
if line[0] in ('-', '+'):
result = False
if self.verbose_level == 1:
differences.append(line[0:79])
elif self.verbose_level == 2:
differences.append(line)
return result, differences
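# Standalone sketch of the comparison core: difflib.Differ marks lines unique
# to either side with '-' or '+', which is what flags a difference above.
import difflib

records1 = ["('users', 'id')", "('users', 'name')"]
records2 = ["('users', 'id')", "('users', 'email')"]
for line in difflib.Differ().compare(records1, records2):
    if line[0] in ('-', '+'):
        print(line)
# - ('users', 'name')
# + ('users', 'email')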
|
python
|
{
"resource": ""
}
|
q10368
|
ask_for_confirmation
|
train
|
def ask_for_confirmation(prompt=None, resp=False):
"""Prompt for a yes or no response from the user.
Parameters
----------
prompt: basestring
The question to be prompted to the user.
resp: bool
The default value assumed by the caller when user simply
types ENTER.
Returns
-------
bool
True if the user response is 'y' or 'Y'
False if the user response is 'n' or 'N'
"""
global input
if prompt is None:
prompt = 'Confirm'
if resp:
prompt = '%s [%s]|%s: ' % (prompt, 'y', 'n')
else:
prompt = '%s [%s]|%s: ' % (prompt, 'n', 'y')
while True:
# Fix for Python2. In python3 raw_input() is now input()
try:
input = raw_input
except NameError:
pass
ans = input(prompt)
if not ans:
return resp
if ans not in ['y', 'Y', 'n', 'N']:
print('please enter y or n.')
continue
if ans == 'y' or ans == 'Y':
return True
if ans == 'n' or ans == 'N':
return False
|
python
|
{
"resource": ""
}
|
q10369
|
AuthDecorator.handle_target
|
train
|
def handle_target(self, request, controller_args, controller_kwargs):
"""Only here to set self.request and get rid of it after
this will set self.request so the target method can access request using
self.request, just like in the controller.
"""
self.request = request
super(AuthDecorator, self).handle_target(request, controller_args, controller_kwargs)
del self.request
|
python
|
{
"resource": ""
}
|
q10370
|
HTTPClient.get
|
train
|
def get(self, uri, query=None, **kwargs):
"""make a GET request"""
return self.fetch('get', uri, query, **kwargs)
|
python
|
{
"resource": ""
}
|
q10371
|
HTTPClient.post
|
train
|
def post(self, uri, body=None, **kwargs):
"""make a POST request"""
return self.fetch('post', uri, kwargs.pop("query", {}), body, **kwargs)
|
python
|
{
"resource": ""
}
|
q10372
|
HTTPClient.post_file
|
train
|
def post_file(self, uri, body, files, **kwargs):
"""POST a file"""
# requests doesn't actually need us to open the files but we do anyway because
# if we don't then the filename isn't preserved, so we assume each string
# value is a filepath
for key in files.keys():
if isinstance(files[key], basestring):
files[key] = open(files[key], 'rb')
kwargs["files"] = files
# we ignore content type for posting files since it requires very specific things
ct = self.headers.pop("content-type", None)
ret = self.fetch('post', uri, {}, body, **kwargs)
if ct:
self.headers["content-type"] = ct
# close all the files
for fp in files.values():
fp.close()
return ret
|
python
|
{
"resource": ""
}
|
q10373
|
HTTPClient.delete
|
train
|
def delete(self, uri, query=None, **kwargs):
"""make a DELETE request"""
return self.fetch('delete', uri, query, **kwargs)
|
python
|
{
"resource": ""
}
|
q10374
|
HTTPClient.get_fetch_headers
|
train
|
def get_fetch_headers(self, method, headers):
"""merge class headers with passed in headers
:param method: string, (eg, GET or POST), this is passed in so you can customize
headers based on the method that you are calling
:param headers: dict, all the headers passed into the fetch method
:returns: passed in headers merged with global class headers
"""
all_headers = self.headers.copy()
if headers:
all_headers.update(headers)
return Headers(all_headers)
|
python
|
{
"resource": ""
}
|
q10375
|
HTTPClient.get_fetch_request
|
train
|
def get_fetch_request(self, method, fetch_url, *args, **kwargs):
"""This is handy if you want to modify the request right before passing it
to requests, or you want to do something extra special customized
:param method: string, the http method (eg, GET, POST)
:param fetch_url: string, the full url with query params
:param *args: any other positional arguments
:param **kwargs: any keyword arguments to pass to requests
:returns: a requests.Response compatible object instance
"""
return requests.request(method, fetch_url, *args, **kwargs)
|
python
|
{
"resource": ""
}
|
q10376
|
HTTPClient.get_fetch_response
|
train
|
def get_fetch_response(self, res):
"""the goal of this method is to make the requests object more endpoints like
res -- requests Response -- the native requests response instance, we manipulate
it a bit to make it look a bit more like the internal endpoints.Response object
"""
res.code = res.status_code
res.headers = Headers(res.headers)
res._body = None
res.body = ''
body = res.content
if body:
if self.is_json(res.headers):
res._body = res.json()
else:
res._body = body
res.body = String(body, res.encoding)
return res
|
python
|
{
"resource": ""
}
|
q10377
|
HTTPClient.is_json
|
train
|
def is_json(self, headers):
"""return true if content_type is a json content type"""
ret = False
ct = headers.get("content-type", "").lower()
if ct:
ret = ct.lower().rfind("json") >= 0
return ret
|
python
|
{
"resource": ""
}
|
q10378
|
ReflectMethod.params
|
train
|
def params(self):
"""return information about the params that the given http option takes"""
ret = {}
    for rd in self.decorators:
        args = rd.args
        kwargs = rd.kwargs
        # NOTE: the original referenced an undefined name `param` here; this
        # assumes the intent was to match the `param` decorator by its name
        if rd.name == "param":
            is_required = kwargs.get('required', 'default' not in kwargs)
            ret[args[0]] = {'required': is_required, 'other_names': args[1:], 'options': kwargs}
return ret
|
python
|
{
"resource": ""
}
|
q10379
|
BaseServer.create_call
|
train
|
def create_call(self, raw_request, **kwargs):
"""create a call object that has endpoints understandable request and response
instances"""
req = self.create_request(raw_request, **kwargs)
res = self.create_response(**kwargs)
rou = self.create_router(**kwargs)
c = self.call_class(req, res, rou)
return c
|
python
|
{
"resource": ""
}
|
q10380
|
RateLimitDecorator.decorate
|
train
|
def decorate(self, func, limit=0, ttl=0, *anoop, **kwnoop):
"""see target for an explanation of limit and ttl"""
self.limit = int(limit)
self.ttl = int(ttl)
return super(RateLimitDecorator, self).decorate(func, target=None, *anoop, **kwnoop)
|
python
|
{
"resource": ""
}
|
q10381
|
ratelimit.decorate
|
train
|
def decorate(self, func, limit, ttl, *anoop, **kwnoop):
"""make limit and ttl required"""
return super(ratelimit, self).decorate(func, limit, ttl, *anoop, **kwnoop)
|
python
|
{
"resource": ""
}
|
q10382
|
Base64.encode
|
train
|
def encode(cls, s):
"""converts a plain text string to base64 encoding
    :param s: unicode str|bytes, the plain text string to encode
:returns: unicode str
"""
b = ByteString(s)
be = base64.b64encode(b).strip()
return String(be)
|
python
|
{
"resource": ""
}
|
q10383
|
Base64.decode
|
train
|
def decode(cls, s):
"""decodes a base64 string to plain text
:param s: unicode str|bytes, the base64 encoded string
:returns: unicode str
"""
b = ByteString(s)
bd = base64.b64decode(b)
return String(bd)
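# Round-trip sketch using the stdlib directly; the helpers above wrap the
# same calls in ByteString/String conversions:
import base64

encoded = base64.b64encode("hello".encode("utf-8")).decode("ascii")
assert encoded == "aGVsbG8="
assert base64.b64decode(encoded).decode("utf-8") == "hello"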
|
python
|
{
"resource": ""
}
|
q10384
|
MimeType.find_type
|
train
|
def find_type(cls, val):
"""return the mimetype from the given string value
if value is a path, then the extension will be found, if val is an extension then
that will be used to find the mimetype
"""
mt = ""
index = val.rfind(".")
if index == -1:
val = "fake.{}".format(val)
elif index == 0:
val = "fake{}".format(val)
mt = mimetypes.guess_type(val)[0]
if mt is None:
mt = ""
return mt
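# The "fake" prefix just gives mimetypes a plausible filename to inspect:
import mimetypes

print(mimetypes.guess_type("fake.json")[0])  # application/json
print(mimetypes.guess_type("fake.png")[0])   # image/png
# so find_type("json"), find_type(".json") and find_type("a/b.json")
# would all resolve to the same mimetype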
|
python
|
{
"resource": ""
}
|
q10385
|
AcceptHeader.filter
|
train
|
def filter(self, media_type, **params):
"""
iterate all the accept media types that match media_type
media_type -- string -- the media type to filter by
**params -- dict -- further filter by key: val
return -- generator -- yields all matching media type info things
"""
mtype, msubtype = self._split_media_type(media_type)
for x in self.__iter__():
# all the params have to match to make the media type valid
matched = True
for k, v in params.items():
if x[2].get(k, None) != v:
matched = False
break
if matched:
if x[0][0] == '*':
if x[0][1] == '*':
yield x
elif x[0][1] == msubtype:
yield x
elif mtype == '*':
if msubtype == '*':
yield x
elif x[0][1] == msubtype:
yield x
elif x[0][0] == mtype:
if msubtype == '*':
yield x
elif x[0][1] == '*':
yield x
elif x[0][1] == msubtype:
yield x
|
python
|
{
"resource": ""
}
|
q10386
|
Application.create_request
|
train
|
def create_request(self, raw_request, **kwargs):
"""
create instance of request
raw_request -- the raw request object retrieved from a WSGI server
"""
r = self.request_class()
for k, v in raw_request.items():
if k.startswith('HTTP_'):
r.set_header(k[5:], v)
else:
r.environ[k] = v
r.method = raw_request['REQUEST_METHOD']
r.path = raw_request['PATH_INFO']
r.query = raw_request['QUERY_STRING']
# handle headers not prefixed with http
for k, t in {'CONTENT_TYPE': None, 'CONTENT_LENGTH': int}.items():
v = r.environ.pop(k, None)
if v:
r.set_header(k, t(v) if t else v)
if 'wsgi.input' in raw_request:
if "CONTENT_LENGTH" in raw_request and int(r.get_header("CONTENT_LENGTH", 0)) <= 0:
r.body_kwargs = {}
else:
if r.get_header('transfer-encoding', "").lower().startswith('chunked'):
raise IOError("Server does not support chunked requests")
else:
r.body_input = raw_request['wsgi.input']
else:
r.body_kwargs = {}
return r
|
python
|
{
"resource": ""
}
|
q10387
|
WebsocketApplication.create_environ
|
train
|
def create_environ(self, req, payload):
"""This will take the original request and the new websocket payload and
merge them into a new request instance"""
ws_req = req.copy()
del ws_req.controller_info
ws_req.environ.pop("wsgi.input", None)
ws_req.body_kwargs = payload.body
ws_req.environ["REQUEST_METHOD"] = payload.method
ws_req.method = payload.method
ws_req.environ["PATH_INFO"] = payload.path
ws_req.path = payload.path
ws_req.environ["WS_PAYLOAD"] = payload
ws_req.environ["WS_ORIGINAL"] = req
ws_req.payload = payload
ws_req.parent = req
return {"WS_REQUEST": ws_req}
|
python
|
{
"resource": ""
}
|
q10388
|
find_module_path
|
train
|
def find_module_path():
"""find where the master module is located"""
master_modname = __name__.split(".", 1)[0]
master_module = sys.modules[master_modname]
#return os.path.dirname(os.path.realpath(os.path.join(inspect.getsourcefile(endpoints), "..")))
path = os.path.dirname(inspect.getsourcefile(master_module))
return path
|
python
|
{
"resource": ""
}
|
q10389
|
Headers._convert_string_name
|
train
|
def _convert_string_name(self, k):
"""converts things like FOO_BAR to Foo-Bar which is the normal form"""
k = String(k, "iso-8859-1")
klower = k.lower().replace('_', '-')
bits = klower.split('-')
return "-".join((bit.title() for bit in bits))
|
python
|
{
"resource": ""
}
|
q10390
|
Url._normalize_params
|
train
|
def _normalize_params(self, *paths, **query_kwargs):
"""a lot of the helper methods are very similar, this handles their arguments"""
    kwargs = {}
    paths = list(paths)  # *paths arrives as a tuple, which has no .pop()
    if paths:
        fragment = paths[-1]
        if fragment and fragment.startswith("#"):
            kwargs["fragment"] = fragment
            paths.pop(-1)
        kwargs["path"] = "/".join(self.normalize_paths(*paths))
kwargs["query_kwargs"] = query_kwargs
return kwargs
|
python
|
{
"resource": ""
}
|
q10391
|
Url.controller
|
train
|
def controller(self, *paths, **query_kwargs):
"""create a new url object using the controller path as a base
if you have a controller `foo.BarController` then this would create a new
Url instance with `host/foo/bar` as the base path, so any *paths will be
appended to `/foo/bar`
:example:
# controller foo.BarController
print url # http://host.com/foo/bar/some_random_path
print url.controller() # http://host.com/foo/bar
print url.controller("che", boom="bam") # http://host/foo/bar/che?boom=bam
:param *paths: list, the paths to append to the controller path
:param **query_kwargs: dict, any query string params to add
"""
kwargs = self._normalize_params(*paths, **query_kwargs)
if self.controller_path:
if "path" in kwargs:
paths = self.normalize_paths(self.controller_path, kwargs["path"])
kwargs["path"] = "/".join(paths)
else:
kwargs["path"] = self.controller_path
return self.create(self.root, **kwargs)
|
python
|
{
"resource": ""
}
|
q10392
|
Url.base
|
train
|
def base(self, *paths, **query_kwargs):
"""create a new url object using the current base path as a base
if you had requested /foo/bar, then this would append *paths and **query_kwargs
to /foo/bar
:example:
# current path: /foo/bar
print url # http://host.com/foo/bar
print url.base() # http://host.com/foo/bar
print url.base("che", boom="bam") # http://host/foo/bar/che?boom=bam
:param *paths: list, the paths to append to the current path without query params
:param **query_kwargs: dict, any query string params to add
"""
kwargs = self._normalize_params(*paths, **query_kwargs)
if self.path:
if "path" in kwargs:
paths = self.normalize_paths(self.path, kwargs["path"])
kwargs["path"] = "/".join(paths)
else:
kwargs["path"] = self.path
return self.create(self.root, **kwargs)
|
python
|
{
"resource": ""
}
|
q10393
|
Url.host
|
train
|
def host(self, *paths, **query_kwargs):
"""create a new url object using the host as a base
if you had requested http://host/foo/bar, then this would append *paths and **query_kwargs
to http://host
:example:
# current url: http://host/foo/bar
print url # http://host.com/foo/bar
        print url.host() # http://host.com/
        print url.host("che", boom="bam") # http://host/che?boom=bam
:param *paths: list, the paths to append to the current path without query params
:param **query_kwargs: dict, any query string params to add
"""
kwargs = self._normalize_params(*paths, **query_kwargs)
return self.create(self.root, **kwargs)
|
python
|
{
"resource": ""
}
|
q10394
|
Request.accept_encoding
|
train
|
def accept_encoding(self):
"""The encoding the client requested the response to use"""
# https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Charset
ret = ""
accept_encoding = self.get_header("Accept-Charset", "")
if accept_encoding:
bits = re.split(r"\s+", accept_encoding)
bits = bits[0].split(";")
ret = bits[0]
return ret
|
python
|
{
"resource": ""
}
|
q10395
|
Request.encoding
|
train
|
def encoding(self):
"""the character encoding of the request, usually only set in POST type requests"""
encoding = None
ct = self.get_header('content-type')
if ct:
ah = AcceptHeader(ct)
if ah.media_types:
encoding = ah.media_types[0][2].get("charset", None)
return encoding
|
python
|
{
"resource": ""
}
|
q10396
|
Request.access_token
|
train
|
def access_token(self):
"""return an Oauth 2.0 Bearer access token if it can be found"""
access_token = self.get_auth_bearer()
if not access_token:
access_token = self.query_kwargs.get('access_token', '')
if not access_token:
access_token = self.body_kwargs.get('access_token', '')
return access_token
|
python
|
{
"resource": ""
}
|
q10397
|
Request.client_tokens
|
train
|
def client_tokens(self):
"""try and get Oauth 2.0 client id and secret first from basic auth header,
then from GET or POST parameters
return -- tuple -- client_id, client_secret
"""
client_id, client_secret = self.get_auth_basic()
if not client_id and not client_secret:
client_id = self.query_kwargs.get('client_id', '')
client_secret = self.query_kwargs.get('client_secret', '')
if not client_id and not client_secret:
client_id = self.body_kwargs.get('client_id', '')
client_secret = self.body_kwargs.get('client_secret', '')
return client_id, client_secret
|
python
|
{
"resource": ""
}
|
q10398
|
Request.ips
|
train
|
def ips(self):
"""return all the possible ips of this request, this will include public and private ips"""
r = []
names = ['X_FORWARDED_FOR', 'CLIENT_IP', 'X_REAL_IP', 'X_FORWARDED',
'X_CLUSTER_CLIENT_IP', 'FORWARDED_FOR', 'FORWARDED', 'VIA',
'REMOTE_ADDR']
for name in names:
vs = self.get_header(name, '')
if vs:
r.extend(map(lambda v: v.strip(), vs.split(',')))
vs = self.environ.get(name, '')
if vs:
r.extend(map(lambda v: v.strip(), vs.split(',')))
return r
|
python
|
{
"resource": ""
}
|
q10399
|
Request.ip
|
train
|
def ip(self):
"""return the public ip address"""
r = ''
# this was compiled from here:
# https://github.com/un33k/django-ipware
# http://www.ietf.org/rfc/rfc3330.txt (IPv4)
# http://www.ietf.org/rfc/rfc5156.txt (IPv6)
# https://en.wikipedia.org/wiki/Reserved_IP_addresses
format_regex = re.compile(r'\s')
ip_regex = re.compile(r'^(?:{})'.format(r'|'.join([
r'0\.', # reserved for 'self-identification'
r'10\.', # class A
r'169\.254', # link local block
r'172\.(?:1[6-9]|2[0-9]|3[0-1])\.', # class B
r'192\.0\.2\.', # documentation/examples
r'192\.168', # class C
            r'255\.255\.255\.255', # broadcast address
r'2001\:db8', # documentation/examples
r'fc00\:', # private
r'fe80\:', # link local unicast
r'ff00\:', # multicast
r'127\.', # localhost
r'\:\:1' # localhost
])))
ips = self.ips
for ip in ips:
if not format_regex.search(ip) and not ip_regex.match(ip):
r = ip
break
return r
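# Standalone sketch of the filtering idea with a subset of the reserved-range
# patterns above (203.0.113.0/24 is a documentation range, used here as the
# "public" sample):
import re

private = re.compile(r'^(?:10\.|192\.168|172\.(?:1[6-9]|2[0-9]|3[0-1])\.|127\.)')
candidates = ['10.0.0.5', '192.168.1.7', '203.0.113.9']
print([ip for ip in candidates if not private.match(ip)])  # ['203.0.113.9']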
|
python
|
{
"resource": ""
}
|