| _id | title | partition | text | language | meta_information |
|---|---|---|---|---|---|
q12500
|
rfftn
|
train
|
def rfftn(a, s=None, axes=None, norm=None):
"""
Compute the N-dimensional discrete Fourier Transform for real input.
This function computes the N-dimensional discrete Fourier Transform over
any number of axes in an M-dimensional real array by means of the Fast
Fourier Transform (FFT). By default, all axes are transformed, with the
real transform performed over the last axis, while the remaining
transforms are complex.
Parameters
----------
a : array_like
Input array, taken to be real.
s : sequence of ints, optional
Shape (length along each transformed axis) to use from the input.
(``s[0]`` refers to axis 0, ``s[1]`` to axis 1, etc.).
The final element of `s` corresponds to `n` for ``rfft(x, n)``, while
for the remaining axes, it corresponds to `n` for ``fft(x, n)``.
Along any axis, if the given shape is smaller than that of the input,
the input is cropped. If it is larger, the input is padded with zeros.
If `s` is not given, the shape of the input along the axes specified
by `axes` is used.
axes : sequence of ints, optional
Axes over which to compute the FFT. If not given, the last ``len(s)``
axes are used, or all axes if `s` is also not specified.
norm : {None, "ortho"}, optional
.. versionadded:: 1.10.0
Normalization mode (see `numpy.fft`). Default is None.
Returns
-------
out : complex ndarray
The truncated or zero-padded input, transformed along the axes
indicated by `axes`, or by a combination of `s` and `a`,
as explained in the parameters section above.
The length of the last axis transformed will be ``s[-1]//2+1``,
while the remaining transformed axes will have lengths according to
`s`, or unchanged from the input.
Raises
------
ValueError
If `s` and `axes` have different length.
IndexError
If an element of `axes` is larger than the number of axes of `a`.
See Also
--------
irfftn : The inverse of `rfftn`, i.e. the inverse of the n-dimensional FFT
of real input.
fft : The one-dimensional FFT, with definitions and conventions used.
rfft : The one-dimensional FFT of real input.
fftn : The n-dimensional FFT.
rfft2 : The two-dimensional FFT of real input.
Notes
-----
The transform for real input is performed over the last transformation
axis, as by `rfft`, then the transform over the remaining axes is
performed as by `fftn`. The order of the output is as for `rfft` for the
final transformation axis, and as for `fftn` for the remaining
transformation axes.
See `fft` for details, definitions and conventions used.
Examples
--------
>>> a = np.ones((2, 2, 2))
>>> np.fft.rfftn(a)
array([[[ 8.+0.j, 0.+0.j],
[ 0.+0.j, 0.+0.j]],
[[ 0.+0.j, 0.+0.j],
[ 0.+0.j, 0.+0.j]]])
>>> np.fft.rfftn(a, axes=(2, 0))
array([[[ 4.+0.j, 0.+0.j],
[ 4.+0.j, 0.+0.j]],
[[ 0.+0.j, 0.+0.j],
[ 0.+0.j, 0.+0.j]]])
"""
unitary = _unitary(norm)
if unitary:
a = asarray(a)
s, axes = _cook_nd_args(a, s, axes)
output = mkl_fft.rfftn_numpy(a, s, axes)
if unitary:
n_tot = prod(asarray(s, dtype=output.dtype))
output *= 1 / sqrt(n_tot)
return output
|
python
|
{
"resource": ""
}
|
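The shape rule in the Returns section above (the last transformed axis has length ``s[-1]//2 + 1``) can be sanity-checked with stock NumPy, which shares these semantics with the `mkl_fft` wrapper:

```python
import numpy as np

a = np.ones((2, 3, 4))      # real-valued input
out = np.fft.rfftn(a)       # all axes transformed by default
# Last axis is halved: 4 // 2 + 1 == 3, so the result is (2, 3, 3).
# The zero-frequency bin holds the sum of all 24 input samples.
print(out.shape, out[0, 0, 0])
```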
q12501
|
rfft2
|
train
|
def rfft2(a, s=None, axes=(-2, -1), norm=None):
"""
Compute the 2-dimensional FFT of a real array.
Parameters
----------
a : array
Input array, taken to be real.
s : sequence of ints, optional
Shape of the FFT.
axes : sequence of ints, optional
Axes over which to compute the FFT.
norm : {None, "ortho"}, optional
.. versionadded:: 1.10.0
Normalization mode (see `numpy.fft`). Default is None.
Returns
-------
out : ndarray
The result of the real 2-D FFT.
See Also
--------
rfftn : Compute the N-dimensional discrete Fourier Transform for real
input.
Notes
-----
This is really just `rfftn` with different default behavior.
For more details see `rfftn`.
"""
return rfftn(a, s, axes, norm)
|
python
|
{
"resource": ""
}
|
q12502
|
irfftn
|
train
|
def irfftn(a, s=None, axes=None, norm=None):
"""
Compute the inverse of the N-dimensional FFT of real input.
This function computes the inverse of the N-dimensional discrete
Fourier Transform for real input over any number of axes in an
M-dimensional array by means of the Fast Fourier Transform (FFT). In
other words, ``irfftn(rfftn(a), a.shape) == a`` to within numerical
accuracy. (The ``a.shape`` is necessary like ``len(a)`` is for `irfft`,
and for the same reason.)
The input should be ordered in the same way as is returned by `rfftn`,
i.e. as for `irfft` for the final transformation axis, and as for `ifftn`
along all the other axes.
Parameters
----------
a : array_like
Input array.
s : sequence of ints, optional
Shape (length of each transformed axis) of the output
(``s[0]`` refers to axis 0, ``s[1]`` to axis 1, etc.). `s` is also the
number of input points used along this axis, except for the last axis,
where ``s[-1]//2+1`` points of the input are used.
Along any axis, if the shape indicated by `s` is smaller than that of
the input, the input is cropped. If it is larger, the input is padded
with zeros. If `s` is not given, the shape of the input along the
axes specified by `axes` is used.
axes : sequence of ints, optional
Axes over which to compute the inverse FFT. If not given, the last
`len(s)` axes are used, or all axes if `s` is also not specified.
Repeated indices in `axes` means that the inverse transform over that
axis is performed multiple times.
norm : {None, "ortho"}, optional
.. versionadded:: 1.10.0
Normalization mode (see `numpy.fft`). Default is None.
Returns
-------
out : ndarray
The truncated or zero-padded input, transformed along the axes
indicated by `axes`, or by a combination of `s` or `a`,
as explained in the parameters section above.
The length of each transformed axis is as given by the corresponding
element of `s`, or the length of the input in every axis except for the
last one if `s` is not given. In the final transformed axis the length
of the output when `s` is not given is ``2*(m-1)`` where ``m`` is the
length of the final transformed axis of the input. To get an odd
number of output points in the final axis, `s` must be specified.
Raises
------
ValueError
If `s` and `axes` have different length.
IndexError
If an element of `axes` is larger than the number of axes of `a`.
See Also
--------
rfftn : The forward n-dimensional FFT of real input,
of which `irfftn` is the inverse.
fft : The one-dimensional FFT, with definitions and conventions used.
irfft : The inverse of the one-dimensional FFT of real input.
irfft2 : The inverse of the two-dimensional FFT of real input.
Notes
-----
See `fft` for definitions and conventions used.
See `rfft` for definitions and conventions used for real input.
Examples
--------
>>> a = np.zeros((3, 2, 2))
>>> a[0, 0, 0] = 3 * 2 * 2
>>> np.fft.irfftn(a)
array([[[ 1., 1.],
[ 1., 1.]],
[[ 1., 1.],
[ 1., 1.]],
[[ 1., 1.],
[ 1., 1.]]])
"""
output = mkl_fft.irfftn_numpy(a, s, axes)
if _unitary(norm):
output *= sqrt(_tot_size(output, axes))
return output
|
python
|
{
"resource": ""
}
|
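The round-trip identity quoted above (``irfftn(rfftn(a), a.shape) == a``) and the ``2*(m-1)`` default output length can both be checked with stock NumPy, which uses the same conventions:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 4))
spec = np.fft.rfftn(a)                 # last axis: 4 // 2 + 1 == 3 bins
back = np.fft.irfftn(spec, s=a.shape)  # passing the shape recovers the input
# Without `s`, the final axis defaults to 2*(m-1) = 2*(3-1) = 4 here, which
# happens to match; an odd original length would need `s` spelled out.
print(np.allclose(back, a))
```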
q12503
|
irfft2
|
train
|
def irfft2(a, s=None, axes=(-2, -1), norm=None):
"""
Compute the 2-dimensional inverse FFT of a real array.
Parameters
----------
a : array_like
The input array
s : sequence of ints, optional
Shape of the inverse FFT.
axes : sequence of ints, optional
The axes over which to compute the inverse fft.
Default is the last two axes.
norm : {None, "ortho"}, optional
.. versionadded:: 1.10.0
Normalization mode (see `numpy.fft`). Default is None.
Returns
-------
out : ndarray
The result of the inverse real 2-D FFT.
See Also
--------
irfftn : Compute the inverse of the N-dimensional FFT of real input.
Notes
-----
This is really `irfftn` with different defaults.
For more details see `irfftn`.
"""
return irfftn(a, s, axes, norm)
|
python
|
{
"resource": ""
}
|
q12504
|
cli
|
train
|
def cli(obj, environment, service, resource, event, group, tags, customer, start, duration, text, delete):
"""Suppress alerts for specified duration based on alert attributes."""
client = obj['client']
if delete:
client.delete_blackout(delete)
else:
if not environment:
raise click.UsageError('Missing option "--environment" / "-E".')
try:
blackout = client.create_blackout(
environment=environment,
service=service,
resource=resource,
event=event,
group=group,
tags=tags,
customer=customer,
start=start,
duration=duration,
text=text
)
except Exception as e:
click.echo('ERROR: {}'.format(e))
sys.exit(1)
click.echo(blackout.id)
|
python
|
{
"resource": ""
}
|
q12505
|
cli
|
train
|
def cli(obj, username, scopes, duration, text, customer, delete):
"""Create or delete an API key."""
client = obj['client']
if delete:
client.delete_key(delete)
else:
try:
expires = datetime.utcnow() + timedelta(seconds=duration) if duration else None
key = client.create_key(username, scopes, expires, text, customer)
except Exception as e:
click.echo('ERROR: {}'.format(e))
sys.exit(1)
click.echo(key.key)
|
python
|
{
"resource": ""
}
|
q12506
|
cli
|
train
|
def cli(obj, id, name, email, password, status, roles, text, email_verified, delete):
"""Create user or update user details, including password reset."""
client = obj['client']
if delete:
client.delete_user(delete)
elif id:
if not any([name, email, password, status, roles, text, email_verified]):
click.echo('Nothing to update.')
sys.exit(1)
try:
r = client.update_user(
id, name=name, email=email, password=password, status=status,
roles=roles, attributes=None, text=text, email_verified=email_verified
)
except Exception as e:
click.echo('ERROR: {}'.format(e))
sys.exit(1)
if r['status'] == 'ok':
click.echo('Updated.')
else:
click.echo(r['message'])
else:
if not email:
raise click.UsageError('Need "--email" to create user.')
if not password:
password = click.prompt('Password', hide_input=True)
try:
user = client.create_user(
name=name, email=email, password=password, status=status,
roles=roles, attributes=None, text=text, email_verified=email_verified
)
except Exception as e:
click.echo('ERROR: {}'.format(e))
sys.exit(1)
click.echo(user.id)
|
python
|
{
"resource": ""
}
|
q12507
|
cli
|
train
|
def cli(ctx, obj):
"""Show Alerta server and client versions."""
client = obj['client']
click.echo('alerta {}'.format(client.mgmt_status()['version']))
click.echo('alerta client {}'.format(client_version))
click.echo('requests {}'.format(requests_version))
click.echo('click {}'.format(click.__version__))
ctx.exit()
|
python
|
{
"resource": ""
}
|
q12508
|
cli
|
train
|
def cli(obj, show_userinfo):
"""Display logged in user or full userinfo."""
client = obj['client']
userinfo = client.userinfo()
if show_userinfo:
for k, v in userinfo.items():
if isinstance(v, list):
v = ', '.join(v)
click.echo('{:20}: {}'.format(k, v))
else:
click.echo(userinfo['preferred_username'])
|
python
|
{
"resource": ""
}
|
q12509
|
cli
|
train
|
def cli(obj):
"""Display client config downloaded from API server."""
for k, v in obj.items():
if isinstance(v, list):
v = ', '.join(v)
click.echo('{:20}: {}'.format(k, v))
|
python
|
{
"resource": ""
}
|
q12510
|
cli
|
train
|
def cli(obj, ids, query, filters):
"""Delete alerts."""
client = obj['client']
if ids:
total = len(ids)
else:
if not (query or filters):
click.confirm('Deleting all alerts. Do you want to continue?', abort=True)
if query:
query = [('q', query)]
else:
query = build_query(filters)
total, _, _ = client.get_count(query)
ids = [a.id for a in client.get_alerts(query)]
with click.progressbar(ids, label='Deleting {} alerts'.format(total)) as bar:
for id in bar:
client.delete_alert(id)
|
python
|
{
"resource": ""
}
|
q12511
|
cli
|
train
|
def cli(obj):
"""Display API server uptime in days, hours."""
client = obj['client']
status = client.mgmt_status()
now = datetime.fromtimestamp(int(status['time']) / 1000.0)
uptime = datetime(1, 1, 1) + timedelta(seconds=int(status['uptime']) / 1000.0)
click.echo('{} up {} days {:02d}:{:02d}'.format(
now.strftime('%H:%M'),
uptime.day - 1, uptime.hour, uptime.minute
))
|
python
|
{
"resource": ""
}
|
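The uptime arithmetic above (the year-1 epoch trick of adding a ``timedelta`` to ``datetime(1, 1, 1)``) can be factored into a small helper; ``format_uptime`` is a hypothetical name, not part of the alerta CLI:

```python
from datetime import datetime, timedelta

def format_uptime(uptime_ms):
    # Hypothetical helper using the same year-1 epoch trick as the command
    # above: day - 1 gives whole days, hour/minute give the remainder.
    uptime = datetime(1, 1, 1) + timedelta(seconds=uptime_ms / 1000.0)
    return '{} days {:02d}:{:02d}'.format(uptime.day - 1, uptime.hour, uptime.minute)

print(format_uptime(90 * 60 * 1000))    # 90 minutes -> '0 days 01:30'
print(format_uptime(25 * 3600 * 1000))  # 25 hours   -> '1 days 01:00'
```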
q12512
|
cli
|
train
|
def cli(obj, ids, query, filters, tags):
"""Remove tags from alerts."""
client = obj['client']
if ids:
total = len(ids)
else:
if query:
query = [('q', query)]
else:
query = build_query(filters)
total, _, _ = client.get_count(query)
ids = [a.id for a in client.get_alerts(query)]
with click.progressbar(ids, label='Untagging {} alerts'.format(total)) as bar:
for id in bar:
client.untag_alert(id, tags)
|
python
|
{
"resource": ""
}
|
q12513
|
cli
|
train
|
def cli(ctx, ids, query, filters, details, interval):
"""Watch for new alerts."""
if details:
display = 'details'
else:
display = 'compact'
from_date = None
auto_refresh = True
while auto_refresh:
try:
auto_refresh, from_date = ctx.invoke(query_cmd, ids=ids, query=query,
filters=filters, display=display, from_date=from_date)
time.sleep(interval)
except (KeyboardInterrupt, SystemExit) as e:
sys.exit(e)
|
python
|
{
"resource": ""
}
|
q12514
|
cli
|
train
|
def cli(obj):
"""Display API server switch status and usage metrics."""
client = obj['client']
metrics = client.mgmt_status()['metrics']
headers = {'title': 'METRIC', 'type': 'TYPE', 'name': 'NAME', 'value': 'VALUE', 'average': 'AVERAGE'}
click.echo(tabulate([{
'title': m['title'],
'type': m['type'],
'name': '{}.{}'.format(m['group'], m['name']),
'value': m.get('value', None) or m.get('count', 0),
'average': int(m['totalTime']) * 1.0 / int(m['count']) if m['type'] == 'timer' else None
} for m in metrics], headers=headers, tablefmt=obj['output']))
|
python
|
{
"resource": ""
}
|
q12515
|
cli
|
train
|
def cli(obj, ids, query, filters, attributes):
"""Update alert attributes."""
client = obj['client']
if ids:
total = len(ids)
else:
if query:
query = [('q', query)]
else:
query = build_query(filters)
total, _, _ = client.get_count(query)
ids = [a.id for a in client.get_alerts(query)]
with click.progressbar(ids, label='Updating {} alerts'.format(total)) as bar:
for id in bar:
client.update_attributes(id, dict(a.split('=') for a in attributes))
|
python
|
{
"resource": ""
}
|
q12516
|
cli
|
train
|
def cli(obj, origin, tags, timeout, customer, delete):
"""Send or delete a heartbeat."""
client = obj['client']
if delete:
client.delete_heartbeat(delete)
else:
try:
heartbeat = client.heartbeat(origin=origin, tags=tags, timeout=timeout, customer=customer)
except Exception as e:
click.echo('ERROR: {}'.format(e))
sys.exit(1)
click.echo(heartbeat.id)
|
python
|
{
"resource": ""
}
|
q12517
|
cli
|
train
|
def cli(obj):
"""List customer lookups."""
client = obj['client']
if obj['output'] == 'json':
r = client.http.get('/customers')
click.echo(json.dumps(r['customers'], sort_keys=True, indent=4, ensure_ascii=False))
else:
headers = {'id': 'ID', 'customer': 'CUSTOMER', 'match': 'GROUP'}
click.echo(tabulate([c.tabular() for c in client.get_customers()], headers=headers, tablefmt=obj['output']))
|
python
|
{
"resource": ""
}
|
q12518
|
cli
|
train
|
def cli(obj):
"""List API keys."""
client = obj['client']
if obj['output'] == 'json':
r = client.http.get('/keys')
click.echo(json.dumps(r['keys'], sort_keys=True, indent=4, ensure_ascii=False))
else:
timezone = obj['timezone']
headers = {
'id': 'ID', 'key': 'API KEY', 'user': 'USER', 'scopes': 'SCOPES', 'text': 'TEXT',
'expireTime': 'EXPIRES', 'count': 'COUNT', 'lastUsedTime': 'LAST USED', 'customer': 'CUSTOMER'
}
click.echo(tabulate([k.tabular(timezone) for k in client.get_keys()], headers=headers, tablefmt=obj['output']))
|
python
|
{
"resource": ""
}
|
q12519
|
cli
|
train
|
def cli(obj, role, scopes, delete):
"""Add or delete role-to-permission lookup entry."""
client = obj['client']
if delete:
client.delete_perm(delete)
else:
if not role:
raise click.UsageError('Missing option "--role".')
if not scopes:
raise click.UsageError('Missing option "--scope".')
try:
perm = client.create_perm(role, scopes)
except Exception as e:
click.echo('ERROR: {}'.format(e))
sys.exit(1)
click.echo(perm.id)
|
python
|
{
"resource": ""
}
|
q12520
|
cli
|
train
|
def cli(obj):
"""Display alerts like unix "top" command."""
client = obj['client']
timezone = obj['timezone']
screen = Screen(client, timezone)
screen.run()
|
python
|
{
"resource": ""
}
|
q12521
|
cli
|
train
|
def cli(obj, expired=None, info=None):
"""Trigger the expiration and deletion of alerts."""
client = obj['client']
client.housekeeping(expired_delete_hours=expired, info_delete_hours=info)
|
python
|
{
"resource": ""
}
|
q12522
|
cli
|
train
|
def cli(obj, purge):
"""List alert suppressions."""
client = obj['client']
if obj['output'] == 'json':
r = client.http.get('/blackouts')
click.echo(json.dumps(r['blackouts'], sort_keys=True, indent=4, ensure_ascii=False))
else:
timezone = obj['timezone']
headers = {
'id': 'ID', 'priority': 'P', 'environment': 'ENVIRONMENT', 'service': 'SERVICE', 'resource': 'RESOURCE',
'event': 'EVENT', 'group': 'GROUP', 'tags': 'TAGS', 'customer': 'CUSTOMER', 'startTime': 'START', 'endTime': 'END',
'duration': 'DURATION', 'user': 'USER', 'createTime': 'CREATED', 'text': 'COMMENT',
'status': 'STATUS', 'remaining': 'REMAINING'
}
blackouts = client.get_blackouts()
click.echo(tabulate([b.tabular(timezone) for b in blackouts], headers=headers, tablefmt=obj['output']))
expired = [b for b in blackouts if b.status == 'expired']
if purge:
with click.progressbar(expired, label='Purging {} blackouts'.format(len(expired))) as bar:
for b in bar:
client.delete_blackout(b.id)
|
python
|
{
"resource": ""
}
|
q12523
|
cli
|
train
|
def cli(obj, name, email, password, status, text):
"""Create new Basic Auth user."""
client = obj['client']
if not email:
raise click.UsageError('Need "--email" to sign-up new user.')
if not password:
raise click.UsageError('Need "--password" to sign-up new user.')
try:
r = client.signup(name=name, email=email, password=password, status=status, attributes=None, text=text)
except Exception as e:
click.echo('ERROR: {}'.format(e))
sys.exit(1)
if 'token' in r:
click.echo('Signed Up.')
else:
raise AuthError
|
python
|
{
"resource": ""
}
|
q12524
|
cli
|
train
|
def cli(ctx, config_file, profile, endpoint_url, output, color, debug):
"""
Alerta client unified command-line tool.
"""
config = Config(config_file)
config.get_config_for_profle(profile)
config.get_remote_config(endpoint_url)
ctx.obj = config.options
# override current options with command-line options or environment variables
ctx.obj['output'] = output or config.options['output']
ctx.obj['color'] = color or os.environ.get('CLICOLOR', None) or config.options['color']
endpoint = endpoint_url or config.options['endpoint']
ctx.obj['client'] = Client(
endpoint=endpoint,
key=config.options['key'],
token=get_token(endpoint),
username=config.options.get('username', None),
password=config.options.get('password', None),
timeout=float(config.options['timeout']),
ssl_verify=config.options['sslverify'],
debug=debug or os.environ.get('DEBUG', None) or config.options['debug']
)
|
python
|
{
"resource": ""
}
|
q12525
|
cli
|
train
|
def cli(obj, ids, query, filters):
"""Show raw data for alerts."""
client = obj['client']
if ids:
query = [('id', x) for x in ids]
elif query:
query = [('q', query)]
else:
query = build_query(filters)
alerts = client.search(query)
headers = {'id': 'ID', 'rawData': 'RAW DATA'}
click.echo(
tabulate([{'id': a.id, 'rawData': a.raw_data} for a in alerts], headers=headers, tablefmt=obj['output']))
|
python
|
{
"resource": ""
}
|
q12526
|
cli
|
train
|
def cli(obj, name, email, password, status, text):
"""Update current user details, including password reset."""
if not any([name, email, password, status, text]):
click.echo('Nothing to update.')
sys.exit(1)
client = obj['client']
try:
r = client.update_me(name=name, email=email, password=password, status=status, attributes=None, text=text)
except Exception as e:
click.echo('ERROR: {}'.format(e))
sys.exit(1)
if r['status'] == 'ok':
click.echo('Updated.')
else:
click.echo(r['message'])
|
python
|
{
"resource": ""
}
|
q12527
|
cli
|
train
|
def cli(obj, ids, query, filters):
"""Show status and severity changes for alerts."""
client = obj['client']
if obj['output'] == 'json':
r = client.http.get('/alerts/history')
click.echo(json.dumps(r['history'], sort_keys=True, indent=4, ensure_ascii=False))
else:
timezone = obj['timezone']
if ids:
query = [('id', x) for x in ids]
elif query:
query = [('q', query)]
else:
query = build_query(filters)
alerts = client.get_history(query)
headers = {'id': 'ID', 'updateTime': 'LAST UPDATED', 'severity': 'SEVERITY', 'status': 'STATUS',
'type': 'TYPE', 'customer': 'CUSTOMER', 'environment': 'ENVIRONMENT', 'service': 'SERVICE',
'resource': 'RESOURCE', 'group': 'GROUP', 'event': 'EVENT', 'value': 'VALUE', 'text': 'TEXT'}
click.echo(
tabulate([a.tabular(timezone) for a in alerts], headers=headers, tablefmt=obj['output']))
|
python
|
{
"resource": ""
}
|
q12528
|
transcribe
|
train
|
def transcribe(files=None, pre=10, post=50):
'''Uses pocketsphinx to transcribe audio files'''
files = files or []
total = len(files)
for i, f in enumerate(files):
filename = f.replace('.temp.wav', '') + '.transcription.txt'
if not os.path.exists(filename):
print(str(i+1) + '/' + str(total) + ' Transcribing ' + f)
transcript = subprocess.check_output(['pocketsphinx_continuous', '-infile', f, '-time', 'yes', '-logfn', '/dev/null', '-vad_prespeech', str(pre), '-vad_postspeech', str(post)])
with open(filename, 'w') as outfile:
outfile.write(transcript.decode('utf8'))
os.remove(f)
|
python
|
{
"resource": ""
}
|
q12529
|
convert_timestamps
|
train
|
def convert_timestamps(files):
'''Converts pocketsphinx transcriptions to usable timestamps'''
sentences = []
for f in files:
if not f.endswith('.transcription.txt'):
f = f + '.transcription.txt'
if not os.path.exists(f):
continue
with open(f, 'r') as infile:
lines = infile.readlines()
lines = [re.sub(r'\(.*?\)', '', l).strip().split(' ') for l in lines]
lines = [l for l in lines if len(l) == 4]
seg_start = -1
seg_end = -1
for index, line in enumerate(lines):
word, start, end, conf = line
if word == '<s>' or word == '<sil>' or word == '</s>':
if seg_start == -1:
seg_start = index
seg_end = -1
else:
seg_end = index
if seg_start > -1 and seg_end > -1:
words = lines[seg_start+1:seg_end]
start = float(lines[seg_start][1])
end = float(lines[seg_end][1])
if words:
sentences.append({'start': start, 'end': end, 'words': words, 'file': f})
if word == '</s>':
seg_start = -1
else:
seg_start = seg_end
seg_end = -1
return sentences
|
python
|
{
"resource": ""
}
|
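The transcription lines that `convert_timestamps` consumes are four whitespace-separated fields (``word start end confidence``), with pocketsphinx's parenthesised occurrence suffixes like ``(2)`` stripped first. A minimal sketch of that per-line parsing, assuming the same ``-time yes`` output format:

```python
import re

def parse_line(line):
    # Mirrors the per-line handling above: strip '(n)' suffixes, then keep
    # only lines that split into exactly four fields.
    fields = re.sub(r'\(.*?\)', '', line).strip().split(' ')
    if len(fields) != 4:
        return None
    word, start, end, conf = fields
    return word, float(start), float(end), float(conf)

print(parse_line('hello(2) 0.10 0.55 0.98'))
print(parse_line('<s> 0.00 0.10 1.00'))
```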
q12530
|
text
|
train
|
def text(files):
'''Returns the whole transcribed text'''
sentences = convert_timestamps(files)
out = []
for s in sentences:
out.append(' '.join([w[0] for w in s['words']]))
return '\n'.join(out)
|
python
|
{
"resource": ""
}
|
q12531
|
search
|
train
|
def search(query, files, mode='sentence', regex=False):
'''Searches for words or sentences containing a search phrase'''
out = []
sentences = convert_timestamps(files)
if mode == 'fragment':
out = fragment_search(query, sentences, regex)
elif mode == 'word':
out = word_search(query, sentences, regex)
elif mode == 'franken':
out = franken_sentence(query, files)
else:
out = sentence_search(query, sentences, regex)
return out
|
python
|
{
"resource": ""
}
|
q12532
|
extract_words
|
train
|
def extract_words(files):
''' Extracts individual words from files and exports them to individual files. '''
output_directory = 'extracted_words'
if not os.path.exists(output_directory):
os.makedirs(output_directory)
for f in files:
file_format = None
source_segment = None
if f.lower().endswith('.mp3'):
file_format = 'mp3'
source_segment = AudioSegment.from_mp3(f)
elif f.lower().endswith('.wav'):
file_format = 'wav'
source_segment = AudioSegment.from_wav(f)
if not file_format or source_segment is None:
print('Unsupported audio format for ' + f)
continue
sentences = convert_timestamps(files)
for s in sentences:
for word in s['words']:
start = float(word[1]) * 1000
end = float(word[2]) * 1000
word = word[0]
total_time = end - start
audio = AudioSegment.silent(duration=total_time)
audio = audio.overlay(source_segment[start:end])
number = 0
output_path = None
while True:
output_filename = word
if number:
output_filename += "_" + str(number)
output_filename = output_filename + '.' + file_format
output_path = os.path.join(output_directory, output_filename)
if not os.path.exists(output_path):
# this file doesn't exist, so we can continue
break
# file already exists, increment name and try again
number += 1
print('Exporting to: ' + output_path)
audio.export(output_path, format=file_format)
|
python
|
{
"resource": ""
}
|
q12533
|
compose
|
train
|
def compose(segments, out='out.mp3', padding=0, crossfade=0, layer=False):
'''Stitches together a new audio track'''
files = {}
working_segments = []
audio = AudioSegment.empty()
if layer:
total_time = max([s['end'] - s['start'] for s in segments]) * 1000
audio = AudioSegment.silent(duration=total_time)
for i, s in enumerate(segments):
try:
start = s['start'] * 1000
end = s['end'] * 1000
f = s['file'].replace('.transcription.txt', '')
if f not in files:
if f.endswith('.wav'):
files[f] = AudioSegment.from_wav(f)
elif f.endswith('.mp3'):
files[f] = AudioSegment.from_mp3(f)
segment = files[f][start:end]
print(start, end, f)
if layer:
audio = audio.overlay(segment, times=1)
else:
if i > 0:
audio = audio.append(segment, crossfade=crossfade)
else:
audio = audio + segment
if padding > 0:
audio = audio + AudioSegment.silent(duration=padding)
s['duration'] = len(segment)
working_segments.append(s)
except Exception:
continue
audio.export(out, format=os.path.splitext(out)[1].replace('.', ''))
return working_segments
|
python
|
{
"resource": ""
}
|
q12534
|
load
|
train
|
def load(target, **namespace):
""" Import a module or fetch an object from a module.
* ``package.module`` returns `module` as a module object.
* ``pack.mod:name`` returns the module variable `name` from `pack.mod`.
* ``pack.mod:func()`` calls `pack.mod.func()` and returns the result.
The last form accepts not only function calls, but any type of
expression. Keyword arguments passed to this function are available as
local variables. Example: ``import_string('re:compile(x)', x='[a-z]')``
"""
module, target = target.split(":", 1) if ':' in target else (target, None)
if module not in sys.modules: __import__(module)
if not target: return sys.modules[module]
if target.isalnum(): return getattr(sys.modules[module], target)
package_name = module.split('.')[0]
namespace[package_name] = sys.modules[package_name]
return eval('%s.%s' % (module, target), namespace)
|
python
|
{
"resource": ""
}
|
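The three target forms accepted by `load` can be exercised directly; the copy below reproduces the function so the example is self-contained:

```python
import sys

def load(target, **namespace):
    # Self-contained copy of the loader above, for demonstration.
    module, target = target.split(":", 1) if ':' in target else (target, None)
    if module not in sys.modules:
        __import__(module)
    if not target:
        return sys.modules[module]
    if target.isalnum():
        return getattr(sys.modules[module], target)
    package_name = module.split('.')[0]
    namespace[package_name] = sys.modules[package_name]
    return eval('%s.%s' % (module, target), namespace)

mod = load('re')                            # module form
func = load('os.path:join')                 # attribute form
pattern = load('re:compile(x)', x='[a-z]')  # expression form with keyword arg
```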
q12535
|
Route.all_plugins
|
train
|
def all_plugins(self):
''' Yield all Plugins affecting this route. '''
unique = set()
for p in reversed(self.app.plugins + self.plugins):
if True in self.skiplist: break
name = getattr(p, 'name', False)
if name and (name in self.skiplist or name in unique): continue
if p in self.skiplist or type(p) in self.skiplist: continue
if name: unique.add(name)
yield p
|
python
|
{
"resource": ""
}
|
q12536
|
BaseRequest.remote_route
|
train
|
def remote_route(self):
""" A list of all IPs that were involved in this request, starting with
the client IP and followed by zero or more proxies. This only works
if all proxies support the ``X-Forwarded-For`` header. Note
that this information can be forged by malicious clients. """
proxy = self.environ.get('HTTP_X_FORWARDED_FOR')
if proxy: return [ip.strip() for ip in proxy.split(',')]
remote = self.environ.get('REMOTE_ADDR')
return [remote] if remote else []
|
python
|
{
"resource": ""
}
|
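A standalone sketch of the same header logic, written as a plain function over a WSGI ``environ`` dict (the addresses are illustrative):

```python
def remote_route(environ):
    # Prefer the proxy chain from X-Forwarded-For; fall back to REMOTE_ADDR.
    proxy = environ.get('HTTP_X_FORWARDED_FOR')
    if proxy:
        return [ip.strip() for ip in proxy.split(',')]
    remote = environ.get('REMOTE_ADDR')
    return [remote] if remote else []

print(remote_route({'HTTP_X_FORWARDED_FOR': '203.0.113.7, 10.0.0.1'}))
print(remote_route({'REMOTE_ADDR': '192.0.2.1'}))
```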
q12537
|
BaseResponse.get_header
|
train
|
def get_header(self, name, default=None):
''' Return the value of a previously defined header. If there is no
header with that name, return a default value. '''
return self._headers.get(_hkey(name), [default])[-1]
|
python
|
{
"resource": ""
}
|
q12538
|
MultiDict.get
|
train
|
def get(self, key, default=None, index=-1, type=None):
''' Return the most recent value for a key.
:param default: The default value to be returned if the key is not
present or the type conversion fails.
:param index: An index for the list of available values.
:param type: If defined, this callable is used to cast the value
into a specific type. Exceptions are suppressed and result in
the default value being returned.
'''
try:
val = self.dict[key][index]
return type(val) if type else val
except Exception:
pass
return default
|
python
|
{
"resource": ""
}
|
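A tiny stand-in `MultiDict` is enough to show the documented behaviour of `get`, including the index lookup and the default-on-failure type conversion:

```python
class MultiDict:
    # Minimal stand-in storing every value per key, enough to exercise get().
    def __init__(self):
        self.dict = {}

    def append(self, key, value):
        self.dict.setdefault(key, []).append(value)

    def get(self, key, default=None, index=-1, type=None):
        try:
            val = self.dict[key][index]
            return type(val) if type else val
        except Exception:
            return default

md = MultiDict()
md.append('page', '1')
md.append('page', '2')
print(md.get('page'))            # most recent value: '2'
print(md.get('page', type=int))  # converted: 2
```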
q12539
|
JSXTransformer.transform_string
|
train
|
def transform_string(self, jsx, harmony=False, strip_types=False):
""" Transform ``jsx`` JSX string into javascript
:param jsx: JSX source code
:type jsx: basestring
:keyword harmony: Transform ES6 code into ES3 (default: False)
:type harmony: bool
:keyword strip_types: Strip type declarations (default: False)
:type strip_types: bool
:return: compiled JS code
:rtype: str
"""
opts = {'harmony': harmony, 'stripTypes': strip_types}
try:
result = self.context.call(
'%s.transform' % self.JSX_TRANSFORMER_JS_EXPR, jsx, opts)
except execjs.ProgramError as e:
raise TransformError(str(e))
js = result['code']
return js
|
python
|
{
"resource": ""
}
|
q12540
|
Leaderboard.pool
|
train
|
def pool(self, host, port, db, pools={}, **options):
'''
Fetch a Redis connection pool for the unique combination of host,
port and db. A new pool is created if one doesn't exist already.
'''
key = (host, port, db)
rval = pools.get(key)
if not isinstance(rval, ConnectionPool):
rval = ConnectionPool(host=host, port=port, db=db, **options)
pools[key] = rval
return rval
|
python
|
{
"resource": ""
}
|
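The ``pools={}`` default above is deliberate: a mutable default argument acts as a cache shared across calls. The same trick with a stand-in object, so the behaviour can be shown without a Redis server:

```python
def get_pool(host, port, db, pools={}):
    # Same memoization trick as Leaderboard.pool; object() stands in
    # for ConnectionPool(...) so no Redis server is needed.
    key = (host, port, db)
    if key not in pools:
        pools[key] = object()
    return pools[key]

p1 = get_pool('localhost', 6379, 0)
p2 = get_pool('localhost', 6379, 0)
p3 = get_pool('localhost', 6379, 1)
```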
q12541
|
Leaderboard.rank_member
|
train
|
def rank_member(self, member, score, member_data=None):
'''
Rank a member in the leaderboard.
@param member [String] Member name.
@param score [float] Member score.
@param member_data [String] Optional member data.
'''
self.rank_member_in(self.leaderboard_name, member, score, member_data)
|
python
|
{
"resource": ""
}
|
q12542
|
Leaderboard.rank_member_if
|
train
|
def rank_member_if(
self, rank_conditional, member, score, member_data=None):
'''
Rank a member in the leaderboard based on execution of the +rank_conditional+.
The +rank_conditional+ is passed the following parameters:
member: Member name.
current_score: Current score for the member in the leaderboard.
score: Member score.
member_data: Optional member data.
leaderboard_options: Leaderboard options, e.g. 'reverse': Value of reverse option
@param rank_conditional [function] Function returning +True+ or +False+ that controls whether the member is ranked in the leaderboard.
@param member [String] Member name.
@param score [float] Member score.
@param member_data [String] Optional member_data.
'''
self.rank_member_if_in(
self.leaderboard_name,
rank_conditional,
member,
score,
member_data)
|
python
|
{
"resource": ""
}
|
q12543
|
Leaderboard.rank_member_if_in
|
train
|
def rank_member_if_in(
self,
leaderboard_name,
rank_conditional,
member,
score,
member_data=None):
'''
Rank a member in the named leaderboard based on execution of the +rank_conditional+.
The +rank_conditional+ is passed the following parameters:
member: Member name.
current_score: Current score for the member in the leaderboard.
score: Member score.
member_data: Optional member data.
leaderboard_options: Leaderboard options, e.g. 'reverse': Value of reverse option
@param leaderboard_name [String] Name of the leaderboard.
@param rank_conditional [function] Function returning +True+ or +False+ that controls whether the member is ranked in the named leaderboard.
@param member [String] Member name.
@param score [float] Member score.
@param member_data [String] Optional member_data.
'''
current_score = self.redis_connection.zscore(leaderboard_name, member)
if current_score is not None:
current_score = float(current_score)
if rank_conditional(self, member, current_score, score, member_data, {'reverse': self.order}):
self.rank_member_in(leaderboard_name, member, score, member_data)
|
python
|
{
"resource": ""
}
|
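A ``rank_conditional`` is just a plain function of the parameters listed above. A hypothetical "only accept improved scores" policy might look like this (the policy itself is illustrative, not part of the library):

```python
def highscore_check(leaderboard, member, current_score, score, member_data, options):
    # Rank only new members (current_score is None) or improved scores.
    return current_score is None or score > current_score

# The conditional is callable on its own, without a Redis connection:
print(highscore_check(None, 'alice', None, 10.0, None, {'reverse': False}))
print(highscore_check(None, 'alice', 10.0, 5.0, None, {'reverse': False}))
```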
q12544
|
Leaderboard.member_data_for_in
|
train
|
def member_data_for_in(self, leaderboard_name, member):
'''
Retrieve the optional member data for a given member in the named leaderboard.
@param leaderboard_name [String] Name of the leaderboard.
@param member [String] Member name.
@return String of optional member data.
'''
return self.redis_connection.hget(
self._member_data_key(leaderboard_name), member)
|
python
|
{
"resource": ""
}
|
q12545
|
Leaderboard.members_data_for_in
|
train
|
def members_data_for_in(self, leaderboard_name, members):
'''
Retrieve the optional member data for a given list of members in the named leaderboard.
@param leaderboard_name [String] Name of the leaderboard.
@param members [Array] Member names.
@return Array of strings of optional member data.
'''
return self.redis_connection.hmget(
self._member_data_key(leaderboard_name), members)
|
python
|
{
"resource": ""
}
|
q12546
|
Leaderboard.update_member_data
|
train
|
def update_member_data(self, member, member_data):
'''
Update the optional member data for a given member in the leaderboard.
@param member [String] Member name.
@param member_data [String] Optional member data.
'''
self.update_member_data_in(self.leaderboard_name, member, member_data)
|
python
|
{
"resource": ""
}
|
q12547
|
Leaderboard.update_member_data_in
|
train
|
def update_member_data_in(self, leaderboard_name, member, member_data):
'''
Update the optional member data for a given member in the named leaderboard.
@param leaderboard_name [String] Name of the leaderboard.
@param member [String] Member name.
@param member_data [String] Optional member data.
'''
self.redis_connection.hset(
self._member_data_key(leaderboard_name),
member,
member_data)
|
python
|
{
"resource": ""
}
|
q12548
|
Leaderboard.total_pages_in
|
train
|
def total_pages_in(self, leaderboard_name, page_size=None):
'''
Retrieve the total number of pages in the named leaderboard.
@param leaderboard_name [String] Name of the leaderboard.
@param page_size [int, nil] Page size to be used when calculating the total number of pages.
@return the total number of pages in the named leaderboard.
'''
if page_size is None:
page_size = self.page_size
return int(
math.ceil(
self.total_members_in(leaderboard_name) /
float(page_size)))
|
python
|
{
"resource": ""
}
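The page count above is a ceiling division of the member count by the page size; a standalone sketch of the same arithmetic (the function name is illustrative):

```python
import math

def total_pages(total_members, page_size):
    # Same arithmetic as total_pages_in: ceil(members / page_size),
    # so a partially filled last page still counts as a page.
    return int(math.ceil(total_members / float(page_size)))
```

So 25 members with a page size of 10 yield 3 pages, and an empty leaderboard yields 0.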
|
q12549
|
Leaderboard.total_members_in_score_range
|
train
|
def total_members_in_score_range(self, min_score, max_score):
'''
Retrieve the total members in a given score range from the leaderboard.
@param min_score [float] Minimum score.
@param max_score [float] Maximum score.
@return the total members in a given score range from the leaderboard.
'''
return self.total_members_in_score_range_in(
self.leaderboard_name, min_score, max_score)
|
python
|
{
"resource": ""
}
|
q12550
|
Leaderboard.total_members_in_score_range_in
|
train
|
def total_members_in_score_range_in(
self, leaderboard_name, min_score, max_score):
'''
Retrieve the total members in a given score range from the named leaderboard.
@param leaderboard_name Name of the leaderboard.
@param min_score [float] Minimum score.
@param max_score [float] Maximum score.
@return the total members in a given score range from the named leaderboard.
'''
return self.redis_connection.zcount(
leaderboard_name, min_score, max_score)
|
python
|
{
"resource": ""
}
|
q12551
|
Leaderboard.total_scores_in
|
train
|
def total_scores_in(self, leaderboard_name):
'''
Sum of scores for all members in the named leaderboard.
@param leaderboard_name Name of the leaderboard.
@return Sum of scores for all members in the named leaderboard.
'''
        return sum([leader[self.SCORE_KEY] for leader in self.all_leaders_from(leaderboard_name)])
|
python
|
{
"resource": ""
}
|
q12552
|
Leaderboard.score_for_in
|
train
|
def score_for_in(self, leaderboard_name, member):
'''
Retrieve the score for a member in the named leaderboard.
@param leaderboard_name Name of the leaderboard.
@param member [String] Member name.
@return the score for a member in the leaderboard or +None+ if the member is not in the leaderboard.
'''
score = self.redis_connection.zscore(leaderboard_name, member)
if score is not None:
score = float(score)
return score
|
python
|
{
"resource": ""
}
|
q12553
|
Leaderboard.change_score_for
|
train
|
def change_score_for(self, member, delta, member_data=None):
'''
Change the score for a member in the leaderboard by a score delta which can be positive or negative.
@param member [String] Member name.
@param delta [float] Score change.
@param member_data [String] Optional member data.
'''
self.change_score_for_member_in(self.leaderboard_name, member, delta, member_data)
|
python
|
{
"resource": ""
}
|
q12554
|
Leaderboard.remove_members_in_score_range
|
train
|
def remove_members_in_score_range(self, min_score, max_score):
'''
Remove members from the leaderboard in a given score range.
@param min_score [float] Minimum score.
@param max_score [float] Maximum score.
'''
self.remove_members_in_score_range_in(
self.leaderboard_name,
min_score,
max_score)
|
python
|
{
"resource": ""
}
|
q12555
|
Leaderboard.remove_members_outside_rank_in
|
train
|
def remove_members_outside_rank_in(self, leaderboard_name, rank):
'''
        Remove members from the named leaderboard outside the given rank.
        @param leaderboard_name [String] Name of the leaderboard.
        @param rank [int] The rank (inclusive) up to which members are kept; members ranked beyond it are removed.
        @return the total number of members removed.
'''
if self.order == self.DESC:
rank = -(rank) - 1
return self.redis_connection.zremrangebyrank(
leaderboard_name, 0, rank)
else:
return self.redis_connection.zremrangebyrank(
leaderboard_name, rank, -1)
|
python
|
{
"resource": ""
}
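For a descending leaderboard, the method above keeps the top +rank+ members by mapping the rank onto a negative Redis index: ZREMRANGEBYRANK then removes ascending positions 0 through -(rank)-1. A list-based sketch of that index arithmetic (illustrative only, assuming the scores are sorted ascending as Redis stores them):

```python
def kept_after_trim_desc(ascending_scores, rank):
    # Mirrors zremrangebyrank(name, 0, -(rank) - 1): Redis index -k is
    # the k-th element from the end, so positions 0..n-rank-1 are
    # removed and the highest `rank` scores survive.
    stop = -(rank) - 1
    n = len(ascending_scores)
    return ascending_scores[n + stop + 1:]
```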
|
q12556
|
Leaderboard.page_for
|
train
|
def page_for(self, member, page_size=DEFAULT_PAGE_SIZE):
'''
Determine the page where a member falls in the leaderboard.
@param member [String] Member name.
@param page_size [int] Page size to be used in determining page location.
@return the page where a member falls in the leaderboard.
'''
return self.page_for_in(self.leaderboard_name, member, page_size)
|
python
|
{
"resource": ""
}
|
q12557
|
Leaderboard.page_for_in
|
train
|
def page_for_in(self, leaderboard_name, member,
page_size=DEFAULT_PAGE_SIZE):
'''
Determine the page where a member falls in the named leaderboard.
        @param leaderboard_name [String] Name of the leaderboard.
@param member [String] Member name.
@param page_size [int] Page size to be used in determining page location.
@return the page where a member falls in the leaderboard.
'''
rank_for_member = None
if self.order == self.ASC:
rank_for_member = self.redis_connection.zrank(
leaderboard_name,
member)
else:
rank_for_member = self.redis_connection.zrevrank(
leaderboard_name,
member)
if rank_for_member is None:
rank_for_member = 0
else:
rank_for_member += 1
return int(math.ceil(float(rank_for_member) / float(page_size)))
|
python
|
{
"resource": ""
}
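The computation above turns Redis's 0-based rank into a 1-based position before a ceiling division by the page size; a standalone sketch (names are illustrative):

```python
import math

def page_for_rank(zero_based_rank, page_size):
    # A missing member (rank None) maps to position 0, hence page 0;
    # otherwise the 0-based Redis rank becomes a 1-based position.
    position = 0 if zero_based_rank is None else zero_based_rank + 1
    return int(math.ceil(float(position) / float(page_size)))
```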
|
q12558
|
Leaderboard.percentile_for_in
|
train
|
def percentile_for_in(self, leaderboard_name, member):
'''
Retrieve the percentile for a member in the named leaderboard.
@param leaderboard_name [String] Name of the leaderboard.
@param member [String] Member name.
@return the percentile for a member in the named leaderboard.
'''
if not self.check_member_in(leaderboard_name, member):
return None
responses = self.redis_connection.pipeline().zcard(
leaderboard_name).zrevrank(leaderboard_name, member).execute()
percentile = math.ceil(
(float(
(responses[0] -
responses[1] -
1)) /
float(
responses[0]) *
100))
if self.order == self.ASC:
return 100 - percentile
else:
return percentile
|
python
|
{
"resource": ""
}
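The pipeline above returns the member count and the member's 0-based reverse rank, from which the percentile is a single formula. A sketch of just that arithmetic (names are illustrative):

```python
import math

def percentile(total_members, zero_based_rev_rank, ascending=False):
    # Percentage of members ranked strictly below this one, rounded up
    # to a whole percent; an ascending leaderboard flips the scale.
    p = math.ceil(
        float(total_members - zero_based_rev_rank - 1) / total_members * 100)
    return 100 - p if ascending else p
```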
|
q12559
|
Leaderboard.score_for_percentile_in
|
train
|
def score_for_percentile_in(self, leaderboard_name, percentile):
'''
Calculate the score for a given percentile value in the leaderboard.
@param leaderboard_name [String] Name of the leaderboard.
@param percentile [float] Percentile value (0.0 to 100.0 inclusive).
@return the score corresponding to the percentile argument. Return +None+ for arguments outside 0-100 inclusive and for leaderboards with no members.
'''
if not 0 <= percentile <= 100:
return None
total_members = self.total_members_in(leaderboard_name)
if total_members < 1:
return None
if self.order == self.ASC:
percentile = 100 - percentile
index = (total_members - 1) * (percentile / 100.0)
scores = [
pair[1] for pair in self.redis_connection.zrange(
leaderboard_name, int(
math.floor(index)), int(
math.ceil(index)), withscores=True)]
if index == math.floor(index):
return scores[0]
else:
interpolate_fraction = index - math.floor(index)
return scores[0] + interpolate_fraction * (scores[1] - scores[0])
|
python
|
{
"resource": ""
}
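Given the scores in ascending order, the percentile-to-score lookup above reduces to a linear interpolation between the two neighbouring ranks. A list-based sketch of the same computation (the list stands in for the ZRANGE call):

```python
import math

def score_for_percentile(ascending_scores, percentile):
    # Fractional index into the sorted scores; interpolate linearly
    # when it falls between two ranks.
    if not 0 <= percentile <= 100 or not ascending_scores:
        return None
    index = (len(ascending_scores) - 1) * (percentile / 100.0)
    lo, hi = int(math.floor(index)), int(math.ceil(index))
    fraction = index - lo
    return ascending_scores[lo] + fraction * (ascending_scores[hi] - ascending_scores[lo])
```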
|
q12560
|
Leaderboard.expire_leaderboard_for
|
train
|
def expire_leaderboard_for(self, leaderboard_name, seconds):
'''
Expire the given leaderboard in a set number of seconds. Do not use this with
leaderboards that utilize member data as there is no facility to cascade the
expiration out to the keys for the member data.
@param leaderboard_name [String] Name of the leaderboard.
@param seconds [int] Number of seconds after which the leaderboard will be expired.
'''
pipeline = self.redis_connection.pipeline()
pipeline.expire(leaderboard_name, seconds)
pipeline.expire(self._member_data_key(leaderboard_name), seconds)
pipeline.execute()
|
python
|
{
"resource": ""
}
|
q12561
|
Leaderboard.leaders
|
train
|
def leaders(self, current_page, **options):
'''
Retrieve a page of leaders from the leaderboard.
@param current_page [int] Page to retrieve from the leaderboard.
@param options [Hash] Options to be used when retrieving the page from the leaderboard.
@return a page of leaders from the leaderboard.
'''
return self.leaders_in(self.leaderboard_name, current_page, **options)
|
python
|
{
"resource": ""
}
|
q12562
|
Leaderboard.leaders_in
|
train
|
def leaders_in(self, leaderboard_name, current_page, **options):
'''
Retrieve a page of leaders from the named leaderboard.
@param leaderboard_name [String] Name of the leaderboard.
@param current_page [int] Page to retrieve from the named leaderboard.
@param options [Hash] Options to be used when retrieving the page from the named leaderboard.
@return a page of leaders from the named leaderboard.
'''
if current_page < 1:
current_page = 1
page_size = options.get('page_size', self.page_size)
index_for_redis = current_page - 1
starting_offset = (index_for_redis * page_size)
if starting_offset < 0:
starting_offset = 0
ending_offset = (starting_offset + page_size) - 1
raw_leader_data = self._range_method(
self.redis_connection,
leaderboard_name,
int(starting_offset),
int(ending_offset),
withscores=False)
return self._parse_raw_members(
leaderboard_name, raw_leader_data, **options)
|
python
|
{
"resource": ""
}
|
q12563
|
Leaderboard.all_leaders_from
|
train
|
def all_leaders_from(self, leaderboard_name, **options):
'''
Retrieves all leaders from the named leaderboard.
@param leaderboard_name [String] Name of the leaderboard.
@param options [Hash] Options to be used when retrieving the leaders from the named leaderboard.
@return the named leaderboard.
'''
raw_leader_data = self._range_method(
self.redis_connection, leaderboard_name, 0, -1, withscores=False)
return self._parse_raw_members(
leaderboard_name, raw_leader_data, **options)
|
python
|
{
"resource": ""
}
|
q12564
|
Leaderboard.members_from_score_range
|
train
|
def members_from_score_range(
self, minimum_score, maximum_score, **options):
'''
Retrieve members from the leaderboard within a given score range.
@param minimum_score [float] Minimum score (inclusive).
@param maximum_score [float] Maximum score (inclusive).
@param options [Hash] Options to be used when retrieving the data from the leaderboard.
@return members from the leaderboard that fall within the given score range.
'''
return self.members_from_score_range_in(
self.leaderboard_name, minimum_score, maximum_score, **options)
|
python
|
{
"resource": ""
}
|
q12565
|
Leaderboard.members_from_score_range_in
|
train
|
def members_from_score_range_in(
self, leaderboard_name, minimum_score, maximum_score, **options):
'''
Retrieve members from the named leaderboard within a given score range.
@param leaderboard_name [String] Name of the leaderboard.
@param minimum_score [float] Minimum score (inclusive).
@param maximum_score [float] Maximum score (inclusive).
@param options [Hash] Options to be used when retrieving the data from the leaderboard.
@return members from the leaderboard that fall within the given score range.
'''
raw_leader_data = []
if self.order == self.DESC:
raw_leader_data = self.redis_connection.zrevrangebyscore(
leaderboard_name,
maximum_score,
minimum_score)
else:
raw_leader_data = self.redis_connection.zrangebyscore(
leaderboard_name,
minimum_score,
maximum_score)
return self._parse_raw_members(
leaderboard_name, raw_leader_data, **options)
|
python
|
{
"resource": ""
}
|
q12566
|
Leaderboard.members_from_rank_range
|
train
|
def members_from_rank_range(self, starting_rank, ending_rank, **options):
'''
Retrieve members from the leaderboard within a given rank range.
@param starting_rank [int] Starting rank (inclusive).
@param ending_rank [int] Ending rank (inclusive).
@param options [Hash] Options to be used when retrieving the data from the leaderboard.
@return members from the leaderboard that fall within the given rank range.
'''
return self.members_from_rank_range_in(
self.leaderboard_name, starting_rank, ending_rank, **options)
|
python
|
{
"resource": ""
}
|
q12567
|
Leaderboard.members_from_rank_range_in
|
train
|
def members_from_rank_range_in(
self, leaderboard_name, starting_rank, ending_rank, **options):
'''
Retrieve members from the named leaderboard within a given rank range.
@param leaderboard_name [String] Name of the leaderboard.
@param starting_rank [int] Starting rank (inclusive).
@param ending_rank [int] Ending rank (inclusive).
@param options [Hash] Options to be used when retrieving the data from the leaderboard.
@return members from the leaderboard that fall within the given rank range.
'''
starting_rank -= 1
if starting_rank < 0:
starting_rank = 0
ending_rank -= 1
if ending_rank > self.total_members_in(leaderboard_name):
ending_rank = self.total_members_in(leaderboard_name) - 1
raw_leader_data = []
if self.order == self.DESC:
raw_leader_data = self.redis_connection.zrevrange(
leaderboard_name,
starting_rank,
ending_rank,
withscores=False)
else:
raw_leader_data = self.redis_connection.zrange(
leaderboard_name,
starting_rank,
ending_rank,
withscores=False)
return self._parse_raw_members(
leaderboard_name, raw_leader_data, **options)
|
python
|
{
"resource": ""
}
|
q12568
|
Leaderboard.top
|
train
|
def top(self, number, **options):
'''
Retrieve members from the leaderboard within a range from 1 to the number given.
        @param number [int] Number of members to retrieve, from rank 1 to +number+ (inclusive).
        @param options [Hash] Options to be used when retrieving the data from the leaderboard.
        @return members from the leaderboard that fall within the rank range 1 to +number+.
'''
return self.top_in(self.leaderboard_name, number, **options)
|
python
|
{
"resource": ""
}
|
q12569
|
Leaderboard.top_in
|
train
|
def top_in(self, leaderboard_name, number, **options):
'''
Retrieve members from the named leaderboard within a range from 1 to the number given.
@param leaderboard_name [String] Name of the leaderboard.
        @param number [int] Number of members to retrieve, from rank 1 to +number+ (inclusive).
@param options [Hash] Options to be used when retrieving the data from the leaderboard.
@return members from the leaderboard that fall within the given rank range.
'''
return self.members_from_rank_range_in(leaderboard_name, 1, number, **options)
|
python
|
{
"resource": ""
}
|
q12570
|
Leaderboard.around_me
|
train
|
def around_me(self, member, **options):
'''
Retrieve a page of leaders from the leaderboard around a given member.
@param member [String] Member name.
@param options [Hash] Options to be used when retrieving the page from the leaderboard.
@return a page of leaders from the leaderboard around a given member.
'''
return self.around_me_in(self.leaderboard_name, member, **options)
|
python
|
{
"resource": ""
}
|
q12571
|
Leaderboard.around_me_in
|
train
|
def around_me_in(self, leaderboard_name, member, **options):
'''
Retrieve a page of leaders from the named leaderboard around a given member.
@param leaderboard_name [String] Name of the leaderboard.
@param member [String] Member name.
@param options [Hash] Options to be used when retrieving the page from the named leaderboard.
@return a page of leaders from the named leaderboard around a given member. Returns an empty array for a non-existent member.
'''
reverse_rank_for_member = None
if self.order == self.DESC:
reverse_rank_for_member = self.redis_connection.zrevrank(
leaderboard_name,
member)
else:
reverse_rank_for_member = self.redis_connection.zrank(
leaderboard_name,
member)
if reverse_rank_for_member is None:
return []
page_size = options.get('page_size', self.page_size)
starting_offset = reverse_rank_for_member - (page_size // 2)
if starting_offset < 0:
starting_offset = 0
ending_offset = (starting_offset + page_size) - 1
raw_leader_data = self._range_method(
self.redis_connection,
leaderboard_name,
int(starting_offset),
int(ending_offset),
withscores=False)
return self._parse_raw_members(
leaderboard_name, raw_leader_data, **options)
|
python
|
{
"resource": ""
}
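The paging above centres the member by stepping back half a page from its 0-based rank and clamping at zero. A sketch of the offset arithmetic alone (names are illustrative):

```python
def around_window(zero_based_rank, page_size):
    # Start half a page before the member, never before index 0; the
    # window then spans exactly page_size entries.
    start = max(zero_based_rank - page_size // 2, 0)
    return start, start + page_size - 1
```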
|
q12572
|
Leaderboard.ranked_in_list
|
train
|
def ranked_in_list(self, members, **options):
'''
Retrieve a page of leaders from the leaderboard for a given list of members.
@param members [Array] Member names.
@param options [Hash] Options to be used when retrieving the page from the leaderboard.
@return a page of leaders from the leaderboard for a given list of members.
'''
return self.ranked_in_list_in(
self.leaderboard_name, members, **options)
|
python
|
{
"resource": ""
}
|
q12573
|
Leaderboard.ranked_in_list_in
|
train
|
def ranked_in_list_in(self, leaderboard_name, members, **options):
'''
Retrieve a page of leaders from the named leaderboard for a given list of members.
@param leaderboard_name [String] Name of the leaderboard.
@param members [Array] Member names.
@param options [Hash] Options to be used when retrieving the page from the named leaderboard.
@return a page of leaders from the named leaderboard for a given list of members.
'''
ranks_for_members = []
pipeline = self.redis_connection.pipeline()
for member in members:
if self.order == self.ASC:
pipeline.zrank(leaderboard_name, member)
else:
pipeline.zrevrank(leaderboard_name, member)
pipeline.zscore(leaderboard_name, member)
responses = pipeline.execute()
for index, member in enumerate(members):
data = {}
data[self.MEMBER_KEY] = member
rank = responses[index * 2]
if rank is not None:
rank += 1
else:
if not options.get('include_missing', True):
continue
data[self.RANK_KEY] = rank
score = responses[index * 2 + 1]
if score is not None:
score = float(score)
data[self.SCORE_KEY] = score
ranks_for_members.append(data)
if ('with_member_data' in options) and (True == options['with_member_data']):
for index, member_data in enumerate(self.members_data_for_in(leaderboard_name, members)):
try:
ranks_for_members[index][self.MEMBER_DATA_KEY] = member_data
                except IndexError:
pass
if 'sort_by' in options:
sort_value_if_none = float('-inf') if self.order == self.ASC else float('+inf')
if self.RANK_KEY == options['sort_by']:
ranks_for_members = sorted(
ranks_for_members,
key=lambda member: member.get(self.RANK_KEY) if member.get(self.RANK_KEY) is not None else sort_value_if_none
)
elif self.SCORE_KEY == options['sort_by']:
ranks_for_members = sorted(
ranks_for_members,
key=lambda member: member.get(self.SCORE_KEY) if member.get(self.SCORE_KEY) is not None else sort_value_if_none
)
return ranks_for_members
|
python
|
{
"resource": ""
}
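The pipeline above issues two commands per member, so the responses list interleaves rank and score at indices 2*i and 2*i+1. A sketch of decoding that layout (plain-string keys stand in for MEMBER_KEY/RANK_KEY/SCORE_KEY):

```python
def decode_rank_score_pairs(members, responses):
    # responses[2*i] is the 0-based rank (or None for a missing member),
    # responses[2*i + 1] the raw score; ranks become 1-based, scores floats.
    out = []
    for i, member in enumerate(members):
        rank, score = responses[2 * i], responses[2 * i + 1]
        out.append({
            'member': member,
            'rank': None if rank is None else rank + 1,
            'score': None if score is None else float(score),
        })
    return out
```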
|
q12574
|
Leaderboard.merge_leaderboards
|
train
|
def merge_leaderboards(self, destination, keys, aggregate='SUM'):
'''
Merge leaderboards given by keys with this leaderboard into a named destination leaderboard.
@param destination [String] Destination leaderboard name.
@param keys [Array] Leaderboards to be merged with the current leaderboard.
        @param aggregate [String] Aggregate option for the Redis ZUNIONSTORE command ('SUM', 'MIN' or 'MAX').
'''
keys.insert(0, self.leaderboard_name)
self.redis_connection.zunionstore(destination, keys, aggregate)
|
python
|
{
"resource": ""
}
|
q12575
|
Leaderboard.intersect_leaderboards
|
train
|
def intersect_leaderboards(self, destination, keys, aggregate='SUM'):
'''
Intersect leaderboards given by keys with this leaderboard into a named destination leaderboard.
@param destination [String] Destination leaderboard name.
@param keys [Array] Leaderboards to be merged with the current leaderboard.
        @param aggregate [String] Aggregate option for the Redis ZINTERSTORE command ('SUM', 'MIN' or 'MAX').
'''
keys.insert(0, self.leaderboard_name)
self.redis_connection.zinterstore(destination, keys, aggregate)
|
python
|
{
"resource": ""
}
|
q12576
|
Leaderboard._member_data_key
|
train
|
def _member_data_key(self, leaderboard_name):
'''
Key for retrieving optional member data.
@param leaderboard_name [String] Name of the leaderboard.
@return a key in the form of +leaderboard_name:member_data+
'''
if self.global_member_data is False:
return '%s:%s' % (leaderboard_name, self.member_data_namespace)
else:
return self.member_data_namespace
|
python
|
{
"resource": ""
}
|
q12577
|
Leaderboard._parse_raw_members
|
train
|
def _parse_raw_members(
self, leaderboard_name, members, members_only=False, **options):
'''
        Parse the raw leaders data as returned from a given leaderboard query. Do associative
        lookups from member to rank and score, and potentially sort the results.
@param leaderboard_name [String] Name of the leaderboard.
@param members [List] A list of members as returned from a sorted set range query
@param members_only [bool] Set True to return the members as is, Default is False.
@param options [Hash] Options to be used when retrieving the page from the named leaderboard.
@return a list of members.
'''
if members_only:
return [{self.MEMBER_KEY: m} for m in members]
if members:
return self.ranked_in_list_in(leaderboard_name, members, **options)
else:
return []
|
python
|
{
"resource": ""
}
|
q12578
|
TieRankingLeaderboard.delete_leaderboard_named
|
train
|
def delete_leaderboard_named(self, leaderboard_name):
'''
Delete the named leaderboard.
@param leaderboard_name [String] Name of the leaderboard.
'''
pipeline = self.redis_connection.pipeline()
pipeline.delete(leaderboard_name)
pipeline.delete(self._member_data_key(leaderboard_name))
pipeline.delete(self._ties_leaderboard_key(leaderboard_name))
pipeline.execute()
|
python
|
{
"resource": ""
}
|
q12579
|
TieRankingLeaderboard.rank_member_across
|
train
|
def rank_member_across(
self, leaderboards, member, score, member_data=None):
'''
Rank a member across multiple leaderboards.
@param leaderboards [Array] Leaderboard names.
@param member [String] Member name.
@param score [float] Member score.
@param member_data [String] Optional member data.
'''
for leaderboard_name in leaderboards:
            self.rank_member_in(leaderboard_name, member, score, member_data)
|
python
|
{
"resource": ""
}
|
q12580
|
TieRankingLeaderboard.expire_leaderboard_at_for
|
train
|
def expire_leaderboard_at_for(self, leaderboard_name, timestamp):
'''
Expire the given leaderboard at a specific UNIX timestamp. Do not use this with
leaderboards that utilize member data as there is no facility to cascade the
expiration out to the keys for the member data.
@param leaderboard_name [String] Name of the leaderboard.
@param timestamp [int] UNIX timestamp at which the leaderboard will be expired.
'''
pipeline = self.redis_connection.pipeline()
pipeline.expireat(leaderboard_name, timestamp)
pipeline.expireat(
self._ties_leaderboard_key(leaderboard_name), timestamp)
pipeline.expireat(self._member_data_key(leaderboard_name), timestamp)
pipeline.execute()
|
python
|
{
"resource": ""
}
|
q12581
|
check_key
|
train
|
def check_key(key, allowed):
"""
Validate that the specified key is allowed according the provided
list of patterns.
"""
if key in allowed:
return True
for pattern in allowed:
if fnmatch(key, pattern):
return True
return False
|
python
|
{
"resource": ""
}
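Because the allowed list is matched with fnmatch, shell-style wildcards work alongside exact names. A self-contained mirror of the function above, runnable for illustration (the API names in the usage are assumptions):

```python
from fnmatch import fnmatch

def check_key(key, allowed):
    # Exact hit first, then shell-style pattern matching.
    if key in allowed:
        return True
    return any(fnmatch(key, pattern) for pattern in allowed)
```

For example, an allowed list of ['list*'] accepts 'listVirtualMachines' but rejects 'deployVirtualMachine'.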
|
q12582
|
cs_encode
|
train
|
def cs_encode(s):
"""Encode URI component like CloudStack would do before signing.
java.net.URLEncoder.encode(s).replace('+', '%20')
"""
if PY2 and isinstance(s, text_type):
s = s.encode("utf-8")
return quote(s, safe="*")
|
python
|
{
"resource": ""
}
|
q12583
|
transform
|
train
|
def transform(params):
"""
    Transforms a heterogeneous map of params into a CloudStack-ready
    mapping of parameter to values.
It handles lists and dicts.
>>> p = {"a": 1, "b": "foo", "c": ["eggs", "spam"], "d": {"key": "value"}}
>>> transform(p)
>>> print(p)
{'a': '1', 'b': 'foo', 'c': 'eggs,spam', 'd[0].key': 'value'}
"""
for key, value in list(params.items()):
if value is None:
params.pop(key)
continue
if isinstance(value, (string_type, binary_type)):
continue
if isinstance(value, integer_types):
params[key] = text_type(value)
elif isinstance(value, (list, tuple, set, dict)):
if not value:
params.pop(key)
else:
if isinstance(value, dict):
value = [value]
if isinstance(value, set):
value = list(value)
if not isinstance(value[0], dict):
params[key] = ",".join(value)
else:
params.pop(key)
for index, val in enumerate(value):
for name, v in val.items():
k = "%s[%d].%s" % (key, index, name)
params[k] = text_type(v)
else:
raise ValueError(type(value))
|
python
|
{
"resource": ""
}
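The list-of-dicts branch produces CloudStack's indexed key syntax, key[i].name. A standalone sketch of just that branch (simplified; the real transform above also handles scalars, plain lists and sets):

```python
def flatten_dict_list(key, dicts):
    # [{'key': 'env'}, {'key': 'prod'}] under 'tags' becomes
    # {'tags[0].key': 'env', 'tags[1].key': 'prod'}.
    out = {}
    for index, d in enumerate(dicts):
        for name, value in d.items():
            out["%s[%d].%s" % (key, index, name)] = str(value)
    return out
```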
|
q12584
|
read_config
|
train
|
def read_config(ini_group=None):
"""
    Read the configuration from the environment, or from the config file.
    Environment variables are read first, then overridden by values from
    the cloudstack.ini file.
"""
env_conf = dict(DEFAULT_CONFIG)
for key in REQUIRED_CONFIG_KEYS.union(ALLOWED_CONFIG_KEYS):
env_key = "CLOUDSTACK_{0}".format(key.upper())
value = os.getenv(env_key)
if value:
env_conf[key] = value
# overrides means we have a .ini to read
overrides = os.getenv('CLOUDSTACK_OVERRIDES', '').strip()
if not overrides and set(env_conf).issuperset(REQUIRED_CONFIG_KEYS):
return env_conf
ini_conf = read_config_from_ini(ini_group)
overrides = {s.lower() for s in re.split(r'\W+', overrides)}
config = dict(dict(env_conf, **ini_conf),
**{k: v for k, v in env_conf.items() if k in overrides})
missings = REQUIRED_CONFIG_KEYS.difference(config)
if missings:
raise ValueError("the configuration is missing the following keys: " +
", ".join(missings))
# convert booleans values.
bool_keys = ('dangerous_no_tls_verify',)
for bool_key in bool_keys:
if isinstance(config[bool_key], string_type):
try:
config[bool_key] = strtobool(config[bool_key])
except ValueError:
pass
return config
|
python
|
{
"resource": ""
}
|
q12585
|
CloudStack._response_value
|
train
|
def _response_value(self, response, json=True):
"""Parses the HTTP response as a the cloudstack value.
It throws an exception if the server didn't answer with a 200.
"""
if json:
contentType = response.headers.get("Content-Type", "")
if not contentType.startswith(("application/json",
"text/javascript")):
if response.status_code == 200:
raise CloudStackException(
"JSON (application/json) was expected, got {!r}"
.format(contentType),
response=response)
raise CloudStackException(
"HTTP {0.status_code} {0.reason}"
.format(response),
"Make sure endpoint URL {!r} is correct."
.format(self.endpoint),
response=response)
try:
data = response.json()
except ValueError as e:
raise CloudStackException(
"HTTP {0.status_code} {0.reason}"
.format(response),
"{0!s}. Malformed JSON document".format(e),
response=response)
[key] = data.keys()
data = data[key]
else:
data = response.text
if response.status_code != 200:
raise CloudStackException(
"HTTP {0} response from CloudStack".format(
response.status_code),
data,
response=response)
return data
|
python
|
{
"resource": ""
}
|
q12586
|
CloudStack._jobresult
|
train
|
def _jobresult(self, jobid, json=True, headers=None):
"""Poll the async job result.
To be run via in a Thread, the result is put within
the result list which is a hack.
"""
        failures = 0
        response = None  # kept for error reporting if every request fails
        total_time = self.job_timeout or 2**30
remaining = timedelta(seconds=total_time)
endtime = datetime.now() + remaining
while remaining.total_seconds() > 0:
timeout = max(min(self.timeout, remaining.total_seconds()), 1)
try:
kind, params = self._prepare_request('queryAsyncJobResult',
jobid=jobid)
transform(params)
params['signature'] = self._sign(params)
req = requests.Request(self.method,
self.endpoint,
headers=headers,
**{kind: params})
prepped = req.prepare()
if self.trace:
print(prepped.method, prepped.url, file=sys.stderr)
if prepped.headers:
print(prepped.headers, "\n", file=sys.stderr)
if prepped.body:
print(prepped.body, file=sys.stderr)
else:
print(file=sys.stderr)
with requests.Session() as session:
response = session.send(prepped,
timeout=timeout,
verify=self.verify,
cert=self.cert)
j = self._response_value(response, json)
if self.trace:
print(response.status_code, response.reason,
file=sys.stderr)
headersTrace = "\n".join(
"{}: {}".format(k, v)
for k, v in response.headers.items())
print(headersTrace, "\n", file=sys.stderr)
print(response.text, "\n", file=sys.stderr)
failures = 0
if j['jobstatus'] != PENDING:
if j['jobresultcode'] or j['jobstatus'] != SUCCESS:
raise CloudStackException("Job failure",
response=response)
if 'jobresult' not in j:
raise CloudStackException("Unknown job result",
response=response)
return j['jobresult']
except CloudStackException:
raise
except Exception as e:
failures += 1
if failures > 10:
raise e
time.sleep(self.poll_interval)
remaining = endtime - datetime.now()
        if response is not None:
response.status_code = 408
raise CloudStackException("Timeout waiting for async job result",
jobid,
response=response)
|
python
|
{
"resource": ""
}
|
q12587
|
_format_json
|
train
|
def _format_json(data, theme):
"""Pretty print a dict as a JSON, with colors if pygments is present."""
output = json.dumps(data, indent=2, sort_keys=True)
if pygments and sys.stdout.isatty():
style = get_style_by_name(theme)
formatter = Terminal256Formatter(style=style)
return pygments.highlight(output, JsonLexer(), formatter)
return output
|
python
|
{
"resource": ""
}
|
q12588
|
Parser.parse
|
train
|
def parse(self, data=b''):
"""
Parses the wire protocol from NATS for the client
and dispatches the subscription callbacks.
"""
self.buf.extend(data)
while self.buf:
if self.state == AWAITING_CONTROL_LINE:
msg = MSG_RE.match(self.buf)
if msg:
try:
subject, sid, _, reply, needed_bytes = msg.groups()
self.msg_arg["subject"] = subject
self.msg_arg["sid"] = int(sid)
if reply:
self.msg_arg["reply"] = reply
else:
self.msg_arg["reply"] = b''
self.needed = int(needed_bytes)
del self.buf[:msg.end()]
self.state = AWAITING_MSG_PAYLOAD
continue
                    except Exception:
raise ErrProtocol("nats: malformed MSG")
ok = OK_RE.match(self.buf)
if ok:
# Do nothing and just skip.
del self.buf[:ok.end()]
continue
err = ERR_RE.match(self.buf)
if err:
err_msg = err.groups()
yield self.nc._process_err(err_msg)
del self.buf[:err.end()]
continue
ping = PING_RE.match(self.buf)
if ping:
del self.buf[:ping.end()]
yield self.nc._process_ping()
continue
pong = PONG_RE.match(self.buf)
if pong:
del self.buf[:pong.end()]
yield self.nc._process_pong()
continue
info = INFO_RE.match(self.buf)
if info:
info_line = info.groups()[0]
self.nc._process_info(info_line)
del self.buf[:info.end()]
continue
# If nothing matched at this point, then probably
# a split buffer and need to gather more bytes,
# otherwise it would mean that there is an issue
# and we're getting malformed control lines.
if len(self.buf
) < MAX_CONTROL_LINE_SIZE and _CRLF_ not in self.buf:
break
else:
raise ErrProtocol("nats: unknown protocol")
elif self.state == AWAITING_MSG_PAYLOAD:
if len(self.buf) >= self.needed + CRLF_SIZE:
subject = self.msg_arg["subject"]
sid = self.msg_arg["sid"]
reply = self.msg_arg["reply"]
# Consume msg payload from buffer and set next parser state.
payload = bytes(self.buf[:self.needed])
del self.buf[:self.needed + CRLF_SIZE]
self.state = AWAITING_CONTROL_LINE
yield self.nc._process_msg(sid, subject, reply, payload)
else:
# Wait until we have enough bytes in buffer.
break
|
python
|
{
"resource": ""
}
|
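The control-line constants referenced above (MSG_RE, OK_RE, and friends) are not part of this excerpt. A minimal sketch of what the MSG pattern and its field extraction could look like, assuming the standard `MSG <subject> <sid> [reply-to] <#bytes>` wire form; the helper name is illustrative:

```python
import re

# Hypothetical reconstruction of the MSG control-line pattern:
# MSG <subject> <sid> [reply-to] <#bytes>\r\n
MSG_RE = re.compile(
    rb'MSG\s+(?P<subject>\S+)\s+(?P<sid>\d+)\s+'
    rb'(?:(?P<reply>\S+)\s+)?(?P<needed>\d+)\r\n')

def parse_msg_line(buf):
    """Return (subject, sid, reply, needed_bytes), or None on no match."""
    m = MSG_RE.match(buf)
    if m is None:
        return None
    return (m.group('subject'), int(m.group('sid')),
            m.group('reply') or b'', int(m.group('needed')))
```

A line such as `b'MSG greet.joe 11 _INBOX.abc 5\r\n'` yields the subject, sid, optional reply inbox, and the payload length the parser then waits for before switching to AWAITING_MSG_PAYLOAD.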
q12589
|
Client._server_connect
|
train
|
def _server_connect(self, s):
"""
Sets up a TCP connection to the server.
"""
self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self._socket.setblocking(0)
self._socket.settimeout(1.0)
if self.options["tcp_nodelay"]:
self._socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
self.io = tornado.iostream.IOStream(self._socket,
max_buffer_size=self._max_read_buffer_size,
max_write_buffer_size=self._max_write_buffer_size,
read_chunk_size=self._read_chunk_size)
# Connect to server with a deadline
future = self.io.connect((s.uri.hostname, s.uri.port))
yield tornado.gen.with_timeout(
timedelta(seconds=self.options["connect_timeout"]), future)
# Called whenever disconnected from the server.
self.io.set_close_callback(self._process_op_err)
|
python
|
{
"resource": ""
}
|
q12590
|
Client.connect_command
|
train
|
def connect_command(self):
'''
Generates a JSON string with the params to be used
when sending CONNECT to the server.
->> CONNECT {"verbose": false, "pedantic": false, "lang": "python2" }
'''
options = {
"verbose": self.options["verbose"],
"pedantic": self.options["pedantic"],
"lang": __lang__,
"version": __version__,
"protocol": PROTOCOL
}
if "auth_required" in self._server_info:
if self._server_info["auth_required"] == True:
# In case there is no password, then consider handle
# sending a token instead.
if self.options["user"] is not None and self.options["password"] is not None:
options["user"] = self.options["user"]
options["pass"] = self.options["password"]
elif self.options["token"] is not None:
options["auth_token"] = self.options["token"]
elif self._current_server.uri.password is None:
options["auth_token"] = self._current_server.uri.username
else:
options["user"] = self._current_server.uri.username
options["pass"] = self._current_server.uri.password
if self.options["name"] is not None:
options["name"] = self.options["name"]
if self.options["no_echo"] is not None:
options["echo"] = not self.options["no_echo"]
args = json.dumps(options, sort_keys=True)
return CONNECT_PROTO.format(CONNECT_OP, args, _CRLF_)
|
python
|
{
"resource": ""
}
|
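The CONNECT line ultimately flushed to the server is just the verb followed by a sorted-key JSON object and a CRLF. A standalone sketch (the option names come from the excerpt; the function name is illustrative):

```python
import json

def build_connect_line(options):
    # Mirrors the excerpt: sorted-key JSON after the CONNECT verb,
    # terminated by CRLF as the protocol requires.
    args = json.dumps(options, sort_keys=True)
    return "CONNECT {}\r\n".format(args)

line = build_connect_line(
    {"verbose": False, "pedantic": False, "lang": "python2"})
# -> 'CONNECT {"lang": "python2", "pedantic": false, "verbose": false}\r\n'
```

Sorting the keys keeps the emitted line deterministic, which makes the handshake easy to assert on in tests.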
q12591
|
Client.send_command
|
train
|
def send_command(self, cmd, priority=False):
"""
Flushes a command to the server as a bytes payload.
"""
if priority:
self._pending.insert(0, cmd)
else:
self._pending.append(cmd)
self._pending_size += len(cmd)
if self._pending_size > DEFAULT_PENDING_SIZE:
yield self._flush_pending()
|
python
|
{
"resource": ""
}
|
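The batching behavior of send_command — append (or prepend, for priority commands) and flush once a size threshold is crossed — can be sketched without the I/O layer. The class name and the 32 KiB default are assumptions, since DEFAULT_PENDING_SIZE is not shown in this excerpt:

```python
class PendingBuffer:
    """Accumulates commands and reports when a flush is due."""

    def __init__(self, limit=32 * 1024):  # assumed default threshold
        self.pending = []
        self.size = 0
        self.limit = limit

    def add(self, cmd, priority=False):
        if priority:
            self.pending.insert(0, cmd)  # jump the queue, e.g. for PONG
        else:
            self.pending.append(cmd)
        self.size += len(cmd)
        return self.size > self.limit  # True: caller should flush now
```

With `limit=10`, adding `b'PING\r\n'` (6 bytes) reports no flush, while a second 6-byte command pushes the total to 12 and signals one.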
q12592
|
Client._flush_timeout
|
train
|
def _flush_timeout(self, timeout):
"""
Takes a timeout and sets up a future which will return True
once the server responds back otherwise raise a TimeoutError.
"""
future = tornado.concurrent.Future()
yield self._send_ping(future)
try:
result = yield tornado.gen.with_timeout(
timedelta(seconds=timeout), future)
except tornado.gen.TimeoutError:
# Set the future to False so it can be ignored in _process_pong,
# and try to remove from the list of pending pongs.
future.set_result(False)
for i, pong_future in enumerate(self._pongs):
if pong_future == future:
del self._pongs[i]
break
raise
raise tornado.gen.Return(result)
|
python
|
{
"resource": ""
}
|
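The pattern here — race a PONG future against a deadline, and mark the future as abandoned on timeout so a late PONG can be ignored — ports directly to the standard library. A sketch using asyncio rather than tornado; the names are illustrative:

```python
import asyncio

async def flush_timeout(pong_waiter, timeout):
    """Wait for the server's PONG future or raise TimeoutError."""
    try:
        # shield() keeps the waiter alive so we can still mark it on timeout.
        return await asyncio.wait_for(asyncio.shield(pong_waiter), timeout)
    except asyncio.TimeoutError:
        # Set the abandoned waiter to False so a late PONG is ignored.
        if not pong_waiter.done():
            pong_waiter.set_result(False)
        raise

async def main():
    loop = asyncio.get_running_loop()
    pong = loop.create_future()
    loop.call_later(0.01, pong.set_result, True)  # simulated PONG arrival
    return await flush_timeout(pong, 1.0)

result = asyncio.run(main())  # -> True
```

The excerpt instead removes the future from a `self._pongs` list; either way the goal is the same: a stale PONG must not resolve a waiter the caller has already given up on.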
q12593
|
Client.subscribe
|
train
|
def subscribe(
self,
subject="",
queue="",
cb=None,
future=None,
max_msgs=0,
is_async=False,
pending_msgs_limit=DEFAULT_SUB_PENDING_MSGS_LIMIT,
pending_bytes_limit=DEFAULT_SUB_PENDING_BYTES_LIMIT,
):
"""
Sends a SUB command to the server. Takes a queue parameter
which can be used in case of distributed queues or left empty
if it is not the case, and a callback that will be dispatched
message for processing them.
"""
if self.is_closed:
raise ErrConnectionClosed
if self.is_draining:
raise ErrConnectionDraining
self._ssid += 1
sid = self._ssid
sub = Subscription(
subject=subject,
queue=queue,
cb=cb,
future=future,
max_msgs=max_msgs,
is_async=is_async,
sid=sid,
)
self._subs[sid] = sub
if cb is not None:
sub.pending_msgs_limit = pending_msgs_limit
sub.pending_bytes_limit = pending_bytes_limit
sub.pending_queue = tornado.queues.Queue(
maxsize=pending_msgs_limit)
@tornado.gen.coroutine
def wait_for_msgs():
while True:
sub = wait_for_msgs.sub
err_cb = wait_for_msgs.err_cb
try:
if sub.closed:
break
msg = yield sub.pending_queue.get()
if msg is None:
break
sub.received += 1
sub.pending_size -= len(msg.data)
if sub.max_msgs > 0 and sub.received >= sub.max_msgs:
# If we have hit the max for delivered msgs, remove sub.
self._subs.pop(sub.sid, None)
self._remove_subscription(sub)
# Invoke depending of type of handler.
if sub.is_async:
# NOTE: Deprecate this usage in a next release,
# the handler implementation ought to decide
# the concurrency level at which the messages
# should be processed.
self._loop.spawn_callback(sub.cb, msg)
else:
yield sub.cb(msg)
except Exception as e:
# All errors from calling an async subscriber
# handler are async errors.
if err_cb is not None:
yield err_cb(e)
# Bind the subscription and error cb if present
wait_for_msgs.sub = sub
wait_for_msgs.err_cb = self._error_cb
self._loop.spawn_callback(wait_for_msgs)
elif future is not None:
# Used to handle the single response from a request
# based on auto unsubscribe.
sub.future = future
# Send SUB command...
sub_cmd = b''.join([
SUB_OP, _SPC_,
sub.subject.encode(), _SPC_,
sub.queue.encode(), _SPC_, ("%d" % sid).encode(), _CRLF_
])
yield self.send_command(sub_cmd)
yield self._flush_pending()
raise tornado.gen.Return(sid)
|
python
|
{
"resource": ""
}
|
q12594
|
Client.subscribe_async
|
train
|
def subscribe_async(self, subject, **kwargs):
"""
Schedules callback from subscription to be processed asynchronously
in the next iteration of the loop.
"""
kwargs["is_async"] = True
sid = yield self.subscribe(subject, **kwargs)
raise tornado.gen.Return(sid)
|
python
|
{
"resource": ""
}
|
q12595
|
Client.unsubscribe
|
train
|
def unsubscribe(self, ssid, max_msgs=0):
"""
Takes a subscription sequence id and removes the subscription
from the client, optionally after receiving more than max_msgs,
and unsubscribes immediatedly.
"""
if self.is_closed:
raise ErrConnectionClosed
sub = None
try:
sub = self._subs[ssid]
except KeyError:
# Already unsubscribed.
return
# In case subscription has already received enough messages
# then announce to the server that we are unsubscribing and
# remove the callback locally too.
if max_msgs == 0 or sub.received >= max_msgs:
self._subs.pop(ssid, None)
self._remove_subscription(sub)
        # We will send these for all subs when we reconnect anyway,
        # so we can suppress sending it here.
if not self.is_reconnecting:
yield self.auto_unsubscribe(ssid, max_msgs)
|
python
|
{
"resource": ""
}
|
q12596
|
Client._process_ping
|
train
|
def _process_ping(self):
"""
The server will be periodically sending a PING, and if the the client
does not reply a PONG back a number of times, it will close the connection
sending an `-ERR 'Stale Connection'` error.
"""
yield self.send_command(PONG_PROTO)
if self._flush_queue.empty():
yield self._flush_pending()
|
python
|
{
"resource": ""
}
|
q12597
|
Client._process_msg
|
train
|
def _process_msg(self, sid, subject, reply, data):
"""
Dispatches the received message to the stored subscription.
It first tries to detect whether the message should be
dispatched to a passed callback. In case there was not
a callback, then it tries to set the message into a future.
"""
payload_size = len(data)
self.stats['in_msgs'] += 1
self.stats['in_bytes'] += payload_size
msg = Msg(subject=subject.decode(), reply=reply.decode(), data=data)
# Don't process the message if the subscription has been removed
sub = self._subs.get(sid)
if sub is None:
raise tornado.gen.Return()
# Check if it is an old style request.
if sub.future is not None:
sub.future.set_result(msg)
# Discard subscription since done
self._subs.pop(sid, None)
self._remove_subscription(sub)
raise tornado.gen.Return()
# Let subscription wait_for_msgs coroutine process the messages,
# but in case sending to the subscription task would block,
        # then consider it to be a slow consumer and drop the message.
try:
sub.pending_size += payload_size
if sub.pending_size >= sub.pending_bytes_limit:
                # Subtract the bytes again since the message is being
                # thrown away and should not count as pending data.
sub.pending_size -= payload_size
if self._error_cb is not None:
yield self._error_cb(ErrSlowConsumer())
raise tornado.gen.Return()
sub.pending_queue.put_nowait(msg)
except tornado.queues.QueueFull:
if self._error_cb is not None:
yield self._error_cb(ErrSlowConsumer())
|
python
|
{
"resource": ""
}
|
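The two drop conditions in _process_msg — a byte budget checked before enqueueing, and a bounded queue that refuses new items — can be exercised in isolation. A synchronous sketch with illustrative names and limits:

```python
import queue

class Sub:
    def __init__(self, pending_bytes_limit=16, pending_msgs_limit=8):
        self.pending_size = 0
        self.pending_bytes_limit = pending_bytes_limit
        self.pending_queue = queue.Queue(maxsize=pending_msgs_limit)

def deliver(sub, payload):
    """Queue a payload, or report a slow consumer when a limit is hit."""
    sub.pending_size += len(payload)
    if sub.pending_size >= sub.pending_bytes_limit:
        # The message is dropped, so take its bytes back out.
        sub.pending_size -= len(payload)
        return "slow consumer"
    try:
        sub.pending_queue.put_nowait(payload)
    except queue.Full:
        return "slow consumer"
    return "queued"
```

With `pending_bytes_limit=8`, a 5-byte payload is queued, and a second one trips the byte budget and is dropped rather than blocking the reader.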
q12598
|
Client._process_info
|
train
|
def _process_info(self, info_line):
"""
Process INFO lines sent by the server to reconfigure client
with latest updates from cluster to enable server discovery.
"""
info = tornado.escape.json_decode(info_line.decode())
if 'connect_urls' in info:
if info['connect_urls']:
connect_urls = []
for connect_url in info['connect_urls']:
uri = urlparse("nats://%s" % connect_url)
srv = Srv(uri)
srv.discovered = True
# Filter for any similar server in the server pool already.
should_add = True
for s in self._server_pool:
if uri.netloc == s.uri.netloc:
should_add = False
if should_add:
connect_urls.append(srv)
if self.options["dont_randomize"] is not True:
shuffle(connect_urls)
for srv in connect_urls:
self._server_pool.append(srv)
|
python
|
{
"resource": ""
}
|
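The server-discovery filtering above — parse each advertised address and skip those whose netloc is already pooled — reduces to a small helper. The function name and the set-of-netlocs representation are illustrative, not from the excerpt:

```python
from urllib.parse import urlparse

def merge_connect_urls(pool_netlocs, connect_urls):
    """Return URIs for newly discovered servers, skipping known netlocs."""
    added = []
    for connect_url in connect_urls:
        uri = urlparse("nats://%s" % connect_url)
        if uri.netloc in pool_netlocs:
            continue  # an equivalent server is already in the pool
        pool_netlocs.add(uri.netloc)
        added.append(uri)
    return added
```

The excerpt does the same membership check with a linear scan over the pool; a set of netlocs makes the dedup intent explicit.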
q12599
|
Client._next_server
|
train
|
def _next_server(self):
"""
Chooses next available server to connect.
"""
if self.options["dont_randomize"]:
server = self._server_pool.pop(0)
self._server_pool.append(server)
else:
shuffle(self._server_pool)
s = None
for server in self._server_pool:
if self.options["max_reconnect_attempts"] > 0 and (
server.reconnects >
self.options["max_reconnect_attempts"]):
continue
else:
s = server
return s
|
python
|
{
"resource": ""
}
|
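The selection logic can be sketched as a plain function. Note one difference: the excerpt keeps scanning and ends up returning the last eligible server, while this simplified version returns the first one that has not exhausted its reconnect budget (names are illustrative):

```python
import random

def next_server(pool, max_reconnect_attempts, dont_randomize=False):
    """Pick the next eligible server from the pool, or None."""
    if dont_randomize:
        pool.append(pool.pop(0))  # plain round-robin rotation
    else:
        random.shuffle(pool)
    for server in pool:
        if 0 < max_reconnect_attempts < server["reconnects"]:
            continue  # this server exhausted its reconnect budget
        return server
    return None
```

A `max_reconnect_attempts` of 0 or less means unlimited retries, matching the `> 0` guard in the excerpt.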