| code string | signature string | docstring string | loss_without_docstring float64 | loss_with_docstring float64 | factor float64 |
|---|---|---|---|---|---|
try:
with open(self._ef_site_config, 'r') as yml_file:
return yaml.safe_load(yml_file)
except (IOError, yaml.parser.ParserError) as error:
print("Error: {}".format(error), file=sys.stderr)
sys.exit(1) | def load(self) | Loads the config | 3.575196 | 3.542599 | 1.009201 |
with tf.variable_scope("interpolate_focus_loss"):
# Select the probs or weights with the labels.
t = tf.reduce_sum(labels * interpolation_values, axis=-1)
return (1 - t) * loss1 + t * loss2 | def interpolate_loss(labels, loss1, loss2, interpolation_values) | Interpolate two losses linearly.
:param labels: A float tensor of shape [batch_size, ..., num_classes] representing the label class probabilities.
:param loss1: A float tensor of shape [batch_size, ...] representing the loss1 for interpolation.
:param loss2: A float tensor of shape [batch_size, ...] representing the loss2 for interpolation.
:param interpolation_values: Per-class values controlling how much focal loss should be interpolated in.
:return: A tensor representing the weighted cross entropy. | 6.444309 | 6.322377 | 1.019286 |
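The per-sample arithmetic of `interpolate_loss` can be illustrated without TensorFlow. The sketch below is a hedged pure-Python analogue (the name `interpolate_loss_py` and scalar loss inputs are assumptions for illustration, not part of the library):

```python
def interpolate_loss_py(labels, loss1, loss2, interpolation_values):
    # Dot product selects the interpolation value of the labelled class.
    t = sum(l * v for l, v in zip(labels, interpolation_values))
    # Linear blend: t == 0 yields loss1, t == 1 yields loss2.
    return (1 - t) * loss1 + t * loss2
```

With a one-hot label `[0, 1, 0]` and per-class values `[0.0, 0.5, 1.0]`, half of each loss is used.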
with tf.variable_scope("alpha_balance"):
# Broadcast multiply labels with alpha weights to select weights and then reduce them along last axis.
weights = tf.reduce_sum(labels * alpha_weights, axis=-1)
return weights * loss | def alpha_balance_loss(labels, loss, alpha_weights) | Calculate the alpha balanced cross_entropy.
This means for each sample the cross entropy is calculated and then weighted by the class specific weight.
:param labels: A float tensor of shape [batch_size, ..., num_classes] representing the label class probabilities.
:param loss: A float tensor of shape [batch_size, ...] representing the loss that should be focused.
:param alpha_weights: A float tensor of shape [1, ..., num_classes] (... is filled with ones to match number
of dimensions to labels tensor) representing the weights for each class.
:return: A tensor representing the weighted cross entropy. | 6.491366 | 7.164525 | 0.906043 |
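Alpha balancing reduces to a weight lookup followed by a multiply. Below is a hedged pure-Python sketch of the same arithmetic for a single sample (the name and scalar inputs are illustrative assumptions):

```python
def alpha_balance_py(labels, loss, alpha_weights):
    # One-hot labels select the weight of the true class via a dot product.
    weight = sum(l * a for l, a in zip(labels, alpha_weights))
    return weight * loss
```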
with tf.variable_scope("batch_alpha_balance"):
# Compute the occurrence probability for each class
mu, _ = tf.nn.moments(labels, [0, 1, 2])
# For weighting a class should be down weighted by its occurrence probability.
not_mu = 1 - mu
# Select the class specific not_mu
not_mu_class = tf.reduce_sum(labels * not_mu, axis=-1)
return not_mu_class * loss | def batch_alpha_balance_loss(labels, loss) | Calculate the alpha balanced cross_entropy.
This means for each sample the cross entropy is calculated and then weighted by the class specific weight.
There is no paper for this type of loss yet.
:param labels: A float tensor of shape [batch_size, ..., num_classes] representing the label class probabilities.
:param loss: A float tensor of shape [batch_size, ...] representing the loss that should be focused.
:return: A tensor representing the weighted cross entropy. | 5.66394 | 5.544017 | 1.021631 |
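The batch variant derives the weights from the batch itself: each class is down-weighted by its occurrence probability. Below is a hedged pure-Python sketch over a flat batch of one-hot labels (the name and list-based inputs are assumptions):

```python
def batch_alpha_balance_py(batch_labels, batch_loss):
    n = len(batch_labels)
    num_classes = len(batch_labels[0])
    # Occurrence probability of each class across the batch.
    mu = [sum(lab[c] for lab in batch_labels) / n for c in range(num_classes)]
    # Common classes get small weights, rare classes get large ones.
    not_mu = [1.0 - m for m in mu]
    # Select each sample's class weight and apply it to its loss.
    return [sum(l * w for l, w in zip(lab, not_mu)) * loss
            for lab, loss in zip(batch_labels, batch_loss)]
```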
with tf.variable_scope("mask_loss"):
mask = tf.cast(tf.cast(binary_tensor, tf.bool), tf.float32)
return input_tensor * mask | def mask_loss(input_tensor, binary_tensor) | Mask a loss by using a tensor filled with 0 or 1.
:param input_tensor: A float tensor of shape [batch_size, ...] representing the loss/cross_entropy
:param binary_tensor: A float tensor of shape [batch_size, ...] representing the mask.
:return: A float tensor of shape [batch_size, ...] representing the masked loss. | 3.101772 | 3.048652 | 1.017424 |
mask = tf.cast(tf.cast(mask, tf.bool), tf.float32)
active_pixels = tf.reduce_sum(mask)
active_pixels = tf_if(tf.equal(active_pixels, 0), epsilon, active_pixels)
return tf.reduce_sum(loss, axis=axis) / active_pixels | def mean_on_masked(loss, mask, epsilon=1e-8, axis=None) | Average a loss correctly when it was masked.
:param loss: A float tensor of shape [batch_size, ...] representing the (already masked) loss to be averaged.
:param mask: A float tensor of shape [batch_size, ...] representing the mask.
:param epsilon: Fallback divisor used when the mask selects no pixels, for numerical stability.
:param axis: The dimensions to reduce. If None (the default), reduces all dimensions.
Must be in the range [-rank(input_tensor), rank(input_tensor)).
:return: A float tensor representing the mean of the masked loss. | 2.536991 | 3.211882 | 0.789877 |
return mean_on_masked(mask_loss(input_tensor, binary_tensor), binary_tensor, axis=axis) | def mask_and_mean_loss(input_tensor, binary_tensor, axis=None) | Mask a loss by using a tensor filled with 0 or 1 and average correctly.
:param input_tensor: A float tensor of shape [batch_size, ...] representing the loss/cross_entropy
:param binary_tensor: A float tensor of shape [batch_size, ...] representing the mask.
:param axis: The dimensions to reduce. If None (the default), reduces all dimensions.
Must be in the range [-rank(input_tensor), rank(input_tensor)).
:return: A float tensor representing the masked and averaged loss. | 4.890055 | 8.524791 | 0.573628 |
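Masking and averaging together amount to "sum of surviving losses divided by the number of active mask entries". A hedged pure-Python sketch of that arithmetic (names are assumptions):

```python
def mask_and_mean_py(losses, mask, epsilon=1e-8):
    # Zero out losses wherever the mask is off.
    masked = [l * (1.0 if m else 0.0) for l, m in zip(losses, mask)]
    # Count active entries; fall back to epsilon to avoid division by zero.
    active = sum(1.0 for m in mask if m)
    if active == 0:
        active = epsilon
    return sum(masked) / active
```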
with tf.variable_scope("variance_corrected_loss"):
sigma_cost = 0
if sigma_2 is None:
# FIXME the paper has been updated Apr 2018, check if implementation is still valid.
sigma = tf.get_variable(name="sigma", dtype=tf.float32, initializer=tf.constant(1.0), trainable=True)
sigma_2 = tf.pow(sigma, 2)
tf.summary.scalar("sigma2", sigma_2)
sigma_cost = tf.log(sigma_2 + 1.0)
return 0.5 / sigma_2 * loss + sigma_cost | def variance_corrected_loss(loss, sigma_2=None) | Create a variance corrected loss.
Summing variance corrected losses yields the same result as multiloss.
This is especially useful for Keras, which automatically sums multiple losses.
This multi-loss implementation is inspired by the Paper "Multi-Task Learning Using Uncertainty to Weight Losses
for Scene Geometry and Semantics" by Kendall, Gal and Cipolla.
:param loss: The loss that should be variance corrected.
:param sigma_2: Optional a variance (sigma squared) to use. If none is provided it is learned.
:return: The variance corrected loss. | 3.54409 | 3.477222 | 1.01923 |
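The variance correction itself is a two-term formula: the loss scaled by 0.5/sigma^2 plus a log penalty that stops sigma from growing without bound. Below is a hedged pure-Python sketch for a fixed sigma^2 (the learned-sigma path is omitted; the name is an assumption):

```python
import math

def variance_corrected_py(loss, sigma_2):
    # 0.5 / sigma^2 scales the loss down as uncertainty grows;
    # log(sigma^2 + 1) penalises large sigma so it cannot explode.
    return 0.5 / sigma_2 * loss + math.log(sigma_2 + 1.0)
```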
with tf.variable_scope(logging_namespace):
sum_loss = 0
for loss_name, loss in losses.items():
if loss_name not in exclude_from_weighting:
with tf.variable_scope(loss_name):
sum_loss += variance_corrected_loss(loss)
else:
sum_loss += loss
return sum_loss | def multiloss(losses, logging_namespace="multiloss", exclude_from_weighting=()) | Create a loss from multiple losses by mixing them.
This multi-loss implementation is inspired by the Paper "Multi-Task Learning Using Uncertainty to Weight Losses
for Scene Geometry and Semantics" by Kendall, Gal and Cipolla.
:param losses: A dict containing all losses that should be merged.
:param logging_namespace: Variable scope in which multiloss lives.
:param exclude_from_weighting: A list of losses that are already weighted and should not be sigma weighted.
:return: A single loss. | 2.501373 | 2.313552 | 1.081183 |
with tf.variable_scope("focus_loss"):
# Compute p_t that is used in paper.
# FIXME is it possible that the 1-p term does not make any sense?
p_t = tf.reduce_sum(probs * labels, axis=-1)  # + tf.reduce_sum((1.0 - probs) * (1.0 - labels), axis=-1)
focal_factor = tf.pow(1.0 - p_t, gamma) if gamma > 0 else 1 # Improve stability for gamma = 0
return tf.stop_gradient(focal_factor) * loss | def focus_loss(labels, probs, loss, gamma) | Calculate the alpha balanced focal loss.
See the focal loss paper: "Focal Loss for Dense Object Detection" [by Facebook AI Research]
:param labels: A float tensor of shape [batch_size, ..., num_classes] representing the label class probabilities.
:param probs: A float tensor of shape [batch_size, ..., num_classes] representing the probs (after softmax).
:param loss: A float tensor of shape [batch_size, ...] representing the loss that should be focused.
:param gamma: The focus parameter.
:return: A tensor representing the weighted cross entropy. | 4.471058 | 4.482658 | 0.997412 |
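The focal factor is just (1 - p_t)^gamma applied to the per-sample loss. A hedged pure-Python sketch of the factor for one sample (names are assumptions):

```python
def focal_factor_py(labels, probs, gamma):
    # p_t: the probability the model assigns to the true class.
    p_t = sum(l * p for l, p in zip(labels, probs))
    # gamma == 0 degenerates to a factor of 1 (plain cross entropy).
    return (1.0 - p_t) ** gamma if gamma > 0 else 1.0
```

Well-classified samples (p_t near 1) are strongly down-weighted for gamma > 0.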
'''Decorator for composable network layers.'''
def layer_decorated(self, *args, **kwargs):
# Automatically set a name if not provided.
name = kwargs.setdefault('name', self.get_unique_name(op.__name__))
# Figure out the layer inputs.
if len(self.terminals) == 0:
raise RuntimeError('No input variables found for layer %s.' % name)
elif len(self.terminals) == 1:
layer_input = self.terminals[0]
else:
layer_input = list(self.terminals)
# Perform the operation and get the output.
layer_output = op(self, layer_input, *args, **kwargs)
# Add to layer LUT.
self.layers[name] = layer_output
# This output is now the input for the next layer.
self.feed(layer_output)
# Return self for chained calls.
return self
return layer_decorated | def layer(op) | Decorator for composable network layers. | 1.569358 | 1.611814 | 0.973659 |
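The decorator's chaining behaviour can be demonstrated framework-free. The sketch below is a hedged, minimal re-creation of the pattern with a toy network class (`TinyNet` and its ops are invented for illustration):

```python
def layer(op):
    # Minimal re-creation of the composable-layer decorator.
    def layer_decorated(self, *args, **kwargs):
        name = kwargs.setdefault('name', self.get_unique_name(op.__name__))
        if not self.terminals:
            raise RuntimeError('No input variables found for layer %s.' % name)
        layer_input = self.terminals[0] if len(self.terminals) == 1 else list(self.terminals)
        layer_output = op(self, layer_input, *args, **kwargs)
        self.layers[name] = layer_output     # layer LUT
        self.feed(layer_output)              # output becomes the next layer's input
        return self                          # enable chained calls
    return layer_decorated

class TinyNet:
    def __init__(self, value):
        self.layers = {'input': value}
        self.terminals = [value]

    def get_unique_name(self, prefix):
        ident = sum(t.startswith(prefix) for t in self.layers) + 1
        return '%s_%d' % (prefix, ident)

    def feed(self, *args):
        self.terminals = [self.layers[a] if isinstance(a, str) else a for a in args]
        return self

    @layer
    def double(self, layer_input, name):
        return layer_input * 2

    @layer
    def add_one(self, layer_input, name):
        return layer_input + 1
```

`TinyNet(3).double().add_one()` leaves 7 on the terminal and records `double_1` and `add_one_1` in the layer LUT.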
'''Decorator for composable network layers.'''
def layer_decorated(self, *args, **kwargs):
# Automatically set a name if not provided.
name = kwargs.setdefault('name', self.get_unique_name(op.__name__))
output_names = kwargs.setdefault('output_names', self.get_unique_name(op.__name__))
# Figure out the layer inputs.
if len(self.terminals) == 0:
raise RuntimeError('No input variables found for layer %s.' % name)
elif len(self.terminals) == 1:
layer_input = self.terminals[0]
else:
layer_input = list(self.terminals)
# Perform the operation and get the output.
layer_output = op(self, layer_input, *args, **kwargs)
# Add to layer LUT.
for i in range(len(output_names)):
self.layers[output_names[i]] = layer_output[i]
# This output is now the input for the next layer.
self.feed(layer_output)
# Return self for chained calls.
return self
return layer_decorated | def multi_output_layer(op) | Decorator for composable network layers. | 1.931064 | 1.912531 | 1.009691 |
'''Load network weights.
data_path: The path to the numpy-serialized network weights
session: The current TensorFlow session
ignore_missing: If true, serialized weights for missing layers are ignored.
'''
if data_path.endswith(".npz"):
data_dict = np.load(data_path)
keys = sorted(data_dict.keys())
for i, k in enumerate(keys):
data = data_dict[k]
op_name = "_".join(k.split("_")[:-1])
param_name = "weights" if k.split("_")[-1] == "W" else "biases"
if self.verbose:
print("Loaded: {} {}".format(op_name, param_name))
if op_name not in self.weights:
self.weights[op_name] = {}
self.weights[op_name][param_name] = data
elif data_path.endswith(".npy"):
data_dict = np.load(data_path, allow_pickle=True).item()
for op_name in data_dict:
with tf.variable_scope(op_name, reuse=True):
for param_name, data in data_dict[op_name].items():
if self.verbose:
print("Loaded: {} {}".format(op_name, param_name))
if op_name not in self.weights:
self.weights[op_name] = {}
self.weights[op_name][param_name] = data
else:
raise RuntimeError("Invalid file type.") | def _load(self, data_path, ignore_missing=False) | Load network weights.
data_path: The path to the numpy-serialized network weights
session: The current TensorFlow session
ignore_missing: If true, serialized weights for missing layers are ignored. | 1.934888 | 1.719115 | 1.125514 |
'''Set the input(s) for the next operation by replacing the terminal nodes.
The arguments can be either layer names or the actual layers.
'''
assert len(args) != 0
self.terminals = []
for fed_layer in args:
if isinstance(fed_layer, str):
try:
fed_layer = self.layers[fed_layer]
except KeyError:
raise KeyError('Unknown layer name fed: %s' % fed_layer)
self.terminals.append(fed_layer)
return self | def feed(self, *args) | Set the input(s) for the next operation by replacing the terminal nodes.
The arguments can be either layer names or the actual layers. | 3.69042 | 2.119252 | 1.741378 |
'''Returns an index-suffixed unique name for the given prefix.
This is used for auto-generating layer names based on the type-prefix.
'''
ident = sum(t.startswith(prefix) for t in self.layers) + 1
return '%s_%d' % (prefix, ident) | def get_unique_name(self, prefix) | Returns an index-suffixed unique name for the given prefix.
This is used for auto-generating layer names based on the type-prefix. | 4.156273 | 2.337513 | 1.778075 |
'''Creates a new TensorFlow variable.'''
if op_name in self.weights and name in self.weights[op_name]:
if self.verbose:
print("Using: {} {}".format(op_name, name))
initializer = tf.constant(self.weights[op_name][name], shape=shape)
return tf.get_variable(name, initializer=initializer, trainable=self.trainable)
return tf.get_variable(name, shape, trainable=self.trainable) | def make_var(self, op_name, name, shape) | Creates a new TensorFlow variable. | 2.591475 | 2.631041 | 0.984962 |
if mode == tf.estimator.ModeKeys.TRAIN:
return "train"
if mode == tf.estimator.ModeKeys.EVAL:
return "eval"
if mode == tf.estimator.ModeKeys.PREDICT:
return "predict"
return "unknown" | def mode_to_str(mode) | Converts a tf.estimator.ModeKeys into a human readable string.
:param mode: The mode as a tf.estimator.ModeKeys value.
:return: A human readable string representing the mode. | 1.716459 | 1.71045 | 1.003513 |
int_condition = tf.to_float(tf.to_int64(condition))
return a * int_condition + (1 - int_condition) * b | def tf_if(condition, a, b) | Implements an if condition in tensorflow.
:param condition: A boolean condition.
:param a: Case a.
:param b: Case b.
:return: A if condition was true, b otherwise. | 3.527909 | 4.800988 | 0.73483 |
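`tf_if` is a branchless select: cast the condition to 0/1 and blend. The same trick in hedged pure Python (the name is an assumption):

```python
def branchless_if(condition, a, b):
    # Cast the boolean to 0.0/1.0 and blend, exactly like the tensor version.
    c = 1.0 if condition else 0.0
    return a * c + (1 - c) * b
```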
prefix = prefix.replace("\\", "/")
folder = "/".join(prefix.split("/")[:-1])
phase = prefix.split("/")[-1]
config = json.load(open(prefix + '_config.json'))
num_threads = config["num_threads"]
filenames = [folder + "/" + f for f in listdir(folder) if isfile(join(folder, f)) and phase in f and "config.json" not in f]
# Create a tf object for the filename list and the readers.
filename_queue = tf.train.string_input_producer(filenames)
readers = [_read_tf_record(filename_queue, config) for _ in range(num_threads)]
batch_dict = tf.train.shuffle_batch_join(
readers,
batch_size=batch_size,
capacity=10 * batch_size,
min_after_dequeue=5 * batch_size
)
# Add batch dimension to feature and label shape
feature_batch = {}
label_batch = {}
for k in batch_dict.keys():
shape = tuple([batch_size] + list(config[k]["shape"]))
tensor = tf.reshape(batch_dict[k], shape, name="input/"+phase+"/" + k + "_reshape")
if "feature_" in k:
feature_batch["_".join(k.split("_")[1:])] = tensor
if "label_" in k:
label_batch["_".join(k.split("_")[1:])] = tensor
return feature_batch, label_batch | def _read_data_legacy(prefix, batch_size) | Loads a tf record as tensors you can use.
:param prefix: The path prefix as defined in the write data method.
:param batch_size: The batch size you want for the tensors.
:return: A feature tensor dict and a label tensor dict. | 2.693008 | 2.659107 | 1.012749 |
prefix = prefix.replace("\\", "/")
folder = "/".join(prefix.split("/")[:-1])
phase = prefix.split("/")[-1]
config = json.load(open(prefix + '_config.json'))
num_threads = config["num_threads"]
filenames = [folder + "/" + f for f in listdir(folder) if isfile(join(folder, f)) and phase in f and "config.json" not in f]
dataset = tf.data.TFRecordDataset(filenames=filenames, num_parallel_reads=num_threads)
dataset = dataset.shuffle(buffer_size=10 * batch_size)
dataset = dataset.repeat()
dataset = dataset.map(map_func=_create_parser_fn(config, phase), num_parallel_calls=num_threads)
if augmentation is not None:
dataset = dataset.map(map_func=augmentation, num_parallel_calls=num_threads)
dataset = dataset.batch(batch_size=batch_size)
dataset = dataset.prefetch(buffer_size=1)
return dataset | def _read_data(prefix, batch_size, augmentation=None) | Loads a dataset.
:param prefix: The path prefix as defined in the write data method.
:param batch_size: The batch size you want for the tensors.
:param augmentation: An augmentation function.
:return: A tensorflow.data.dataset object. | 2.094593 | 2.184144 | 0.958999 |
# Check if the version is too old for the dataset api to work better than manually loading data.
# Compare parsed version numbers so that e.g. 1.10 is not mistaken for a 1.1.x release.
major, minor = (int(x) for x in tf.__version__.split(".")[:2])
if major == 1 and minor <= 6:
def input_fn():
with tf.variable_scope("input_pipeline"):
return _read_data_legacy(prefix, batch_size)
return input_fn
else:
def input_fn():
with tf.variable_scope("input_pipeline"):
return _read_data(prefix, batch_size, augmentation)
return input_fn | def create_input_fn(prefix, batch_size, augmentation=None) | Loads a dataset.
:param prefix: The path prefix as defined in the write data method.
:param batch_size: The batch size you want for the tensors.
:param augmentation: An augmentation function.
:return: An input function for a tf estimator. | 2.496884 | 2.487857 | 1.003628 |
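The TF-version gate can be expressed as a standalone helper. This is a hypothetical sketch (the name is assumed); parsing the version numerically avoids a prefix check such as startswith("1.1") also matching 1.10 and later:

```python
def is_legacy_tf(version):
    # Dataset API is preferred from TF 1.7 on; anything 1.6 or older is "legacy".
    parts = version.split(".")
    major, minor = int(parts[0]), int(parts[1])
    return major == 1 and minor <= 6
```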
if not isinstance(sequence, Sequence) and not (callable(getattr(sequence, "__getitem__", None)) and callable(getattr(sequence, "__len__", None))):
raise ValueError("sequence must be tf.keras.utils.Sequence or a subtype or implement __len__(self) and __getitem__(self, idx)")
prefix = os.path.join(hyper_params.train.get("tf_records_path", "tfrecords"), mode)
prefix = prefix.replace("\\", "/")
data_tmp_folder = "/".join(prefix.split("/")[:-1])
if not os.path.exists(data_tmp_folder):
os.makedirs(data_tmp_folder)
args = [(hyper_params, sequence, num_threads, i, (prefix + "_%d.tfrecords") % i) for i in range(num_threads)]
# Retrieve a single batch
sample_feature, sample_label = sequence[0]
config = {"num_threads": num_threads}
for k in sample_feature.keys():
config["feature_" + k] = {"shape": sample_feature[k].shape[1:], "dtype": sample_feature[k].dtype.name}
for k in sample_label.keys():
config["label_" + k] = {"shape": sample_label[k].shape[1:], "dtype": sample_label[k].dtype.name}
with open(prefix + '_config.json', 'w') as outfile:
json.dump(config, outfile)
pool = Pool(processes=num_threads)
pool.map(_write_tf_record_pool_helper, args) | def write_data(hyper_params,
mode,
sequence,
num_threads) | Write a tf record containing a feature dict and a label dict.
:param hyper_params: The hyper parameters required for writing {"problem": {"augmentation": {"steps": Int}}}
:param mode: The mode specifies the purpose of the data. Typically it is either "train" or "validation".
:param sequence: A tf.keras.utils.sequence.
:param num_threads: The number of threads. (Recommended: 4 for training and 2 for validation seems to work nicely)
:return: | 2.673705 | 2.607838 | 1.025257 |
env_valid(value)
self._env_full = value
if value.find(".") == -1:
# plain environment, e.g. prod, staging, proto<n>
self._env = value
self._account_alias = get_account_alias(value)
else:
# "<env>.<account_alias>" form, e.g. global.ellationeng or mgmt.ellationeng
self._env, self._account_alias = value.split(".")
# since we extracted an env, must reconfirm that it's legit
global_env_valid(self._env)
self._env_short = get_env_short(value) | def env(self, value) | Sets context.env, context.env_short, and context.account_alias if env is valid
For envs of the form "global.<account>" and "mgmt.<account_alias>",
env is captured as "global" or "mgmt" and account_alias is parsed
out of the full env rather than looked up
Args:
value: the fully-qualified env value
Raises:
ValueError if env is not valid | 8.95974 | 6.357053 | 1.409417 |
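The env parsing rule ("plain env" vs "<env>.<account_alias>") can be sketched as a small standalone function mirroring the setter's split logic (the name is hypothetical):

```python
def split_env(value):
    # Plain environments like "prod" carry no account alias.
    if "." not in value:
        return value, None
    # "<env>.<account_alias>" form, e.g. "global.ellationeng".
    env, account_alias = value.split(".")
    return env, account_alias
```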
if type(sr) is not EFServiceRegistry:
raise TypeError("sr value must be type 'EFServiceRegistry'")
self._service_registry = sr | def service_registry(self, sr) | Sets service registry object in context, doesn't check it
Args:
sr: EFServiceRegistry object | 7.59705 | 4.960177 | 1.531609 |
if type(value) is not str:
raise TypeError("account_id value must be a string")
self._account_id = value | def account_id(self, value) | Sets the current account id
Args:
value: current account id (string)
Returns:
None | 7.176274 | 7.95774 | 0.901798 |
if client_id is None:
return self._aws_clients
elif self._aws_clients is not None and client_id in self._aws_clients:
return self._aws_clients[client_id]
else:
return None | def aws_client(self, client_id=None) | Get AWS client if it exists (must have been formerly stored with set_aws_clients)
If client_id is not provided, returns the dictionary of all clients
Args:
client_id: label for the client, e.g. 'ec2'; omit to get a dictionary of all clients
Returns:
aws client if found, or None if not | 2.398226 | 2.200551 | 1.08983 |
if type(clients) is not dict:
raise TypeError("clients must be a dict")
self._aws_clients = clients | def set_aws_clients(self, clients) | Stash a dictionary of AWS clients in the context object
Args:
clients: dictionary of clients | 4.228904 | 4.441717 | 0.952088 |
# Service must exist in service registry
if not context.service_registry.service_record(context.service_name):
fail("service: {} not found in service registry: {}".format(
context.service_name, context.service_registry.filespec))
service_type = context.service_registry.service_record(context.service_name)["type"]
# Key must be valid
if context.key not in EFConfig.VERSION_KEYS:
fail("invalid key: {}; see VERSION_KEYS in ef_config for supported keys".format(context.key))
# Lookup allowed key for service type
if "allowed_types" in EFConfig.VERSION_KEYS[context.key] and \
service_type not in EFConfig.VERSION_KEYS[context.key]["allowed_types"]:
fail("service_type: {} is not allowed for key {}; see VERSION_KEYS[KEY]['allowed_types']"
"in ef_config and validate service registry entry".format(service_type, context.key))
return True | def validate_context(context) | Set the key for the current context.
Args:
context: a populated EFVersionContext object | 4.219247 | 4.13581 | 1.020174 |
# get the current AMI
key = "{}/{}".format(context.env, context.service_name)
print_if_verbose("precheck_ami_id with key: {}".format(key))
current_ami = context.versionresolver.lookup("ami-id,{}".format(key))
print_if_verbose("ami found: {}".format(current_ami))
# If bootstrapping (this will be the first entry in the version history)
# then we can't check it vs. running version
if current_ami is None:
print_if_verbose("precheck passed without check because current AMI is None")
return True
# Otherwise perform a consistency check
# 1. get IDs of instances running the AMI - will find instances in all environments
instances_running_ami = context.aws_client("ec2").describe_instances(
Filters=[{
'Name': 'image-id',
'Values': [current_ami]
}]
)["Reservations"]
if instances_running_ami:
instances_running_ami = [resv["Instances"][0]["InstanceId"] for resv in instances_running_ami]
print_if_verbose("instances running ami {}:\n{}".format(current_ami, repr(instances_running_ami)))
# 2. Get IDs of instances running as <context.env>-<context.service_name>
env_service = "{}-{}".format(context.env, context.service_name)
instances_running_as_env_service = context.aws_client("ec2").describe_instances(
Filters=[{
'Name': 'iam-instance-profile.arn',
'Values': ["arn:aws:iam::*:instance-profile/{}-{}".format(context.env, context.service_name)]
}]
)["Reservations"]
if instances_running_as_env_service:
instances_running_as_env_service = \
[resv["Instances"][0]["InstanceId"] for resv in instances_running_as_env_service]
print_if_verbose("instances running as {}".format(env_service))
print_if_verbose(repr(instances_running_as_env_service))
# 3. Instances running as env-service should be a subset of instances running the AMI
for instance_id in instances_running_as_env_service:
if instance_id not in instances_running_ami:
raise RuntimeError("Instance: {} not running expected ami: {}".format(instance_id, current_ami))
# Check passed - all is well
return True | def precheck_ami_id(context) | Is the AMI in service the same as the AMI marked current in the version records?
This tool won't update records unless the world state is coherent.
Args:
context: a populated EFVersionContext object
Returns:
True if ok to proceed
Raises:
RuntimeError if not ok to proceed | 2.897552 | 2.804551 | 1.033161 |
# get the current dist-hash
key = "{}/{}/dist-hash".format(context.service_name, context.env)
print_if_verbose("precheck_dist_hash with key: {}".format(key))
try:
current_dist_hash = Version(context.aws_client("s3").get_object(
Bucket=EFConfig.S3_VERSION_BUCKET,
Key=key
))
print_if_verbose("dist-hash found: {}".format(current_dist_hash.value))
except ClientError as error:
if error.response["Error"]["Code"] == "NoSuchKey":
# If bootstrapping (this will be the first entry in the version history)
# then we can't check it vs. current version, thus we cannot get the key
print_if_verbose("precheck passed without check because current dist-hash is None")
return True
else:
fail("Exception while prechecking dist_hash for {} {}: {}".format(context.service_name, context.env, error))
# Otherwise perform a consistency check
# 1. get dist version in service for environment
try:
response = urllib2.urlopen(current_dist_hash.location, None, 5)
if response.getcode() != 200:
raise IOError("Non-200 response " + str(response.getcode()) + " reading " + current_dist_hash.location)
dist_hash_in_service = response.read().strip()
except urllib2.URLError as error:
raise IOError("URLError in http_get_dist_version: " + repr(error))
# 2. dist version in service should be the same as "current" dist version
if dist_hash_in_service != current_dist_hash.value:
raise RuntimeError("{} dist-hash in service: {} but expected dist-hash: {}"
.format(key, dist_hash_in_service, current_dist_hash.value))
# Check passed - all is well
return True | def precheck_dist_hash(context) | Is the dist in service the same as the dist marked current in the version records?
This tool won't update records unless the world state is coherent.
Args:
context: a populated EFVersionContext object
Returns:
True if ok to proceed
Raises:
RuntimeError if not ok to proceed | 4.271876 | 4.153538 | 1.028491 |
if context.noprecheck:
return True
func_name = "precheck_" + context.key.replace("-", "_")
if func_name in globals() and isfunction(globals()[func_name]):
return globals()[func_name](context)
else:
return True | def precheck(context) | calls a function named "precheck_<key>" where <key> is context_key with '-' changed to '_'
(e.g. "precheck_ami_id")
Checking function should return True if OK, or raise RuntimeError w/ message if not
Args:
context: a populated EFVersionContext object
Returns:
True if the precheck passed, or if there was no precheck function for context.key
Raises:
RuntimeError if precheck failed, with explanatory message | 3.743734 | 2.818016 | 1.3285 |
s3_key = "{}/{}/{}".format(context.service_name, context.env, context.key)
object_version_list = context.aws_client("s3").list_object_versions(
Bucket=EFConfig.S3_VERSION_BUCKET,
Delimiter='/',
MaxKeys=context.limit,
Prefix=s3_key
)
if "Versions" not in object_version_list:
return []
object_versions = []
for version in object_version_list["Versions"]:
object_version = Version(context.aws_client("s3").get_object(
Bucket=EFConfig.S3_VERSION_BUCKET,
Key=s3_key,
VersionId=version["VersionId"]
))
# Stop if a stable version was found and return_stable was set
if return_stable and object_version.status == EFConfig.S3_VERSION_STATUS_STABLE:
return [object_version]
object_versions.append(object_version)
# If caller is looking for a 'stable' version and we made it to here, a stable version was not found
if return_stable:
return []
else:
return sorted(object_versions, key=lambda v: v.last_modified, reverse=True) | def get_versions(context, return_stable=False) | Get all versions of a key
Args:
context: a populated EFVersionContext object
return_stable: (default:False) If True, stop fetching if 'stable' version is found; return only that version
Returns:
json list of object data sorted in reverse by last_modified (newest version is first). Each item is a dict:
{
'value': <value>,
'last_modified": <YYYY-MM-DDThh:mm:ssZ>, (ISO8601 date time string)
'modified_by': '<arn:aws:...>',
'version_id': '<version_id>',
'status': See EF_Config.S3_VERSION_STATUS_* for possible values
} | 2.849204 | 2.50036 | 1.139517 |
versions = get_versions(context)
for version in versions:
if version.value == value:
return version
fail("Didn't find a matching version for: "
"{}:{} in env/service: {}/{}".format(
context.key, value,
context.env, context.service_name)) | def get_version_by_value(context, value) | Get the latest version that matches the provided ami-id
Args:
context: a populated EFVersionContext object
value: the value of the version to look for | 6.911055 | 6.447622 | 1.071877 |
last_stable = get_versions(context, return_stable=True)
if len(last_stable) != 1:
fail("Didn't find a version marked stable for key: {} in env/service: {}/{}".format(
context.key, context.env, context.service_name))
context.value = last_stable[0].value
context.commit_hash = last_stable[0].commit_hash
context.build_number = last_stable[0].build_number
context.location = last_stable[0].location
context.stable = True
cmd_set(context) | def cmd_rollback(context) | Roll back by finding the most recent "stable" tagged version, and putting it again, so that
it's the new "current" version.
Args:
context: a populated EFVersionContext object | 4.61068 | 4.346208 | 1.060851 |
version = get_version_by_value(context, context.rollback_to)
context.value = version.value
context.commit_hash = version.commit_hash
context.build_number = version.build_number
context.location = version.location
context.stable = True
cmd_set(context) | def cmd_rollback_to(context) | Roll back by finding a specific version in the history of the service and
putting it as the new current version.
Args:
context: a populated EFVersionContext object | 5.030889 | 5.485759 | 0.917082 |
# If key value is a special symbol, see if this env allows it
if context.value in EFConfig.SPECIAL_VERSIONS and context.env_short not in EFConfig.SPECIAL_VERSION_ENVS:
fail("special version: {} not allowed in env: {}".format(context.value, context.env_short))
# If key value is a special symbol, the record cannot be marked "stable"
if context.value in EFConfig.SPECIAL_VERSIONS and context.stable:
fail("special versions such as: {} cannot be marked 'stable'".format(context.value))
# Resolve any references
if context.value == "=prod":
context.value = context.versionresolver.lookup("{},{}/{}".format(context.key, "prod", context.service_name))
elif context.value == "=staging":
context.value = context.versionresolver.lookup("{},{}/{}".format(context.key, "staging", context.service_name))
elif context.value == "=latest":
if not EFConfig.VERSION_KEYS[context.key]["allow_latest"]:
fail("=latest cannot be used with key: {}".format(context.key))
func_name = "_getlatest_" + context.key.replace("-", "_")
if func_name in globals() and isfunction(globals()[func_name]):
context.value = globals()[func_name](context)
else:
raise RuntimeError("{} version for {}/{} is '=latest' but can't look up because method not found: {}".format(
context.key, context.env, context.service_name, func_name))
# precheck to confirm coherent world state before attempting set - whatever that means for the current key type
try:
precheck(context)
except Exception as e:
fail("Precheck failed: {}".format(e))
s3_key = "{}/{}/{}".format(context.service_name, context.env, context.key)
s3_version_status = EFConfig.S3_VERSION_STATUS_STABLE if context.stable else EFConfig.S3_VERSION_STATUS_UNDEFINED
# If the set would put a value and status that are the same as the existing 'current' value/status, don't do it
context.limit = 1
current_version = get_versions(context)
# If there is no 'current version' it's ok, just means the set will write the first entry
if len(current_version) == 1 and current_version[0].status == s3_version_status and \
current_version[0].value == context.value:
print("Version not written because current version and new version have identical value and status: {} {}"
.format(current_version[0].value, current_version[0].status))
return
if not context.commit:
print("=== DRY RUN ===\nUse --commit to set value\n=== DRY RUN ===")
print("would set key: {} with value: {} {} {} {} {}".format(
s3_key, context.value, context.build_number, context.commit_hash, context.location, s3_version_status))
else:
context.aws_client("s3").put_object(
ACL='bucket-owner-full-control',
Body=context.value,
Bucket=EFConfig.S3_VERSION_BUCKET,
ContentEncoding=EFConfig.S3_VERSION_CONTENT_ENCODING,
Key=s3_key,
Metadata={
EFConfig.S3_VERSION_BUILDNUMBER_KEY: context.build_number,
EFConfig.S3_VERSION_COMMITHASH_KEY: context.commit_hash,
EFConfig.S3_VERSION_LOCATION_KEY: context.location,
EFConfig.S3_VERSION_MODIFIEDBY_KEY: context.aws_client("sts").get_caller_identity()["Arn"],
EFConfig.S3_VERSION_STATUS_KEY: s3_version_status
},
StorageClass='STANDARD'
)
print("set key: {} with value: {} {} {} {} {}".format(
s3_key, context.value, context.build_number, context.commit_hash, context.location, s3_version_status)) | def cmd_set(context) | Set the new "current" value for a key.
If the existing current version and the new version have identical /value/ and /status/,
then nothing is written, to avoid stacking up redundant entries in the version table.
Args:
context: a populated EFVersionContext object | 3.586002 | 3.398262 | 1.055246 |
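The skip-if-unchanged check in cmd_set can be sketched in plain Python (the tuple shape below is illustrative, not the real EFVersion record type):

```python
def should_write(current_versions, new_value, new_status):
    """Return False when the newest recorded version already matches.

    current_versions: list of (value, status) tuples, newest first (or empty).
    Mirrors cmd_set: an identical value AND status means no write.
    """
    if len(current_versions) == 1:
        value, status = current_versions[0]
        if value == new_value and status == new_status:
            return False
    return True

# A matching current entry suppresses the write; any difference allows it.
print(should_write([("ami-123", "stable")], "ami-123", "stable"))  # False
print(should_write([("ami-123", "stable")], "ami-456", "stable"))  # True
print(should_write([], "ami-123", "stable"))                       # True
```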
return {
"build_number": self._build_number,
"commit_hash": self._commit_hash,
"last_modified": self._last_modified,
"location": self._location,
"modified_by": self._modified_by,
"status": self._status,
"value": self._value,
"version_id": self._version_id
} | def to_json(self) | called by VersionEncoder.default() when doing json.dumps() on the object
the json materializes in reverse order from the order used here | 2.728096 | 2.414486 | 1.129886 |
# TODO add some housekeeping
for i in range(steps):
self.step(**kwargs) | def learn(self, steps=1, **kwargs) | Train the model using the environment and the agent.
Note that the model might be shared between multiple agents (most likely of the same type)
at the same time.
:param steps: The number of steps to train for. | 10.1731 | 14.738155 | 0.690256 |
default = "default"
if not self.parameters:
return None
# Hierarchically lookup the value
result = None
if default in self.parameters and symbol in self.parameters[default]:
result = self.parameters[default][symbol]
if self.env_short in self.parameters and symbol in self.parameters[self.env_short]:
result = self.parameters[self.env_short][symbol]
# This lookup is redundant when env_short == env, but it's also cheap
if self.env in self.parameters and symbol in self.parameters[self.env]:
result = self.parameters[self.env][symbol]
# Finally, convert any list of items into a single \n-delimited string
if isinstance(result, list):
result = "\n".join(result)
return result | def get_value(self, symbol) | Hierarchically searches for 'symbol' in the parameters blob if there is one (would have
been retrieved by 'load()'). Order is: default, <env_short>, <env>
Args:
symbol: the key to resolve
Returns:
Hierarchically resolved value for 'symbol' in the environment set by the constructor,
or None if a match is not found or there are no parameters | 3.534507 | 2.968988 | 1.190476 |
with tf.variable_scope("debug_overlay"):
if not classification.get_shape()[3] in [1, 2, 3]:
raise RuntimeError("The classification must have 1, 2 or 3 channels in its last dimension, but shape is {}".format(classification.get_shape().as_list()))
size = rgb_image.get_shape()[1:3]
if classification.get_shape()[3] == 1:
classification = tf.pad(classification, [[0, 0], [0, 0], [0, 0], [0, 2]], "CONSTANT")
elif classification.get_shape()[3] == 2:
classification = tf.pad(classification, [[0, 0], [0, 0], [0, 0], [0, 1]], "CONSTANT")
casted_classification = tf.cast(classification, dtype=tf.float32)
target_size = (int(classification.get_shape()[1] * scale), int(classification.get_shape()[2] * scale))
scaled_image = tf.image.resize_images(casted_classification, size=target_size, method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
cropped_img = tf.image.crop_to_bounding_box(scaled_image, 0, 0, size[0], size[1])
return 0.5 * rgb_image + 0.5 * 255 * cropped_img | def overlay_classification_on_image(classification, rgb_image, scale=1) | Overlay a classification either 1 channel or 3 channels on an input image.
:param classification: The classification tensor of shape [bach_size, v, u, 1] or [batch_size, v, u, 3].
The value range of the classification tensor is supposed to be 0 to 1.
:param rgb_image: The input image of shape [batch_size, h, w, 3].
The input image value range is 0-255. And channel order is RGB.
If you have BGR you can use image[..., ::-1] to make it RGB.
:param scale: The scale with which to multiply the size of the image to achieve the normal size.
:return: The merged image tensor. | 2.02059 | 1.997918 | 1.011348 |
one_hot = tf.one_hot(tensor, classes)
shape = one_hot.get_shape().as_list()
return tf.reshape(one_hot, shape=[-1, shape[1], shape[2], shape[4]]) | def inflate_to_one_hot(tensor, classes) | Converts a tensor with index form to a one hot tensor.
:param tensor: A tensor of shape [batch, h, w, 1]
:param classes: The number of classes that exist. (length of one hot encoding)
:return: A tensor of shape [batch, h, w, classes]. | 2.552092 | 2.726182 | 0.936141 |
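The index → one-hot expansion can be shown without TensorFlow; this is a minimal pure-Python sketch of what `tf.one_hot` does per element:

```python
def one_hot(index, classes):
    """Return a one-hot list of length `classes` with a 1 at `index`."""
    return [1 if i == index else 0 for i in range(classes)]

# A [h, w] index map expands to [h, w, classes], as in inflate_to_one_hot:
index_map = [[0, 2], [1, 1]]
expanded = [[one_hot(idx, 3) for idx in row] for row in index_map]
print(expanded[0][1])  # [0, 0, 1]
```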
if lookup in EFConfig.ENV_ACCOUNT_MAP:
return EFConfig.ENV_ACCOUNT_MAP[lookup]
else:
return None | def accountaliasofenv(self, lookup, default=None) | Args:
lookup: ENV_SHORT name of an env, such as: 'prod' or 'proto'
default: the optional value to return if lookup failed; returns None if not set
Returns:
The account alias of the account that hosts the env named in 'lookup', or default/None if no match found
try:
if lookup in EFConfig.CUSTOM_DATA:
return EFConfig.CUSTOM_DATA[lookup]
else:
return default
except AttributeError:
return default | def customdata(self, lookup, default=None) | Args:
lookup: the custom data file
default: the optional value to return if lookup failed; returns None if not set
Returns:
The custom data returned from the file 'lookup' or default/None if no match found | 4.903842 | 5.191147 | 0.944655 |
print(message, file=sys.stderr)
if exception_data:
print(repr(exception_data), file=sys.stderr)
sys.exit(1) | def fail(message, exception_data=None) | Print a failure message and exit nonzero | 2.874685 | 2.325131 | 1.236354 |
metadata_path = __METADATA_PREFIX + metadata_path
try:
response = urllib2.urlopen(metadata_path, None, timeout)
if response.getcode() != 200:
raise IOError("Non-200 response " + str(response.getcode()) + " reading " + metadata_path)
return response.read()
except urllib2.URLError as error:
raise IOError("URLError in http_get_metadata: " + repr(error)) | def http_get_metadata(metadata_path, timeout=__HTTP_DEFAULT_TIMEOUT_SEC) | Fetch AWS metadata from http://169.254.169.254/latest/meta-data/<metadata_path>
ARGS:
metadata_path - the optional path and required key to the EC2 metadata (e.g. "instance-id")
RETURN:
response content on success
RAISE:
IOError if there was a problem reading metadata
if not isfile(__VIRT_WHAT) or not access(__VIRT_WHAT, X_OK):
raise IOError("virt-what not available")
try:
return subprocess.check_output(["sudo", "-n", __VIRT_WHAT]).split('\n')[0:2] == __VIRT_WHAT_VIRTUALBOX_WITH_KVM
except subprocess.CalledProcessError as e:
raise IOError("virt-what failed execution with {}".format(e)) | def is_in_virtualbox() | Is the current environment a virtualbox instance?
Returns a boolean
Raises IOError if the necessary tooling isn't available | 5.290099 | 5.169502 | 1.023329 |
# If the metadata endpoint responds, this is an EC2 instance
# If it doesn't, we can safely say this isn't EC2 and try the other options
try:
response = http_get_metadata("instance-id", 1)
if response[:2] == "i-":
return "ec2"
except IOError:
pass
# Virtualbox?
try:
if is_in_virtualbox():
return "virtualbox-kvm"
except IOError:
pass
# Outside virtualbox/vagrant but not in aws; hostname is "<name>.local"
hostname = gethostname()
if re.findall(r"\.local$", hostname):
return "local"
# we have no idea where we are
return "unknown" | def whereami() | Determine if this is an ec2 instance or "running locally"
Returns:
"ec2" - this is an ec2 instance
"virtualbox-kvm" - kernel VM (virtualbox with vagrant)
"local" - running locally and not in a known VM
"unknown" - I have no idea where I am | 7.479298 | 5.649723 | 1.323834 |
try:
info = json.loads(http_get_metadata('iam/info'))
except Exception as error:
raise IOError("Error looking up metadata:iam/info: " + repr(error))
return info["InstanceProfileArn"].split(":")[5].split("/")[1].split("-",1)[0] | def http_get_instance_env() | Returns: just the env this ec2 instance is in. Doesn't require API access like get_instance_aws_context does
Example return value: "staging" | 6.582217 | 5.570989 | 1.181517 |
result = {}
try:
result["region"] = http_get_metadata("placement/availability-zone/")
result["region"] = result["region"][:-1]
result["instance_id"] = http_get_metadata('instance-id')
except IOError as error:
raise IOError("Error looking up metadata:availability-zone or instance-id: " + repr(error))
try:
instance_desc = ec2_client.describe_instances(InstanceIds=[result["instance_id"]])
except Exception as error:
raise IOError("Error calling describe_instances: " + repr(error))
result["account"] = instance_desc["Reservations"][0]["OwnerId"]
arn = instance_desc["Reservations"][0]["Instances"][0]["IamInstanceProfile"]["Arn"]
result["role"] = arn.split(":")[5].split("/")[1]
env = re.search("^(" + EFConfig.VALID_ENV_REGEX + ")-", result["role"])
if not env:
raise IOError("Did not find environment in role name: " + result["role"])
result["env"] = env.group(1)
result["env_short"] = result["env"].strip(".0123456789")
result["service"] = "-".join(result["role"].split("-")[1:])
return result | def get_instance_aws_context(ec2_client) | Returns: a dictionary of aws context
dictionary will contain these entries:
region, instance_id, account, role, env, env_short, service
Raises: IOError if couldn't read metadata or lookup attempt failed | 2.992815 | 2.583093 | 1.158617 |
try:
current_repo = subprocess.check_output(["git", "remote", "-v", "show"])
except subprocess.CalledProcessError as error:
raise RuntimeError("Exception checking current repo", error)
current_repo = re.findall(r"(https://|@)(.*?)(\.git|[ ])", current_repo)[0][1].replace(":", "/")
if current_repo != EFConfig.EF_REPO:
raise RuntimeError("Must be in " + EFConfig.EF_REPO + " repo. Current repo is: " + current_repo)
try:
current_branch = subprocess.check_output(["git", "rev-parse", "--abbrev-ref", "HEAD"]).rstrip()
except subprocess.CalledProcessError as error:
raise RuntimeError("Exception checking current branch: " + repr(error))
if current_branch != EFConfig.EF_REPO_BRANCH:
raise RuntimeError("Must be on branch: " + EFConfig.EF_REPO_BRANCH + ". Current branch is: " + current_branch)
try:
subprocess.check_call(["git", "pull", "-q", "origin", EFConfig.EF_REPO_BRANCH])
except subprocess.CalledProcessError as error:
raise RuntimeError("Exception running 'git pull': " + repr(error)) | def pull_repo() | Pulls latest version of EF_REPO_BRANCH from EF_REPO (as set in ef_config.py) if client is in EF_REPO
and on the branch EF_REPO_BRANCH
Raises:
RuntimeError with message if not in the correct repo on the correct branch | 2.556305 | 2.275414 | 1.123446 |
if not profile:
profile = None
client_key = (region, profile)
aws_clients = client_cache.get(client_key, {})
requested_clients = set(clients)
new_clients = requested_clients.difference(aws_clients)
if not new_clients:
return aws_clients
session = aws_clients.get("SESSION")
try:
if not session:
session = boto3.Session(region_name=region, profile_name=profile)
aws_clients["SESSION"] = session
# build clients
client_dict = {c: session.client(c) for c in new_clients}
# append the session itself in case it's needed by the client code - can't get it from the clients themselves
aws_clients.update(client_dict)
# add the created clients to the cache
client_cache[client_key] = aws_clients
return aws_clients
except ClientError as error:
raise RuntimeError("Exception logging in with Session() and creating clients", error) | def create_aws_clients(region, profile, *clients) | Create boto3 clients for one or more AWS services. These are the services used within the libs:
cloudformation, cloudfront, ec2, iam, lambda, route53, waf
Args:
region: the region in which to create clients that are region-specific (all but IAM)
profile: Name of profile (in .aws/credentials). Pass the value None if using instance credentials on EC2 or Lambda
clients: names of the clients to create (lowercase, must match what boto3 expects)
Returns:
A dictionary of <key>,<value> pairs for several AWS services, using the labels above as keys, e.g.:
{ "cloudfront": <cloudfront_client>, ... }
Dictionary contains an extra record, "SESSION" - pointing to the session that created the clients | 3.704268 | 3.629626 | 1.020565 |
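The (region, profile) caching behaviour of create_aws_clients can be sketched without boto3; the `factory` callable below is a stand-in for `boto3.Session(...).client(...)`:

```python
client_cache = {}

def get_clients(region, profile, *names, factory=lambda name: "client:" + name):
    """Cache clients per (region, profile); only build the ones not cached yet."""
    key = (region, profile or None)
    clients = client_cache.setdefault(key, {})
    for name in set(names) - set(clients):
        clients[name] = factory(name)
    return clients

a = get_clients("us-west-2", None, "ec2", "s3")
b = get_clients("us-west-2", None, "ec2")  # served entirely from cache
print(a is b, sorted(a))  # True ['ec2', 's3']
```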
env_valid(env)
# Env is a global env of the form "env.<account_alias>" (e.g. "mgmt.<account_alias>")
if env.find(".") > -1:
base, ext = env.split(".")
return ext
# Ordinary env, possibly a proto env ending with a digit that is stripped to look up the alias
else:
env_short = env.strip(".0123456789")
if env_short not in EFConfig.ENV_ACCOUNT_MAP:
raise ValueError("generic env: {} has no entry in ENV_ACCOUNT_MAP of ef_site_config.py".format(env_short))
return EFConfig.ENV_ACCOUNT_MAP[env_short] | def get_account_alias(env) | Given an env, return <account_alias> if env is valid
Args:
env: an environment, such as "prod", "staging", "proto<N>", "mgmt.<account_alias>"
Returns:
the alias of the AWS account that holds the env
Raises:
ValueError if env is misformatted or doesn't name a known environment | 8.075972 | 6.653375 | 1.213816 |
env_valid(env)
if env.find(".") > -1:
env_short, ext = env.split(".")
else:
env_short = env.strip(".0123456789")
return env_short | def get_env_short(env) | Given an env, return <env_short> if env is valid
Args:
env: an environment, such as "prod", "staging", "proto<N>", "mgmt.<account_alias>"
Returns:
the shortname of the env, such as "prod", "staging", "proto", "mgmt"
Raises:
ValueError if env is misformatted or doesn't name a known environment | 4.274085 | 4.649391 | 0.919278 |
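The env-string conventions shared by get_account_alias and get_env_short can be exercised in isolation (account alias "myaccount" below is fabricated):

```python
def split_env(env):
    """Return (env_short, account_ext) following the conventions above:
    'mgmt.myaccount' -> ('mgmt', 'myaccount'); 'proto3' -> ('proto', None)."""
    if "." in env:
        short, ext = env.split(".")
        return short, ext
    # Proto envs end with a digit that is stripped for the short name
    return env.strip(".0123456789"), None

print(split_env("proto3"))          # ('proto', None)
print(split_env("mgmt.myaccount"))  # ('mgmt', 'myaccount')
print(split_env("staging"))         # ('staging', None)
```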
if env not in EFConfig.ENV_LIST:
raise ValueError("unknown env: {}; env must be one of: ".format(env) + ", ".join(EFConfig.ENV_LIST))
return True | def env_valid(env) | Given an env, determine if it's valid
Args:
env: the env to check
Returns:
True if the env is valid
Raises:
ValueError with message if the env is not valid | 6.140776 | 5.980912 | 1.026729 |
if env not in EFConfig.ACCOUNT_SCOPED_ENVS:
raise ValueError("Invalid global env: {}; global envs are: {}".format(env, EFConfig.ACCOUNT_SCOPED_ENVS))
return True | def global_env_valid(env) | Given an env, determine if it's a valid "global" or "mgmt" env as listed in EFConfig
Args:
env: the env to check
Returns:
True if the env is a valid global env in EFConfig
Raises:
ValueError with message if the env is not valid | 7.133867 | 4.613442 | 1.546322 |
# Converting all periods to underscores because they are invalid in KMS alias names
key_alias = '{}-{}'.format(env, service.replace('.', '_'))
try:
response = kms_client.encrypt(
KeyId='alias/{}'.format(key_alias),
Plaintext=secret.encode()
)
except ClientError as error:
if error.response['Error']['Code'] == "NotFoundException":
fail("Key '{}' not found. You may need to run ef-generate for this environment.".format(key_alias), error)
else:
fail("boto3 exception occurred while performing kms encrypt operation.", error)
encrypted_secret = base64.b64encode(response['CiphertextBlob'])
return encrypted_secret | def kms_encrypt(kms_client, service, env, secret) | Encrypt string for use by a given service/environment
Args:
kms_client (boto3 kms client object): Instantiated kms client object. Usually created through create_aws_clients.
service (string): name of the service that the secret is being encrypted for.
env (string): environment that the secret is being encrypted for.
secret (string): value to be encrypted
Returns:
the encrypted secret, base64 encoded (string)
Raises:
SystemExit(1): If there is an error with the boto3 encryption call (ex. missing kms key) | 3.672442 | 3.61529 | 1.015808 |
try:
decrypted_secret = kms_client.decrypt(CiphertextBlob=base64.b64decode(secret))['Plaintext']
except TypeError:
fail("Malformed base64 string data")
except ClientError as error:
if error.response["Error"]["Code"] == "InvalidCiphertextException":
fail("The decrypt request was rejected because the specified ciphertext "
"has been corrupted or is otherwise invalid.", error)
elif error.response["Error"]["Code"] == "NotFoundException":
fail("The decrypt request was rejected because the specified entity or resource could not be found.", error)
else:
fail("boto3 exception occurred while performing kms decrypt operation.", error)
return decrypted_secret | def kms_decrypt(kms_client, secret) | Decrypt kms-encrypted string
Args:
kms_client (boto3 kms client object): Instantiated kms client object. Usually created through create_aws_clients.
secret (string): base64 encoded value to be decrypted
Returns:
the decrypted plaintext secret (string)
Raises:
SystemExit(1): If there is an error with the boto3 decryption call (ex. malformed secret) | 3.523331 | 3.230732 | 1.090567 |
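Independent of KMS itself, the base64 framing used by kms_encrypt/kms_decrypt can be verified locally. The XOR "cipher" below is purely a stand-in for the KMS call, for illustration only:

```python
import base64

def fake_encrypt(plaintext, key=0x2A):
    """Stand-in for kms_client.encrypt: returns a base64-encoded blob,
    mirroring the b64encode(response['CiphertextBlob']) step above."""
    blob = bytes(b ^ key for b in plaintext.encode())
    return base64.b64encode(blob)

def fake_decrypt(secret, key=0x2A):
    """Stand-in for kms_client.decrypt: accepts the base64 blob back,
    mirroring the b64decode(secret) step above."""
    blob = base64.b64decode(secret)
    return bytes(b ^ key for b in blob).decode()

token = fake_encrypt("hunter2")
print(fake_decrypt(token))  # hunter2
```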
try:
response = kms_client.describe_key(KeyId=alias)
key_arn = response["KeyMetadata"]["Arn"]
except ClientError as error:
raise RuntimeError("Failed to obtain key arn for alias {}, error: {}".format(alias, error.response["Error"]["Message"]))
return key_arn | def kms_key_arn(kms_client, alias) | Obtain the full key arn based on the key alias provided
Args:
kms_client (boto3 kms client object): Instantiated kms client object. Usually created through create_aws_clients.
alias (string): alias of key, example alias/proto0-evs-drm.
Returns:
string of the full key arn | 2.581167 | 2.557667 | 1.009188 |
for suffix in EFConfig.PARAMETER_FILE_SUFFIXES:
parameters_file = template_full_path.replace("/templates", "/parameters") + suffix
if exists(parameters_file):
return parameters_file
return None | def get_template_parameters_file(template_full_path) | Checks for existence of a parameters file against supported suffixes and returns the parameters file path if found
Args:
template_full_path: full filepath for template file
Returns:
filename of parameters file if it exists | 4.871911 | 4.593093 | 1.060704 |
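The template-path → parameters-path mapping can be checked without touching the filesystem. The suffix list here is illustrative; the real values live in EFConfig.PARAMETER_FILE_SUFFIXES:

```python
def candidate_parameter_files(template_full_path,
                              suffixes=(".parameters.json", ".parameters.yml")):
    """List every parameters path that get_template_parameters_file would probe."""
    base = template_full_path.replace("/templates", "/parameters")
    return [base + s for s in suffixes]

paths = candidate_parameter_files("cloudformation/myservice/templates/myservice.json")
print(paths[0])  # cloudformation/myservice/parameters/myservice.json.parameters.json
```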
for suffix in EFConfig.PARAMETER_FILE_SUFFIXES:
parameters_key = template_key.replace("/templates", "/parameters") + suffix
try:
obj = s3_resource.Object(EFConfig.S3_CONFIG_BUCKET, parameters_key)
obj.get()
return parameters_key
except ClientError:
continue
return None | def get_template_parameters_s3(template_key, s3_resource) | Checks for existence of a parameters object in S3 against supported suffixes and returns the parameters file key if found
Args:
template_key: S3 key for template file. omit bucket.
s3_resource: a boto3 s3 resource
Returns:
filename of parameters file if it exists | 4.283934 | 3.528391 | 1.214132 |
try:
# See if {{ENV}}-{{SERVICE}} matches ASG name
response = asg_client.describe_auto_scaling_groups(AutoScalingGroupNames=["{}-{}".format(env, service)])
if len(response["AutoScalingGroups"]) == 0:
# See if {{ENV}}-{{SERVICE}} matches ASG tag name
response = asg_client.describe_tags(Filters=[{ "Name": "Key", "Values": ["Name"] }, { "Name": "Value", "Values": ["{}-{}".format(env, service)]}])
if len(response["Tags"]) == 0:
# Query does not match either of the above, return None
return None
else:
asg_name = response["Tags"][0]["ResourceId"]
response = asg_client.describe_auto_scaling_groups(AutoScalingGroupNames=[asg_name])
return response["AutoScalingGroups"]
else:
return response["AutoScalingGroups"]
except ClientError as error:
raise RuntimeError("Error in finding autoscaling group {} {}".format(env, service), error) | def get_autoscaling_group_properties(asg_client, env, service) | Gets the autoscaling group properties based on the service name provided. This function will attempt to find
the autoscaling group based on the following logic:
1. If the service name provided matches the autoscaling group name
2. If the service name provided matches the Name tag of the autoscaling group
3. If the service name provided does not match the above, return None
Args:
asg_client: Instantiated boto3 autoscaling client
env: Name of the environment to search for the autoscaling group
service: Name of the service
Returns:
JSON object of the autoscaling group properties if it exists | 2.642809 | 2.440262 | 1.083002 |
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = gpu_memory_usage
config.gpu_options.allow_growth = allow_growth
return config | def get_default_config(gpu_memory_usage=0.75, allow_growth=False) | A helper to create sessions easily.
:param gpu_memory_usage: How much of the gpu should be used for your project.
:param allow_growth: If you want to have a fixed gpus size or if it should grow and use just as much as it needs.
:return: A configuration you can pass to your session when creating it. | 1.560179 | 1.884411 | 0.82794 |
if not tf.gfile.Exists(checkpoint_path):
raise AssertionError(
"Export directory doesn't exist. Please specify an export "
"directory: %s" % checkpoint_path)
if not output_nodes:
print("You need to supply the name of a node to --output_node_names.")
return -1
# We retrieve our checkpoint fullpath
checkpoint = tf.train.get_checkpoint_state(checkpoint_path)
input_checkpoint = checkpoint.model_checkpoint_path
# Set the full filename of our frozen graph
output_graph = checkpoint_path + "/frozen_model.pb"
# We clear devices to allow TensorFlow to control on which device it will load operations
clear_devices = True
# We start a session using a temporary fresh Graph
with tf.Session(graph=tf.Graph()) as sess:
# We import the meta graph in the current default Graph
saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices)
# We restore the weights
saver.restore(sess, input_checkpoint)
# We use a built-in TF helper to export variables to constants
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess, # The session is used to retrieve the weights
tf.get_default_graph().as_graph_def(), # The graph_def is used to retrieve the nodes
output_nodes # The output node names are used to select the useful nodes
)
# Finally we serialize and dump the output graph to the filesystem
with tf.gfile.GFile(output_graph, "wb") as f:
f.write(output_graph_def.SerializeToString())
print("%d ops in the final graph." % len(output_graph_def.node))
return output_graph_def | def export_graph(checkpoint_path, output_nodes) | Export a graph stored in a checkpoint as a *.pb file.
:param checkpoint_path: The checkpoint path which should be frozen.
:param output_nodes: The output nodes you care about as a list of strings (their names).
:return: | 1.775979 | 1.769825 | 1.003477 |
# Load graph def from protobuff and import the definition
with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
if placeholders is None:
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def, name=namespace_prefix)
else:
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def, input_map=placeholders, name=namespace_prefix)
return graph | def load_graph(frozen_graph_filename, namespace_prefix="", placeholders=None) | Loads a frozen graph from a *.pb file.
:param frozen_graph_filename: The file which graph to load.
:param namespace_prefix: A namespace for your graph to live in. This is useful when having multiple models.
:param placeholders: A dict containing the new placeholders that replace the old inputs.
:return: The graph that can now be passed to a session when creating it. | 1.757114 | 1.802937 | 0.974584 |
client = EFAwsResolver.__CLIENTS['elbv2']
elbs = client.describe_load_balancers(Names=[lookup])
# getting the first one, since we requested only one lb
elb = elbs['LoadBalancers'][0]
return elb | def _elbv2_load_balancer(self, lookup) | Args:
lookup: the friendly name of the V2 elb to look up
Returns:
A dict with the load balancer description
Raises:
botocore.exceptions.ClientError: no such load-balancer | 7.758968 | 7.610325 | 1.019532 |
# @todo: Only searches the first 100 certificates in the account
try:
# This a region-specific client, so we'll make a new client in the right place using existing SESSION
region_name, domain_name = lookup.split("/")
acm_client = EFAwsResolver.__CLIENTS["SESSION"].client(service_name="acm", region_name=region_name)
response = acm_client.list_certificates(
CertificateStatuses=['ISSUED'],
MaxItems=100
)
except Exception:
return default
# No certificates
if len(response["CertificateSummaryList"]) < 1:
return default
# One or more certificates - find cert with latest IssuedAt date or an arbitrary cert if none are dated
best_match_cert = None
for cert_handle in response["CertificateSummaryList"]:
if cert_handle["DomainName"] == domain_name:
cert = acm_client.describe_certificate(CertificateArn=cert_handle["CertificateArn"])["Certificate"]
# Patch up cert if there is no IssuedAt (i.e. cert was not issued by Amazon)
if "IssuedAt" not in cert:
cert[u"IssuedAt"] = datetime.datetime(1970, 1, 1, 0, 0)
if best_match_cert is None:
best_match_cert = cert
elif cert["IssuedAt"] > best_match_cert["IssuedAt"]:
best_match_cert = cert
if best_match_cert is not None:
return best_match_cert["CertificateArn"]
return default | def acm_certificate_arn(self, lookup, default=None) | Args:
lookup: region/domain on the certificate to be looked up
default: the optional value to return if lookup failed; returns None if not set
Returns:
ARN of a certificate with status "Issued" for the region/domain, if found, or default/None if no match
If more than one "Issued" certificate matches the region/domain:
- if any matching cert was issued by Amazon, returns ARN of certificate with most recent IssuedAt timestamp
- if no certs were issued by Amazon, returns ARN of an arbitrary matching certificate
- certificates issued by Amazon take precedence over certificates not issued by Amazon | 4.035397 | 3.708251 | 1.088221 |
public_ip = self.ec2_elasticip_elasticip_ipaddress(lookup)
if public_ip is None:
return default
try:
eips = EFAwsResolver.__CLIENTS["ec2"].describe_addresses(
PublicIps=[public_ip]
)
# Public IP not found
except ClientError:
return default
eip_id = eips["Addresses"][0]["AllocationId"]
return eip_id | def ec2_elasticip_elasticip_id(self, lookup, default=None) | Args:
lookup: the CloudFormation resource name of the Elastic IP ID to look up
default: the optional value to return if lookup failed; returns None if not set
Returns:
The ID of the first Elastic IP found with a description matching 'lookup' or default/None if no match found | 4.600173 | 4.376011 | 1.051225 |
# Extract environment from resource ID to build stack name
m = re.search(r'ElasticIp([A-Z]?[a-z]+[0-9]?)\w+', lookup)
# The lookup string was not a valid ElasticIp resource label
if m is None:
return default
env = m.group(1)
stackname = "{}-elasticip".format(env.lower())
# Convert env substring to title in case {{ENV}} substitution is being used
lookup = lookup.replace(env, env.title())
# Look up the EIP resource in the stack to get the IP address assigned to the EIP
try:
eip_stack = EFAwsResolver.__CLIENTS["cloudformation"].describe_stack_resources(
StackName=stackname,
LogicalResourceId=lookup
)
except ClientError:
return default
stack_resources = eip_stack["StackResources"]
# Resource does not exist in stack
if len(stack_resources) < 1:
return default
eip_publicip = stack_resources[0]["PhysicalResourceId"]
return eip_publicip | def ec2_elasticip_elasticip_ipaddress(self, lookup, default=None) | Args:
lookup: the CloudFormation resource name of the Elastic IP address to look up
default: the optional value to return if lookup failed; returns None if not set
Returns:
The IP address of the first Elastic IP found with a description matching 'lookup' or default/None if no match | 5.410589 | 5.303118 | 1.020266 |
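The resource-label parsing in ec2_elasticip_elasticip_ipaddress is worth checking directly; the labels below are made-up examples:

```python
import re

def parse_elasticip_label(lookup):
    """Extract the env token and derived stack name from a label
    such as 'ElasticIpProto0Vpn', mirroring the regex above."""
    m = re.search(r'ElasticIp([A-Z]?[a-z]+[0-9]?)\w+', lookup)
    if m is None:
        return None, None
    env = m.group(1)
    stackname = "{}-elasticip".format(env.lower())
    return env, stackname

print(parse_elasticip_label("ElasticIpProto0Vpn"))  # ('Proto0', 'proto0-elasticip')
print(parse_elasticip_label("NotAnEip"))            # (None, None)
```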
enis = EFAwsResolver.__CLIENTS["ec2"].describe_network_interfaces(Filters=[{
'Name': 'description',
'Values': [lookup]
}])
if len(enis.get("NetworkInterfaces", [])) > 0:
return enis["NetworkInterfaces"][0]["NetworkInterfaceId"]
else:
return default | def ec2_eni_eni_id(self, lookup, default=None) | Args:
lookup: the description of the Elastic Network Interface (ENI) to look up
default: the optional value to return if lookup failed; returns None if not set
Returns:
The ID of the first ENI found with a description matching 'lookup' or default/None if no match found | 4.090989 | 3.905149 | 1.047588 |
network_acl_id = EFAwsResolver.__CLIENTS["ec2"].describe_network_acls(Filters=[{
'Name': 'tag:Name',
'Values': [lookup]
}])
if len(network_acl_id["NetworkAcls"]) > 0:
return network_acl_id["NetworkAcls"][0]["NetworkAclId"]
else:
return default | def ec2_network_network_acl_id(self, lookup, default=None) | Args:
lookup: the friendly name of the network ACL we are looking up
default: the optional value to return if lookup failed; returns None if not set
Returns:
the ID of the network ACL, or None if no match found | 3.083224 | 3.273471 | 0.941882 |
try:
response = EFAwsResolver.__CLIENTS["ec2"].describe_security_groups(Filters=[{
'Name':'group-name', 'Values':[lookup]
}])
except ClientError:
return default
if len(response["SecurityGroups"]) > 0:
return response["SecurityGroups"][0]["GroupId"]
else:
return default | def ec2_security_group_security_group_id(self, lookup, default=None) | Args:
lookup: the friendly name of a security group to look up
default: the optional value to return if lookup failed; returns None if not set
Returns:
Security group ID if target found or default/None if no match | 3.938618 | 3.760791 | 1.047285 |
subnets = EFAwsResolver.__CLIENTS["ec2"].describe_subnets(Filters=[{
'Name': 'tag:Name',
'Values': [lookup]
}])
if len(subnets["Subnets"]) > 0:
return subnets["Subnets"][0]["SubnetId"]
else:
return default | def ec2_subnet_subnet_id(self, lookup, default=None) | Return:
the ID of a single subnet or default/None if no match
Args:
lookup: the friendly name of the subnet to look up (subnet-<env>-a or subnet-<env>-b)
default: the optional value to return if lookup failed; returns None if not set | 3.565306 | 3.494797 | 1.020175 |
vpc_id = self.ec2_vpc_vpc_id(lookup)
if vpc_id is None:
return default
subnets = EFAwsResolver.__CLIENTS["ec2"].describe_subnets(Filters=[{
'Name': 'vpc-id',
'Values': [vpc_id]
}])
if len(subnets["Subnets"]) > 0:
# Strip the metadata section (subnets["Subnets"])
az_list = [s["AvailabilityZone"] for s in subnets["Subnets"]]
# Add internal ", " only. This is called literally from: "{{aws...}}" - CF template needs the outer quotes
return "\", \"".join(az_list)
else:
return default | def ec2_vpc_availabilityzones(self, lookup, default=None) | Args:
lookup: the friendly name of a VPC to look up
default: the optional value to return if lookup failed; returns None if not set
Returns:
A comma-separated list of availability zones in use in the named VPC or default/None if no match | 7.065547 | 7.255671 | 0.973797 |
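The `", "` join used above deliberately omits the outer quotes, which the `"{{aws:...}}"` template reference supplies itself; a quick sketch:

```python
def join_for_template(items):
    """Join items so a quoted template placeholder expands to a quoted JSON list."""
    return "\", \"".join(items)

azs = ["us-west-2a", "us-west-2b"]
inner = join_for_template(azs)
rendered = '"' + inner + '"'  # the template's own surrounding quotes
print(rendered)  # "us-west-2a", "us-west-2b"
```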
vpc_id = self.ec2_vpc_vpc_id(lookup)
if vpc_id is None:
return default
subnets = EFAwsResolver.__CLIENTS["ec2"].describe_subnets(Filters=[{
'Name': 'vpc-id',
'Values': [vpc_id]
}])
if len(subnets["Subnets"]) > 0:
# Strip the metadata section (subnets["Subnets"])
subnet_list = [s["SubnetId"] for s in subnets["Subnets"]]
# Add internal ", " only. This is called literally from: "{{aws...}}" - reuses the outer quotes
return "\", \"".join(subnet_list)
else:
return default | def ec2_vpc_subnets(self, lookup, default=None) | Args:
lookup - the friendly name of the VPC whose subnets we want
Returns:
A comma-separated list of all subnets in use in the named VPC or default/None if no match found | 6.968383 | 6.935757 | 1.004704 |
vpcs = EFAwsResolver.__CLIENTS["ec2"].describe_vpcs(Filters=[{
'Name': 'tag:Name',
'Values': [lookup]
}])
if len(vpcs.get("Vpcs", [])) > 0:
return vpcs["Vpcs"][0]["CidrBlock"]
else:
return default | def ec2_vpc_cidrblock(self, lookup, default=None) | Args:
lookup - the friendly name of the VPC whose CIDR block we want
Returns:
The CIDR block of the named VPC, or default/None if no match found | 3.731839 | 3.632073 | 1.027468 |
try:
elb = self._elbv2_load_balancer(lookup)
return elb['CanonicalHostedZoneId']
except ClientError:
return default | def elbv2_load_balancer_hosted_zone(self, lookup, default=None) | Args:
lookup: the friendly name of the V2 elb to look up
default: value to return in case of no match
Returns:
The hosted zone ID of the ELB found with a name matching 'lookup'. | 4.31273 | 3.970058 | 1.086314 |
try:
elb = self._elbv2_load_balancer(lookup)
return elb['DNSName']
except ClientError:
return default | def elbv2_load_balancer_dns_name(self, lookup, default=None) | Args:
lookup: the friendly name of the V2 elb to look up
default: value to return in case of no match
Returns:
The DNS name of the ELB found with a name matching 'lookup'. | 4.006642 | 3.912577 | 1.024042 |
try:
elb = self._elbv2_load_balancer(lookup)
m = re.search(r'.+?(app\/[^\/]+\/[^\/]+)$', elb['LoadBalancerArn'])
return m.group(1)
except ClientError:
return default | def elbv2_load_balancer_arn_suffix(self, lookup, default=None) | Args:
lookup: the friendly name of the v2 elb to look up
default: value to return in case of no match
Returns:
The shorthand fragment of the ALB's ARN, of the form `app/*/*` | 4.492558 | 3.830588 | 1.172811 |
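The ARN-suffix extraction is a plain regex and can be exercised without AWS; the ARN below is a hypothetical example in the standard ALB format:

```python
import re

# Mirrors elbv2_load_balancer_arn_suffix: grab the trailing app/<name>/<id>
# fragment from a (made-up) ALB ARN.
arn = "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/my-alb/50dc6c495c0c9188"
m = re.search(r'.+?(app\/[^\/]+\/[^\/]+)$', arn)
print(m.group(1))  # app/my-alb/50dc6c495c0c9188
```

`elbv2_target_group_arn_suffix` below applies the same pattern with `targetgroup` in place of `app`.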
try:
client = EFAwsResolver.__CLIENTS['elbv2']
elbs = client.describe_target_groups(Names=[lookup])
elb = elbs['TargetGroups'][0]
m = re.search(r'.+?(targetgroup\/[^\/]+\/[^\/]+)$', elb['TargetGroupArn'])
return m.group(1)
except ClientError:
return default | def elbv2_target_group_arn_suffix(self, lookup, default=None) | Args:
lookup: the friendly name of the v2 elb target group
default: value to return in case of no match
Returns:
The shorthand fragment of the target group's ARN, of the form
`targetgroup/*/*` | 4.925075 | 4.442669 | 1.108585 |
# list_rules returns at most 100 rules per request
list_limit = 100
rules = EFAwsResolver.__CLIENTS["waf"].list_rules(Limit=list_limit)
while True:
for rule in rules["Rules"]:
if rule["Name"] == lookup:
return rule["RuleId"]
if "NextMarker" in rules:
rules = EFAwsResolver.__CLIENTS["waf"].list_rules(Limit=list_limit, NextMarker=rules["NextMarker"])
else:
return default | def waf_rule_id(self, lookup, default=None) | Args:
lookup: the friendly name of a WAF rule
default: the optional value to return if lookup failed; returns None if not set
Returns:
the ID of the WAF rule whose name matches 'lookup' or default/None if no match found | 3.779708 | 3.62509 | 1.042652 |
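The `NextMarker` pagination loop is the core of this lookup and of `waf_web_acl_id` below. A minimal sketch against a fake two-page client (the `FakeWaf` class and its rule data are stand-ins, not real boto3 objects):

```python
# Fake client returning two pages, keyed by the marker of the previous page.
class FakeWaf(object):
    PAGES = {
        None: {"Rules": [{"Name": "a", "RuleId": "id-a"}], "NextMarker": "m1"},
        "m1": {"Rules": [{"Name": "b", "RuleId": "id-b"}]},
    }
    def list_rules(self, Limit, NextMarker=None):
        return self.PAGES[NextMarker]

def waf_rule_id(client, lookup, default=None):
    rules = client.list_rules(Limit=100)
    while True:
        for rule in rules["Rules"]:
            if rule["Name"] == lookup:
                return rule["RuleId"]
        if "NextMarker" in rules:  # follow pagination until the marker disappears
            rules = client.list_rules(Limit=100, NextMarker=rules["NextMarker"])
        else:
            return default

print(waf_rule_id(FakeWaf(), "b"))  # id-b
```

A rule on the second page is only found because the loop re-requests with the previous page's marker.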
# list_web_acls returns at most 100 web ACLs per request
list_limit = 100
acls = EFAwsResolver.__CLIENTS["waf"].list_web_acls(Limit=list_limit)
while True:
for acl in acls["WebACLs"]:
if acl["Name"] == lookup:
return acl["WebACLId"]
if "NextMarker" in acls:
acls = EFAwsResolver.__CLIENTS["waf"].list_web_acls(Limit=list_limit, NextMarker=acls["NextMarker"])
else:
return default | def waf_web_acl_id(self, lookup, default=None) | Args:
lookup: the friendly name of a Web ACL
default: the optional value to return if lookup failed; returns None if not set
Returns:
the ID of the WAF Web ACL whose name matches rule_name or default/None if no match found | 3.470664 | 3.446811 | 1.00692 |
list_limit = "100"
# enforce terminal '.' in name, otherwise we could get a partial match of the incorrect zones
if lookup[-1] != '.':
return default
hosted_zones = EFAwsResolver.__CLIENTS["route53"].list_hosted_zones_by_name(DNSName=lookup, MaxItems=list_limit)
# Return if the account has no HostedZones
if "HostedZones" not in hosted_zones:
return default
while True:
for hosted_zone in hosted_zones["HostedZones"]:
if lookup == hosted_zone["Name"] and not hosted_zone["Config"]["PrivateZone"]:
return hosted_zone["Id"].split("/")[2]
if hosted_zones["IsTruncated"]:
hosted_zones = EFAwsResolver.__CLIENTS["route53"].list_hosted_zones_by_name(
DNSName=hosted_zones["NextDNSName"], HostedZoneId=hosted_zones["NextHostedZoneId"], MaxItems=list_limit)
else:
return default | def route53_public_hosted_zone_id(self, lookup, default=None) | Args:
lookup: The zone name to look up. Must end with "."
default: the optional value to return if lookup failed; returns None if not set
Returns:
the ID of the public hosted zone for the 'lookup' domain, or default/None if no match found | 3.648718 | 3.684993 | 0.990156 |
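Two details of this lookup are worth making concrete: the mandatory trailing dot, and the `split("/")[2]` that strips the `/hostedzone/` prefix Route 53 puts on zone IDs. A self-contained sketch with a hypothetical zone record:

```python
# Route 53 returns zone IDs as "/hostedzone/<ID>"; the resolver keeps the bare ID.
zone = {"Id": "/hostedzone/Z1D633PJN98FT9", "Name": "example.com."}  # made-up values
lookup = "example.com."

assert lookup[-1] == "."                 # the terminal dot is enforced before matching
print(zone["Id"].split("/")[2])          # Z1D633PJN98FT9
```

Without the trailing-dot check, `list_hosted_zones_by_name` could return a lexically adjacent zone that merely starts with the lookup string.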
vpc_id = self.ec2_vpc_vpc_id(lookup)
if vpc_id is None:
return default
route_table = EFAwsResolver.__CLIENTS["ec2"].describe_route_tables(Filters=[
{'Name': 'vpc-id', 'Values': [vpc_id]},
{'Name': 'association.main', 'Values': ['true']}
])
if len(route_table["RouteTables"]) != 1:
return default
return route_table["RouteTables"][0]["RouteTableId"] | def ec2_route_table_main_route_table_id(self, lookup, default=None) | Args:
lookup: the friendly name of the VPC whose main route table we are looking up
default: the optional value to return if lookup failed; returns None if not set
Returns:
the ID of the main route table of the named VPC, or default if no match/multiple matches found | 2.867003 | 2.923943 | 0.980526 |
route_table = EFAwsResolver.__CLIENTS["ec2"].describe_route_tables(Filters=[
{'Name': 'tag-key', 'Values': ['Name']},
{'Name': 'tag-value', 'Values': [lookup]}
])
if len(route_table["RouteTables"]) != 1:
return default
return route_table["RouteTables"][0]["RouteTableId"] | def ec2_route_table_tagged_route_table_id(self, lookup, default=None) | Args:
lookup: the tagged route table name, should be unique
default: the optional value to return if lookup failed; returns None if not set
Returns:
the ID of the route table, or default if no match/multiple matches found | 3.594698 | 3.722448 | 0.965681 |
# list_distributions returns at most 100 distributions per request
list_limit = "100"
distributions = EFAwsResolver.__CLIENTS["cloudfront"].list_distributions(MaxItems=list_limit)["DistributionList"]
# Return if the account has no Distributions
if "Items" not in distributions:
return default
while True:
for distribution in distributions["Items"]:
if lookup in distribution["Aliases"]["Items"]:
return distribution["DomainName"]
if distributions["IsTruncated"]:
distributions = EFAwsResolver.__CLIENTS["cloudfront"].list_distributions(
MaxItems=list_limit, Marker=distributions["NextMarker"])["DistributionList"]
else:
return default | def cloudfront_domain_name(self, lookup, default=None) | Args:
lookup: any CNAME on the Cloudfront distribution
default: the optional value to return if lookup failed; returns None if not set
Returns:
The domain name (FQDN) of the Cloudfront distribution, or default/None if no match | 3.919913 | 3.918747 | 1.000298 |
# list_cloud_front_origin_access_identities returns at most 100 oai's per request
list_limit = "100"
oais = EFAwsResolver.__CLIENTS["cloudfront"].list_cloud_front_origin_access_identities(
MaxItems=list_limit)["CloudFrontOriginAccessIdentityList"]
# Return if the account has no OriginAccessIdentities
if "Items" not in oais:
return default
while True:
for oai in oais["Items"]:
if oai["Comment"] == lookup:
return oai["S3CanonicalUserId"]
if oais["IsTruncated"]:
oais = EFAwsResolver.__CLIENTS["cloudfront"].list_cloud_front_origin_access_identities(
MaxItems=list_limit, Marker=oais["NextMarker"])["CloudFrontOriginAccessIdentityList"]
else:
return default | def cloudfront_origin_access_identity_oai_canonical_user_id(self, lookup, default=None) | Args:
lookup: the FQDN of the Origin Access Identity (from its comments)
default: the optional value to return if lookup failed; returns None if not set
Returns:
the S3 Canonical User ID of the OAI associated with the named FQDN in 'lookup', or default/None if no match | 3.327585 | 3.231227 | 1.029821 |
identity_pool_id = self.cognito_identity_identity_pool_id(lookup, default)
if identity_pool_id == default:
return default
# The ARN has to be constructed because there is no boto3 call that returns the full ARN for a cognito identity pool
return "arn:aws:cognito-identity:{{{{REGION}}}}:{{{{ACCOUNT}}}}:identitypool/{}".format(identity_pool_id) | def cognito_identity_identity_pool_arn(self, lookup, default=None) | Args:
lookup: Cognito Federated Identity name, proto0-cms-identity-pool
default: the optional value to return if lookup failed; returns None if not set
Returns:
the constructed ARN for the cognito identity pool, else default/None | 4.075731 | 4.367222 | 0.933255 |
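The quadruple braces in the ARN template deserve a note: `str.format` collapses `{{` to a literal `{`, so `{{{{REGION}}}}` survives as the `{{REGION}}` placeholder for a later resolver pass. A sketch with a made-up pool ID:

```python
# str.format halves doubled braces, leaving "{{REGION}}"/"{{ACCOUNT}}" intact
# as placeholders to be substituted by a later templating pass.
template = "arn:aws:cognito-identity:{{{{REGION}}}}:{{{{ACCOUNT}}}}:identitypool/{}"
arn = template.format("us-west-2:hypothetical-pool-id")
print(arn)  # arn:aws:cognito-identity:{{REGION}}:{{ACCOUNT}}:identitypool/us-west-2:hypothetical-pool-id
```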
# List size cannot be greater than 60
list_limit = 60
client = EFAwsResolver.__CLIENTS["cognito-identity"]
response = client.list_identity_pools(MaxResults=list_limit)
while "IdentityPools" in response:
# Loop through all the identity pools
for pool in response["IdentityPools"]:
if pool["IdentityPoolName"] == lookup:
return pool["IdentityPoolId"]
# No match found on this page, but there are more pages
if "NextToken" in response:
response = client.list_identity_pools(MaxResults=list_limit, NextToken=response["NextToken"])
else:
break
return default | def cognito_identity_identity_pool_id(self, lookup, default=None) | Args:
lookup: Cognito Federated Identity name, proto0-cms-identity-pool
default: the optional value to return if lookup failed; returns None if not set
Returns:
the Cognito Identity Pool ID corresponding to the given lookup, else default/None | 3.744906 | 3.743553 | 1.000361 |
client = EFAwsResolver.__CLIENTS["cognito-idp"]
user_pool_id = self.cognito_idp_user_pool_id(lookup, default)
if user_pool_id == default:
return default
response = client.describe_user_pool(UserPoolId=user_pool_id)
if "UserPool" not in response:
return default
return response["UserPool"]["Arn"] | def cognito_idp_user_pool_arn(self, lookup, default=None) | Args:
lookup: Cognito User Pool name, proto0-cms-user-pool
default: the optional value to return if lookup failed; returns None if not set
Returns:
the User Pool ARN corresponding to the given lookup, else default/None | 3.31951 | 3.810198 | 0.871217 |
decrypted_lookup = ef_utils.kms_decrypt(EFAwsResolver.__CLIENTS["kms"], lookup)
return decrypted_lookup | def kms_decrypt_value(self, lookup) | Args:
lookup: the encrypted value to be decrypted by KMS; base64 encoded
Returns:
The decrypted lookup value | 25.630762 | 35.419781 | 0.723628 |
key_arn = ef_utils.kms_key_arn(EFAwsResolver.__CLIENTS["kms"], lookup)
return key_arn | def kms_key_arn(self, lookup) | Args:
lookup: The key alias, EX: alias/proto0-evs-drm
Returns:
The full key arn | 17.423058 | 20.118675 | 0.866014 |
parser = argparse.ArgumentParser()
parser.add_argument("env", help=", ".join(EFConfig.ENV_LIST))
parser.add_argument("--sr", help="optional /path/to/service_registry_file.json", default=None)
parser.add_argument("--commit", help="Make changes in AWS (dry run if omitted)", action="store_true", default=False)
parser.add_argument("--verbose", help="Print additional info", action="store_true", default=False)
parser.add_argument("--devel", help="Allow running from branch; don't refresh from origin", action="store_true",
default=False)
parsed_args = vars(parser.parse_args(args))
context = EFContext()
context.commit = parsed_args["commit"]
context.devel = parsed_args["devel"]
try:
context.env = parsed_args["env"]
except ValueError as e:
fail("Error in env: {}".format(e.message))
# Set up service registry and policy template path which depends on it
context.service_registry = EFServiceRegistry(parsed_args["sr"])
context.policy_template_path = normpath(dirname(context.service_registry.filespec)) + EFConfig.POLICY_TEMPLATE_PATH_SUFFIX
context.verbose = parsed_args["verbose"]
return context | def handle_args_and_set_context(args) | Args:
args: the command line args, probably passed from main() as sys.argv[1:]
Returns:
a populated EFContext object
Raises:
IOError: if service registry file can't be found or can't be opened
RuntimeError: if repo or branch isn't as spec'd in ef_config.EF_REPO and ef_config.EF_REPO_BRANCH
CalledProcessError: if 'git rev-parse' command to find repo root could not be run | 4.33024 | 3.860569 | 1.121659 |
if service_type not in SG_SERVICE_TYPES:
print_if_verbose("not eligible for security group(s); service type: {}".format(service_type))
return
target_name = "{}-{}".format(env, service_name)
if service_type == "aws_ec2":
sg_names = ["{}-ec2".format(target_name)]
elif service_type == "aws_lambda":
sg_names = ["{}-lambda".format(target_name)]
elif service_type == "http_service":
sg_names = [
"{}-ec2".format(target_name),
"{}-elb".format(target_name)
]
elif service_type == "aws_security_group":
sg_names = [target_name]
else:
fail("Unexpected service_type: {} when creating security group for: {}".format(service_type, target_name))
for sg_name in sg_names:
if not AWS_RESOLVER.ec2_security_group_security_group_id(sg_name):
vpc_name = "vpc-{}".format(env)
print("Create security group: {} in vpc: {}".format(sg_name, vpc_name))
vpc = AWS_RESOLVER.ec2_vpc_vpc_id(vpc_name)
if not vpc:
fail("Error: could not get VPC by name: {}".format(vpc_name))
# create security group
if CONTEXT.commit:
try:
new_sg = CLIENTS["ec2"].create_security_group(GroupName=sg_name, VpcId=vpc, Description=sg_name)
except:
fail("Exception creating security group named: {} in VpcId: {}".format(sg_name, vpc_name), sys.exc_info())
print(new_sg["GroupId"])
else:
print_if_verbose("security group already exists: {}".format(sg_name)) | def conditionally_create_security_groups(env, service_name, service_type) | Create security groups as needed; name and number created depend on service_type
Args:
env: the environment the SG will be created in
service_name: name of the service in service registry
service_type: service registry service type: 'aws_ec2', 'aws_lambda', 'aws_security_group', or 'http_service' | 2.79342 | 2.599134 | 1.07475 |
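The name-derivation branch of this function is pure string logic and can be isolated from the AWS calls. A sketch (the env/service names are hypothetical):

```python
# Which security group names each service type yields, per the branching above.
def sg_names_for(env, service_name, service_type):
    target = "{}-{}".format(env, service_name)
    return {
        "aws_ec2": ["{}-ec2".format(target)],
        "aws_lambda": ["{}-lambda".format(target)],
        "http_service": ["{}-ec2".format(target), "{}-elb".format(target)],
        "aws_security_group": [target],
    }.get(service_type, [])

print(sg_names_for("proto0", "cms", "http_service"))  # ['proto0-cms-ec2', 'proto0-cms-elb']
```

Note that `http_service` is the only type that produces two groups, one for the instances and one for the ELB.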
service_type = sr_entry['type']
if service_type not in SERVICE_TYPE_ROLE:
print_if_verbose("not eligible for role (and possibly instance profile); service type: {}".format(service_type))
return
if "assume_role_policy" in sr_entry:
# Explicitly defined AssumeRole policy
assume_role_policy_document = resolve_policy_document(sr_entry["assume_role_policy"])
else:
# Create Service:AssumeRole policy using the service type in the SERVICE_TYPE_ROLE dict
# which must list a service type to use this capacity (most do)
if SERVICE_TYPE_ROLE[service_type] is None:
fail("service_type: {} does not have a default service-type AssumeRole policy".format(service_type))
formatted_principals = '"Service": "{}"'.format(SERVICE_TYPE_ROLE[service_type])
assume_role_policy_document = '''{
"Version" : "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": { ''' + formatted_principals + ''' },
"Action": [ "sts:AssumeRole" ]
}]
}'''
if not get_role_id(role_name):
print("Create role: {}".format(role_name))
print_if_verbose("AssumeRole policy document:\n{}".format(assume_role_policy_document))
if CONTEXT.commit:
try:
new_role = CLIENTS["iam"].create_role(
RoleName=role_name, AssumeRolePolicyDocument=assume_role_policy_document
)
except ClientError as error:
fail("Exception creating new role named: {} {}".format(role_name, error), sys.exc_info())
print(new_role["Role"]["RoleId"])
else:
print_if_verbose("role already exists: {}".format(role_name)) | def conditionally_create_role(role_name, sr_entry) | Create role_name if a role by that name does not already exist; attach a custom list of Principals
to its AssumeRolePolicy
Args:
role_name: the name for the role to create
sr_entry: service registry entry
Example of a (complex) AssumeRole policy document comprised of two IAM entities and a service:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com",
"AWS": [
"arn:aws:iam::978969509086:root",
"arn:aws:iam::978969509086:role/mgmt-jenkins"
]
},
"Action": "sts:AssumeRole"
}
]
} | 3.818436 | 3.64283 | 1.048206 |
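The string-concatenated policy document is fragile, so it is worth verifying that the default Service:AssumeRole construction actually yields valid JSON. A sketch using `ec2.amazonaws.com` as an example principal:

```python
import json

# Build the default AssumeRole document the way conditionally_create_role does,
# then confirm it parses as JSON.
formatted_principals = '"Service": "{}"'.format("ec2.amazonaws.com")
doc = '''{
  "Version" : "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { ''' + formatted_principals + ''' },
    "Action": [ "sts:AssumeRole" ]
  }]
}'''
parsed = json.loads(doc)
print(parsed["Statement"][0]["Principal"])  # {'Service': 'ec2.amazonaws.com'}
```

If the principal string were malformed, `json.loads` would raise here rather than surfacing later as an opaque IAM `MalformedPolicyDocument` error.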
# make instance profile if this service_type gets an instance profile
if service_type not in INSTANCE_PROFILE_SERVICE_TYPES:
print_if_verbose("service type: {} not eligible for instance profile".format(service_type))
return
instance_profile = get_instance_profile(role_name)
if not instance_profile:
print("Create instance profile: {}".format(role_name))
if CONTEXT.commit:
try:
instance_profile = CLIENTS["iam"].create_instance_profile(InstanceProfileName=role_name)
except ClientError as error:
fail("Exception creating instance profile named: {} {}".format(role_name, error), sys.exc_info())
else:
print_if_verbose("instance profile already exists: {}".format(role_name))
# attach instance profile to role; test 'if instance_profile' because we drop through to here in a dry run
if instance_profile and not instance_profile_contains_role(instance_profile, role_name):
print("Add role: {} to instance profile: {}".format(role_name, role_name))
if CONTEXT.commit:
try:
CLIENTS["iam"].add_role_to_instance_profile(InstanceProfileName=role_name, RoleName=role_name)
except ClientError as error:
fail("Exception adding role to instance profile: {} {}".format(role_name, error), sys.exc_info())
else:
print_if_verbose("instance profile already contains role: {}".format(role_name)) | def conditionally_create_profile(role_name, service_type) | Check that there is a 1:1 correspondence with an InstanceProfile having the same name
as the role, and that the role is contained in it. Create InstanceProfile and attach to role if needed. | 2.962801 | 2.844782 | 1.041486 |
service_type = sr_entry['type']
if not (service_type in SERVICE_TYPE_ROLE and "aws_managed_policies" in sr_entry):
print_if_verbose("not eligible for policies; service_type: {} is not valid for policies "
"or no 'aws_managed_policies' key in service registry for this role".format(service_type))
return
for policy_name in sr_entry['aws_managed_policies']:
print_if_verbose("loading policy: {} for role: {}".format(policy_name, role_name))
if CONTEXT.commit:
try:
CLIENTS["iam"].attach_role_policy(RoleName=role_name, PolicyArn='arn:aws:iam::aws:policy/' + policy_name)
except:
fail("Exception putting policy: {} onto role: {}".format(policy_name, role_name), sys.exc_info()) | def conditionally_attach_managed_policies(role_name, sr_entry) | If 'aws_managed_policies' key lists the names of AWS managed policies to bind to the role,
attach them to the role
Args:
role_name: name of the role to attach the policies to
sr_entry: service registry entry | 4.433237 | 4.10817 | 1.079127 |
service_type = sr_entry['type']
if not (service_type in SERVICE_TYPE_ROLE and "policies" in sr_entry):
print_if_verbose("not eligible for policies; service_type: {} is not valid for policies "
"or no 'policies' key in service registry for this role".format(service_type))
return
for policy_name in sr_entry['policies']:
print_if_verbose("loading policy: {} for role: {}".format(policy_name, role_name))
try:
policy_document = resolve_policy_document(policy_name)
except:
fail("Exception loading policy: {} for role: {}".format(policy_name, role_name), sys.exc_info())
# inline the policy onto the role
if CONTEXT.commit:
try:
CLIENTS["iam"].put_role_policy(RoleName=role_name, PolicyName=policy_name, PolicyDocument=policy_document)
except:
fail("Exception putting policy: {} onto role: {}".format(policy_name, role_name), sys.exc_info()) | def conditionally_inline_policies(role_name, sr_entry) | If 'policies' key lists the filename prefixes of policies to bind to the role,
load them from the expected path and inline them onto the role
Args:
role_name: name of the role to attach the policies to
sr_entry: service registry entry | 3.615442 | 3.363497 | 1.074906 |
with tf.variable_scope("sum_abs_distance"):
return tf.reduce_sum(tf.abs(preds - labels), axis=-1) | def sum_abs_distance(labels, preds) | Compute the sum of abs distances.
:param labels: A float tensor of shape [batch_size, ..., X] representing the labels.
:param preds: A float tensor of shape [batch_size, ..., X] representing the predictions.
:return: A float tensor of shape [batch_size, ...] representing the summed absolute distance. | 2.604192 | 2.965021 | 0.878305 |
with tf.variable_scope("l1_distance"):
return tf.norm(preds - labels, ord=1) | def l1_distance(labels, preds) | Compute the l1_distance.
:param labels: A float tensor of shape [batch_size, ..., X] representing the labels.
:param preds: A float tensor of shape [batch_size, ..., X] representing the predictions.
:return: A float tensor of shape [batch_size, ...] representing the l1 distance. | 3.50119 | 4.636141 | 0.755195 |
with tf.variable_scope("smooth_l1"):
return tf.reduce_sum(tf.losses.huber_loss(
labels=labels,
predictions=preds,
delta=delta,
loss_collection=None,
reduction=tf.losses.Reduction.NONE
), axis=-1) | def smooth_l1_distance(labels, preds, delta=1.0) | Compute the smooth l1_distance.
:param labels: A float tensor of shape [batch_size, ..., X] representing the labels.
:param preds: A float tensor of shape [batch_size, ..., X] representing the predictions.
:param delta: `float`, the point where the huber loss function changes from a quadratic to linear.
:return: A float tensor of shape [batch_size, ...] representing the smooth l1 distance. | 2.075652 | 2.158465 | 0.961633 |
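The Huber behavior behind `tf.losses.huber_loss` (quadratic below `delta`, linear above) can be sketched in pure Python; this is an illustrative re-implementation over one vector, not the TF call itself:

```python
# Smooth-L1 (Huber) distance summed over a single vector: quadratic for
# |diff| <= delta, linear beyond, matching the delta semantics documented above.
def smooth_l1_distance(labels, preds, delta=1.0):
    total = 0.0
    for y, p in zip(labels, preds):
        diff = abs(y - p)
        if diff <= delta:
            total += 0.5 * diff ** 2
        else:
            total += delta * (diff - 0.5 * delta)
    return total

print(smooth_l1_distance([0.0, 0.0], [0.5, 2.0]))  # 1.625
```

The small residual (0.5) contributes quadratically (0.125) while the large one (2.0) contributes linearly (1.5), which is exactly why smooth L1 is less sensitive to outliers than plain L2.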