| body | body_hash | docstring | path | name | repository_name | lang | body_without_docstring |
|---|---|---|---|---|---|---|---|
@abstractmethod
def evaluate(self, sentences: Union[(List[Sentence], Dataset)], gold_label_type: str, out_path: Union[(str, Path)]=None, embedding_storage_mode: str='none', mini_batch_size: int=32, num_workers: int=8, main_evaluation_metric: Tuple[(str, str)]=('micro avg', 'f1-score'), exclude_labels: List[str]=[], gol... | -3,746,999,487,125,572,600 | Evaluates the model. Returns a Result object containing evaluation
results and a loss value. Implement this to enable evaluation.
:param data_loader: DataLoader that iterates over dataset to be evaluated
:param out_path: Optional output path to store predictions
:param embedding_storage_mode: One of 'none', 'cpu' or 'g... | flair/nn/model.py | evaluate | MaxDall/flair | python | @abstractmethod
def evaluate(self, sentences: Union[(List[Sentence], Dataset)], gold_label_type: str, out_path: Union[(str, Path)]=None, embedding_storage_mode: str='none', mini_batch_size: int=32, num_workers: int=8, main_evaluation_metric: Tuple[(str, str)]=('micro avg', 'f1-score'), exclude_labels: List[str]=[], gol... |
@abstractmethod
def _get_state_dict(self):
'Returns the state dictionary for this model. Implementing this enables the save() and save_checkpoint()\n functionality.'
raise NotImplementedError | 4,904,642,327,725,068,000 | Returns the state dictionary for this model. Implementing this enables the save() and save_checkpoint()
functionality. | flair/nn/model.py | _get_state_dict | MaxDall/flair | python | @abstractmethod
def _get_state_dict(self):
'Returns the state dictionary for this model. Implementing this enables the save() and save_checkpoint()\n functionality.'
raise NotImplementedError |
@staticmethod
@abstractmethod
def _init_model_with_state_dict(state):
'Initialize the model from a state dictionary. Implementing this enables the load() and load_checkpoint()\n functionality.'
raise NotImplementedError | 1,439,372,108,658,756,600 | Initialize the model from a state dictionary. Implementing this enables the load() and load_checkpoint()
functionality. | flair/nn/model.py | _init_model_with_state_dict | MaxDall/flair | python | @staticmethod
@abstractmethod
def _init_model_with_state_dict(state):
'Initialize the model from a state dictionary. Implementing this enables the load() and load_checkpoint()\n functionality.'
raise NotImplementedError |
def save(self, model_file: Union[(str, Path)], checkpoint: bool=False):
'\n Saves the current model to the provided file.\n :param model_file: the model file\n '
model_state = self._get_state_dict()
optimizer = scheduler = None
if hasattr(self, 'model_card'):
if ('training_p... | 2,092,918,784,861,721,300 | Saves the current model to the provided file.
:param model_file: the model file | flair/nn/model.py | save | MaxDall/flair | python | def save(self, model_file: Union[(str, Path)], checkpoint: bool=False):
'\n Saves the current model to the provided file.\n :param model_file: the model file\n '
model_state = self._get_state_dict()
optimizer = scheduler = None
if hasattr(self, 'model_card'):
if ('training_p... |
@classmethod
def load(cls, model: Union[(str, Path)]):
'\n Loads the model from the given file.\n :param model: the model file\n :return: the loaded text classifier model\n '
model_file = cls._fetch_model(str(model))
with warnings.catch_warnings():
warnings.filterwarnings... | 2,603,128,188,631,163,400 | Loads the model from the given file.
:param model: the model file
:return: the loaded text classifier model | flair/nn/model.py | load | MaxDall/flair | python | @classmethod
def load(cls, model: Union[(str, Path)]):
'\n Loads the model from the given file.\n :param model: the model file\n :return: the loaded text classifier model\n '
model_file = cls._fetch_model(str(model))
with warnings.catch_warnings():
warnings.filterwarnings... |
def forward_pass(self, sentences: Union[(List[DataPoint], DataPoint)], return_label_candidates: bool=False):
'This method does a forward pass through the model given a list of data points as input.\n Returns the tuple (scores, labels) if return_label_candidates = False, where scores are a tensor of logits\n ... | -4,106,641,530,345,396,000 | This method does a forward pass through the model given a list of data points as input.
Returns the tuple (scores, labels) if return_label_candidates = False, where scores are a tensor of logits
produced by the decoder and labels are the string labels for each data point.
Returns the tuple (scores, labels, data_points,... | flair/nn/model.py | forward_pass | MaxDall/flair | python | def forward_pass(self, sentences: Union[(List[DataPoint], DataPoint)], return_label_candidates: bool=False):
'This method does a forward pass through the model given a list of data points as input.\n Returns the tuple (scores, labels) if return_label_candidates = False, where scores are a tensor of logits\n ... |
def predict(self, sentences: Union[(List[Sentence], Sentence)], mini_batch_size: int=32, return_probabilities_for_all_classes: bool=False, verbose: bool=False, label_name: Optional[str]=None, return_loss=False, embedding_storage_mode='none'):
"\n Predicts the class labels for the given sentences. The labels ... | -3,447,273,615,040,302,600 | Predicts the class labels for the given sentences. The labels are directly added to the sentences.
:param sentences: list of sentences
:param mini_batch_size: mini batch size to use
:param return_probabilities_for_all_classes : return probabilities for all classes instead of only best predicted
:param verbose: set to T... | flair/nn/model.py | predict | MaxDall/flair | python | def predict(self, sentences: Union[(List[Sentence], Sentence)], mini_batch_size: int=32, return_probabilities_for_all_classes: bool=False, verbose: bool=False, label_name: Optional[str]=None, return_loss=False, embedding_storage_mode='none'):
"\n Predicts the class labels for the given sentences. The labels ... |
def __init__(__self__, resource_name, opts=None, charset=None, collation=None, instance=None, name=None, project=None, __props__=None, __name__=None, __opts__=None):
'\n Create a Database resource with the given unique name, props, and options.\n \n :param str resource_name: The name of the res... | -8,634,422,539,388,717,000 | Create a Database resource with the given unique name, props, and options.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] project: The ID of the project in which the resource belongs.
If it is not provided, the provider ... | sdk/python/pulumi_gcp/sql/database.py | __init__ | 23doors/pulumi-gcp | python | def __init__(__self__, resource_name, opts=None, charset=None, collation=None, instance=None, name=None, project=None, __props__=None, __name__=None, __opts__=None):
'\n Create a Database resource with the given unique name, props, and options.\n \n :param str resource_name: The name of the res... |
@staticmethod
def get(resource_name, id, opts=None, charset=None, collation=None, instance=None, name=None, project=None, self_link=None):
"\n Get an existing Database resource's state with the given name, id, and optional extra\n properties used to qualify the lookup.\n \n :param str re... | -3,593,032,024,044,406,000 | Get an existing Database resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param str id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource... | sdk/python/pulumi_gcp/sql/database.py | get | 23doors/pulumi-gcp | python | @staticmethod
def get(resource_name, id, opts=None, charset=None, collation=None, instance=None, name=None, project=None, self_link=None):
"\n Get an existing Database resource's state with the given name, id, and optional extra\n properties used to qualify the lookup.\n \n :param str re... |
def parse_args():
'Parse input arguments\n\n Use --help to see a pretty description of the arguments\n '
if ('ipykernel' in sys.argv[0]):
sys.argv = [sys.argv[0]]
parser = argparse.ArgumentParser()
parser.add_argument('-n', type=int, default=15, choices=[8, 15, 24, 35, 48, 63, 80], help='N... | 7,348,549,755,786,784,000 | Parse input arguments
Use --help to see a pretty description of the arguments | experiments/npuzzle/solve.py | parse_args | camall3n/focused-macros | python | def parse_args():
'Parse input arguments\n\n Use --help to see a pretty description of the arguments\n '
if ('ipykernel' in sys.argv[0]):
sys.argv = [sys.argv[0]]
parser = argparse.ArgumentParser()
parser.add_argument('-n', type=int, default=15, choices=[8, 15, 24, 35, 48, 63, 80], help='N... |
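The row above builds its CLI with `argparse`, restricting `-n` to the valid puzzle sizes. A minimal self-contained sketch of the same pattern (the description string and the `argv=None` convenience parameter are additions for illustration, not taken from the row):

```python
import argparse

def parse_args(argv=None):
    # Mirror the pattern above: an integer -n restricted to valid puzzle
    # sizes, defaulting to the 15-puzzle. Passing argv explicitly makes the
    # function testable without touching sys.argv.
    parser = argparse.ArgumentParser(description='N-Puzzle solver (sketch)')
    parser.add_argument('-n', type=int, default=15,
                        choices=[8, 15, 24, 35, 48, 63, 80],
                        help='number of tiles in the puzzle')
    return parser.parse_args(argv)

args = parse_args([])   # empty argv -> defaults
print(args.n)           # 15
```

Invalid values such as `-n 10` make `parse_args` exit with a usage error, which is why `choices` is preferable to validating by hand.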
def solve():
'Instantiate an N-Puzzle and solve with the specified macro-actions and search algorithm'
args = parse_args()
random.seed(args.random_seed)
np.random.seed(args.random_seed)
start = NPuzzle(n=args.n).scramble(seed=args.random_seed)
if args.random_goal:
goal = NPuzzle(n=args.n... | 3,855,904,374,386,665,500 | Instantiate an N-Puzzle and solve with the specified macro-actions and search algorithm | experiments/npuzzle/solve.py | solve | camall3n/focused-macros | python | def solve():
args = parse_args()
random.seed(args.random_seed)
np.random.seed(args.random_seed)
start = NPuzzle(n=args.n).scramble(seed=args.random_seed)
if args.random_goal:
goal = NPuzzle(n=args.n).scramble(seed=(args.random_seed + 1000))
print('Using goal pattern: {:03d}'.for... |
def format_value(value, df=None, doc=None, currency=None, translated=False):
'Format value based on given fieldtype, document reference, currency reference.\n\tIf docfield info (df) is not given, it will try and guess based on the datatype of the value'
if isinstance(df, string_types):
df = frappe._dict... | -4,790,803,982,193,777,000 | Format value based on given fieldtype, document reference, currency reference.
If docfield info (df) is not given, it will try and guess based on the datatype of the value | frappe/utils/formatters.py | format_value | EHASUN/frappe | python | def format_value(value, df=None, doc=None, currency=None, translated=False):
'Format value based on given fieldtype, document reference, currency reference.\n\tIf docfield info (df) is not given, it will try and guess based on the datatype of the value'
if isinstance(df, string_types):
df = frappe._dict... |
def get_client():
'\n Loads the serialized client from database\n '
db = get_db()
pickled_client = db.execute('SELECT pickled_client FROM btc_pay_server_client ORDER BY id').fetchone()
return pickle.loads(pickled_client['pickled_client']) | -9,065,270,775,514,728,000 | Loads the serialized client from database | app/btcpayserver_helper.py | get_client | psqnt/flask-btcpay-example | python | def get_client():
'\n \n '
db = get_db()
pickled_client = db.execute('SELECT pickled_client FROM btc_pay_server_client ORDER BY id').fetchone()
return pickle.loads(pickled_client['pickled_client']) |
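The `get_client` row above deserializes a pickled client object fetched from a database column. The round-trip it relies on can be sketched with the standard `pickle` module alone (the `Client` class and host URL here are stand-ins, not the real BTCPay client):

```python
import pickle

class Client:
    # Stand-in for the BTCPay client object stored in btc_pay_server_client.
    def __init__(self, host):
        self.host = host

# Serialize as the app would before inserting the row into the database...
blob = pickle.dumps(Client('https://btcpay.example'))

# ...and deserialize as get_client() does with the fetched bytes column.
client = pickle.loads(blob)
print(client.host)
```

Note that unpickling executes arbitrary code from the blob, so this pattern is only safe when the database column is fully trusted.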
def create_invoice(price=Config.TIP_AMOUNT, currency=Config.TIP_CURRENCY, order_id=None, desc=None, notification_url=None, redirect_url=None):
"\n Creates a new invoice and returns invoice id\n :param price: a given price (default is bitcoin)\n :param currency: currency ticker from bitpay API: 'USD', 'EUR'... | -4,921,221,465,815,942,000 | Creates a new invoice and returns invoice id
:param price: a given price (default is bitcoin)
:param currency: currency ticker from bitpay API: 'USD', 'EUR', 'BTC' etc
:return: invoice_id -> str | app/btcpayserver_helper.py | create_invoice | psqnt/flask-btcpay-example | python | def create_invoice(price=Config.TIP_AMOUNT, currency=Config.TIP_CURRENCY, order_id=None, desc=None, notification_url=None, redirect_url=None):
"\n Creates a new invoice and returns invoice id\n :param price: a given price (default is bitcoin)\n :param currency: currency ticker from bitpay API: 'USD', 'EUR'... |
def get_invoice(invoice_id: str):
'\n Get an invoice by ID\n '
client = get_client()
return client.get_invoice(invoice_id) | 8,041,105,536,858,479,000 | Get an invoice by ID | app/btcpayserver_helper.py | get_invoice | psqnt/flask-btcpay-example | python | def get_invoice(invoice_id: str):
'\n \n '
client = get_client()
return client.get_invoice(invoice_id) |
def get_most_recent_invoice():
'\n        Returns the most recent invoice created\n        '
client = get_client()
return client.get_invoices()[:1] | -8,889,290,577,071,344,000 | Returns the most recent invoice created | app/btcpayserver_helper.py | get_most_recent_invoice | psqnt/flask-btcpay-example | python | def get_most_recent_invoice():
'\n \n '
client = get_client()
return client.get_invoices()[:1] |
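The row above returns `get_invoices()[:1]` rather than `get_invoices()[0]`. The slice form yields a one-element list and, unlike indexing, never raises on an empty result; a small sketch with placeholder invoice IDs:

```python
# Hypothetical invoice IDs, newest first, as the API would return them.
invoices = ['inv_3', 'inv_2', 'inv_1']

most_recent = invoices[:1]   # a one-element list, not the element itself
print(most_recent)           # ['inv_3']

# Slicing is safe on an empty result set, where [0] would raise IndexError:
print([][:1])                # []
```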
def platform_config_update(config):
'\n Update configuration for the remote platform\n\n @param config The configuration dictionary to use/update\n '
global remote_port_map
config['port_map'] = remote_port_map.copy()
config['caps_table_idx'] = 0 | -1,269,685,456,096,339,700 | Update configuration for the remote platform
@param config The configuration dictionary to use/update | src/ptf/platforms/remote.py | platform_config_update | PJHsieh/MarkHsieh_ptf | python | def platform_config_update(config):
'\n Update configuration for the remote platform\n\n @param config The configuration dictionary to use/update\n '
global remote_port_map
config['port_map'] = remote_port_map.copy()
config['caps_table_idx'] = 0 |
def repl(self, clean_code, lastonly):
' REPL\n\n If `self.debug==True` then result is the raw list of lines of bytes,\n otherwise, it is a list of (lineNumber, stdoutLines, valueLines, typeLines),\n where again the last 3 entries are lists of lines of bytes. \n '
self.proc.sendline(c... | 7,573,649,737,421,780,000 | REPL
If `self.debug==True` then result is the raw list of lines of bytes,
otherwise, it is a list of (lineNumber, stdoutLines, valueLines, typeLines),
where again the last 3 entries are lists of lines of bytes. | m2_kernel/kernel.py | repl | MWhybrow92/Macaulay2-Jupyter-Kernel | python | def repl(self, clean_code, lastonly):
' REPL\n\n If `self.debug==True` then result is the raw list of lines of bytes,\n otherwise, it is a list of (lineNumber, stdoutLines, valueLines, typeLines),\n where again the last 3 entries are lists of lines of bytes. \n '
self.proc.sendline(c... |
def __init__(self, *args, **kwargs):
' kernel init - calls __init__ on the parent and sets up the M2Interp object\n '
super().__init__(*args, **kwargs)
self.interp = M2Interp(configpath=os.environ.get('M2JK_CONFIG'))
self.interp.start() | 8,517,768,527,292,954,000 | kernel init - calls __init__ on the parent and sets up the M2Interp object | m2_kernel/kernel.py | __init__ | MWhybrow92/Macaulay2-Jupyter-Kernel | python | def __init__(self, *args, **kwargs):
' \n '
super().__init__(*args, **kwargs)
self.interp = M2Interp(configpath=os.environ.get('M2JK_CONFIG'))
self.interp.start() |
def send_stream(self, text, stderr=False):
' enqueues a stdout or stderr message for the given cell\n '
stdfile = ('stderr' if stderr else 'stdout')
content = {'name': stdfile, 'text': (text + '\n')}
self.send_response(self.iopub_socket, 'stream', content) | -8,047,124,716,450,033,000 | enqueues a stdout or stderr message for the given cell | m2_kernel/kernel.py | send_stream | MWhybrow92/Macaulay2-Jupyter-Kernel | python | def send_stream(self, text, stderr=False):
' \n '
stdfile = ('stderr' if stderr else 'stdout')
content = {'name': stdfile, 'text': (text + '\n')}
self.send_response(self.iopub_socket, 'stream', content) |
def do_execute(self, code, silent, store_history=True, user_expressions=None, allow_stdin=False):
' kernel entry point for the execution of each cell\n '
try:
output_lines = self.interp.execute(code)
except Exception as e:
output_lines = []
self.send_stream(str(e), True)
x... | -5,955,703,454,956,886,000 | kernel entry point for the execution of each cell | m2_kernel/kernel.py | do_execute | MWhybrow92/Macaulay2-Jupyter-Kernel | python | def do_execute(self, code, silent, store_history=True, user_expressions=None, allow_stdin=False):
' \n '
try:
output_lines = self.interp.execute(code)
except Exception as e:
output_lines = []
self.send_stream(str(e), True)
xcount = None
if (not silent):
if (not... |
def checkFormatReturnTraceOnError(file_path):
'Run checkFormat and return the traceback of any exception.'
try:
return checkFormat(file_path)
except:
return traceback.format_exc().split('\n') | -6,684,526,828,670,164,000 | Run checkFormat and return the traceback of any exception. | tools/check_format.py | checkFormatReturnTraceOnError | isholaomotayo/envoy | python | def checkFormatReturnTraceOnError(file_path):
try:
return checkFormat(file_path)
except:
return traceback.format_exc().split('\n') |
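The `checkFormatReturnTraceOnError` row above converts any exception into the formatted traceback split into lines, so a worker pool can report failures as data instead of crashing. A self-contained sketch of the same pattern (`check_format` here is a hypothetical checker that always fails, purely to exercise the error path):

```python
import traceback

def check_format(path):
    # Hypothetical checker that fails, so the except branch runs below.
    raise ValueError(f'bad format in {path}')

def check_format_return_trace_on_error(path):
    # On any exception, return the formatted traceback as a list of lines
    # instead of propagating it to the caller.
    try:
        return check_format(path)
    except Exception:
        return traceback.format_exc().split('\n')

lines = check_format_return_trace_on_error('example.cc')
print(lines[0])   # 'Traceback (most recent call last):'
```

Because `format_exc()` ends with a newline, the split list ends with an empty string; callers that join the lines back together get the original text unchanged.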
def checkOwners(dir_name, owned_directories, error_messages):
'Checks to make sure a given directory is present either in CODEOWNERS or OWNED_EXTENSIONS\n\n Args:\n dir_name: the directory being checked.\n owned_directories: directories currently listed in CODEOWNERS.\n error_messages: where to put an err... | 7,425,760,383,986,747,000 | Checks to make sure a given directory is present either in CODEOWNERS or OWNED_EXTENSIONS
Args:
dir_name: the directory being checked.
owned_directories: directories currently listed in CODEOWNERS.
error_messages: where to put an error message for new unowned directories. | tools/check_format.py | checkOwners | isholaomotayo/envoy | python | def checkOwners(dir_name, owned_directories, error_messages):
'Checks to make sure a given directory is present either in CODEOWNERS or OWNED_EXTENSIONS\n\n Args:\n dir_name: the directory being checked.\n owned_directories: directories currently listed in CODEOWNERS.\n error_messages: where to put an err... |
def checkFormatVisitor(arg, dir_name, names):
'Run checkFormat in parallel for the given files.\n\n Args:\n arg: a tuple (pool, result_list, owned_directories, error_messages)\n pool and result_list are for starting tasks asynchronously.\n owned_directories tracks directories listed in the CODEOWNERS ... | 4,855,462,129,043,061,000 | Run checkFormat in parallel for the given files.
Args:
arg: a tuple (pool, result_list, owned_directories, error_messages)
pool and result_list are for starting tasks asynchronously.
owned_directories tracks directories listed in the CODEOWNERS file.
error_messages is a list of string format errors.
di... | tools/check_format.py | checkFormatVisitor | isholaomotayo/envoy | python | def checkFormatVisitor(arg, dir_name, names):
'Run checkFormat in parallel for the given files.\n\n Args:\n arg: a tuple (pool, result_list, owned_directories, error_messages)\n pool and result_list are for starting tasks asynchronously.\n owned_directories tracks directories listed in the CODEOWNERS ... |
def _send_request(self, request: HttpRequest, **kwargs: Any) -> Awaitable[AsyncHttpResponse]:
'Runs the network request through the client\'s chained policies.\n\n >>> from azure.core.rest import HttpRequest\n >>> request = HttpRequest("GET", "https://www.example.org/")\n <HttpRequest [GET], ur... | 996,727,745,646,716,500 | Runs the network request through the client's chained policies.
>>> from azure.core.rest import HttpRequest
>>> request = HttpRequest("GET", "https://www.example.org/")
<HttpRequest [GET], url: 'https://www.example.org/'>
>>> response = await client._send_request(request)
<AsyncHttpResponse: 200 OK>
For more informat... | sdk/resources/azure-mgmt-resource/azure/mgmt/resource/policy/v2016_04_01/aio/_policy_client.py | _send_request | AikoBB/azure-sdk-for-python | python | def _send_request(self, request: HttpRequest, **kwargs: Any) -> Awaitable[AsyncHttpResponse]:
'Runs the network request through the client\'s chained policies.\n\n >>> from azure.core.rest import HttpRequest\n >>> request = HttpRequest("GET", "https://www.example.org/")\n <HttpRequest [GET], ur... |
@api.doc('list_of_registered_users')
@api.marshal_list_with(_user, envelope='data')
def get(self):
'List all registered users'
return UserService.get_all_users() | 4,302,180,309,125,707,000 | List all registered users | app/project/user/user_controller.py | get | makci97/lms_flask | python | @api.doc('list_of_registered_users')
@api.marshal_list_with(_user, envelope='data')
def get(self):
return UserService.get_all_users() |
@auth.login_required
@AuthService.admin_permission_required
@api.response(201, 'User successfully created.')
@api.doc('create a new user(only for admin)')
@api.expect(_user, validate=True)
def post(self):
'Creates a new User(only for admin) '
user_service = UserService()
return user_service.create_user(requ... | -6,621,883,920,926,674,000 | Creates a new User(only for admin) | app/project/user/user_controller.py | post | makci97/lms_flask | python | @auth.login_required
@AuthService.admin_permission_required
@api.response(201, 'User successfully created.')
@api.doc('create a new user(only for admin)')
@api.expect(_user, validate=True)
def post(self):
' '
user_service = UserService()
return user_service.create_user(request.json) |
@api.doc('get a user')
@api.marshal_with(_user)
def get(self, public_id):
'get a user given its identifier'
user_service = UserService()
user_service.load_user(public_id)
if user_service.is_nan_user():
api.abort(404)
else:
return user_service.get_user_public() | 6,630,313,780,989,657,000 | get a user given its identifier | app/project/user/user_controller.py | get | makci97/lms_flask | python | @api.doc('get a user')
@api.marshal_with(_user)
def get(self, public_id):
user_service = UserService()
user_service.load_user(public_id)
if user_service.is_nan_user():
api.abort(404)
else:
return user_service.get_user_public() |
def __init__(self, name: str, num_qubits: int, params: List, label: Optional[str]=None) -> None:
'Create a new gate.\n\n Args:\n name: The Qobj name of the gate.\n num_qubits: The number of qubits the gate acts on.\n params: A list of parameters.\n label: An option... | 2,209,248,935,091,320,600 | Create a new gate.
Args:
name: The Qobj name of the gate.
num_qubits: The number of qubits the gate acts on.
params: A list of parameters.
label: An optional label for the gate. | qiskit/circuit/gate.py | __init__ | Blacksmith-qi/qiskit-terra | python | def __init__(self, name: str, num_qubits: int, params: List, label: Optional[str]=None) -> None:
'Create a new gate.\n\n Args:\n name: The Qobj name of the gate.\n num_qubits: The number of qubits the gate acts on.\n params: A list of parameters.\n label: An option... |
def to_matrix(self) -> np.ndarray:
'Return a Numpy.array for the gate unitary matrix.\n\n Returns:\n np.ndarray: if the Gate subclass has a matrix definition.\n\n Raises:\n CircuitError: If a Gate subclass does not implement this method an\n exception will be raise... | -7,402,389,822,080,293,000 | Return a Numpy.array for the gate unitary matrix.
Returns:
np.ndarray: if the Gate subclass has a matrix definition.
Raises:
CircuitError: If a Gate subclass does not implement this method an
exception will be raised when this base class method is called. | qiskit/circuit/gate.py | to_matrix | Blacksmith-qi/qiskit-terra | python | def to_matrix(self) -> np.ndarray:
'Return a Numpy.array for the gate unitary matrix.\n\n Returns:\n np.ndarray: if the Gate subclass has a matrix definition.\n\n Raises:\n CircuitError: If a Gate subclass does not implement this method an\n exception will be raise... |
def power(self, exponent: float):
'Creates a unitary gate as `gate^exponent`.\n\n Args:\n exponent (float): Gate^exponent\n\n Returns:\n qiskit.extensions.UnitaryGate: To which `to_matrix` is self.to_matrix^exponent.\n\n Raises:\n CircuitError: If Gate is not un... | 5,892,279,998,234,714,000 | Creates a unitary gate as `gate^exponent`.
Args:
exponent (float): Gate^exponent
Returns:
qiskit.extensions.UnitaryGate: To which `to_matrix` is self.to_matrix^exponent.
Raises:
CircuitError: If Gate is not unitary | qiskit/circuit/gate.py | power | Blacksmith-qi/qiskit-terra | python | def power(self, exponent: float):
'Creates a unitary gate as `gate^exponent`.\n\n Args:\n exponent (float): Gate^exponent\n\n Returns:\n qiskit.extensions.UnitaryGate: To which `to_matrix` is self.to_matrix^exponent.\n\n Raises:\n CircuitError: If Gate is not un... |
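The `power` docstring above describes building `gate^exponent` from the gate's unitary. For an integer exponent this reduces to a repeated matrix product, which can be sketched with plain NumPy (the Pauli-X matrix is an illustrative choice, not taken from the row):

```python
import numpy as np

# Pauli-X unitary; squaring it gives the identity, since X is an involution.
X = np.array([[0, 1],
              [1, 0]])

X_squared = np.linalg.matrix_power(X, 2)
print(X_squared)   # identity matrix
```

Fractional exponents are the interesting case the docstring hints at; those require a matrix function (e.g. via the eigendecomposition of the unitary) rather than repeated multiplication.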
def assemble(self) -> 'Instruction':
'Assemble a QasmQobjInstruction'
instruction = super().assemble()
if self.label:
instruction.label = self.label
return instruction | 6,947,129,926,959,157,000 | Assemble a QasmQobjInstruction | qiskit/circuit/gate.py | assemble | Blacksmith-qi/qiskit-terra | python | def assemble(self) -> 'Instruction':
instruction = super().assemble()
if self.label:
instruction.label = self.label
return instruction |
@property
def label(self) -> str:
'Return gate label'
return self._label | 3,554,960,801,669,385,700 | Return gate label | qiskit/circuit/gate.py | label | Blacksmith-qi/qiskit-terra | python | @property
def label(self) -> str:
return self._label |
@label.setter
def label(self, name: str):
'Set gate label to name\n\n Args:\n name (str or None): label to assign unitary\n\n Raises:\n TypeError: name is not string or None.\n '
if isinstance(name, (str, type(None))):
self._label = name
else:
raise... | -3,460,210,592,454,413,000 | Set gate label to name
Args:
name (str or None): label to assign unitary
Raises:
TypeError: name is not string or None. | qiskit/circuit/gate.py | label | Blacksmith-qi/qiskit-terra | python | @label.setter
def label(self, name: str):
'Set gate label to name\n\n Args:\n name (str or None): label to assign unitary\n\n Raises:\n TypeError: name is not string or None.\n '
if isinstance(name, (str, type(None))):
self._label = name
else:
raise... |
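The truncated `label` setter above validates its argument with `isinstance(name, (str, type(None)))` before assigning. A minimal self-contained sketch of that property pattern (the `Gate` class here is a bare stand-in, not the full Qiskit class):

```python
class Gate:
    def __init__(self):
        self._label = None

    @property
    def label(self):
        # Return the gate label.
        return self._label

    @label.setter
    def label(self, name):
        # Accept only str or None, as the setter above does; anything else
        # raises TypeError rather than silently storing a bad value.
        if isinstance(name, (str, type(None))):
            self._label = name
        else:
            raise TypeError('label expects a string or None')

g = Gate()
g.label = 'cx'
print(g.label)   # cx
```

Including `type(None)` in the tuple lets callers clear the label with `g.label = None` without a separate code path.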
def control(self, num_ctrl_qubits: Optional[int]=1, label: Optional[str]=None, ctrl_state: Optional[Union[(int, str)]]=None):
"Return controlled version of gate. See :class:`.ControlledGate` for usage.\n\n Args:\n num_ctrl_qubits: number of controls to add to gate (default=1)\n label: o... | -3,450,574,199,674,920,400 | Return controlled version of gate. See :class:`.ControlledGate` for usage.
Args:
num_ctrl_qubits: number of controls to add to gate (default=1)
label: optional gate label
ctrl_state: The control state in decimal or as a bitstring
(e.g. '111'). If None, use 2**num_ctrl_qubits-1.
Returns:
qiskit... | qiskit/circuit/gate.py | control | Blacksmith-qi/qiskit-terra | python | def control(self, num_ctrl_qubits: Optional[int]=1, label: Optional[str]=None, ctrl_state: Optional[Union[(int, str)]]=None):
"Return controlled version of gate. See :class:`.ControlledGate` for usage.\n\n Args:\n num_ctrl_qubits: number of controls to add to gate (default=1)\n label: o... |
@staticmethod
def _broadcast_single_argument(qarg: List) -> List:
'Expands a single argument.\n\n For example: [q[0], q[1]] -> [q[0]], [q[1]]\n '
for arg0 in qarg:
(yield ([arg0], [])) | -4,360,610,716,977,950,000 | Expands a single argument.
For example: [q[0], q[1]] -> [q[0]], [q[1]] | qiskit/circuit/gate.py | _broadcast_single_argument | Blacksmith-qi/qiskit-terra | python | @staticmethod
def _broadcast_single_argument(qarg: List) -> List:
'Expands a single argument.\n\n For example: [q[0], q[1]] -> [q[0]], [q[1]]\n '
for arg0 in qarg:
(yield ([arg0], [])) |
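The `_broadcast_single_argument` row above is a generator that expands one qubit-argument list into per-qubit pairs, e.g. `[q[0], q[1]]` into `([q[0]], [])` and `([q[1]], [])`. The behavior can be reproduced standalone (string placeholders stand in for qubit objects):

```python
def broadcast_single_argument(qarg):
    # Expand [q0, q1] into ([q0], []), ([q1], []): one (qargs, cargs) pair
    # per qubit, with an empty classical-argument list each time.
    for arg0 in qarg:
        yield ([arg0], [])

pairs = list(broadcast_single_argument(['q0', 'q1']))
print(pairs)   # [(['q0'], []), (['q1'], [])]
```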
def broadcast_arguments(self, qargs: List, cargs: List) -> Tuple[(List, List)]:
'Validation and handling of the arguments and its relationship.\n\n For example, ``cx([q[0],q[1]], q[2])`` means ``cx(q[0], q[2]); cx(q[1], q[2])``. This\n method yields the arguments in the right grouping. In the given ex... | -2,935,113,582,535,424,000 | Validation and handling of the arguments and its relationship.
For example, ``cx([q[0],q[1]], q[2])`` means ``cx(q[0], q[2]); cx(q[1], q[2])``. This
method yields the arguments in the right grouping. In the given example::
in: [[q[0],q[1]], q[2]],[]
outs: [q[0], q[2]], []
[q[1], q[2]], []
The gener... | qiskit/circuit/gate.py | broadcast_arguments | Blacksmith-qi/qiskit-terra | python | def broadcast_arguments(self, qargs: List, cargs: List) -> Tuple[(List, List)]:
'Validation and handling of the arguments and its relationship.\n\n For example, ``cx([q[0],q[1]], q[2])`` means ``cx(q[0], q[2]); cx(q[1], q[2])``. This\n method yields the arguments in the right grouping. In the given ex... |
def validate_parameter(self, parameter):
'Gate parameters should be int, float, or ParameterExpression'
if isinstance(parameter, ParameterExpression):
if (len(parameter.parameters) > 0):
return parameter
if (not parameter._symbol_expr.is_real):
raise CircuitError('Bound p... | -5,746,868,024,655,658,000 | Gate parameters should be int, float, or ParameterExpression | qiskit/circuit/gate.py | validate_parameter | Blacksmith-qi/qiskit-terra | python | def validate_parameter(self, parameter):
if isinstance(parameter, ParameterExpression):
if (len(parameter.parameters) > 0):
return parameter
if (not parameter._symbol_expr.is_real):
raise CircuitError('Bound parameter expression is complex in gate {}'.format(self.name))
... |
def weighted_neighbors_loss(train_data, valid_data, kernel):
'Computes the negative log prob per data point.'
(X_train, T_train) = train_data
(X_valid, T_valid) = valid_data
weight_mat = kernel(X_valid, X_train)
label_probs = np.dot(weight_mat, T_train)
label_probs = (label_probs / np.sum(label_... | 3,900,170,860,203,776,000 | Computes the negative log prob per data point. | cpu_ver/hypergrad/kernel_methods.py | weighted_neighbors_loss | LinZichuan/drmad | python | def weighted_neighbors_loss(train_data, valid_data, kernel):
(X_train, T_train) = train_data
(X_valid, T_valid) = valid_data
weight_mat = kernel(X_valid, X_train)
label_probs = np.dot(weight_mat, T_train)
label_probs = (label_probs / np.sum(label_probs, axis=1, keepdims=True))
mean_neg_log_... |
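The `weighted_neighbors_loss` row above weights training labels by kernel similarity and row-normalizes the result into per-point label probabilities. The visible part of that computation can be sketched end to end (the toy data and RBF-style kernel are illustrative assumptions; the truncated negative-log-likelihood step is omitted):

```python
import numpy as np

def weighted_neighbors_probs(X_train, T_train, X_valid, kernel):
    # Weight each training label row by kernel similarity, then renormalize
    # so each validation point's label probabilities sum to 1.
    weight_mat = kernel(X_valid, X_train)          # (n_valid, n_train)
    label_probs = np.dot(weight_mat, T_train)      # (n_valid, n_classes)
    return label_probs / np.sum(label_probs, axis=1, keepdims=True)

# Toy data: two 1-D training points with one-hot labels.
X_train = np.array([[0.0], [1.0]])
T_train = np.eye(2)
X_valid = np.array([[0.0]])
rbf = lambda A, B: np.exp(-(A - B.T) ** 2)         # similarity kernel

probs = weighted_neighbors_probs(X_train, T_train, X_valid, rbf)
print(probs)   # rows sum to 1; nearer neighbor gets more mass
```

The validation point coincides with the first training point, so the first class should dominate its probability row.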
def parse_args():
'PARAMETERS'
parser = argparse.ArgumentParser('training')
parser.add_argument('--use_cpu', action='store_true', default=False, help='use cpu mode')
parser.add_argument('--gpu', type=str, default='0', help='specify gpu device')
parser.add_argument('--batch_size', type=int, default=8... | 7,232,738,429,550,530,000 | PARAMETERS | train_realMulti-DA-Loss_classification.py | parse_args | congw112358/Pointnet_Pointnet2_pytorch | python | def parse_args():
parser = argparse.ArgumentParser('training')
parser.add_argument('--use_cpu', action='store_true', default=False, help='use cpu mode')
parser.add_argument('--gpu', type=str, default='0', help='specify gpu device')
parser.add_argument('--batch_size', type=int, default=8, help='batc... |
def stream(self, start_offset: int=0, shuffle: bool=False, skip_shuffle_at_epoch_end: bool=False, shuffle_seed: Optional[int]=None, shard_rank: int=0, num_shards: int=1, drop_shard_remainder: bool=False) -> yogadl.Stream:
'\n Create a stream from a cache.\n '
if (shuffle and (not skip_shuffle_at_e... | -904,865,744,401,641,300 | Create a stream from a cache. | yogadl/dataref/_local_lmdb_dataref.py | stream | determined-ai/yogadl | python | def stream(self, start_offset: int=0, shuffle: bool=False, skip_shuffle_at_epoch_end: bool=False, shuffle_seed: Optional[int]=None, shard_rank: int=0, num_shards: int=1, drop_shard_remainder: bool=False) -> yogadl.Stream:
'\n \n '
if (shuffle and (not skip_shuffle_at_epoch_end)):
assert (s... |
def __init__(self, title=None, url=None, latest_comment_url=None, type=None):
'UserNotificationSubject - a model defined in Swagger'
self._title = None
self._url = None
self._latest_comment_url = None
self._type = None
self.discriminator = None
if (title is not None):
self.title = ti... | -7,562,347,988,598,502,000 | UserNotificationSubject - a model defined in Swagger | gitee/models/user_notification_subject.py | __init__ | pygitee/pygitee | python | def __init__(self, title=None, url=None, latest_comment_url=None, type=None):
self._title = None
self._url = None
self._latest_comment_url = None
self._type = None
self.discriminator = None
if (title is not None):
self.title = title
if (url is not None):
self.url = url
... |
@property
def title(self):
'Gets the title of this UserNotificationSubject. # noqa: E501\n\n\n :return: The title of this UserNotificationSubject. # noqa: E501\n :rtype: str\n '
return self._title | 6,033,907,202,783,320,000 | Gets the title of this UserNotificationSubject. # noqa: E501
:return: The title of this UserNotificationSubject. # noqa: E501
:rtype: str | gitee/models/user_notification_subject.py | title | pygitee/pygitee | python | @property
def title(self):
'Gets the title of this UserNotificationSubject. # noqa: E501\n\n\n :return: The title of this UserNotificationSubject. # noqa: E501\n :rtype: str\n '
return self._title |
@title.setter
def title(self, title):
'Sets the title of this UserNotificationSubject.\n\n\n :param title: The title of this UserNotificationSubject. # noqa: E501\n :type: str\n '
self._title = title | 5,592,689,324,592,237,000 | Sets the title of this UserNotificationSubject.
:param title: The title of this UserNotificationSubject. # noqa: E501
:type: str | gitee/models/user_notification_subject.py | title | pygitee/pygitee | python | @title.setter
def title(self, title):
'Sets the title of this UserNotificationSubject.\n\n\n :param title: The title of this UserNotificationSubject. # noqa: E501\n :type: str\n '
self._title = title |
@property
def url(self):
'Gets the url of this UserNotificationSubject. # noqa: E501\n\n\n :return: The url of this UserNotificationSubject. # noqa: E501\n :rtype: str\n '
return self._url | 4,854,045,996,556,589,000 | Gets the url of this UserNotificationSubject. # noqa: E501
:return: The url of this UserNotificationSubject. # noqa: E501
:rtype: str | gitee/models/user_notification_subject.py | url | pygitee/pygitee | python | @property
def url(self):
'Gets the url of this UserNotificationSubject. # noqa: E501\n\n\n :return: The url of this UserNotificationSubject. # noqa: E501\n :rtype: str\n '
return self._url |
@url.setter
def url(self, url):
'Sets the url of this UserNotificationSubject.\n\n\n :param url: The url of this UserNotificationSubject. # noqa: E501\n :type: str\n '
self._url = url | -3,394,678,449,273,618,000 | Sets the url of this UserNotificationSubject.
:param url: The url of this UserNotificationSubject. # noqa: E501
:type: str | gitee/models/user_notification_subject.py | url | pygitee/pygitee | python | @url.setter
def url(self, url):
'Sets the url of this UserNotificationSubject.\n\n\n :param url: The url of this UserNotificationSubject. # noqa: E501\n :type: str\n '
self._url = url |
@property
def latest_comment_url(self):
'Gets the latest_comment_url of this UserNotificationSubject. # noqa: E501\n\n\n :return: The latest_comment_url of this UserNotificationSubject. # noqa: E501\n :rtype: str\n '
return self._latest_comment_url | 6,100,275,455,326,668,000 | Gets the latest_comment_url of this UserNotificationSubject. # noqa: E501
:return: The latest_comment_url of this UserNotificationSubject. # noqa: E501
:rtype: str | gitee/models/user_notification_subject.py | latest_comment_url | pygitee/pygitee | python | @property
def latest_comment_url(self):
'Gets the latest_comment_url of this UserNotificationSubject. # noqa: E501\n\n\n :return: The latest_comment_url of this UserNotificationSubject. # noqa: E501\n :rtype: str\n '
return self._latest_comment_url |
@latest_comment_url.setter
def latest_comment_url(self, latest_comment_url):
'Sets the latest_comment_url of this UserNotificationSubject.\n\n\n :param latest_comment_url: The latest_comment_url of this UserNotificationSubject. # noqa: E501\n :type: str\n '
self._latest_comment_url = lates... | -4,382,607,989,924,463,600 | Sets the latest_comment_url of this UserNotificationSubject.
:param latest_comment_url: The latest_comment_url of this UserNotificationSubject. # noqa: E501
:type: str | gitee/models/user_notification_subject.py | latest_comment_url | pygitee/pygitee | python | @latest_comment_url.setter
def latest_comment_url(self, latest_comment_url):
'Sets the latest_comment_url of this UserNotificationSubject.\n\n\n :param latest_comment_url: The latest_comment_url of this UserNotificationSubject. # noqa: E501\n :type: str\n '
self._latest_comment_url = lates... |
@property
def type(self):
'Gets the type of this UserNotificationSubject. # noqa: E501\n\n\n :return: The type of this UserNotificationSubject. # noqa: E501\n :rtype: str\n '
return self._type | -4,663,851,688,619,458,000 | Gets the type of this UserNotificationSubject. # noqa: E501
:return: The type of this UserNotificationSubject. # noqa: E501
:rtype: str | gitee/models/user_notification_subject.py | type | pygitee/pygitee | python | @property
def type(self):
'Gets the type of this UserNotificationSubject. # noqa: E501\n\n\n :return: The type of this UserNotificationSubject. # noqa: E501\n :rtype: str\n '
return self._type |
@type.setter
def type(self, type):
'Sets the type of this UserNotificationSubject.\n\n\n :param type: The type of this UserNotificationSubject. # noqa: E501\n :type: str\n '
self._type = type | -1,639,869,060,860,430,600 | Sets the type of this UserNotificationSubject.
:param type: The type of this UserNotificationSubject. # noqa: E501
:type: str | gitee/models/user_notification_subject.py | type | pygitee/pygitee | python | @type.setter
def type(self, type):
'Sets the type of this UserNotificationSubject.\n\n\n :param type: The type of this UserNotificationSubject. # noqa: E501\n :type: str\n '
self._type = type |
def to_dict(self):
'Returns the model properties as a dict'
result = {}
for (attr, _) in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map((lambda x: (x.to_dict() if hasattr(x, 'to_dict') else x)), value))
e... | -2,232,206,385,540,422,100 | Returns the model properties as a dict | gitee/models/user_notification_subject.py | to_dict | pygitee/pygitee | python | def to_dict(self):
result = {}
for (attr, _) in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map((lambda x: (x.to_dict() if hasattr(x, 'to_dict') else x)), value))
elif hasattr(value, 'to_dict'):
... |
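The generated Gitee Swagger models in the rows above all share this recursive `to_dict` pattern: walk `swagger_types`, and recurse into values (or list elements) that themselves expose `to_dict`. A minimal stand-alone sketch of that pattern, written for Python 3 without `six`; the `SwaggerModel`/`Subject` names here are illustrative, not part of pygitee:

```python
class SwaggerModel(object):
    """Illustrative stand-in for the generated Swagger base behaviour.

    Real pygitee models list their fields in 'swagger_types' and iterate
    them with six.iteritems(); plain Python 3 iteration is used here.
    """
    swagger_types = {}

    def to_dict(self):
        result = {}
        for attr in self.swagger_types:
            value = getattr(self, attr)
            if isinstance(value, list):
                # Recurse into list elements that are themselves models.
                result[attr] = [x.to_dict() if hasattr(x, 'to_dict') else x
                                for x in value]
            elif hasattr(value, 'to_dict'):
                result[attr] = value.to_dict()
            else:
                result[attr] = value
        return result


class Subject(SwaggerModel):
    swagger_types = {'title': 'str', 'url': 'str'}

    def __init__(self, title=None, url=None):
        self.title = title
        self.url = url
```

Calling `to_dict()` on a populated model then yields a plain dict, which is what the generated `to_str`/`__repr__` methods pretty-print.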
def to_str(self):
'Returns the string representation of the model'
return pprint.pformat(self.to_dict()) | 5,849,158,643,760,736,000 | Returns the string representation of the model | gitee/models/user_notification_subject.py | to_str | pygitee/pygitee | python | def to_str(self):
return pprint.pformat(self.to_dict()) |
def __repr__(self):
'For `print` and `pprint`'
return self.to_str() | -8,960,031,694,814,905,000 | For `print` and `pprint` | gitee/models/user_notification_subject.py | __repr__ | pygitee/pygitee | python | def __repr__(self):
return self.to_str() |
def __eq__(self, other):
'Returns true if both objects are equal'
if (not isinstance(other, UserNotificationSubject)):
return False
return (self.__dict__ == other.__dict__) | 978,677,337,168,194,600 | Returns true if both objects are equal | gitee/models/user_notification_subject.py | __eq__ | pygitee/pygitee | python | def __eq__(self, other):
if (not isinstance(other, UserNotificationSubject)):
return False
return (self.__dict__ == other.__dict__) |
def __ne__(self, other):
'Returns true if both objects are not equal'
return (not (self == other)) | 7,764,124,047,908,058,000 | Returns true if both objects are not equal | gitee/models/user_notification_subject.py | __ne__ | pygitee/pygitee | python | def __ne__(self, other):
return (not (self == other)) |
def memoize(f):
'Memoization decorator for functions taking one or more arguments.'
class Memo(dict):
def __init__(self, f):
super(Memo, self).__init__()
self.f = f
def __call__(self, *args):
return self[args]
def __missing__(self, key):
... | -1,536,678,804,841,662,500 | Memoization decorator for functions taking one or more arguments. | lib/webports/util.py | memoize | DiamondLovesYou/webports | python | def memoize(f):
class Memo(dict):
def __init__(self, f):
super(Memo, self).__init__()
self.f = f
def __call__(self, *args):
return self[args]
def __missing__(self, key):
ret = self[key] = self.f(*key)
return ret
return ... |
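The `memoize` row above caches results in a dict subclass whose `__missing__` hook computes and stores a value on first lookup. Since the row is truncated, here is a reconstructed, runnable sketch of that pattern (the trailing `return Memo(f)` and the usage below are assumptions, not the exact webports code):

```python
def memoize(f):
    """Memoization decorator for functions taking hashable arguments."""
    class Memo(dict):
        def __init__(self, f):
            super(Memo, self).__init__()
            self.f = f

        def __call__(self, *args):
            # Plain dict lookup; __missing__ fills the cache on a miss.
            return self[args]

        def __missing__(self, key):
            ret = self[key] = self.f(*key)
            return ret

    return Memo(f)


calls = []

@memoize
def add(a, b):
    calls.append((a, b))  # record real invocations to show caching
    return a + b
```

Repeated calls with the same arguments hit the cache, so the wrapped function body runs once per distinct argument tuple; this is why the SDK helpers below (`get_sdk_root`, `get_sdk_version`, `get_platform`) only shell out once per process.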
def log(message, verbosity=LOG_INFO):
'Log a message to the console (stdout).'
if (log_level < verbosity):
return
sys.stdout.write((str(message) + '\n'))
sys.stdout.flush() | 5,817,673,250,203,344,000 | Log a message to the console (stdout). | lib/webports/util.py | log | DiamondLovesYou/webports | python | def log(message, verbosity=LOG_INFO):
if (log_level < verbosity):
return
sys.stdout.write((str(message) + '\n'))
sys.stdout.flush() |
def log_heading(message, suffix=''):
'Log a colored/highlighted message with optional suffix.'
if colorize.enabled:
log((colorize(message, 'green') + suffix))
elif (log_level > LOG_WARN):
log('###################################################################')
log((message + suffix... | -8,274,271,697,911,984,000 | Log a colored/highlighted message with optional suffix. | lib/webports/util.py | log_heading | DiamondLovesYou/webports | python | def log_heading(message, suffix=):
if colorize.enabled:
log((colorize(message, 'green') + suffix))
elif (log_level > LOG_WARN):
log('###################################################################')
log((message + suffix))
log('###########################################... |
def find_in_path(command_name):
"Search user's PATH for a given executable.\n\n Returns:\n Full path to executable.\n "
extensions = ('',)
if ((not os.path.splitext(command_name)[1]) and (os.name == 'nt')):
extensions = ('.bat', '.com', '.exe')
for path in os.environ.get('PATH', '').split(o... | -9,210,549,876,342,332,000 | Search user's PATH for a given executable.
Returns:
Full path to executable. | lib/webports/util.py | find_in_path | DiamondLovesYou/webports | python | def find_in_path(command_name):
"Search user's PATH for a given executable.\n\n Returns:\n Full path to executable.\n "
extensions = (,)
if ((not os.path.splitext(command_name)[1]) and (os.name == 'nt')):
extensions = ('.bat', '.com', '.exe')
for path in os.environ.get('PATH', ).split(os.pa... |
def download_file(filename, url):
'Download a file from a given URL.\n\n Args:\n filename: the name of the file to download the URL to.\n url: the URL to fetch.\n '
temp_filename = (filename + '.partial')
find_in_path('curl')
curl_cmd = ['curl', '--fail', '--location', '--stderr', '-', '-o', te... | -12,641,727,738,878,662 | Download a file from a given URL.
Args:
filename: the name of the file to download the URL to.
url: the URL to fetch. | lib/webports/util.py | download_file | DiamondLovesYou/webports | python | def download_file(filename, url):
'Download a file from a given URL.\n\n Args:\n filename: the name of the file to download the URL to.\n url: the URL to fetch.\n '
temp_filename = (filename + '.partial')
find_in_path('curl')
curl_cmd = ['curl', '--fail', '--location', '--stderr', '-', '-o', te... |
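The `download_file` row shells out to curl and, notably, writes into a `filename + '.partial'` temp name first. The point of that pattern is atomicity: the final filename only ever exists once the transfer succeeded. A network-free sketch of just that pattern; `fake_download` stands in for the curl invocation and is purely illustrative:

```python
import os

def finish_download(temp_filename, filename):
    """Atomically publish a completed download.

    Sketch of the '.partial' pattern in the row above: write to a temp
    name while the transfer runs, then rename once it succeeded, so an
    interrupted download never leaves a half-written 'filename'.
    """
    os.rename(temp_filename, filename)

def fake_download(filename, data):
    # Stand-in for the curl invocation: write the payload to '.partial'
    # first, then publish it.  'data' replaces the network fetch.
    temp_filename = filename + '.partial'
    with open(temp_filename, 'wb') as f:
        f.write(data)
    finish_download(temp_filename, filename)
```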
def check_stamp(filename, contents=None):
'Check that a given stamp file is up-to-date.\n\n Returns: False if the file does not exist or is older than the given\n comparison file, or does not contain the given contents. True otherwise.\n '
if (not os.path.exists(filename)):
return False
if (c... | -4,632,564,270,663,111,000 | Check that a given stamp file is up-to-date.
Returns: False if the file does not exist or is older than the given
comparison file, or does not contain the given contents. True otherwise. | lib/webports/util.py | check_stamp | DiamondLovesYou/webports | python | def check_stamp(filename, contents=None):
'Check that a given stamp file is up-to-date.\n\n Returns: False if the file does not exist or is older than the given\n comparison file, or does not contain the given contents. True otherwise.\n '
if (not os.path.exists(filename)):
return False
if (c... |
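The `check_stamp` row is cut off after the existence check, but its docstring spells out the contract: missing file means stale, and an optional `contents` string must match what the stamp holds. A reconstruction under that reading; the timestamp comparison the docstring mentions is omitted here, and the exact comparison (`strip()` on both sides) is an assumption:

```python
import os

def check_stamp(filename, contents=None):
    """Return False if the stamp file is missing or does not hold 'contents'.

    Reconstruction of the truncated row above; the real function also
    compares file timestamps, which this sketch leaves out.
    """
    if not os.path.exists(filename):
        return False
    if contents is not None:
        with open(filename) as f:
            if f.read().strip() != contents.strip():
                return False
    return True
```

This is how `is_installed` further down decides whether a package's install stamp is still valid for the requested contents.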
@memoize
def get_sdk_root():
'Returns the root of the currently configured Native Client SDK.'
root = os.environ.get('NACL_SDK_ROOT')
if (root is None):
local_sdk_root = os.path.join(paths.OUT_DIR, 'nacl_sdk')
if os.path.exists(local_sdk_root):
root = local_sdk_root
else:... | -2,800,879,476,471,468,000 | Returns the root of the currently configured Native Client SDK. | lib/webports/util.py | get_sdk_root | DiamondLovesYou/webports | python | @memoize
def get_sdk_root():
root = os.environ.get('NACL_SDK_ROOT')
if (root is None):
local_sdk_root = os.path.join(paths.OUT_DIR, 'nacl_sdk')
if os.path.exists(local_sdk_root):
root = local_sdk_root
else:
raise error.Error('$NACL_SDK_ROOT not set')
if (... |
@memoize
def get_sdk_version():
'Returns the version (as a string) of the current SDK.'
getos = os.path.join(get_sdk_root(), 'tools', 'getos.py')
version = subprocess.check_output([getos, '--sdk-version']).strip()
return version | 8,809,473,329,846,702,000 | Returns the version (as a string) of the current SDK. | lib/webports/util.py | get_sdk_version | DiamondLovesYou/webports | python | @memoize
def get_sdk_version():
getos = os.path.join(get_sdk_root(), 'tools', 'getos.py')
version = subprocess.check_output([getos, '--sdk-version']).strip()
return version |
def check_sdk_version(version):
"Returns True if the currently configured SDK is 'version' or above."
return (int(get_sdk_version()) >= int(version)) | -9,164,487,087,495,700,000 | Returns True if the currently configured SDK is 'version' or above. | lib/webports/util.py | check_sdk_version | DiamondLovesYou/webports | python | def check_sdk_version(version):
return (int(get_sdk_version()) >= int(version)) |
@memoize
def get_sdk_revision():
'Returns the revision of the currently configured Native Client SDK.'
getos = os.path.join(get_sdk_root(), 'tools', 'getos.py')
version = subprocess.check_output([getos, '--sdk-revision']).strip()
return int(version) | 164,433,028,036,483,260 | Returns the revision of the currently configured Native Client SDK. | lib/webports/util.py | get_sdk_revision | DiamondLovesYou/webports | python | @memoize
def get_sdk_revision():
getos = os.path.join(get_sdk_root(), 'tools', 'getos.py')
version = subprocess.check_output([getos, '--sdk-revision']).strip()
return int(version) |
@memoize
def get_platform():
'Returns the current platform name according to getos.py.'
getos = os.path.join(get_sdk_root(), 'tools', 'getos.py')
platform = subprocess.check_output([getos]).strip()
return platform | -7,034,581,924,587,687,000 | Returns the current platform name according to getos.py. | lib/webports/util.py | get_platform | DiamondLovesYou/webports | python | @memoize
def get_platform():
getos = os.path.join(get_sdk_root(), 'tools', 'getos.py')
platform = subprocess.check_output([getos]).strip()
return platform |
@memoize
def get_toolchain_root(config):
'Returns the toolchain folder for a given NaCl toolchain.'
if (config.toolchain == 'emscripten'):
return get_emscripten_root()
platform = get_platform()
if (config.toolchain in ('pnacl', 'clang-newlib')):
tc_dir = os.path.join(('%s_pnacl' % platfo... | 1,493,153,723,794,381,000 | Returns the toolchain folder for a given NaCl toolchain. | lib/webports/util.py | get_toolchain_root | DiamondLovesYou/webports | python | @memoize
def get_toolchain_root(config):
if (config.toolchain == 'emscripten'):
return get_emscripten_root()
platform = get_platform()
if (config.toolchain in ('pnacl', 'clang-newlib')):
tc_dir = os.path.join(('%s_pnacl' % platform))
else:
tc_arch = {'arm': 'arm', 'i686': 'x... |
@memoize
def get_install_root(config):
'Returns the install location given a build configuration.'
tc_dir = get_toolchain_root(config)
if (config.toolchain == 'emscripten'):
return os.path.join(tc_dir, 'system', 'local')
if (config.toolchain == 'pnacl'):
tc_dir = os.path.join(tc_dir, 'le... | 8,301,899,291,888,007,000 | Returns the install location given a build configuration. | lib/webports/util.py | get_install_root | DiamondLovesYou/webports | python | @memoize
def get_install_root(config):
tc_dir = get_toolchain_root(config)
if (config.toolchain == 'emscripten'):
return os.path.join(tc_dir, 'system', 'local')
if (config.toolchain == 'pnacl'):
tc_dir = os.path.join(tc_dir, 'le32-nacl')
else:
tc_dir = os.path.join(tc_dir, (... |
@memoize
def get_install_stamp_root(config):
'Returns the installation metadata folder for the given configuration.'
tc_root = get_install_root(config)
return os.path.join(tc_root, 'var', 'lib', 'npkg') | -5,050,390,960,978,637,000 | Returns the installation metadata folder for the given configuration. | lib/webports/util.py | get_install_stamp_root | DiamondLovesYou/webports | python | @memoize
def get_install_stamp_root(config):
tc_root = get_install_root(config)
return os.path.join(tc_root, 'var', 'lib', 'npkg') |
def get_install_stamp(package_name, config):
'Returns the filename of the install stamp for a given package.\n\n This file is written at install time and contains metadata\n about the installed package.\n '
root = get_install_stamp_root(config)
return os.path.join(root, (package_name + '.info')) | -550,498,234,523,841,660 | Returns the filename of the install stamp for a given package.
This file is written at install time and contains metadata
about the installed package. | lib/webports/util.py | get_install_stamp | DiamondLovesYou/webports | python | def get_install_stamp(package_name, config):
'Returns the filename of the install stamp for a given package.\n\n This file is written at install time and contains metadata\n about the installed package.\n '
root = get_install_stamp_root(config)
return os.path.join(root, (package_name + '.info')) |
def get_list_file(package_name, config):
'Returns the filename of the list of installed files for a given package.\n\n This file is written at install time.\n '
root = get_install_stamp_root(config)
return os.path.join(root, (package_name + '.list')) | -8,665,224,385,076,447,000 | Returns the filename of the list of installed files for a given package.
This file is written at install time. | lib/webports/util.py | get_list_file | DiamondLovesYou/webports | python | def get_list_file(package_name, config):
'Returns the filename of the list of installed files for a given package.\n\n This file is written at install time.\n '
root = get_install_stamp_root(config)
return os.path.join(root, (package_name + '.list')) |
def is_installed(package_name, config, stamp_content=None):
'Returns True if the given package is installed.'
stamp = get_install_stamp(package_name, config)
result = check_stamp(stamp, stamp_content)
return result | 4,999,615,864,298,275,000 | Returns True if the given package is installed. | lib/webports/util.py | is_installed | DiamondLovesYou/webports | python | def is_installed(package_name, config, stamp_content=None):
stamp = get_install_stamp(package_name, config)
result = check_stamp(stamp, stamp_content)
return result |
def check_sdk_root():
'Check validity of NACL_SDK_ROOT.'
root = get_sdk_root()
if (not os.path.isdir(root)):
raise error.Error(('$NACL_SDK_ROOT does not exist: %s' % root))
landmark = os.path.join(root, 'tools', 'getos.py')
if (not os.path.exists(landmark)):
raise error.Error(("$NACL... | -3,336,221,697,445,436,400 | Check validity of NACL_SDK_ROOT. | lib/webports/util.py | check_sdk_root | DiamondLovesYou/webports | python | def check_sdk_root():
root = get_sdk_root()
if (not os.path.isdir(root)):
raise error.Error(('$NACL_SDK_ROOT does not exist: %s' % root))
landmark = os.path.join(root, 'tools', 'getos.py')
if (not os.path.exists(landmark)):
raise error.Error(("$NACL_SDK_ROOT (%s) doesn't look right.... |
def hash_file(filename):
'Return the SHA1 (in hex format) of the contents of the given file.'
block_size = (100 * 1024)
sha1 = hashlib.sha1()
with open(filename) as f:
while True:
data = f.read(block_size)
if (not data):
break
sha1.update(data)... | -6,489,478,050,917,186,000 | Return the SHA1 (in hex format) of the contents of the given file. | lib/webports/util.py | hash_file | DiamondLovesYou/webports | python | def hash_file(filename):
block_size = (100 * 1024)
sha1 = hashlib.sha1()
with open(filename) as f:
while True:
data = f.read(block_size)
if (not data):
break
sha1.update(data)
return sha1.hexdigest() |
def verify_hash(filename, sha1):
'Raise HashVerificationError if the sha1 of the given file does not match the sha1 passed in.'
file_sha1 = hash_file(filename)
if (sha1 != file_sha1):
raise HashVerificationError(('verification failed: %s\nExpected: %s\nActual: %s' % (filename, sha1, file_sha1))) | -2,977,212,297,900,223,000 | Raise HashVerificationError if the sha1 of the given file does not match the sha1 passed in. | lib/webports/util.py | verify_hash | DiamondLovesYou/webports | python | def verify_hash(filename, sha1):
file_sha1 = hash_file(filename)
if (sha1 != file_sha1):
raise HashVerificationError(('verification failed: %s\nExpected: %s\nActual: %s' % (filename, sha1, file_sha1))) |
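The `hash_file`/`verify_hash` pair above implements chunked SHA1 hashing plus verification. Note the original opens the file in text mode, which is Python 2 style; under Python 3, `sha1.update()` requires bytes, so a binary read is needed. A Python 3 variant of both, runnable as written (only the mode change and the local `HashVerificationError` definition are added):

```python
import hashlib

def hash_file(filename, block_size=100 * 1024):
    """Return the SHA1 hex digest of a file, read in fixed-size chunks.

    Python 3 variant of the rows above: the file is opened in binary
    mode so sha1.update() receives bytes, and chunking keeps memory
    usage flat for large files.
    """
    sha1 = hashlib.sha1()
    with open(filename, 'rb') as f:
        while True:
            data = f.read(block_size)
            if not data:
                break
            sha1.update(data)
    return sha1.hexdigest()

class HashVerificationError(Exception):
    pass

def verify_hash(filename, sha1):
    """Raise HashVerificationError when the file's SHA1 differs from 'sha1'."""
    file_sha1 = hash_file(filename)
    if sha1 != file_sha1:
        raise HashVerificationError(
            'verification failed: %s\nExpected: %s\nActual: %s'
            % (filename, sha1, file_sha1))
```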
def remove_tree(directory):
'Recursively remove a directory and its contents.'
if (not os.path.exists(directory)):
return
if (not os.path.isdir(directory)):
raise error.Error('RemoveTree: not a directory: %s', directory)
shutil.rmtree(directory) | -2,689,833,118,508,521,000 | Recursively remove a directory and its contents. | lib/webports/util.py | remove_tree | DiamondLovesYou/webports | python | def remove_tree(directory):
if (not os.path.exists(directory)):
return
if (not os.path.isdir(directory)):
raise error.Error('RemoveTree: not a directory: %s', directory)
shutil.rmtree(directory) |
def rel_path(filename):
'Return a pathname relative to the root of the webports src tree.\n\n This is used mostly to make output more readable when printing filenames.'
return os.path.relpath(filename, paths.NACLPORTS_ROOT) | 3,528,904,553,062,792,000 | Return a pathname relative to the root of the webports src tree.
This is used mostly to make output more readable when printing filenames. | lib/webports/util.py | rel_path | DiamondLovesYou/webports | python | def rel_path(filename):
'Return a pathname relative to the root of the webports src tree.\n\n This is used mostly to make output more readable when printing filenames.'
return os.path.relpath(filename, paths.NACLPORTS_ROOT) |
def __init__(__self__, resource_name: str, opts: Optional[pulumi.ResourceOptions]=None, properties: Optional[pulumi.Input[pulumi.InputType['UserPropertiesArgs']]]=None, user_settings_name: Optional[pulumi.Input[str]]=None, __props__=None, __name__=None, __opts__=None):
"\n Response to get user settings\n ... | -4,359,774,321,173,679,000 | Response to get user settings
API Version: 2018-10-01.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[pulumi.InputType['UserPropertiesArgs']] properties: The cloud shell user settings properties.
:param pulumi.Input[str] user_settin... | sdk/python/pulumi_azure_native/portal/user_settings.py | __init__ | pulumi-bot/pulumi-azure-native | python | def __init__(__self__, resource_name: str, opts: Optional[pulumi.ResourceOptions]=None, properties: Optional[pulumi.Input[pulumi.InputType['UserPropertiesArgs']]]=None, user_settings_name: Optional[pulumi.Input[str]]=None, __props__=None, __name__=None, __opts__=None):
"\n Response to get user settings\n ... |
@staticmethod
def get(resource_name: str, id: pulumi.Input[str], opts: Optional[pulumi.ResourceOptions]=None) -> 'UserSettings':
"\n Get an existing UserSettings resource's state with the given name, id, and optional extra\n properties used to qualify the lookup.\n\n :param str resource_name: T... | -6,685,719,736,812,812,000 | Get an existing UserSettings resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Option... | sdk/python/pulumi_azure_native/portal/user_settings.py | get | pulumi-bot/pulumi-azure-native | python | @staticmethod
def get(resource_name: str, id: pulumi.Input[str], opts: Optional[pulumi.ResourceOptions]=None) -> 'UserSettings':
"\n Get an existing UserSettings resource's state with the given name, id, and optional extra\n properties used to qualify the lookup.\n\n :param str resource_name: T... |
@property
@pulumi.getter
def properties(self) -> pulumi.Output['outputs.UserPropertiesResponse']:
'\n The cloud shell user settings properties.\n '
return pulumi.get(self, 'properties') | 899,342,624,073,554,000 | The cloud shell user settings properties. | sdk/python/pulumi_azure_native/portal/user_settings.py | properties | pulumi-bot/pulumi-azure-native | python | @property
@pulumi.getter
def properties(self) -> pulumi.Output['outputs.UserPropertiesResponse']:
'\n \n '
return pulumi.get(self, 'properties') |
def __get_spike_trains(spike_trains):
'Make sure SpikeTrainsAPI object is always returned'
if isinstance(spike_trains, six.string_types):
return SpikeTrains.load(spike_trains)
elif isinstance(spike_trains, (SpikeTrains, SpikeTrainsAPI)):
return spike_trains
raise AttributeError('Could no... | -1,422,286,837,447,889,700 | Make sure SpikeTrainsAPI object is always returned | bmtk/utils/reports/spike_trains/plotting.py | __get_spike_trains | chenziao/bmtk | python | def __get_spike_trains(spike_trains):
if isinstance(spike_trains, six.string_types):
return SpikeTrains.load(spike_trains)
elif isinstance(spike_trains, (SpikeTrains, SpikeTrainsAPI)):
return spike_trains
raise AttributeError('Could not parse spiketrains. Pass in file-path, SpikeTrains ... |
def __get_population(spike_trains, population):
'Helper function to figure out which population of nodes to use.'
pops = spike_trains.populations
if (population is None):
if (len(pops) > 1):
raise Exception('SpikeTrains contains more than one population of nodes. Use "population" paramet... | 5,916,561,399,342,816,000 | Helper function to figure out which population of nodes to use. | bmtk/utils/reports/spike_trains/plotting.py | __get_population | chenziao/bmtk | python | def __get_population(spike_trains, population):
pops = spike_trains.populations
if (population is None):
if (len(pops) > 1):
raise Exception('SpikeTrains contains more than one population of nodes. Use "population" parameter to specify population to display.')
else:
... |
def __get_node_groups(spike_trains, node_groups, population):
"Helper function for parsing the 'node_groups' params"
if (node_groups is None):
selected_nodes = spike_trains.node_ids(population=population)
return ([{'node_ids': selected_nodes, 'c': 'b'}], selected_nodes)
else:
node_gr... | 7,747,009,254,361,503,000 | Helper function for parsing the 'node_groups' params | bmtk/utils/reports/spike_trains/plotting.py | __get_node_groups | chenziao/bmtk | python | def __get_node_groups(spike_trains, node_groups, population):
if (node_groups is None):
selected_nodes = spike_trains.node_ids(population=population)
return ([{'node_ids': selected_nodes, 'c': 'b'}], selected_nodes)
else:
node_groups = copy.deepcopy(node_groups)
selected_nod... |
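The `__get_node_groups` row is truncated, but its visible branch documents the default: with no `node_groups` given, all node ids form one group drawn in blue (`'c': 'b'`); otherwise the groups are deep-copied and their node ids combined into the overall selection. A bmtk-free sketch of that contract (`get_node_groups` and its arguments are illustrative stand-ins for the private helper):

```python
import copy

def get_node_groups(all_node_ids, node_groups=None):
    """Parse a 'node_groups' argument as described in the row above.

    Returns (groups, selected_node_ids).  With no groups given, every
    node goes into a single blue group; otherwise the per-group node id
    lists are concatenated into the overall selection.
    """
    if node_groups is None:
        return ([{'node_ids': list(all_node_ids), 'c': 'b'}],
                list(all_node_ids))
    node_groups = copy.deepcopy(node_groups)  # never mutate caller data
    selected = []
    for group in node_groups:
        selected.extend(group['node_ids'])
    return node_groups, selected
```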
def plot_raster(spike_trains, with_histogram=True, population=None, node_groups=None, times=None, title=None, show=True, save_as=None):
"will create a raster plot (plus optional histogram) from a SpikeTrains object or SONATA Spike-Trains file. Will\n return the figure\n\n By default will display all nodes, if... | -5,944,393,047,132,877,000 | will create a raster plot (plus optional histogram) from a SpikeTrains object or SONATA Spike-Trains file. Will
return the figure
By default will display all nodes; if you want to only display a subset of nodes and/or group together different
nodes (by node_id) by dot colors and labels, then you can use the node_groups... | bmtk/utils/reports/spike_trains/plotting.py | plot_raster | chenziao/bmtk | python | def plot_raster(spike_trains, with_histogram=True, population=None, node_groups=None, times=None, title=None, show=True, save_as=None):
"will create a raster plot (plus optional histogram) from a SpikeTrains object or SONATA Spike-Trains file. Will\n return the figure\n\n By default will display all nodes, if... |
def plot_rates(spike_trains, population=None, node_groups=None, times=None, smoothing=False, smoothing_params=None, title=None, show=True, save_as=None):
'Calculate and plot the rates of each node in a SpikeTrains object or SONATA Spike-Trains file. If start and stop\n times are not specified from the "times" pa... | -5,521,680,345,867,462,000 | Calculate and plot the rates of each node in a SpikeTrains object or SONATA Spike-Trains file. If start and stop
times are not specified from the "times" parameter, will try to parse values from the timestamps data.
If you want to only display a subset of nodes and/or group together different nodes (by node_id) by dot... | bmtk/utils/reports/spike_trains/plotting.py | plot_rates | chenziao/bmtk | python | def plot_rates(spike_trains, population=None, node_groups=None, times=None, smoothing=False, smoothing_params=None, title=None, show=True, save_as=None):
'Calculate and plot the rates of each node in a SpikeTrains object or SONATA Spike-Trains file. If start and stop\n times are not specified from the "times" pa... |
def plot_rates_boxplot(spike_trains, population=None, node_groups=None, times=None, title=None, show=True, save_as=None):
'Creates a box plot of the firing rates taken from a SpikeTrains object or SONATA Spike-Trains file. If start\n and stop times are not specified from the "times" parameter, will try to parse ... | 4,023,441,265,487,282,700 | Creates a box plot of the firing rates taken from a SpikeTrains object or SONATA Spike-Trains file. If start
and stop times are not specified from the "times" parameter, will try to parse values from the timestamps data.
By default will plot all nodes together. To only display a subset of the nodes and/or create group... | bmtk/utils/reports/spike_trains/plotting.py | plot_rates_boxplot | chenziao/bmtk | python | def plot_rates_boxplot(spike_trains, population=None, node_groups=None, times=None, title=None, show=True, save_as=None):
'Creates a box plot of the firing rates taken from a SpikeTrains object or SONATA Spike-Trains file. If start\n and stop times are not specified from the "times" parameter, will try to parse ... |
def train(model, device, dataloader, optimizer):
'\n Performs one epoch of training.\n Order of rooms in building and in data must match, otherwise the model will fit the wrong rooms to the data.\n '
model.reset_iv()
model.train()
model.cooling_policy.eval()
for layer in model.cooling_policy.parameters(... | -8,422,809,956,291,717,000 | Performs one epoch of training.
Order of rooms in building and in data must match, otherwise the model will fit the wrong rooms to the data. | src/rcmodel/optimisation.py | train | BFourcin/rcmodel | python | def train(model, device, dataloader, optimizer):
'\n Performs one epoch of training.\n Order of rooms in building and in data must match otherwise model will fit wrong rooms to data.\n '
model.reset_iv()
model.train()
model.cooling_policy.eval()
for layer in model.cooling_policy.parameters(... |
def sort_data(path, dt):
'\n Check if path has sorted data tag (_sorted)\n If not, check whether the data has previously been sorted and exists in the directory.\n Check to see if the value dt is correct\n If not, sort the data and write filename_sorted.csv\n\n data is sorted by time in ascending order and downsamp... | 5,082,796,445,324,769,000 | Check if path has sorted data tag (_sorted)
If not, check whether the data has previously been sorted and exists in the directory.
Check to see if the value dt is correct
If not, sort the data and write filename_sorted.csv
data is sorted by time in ascending order and downsampled to a frequency of dt seconds.
Missing values are inte... | src/rcmodel/optimisation.py | sort_data | BFourcin/rcmodel | python | def sort_data(path, dt):
'\n Check if path has sorted data tag (_sorted)\n If not, check whether the data has previously been sorted and exists in the directory.\n Check to see if the value dt is correct\n If not, sort the data and write filename_sorted.csv\n\n data is sorted by time in ascending order and downsamp...
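The `sort_data` docstring describes the core transformation: sort samples ascending by time, downsample onto a grid of `dt` seconds, and fill missing values by interpolation. The original presumably works on a CSV (likely via pandas); here is a stdlib-only sketch of the same behaviour on `(time, value)` pairs, with linear interpolation between neighbouring samples:

```python
def sort_and_resample(samples, dt):
    """Sort (time, value) pairs ascending and resample onto a dt grid.

    Stdlib-only sketch of the behaviour the sort_data docstring
    describes; assumes timestamps are distinct, so each grid point can
    be linearly interpolated between its two bracketing samples.
    """
    if len(samples) < 2:
        return list(samples)
    samples = sorted(samples)  # ascending by time
    times = [t for t, _ in samples]
    values = [v for _, v in samples]
    out = []
    t = times[0]
    i = 0
    while t <= times[-1]:
        while times[i + 1] < t:  # advance to the bracketing interval
            i += 1
        t0, t1 = times[i], times[i + 1]
        v0, v1 = values[i], values[i + 1]
        frac = (t - t0) / float(t1 - t0)
        out.append((t, v0 + frac * (v1 - v0)))
        t += dt
    return out
```

A pandas equivalent would sort the index, `resample('%dS' % dt)`, and `interpolate()`, but the loop above makes the per-grid-point arithmetic explicit.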
name: HideNotImplemented
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: -5,335,436,601,311,139,000
docstring:
    Should a given type_context.Comment be hidden because it is tagged as [#not-implemented-hide:]?
body:
    def HideNotImplemented(comment):
        """Should a given type_context.Comment be hidden because it is tagged as [#not-implemented-hide:]?"""
        return (annotations.NOT_IMPLEMENTED_HIDE_ANNOTATION in comment.annotations)
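The check is a plain membership test on the comment's annotation map. A self-contained sketch with a stand-in Comment class; the annotation key value is an assumption (the real constant lives in the repository's annotations module):

```python
NOT_IMPLEMENTED_HIDE_ANNOTATION = 'not-implemented-hide'  # assumed key, not from the source

class Comment:
    def __init__(self, annotations):
        self.annotations = annotations  # dict: annotation name -> value

def hide_not_implemented(comment):
    # Mirror of HideNotImplemented: hidden iff the hide annotation is present.
    return NOT_IMPLEMENTED_HIDE_ANNOTATION in comment.annotations
```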
name: GithubUrl
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: 6,404,972,300,512,804,000
docstring:
    Obtain data plane API Github URL by path from a TypeContext.

    Args:
      type_context: type_context.TypeContext for node.

    Returns:
      A string with a corresponding data plane API GitHub Url.
body:
    def GithubUrl(type_context):
        """Obtain data plane API Github URL by path from a TypeContext.

        Args:
          type_context: type_context.TypeContext for node.

        Returns:
          A string with a corresponding data plane API GitHub Url.
        """
        if (type_context.location is not None):
            return (DATA_PLANE_API_URL_...
name: FormatCommentWithAnnotations
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: -4,937,179,242,579,999,000
docstring:
    Format a comment string with additional RST for annotations.

    Args:
      comment: comment string.
      type_name: optional, 'message' or 'enum' may be specified for additional
        message/enum specific annotations.

    Returns:
      A string with additional RST from annotations.
body:
    def FormatCommentWithAnnotations(comment, type_name=''):
        """Format a comment string with additional RST for annotations.

        Args:
          comment: comment string.
          type_name: optional, 'message' or 'enum' may be specified for additional
            message/enum specific annotations.

        Returns:
          A string with add...
name: MapLines
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: 9,207,010,195,570,567,000
docstring:
    Apply a function across each line in a flat string.

    Args:
      f: A string transform function for a line.
      s: A string consisting of potentially multiple lines.

    Returns:
      A flat string with f applied to each line.
body:
    def MapLines(f, s):
        """Apply a function across each line in a flat string.

        Args:
          f: A string transform function for a line.
          s: A string consisting of potentially multiple lines.

        Returns:
          A flat string with f applied to each line.
        """
        return '\n'.join((f(line) for line in s.split('\n')))
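MapLines is complete and pure, so it can be demonstrated directly; the sketch restates it verbatim so the example runs on its own:

```python
def MapLines(f, s):
    """Apply a function across each line in a flat string."""
    return '\n'.join((f(line) for line in s.split('\n')))

# Example: strip trailing whitespace from every line of a block.
cleaned = MapLines(str.rstrip, 'foo  \nbar \nbaz')
# cleaned == 'foo\nbar\nbaz'
```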
name: Indent
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: 2,544,529,550,714,990,000
docstring:
    Indent a string.
body:
    def Indent(spaces, line):
        """Indent a string."""
        return ((' ' * spaces) + line)
name: IndentLines
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: 7,741,411,605,827,786,000
docstring:
    Indent a list of strings.
body:
    def IndentLines(spaces, lines):
        """Indent a list of strings."""
        return map(functools.partial(Indent, spaces), lines)
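Indent and IndentLines compose naturally; restated here with the functools import they rely on so the example is runnable. Note that IndentLines returns a lazy map object, so callers typically join or listify the result:

```python
import functools

def Indent(spaces, line):
    """Indent a string."""
    return ((' ' * spaces) + line)

def IndentLines(spaces, lines):
    """Indent a list of strings."""
    return map(functools.partial(Indent, spaces), lines)

indented = list(IndentLines(2, ['a:', '- b']))
# indented == ['  a:', '  - b']
```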
name: FormatHeader
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: 7,263,651,663,836,982,000
docstring:
    Format RST header.

    Args:
      style: underline style, e.g. '=', '-'.
      text: header text

    Returns:
      RST formatted header.
body:
    def FormatHeader(style, text):
        """Format RST header.

        Args:
          style: underline style, e.g. '=', '-'.
          text: header text

        Returns:
          RST formatted header.
        """
        return ('%s\n%s\n\n' % (text, (style * len(text))))
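FormatHeader is also complete: it underlines the title with the style character repeated to the title's length, which is exactly what reStructuredText requires of section headers. Restated so the example runs standalone:

```python
def FormatHeader(style, text):
    """Format RST header."""
    return ('%s\n%s\n\n' % (text, (style * len(text))))

header = FormatHeader('=', 'Extensions')
# header == 'Extensions\n==========\n\n'
```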
name: FormatExtension
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: -8,631,740,234,684,610,000
docstring:
    Format extension metadata as RST.

    Args:
      extension: the name of the extension, e.g. com.acme.foo.

    Returns:
      RST formatted extension description.
body:
    def FormatExtension(extension):
        """Format extension metadata as RST.

        Args:
          extension: the name of the extension, e.g. com.acme.foo.

        Returns:
          RST formatted extension description.
        """
        try:
            extension_metadata = EXTENSION_DB[extension]
            anchor = FormatAnchor(('extension_' + extensi...
name: FormatExtensionCategory
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: 2,411,712,575,106,835,500
docstring:
    Format extension metadata as RST.

    Args:
      extension_category: the name of the extension_category, e.g. com.acme.

    Returns:
      RST formatted extension category description.
body:
    def FormatExtensionCategory(extension_category):
        """Format extension metadata as RST.

        Args:
          extension_category: the name of the extension_category, e.g. com.acme.

        Returns:
          RST formatted extension category description.
        """
        try:
            extensions = EXTENSION_CATEGORIES[extension_category]
            ...
name: FormatHeaderFromFile
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: -6,055,170,154,052,651,000
docstring:
    Format RST header based on special file level title

    Args:
      style: underline style, e.g. '=', '-'.
      source_code_info: SourceCodeInfo object.
      proto_name: If the file_level_comment does not contain a user specified
        title, use this as page title.

    Returns:
      RST formatted header, and file level comment without pag...
body:
    def FormatHeaderFromFile(style, source_code_info, proto_name):
        """Format RST header based on special file level title

        Args:
          style: underline style, e.g. '=', '-'.
          source_code_info: SourceCodeInfo object.
          proto_name: If the file_level_comment does not contain a user specified
            title, use this...
name: FormatFieldTypeAsJson
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: -3,217,291,533,705,860,000
docstring:
    Format FieldDescriptorProto.Type as a pseudo-JSON string.

    Args:
      type_context: contextual information for message/enum/field.
      field: FieldDescriptor proto.
    Return: RST formatted pseudo-JSON string representation of field type.
body:
    def FormatFieldTypeAsJson(type_context, field):
        """Format FieldDescriptorProto.Type as a pseudo-JSON string.

        Args:
          type_context: contextual information for message/enum/field.
          field: FieldDescriptor proto.
        Return: RST formatted pseudo-JSON string representation of field type.
        """
        if (TypeNameFr...
name: FormatMessageAsJson
path: tools/protodoc/protodoc.py
repository_name: Gsantomaggio/envoy
lang: python
body_hash: 7,837,340,100,763,897,000
docstring:
    Format a message definition DescriptorProto as a pseudo-JSON block.

    Args:
      type_context: contextual information for message/enum/field.
      msg: message definition DescriptorProto.
    Return: RST formatted pseudo-JSON string representation of message definition.
body:
    def FormatMessageAsJson(type_context, msg):
        """Format a message definition DescriptorProto as a pseudo-JSON block.

        Args:
          type_context: contextual information for message/enum/field.
          msg: message definition DescriptorProto.
        Return: RST formatted pseudo-JSON string representation of message definition...