Dataset columns: signature (string, 8–3.44k chars), body (string, 0–1.41M chars), docstring (string, 1–122k chars), id (string, 5–17 chars).
def attr_ignore_predicate(self, fact_dict):
    if '@' in fact_dict['<STR_LIT>']:
        return True
    if fact_dict['<STR_LIT>'] in ['<STR_LIT>', 'id', '<STR_LIT>']:
        return True
    return False

The attr_ignore predicate is called for each fact that would be generated for an XML attribute. It takes a fact dictionary of the following form as input::

    {
        'node_id': 'N001:L000:N000:A000',
        'term': 'Hashes/Hash/Simple_Hash_Value',
        'attribute': 'condition',
        'value': u'Equals',
    }

If the predicate returns True, the fact is *not* created. Note that, nevertheless, during import, the information about this attribute is available to the attributed fact as part of the 'attr_dict' that is generated for the creation of each fact and passed to the handler functions called for the fact.
f9160:c0:m7
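The row above masks the exact keys and ignore list; as a hedged sketch, a predicate of the documented shape might look like this (the checked key, the ignore list, and the '@' convention are assumptions drawn from the docstring, not Dingo's real configuration):

```python
# Illustrative only: key names and the ignore list are assumptions.
def attr_ignore_predicate(fact_dict):
    # Internally generated Dingo attributes carry a leading '@' (per docstring).
    if '@' in fact_dict['attribute']:
        return True
    # Skip bookkeeping attributes that add no analytical value.
    if fact_dict['attribute'] in ('id', 'idref', 'ordinality'):
        return True
    return False

fact = {
    'node_id': 'N001:L000:N000:A000',
    'term': 'Hashes/Hash/Simple_Hash_Value',
    'attribute': 'condition',
    'value': 'Equals',
}
```

With this sketch, the example fact from the docstring is kept, while a fact for an `id` attribute would be suppressed.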
def datatype_extractor(self, iobject, fact, attr_info, namespace_mapping, add_fact_kargs):
    if "<STR_LIT>" in attr_info:
        embedded_type_info = attr_info.get('<STR_LIT>', None)
        if embedded_type_info:
            add_fact_kargs['<STR_LIT>'] = embedded_type_info
            add_fact_kargs['<STR_LIT>'] = namespace_mapping[None]
            add_fact_kargs['<STR_LIT>'] = FactDataType.REFERENCE
        return True
    elif "<STR_LIT>" in attr_info:
        add_fact_kargs['<STR_LIT>'] = attr_info["<STR_LIT>"]
        add_fact_kargs['<STR_LIT>'] = namespace_mapping[None]
        return True
    return False

The datatype extractor is called for each fact with the aim of determining the fact's datatype. The extractor function has the following signature:

- Inputs:
  - info_object: the information object to which the fact is to be added
  - fact: the fact dictionary of the following form::

        {
            'node_id': 'N001:L000:N000:A000',
            'term': 'Hashes/Hash/Simple_Hash_Value',
            'attribute': 'condition' / False,
            'value': u'Equals',
        }

  - attr_info: a dictionary mapping XML attributes concerning the node in question (note that the keys do *not* have a leading '@' unless it is an internally generated attribute by Dingo).
  - namespace_mapping: a dictionary containing the namespace mapping extracted from the imported XML file.
  - add_fact_kargs: the arguments with which the fact will be generated after all handler functions have been called. The dictionary contains the following keys::

        'fact_dt_kind': <FactDataType.NO_VOCAB/VOCAB_SINGLE/...>
        'fact_dt_namespace_name': <human-readable shortname for namespace uri>
        'fact_dt_namespace_uri': <namespace uri for datatype namespace>
        'fact_term_name': <fact term such as 'Header/Subject/Address'>
        'fact_term_attribute': <attribute key such as 'category' for fact terms describing an attribute>
        'values': <list of FactValue objects that are the values of the fact to be generated>
        'node_id_name': <node identifier such as 'N000:N000:A000'>

Just as the fact handler functions, the datatype extractor can change the add_fact_kargs dictionary and thus change the way in which the fact is created -- usually, this ability is used to change the following items in the dictionary:

- fact_dt_name
- fact_dt_namespace_uri
- fact_dt_namespace_name (optional -- the defining part is the uri)
- fact_dt_kind

The extractor returns True if datatype information was found; otherwise, False is returned.
f9160:c0:m8
def generate_accounts(seeds):
    return {
        seed: {
            '<STR_LIT>': encode_hex(sha3(seed)),
            'address': encode_hex(privatekey_to_address(sha3(seed))),
        }
        for seed in seeds
    }
Create private keys and addresses for all seeds.
f9167:m0
def mk_genesis(accounts, initial_alloc=denoms.ether * <NUM_LIT>):
    genesis = GENESIS_STUB.copy()
    genesis['<STR_LIT>'] = encode_hex(CLUSTER_NAME)
    genesis['<STR_LIT>'].update({
        account: {
            '<STR_LIT>': str(initial_alloc),
        }
        for account in accounts
    })
    genesis['<STR_LIT>']['<STR_LIT>'] = dict(balance=str(initial_alloc))
    return genesis

Create a genesis-block dict with allocation for all `accounts`.

:param accounts: list of account addresses (hex)
:param initial_alloc: the amount to allocate for the `accounts`
:return: genesis dict
f9167:m1
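The allocation pattern above can be sketched without the surrounding constants; `GENESIS_STUB`, `CLUSTER_NAME`, and the `denoms.ether`-based default in the signature are replaced here by plain stand-ins (all assumptions):

```python
# Minimal sketch of the allocation step; a real genesis stub carries
# many more keys (difficulty, gasLimit, config, ...).
ETHER = 10 ** 18  # assumption: stand-in for denoms.ether

def mk_genesis(accounts, initial_alloc=1000 * ETHER):
    genesis = {'alloc': {}}
    genesis['alloc'].update({
        account: {'balance': str(initial_alloc)}
        for account in accounts
    })
    return genesis
```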
@click.argument(
    '<STR_LIT>',
    nargs=-1,
    type=str,
)
@click.argument(
    '<STR_LIT>',
    type=str,
)
@cli.command()
@click.pass_context
def geth_commands(ctx, geth_hosts, datadir):
    pretty = ctx.obj['<STR_LIT>']
    nodes = []
    for i, host in enumerate(geth_hosts):
        nodes.append(create_node_configuration(host=host, node_key_seed=i))
    for node in nodes:
        node.pop('<STR_LIT>')
        node.pop('<STR_LIT>')
    config = {
        '<STR_LIT>'.format(**node): ' '.join(to_cmd(node, datadir=datadir))
        for node in nodes
    }
    config['<STR_LIT>'] = [node['<STR_LIT>'] for node in nodes]
    indent = None
    if pretty:
        indent = 2
    print(json.dumps(
        config,
        indent=indent,
    ))

This is helpful to set up a private cluster of geth nodes that won't need discovery (because they can use the content of `static_nodes` as `static-nodes.json`).
f9168:m7
def rgb_color_picker(obj, min_luminance=None, max_luminance=None):
    color_value = int.from_bytes(
        hashlib.md5(str(obj).encode('utf-8')).digest(),
        '<STR_LIT>',
    ) % <NUM_LIT>
    color = Color(f'<STR_LIT>')
    if min_luminance and color.get_luminance() < min_luminance:
        color.set_luminance(min_luminance)
    elif max_luminance and color.get_luminance() > max_luminance:
        color.set_luminance(max_luminance)
    return color
Modified version of colour.RGB_color_picker
f9171:m0
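The deterministic part of the picker, hashing an object's string form into a 24-bit RGB value, can be reproduced with the standard library alone; the `Color`/luminance handling from the `colour` package is omitted, and the byte order is an assumption (the original masks it):

```python
import hashlib

def stable_color_value(obj) -> str:
    # Deterministic 24-bit value from the md5 of the object's string form.
    # Byte order ('big') is an assumption; the masked literal may differ.
    digest = hashlib.md5(str(obj).encode('utf-8')).digest()
    return f'#{int.from_bytes(digest, "big") % 0xFFFFFF:06x}'
```

The same input always yields the same color string, which is what makes the picker useful for labeling nodes consistently across runs.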
def _address_rxp(self, addr):
    try:
        addr = to_checksum_address(addr)
        rxp = '<STR_LIT>' + pex(address_checksum_and_decode(addr)) + f'<STR_LIT>'
        self._extra_keys[pex(address_checksum_and_decode(addr))] = addr.lower()
        self._extra_keys[addr[2:].lower()] = addr.lower()
    except ValueError:
        rxp = addr
    return rxp

Create a regex string for addresses that matches several representations:

- with(out) '0x' prefix
- `pex` version

This function takes care of maintaining additional lookup keys for substring matches. In case the given string is not an address, it returns the original string.
f9172:c0:m1
def _make_regex(self):
    rxp = "|".join(
        map(self._address_rxp, self.keys()),
    )
    self._regex = re.compile(
        rxp,
        re.IGNORECASE,
    )
Compile rxp with all keys concatenated.
f9172:c0:m2
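The pattern built in `_make_regex` is a single alternation of all keys; the same construction on plain strings (the example keys are illustrative):

```python
import re

# Stand-in for self.keys(): a couple of address-like strings.
keys = ['0xdeadbeef', 'cafebabe']
rxp = '|'.join(map(re.escape, keys))
regex = re.compile(rxp, re.IGNORECASE)
```

One compiled alternation lets a single `search`/`sub` pass find any known key, case-insensitively.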
def __call__(self, match):
    return '<STR_LIT>'.format(self[match.group(0).lower()])
Lookup for each rxp match.
f9172:c0:m6
def translate(self, text):
    return self._regex.sub(self, text)
Translate text.
f9172:c0:m7
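`translate` relies on `re.sub` accepting a callable as the replacement: because the class defines `__call__(self, match)`, the instance itself is passed as the replacement function. A minimal stand-alone version of this pattern (class name and mapping are illustrative):

```python
import re

class Translator(dict):
    """Sketch: the dict instance doubles as re.sub's replacement callable."""

    def __init__(self, mapping):
        super().__init__(mapping)
        # One alternation over all keys, matched case-insensitively.
        self._regex = re.compile(
            '|'.join(map(re.escape, self)),
            re.IGNORECASE,
        )

    def __call__(self, match):
        # Called by re.sub for each match; look up the lowercased hit.
        return self[match.group(0).lower()]

    def translate(self, text):
        return self._regex.sub(self, text)
```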
def visit_call(self, node):
    try:
        self._force_joinall_to_use_set(node)
    except InferenceError:
        pass
Called on expressions of the form `expr()`, where `expr` is a simple name e.g. `f()` or a path e.g. `v.f()`.
f9173:c0:m0
def _force_joinall_to_use_set(self, node):
    for inferred_func in node.func.infer():
        if is_joinall(inferred_func):
            is_every_value_a_set = all(
                inferred_first_arg.pytype() == '<STR_LIT>'
                for inferred_first_arg in node.args[0].infer()
            )
            if not is_every_value_a_set:
                self.add_message(JOINALL_ID, node=node)

This detects usages of the form:

    >>> from gevent import joinall
    >>> joinall(...)

or:

    >>> import gevent
    >>> gevent.joinall(...)
f9173:c0:m1
@property
def nodes(self) -> List[str]:
    if self._scenario_version == 1 and '<STR_LIT>' in self._config:
        range_config = self._config['<STR_LIT>']
        try:
            start, stop = range_config['<STR_LIT>'], range_config['<STR_LIT>'] + 1
        except KeyError:
            raise MissingNodesConfiguration(
                '<STR_LIT>'
                '<STR_LIT>',
            )
        try:
            template = range_config['<STR_LIT>']
        except KeyError:
            raise MissingNodesConfiguration(
                '<STR_LIT>',
            )
        return [template.format(i) for i in range(start, stop)]
    try:
        return self._config['list']
    except KeyError:
        raise MissingNodesConfiguration('<STR_LIT>')

Return the list of nodes configured in the scenario's yaml.

Should the scenario use version 1, we check if there is a 'range' setting. If so, we derive the list of nodes from this dictionary, using its 'first', 'last' and 'template' keys. Should any of these keys be missing, we throw an appropriate exception.

If the scenario version is not 1, or no 'range' setting exists, we use the 'list' settings key and return its value. Again, should the key be absent, we throw an appropriate error.

:raises MissingNodesConfiguration: if the scenario version is 1 and a 'range' key was detected, but any one of the keys 'first', 'last', 'template' is missing; *or* when falling back to the 'list' key and it is absent.
f9175:c0:m9
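The version-1 'range' expansion described above boils down to formatting a template for every index between `first` and `last` inclusive. Key names follow the docstring; the masked literals in the row above may differ:

```python
# Key names ('first', 'last', 'template') follow the docstring.
def expand_node_range(range_config):
    start = range_config['first']
    stop = range_config['last'] + 1  # 'last' is inclusive
    template = range_config['template']
    return [template.format(i) for i in range(start, stop)]
```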
@property
def commands(self) -> dict:
    return self._config.get('<STR_LIT>', {})
Return the commands configured for the nodes.
f9175:c0:m10
@property
def version(self) -> int:
    version = self._config.get('version', 1)
    if version not in SUPPORTED_SCENARIO_VERSIONS:
        raise InvalidScenarioVersion(f'<STR_LIT>')
    return version
Return the scenario's version. If this is not present, we default to version 1. :raises InvalidScenarioVersion: if the supplied version is not present in :var:`SUPPORTED_SCENARIO_VERSIONS`.
f9175:c1:m4
@property
def name(self) -> str:
    return self._yaml_path.stem
Return the name of the scenario file, sans extension.
f9175:c1:m5
@property
def settings(self):
    return self._config.get('<STR_LIT>', {})
Return the 'settings' dictionary for the scenario.
f9175:c1:m6
@property
def protocol(self) -> str:
    if self.nodes.mode is NodeMode.MANAGED:
        if '<STR_LIT>' in self._config:
            log.warning('<STR_LIT>')
        return 'http'
    return self._config.get('<STR_LIT>', 'http')
Return the designated protocol of the scenario. If the node's mode is :attr:`NodeMode.MANAGED`, we always choose `http` and display a warning if there was a 'protocol' set explicitly in the scenario's yaml. Otherwise we simply access the 'protocol' key of the yaml, defaulting to 'http' if it does not exist.
f9175:c1:m7
@property
def timeout(self) -> int:
    return self.settings.get('<STR_LIT>', TIMEOUT)
Returns the scenario's set timeout in seconds.
f9175:c1:m8
@property
def notification_email(self) -> Union[str, None]:
    return self.settings.get('<STR_LIT>')
Return the email address to which notifications are to be sent. If this isn't set, we return None.
f9175:c1:m9
@property
def chain_name(self) -> str:
    return self.settings.get('<STR_LIT>', '<STR_LIT>')
Return the name of the chain to be used for this scenario.
f9175:c1:m10
@property
def services(self):
    return self.settings.get('<STR_LIT>', {})

Return the services configured in the scenario's settings.
f9175:c1:m11
@property
def gas_price(self) -> str:
    return self._config.get('<STR_LIT>', '<STR_LIT>')
Return the configured gas price for this scenario. This defaults to 'fast'.
f9175:c1:m12
@property
def nodes(self) -> NodesConfig:
    return self._nodes
Return the configuration of nodes used in this scenario.
f9175:c1:m14
@property
def configuration(self):
    try:
        return self._config['<STR_LIT>']
    except KeyError:
        raise ScenarioError(
            "<STR_LIT>",
        )
Return the scenario's configuration. :raises ScenarioError: if no 'scenario' key is present in the yaml file.
f9175:c1:m15
@property
def task(self) -> Tuple[str, Any]:
    try:
        items, = self.configuration.items()
    except ValueError:
        raise MultipleTaskDefinitions(
            '<STR_LIT>',
        )
    return items
Return the scenario's task configuration as a tuple. :raises MultipleTaskDefinitions: if there is more than one task config under the 'scenario' key.
f9175:c1:m16
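The `items, = self.configuration.items()` idiom enforces exactly one task definition: unpacking a one-element items view succeeds, while zero or multiple entries raise ValueError. A stand-alone sketch (the function name and message are illustrative):

```python
def single_task(configuration: dict):
    # Unpacking a one-element items() view succeeds; anything else raises
    # ValueError (re-raised as MultipleTaskDefinitions in the real property).
    try:
        items, = configuration.items()
    except ValueError:
        raise ValueError('exactly one task definition expected')
    return items
```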
@property
def task_config(self) -> Dict:
    _, root_task_config = self.task
    return root_task_config
Return the task config for this scenario.
f9175:c1:m17
@property
def task_class(self):
    from scenario_player.tasks.base import get_task_class_for_type
    root_task_type, _ = self.task
    task_class = get_task_class_for_type(root_task_type)
    return task_class
Return the Task class type configured for the scenario.
f9175:c1:m18
def decode_event(abi: ABI, log_: Dict) -> Dict:
    if isinstance(log_['<STR_LIT>'][0], str):
        log_['<STR_LIT>'][0] = decode_hex(log_['<STR_LIT>'][0])
    elif isinstance(log_['<STR_LIT>'][0], int):
        log_['<STR_LIT>'][0] = decode_hex(hex(log_['<STR_LIT>'][0]))
    event_id = log_['<STR_LIT>'][0]
    events = filter_by_type('<STR_LIT>', abi)
    topic_to_event_abi = {
        event_abi_to_log_topic(event_abi): event_abi
        for event_abi in events
    }
    event_abi = topic_to_event_abi[event_id]
    return get_event_data(event_abi, log_)

Helper function to unpack event data using a provided ABI.

Args:
    abi: The ABI of the contract, not the ABI of the event
    log_: The raw event data

Returns:
    The decoded event
f9180:m0
def query_blockchain_events(
    web3: Web3,
    contract_manager: ContractManager,
    contract_address: Address,
    contract_name: str,
    topics: List,
    from_block: BlockNumber,
    to_block: BlockNumber,
) -> List[Dict]:
    filter_params = {
        '<STR_LIT>': from_block,
        '<STR_LIT>': to_block,
        'address': to_checksum_address(contract_address),
        '<STR_LIT>': topics,
    }
    events = web3.eth.getLogs(filter_params)
    contract_abi = contract_manager.get_contract_abi(contract_name)
    return [
        decode_event(
            abi=contract_abi,
            log_=raw_event,
        )
        for raw_event in events
    ]

Return events emitted by a contract for a given event name, within a certain block range.

Args:
    web3: A Web3 instance
    contract_manager: A contract manager
    contract_address: The address of the contract to be filtered, can be `None`
    contract_name: The name of the contract
    topics: The topics to filter for
    from_block: The block at which to start searching for events
    to_block: The block at which to stop searching for events

Returns:
    All matching events
f9180:m1
def get_or_deploy_token(runner) -> Tuple[ContractProxy, int]:
    token_contract = runner.contract_manager.get_contract(CONTRACT_CUSTOM_TOKEN)
    token_config = runner.scenario.get('<STR_LIT>', {})
    if not token_config:
        token_config = {}
    address = token_config.get('address')
    block = token_config.get('<STR_LIT>', 0)
    reuse = token_config.get('<STR_LIT>', False)
    token_address_file = runner.data_path.joinpath('<STR_LIT>')
    if reuse:
        if address:
            raise ScenarioError('<STR_LIT>')
        if token_address_file.exists():
            token_data = json.loads(token_address_file.read_text())
            address = token_data['address']
            block = token_data['<STR_LIT>']
    if address:
        check_address_has_code(runner.client, address, '<STR_LIT>')
        token_ctr = runner.client.new_contract_proxy(token_contract['<STR_LIT>'], address)
        log.debug(
            "<STR_LIT>",
            address=to_checksum_address(address),
            name=token_ctr.contract.functions.name().call(),
            symbol=token_ctr.contract.functions.symbol().call(),
        )
        return token_ctr, block
    token_id = uuid.uuid4()
    now = datetime.now()
    name = token_config.get('name', f"<STR_LIT>")
    symbol = token_config.get('<STR_LIT>', f"<STR_LIT>")
    decimals = token_config.get('<STR_LIT>', 0)
    log.debug("<STR_LIT>", name=name, symbol=symbol, decimals=decimals)
    token_ctr, receipt = runner.client.deploy_solidity_contract(
        '<STR_LIT>',
        runner.contract_manager.contracts,
        constructor_parameters=(0, decimals, name, symbol),
    )
    contract_deployment_block = receipt['<STR_LIT>']
    contract_checksum_address = to_checksum_address(token_ctr.contract_address)
    if reuse:
        token_address_file.write_text(json.dumps({
            'address': contract_checksum_address,
            '<STR_LIT>': contract_deployment_block,
        }))
    log.info(
        "<STR_LIT>",
        address=contract_checksum_address,
        name=name,
        symbol=symbol,
    )
    return token_ctr, contract_deployment_block

Deploy a new token contract, or reuse an existing one if the scenario configuration requests it.
f9189:m1
def get_udc_and_token(runner) -> Tuple[Optional[ContractProxy], Optional[ContractProxy]]:
    from scenario_player.runner import ScenarioRunner
    assert isinstance(runner, ScenarioRunner)
    udc_config = runner.scenario.services.get('<STR_LIT>', {})
    if not udc_config.get('<STR_LIT>', False):
        return None, None
    udc_address = udc_config.get('address')
    if udc_address is None:
        contracts = get_contracts_deployment_info(
            chain_id=runner.chain_id,
            version=DEVELOPMENT_CONTRACT_VERSION,
        )
        udc_address = contracts['<STR_LIT>'][CONTRACT_USER_DEPOSIT]['address']
    udc_abi = runner.contract_manager.get_contract_abi(CONTRACT_USER_DEPOSIT)
    udc_proxy = runner.client.new_contract_proxy(udc_abi, udc_address)
    ud_token_address = udc_proxy.contract.functions.token().call()
    custom_token_abi = runner.contract_manager.get_contract_abi(CONTRACT_CUSTOM_TOKEN)
    ud_token_proxy = runner.client.new_contract_proxy(custom_token_abi, ud_token_address)
    return udc_proxy, ud_token_proxy
Return contract proxies for the UserDepositContract and associated token
f9189:m2
def mint_token_if_balance_low(
    token_contract: ContractProxy,
    target_address: str,
    min_balance: int,
    fund_amount: int,
    gas_limit: int,
    mint_msg: str,
    no_action_msg: str = None,
) -> Optional[TransactionHash]:
    balance = token_contract.contract.functions.balanceOf(target_address).call()
    if balance < min_balance:
        mint_amount = fund_amount - balance
        log.debug(mint_msg, address=target_address, amount=mint_amount)
        return token_contract.transact('<STR_LIT>', gas_limit, mint_amount, target_address)
    else:
        if no_action_msg:
            log.debug(no_action_msg, balance=balance)
    return None
Check token balance and mint if below minimum
f9189:m3
def start(self, stdout=subprocess.PIPE, stderr=subprocess.PIPE):
    if self.pre_start_check():
        raise AlreadyRunning(self)
    if self.process is None:
        command = self.command
        if not self._shell:
            command = self.command_parts
        env = os.environ.copy()
        env[ENV_UUID] = self._uuid
        popen_kwargs = {
            '<STR_LIT>': self._shell,
            '<STR_LIT>': subprocess.PIPE,
            '<STR_LIT>': stdout,
            '<STR_LIT>': stderr,
            '<STR_LIT>': True,
            '<STR_LIT>': env,
        }
        if platform.system() != '<STR_LIT>':
            popen_kwargs['<STR_LIT>'] = os.setsid
        self.process = subprocess.Popen(
            command,
            **popen_kwargs,
        )
    self._set_timeout()
    self.wait_for(self.check_subprocess)
    return self
Merged copy paste from the inheritance chain with modified stdout/err behaviour
f9189:c5:m0
def stop(self, sig=None, timeout=10):
    if self.process is None:
        return self
    if sig is None:
        sig = self._sig_stop
    try:
        os.killpg(self.process.pid, sig)
    except OSError as err:
        if err.errno in IGNORED_ERROR_CODES:
            pass
        else:
            raise

    def process_stopped():
        """<STR_LIT>"""
        return self.running() is False

    self._set_timeout(timeout)
    try:
        self.wait_for(process_stopped)
    except TimeoutExpired:
        log.warning('<STR_LIT>', process=self)
    self._kill_all_kids(sig)
    self._clear_process()
    return self
Copy paste job from `SimpleExecutor.stop()` to add the `timeout` parameter.
f9189:c5:m1
def determine_run_number(self) -> int:
    run_number = 0
    run_number_file = self.data_path.joinpath('<STR_LIT>')
    if run_number_file.exists():
        run_number = int(run_number_file.read_text()) + 1
    run_number_file.write_text(str(run_number))
    log.info('<STR_LIT>', run_number=run_number)
    return run_number
Determine the current run number. We check for a run number file, and use any number that is logged there after incrementing it.
f9190:c0:m1
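The counter-file pattern above is easy to reproduce in isolation; the file name used here is an assumption, since the real one is masked in the row above:

```python
from pathlib import Path

def determine_run_number(data_path: Path, file_name='run_number.txt') -> int:
    # file_name is an assumption; the real scenario player's name is masked.
    run_number = 0
    run_number_file = data_path.joinpath(file_name)
    if run_number_file.exists():
        # Resume from the previously persisted number, incremented by one.
        run_number = int(run_number_file.read_text()) + 1
    run_number_file.write_text(str(run_number))
    return run_number
```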
def select_chain(self, chain_urls: Dict[str, List[str]]) -> Tuple[str, List[str]]:
    chain_name = self.scenario.chain_name
    if chain_name in ('<STR_LIT>', '<STR_LIT>', '<STR_LIT>'):
        chain_name = random.choice(list(chain_urls.keys()))
    log.info('<STR_LIT>', chain=chain_name)
    try:
        return chain_name, chain_urls[chain_name]
    except KeyError:
        raise ScenarioError(
            f'<STR_LIT>',
        )
Select a chain and return its name and RPC URL. If the currently loaded scenario's designated chain is set to 'any', we randomly select a chain from the given `chain_urls`. Otherwise, we will return `ScenarioRunner.scenario.chain_name` and whatever value may be associated with this key in `chain_urls`. :raises ScenarioError: if ScenarioRunner.scenario.chain_name is not one of `('any', 'Any', 'ANY')` and it is not a key in `chain_urls`.
f9190:c0:m2
def _check_workflows_align(config):
    jobs_default = config['<STR_LIT>']['<STR_LIT>']['<STR_LIT>']
    jobs_nightly = config['<STR_LIT>']['<STR_LIT>']['<STR_LIT>']
    if jobs_default == jobs_nightly[:len(jobs_default)]:
        return True, []
    job_diff = unified_diff(
        [f"<STR_LIT>" for line in jobs_default],
        [f"<STR_LIT>" for line in jobs_nightly[:len(jobs_default)]],
        '<STR_LIT>',
        '<STR_LIT>',
    )
    message = [
        _yellow(
            "<STR_LIT>"
            "<STR_LIT>",
        ),
    ]
    for line in job_diff:
        if line.startswith('-'):
            message.append(_red(line))
        elif line.startswith('+'):
            message.append(_green(line))
        else:
            message.append(line)
    return False, '<STR_LIT>'.join(message)
Ensure that the common shared jobs in the `raiden-default` and `nightly` workflows are identical.
f9205:m3
def find_max_pending_transfers(gas_limit):
    tester = ContractTester(generate_keys=2)
    tester.deploy_contract('<STR_LIT>')
    tester.deploy_contract(
        '<STR_LIT>',
        _initialAmount=<NUM_LIT>,
        _decimalUnits=3,
        _tokenName='<STR_LIT>',
        _tokenSymbol='<STR_LIT>',
    )
    tester.deploy_contract(
        '<STR_LIT>',
        _token_address=tester.contract_address('<STR_LIT>'),
        _secret_registry=tester.contract_address('<STR_LIT>'),
        _chain_id=1,
        _settlement_timeout_min=100,
        _settlement_timeout_max=200,
    )
    tester.call_transaction(
        '<STR_LIT>',
        '<STR_LIT>',
        _to=tester.accounts[1],
        _value=<NUM_LIT>,
    )
    receipt = tester.call_transaction(
        '<STR_LIT>',
        '<STR_LIT>',
        participant1=tester.accounts[0],
        participant2=tester.accounts[1],
        settle_timeout=<NUM_LIT>,
    )
    channel_identifier = int(encode_hex(receipt['<STR_LIT>'][0]['<STR_LIT>'][1]), 16)
    tester.call_transaction(
        '<STR_LIT>',
        '<STR_LIT>',
        sender=tester.accounts[0],
        _spender=tester.contract_address('<STR_LIT>'),
        _value=<NUM_LIT>,
    )
    tester.call_transaction(
        '<STR_LIT>',
        '<STR_LIT>',
        sender=tester.accounts[1],
        _spender=tester.contract_address('<STR_LIT>'),
        _value=<NUM_LIT>,
    )
    tester.call_transaction(
        '<STR_LIT>',
        '<STR_LIT>',
        channel_identifier=channel_identifier,
        participant=tester.accounts[0],
        total_deposit=<NUM_LIT>,
        partner=tester.accounts[1],
    )
    tester.call_transaction(
        '<STR_LIT>',
        '<STR_LIT>',
        channel_identifier=channel_identifier,
        participant=tester.accounts[1],
        total_deposit=<NUM_LIT>,
        partner=tester.accounts[0],
    )
    print("<STR_LIT>")
    before_closing = tester.tester.take_snapshot()
    enough = 0
    too_much = <NUM_LIT>
    nonce = 10
    additional_hash = urandom(32)
    token_network_identifier = tester.contract_address('<STR_LIT>')
    while enough + 1 < too_much:
        tree_size = (enough + too_much) // 2
        tester.tester.revert_to_snapshot(before_closing)
        pending_transfers_tree = get_pending_transfers_tree(
            tester.web3,
            unlockable_amounts=[1] * tree_size,
        )
        balance_hash = hash_balance_data(<NUM_LIT>, <NUM_LIT>, pending_transfers_tree.merkle_root)
        data_to_sign = pack_balance_proof(
            nonce=nonce,
            balance_hash=balance_hash,
            additional_hash=additional_hash,
            channel_identifier=channel_identifier,
            token_network_identifier=token_network_identifier,
            chain_id=1,
        )
        signature = LocalSigner(tester.private_keys[1]).sign(data=data_to_sign)
        tester.call_transaction(
            '<STR_LIT>',
            '<STR_LIT>',
            channel_identifier=channel_identifier,
            partner=tester.accounts[1],
            balance_hash=balance_hash,
            nonce=nonce,
            additional_hash=additional_hash,
            signature=signature,
        )
        tester.tester.mine_blocks(<NUM_LIT>)
        tester.call_transaction(
            '<STR_LIT>',
            '<STR_LIT>',
            channel_identifier=channel_identifier,
            participant1=tester.accounts[0],
            participant1_transferred_amount=0,
            participant1_locked_amount=0,
            participant1_locksroot=b'\x00' * 32,
            participant2=tester.accounts[1],
            participant2_transferred_amount=<NUM_LIT>,
            participant2_locked_amount=<NUM_LIT>,
            participant2_locksroot=pending_transfers_tree.merkle_root,
        )
        receipt = tester.call_transaction(
            '<STR_LIT>',
            '<STR_LIT>',
            channel_identifier=channel_identifier,
            participant=tester.accounts[0],
            partner=tester.accounts[1],
            merkle_tree_leaves=pending_transfers_tree.packed_transfers,
        )
        gas_used = receipt['<STR_LIT>']
        if gas_used <= gas_limit:
            enough = tree_size
            print(f'<STR_LIT>')
        else:
            too_much = tree_size
            print(f'<STR_LIT>')
Measure gas consumption of TokenNetwork.unlock() depending on number of pending transfers and find the maximum number of pending transfers so gas_limit is not exceeded.
f9206:m0
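Stripped of the chain interaction, the probing loop above is a plain binary search over tree sizes; `gas_used_for` is a stand-in for the close/settle/unlock round-trip that the real code performs per probe:

```python
def find_max_size(gas_limit, gas_used_for, low=0, high=1000):
    # Invariant: `low` always fits in gas_limit, `high` never does.
    while low + 1 < high:
        mid = (low + high) // 2
        if gas_used_for(mid) <= gas_limit:
            low = mid
        else:
            high = mid
    return low
```

With a monotonically increasing cost function, the loop converges in O(log(high)) probes, which matters when each probe is an expensive on-chain simulation.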
def wait_for_newchannel(
    raiden: '<STR_LIT>',
    payment_network_id: PaymentNetworkID,
    token_address: TokenAddress,
    partner_address: Address,
    retry_timeout: float,
) -> None:
    channel_state = views.get_channelstate_for(
        views.state_from_raiden(raiden),
        payment_network_id,
        token_address,
        partner_address,
    )
    while channel_state is None:
        gevent.sleep(retry_timeout)
        channel_state = views.get_channelstate_for(
            views.state_from_raiden(raiden),
            payment_network_id,
            token_address,
            partner_address,
        )
Wait until the channel with partner_address is registered. Note: This does not time out, use gevent.Timeout.
f9212:m1
def wait_for_participant_newbalance(
    raiden: '<STR_LIT>',
    payment_network_id: PaymentNetworkID,
    token_address: TokenAddress,
    partner_address: Address,
    target_address: Address,
    target_balance: TokenAmount,
    retry_timeout: float,
) -> None:
    if target_address == raiden.address:
        balance = lambda channel_state: channel_state.our_state.contract_balance
    elif target_address == partner_address:
        balance = lambda channel_state: channel_state.partner_state.contract_balance
    else:
        raise ValueError('<STR_LIT>')
    channel_state = views.get_channelstate_for(
        views.state_from_raiden(raiden),
        payment_network_id,
        token_address,
        partner_address,
    )
    while balance(channel_state) < target_balance:
        gevent.sleep(retry_timeout)
        channel_state = views.get_channelstate_for(
            views.state_from_raiden(raiden),
            payment_network_id,
            token_address,
            partner_address,
        )

Wait until a given channel's balance exceeds the target balance. Note: This does not time out, use gevent.Timeout.
f9212:m2
def wait_for_payment_balance(
    raiden: '<STR_LIT>',
    payment_network_id: PaymentNetworkID,
    token_address: TokenAddress,
    partner_address: Address,
    target_address: Address,
    target_balance: TokenAmount,
    retry_timeout: float,
) -> None:
    def get_balance(end_state):
        if end_state.balance_proof:
            return end_state.balance_proof.transferred_amount
        else:
            return 0

    if target_address == raiden.address:
        balance = lambda channel_state: get_balance(channel_state.partner_state)
    elif target_address == partner_address:
        balance = lambda channel_state: get_balance(channel_state.our_state)
    else:
        raise ValueError('<STR_LIT>')
    channel_state = views.get_channelstate_for(
        views.state_from_raiden(raiden),
        payment_network_id,
        token_address,
        partner_address,
    )
    while balance(channel_state) < target_balance:
        log.critical('<STR_LIT>', b=balance(channel_state), t=target_balance)
        gevent.sleep(retry_timeout)
        channel_state = views.get_channelstate_for(
            views.state_from_raiden(raiden),
            payment_network_id,
            token_address,
            partner_address,
        )
Wait until a given channel's balance exceeds the target balance. Note: This does not time out, use gevent.Timeout.
f9212:m3
def wait_for_channel_in_states(
    raiden: '<STR_LIT>',
    payment_network_id: PaymentNetworkID,
    token_address: TokenAddress,
    channel_ids: List[ChannelID],
    retry_timeout: float,
    target_states: Sequence[str],
) -> None:
    chain_state = views.state_from_raiden(raiden)
    token_network = views.get_token_network_by_token_address(
        chain_state=chain_state,
        payment_network_id=payment_network_id,
        token_address=token_address,
    )
    if token_network is None:
        raise ValueError(
            f'<STR_LIT>',
        )
    token_network_address = token_network.address
    list_cannonical_ids = [
        CanonicalIdentifier(
            chain_identifier=chain_state.chain_id,
            token_network_address=token_network_address,
            channel_identifier=channel_identifier,
        )
        for channel_identifier in channel_ids
    ]
    while list_cannonical_ids:
        canonical_id = list_cannonical_ids[-1]
        chain_state = views.state_from_raiden(raiden)
        channel_state = views.get_channelstate_by_canonical_identifier(
            chain_state=chain_state,
            canonical_identifier=canonical_id,
        )
        channel_is_settled = (
            channel_state is None or
            channel.get_status(channel_state) in target_states
        )
        if channel_is_settled:
            list_cannonical_ids.pop()
        else:
            gevent.sleep(retry_timeout)
Wait until all channels are in `target_states`. Raises: ValueError: If the token_address is not registered in the payment_network. Note: This does not time out, use gevent.Timeout.
f9212:m4
def wait_for_close(<EOL>raiden: '<STR_LIT>',<EOL>payment_network_id: PaymentNetworkID,<EOL>token_address: TokenAddress,<EOL>channel_ids: List[ChannelID],<EOL>retry_timeout: float,<EOL>) -> None:
return wait_for_channel_in_states(<EOL>raiden=raiden,<EOL>payment_network_id=payment_network_id,<EOL>token_address=token_address,<EOL>channel_ids=channel_ids,<EOL>retry_timeout=retry_timeout,<EOL>target_states=CHANNEL_AFTER_CLOSE_STATES,<EOL>)<EOL>
Wait until all channels are closed. Note: This does not time out, use gevent.Timeout.
f9212:m5
def wait_for_settle(<EOL>raiden: '<STR_LIT>',<EOL>payment_network_id: PaymentNetworkID,<EOL>token_address: TokenAddress,<EOL>channel_ids: List[ChannelID],<EOL>retry_timeout: float,<EOL>) -> None:
return wait_for_channel_in_states(<EOL>raiden=raiden,<EOL>payment_network_id=payment_network_id,<EOL>token_address=token_address,<EOL>channel_ids=channel_ids,<EOL>retry_timeout=retry_timeout,<EOL>target_states=(CHANNEL_STATE_SETTLED,),<EOL>)<EOL>
Wait until all channels are settled. Note: This does not time out, use gevent.Timeout.
f9212:m7
def wait_for_settle_all_channels(<EOL>raiden: '<STR_LIT>',<EOL>retry_timeout: float,<EOL>) -> None:
chain_state = views.state_from_raiden(raiden)<EOL>id_paymentnetworkstate = chain_state.identifiers_to_paymentnetworks.items()<EOL>for payment_network_id, payment_network_state in id_paymentnetworkstate:<EOL><INDENT>id_tokennetworkstate = payment_network_state.tokenidentifiers_to_tokennetworks.items()<EOL>for token_network_id, token_network_state in id_tokennetworkstate:<EOL><INDENT>channel_ids = cast(<EOL>List[ChannelID], token_network_state.channelidentifiers_to_channels.keys(),<EOL>)<EOL>wait_for_settle(<EOL>raiden=raiden,<EOL>payment_network_id=payment_network_id,<EOL>token_address=TokenAddress(token_network_id),<EOL>channel_ids=channel_ids,<EOL>retry_timeout=retry_timeout,<EOL>)<EOL><DEDENT><DEDENT>
Wait until all channels, across all registered token networks, are settled. Note: This does not time out, use gevent.Timeout.
f9212:m8
def wait_for_healthy(<EOL>raiden: '<STR_LIT>',<EOL>node_address: Address,<EOL>retry_timeout: float,<EOL>) -> None:
network_statuses = views.get_networkstatuses(<EOL>views.state_from_raiden(raiden),<EOL>)<EOL>while network_statuses.get(node_address) != NODE_NETWORK_REACHABLE:<EOL><INDENT>gevent.sleep(retry_timeout)<EOL>network_statuses = views.get_networkstatuses(<EOL>views.state_from_raiden(raiden),<EOL>)<EOL><DEDENT>
Wait until `node_address` becomes healthy. Note: This does not time out, use gevent.Timeout.
f9212:m9
def wait_for_transfer_success(<EOL>raiden: '<STR_LIT>',<EOL>payment_identifier: PaymentID,<EOL>amount: PaymentAmount,<EOL>retry_timeout: float,<EOL>) -> None:
found = False<EOL>while not found:<EOL><INDENT>state_events = raiden.wal.storage.get_events()<EOL>for event in state_events:<EOL><INDENT>found = (<EOL>isinstance(event, EventPaymentReceivedSuccess) and<EOL>event.identifier == payment_identifier and<EOL>event.amount == amount<EOL>)<EOL>if found:<EOL><INDENT>break<EOL><DEDENT><DEDENT>gevent.sleep(retry_timeout)<EOL><DEDENT>
Wait until a transfer with a specific identifier and amount is seen in the WAL. Note: This does not time out, use gevent.Timeout.
f9212:m10
def handle_tokennetwork_new(raiden: '<STR_LIT>', event: Event):
data = event.event_data<EOL>args = data['<STR_LIT:args>']<EOL>block_number = data['<STR_LIT>']<EOL>token_network_address = args['<STR_LIT>']<EOL>token_address = typing.TokenAddress(args['<STR_LIT>'])<EOL>block_hash = data['<STR_LIT>']<EOL>token_network_proxy = raiden.chain.token_network(token_network_address)<EOL>raiden.blockchain_events.add_token_network_listener(<EOL>token_network_proxy=token_network_proxy,<EOL>contract_manager=raiden.contract_manager,<EOL>from_block=block_number,<EOL>)<EOL>token_network_state = TokenNetworkState(<EOL>token_network_address,<EOL>token_address,<EOL>)<EOL>transaction_hash = event.event_data['<STR_LIT>']<EOL>new_token_network = ContractReceiveNewTokenNetwork(<EOL>transaction_hash=transaction_hash,<EOL>payment_network_identifier=event.originating_contract,<EOL>token_network=token_network_state,<EOL>block_number=block_number,<EOL>block_hash=block_hash,<EOL>)<EOL>raiden.handle_and_track_state_change(new_token_network)<EOL>
Handles a `TokenNetworkCreated` event.
f9213:m0
def assert_partner_state(end_state, partner_state, model):
assert end_state.address == model.participant_address<EOL>assert channel.get_amount_locked(end_state) == model.amount_locked<EOL>assert channel.get_balance(end_state, partner_state) == model.balance<EOL>assert channel.get_distributable(end_state, partner_state) == model.distributable<EOL>assert channel.get_next_nonce(end_state) == model.next_nonce<EOL>assert set(end_state.merkletree.layers[LEAVES]) == set(model.merkletree_leaves)<EOL>assert end_state.contract_balance == model.contract_balance<EOL>
Checks that the stored data for both ends correspond to the model.
f9226:m0
def create_channel_from_models(our_model, partner_model):
channel_state = create(NettingChannelStateProperties(<EOL>reveal_timeout=<NUM_LIT:10>,<EOL>settle_timeout=<NUM_LIT:100>,<EOL>our_state=NettingChannelEndStateProperties(<EOL>address=our_model.participant_address,<EOL>balance=our_model.balance,<EOL>merkletree_leaves=our_model.merkletree_leaves,<EOL>),<EOL>partner_state=NettingChannelEndStateProperties(<EOL>address=partner_model.participant_address,<EOL>balance=partner_model.balance,<EOL>merkletree_leaves=partner_model.merkletree_leaves,<EOL>),<EOL>open_transaction=TransactionExecutionStatusProperties(finished_block_number=<NUM_LIT:1>),<EOL>))<EOL>assert channel_state.our_total_deposit == our_model.contract_balance<EOL>assert channel_state.partner_total_deposit == partner_model.contract_balance<EOL>assert_partner_state(<EOL>channel_state.our_state,<EOL>channel_state.partner_state,<EOL>our_model,<EOL>)<EOL>assert_partner_state(<EOL>channel_state.partner_state,<EOL>channel_state.our_state,<EOL>partner_model,<EOL>)<EOL>return channel_state<EOL>
Utility to instantiate state objects used throughout the tests.
f9226:m2
def trigger_presence_callback(self, user_states: Dict[str, UserPresence]):
if self._presence_callback is None:<EOL><INDENT>raise RuntimeError('<STR_LIT>')<EOL><DEDENT>for user_id, presence in user_states.items():<EOL><INDENT>event = {<EOL>'<STR_LIT>': user_id,<EOL>'<STR_LIT:type>': '<STR_LIT>',<EOL>'<STR_LIT:content>': {<EOL>'<STR_LIT>': presence.value,<EOL>},<EOL>}<EOL>self._presence_callback(event)<EOL><DEDENT>
Trigger the registered presence callback with the given user presence states.
f9259:c1:m2
def get_list_of_block_numbers(item):
if isinstance(item, list):<EOL><INDENT>return [element['<STR_LIT>'] for element in item]<EOL><DEDENT>if isinstance(item, dict):<EOL><INDENT>block_number = item['<STR_LIT>']<EOL>return [block_number]<EOL><DEDENT>return list()<EOL>
Return a list of block numbers extracted from the given event dict or list of events.
f9268:m4
def wait_for_transaction(<EOL>receiver,<EOL>registry_address,<EOL>token_address,<EOL>sender_address,<EOL>):
while True:<EOL><INDENT>receiver_channel = RaidenAPI(receiver).get_channel_list(<EOL>registry_address=registry_address,<EOL>token_address=token_address,<EOL>partner_address=sender_address,<EOL>)<EOL>transaction_received = (<EOL>len(receiver_channel) == <NUM_LIT:1> and<EOL>receiver_channel[<NUM_LIT:0>].partner_state.balance_proof is not None<EOL>)<EOL>if transaction_received:<EOL><INDENT>break<EOL><DEDENT>gevent.sleep(<NUM_LIT:0.1>)<EOL><DEDENT>
Wait until the first transaction in the channel is received.
f9292:m0
def saturated_count(connection_managers, registry_address, token_address):
return [<EOL>is_manager_saturated(manager, registry_address, token_address)<EOL>for manager in connection_managers<EOL>].count(True)<EOL>
Return the number of nodes whose open-channel count exceeds the initial channel target.
f9292:m3
def restart_network_and_apiservers(raiden_network, api_servers, port_generator, retry_timeout):
for rest_api in api_servers:<EOL><INDENT>rest_api.stop()<EOL><DEDENT>new_network = restart_network(raiden_network, retry_timeout)<EOL>new_servers = start_apiserver_for_network(new_network, port_generator)<EOL>return (new_network, new_servers)<EOL>
Stop all API servers, restart the underlying Raiden network and start fresh API servers for it.
f9293:m6
def stress_send_serial_transfers(rest_apis, token_address, identifier_generator, deposit):
pairs = list(zip(rest_apis, rest_apis[<NUM_LIT:1>:] + [rest_apis[<NUM_LIT:0>]]))<EOL>for server_from, server_to in pairs:<EOL><INDENT>sequential_transfers(<EOL>server_from=server_from,<EOL>server_to=server_to,<EOL>number_of_transfers=deposit,<EOL>token_address=token_address,<EOL>identifier_generator=identifier_generator,<EOL>)<EOL><DEDENT>for server_to, server_from in pairs:<EOL><INDENT>sequential_transfers(<EOL>server_from=server_from,<EOL>server_to=server_to,<EOL>number_of_transfers=deposit * <NUM_LIT:2>,<EOL>token_address=token_address,<EOL>identifier_generator=identifier_generator,<EOL>)<EOL><DEDENT>for server_from, server_to in pairs:<EOL><INDENT>sequential_transfers(<EOL>server_from=server_from,<EOL>server_to=server_to,<EOL>number_of_transfers=deposit,<EOL>token_address=token_address,<EOL>identifier_generator=identifier_generator,<EOL>)<EOL><DEDENT>
Send `deposit` transfers of value `1` one at a time, without changing the initial capacity.
f9293:m10
def stress_send_parallel_transfers(rest_apis, token_address, identifier_generator, deposit):
pairs = list(zip(rest_apis, rest_apis[<NUM_LIT:1>:] + [rest_apis[<NUM_LIT:0>]]))<EOL>gevent.wait([<EOL>gevent.spawn(<EOL>sequential_transfers,<EOL>server_from=server_from,<EOL>server_to=server_to,<EOL>number_of_transfers=deposit,<EOL>token_address=token_address,<EOL>identifier_generator=identifier_generator,<EOL>)<EOL>for server_from, server_to in pairs<EOL>])<EOL>gevent.wait([<EOL>gevent.spawn(<EOL>sequential_transfers,<EOL>server_from=server_from,<EOL>server_to=server_to,<EOL>number_of_transfers=deposit * <NUM_LIT:2>,<EOL>token_address=token_address,<EOL>identifier_generator=identifier_generator,<EOL>)<EOL>for server_to, server_from in pairs<EOL>])<EOL>gevent.wait([<EOL>gevent.spawn(<EOL>sequential_transfers,<EOL>server_from=server_from,<EOL>server_to=server_to,<EOL>number_of_transfers=deposit,<EOL>token_address=token_address,<EOL>identifier_generator=identifier_generator,<EOL>)<EOL>for server_from, server_to in pairs<EOL>])<EOL>
Send `deposit` transfers in parallel, without changing the initial capacity.
f9293:m11
def stress_send_and_receive_parallel_transfers(<EOL>rest_apis,<EOL>token_address,<EOL>identifier_generator,<EOL>deposit,<EOL>):
pairs = list(zip(rest_apis, rest_apis[<NUM_LIT:1>:] + [rest_apis[<NUM_LIT:0>]]))<EOL>forward_transfers = [<EOL>gevent.spawn(<EOL>sequential_transfers,<EOL>server_from=server_from,<EOL>server_to=server_to,<EOL>number_of_transfers=deposit,<EOL>token_address=token_address,<EOL>identifier_generator=identifier_generator,<EOL>)<EOL>for server_from, server_to in pairs<EOL>]<EOL>backwards_transfers = [<EOL>gevent.spawn(<EOL>sequential_transfers,<EOL>server_from=server_from,<EOL>server_to=server_to,<EOL>number_of_transfers=deposit,<EOL>token_address=token_address,<EOL>identifier_generator=identifier_generator,<EOL>)<EOL>for server_to, server_from in pairs<EOL>]<EOL>gevent.wait(forward_transfers + backwards_transfers)<EOL>
Send transfers of value `1` in both directions in parallel, without changing the initial capacity.
f9293:m12
def timeout(blockchain_type: str):
return <NUM_LIT> if blockchain_type == '<STR_LIT>' else <NUM_LIT:30><EOL>
As parity nodes are slower, we need to set a longer timeout when waiting for onchain events to complete.
f9306:m0
def sign_and_inject(message: Message, signer: Signer, app: App) -> None:
message.sign(signer)<EOL>MessageHandler().on_message(app.raiden, message)<EOL>
Sign the message with key and inject it directly in the app transport layer.
f9312:m0
def transfer(<EOL>initiator_app: App,<EOL>target_app: App,<EOL>token_address: TokenAddress,<EOL>amount: PaymentAmount,<EOL>identifier: PaymentID,<EOL>fee: FeeAmount = <NUM_LIT:0>,<EOL>timeout: float = <NUM_LIT:10>,<EOL>) -> None:
assert identifier is not None, '<STR_LIT>'<EOL>assert isinstance(target_app.raiden.message_handler, WaitForMessage)<EOL>wait_for_unlock = target_app.raiden.message_handler.wait_for_message(<EOL>Unlock,<EOL>{'<STR_LIT>': identifier},<EOL>)<EOL>payment_network_identifier = initiator_app.raiden.default_registry.address<EOL>token_network_identifier = views.get_token_network_identifier_by_token_address(<EOL>chain_state=views.state_from_app(initiator_app),<EOL>payment_network_id=payment_network_identifier,<EOL>token_address=token_address,<EOL>)<EOL>payment_status = initiator_app.raiden.mediated_transfer_async(<EOL>token_network_identifier=token_network_identifier,<EOL>amount=amount,<EOL>target=target_app.raiden.address,<EOL>identifier=identifier,<EOL>fee=fee,<EOL>)<EOL>with Timeout(seconds=timeout):<EOL><INDENT>wait_for_unlock.get()<EOL>msg = (<EOL>f'<STR_LIT>'<EOL>f'<STR_LIT>'<EOL>)<EOL>assert payment_status.payment_done.get(), msg<EOL><DEDENT>
Readable shortcut to make a successful LockedTransfer. Note: Only the initiator and target are synced.
f9312:m2
def transfer_and_assert_path(<EOL>path: List[App],<EOL>token_address: TokenAddress,<EOL>amount: PaymentAmount,<EOL>identifier: PaymentID,<EOL>fee: FeeAmount = <NUM_LIT:0>,<EOL>timeout: float = <NUM_LIT:10>,<EOL>) -> None:
assert identifier is not None, '<STR_LIT>'<EOL>secret = random_secret()<EOL>first_app = path[<NUM_LIT:0>]<EOL>payment_network_identifier = first_app.raiden.default_registry.address<EOL>token_network_address = views.get_token_network_identifier_by_token_address(<EOL>chain_state=views.state_from_app(first_app),<EOL>payment_network_id=payment_network_identifier,<EOL>token_address=token_address,<EOL>)<EOL>for app in path:<EOL><INDENT>assert isinstance(app.raiden.message_handler, WaitForMessage)<EOL>msg = (<EOL>'<STR_LIT>'<EOL>)<EOL>assert app.raiden.default_registry.address == payment_network_identifier, msg<EOL>app_token_network_address = views.get_token_network_identifier_by_token_address(<EOL>chain_state=views.state_from_app(app),<EOL>payment_network_id=payment_network_identifier,<EOL>token_address=token_address,<EOL>)<EOL>msg = (<EOL>'<STR_LIT>'<EOL>)<EOL>assert token_network_address == app_token_network_address, msg<EOL><DEDENT>pairs = zip(path[:-<NUM_LIT:1>], path[<NUM_LIT:1>:])<EOL>receiving = list()<EOL>for from_app, to_app in pairs:<EOL><INDENT>from_channel_state = views.get_channelstate_by_token_network_and_partner(<EOL>chain_state=views.state_from_app(from_app),<EOL>token_network_id=token_network_address,<EOL>partner_address=to_app.raiden.address,<EOL>)<EOL>to_channel_state = views.get_channelstate_by_token_network_and_partner(<EOL>chain_state=views.state_from_app(to_app),<EOL>token_network_id=token_network_address,<EOL>partner_address=from_app.raiden.address,<EOL>)<EOL>msg = (<EOL>f'<STR_LIT>'<EOL>f'<STR_LIT>'<EOL>f'<STR_LIT>'<EOL>)<EOL>assert from_channel_state, msg<EOL>assert to_channel_state, msg<EOL>msg = (<EOL>f'<STR_LIT>'<EOL>f'<STR_LIT>'<EOL>f'<STR_LIT>'<EOL>)<EOL>assert channel.get_status(from_channel_state) == CHANNEL_STATE_OPENED, msg<EOL>assert channel.get_status(to_channel_state) == CHANNEL_STATE_OPENED, msg<EOL>receiving.append((to_app, to_channel_state.identifier))<EOL><DEDENT>results = 
[<EOL>app.raiden.message_handler.wait_for_message(<EOL>Unlock,<EOL>{<EOL>'<STR_LIT>': channel_identifier,<EOL>'<STR_LIT>': token_network_address,<EOL>'<STR_LIT>': identifier,<EOL>'<STR_LIT>': secret,<EOL>},<EOL>)<EOL>for app, channel_identifier in receiving<EOL>]<EOL>last_app = path[-<NUM_LIT:1>]<EOL>payment_status = first_app.raiden.start_mediated_transfer_with_secret(<EOL>token_network_identifier=token_network_address,<EOL>amount=amount,<EOL>fee=fee,<EOL>target=last_app.raiden.address,<EOL>identifier=identifier,<EOL>secret=secret,<EOL>)<EOL>with Timeout(seconds=timeout):<EOL><INDENT>gevent.wait(results)<EOL>msg = (<EOL>f'<STR_LIT>'<EOL>f'<STR_LIT>'<EOL>)<EOL>assert payment_status.payment_done.get(), msg<EOL><DEDENT>
Readable shortcut to make a successful LockedTransfer. Note: This utility *does not enforce the path*; it only checks that the provided path is used in its entirety. It is the responsibility of the caller to ensure the path will be used. All nodes in `path` are synced.
f9312:m3
def assert_synced_channel_state(<EOL>token_network_identifier: TokenNetworkID,<EOL>app0: App,<EOL>balance0: Balance,<EOL>pending_locks0: List[HashTimeLockState],<EOL>app1: App,<EOL>balance1: Balance,<EOL>pending_locks1: List[HashTimeLockState],<EOL>) -> None:
<EOL>channel0 = get_channelstate(app0, app1, token_network_identifier)<EOL>channel1 = get_channelstate(app1, app0, token_network_identifier)<EOL>assert channel0.our_state.contract_balance == channel1.partner_state.contract_balance<EOL>assert channel0.partner_state.contract_balance == channel1.our_state.contract_balance<EOL>total_token = channel0.our_state.contract_balance + channel1.our_state.contract_balance<EOL>our_balance0 = channel.get_balance(channel0.our_state, channel0.partner_state)<EOL>partner_balance0 = channel.get_balance(channel0.partner_state, channel0.our_state)<EOL>assert our_balance0 + partner_balance0 == total_token<EOL>our_balance1 = channel.get_balance(channel1.our_state, channel1.partner_state)<EOL>partner_balance1 = channel.get_balance(channel1.partner_state, channel1.our_state)<EOL>assert our_balance1 + partner_balance1 == total_token<EOL>locked_amount0 = sum(lock.amount for lock in pending_locks0)<EOL>locked_amount1 = sum(lock.amount for lock in pending_locks1)<EOL>assert_balance(channel0, balance0, locked_amount0)<EOL>assert_balance(channel1, balance1, locked_amount1)<EOL>assert_locked(channel0, pending_locks0)<EOL>assert_locked(channel1, pending_locks1)<EOL>assert_mirror(channel0, channel1)<EOL>assert_mirror(channel1, channel0)<EOL>
Assert the values of two synced channels. Note: This assert does not work for an intermediate state, where one message hasn't been delivered yet or has been completely lost.
f9312:m4
def wait_assert(func: Callable, *args, **kwargs) -> None:
while True:<EOL><INDENT>try:<EOL><INDENT>func(*args, **kwargs)<EOL><DEDENT>except AssertionError as e:<EOL><INDENT>try:<EOL><INDENT>gevent.sleep(<NUM_LIT:0.5>)<EOL><DEDENT>except gevent.Timeout:<EOL><INDENT>raise e<EOL><DEDENT><DEDENT>else:<EOL><INDENT>break<EOL><DEDENT><DEDENT>
Utility to re-run `func` if it raises an assert. Return once `func` doesn't hit a failed assert anymore. This will loop forever unless a gevent.Timeout is used.
f9312:m5
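A gevent-free equivalent of `wait_assert`, using an explicit deadline instead of an external `gevent.Timeout` (the name `retry_assert` and its defaults are illustrative, not part of Raiden):

```python
import time

def retry_assert(func, *args, deadline=5.0, poll=0.05, **kwargs):
    """Re-run `func` until its asserts pass; re-raise the AssertionError at the deadline."""
    end = time.monotonic() + deadline
    while True:
        try:
            func(*args, **kwargs)
            return
        except AssertionError:
            if time.monotonic() >= end:
                raise
            time.sleep(poll)

# usage: an assertion that only holds after a few attempts
attempts = {'n': 0}
def eventually_passes():
    attempts['n'] += 1
    assert attempts['n'] >= 3

retry_assert(eventually_passes, poll=0.001)
```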
def assert_mirror(original: NettingChannelState, mirror: NettingChannelState) -> None:
original_locked_amount = channel.get_amount_locked(original.our_state)<EOL>mirror_locked_amount = channel.get_amount_locked(mirror.partner_state)<EOL>assert original_locked_amount == mirror_locked_amount<EOL>balance0 = channel.get_balance(original.our_state, original.partner_state)<EOL>balance1 = channel.get_balance(mirror.partner_state, mirror.our_state)<EOL>assert balance0 == balance1<EOL>balanceproof0 = channel.get_current_balanceproof(original.our_state)<EOL>balanceproof1 = channel.get_current_balanceproof(mirror.partner_state)<EOL>assert balanceproof0 == balanceproof1<EOL>distributable0 = channel.get_distributable(original.our_state, original.partner_state)<EOL>distributable1 = channel.get_distributable(mirror.partner_state, mirror.our_state)<EOL>assert distributable0 == distributable1<EOL>
Assert that `mirror` has a correct `partner_state` to represent `original`.
f9312:m6
def assert_locked(<EOL>from_channel: NettingChannelState,<EOL>pending_locks: List[HashTimeLockState],<EOL>) -> None:
<EOL>if pending_locks:<EOL><INDENT>leaves = [sha3(lock.encoded) for lock in pending_locks]<EOL>layers = compute_layers(leaves)<EOL>tree = MerkleTreeState(layers)<EOL><DEDENT>else:<EOL><INDENT>tree = make_empty_merkle_tree()<EOL><DEDENT>assert from_channel.our_state.merkletree == tree<EOL>for lock in pending_locks:<EOL><INDENT>pending = lock.secrethash in from_channel.our_state.secrethashes_to_lockedlocks<EOL>unclaimed = lock.secrethash in from_channel.our_state.secrethashes_to_unlockedlocks<EOL>assert pending or unclaimed<EOL><DEDENT>
Assert the locks created from `from_channel`.
f9312:m7
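`assert_locked` rebuilds the expected merkle tree from the pending locks via Raiden's `compute_layers`. As an illustration of the general idea only (Raiden's actual hashing rules differ in detail), a binary hash tree over leaves can be built bottom-up like this:

```python
import hashlib

def sha3(data: bytes) -> bytes:
    # stand-in for Raiden's sha3 helper, using stdlib SHA3-256
    return hashlib.sha3_256(data).digest()

def compute_layers(leaves):
    """Build hash-tree layers bottom-up; an unpaired node is promoted as-is."""
    layers = [list(leaves)]
    while len(layers[-1]) > 1:
        level = layers[-1]
        parents = [sha3(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            parents.append(level[-1])
        layers.append(parents)
    return layers

leaves = [sha3(b'lock-1'), sha3(b'lock-2'), sha3(b'lock-3')]
layers = compute_layers(leaves)
assert [len(layer) for layer in layers] == [3, 2, 1]  # leaves, inner layer, root
```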
def assert_balance(<EOL>from_channel: NettingChannelState,<EOL>balance: Balance,<EOL>locked: LockedAmount,<EOL>) -> None:
assert balance >= <NUM_LIT:0><EOL>assert locked >= <NUM_LIT:0><EOL>distributable = balance - locked<EOL>channel_distributable = channel.get_distributable(<EOL>from_channel.our_state,<EOL>from_channel.partner_state,<EOL>)<EOL>channel_balance = channel.get_balance(from_channel.our_state, from_channel.partner_state)<EOL>channel_locked_amount = channel.get_amount_locked(from_channel.our_state)<EOL>msg = f'<STR_LIT>'<EOL>assert channel_balance == balance, msg<EOL>msg = (<EOL>f'<STR_LIT>'<EOL>f'<STR_LIT>'<EOL>)<EOL>assert channel_distributable == distributable, msg<EOL>msg = f'<STR_LIT>'<EOL>assert channel_locked_amount == locked, msg<EOL>msg = (<EOL>f'<STR_LIT>'<EOL>f'<STR_LIT>'<EOL>)<EOL>assert balance == locked + distributable, msg<EOL>
Assert the from_channel overall token values.
f9312:m8
def make_mediated_transfer(<EOL>from_channel: NettingChannelState,<EOL>partner_channel: NettingChannelState,<EOL>initiator: InitiatorAddress,<EOL>target: TargetAddress,<EOL>lock: HashTimeLockState,<EOL>pkey: bytes,<EOL>secret: Optional[Secret] = None,<EOL>) -> LockedTransfer:
payment_identifier = channel.get_next_nonce(from_channel.our_state)<EOL>message_identifier = random.randint(<NUM_LIT:0>, UINT64_MAX)<EOL>lockedtransfer = channel.send_lockedtransfer(<EOL>from_channel,<EOL>initiator,<EOL>target,<EOL>lock.amount,<EOL>message_identifier,<EOL>payment_identifier,<EOL>lock.expiration,<EOL>lock.secrethash,<EOL>)<EOL>mediated_transfer_msg = LockedTransfer.from_event(lockedtransfer)<EOL>mediated_transfer_msg.sign(LocalSigner(pkey))<EOL>balance_proof = balanceproof_from_envelope(mediated_transfer_msg)<EOL>lockedtransfer.balance_proof = balance_proof<EOL>assert mediated_transfer_msg.sender == from_channel.our_state.address<EOL>receive_lockedtransfer = lockedtransfersigned_from_message(mediated_transfer_msg)<EOL>channel.handle_receive_lockedtransfer(<EOL>partner_channel,<EOL>receive_lockedtransfer,<EOL>)<EOL>if secret is not None:<EOL><INDENT>secrethash = sha3(secret)<EOL>channel.register_offchain_secret(from_channel, secret, secrethash)<EOL>channel.register_offchain_secret(partner_channel, secret, secrethash)<EOL><DEDENT>return mediated_transfer_msg<EOL>
Helper to create and register a mediated transfer from `from_channel` to `partner_channel`.
f9312:m9
def geth_to_cmd(<EOL>node: Dict,<EOL>datadir: str,<EOL>chain_id: int,<EOL>verbosity: str,<EOL>) -> List[str]:
node_config = [<EOL>'<STR_LIT>',<EOL>'<STR_LIT:port>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT:password>',<EOL>]<EOL>cmd = ['<STR_LIT>']<EOL>for config in node_config:<EOL><INDENT>if config in node:<EOL><INDENT>value = node[config]<EOL>cmd.extend([f'<STR_LIT>', str(value)])<EOL><DEDENT><DEDENT>cmd.extend([<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>'<STR_LIT>', '<STR_LIT>',<EOL>'<STR_LIT>', '<STR_LIT:127.0.0.1>',<EOL>'<STR_LIT>', str(chain_id),<EOL>'<STR_LIT>', str(_GETH_VERBOSITY_LEVEL[verbosity]),<EOL>'<STR_LIT>', datadir,<EOL>])<EOL>if node.get('<STR_LIT>', False):<EOL><INDENT>cmd.append('<STR_LIT>')<EOL><DEDENT>log.debug('<STR_LIT>', command=cmd)<EOL>return cmd<EOL>
Transform a node configuration into a cmd-args list for `subprocess.Popen`. Args: node: a node configuration datadir: the node's datadir chain_id: the id of the private chain verbosity: one of {'error', 'warn', 'info', 'debug'} Return: cmd-args list
f9313:m2
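The literal flag names are masked in this dump, but the transformation `geth_to_cmd` performs is a simple dict-to-CLI mapping. A sketch of the same shape (binary and flag names here are illustrative):

```python
def config_to_cmd(binary, config, allowed_flags):
    """Turn a config dict into a Popen-style argument list of '--flag value' pairs."""
    cmd = [binary]
    for flag in allowed_flags:
        if flag in config:
            cmd.extend([f'--{flag}', str(config[flag])])
    return cmd

cmd = config_to_cmd('geth', {'port': 30303, 'verbosity': 3, 'ignored': 1}, ['port', 'verbosity'])
assert cmd == ['geth', '--port', '30303', '--verbosity', '3']
```

Keeping the allow-list explicit, as the original does, prevents arbitrary config keys from leaking into the command line.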
def geth_create_account(datadir: str, privkey: bytes):
keyfile_path = os.path.join(datadir, '<STR_LIT>')<EOL>with open(keyfile_path, '<STR_LIT:wb>') as handler:<EOL><INDENT>handler.write(<EOL>remove_0x_prefix(encode_hex(privkey)).encode(),<EOL>)<EOL><DEDENT>create = subprocess.Popen(<EOL>['<STR_LIT>', '<STR_LIT>', datadir, '<STR_LIT>', '<STR_LIT>', keyfile_path],<EOL>stdin=subprocess.PIPE,<EOL>universal_newlines=True,<EOL>stdout=subprocess.DEVNULL,<EOL>stderr=subprocess.DEVNULL,<EOL>)<EOL>create.stdin.write(DEFAULT_PASSPHRASE + os.linesep)<EOL>time.sleep(<NUM_LIT>)<EOL>create.stdin.write(DEFAULT_PASSPHRASE + os.linesep)<EOL>create.communicate()<EOL>assert create.returncode == <NUM_LIT:0><EOL>
Create an account in `datadir` -- since we're not interested in the rewards, we don't care about the created address. Args: datadir: the datadir in which the account is created privkey: the private key for the account
f9313:m4
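`geth_create_account` drives `geth account import` by writing the passphrase twice to the child's stdin. The same pattern, sketched against a `python -c` child process standing in for geth:

```python
import subprocess
import sys

# stand-in for `geth account import`: read the passphrase twice and compare
proc = subprocess.Popen(
    [sys.executable, '-c', 'print(input() == input())'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
out, _ = proc.communicate('secret\nsecret\n')  # passphrase entered twice
assert proc.returncode == 0
assert out.strip() == 'True'
```

Using a single `communicate` call avoids the `stdin.write` plus `time.sleep` dance in the original, which can deadlock if the child's pipe buffer fills.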
def geth_generate_poa_genesis(<EOL>genesis_path: str,<EOL>genesis_description: GenesisDescription,<EOL>seal_account: bytes,<EOL>) -> None:
alloc = {<EOL>to_normalized_address(address): {<EOL>'<STR_LIT>': DEFAULT_BALANCE_BIN,<EOL>}<EOL>for address in genesis_description.prefunded_accounts<EOL>}<EOL>seal_address_normalized = remove_0x_prefix(<EOL>to_normalized_address(seal_account),<EOL>)<EOL>extra_data = geth_clique_extradata(<EOL>genesis_description.random_marker,<EOL>seal_address_normalized,<EOL>)<EOL>genesis = GENESIS_STUB.copy()<EOL>genesis['<STR_LIT>'].update(alloc)<EOL>genesis['<STR_LIT>']['<STR_LIT>'] = genesis_description.chain_id<EOL>genesis['<STR_LIT>']['<STR_LIT>'] = {'<STR_LIT>': <NUM_LIT:1>, '<STR_LIT>': <NUM_LIT>}<EOL>genesis['<STR_LIT>'] = extra_data<EOL>with open(genesis_path, '<STR_LIT:w>') as handler:<EOL><INDENT>json.dump(genesis, handler)<EOL><DEDENT>
Writes a bare genesis to `genesis_path`.
f9313:m6
def geth_init_datadir(datadir: str, genesis_path: str):
try:<EOL><INDENT>args = [<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>datadir,<EOL>'<STR_LIT>',<EOL>genesis_path,<EOL>]<EOL>subprocess.check_output(args, stderr=subprocess.STDOUT)<EOL><DEDENT>except subprocess.CalledProcessError as e:<EOL><INDENT>msg = '<STR_LIT>'.format(<EOL>e.returncode,<EOL>e.output,<EOL>)<EOL>raise ValueError(msg)<EOL><DEDENT>
Initialize a client's datadir with our custom genesis block. Args: datadir: the datadir in which the blockchain is initialized. genesis_path: the path to the genesis file.
f9313:m7
def eth_check_balance(web3: Web3, accounts_addresses: List[bytes], retries: int = <NUM_LIT:10>) -> None:
addresses = {to_checksum_address(account) for account in accounts_addresses}<EOL>for _ in range(retries):<EOL><INDENT>for address in addresses.copy():<EOL><INDENT>if web3.eth.getBalance(address, '<STR_LIT>') > <NUM_LIT:0>:<EOL><INDENT>addresses.remove(address)<EOL><DEDENT><DEDENT>gevent.sleep(<NUM_LIT:1>)<EOL><DEDENT>if len(addresses) > <NUM_LIT:0>:<EOL><INDENT>raise ValueError(f'<STR_LIT>')<EOL><DEDENT>
Wait until the given addresses have a balance. Raises a ValueError if any of the addresses still have no balance after ``retries``.
f9313:m10
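`eth_check_balance` polls a shrinking set of pending addresses and raises with whatever is left after the retry budget. The pattern generalises; a stdlib sketch with an invented `wait_for_all` helper (this early-returns once the set drains, which the original does not bother to do):

```python
def wait_for_all(check, items, retries=10):
    """Poll `check` over the pending items, dropping each one once it passes."""
    pending = set(items)
    for _ in range(retries):
        for item in set(pending):
            if check(item):
                pending.discard(item)
        if not pending:
            return
    raise ValueError(f'items never passed the check: {pending}')

# usage: balances that become positive after a couple of polls
balances = {'a': -1, 'b': 1}
def funded(addr):
    balances[addr] += 1  # simulate the chain catching up
    return balances[addr] > 0

wait_for_all(funded, ['a', 'b'])
```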
@contextmanager<EOL>def run_private_blockchain(<EOL>web3: Web3,<EOL>eth_nodes: List[EthNodeDescription],<EOL>base_datadir: str,<EOL>log_dir: str,<EOL>verbosity: str,<EOL>genesis_description: GenesisDescription,<EOL>):
<EOL>nodes_configuration = []<EOL>for node in eth_nodes:<EOL><INDENT>config = eth_node_config(<EOL>node.private_key,<EOL>node.p2p_port,<EOL>node.rpc_port,<EOL>**node.extra_config,<EOL>)<EOL>if node.miner:<EOL><INDENT>config['<STR_LIT>'] = to_checksum_address(config['<STR_LIT:address>'])<EOL>config['<STR_LIT>'] = True<EOL>config['<STR_LIT:password>'] = os.path.join(base_datadir, '<STR_LIT>')<EOL><DEDENT>nodes_configuration.append(config)<EOL><DEDENT>blockchain_type = eth_nodes[<NUM_LIT:0>].blockchain_type<EOL>seal_account = privatekey_to_address(eth_nodes[<NUM_LIT:0>].private_key)<EOL>if blockchain_type == '<STR_LIT>':<EOL><INDENT>eth_node_config_set_bootnodes(nodes_configuration)<EOL>genesis_path = os.path.join(base_datadir, '<STR_LIT>')<EOL>geth_generate_poa_genesis(<EOL>genesis_path=genesis_path,<EOL>genesis_description=genesis_description,<EOL>seal_account=seal_account,<EOL>)<EOL><DEDENT>elif blockchain_type == '<STR_LIT>':<EOL><INDENT>genesis_path = os.path.join(base_datadir, '<STR_LIT>')<EOL>parity_generate_chain_spec(<EOL>genesis_path=genesis_path,<EOL>genesis_description=genesis_description,<EOL>seal_account=seal_account,<EOL>)<EOL>parity_create_account(nodes_configuration[<NUM_LIT:0>], base_datadir, genesis_path)<EOL><DEDENT>else:<EOL><INDENT>raise TypeError(f'<STR_LIT>')<EOL><DEDENT>runner = eth_run_nodes(<EOL>eth_node_descs=eth_nodes,<EOL>nodes_configuration=nodes_configuration,<EOL>base_datadir=base_datadir,<EOL>genesis_file=genesis_path,<EOL>chain_id=genesis_description.chain_id,<EOL>random_marker=genesis_description.random_marker,<EOL>verbosity=verbosity,<EOL>logdir=log_dir,<EOL>)<EOL>with runner as executors:<EOL><INDENT>eth_check_balance(web3, genesis_description.prefunded_accounts)<EOL>yield executors<EOL><DEDENT>
Starts a private network with the genesis accounts funded. Args: web3: A Web3 instance used to check when the private chain is running. eth_nodes: A list of geth node descriptions, containing the details of each node of the private chain. base_datadir: Directory used to store the geth databases. log_dir: Directory used to store the geth logs. verbosity: Verbosity used by the geth nodes. genesis_description: Genesis details, including the prefunded accounts and the random marker used to identify the private chain.
f9313:m18
def burn_eth(raiden_service, amount_to_leave=<NUM_LIT:0>):
address = to_checksum_address(raiden_service.address)<EOL>client = raiden_service.chain.client<EOL>web3 = client.web3<EOL>gas_price = web3.eth.gasPrice<EOL>value = web3.eth.getBalance(address) - gas_price * (<NUM_LIT> + amount_to_leave)<EOL>transaction_hash = client.send_transaction(<EOL>to=HOP1,<EOL>value=value,<EOL>startgas=<NUM_LIT>,<EOL>)<EOL>client.poll(transaction_hash)<EOL>
Burn nearly all the ETH on the account of the given raiden service, leaving behind a remainder proportional to `amount_to_leave`.
f9314:m0
@singledispatch<EOL>def create(properties: Any, defaults: Optional[Properties] = None) -> Any:
if isinstance(properties, Properties):<EOL><INDENT>return properties.TARGET_TYPE(**_properties_to_kwargs(properties, defaults))<EOL><DEDENT>return properties<EOL>
Create objects from their associated property class. E. g. a NettingChannelState from NettingChannelStateProperties. For any field in properties set to EMPTY a default will be used. The default values can be changed by giving another object of the same property type as the defaults argument.
f9315:m31
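The factory above is registered as a `functools.singledispatch` function, so each property class can get its own overload while plain values fall through the generic branch. A minimal illustration of that dispatch pattern (the toy `ChannelProperties` class and `build` name are mine, not Raiden's):

```python
from dataclasses import dataclass
from functools import singledispatch

@dataclass
class ChannelProperties:  # toy stand-in for a Properties subclass
    balance: int = 0

@singledispatch
def build(properties):
    # fallback branch: values that are not property objects pass through unchanged
    return properties

@build.register
def _(properties: ChannelProperties):
    return {'type': 'channel', 'balance': properties.balance}

assert build(ChannelProperties(balance=7)) == {'type': 'channel', 'balance': 7}
assert build(42) == 42
```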
def create_network(<EOL>token_network_state: TokenNetworkState,<EOL>our_address: typing.Address,<EOL>routes: List[RouteProperties],<EOL>block_number: typing.BlockNumber,<EOL>block_hash: typing.BlockHash = None,<EOL>) -> Tuple[Any, List[NettingChannelState]]:
block_hash = block_hash or make_block_hash()<EOL>state = token_network_state<EOL>channels = list()<EOL>for count, route in enumerate(routes, <NUM_LIT:1>):<EOL><INDENT>if route.address1 == our_address:<EOL><INDENT>channel = route_properties_to_channel(route)<EOL>state_change = ContractReceiveChannelNew(<EOL>transaction_hash=make_transaction_hash(),<EOL>channel_state=channel,<EOL>block_number=block_number,<EOL>block_hash=block_hash,<EOL>)<EOL>channels.append(channel)<EOL><DEDENT>else:<EOL><INDENT>state_change = ContractReceiveRouteNew(<EOL>transaction_hash=make_transaction_hash(),<EOL>canonical_identifier=make_canonical_identifier(),<EOL>participant1=route.address1,<EOL>participant2=route.address2,<EOL>block_number=block_number,<EOL>block_hash=block_hash,<EOL>)<EOL><DEDENT>iteration = token_network.state_transition(<EOL>token_network_state=state,<EOL>state_change=state_change,<EOL>block_number=block_number,<EOL>block_hash=block_hash,<EOL>)<EOL>state = iteration.new_state<EOL>assert len(state.network_graph.channel_identifier_to_participants) == count<EOL>assert len(state.network_graph.network.edges()) == count<EOL><DEDENT>return state, channels<EOL>
Create a network from route properties. If `our_address` is a participant of a route, a channel is created for it as well. Returns the new state and the list of created channels.
f9315:m45
def deploy_tokens_and_fund_accounts(<EOL>token_amount: int,<EOL>number_of_tokens: int,<EOL>deploy_service: BlockChainService,<EOL>participants: typing.List[typing.Address],<EOL>contract_manager: ContractManager,<EOL>) -> typing.List[typing.TokenAddress]:
result = list()<EOL>for _ in range(number_of_tokens):<EOL><INDENT>token_address = deploy_contract_web3(<EOL>CONTRACT_HUMAN_STANDARD_TOKEN,<EOL>deploy_service.client,<EOL>contract_manager=contract_manager,<EOL>constructor_arguments=(<EOL>token_amount,<EOL><NUM_LIT:2>,<EOL>'<STR_LIT>',<EOL>'<STR_LIT>',<EOL>),<EOL>)<EOL>result.append(token_address)<EOL>for transfer_to in participants:<EOL><INDENT>deploy_service.token(token_address).transfer(<EOL>to_address=transfer_to,<EOL>amount=token_amount // len(participants),<EOL>)<EOL><DEDENT><DEDENT>return result<EOL>
Deploy `number_of_tokens` ERC20 token instances with `token_amount` minted and distributed among `participants`. Args: token_amount (int): number of units that will be created per token number_of_tokens (int): number of token instances that will be created deploy_service (BlockChainService): the blockchain connection that will deploy the contracts participants (list(address)): participant addresses that will receive tokens
f9319:m1
def make_requests_insecure():
<EOL>requests.Session.verify = property(lambda self: False, lambda self, val: None)<EOL>
Prevent `requests` from performing TLS verification. **THIS MUST ONLY BE USED FOR TESTING PURPOSES!**
f9320:m0
def all_combinations(values):
all_generators = (<EOL>combinations(values, r)<EOL>for r in range(<NUM_LIT:1>, len(values))<EOL>)<EOL>flat = chain.from_iterable(all_generators)<EOL>return flat<EOL>
Returns all possible combinations of values, from length 1 up to, but not including, the full length of values.
f9322:m2
def fixture_all_combinations(invalid_values):
<EOL>invalid_values_items = list(invalid_values.items())<EOL>all_invalid_values = all_combinations(invalid_values_items)<EOL>for invalid_combinations in all_invalid_values:<EOL><INDENT>keys_values = (<EOL>product((key,), values)<EOL>for key, values in invalid_combinations<EOL>)<EOL>invalid_instances = product(*keys_values)<EOL>for instance in invalid_instances:<EOL><INDENT>yield dict(instance)<EOL><DEDENT><DEDENT>
Generate all combinations for testing invalid values. `pytest.mark.parametrize` only generates the combinations of full-length values; that is not sufficient for an exhaustive failing test with default values. Example:: @pytest.mark.parametrize('x', [0, 1]) @pytest.mark.parametrize('y', [2, 3]) def test_foo(x, y): with pytest.raises(Exception): # failing computation with x and y The above test will generate 4 tests {x:0,y:2}, {x:0,y:3}, {x:1,y:2}, and {x:1,y:3}, but it will not generate a scenario for x or y alone: {x:0}, {x:1}, {y:2}, {y:3}.
f9322:m3
def payment_channel_open_and_deposit(app0, app1, token_address, deposit, settle_timeout):
assert token_address<EOL>token_network_address = app0.raiden.default_registry.get_token_network(token_address)<EOL>token_network_proxy = app0.raiden.chain.token_network(token_network_address)<EOL>channel_identifier = token_network_proxy.new_netting_channel(<EOL>partner=app1.raiden.address,<EOL>settle_timeout=settle_timeout,<EOL>given_block_identifier='<STR_LIT>',<EOL>)<EOL>assert channel_identifier<EOL>canonical_identifier = CanonicalIdentifier(<EOL>chain_identifier=state_from_raiden(app0.raiden).chain_id,<EOL>token_network_address=token_network_proxy.address,<EOL>channel_identifier=channel_identifier,<EOL>)<EOL>for app in [app0, app1]:<EOL><INDENT>token = app.raiden.chain.token(token_address)<EOL>payment_channel_proxy = app.raiden.chain.payment_channel(<EOL>canonical_identifier=canonical_identifier,<EOL>)<EOL>previous_balance = token.balance_of(app.raiden.address)<EOL>assert previous_balance >= deposit<EOL>payment_channel_proxy.set_total_deposit(total_deposit=deposit, block_identifier='<STR_LIT>')<EOL>new_balance = token.balance_of(app.raiden.address)<EOL>assert new_balance <= previous_balance - deposit<EOL><DEDENT>check_channel(<EOL>app0,<EOL>app1,<EOL>token_network_proxy.address,<EOL>channel_identifier,<EOL>settle_timeout,<EOL>deposit,<EOL>)<EOL>
Open a new channel with app0 and app1 as participants
f9323:m1
def network_with_minimum_channels(apps, channels_per_node):
<EOL>if channels_per_node > len(apps):<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>if len(apps) == <NUM_LIT:1>:<EOL><INDENT>raise ValueError("<STR_LIT>")<EOL><DEDENT>unconnected_apps = dict()<EOL>channel_count = dict()<EOL>for curr_app in apps:<EOL><INDENT>all_apps = list(apps)<EOL>all_apps.remove(curr_app)<EOL>unconnected_apps[curr_app.raiden.address] = all_apps<EOL>channel_count[curr_app.raiden.address] = <NUM_LIT:0><EOL><DEDENT>for curr_app in sorted(apps, key=lambda app: app.raiden.address):<EOL><INDENT>available_apps = unconnected_apps[curr_app.raiden.address]<EOL>while channel_count[curr_app.raiden.address] < channels_per_node:<EOL><INDENT>least_connect = sorted(<EOL>available_apps,<EOL>key=lambda app: channel_count[app.raiden.address],<EOL>)[<NUM_LIT:0>]<EOL>channel_count[curr_app.raiden.address] += <NUM_LIT:1><EOL>available_apps.remove(least_connect)<EOL>channel_count[least_connect.raiden.address] += <NUM_LIT:1><EOL>unconnected_apps[least_connect.raiden.address].remove(curr_app)<EOL>yield curr_app, least_connect<EOL><DEDENT><DEDENT>
Return the channels that should be created so that each app has at least `channels_per_node` with the other apps. Yields a two-tuple (app1, app2) that must be connected to respect `channels_per_node`. Any preexisting channels will be ignored, so the nodes might end up with more channels open than `channels_per_node`.
f9323:m3
def create_sequential_channels(raiden_apps, channels_per_node):
num_nodes = len(raiden_apps)<EOL>if num_nodes < <NUM_LIT:2>:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if channels_per_node not in (<NUM_LIT:0>, <NUM_LIT:1>, <NUM_LIT:2>, CHAIN):<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>if channels_per_node == <NUM_LIT:0>:<EOL><INDENT>app_channels = list()<EOL><DEDENT>if channels_per_node == <NUM_LIT:1>:<EOL><INDENT>assert len(raiden_apps) % <NUM_LIT:2> == <NUM_LIT:0>, '<STR_LIT>'<EOL>every_two = iter(raiden_apps)<EOL>app_channels = list(zip(every_two, every_two))<EOL><DEDENT>if channels_per_node == <NUM_LIT:2>:<EOL><INDENT>app_channels = list(zip(raiden_apps, raiden_apps[<NUM_LIT:1>:] + [raiden_apps[<NUM_LIT:0>]]))<EOL><DEDENT>if channels_per_node == CHAIN:<EOL><INDENT>app_channels = list(zip(raiden_apps[:-<NUM_LIT:1>], raiden_apps[<NUM_LIT:1>:]))<EOL><DEDENT>return app_channels<EOL>
Create channels so that the `num_nodes` nodes are connected sequentially. Returns: A list of (app0, app1) pairs; every pair in the list should open a channel with `deposit` for each participant.
f9323:m5
def create_apps(<EOL>chain_id,<EOL>contracts_path,<EOL>blockchain_services,<EOL>endpoint_discovery_services,<EOL>token_network_registry_address,<EOL>secret_registry_address,<EOL>service_registry_address,<EOL>user_deposit_address,<EOL>raiden_udp_ports,<EOL>reveal_timeout,<EOL>settle_timeout,<EOL>database_paths,<EOL>retry_interval,<EOL>retries_before_backoff,<EOL>throttle_capacity,<EOL>throttle_fill_rate,<EOL>nat_invitation_timeout,<EOL>nat_keepalive_retries,<EOL>nat_keepalive_timeout,<EOL>environment_type,<EOL>unrecoverable_error_should_crash,<EOL>local_matrix_url=None,<EOL>private_rooms=None,<EOL>):
<EOL>services = zip(blockchain_services, endpoint_discovery_services, raiden_udp_ports)<EOL>apps = []<EOL>for idx, (blockchain, discovery, port) in enumerate(services):<EOL><INDENT>address = blockchain.client.address<EOL>host = '<STR_LIT:127.0.0.1>'<EOL>config = {<EOL>'<STR_LIT>': chain_id,<EOL>'<STR_LIT>': environment_type,<EOL>'<STR_LIT>': unrecoverable_error_should_crash,<EOL>'<STR_LIT>': reveal_timeout,<EOL>'<STR_LIT>': settle_timeout,<EOL>'<STR_LIT>': contracts_path,<EOL>'<STR_LIT>': database_paths[idx],<EOL>'<STR_LIT>': {<EOL>'<STR_LIT>': DEFAULT_NUMBER_OF_BLOCK_CONFIRMATIONS,<EOL>},<EOL>'<STR_LIT>': {<EOL>'<STR_LIT>': {<EOL>'<STR_LIT>': host,<EOL>'<STR_LIT>': port,<EOL>'<STR_LIT:host>': host,<EOL>'<STR_LIT>': nat_invitation_timeout,<EOL>'<STR_LIT>': nat_keepalive_retries,<EOL>'<STR_LIT>': nat_keepalive_timeout,<EOL>'<STR_LIT:port>': port,<EOL>'<STR_LIT>': retries_before_backoff,<EOL>'<STR_LIT>': retry_interval,<EOL>'<STR_LIT>': throttle_capacity,<EOL>'<STR_LIT>': throttle_fill_rate,<EOL>},<EOL>},<EOL>'<STR_LIT>': True,<EOL>'<STR_LIT>': False,<EOL>}<EOL>use_matrix = local_matrix_url is not None<EOL>if use_matrix:<EOL><INDENT>merge_dict(<EOL>config,<EOL>{<EOL>'<STR_LIT>': '<STR_LIT>',<EOL>'<STR_LIT>': {<EOL>'<STR_LIT>': {<EOL>'<STR_LIT>': ['<STR_LIT>'],<EOL>'<STR_LIT>': retries_before_backoff,<EOL>'<STR_LIT>': retry_interval,<EOL>'<STR_LIT>': local_matrix_url,<EOL>'<STR_LIT>': local_matrix_url.netloc,<EOL>'<STR_LIT>': [],<EOL>'<STR_LIT>': private_rooms,<EOL>},<EOL>},<EOL>},<EOL>)<EOL><DEDENT>config_copy = App.DEFAULT_CONFIG.copy()<EOL>config_copy.update(config)<EOL>registry = blockchain.token_network_registry(token_network_registry_address)<EOL>secret_registry = blockchain.secret_registry(secret_registry_address)<EOL>service_registry = None<EOL>if service_registry_address:<EOL><INDENT>service_registry = blockchain.service_registry(service_registry_address)<EOL><DEDENT>user_deposit = None<EOL>if user_deposit_address:<EOL><INDENT>user_deposit = blockchain.user_deposit(user_deposit_address)<EOL><DEDENT>if use_matrix:<EOL><INDENT>transport = MatrixTransport(config['<STR_LIT>']['<STR_LIT>'])<EOL><DEDENT>else:<EOL><INDENT>throttle_policy = TokenBucket(<EOL>config['<STR_LIT>']['<STR_LIT>']['<STR_LIT>'],<EOL>config['<STR_LIT>']['<STR_LIT>']['<STR_LIT>'],<EOL>)<EOL>transport = UDPTransport(<EOL>address,<EOL>discovery,<EOL>server._udp_socket((host, port)), <EOL>throttle_policy,<EOL>config['<STR_LIT>']['<STR_LIT>'],<EOL>)<EOL><DEDENT>raiden_event_handler = RaidenEventHandler()<EOL>message_handler = WaitForMessage()<EOL>app = App(<EOL>config=config_copy,<EOL>chain=blockchain,<EOL>query_start_block=<NUM_LIT:0>,<EOL>default_registry=registry,<EOL>default_secret_registry=secret_registry,<EOL>default_service_registry=service_registry,<EOL>transport=transport,<EOL>raiden_event_handler=raiden_event_handler,<EOL>message_handler=message_handler,<EOL>discovery=discovery,<EOL>user_deposit=user_deposit,<EOL>)<EOL>apps.append(app)<EOL><DEDENT>return apps<EOL>
Create the apps.
f9323:m6
def parallel_start_apps(raiden_apps):
start_tasks = set()<EOL>for app in raiden_apps:<EOL><INDENT>greenlet = gevent.spawn(app.raiden.start)<EOL>greenlet.name = f'<STR_LIT>'<EOL>start_tasks.add(greenlet)<EOL><DEDENT>gevent.joinall(start_tasks, raise_error=True)<EOL>
Start all the raiden apps in parallel.
f9323:m7
def wait_for_alarm_start(raiden_apps, retry_timeout=DEFAULT_RETRY_TIMEOUT):
apps = list(raiden_apps)<EOL>while apps:<EOL><INDENT>app = apps[-<NUM_LIT:1>]<EOL>if app.raiden.alarm.known_block_number is None:<EOL><INDENT>gevent.sleep(retry_timeout)<EOL><DEDENT>else:<EOL><INDENT>apps.pop()<EOL><DEDENT><DEDENT>
Wait until all alarm tasks have started and set up their known_block_number.
f9323:m9
def wait_for_usable_channel(<EOL>app0,<EOL>app1,<EOL>registry_address,<EOL>token_address,<EOL>our_deposit,<EOL>partner_deposit,<EOL>retry_timeout=DEFAULT_RETRY_TIMEOUT,<EOL>):
waiting.wait_for_newchannel(<EOL>app0.raiden,<EOL>registry_address,<EOL>token_address,<EOL>app1.raiden.address,<EOL>retry_timeout,<EOL>)<EOL>waiting.wait_for_participant_newbalance(<EOL>app0.raiden,<EOL>registry_address,<EOL>token_address,<EOL>app1.raiden.address,<EOL>app0.raiden.address,<EOL>our_deposit,<EOL>retry_timeout,<EOL>)<EOL>waiting.wait_for_participant_newbalance(<EOL>app0.raiden,<EOL>registry_address,<EOL>token_address,<EOL>app1.raiden.address,<EOL>app1.raiden.address,<EOL>partner_deposit,<EOL>retry_timeout,<EOL>)<EOL>waiting.wait_for_healthy(<EOL>app0.raiden,<EOL>app1.raiden.address,<EOL>retry_timeout,<EOL>)<EOL>
Wait until the channel from app0 to app1 is usable. The channel and the deposits are registered, and the partner network state is reachable.
f9323:m10
def wait_for_channels(<EOL>app_channels,<EOL>registry_address,<EOL>token_addresses,<EOL>deposit,<EOL>retry_timeout=DEFAULT_RETRY_TIMEOUT,<EOL>):
for app0, app1 in app_channels:<EOL><INDENT>for token_address in token_addresses:<EOL><INDENT>wait_for_usable_channel(<EOL>app0,<EOL>app1,<EOL>registry_address,<EOL>token_address,<EOL>deposit,<EOL>deposit,<EOL>retry_timeout,<EOL>)<EOL>wait_for_usable_channel(<EOL>app1,<EOL>app0,<EOL>registry_address,<EOL>token_address,<EOL>deposit,<EOL>deposit,<EOL>retry_timeout,<EOL>)<EOL><DEDENT><DEDENT>
Wait until all channels are usable from both directions.
f9323:m12
def ensure_executable(cmd):
if not shutil.which(cmd):<EOL><INDENT>print(<EOL>'<STR_LIT>'<EOL>'<STR_LIT>' % cmd,<EOL>)<EOL>sys.exit(<NUM_LIT:1>)<EOL><DEDENT>
Look for the given command and make sure it can be executed; exit with an error message otherwise.
f9324:m0
def check_dict_nested_attrs(item: Dict, dict_data: Dict) -> bool:
for key, value in dict_data.items():<EOL><INDENT>if key not in item:<EOL><INDENT>return False<EOL><DEDENT>item_value = item[key]<EOL>if isinstance(item_value, Mapping):<EOL><INDENT>if not check_dict_nested_attrs(item_value, value):<EOL><INDENT>return False<EOL><DEDENT><DEDENT>elif item_value != value:<EOL><INDENT>return False<EOL><DEDENT><DEDENT>return True<EOL>
Checks that the values from `dict_data` are contained in `item` >>> d = {'a': 1, 'b': {'c': 2}} >>> check_dict_nested_attrs(d, {'a': 1}) True >>> check_dict_nested_attrs(d, {'b': {'c': 2}}) True >>> check_dict_nested_attrs(d, {'d': []}) False
f9325:m0
def check_nested_attrs(item: Any, attributes: Dict) -> bool:
for name, value in attributes.items():<EOL><INDENT>item_value = getattr(item, name, NOVALUE)<EOL>if isinstance(value, Mapping):<EOL><INDENT>if not check_nested_attrs(item_value, value):<EOL><INDENT>return False<EOL><DEDENT><DEDENT>elif item_value != value:<EOL><INDENT>return False<EOL><DEDENT><DEDENT>return True<EOL>
Checks that the attributes of `item` match the values defined in `attributes`. >>> from collections import namedtuple >>> A = namedtuple('A', 'a') >>> B = namedtuple('B', 'b') >>> check_nested_attrs(A(1), {'a': 1}) True >>> check_nested_attrs(A(B(1)), {'a': {'b': 1}}) True >>> check_nested_attrs(A(1), {'a': 2}) False >>> check_nested_attrs(A(1), {'b': 1}) False
f9325:m1
def search_for_item(<EOL>item_list: List,<EOL>item_type: Any,<EOL>attributes: Dict,<EOL>) -> Optional[Any]:
for item in item_list:<EOL><INDENT>if isinstance(item, item_type) and check_nested_attrs(item, attributes):<EOL><INDENT>return item<EOL><DEDENT><DEDENT>return None<EOL>
Search for the first item of type `item_type` with `attributes` in `item_list`. `attributes` are compared using the utility `check_nested_attrs`.
f9325:m2
def raiden_events_search_for_item(<EOL>raiden: RaidenService,<EOL>item_type: Event,<EOL>attributes: Dict,<EOL>) -> Optional[Event]:
return search_for_item(raiden.wal.storage.get_events(), item_type, attributes)<EOL>
Search for the first event of type `item_type` with `attributes` in the `raiden` database. `attributes` are compared using the utility `check_nested_attrs`.
f9325:m3
def raiden_state_changes_search_for_item(<EOL>raiden: RaidenService,<EOL>item_type: StateChange,<EOL>attributes: Dict,<EOL>) -> Optional[StateChange]:
return search_for_item(<EOL>raiden.wal.storage.get_statechanges_by_identifier(<NUM_LIT:0>, '<STR_LIT>'),<EOL>item_type,<EOL>attributes,<EOL>)<EOL>
Search for the first state change of type `item_type` with `attributes` in the `raiden` database. `attributes` are compared using the utility `check_nested_attrs`.
f9325:m4
def wait_for_raiden_event(<EOL>raiden: RaidenService,<EOL>item_type: Event,<EOL>attributes: Dict,<EOL>retry_timeout: float,<EOL>) -> Event:
found = None<EOL>while found is None:<EOL><INDENT>found = raiden_events_search_for_item(raiden, item_type, attributes)<EOL>gevent.sleep(retry_timeout)<EOL><DEDENT>return found<EOL>
Wait until an event is seen in the WAL. Note: This does not time out, use gevent.Timeout.
f9325:m7
def wait_for_state_change(<EOL>raiden: RaidenService,<EOL>item_type: StateChange,<EOL>attributes: Dict,<EOL>retry_timeout: float,<EOL>) -> StateChange:
found = None<EOL>while found is None:<EOL><INDENT>found = raiden_state_changes_search_for_item(raiden, item_type, attributes)<EOL>gevent.sleep(retry_timeout)<EOL><DEDENT>return found<EOL>
Wait until a state change is seen in the WAL. Note: This does not time out, use gevent.Timeout.
f9325:m8
def dont_handle_lock_expired_mock(app):
def do_nothing(raiden, message): <EOL><INDENT>pass<EOL><DEDENT>return patch.object(<EOL>app.raiden.message_handler,<EOL>'<STR_LIT>',<EOL>side_effect=do_nothing,<EOL>)<EOL>
Takes a raiden app and returns a mock context in which lock_expired messages are not processed.
f9326:m0
def dont_handle_node_change_network_state():
def empty_state_transition(chain_state, state_change): <EOL><INDENT>return TransitionResult(chain_state, list())<EOL><DEDENT>return patch(<EOL>'<STR_LIT>',<EOL>empty_state_transition,<EOL>)<EOL>
Returns a mock context where ActionChangeNodeNetworkState is not processed
f9326:m1