Columns: signature (string, 8 to 3.44k chars), body (string, 0 to 1.41M chars), docstring (string, 1 to 122k chars), id (string, 5 to 17 chars).
def _get_action_profile(x, indptr):
    N = len(indptr) - 1
    action_profile = tuple(x[indptr[i]:indptr[i+1]] for i in range(N))
    return action_profile
Obtain a tuple of mixed actions from a flattened action profile. Parameters ---------- x : array_like(float, ndim=1) Array of flattened mixed action profile of length equal to n_0 + ... + n_N-1, where `x[indptr[i]:indptr[i+1]]` contains player i's mixed action. indptr : array_like(int, ndim=1) Array of index pointers of length N+1, where `indptr[0] = 0` and `indptr[i+1] = indptr[i] + n_i`. Returns ------- action_profile : tuple(ndarray(float, ndim=1)) Tuple of N mixed actions, each of length n_i.
f5062:m3
def _flatten_action_profile(action_profile, indptr):
    N = len(indptr) - 1
    out = np.empty(indptr[-1])
    for i in range(N):
        if isinstance(action_profile[i], numbers.Integral):
            num_actions = indptr[i+1] - indptr[i]
            mixed_action = pure2mixed(num_actions, action_profile[i])
        else:
            mixed_action = action_profile[i]
        out[indptr[i]:indptr[i+1]] = mixed_action
    return out
Flatten the given action profile. Parameters ---------- action_profile : array_like(int or array_like(float, ndim=1)) Profile of actions of the N players, where each player i's action is a pure action (int) or a mixed action (array_like of floats of length n_i). indptr : array_like(int, ndim=1) Array of index pointers of length N+1, where `indptr[0] = 0` and `indptr[i+1] = indptr[i] + n_i`. Returns ------- out : ndarray(float, ndim=1) Array of flattened mixed action profile of length equal to n_0 + ... + n_N-1, where `out[indptr[i]:indptr[i+1]]` contains player i's mixed action.
f5062:m4
def random_game(nums_actions, random_state=None):
    N = len(nums_actions)
    if N == 0:
        raise ValueError('<STR_LIT>')
    random_state = check_random_state(random_state)
    players = [
        Player(random_state.random_sample(nums_actions[i:]+nums_actions[:i]))
        for i in range(N)
    ]
    g = NormalFormGame(players)
    return g
Return a random NormalFormGame instance where the payoffs are drawn independently from the uniform distribution on [0, 1). Parameters ---------- nums_actions : tuple(int) Tuple of the numbers of actions, one for each player. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- g : NormalFormGame
f5064:m0
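A quick usage sketch for `random_game`, assuming the functions in these records are importable from `quantecon.game_theory` as in the released package (the import path is an assumption if this snapshot is laid out differently):

import numpy as np
from quantecon.game_theory import random_game, pure_nash_brute

# Draw a 2x3x4 three-player game with i.i.d. U[0, 1) payoffs.
g = random_game((2, 3, 4), random_state=1234)
print(g.nums_actions)          # (2, 3, 4)

# The same integer seed reproduces the same payoffs.
g2 = random_game((2, 3, 4), random_state=1234)
assert np.array_equal(g.payoff_profile_array, g2.payoff_profile_array)

# Brute-force search for pure Nash equilibria of the random game.
print(pure_nash_brute(g))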
def covariance_game(nums_actions, rho, random_state=None):
    N = len(nums_actions)
    if N <= 1:
        raise ValueError('<STR_LIT>')
    if not (-1 / (N - 1) <= rho <= 1):
        lb = '<STR_LIT>' if N == 2 else '<STR_LIT>'.format(N-1)
        raise ValueError('<STR_LIT>'.format(lb))
    mean = np.zeros(N)
    cov = np.empty((N, N))
    cov.fill(rho)
    cov[range(N), range(N)] = 1
    random_state = check_random_state(random_state)
    payoff_profile_array = random_state.multivariate_normal(mean, cov, nums_actions)
    g = NormalFormGame(payoff_profile_array)
    return g
Return a random NormalFormGame instance where the payoff profiles are drawn independently from the standard multi-normal with the covariance of any pair of payoffs equal to `rho`, as studied in [1]_. Parameters ---------- nums_actions : tuple(int) Tuple of the numbers of actions, one for each player. rho : scalar(float) Covariance of a pair of payoff values. Must be in [-1/(N-1), 1], where N is the number of players. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- g : NormalFormGame References ---------- .. [1] Y. Rinott and M. Scarsini, "On the Number of Pure Strategy Nash Equilibria in Random Games," Games and Economic Behavior (2000), 274-293.
f5064:m1
def random_pure_actions(nums_actions, random_state=None):
    random_state = check_random_state(random_state)
    action_profile = tuple(
        [random_state.randint(num_actions) for num_actions in nums_actions]
    )
    return action_profile
Return a tuple of random pure actions (integers). Parameters ---------- nums_actions : tuple(int) Tuple of the numbers of actions, one for each player. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- action_profile : Tuple(int) Tuple of actions, one for each player.
f5064:m2
def random_mixed_actions(nums_actions, random_state=None):
    random_state = check_random_state(random_state)
    action_profile = tuple(
        [probvec(1, num_actions, random_state).ravel()
         for num_actions in nums_actions]
    )
    return action_profile
Return a tuple of random mixed actions (vectors of floats). Parameters ---------- nums_actions : tuple(int) Tuple of the numbers of actions, one for each player. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- action_profile : tuple(ndarray(float, ndim=1)) Tuple of mixed_actions, one for each player.
f5064:m3
def pure_nash_brute(g, tol=None):
    return list(pure_nash_brute_gen(g, tol=tol))
Find all pure Nash equilibria of a normal form game by brute force. Parameters ---------- g : NormalFormGame tol : scalar(float), optional(default=None) Tolerance level used in determining best responses. If None, default to the value of the `tol` attribute of `g`. Returns ------- NEs : list(tuple(int)) List of tuples of Nash equilibrium pure actions. If no pure Nash equilibrium is found, return an empty list. Examples -------- Consider the "Prisoners' Dilemma" game: >>> PD_bimatrix = [[(1, 1), (-2, 3)], ... [(3, -2), (0, 0)]] >>> g_PD = NormalFormGame(PD_bimatrix) >>> pure_nash_brute(g_PD) [(1, 1)] If we consider the "Matching Pennies" game, which has no pure Nash equilibrium: >>> MP_bimatrix = [[(1, -1), (-1, 1)], ... [(-1, 1), (1, -1)]] >>> g_MP = NormalFormGame(MP_bimatrix) >>> pure_nash_brute(g_MP) []
f5065:m0
def pure_nash_brute_gen(g, tol=None):
    for a in np.ndindex(*g.nums_actions):
        if g.is_nash(a, tol=tol):
            yield a
Generator version of `pure_nash_brute`. Parameters ---------- g : NormalFormGame tol : scalar(float), optional(default=None) Tolerance level used in determining best responses. If None, default to the value of the `tol` attribute of `g`. Yields ------ out : tuple(int) Tuple of Nash equilibrium pure actions.
f5065:m1
def support_enumeration(g):
    return list(support_enumeration_gen(g))
Compute mixed-action Nash equilibria with equal support size for a 2-player normal form game by support enumeration. For a non-degenerate game input, these are all the Nash equilibria. The algorithm checks all the equal-size support pairs; if the players have the same number n of actions, there are 2n choose n minus 1 such pairs. This should thus be used only for small games. Parameters ---------- g : NormalFormGame NormalFormGame instance with 2 players. Returns ------- list(tuple(ndarray(float, ndim=1))) List containing tuples of Nash equilibrium mixed actions.
f5066:m0
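A small illustration of `support_enumeration` on the Matching Pennies game from the `pure_nash_brute` docstring above; the import path `quantecon.game_theory` is assumed as in the released package:

from quantecon.game_theory import NormalFormGame, support_enumeration

MP_bimatrix = [[(1, -1), (-1, 1)],
               [(-1, 1), (1, -1)]]
g_MP = NormalFormGame(MP_bimatrix)

# Matching Pennies has no pure equilibrium, but support enumeration
# recovers the unique mixed equilibrium ((0.5, 0.5), (0.5, 0.5)).
for NE in support_enumeration(g_MP):
    print(NE)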
def support_enumeration_gen(g):
    try:
        N = g.N
    except:
        raise TypeError('<STR_LIT>')
    if N != 2:
        raise NotImplementedError('<STR_LIT>')
    return _support_enumeration_gen(g.payoff_arrays)
Generator version of `support_enumeration`. Parameters ---------- g : NormalFormGame NormalFormGame instance with 2 players. Yields ------- tuple(ndarray(float, ndim=1)) Tuple of Nash equilibrium mixed actions.
f5066:m1
@jit(nopython=True)
def _support_enumeration_gen(payoff_matrices):
    nums_actions = payoff_matrices[0].shape
    n_min = min(nums_actions)
    for k in range(1, n_min+1):
        supps = (np.arange(0, k, 1, np.int_), np.empty(k, np.int_))
        actions = (np.empty(k+1), np.empty(k+1))
        A = np.empty((k+1, k+1))
        while supps[0][-1] < nums_actions[0]:
            supps[1][:] = np.arange(k)
            while supps[1][-1] < nums_actions[1]:
                if _indiff_mixed_action(
                    payoff_matrices[0], supps[0], supps[1], A, actions[1]
                ):
                    if _indiff_mixed_action(
                        payoff_matrices[1], supps[1], supps[0], A, actions[0]
                    ):
                        out = (np.zeros(nums_actions[0]),
                               np.zeros(nums_actions[1]))
                        for p, (supp, action) in enumerate(zip(supps, actions)):
                            out[p][supp] = action[:-1]
                        yield out
                next_k_array(supps[1])
            next_k_array(supps[0])
Main body of `support_enumeration_gen`. Parameters ---------- payoff_matrices : tuple(ndarray(float, ndim=2)) Tuple of payoff matrices, of shapes (m, n) and (n, m), respectively. Yields ------ out : tuple(ndarray(float, ndim=1)) Tuple of Nash equilibrium mixed actions, of lengths m and n, respectively.
f5066:m2
@jit(nopython=True, cache=True)
def _indiff_mixed_action(payoff_matrix, own_supp, opp_supp, A, out):
    m = payoff_matrix.shape[0]
    k = len(own_supp)
    for i in range(k):
        for j in range(k):
            A[j, i] = payoff_matrix[own_supp[i], opp_supp[j]]
    A[:-1, -1] = 1
    A[-1, :-1] = -1
    A[-1, -1] = 0
    out[:-1] = 0
    out[-1] = 1
    r = _numba_linalg_solve(A, out)
    if r != 0:
        return False
    for i in range(k):
        if out[i] <= 0:
            return False
    val = out[-1]
    if k == m:
        return True
    own_supp_flags = np.zeros(m, np.bool_)
    own_supp_flags[own_supp] = True
    for i in range(m):
        if not own_supp_flags[i]:
            payoff = 0
            for j in range(k):
                payoff += payoff_matrix[i, opp_supp[j]] * out[j]
            if payoff > val:
                return False
    return True
Given a player's payoff matrix `payoff_matrix`, an array `own_supp` of this player's actions, and an array `opp_supp` of the opponent's actions, each of length k, compute the opponent's mixed action whose support equals `opp_supp` and for which the player is indifferent among the actions in `own_supp`, if any such exists. Return `True` if such a mixed action exists and actions in `own_supp` are indeed best responses to it, in which case the outcome is stored in `out`; `False` otherwise. Array `A` is used in intermediate steps. Parameters ---------- payoff_matrix : ndarray(ndim=2) The player's payoff matrix, of shape (m, n). own_supp : ndarray(int, ndim=1) Array containing the player's action indices, of length k. opp_supp : ndarray(int, ndim=1) Array containing the opponent's action indices, of length k. A : ndarray(float, ndim=2) Array used in intermediate steps, of shape (k+1, k+1). out : ndarray(float, ndim=1) Array of length k+1 to store the k nonzero values of the desired mixed action in `out[:-1]` (and the payoff value in `out[-1]`). Returns ------- bool `True` if a desired mixed action exists and `False` otherwise.
f5066:m3
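For intuition, here is the indifference-plus-normalization linear system that `_indiff_mixed_action` encodes, written out with plain NumPy for player 0 of Matching Pennies with full supports (k = 2). The body above stores the coefficients transposed, which appears to suit its in-place LAPACK-style solver `_numba_linalg_solve`; the names below are illustrative only:

import numpy as np

payoff_matrix = np.array([[1., -1.], [-1., 1.]])   # player 0's payoffs
own_supp = np.array([0, 1])
opp_supp = np.array([0, 1])
k = len(own_supp)

M = np.empty((k + 1, k + 1))
M[:k, :k] = payoff_matrix[np.ix_(own_supp, opp_supp)]  # u(own_i, opp_j)
M[:k, -1] = -1        # minus the common payoff value v in each indifference row
M[-1, :k] = 1         # probabilities sum to one
M[-1, -1] = 0
rhs = np.zeros(k + 1)
rhs[-1] = 1

sol = np.linalg.solve(M, rhs)
y, v = sol[:-1], sol[-1]
print(y)   # opponent's mixed action on opp_supp: [0.5 0.5]
print(v)   # the player's indifferent payoff: 0.0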
def pure2mixed(num_actions, action):
    mixed_action = np.zeros(num_actions)
    mixed_action[action] = 1
    return mixed_action
Convert a pure action to the corresponding mixed action. Parameters ---------- num_actions : scalar(int) The number of the pure actions (= the length of a mixed action). action : scalar(int) The pure action to convert to the corresponding mixed action. Returns ------- ndarray(float, ndim=1) The mixed action representation of the given pure action.
f5067:m2
@jit(nopython=True, cache=True)
def best_response_2p(payoff_matrix, opponent_mixed_action, tol=<NUM_LIT>):
    n, m = payoff_matrix.shape
    payoff_max = -np.inf
    payoff_vector = np.zeros(n)
    for a in range(n):
        for b in range(m):
            payoff_vector[a] += payoff_matrix[a, b] * opponent_mixed_action[b]
        if payoff_vector[a] > payoff_max:
            payoff_max = payoff_vector[a]
    for a in range(n):
        if payoff_vector[a] >= payoff_max - tol:
            return a
Numba-optimized version of `Player.best_response` compiled in nopython mode, specialized for 2-player games (where there is only one opponent). Return the best response action (with the smallest index if more than one) to `opponent_mixed_action` under `payoff_matrix`. Parameters ---------- payoff_matrix : ndarray(float, ndim=2) Payoff matrix. opponent_mixed_action : ndarray(float, ndim=1) Opponent's mixed action. Its length must be equal to `payoff_matrix.shape[1]`. tol : scalar(float), optional(default=None) Tolerance level used in determining best responses. Returns ------- scalar(int) Best response action.
f5067:m3
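A short sketch of calling `best_response_2p` directly on a payoff matrix. The default `tol` is elided above, so it is passed explicitly here; the module path is assumed to be `quantecon.game_theory.normal_form_game` as in the released package:

import numpy as np
from quantecon.game_theory.normal_form_game import best_response_2p

# Player 0's payoff matrix in Matching Pennies.
payoff_matrix = np.array([[1., -1.], [-1., 1.]])

# Against an opponent playing action 0 with probability 0.9,
# the unique best response is action 0.
print(best_response_2p(payoff_matrix, np.array([0.9, 0.1]), tol=1e-8))  # 0

# Against the uniform mixed action both actions tie; the smallest index wins.
print(best_response_2p(payoff_matrix, np.array([0.5, 0.5]), tol=1e-8))  # 0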
def delete_action(self, action, player_idx=0):
    payoff_array_new = np.delete(self.payoff_array, action, player_idx)
    return Player(payoff_array_new)
Return a new `Player` instance with the action(s) specified by `action` deleted from the action set of the player specified by `player_idx`. Deletion is not performed in place. Parameters ---------- action : scalar(int) or array_like(int) Integer or array like of integers representing the action(s) to be deleted. player_idx : scalar(int), optional(default=0) Index of the player to delete action(s) for. Returns ------- Player Copy of `self` with the action(s) deleted as specified. Examples -------- >>> player = Player([[3, 0], [0, 3], [1, 1]]) >>> player Player([[3, 0], [0, 3], [1, 1]]) >>> player.delete_action(2) Player([[3, 0], [0, 3]]) >>> player.delete_action(0, player_idx=1) Player([[0], [3], [1]])
f5067:c0:m3
def payoff_vector(self, opponents_actions):
    def reduce_last_player(payoff_array, action):
        """<STR_LIT>"""
        if isinstance(action, numbers.Integral):
            return payoff_array.take(action, axis=-1)
        else:
            return payoff_array.dot(action)

    if self.num_opponents == 1:
        payoff_vector = reduce_last_player(self.payoff_array, opponents_actions)
    elif self.num_opponents >= 2:
        payoff_vector = self.payoff_array
        for i in reversed(range(self.num_opponents)):
            payoff_vector = reduce_last_player(payoff_vector, opponents_actions[i])
    else:
        payoff_vector = self.payoff_array
    return payoff_vector
Return an array of payoff values, one for each own action, given a profile of the opponents' actions. Parameters ---------- opponents_actions : see `best_response`. Returns ------- payoff_vector : ndarray(float, ndim=1) An array representing the player's payoff vector given the profile of the opponents' actions.
f5067:c0:m4
def is_best_response(self, own_action, opponents_actions, tol=None):
    if tol is None:
        tol = self.tol
    payoff_vector = self.payoff_vector(opponents_actions)
    payoff_max = payoff_vector.max()
    if isinstance(own_action, numbers.Integral):
        return payoff_vector[own_action] >= payoff_max - tol
    else:
        return np.dot(own_action, payoff_vector) >= payoff_max - tol
Return True if `own_action` is a best response to `opponents_actions`. Parameters ---------- own_action : scalar(int) or array_like(float, ndim=1) An integer representing a pure action, or an array of floats representing a mixed action. opponents_actions : see `best_response` tol : scalar(float), optional(default=None) Tolerance level used in determining best responses. If None, default to the value of the `tol` attribute. Returns ------- bool True if `own_action` is a best response to `opponents_actions`; False otherwise.
f5067:c0:m5
def best_response(self, opponents_actions, tie_breaking='smallest',
                  payoff_perturbation=None, tol=None, random_state=None):
    if tol is None:
        tol = self.tol
    payoff_vector = self.payoff_vector(opponents_actions)
    if payoff_perturbation is not None:
        try:
            payoff_vector += payoff_perturbation
        except TypeError:
            payoff_vector = payoff_vector + payoff_perturbation
    best_responses = np.where(payoff_vector >= payoff_vector.max() - tol)[0]
    if tie_breaking == 'smallest':
        return best_responses[0]
    elif tie_breaking == 'random':
        return self.random_choice(best_responses,
                                  random_state=random_state)
    elif tie_breaking is False:
        return best_responses
    else:
        msg = "<STR_LIT>"
        raise ValueError(msg)
Return the best response action(s) to `opponents_actions`. Parameters ---------- opponents_actions : scalar(int) or array_like A profile of N-1 opponents' actions, represented by either scalar(int), array_like(float), array_like(int), or array_like(array_like(float)). If N=2, then it must be a scalar of integer (in which case it is treated as the opponent's pure action) or a 1-dimensional array of floats (in which case it is treated as the opponent's mixed action). If N>2, then it must be an array of N-1 objects, where each object must be an integer (pure action) or an array of floats (mixed action). tie_breaking : str, optional(default='smallest') str in {'smallest', 'random', False}. Control how, or whether, to break a tie (see Returns for details). payoff_perturbation : array_like(float), optional(default=None) Array of length equal to the number of actions of the player containing the values ("noises") to be added to the payoffs in determining the best response. tol : scalar(float), optional(default=None) Tolerance level used in determining best responses. If None, default to the value of the `tol` attribute. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Relevant only when tie_breaking='random'. Returns ------- scalar(int) or ndarray(int, ndim=1) If tie_breaking=False, returns an array containing all the best response pure actions. If tie_breaking='smallest', returns the best response action with the smallest index; if tie_breaking='random', returns an action randomly chosen from the best response actions.
f5067:c0:m6
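A usage sketch of the tie-breaking options, reusing the three-action `Player` from the `delete_action` docstring above; the import path `quantecon.game_theory` is assumed as in the released package:

from quantecon.game_theory import Player

player = Player([[3, 0], [0, 3], [1, 1]])

# Pure opponent action 1: the unique best response is action 1.
print(player.best_response(1))                               # 1

# Uniform opponent mixed action: actions 0 and 1 tie at payoff 1.5,
# while action 2 pays only 1 and is excluded.
print(player.best_response([0.5, 0.5], tie_breaking=False))  # [0 1]

# Break the tie at random, reproducibly via the seed.
print(player.best_response([0.5, 0.5], tie_breaking='random', random_state=0))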
def random_choice(self, actions=None, random_state=None):
    random_state = check_random_state(random_state)
    if actions is not None:
        n = len(actions)
    else:
        n = self.num_actions
    if n == 1:
        idx = 0
    else:
        idx = random_state.randint(n)
    if actions is not None:
        return actions[idx]
    else:
        return idx
Return a pure action chosen randomly from `actions`. Parameters ---------- actions : array_like(int), optional(default=None) An array of integers representing pure actions. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- scalar(int) If `actions` is given, returns an integer representing a pure action chosen randomly from `actions`; if not, an action is chosen randomly from the player's all actions.
f5067:c0:m7
def is_dominated(self, action, tol=None, method=None):
    if tol is None:
        tol = self.tol
    payoff_array = self.payoff_array
    if self.num_opponents == 0:
        return payoff_array.max() > payoff_array[action] + tol
    ind = np.ones(self.num_actions, dtype=bool)
    ind[action] = False
    D = payoff_array[ind]
    D -= payoff_array[action]
    if D.shape[0] == 0:
        return False
    if self.num_opponents >= 2:
        D.shape = (D.shape[0], np.prod(D.shape[1:]))
    if method is None:
        from .lemke_howson import lemke_howson
        g_zero_sum = NormalFormGame([Player(D), Player(-D.T)])
        NE = lemke_howson(g_zero_sum)
        return NE[0] @ D @ NE[1] > tol
    elif method in ['simplex', 'interior-point']:
        from scipy.optimize import linprog
        m, n = D.shape
        A = np.empty((n+2, m+1))
        A[:n, :m] = -D.T
        A[:n, -1] = 1
        A[n, :m], A[n+1, :m] = 1, -1
        A[n:, -1] = 0
        b = np.empty(n+2)
        b[:n] = 0
        b[n], b[n+1] = 1, -1
        c = np.zeros(m+1)
        c[-1] = -1
        res = linprog(c, A_ub=A, b_ub=b, method=method)
        if res.success:
            return res.x[-1] > tol
        elif res.status == 2:
            return False
        else:
            msg = '<STR_LIT>'.format(res.status)
            raise RuntimeError(msg)
    else:
        raise ValueError('<STR_LIT>'.format(method))
Determine whether `action` is strictly dominated by some mixed action. Parameters ---------- action : scalar(int) Integer representing a pure action. tol : scalar(float), optional(default=None) Tolerance level used in determining domination. If None, default to the value of the `tol` attribute. method : str, optional(default=None) If None, `lemke_howson` from `quantecon.game_theory` is used to solve for a Nash equilibrium of an auxiliary zero-sum game. If `method` is set to `'simplex'` or `'interior-point'`, `scipy.optimize.linprog` is used with the method as specified by `method`. Returns ------- bool True if `action` is strictly dominated by some mixed action; False otherwise.
f5067:c0:m8
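A minimal usage sketch of strict-domination checks, with made-up payoffs; the `quantecon.game_theory` import path is assumed, and the `'simplex'` call requires a SciPy version that still ships that `linprog` method:

from quantecon.game_theory import Player

# Action 1 pays strictly less than action 0 against every opponent action,
# so it is strictly dominated (here even by a pure action).
player = Player([[3, 3], [1, 1], [0, 4]])
print(player.is_dominated(1))      # True
print(player.is_dominated(2))      # False: action 2 is a best reply to column 1
print(player.dominated_actions())  # [1]

# Same question posed to the linear-programming backend (older SciPy only).
print(player.is_dominated(1, method='simplex'))  # True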
def dominated_actions(self, tol=None, method=None):
    out = []
    for action in range(self.num_actions):
        if self.is_dominated(action, tol=tol, method=method):
            out.append(action)
    return out
Return a list of actions that are strictly dominated by some mixed actions. Parameters ---------- tol : scalar(float), optional(default=None) Tolerance level used in determining domination. If None, default to the value of the `tol` attribute. method : str, optional(default=None) If None, `lemke_howson` from `quantecon.game_theory` is used to solve for a Nash equilibrium of an auxiliary zero-sum game. If `method` is set to `'simplex'` or `'interior-point'`, `scipy.optimize.linprog` is used with the method as specified by `method`. Returns ------- list(int) List of integers representing pure actions, each of which is strictly dominated by some mixed action.
f5067:c0:m9
def delete_action(self, player_idx, action):
    if -self.N <= player_idx < 0:
        player_idx = player_idx + self.N
    players_new = tuple(
        player.delete_action(action, player_idx-i)
        for i, player in enumerate(self.players)
    )
    return NormalFormGame(players_new)
Return a new `NormalFormGame` instance with the action(s) specified by `action` deleted from the action set of the player specified by `player_idx`. Deletion is not performed in place. Parameters ---------- player_idx : scalar(int) Index of the player to delete action(s) for. action : scalar(int) or array_like(int) Integer or array like of integers representing the action(s) to be deleted. Returns ------- NormalFormGame Copy of `self` with the action(s) deleted as specified. Examples -------- >>> g = NormalFormGame( ... [[(3, 0), (0, 1)], [(0, 0), (3, 1)], [(1, 1), (1, 0)]] ... ) >>> print(g) 2-player NormalFormGame with payoff profile array: [[[3, 0], [0, 1]], [[0, 0], [3, 1]], [[1, 1], [1, 0]]] Delete player `0`'s action `2` from `g`: >>> g1 = g.delete_action(0, 2) >>> print(g1) 2-player NormalFormGame with payoff profile array: [[[3, 0], [0, 1]], [[0, 0], [3, 1]]] Then delete player `1`'s action `0` from `g1`: >>> g2 = g1.delete_action(1, 0) >>> print(g2) 2-player NormalFormGame with payoff profile array: [[[0, 1]], [[3, 1]]]
f5067:c1:m6
def is_nash(self, action_profile, tol=None):
    if self.N == 2:
        for i, player in enumerate(self.players):
            own_action, opponent_action = action_profile[i], action_profile[1-i]
            if not player.is_best_response(own_action, opponent_action, tol):
                return False
    elif self.N >= 3:
        for i, player in enumerate(self.players):
            own_action = action_profile[i]
            opponents_actions = \
                tuple(action_profile[i+1:]) + tuple(action_profile[:i])
            if not player.is_best_response(own_action, opponents_actions, tol):
                return False
    else:
        if not self.players[0].is_best_response(action_profile[0], None, tol):
            return False
    return True
Return True if `action_profile` is a Nash equilibrium. Parameters ---------- action_profile : array_like(int or array_like(float)) An array of N objects, where each object must be an integer (pure action) or an array of floats (mixed action). tol : scalar(float) Tolerance level used in determining best responses. If None, default to each player's `tol` attribute value. Returns ------- bool True if `action_profile` is a Nash equilibrium; False otherwise.
f5067:c1:m7
def update_values(self):
    Q, R, A, B, N, C = self.Q, self.R, self.A, self.B, self.N, self.C
    P, d = self.P, self.d
    S1 = Q + self.beta * dot(B.T, dot(P, B))
    S2 = self.beta * dot(B.T, dot(P, A)) + N
    S3 = self.beta * dot(A.T, dot(P, A))
    self.F = solve(S1, S2)
    new_P = R - dot(S2.T, self.F) + S3
    new_d = self.beta * (d + np.trace(dot(P, dot(C, C.T))))
    self.P, self.d = new_P, new_d
This method is for updating in the finite horizon case. It shifts the current value function .. math:: V_t(x) = x' P_t x + d_t and the optimal policy :math:`F_t` one step *back* in time, replacing the pair :math:`P_t` and :math:`d_t` with :math:`P_{t-1}` and :math:`d_{t-1}`, and :math:`F_t` with :math:`F_{t-1}`
f5068:c0:m3
def stationary_values(self, method='doubling'):
    Q, R, A, B, N, C = self.Q, self.R, self.A, self.B, self.N, self.C
    A0, B0 = np.sqrt(self.beta) * A, np.sqrt(self.beta) * B
    P = solve_discrete_riccati(A0, B0, R, Q, N, method=method)
    S1 = Q + self.beta * dot(B.T, dot(P, B))
    S2 = self.beta * dot(B.T, dot(P, A)) + N
    F = solve(S1, S2)
    d = self.beta * np.trace(dot(P, dot(C, C.T))) / (1 - self.beta)
    self.P, self.F, self.d = P, F, d
    return P, F, d
Computes the matrix :math:`P` and scalar :math:`d` that represent the value function .. math:: V(x) = x' P x + d in the infinite horizon case. Also computes the control matrix :math:`F` from :math:`u = - Fx`. Computation is via the solution algorithm as specified by the `method` option (default to the doubling algorithm) (see the documentation in `matrix_eqn.solve_discrete_riccati`). Parameters ---------- method : str, optional(default='doubling') Solution method used in solving the associated Riccati equation, str in {'doubling', 'qz'}. Returns ------- P : array_like(float) P is part of the value function representation of :math:`V(x) = x'Px + d` F : array_like(float) F is the policy rule that determines the choice of control in each period. d : array_like(float) d is part of the value function representation of :math:`V(x) = x'Px + d`
f5068:c0:m4
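A compact infinite-horizon usage sketch, assuming the `LQ` class wrapping these methods is importable from `quantecon` as in the released package; the scalar regulator numbers are made up for illustration:

import numpy as np
from quantecon import LQ

# Minimize sum of beta^t (u_t' Q u_t + x_t' R x_t)
# subject to x_{t+1} = 0.9 x_t + u_t + 0.1 w_{t+1}.
Q = np.array([[1.0]])      # weight on the control
R = np.array([[1.0]])      # weight on the state
A = np.array([[0.9]])
B = np.array([[1.0]])
C = np.array([[0.1]])

lq = LQ(Q, R, A, B, C=C, beta=0.95)
P, F, d = lq.stationary_values(method='doubling')
print(P, F, d)

# Simulate under the optimal policy u_t = -F x_t.
x_path, u_path, w_path = lq.compute_sequence(np.array([1.0]), ts_length=5,
                                             random_state=0)
print(x_path.shape, u_path.shape)   # (1, 6) (1, 5)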
def compute_sequence(self, x0, ts_length=None, method='doubling',
                     random_state=None):
    A, B, C = self.A, self.B, self.C
    if self.T:
        T = self.T if not ts_length else min(ts_length, self.T)
        self.P, self.d = self.Rf, 0
    else:
        T = ts_length if ts_length else 100
        self.stationary_values(method=method)
    random_state = check_random_state(random_state)
    x0 = np.asarray(x0)
    x0 = x0.reshape(self.n, 1)
    x_path = np.empty((self.n, T+1))
    u_path = np.empty((self.k, T))
    w_path = random_state.randn(self.j, T+1)
    Cw_path = dot(C, w_path)
    policies = []
    for t in range(T):
        if self.T:
            self.update_values()
        policies.append(self.F)
    F = policies.pop()
    x_path[:, 0] = x0.flatten()
    u_path[:, 0] = - dot(F, x0).flatten()
    for t in range(1, T):
        F = policies.pop()
        Ax, Bu = dot(A, x_path[:, t-1]), dot(B, u_path[:, t-1])
        x_path[:, t] = Ax + Bu + Cw_path[:, t]
        u_path[:, t] = - dot(F, x_path[:, t])
    Ax, Bu = dot(A, x_path[:, T-1]), dot(B, u_path[:, T-1])
    x_path[:, T] = Ax + Bu + Cw_path[:, T]
    return x_path, u_path, w_path
Compute and return the optimal state and control sequences :math:`x_0, ..., x_T` and :math:`u_0,..., u_T` under the assumption that :math:`{w_t}` is iid and :math:`N(0, 1)`. Parameters ---------- x0 : array_like(float) The initial state, a vector of length n ts_length : scalar(int) Length of the simulation -- defaults to T in finite case method : str, optional(default='doubling') Solution method used in solving the associated Riccati equation, str in {'doubling', 'qz'}. Only relevant when the `T` attribute is `None` (i.e., the horizon is infinite). random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- x_path : array_like(float) An n x T+1 matrix, where the t-th column represents :math:`x_t` u_path : array_like(float) A k x T matrix, where the t-th column represents :math:`u_t` w_path : array_like(float) A j x T+1 matrix, where the t-th column represent :math:`w_t`
f5068:c0:m5
@contextmanager
def capture(command, *args, **kwargs):
    out, sys.stdout = sys.stdout, StringIO()
    command(*args, **kwargs)
    sys.stdout.seek(0)
    yield sys.stdout.read()
    sys.stdout = out
A context manager to capture std out, so we can write tests that depend on messages that are printed to stdout References ---------- http://schinckel.net/2013/04/15/capture-and-test-sys.stdout-sys. stderr-in-unittest.testcase/ Examples -------- class FooTest(unittest.TestCase): def test_printed_msg(self): with capture(func, *args, **kwargs) as output: self.assertRegexpMatches(output, 'should be in print msg')
f5070:m0
def get_data_dir():
    this_dir = os.path.dirname(__file__)
    data_dir = os.path.join(this_dir, "data")
    return data_dir
Return directory where data is stored
f5070:m1
def get_h5_data_file():
    data_dir = get_data_dir()
    if not exists(data_dir):
        os.mkdir(data_dir)
    data_file = join(data_dir, "<STR_LIT>")
    return tables.open_file(data_file, "a", "<STR_LIT>")
Return the data file used for holding test data. If the data directory or file do not exist, they are created. Notes ----- This should ideally be called from a context manager, as so:: with get_h5_data_file() as f: # do stuff This way we know the file will be closed and cleaned up properly
f5070:m2
def get_h5_data_group(grp_name, parent="/", f=get_h5_data_file()):
    existed = True
    try:
        group = f.getNode(parent + grp_name)
    except:
        existed = False
        msg = "<STR_LIT>".format(grp_name + "<STR_LIT>")
        group = f.create_group(parent, grp_name, msg)
    return existed, group
Try to fetch the group named grp_name from the file f. If it doesn't yet exist, it is created Parameters ---------- grp_name : str A string specifying the name of the new group. This should be only the group name, not including any information about the group's parent (path) parent : str, optional(default="/") The parent or path for where the group should live. If nothing is given, the group will be created at the root node `"/"` f : hdf5 file, optional(default=get_h5_data_file()) The file where this should happen. The default is the data file for these tests Returns ------- existed : bool A boolean specifying whether the group existed or was created group : tables.Group The requested group Examples -------- with get_h5_data_file() as f: my_group = get_h5_data_group("jv") # data for jv tests Notes ----- As with other code dealing with I/O from files, it is best to call this function within a context manager as shown in the example.
f5070:m3
def write_array(f, grp, array, name):
    atom = tables.Atom.from_dtype(array.dtype)
    ds = f.createCArray(grp, name, atom, array.shape)
    ds[:] = array
Store `array` into group `grp` of h5 file `f` under the name `name`.
f5070:m4
def max_abs_diff(a1, a2):
    return np.max(np.abs(a1 - a2))
return max absolute difference between two arrays
f5070:m5
def setUp(self):
    gam = 0
    gamma = np.array([[gam], [0]])
    phic = np.array([[1], [0]])
    phig = np.array([[0], [1]])
    phi1 = <NUM_LIT>
    phii = np.array([[0], [-phi1]])
    deltak = np.array([[<NUM_LIT>]])
    thetak = np.array([[1]])
    beta = np.array([[1 / <NUM_LIT>]])
    ud = np.array([[5, 1, 0], [0, 0, 0]])
    a22 = np.array([[1, 0, 0], [0, <NUM_LIT>, 0], [0, 0, 0.5]])
    c2 = np.array([[0, 1, 0], [0, 0, 1]]).T
    llambda = np.array([[0]])
    pih = np.array([[1]])
    deltah = np.array([[<NUM_LIT>]])
    thetah = np.array([[1]]) - deltah
    ub = np.array([[30, 0, 0]])
    information = (a22, c2, ub, ud)
    technology = (phic, phig, phii, gamma, deltak, thetak)
    preferences = (beta, llambda, pih, deltah, thetah)
    self.dle = DLE(information, technology, preferences)
Given that LQ control is already tested, we test the transformation that alters the problem into a form suitable for solution with LQ.
f5071:c0:m0
def solow_model(t, k, g, n, s, alpha, delta):
    k_dot = s * k**alpha - (g + n + delta) * k
    return k_dot
Equation of motion for capital stock (per unit effective labor). Parameters ---------- t : float Time k : ndarray (float, shape=(1,)) Capital stock (per unit of effective labor) g : float Growth rate of technology. n : float Growth rate of the labor force. s : float Savings rate. Must satisfy `0 < s < 1`. alpha : float Elasticity of output with respect to capital stock. Must satisfy :math:`0 < alpha < 1`. delta : float Depreciation rate of physical capital. Must satisfy :math:`0 < \delta`. Returns ------- k_dot : ndarray (float, shape(1,)) Time derivative of capital stock (per unit effective labor).
f5072:m0
def solow_jacobian(t, k, g, n, s, alpha, delta):
    jac = s * alpha * k**(alpha - 1) - (g + n + delta)
    return jac
Jacobian matrix for the Solow model. Parameters ---------- t : float Time k : ndarray (float, shape=(1,)) Capital stock (per unit of effective labor) g : float Growth rate of technology. n : float Growth rate of the labor force. s : float Savings rate. Must satisfy `0 < s < 1`. alpha : float Elasticity of output with respect to capital stock. Must satisfy :math:`0 < alpha < 1`. delta : float Depreciation rate of physical capital. Must satisfy :math:`0 < \delta`. Returns ------- jac : ndarray (float, shape(1,)) Derivative of the equation of motion for capital stock with respect to k.
f5072:m1
def solow_steady_state(g, n, s, alpha, delta):
    k_star = (s / (n + g + delta))**(1 / (1 - alpha))
    return k_star
Steady-state level of capital stock (per unit effective labor). Parameters ---------- g : float Growth rate of technology. n : float Growth rate of the labor force. s : float Savings rate. Must satisfy `0 < s < 1`. alpha : float Elasticity of output with respect to capital stock. Must satisfy :math:`0 < alpha < 1`. delta : float Depreciation rate of physical capital. Must satisfy :math:`0 < \delta`. Returns ------- kstar : float Steady state value of capital stock (per unit effective labor).
f5072:m2
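A small check, under assumed parameter values, that the steady state returned by `solow_steady_state` makes `solow_model` vanish; this assumes the two definitions above are in scope:

import numpy as np

g, n, s, alpha, delta = 0.02, 0.01, 0.15, 0.33, 0.05

k_star = solow_steady_state(g, n, s, alpha, delta)
print(k_star)

# At the steady state the equation of motion should be (numerically) zero.
print(np.isclose(solow_model(0.0, k_star, g, n, s, alpha, delta), 0.0))  # True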
def solow_analytic_solution(t, k0, g, n, s, alpha, delta):
    lmbda = (n + g + delta) * (1 - alpha)
    k_t = (((s / (n + g + delta)) * (1 - np.exp(-lmbda * t)) +
            k0**(1 - alpha) * np.exp(-lmbda * t))**(1 / (1 - alpha)))
    analytic_traj = np.hstack((t[:, np.newaxis], k_t[:, np.newaxis]))
    return analytic_traj
Analytic solution for the path of capital stock (per unit effective labor). Parameters ---------- t : ndarray(float, shape=(1,)) Time k0 : ndarray (float, shape=(1,)) Initial capital stock (per unit of effective labor) g : float Growth rate of technology. n : float Growth rate of the labor force. s : float Savings rate. Must satisfy `0 < s < 1`. alpha : float Elasticity of output with respect to capital stock. Must satisfy :math:`0 < alpha < 1`. delta : float Depreciation rate of physical capital. Must satisfy :math:`0 < \delta`. Returns ------- soln : ndarray (float, shape(t.size, 2)) Trajectory describing the analytic solution of the model.
f5072:m3
def _compute_fixed_length_solns(model, t0, k0):
    results = {}
    for integrator in ['<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>']:
        discrete_soln = model.solve(t0, k0, h=<NUM_LIT>, T=<NUM_LIT>,
                                    integrator=integrator,
                                    atol=<NUM_LIT>, rtol=<NUM_LIT>)
        results[integrator] = discrete_soln
    return results
Returns a dictionary of fixed length solution trajectories.
f5072:m4
def _termination_condition(t, k, g, n, s, alpha, delta):
    diff = k - solow_steady_state(g, n, s, alpha, delta)
    return diff
Terminate solver when we get close to steady state.
f5072:m5
def _compute_variable_length_solns(model, t0, k0, g, tol):
    results = {}
    for integrator in ['<STR_LIT>', '<STR_LIT>', '<STR_LIT>', '<STR_LIT>']:
        discrete_soln = model.solve(t0, k0, h=<NUM_LIT>, g=g, tol=tol,
                                    integrator=integrator,
                                    atol=<NUM_LIT>, rtol=<NUM_LIT>)
        results[integrator] = discrete_soln
    return results
Returns a dictionary of variable length solution trajectories.
f5072:m6
def list_of_array_equal(s, t):
    eq_(len(s), len(t))
    # A plain loop is used so that every pair is actually compared;
    # wrapping assert_array_equal (which returns None) in all() would
    # short-circuit after the first element.
    for x, y in zip(s, t):
        assert_array_equal(x, y)
Compare two lists of ndarrays s, t: lists of numpy.ndarrays
f5074:m0
def setUp(self):
    self.graphs = Graphs()
    for graph_dict in self.graphs.graph_dicts:
        try:
            weighted = graph_dict['<STR_LIT>']
        except:
            weighted = False
        graph_dict['g'] = DiGraph(graph_dict['A'], weighted=weighted)
Setup Digraph instances
f5074:c1:m0
def solve_discrete_lyapunov(A, B, max_it=50, method="doubling"):
    if method == "doubling":
        A, B = list(map(np.atleast_2d, [A, B]))
        alpha0 = A
        gamma0 = B
        diff = 5
        n_its = 1
        while diff > <NUM_LIT>:
            alpha1 = alpha0.dot(alpha0)
            gamma1 = gamma0 + np.dot(alpha0.dot(gamma0), alpha0.conjugate().T)
            diff = np.max(np.abs(gamma1 - gamma0))
            alpha0 = alpha1
            gamma0 = gamma1
            n_its += 1
            if n_its > max_it:
                msg = "<STR_LIT>"
                raise ValueError(msg.format(n_its))
    elif method == "bartels-stewart":
        gamma1 = sp_solve_discrete_lyapunov(A, B)
    else:
        msg = "<STR_LIT>"
        raise ValueError(msg)
    return gamma1
r""" Computes the solution to the discrete lyapunov equation .. math:: AXA' - X + B = 0 :math:`X` is computed by using a doubling algorithm. In particular, we iterate to convergence on :math:`X_j` with the following recursions for :math:`j = 1, 2, \dots` starting from :math:`X_0 = B`, :math:`a_0 = A`: .. math:: a_j = a_{j-1} a_{j-1} .. math:: X_j = X_{j-1} + a_{j-1} X_{j-1} a_{j-1}' Parameters ---------- A : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity B : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of A have moduli bounded by unity max_it : scalar(int), optional(default=50) The maximum number of iterations method : string, optional(default="doubling") Describes the solution method to use. If it is "doubling" then uses the doubling algorithm to solve, if it is "bartels-stewart" then it uses scipy's implementation of the Bartels-Stewart approach. Returns ------- gamma1: array_like(float, ndim=2) Represents the value :math:`X`
f5095:m0
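A quick check of the defining equation, assuming `solve_discrete_lyapunov` is importable from `quantecon` as in the released package; the matrices are made up for illustration:

import numpy as np
from quantecon import solve_discrete_lyapunov

A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
B = np.eye(2)

X = solve_discrete_lyapunov(A, B, method="doubling")

# X should satisfy A X A' - X + B = 0.
print(np.allclose(A @ X @ A.T - X + B, 0.0))   # True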
def solve_discrete_riccati(A, B, Q, R, N=None, tolerance=1e-10, max_iter=500,
                           method="doubling"):
    methods = ['doubling', 'qz']
    if method not in methods:
        msg = "<STR_LIT>".format(*methods)
        raise ValueError(msg)
    error = tolerance + 1
    fail_msg = "<STR_LIT>"
    A, B, Q, R = np.atleast_2d(A, B, Q, R)
    n, k = R.shape[0], Q.shape[0]
    I = np.identity(k)
    if N is None:
        N = np.zeros((n, k))
    else:
        N = np.atleast_2d(N)
    if method == 'qz':
        X = sp_solve_discrete_are(A, B, Q, R, s=N.T)
        return X
    current_min = np.inf
    candidates = (<NUM_LIT>, 0.1, <NUM_LIT>, 0.5, 1.0, <NUM_LIT>, <NUM_LIT>, <NUM_LIT>, <NUM_LIT>)
    BB = dot(B.T, B)
    BTA = dot(B.T, A)
    for gamma in candidates:
        Z = R + gamma * BB
        cn = np.linalg.cond(Z)
        if cn * EPS < 1:
            Q_tilde = - Q + dot(N.T, solve(Z, N + gamma * BTA)) + gamma * I
            G0 = dot(B, solve(Z, B.T))
            A0 = dot(I - gamma * G0, A) - dot(B, solve(Z, N))
            H0 = gamma * dot(A.T, A0) - Q_tilde
            f1 = np.linalg.cond(Z, np.inf)
            f2 = gamma * f1
            f3 = np.linalg.cond(I + dot(G0, H0))
            f_gamma = max(f1, f2, f3)
            if f_gamma < current_min:
                best_gamma = gamma
                current_min = f_gamma
    if current_min == np.inf:
        msg = "<STR_LIT>"
        raise ValueError(msg)
    gamma = best_gamma
    R_hat = R + gamma * BB
    Q_tilde = - Q + dot(N.T, solve(R_hat, N + gamma * BTA)) + gamma * I
    G0 = dot(B, solve(R_hat, B.T))
    A0 = dot(I - gamma * G0, A) - dot(B, solve(R_hat, N))
    H0 = gamma * dot(A.T, A0) - Q_tilde
    i = 1
    while error > tolerance:
        if i > max_iter:
            raise ValueError(fail_msg.format(i))
        else:
            A1 = dot(A0, solve(I + dot(G0, H0), A0))
            G1 = G0 + dot(dot(A0, G0), solve(I + dot(H0, G0), A0.T))
            H1 = H0 + dot(A0.T, solve(I + dot(H0, G0), dot(H0, A0)))
            error = np.max(np.abs(H1 - H0))
            A0 = A1
            G0 = G1
            H0 = H1
            i += 1
    return H1 + gamma * I
Solves the discrete-time algebraic Riccati equation .. math:: X = A'XA - (N + B'XA)'(B'XB + R)^{-1}(N + B'XA) + Q Computation is via a modified structured doubling algorithm, an explanation of which can be found in the reference below, if `method="doubling"` (default), and via a QZ decomposition method by calling `scipy.linalg.solve_discrete_are` if `method="qz"`. Parameters ---------- A : array_like(float, ndim=2) k x k array. B : array_like(float, ndim=2) k x n array Q : array_like(float, ndim=2) k x k, should be symmetric and non-negative definite R : array_like(float, ndim=2) n x n, should be symmetric and positive definite N : array_like(float, ndim=2) n x k array tolerance : scalar(float), optional(default=1e-10) The tolerance level for convergence max_iter : scalar(int), optional(default=500) The maximum number of iterations allowed method : string, optional(default="doubling") Describes the solution method to use. If it is "doubling" then uses the doubling algorithm to solve, if it is "qz" then it uses `scipy.linalg.solve_discrete_are` (in which case `tolerance` and `max_iter` are irrelevant). Returns ------- X : array_like(float, ndim=2) The fixed point of the Riccati equation; a k x k array representing the approximate solution References ---------- Chiang, Chun-Yueh, Hung-Yuan Fan, and Wen-Wei Lin. "STRUCTURED DOUBLING ALGORITHM FOR DISCRETE-TIME ALGEBRAIC RICCATI EQUATIONS WITH SINGULAR CONTROL WEIGHTING MATRICES." Taiwanese Journal of Mathematics 14, no. 3A (2010): pp-935.
f5095:m1
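A hedged usage sketch that verifies the fixed-point property stated in the docstring (with N = 0), assuming `solve_discrete_riccati` is importable from `quantecon` as in the released package; the small matrices are made up:

import numpy as np
from quantecon import solve_discrete_riccati

# k = 2 states, n = 1 control.
A = np.array([[0.95, 0.0],
              [0.0,  0.9]])
B = np.array([[1.0],
              [0.5]])
Q = np.eye(2)          # k x k weighting (symmetric, PSD)
R = np.array([[1.0]])  # n x n weighting (symmetric, PD)

X = solve_discrete_riccati(A, B, Q, R)

# Check X = A'XA - (B'XA)'(B'XB + R)^{-1}(B'XA) + Q.
rhs = A.T @ X @ A - (B.T @ X @ A).T @ np.linalg.solve(B.T @ X @ B + R,
                                                      B.T @ X @ A) + Q
print(np.allclose(X, rhs))   # True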
def compute_steadystate(self, nnc=2):
    zx = np.eye(self.A0.shape[0]) - self.A0
    self.zz = nullspace(zx)
    self.zz /= self.zz[nnc]
    self.css = self.Sc.dot(self.zz)
    self.sss = self.Ss.dot(self.zz)
    self.iss = self.Si.dot(self.zz)
    self.dss = self.Sd.dot(self.zz)
    self.bss = self.Sb.dot(self.zz)
    self.kss = self.Sk.dot(self.zz)
    self.hss = self.Sh.dot(self.zz)
Computes the non-stochastic steady-state of the economy. Parameters ---------- nnc : scalar(int) Location (index) of the constant in the state vector x_t
f5096:c0:m1
def compute_sequence(self, x0, ts_length=None, Pay=None):
    lq = LQ(self.Q, self.R, self.A, self.B,
            self.C, N=self.W, beta=self.beta)
    xp, up, wp = lq.compute_sequence(x0, ts_length)
    self.h = self.Sh.dot(xp)
    self.k = self.Sk.dot(xp)
    self.i = self.Si.dot(xp)
    self.b = self.Sb.dot(xp)
    self.d = self.Sd.dot(xp)
    self.c = self.Sc.dot(xp)
    self.g = self.Sg.dot(xp)
    self.s = self.Ss.dot(xp)
    e1 = np.zeros((1, self.nc))
    e1[0, 0] = 1
    self.R1_Price = np.empty((ts_length + 1, 1))
    self.R2_Price = np.empty((ts_length + 1, 1))
    self.R5_Price = np.empty((ts_length + 1, 1))
    for i in range(ts_length + 1):
        self.R1_Price[i, 0] = self.beta * e1.dot(self.Mc).dot(
            np.linalg.matrix_power(self.A0, 1)).dot(xp[:, i]) / e1.dot(self.Mc).dot(xp[:, i])
        self.R2_Price[i, 0] = self.beta**2 * e1.dot(self.Mc).dot(
            np.linalg.matrix_power(self.A0, 2)).dot(xp[:, i]) / e1.dot(self.Mc).dot(xp[:, i])
        self.R5_Price[i, 0] = self.beta**5 * e1.dot(self.Mc).dot(
            np.linalg.matrix_power(self.A0, 5)).dot(xp[:, i]) / e1.dot(self.Mc).dot(xp[:, i])
    self.R1_Gross = 1 / self.R1_Price
    self.R1_Net = np.log(1 / self.R1_Price) / 1
    self.R2_Net = np.log(1 / self.R2_Price) / 2
    self.R5_Net = np.log(1 / self.R5_Price) / 5
    if isinstance(Pay, np.ndarray):
        self.Za = Pay.T.dot(self.Mc)
        self.Q = solve_discrete_lyapunov(
            self.A0.T * self.beta**0.5, self.Za)
        self.q = self.beta / (1 - self.beta) * np.trace(self.C.T.dot(self.Q).dot(self.C))
        self.Pay_Price = np.empty((ts_length + 1, 1))
        self.Pay_Gross = np.empty((ts_length + 1, 1))
        self.Pay_Gross[0, 0] = np.nan
        for i in range(ts_length + 1):
            self.Pay_Price[i, 0] = (xp[:, i].T.dot(self.Q).dot(
                xp[:, i]) + self.q) / e1.dot(self.Mc).dot(xp[:, i])
        for i in range(ts_length):
            self.Pay_Gross[i + 1, 0] = self.Pay_Price[i + 1, 0] / (
                self.Pay_Price[i, 0] - Pay.dot(xp[:, i]))
    return
Simulate quantities and prices for the economy Parameters ---------- x0 : array_like(float) The initial state ts_length : scalar(int) Length of the simulation Pay : array_like(float) Vector to price an asset whose payout is Pay*xt
f5096:c0:m2
def irf(self, ts_length=100, shock=None):
    if type(shock) != np.ndarray:
        shock = np.vstack((np.ones((1, 1)), np.zeros((self.nw - 1, 1))))
    self.c_irf = np.empty((ts_length, self.nc))
    self.s_irf = np.empty((ts_length, self.nb))
    self.i_irf = np.empty((ts_length, self.ni))
    self.k_irf = np.empty((ts_length, self.nk))
    self.h_irf = np.empty((ts_length, self.nh))
    self.g_irf = np.empty((ts_length, self.ng))
    self.d_irf = np.empty((ts_length, self.nd))
    self.b_irf = np.empty((ts_length, self.nb))
    for i in range(ts_length):
        self.c_irf[i, :] = self.Sc.dot(
            np.linalg.matrix_power(self.A0, i)).dot(self.C).dot(shock).T
        self.s_irf[i, :] = self.Ss.dot(
            np.linalg.matrix_power(self.A0, i)).dot(self.C).dot(shock).T
        self.i_irf[i, :] = self.Si.dot(
            np.linalg.matrix_power(self.A0, i)).dot(self.C).dot(shock).T
        self.k_irf[i, :] = self.Sk.dot(
            np.linalg.matrix_power(self.A0, i)).dot(self.C).dot(shock).T
        self.h_irf[i, :] = self.Sh.dot(
            np.linalg.matrix_power(self.A0, i)).dot(self.C).dot(shock).T
        self.g_irf[i, :] = self.Sg.dot(
            np.linalg.matrix_power(self.A0, i)).dot(self.C).dot(shock).T
        self.d_irf[i, :] = self.Sd.dot(
            np.linalg.matrix_power(self.A0, i)).dot(self.C).dot(shock).T
        self.b_irf[i, :] = self.Sb.dot(
            np.linalg.matrix_power(self.A0, i)).dot(self.C).dot(shock).T
    return
Create impulse response functions. Parameters ---------- ts_length : scalar(int) Number of periods over which to calculate the IRF shock : array_like(float) Vector of shocks for which to calculate the IRF. Default is the first element of w
f5096:c0:m3
def canonical(self):
    Ac1 = np.hstack((self.deltah, np.zeros((self.nh, self.nz))))
    Ac2 = np.hstack((np.zeros((self.nz, self.nh)), self.a22))
    Ac = np.vstack((Ac1, Ac2))
    Bc = np.vstack((self.thetah, np.zeros((self.nz, self.nc))))
    Cc = np.vstack((np.zeros((self.nh, self.nw)), self.c2))
    Rc1 = np.hstack((self.llambda.T.dot(self.llambda),
                     -self.llambda.T.dot(self.ub)))
    Rc2 = np.hstack((-self.ub.T.dot(self.llambda), self.ub.T.dot(self.ub)))
    Rc = np.vstack((Rc1, Rc2))
    Qc = self.pih.T.dot(self.pih)
    Nc = np.hstack(
        (self.pih.T.dot(self.llambda), -self.pih.T.dot(self.ub)))
    lq_aux = LQ(Qc, Rc, Ac, Bc, N=Nc, beta=self.beta)
    P1, F1, d1 = lq_aux.stationary_values()
    self.F_b = F1[:, 0:self.nh]
    self.F_f = F1[:, self.nh:]
    self.pihat = np.linalg.cholesky(self.pih.T.dot(
        self.pih) + self.beta.dot(self.thetah.T).dot(P1[0:self.nh, 0:self.nh]).dot(self.thetah)).T
    self.llambdahat = self.pihat.dot(self.F_b)
    self.ubhat = - self.pihat.dot(self.F_f)
    return
Compute canonical preference representation Uses auxiliary problem of 9.4.2, with the preference shock process reintroduced Calculates pihat, llambdahat and ubhat for the equivalent canonical household technology
f5096:c0:m4
def cartesian(nodes, order='C'):
    nodes = [np.array(e) for e in nodes]
    shapes = [e.shape[0] for e in nodes]
    dtype = nodes[0].dtype
    n = len(nodes)
    l = np.prod(shapes)
    out = np.zeros((l, n), dtype=dtype)
    if order == 'C':
        repetitions = np.cumprod([1] + shapes[:-1])
    else:
        shapes.reverse()
        sh = [1] + shapes[:-1]
        repetitions = np.cumprod(sh)
        repetitions = repetitions.tolist()
        repetitions.reverse()
    for i in range(n):
        _repeat_1d(nodes[i], repetitions[i], out[:, i])
    return out
Cartesian product of a list of arrays Parameters ---------- nodes : list(array_like(ndim=1)) order : str, optional(default='C') ('C' or 'F') order in which the product is enumerated Returns ------- out : ndarray(ndim=2) each line corresponds to one point of the product space
f5097:m0
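A short usage sketch, assuming `cartesian` as defined above is in scope (it is shipped in the package's grid tools module in the released library):

import numpy as np

nodes = [np.array([0.0, 1.0]), np.array([10.0, 20.0, 30.0])]
print(cartesian(nodes))
# [[ 0. 10.]
#  [ 0. 20.]
#  [ 0. 30.]
#  [ 1. 10.]
#  [ 1. 20.]
#  [ 1. 30.]]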
def mlinspace(a, b, nums, order='C'):
    a = np.array(a, dtype='<STR_LIT>')
    b = np.array(b, dtype='<STR_LIT>')
    nums = np.array(nums, dtype='<STR_LIT>')
    nodes = [np.linspace(a[i], b[i], nums[i]) for i in range(len(nums))]
    return cartesian(nodes, order=order)
Constructs a regular cartesian grid Parameters ---------- a : array_like(ndim=1) lower bounds in each dimension b : array_like(ndim=1) upper bounds in each dimension nums : array_like(ndim=1) number of nodes along each dimension order : str, optional(default='C') ('C' or 'F') order in which the product is enumerated Returns ------- out : ndarray(ndim=2) each line corresponds to one point of the product space
f5097:m1
@njit
def _repeat_1d(x, K, out):
    N = x.shape[0]
    L = out.shape[0] // (K*N)
    for n in range(N):
        val = x[n]
        for k in range(K):
            for l in range(L):
                ind = k*N*L + n*L + l
                out[ind] = val
Repeats each element of a vector many times and repeats the whole result many times Parameters ---------- x : ndarray(ndim=1) vector to be repeated K : scalar(int) number of times each element of x is repeated (inner iterations) out : ndarray(ndim=1) placeholder for the result Returns ------- None
f5097:m2
@jit(nopython=True, cache=True)
def simplex_grid(m, n):
    L = num_compositions_jit(m, n)
    if L == 0:
        raise ValueError(_msg_max_size_exceeded)
    out = np.empty((L, m), dtype=np.int_)
    x = np.zeros(m, dtype=np.int_)
    x[m-1] = n
    for j in range(m):
        out[0, j] = x[j]
    h = m
    for i in range(1, L):
        h -= 1
        val = x[h]
        x[h] = 0
        x[m-1] = val - 1
        x[h-1] += 1
        for j in range(m):
            out[i, j] = x[j]
        if val != 1:
            h = m
    return out
r""" Construct an array consisting of the integer points in the (m-1)-dimensional simplex :math:`\{x \mid x_0 + \cdots + x_{m-1} = n \}`, or equivalently, the m-part compositions of n, which are listed in lexicographic order. The total number of the points (hence the length of the output array) is L = (n+m-1)!/(n!*(m-1)!) (i.e., (n+m-1) choose (m-1)). Parameters ---------- m : scalar(int) Dimension of each point. Must be a positive integer. n : scalar(int) Number which the coordinates of each point sum to. Must be a nonnegative integer. Returns ------- out : ndarray(int, ndim=2) Array of shape (L, m) containing the integer points in the simplex, aligned in lexicographic order. Notes ----- A grid of the (m-1)-dimensional *unit* simplex with n subdivisions along each dimension can be obtained by `simplex_grid(m, n) / n`. Examples -------- >>> simplex_grid(3, 4) array([[0, 0, 4], [0, 1, 3], [0, 2, 2], [0, 3, 1], [0, 4, 0], [1, 0, 3], [1, 1, 2], [1, 2, 1], [1, 3, 0], [2, 0, 2], [2, 1, 1], [2, 2, 0], [3, 0, 1], [3, 1, 0], [4, 0, 0]]) >>> simplex_grid(3, 4) / 4 array([[ 0. , 0. , 1. ], [ 0. , 0.25, 0.75], [ 0. , 0.5 , 0.5 ], [ 0. , 0.75, 0.25], [ 0. , 1. , 0. ], [ 0.25, 0. , 0.75], [ 0.25, 0.25, 0.5 ], [ 0.25, 0.5 , 0.25], [ 0.25, 0.75, 0. ], [ 0.5 , 0. , 0.5 ], [ 0.5 , 0.25, 0.25], [ 0.5 , 0.5 , 0. ], [ 0.75, 0. , 0.25], [ 0.75, 0.25, 0. ], [ 1. , 0. , 0. ]]) References ---------- A. Nijenhuis and H. S. Wilf, Combinatorial Algorithms, Chapter 5, Academic Press, 1978.
f5097:m3
def simplex_index(x, m, n):
    if m == 1:
        return 0
    decumsum = np.cumsum(x[-1:0:-1])[::-1]
    idx = num_compositions(m, n) - 1
    for i in range(m-1):
        if decumsum[i] == 0:
            break
        idx -= num_compositions(m-i, decumsum[i]-1)
    return idx
r""" Return the index of the point x in the lexicographic order of the integer points of the (m-1)-dimensional simplex :math:`\{x \mid x_0 + \cdots + x_{m-1} = n\}`. Parameters ---------- x : array_like(int, ndim=1) Integer point in the simplex, i.e., an array of m nonnegative itegers that sum to n. m : scalar(int) Dimension of each point. Must be a positive integer. n : scalar(int) Number which the coordinates of each point sum to. Must be a nonnegative integer. Returns ------- idx : scalar(int) Index of x.
f5097:m4
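A minimal consistency check for simplex_grid and simplex_index above, together with num_compositions defined just below (a sketch; the import path quantecon.gridtools is an assumption based on the surrounding file and may need adjusting): every row of simplex_grid(m, n) should map back to its own row index.

    import numpy as np
    from quantecon.gridtools import simplex_grid, simplex_index, num_compositions

    m, n = 3, 4
    grid = simplex_grid(m, n)
    assert grid.shape[0] == num_compositions(m, n) == 15

    # each lexicographically ordered row maps back to its own position
    for i, x in enumerate(grid):
        assert simplex_index(x, m, n) == i

    print(simplex_index(np.array([1, 1, 2]), m, n))  # -> 6, matching the grid listed above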
def num_compositions(m, n):
<EOL>return scipy.special.comb(n+m-<NUM_LIT:1>, m-<NUM_LIT:1>, exact=True)<EOL>
The total number of m-part compositions of n, which is equal to (n+m-1) choose (m-1). Parameters ---------- m : scalar(int) Number of parts of composition. n : scalar(int) Integer to decompose. Returns ------- scalar(int) Total number of m-part compositions of n.
f5097:m5
@jit(nopython=True, cache=True)<EOL>def num_compositions_jit(m, n):
return comb_jit(n+m-<NUM_LIT:1>, m-<NUM_LIT:1>)<EOL>
Numba jit version of `num_compositions`. Return `0` if the outcome exceeds the maximum value of `np.intp`.
f5097:m6
def var_quadratic_sum(A, C, H, beta, x0):
<EOL>A, C, H = list(map(np.atleast_2d, (A, C, H)))<EOL>x0 = np.atleast_1d(x0)<EOL>Q = scipy.linalg.solve_discrete_lyapunov(sqrt(beta) * A.T, H)<EOL>cq = dot(dot(C.T, Q), C)<EOL>v = np.trace(cq) * beta / (<NUM_LIT:1> - beta)<EOL>q0 = dot(dot(x0.T, Q), x0) + v<EOL>return q0<EOL>
r""" Computes the expected discounted quadratic sum .. math:: q(x_0) = \mathbb{E} \Big[ \sum_{t=0}^{\infty} \beta^t x_t' H x_t \Big] Here :math:`{x_t}` is the VAR process :math:`x_{t+1} = A x_t + C w_t` with :math:`{x_t}` standard normal and :math:`x_0` the initial condition. Parameters ---------- A : array_like(float, ndim=2) The matrix described above in description. Should be n x n C : array_like(float, ndim=2) The matrix described above in description. Should be n x n H : array_like(float, ndim=2) The matrix described above in description. Should be n x n beta: scalar(float) Should take a value in (0, 1) x_0: array_like(float, ndim=1) The initial condtion. A conformable array (of length n, or with n rows) Returns ------- q0: scalar(float) Represents the value :math:`q(x_0)` Remarks: The formula for computing :math:`q(x_0)` is :math:`q(x_0) = x_0' Q x_0 + v` where * :math:`Q` is the solution to :math:`Q = H + \beta A' Q A`, and * :math:`v = \frac{trace(C' Q C) \beta}{(1 - \beta)}`
f5098:m0
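A small sanity check of the formula stated in the docstring, using a scalar AR(1) so everything has a closed form (a sketch; the top-level import path is an assumption).

    import numpy as np
    from quantecon import var_quadratic_sum

    # scalar case: Q = H / (1 - beta * A**2), v = beta / (1 - beta) * Q * C**2
    A, C, H, beta, x0 = 0.9, 1.0, 1.0, 0.95, 1.0
    Q = H / (1 - beta * A**2)
    q0_closed = Q * x0**2 + beta / (1 - beta) * Q * C**2

    q0 = var_quadratic_sum(A, C, H, beta, x0)
    print(np.allclose(q0, q0_closed))  # expected: True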
def m_quadratic_sum(A, B, max_it=<NUM_LIT:50>):
gamma1 = solve_discrete_lyapunov(A, B, max_it)<EOL>return gamma1<EOL>
r""" Computes the quadratic sum .. math:: V = \sum_{j=0}^{\infty} A^j B A^{j'} V is computed by solving the corresponding discrete lyapunov equation using the doubling algorithm. See the documentation of `util.solve_discrete_lyapunov` for more information. Parameters ---------- A : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of :math:`A` have moduli bounded by unity B : array_like(float, ndim=2) An n x n matrix as described above. We assume in order for convergence that the eigenvalues of :math:`A` have moduli bounded by unity max_it : scalar(int), optional(default=50) The maximum number of iterations Returns ======== gamma1: array_like(float, ndim=2) Represents the value :math:`V`
f5098:m1
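Another scalar sanity check (a sketch; the import path is an assumption): for scalars, V = sum_j A**(2*j) * B = B / (1 - A**2).

    import numpy as np
    from quantecon import m_quadratic_sum

    A, B = 0.5, 1.0
    V = m_quadratic_sum(np.array([[A]]), np.array([[B]]))
    print(np.allclose(V, B / (1 - A**2)))  # expected: True (= 4/3)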
def backward_induction(ddp, T, v_term=None):
n = ddp.num_states<EOL>vs = np.empty((T+<NUM_LIT:1>, n))<EOL>sigmas = np.empty((T, n), dtype=int)<EOL>if v_term is None:<EOL><INDENT>v_term = np.zeros(n)<EOL><DEDENT>vs[T, :] = v_term<EOL>for t in range(T, <NUM_LIT:0>, -<NUM_LIT:1>):<EOL><INDENT>ddp.bellman_operator(vs[t, :], Tv=vs[t-<NUM_LIT:1>, :], sigma=sigmas[t-<NUM_LIT:1>, :])<EOL><DEDENT>return vs, sigmas<EOL>
r""" Solve by backward induction a :math:`T`-period finite horizon discrete dynamic program with stationary reward and transition probability functions :math:`r` and :math:`q` and discount factor :math:`\beta \in [0, 1]`. The optimal value functions :math:`v^*_0, \ldots, v^*_T` and policy functions :math:`\sigma^*_0, \ldots, \sigma^*_{T-1}` are obtained by :math:`v^*_T = v_T`, and .. math:: v^*_{t-1}(s) = \max_{a \in A(s)} r(s, a) + \beta \sum_{s' \in S} q(s'|s, a) v^*_t(s') \quad (s \in S) and .. math:: \sigma^*_{t-1}(s) \in \operatorname*{arg\,max}_{a \in A(s)} r(s, a) + \beta \sum_{s' \in S} q(s'|s, a) v^*_t(s') \quad (s \in S) for :math:`t = T, \ldots, 1`, where the terminal value function :math:`v_T` is exogenously given. Parameters ---------- ddp : DiscreteDP DiscreteDP instance storing reward array `R`, transition probability array `Q`, and discount factor `beta`. T : scalar(int) Number of decision periods. v_term : array_like(float, ndim=1), optional(default=None) Terminal value function, of length equal to n (the number of states). If None, it defaults to the vector of zeros. Returns ------- vs : ndarray(float, ndim=2) Array of shape (T+1, n) where `vs[t]` contains the optimal value function at period `t = 0, ..., T`. sigmas : ndarray(int, ndim=2) Array of shape (T, n) where `sigmas[t]` contains the optimal policy function at period `t = 0, ..., T-1`.
f5099:m0
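A minimal finite-horizon example (the two-state reward and transition data below are made up for illustration, and the import path quantecon.markov is an assumption).

    import numpy as np
    from quantecon.markov import DiscreteDP, backward_induction

    R = np.array([[5.0, 10.0],          # r(s, a), shape (n, m)
                  [-1.0, -1.0]])
    Q = np.array([[[0.5, 0.5],          # q(s'|s, a), shape (n, m, n)
                   [0.0, 1.0]],
                  [[0.0, 1.0],
                   [0.5, 0.5]]])
    ddp = DiscreteDP(R, Q, beta=0.95)

    vs, sigmas = backward_induction(ddp, T=10)   # terminal value defaults to zero
    print(vs.shape, sigmas.shape)                # (11, 2) (10, 2)
    print(sigmas[0])                             # optimal actions at t = 0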
@jit(nopython=True)<EOL>def _has_sorted_sa_indices(s_indices, a_indices):
L = len(s_indices)<EOL>for i in range(L-<NUM_LIT:1>):<EOL><INDENT>if s_indices[i] > s_indices[i+<NUM_LIT:1>]:<EOL><INDENT>return False<EOL><DEDENT>if s_indices[i] == s_indices[i+<NUM_LIT:1>]:<EOL><INDENT>if a_indices[i] >= a_indices[i+<NUM_LIT:1>]:<EOL><INDENT>return False<EOL><DEDENT><DEDENT><DEDENT>return True<EOL>
Check whether `s_indices` and `a_indices` are sorted in lexicographic order. Parameters ---------- s_indices, a_indices : ndarray(ndim=1) Returns ------- bool Whether `s_indices` and `a_indices` are sorted.
f5099:m5
@jit(nopython=True)<EOL>def _generate_a_indptr(num_states, s_indices, out):
idx = <NUM_LIT:0><EOL>out[<NUM_LIT:0>] = <NUM_LIT:0><EOL>for s in range(num_states-<NUM_LIT:1>):<EOL><INDENT>while(s_indices[idx] == s):<EOL><INDENT>idx += <NUM_LIT:1><EOL><DEDENT>out[s+<NUM_LIT:1>] = idx<EOL><DEDENT>out[num_states] = len(s_indices)<EOL>
Generate `a_indptr`; stored in `out`. `s_indices` is assumed to be in sorted order. Parameters ---------- num_states : scalar(int) s_indices : ndarray(int, ndim=1) out : ndarray(int, ndim=1) Length must be num_states+1.
f5099:m6
def _check_action_feasibility(self):
<EOL>R_max = self.s_wise_max(self.R)<EOL>if (R_max == -np.inf).any():<EOL><INDENT>s = np.where(R_max == -np.inf)[<NUM_LIT:0>][<NUM_LIT:0>]<EOL>raise ValueError(<EOL>'<STR_LIT>'<EOL>'<STR_LIT>'.format(s=s)<EOL>)<EOL><DEDENT>if self._sa_pair:<EOL><INDENT>diff = np.diff(self.a_indptr)<EOL>if (diff == <NUM_LIT:0>).any():<EOL><INDENT>s = np.where(diff == <NUM_LIT:0>)[<NUM_LIT:0>][<NUM_LIT:0>]<EOL>raise ValueError(<EOL>'<STR_LIT>'<EOL>'<STR_LIT>'.format(s=s)<EOL>)<EOL><DEDENT><DEDENT>
Check that for every state, reward is finite for some action, and for the case sa_pair is True, that for every state, there is some action available.
f5099:c0:m1
def to_sa_pair_form(self, sparse=True):
if self._sa_pair:<EOL><INDENT>return self<EOL><DEDENT>else:<EOL><INDENT>s_ind, a_ind = np.where(self.R > - np.inf)<EOL>RL = self.R[s_ind, a_ind]<EOL>if sparse:<EOL><INDENT>QL = sp.csr_matrix(self.Q[s_ind, a_ind])<EOL><DEDENT>else:<EOL><INDENT>QL = self.Q[s_ind, a_ind]<EOL><DEDENT>return DiscreteDP(RL, QL, self.beta, s_ind, a_ind)<EOL><DEDENT>
Convert this instance of `DiscreteDP` to SA-pair form Parameters ---------- sparse : bool, optional(default=True) Should the `Q` matrix be stored as a sparse matrix? If true the CSR format is used Returns ------- ddp_sa : DiscreteDP The corresponding DiscreteDP instance in SA-pair form Notes ----- If this instance is already in SA-pair form then it is returned unmodified
f5099:c0:m2
def to_product_form(self):
if self._sa_pair:<EOL><INDENT>ns = self.num_states<EOL>na = self.a_indices.max() + <NUM_LIT:1><EOL>R = np.full((ns, na), -np.inf)<EOL>R[self.s_indices, self.a_indices] = self.R<EOL>Q = np.zeros((ns, na, ns))<EOL>if self._sparse:<EOL><INDENT>_fill_dense_Q(self.s_indices, self.a_indices,<EOL>self.Q.toarray(), Q)<EOL><DEDENT>else:<EOL><INDENT>_fill_dense_Q(self.s_indices, self.a_indices, self.Q, Q)<EOL><DEDENT>return DiscreteDP(R, Q, self.beta)<EOL><DEDENT>else:<EOL><INDENT>return self<EOL><DEDENT>
Convert this instance of `DiscreteDP` to the "product" form. The product form uses the version of the init method taking `R`, `Q` and `beta`. Returns ------- ddp_sa : DiscreteDP The corresponding DiscreteDP instance in product form Notes ----- If this instance is already in product form then it is returned unmodified
f5099:c0:m3
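A round trip between the two representations of the same small DiscreteDP (data made up for illustration; the import path is an assumption).

    import numpy as np
    from quantecon.markov import DiscreteDP

    R = np.array([[5.0, 10.0], [-1.0, -1.0]])
    Q = np.array([[[0.5, 0.5], [0.0, 1.0]],
                  [[0.0, 1.0], [0.5, 0.5]]])
    ddp = DiscreteDP(R, Q, beta=0.95)

    ddp_sa = ddp.to_sa_pair_form()            # R flattened to length L, Q sparse of shape (L, n)
    print(ddp_sa.R.shape, ddp_sa.Q.shape)     # (4,) (4, 2)

    ddp_back = ddp_sa.to_product_form()       # back to (n, m) and (n, m, n) arrays
    print(np.array_equal(ddp_back.R, R))      # True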
def RQ_sigma(self, sigma):
if self._sa_pair:<EOL><INDENT>sigma = np.asarray(sigma)<EOL>sigma_indices = np.empty(self.num_states, dtype=int)<EOL>_find_indices(self.a_indices, self.a_indptr, sigma,<EOL>out=sigma_indices)<EOL>R_sigma, Q_sigma = self.R[sigma_indices], self.Q[sigma_indices]<EOL><DEDENT>else:<EOL><INDENT>R_sigma = self.R[np.arange(self.num_states), sigma]<EOL>Q_sigma = self.Q[np.arange(self.num_states), sigma]<EOL><DEDENT>return R_sigma, Q_sigma<EOL>
Given a policy `sigma`, return the reward vector `R_sigma` and the transition probability matrix `Q_sigma`. Parameters ---------- sigma : array_like(int, ndim=1) Policy vector, of length n. Returns ------- R_sigma : ndarray(float, ndim=1) Reward vector for `sigma`, of length n. Q_sigma : ndarray(float, ndim=2) Transition probability matrix for `sigma`, of shape (n, n).
f5099:c0:m4
def bellman_operator(self, v, Tv=None, sigma=None):
vals = self.R + self.beta * self.Q.dot(v) <EOL>if Tv is None:<EOL><INDENT>Tv = np.empty(self.num_states)<EOL><DEDENT>self.s_wise_max(vals, out=Tv, out_argmax=sigma)<EOL>return Tv<EOL>
The Bellman operator, which computes and returns the updated value function `Tv` for a value function `v`. Parameters ---------- v : array_like(float, ndim=1) Value function vector, of length n. Tv : ndarray(float, ndim=1), optional(default=None) Optional output array for Tv. sigma : ndarray(int, ndim=1), optional(default=None) If not None, the v-greedy policy vector is stored in this array. Must be of length n. Returns ------- Tv : ndarray(float, ndim=1) Updated value function vector, of length n.
f5099:c0:m5
def T_sigma(self, sigma):
R_sigma, Q_sigma = self.RQ_sigma(sigma)<EOL>return lambda v: R_sigma + self.beta * Q_sigma.dot(v)<EOL>
Given a policy `sigma`, return the T_sigma operator. Parameters ---------- sigma : array_like(int, ndim=1) Policy vector, of length n. Returns ------- callable The T_sigma operator.
f5099:c0:m6
def compute_greedy(self, v, sigma=None):
if sigma is None:<EOL><INDENT>sigma = np.empty(self.num_states, dtype=int)<EOL><DEDENT>self.bellman_operator(v, sigma=sigma)<EOL>return sigma<EOL>
Compute the v-greedy policy. Parameters ---------- v : array_like(float, ndim=1) Value function vector, of length n. sigma : ndarray(int, ndim=1), optional(default=None) Optional output array for `sigma`. Returns ------- sigma : ndarray(int, ndim=1) v-greedy policy vector, of length n.
f5099:c0:m7
def evaluate_policy(self, sigma):
if self.beta == <NUM_LIT:1>:<EOL><INDENT>raise NotImplementedError(self._error_msg_no_discounting)<EOL><DEDENT>R_sigma, Q_sigma = self.RQ_sigma(sigma)<EOL>b = R_sigma<EOL>A = self._I - self.beta * Q_sigma<EOL>v_sigma = self._lineq_solve(A, b)<EOL>return v_sigma<EOL>
Compute the value of a policy. Parameters ---------- sigma : array_like(int, ndim=1) Policy vector, of length n. Returns ------- v_sigma : ndarray(float, ndim=1) Value vector of `sigma`, of length n.
f5099:c0:m8
def operator_iteration(self, T, v, max_iter, tol=None, *args, **kwargs):
<EOL>if max_iter <= <NUM_LIT:0>:<EOL><INDENT>return <NUM_LIT:0><EOL><DEDENT>for i in range(max_iter):<EOL><INDENT>new_v = T(v, *args, **kwargs)<EOL>if tol is not None and np.abs(new_v - v).max() < tol:<EOL><INDENT>v[:] = new_v<EOL>break<EOL><DEDENT>v[:] = new_v<EOL><DEDENT>num_iter = i + <NUM_LIT:1><EOL>return num_iter<EOL>
Iteratively apply the operator `T` to `v`. Modify `v` in-place. Iteration is performed at most `max_iter` times. If `tol` is specified, iteration is terminated once the distance of `T(v)` from `v` (in the max norm) is less than `tol`. Parameters ---------- T : callable Operator that acts on `v`. v : ndarray Object on which `T` acts. Modified in-place. max_iter : scalar(int) Maximum number of iterations. tol : scalar(float), optional(default=None) Error tolerance. args, kwargs : Other arguments and keyword arguments that are passed directly to the function T each time it is called. Returns ------- num_iter : scalar(int) Number of iterations performed.
f5099:c0:m9
def solve(self, method='<STR_LIT>',<EOL>v_init=None, epsilon=None, max_iter=None, k=<NUM_LIT:20>):
if method in ['<STR_LIT>', '<STR_LIT>']:<EOL><INDENT>res = self.value_iteration(v_init=v_init,<EOL>epsilon=epsilon,<EOL>max_iter=max_iter)<EOL><DEDENT>elif method in ['<STR_LIT>', '<STR_LIT>']:<EOL><INDENT>res = self.policy_iteration(v_init=v_init,<EOL>max_iter=max_iter)<EOL><DEDENT>elif method in ['<STR_LIT>', '<STR_LIT>']:<EOL><INDENT>res = self.modified_policy_iteration(v_init=v_init,<EOL>epsilon=epsilon,<EOL>max_iter=max_iter,<EOL>k=k)<EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>return res<EOL>
Solve the dynamic programming problem. Parameters ---------- method : str, optional(default='policy_iteration') Solution method, str in {'value_iteration', 'vi', 'policy_iteration', 'pi', 'modified_policy_iteration', 'mpi'}. v_init : array_like(float, ndim=1), optional(default=None) Initial value function, of length n. If None, `v_init` is set such that v_init(s) = max_a r(s, a) for value iteration and policy iteration; for modified policy iteration, v_init(s) = min_(s_next, a) r(s_next, a)/(1 - beta) to guarantee convergence. epsilon : scalar(float), optional(default=None) Value for epsilon-optimality. If None, the value stored in the attribute `epsilon` is used. max_iter : scalar(int), optional(default=None) Maximum number of iterations. If None, the value stored in the attribute `max_iter` is used. k : scalar(int), optional(default=20) Number of iterations for partial policy evaluation in modified policy iteration (irrelevant for other methods). Returns ------- res : DPSolveResult Optimization result represented as a DPSolveResult. See `DPSolveResult` for details.
f5099:c0:m10
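Solving the same small DiscreteDP with each of the three methods; they should agree on the optimal policy (data made up for illustration; the import path is an assumption).

    import numpy as np
    from quantecon.markov import DiscreteDP

    R = np.array([[5.0, 10.0], [-1.0, -1.0]])
    Q = np.array([[[0.5, 0.5], [0.0, 1.0]],
                  [[0.0, 1.0], [0.5, 0.5]]])
    ddp = DiscreteDP(R, Q, beta=0.95)

    res_pi = ddp.solve(method='policy_iteration')
    res_vi = ddp.solve(method='value_iteration', epsilon=1e-4)
    res_mpi = ddp.solve(method='modified_policy_iteration', epsilon=1e-4, k=20)

    print(res_pi.sigma, res_vi.sigma, res_mpi.sigma)   # same optimal policy
    print(res_pi.v)                                    # optimal value function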
def value_iteration(self, v_init=None, epsilon=None, max_iter=None):
if self.beta == <NUM_LIT:1>:<EOL><INDENT>raise NotImplementedError(self._error_msg_no_discounting)<EOL><DEDENT>if max_iter is None:<EOL><INDENT>max_iter = self.max_iter<EOL><DEDENT>if epsilon is None:<EOL><INDENT>epsilon = self.epsilon<EOL><DEDENT>try:<EOL><INDENT>tol = epsilon * (<NUM_LIT:1>-self.beta) / (<NUM_LIT:2>*self.beta)<EOL><DEDENT>except ZeroDivisionError: <EOL><INDENT>tol = np.inf<EOL><DEDENT>v = np.empty(self.num_states)<EOL>if v_init is None:<EOL><INDENT>self.s_wise_max(self.R, out=v)<EOL><DEDENT>else:<EOL><INDENT>v[:] = v_init<EOL><DEDENT>Tv = np.empty(self.num_states)<EOL>num_iter = self.operator_iteration(T=self.bellman_operator,<EOL>v=v, max_iter=max_iter, tol=tol,<EOL>Tv=Tv)<EOL>sigma = self.compute_greedy(v)<EOL>res = DPSolveResult(v=v,<EOL>sigma=sigma,<EOL>num_iter=num_iter,<EOL>mc=self.controlled_mc(sigma),<EOL>method='<STR_LIT>',<EOL>epsilon=epsilon,<EOL>max_iter=max_iter)<EOL>return res<EOL>
Solve the optimization problem by value iteration. See the `solve` method.
f5099:c0:m11
def policy_iteration(self, v_init=None, max_iter=None):
if self.beta == <NUM_LIT:1>:<EOL><INDENT>raise NotImplementedError(self._error_msg_no_discounting)<EOL><DEDENT>if max_iter is None:<EOL><INDENT>max_iter = self.max_iter<EOL><DEDENT>if v_init is None:<EOL><INDENT>v_init = self.s_wise_max(self.R)<EOL><DEDENT>sigma = self.compute_greedy(v_init)<EOL>new_sigma = np.empty(self.num_states, dtype=int)<EOL>for i in range(max_iter):<EOL><INDENT>v_sigma = self.evaluate_policy(sigma)<EOL>self.compute_greedy(v_sigma, sigma=new_sigma)<EOL>if np.array_equal(new_sigma, sigma):<EOL><INDENT>break<EOL><DEDENT>sigma[:] = new_sigma<EOL><DEDENT>num_iter = i + <NUM_LIT:1><EOL>res = DPSolveResult(v=v_sigma,<EOL>sigma=sigma,<EOL>num_iter=num_iter,<EOL>mc=self.controlled_mc(sigma),<EOL>method='<STR_LIT>',<EOL>max_iter=max_iter)<EOL>return res<EOL>
Solve the optimization problem by policy iteration. See the `solve` method.
f5099:c0:m12
def modified_policy_iteration(self, v_init=None, epsilon=None,<EOL>max_iter=None, k=<NUM_LIT:20>):
if self.beta == <NUM_LIT:1>:<EOL><INDENT>raise NotImplementedError(self._error_msg_no_discounting)<EOL><DEDENT>if max_iter is None:<EOL><INDENT>max_iter = self.max_iter<EOL><DEDENT>if epsilon is None:<EOL><INDENT>epsilon = self.epsilon<EOL><DEDENT>def span(z):<EOL><INDENT>return z.max() - z.min()<EOL><DEDENT>def midrange(z):<EOL><INDENT>return (z.min() + z.max()) / <NUM_LIT:2><EOL><DEDENT>v = np.empty(self.num_states)<EOL>if v_init is None:<EOL><INDENT>v[:] = self.R[self.R > -np.inf].min() / (<NUM_LIT:1> - self.beta)<EOL><DEDENT>else:<EOL><INDENT>v[:] = v_init<EOL><DEDENT>u = np.empty(self.num_states)<EOL>sigma = np.empty(self.num_states, dtype=int)<EOL>try:<EOL><INDENT>tol = epsilon * (<NUM_LIT:1>-self.beta) / self.beta<EOL><DEDENT>except ZeroDivisionError: <EOL><INDENT>tol = np.inf<EOL><DEDENT>for i in range(max_iter):<EOL><INDENT>self.bellman_operator(v, Tv=u, sigma=sigma)<EOL>diff = u - v<EOL>if span(diff) < tol:<EOL><INDENT>v[:] = u + midrange(diff) * self.beta / (<NUM_LIT:1> - self.beta)<EOL>break<EOL><DEDENT>self.operator_iteration(T=self.T_sigma(sigma), v=u, max_iter=k)<EOL>v[:] = u<EOL><DEDENT>num_iter = i + <NUM_LIT:1><EOL>res = DPSolveResult(v=v,<EOL>sigma=sigma,<EOL>num_iter=num_iter,<EOL>mc=self.controlled_mc(sigma),<EOL>method='<STR_LIT>',<EOL>epsilon=epsilon,<EOL>max_iter=max_iter,<EOL>k=k)<EOL>return res<EOL>
Solve the optimization problem by modified policy iteration. See the `solve` method.
f5099:c0:m13
def controlled_mc(self, sigma):
_, Q_sigma = self.RQ_sigma(sigma)<EOL>return MarkovChain(Q_sigma)<EOL>
Returns the controlled Markov chain for a given policy `sigma`. Parameters ---------- sigma : array_like(int, ndim=1) Policy vector, of length n. Returns ------- mc : MarkovChain Controlled Markov chain.
f5099:c0:m14
def KMR_Markov_matrix_sequential(N, p, epsilon):
P = np.zeros((N+<NUM_LIT:1>, N+<NUM_LIT:1>), dtype=float)<EOL>P[<NUM_LIT:0>, <NUM_LIT:0>], P[<NUM_LIT:0>, <NUM_LIT:1>] = <NUM_LIT:1> - epsilon * (<NUM_LIT:1>/<NUM_LIT:2>), epsilon * (<NUM_LIT:1>/<NUM_LIT:2>)<EOL>for n in range(<NUM_LIT:1>, N):<EOL><INDENT>P[n, n-<NUM_LIT:1>] =(n/N) * (epsilon * (<NUM_LIT:1>/<NUM_LIT:2>) +<EOL>(<NUM_LIT:1> - epsilon) * (((n-<NUM_LIT:1>)/(N-<NUM_LIT:1>) < p) + ((n-<NUM_LIT:1>)/(N-<NUM_LIT:1>) == p) * (<NUM_LIT:1>/<NUM_LIT:2>))<EOL>)<EOL>P[n, n+<NUM_LIT:1>] =((N-n)/N) * (epsilon * (<NUM_LIT:1>/<NUM_LIT:2>) +<EOL>(<NUM_LIT:1> - epsilon) * ((n/(N-<NUM_LIT:1>) > p) + (n/(N-<NUM_LIT:1>) == p) * (<NUM_LIT:1>/<NUM_LIT:2>))<EOL>)<EOL>P[n, n] = <NUM_LIT:1> - P[n, n-<NUM_LIT:1>] - P[n, n+<NUM_LIT:1>]<EOL><DEDENT>P[N, N-<NUM_LIT:1>], P[N, N] = epsilon * (<NUM_LIT:1>/<NUM_LIT:2>), <NUM_LIT:1> - epsilon * (<NUM_LIT:1>/<NUM_LIT:2>)<EOL>return P<EOL>
Generate the Markov matrix for the KMR model with *sequential* move Parameters ---------- N : int Number of players p : float Level of p-dominance of action 1, i.e., the value of p such that action 1 is the BR for (1-q, q) for any q > p, where q (1-q, resp.) is the prob that the opponent plays action 1 (0, resp.) epsilon : float Probability of mutation Returns ------- P : numpy.ndarray Markov matrix for the KMR model with sequential move
f5101:m0
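Using the matrix above with MarkovChain to get the long-run distribution, mirroring the tests further below (the parameter values are made up for illustration; the quantecon import is an assumption).

    import numpy as np
    from quantecon import MarkovChain

    N, p, epsilon = 5, 1/3, 1e-2
    P = KMR_Markov_matrix_sequential(N, p, epsilon)
    print(np.allclose(P.sum(axis=1), 1))        # each row is a distribution

    mc = MarkovChain(P)
    stationary = mc.stationary_distributions[0]
    print(stationary.sum())                      # ~ 1.0
    print(stationary.argmax())                   # most likely long-run state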
def list_of_array_equal(s, t):
eq_(len(s), len(t))<EOL>for x, y in zip(s, t):<EOL><INDENT>assert_array_equal(x, y)<EOL><DEDENT>
Assert that two lists of ndarrays have equal length and that corresponding arrays are equal. Parameters ---------- s, t : lists of numpy.ndarray
f5105:m0
def KMR_Markov_matrix_sequential(N, p, epsilon):
P = np.zeros((N+<NUM_LIT:1>, N+<NUM_LIT:1>), dtype=float)<EOL>P[<NUM_LIT:0>, <NUM_LIT:0>], P[<NUM_LIT:0>, <NUM_LIT:1>] = <NUM_LIT:1> - epsilon * (<NUM_LIT:1>/<NUM_LIT:2>), epsilon * (<NUM_LIT:1>/<NUM_LIT:2>)<EOL>for n in range(<NUM_LIT:1>, N):<EOL><INDENT>P[n, n-<NUM_LIT:1>] =(n/N) * (epsilon * (<NUM_LIT:1>/<NUM_LIT:2>) +<EOL>(<NUM_LIT:1> - epsilon) * (((n-<NUM_LIT:1>)/(N-<NUM_LIT:1>) < p) + ((n-<NUM_LIT:1>)/(N-<NUM_LIT:1>) == p) * (<NUM_LIT:1>/<NUM_LIT:2>))<EOL>)<EOL>P[n, n+<NUM_LIT:1>] =((N-n)/N) * (epsilon * (<NUM_LIT:1>/<NUM_LIT:2>) +<EOL>(<NUM_LIT:1> - epsilon) * ((n/(N-<NUM_LIT:1>) > p) + (n/(N-<NUM_LIT:1>) == p) * (<NUM_LIT:1>/<NUM_LIT:2>))<EOL>)<EOL>P[n, n] = <NUM_LIT:1> - P[n, n-<NUM_LIT:1>] - P[n, n+<NUM_LIT:1>]<EOL><DEDENT>P[N, N-<NUM_LIT:1>], P[N, N] = epsilon * (<NUM_LIT:1>/<NUM_LIT:2>), <NUM_LIT:1> - epsilon * (<NUM_LIT:1>/<NUM_LIT:2>)<EOL>return P<EOL>
Generate the Markov matrix for the KMR model with *sequential* move N: number of players p: level of p-dominance for action 1 = the value of p such that action 1 is the BR for (1-q, q) for any q > p, where q (1-q, resp.) is the prob that the opponent plays action 1 (0, resp.) epsilon: mutation probability References: KMRMarkovMatrixSequential is contributed from https://github.com/oyamad
f5105:m1
def setUp(self):
self.P = KMR_Markov_matrix_sequential(self.N, self.p, self.epsilon)<EOL>self.mc = MarkovChain(self.P)<EOL>self.stationary = self.mc.stationary_distributions<EOL>stat_shape = self.stationary.shape<EOL>if len(stat_shape) == <NUM_LIT:1>:<EOL><INDENT>self.n_stat_dists = <NUM_LIT:1><EOL><DEDENT>else:<EOL><INDENT>self.n_stat_dists = stat_shape[<NUM_LIT:0>]<EOL><DEDENT>
Setup a KMRMarkovMatrix and Compute Stationary Values
f5105:c0:m0
def rouwenhorst(n, ybar, sigma, rho):
<EOL>y_sd = sqrt(sigma**<NUM_LIT:2> / (<NUM_LIT:1> - rho**<NUM_LIT:2>))<EOL>p = (<NUM_LIT:1> + rho) / <NUM_LIT:2><EOL>q = p<EOL>psi = y_sd * np.sqrt(n - <NUM_LIT:1>)<EOL>ubar = psi<EOL>lbar = -ubar<EOL>bar = np.linspace(lbar, ubar, n)<EOL>def row_build_mat(n, p, q):<EOL><INDENT>"""<STR_LIT>"""<EOL>if n == <NUM_LIT:2>:<EOL><INDENT>theta = np.array([[p, <NUM_LIT:1> - p], [<NUM_LIT:1> - q, q]])<EOL><DEDENT>elif n > <NUM_LIT:2>:<EOL><INDENT>p1 = np.zeros((n, n))<EOL>p2 = np.zeros((n, n))<EOL>p3 = np.zeros((n, n))<EOL>p4 = np.zeros((n, n))<EOL>new_mat = row_build_mat(n - <NUM_LIT:1>, p, q)<EOL>p1[:n - <NUM_LIT:1>, :n - <NUM_LIT:1>] = p * new_mat<EOL>p2[:n - <NUM_LIT:1>, <NUM_LIT:1>:] = (<NUM_LIT:1> - p) * new_mat<EOL>p3[<NUM_LIT:1>:, :-<NUM_LIT:1>] = (<NUM_LIT:1> - q) * new_mat<EOL>p4[<NUM_LIT:1>:, <NUM_LIT:1>:] = q * new_mat<EOL>theta = p1 + p2 + p3 + p4<EOL>theta[<NUM_LIT:1>:n - <NUM_LIT:1>, :] = theta[<NUM_LIT:1>:n - <NUM_LIT:1>, :] / <NUM_LIT:2><EOL><DEDENT>else:<EOL><INDENT>raise ValueError("<STR_LIT>" +<EOL>"<STR_LIT>")<EOL><DEDENT>return theta<EOL><DEDENT>theta = row_build_mat(n, p, q)<EOL>bar += ybar / (<NUM_LIT:1> - rho)<EOL>return MarkovChain(theta, bar)<EOL>
r""" Takes as inputs n, p, q, psi. It will then construct a markov chain that estimates an AR(1) process of: :math:`y_t = \bar{y} + \rho y_{t-1} + \varepsilon_t` where :math:`\varepsilon_t` is i.i.d. normal of mean 0, std dev of sigma The Rouwenhorst approximation uses the following recursive defintion for approximating a distribution: .. math:: \theta_2 = \begin{bmatrix} p & 1 - p \\ 1 - q & q \\ \end{bmatrix} .. math:: \theta_{n+1} = p \begin{bmatrix} \theta_n & 0 \\ 0 & 0 \\ \end{bmatrix} + (1 - p) \begin{bmatrix} 0 & \theta_n \\ 0 & 0 \\ \end{bmatrix} + q \begin{bmatrix} 0 & 0 \\ \theta_n & 0 \\ \end{bmatrix} + (1 - q) \begin{bmatrix} 0 & 0 \\ 0 & \theta_n \\ \end{bmatrix} Parameters ---------- n : int The number of points to approximate the distribution ybar : float The value :math:`\bar{y}` in the process. Note that the mean of this AR(1) process, :math:`y`, is simply :math:`\bar{y}/(1 - \rho)` sigma : float The value of the standard deviation of the :math:`\varepsilon` process rho : float By default this will be 0, but if you are approximating an AR(1) process then this is the autocorrelation across periods Returns ------- mc : MarkovChain An instance of the MarkovChain class that stores the transition matrix and state values returned by the discretization method
f5106:m0
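A minimal usage sketch (keyword names follow the signature defined above; the top-level import path is an assumption and may differ across versions).

    import numpy as np
    from quantecon import rouwenhorst

    mc = rouwenhorst(n=5, ybar=0.0, sigma=0.1, rho=0.9)
    print(mc.state_values)                    # 5 evenly spaced grid points
    print(np.allclose(mc.P.sum(axis=1), 1))   # each row is a distribution
    y = mc.simulate(ts_length=1000, random_state=0)   # simulate the discretized AR(1)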
def tauchen(rho, sigma_u, m=<NUM_LIT:3>, n=<NUM_LIT:7>):
<EOL>std_y = np.sqrt(sigma_u**<NUM_LIT:2> / (<NUM_LIT:1> - rho**<NUM_LIT:2>))<EOL>x_max = m * std_y<EOL>x_min = -x_max<EOL>x = np.linspace(x_min, x_max, n)<EOL>step = (x_max - x_min) / (n - <NUM_LIT:1>)<EOL>half_step = <NUM_LIT:0.5> * step<EOL>P = np.empty((n, n))<EOL>_fill_tauchen(x, P, n, rho, sigma_u, half_step)<EOL>mc = MarkovChain(P, state_values=x)<EOL>return mc<EOL>
r""" Computes a Markov chain associated with a discretized version of the linear Gaussian AR(1) process .. math:: y_{t+1} = \rho y_t + u_{t+1} using Tauchen's method. Here :math:`{u_t}` is an i.i.d. Gaussian process with zero mean. Parameters ---------- rho : scalar(float) The autocorrelation coefficient sigma_u : scalar(float) The standard deviation of the random process m : scalar(int), optional(default=3) The number of standard deviations to approximate out to n : scalar(int), optional(default=7) The number of states to use in the approximation Returns ------- mc : MarkovChain An instance of the MarkovChain class that stores the transition matrix and state values returned by the discretization method
f5106:m1
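A minimal usage sketch (keyword names follow the signature defined above; the top-level import path is an assumption and may differ across versions).

    import numpy as np
    from quantecon import tauchen

    mc = tauchen(rho=0.9, sigma_u=0.1, m=3, n=7)
    print(mc.state_values)                    # 7 points spanning +/- 3 unconditional std devs
    print(np.allclose(mc.P.sum(axis=1), 1))   # each row is a distribution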
def gth_solve(A, overwrite=False, use_jit=True):
A1 = np.array(A, dtype=float, copy=not overwrite, order='<STR_LIT:C>')<EOL>if len(A1.shape) != <NUM_LIT:2> or A1.shape[<NUM_LIT:0>] != A1.shape[<NUM_LIT:1>]:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>n = A1.shape[<NUM_LIT:0>]<EOL>x = np.zeros(n)<EOL>if use_jit:<EOL><INDENT>_gth_solve_jit(A1, x)<EOL>return x<EOL><DEDENT>for k in range(n-<NUM_LIT:1>):<EOL><INDENT>scale = np.sum(A1[k, k+<NUM_LIT:1>:n])<EOL>if scale <= <NUM_LIT:0>:<EOL><INDENT>n = k+<NUM_LIT:1><EOL>break<EOL><DEDENT>A1[k+<NUM_LIT:1>:n, k] /= scale<EOL>A1[k+<NUM_LIT:1>:n, k+<NUM_LIT:1>:n] += np.dot(A1[k+<NUM_LIT:1>:n, k:k+<NUM_LIT:1>], A1[k:k+<NUM_LIT:1>, k+<NUM_LIT:1>:n])<EOL><DEDENT>x[n-<NUM_LIT:1>] = <NUM_LIT:1><EOL>for k in range(n-<NUM_LIT:2>, -<NUM_LIT:1>, -<NUM_LIT:1>):<EOL><INDENT>x[k] = np.dot(x[k+<NUM_LIT:1>:n], A1[k+<NUM_LIT:1>:n, k])<EOL><DEDENT>x /= np.sum(x)<EOL>return x<EOL>
r""" This routine computes the stationary distribution of an irreducible Markov transition matrix (stochastic matrix) or transition rate matrix (generator matrix) `A`. More generally, given a Metzler matrix (square matrix whose off-diagonal entries are all nonnegative) `A`, this routine solves for a nonzero solution `x` to `x (A - D) = 0`, where `D` is the diagonal matrix for which the rows of `A - D` sum to zero (i.e., :math:`D_{ii} = \sum_j A_{ij}` for all :math:`i`). One (and only one, up to normalization) nonzero solution exists corresponding to each reccurent class of `A`, and in particular, if `A` is irreducible, there is a unique solution; when there are more than one solution, the routine returns the solution that contains in its support the first index `i` such that no path connects `i` to any index larger than `i`. The solution is normalized so that its 1-norm equals one. This routine implements the Grassmann-Taksar-Heyman (GTH) algorithm [1]_, a numerically stable variant of Gaussian elimination, where only the off-diagonal entries of `A` are used as the input data. For a nice exposition of the algorithm, see Stewart [2]_, Chapter 10. Parameters ---------- A : array_like(float, ndim=2) Stochastic matrix or generator matrix. Must be of shape n x n. Returns ------- x : numpy.ndarray(float, ndim=1) Stationary distribution of `A`. overwrite : bool, optional(default=False) Whether to overwrite `A`. References ---------- .. [1] W. K. Grassmann, M. I. Taksar and D. P. Heyman, "Regenerative Analysis and Steady State Distributions for Markov Chains," Operations Research (1985), 1107-1116. .. [2] W. J. Stewart, Probability, Markov Chains, Queues, and Simulation, Princeton University Press, 2009.
f5107:m0
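A two-state example with a known answer (a sketch; the import path is an assumption): for P = [[0.4, 0.6], [0.2, 0.8]] the stationary distribution solves pi = pi P, giving pi = (0.25, 0.75).

    import numpy as np
    from quantecon import gth_solve

    P = np.array([[0.4, 0.6],
                  [0.2, 0.8]])
    x = gth_solve(P)
    print(np.allclose(x, [0.25, 0.75]))   # True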
@jit(nopython=True)<EOL>def _gth_solve_jit(A, out):
n = A.shape[<NUM_LIT:0>]<EOL>for k in range(n-<NUM_LIT:1>):<EOL><INDENT>scale = np.sum(A[k, k+<NUM_LIT:1>:n])<EOL>if scale <= <NUM_LIT:0>:<EOL><INDENT>n = k+<NUM_LIT:1><EOL>break<EOL><DEDENT>for i in range(k+<NUM_LIT:1>, n):<EOL><INDENT>A[i, k] /= scale<EOL>for j in range(k+<NUM_LIT:1>, n):<EOL><INDENT>A[i, j] += A[i, k] * A[k, j]<EOL><DEDENT><DEDENT><DEDENT>out[n-<NUM_LIT:1>] = <NUM_LIT:1><EOL>for k in range(n-<NUM_LIT:2>, -<NUM_LIT:1>, -<NUM_LIT:1>):<EOL><INDENT>for i in range(k+<NUM_LIT:1>, n):<EOL><INDENT>out[k] += out[i] * A[i, k]<EOL><DEDENT><DEDENT>norm = np.sum(out)<EOL>for k in range(n):<EOL><INDENT>out[k] /= norm<EOL><DEDENT>
JIT-compiled version of the main routine of gth_solve. Parameters ---------- A : numpy.ndarray(float, ndim=2) Stochastic matrix or generator matrix. Must be of shape n x n. Data will be overwritten. out : numpy.ndarray(float, ndim=1) Output array in which to place the stationary distribution of A.
f5107:m1
@jit(nopython=True)<EOL>def _generate_sample_paths(P_cdfs, init_states, random_values, out):
num_reps, ts_length = out.shape<EOL>for i in range(num_reps):<EOL><INDENT>out[i, <NUM_LIT:0>] = init_states[i]<EOL>for t in range(ts_length-<NUM_LIT:1>):<EOL><INDENT>out[i, t+<NUM_LIT:1>] = searchsorted(P_cdfs[out[i, t]], random_values[i, t])<EOL><DEDENT><DEDENT>
Generate num_reps sample paths of length ts_length, where num_reps = out.shape[0] and ts_length = out.shape[1]. Parameters ---------- P_cdfs : ndarray(float, ndim=2) Array containing as rows the CDFs of the state transition. init_states : array_like(int, ndim=1) Array containing the initial states. Its length must be equal to num_reps. random_values : ndarray(float, ndim=2) Array containing random values from [0, 1). Its shape must be equal to (num_reps, ts_length-1). out : ndarray(int, ndim=2) Array to store the sample paths. Notes ----- This routine is jit-compiled by Numba.
f5109:m0
@jit(nopython=True)<EOL>def _generate_sample_paths_sparse(P_cdfs1d, indices, indptr, init_states,<EOL>random_values, out):
num_reps, ts_length = out.shape<EOL>for i in range(num_reps):<EOL><INDENT>out[i, <NUM_LIT:0>] = init_states[i]<EOL>for t in range(ts_length-<NUM_LIT:1>):<EOL><INDENT>k = searchsorted(P_cdfs1d[indptr[out[i, t]]:indptr[out[i, t]+<NUM_LIT:1>]],<EOL>random_values[i, t])<EOL>out[i, t+<NUM_LIT:1>] = indices[indptr[out[i, t]]+k]<EOL><DEDENT><DEDENT>
For sparse matrix. Generate num_reps sample paths of length ts_length, where num_reps = out.shape[0] and ts_length = out.shape[1]. Parameters ---------- P_cdfs1d : ndarray(float, ndim=1) 1D array containing the CDFs of the state transition. indices : ndarray(int, ndim=1) CSR format index array. indptr : ndarray(int, ndim=1) CSR format index pointer array. init_states : array_like(int, ndim=1) Array containing the initial states. Its length must be equal to num_reps. random_values : ndarray(float, ndim=2) Array containing random values from [0, 1). Its shape must be equal to (num_reps, ts_length-1). out : ndarray(int, ndim=2) Array to store the sample paths. Notes ----- This routine is jit-compiled by Numba.
f5109:m1
def mc_compute_stationary(P):
return MarkovChain(P).stationary_distributions<EOL>
Computes stationary distributions of P, one for each recurrent class. Any stationary distribution is written as a convex combination of these distributions. Parameters ---------- P : array_like(float, ndim=2) A Markov transition matrix. Returns ------- stationary_dists : array_like(float, ndim=2) Array containing the stationary distributions as its rows.
f5109:m2
def mc_sample_path(P, init=<NUM_LIT:0>, sample_size=<NUM_LIT:1000>, random_state=None):
random_state = check_random_state(random_state)<EOL>if isinstance(init, numbers.Integral):<EOL><INDENT>X_0 = init<EOL><DEDENT>else:<EOL><INDENT>cdf0 = np.cumsum(init)<EOL>u_0 = random_state.random_sample()<EOL>X_0 = searchsorted(cdf0, u_0)<EOL><DEDENT>mc = MarkovChain(P)<EOL>return mc.simulate(ts_length=sample_size, init=X_0,<EOL>random_state=random_state)<EOL>
Generates one sample path from the Markov chain represented by (n x n) transition matrix P on state space S = {0, ..., n-1}. Parameters ---------- P : array_like(float, ndim=2) A Markov transition matrix. init : array_like(float ndim=1) or scalar(int), optional(default=0) If init is an array_like, then it is treated as the initial distribution across states. If init is a scalar, then it is treated as the deterministic initial state. sample_size : scalar(int), optional(default=1000) The length of the sample path. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- X : array_like(int, ndim=1) The simulation of states.
f5109:m3
def get_index(self, value):
if self.state_values is None:<EOL><INDENT>state_values_ndim = <NUM_LIT:1><EOL><DEDENT>else:<EOL><INDENT>state_values_ndim = self.state_values.ndim<EOL><DEDENT>values = np.asarray(value)<EOL>if values.ndim <= state_values_ndim - <NUM_LIT:1>:<EOL><INDENT>return self._get_index(value)<EOL><DEDENT>elif values.ndim == state_values_ndim: <EOL><INDENT>k = values.shape[<NUM_LIT:0>]<EOL>idx = np.empty(k, dtype=int)<EOL>for i in range(k):<EOL><INDENT>idx[i] = self._get_index(values[i])<EOL><DEDENT>return idx<EOL><DEDENT>else:<EOL><INDENT>raise ValueError('<STR_LIT>')<EOL><DEDENT>
Return the index (or indices) of the given value (or values) in `state_values`. Parameters ---------- value Value(s) to get the index (indices) for. Returns ------- idx : int or ndarray(int) Index of `value` if `value` is a single state value; array of indices if `value` is an array_like of state values.
f5109:c0:m5
def _get_index(self, value):
error_msg = '<STR_LIT>'.format(value)<EOL>if self.state_values is None:<EOL><INDENT>if isinstance(value, numbers.Integral) and (<NUM_LIT:0> <= value < self.n):<EOL><INDENT>return value<EOL><DEDENT>else:<EOL><INDENT>raise ValueError(error_msg)<EOL><DEDENT><DEDENT>if self.state_values.ndim == <NUM_LIT:1>:<EOL><INDENT>try:<EOL><INDENT>idx = np.where(self.state_values == value)[<NUM_LIT:0>][<NUM_LIT:0>]<EOL>return idx<EOL><DEDENT>except IndexError:<EOL><INDENT>raise ValueError(error_msg)<EOL><DEDENT><DEDENT>else:<EOL><INDENT>idx = <NUM_LIT:0><EOL>while idx < self.n:<EOL><INDENT>if np.array_equal(self.state_values[idx], value):<EOL><INDENT>return idx<EOL><DEDENT>idx += <NUM_LIT:1><EOL><DEDENT>raise ValueError(error_msg)<EOL><DEDENT>
Return the index of the given value in `state_values`. Parameters ---------- value Value to get the index for. Returns ------- idx : int Index of `value`.
f5109:c0:m6
def _compute_stationary(self):
if self.is_irreducible:<EOL><INDENT>if not self.is_sparse: <EOL><INDENT>stationary_dists = gth_solve(self.P).reshape(<NUM_LIT:1>, self.n)<EOL><DEDENT>else: <EOL><INDENT>stationary_dists =gth_solve(self.P.toarray(),<EOL>overwrite=True).reshape(<NUM_LIT:1>, self.n)<EOL><DEDENT><DEDENT>else:<EOL><INDENT>rec_classes = self.recurrent_classes_indices<EOL>stationary_dists = np.zeros((len(rec_classes), self.n))<EOL>for i, rec_class in enumerate(rec_classes):<EOL><INDENT>P_rec_class = self.P[np.ix_(rec_class, rec_class)]<EOL>if self.is_sparse:<EOL><INDENT>P_rec_class = P_rec_class.toarray()<EOL><DEDENT>stationary_dists[i, rec_class] =gth_solve(P_rec_class, overwrite=True)<EOL><DEDENT><DEDENT>self._stationary_dists = stationary_dists<EOL>
Store the stationary distributions in self._stationary_dists.
f5109:c0:m19
def simulate_indices(self, ts_length, init=None, num_reps=None,<EOL>random_state=None):
random_state = check_random_state(random_state)<EOL>dim = <NUM_LIT:1> <EOL>msg_out_of_range = '<STR_LIT>'<EOL>try:<EOL><INDENT>k = len(init) <EOL>dim = <NUM_LIT:2><EOL>init_states = np.asarray(init, dtype=int)<EOL>if (init_states >= self.n).any() or (init_states < -self.n).any():<EOL><INDENT>idx = np.where(<EOL>(init_states >= self.n) + (init_states < -self.n)<EOL>)[<NUM_LIT:0>][<NUM_LIT:0>]<EOL>raise ValueError(msg_out_of_range.format(init=idx))<EOL><DEDENT>if num_reps is not None:<EOL><INDENT>k *= num_reps<EOL>init_states = np.tile(init_states, num_reps)<EOL><DEDENT><DEDENT>except TypeError: <EOL><INDENT>k = <NUM_LIT:1><EOL>if num_reps is not None:<EOL><INDENT>dim = <NUM_LIT:2><EOL>k = num_reps<EOL><DEDENT>if init is None:<EOL><INDENT>init_states = random_state.randint(self.n, size=k)<EOL><DEDENT>elif isinstance(init, numbers.Integral):<EOL><INDENT>if init >= self.n or init < -self.n:<EOL><INDENT>raise ValueError(msg_out_of_range.format(init=init))<EOL><DEDENT>init_states = np.ones(k, dtype=int) * init<EOL><DEDENT>else:<EOL><INDENT>raise ValueError(<EOL>'<STR_LIT>'<EOL>)<EOL><DEDENT><DEDENT>X = np.empty((k, ts_length), dtype=int)<EOL>random_values = random_state.random_sample(size=(k, ts_length-<NUM_LIT:1>))<EOL>if not self.is_sparse: <EOL><INDENT>_generate_sample_paths(<EOL>self.cdfs, init_states, random_values, out=X<EOL>)<EOL><DEDENT>else: <EOL><INDENT>_generate_sample_paths_sparse(<EOL>self.cdfs1d, self.P.indices, self.P.indptr, init_states,<EOL>random_values, out=X<EOL>)<EOL><DEDENT>if dim == <NUM_LIT:1>:<EOL><INDENT>return X[<NUM_LIT:0>]<EOL><DEDENT>else:<EOL><INDENT>return X<EOL><DEDENT>
Simulate time series of state transitions, where state indices are returned. Parameters ---------- ts_length : scalar(int) Length of each simulation. init : int or array_like(int, ndim=1), optional Initial state(s). If None, the initial state is randomly drawn. num_reps : scalar(int), optional(default=None) Number of repetitions of simulation. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- X : ndarray(ndim=1 or 2) Array containing the state values of the sample path(s). See the `simulate` method for more information.
f5109:c0:m23
def simulate(self, ts_length, init=None, num_reps=None, random_state=None):
if init is not None:<EOL><INDENT>init_idx = self.get_index(init)<EOL><DEDENT>else:<EOL><INDENT>init_idx = None<EOL><DEDENT>X = self.simulate_indices(ts_length, init=init_idx, num_reps=num_reps,<EOL>random_state=random_state)<EOL>if self.state_values is not None:<EOL><INDENT>X = self.state_values[X]<EOL><DEDENT>return X<EOL>
Simulate time series of state transitions, where the states are annotated with their values (if `state_values` is not None). Parameters ---------- ts_length : scalar(int) Length of each simulation. init : scalar or array_like, optional(default=None) Initial state values(s). If None, the initial state is randomly drawn. num_reps : scalar(int), optional(default=None) Number of repetitions of simulation. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- X : ndarray(ndim=1 or 2) Array containing the sample path(s), of shape (ts_length,) if init is a scalar (integer) or None and num_reps is None; of shape (k, ts_length) otherwise, where k = len(init) if (init, num_reps) = (array, None), k = num_reps if (init, num_reps) = (int or None, int), and k = len(init)*num_reps if (init, num_reps) = (array, int).
f5109:c0:m24
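A short sketch of simulating annotated state values (data made up for illustration; the import path is an assumption).

    import numpy as np
    from quantecon import MarkovChain

    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
    mc = MarkovChain(P, state_values=['low', 'high'])

    x = mc.simulate(ts_length=5, init='low', random_state=0)               # shape (5,), state values
    X = mc.simulate(ts_length=5, init='low', num_reps=3, random_state=0)   # shape (3, 5)
    print(x)
    print(X.shape)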
@jit(nopython=True)<EOL>def sa_indices(num_states, num_actions):
L = num_states * num_actions<EOL>dtype = np.int_<EOL>s_indices = np.empty(L, dtype=dtype)<EOL>a_indices = np.empty(L, dtype=dtype)<EOL>i = <NUM_LIT:0><EOL>for s in range(num_states):<EOL><INDENT>for a in range(num_actions):<EOL><INDENT>s_indices[i] = s<EOL>a_indices[i] = a<EOL>i += <NUM_LIT:1><EOL><DEDENT><DEDENT>return s_indices, a_indices<EOL>
Generate `s_indices` and `a_indices` for `DiscreteDP`, for the case where all the actions are feasible at every state. Parameters ---------- num_states : scalar(int) Number of states. num_actions : scalar(int) Number of actions. Returns ------- s_indices : ndarray(int, ndim=1) Array containing the state indices. a_indices : ndarray(int, ndim=1) Array containing the action indices. Examples -------- >>> s_indices, a_indices = qe.markov.sa_indices(4, 3) >>> s_indices array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]) >>> a_indices array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2])
f5110:m0
def random_markov_chain(n, k=None, sparse=False, random_state=None):
P = random_stochastic_matrix(n, k, sparse, format='<STR_LIT>',<EOL>random_state=random_state)<EOL>mc = MarkovChain(P)<EOL>return mc<EOL>
Return a randomly sampled MarkovChain instance with n states, where each state has k states with positive transition probability. Parameters ---------- n : scalar(int) Number of states. k : scalar(int), optional(default=None) Number of states that may be reached from each state with positive probability. Set to n if not specified. sparse : bool, optional(default=False) Whether to store the transition probability matrix in sparse matrix form. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- mc : MarkovChain Examples -------- >>> mc = qe.markov.random_markov_chain(3, random_state=1234) >>> mc.P array([[ 0.19151945, 0.43058932, 0.37789123], [ 0.43772774, 0.34763084, 0.21464142], [ 0.27259261, 0.5073832 , 0.22002419]]) >>> mc = qe.markov.random_markov_chain(3, k=2, random_state=1234) >>> mc.P array([[ 0.19151945, 0.80848055, 0. ], [ 0. , 0.62210877, 0.37789123], [ 0.56227226, 0. , 0.43772774]])
f5111:m0
def random_stochastic_matrix(n, k=None, sparse=False, format='<STR_LIT>',<EOL>random_state=None):
P = _random_stochastic_matrix(m=n, n=n, k=k, sparse=sparse, format=format,<EOL>random_state=random_state)<EOL>return P<EOL>
Return a randomly sampled n x n stochastic matrix with k nonzero entries for each row. Parameters ---------- n : scalar(int) Number of states. k : scalar(int), optional(default=None) Number of nonzero entries in each row of the matrix. Set to n if not specified. sparse : bool, optional(default=False) Whether to generate the matrix in sparse matrix form. format : str, optional(default='csr') Sparse matrix format, str in {'bsr', 'csr', 'csc', 'coo', 'lil', 'dia', 'dok'}. Relevant only when sparse=True. random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- P : numpy ndarray or scipy sparse matrix (float, ndim=2) Stochastic matrix. See also -------- random_markov_chain : Return a random MarkovChain instance.
f5111:m1
def _random_stochastic_matrix(m, n, k=None, sparse=False, format='<STR_LIT>',<EOL>random_state=None):
if k is None:<EOL><INDENT>k = n<EOL><DEDENT>probvecs = probvec(m, k, random_state=random_state)<EOL>if k == n:<EOL><INDENT>P = probvecs<EOL>if sparse:<EOL><INDENT>return scipy.sparse.coo_matrix(P).asformat(format)<EOL><DEDENT>else:<EOL><INDENT>return P<EOL><DEDENT><DEDENT>rows = np.repeat(np.arange(m), k)<EOL>cols =sample_without_replacement(<EOL>n, k, num_trials=m, random_state=random_state<EOL>).ravel()<EOL>data = probvecs.ravel()<EOL>if sparse:<EOL><INDENT>P = scipy.sparse.coo_matrix((data, (rows, cols)), shape=(m, n))<EOL>return P.asformat(format)<EOL><DEDENT>else:<EOL><INDENT>P = np.zeros((m, n))<EOL>P[rows, cols] = data<EOL>return P<EOL><DEDENT>
Generate a "non-square stochastic matrix" of shape (m, n), which contains as rows m probability vectors of length n with k nonzero entries. For other parameters, see `random_stochastic_matrix`.
f5111:m2
def random_discrete_dp(num_states, num_actions, beta=None,<EOL>k=None, scale=<NUM_LIT:1>, sparse=False, sa_pair=False,<EOL>random_state=None):
if sparse:<EOL><INDENT>sa_pair = True<EOL><DEDENT>L = num_states * num_actions<EOL>random_state = check_random_state(random_state)<EOL>R = scale * random_state.randn(L)<EOL>Q = _random_stochastic_matrix(L, num_states, k=k,<EOL>sparse=sparse, format='<STR_LIT>',<EOL>random_state=random_state)<EOL>if beta is None:<EOL><INDENT>beta = random_state.random_sample()<EOL><DEDENT>if sa_pair:<EOL><INDENT>s_indices, a_indices = sa_indices(num_states, num_actions)<EOL><DEDENT>else:<EOL><INDENT>s_indices, a_indices = None, None<EOL>R.shape = (num_states, num_actions)<EOL>Q.shape = (num_states, num_actions, num_states)<EOL><DEDENT>ddp = DiscreteDP(R, Q, beta, s_indices, a_indices)<EOL>return ddp<EOL>
Generate a DiscreteDP randomly. The reward values are drawn from the normal distribution with mean 0 and standard deviation `scale`. Parameters ---------- num_states : scalar(int) Number of states. num_actions : scalar(int) Number of actions. beta : scalar(float), optional(default=None) Discount factor. Randomly chosen from [0, 1) if not specified. k : scalar(int), optional(default=None) Number of possible next states for each state-action pair. Equal to `num_states` if not specified. scale : scalar(float), optional(default=1) Standard deviation of the normal distribution for the reward values. sparse : bool, optional(default=False) Whether to store the transition probability array in sparse matrix form. sa_pair : bool, optional(default=False) Whether to represent the data in the state-action pairs formulation. (If `sparse=True`, automatically set `True`.) random_state : int or np.random.RandomState, optional Random seed (integer) or np.random.RandomState instance to set the initial state of the random number generator for reproducibility. If None, a randomly initialized RandomState is used. Returns ------- ddp : DiscreteDP An instance of DiscreteDP.
f5111:m3
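A quick sketch drawing a random DiscreteDP and solving it (the import path quantecon.markov is an assumption).

    from quantecon.markov import random_discrete_dp

    ddp = random_discrete_dp(num_states=5, num_actions=3, beta=0.95,
                             random_state=0)
    res = ddp.solve(method='policy_iteration')
    print(res.sigma)          # optimal action index for each of the 5 states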
@njit<EOL>def func(x):
return (x**<NUM_LIT:3> - <NUM_LIT:1>)<EOL>
Function for testing on.
f5112:m0
@njit<EOL>def func_prime(x):
return (<NUM_LIT:3>*x**<NUM_LIT:2>)<EOL>
Derivative for func.
f5112:m1
@njit<EOL>def func_prime2(x):
return <NUM_LIT:6>*x<EOL>
Second order derivative for func.
f5112:m2
@njit<EOL>def func_two(x):
return np.sin(<NUM_LIT:4> * (x - <NUM_LIT:1>/<NUM_LIT:4>)) + x + x**<NUM_LIT:20> - <NUM_LIT:1><EOL>
Harder function for testing on.
f5112:m3