Column schema (name | dtype | min | max):

ast_errors | stringlengths | 0 | 3.2k
d_id | int64 | 44 | 121k
id | int64 | 70 | 338k
n_whitespaces | int64 | 3 | 14k
path | stringlengths | 8 | 134
n_words | int64 | 4 | 4.82k
n_identifiers | int64 | 1 | 131
random_cut | stringlengths | 16 | 15.8k
commit_message | stringlengths | 2 | 15.3k
fun_name | stringlengths | 1 | 84
commit_id | stringlengths | 40 | 40
repo | stringlengths | 3 | 28
file_name | stringlengths | 5 | 79
ast_levels | int64 | 6 | 31
nloc | int64 | 1 | 548
url | stringlengths | 31 | 59
complexity | int64 | 1 | 66
token_counts | int64 | 6 | 2.13k
n_ast_errors | int64 | 0 | 28
vocab_size | int64 | 4 | 1.11k
n_ast_nodes | int64 | 15 | 19.2k
language | stringclasses | 1 value
documentation | dict
code | stringlengths | 101 | 62.2k

Sample rows (fields labeled per the schema above; the empty ast_errors column is omitted):
d_id: 53,111 | id: 211,511
repo: PaddleDetection | path: ppdet/modeling/rbox_utils.py | file_name: rbox_utils.py | fun_name: box2corners | language: Python
commit_id: 92078713cced4f0d9450a6fc80a449fa75fd8c10 | url: https://github.com/PaddlePaddle/PaddleDetection.git
commit_message: add fcosr model (#6765) * add fcosr * fix some problem * add docs for fcosr * modify code * modify focsr reader * finish tensorrt deployment with dynamic shape * modify according to review comment Co-authored-by: wangxinxin08 <>
metrics: n_whitespaces 214 | n_words 128 | n_identifiers 28 | ast_levels 12 | nloc 21 | complexity 1 | token_counts 287 | n_ast_errors 0 | vocab_size 71 | n_ast_nodes 403
documentation: { "docstring": "convert box coordinate to corners\n Args:\n box (Tensor): (B, N, 5) with (x, y, w, h, alpha) angle is in [0, 90)\n Returns:\n corners (Tensor): (B, N, 4, 2) with (x1, y1, x2, y2, x3, y3, x4, y4)\n ", "language": "en", "n_whitespaces": 61, "n_words": 38, "vocab_size": 32 ...
random_cut: def box2corners(box): B = box.shape[0] x, y, w, h, alpha = paddle.split(box, 5, axis=-1) x4 = paddle.to_tensor( [0.5, 0.5, -0.5, -0.5], dtype=paddle.float32).reshape( (1, 1, 4)) # (1,1,4) x4 = x4 * w # (B, N, 4) y4 = paddle.to_tensor( [-0.5, 0.5, 0.5, -0.5], dtype=...
code: (identical to random_cut above)

d_id: 18,753 | id: 91,256
repo: sentry | path: src/sentry/incidents/subscription_processor.py | file_name: subscription_processor.py | fun_name: get_crash_rate_alert_metrics_aggregation_value | language: Python
commit_id: 65f43fd4e0f1821b468547fc08136bbad9cd8446 | url: https://github.com/getsentry/sentry.git
commit_message: fix(cra-metrics): Count all users in metrics alerts (#34957) Use conditional aggregates in order to get both the total user count and the number of crashed users in the same snuba query. To maintain compatibility until existing subscriptions have been migrated, make the subscription processor able to handle both ...
metrics: n_whitespaces 155 | n_words 29 | n_identifiers 14 | ast_levels 11 | nloc 14 | complexity 2 | token_counts 72 | n_ast_errors 0 | vocab_size 22 | n_ast_nodes 123
documentation: { "docstring": "Handle both update formats. Once all subscriptions have been updated\n to v2, we can remove v1 and replace this function with current v2.\n ", "language": "en", "n_whitespaces": 37, "n_words": 23, "vocab_size": 23 }
random_cut: def get_crash_rate_alert_metrics_aggregation_value(self, subscription_update): rows = subscription_update["values"]["data"] if BaseMetricsEntitySubscription.is_crash_rate_format_v2(rows): version = "v2" result = self._get_crash_rate_alert_metrics_aggregation_value_v2(sub...
code: (identical to random_cut above)

d_id: 26,998 | id: 120,945
repo: jax | path: jax/_src/test_util.py | file_name: test_util.py | fun_name: strict_promotion_if_dtypes_match | language: Python
commit_id: 4c0d61a1435b70760814f1f678cb041d36b8408d | url: https://github.com/google/jax.git
commit_message: Add jtu.strict_promotion_if_dtypes_match utility
metrics: n_whitespaces 20 | n_words 14 | n_identifiers 6 | ast_levels 10 | nloc 4 | complexity 3 | token_counts 35 | n_ast_errors 0 | vocab_size 13 | n_ast_nodes 61
documentation: { "docstring": "\n Context manager to enable strict promotion if all dtypes match,\n and enable standard dtype promotion otherwise.\n ", "language": "en", "n_whitespaces": 20, "n_words": 16, "vocab_size": 14 }
random_cut: def strict_promotion_if_dtypes_match(dtypes): if all(dtype == dtypes[0] for dtype in dtypes): return jax.
code: def strict_promotion_if_dtypes_match(dtypes): if all(dtype == dtypes[0] for dtype in dtypes): return jax.numpy_dtype_promotion('strict') return jax.numpy_dtype_promotion('standard')

d_id: 76,104 | id: 260,170
repo: scikit-learn | path: sklearn/utils/tests/test_param_validation.py | file_name: test_param_validation.py | fun_name: test_stroptions_deprecated_subset | language: Python
commit_id: de659b9dee2054efb4830eff8e98ece60f4a1758 | url: https://github.com/scikit-learn/scikit-learn.git
commit_message: MNT Param validation: Make it possible to mark a constraint as hidden (#23558)
metrics: n_whitespaces 28 | n_words 15 | n_identifiers 7 | ast_levels 12 | nloc 3 | complexity 1 | token_counts 35 | n_ast_errors 0 | vocab_size 15 | n_ast_nodes 68
documentation: { "docstring": "Check that the deprecated parameter must be a subset of options.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
random_cut: def test_stroptions_deprecated_subset(): with pytest.raises(ValueError, match="deprecated options must be a subset"): StrOptions({"a", "b", "c"}, deprecated={"a", "d"})
code: (identical to random_cut above)

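The test in this row exercises one invariant: the `deprecated` set must be a subset of the declared options. A minimal stand-alone sketch of that check (this is not scikit-learn's actual `StrOptions` implementation, just the invariant it enforces):

```python
class StrOptionsSketch:
    """Hypothetical stand-in for the subset check StrOptions performs."""

    def __init__(self, options, deprecated=None):
        deprecated = deprecated or set()
        # The invariant the test above asserts on:
        if not deprecated.issubset(options):
            raise ValueError("deprecated options must be a subset of options")
        self.options = options
        self.deprecated = deprecated

# Valid: "a" is among the declared options.
ok = StrOptionsSketch({"a", "b", "c"}, deprecated={"a"})

# Invalid: "d" is not an option, so construction fails.
try:
    StrOptionsSketch({"a", "b", "c"}, deprecated={"a", "d"})
    raised = False
except ValueError:
    raised = True
```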
d_id: 80,793 | id: 271,557
repo: keras | path: keras/engine/training.py | file_name: training.py | fun_name: _validate_target_and_loss | language: Python
commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf | url: https://github.com/keras-team/keras.git
commit_message: Reformatting the codebase with black. PiperOrigin-RevId: 450093126
metrics: n_whitespaces 337 | n_words 148 | n_identifiers 5 | ast_levels 13 | nloc 12 | complexity 4 | token_counts 38 | n_ast_errors 0 | vocab_size 95 | n_ast_nodes 84
documentation: { "docstring": "Raises error if target or loss is not found.\n\n This method verifies that the target and loss are properly populated\n when applicable, or raises errors.\n\n Args:\n y: the target for training.\n loss: the total loss tensor including loss added via `compile` and...
random_cut: def _validate_target_and_loss(self, y, loss): # `self.loss` references the loss added via `compile` call. If users have # provided such, the target must be provided; otherwise it's a user error. # Note that `self.loss` does not include losses added via `add_loss`, and it # is a...
code: (identical to random_cut above)

d_id: 39,728 | id: 165,883
repo: pandas | path: pandas/core/window/rolling.py | file_name: rolling.py | fun_name: _validate_datetimelike_monotonic | language: Python
commit_id: d2aa44f50f6ac4789d4e351e4e52a53a358da42e | url: https://github.com/pandas-dev/pandas.git
commit_message: BUG: groupby().rolling(freq) with monotonic dates within groups #46065 (#46567)
metrics: n_whitespaces 222 | n_words 52 | n_identifiers 15 | ast_levels 14 | nloc 13 | complexity 6 | token_counts 75 | n_ast_errors 0 | vocab_size 44 | n_ast_nodes 135
documentation: { "docstring": "\n Validate that each group in self._on is monotonic\n ", "language": "en", "n_whitespaces": 23, "n_words": 8, "vocab_size": 8 }
random_cut: def _validate_datetimelike_monotonic(self): # GH 46061 if self._on.hasnans: self._raise_monoton
code: def _validate_datetimelike_monotonic(self): # GH 46061 if self._on.hasnans: self._raise_monotonic_error("values must not have NaT") for group_indices in self._grouper.indices.values(): group_on = self._on.take(group_indices) if not ( g...

d_id: 56,528 | id: 221,825
repo: XX-Net | path: python3.10.4/Lib/ctypes/macholib/framework.py | file_name: framework.py | fun_name: framework_info | language: Python
commit_id: 8198943edd73a363c266633e1aa5b2a9e9c9f526 | url: https://github.com/XX-net/XX-Net.git
commit_message: add python 3.10.4 for windows
metrics: n_whitespaces 31 | n_words 12 | n_identifiers 6 | ast_levels 8 | nloc 5 | complexity 2 | token_counts 26 | n_ast_errors 0 | vocab_size 11 | n_ast_nodes 46
documentation: { "docstring": "\n A framework name can take one of the following four forms:\n Location/Name.framework/Versions/SomeVersion/Name_Suffix\n Location/Name.framework/Versions/SomeVersion/Name\n Location/Name.framework/Name_Suffix\n Location/Name.framework/Name\n\n returns None if not fo...
random_cut: def framework_info(filename): is_framework = STRICT_FRAMEWORK_RE.match(filename) if not is_framework: return None return is_framewo
code: def framework_info(filename): is_framework = STRICT_FRAMEWORK_RE.match(filename) if not is_framework: return None return is_framework.groupdict()

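The docstring in this row lists the four path forms `framework_info` recognizes. A hedged sketch of a regex covering those forms (the real `STRICT_FRAMEWORK_RE` in `ctypes.macholib` may differ in detail; the pattern below is an approximation written for illustration):

```python
import re

# Approximate pattern for: Location/Name.framework[/Versions/V]/Name[_Suffix]
FRAMEWORK_RE = re.compile(
    r"(?P<location>.+)/(?P<name>(?P<shortname>[^/]+)\.framework"
    r"(?:/Versions/(?P<version>[^/]+))?/(?P=shortname)"
    r"(?:_(?P<suffix>[^_/]+))?)$"
)

def framework_info_sketch(filename):
    # Mirror framework_info: return the captured groups as a dict,
    # or None when the path does not look like a framework.
    m = FRAMEWORK_RE.match(filename)
    return m.groupdict() if m else None

info = framework_info_sketch("Location/Name.framework/Versions/A/Name")
miss = framework_info_sketch("usr/lib/libfoo.dylib")
```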
d_id: 13,090 | id: 62,994
repo: transferlearning | path: .venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py | file_name: _in_process.py | fun_name: contained_in | language: Python
commit_id: f638f5d0e6c8ebed0e69a6584bc7f003ec646580 | url: https://github.com/jindongwang/transferlearning.git
commit_message: upd; format
metrics: n_whitespaces 26 | n_words 14 | n_identifiers 8 | ast_levels 11 | nloc 4 | complexity 1 | token_counts 57 | n_ast_errors 0 | vocab_size 12 | n_ast_nodes 91
documentation: { "docstring": "Test if a file is located within the given directory.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
random_cut: def contained_in(filename, directory): filename = os.path.normcase(os.path.abspath(filename)) directory = os.path.normcase(os.path.abspath(directory)) return os.path.commonprefix([f
code: def contained_in(filename, directory): filename = os.path.normcase(os.path.abspath(filename)) directory = os.path.normcase(os.path.abspath(directory)) return os.path.commonprefix([filename, directory]) == directory

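The `code` field here is complete and uses only the standard library, so it can be exercised directly. Note one caveat worth knowing: `os.path.commonprefix` compares character-by-character, not path-segment-by-segment, so `contained_in("/srv/application", "/srv/app")` would also return True. A sketch assuming POSIX-style absolute paths:

```python
import os.path

def contained_in(filename, directory):
    # Same logic as the vendored pep517 helper in the row above.
    filename = os.path.normcase(os.path.abspath(filename))
    directory = os.path.normcase(os.path.abspath(directory))
    return os.path.commonprefix([filename, directory]) == directory

inside = contained_in("/srv/app/config.toml", "/srv/app")
outside = contained_in("/etc/passwd", "/srv/app")
```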
d_id: 6,193 | id: 34,064
repo: transformers | path: src/transformers/activations_tf.py | file_name: activations_tf.py | fun_name: glu | language: Python
commit_id: c4f7eb124b218741d66dd1d86b5d744024a78f6f | url: https://github.com/huggingface/transformers.git
commit_message: add TF glu activation function (#15146)
metrics: n_whitespaces 25 | n_words 17 | n_identifiers 12 | ast_levels 9 | nloc 3 | complexity 1 | token_counts 38 | n_ast_errors 0 | vocab_size 17 | n_ast_nodes 92
documentation: { "docstring": "\n Gated Linear Unit. Implementation as defined in the original paper (see https://arxiv.org/abs/1612.08083), where\n the input `x` is split in two halves across a dimension (`axis`), A and B, returning A * sigmoid(B).\n\n Args:\n `x`: float Tensor to perform activation\n `axis`...
random_cut: def glu(x, axis=-1): a, b = tf.split(x, 2, axis=axis) return a *
code: def glu(x, axis=-1): a, b = tf.split(x, 2, axis=axis) return a * tf.math.sigmoid(b) if version.parse(tf.version.VERSION) >= version.parse("2.4"):

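The GLU in this row splits the input into halves A and B and returns A * sigmoid(B). The same computation can be sketched without TensorFlow for a 1-D list (pure-Python illustration, not the transformers implementation):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def glu_sketch(x):
    # Split the input in two halves A and B, return A * sigmoid(B),
    # mirroring tf.split(x, 2, axis=axis) for a 1-D input.
    half = len(x) // 2
    a, b = x[:half], x[half:]
    return [ai * sigmoid(bi) for ai, bi in zip(a, b)]

out = glu_sketch([1.0, 2.0, 0.0, 0.0])  # sigmoid(0) == 0.5
```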
d_id: 22,471 | id: 106,848
repo: visdom | path: py/visdom/__init__.py | file_name: __init__.py | fun_name: matplot | language: Python
commit_id: 5b8b7f267cfaf76a2a39a727ef31a62b3909a093 | url: https://github.com/fossasia/visdom.git
commit_message: apply black py to all python files
metrics: n_whitespaces 624 | n_words 149 | n_identifiers 41 | ast_levels 18 | nloc 39 | complexity 13 | token_counts 329 | n_ast_errors 0 | vocab_size 88 | n_ast_nodes 543
documentation: { "docstring": "\n This function draws a Matplotlib `plot`. The function supports\n one plot-specific option: `resizable`. When set to `True` the plot\n is resized with the pane. You need `beautifulsoup4` and `lxml`\n packages installed to use this option.\n ", "language": ...
random_cut: def matplot(self, plot, opts=None, env=None, win=None): opts = {} if opts is None else opts _title2str(opts) _assert_opts(opts) # write plot to SVG buffer: buffer = StringIO() plot.savefig(buffer, format="svg") buffer.seek(0) svg = buffer.read() ...
code: (identical to random_cut above)

d_id: 54,514 | id: 216,309
repo: salt | path: salt/modules/consul.py | file_name: consul.py | fun_name: acl_clone | language: Python
commit_id: 50a17432015fb712ec4dc7d3ead79e8939e2bf96 | url: https://github.com/saltstack/salt.git
commit_message: fix(consul): serialize to JSON only non string objects. Fixes 35215
metrics: n_whitespaces 243 | n_words 89 | n_identifiers 14 | ast_levels 13 | nloc 26 | complexity 5 | token_counts 170 | n_ast_errors 0 | vocab_size 51 | n_ast_nodes 306
documentation: { "docstring": "\n Information about an ACL token.\n\n :param consul_url: The Consul server URL.\n :param id: Unique identifier for the ACL to update.\n :return: Boolean, message of success or\n failure, and new ID of cloned ACL.\n\n CLI Example:\n\n .. code-block:: bash\n\n salt ...
random_cut: def acl_clone(consul_url=None, token=None, **kwargs): ret = {} data = {} if not consul_url: consul_url = _get_config() if not consul_url: log.error("No Consul URL found.") ret["message"] = "No Consul URL found." ret["res"] = False return r...
code: (identical to random_cut above)

d_id: 40,253 | id: 168,242
repo: pandas | path: pandas/core/indexes/datetimes.py | file_name: datetimes.py | fun_name: slice_indexer | language: Python
commit_id: 2f8d0a36703e81e4dca52ca9fe4f58c910c1b304 | url: https://github.com/pandas-dev/pandas.git
commit_message: PERF cache find_stack_level (#48023) cache stacklevel
metrics: n_whitespaces 178 | n_words 81 | n_identifiers 14 | ast_levels 12 | nloc 38 | complexity 14 | token_counts 269 | n_ast_errors 0 | vocab_size 63 | n_ast_nodes 149
documentation: { "docstring": "\n Return indexer for specified label slice.\n Index.slice_indexer, customized to handle time slicing.\n\n In addition to functionality provided by Index.slice_indexer, does the\n following:\n\n - if both `start` and `end` are instances of `datetime.time`, it\n ...
random_cut: def slice_indexer(self, start=None, end=None, step=None, kind=lib.no_default): self._deprecated_arg(kind, "kind", "slice_indexer") # For historical reasons DatetimeIndex supports slices between two # instances of datetime.time as if it were applying a slice mask to # an
code: def slice_indexer(self, start=None, end=None, step=None, kind=lib.no_default): self._deprecated_arg(kind, "kind", "slice_indexer") # For historical reasons DatetimeIndex supports slices between two # instances of datetime.time as if it were applying a slice mask to # an array o...

d_id: 34,618 | id: 149,967
repo: freqtrade | path: freqtrade/persistence/migrations.py | file_name: migrations.py | fun_name: fix_old_dry_orders | language: Python
commit_id: c0ff554d5be871098cd10424fdd579322b5370df | url: https://github.com/freqtrade/freqtrade.git
commit_message: Cleanup old, left open dry-run orders
metrics: n_whitespaces 120 | n_words 14 | n_identifiers 6 | ast_levels 12 | nloc 27 | complexity 1 | token_counts 32 | n_ast_errors 0 | vocab_size 9 | n_ast_nodes 61
documentation: { "docstring": "\n update orders\n set ft_is_open = 0\n where ft_is_open = 1 and (ft_trade_id, order_id) not in (\n select id, stoploss_order_id from trades where stoploss_order_id is not null\n ) and ft_order_side = 'stoploss'\n ...
random_cut: def fix_old_dry_orders(engine):
code: def fix_old_dry_orders(engine): with engine.begin() as connection: connection.execute( text( ) ) connection.execute( text( ) )

d_id: 73,833 | id: 251,829
repo: mitmproxy | path: test/mitmproxy/proxy/layers/http/hyper_h2_test_helpers.py | file_name: hyper_h2_test_helpers.py | fun_name: build_data_frame | language: Python
commit_id: b3587b52b25077f68116b9852b041d33e7fc6601 | url: https://github.com/mitmproxy/mitmproxy.git
commit_message: make it black!
metrics: n_whitespaces 104 | n_words 33 | n_identifiers 11 | ast_levels 10 | nloc 9 | complexity 3 | token_counts 67 | n_ast_errors 0 | vocab_size 25 | n_ast_nodes 107
documentation: { "docstring": "\n Builds a single data frame out of a chunk of data.\n ", "language": "en", "n_whitespaces": 26, "n_words": 11, "vocab_size": 9 }
random_cut: def build_data_frame(self, data, flags=None, stream_id=1, padding_len=0): flags = set(flags) if flags is not None else set() f = DataFrame(stream_id) f.data = data f.flags = flags if padding_len: flags.add("PADDED") f.pad_length = padding_len ...
code: (identical to random_cut above)

d_id: 2,421 | id: 12,837
repo: jina | path: jina/parsers/dryrun.py | file_name: dryrun.py | fun_name: set_dryrun_parser | language: Python
commit_id: 124045351137d80d118f9692de4295d50561f1e1 | url: https://github.com/jina-ai/jina.git
commit_message: feat: add dryrun to cli (#5050) * feat: add dryrun to cli * style: fix overload and cli autocomplete * feat: add parsing for dryrun * feat: update checker dryrun * docs: add dryrun cli to healt check page * style: fix overload and cli autocomplete * feat: add exit Co-authored-by: Jina Dev Bot <dev...
metrics: n_whitespaces 106 | n_words 29 | n_identifiers 9 | ast_levels 10 | nloc 18 | complexity 2 | token_counts 53 | n_ast_errors 0 | vocab_size 26 | n_ast_nodes 90
documentation: { "docstring": "Set the parser for `dryrun`\n\n :param parser: an existing parser to build upon\n :return: the parser\n \nTimeout in millisecond of one check\n-1 for waiting forever\n", "language": "en", "n_whitespaces": 33, "n_words": 26, "vocab_size": 22 }
random_cut: def set_dryrun_parser(parser=None): if not parser: parser = set_base_parser() parser.add_argument( 'host', type=str, help='The full host address of the Gateway, e.g. grpc://localhost:12345', ) parser.add_argument( '--timeout', type=int, defa...
code: (identical to random_cut above)

d_id: 12,521 | id: 61,339
repo: transferlearning | path: .venv/lib/python3.8/site-packages/pip/_internal/utils/wheel.py | file_name: wheel.py | fun_name: wheel_dist_info_dir | language: Python
commit_id: f638f5d0e6c8ebed0e69a6584bc7f003ec646580 | url: https://github.com/jindongwang/transferlearning.git
commit_message: upd; format
metrics: n_whitespaces 202 | n_words 83 | n_identifiers 19 | ast_levels 14 | nloc 19 | complexity 7 | token_counts 120 | n_ast_errors 0 | vocab_size 61 | n_ast_nodes 204
documentation: { "docstring": "Returns the name of the contained .dist-info directory.\n\n Raises AssertionError or UnsupportedWheel if not found, >1 found, or\n it doesn't match the provided name.\n ", "language": "en", "n_whitespaces": 33, "n_words": 24, "vocab_size": 20 }
random_cut: def wheel_dist_info_dir(source, name): # type: (ZipFile, str) -> str # Zip file path separators must be / subdirs = {p.split("/", 1)[0] for p in source.namelist()} info_dirs = [s for s in subdirs if s.endswith(".dist-info")] if not info_dirs: raise UnsupportedWheel(".dist-info directo...
code: (identical to random_cut above)

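The visible part of this row's code shows how pip discovers the `.dist-info` directory: collect each entry's top-level path segment, then keep the ones ending in `.dist-info`. A simplified, self-contained sketch of that step against an in-memory wheel (pip's real version also canonicalizes and checks the project name, and raises `UnsupportedWheel` rather than `ValueError`):

```python
import io
import zipfile

def dist_info_dir(source):
    # Zip file path separators are always "/", so split on that.
    subdirs = {p.split("/", 1)[0] for p in source.namelist()}
    info_dirs = [s for s in subdirs if s.endswith(".dist-info")]
    if len(info_dirs) != 1:
        raise ValueError(".dist-info directory not found or ambiguous")
    return info_dirs[0]

# Build a tiny wheel-like zip in memory (hypothetical package "pkg").
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("pkg/__init__.py", "")
    zf.writestr("pkg-1.0.dist-info/METADATA", "Name: pkg")

with zipfile.ZipFile(buf) as wheel:
    found = dist_info_dir(wheel)
```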
d_id: 38,667 | id: 160,611
repo: numpy | path: numpy/lib/arraysetops.py | file_name: arraysetops.py | fun_name: in1d | language: Python
commit_id: cedba623b110caf83f46edfa38cb4fbc0191e285 | url: https://github.com/numpy/numpy.git
commit_message: MAINT: Optimize np.isin for integer arrays - This optimization indexes with an intermediary boolean array to speed up numpy.isin and numpy.in1d for integer arrays over a range of optimal parameters which are calculated.
metrics: n_whitespaces 976 | n_words 367 | n_identifiers 51 | ast_levels 17 | nloc 59 | complexity 16 | token_counts 504 | n_ast_errors 0 | vocab_size 205 | n_ast_nodes 792
documentation: { "docstring": "\n Test whether each element of a 1-D array is also present in a second array.\n\n Returns a boolean array the same length as `ar1` that is True\n where an element of `ar1` is in `ar2` and False otherwise.\n\n We recommend using :func:`isin` instead of `in1d` for new code.\n\n Parameter...
random_cut: def in1d(ar1, ar2, assume_unique=False, invert=False): # Ravel both arrays, behavior for the first array could be different ar1 = np.asarray(ar1).ravel() ar2 = np.asarray(ar2).ravel() # Ensure that iteration through object arrays yields size
code: def in1d(ar1, ar2, assume_unique=False, invert=False): # Ravel both arrays, behavior for the first array could be different ar1 = np.asarray(ar1).ravel() ar2 = np.asarray(ar2).ravel() # Ensure that iteration through object arrays yields size-1 arrays if ar2.dtype == object: ar2 = ar2.r...

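The docstring in this row describes `in1d`'s contract: an elementwise membership mask over `ar1`. Its semantics for 1-D inputs can be sketched in pure Python (NumPy's version is vectorized and, per this row's commit message, uses a boolean-index fast path for small-range integer inputs):

```python
def in1d_sketch(ar1, ar2, invert=False):
    # True where an element of ar1 is in ar2; invert flips the mask,
    # mirroring np.in1d's invert= parameter.
    lookup = set(ar2)
    return [(x not in lookup) if invert else (x in lookup) for x in ar1]

mask = in1d_sketch([1, 2, 3, 4], [2, 4])
inv = in1d_sketch([1, 2], [2], invert=True)
```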
d_id: 42,201 | id: 176,973
repo: networkx | path: networkx/algorithms/centrality/degree_alg.py | file_name: degree_alg.py | fun_name: out_degree_centrality | language: Python
commit_id: b8d1438e4ea3d8190c650110b3b7d7c141224842 | url: https://github.com/networkx/networkx.git
commit_message: added examples to degree_alg.py (#5644) * added example on degree centrality * added example on in degree centrality * added example on out degree centrality * added opening braces
metrics: n_whitespaces 55 | n_words 33 | n_identifiers 8 | ast_levels 11 | nloc 6 | complexity 4 | token_counts 61 | n_ast_errors 0 | vocab_size 25 | n_ast_nodes 91
documentation: { "docstring": "Compute the out-degree centrality for nodes.\n\n The out-degree centrality for a node v is the fraction of nodes its\n outgoing edges are connected to.\n\n Parameters\n ----------\n G : graph\n A NetworkX graph\n\n Returns\n -------\n nodes : dictionary\n Dictiona...
random_cut: def out_degree_centrality(G): if len(G) <= 1: return {n: 1 for n in G} s = 1.0 / (len(G) - 1.0) centrality = {n: d * s for n, d in G.out_degree()} return centralit
code: def out_degree_centrality(G): if len(G) <= 1: return {n: 1 for n in G} s = 1.0 / (len(G) - 1.0) centrality = {n: d * s for n, d in G.out_degree()} return centrality

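This row's function computes, for each node, the fraction of the other n−1 nodes its outgoing edges reach. The same formula can be sketched without networkx, using a plain adjacency dict (hypothetical node names; not the networkx implementation):

```python
def out_degree_centrality_sketch(adj):
    # adj maps node -> iterable of successor nodes.
    n = len(adj)
    if n <= 1:
        return {v: 1 for v in adj}
    s = 1.0 / (n - 1.0)  # normalize by the n-1 possible targets
    return {v: len(succ) * s for v, succ in adj.items()}

c = out_degree_centrality_sketch({"a": ["b", "c"], "b": ["c"], "c": []})
```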
d_id: 67,386 | id: 235,950
repo: plotly.py | path: packages/python/plotly/plotly/tests/test_optional/test_offline/test_offline.py | file_name: test_offline.py | fun_name: _read_html | language: Python
commit_id: 43e3a4011080911901176aab919c0ecf5046ddd3 | url: https://github.com/plotly/plotly.py.git
commit_message: switch to black .22
metrics: n_whitespaces 42 | n_words 14 | n_identifiers 8 | ast_levels 15 | nloc 3 | complexity 1 | token_counts 36 | n_ast_errors 0 | vocab_size 14 | n_ast_nodes 74
documentation: { "docstring": "Read and return the HTML contents from a file_url in the\n form e.g. file:///Users/chriddyp/Repos/plotly.py/plotly-temp.html\n ", "language": "en", "n_whitespaces": 28, "n_words": 14, "vocab_size": 13 }
random_cut: def _read_html(self, file_url): with open(file_url.replace("file://", "").replace(" ", "")) as f: return f.read() if matplotlylib:
code: (identical to random_cut above)

d_id: 30,974 | id: 136,708
repo: ray | path: python/ray/_private/utils.py | file_name: utils.py | fun_name: set_omp_num_threads_if_unset | language: Python
commit_id: 7c8859f1428224710e4c2db2abf0d9ec28536301 | url: https://github.com/ray-project/ray.git
commit_message: [core] Set OMP_NUM_THREADS to `num_cpus` required by task/actors by default (#30496) Ray currently sets OMP_NUM_THREADS=1 when the environ variable is not set. This PR: Sets OMP_NUM_THREADS to the number of cpus assigned to the worker that runs a task before running, and reset it after running. If num_cpus is a f...
metrics: n_whitespaces 260 | n_words 129 | n_identifiers 22 | ast_levels 11 | nloc 27 | complexity 4 | token_counts 105 | n_ast_errors 0 | vocab_size 94 | n_ast_nodes 189
documentation: { "docstring": "Set the OMP_NUM_THREADS to default to num cpus assigned to the worker\n\n This function sets the environment variable OMP_NUM_THREADS for the worker,\n if the env is not previously set and it's running in worker (WORKER_MODE).\n\n Returns True if OMP_NUM_THREADS is set in this function.\n\n ...
random_cut: def set_omp_num_threads_if_unset() -> bool: num_threads_from_env = os.environ.get("OMP_NUM_THREADS") if num_threads_from_env is not None: # No ops if it's set return False # If unset, try setting the correct CPU count assigned. runtime_ctx = ray.get_runtime_context() if runtime...
code: (identical to random_cut above)

d_id: 88,292 | id: 289,145
repo: core | path: tests/components/homekit/test_type_sensors.py | file_name: test_type_sensors.py | fun_name: test_binary_device_classes | language: Python
commit_id: 3b33e0d832b238b40360383099391e2093ea05cb | url: https://github.com/home-assistant/core.git
commit_message: Add support for restoring HomeKit IIDs (#79913)
metrics: n_whitespaces 97 | n_words 43 | n_identifiers 21 | ast_levels 11 | nloc 10 | complexity 2 | token_counts 91 | n_ast_errors 0 | vocab_size 37 | n_ast_nodes 142
documentation: { "docstring": "Test if services and characteristics are assigned correctly.", "language": "en", "n_whitespaces": 7, "n_words": 8, "vocab_size": 8 }
random_cut: async def test_binary_device_classes(hass, hk_driver): entity_id = "binary_sensor.demo" aid = 1 for device_class, (service, char, _) in BINARY_SENSOR_SERVICE_MAP.items(): hass.states.async_set(entity_id, STATE_OFF, {ATTR_DEVICE_CLASS: device_class}) await hass.async_block_till_done() ...
code: (identical to random_cut above)

d_id: 55,002 | id: 217,902
repo: XX-Net | path: python3.10.4/Lib/imaplib.py | file_name: imaplib.py | fun_name: Internaldate2tuple | language: Python
commit_id: 8198943edd73a363c266633e1aa5b2a9e9c9f526 | url: https://github.com/XX-net/XX-Net.git
commit_message: add python 3.10.4 for windows
metrics: n_whitespaces 144 | n_words 76 | n_identifiers 24 | ast_levels 11 | nloc 19 | complexity 3 | token_counts 178 | n_ast_errors 0 | vocab_size 57 | n_ast_nodes 302
documentation: { "docstring": "Parse an IMAP4 INTERNALDATE string.\n\n Return corresponding local time. The return value is a\n time.struct_time tuple or None if the string has wrong format.\n ", "language": "en", "n_whitespaces": 34, "n_words": 24, "vocab_size": 24 }
random_cut: def Internaldate2tuple(resp): mo = InternalDate.match(resp) if not mo: return
code: def Internaldate2tuple(resp): mo = InternalDate.match(resp) if not mo: return None mon = Mon2num[mo.group('mon')] zonen = mo.group('zonen') day = int(mo.group('day')) year = int(mo.group('year')) hour = int(mo.group('hour')) min = int(mo.group('min')) sec = int(mo.gro...

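This row is the CPython standard library's own `imaplib.Internaldate2tuple`, so it can be called directly. Per its docstring it parses an IMAP `INTERNALDATE` response into a local-time `struct_time` (the exact field values therefore depend on the local timezone) and returns None on a malformed string:

```python
import imaplib
import time

# A well-formed IMAP FETCH response fragment containing INTERNALDATE.
resp = b'25 (INTERNALDATE "17-Jul-1996 02:44:25 -0700")'
parsed = imaplib.Internaldate2tuple(resp)

# A string that does not match the INTERNALDATE format yields None.
bad = imaplib.Internaldate2tuple(b"not an internaldate")
```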
d_id: 16,319 | id: 74,795
repo: wagtail | path: wagtail/documents/tests/test_admin_views.py | file_name: test_admin_views.py | fun_name: test_delete_get | language: Python
commit_id: d10f15e55806c6944827d801cd9c2d53f5da4186 | url: https://github.com/wagtail/wagtail.git
commit_message: Reformat with black
metrics: n_whitespaces 69 | n_words 16 | n_identifiers 11 | ast_levels 14 | nloc 5 | complexity 1 | token_counts 40 | n_ast_errors 0 | vocab_size 14 | n_ast_nodes 68
documentation: { "docstring": "\n This tests that a GET request to the delete view returns a 405 \"METHOD NOT ALLOWED\" response\n ", "language": "en", "n_whitespaces": 32, "n_words": 17, "vocab_size": 16 }
random_cut: def test_delete_get(self): # Send request response = self.client.get( reverse("wagtaildocs:delete_multiple", args=(self.doc.id,)) ) # Check response self.assertEqual(response.status_code, 405)
code: (identical to random_cut above)

d_id: 17,733 | id: 83,840
repo: zulip | path: zerver/tests/test_subs.py | file_name: test_subs.py | fun_name: test_stream_admin_remove_others_from_public_stream | language: Python
commit_id: 803982e87254e3b1ebcb16ed795e224afceea3a3 | url: https://github.com/zulip/zulip.git
commit_message: message_flags: Short-circuit if no messages changed. Omit sending an event, and updating the database, if there are no matching messages.
metrics: n_whitespaces 141 | n_words 22 | n_identifiers 15 | ast_levels 13 | nloc 16 | complexity 1 | token_counts 80 | n_ast_errors 0 | vocab_size 21 | n_ast_nodes 125
documentation: { "docstring": "\n You can remove others from public streams you're a stream administrator of.\n ", "language": "en", "n_whitespaces": 27, "n_words": 12, "vocab_size": 12 }
random_cut: def test_stream_admin_remove_others_from_public_stream(self) -> None: result = self.attempt_unsubscribe_of_principal( query_count=15, target_users=[self.example_user("cordelia")], is_realm_admin=False, is_stream_admin=True, is_subbed=True, ...
code: (identical to random_cut above)

d_id: 70,422 | id: 244,539
repo: mmdetection | path: mmdet/datasets/pipelines/transforms.py | file_name: transforms.py | fun_name: _mosaic_combine | language: Python
commit_id: c407e970a8ee2544f27d1c233855a31129dca158 | url: https://github.com/open-mmlab/mmdetection.git
commit_message: Refactor RandomCrop and SegRescale
metrics: n_whitespaces 724 | n_words 201 | n_identifiers 14 | ast_levels 15 | nloc 36 | complexity 4 | token_counts 406 | n_ast_errors 0 | vocab_size 71 | n_ast_nodes 562
documentation: { "docstring": "Calculate global coordinate of mosaic image and local coordinate of\n cropped sub-image.\n\n Args:\n loc (str): Index for the sub-image, loc in ('top_left',\n 'top_right', 'bottom_left', 'bottom_right').\n center_position_xy (Sequence[float]): Mixing ce...
random_cut: def _mosaic_combine(self, loc, center_position_xy, img_shape_wh): assert loc in ('top_left', 'top_right', 'bottom_left', 'bottom_right') if loc == 'top_left': # index0 to top left part of image x1, y1, x2, y2 = max(center_position_xy[0] - img_shape_wh[0], 0), \ ...
code: (identical to random_cut above)

d_id: 47,857 | id: 196,357
repo: sympy | path: sympy/matrices/common.py | file_name: common.py | fun_name: permute | language: Python
commit_id: 59d22b6bb7287613d598611027f640d068ca5748 | url: https://github.com/sympy/sympy.git
commit_message: Moved imports to higher level
metrics: n_whitespaces 517 | n_words 164 | n_identifiers 25 | ast_levels 14 | nloc 128 | complexity 16 | token_counts 238 | n_ast_errors 0 | vocab_size 95 | n_ast_nodes 413
documentation: { "docstring": "Permute the rows or columns of a matrix by the given list of\n swaps.\n\n Parameters\n ==========\n\n perm : Permutation, list, or list of lists\n A representation for the permutation.\n\n If it is ``Permutation``, it is used directly with some\n ...
random_cut: def permute(self, perm, orientation='rows', direction='forward'): r from sympy.combinatorics import Permutation # allow british variants and `columns` if direction == 'forwards': direction = 'forward' if direction == 'backwards': direction = 'backward' ...
code: (identical to random_cut above)

d_id: 7,482 | id: 42,082
repo: seaborn | path: seaborn/axisgrid.py | file_name: axisgrid.py | fun_name: apply | language: Python
commit_id: 949dec3666ab12a366d2fc05ef18d6e90625b5fa | url: https://github.com/mwaskom/seaborn.git
commit_message: Add apply and pipe methods to Grid objects for fluent customization (#2928) * Return self from tight_layout and refline * Add apply and pipe methods to FacetGrid for fluent customization * Move apply/pipe down to base class so JointGrid/PaiGrid get them too * Tweak docstrings
metrics: n_whitespaces 31 | n_words 10 | n_identifiers 5 | ast_levels 8 | nloc 3 | complexity 1 | token_counts 26 | n_ast_errors 0 | vocab_size 9 | n_ast_nodes 41
documentation: { "docstring": "\n Pass the grid to a user-supplied function and return self.\n\n The `func` must accept an object of this type for its first\n positional argument. Additional arguments are passed through.\n The return value of `func` is ignored; this method returns self.\n See the ...
random_cut: def apply(self, func, *args, **kwargs):
code: def apply(self, func, *args, **kwargs): func(self, *args, **kwargs) return self

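This row's `Grid.apply` is a two-line fluent-customization hook: call `func(self, ...)` and return `self` so calls chain. A minimal stand-in class demonstrating the pattern (hypothetical names; not seaborn's Grid):

```python
class GridSketch:
    """Toy object showing the apply-and-return-self pattern."""

    def __init__(self):
        self.calls = []

    def apply(self, func, *args, **kwargs):
        # Pass self to the user function, ignore its return value,
        # and return self so calls can be chained.
        func(self, *args, **kwargs)
        return self

def tag(grid, label):
    grid.calls.append(label)

g = GridSketch().apply(tag, "title").apply(tag, "legend")
```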
d_id: 17,256 | id: 81,766
repo: awx | path: awx/main/utils/common.py | file_name: common.py | fun_name: copy_m2m_relationships | language: Python
commit_id: 33c0fb79d66f56374d7c042ba79887faa85e2885 | url: https://github.com/ansible/awx.git
commit_message: JT param everything (#12646) * Making almost all fields promptable on job templates and config models * Adding EE, IG and label access checks * Changing jobs preferred instance group function to handle the new IG cache field * Adding new ask fields to job template modules * Address unit/functional tests * Adding ...
metrics: n_whitespaces 521 | n_words 110 | n_identifiers 27 | ast_levels 21 | nloc 22 | complexity 11 | token_counts 164 | n_ast_errors 0 | vocab_size 77 | n_ast_nodes 263
documentation: { "docstring": "\n In-place operation.\n Given two saved objects, copies related objects from obj1\n to obj2 to field of same name, if field occurs in `fields`\n ", "language": "en", "n_whitespaces": 36, "n_words": 23, "vocab_size": 21 }
random_cut: def copy_m2m_relationships(obj1, obj2, fields, kwargs=None): for field_name in fields: if hasattr(obj1, field_name): try: field_obj = obj1._meta.get_field(field_name) except FieldDoesNotExist: continue if isinstance(field_obj, ManyToMa...
code: (identical to random_cut above)

d_id: 42,923 | id: 179,238
repo: gradio | path: gradio/external.py | file_name: external.py | fun_name: load_from_pipeline | language: Python
commit_id: cc0cff893f9d7d472788adc2510c123967b384fe | url: https://github.com/gradio-app/gradio.git
commit_message: Format The Codebase - black formatting - isort formatting
metrics: n_whitespaces 1,541 | n_words 440 | n_identifiers 43 | ast_levels 20 | nloc 139 | complexity 32 | token_counts 1,075 | n_ast_errors 0 | vocab_size 165 | n_ast_nodes 1,781
documentation: { "docstring": "\n Gets the appropriate Interface kwargs for a given Hugging Face transformers.Pipeline.\n pipeline (transformers.Pipeline): the transformers.Pipeline from which to create an interface\n Returns:\n (dict): a dictionary of kwargs that can be used to construct an Interface object\n ", "...
random_cut: def load_from_pipeline(pipeline): try: import transformers except ImportError: raise ImportError( "transformers not installed. Please try `pip install transformers`" ) if not isinstance(pipeline, transformers.Pipeline): raise ValueError("pipeline must be a tr...
code: (identical to random_cut above)

d_id: 126 | id: 819
repo: PySyft | path: packages/syft/src/syft/core/adp/vectorized_publish.py | file_name: vectorized_publish.py | fun_name: calculate_bounds_for_mechanism | language: Python
commit_id: 56137bacda6fea5a0053c65eb6fd88688f5298cc | url: https://github.com/OpenMined/PySyft.git
commit_message: Implemented working vectorized_publish method into codebase Took 26 minutes
metrics: n_whitespaces 61 | n_words 36 | n_identifiers 11 | ast_levels 14 | nloc 8 | complexity 1 | token_counts 67 | n_ast_errors 0 | vocab_size 30 | n_ast_nodes 113
documentation: { "docstring": "Calculates the squared L2 norm values needed to create a Mechanism, and calculate privacy budget + spend If you calculate the privacy budget spend with the worst case bound, you can show this number to the D.S.\n If you calculate it with the regular value (the value computed below when public_only ...
random_cut: def calculate_bounds_for_mechanism(value_array, min_val_array, max_val_array): # TODO: Double check whether the iDPGaussianMechanism class squares its squared_l2_no
code: def calculate_bounds_for_mechanism(value_array, min_val_array, max_val_array): # TODO: Double check whether the iDPGaussianMechanism class squares its squared_l2_norm values!! worst_case_l2_norm = np.sqrt(np.sum(np.square(max_val_array - min_val_array))) * np.ones_like(value_array) l2_norm = np.s...

d_id: 13,947 | id: 65,572
repo: erpnext | path: erpnext/buying/report/procurement_tracker/procurement_tracker.py | file_name: procurement_tracker.py | fun_name: get_po_entries | language: Python
commit_id: 494bd9ef78313436f0424b918f200dab8fc7c20b | url: https://github.com/frappe/erpnext.git
commit_message: style: format code with black
metrics: n_whitespaces 5 | n_words 11 | n_identifiers 7 | ast_levels 10 | nloc 34 | complexity 1 | token_counts 26 | n_ast_errors 0 | vocab_size 11 | n_ast_nodes 43
documentation: { "docstring": "\n\t\tSELECT\n\t\t\tchild.name,\n\t\t\tchild.parent,\n\t\t\tchild.cost_center,\n\t\t\tchild.project,\n\t\t\tchild.warehouse,\n\t\t\tchild.material_request,\n\t\t\tchild.material_request_item,\n\t\t\tchild.item_code,\n\t\t\tchild.stock_uom,\n\t\t\tchild.qty,\n\t\t\tchild.amount,\n\t\t\tchild.base_amoun...
random_cut: def get_po_entries(conditions): ret
code: def get_po_entries(conditions): return frappe.db.sql( .format( conditions=conditions ), as_dict=1, ) # nosec

d_id: 51,751 | id: 206,849
repo: django | path: django/views/generic/dates.py | file_name: dates.py | fun_name: get_year | language: Python
commit_id: 9c19aff7c7561e3a82978a272ecdaad40dda5c00 | url: https://github.com/django/django.git
commit_message: Refs #33476 -- Reformatted code with Black.
metrics: n_whitespaces 160 | n_words 27 | n_identifiers 9 | ast_levels 18 | nloc 11 | complexity 4 | token_counts 54 | n_ast_errors 0 | vocab_size 17 | n_ast_nodes 96
documentation: { "docstring": "Return the year for which this view should display data.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
random_cut: def get_year(self): year = self.year if year is None: try: year = self.kwargs["year"]
code: def get_year(self): year = self.year if year is None: try: year = self.kwargs["year"] except KeyError: try: year = self.request.GET["year"] except KeyError: raise Http404(_("No year s...

d_id: 117,022 | id: 319,934
repo: paperless-ngx | path: src/documents/tests/test_management_retagger.py | file_name: test_management_retagger.py | fun_name: test_overwrite_storage_path | language: Python
commit_id: c8e838e3a0828e82efac1fd93ebb9aba6a000ff8 | url: https://github.com/paperless-ngx/paperless-ngx.git
commit_message: Adds the storage paths to the re-tagger command
metrics: n_whitespaces 67 | n_words 18 | n_identifiers 13 | ast_levels 8 | nloc 7 | complexity 1 | token_counts 71 | n_ast_errors 0 | vocab_size 17 | n_ast_nodes 116
documentation: { "docstring": "\n GIVEN:\n - 2 storage paths with documents which match them\n - 1 document which matches but has a storage path\n WHEN:\n - document retagger is called with overwrite\n THEN:\n - Matching document's storage paths updated\n -...
random_cut: def test_overwrite_storage_path(self): call_command("document_retagger", "--storage_path", "--overwrite") d_first, d_second, d_unrelated, d_auto = self.get_updated_docs() self.assertEqual(d_first.storage_path, self.sp2) self.assertEqual(d_auto.storage_path, self.sp1) se...
code: (identical to random_cut above)

35,635
153,820
18
modin/core/storage_formats/base/query_compiler.py
4
7
def invert(self): return DataFrameDefault.regis
REFACTOR-#4513: Fix spelling mistakes in docs and docstrings (#4514) Co-authored-by: Rehan Sohail Durrani <rdurrani@berkeley.edu> Signed-off-by: jeffreykennethli <jkli@ponder.io>
invert
57e29bc5d82348006c5170ef9ac0a9eedcd9acf9
modin
query_compiler.py
10
2
https://github.com/modin-project/modin.git
1
20
0
4
35
Python
{ "docstring": "\n Apply bitwise inversion for each element of the QueryCompiler.\n\n Returns\n -------\n BaseQueryCompiler\n New QueryCompiler containing bitwise inversion for each value.\n ", "language": "en", "n_whitespaces": 67, "n_words": 20, "vocab_size": 16...
def invert(self): return DataFrameDefault.register(pandas.DataFrame.__invert__)(self)
70,506
244,739
532
tests/test_models/test_dense_heads/test_centernet_head.py
183
34
def test_center_head_loss(self): s = 256 img_metas = [{'batch_input_shape': (s, s, 3)}] test_cfg = dict(topK=100, max_per_img=100) centernet_head = CenterNetHead( num_classes=4, in_channels=1, feat_channels=4, test_cfg=test_cfg) feat = [torch.rand(1, 1, s, s...
[Refactor] CenterNet
test_center_head_loss
96aa909c19dbe753852ac6dba13bbbc35329b99f
mmdetection
test_centernet_head.py
10
33
https://github.com/open-mmlab/mmdetection.git
1
295
0
101
457
Python
{ "docstring": "Tests center head loss when truth is empty and non-empty.", "language": "en", "n_whitespaces": 9, "n_words": 10, "vocab_size": 10 }
def test_center_head_loss(self): s = 256 img_metas = [{'batch_input_shape': (s, s, 3)}] test_cfg = dict(topK=100, max_per_img=100) centernet_head = CenterNetHead( num_classes=4, in_channels=1, feat_channels=4, test_cfg=test_cfg) feat = [torch.rand(1, 1, s, s...
@contextlib.contextmanager
55,183
218,181
22
python3.10.4/Lib/importlib/_common.py
11
11
def from_package(package): spec = wrap_spec(package) reader = spec.loader.get_resource_reader(spec.name) return reader.files() @contextlib.contex
add python 3.10.4 for windows
from_package
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
_common.py
9
4
https://github.com/XX-net/XX-Net.git
1
30
1
10
59
Python
{ "docstring": "\n Return a Traversable object for the given package.\n\n ", "language": "en", "n_whitespaces": 15, "n_words": 8, "vocab_size": 8 }
def from_package(package): spec = wrap_spec(package) reader = spec.loader.get_resource_reader(spec.name) return reader.files() @contextlib.contextmanager
5,042
26,683
61
saleor/checkout/complete_checkout.py
13
9
def _is_refund_ongoing(payment): return ( payment.transactions.filter( kind=TransactionKind.REFUND_ONGOING, is_s
Fix payment flow (#9504) * Do not capture payment again when it should be refunded or voided * Do not create order when then is ongoing refund
_is_refund_ongoing
0881beec1ac02dfa97525c5173687defb356d85c
saleor
complete_checkout.py
13
8
https://github.com/saleor/saleor.git
2
33
0
13
53
Python
{ "docstring": "Return True if refund is ongoing for given payment.", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
def _is_refund_ongoing(payment): return ( payment.transactions.filter( kind=TransactionKind.REFUND_ONGOING, is_success=True ).exists() if payment else False )
54,320
216,011
699
salt/states/win_wua.py
215
37
def installed(name, updates=None): if isinstance(updates, str): updates = [updates] if not updates: updates = name ret = {"name": name, "changes": {}, "result": True, "comment": ""} wua = salt.utils.win_update.WindowsUpdateAgent() # Search for updates install_list = wua....
Remove 40 character limit to update Title
installed
52c922760e8447f0c9efd23b12481ba1a7509dcd
salt
win_wua.py
17
59
https://github.com/saltstack/salt.git
15
441
0
114
772
Python
{ "docstring": "\n Ensure Microsoft Updates are installed. Updates will be downloaded if\n needed.\n\n Args:\n\n name (str):\n The identifier of a single update to install.\n\n updates (list):\n A list of identifiers for updates to be installed. Overrides\n ``na...
def installed(name, updates=None): if isinstance(updates, str): updates = [updates] if not updates: updates = name ret = {"name": name, "changes": {}, "result": True, "comment": ""} wua = salt.utils.win_update.WindowsUpdateAgent() # Search for updates install_list = wua....
42,673
178,349
200
nuitka/plugins/standard/KivyPlugin.py
36
8
def _getKivyInformation(self): setup_codes = r info = self.queryRuntimeInformationMultiple( info_name="kivy_info", setup_codes=setup_codes, values=( ("libs_loaded", "kivy.core.image.libs_loaded"), ("window_impl", "kivy.core.window.windo...
Plugins: Add DLL folders needed on Windows for Kivy plugin * Make DLL reporting code part of plugin base class. * Added new method to scan for DLLs in folders.
_getKivyInformation
6ed4d787519d7075d7ff492bc40a291bc12f088c
Nuitka
KivyPlugin.py
12
32
https://github.com/Nuitka/Nuitka.git
2
72
0
32
125
Python
{ "docstring": "\nimport kivy.core.image\nimport kivy.core.text\n# Prevent Window from being created at compile time.\nkivy.core.core_select_lib=(lambda *args, **kwargs: None)\nimport kivy.core.window\n\n# Kivy has packages designed to provide these on Windows\ntry:\n from kivy_deps.sdl2 import dep_bins as sdl2_de...
def _getKivyInformation(self): setup_codes = r info = self.queryRuntimeInformationMultiple( info_name="kivy_info", setup_codes=setup_codes, values=( ("libs_loaded", "kivy.core.image.libs_loaded"), ("window_impl", "kivy.core.window.windo...
40,375
169,019
32
pandas/core/generic.py
15
5
def __iter__(self) -> Iterator: return
TYP: Autotyping (#48191) * annotate-magics * annotate-imprecise-magics * none-return * scalar-return * pyi files * ignore vendored file * manual changes * ignore pyright in pickle_compat (these errors would be legit if the current __new__ methods were called but I think these pickle tests call old...
__iter__
54347fe684e0f7844bf407b1fb958a5269646825
pandas
generic.py
8
10
https://github.com/pandas-dev/pandas.git
1
15
0
15
28
Python
{ "docstring": "\n Iterate over info axis.\n\n Returns\n -------\n iterator\n Info axis as iterator.\n ", "language": "en", "n_whitespaces": 58, "n_words": 11, "vocab_size": 11 }
def __iter__(self) -> Iterator: return iter(self._info_axis) # can we get a better explanation of this?
21,833
104,397
213
src/datasets/table.py
55
26
def cast(self, target_schema, *args, **kwargs): table = table_cast(self.table, target_schema, *args, **kwargs) blocks = [] for subtables in self.blocks: new_tables = [] fields = list(target_schema) for subtable in subtables: subfields ...
Update docs to new frontend/UI (#3690) * WIP: update docs to new UI * make style * Rm unused * inject_arrow_table_documentation __annotations__ * hasattr(arrow_table_method, "__annotations__") * Update task_template.rst * Codeblock PT-TF-SPLIT * Convert loading scripts * Convert docs to mdx ...
cast
e35be138148333078284b942ccc9ed7b1d826f97
datasets
table.py
20
14
https://github.com/huggingface/datasets.git
6
134
0
39
208
Python
{ "docstring": "\n Cast table values to another schema\n\n Args:\n target_schema (:obj:`Schema`):\n Schema to cast to, the names and order of fields must match\n safe (:obj:`bool`, defaults to :obj:`True`):\n Check for overflows or other unsafe convers...
def cast(self, target_schema, *args, **kwargs): table = table_cast(self.table, target_schema, *args, **kwargs) blocks = [] for subtables in self.blocks: new_tables = [] fields = list(target_schema) for subtable in subtables: subfields ...
40,850
173,537
100
magenta/models/coconet/lib_util.py
65
16
def softmax(p, axis=None, temperature=1): if axis is None: axis = p.ndim - 1 if temperature == 0.: # NOTE: in case o
Work around tensor2tensor/gym issues, fix pylint errors. PiperOrigin-RevId: 433019701
softmax
e833848c6dda95dbcf17e84d935dcdb8cff6f47d
magenta
lib_util.py
12
14
https://github.com/magenta/magenta.git
4
120
0
44
188
Python
{ "docstring": "Apply the softmax transform to an array of categorical distributions.\n\n Args:\n p: an array of categorical probability vectors, possibly unnormalized.\n axis: the axis that spans the categories (default: -1).\n temperature: if not 1, transform the distribution by dividing the log\n ...
def softmax(p, axis=None, temperature=1): if axis is None: axis = p.ndim - 1 if temperature == 0.: # NOTE: in case of multiple equal maxima, returns uniform distribution. p = p == np.max(p, axis=axis, keepdims=True) else: # oldp = p logp = np.log(p) logp /= temperature logp -= logp....
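The `softmax` record above applies a temperature transform in log space, with temperature 0 collapsing to a uniform distribution over the maxima. A self-contained NumPy sketch of that pattern (the original operates on probability vectors, so logs are taken first; the max subtraction is for numerical stability):

```python
import numpy as np

def softmax(p, axis=-1, temperature=1.0):
    """Temperature-transformed softmax over categorical distributions.

    temperature == 0 returns a uniform distribution over the maxima of p
    (mirroring the record above); otherwise log p is divided by the
    temperature before renormalizing.
    """
    p = np.asarray(p, dtype=float)
    if temperature == 0.0:
        # In case of multiple equal maxima, ties share mass uniformly.
        hit = (p == np.max(p, axis=axis, keepdims=True)).astype(float)
        return hit / np.sum(hit, axis=axis, keepdims=True)
    logp = np.log(p) / temperature
    logp -= np.max(logp, axis=axis, keepdims=True)  # numerical stability
    q = np.exp(logp)
    return q / np.sum(q, axis=axis, keepdims=True)
```

With temperature 1 this reduces to plain renormalization of `p`; temperatures below 1 sharpen the distribution and above 1 flatten it.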
1,308
8,001
280
ludwig/benchmarking/profiler.py
77
25
def _populate_static_information(self) -> None: self.info["ludwig_version"] = LUDWIG_VERSION self.info["start_disk_usage"] = shutil.disk_usage(os.path.expanduser("~")).used # CPU information cpu_info = get_my_cpu_info() self.info["cpu_architecture"] = cpu_info["arch"] ...
More precise resource usage tracking (#2363) * added `torch.profiler.record_function` decorator * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * export torch profiler metric draft/pseudocode * exporting cpu and cuda memory usage * exporting CPU and...
_populate_static_information
c50997c2b27e7f7f59a96c0158f3737e22419ed8
ludwig
profiler.py
13
19
https://github.com/ludwig-ai/ludwig.git
3
187
0
58
348
Python
{ "docstring": "Populate the report with static software and hardware information.", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
def _populate_static_information(self) -> None: self.info["ludwig_version"] = LUDWIG_VERSION self.info["start_disk_usage"] = shutil.disk_usage(os.path.expanduser("~")).used # CPU information cpu_info = get_my_cpu_info() self.info["cpu_architecture"] = cpu_info["arch"] ...
16,457
76,116
212
wagtail/tests/utils/page_tests.py
29
11
def assertCanNotCreateAt(self, parent_model, child_model, msg=None): if self._testCanCreateAt(parent_model, child_model): msg = self._formatMessage( msg, "Can create a %s.%s unde
Reformat with black
assertCanNotCreateAt
d10f15e55806c6944827d801cd9c2d53f5da4186
wagtail
page_tests.py
14
13
https://github.com/wagtail/wagtail.git
2
69
0
28
103
Python
{ "docstring": "\n Assert a particular child Page type can not be created under a parent\n Page type. ``parent_model`` and ``child_model`` should be the Page\n classes being tested.\n ", "language": "en", "n_whitespaces": 54, "n_words": 25, "vocab_size": 21 }
def assertCanNotCreateAt(self, parent_model, child_model, msg=None): if self._testCanCreateAt(parent_model, child_model): msg = self._formatMessage( msg, "Can create a %s.%s under a %s.%s" % ( child_model._meta.app_label, ...
18,815
91,818
37
src/sentry/features/manager.py
16
11
def get_feature_objects(self) -> Mapping[Project, Feature]: cls = self._manager._get_feature_class(self.feature_name) return {obj: cls(self.feature_name, obj) for obj in self.objects}
feat(notifications): add a feature flag to make slack the default for new users (#35652) We want to have some users automatically get notifications on Slack and email instead of just Slack. But we don't want to impact existing users so instead I introduce the concept of a UserFeature which isn't dependent on the user....
get_feature_objects
f64e58203b6c6d5121a4f6685dace0a4627d49b0
sentry
manager.py
10
9
https://github.com/getsentry/sentry.git
2
44
0
16
68
Python
{ "docstring": "\n Iterate over individual Feature objects.\n\n This is a fallback mode for applying a FeatureHandler that doesn't\n support checking the entire batch at once.\n ", "language": "en", "n_whitespaces": 52, "n_words": 23, "vocab_size": 22 }
def get_feature_objects(self) -> Mapping[Project, Feature]: cls = self._manager._get_feature_class(self.feature_name) return {obj: cls(self.feature_name, obj) for obj in self.objects}
6,335
34,796
49
tests/test_pipelines_automatic_speech_recognition.py
16
9
def require_ffmpeg(test_case): import subprocess try: s
Adding support for `microphone` streaming within pipeline. (#15046) * Adding support for `microphone` streaming within pipeline. - Uses `ffmpeg` to get microphone data. - Makes sure alignment is made to `size_of_sample`. - Works by sending `{"raw": ..data.., "stride": (n, left, right), "partial": bool}` directl...
require_ffmpeg
623d8cb475804f2b0f85a47b04b8b2e522db06ef
transformers
test_pipelines_automatic_speech_recognition.py
12
7
https://github.com/huggingface/transformers.git
2
41
0
15
74
Python
{ "docstring": "\n Decorator marking a test that requires FFmpeg.\n\n These tests are skipped when FFmpeg isn't installed.\n\n ", "language": "en", "n_whitespaces": 25, "n_words": 15, "vocab_size": 15 }
def require_ffmpeg(test_case): import subprocess try: subprocess.check_output(["ffmpeg", "-h"], stderr=subprocess.DEVNULL) return test_case except Exception: return unittest.skip("test requires ffmpeg")(test_case)
9,364
48,117
636
tests/system/providers/google/tasks/example_queue.py
221
59
def generate_random_string(): import random import string return "".join(random.choices(string.ascii_uppercase + string.digits, k=8)) random_string = generate_random_string() # [START create_queue] create_queue = CloudTasksQueueCreateOperator( location=LOCATION, ...
CloudTasks assets & system tests migration (AIP-47) (#23282)
generate_random_string
3977e1798d8294ba628b5f330f43702c1a5c79fc
airflow
example_queue.py
13
4
https://github.com/apache/airflow.git
1
31
0
115
517
Python
{ "docstring": "\n Generate random string for queue and task names.\n Queue name cannot be repeated in preceding 7 days and\n task name in the last 1 hour.\n ", "language": "en", "n_whitespaces": 54, "n_words": 25, "vocab_size": 21 }
def generate_random_string(): import random import string return "".join(random.choices(string.ascii_uppercase + string.digits, k=8)) random_string = generate_random_string() # [START create_queue] create_queue = CloudTasksQueueCreateOperator( location=LOCATION, ...
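The `generate_random_string` record above builds a random suffix so Cloud Tasks queue and task names are unique (queue names cannot be reused within 7 days, task names within 1 hour). A runnable sketch of that helper, with the length exposed as a parameter for illustration:

```python
import random
import string

def generate_random_string(k=8):
    """Random uppercase+digit suffix for unique queue/task names,
    as in the record above (k=8 matches the original)."""
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=k))
```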
81,432
275,619
23
keras/optimizers/optimizer_v2/utils.py
10
3
def make_gradient_clipvalue_fn(clipvalue): if clipvalue is None:
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
make_gradient_clipvalue_fn
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
utils.py
9
5
https://github.com/keras-team/keras.git
2
20
0
10
29
Python
{ "docstring": "Creates a gradient transformation function for clipping by value.", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
def make_gradient_clipvalue_fn(clipvalue): if clipvalue is None: return lambda grads_and_vars: grads_and_vars
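The `make_gradient_clipvalue_fn` record above returns the identity transform when `clipvalue` is None. A sketch of the full factory using NumPy in place of TensorFlow ops: each gradient in the `(grad, var)` pairs is clipped elementwise to `[-clipvalue, clipvalue]` (an assumption about the elided branch, based on the docstring):

```python
import numpy as np

def make_gradient_clipvalue_fn(clipvalue):
    """Gradient transform over (grad, var) pairs: identity when clipvalue
    is None (as in the record above), else elementwise value clipping."""
    if clipvalue is None:
        return lambda grads_and_vars: grads_and_vars

    def clip(grads_and_vars):
        return [(np.clip(g, -clipvalue, clipvalue), v) for g, v in grads_and_vars]

    return clip
```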
26,583
119,312
184
jax/_src/scipy/signal.py
83
16
def odd_ext(x, n, axis=-1): if n < 1: return x if n > x.shape[axis] - 1: raise ValueError( f"The extension length n ({n}) is too big. " f"It must not exceed x.shape[axis]-1, which is {x.shape[axis] - 1}.") left_end = lax.slice_in_di
Add some functions for spectral analysis. This commit adds "stft", "csd", and "welch" functions in scipy.signal.
odd_ext
e085370ec4137cf0f73c5163cb664bc4e1c46082
jax
signal.py
15
16
https://github.com/google/jax.git
3
159
0
54
252
Python
{ "docstring": "Extends `x` along with `axis` by odd-extension.\n\n This function was previously a part of \"scipy.signal.signaltools\" but is no\n longer exposed.\n\n Args:\n x : input array\n n : the number of points to be added to the both end\n axis: the axis to be extended\n ", "language": "en", ...
def odd_ext(x, n, axis=-1): if n < 1: return x if n > x.shape[axis] - 1: raise ValueError( f"The extension length n ({n}) is too big. " f"It must not exceed x.shape[axis]-1, which is {x.shape[axis] - 1}.") left_end = lax.slice_in_dim(x, 0, 1, axis=axis) left_ext = jnp.flip(lax.slice_i...
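The `odd_ext` record above extends an array by point-reflecting each end through its boundary value. A NumPy sketch of the same logic (the original uses `lax.slice_in_dim` and `jnp.flip`; `np.take`/`np.flip` stand in here):

```python
import numpy as np

def odd_ext(x, n, axis=-1):
    """Odd (point-symmetric) extension of x by n samples at each end,
    mirroring the record above."""
    x = np.asarray(x)
    if n < 1:
        return x
    if n > x.shape[axis] - 1:
        raise ValueError(
            f"The extension length n ({n}) is too big. "
            f"It must not exceed x.shape[axis]-1, which is {x.shape[axis] - 1}.")
    left_end = np.take(x, [0], axis=axis)
    left_ext = np.flip(np.take(x, np.arange(1, n + 1), axis=axis), axis=axis)
    right_end = np.take(x, [-1], axis=axis)
    right_ext = np.flip(np.take(x, np.arange(-n - 1, -1), axis=axis), axis=axis)
    # Reflect through the endpoints: 2*end - mirrored interior samples.
    return np.concatenate(
        [2 * left_end - left_ext, x, 2 * right_end - right_ext], axis=axis)
```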
79,737
268,868
29
keras/tests/keras_doctest.py
20
7
def filter_on_submodules(all_modules, submodule): filtered_modules = [ mod for mod in all_modules if PACKAGE + submodule in mod.__name__ ] return filtered_modules
Add a keras doctest modeled on tensorflow doctest PiperOrigin-RevId: 424672415
filter_on_submodules
a449efe29b092e658a29cd847e0494979a47d252
keras
keras_doctest.py
10
5
https://github.com/keras-team/keras.git
3
27
0
17
43
Python
{ "docstring": "Filters all the modules based on the module flag.\n\n The module flag has to be relative to the core package imported.\n For example, if `submodule=keras.layers` then, this function will return\n all the modules in the submodule.\n\n Args:\n all_modules: All the modules in the core package.\n ...
def filter_on_submodules(all_modules, submodule): filtered_modules = [ mod for mod in all_modules if PACKAGE + submodule in mod.__name__ ] return filtered_modules
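The `filter_on_submodules` record above keeps only modules whose `__name__` contains the package prefix plus the requested submodule. A self-contained sketch, with `PACKAGE` assumed to be the `"keras."` prefix mentioned in the docstring:

```python
import types

PACKAGE = "keras."  # assumed core-package prefix, per the record's docstring

def filter_on_submodules(all_modules, submodule):
    """Keep only modules whose __name__ contains PACKAGE + submodule."""
    return [mod for mod in all_modules if PACKAGE + submodule in mod.__name__]
```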
15,095
69,776
91
erpnext/accounts/doctype/bank_reconciliation_tool/bank_reconciliation_tool.py
124
24
def get_pe_matching_query(amount_condition, account_from_to, transaction): # get matching payment entries query from_date = frappe.db.get_single_value("Bank Reconciliation Tool", "bank_statement_from_date") to_date = frappe.db.get_single_value("Bank Reconciliation Tool", "bank_statement_to_date") from_reference_dat...
Feat:Filter on Payment Entries and Journal Entries Applying filters on Payement entries and Journal Entries as per reference date and posting date
get_pe_matching_query
408c89df030998fe36df135570c9edd90a522996
erpnext
bank_reconciliation_tool.py
10
61
https://github.com/frappe/erpnext.git
4
149
0
60
336
Python
{ "docstring": "\n\t\tSELECT\n\t\t\t(CASE WHEN reference_no=%(reference_no)s THEN 1 ELSE 0 END\n\t\t\t+ CASE WHEN (party_type = %(party_type)s AND party = %(party)s ) THEN 1 ELSE 0 END\n\t\t\t+ 1 ) AS rank,\n\t\t\t'Payment Entry' as doctype,\n\t\t\tname,\n\t\t\tpaid_amount,\n\t\t\treference_no,\n\t\t\treference_date...
def get_pe_matching_query(amount_condition, account_from_to, transaction): # get matching payment entries query from_date = frappe.db.get_single_value("Bank Reconciliation Tool", "bank_statement_from_date") to_date = frappe.db.get_single_value("Bank Reconciliation Tool", "bank_statement_to_date") from_reference_dat...
40,231
168,207
97
pandas/core/arrays/interval.py
22
15
def closed(self) -> IntervalInclusiveType: warnings.warn( "Attribute `closed` is deprecated in favor of `inclusive`.", FutureWarning, stacklevel=find_stack_level(inspect.currentframe()), ) return self.dtype.inclusive _interval_shared_docs["set_cl...
PERF cache find_stack_level (#48023) cache stacklevel
closed
2f8d0a36703e81e4dca52ca9fe4f58c910c1b304
pandas
interval.py
12
12
https://github.com/pandas-dev/pandas.git
1
34
0
21
79
Python
{ "docstring": "\n String describing the inclusive side the intervals.\n\n Either ``left``, ``right``, ``both`` or ``neither`.\n \n Return an identical %(klass)s closed on the specified side.\n\n .. deprecated:: 1.5.0\n\n Parameters\n ----------\n closed : {'lef...
def closed(self) -> IntervalInclusiveType: warnings.warn( "Attribute `closed` is deprecated in favor of `inclusive`.", FutureWarning, stacklevel=find_stack_level(inspect.currentframe()), ) return self.dtype.inclusive _interval_shared_docs["set_cl...
47,745
196,245
95
sympy/functions/elementary/exponential.py
31
15
def as_real_imag(self, deep=True, **hints): from sympy.functions.elementary.trigonometric import cos, sin re, im = self.a
Updated import locations
as_real_imag
498015021131af4dbb07eb110e5badaba8250c7b
sympy
exponential.py
11
8
https://github.com/sympy/sympy.git
2
93
0
24
144
Python
{ "docstring": "\n Returns this function as a 2-tuple representing a complex number.\n\n Examples\n ========\n\n >>> from sympy import I, exp\n >>> from sympy.abc import x\n >>> exp(x).as_real_imag()\n (exp(re(x))*cos(im(x)), exp(re(x))*sin(im(x)))\n >>> exp(1)....
def as_real_imag(self, deep=True, **hints): from sympy.functions.elementary.trigonometric import cos, sin re, im = self.args[0].as_real_imag() if deep: re = re.expand(deep, **hints) im = im.expand(deep, **hints) cos, sin = cos(im), sin(im) return ...
38,346
159,579
199
rasa/core/exporter.py
51
11
async def _get_conversation_ids_to_process(self) -> Set[Text]: conversation_ids_in_tracker_store = ( await self._get_conversation_ids_in_tracker() ) if not self.requested_conversation_ids: return conversation_ids_in_tracker_store self._validate_all_requ...
Async Tracker Store Support (#10696) Make `TrackerStore` interface methods asynchronous and supply an `AwaitableTrackerstore` wrapper for custom tracker stores which do not implement the methods as asynchronous. Squashed commits: * refactor tracker store and tests to be async * update core modules with asyn...
_get_conversation_ids_to_process
ca316fc80cb490ecf1e2e7261fb7fcef22fccc4a
rasa
exporter.py
11
27
https://github.com/RasaHQ/rasa.git
3
57
0
40
101
Python
{ "docstring": "Get conversation IDs that are good for processing.\n\n Finds the intersection of events that are contained in the tracker store with\n those events requested as a command-line argument.\n\n Returns:\n Conversation IDs that are both requested and contained in the tracker...
async def _get_conversation_ids_to_process(self) -> Set[Text]: conversation_ids_in_tracker_store = ( await self._get_conversation_ids_in_tracker() ) if not self.requested_conversation_ids: return conversation_ids_in_tracker_store self._validate_all_requ...
80,808
271,577
115
keras/engine/training.py
39
6
def call(self, inputs, training=None, mask=None): raise NotImplementedError( "Unimplemented `tf.keras.Model.call()`: if you " "intend to create a `Model` with the Functional " "API, please provide `inputs` and
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
call
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
training.py
9
8
https://github.com/keras-team/keras.git
1
25
0
34
48
Python
{ "docstring": "Calls the model on new inputs and returns the outputs as tensors.\n\n In this case `call()` just reapplies\n all ops in the graph to the new inputs\n (e.g. build a new computational graph from the provided inputs).\n\n Note: This method should not be called directly. It is ...
def call(self, inputs, training=None, mask=None): raise NotImplementedError( "Unimplemented `tf.keras.Model.call()`: if you " "intend to create a `Model` with the Functional " "API, please provide `inputs` and `outputs` " "arguments. Otherwise, subclass `...
35,438
153,549
655
modin/core/io/text/json_dispatcher.py
157
58
def _read(cls, path_or_buf, **kwargs): path_or_buf = cls.get_path_or_buffer(path_or_buf) if isinstance(path_or_buf, str): if not cls.file_exists(path_or_buf): return cls.single_worker_read(path_or_buf, **kwargs) path_or_buf = cls.get_path(path_or_buf) ...
REFACTOR-#3853: interacting with Dask interface through 'DaskWrapper' class (#3854) Co-authored-by: Devin Petersohn <devin-petersohn@users.noreply.github.com> Co-authored-by: Dmitry Chigarev <dchigarev@users.noreply.github.com> Co-authored-by: Yaroslav Igoshev <Poolliver868@mail.ru> Signed-off-by: Anatoly Myachev <...
_read
97769988a6f19e4b76f34238c97bf159ee7626a5
modin
json_dispatcher.py
16
48
https://github.com/modin-project/modin.git
7
398
0
106
641
Python
{ "docstring": "\n Read data from `path_or_buf` according to the passed `read_json` `kwargs` parameters.\n\n Parameters\n ----------\n path_or_buf : str, path object or file-like object\n `path_or_buf` parameter of `read_json` function.\n **kwargs : dict\n Para...
def _read(cls, path_or_buf, **kwargs): path_or_buf = cls.get_path_or_buffer(path_or_buf) if isinstance(path_or_buf, str): if not cls.file_exists(path_or_buf): return cls.single_worker_read(path_or_buf, **kwargs) path_or_buf = cls.get_path(path_or_buf) ...
46,339
190,124
248
manim/scene/three_d_scene.py
40
21
def stop_ambient_camera_rotation(self, about="theta"): about: str = about.lower() try: if config.renderer == RendererType.CAIRO: trackers = { "theta": self.camera.theta_tracker, "phi": self.camera.phi_tracker, ...
Replaced renderer strings with :class:`.RendererType` enum entries (#3017) * remove unused constants * remove deprecated --use_opengl_renderer flag * remove unnecessary workaround with class initialization * add OpenGLMobject.name to get rid of one renderer check * add VMobject.n_points_per_curve property ...
stop_ambient_camera_rotation
bd844f46d804c8cad50d06ad20ab5bebaee9987b
manim
three_d_scene.py
14
16
https://github.com/ManimCommunity/manim.git
4
101
0
36
171
Python
{ "docstring": "\n This method stops all ambient camera rotation.\n ", "language": "en", "n_whitespaces": 22, "n_words": 7, "vocab_size": 7 }
def stop_ambient_camera_rotation(self, about="theta"): about: str = about.lower() try: if config.renderer == RendererType.CAIRO: trackers = { "theta": self.camera.theta_tracker, "phi": self.camera.phi_tracker, ...
71,842
247,689
100
tests/rest/client/test_relations.py
25
13
def _get_bundled_aggregations(self) -> JsonDict: # Fetch the bundled aggregations of the event. channel = self.make_request( "GET", f"/_matrix/client/unstable/rooms/{self.room}/event/{self.parent_id}", access_token=self.user_token, ) self.asse...
Refactor relations tests (#12232) * Moves the relation pagination tests to a separate class. * Move the assertion of the response code into the `_send_relation` helper. * Moves some helpers into the base-class.
_get_bundled_aggregations
1da0f79d5455b594f2aa989106a672786f5b990f
synapse
test_relations.py
11
11
https://github.com/matrix-org/synapse.git
1
55
0
24
105
Python
{ "docstring": "\n Requests /event on the parent ID and returns the m.relations field (from unsigned), if it exists.\n ", "language": "en", "n_whitespaces": 31, "n_words": 16, "vocab_size": 15 }
def _get_bundled_aggregations(self) -> JsonDict: # Fetch the bundled aggregations of the event. channel = self.make_request( "GET", f"/_matrix/client/unstable/rooms/{self.room}/event/{self.parent_id}", access_token=self.user_token, ) self.asse...
21,558
102,634
394
chia/types/spend_bundle.py
153
27
def get_memos(self) -> Dict[bytes32, List[bytes]]: memos: Dict[bytes32, List[bytes]] = {} for coin_spend in self.coin_spends: result = Program.from_bytes(bytes(coin_spend.puzzle_reveal)).run( Program.from_bytes(bytes(coin_spend.solution)) ) fo...
Merge standalone wallet into main (#9793) * wallet changes from pac * cat changes * pool tests * pooling tests passing * offers * lint * mempool_mode * black * linting * workflow files * flake8 * more cleanup * renamed * remove obsolete test, don't cast announcement * memos ar...
get_memos
89f15f591cc3cc3e8ae40e95ffc802f7f2561ece
chia-blockchain
spend_bundle.py
17
18
https://github.com/Chia-Network/chia-blockchain.git
6
146
0
109
235
Python
{ "docstring": "\n Retrieves the memos for additions in this spend_bundle, which are formatted as a list in the 3rd parameter of\n CREATE_COIN. If there are no memos, the addition coin_id is not included. If they are not formatted as a list\n of bytes, they are not included. This is expensive to ...
def get_memos(self) -> Dict[bytes32, List[bytes]]: memos: Dict[bytes32, List[bytes]] = {} for coin_spend in self.coin_spends: result = Program.from_bytes(bytes(coin_spend.puzzle_reveal)).run( Program.from_bytes(bytes(coin_spend.solution)) ) fo...
12,256
60,712
34
.venv/lib/python3.8/site-packages/pip/_internal/index/collector.py
22
6
def _clean_url_path_part(part): # type: (str) -> str # We unquote prior
upd; format
_clean_url_path_part
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
transferlearning
collector.py
10
2
https://github.com/jindongwang/transferlearning.git
1
22
0
20
40
Python
{ "docstring": "\n Clean a \"part\" of a URL path (i.e. after splitting on \"@\" characters).\n ", "language": "en", "n_whitespaces": 20, "n_words": 13, "vocab_size": 12 }
def _clean_url_path_part(part): # type: (str) -> str # We unquote prior to quoting to make sure nothing is double quoted. return urllib.parse.quote(urllib.parse.unquote(part))
12,155
60,427
161
code/deep/BJMMD/caffe/scripts/cpp_lint.py
114
14
def CheckAltTokens(filename, clean_lines, linenum, error): line = clean_lines.elided[linenum] # Avoid preprocessor lines if Match(r'^\s*#', line): return # Last ditch effort to avoid multi-line comments. This will not help # if the comment started before the current line or ended after the # curre...
Balanced joint maximum mean discrepancy for deep transfer learning
CheckAltTokens
cc4d0564756ca067516f71718a3d135996525909
transferlearning
cpp_lint.py
14
10
https://github.com/jindongwang/transferlearning.git
5
91
0
84
154
Python
{ "docstring": "Check alternative keywords being used in boolean expressions.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found.\n ", "language...
def CheckAltTokens(filename, clean_lines, linenum, error): line = clean_lines.elided[linenum] # Avoid preprocessor lines if Match(r'^\s*#', line): return # Last ditch effort to avoid multi-line comments. This will not help # if the comment started before the current line or ended after the # curre...
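The `CheckAltTokens` record above flags C++ alternative boolean tokens (`and`, `or`, `not`) while skipping preprocessor lines. A hedged single-line sketch of that check — the token table and the `find_alt_tokens` name are illustrative, not the linter's actual internals:

```python
import re

# Hypothetical subset of the linter's alternative-token table.
_ALT_TOKEN_REPLACEMENT = {"and": "&&", "or": "||", "not": "!"}
_ALT_TOKEN_RE = re.compile(r"\b(and|or|not)\b")

def find_alt_tokens(line):
    """Return (token, suggested) pairs for alternative boolean tokens on one
    line, skipping preprocessor lines as the record above does."""
    if re.match(r"^\s*#", line):
        return []
    return [(m.group(1), _ALT_TOKEN_REPLACEMENT[m.group(1)])
            for m in _ALT_TOKEN_RE.finditer(line)]
```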
29,950
133,165
126
python/ray/util/joblib/__init__.py
39
10
def register_ray(): try: from ray.util.joblib.ray_backend import RayBackend register_parallel_backend("ray", RayBackend) except ImportError: msg = ( "T
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
register_ray
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
__init__.py
12
12
https://github.com/ray-project/ray.git
2
39
0
37
83
Python
{ "docstring": "Register Ray Backend to be called with parallel_backend(\"ray\").", "language": "en", "n_whitespaces": 7, "n_words": 8, "vocab_size": 8 }
def register_ray(): try: from ray.util.joblib.ray_backend import RayBackend register_parallel_backend("ray", RayBackend) except ImportError: msg = ( "To use the ray backend you must install ray." "Try running 'pip install ray'." "See https://docs...
50,470
203,603
22
django/contrib/auth/backends.py
8
5
def get_group_permissions(self, user_obj, obj=None): return self._get_permissi
Refs #33476 -- Reformatted code with Black.
get_group_permissions
9c19aff7c7561e3a82978a272ecdaad40dda5c00
django
backends.py
8
2
https://github.com/django/django.git
1
23
0
8
37
Python
{ "docstring": "\n Return a set of permission strings the user `user_obj` has from the\n groups they belong.\n ", "language": "en", "n_whitespaces": 37, "n_words": 15, "vocab_size": 14 }
def get_group_permissions(self, user_obj, obj=None): return self._get_permissions(user_obj, obj, "group")
74,993
257,048
20
haystack/document_stores/deepsetcloud.py
6
5
def get_evaluation_sets(self) -> List[dict]: return self.evaluation_set_client.get
EvaluationSetClient for deepset cloud to fetch evaluation sets and la… (#2345) * EvaluationSetClient for deepset cloud to fetch evaluation sets and labels for one specific evaluation set * make DeepsetCloudDocumentStore able to fetch uploaded evaluation set names * fix missing renaming of get_evaluation_set_name...
get_evaluation_sets
a273c3a51dd432bd125e5b35df4be94260a2cdb7
haystack
deepsetcloud.py
8
8
https://github.com/deepset-ai/haystack.git
1
19
0
6
33
Python
{ "docstring": "\n Returns a list of uploaded evaluation sets to deepset cloud.\n\n :return: list of evaluation sets as dicts\n These contain (\"name\", \"evaluation_set_id\", \"created_at\", \"matched_labels\", \"total_labels\") as fields.\n ", "language": "en", "n_whitespace...
def get_evaluation_sets(self) -> List[dict]: return self.evaluation_set_client.get_evaluation_sets()
42,555
177,986
231
label_studio/data_import/uploader.py
34
16
def allowlist_svg(dirty_xml): from lxml.html import clean allow_tags = [ 'xml', 'svg', 'circle', 'ellipse', 'line', 'path', 'polygon', 'polyline', 'rect' ] cleaner = clean.Cleaner( ...
fix: DEV-2236: Stored XSS via SVG file (#2273) * user uploaded content rendered as plain text or known image only * allow list for svg in progress * allow list for svg basic pass * add error handling * add to file processing re: code review * rm uneeded code * add env var to disable svg cleaning *...
allowlist_svg
53f6308186aa131946e196b0409f3d732ec9e007
label-studio
uploader.py
9
23
https://github.com/heartexlabs/label-studio.git
1
77
0
31
126
Python
{ "docstring": "Filter out malicious/harmful content from SVG files\n by defining allowed tags\n ", "language": "en", "n_whitespaces": 17, "n_words": 11, "vocab_size": 11 }
def allowlist_svg(dirty_xml): from lxml.html import clean allow_tags = [ 'xml', 'svg', 'circle', 'ellipse', 'line', 'path', 'polygon', 'polyline', 'rect' ] cleaner = clean.Cleaner( ...
21,021
101,613
87
tools/sort/sort_methods.py
28
15
def _sort_filelist(self) -> None: for filename, image, alignments in self._iterator(): self.score_image(filename, image, alignments) self.sort() logger.debug("sorted list: %s", [r[0] if isinsta
Overhaul sort: - Standardize image data reading and writing - Optimize loading (just one pass required) - Make all sort groups binnable (to greater or lesser results) - Add sort by pitch - Deprecate multiple options - linting, docs + locales
_sort_filelist
98d01760e469fd2108eed8d0b0a1ba6297c3177c
faceswap
sort_methods.py
12
16
https://github.com/deepfakes/faceswap.git
4
68
0
24
104
Python
{ "docstring": " Call the sort method's logic to populate the :attr:`_results` attribute.\n\n Put logic for scoring an individual frame in in :attr:`score_image` of the child\n\n Returns\n -------\n list\n The sorted file. A list of tuples with the filename in the first position...
def _sort_filelist(self) -> None: for filename, image, alignments in self._iterator(): self.score_image(filename, image, alignments) self.sort() logger.debug("sorted list: %s", [r[0] if isinstance(r, (tuple, list)) else r for r in self._result])
81,369
275,284
76
keras/optimizers/optimizer_experimental/optimizer.py
29
5
def finalize_variable_values(self, var_list): if self.use_ema: # If the optimizer uses EMA, then when finalizing, we replace the model # variable value with its moving average stored inside optimizer. self._overwrite_model_variable
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
finalize_variable_values
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
optimizer.py
9
3
https://github.com/keras-team/keras.git
2
19
0
27
35
Python
{ "docstring": "Set the final value of model's trainable variables.\n\n Sometimes there are some extra steps before ending the variable updates,\n such as overriding the model variables with its average value.\n\n Args:\n var_list: list of model variables.\n ", "language": "en",...
def finalize_variable_values(self, var_list): if self.use_ema: # If the optimizer uses EMA, then when finalizing, we replace the model # variable value with its moving average stored inside optimizer. self._overwrite_model_variables_with_average_value(var_list)
2,402
12,761
34
jina/serve/stream/__init__.py
9
5
async def wait_floating_requests_end(self): while self.total_num_floating_tasks_alive > 0:
feat: wait for floating Executor tasks (#5004)
wait_floating_requests_end
1b3edacf531e4e8d29eac4ea73785f8d201255d6
jina
__init__.py
10
3
https://github.com/jina-ai/jina.git
2
20
0
9
37
Python
{ "docstring": "\n Await this coroutine to make sure that all the floating tasks that the request handler may bring are properly consumed\n ", "language": "en", "n_whitespaces": 35, "n_words": 20, "vocab_size": 18 }
async def wait_floating_requests_end(self): while self.total_num_floating_tasks_alive > 0: await asyncio.sleep(0)
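The jina coroutine above polls a task counter, yielding to the event loop via `asyncio.sleep(0)` until every floating task has finished. A runnable sketch of the same pattern, using a hypothetical `Streamer` stand-in (the counter-decrement wrapper is my own addition, not jina's code):

```python
import asyncio

class Streamer:
    # Hypothetical stand-in: a counter tracks "floating" background tasks,
    # and the wait coroutine spins (yielding control each iteration) until
    # the counter reaches zero.
    def __init__(self):
        self.total_num_floating_tasks_alive = 0

    def launch_floating(self, coro):
        # Increment synchronously so the counter is accurate before the
        # task has had a chance to run.
        self.total_num_floating_tasks_alive += 1

        async def _wrapped():
            try:
                await coro
            finally:
                self.total_num_floating_tasks_alive -= 1

        return asyncio.create_task(_wrapped())

    async def wait_floating_requests_end(self):
        while self.total_num_floating_tasks_alive > 0:
            await asyncio.sleep(0)

async def main():
    s = Streamer()
    for _ in range(3):
        s.launch_floating(asyncio.sleep(0.01))
    await s.wait_floating_requests_end()
    return s.total_num_floating_tasks_alive

print(asyncio.run(main()))  # 0
```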
21,504
102,266
61
torch/functional.py
46
13
def _lu_impl(A, pivot=True, get_infos=False, out=None): # type: (Tensor, bool, bool, Any) -> Tuple[Tensor, Tensor, Tensor] r # If get_infos is True, then we don't need to ch
Add linalg.lu_factor (#66933) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66933 This PR exposes `torch.lu` as `torch.linalg.lu_factor` and `torch.linalg.lu_factor_ex`. This PR also adds support for matrices with zero elements both in the size of the matrix and the batch. Note that this fu...
_lu_impl
a35b4b49d2e2a215a64e455101c779ae623b3321
pytorch
functional.py
10
77
https://github.com/pytorch/pytorch.git
1
37
0
42
83
Python
{ "docstring": "Computes the LU factorization of a matrix or batches of matrices\n :attr:`A`. Returns a tuple containing the LU factorization and\n pivots of :attr:`A`. Pivoting is done if :attr:`pivot` is set to\n ``True``.\n\n .. note::\n * The returned permutation matrix for every matrix in the...
def _lu_impl(A, pivot=True, get_infos=False, out=None): # type: (Tensor, bool, bool, Any) -> Tuple[Tensor, Tensor, Tensor] r # If get_infos is True, then we don't need to check for errors and vice versa return torch._lu_with_info(A, pivot=pivot, check_errors=(not get_infos)) if TYPE_CHECKING: _List...
72,153
248,205
324
tests/config/test_workers.py
32
9
def test_new_configs_appservice_worker(self) -> None: appservice_worker_config = self._make_worker_config( worker_app="synapse.app.generic_worker", worker_name="worker1" ) self.assertTrue( appservice_worker_config._should_this_worker_perform_duty( ...
Add the `notify_appservices_from_worker` configuration option (superseding `notify_appservices`) to allow a generic worker to be designated as the worker to send traffic to Application Services. (#12452)
test_new_configs_appservice_worker
c2d50e9f6c5f7b01cbd8bf1dca36cb8c0e7b007f
synapse
test_workers.py
12
27
https://github.com/matrix-org/synapse.git
1
68
0
21
125
Python
{ "docstring": "\n Tests new config options. This is for the worker's config.\n ", "language": "en", "n_whitespaces": 25, "n_words": 10, "vocab_size": 10 }
def test_new_configs_appservice_worker(self) -> None: appservice_worker_config = self._make_worker_config( worker_app="synapse.app.generic_worker", worker_name="worker1" ) self.assertTrue( appservice_worker_config._should_this_worker_perform_duty( ...
48,477
197,334
41
sympy/physics/hydrogen.py
22
6
def E_nl(n, Z=1): n
Remove abbreviations in documentation
E_nl
65be461082dda54c8748922f9c29a19af1279fe1
sympy
hydrogen.py
10
5
https://github.com/sympy/sympy.git
3
52
0
22
86
Python
{ "docstring": "\n Returns the energy of the state (n, l) in Hartree atomic units.\n\n The energy does not depend on \"l\".\n\n Parameters\n ==========\n\n n : integer\n Principal Quantum Number which is\n an integer with possible values as 1, 2, 3, 4,...\n Z :\n Atomic number (...
def E_nl(n, Z=1): n, Z = S(n), S(Z) if n.is_integer and (n < 1): raise ValueError("'n' must be positive integer") return -Z**2/(2*n**2)
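The SymPy helper above evaluates to `-Z**2/(2*n**2)` Hartree. A plain-`Fraction` sketch of the same formula (no SymPy dependency; validation mirrors the sampled code):

```python
from fractions import Fraction

def E_nl(n, Z=1):
    # Energy of the hydrogen-like state (n, l) in Hartree atomic units:
    # -Z**2 / (2*n**2). The energy depends only on the principal quantum
    # number n, not on l.
    if n < 1 or int(n) != n:
        raise ValueError("'n' must be positive integer")
    return Fraction(-Z**2, 2 * n**2)

print(E_nl(1))        # -1/2 (hydrogen ground state)
print(E_nl(2))        # -1/8
print(E_nl(1, Z=2))   # -2 (He+ ground state)
```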
69,687
241,760
96
tests/checkpointing/test_model_checkpoint.py
30
21
def test_model_checkpoint_no_extraneous_invocations(tmpdir): model = LogInTwoMethods() num_epochs = 4 model_checkpoint = ModelCheckpointTestInvocations(monitor="early_stop_on", expected_count=num_epochs, save_top_k=-1) trainer = Trainer( strategy="ddp_spawn", accelerator="cpu", ...
Update `tests/checkpointing/*.py` to use `devices` instead of `gpus` or `ipus` (#11408) Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
test_model_checkpoint_no_extraneous_invocations
d2d284fd6e3e8f53e9a44ab233771850af1e4dab
lightning
test_model_checkpoint.py
10
14
https://github.com/Lightning-AI/lightning.git
1
77
0
27
130
Python
{ "docstring": "Test to ensure that the model callback saves the checkpoints only once in distributed mode.", "language": "en", "n_whitespaces": 14, "n_words": 15, "vocab_size": 14 }
def test_model_checkpoint_no_extraneous_invocations(tmpdir): model = LogInTwoMethods() num_epochs = 4 model_checkpoint = ModelCheckpointTestInvocations(monitor="early_stop_on", expected_count=num_epochs, save_top_k=-1) trainer = Trainer( strategy="ddp_spawn", accelerator="cpu", ...
55,204
218,207
175
python3.10.4/Lib/importlib/abc.py
45
12
def find_module(self, fullname, path): warnings.warn("MetaPathFinder.find_module() is deprecated since Python " "3.4 in favor of MetaPathFinder.find_spec() and is "
add python 3.10.4 for windows
find_module
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
abc.py
9
10
https://github.com/XX-net/XX-Net.git
3
56
0
34
92
Python
{ "docstring": "Return a loader for the module.\n\n If no module is found, return None. The fullname is a str and\n the path is a list of strings or None.\n\n This method is deprecated since Python 3.4 in favor of\n finder.find_spec(). If find_spec() exists then backwards-compatible\n ...
def find_module(self, fullname, path): warnings.warn("MetaPathFinder.find_module() is deprecated since Python " "3.4 in favor of MetaPathFinder.find_spec() and is " "slated for removal in Python 3.12", DeprecationWarning, ...
24,533
111,992
440
nni/algorithms/hpo/evolution_tuner.py
103
28
def _generate_individual(self, parameter_id): pos = -1 for i in range(len(self.population)): if self.population[i].result is None: pos = i break if pos != -1: indiv = copy.deepcopy(self.population[pos]) self.populatio...
[WIP] add doc for evolution (#4575)
_generate_individual
de6662a4a0fbfc557614b6c022edaf8117de7a5a
nni
evolution_tuner.py
15
27
https://github.com/microsoft/nni.git
8
259
0
70
403
Python
{ "docstring": "\n This function will generate the config for a trial.\n If at the first generation, randomly generates individuals to satisfy self.population_size.\n Otherwise, random choose a pair of individuals and compare their fitnesses.\n The worst of the pair will be removed. Copy t...
def _generate_individual(self, parameter_id): pos = -1 for i in range(len(self.population)): if self.population[i].result is None: pos = i break if pos != -1: indiv = copy.deepcopy(self.population[pos]) self.populatio...
5,417
30,232
50
spotdl/console/web.py
15
11
async def connect(self): connection = {"client_id": self.client_id, "websocket": self.websocket} logging.info(f"Connect
fixed docstrings
connect
448bd75fe5de981995446a536963c5bd11e491ec
spotify-downloader
web.py
9
5
https://github.com/spotDL/spotify-downloader.git
1
44
0
15
83
Python
{ "docstring": "\n Called when a new client connects to the websocket.\n ", "language": "en", "n_whitespaces": 24, "n_words": 9, "vocab_size": 9 }
async def connect(self): connection = {"client_id": self.client_id, "websocket": self.websocket} logging.info(f"Connecting WebSocket: {connection}") await self.websocket.accept() WSProgressHandler.instances.append(self)
70,180
243,990
73
mmdet/datasets/custom.py
20
10
def prepare_test_img(self, idx): img_info = self.data_infos[idx] results = dict(img_info=img_in
[Feature] Support OpenImages Dataset (#6331) * [Feature] support openimage group of eval * [Feature] support openimage group of eval * support openimage dataset * support openimage challenge dataset * fully support OpenImages-V6 and OpenImages Challenge 2019 * Fix some logic error * update config fil...
prepare_test_img
1516986a616fee8bb741d0ab2be40683045efccd
mmdetection
custom.py
10
7
https://github.com/open-mmlab/mmdetection.git
2
56
0
18
92
Python
{ "docstring": "Get testing data after pipeline.\n\n Args:\n idx (int): Index of data.\n\n Returns:\n dict: Testing data after pipeline with new keys introduced by \\\n pipeline.\n ", "language": "en", "n_whitespaces": 82, "n_words": 24, "vocab_size": ...
def prepare_test_img(self, idx): img_info = self.data_infos[idx] results = dict(img_info=img_info) if self.proposals is not None: results['proposals'] = self.proposals[idx] self.pre_pipeline(results) return self.pipeline(results)
@frappe.whitelist()
14,456
67,253
67
erpnext/regional/report/provident_fund_deductions/provident_fund_deductions.py
107
26
def get_data(filters): data = [] conditions = get_conditions(filters) salary_slips = frappe.db.sql( % (conditions), as_dict=1, ) component_type_dict = frappe._dict( frappe.db.sql( ) ) if not len(component_type_dict): return [] entry = frappe.db.sql( % (conditions, ", ".join(["%s"] * l...
style: format code with black
get_data
494bd9ef78313436f0424b918f200dab8fc7c20b
erpnext
provident_fund_deductions.py
17
52
https://github.com/frappe/erpnext.git
7
337
1
60
586
Python
{ "docstring": " select sal.name from `tabSalary Slip` sal\n\t\twhere docstatus = 1 %s\n\t\t select name, component_type from `tabSalary Component`\n\t\twhere component_type in ('Provident Fund', 'Additional Provident Fund', 'Provident Fund Loan') select sal.name, sal.employee, sal.employee_name, ded.salary_component...
def get_data(filters): data = [] conditions = get_conditions(filters) salary_slips = frappe.db.sql( % (conditions), as_dict=1, ) component_type_dict = frappe._dict( frappe.db.sql( ) ) if not len(component_type_dict): return [] entry = frappe.db.sql( % (conditions, ", ".join(["%s"] * l...
106,449
307,681
55
homeassistant/components/trace/models.py
12
7
def as_dict(self) -> dict[str, Any]: return { "extended_dict": self.a
Move Trace classes to separate module (#78433)
as_dict
219cee2ca9f6cd9eb7e0abcbda6d9540240e20d3
core
models.py
9
6
https://github.com/home-assistant/core.git
1
32
0
12
55
Python
{ "docstring": "Return an dictionary version of this ActionTrace for saving.", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
def as_dict(self) -> dict[str, Any]: return { "extended_dict": self.as_extended_dict(), "short_dict": self.as_short_dict(), }
@keras_export( "keras.__internal__.optimizers.convert_to_legacy_optimizer", v1=[] )
83,359
280,502
761
keras/optimizers/__init__.py
218
49
def deserialize(config, custom_objects=None, **kwargs): # loss_scale_optimizer has a direct dependency of optimizer, import here # rather than top to avoid the cyclic
Move new optimizer out of optimizer_experimental/ directory. PiperOrigin-RevId: 488998585
deserialize
5a105aadbdc6fde2c2529280c4789864adbb81c7
keras
__init__.py
12
55
https://github.com/keras-team/keras.git
6
311
1
130
547
Python
{ "docstring": "Inverse of the `serialize` function.\n\n Args:\n config: Optimizer configuration dictionary.\n custom_objects: Optional dictionary mapping names (strings) to custom\n objects (classes and functions) to be considered during\n deserialization.\n\n Returns:\n ...
def deserialize(config, custom_objects=None, **kwargs): # loss_scale_optimizer has a direct dependency of optimizer, import here # rather than top to avoid the cyclic dependency. from keras.mixed_precision import ( loss_scale_optimizer, ) use_legacy_optimizer = kwargs.pop("use_legacy_o...
55,689
219,661
85
python3.10.4/Lib/_pydecimal.py
28
11
def multiply(self, a, b): a = _convert_other(a, raiseit=True) r = a.__mul__(b, context=self) if r is NotImplemented: raise TypeError("Unable to convert %s to Decimal" %
add python 3.10.4 for windows
multiply
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
_pydecimal.py
11
7
https://github.com/XX-net/XX-Net.git
2
48
0
24
78
Python
{ "docstring": "multiply multiplies two operands.\n\n If either operand is a special value then the general rules apply.\n Otherwise, the operands are multiplied together\n ('long multiplication'), resulting in a number which may be as long as\n the sum of the lengths of the two operands.\...
def multiply(self, a, b): a = _convert_other(a, raiseit=True) r = a.__mul__(b, context=self) if r is NotImplemented: raise TypeError("Unable to convert %s to Decimal" % b) else: return r
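The `_pydecimal` context method above backs the stdlib `decimal.Context.multiply` API. A usage example of the "long multiplication" semantics described in the docstring (result length up to the sum of the operands' lengths, then rounded to the context precision):

```python
from decimal import Decimal, Context

ctx = Context(prec=9)
# Trailing zeros are significant in decimal arithmetic: 1.20 * 3 keeps
# three significant digits in the coefficient.
print(ctx.multiply(Decimal("1.20"), Decimal("3")))   # 3.60
print(ctx.multiply(Decimal("7"), Decimal("3")))      # 21
print(ctx.multiply(Decimal("0.9"), Decimal("0.8")))  # 0.72
```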
7,871
43,210
111
tests/utils/test_db_cleanup.py
17
12
def test_run_cleanup_skip_archive(self, cleanup_table_mock, kwargs, should_skip): run_cleanup( clean_before_timestamp=None, table_
Don't rely on current ORM structure for db clean command (#23574) For command DB clean, by not relying on the ORM models, we will be able to use the command even when the metadatabase is not yet upgraded to the version of Airflow you have installed. Additionally we archive all rows before deletion.
test_run_cleanup_skip_archive
95bd6b71cc9f5da377e272707f7b68000d980939
airflow
test_db_cleanup.py
10
10
https://github.com/apache/airflow.git
1
52
0
17
78
Python
{ "docstring": "test that delete confirmation input is called when appropriate", "language": "en", "n_whitespaces": 8, "n_words": 9, "vocab_size": 9 }
def test_run_cleanup_skip_archive(self, cleanup_table_mock, kwargs, should_skip): run_cleanup( clean_before_timestamp=None, table_names=['log'], dry_run=None, verbose=None, confirm=False, **kwargs, ) assert cleanup_...
976
6,409
41
ludwig/datasets/base_dataset.py
9
5
def process(self) -> None: if not self.is_downloaded(): self.download()
Add and expand docstrings in base_dataset.py (#1819)
process
d0bcbb2a6e2ab82501fd34ef583329ff2ac22a15
ludwig
base_dataset.py
9
5
https://github.com/ludwig-ai/ludwig.git
2
26
0
9
48
Python
{ "docstring": "Process the dataset into a dataframe and save it at self.processed_dataset_path.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
def process(self) -> None: if not self.is_downloaded(): self.download() self.process_downloaded_dataset()
30,799
136,008
66
rllib/utils/tests/test_actor_manager.py
24
10
def test_healthy_only_works_for_list_of_functions(self): act
[RLlib] Refactor `WorkerSet` on top of `FaultTolerantActorManager`. (#29938) Signed-off-by: Jun Gong <jungong@anyscale.com>
test_healthy_only_works_for_list_of_functions
e707ce4fb3717e3c05118c57f503dfbd03552ca9
ray
test_actor_manager.py
10
11
https://github.com/ray-project/ray.git
4
115
0
22
77
Python
{ "docstring": "Test healthy only mode works when a list of funcs are provided.", "language": "en", "n_whitespaces": 11, "n_words": 12, "vocab_size": 12 }
def test_healthy_only_works_for_list_of_functions(self): actors = [Actor.remote(i) for i in range(4)] manager = FaultTolerantActorManager(actors=actors) # Mark first and second actor as unhealthy. manager.set_actor_state(1, False) manager.set_actor_state(2, False)
19,665
99,587
235
tests/sentry/integrations/slack/notifications/test_unassigned.py
42
21
def test_unassignment(self, mock_func): notification = UnassignedActivityNotification(
fix(notifications): Use `metrics_key` (#34572)
test_unassignment
1730c481f1a8a71446326fa1ff72e10663016385
sentry
test_unassigned.py
14
19
https://github.com/getsentry/sentry.git
1
93
0
34
171
Python
{ "docstring": "\n Test that a Slack message is sent with the expected payload when an issue is unassigned\n ", "language": "en", "n_whitespaces": 31, "n_words": 16, "vocab_size": 15 }
def test_unassignment(self, mock_func): notification = UnassignedActivityNotification( Activity( project=self.project, group=self.group, user=self.user, type=ActivityType.ASSIGNED, data={"assignee": ""}, ...
83,843
281,546
61
gamestonk_terminal/stocks/insider/insider_controller.py
26
11
def print_help(self): has_ticker_start = "
Terminal Wide Rich (#1161) * My idea for how we handle Rich moving forward * remove independent consoles * FIxed pylint issues * add a few vars * Switched print to console * More transitions * Changed more prints * Replaced all prints * Fixing tabulate * Finished replace tabulate * Finish...
print_help
82747072c511beb1b2672846ae2ee4aec53eb562
OpenBBTerminal
insider_controller.py
9
45
https://github.com/OpenBB-finance/OpenBBTerminal.git
3
42
0
19
100
Python
{ "docstring": "Print help[cmds]\n view view available presets\n set set one of the available presets[/cmds]\n\n[param]PRESET: [/param]{self.preset}[cmds]\n\n filter filter insiders based on preset [src][Open Insider][/src]\n\n\n load load a specific stock ticker for ana...
def print_help(self): has_ticker_start = "[unvl]" if not self.ticker else "" has_ticker_end = "[/unvl]" if not self.ticker else "" help_text = f console.print(text=help_text, menu="Stocks - Insider Trading")
@pytest.mark.django_db
18,216
87,078
47
tests/sentry/relay/test_config.py
23
12
def test_project_config_dynamic_sampling_is_none(default_project): default_project.update_option("sentry:dynamic_sampling", None) with Feature({"organizations:server-side-sampling": True}): cfg = get_project_config(default_project) cfg = cfg.to_dict() dynamic_sampling = get_path(cfg, "con...
feat(dynamic-sampling): Handles updating ProjectConfig with uniform DS rule for v2 [TET-465] (#40268) This PR forces your uniform rule by your plan or respect old logic. If both feature flags are enabled dynamic-sampling-basic flag takes the highest precedence. Original PR https://github.com/getsentry/sentry/pull...
test_project_config_dynamic_sampling_is_none
e0e2c4ff4248042abda3cc93024930dada416af8
sentry
test_config.py
12
7
https://github.com/getsentry/sentry.git
1
51
1
19
103
Python
{ "docstring": "\n Tests test check inc-237 that dynamic sampling is None,\n so it's pass when we have fix and fails when we dont\n ", "language": "en", "n_whitespaces": 31, "n_words": 21, "vocab_size": 19 }
def test_project_config_dynamic_sampling_is_none(default_project): default_project.update_option("sentry:dynamic_sampling", None) with Feature({"organizations:server-side-sampling": True}): cfg = get_project_config(default_project) cfg = cfg.to_dict() dynamic_sampling = get_path(cfg, "con...
56,268
221,198
29
python3.10.4/Lib/bz2.py
8
8
def seek(self, offset, whence=io.SEEK_SET):
add python 3.10.4 for windows
seek
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
bz2.py
8
3
https://github.com/XX-net/XX-Net.git
1
30
0
8
48
Python
{ "docstring": "Change the file position.\n\n The new position is specified by offset, relative to the\n position indicated by whence. Values for whence are:\n\n 0: start of stream (default); offset must not be negative\n 1: current stream position\n 2: end of stream; of...
def seek(self, offset, whence=io.SEEK_SET): self._check_can_seek() return self._buffer.seek(offset, whence)
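The `BZ2File.seek` method above accepts the `whence` values listed in its docstring (and warns that seeking is emulated, so it can be slow). A round-trip example entirely in memory:

```python
import bz2
import io

# Compress a short payload, then seek within the *decompressed* stream.
raw = bz2.compress(b"hello world")
with bz2.BZ2File(io.BytesIO(raw), "rb") as f:
    f.seek(6)                  # whence defaults to io.SEEK_SET (absolute)
    first = f.read()           # b'world'
    f.seek(-5, io.SEEK_END)    # end-relative: offset must not be positive
    second = f.read()          # b'world'

print(first, second)
```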
50,903
204,818
100
django/db/backends/base/base.py
26
13
def savepoint(self): if not self._savepoint_allowed(): return thread_ident = _thread.get_ident() tid = str(thread_ident).replace("-", "") self.savepoint_state += 1 sid = "s%s_x%d" % (tid, self.savepoint_state) self.validate_thread_sharing() ...
Refs #33476 -- Reformatted code with Black.
savepoint
9c19aff7c7561e3a82978a272ecdaad40dda5c00
django
base.py
10
10
https://github.com/django/django.git
2
64
0
22
113
Python
{ "docstring": "\n Create a savepoint inside the current transaction. Return an\n identifier for the savepoint that will be used for the subsequent\n rollback or commit. Do nothing if savepoints are not supported.\n ", "language": "en", "n_whitespaces": 59, "n_words": 30, "vocab_si...
def savepoint(self): if not self._savepoint_allowed(): return thread_ident = _thread.get_ident() tid = str(thread_ident).replace("-", "") self.savepoint_state += 1 sid = "s%s_x%d" % (tid, self.savepoint_state) self.validate_thread_sharing() ...
13,876
65,388
9
erpnext/accounts/report/unpaid_expense_claim/unpaid_expense_claim.py
20
9
def get_unclaimed_expese_claims(filters): cond = "1=1" if filters.get("employee"):
style: format code with black
get_unclaimed_expese_claims
494bd9ef78313436f0424b918f200dab8fc7c20b
erpnext
unpaid_expense_claim.py
10
22
https://github.com/frappe/erpnext.git
2
42
0
17
73
Python
{ "docstring": "\n\t\tselect\n\t\t\tec.employee, ec.employee_name, ec.name, ec.total_sanctioned_amount, ec.total_amount_reimbursed,\n\t\t\tsum(gle.credit_in_account_currency - gle.debit_in_account_currency) as outstanding_amt\n\t\tfrom\n\t\t\t`tabExpense Claim` ec, `tabGL Entry` gle\n\t\twhere\n\t\t\tgle.against_vouc...
def get_unclaimed_expese_claims(filters): cond = "1=1" if filters.get("employee"): cond = "ec.employee = %(employee)s" return frappe.db.sql( .format( cond=cond ), filters, as_list=1, )
3,279
20,227
73
pipenv/patched/notpip/_vendor/platformdirs/unix.py
27
9
def site_config_dir(self) -> str: # XDG default for $XDG_CONFIG_DIRS only first, if multipath is False path = os.environ.get
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for p...
site_config_dir
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
unix.py
9
10
https://github.com/pypa/pipenv.git
2
38
0
24
71
Python
{ "docstring": "\n :return: config directories shared by users (if `multipath <platformdirs.api.PlatformDirsABC.multipath>`\n is enabled and ``XDG_DATA_DIR`` is set and a multi path the response is also a multi path separated by the OS\n path separator), e.g. ``/etc/xdg/$appname/$version``\n ...
def site_config_dir(self) -> str: # XDG default for $XDG_CONFIG_DIRS only first, if multipath is False path = os.environ.get("XDG_CONFIG_DIRS", "") if not path.strip(): path = "/etc/xdg" return self._with_multi_path(path)
76,206
260,360
29
sklearn/decomposition/_fastica.py
8
7
def fit_transform(self, X, y=None): self._validate_params() return self._fit_transform(X, compute_sources=
MAINT Use _validate_params in FastICA (#23711) Co-authored-by: Guillaume Lemaitre <g.lemaitre58@gmail.com> Co-authored-by: jeremiedbb <jeremiedbb@yahoo.fr>
fit_transform
4cc347d4d0cbbfdcbd353f08842e0668fed78c9f
scikit-learn
_fastica.py
8
3
https://github.com/scikit-learn/scikit-learn.git
1
28
0
8
45
Python
{ "docstring": "Fit the model and recover the sources from X.\n\n Parameters\n ----------\n X : array-like of shape (n_samples, n_features)\n Training data, where `n_samples` is the number of samples\n and `n_features` is the number of features.\n\n y : Ignored\n ...
def fit_transform(self, X, y=None): self._validate_params() return self._fit_transform(X, compute_sources=True)
18,148
86,689
672
src/sentry/api/endpoints/project_dynamic_sampling.py
92
53
def __fetch_randomly_sampled_transactions(self, project, query, sample_size, query_time_range): sampling_factor = self.__generate_transactions_sampling_factor( project=project, query=query, sample_size=sample_size, query_time_range=query_time_range, ...
feat(dynamic-sampling): Improve empty transaction breakdown message [TET-338] (#39539) This PR add new attribute parentProjectBreakdown to /api/0/projects/<organization_slug>/<project_slug>/dynamic-sampling/distribution/ api: ``` { "projectBreakdown": null, "sampleSize": 0, "startTimestamp": null, "end...
__fetch_randomly_sampled_transactions
ceee9dfd8d6fed70d34546e7b46ebb7bf1d49745
sentry
project_dynamic_sampling.py
19
52
https://github.com/getsentry/sentry.git
1
275
0
78
436
Python
{ "docstring": "\n Fetches a random sample of transactions of size `sample_size` in the last period\n defined by `stats_period`. The random sample is fetched by generating a random number by\n for every row, and then doing a modulo operation on it, and if that number is divisible\n by the ...
def __fetch_randomly_sampled_transactions(self, project, query, sample_size, query_time_range): sampling_factor = self.__generate_transactions_sampling_factor( project=project, query=query, sample_size=sample_size, query_time_range=query_time_range, ...
@frappe.whitelist() @frappe.validate_and_sanitize_search_inputs
13,591
64,270
175
erpnext/controllers/queries.py
235
55
def item_query(doctype, txt, searchfield, start, page_len, filters, as_dict=False): conditions = [] if isinstance(filters, str): filters = json.loads(filters) #Get searchfields from meta and use in Item Link field query meta = frappe.get_meta("Item", cached=True) searchfields = meta.get_search_fields() # the...
fix: ignore empty customer/supplier in item query (#29610) * fix: dont try to filter by customer/supplier if None * test: item query with emtpy supplier
item_query
41a95e56241ff8f3dceac7285f0bc6b9a43d7a06
erpnext
queries.py
16
73
https://github.com/frappe/erpnext.git
22
449
1
145
783
Python
{ "docstring": "select\n\t\t\ttabItem.name, tabItem.item_name, tabItem.item_group,\n\t\tif(length(tabItem.description) > 40, \\\n\t\t\tconcat(substr(tabItem.description, 1, 40), \"...\"), description) as description\n\t\t{columns}\n\t\tfrom tabItem\n\t\twhere tabItem.docstatus < 2\n\t\t\tand tabItem.disabled=0\n\t\t\...
def item_query(doctype, txt, searchfield, start, page_len, filters, as_dict=False): conditions = [] if isinstance(filters, str): filters = json.loads(filters) #Get searchfields from meta and use in Item Link field query meta = frappe.get_meta("Item", cached=True) searchfields = meta.get_search_fields() # the...
36,680
156,567
106
dask/array/core.py
44
10
def apply_and_enforce(*args, **kwargs): func = kwargs.pop("_func") expected_ndim = kwargs.pop("expected_ndim") out = func(*args, **kwargs) if getattr(out, "ndim", 0) != expected_ndim: out_ndim = getattr(out, "ndim", 0) raise ValueError( f"Dimensio
Add kwarg ``enforce_ndim`` to ``dask.array.map_blocks()`` (#8865)
apply_and_enforce
2b90415b02d3ad1b08362889e0818590ca3133f4
dask
core.py
12
11
https://github.com/dask/dask.git
2
68
0
36
129
Python
{ "docstring": "Apply a function, and enforce the output.ndim to match expected_ndim\n\n Ensures the output has the expected dimensionality.", "language": "en", "n_whitespaces": 19, "n_words": 17, "vocab_size": 15 }
def apply_and_enforce(*args, **kwargs): func = kwargs.pop("_func") expected_ndim = kwargs.pop("expected_ndim") out = func(*args, **kwargs) if getattr(out, "ndim", 0) != expected_ndim: out_ndim = getattr(out, "ndim", 0) raise ValueError( f"Dimension mismatch: expected out...
40,570
170,534
26
pandas/core/construction.py
13
8
def _sanitize_non_ordered(data) -> None: if isinstance(data, (set, frozenset)): raise TypeError(f"'{type(data).__name__}' type is unordered")
REF: simplify sanitize_array (#49347) REF: simpify sanitize_array
_sanitize_non_ordered
6b4fa02e10480c4ddae0714e36b7fe765fa42eac
pandas
construction.py
14
6
https://github.com/pandas-dev/pandas.git
2
26
0
13
55
Python
{ "docstring": "\n Raise only for unordered sets, e.g., not for dict_keys\n ", "language": "en", "n_whitespaces": 16, "n_words": 9, "vocab_size": 8 }
def _sanitize_non_ordered(data) -> None: if isinstance(data, (set, frozenset)): raise TypeError(f"'{type(data).__name__}' type is unordered")
56,619
222,529
421
python3.10.4/Lib/dis.py
145
29
def dis(x=None, *, file=None, depth=None): if x is Non
add python 3.10.4 for windows
dis
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
dis.py
17
33
https://github.com/XX-net/XX-Net.git
14
249
0
96
413
Python
{ "docstring": "Disassemble classes, methods, functions, and other compiled objects.\n\n With no argument, disassemble the last traceback.\n\n Compiled objects currently include generator objects, async generator\n objects, and coroutine objects, all of which store their code object\n in a special attribu...
def dis(x=None, *, file=None, depth=None): if x is None: distb(file=file) return # Extract functions from methods. if hasattr(x, '__func__'): x = x.__func__ # Extract compiled code objects from... if hasattr(x, '__code__'): # ...a function, or x = x.__code__ ...
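The `dis.dis` entry point above takes an optional `file=` keyword, so its disassembly listing can be captured instead of printed to stdout:

```python
import dis
import io

def add(a, b):
    return a + b

# Redirect the disassembly into a StringIO buffer via file=.
out = io.StringIO()
dis.dis(add, file=out)
listing = out.getvalue()
# Exact opcodes vary by CPython version, but a plain `return a + b`
# ends with RETURN_VALUE on current interpreters.
print("RETURN_VALUE" in listing)
```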
49,336
199,680
18
sympy/polys/appellseqs.py
13
7
def bernoulli_poly(n, x=None, polys=False):
Run orthopolys and appellseqs through a common interface Including unifying the two Chebyshev generators into one function. There are also two kinds of Hermite polynomials, and they too share the same recurrence, but the second type He_n(x) (aka the probabilist, reduced or small polynomials) will not be added here.
bernoulli_poly
d1d46df73ebaad94089847558d00a8b7269f554d
sympy
appellseqs.py
8
54
https://github.com/sympy/sympy.git
1
33
0
13
47
Python
{ "docstring": "Generates the Bernoulli polynomial `\\operatorname{B}_n(x)`.\n\n `\\operatorname{B}_n(x)` is the unique polynomial satisfying\n\n .. math :: \\int_{x}^{x+1} \\operatorname{B}_n(t) \\,dt = x^n.\n\n Based on this, we have for nonnegative integer `s` and integer\n `a` and `b`\n\n .. math :...
def bernoulli_poly(n, x=None, polys=False): r return named_poly(n, dup_bernoulli, QQ, "Bernoulli polynomial", (x,), polys)
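The SymPy record above defines `B_n(x)` through the property `∫_x^{x+1} B_n(t) dt = x^n`. A dependency-free sketch (my own recurrence-based construction, not SymPy's implementation) that builds the polynomials from the Bernoulli numbers and checks that property:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    # b_0..b_n from the recurrence sum_{k=0}^{m} C(m+1, k) * b_k = 0,
    # with b_0 = 1 (so b_1 = -1/2, b_2 = 1/6, ...).
    b = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * b[k] for k in range(m))
        b.append(-s / (m + 1))
    return b

def bernoulli_poly(n, x):
    # B_n(x) = sum_{k=0}^{n} C(n, k) * b_{n-k} * x**k
    b = bernoulli_numbers(n)
    return sum(comb(n, k) * b[n - k] * Fraction(x) ** k for k in range(n + 1))

print(bernoulli_poly(1, 0))  # -1/2
print(bernoulli_poly(2, 0))  # 1/6
# Defining property, via the antiderivative relation:
# (B_{n+1}(x+1) - B_{n+1}(x)) / (n+1) == x**n
x = Fraction(5)
print((bernoulli_poly(3, x + 1) - bernoulli_poly(3, x)) / 3 == x**2)  # True
```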
13,510
63,813
49
.venv/lib/python3.8/site-packages/pip/_vendor/tenacity/after.py
26
6
def after_log(logger, log_level, sec_format="%0.3f"): log_tpl = ( "Finished call to '%s' af
upd; format
after_log
f638f5d0e6c8ebed0e69a6584bc7f003ec646580
transferlearning
after.py
11
7
https://github.com/jindongwang/transferlearning.git
1
29
0
24
49
Python
{ "docstring": "After call strategy that logs to some logger the finished attempt.", "language": "en", "n_whitespaces": 10, "n_words": 11, "vocab_size": 11 }
def after_log(logger, log_level, sec_format="%0.3f"): log_tpl = ( "Finished call to '%s' after " + str(sec_format) + "(s), " "this was the %s time calling it." )
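The `after_log` record above is cut off before the closure it returns. A hedged completion in the same spirit: the message template is taken from the snippet, while the `retry_state` attribute names (`fn`, `seconds_since_start`, `attempt_number`) are assumptions modeled on tenacity's `RetryCallState`, not quoted from the library:

```python
import io
import logging
import types

def after_log(logger, log_level, sec_format="%0.3f"):
    # Build an "after call" hook that logs each finished attempt.
    log_tpl = (
        "Finished call to '%s' after " + str(sec_format) + "(s), "
        "this was the %s time calling it."
    )

    def log_it(retry_state):
        # Attribute names here are assumed, modeled on tenacity's RetryCallState.
        logger.log(
            log_level,
            log_tpl,
            retry_state.fn.__name__,
            retry_state.seconds_since_start,
            retry_state.attempt_number,
        )

    return log_it

# Demo: capture the log line produced for a fake retry state.
stream = io.StringIO()
logger = logging.getLogger("after_log_demo")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(stream))

state = types.SimpleNamespace(fn=print, seconds_since_start=0.1234, attempt_number="2nd")
after_log(logger, logging.INFO)(state)
record = stream.getvalue().strip()
print(record)
```

Note that the elapsed time is interpolated through `sec_format` (`%0.3f` by default), so `0.1234` seconds renders as `0.123`.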
24,214
110,568
34
lib/matplotlib/offsetbox.py
17
11
def _compat_get_offset(meth): sigs = [lambda self,
Reparametrize offsetbox calculations in terms of bboxes. Passing a single bbox instead of (xdescent, ydescent, width, height) separately is easier to follow (see e.g. the changes in VPacker and HPacker, which no longer have to repeatedly pack/unpack whd_list), and avoids having to figure out e.g. the sign of the desce...
_compat_get_offset
de2192589f8ea50c9dc90be87b649399ff623feb
matplotlib
offsetbox.py
10
6
https://github.com/matplotlib/matplotlib.git
1
48
0
15
55
Python
{ "docstring": "\n Decorator for the get_offset method of OffsetBox and subclasses, that\n allows supporting both the new signature (self, bbox, renderer) and the old\n signature (self, width, height, xdescent, ydescent, renderer).\n ", "language": "en", "n_whitespaces": 42, "n_words": 29, "vocab_...
def _compat_get_offset(meth): sigs = [lambda self, width, height, xdescent, ydescent, renderer: locals(), lambda self, bbox, renderer: locals()]
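The decorator above keeps `get_offset` callable under both its old and new parametrizations. A simplified sketch of the same idea, dispatching on argument count rather than on matplotlib's inspected signatures; the bounds translation `(-xdescent, -ydescent, width, height)` mirrors the commit message's reparametrization but is an assumption here, and a plain tuple stands in for a real `Bbox`:

```python
import functools

def _compat_get_offset(meth):
    # Accept either the new (bbox, renderer) call or the legacy
    # (width, height, xdescent, ydescent, renderer) call, then forward
    # everything to the new-style implementation.
    @functools.wraps(meth)
    def wrapper(self, *args):
        if len(args) == 2:
            bbox, renderer = args
        else:
            width, height, xdescent, ydescent, renderer = args
            # Legacy descents shift the origin; represent the box as
            # (x0, y0, width, height) bounds.
            bbox = (-xdescent, -ydescent, width, height)
        return meth(self, bbox, renderer)
    return wrapper

class Box:
    @_compat_get_offset
    def get_offset(self, bbox, renderer):
        # New-style implementation only ever sees the bbox form.
        return bbox

box = Box()
new_style = box.get_offset((-1, -2, 3, 4), None)
old_style = box.get_offset(3, 4, 1, 2, None)
print(new_style, old_style)
```

Both call styles reach the same implementation with the same bbox, which is exactly the property the compatibility shim exists to preserve.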
55,381
218,549
48
python3.10.4/Lib/ipaddress.py
16
4
def sixtofour(self):
add python 3.10.4 for windows
sixtofour
8198943edd73a363c266633e1aa5b2a9e9c9f526
XX-Net
ipaddress.py
11
4
https://github.com/XX-net/XX-Net.git
2
34
0
14
53
Python
{ "docstring": "Return the IPv4 6to4 embedded address.\n\n Returns:\n The IPv4 6to4-embedded address if present or None if the\n address doesn't appear to contain a 6to4 embedded address.\n\n ", "language": "en", "n_whitespaces": 62, "n_words": 26, "vocab_size": 19 }
def sixtofour(self): if (self._ip >> 112) != 0x2002: return None return IPv4Address((self._ip >> 80) & 0xFFFFFFFF)
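The `sixtofour` property above recovers the IPv4 address embedded in bits 80–111 of a 6to4 address (prefix `2002::/16`), returning `None` otherwise. A quick check against the stdlib `ipaddress` module:

```python
import ipaddress

# A 6to4 address embedding 192.0.2.4 (0xc0000204) right after the 2002: prefix.
six_to_four = ipaddress.IPv6Address("2002:c000:204::")
embedded = six_to_four.sixtofour
print(embedded)                   # 192.0.2.4

# Addresses outside 2002::/16 carry no 6to4 payload.
plain = ipaddress.IPv6Address("2001:db8::1").sixtofour
print(plain)                      # None
```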