| repo_name | dataset | lang | pr_id | owner | reviewer | diff_hunk | code_review_comment |
|---|---|---|---|---|---|---|---|
axlearn | github_2023 | python | 568 | apple | markblee | @@ -202,6 +202,11 @@ class Config(BaseLayer.Config):
)
lconv: LConvLayer.Config = LConvLayer.default_config()
norm: LayerNorm.Config = LayerNorm.default_config()
+ # Layer order. If None, default to "mhsa_before_conv", i.e., conformer layer order as
+ # secified in https://arxiv... | ```suggestion
# If not None, only "lconv_before_ff" "lconv_before_mhsa" "mhsa_before_lconv" are allowed.
```
You can also consider a Literal[...] typing. |
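The `Literal[...]` typing suggested in this review pairs naturally with `typing.get_args` for runtime validation, so the allowed values are listed exactly once. A minimal sketch (the helper `validate_layer_order` is illustrative, not axlearn's API; it assumes the field is optional, as in the diff):

```python
from typing import Literal, Optional, get_args

LayerOrder = Literal["lconv_before_ff", "lconv_before_mhsa", "mhsa_before_lconv"]

def validate_layer_order(layer_order: Optional[str]) -> None:
    # None falls back to the default layer ordering; otherwise the value
    # must be one of the literals, which get_args() extracts from the type.
    if layer_order is not None and layer_order not in get_args(LayerOrder):
        raise ValueError(
            f"layer_order must be one of {get_args(LayerOrder)}, got {layer_order!r}"
        )
```

With this shape, static checkers flag bad constants at call sites while the runtime check catches values coming from configs.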
axlearn | github_2023 | python | 568 | apple | markblee | @@ -253,6 +258,12 @@ def __init__(self, cfg: Config, *, parent: Module):
f"cfg.right_context must be greater or equal to 0, get {cfg.right_context}."
)
+ if cfg.layer_order is not None:
+ supperted_layer_order = ["lconv_before_ff", "lconv_before_mhsa", "mhsa_before_lcon... | ```suggestion
if cfg.layer_order is not None:
supported_layer_order = ["lconv_before_ff", "lconv_before_mhsa", "mhsa_before_lconv"]
if cfg.layer_order not in supported_layer_order:
raise ValueError(f"Layer order must be one of {supported_layer_order}, got {cfg.layer_o... |
axlearn | github_2023 | python | 568 | apple | markblee | @@ -253,6 +259,11 @@ def __init__(self, cfg: Config, *, parent: Module):
f"cfg.right_context must be greater or equal to 0, get {cfg.right_context}."
)
+ if cfg.layer_order is not None:
+ supperted_layer_order = ["lconv_before_ff", "lconv_before_mhsa", "mhsa_before_lcon... | ```suggestion
supported_layer_order = ["lconv_before_ff", "lconv_before_mhsa", "mhsa_before_lconv"]
```
This doesn't look fixed? |
axlearn | github_2023 | python | 568 | apple | markblee | @@ -253,6 +259,11 @@ def __init__(self, cfg: Config, *, parent: Module):
f"cfg.right_context must be greater or equal to 0, get {cfg.right_context}."
)
+ if cfg.layer_order is not None:
+ supperted_layer_order = ["lconv_before_ff", "lconv_before_mhsa", "mhsa_before_lcon... | ```suggestion
raise ValueError(f"Only {supperted_layer_order} is allowed, got {cfg.layer_order}")
```
This should be a fstring |
axlearn | github_2023 | others | 568 | apple | markblee | @@ -90,6 +90,19 @@ ENV PIP_FIND_LINKS=https://storage.googleapis.com/jax-releases/libtpu_releases.h
RUN pip install .[tpu]
COPY . .
+################################################################################ | Can you rebase the files? |
axlearn | github_2023 | others | 517 | apple | markblee | @@ -90,6 +90,25 @@ ENV PIP_FIND_LINKS=https://storage.googleapis.com/jax-releases/libtpu_releases.h
RUN pip install .[tpu]
COPY . .
+################################################################################
+# GPU container spec. #
+###################... | Could this be in pyproject too, since we already specify `PIP_FIND_LINKS` below? |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,327 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | Add a reference/comment on what this sidecar is doing and why we only do this for a3-highgpu? |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,327 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | nit suggestion: construct the default volume mounts, env vars, etc. upfront, and then group all of the `a3-highgpu` specific changes under one branch. |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,327 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | nit -- `env_vars.update({ ... })` may be slightly easier to read. |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,327 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | What is this file used for? Is it read by the sidecar? |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -70,7 +70,7 @@
flags.DEFINE_string("jax_backend", None, "Specifies the XLA backend to use.", required=True)
flags.DEFINE_string(
"distributed_coordinator",
- None,
+ os.environ.get("DISTRIBUTED_COORDINATOR", None), | Do we need this change for the GPU runner, or can we just supply the flags for now? |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -446,9 +446,18 @@ def from_flags(cls, fv: flags.FlagValues, **kwargs):
return cfg
+class GPUGKERunnerJob(GKERunnerJob):
+ """A GKERunnerJob that uses GPUGKEJob."""
+
+ inner = GPUGKEJob
+ pre_provisioner = TPUNodePoolProvisioner | A reminder to change `TPUNodePoolProvisioner` when ready. Maybe we can just use the default implementation that raises NotImplementedError. |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,327 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | I noticed that we have this check in a lot of places. What are the other instance types that we intend to support? |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,359 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | Document `queue` and the behavior when `None`? |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,359 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | nit -- comments should be full sentences with punctuation. (Please also fix below.)
```suggestion
# Different machine types require different sidecar containers.
# For example A3 requires a tcpx socket but A3 Mega does not.
```
Also, should we add a pointer to e.g. https://cloud.google.com/kubern... |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,359 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | These can be workload dependent -- should we omit and let XLA decide as the default? |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,359 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | I think this whole section can benefit from more comments/pointers to what these are doing or how we decided on the defaults. |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,359 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | Is it expected that user command and the `touch` command are separated only by newline? (Should we add a semicolon or similar?) |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -109,6 +109,9 @@ def named_trainer_configs() -> Dict[str, TrainerConfigFn]:
)
kwargs = fuji.get_trainer_kwargs(model_size, vocab_size=vocab_size, version=version)
max_sequence_length = kwargs.pop("max_sequence_length")
+
+ # TODO remove before merging
+ kw... | You can create a separate module with your changes and point to it via `--module`, similar to your previous fuji experiments. Let me know if you prefer a more concrete example. |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,384 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | ```suggestion
f"The instance type {instance_type} is not supported on GKE with GPU. "
"Only gpu-a3-highgpu-8g is supported."
```
(With this check, the other checks for `instance_type.startswith("gpu-a3-highgpu")` are now redundant.) |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,384 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | ```suggestion
"metadata.annotations['batch.kubernetes.io/job-completion-index']"
``` |
axlearn | github_2023 | python | 517 | apple | markblee | @@ -639,6 +639,384 @@ def _execute(self) -> Any:
)
+class GPUGKEJob(GKEJob):
+ """A GPU job represented as a k8s JobSet.
+
+ See also `gke_runner` as an example.
+ """
+
+ @config_class
+ class Config(GKEJob.Config):
+ """Configures GPUGKEJob.
+
+ Attributes:
+ accel... | Maybe we should `raise ValueError("Command should not be None.")` in this case? |
axlearn | github_2023 | python | 543 | apple | markblee | @@ -438,7 +448,18 @@ def check_supported(*supported_layers: Type):
check_supported(BertPooler)
axlearn_to_torch(layer.linear, src["linear"], dst.dense)
# Note: always use tanh as activation here.
- elif isinstance(dst, (hf_bert.BertModel, hf_roberta.RobertaModel)):
+ elif isinstance(dst... | nit --
```suggestion
src_pooler = src.get("head", {}).get("pooler", None)
if (src_pooler is not None) != (dst.pooler is not None):
raise ValueError(
"Input layer and output layer must either both have pooler, or both not."
)
if src_pooler:
... |
axlearn | github_2023 | python | 507 | apple | markblee | @@ -233,8 +233,8 @@ def _call_model(
(outputs, output_collection), where `outputs` are the return value of
self._model.method(...).
"""
+ input_batch = self._dispatch_global_batch(input_batch)
# Shard and (possibly) dispatch the input batch. | Should the comment be above L236? |
axlearn | github_2023 | python | 507 | apple | markblee | @@ -0,0 +1,242 @@
+# Copyright © 2024 Apple Inc.
+"""Utility to help dispatching input batches from hosts to devices."""
+
+import copy
+from typing import Dict, Optional, Sequence
+
+import jax
+from jax import numpy as jnp
+
+from axlearn.common.config import REQUIRED, Required, config_class
+from axlearn.common.modu... | Would this mutate the instantiating config? |
axlearn | github_2023 | python | 507 | apple | markblee | @@ -0,0 +1,242 @@
+# Copyright © 2024 Apple Inc.
+"""Utility to help dispatching input batches from hosts to devices."""
+
+import copy
+from typing import Dict, Optional, Sequence
+
+import jax
+from jax import numpy as jnp
+
+from axlearn.common.config import REQUIRED, Required, config_class
+from axlearn.common.modu... | Is this a case that we expect we can run into? Should it be an assertion given checks in init? |
axlearn | github_2023 | python | 507 | apple | markblee | @@ -0,0 +1,242 @@
+# Copyright © 2024 Apple Inc.
+"""Utility to help dispatching input batches from hosts to devices."""
+
+import copy
+from typing import Dict, Optional, Sequence
+
+import jax
+from jax import numpy as jnp
+
+from axlearn.common.config import REQUIRED, Required, config_class
+from axlearn.common.modu... | I suppose it's also intended to work with `Nested[tf.Tensor]`? |
axlearn | github_2023 | python | 507 | apple | markblee | @@ -0,0 +1,242 @@
+# Copyright © 2024 Apple Inc.
+"""Utility to help dispatching input batches from hosts to devices."""
+
+import copy
+from typing import Dict, Optional, Sequence
+
+import jax
+from jax import numpy as jnp
+
+from axlearn.common.config import REQUIRED, Required, config_class
+from axlearn.common.modu... | nit -- might be worth an assertion on the length of `non_logical_feed_indices`? |
axlearn | github_2023 | python | 507 | apple | markblee | @@ -249,6 +249,16 @@ def _call_model(
is_training=False,
)
+ def _dispatch_global_batch(self, input_batch: NestedTensor) -> NestedTensor:
+ module = self.parent
+ while module is not None:
+ if isinstance(module, SpmdEvaler):
+ break | nit -- you could also fold the break into the while condition. |
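The nit above — folding the `break` into the `while` condition — can be sketched with a generic parent-chain walk. `Node` and `Target` are hypothetical stand-ins for the `Module`/`SpmdEvaler` classes in the diff:

```python
from typing import Optional

class Node:
    def __init__(self, parent: Optional["Node"] = None):
        self.parent = parent

class Target(Node):
    pass

def find_ancestor(node: Optional[Node]) -> Optional[Node]:
    # The early break is folded into the loop condition itself:
    # stop at the first Target ancestor, or at the root (None).
    while node is not None and not isinstance(node, Target):
        node = node.parent
    return node
```

The loop body then contains only the traversal step, which is what makes this form slightly easier to read.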
axlearn | github_2023 | python | 507 | apple | markblee | @@ -0,0 +1,242 @@
+# Copyright © 2024 Apple Inc. | ```suggestion
# Copyright © 2024 Apple Inc.
``` |
axlearn | github_2023 | python | 507 | apple | markblee | @@ -863,6 +868,79 @@ def fn(ds: tf.data.Dataset) -> tf.data.Dataset:
return fn
+def per_feed_batch(
+ feed_batch_size: int,
+ *,
+ is_training: bool,
+ pad_example_fn: PadExampleFn,
+ prefetch_buffer_size: Optional[int] = None,
+ post_batch_processor: Optional[ConfigOr[DatasetToDatasetFn]] = ... | nit -- the comment seems to suggest it returns a source instead of a processor. |
axlearn | github_2023 | python | 507 | apple | markblee | @@ -863,6 +868,79 @@ def fn(ds: tf.data.Dataset) -> tf.data.Dataset:
return fn
+def per_feed_batch(
+ feed_batch_size: int,
+ *,
+ is_training: bool,
+ pad_example_fn: PadExampleFn,
+ prefetch_buffer_size: Optional[int] = None,
+ post_batch_processor: Optional[ConfigOr[DatasetToDatasetFn]] = ... | nit --
```suggestion
if repeat is not None:
ds = ds.repeat(repeat)
elif is_training:
ds = ds.repeat()
``` |
axlearn | github_2023 | python | 507 | apple | markblee | @@ -1064,9 +1159,24 @@ class Config(Module.Config):
# A config that instantiates to a DatasetToDatasetFn, which performs batching of examples.
batcher: InstantiableConfig = config_for_function(batch)
+ # If not None, creates an InputDispatcher and use it for dispatching per-feed batches to | ```suggestion
# If not None, creates an InputDispatcher and uses it for dispatching per-feed batches to
``` |
axlearn | github_2023 | python | 507 | apple | markblee | @@ -1064,9 +1159,24 @@ class Config(Module.Config):
# A config that instantiates to a DatasetToDatasetFn, which performs batching of examples.
batcher: InstantiableConfig = config_for_function(batch)
+ # If not None, creates an InputDispatcher and use it for dispatching per-feed batches to
+ ... | ```suggestion
# If using `per_feed_batch`, set feed_batch_size according to `input_dispatcher`.
``` |
axlearn | github_2023 | python | 528 | apple | markblee | @@ -1963,21 +1963,24 @@ def rel_pos_to_abs_pos(x: Tensor) -> Tensor:
Args:
x: a Tensor of shape [T, 2*T - 1], where x[i, j] represents the bias between query[i] and
absolute position k = i + j - (T - 1), if 0 <= k < T, otherwise the value is not used.
+ T >= 1.
Returns:
... | nit --
```suggestion
if t <= 1:
``` |
axlearn | github_2023 | python | 528 | apple | markblee | @@ -1963,21 +1963,24 @@ def rel_pos_to_abs_pos(x: Tensor) -> Tensor:
Args:
x: a Tensor of shape [T, 2*T - 1], where x[i, j] represents the bias between query[i] and
absolute position k = i + j - (T - 1), if 0 <= k < T, otherwise the value is not used.
+ T >= 1.
Returns:
... | An if statement is probably preferable here? |
axlearn | github_2023 | python | 528 | apple | markblee | @@ -1963,21 +1963,25 @@ def rel_pos_to_abs_pos(x: Tensor) -> Tensor:
Args:
x: a Tensor of shape [T, 2*T - 1], where x[i, j] represents the bias between query[i] and
absolute position k = i + j - (T - 1), if 0 <= k < T, otherwise the value is not used.
+ T >= 1. | ```suggestion
T is expected to be >= 1.
``` |
axlearn | github_2023 | python | 525 | apple | jinglu1 | @@ -425,6 +428,15 @@ def _build_container(self) -> Nested[Any]:
if cfg.enable_tpu_ici_resiliency is not None:
env_vars["ENABLE_ICI_RESILIENCY"] = str(cfg.enable_tpu_ici_resiliency).lower()
+ resources = {"limits": {"google.com/tpu": system.chips_per_vm}}
+ # Set request memory by h... | Does it mean that we are reserving 20% of memory for system software? |
axlearn | github_2023 | python | 450 | apple | jiya-zhang | @@ -149,4 +152,50 @@ def make_single_host_config(base_config_name: str) -> SpmdTrainer.Config:
config_map[f"{config_name}-single-host"] = functools.partial(
make_single_host_config, config_name
)
+
+ if model_size == "test":
+
+ def make_s... | Is it possible to save checkpoints more frequently than eval? Something like save ckpt every 500 steps, eval every 1500 steps. This allows us to identify issues separately if the job hangs. |
axlearn | github_2023 | python | 450 | apple | markblee | @@ -149,4 +152,51 @@ def make_single_host_config(base_config_name: str) -> SpmdTrainer.Config:
config_map[f"{config_name}-single-host"] = functools.partial(
make_single_host_config, config_name
)
+
+ if model_size == "test":
+
+ def make_s... | For these, have you considered tweaking the kwargs here directly?
https://github.com/apple/axlearn/blob/c2c8a935a8ea339cdf0e0ffad6d48e005455dbe4/axlearn/experiments/text/gpt/fuji.py#L85-L104
It looks like we can add a mesh rule for the accelerators that you are testing on, too.
For reference, the kwargs will be pa... |
axlearn | github_2023 | python | 450 | apple | markblee | @@ -140,6 +140,29 @@ def get_trainer_kwargs(model_size: str, *, vocab_size: int, version: Version) ->
),
),
)
+ elif model_size == "simple": | Thanks! Does this need to be separate from `"test"` (which is itself intended to be the testing configuration)?
In particular, we can configure `mesh_rules` for the accelerator that you are testing on. This way, it'll run on both CPU and the target testing hardware.
The only other differences seem to be batch si... |
axlearn | github_2023 | python | 450 | apple | markblee | @@ -98,8 +98,11 @@ def get_trainer_kwargs(model_size: str, *, vocab_size: int, version: Version) ->
weight_decay=0.01,
),
max_sequence_length=64,
- train_batch_size=16,
+ train_batch_size=32,
+ eval_batch_size=32,
max_step=3000,
+ ... | You can probably get away with just changing this to
```suggestion
mesh_shape=mesh_shape_from_axes(data=-1),
```
On CPU, this completes to `(1,1,1,1,1)`, on v4-8 it completes to `(4,1,1,1,1)`. You can also do `fsdp=-1` if you instead want to test against `(1,1,4,1,1)`, although the configs are small en... |
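The `-1` completion the reviewer describes works like the `-1` in a reshape: the single unknown axis is filled so the product of all axes equals the device count. A hypothetical helper sketching that behavior (`complete_mesh_shape` is illustrative, not axlearn's actual implementation):

```python
import math
from typing import Sequence, Tuple

def complete_mesh_shape(mesh_shape: Sequence[int], num_devices: int) -> Tuple[int, ...]:
    # Fill the single -1 entry so the product of all axes equals num_devices.
    known = math.prod(d for d in mesh_shape if d != -1)
    if num_devices % known != 0:
        raise ValueError(f"{num_devices} devices do not divide evenly over {mesh_shape}.")
    return tuple(num_devices // known if d == -1 else d for d in mesh_shape)
```

This reproduces the completions quoted in the comment: with 1 CPU device, `(-1, 1, 1, 1, 1)` becomes `(1, 1, 1, 1, 1)`; with the 4 chips of a v4-8, it becomes `(4, 1, 1, 1, 1)`.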
axlearn | github_2023 | python | 420 | apple | markblee | @@ -199,7 +199,7 @@ class WandBWriter(BaseWriter):
Note:
This utility does not support restarts gracefully.
- If the job is pre-empted, the logger will create a new run.
+ If the job is pre-emptied, the logger will create a new run. | ```suggestion
If the job is preempted, the logger will create a new run.
``` |
axlearn | github_2023 | python | 485 | apple | jiya-zhang | @@ -0,0 +1,142 @@
+# Copyright © 2024 Apple Inc.
+
+"""A script to compute goodput and upload to Cloud Monitoring.
+
+This can be run as a daemon for each training job for which `GoodputRecorder` is configured.
+
+Example:
+
+ python3 -m axlearn.experiments.calculate_goodput --job_name=my-test-job
+
+"""
+
+import t... | I believe this only writes one data point, containing the current time and current goodput? Is this the intended user journey? |
axlearn | github_2023 | others | 485 | apple | jiya-zhang | @@ -85,6 +85,7 @@ gcp = [
"google-auth[pyopenssl]", # Ensures that we have compatible pyopenssl/cryptography pins.
"google-cloud-storage==2.16.0",
"google-cloud-core==2.3.3",
    "ml_goodput_measurement==0.0.2", | Should we wait until they release the newer version to merge? The release can be as soon as next week. |
axlearn | github_2023 | python | 481 | apple | markblee | @@ -0,0 +1,17 @@
+"""Tests for AXLearn environment."""
+# pylint: disable=no-self-use,redundant-keyword-arg,too-many-function-args | OOI where did `redundant-keyword-arg,too-many-function-args` come from? |
axlearn | github_2023 | others | 479 | apple | tuzhucheng | @@ -48,9 +48,16 @@ conda install -c apple tensorflow-deps
# Manually build tensorflow-text until a collaborator build is available.
# This was tested using clang version 15 - you may get non-working wheels with earlier versions of clang.
mkdir ~/builds && git clone https://github.com/tensorflow/text.git ~/builds/tex... | Thanks to @jiya-zhang's tip, if we install TF manually before trying to build `tensorflow-text`, it will not attempt to install TF again.
```suggestion
pip install tensorflow==2.16.1
cd ~/builds/text && git checkout 0f9f6df5b4da19bc7a734ba05fc4fa12bccbedbe
``` |
axlearn | github_2023 | python | 476 | apple | markblee | @@ -186,8 +188,10 @@ def model_config(
if ffn_dim is None:
ffn_dim = scaled_hidden_dim(scale=8 / 3, round_up_to_multiples_of=256)
if num_kv_heads:
+ atten_cfg = GroupedQueryAttention.default_config() | May be worth adding a unit test? |
axlearn | github_2023 | python | 472 | apple | markblee | @@ -24,22 +25,43 @@ def sweep(self, jobs: Dict[str, JobSpec]) -> Sequence[str]:
raise NotImplementedError(type(self))
+class AggregationType(Enum):
+ """The aggregation rule for CompositeCleaner. | ```suggestion
"""The aggregation rule for CompositeCleaner.
``` |
axlearn | github_2023 | python | 457 | apple | markblee | @@ -196,19 +196,30 @@ def _compute_target_paddings(
target_labels: Tensor = input_batch["target_labels"]
# Infer target_paddings from out-of-range labels.
target_paddings = jnp.logical_or(cfg.vocab_size <= target_labels, target_labels < 0)
+ return target_paddings
+ def _input_sta... | Guard against division by 0 here and below? |
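The division-by-zero guard the reviewer asks for is commonly written with `jnp.where` plus a clamped denominator, so the division itself never produces NaN even though `jnp.where` evaluates both branches. A generic sketch, not the PR's code:

```python
import jax.numpy as jnp

def safe_mean(total: jnp.ndarray, count: jnp.ndarray) -> jnp.ndarray:
    # Clamp the denominator so 0/0 is never computed (jnp.where evaluates
    # both branches), then mask: with no valid elements, report 0.
    return jnp.where(count > 0, total / jnp.maximum(count, 1), 0.0)
```

Clamping before the `where` matters: `jnp.where(count > 0, total / count, 0.0)` would still compute `total / 0` on the taken-or-not branch and can propagate NaN through gradients.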
axlearn | github_2023 | python | 457 | apple | markblee | @@ -196,19 +196,30 @@ def _compute_target_paddings(
target_labels: Tensor = input_batch["target_labels"]
# Infer target_paddings from out-of-range labels.
target_paddings = jnp.logical_or(cfg.vocab_size <= target_labels, target_labels < 0)
+ return target_paddings
+ def _input_sta... | Is there a specific reason to prefer returning the summaries instead of just adding them here? |
axlearn | github_2023 | python | 457 | apple | markblee | @@ -196,19 +196,30 @@ def _compute_target_paddings(
target_labels: Tensor = input_batch["target_labels"]
# Infer target_paddings from out-of-range labels.
target_paddings = jnp.logical_or(cfg.vocab_size <= target_labels, target_labels < 0)
+ return target_paddings
+ def _input_sta... | ```suggestion
def _input_stats_summaries(
```
or `def _add_input_stats_summaries` if we decide to inline the add, which may be more similar to other callsites in the repo. |
axlearn | github_2023 | python | 457 | apple | markblee | @@ -257,6 +268,64 @@ def predict(self, input_batch: Nested[Tensor]) -> Tensor:
logits = self.lm_head(inputs)
return logits * (1 - paddings[..., None])
+ def _input_stats_summary(
+ self, input_batch: Nested[Tensor], per_example_weight: Tensor
+ ) -> Dict[str, Union[WeightedScalar, Tenso... | Same here? |
axlearn | github_2023 | python | 457 | apple | markblee | @@ -257,6 +269,65 @@ def predict(self, input_batch: Nested[Tensor]) -> Tensor:
logits = self.lm_head(inputs)
return logits * (1 - paddings[..., None])
+ def _input_stats_summaries(
+ self, input_batch: Nested[Tensor], per_example_weight: Tensor
+ ) -> Dict[str, Union[WeightedScalar, Ten... | Here too? |
axlearn | github_2023 | python | 457 | apple | markblee | @@ -257,6 +269,65 @@ def predict(self, input_batch: Nested[Tensor]) -> Tensor:
logits = self.lm_head(inputs)
return logits * (1 - paddings[..., None])
+ def _input_stats_summaries(
+ self, input_batch: Nested[Tensor], per_example_weight: Tensor
+ ) -> Dict[str, Union[WeightedScalar, Ten... | A couple nits -- since we sum over weights, 1.0 may not always be appropriate. We might also consider renaming `num_valid_examples` to `total_example_weight`. |
axlearn | github_2023 | others | 454 | apple | samos123 | @@ -109,6 +109,10 @@ dataflow = [
"google-apitools", # for beam pipeline
"orjson==3.9.10",
]
+# Triton kernel dependency.
+triton = [ | Maybe rename `triton` to `gpu` if this is a GPU-specific dependency? Not sure if there will be other GPU-only dependencies. |
axlearn | github_2023 | python | 456 | apple | markblee | @@ -2358,7 +2361,10 @@ class Config(BaseLayer.Config):
add_dead_neuron_summary: Optional[bool] = None
# Adds summary of RMS norms of the specified values. Supported value are:
+ # - "inputs": inputs of the layer.
+ # - "linear1_outputs": outputs of linear1.
# - "linear2_output... | ```suggestion
# TODO(tlei3): deprecate this feature since we use TensorStats.
```
here and elsewhere? |
axlearn | github_2023 | python | 456 | apple | ruomingp | @@ -182,6 +182,18 @@ def add_stats(self, name: str, value: Nested[Tensor]):
self.add_summary("max_abs", jnp.abs(value).max().astype(jnp.float32))
+class DefaultTensorStats(CompositeTensorStats):
+ """Default tensor stats that compute RMS norm and max value."""
+
+ @config_class
+ class Config(Comp... | Nit: do we need class? Maybe a function is enough:
```
def default_tensor_stats_config() -> TensorStats.Config:
``` |
axlearn | github_2023 | python | 446 | apple | apghml | @@ -126,6 +127,61 @@ def apply(self, prng_key: Tensor, params: NestedTensor) -> NestedTensor:
raise NotImplementedError(self)
+class TensorStats(Module):
+ """An abstract Module to add summaries about the given Tensors."""
+
+ def add_stats(self, name: str, value: Nested[Tensor]):
+ """Subclas... | ```suggestion
"""A TensorStats consisting of multiple child TensorStats."""
``` |
axlearn | github_2023 | python | 446 | apple | apghml | @@ -296,6 +365,8 @@ def initialize_parameters_recursively(
parameter_spec=spec,
)
for name, child in self._children.items():
+ if not isinstance(child, BaseLayer):
+ continue | Could you add a comment about why this is needed and a test that fails without this change? E.g., why shouldn't we error in this case? |
axlearn | github_2023 | python | 444 | apple | markblee | @@ -249,6 +249,9 @@ def _gcloud_storage_rsync(
timeout=timeout_s,
capture_output=True,
text=True,
+ # Avoid "No space left on device":
+ # https://cloud.google.com/knowledge/kb/error-message-while-running-the-command-gsutil-rsync-000004577
+ env={"... | nit -- I wonder if we should use /var/tmp/rsync explicitly, given that log dir is often the directory being rsync'ed itself? |
axlearn | github_2023 | python | 425 | apple | ruomingp | @@ -523,24 +523,24 @@ def forward(self, inputs: Tensor) -> Tensor:
)
return jnp.transpose(time_major_outputs, [1, 0, 2])
- def init_step_states(self, *, batch_size: int) -> Nested[Tensor]:
+ def init_states(self, *, batch_size: int) -> Nested[Tensor]:
"""Returns the prediction network... | ```suggestion
(updated_cache_states, outputs), where `outputs` is a Tensor of shape
[batch_size, output_dim].
``` |
axlearn | github_2023 | python | 425 | apple | ruomingp | @@ -284,41 +284,41 @@ def initialize_parameters_recursively(
)
return state
- def init_step_states(self, *, batch_size: int) -> List[NestedTensor]:
+ def init_states(self, *, batch_size: int) -> List[Nested[Tensor]]:
"""Returns a list of initial step states from all layers."""
- ... | ```suggestion
(updated_cache_states, outputs), where:
`updated_cache_states` is a list of states from all layers;
`outputs` is a Tensor of shape [batch_size, output_dim].
``` |
axlearn | github_2023 | python | 425 | apple | ruomingp | @@ -338,27 +338,26 @@ def output_dim(self):
class _RNNRepeat(Repeat):
"""A Repeat layer with layer = children class of BaseRNNCell."""
- def init_step_states(self, *, batch_size: int) -> NestedTensor:
- """Returns the initial step states of all layers."""
+ def init_states(self, *, batch_size: int)... | Ditto. |
axlearn | github_2023 | python | 428 | apple | kelvin-zou | @@ -6,12 +6,13 @@
The fuji models are set up to imitate LLaMA-1 (https://arxiv.org/abs/2302.13971). | nit, fix comment? |
axlearn | github_2023 | python | 428 | apple | kelvin-zou | @@ -444,19 +444,23 @@ def evaler_config_dict(
return evalers
-def make_config_name(arch: str, model_size: str) -> str:
+def make_config_name(arch: str, model_size: str, version: Optional[str] = None) -> str:
"""Makes config name string as a function of architecture and model-size.
    Useful to keep co... | nit: add v1 as a default for backward compatibility? |
axlearn | github_2023 | python | 428 | apple | kelvin-zou | @@ -54,17 +105,21 @@ def get_trainer_kwargs(model_size: str, *, vocab_size: int) -> Dict[str, Any]:
num_layers=32,
hidden_dim=128 * 32,
num_heads=32,
+ num_kv_heads=num_kv_heads,
+ rope_theta=rope_theta,
),
learne... | Nit, add a note for v3 model? I believe 1024 GPUs won't work for v3 model since global bs is only 512 due to 8k seq length. |
axlearn | github_2023 | python | 423 | apple | ruomingp | @@ -439,11 +439,18 @@ def _postprocess_outputs(self, *, sequences: Tensor, paddings: Tensor, scores: T
)
-def _map_label_sequences(inputs: Tensor, *, blank_id: int = 0, pad_id: int = 0) -> Nested[Tensor]:
- """Removes blanks, paddings, and repeats from the input sequences, as seen in CTC.
+def _map_labe... | Should we leave `remove_repeats` without a default value, since neither is the best value?
```suggestion
inputs: Tensor, *, remove_repeats: bool, blank_id: int = 0, pad_id: int = 0
``` |
axlearn | github_2023 | python | 416 | apple | samos123 | @@ -75,16 +81,46 @@ def _get_job_credentials(
)
-class TPUJob(GCPJob):
+@config_class
+class AcceleratorConfig(ConfigBase):
+ """Configures job resources, e.g. TPU or GPU.
+
+ Attributes:
+ instance_type: Instance type, e.g. tpu-v4-8. | add example for what this should be on GPU (can be AWS or GCP) |
axlearn | github_2023 | python | 421 | apple | markblee | @@ -476,3 +477,69 @@ def _map_label_sequences(inputs: Tensor, *, blank_id: int = 0, pad_id: int = 0)
if pad_id != 0:
sequences = jnp.where(paddings, pad_id, sequences)
return dict(sequences=sequences, paddings=paddings, lengths=lens)
+
+
+class RNNPredictionNetwork(BaseLayer):
+ """RNN prediction ... | ```suggestion
See https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html
#out-of-bounds-indexing.
``` |
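The JAX gotcha the suggestion links to is worth a concrete illustration: unlike NumPy, reading a JAX array at an out-of-bounds index does not raise; the index is clamped to the valid range. A minimal sketch (not from the PR):

```python
import numpy as np
import jax.numpy as jnp

x = jnp.asarray([10, 20, 30])
# Out-of-bounds reads are clamped to the last valid index instead of raising,
# so x[5] silently returns x[2].
assert int(x[5]) == 30

# NumPy, by contrast, raises IndexError for the same access.
try:
    np.asarray([10, 20, 30])[5]
except IndexError:
    pass
```

This is why code that indexes with label ids (as the prediction network above does) should validate or clip ids explicitly rather than rely on an error being raised.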
axlearn | github_2023 | python | 421 | apple | markblee | @@ -476,3 +477,69 @@ def _map_label_sequences(inputs: Tensor, *, blank_id: int = 0, pad_id: int = 0)
if pad_id != 0:
sequences = jnp.where(paddings, pad_id, sequences)
return dict(sequences=sequences, paddings=paddings, lengths=lens)
+
+
+class RNNPredictionNetwork(BaseLayer):
+ """RNN prediction ... | ```suggestion
Returns:
A Tensor of shape [batch_size, num_labels, output_dim].
``` |
axlearn | github_2023 | others | 417 | apple | ruomingp | @@ -13,7 +13,7 @@ requires-python = ">=3.9"
# Every time we upgrade JAX, we should try to bring the rest to the newest versions.
dependencies = [
"attrs>=23.1.0", # We use `type` in `attrs.field`
- "absl-py",
+ "absl-py<2", # breaks axlearn.cli.utils_test on 2.1.0 | Can we pin to a specific version of absl-py? |
axlearn | github_2023 | python | 404 | apple | ruomingp | @@ -306,6 +308,7 @@ def from_spec(cls, spec: List[str], *, fv: Optional[flags.FlagValues]) -> Config
- platform: The image target platform.
- allow_dirty: Whether to ignore dirty git status.
- cache_from: A comma-separated list of cache sources.
+ - skip_bundle: Whether to skip the bui... | When is it safe to enable `skip_bundle`? Please add a comment. |
axlearn | github_2023 | python | 401 | apple | markblee | @@ -708,3 +713,66 @@ def temp_chdir(new_cwd: Union[pathlib.Path, str]):
yield
finally:
os.chdir(old_cwd)
+
+
+L = TypeVar("L", bound=BaseLayer)
+
+
+@contextlib.contextmanager
+def bind_layer(
+ layer: ConfigOr[L],
+ *,
+ is_training: bool = True,
+ prng_key: Optional[jax.random.PRNGK... | ```suggestion
fact that FLAX state is only associated with an instance of a module, whereas AXLearn state is
``` |
axlearn | github_2023 | python | 401 | apple | markblee | @@ -708,3 +713,66 @@ def temp_chdir(new_cwd: Union[pathlib.Path, str]):
yield
finally:
os.chdir(old_cwd)
+
+
+L = TypeVar("L", bound=BaseLayer)
+
+
+@contextlib.contextmanager
+def bind_layer(
+ layer: ConfigOr[L],
+ *,
+ is_training: bool = True,
+ prng_key: Optional[jax.random.PRNGK... | Should it more generally be `Module` rather than `BaseLayer`? |
axlearn | github_2023 | python | 401 | apple | markblee | @@ -708,3 +713,66 @@ def temp_chdir(new_cwd: Union[pathlib.Path, str]):
yield
finally:
os.chdir(old_cwd)
+
+
+L = TypeVar("L", bound=BaseLayer)
+
+
+@contextlib.contextmanager
+def bind_layer( | ```suggestion
def bind_module(
```
Which is the base class associated with invocation contexts? |
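The `bind_layer` context manager under review temporarily associates state (training mode, PRNG key, parameters) with a module-like object. As an illustration of the pattern only — not AXLearn's actual API — here is a generic sketch that binds attributes for the duration of a `with` block and restores them afterwards; the name `temporarily_bound` is hypothetical:

```python
import contextlib

_MISSING = object()  # sentinel for attributes that did not exist before binding

@contextlib.contextmanager
def temporarily_bound(obj, **state):
    # Save previous attribute values, install the new ones, and restore
    # everything on exit -- even if the body raises.
    saved = {k: getattr(obj, k, _MISSING) for k in state}
    for k, v in state.items():
        setattr(obj, k, v)
    try:
        yield obj
    finally:
        for k, old in saved.items():
            if old is _MISSING:
                delattr(obj, k)
            else:
                setattr(obj, k, old)

class Box:
    pass

box = Box()
with temporarily_bound(box, is_training=True):
    print(box.is_training)           # True while bound
print(hasattr(box, "is_training"))   # False after exit
```

The try/finally is the important part of the pattern: state is guaranteed to be unbound even when the body throws, which is what makes such a helper safe to use in tests.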
axlearn | github_2023 | python | 399 | apple | markblee | @@ -242,6 +242,13 @@ def add_summary(
name: The name of the item to add.
value: The value to add.
"""
+ | (Not from this PR, but just noticed that `add_summary` typing seems out of date.) |
axlearn | github_2023 | python | 399 | apple | markblee | @@ -738,6 +739,20 @@ def test_drop_output(self):
ctx.output_collection.module_outputs["nested"],
)
+ def test_add_summary_validation(self):
+ """Tests validation in `add_summary()`."""
+
+ class MySummary(summary.Summary):
+ val: str
+
+ def validat... | Should we test with a nested container of summaries and non-summaries to exercise tree_map?
axlearn | github_2023 | python | 399 | apple | markblee | @@ -56,6 +64,103 @@ def test_add_summary_image(self):
)
chex.assert_trees_all_close(logged_grayscale_image / 255, grayscale_image[..., None])
+ def test_with_tree_paths(self):
+ """Tests that `ImageSummary` works with `tree_paths()`."""
+ img = jnp.ones((1, 1, 1, 3))
+ s = di... | ```suggestion
``` |
axlearn | github_2023 | python | 399 | apple | markblee | @@ -56,6 +64,103 @@ def test_add_summary_image(self):
)
chex.assert_trees_all_close(logged_grayscale_image / 255, grayscale_image[..., None])
+ def test_with_tree_paths(self):
+ """Tests that `ImageSummary` works with `tree_paths()`."""
+ img = jnp.ones((1, 1, 1, 3))
+ s = di... | nit -- use `self.assertNestedAllClose`? |
axlearn | github_2023 | others | 366 | apple | markblee | @@ -65,6 +65,7 @@ disable=abstract-method,
coerce-builtin,
coerce-method,
delslice-method,
+ # disallowed-name, # copied from pyproject.toml | ```suggestion
# disallowed-name,
```
The comment probably adds little value after this PR (please also remove below) |
axlearn | github_2023 | python | 366 | apple | markblee | @@ -208,6 +208,7 @@ def _int32_binary_search(
def loop_body(i: int, solution: Tensor) -> Tensor:
# Loop over the non-sign bits.
bit = jnp.int32(1 << 30 - i)
+ # pylint: disable-next=unsupported-binary-operation # TODO this might be a real bug? | ```suggestion
# pylint: disable-next=unsupported-binary-operation
``` |
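The `unsupported-binary-operation` warning sits on `1 << 30 - i`, and the "real bug?" question comes down to operator precedence: `-` binds tighter than `<<`, so the expression parses as `1 << (30 - i)`, which is exactly what a bit-by-bit binary search over the non-sign bits wants. A small plain-Python check (integer semantics only; this does not depend on `jnp`):

```python
i = 4
# Subtraction binds tighter than the shift, so these are the same value;
# adding parentheses just makes the intent explicit.
assert 1 << 30 - i == 1 << (30 - i)
# The other grouping would be a very different number.
assert 1 << 30 - i != (1 << 30) - i
print(1 << 30 - 4)  # 2**26 == 67108864
```

So dropping the TODO, as the suggestion does, is justified: the precedence is surprising to read but the computed value is correct.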
axlearn | github_2023 | others | 366 | apple | markblee | @@ -26,12 +26,19 @@ download_assets() {
}
set -o xtrace
+if [[ "${1:-x}" = "--skip-pre-commit" ]] ; then
+ SKIP_PRECOMMIT=true
+ shift
+fi
UNQUOTED_PYTEST_FILES=$(echo $1 | tr -d "'")
-pre-commit install
-pre-commit run --all-files || exit_if_error $? "pre-commit failed."
-# Run pytype separately to utilize a... | ```suggestion
# Skip pre-commit on parallel CI because it is run as a separate job.
``` |
axlearn | github_2023 | python | 373 | apple | tgunter | @@ -1502,7 +1506,7 @@ def adastar_optimizer(
eps: (float) regularization constant added to the square root of smoothed_gradient_squares.
eps_square: (float) regularization constant added to gradient_squares.
raw_update_clipping_threshold: If not None, clips the norms of the raw updates
- ... | Did you mean:
```suggestion
to this value. `raw_update_norm` summaries will be logged either way.
```
? |
axlearn | github_2023 | python | 5 | apple | markblee | @@ -86,7 +86,7 @@ def _convert_translation_to_transform(translations: tf.Tensor) -> tf.Tensor:
Returns:
A transformation matrix of shape (num_images, 8) to be used by
- https://github.com/keras-team/keras/blob/v2.9.0/keras/layers/preprocessing/image_preprocessing.py#L898-L985 | FWIW I think tags are ok too. |
axlearn | github_2023 | python | 27 | apple | ruomingp | @@ -2437,18 +2437,33 @@ class Config(BaseTransformerLayer.Config):
class StackedTransformerLayer(BaseStackedTransformerLayer):
"""A simple implementation of BaseStackedTransformerLayer."""
- Config = BaseStackedTransformerLayer.Config
+ @config_class
+ class Config(BaseStackedTransformerLayer.Config):
... | Add comments that `len(layer)` should match `num_layers`? |
axlearn | github_2023 | python | 358 | apple | markblee | @@ -326,8 +335,16 @@ def schedule(
for resource_type, demand in job_demands.items():
resource_usages[resource_type] += demand
job_verdicts[project_id][job_id] = verdict
+ project_usages[project_id] = resource_usages
- return Scheduler.Sc... | ```suggestion
``` |
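The scheduler loop in the hunk above sums each job's demand into a per-resource-type usage map. A minimal stand-alone sketch of that accumulation step (the variable names mirror the diff, but the types here are illustrative, not the actual scheduler's):

```python
from collections import defaultdict

# Hypothetical per-job demands, keyed by resource type.
job_demands = {
    "job-a": {"tpu-v4": 8, "cpu": 4},
    "job-b": {"tpu-v4": 16},
}

resource_usages = defaultdict(int)
for demands in job_demands.values():
    # Accumulate demand per resource type across all scheduled jobs.
    for resource_type, demand in demands.items():
        resource_usages[resource_type] += demand

print(dict(resource_usages))  # {'tpu-v4': 24, 'cpu': 4}
```

Using `defaultdict(int)` avoids the usual "initialize key if missing" boilerplate when summing demands of previously unseen resource types.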
axlearn | github_2023 | python | 350 | apple | apghml | @@ -146,76 +146,96 @@ def calculate(
) -> Dict[str, float]:
"""Calculates per-project limits on available resources, quotas, and demands.
+ We assume that `limit` and `demands` are all integers, reflecting number of resource units,
+ e.g., number of GPUs. The allocations will also be integ... | Would it make sense to add a test that it behaves correctly when the total quota exceeds `limit`, if there isn't already such a test? |
axlearn | github_2023 | python | 349 | apple | zetaqubit | @@ -214,7 +214,7 @@ def from_flags(cls, fv: flags.FlagValues, action: str, **kwargs) -> Config:
"""
cfg = super().from_flags(fv, **kwargs)
if not cfg.bastion_name:
- cfg.bastion_name = shared_bastion_name(fv)
+ cfg.bastion_name = fv.bastion or shared_bastion_name(fv) | For future: might be good to centralize all of the config reading and overriding |
axlearn | github_2023 | python | 346 | apple | markblee | @@ -335,6 +336,7 @@ def example_fn(example: Dict[str, Tensor]) -> Dict[str, Tensor]:
randaug_magnitude=randaug_magnitude,
randaug_exclude_ops=randaug_exclude_ops,
erasing_probability=erasing_probability,
+ use_whitening=use_whitening, | Not for this PR, but a more 'configurable' way would be to take a `config_for_function(crop_augment_whiten)` as an input processor, configure it at the caller, and chain it here. This avoids the need to propagate all args in the fn signature. |
axlearn | github_2023 | python | 338 | apple | tgunter | @@ -1255,6 +1255,86 @@ def compute_loss(param_values):
test_results = _compute_updates(test_opt)
self.assertNestedAllClose(base_results, test_results, atol=1e-6, rtol=1e-6)
+ @parameterized.parameters(
+ dict(
+ learning_rate=0.01,
+ b1=0.95,
+ b2=0.995,
+ ... | Print statement left in intentionally? |
axlearn | github_2023 | python | 319 | apple | markblee | @@ -317,11 +317,23 @@ def _execute(self):
class SubmitBastionJob(BaseSubmitBastionJob):
"""A job to submit a command to bastion.
+ TODO(rpang): rename this class to BastionRemoteJob.
+
Main differences from base submit:
- Emits gsutil commands to view logs.
- Emits a warning if the bastion doe... | I think we can remove this (`from_flags` defaults to setting configs that match flag names). |
axlearn | github_2023 | python | 319 | apple | markblee | @@ -207,6 +210,7 @@ def from_flags(cls, fv: flags.FlagValues, action: str, **kwargs) -> Config:
# Default output_dir depends on the final value of --name.
fv.set_default("output_dir", f"gs://{gcp_settings('ttl_bucket')}/axlearn/jobs/{fv.name}")
cfg = super().from_flags(fv, **kwargs)
+ ... | Same here. |
axlearn | github_2023 | python | 319 | apple | markblee | @@ -335,7 +341,8 @@ def _execute(self):
"\nView bastion outputs with:\n"
f"gsutil cat {os.path.join(self.bastion_dir, 'logs', cfg.job_name)}\n"
"\nCheck job history with:\n"
- f"{infer_cli_name()} gcp bastion history --name={cfg.name} --job_name={cfg.job_name}"
+ ... | Was the `infer_cli_name()` -> `axlearn` change necessary for some reason? |
axlearn | github_2023 | others | 259 | apple | tgunter | @@ -12,6 +12,9 @@ echo "=== AXLearn start_tpu.sh ==="
# Random sleep to prevent all TPU-VMs overwhelming pypi etc for large slices.
sleep $((1 + $RANDOM % 30))
+sudo sh -c "echo 'root soft nofile 100000' >> /etc/security/limits.conf" | nit: Is it worth a comment to explain? |
axlearn | github_2023 | others | 259 | apple | samos123 | @@ -12,6 +12,10 @@ echo "=== AXLearn start_tpu.sh ==="
# Random sleep to prevent all TPU-VMs overwhelming pypi etc for large slices.
sleep $((1 + $RANDOM % 30))
+# Increase file descriptor limits for `root` to avoid "Too many open files" errors. | nit: change the comment to an echo statement so someone reading the logs knows what's going on. That would have helped in catching whether the correct startup script was run as well.
axlearn | github_2023 | python | 302 | apple | ruomingp | @@ -67,8 +67,7 @@ def main(_):
setup(jax_backend="cpu")
trainer_config_fn: TrainerConfigFn = get_named_trainer_config(
FLAGS.config,
- config_module=FLAGS.module,
- root_module="axlearn",
+ config_module=f"axlearn.{FLAGS.module}", | How about
```suggestion
config_module=FLAGS.module",
```
? This will make the script work for other repos that depend on axlearn |
axlearn | github_2023 | python | 155 | apple | markblee | @@ -361,7 +361,7 @@ def forward(self, image: Tensor, is_masked: Optional[Tensor] = None) -> Dict[str
Args:
image: The input image. Shape: (batch, height, width, channels).
- is_masked: a boolen Tensor in shape (batch, length), representing masked positions
+ is_masked: a bo... | ```suggestion
is_masked: A boolean Tensor in shape (batch, length), representing masked positions
``` |
axlearn | github_2023 | python | 297 | apple | markblee | @@ -1033,6 +1033,113 @@ def update_fn(updates, state, params=None):
)
+class SkipClipState(NamedTuple):
+ """State returned by functions in skip_and_clip_by_global_norm()."""
+
+ nonvalid_count: Union[Tensor, TensorSpec] # Number of non-valid steps.
+ inner_state: Any # State of the inner Partitione... | Can we fix the docstring spacing (newline before example, args, returns)? |
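`SkipClipState` above tracks a count of non-valid steps alongside the inner optimizer state. The usual semantics of a skip-and-clip transform: compute the global norm over all gradients, drop the step entirely if that norm is non-finite, and otherwise rescale so the norm never exceeds the clip threshold. A pure-Python sketch over a flat list of floats — assumed semantics for illustration, not the actual optimizer code:

```python
import math

def skip_and_clip(grads, clip_norm):
    # Global norm across all gradient values.
    g_norm = math.sqrt(sum(g * g for g in grads))
    if not math.isfinite(g_norm):
        # Non-valid step: skip the update (return zeros); the caller would
        # bump its nonvalid-step counter, as SkipClipState suggests.
        return [0.0] * len(grads), True
    # Rescale so the global norm is at most clip_norm.
    scale = min(1.0, clip_norm / g_norm) if g_norm else 1.0
    return [g * scale for g in grads], False

clipped, skipped = skip_and_clip([3.0, 4.0], 1.0)
# The norm was 5.0; after clipping it is (approximately) 1.0.
print(math.sqrt(sum(g * g for g in clipped)), skipped)
```

Separating "skip" (drop the whole step) from "clip" (rescale and proceed) is what lets the state carry a `nonvalid_count` distinct from ordinary clipped updates.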
axlearn | github_2023 | python | 296 | apple | ruomingp | @@ -219,6 +219,9 @@ def __init__(self, cfg: Config, *, parent: Optional[Module]):
with self.mesh():
self._add_child("model", cfg.model)
self._model_param_specs = self.model.create_parameter_specs_recursively()
+ if cfg.inference_dtype is not None:
+ self._mod... | ```suggestion
self._model_param_specs = self._inference_cast(self._model_param_specs)
``` |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.