| repo_name | dataset | lang | pr_id | owner | reviewer | diff_hunk | code_review_comment |
|---|---|---|---|---|---|---|---|
axlearn | github_2023 | python | 296 | apple | ruomingp | @@ -830,9 +832,13 @@ def cast_floats(in_tree: NestedTensor, to_dtype: Optional[jnp.dtype]) -> NestedT
from_dtype = jnp.float32 if to_dtype == jnp.bfloat16 else jnp.bfloat16
- def cast(x: Tensor) -> Tensor:
+ def cast(x: Union[Tensor, TensorSpec]) -> [Tensor, TensorSpec]:
if x.dtype == from_dtype:
- return x.astype(to_dtype)
+ if isinstance(x, Tensor):
+ return x.astype(to_dtype)
+ else:
+ x.dtype = to_dtype
+ return x | To avoid mutating `x`:
```suggestion
return dataclasses.replace(x, dtype=to_dtype)
``` |
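The reviewer's `dataclasses.replace` suggestion builds a new instance with the overridden field instead of mutating the input. A minimal sketch of the pattern — the `TensorSpec` here is a hypothetical stand-in for axlearn's, with only the fields needed to illustrate:

```python
import dataclasses


@dataclasses.dataclass(frozen=True)
class TensorSpec:
    # Hypothetical stand-in for axlearn's TensorSpec.
    shape: tuple
    dtype: str


def cast_spec(spec: TensorSpec, to_dtype: str) -> TensorSpec:
    # Returns a new spec; the input is never mutated (it is frozen anyway).
    return dataclasses.replace(spec, dtype=to_dtype)


spec = TensorSpec(shape=(4, 8), dtype="float32")
new_spec = cast_spec(spec, "bfloat16")
```

Because the dataclass is frozen, an accidental `x.dtype = to_dtype` would raise `FrozenInstanceError`, which is exactly the mutation hazard the review comment flags.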
axlearn | github_2023 | python | 293 | apple | ruomingp | @@ -416,11 +431,19 @@ def restore_from_dir(
spec.tf_ckpt_map, dir=os.path.join(ckpt_dir, f"tf_{jax.process_index()}")
)
+ # Override dtype to target cast dtype.
+ # From jax docs,
+ # Cast while reloading on process to avoid 2 copies on device if the
+ # casting is done on device.
+ spec_dtypes = spec.dtypes
+ if dtype is not None:
+ spec_dtypes = [dtype] * len(spec_dtypes) | How would this handle integer vs float dtypes? |
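One way to address the reviewer's question is to override only floating-point entries, leaving integer state (e.g. step counters) at its saved dtype. A sketch using NumPy's dtype hierarchy; the function name is illustrative, not from the PR:

```python
import numpy as np


def override_float_dtypes(spec_dtypes, dtype):
    # Remap only floating-point dtypes; integer entries keep their saved dtype.
    return [dtype if np.issubdtype(d, np.floating) else d for d in spec_dtypes]


out = override_float_dtypes([np.float32, np.int32], np.float16)
```

This keeps the one-copy-on-restore benefit for float weights while never silently truncating integer state.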
axlearn | github_2023 | python | 293 | apple | ruomingp | @@ -404,7 +404,22 @@ def restore_from_dir(
ckpt_dir: str,
validation: CheckpointValidationType = CheckpointValidationType.EXACT,
concurrent_gb: int = 32,
+ dtype_cast_func: Optional[Callable[[jnp.dtype], jnp.dtype]] = None,
) -> NestedTensor:
+ """Restore checkpoints in tensorstore format from a target directory.
+
+ Args:
+ step (int): Step number to restore.
+ state (Union[NestedTensor, NestedTensorSpec]): Model states.
+ ckpt_dir (str): The path to checkpoint directory.
+ validation (CheckpointValidationType): Validation type after loading weights.
+ concurrent_gb (int): Max concurrent size to load for tensorstore checkpoints.
+ dtype_cast_func (Optional[Callable[[jnp.dtype], jnp.dtype]]):
+ Function to cast dtype to other dtype. | ```suggestion
dtype_cast_func (Optional[Callable[[jnp.dtype], jnp.dtype]]):
Function to map saved dtypes to restored dtypes.
``` |
axlearn | github_2023 | python | 293 | apple | ruomingp | @@ -559,17 +559,27 @@ def test_read_state_spec(self):
class TensorStoreStateStorageTest(test_utils.TestCase):
- @parameterized.parameters(jnp.float32, jnp.bfloat16, jnp.int32, jnp.int16)
- def test_save_and_restore_from_dir(self, restore_floats_as: jnp.dtype):
+ @parameterized.parameters(
+ [jnp.float32, None],
+ [jnp.bfloat16, None],
+ [jnp.int32, None],
+ [jnp.int16, None],
+ [jnp.float32, jnp.bfloat16],
+ [jnp.float32, jnp.float16],
+ )
+ def test_save_and_restore_from_dir(self, restore_floats_as: jnp.dtype, cast_dtype: jnp.dtype): | Comment on the difference between `restore_floats_as` and `cast_dtype`? Shouldn't they be the same? |
axlearn | github_2023 | python | 293 | apple | ruomingp | @@ -404,7 +404,22 @@ def restore_from_dir(
ckpt_dir: str,
validation: CheckpointValidationType = CheckpointValidationType.EXACT,
concurrent_gb: int = 32,
+ dtype_cast_func: Optional[Callable[[jnp.dtype], jnp.dtype]] = None, | Reading the unittest again, I'm not sure we need this new arg. It seems that user can manipulate `state` by casting one type to another instead. Would that work? |
axlearn | github_2023 | python | 201 | apple | apghml | @@ -0,0 +1,90 @@
+"""Utilities for ahead-of-time compilation.
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+import os
+
+os.environ["JAX_PLATFORMS"] = "cpu"
+os.environ["TPU_SKIP_MDS_QUERY"] = "1"
+
+import copy
+from dataclasses import dataclass
+from typing import Callable, Dict
+
+import jax
+jax.config.update("jax_platforms", "cpu") | Is this redundant with the environment variable above? |
axlearn | github_2023 | python | 201 | apple | apghml | @@ -0,0 +1,90 @@
+"""Utilities for ahead-of-time compilation.
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+import os
+
+os.environ["JAX_PLATFORMS"] = "cpu"
+os.environ["TPU_SKIP_MDS_QUERY"] = "1"
+
+import copy
+from dataclasses import dataclass
+from typing import Callable, Dict
+
+import jax
+jax.config.update("jax_platforms", "cpu")
+import jax.random
+import numpy as np
+from jax.experimental.topologies import get_topology_desc
+
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.common.utils import infer_mesh_shape
+
+
+@dataclass
+class SystemCharacteristics:
+ platform: str
+ topology_name: str
+ chip_config_name: str # 'megacore' or 'default'
+ chips_per_host_bounds: tuple
+ devices_per_slice: int
+
+
+USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS = {
+ "v5e-16": SystemCharacteristics("tpu", "v5e:4x4", "default", (2, 2, 1), 16),
+ "v5e-32": SystemCharacteristics("tpu", "v5e:4x8", "default", (2, 2, 1), 32),
+ "v5e-64": SystemCharacteristics("tpu", "v5e:8x8", "default", (2, 2, 1), 64),
+ "v5e-128": SystemCharacteristics("tpu", "v5e:8x16", "default", (2, 2, 1), 128),
+ "v5e-256": SystemCharacteristics("tpu", "v5e:16x16", "default", (2, 2, 1), 256),
+ "v4-8": SystemCharacteristics("tpu", "v4:2x2x1", "megacore", (2, 2, 1), 4),
+ "v4-16": SystemCharacteristics("tpu", "v4:2x2x2", "megacore", (2, 2, 1), 8),
+ "v4-32": SystemCharacteristics("tpu", "v4:2x2x4", "megacore", (2, 2, 1), 16),
+ "v4-64": SystemCharacteristics("tpu", "v4:2x4x4", "megacore", (2, 2, 1), 32),
+ "v4-128": SystemCharacteristics("tpu", "v4:4x4x4", "megacore", (2, 2, 1), 64),
+ "v4-256": SystemCharacteristics("tpu", "v4:4x4x8", "megacore", (2, 2, 1), 128),
+ "v4-512": SystemCharacteristics("tpu", "v4:4x8x8", "megacore", (2, 2, 1), 256),
+ "v4-1024": SystemCharacteristics("tpu", "v4:8x8x8", "megacore", (2, 2, 1), 512),
+ "v4-1536": SystemCharacteristics("tpu", "v4:8x8x12", "megacore", (2, 2, 1), 768),
+ "v4-2048": SystemCharacteristics("tpu", "v4:8x8x16", "megacore", (2, 2, 1), 1024),
+ "v4-4096": SystemCharacteristics("tpu", "v4:8x16x16", "megacore", (2, 2, 1), 2048),
+}
+
+
+def compile_trainer_programs(
+ trainer_config: SpmdTrainer.Config, *, topology: str, topology_num_slices: int = 1
+) -> Dict[str, Callable]:
+ """Returns compiled XLA programs for the given trainer.
+
+ Args:
+ trainer_config: the trainer config.
+ topology: a string representing the TPU topology, e.g., "v4-8". Must be a key in
+ USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS.
+ topology_num_slices: number of TPU slices.
+
+ Returns:
+ A dict containing the following programs:
+ * "train_step": a program to run a single training step.
+ """
+ if topology is not None:
+ target_hardware = USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS[topology]
+ topology_devices = get_topology_desc(
+ platform=target_hardware.platform,
+ topology_name=target_hardware.topology_name,
+ chip_config_name=target_hardware.chip_config_name,
+ chips_per_host_bounds=target_hardware.chips_per_host_bounds,
+ num_slices=topology_num_slices,
+ ).devices
+ else:
+ topology_devices = jax.devices()
+
+ cfg = copy.deepcopy(trainer_config) | ```suggestion
cfg = trainer_cfg.clone()
``` |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,90 @@
+"""Utilities for ahead-of-time compilation.
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+import os
+
+os.environ["JAX_PLATFORMS"] = "cpu"
+os.environ["TPU_SKIP_MDS_QUERY"] = "1"
+
+import copy
+from dataclasses import dataclass
+from typing import Callable, Dict
+
+import jax
+jax.config.update("jax_platforms", "cpu")
+import jax.random
+import numpy as np
+from jax.experimental.topologies import get_topology_desc
+
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.common.utils import infer_mesh_shape
+
+
+@dataclass
+class SystemCharacteristics:
+ platform: str
+ topology_name: str
+ chip_config_name: str # 'megacore' or 'default'
+ chips_per_host_bounds: tuple
+ devices_per_slice: int
+
+
+USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS = { | Link to ref in case we need to update? https://github.com/google/maxtext/blob/bff7efbb7a51dfaf7c1739aae22a9cd742386fef/MaxText/accelerator_to_spec_map.py#L33
(FWIW we may eventually want to move to a common file since it is useful on the GKE side.) |
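A pattern visible in the characteristics table above: for megacore generations (v4) the user-facing name counts TensorCores, two per chip, so `devices_per_slice` is half the numeric suffix, while for v5e it is one-to-one. A small illustrative helper (not part of the PR) capturing that relationship:

```python
def devices_per_slice(user_facing_name: str) -> int:
    # "v4-8" -> 4 devices (megacore: 2 cores per chip); "v5e-16" -> 16.
    family, _, cores = user_facing_name.partition("-")
    n = int(cores)
    return n if family == "v5e" else n // 2
```

Deriving the count instead of hand-maintaining it is one argument for the shared-file refactor the reviewer floats.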
axlearn | github_2023 | python | 201 | apple | apghml | @@ -163,7 +178,7 @@ class Config(Module.Config):
# increment within this interval.
watchdog_timeout_seconds: Optional[float] = None
- def __init__(self, cfg: Config, *, parent: Optional[Module]):
+ def __init__(self, cfg: Config, *, parent: Optional[Module], devices=None): | Add a type annotation for `devices`? |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,90 @@
+"""Utilities for ahead-of-time compilation.
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+import os
+
+os.environ["JAX_PLATFORMS"] = "cpu"
+os.environ["TPU_SKIP_MDS_QUERY"] = "1"
+
+import copy
+from dataclasses import dataclass
+from typing import Callable, Dict
+
+import jax
+jax.config.update("jax_platforms", "cpu")
+import jax.random
+import numpy as np
+from jax.experimental.topologies import get_topology_desc
+
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.common.utils import infer_mesh_shape
+
+
+@dataclass
+class SystemCharacteristics:
+ platform: str
+ topology_name: str
+ chip_config_name: str # 'megacore' or 'default'
+ chips_per_host_bounds: tuple
+ devices_per_slice: int
+
+
+USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS = {
+ "v5e-16": SystemCharacteristics("tpu", "v5e:4x4", "default", (2, 2, 1), 16),
+ "v5e-32": SystemCharacteristics("tpu", "v5e:4x8", "default", (2, 2, 1), 32),
+ "v5e-64": SystemCharacteristics("tpu", "v5e:8x8", "default", (2, 2, 1), 64),
+ "v5e-128": SystemCharacteristics("tpu", "v5e:8x16", "default", (2, 2, 1), 128),
+ "v5e-256": SystemCharacteristics("tpu", "v5e:16x16", "default", (2, 2, 1), 256),
+ "v4-8": SystemCharacteristics("tpu", "v4:2x2x1", "megacore", (2, 2, 1), 4),
+ "v4-16": SystemCharacteristics("tpu", "v4:2x2x2", "megacore", (2, 2, 1), 8),
+ "v4-32": SystemCharacteristics("tpu", "v4:2x2x4", "megacore", (2, 2, 1), 16),
+ "v4-64": SystemCharacteristics("tpu", "v4:2x4x4", "megacore", (2, 2, 1), 32),
+ "v4-128": SystemCharacteristics("tpu", "v4:4x4x4", "megacore", (2, 2, 1), 64),
+ "v4-256": SystemCharacteristics("tpu", "v4:4x4x8", "megacore", (2, 2, 1), 128),
+ "v4-512": SystemCharacteristics("tpu", "v4:4x8x8", "megacore", (2, 2, 1), 256),
+ "v4-1024": SystemCharacteristics("tpu", "v4:8x8x8", "megacore", (2, 2, 1), 512),
+ "v4-1536": SystemCharacteristics("tpu", "v4:8x8x12", "megacore", (2, 2, 1), 768),
+ "v4-2048": SystemCharacteristics("tpu", "v4:8x8x16", "megacore", (2, 2, 1), 1024),
+ "v4-4096": SystemCharacteristics("tpu", "v4:8x16x16", "megacore", (2, 2, 1), 2048),
+}
+
+
+def compile_trainer_programs(
+ trainer_config: SpmdTrainer.Config, *, topology: str, topology_num_slices: int = 1
+) -> Dict[str, Callable]:
+ """Returns compiled XLA programs for the given trainer.
+
+ Args:
+ trainer_config: the trainer config.
+ topology: a string representing the TPU topology, e.g., "v4-8". Must be a key in
+ USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS.
+ topology_num_slices: number of TPU slices. | ```suggestion
trainer_config: The trainer config.
topology: A string representing the TPU topology, e.g., "v4-8". Must be a key in
USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS.
topology_num_slices: Number of TPU slices.
``` |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,90 @@
+"""Utilities for ahead-of-time compilation.
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+import os
+
+os.environ["JAX_PLATFORMS"] = "cpu"
+os.environ["TPU_SKIP_MDS_QUERY"] = "1"
+
+import copy
+from dataclasses import dataclass
+from typing import Callable, Dict
+
+import jax
+jax.config.update("jax_platforms", "cpu")
+import jax.random
+import numpy as np
+from jax.experimental.topologies import get_topology_desc
+
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.common.utils import infer_mesh_shape
+
+
+@dataclass
+class SystemCharacteristics:
+ platform: str
+ topology_name: str
+ chip_config_name: str # 'megacore' or 'default'
+ chips_per_host_bounds: tuple
+ devices_per_slice: int
+
+
+USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS = {
+ "v5e-16": SystemCharacteristics("tpu", "v5e:4x4", "default", (2, 2, 1), 16),
+ "v5e-32": SystemCharacteristics("tpu", "v5e:4x8", "default", (2, 2, 1), 32),
+ "v5e-64": SystemCharacteristics("tpu", "v5e:8x8", "default", (2, 2, 1), 64),
+ "v5e-128": SystemCharacteristics("tpu", "v5e:8x16", "default", (2, 2, 1), 128),
+ "v5e-256": SystemCharacteristics("tpu", "v5e:16x16", "default", (2, 2, 1), 256),
+ "v4-8": SystemCharacteristics("tpu", "v4:2x2x1", "megacore", (2, 2, 1), 4),
+ "v4-16": SystemCharacteristics("tpu", "v4:2x2x2", "megacore", (2, 2, 1), 8),
+ "v4-32": SystemCharacteristics("tpu", "v4:2x2x4", "megacore", (2, 2, 1), 16),
+ "v4-64": SystemCharacteristics("tpu", "v4:2x4x4", "megacore", (2, 2, 1), 32),
+ "v4-128": SystemCharacteristics("tpu", "v4:4x4x4", "megacore", (2, 2, 1), 64),
+ "v4-256": SystemCharacteristics("tpu", "v4:4x4x8", "megacore", (2, 2, 1), 128),
+ "v4-512": SystemCharacteristics("tpu", "v4:4x8x8", "megacore", (2, 2, 1), 256),
+ "v4-1024": SystemCharacteristics("tpu", "v4:8x8x8", "megacore", (2, 2, 1), 512),
+ "v4-1536": SystemCharacteristics("tpu", "v4:8x8x12", "megacore", (2, 2, 1), 768),
+ "v4-2048": SystemCharacteristics("tpu", "v4:8x8x16", "megacore", (2, 2, 1), 1024),
+ "v4-4096": SystemCharacteristics("tpu", "v4:8x16x16", "megacore", (2, 2, 1), 2048),
+}
+
+
+def compile_trainer_programs(
+ trainer_config: SpmdTrainer.Config, *, topology: str, topology_num_slices: int = 1
+) -> Dict[str, Callable]:
+ """Returns compiled XLA programs for the given trainer.
+
+ Args:
+ trainer_config: the trainer config.
+ topology: a string representing the TPU topology, e.g., "v4-8". Must be a key in
+ USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS.
+ topology_num_slices: number of TPU slices.
+
+ Returns:
+ A dict containing the following programs:
+ * "train_step": a program to run a single training step.
+ """
+ if topology is not None:
+ target_hardware = USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS[topology]
+ topology_devices = get_topology_desc(
+ platform=target_hardware.platform,
+ topology_name=target_hardware.topology_name,
+ chip_config_name=target_hardware.chip_config_name,
+ chips_per_host_bounds=target_hardware.chips_per_host_bounds,
+ num_slices=topology_num_slices,
+ ).devices
+ else:
+ topology_devices = jax.devices()
+
+ cfg = copy.deepcopy(trainer_config)
+ cfg.dir = "NOT_USED" | ```suggestion
cfg = trainer_config.clone(dir="NOT_USED")
```
? |
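The `clone(dir=...)` idiom the reviewers prefer copies the config and applies overrides in one step, leaving the caller's config untouched. A generic sketch of those semantics — axlearn's real `Config` API differs; this only illustrates why `clone` reads better than `deepcopy` plus field assignment:

```python
import copy


class Config:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def clone(self, **overrides):
        # Deep-copy, then apply field overrides; the original is unchanged.
        new = copy.deepcopy(self)
        new.__dict__.update(overrides)
        return new


trainer_config = Config(dir="/data/run1", mesh_shape=(2, 2))
cfg = trainer_config.clone(dir="NOT_USED")
```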
axlearn | github_2023 | python | 201 | apple | markblee | @@ -70,6 +70,21 @@ def _prune_empty(in_tree: NestedTensor) -> NestedTensor:
return prune_tree(in_tree, lambda _, v: isinstance(v, dict) and not v)
+def to_jax_dtype(tf_dtype: tf.DType) -> jnp.dtype: | Where is this used? |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -70,6 +70,21 @@ def _prune_empty(in_tree: NestedTensor) -> NestedTensor:
return prune_tree(in_tree, lambda _, v: isinstance(v, dict) and not v)
+def to_jax_dtype(tf_dtype: tf.DType) -> jnp.dtype:
+ if tf_dtype == tf.int32:
+ return jnp.int32
+ elif tf_dtype == tf.float32:
+ return jnp.float32
+ elif tf_dtype == tf.bloat16:
+ return jnp.bloat16
+ else:
+ raise NotImplementedError(tf_dtype)
+
+
+def get_shape_dtype_struct(tf_spec) -> jax.ShapeDtypeStruct: | Same question. |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -747,6 +764,23 @@ def _pjit_train_step(self):
donate_argnums=(0,), # donate the state
)
+ def compile_train_step(self) -> Callable:
+ with self.mesh():
+ # Do not run init(), which require real devices.
+ # trainer_state_specs = jax.eval_shape(self.init, jax.random.PRNGKey(1)) | Intentional? |
axlearn | github_2023 | python | 201 | apple | apghml | @@ -0,0 +1,70 @@
+# Copyright © 2023 Apple Inc.
+
+"""AoT (ahead-of-time) compilation config tests.
+
+pip install 'jax[tpu]==0.4.21' -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
+
+export TPU_SKIP_MDS_QUERY=1
+python axlearn/experiments/aot_test.py
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+import pickle
+from typing import Optional
+
+from absl.testing import absltest
+from jax.experimental.serialize_executable import serialize
+
+from axlearn.common import test_utils
+from axlearn.common.aot_compilation import compile_trainer_programs
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.experiments.text.gpt import c4_trainer
+
+
+class AoTCompilationTest(test_utils.TrainerConfigTestCase):
+ """Tests ahead-of-time (AoT) compilation."""
+
+ def _jax_backend(self) -> Optional[str]:
+ return "cpu"
+
+ def _test_aot(
+ self,
+ trainer_config: SpmdTrainer.Config,
+ *,
+ compile_topology: Optional[str],
+ compile_topology_num_slices: int = 1,
+ ):
+ programs = compile_trainer_programs(
+ trainer_config,
+ topology=compile_topology,
+ topology_num_slices=compile_topology_num_slices,
+ )
+ compiled_train_step = programs["train_step"]
+ self.assertIsNotNone(compiled_train_step)
+ print("== Help ==")
+ print(help(compiled_train_step))
+ print("== Text ==")
+ print(compiled_train_step.as_text())
+ print("== Cost analysis ==")
+ print(compiled_train_step.cost_analysis())
+ print("== Memeory analysis ==")
+ print(compiled_train_step.memory_analysis())
+
+ # Serialization does not work for CPU devices:
+ # UNIMPLEMENTED: Not an XLA Runtime executable
+ if compile_topology is not None:
+ serialized_compiled, in_tree, out_tree = serialize(compiled_train_step)
+ with open("/tmp/aot_compiled", "wb") as f:
+ pickle.dump(serialized_compiled, f)
+ print(serialized_compiled) | Are these lines needed in the test? If so, would it make sense to deduplicate them with the similar lines in `_compile_and_dump_programs`? |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,90 @@
+"""Utilities for ahead-of-time compilation. | Missing copyright. |
axlearn | github_2023 | python | 201 | apple | apghml | @@ -0,0 +1,90 @@
+"""Utilities for ahead-of-time compilation.
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/ | This Google doc asks me to sign in in order to view it. Is it publicly accessible? |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,70 @@
+# Copyright © 2023 Apple Inc.
+
+"""AoT (ahead-of-time) compilation config tests.
+
+pip install 'jax[tpu]==0.4.21' -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
+
+export TPU_SKIP_MDS_QUERY=1
+python axlearn/experiments/aot_test.py
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+import pickle
+from typing import Optional
+
+from absl.testing import absltest
+from jax.experimental.serialize_executable import serialize
+
+from axlearn.common import test_utils
+from axlearn.common.aot_compilation import compile_trainer_programs
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.experiments.text.gpt import c4_trainer
+
+
+class AoTCompilationTest(test_utils.TrainerConfigTestCase):
+ """Tests ahead-of-time (AoT) compilation."""
+
+ def _jax_backend(self) -> Optional[str]: | ```suggestion
def _jax_backend(self) -> str:
``` |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,78 @@
+# Copyright © 2023 Apple Inc. | ```suggestion
# Copyright © 2023 Apple Inc.
``` |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,78 @@
+# Copyright © 2023 Apple Inc.
+"""A command-line tool to perform AoT (ahead-of-time) compilation.
+
+pip install 'jax[tpu]==0.4.21' -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
+python axlearn/experiments/run_aot_compilation.py \
+ --config_module=text.gpt.c4_trainer \
+ --config=fuji-7B \
+ --topology=v4-1024 1> /tmp/aot_stdout 2| tee /tmp/aot_stderr
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+
+import pickle
+from typing import Optional
+
+from absl import flags, app
+from jax.experimental.serialize_executable import serialize
+
+from axlearn.common.aot_compilation import compile_trainer_programs
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.common.utils import set_data_dir
+from axlearn.common.utils_spmd import setup
+from axlearn.experiments import TrainerConfigFn, get_named_trainer_config
+
+
+flags.DEFINE_string("config_module", None, "The TPU topology.")
+flags.DEFINE_string("config", None, "The TPU topology.")
+flags.DEFINE_string("topology", None, "The TPU topology.") | ```suggestion
flags.DEFINE_string("module", None, "The trainer config module.")
flags.DEFINE_string("config", None, "The trainer config name.")
flags.DEFINE_string("topology", None, "The TPU topology.")
```
For consistency with `launch_trainer` |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,78 @@
+# Copyright © 2023 Apple Inc.
+"""A command-line tool to perform AoT (ahead-of-time) compilation.
+
+pip install 'jax[tpu]==0.4.21' -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
+python axlearn/experiments/run_aot_compilation.py \
+ --config_module=text.gpt.c4_trainer \
+ --config=fuji-7B \
+ --topology=v4-1024 1> /tmp/aot_stdout 2| tee /tmp/aot_stderr
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+
+import pickle
+from typing import Optional
+
+from absl import flags, app
+from jax.experimental.serialize_executable import serialize
+
+from axlearn.common.aot_compilation import compile_trainer_programs
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.common.utils import set_data_dir
+from axlearn.common.utils_spmd import setup
+from axlearn.experiments import TrainerConfigFn, get_named_trainer_config
+
+
+flags.DEFINE_string("config_module", None, "The TPU topology.")
+flags.DEFINE_string("config", None, "The TPU topology.")
+flags.DEFINE_string("topology", None, "The TPU topology.")
+flags.DEFINE_integer("topology_num_slices", 1, "The number of TPU slices.")
+
+FLAGS = flags.FLAGS
+
+
+def _compile_and_dump_programs( | Would it make sense to have this be in `aot_compilation`? E.g. we seem to do the same thing in aot_test. |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,78 @@
+# Copyright © 2023 Apple Inc.
+"""A command-line tool to perform AoT (ahead-of-time) compilation.
+
+pip install 'jax[tpu]==0.4.21' -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
+python axlearn/experiments/run_aot_compilation.py \
+ --config_module=text.gpt.c4_trainer \
+ --config=fuji-7B \
+ --topology=v4-1024 1> /tmp/aot_stdout 2| tee /tmp/aot_stderr
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+
+import pickle
+from typing import Optional
+
+from absl import flags, app
+from jax.experimental.serialize_executable import serialize
+
+from axlearn.common.aot_compilation import compile_trainer_programs
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.common.utils import set_data_dir
+from axlearn.common.utils_spmd import setup
+from axlearn.experiments import TrainerConfigFn, get_named_trainer_config
+
+
+flags.DEFINE_string("config_module", None, "The TPU topology.")
+flags.DEFINE_string("config", None, "The TPU topology.")
+flags.DEFINE_string("topology", None, "The TPU topology.")
+flags.DEFINE_integer("topology_num_slices", 1, "The number of TPU slices.")
+
+FLAGS = flags.FLAGS
+
+
+def _compile_and_dump_programs(
+ trainer_config: SpmdTrainer.Config,
+ *,
+ compile_topology: Optional[str],
+ compile_topology_num_slices: int = 1,
+):
+ with set_data_dir("FAKE"):
+ programs = compile_trainer_programs(
+ trainer_config,
+ topology=compile_topology,
+ topology_num_slices=compile_topology_num_slices,
+ )
+ for program_name, program in programs.items():
+ print(f"== Text: {program_name} ==")
+ print(program.as_text())
+ print(f"== Cost analysis {program_name} ==")
+ print(program.cost_analysis())
+ print(f"== Memeory analysis {program_name} ==")
+ print(program.memory_analysis())
+
+ # Serialization does not work for CPU devices:
+ # UNIMPLEMENTED: Not an XLA Runtime executable
+ if compile_topology is not None:
+ serialized_compiled, in_tree, out_tree = serialize(program)
+ with open("/tmp/aot_compiled", "wb") as f:
+ pickle.dump(serialized_compiled, f)
+ print(serialized_compiled)
+
+def main(argv):
+ setup(jax_backend="cpu")
+ trainer_config_fn: TrainerConfigFn = get_named_trainer_config(
+ FLAGS.config,
+ config_module=FLAGS.config_module,
+ root_module="axlearn",
+ )
+ _compile_and_dump_programs(
+ trainer_config_fn(),
+ compile_topology = FLAGS.topology,
+ compile_topology_num_slices = FLAGS.topology_num_slices, | ```suggestion
compile_topology=FLAGS.topology,
compile_topology_num_slices=FLAGS.topology_num_slices,
``` |
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,191 @@
+# Copyright © 2023 Apple Inc.
+
+"""Utilities for ahead-of-time compilation.
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+import os
+
+os.environ["TPU_SKIP_MDS_QUERY"] = "1"
+
+from dataclasses import dataclass
+from typing import Callable, Dict
+
+import jax
+
+# To avoid error: Unable to initialize backend 'tpu'.
+jax.config.update("jax_platforms", "cpu")
+import jax.random
+import numpy as np
+from jax.experimental.topologies import get_topology_desc
+
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.common.utils import infer_mesh_shape
+
+
+@dataclass
+class SystemCharacteristics:
+ platform: str
+ topology_name: str
+ chip_config_name: str # 'megacore' or 'default'
+ chips_per_host_bounds: tuple
+ devices_per_slice: int
+
+
+# Reference: https://github.com/google/maxtext/blob/main/MaxText/accelerator_to_spec_map.py
+USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS = {
+ # v5e
+ "v5e-16": SystemCharacteristics("tpu", "v5e:4x4", "default", (2, 2, 1), 16),
+ "v5e-32": SystemCharacteristics("tpu", "v5e:4x8", "default", (2, 2, 1), 32),
+ "v5e-64": SystemCharacteristics("tpu", "v5e:8x8", "default", (2, 2, 1), 64),
+ "v5e-128": SystemCharacteristics("tpu", "v5e:8x16", "default", (2, 2, 1), 128),
+ "v5e-256": SystemCharacteristics("tpu", "v5e:16x16", "default", (2, 2, 1), 256),
+ # v4
+ "v4-8": SystemCharacteristics("tpu", "v4:2x2x1", "megacore", (2, 2, 1), 4),
+ "v4-16": SystemCharacteristics("tpu", "v4:2x2x2", "megacore", (2, 2, 1), 8),
+ "v4-32": SystemCharacteristics("tpu", "v4:2x2x4", "megacore", (2, 2, 1), 16),
+ "v4-64": SystemCharacteristics("tpu", "v4:2x4x4", "megacore", (2, 2, 1), 32),
+ "v4-128": SystemCharacteristics("tpu", "v4:4x4x4", "megacore", (2, 2, 1), 64),
+ "v4-256": SystemCharacteristics("tpu", "v4:4x4x8", "megacore", (2, 2, 1), 128),
+ "v4-512": SystemCharacteristics("tpu", "v4:4x8x8", "megacore", (2, 2, 1), 256),
+ "v4-1024": SystemCharacteristics("tpu", "v4:8x8x8", "megacore", (2, 2, 1), 512),
+ "v4-1536": SystemCharacteristics("tpu", "v4:8x8x12", "megacore", (2, 2, 1), 768),
+ "v4-2048": SystemCharacteristics("tpu", "v4:8x8x16", "megacore", (2, 2, 1), 1024),
+ "v4-4096": SystemCharacteristics("tpu", "v4:8x16x16", "megacore", (2, 2, 1), 2048),
+ # v5p
+ "v5p-8": SystemCharacteristics("tpu", "v5:2x2x1", "megacore", (2, 2, 1), 4),
+ "v5p-16": SystemCharacteristics("tpu", "v5:2x2x2", "megacore", (2, 2, 1), 8),
+ "v5p-32": SystemCharacteristics("tpu", "v5:2x2x4", "megacore", (2, 2, 1), 16),
+ "v5p-64": SystemCharacteristics("tpu", "v5:2x4x4", "megacore", (2, 2, 1), 32),
+ "v5p-128": SystemCharacteristics("tpu", "v5:4x4x4", "megacore", (2, 2, 1), 64),
+ "v5p-256": SystemCharacteristics("tpu", "v5:4x4x8", "megacore", (2, 2, 1), 128),
+ "v5p-384": SystemCharacteristics("tpu", "v5:4x4x12", "megacore", (2, 2, 1), 192),
+ "v5p-512": SystemCharacteristics("tpu", "v5:4x8x8", "megacore", (2, 2, 1), 256),
+ "v5p-640": SystemCharacteristics("tpu", "v5:4x4x20", "megacore", (2, 2, 1), 320),
+ "v5p-768": SystemCharacteristics("tpu", "v5:4x8x12", "megacore", (2, 2, 1), 384),
+ "v5p-896": SystemCharacteristics("tpu", "v5:4x4x28", "megacore", (2, 2, 1), 448),
+ "v5p-1024": SystemCharacteristics("tpu", "v5:8x8x8", "megacore", (2, 2, 1), 512),
+ "v5p-1152": SystemCharacteristics("tpu", "v5:4x12x12", "megacore", (2, 2, 1), 576),
+ "v5p-1280": SystemCharacteristics("tpu", "v5:4x8x20", "megacore", (2, 2, 1), 640),
+ "v5p-1408": SystemCharacteristics("tpu", "v5:4x4x44", "megacore", (2, 2, 1), 704),
+ "v5p-1536": SystemCharacteristics("tpu", "v5:8x8x12", "megacore", (2, 2, 1), 768),
+ "v5p-1664": SystemCharacteristics("tpu", "v5:4x4x52", "megacore", (2, 2, 1), 832),
+ "v5p-1792": SystemCharacteristics("tpu", "v5:4x8x28", "megacore", (2, 2, 1), 896),
+ "v5p-1920": SystemCharacteristics("tpu", "v5:4x12x20", "megacore", (2, 2, 1), 960),
+ "v5p-2048": SystemCharacteristics("tpu", "v5:8x8x16", "megacore", (2, 2, 1), 1024),
+ "v5p-2176": SystemCharacteristics("tpu", "v5:4x4x68", "megacore", (2, 2, 1), 1088),
+ "v5p-2304": SystemCharacteristics("tpu", "v5:8x12x12", "megacore", (2, 2, 1), 1152),
+ "v5p-2432": SystemCharacteristics("tpu", "v5:4x4x76", "megacore", (2, 2, 1), 1216),
+ "v5p-2560": SystemCharacteristics("tpu", "v5:8x8x20", "megacore", (2, 2, 1), 1280),
+ "v5p-2688": SystemCharacteristics("tpu", "v5:4x12x28", "megacore", (2, 2, 1), 1344),
+ "v5p-2816": SystemCharacteristics("tpu", "v5:4x8x44", "megacore", (2, 2, 1), 1408),
+ "v5p-2944": SystemCharacteristics("tpu", "v5:4x4x92", "megacore", (2, 2, 1), 1472),
+ "v5p-3072": SystemCharacteristics("tpu", "v5:4x12x16", "megacore", (2, 2, 1), 1536),
+ "v5p-3200": SystemCharacteristics("tpu", "v5:4x20x20", "megacore", (2, 2, 1), 1600),
+ "v5p-3328": SystemCharacteristics("tpu", "v5:4x8x52", "megacore", (2, 2, 1), 1664),
+ "v5p-3456": SystemCharacteristics("tpu", "v5:12x12x12", "megacore", (2, 2, 1), 1728),
+ "v5p-3584": SystemCharacteristics("tpu", "v5:8x8x28", "megacore", (2, 2, 1), 1792),
+ "v5p-3712": SystemCharacteristics("tpu", "v5:4x4x116", "megacore", (2, 2, 1), 1856),
+ "v5p-3840": SystemCharacteristics("tpu", "v5:8x12x20", "megacore", (2, 2, 1), 1920),
+ "v5p-3968": SystemCharacteristics("tpu", "v5:4x4x124", "megacore", (2, 2, 1), 1984),
+ "v5p-4096": SystemCharacteristics("tpu", "v5:8x16x16", "megacore", (2, 2, 1), 2048),
+ "v5p-4224": SystemCharacteristics("tpu", "v5:4x12x44", "megacore", (2, 2, 1), 2112),
+ "v5p-4352": SystemCharacteristics("tpu", "v5:4x8x68", "megacore", (2, 2, 1), 2176),
+ "v5p-4480": SystemCharacteristics("tpu", "v5:4x20x28", "megacore", (2, 2, 1), 2240),
+ "v5p-4608": SystemCharacteristics("tpu", "v5:12x12x16", "megacore", (2, 2, 1), 2304),
+ "v5p-4736": SystemCharacteristics("tpu", "v5:4x4x148", "megacore", (2, 2, 1), 2368),
+ "v5p-4864": SystemCharacteristics("tpu", "v5:4x8x76", "megacore", (2, 2, 1), 2432),
+ "v5p-4992": SystemCharacteristics("tpu", "v5:4x12x52", "megacore", (2, 2, 1), 2496),
+ "v5p-5120": SystemCharacteristics("tpu", "v5:8x16x20", "megacore", (2, 2, 1), 2560),
+ "v5p-5248": SystemCharacteristics("tpu", "v5:4x4x164", "megacore", (2, 2, 1), 2624),
+ "v5p-5376": SystemCharacteristics("tpu", "v5:8x12x28", "megacore", (2, 2, 1), 2688),
+ "v5p-5504": SystemCharacteristics("tpu", "v5:4x4x172", "megacore", (2, 2, 1), 2752),
+ "v5p-5632": SystemCharacteristics("tpu", "v5:8x8x44", "megacore", (2, 2, 1), 2816),
+ "v5p-5760": SystemCharacteristics("tpu", "v5:12x12x20", "megacore", (2, 2, 1), 2880),
+ "v5p-5888": SystemCharacteristics("tpu", "v5:4x8x92", "megacore", (2, 2, 1), 2944),
+ "v5p-6016": SystemCharacteristics("tpu", "v5:4x4x188", "megacore", (2, 2, 1), 3008),
+ "v5p-6144": SystemCharacteristics("tpu", "v5:12x16x16", "megacore", (2, 2, 1), 3072),
+ "v5p-6272": SystemCharacteristics("tpu", "v5:4x28x28", "megacore", (2, 2, 1), 3136),
+ "v5p-6400": SystemCharacteristics("tpu", "v5:8x20x20", "megacore", (2, 2, 1), 3200),
+ "v5p-6528": SystemCharacteristics("tpu", "v5:4x12x68", "megacore", (2, 2, 1), 3264),
+ "v5p-6656": SystemCharacteristics("tpu", "v5:8x8x52", "megacore", (2, 2, 1), 3328),
+ "v5p-6784": SystemCharacteristics("tpu", "v5:4x4x212", "megacore", (2, 2, 1), 3392),
+ "v5p-6912": SystemCharacteristics("tpu", "v5:12x12x24", "megacore", (2, 2, 1), 3456),
+ "v5p-7040": SystemCharacteristics("tpu", "v5:4x20x44", "megacore", (2, 2, 1), 3520),
+ "v5p-7168": SystemCharacteristics("tpu", "v5:8x16x28", "megacore", (2, 2, 1), 3584),
+ "v5p-7296": SystemCharacteristics("tpu", "v5:4x12x76", "megacore", (2, 2, 1), 3648),
+ "v5p-7424": SystemCharacteristics("tpu", "v5:4x8x116", "megacore", (2, 2, 1), 3712),
+ "v5p-7552": SystemCharacteristics("tpu", "v5:4x4x236", "megacore", (2, 2, 1), 3776),
+ "v5p-7680": SystemCharacteristics("tpu", "v5:12x16x20", "megacore", (2, 2, 1), 3840),
+ "v5p-7808": SystemCharacteristics("tpu", "v5:4x4x244", "megacore", (2, 2, 1), 3904),
+ "v5p-7936": SystemCharacteristics("tpu", "v5:4x8x124", "megacore", (2, 2, 1), 3968),
+ "v5p-8064": SystemCharacteristics("tpu", "v5:12x12x28", "megacore", (2, 2, 1), 4032),
+ "v5p-8192": SystemCharacteristics("tpu", "v5:16x16x16", "megacore", (2, 2, 1), 4096),
+ "v5p-8320": SystemCharacteristics("tpu", "v5:4x20x52", "megacore", (2, 2, 1), 4160),
+ "v5p-8448": SystemCharacteristics("tpu", "v5:8x12x44", "megacore", (2, 2, 1), 4224),
+ "v5p-8704": SystemCharacteristics("tpu", "v5:8x8x68", "megacore", (2, 2, 1), 4352),
+ "v5p-8832": SystemCharacteristics("tpu", "v5:4x12x92", "megacore", (2, 2, 1), 4416),
+ "v5p-8960": SystemCharacteristics("tpu", "v5:8x20x28", "megacore", (2, 2, 1), 4480),
+ "v5p-9216": SystemCharacteristics("tpu", "v5:12x16x24", "megacore", (2, 2, 1), 4608),
+ "v5p-9472": SystemCharacteristics("tpu", "v5:4x8x148", "megacore", (2, 2, 1), 4736),
+ "v5p-9600": SystemCharacteristics("tpu", "v5:12x20x20", "megacore", (2, 2, 1), 4800),
+ "v5p-9728": SystemCharacteristics("tpu", "v5:8x8x76", "megacore", (2, 2, 1), 4864),
+ "v5p-9856": SystemCharacteristics("tpu", "v5:4x28x44", "megacore", (2, 2, 1), 4928),
+ "v5p-9984": SystemCharacteristics("tpu", "v5:8x12x52", "megacore", (2, 2, 1), 4992),
+ "v5p-10240": SystemCharacteristics("tpu", "v5:16x16x20", "megacore", (2, 2, 1), 5120),
+ "v5p-10368": SystemCharacteristics("tpu", "v5:12x12x36", "megacore", (2, 2, 1), 5184),
+ "v5p-10496": SystemCharacteristics("tpu", "v5:4x8x164", "megacore", (2, 2, 1), 5248),
+ "v5p-10752": SystemCharacteristics("tpu", "v5:12x16x28", "megacore", (2, 2, 1), 5376),
+ "v5p-10880": SystemCharacteristics("tpu", "v5:4x20x68", "megacore", (2, 2, 1), 5440),
+ "v5p-11008": SystemCharacteristics("tpu", "v5:4x8x172", "megacore", (2, 2, 1), 5504),
+ "v5p-11136": SystemCharacteristics("tpu", "v5:4x12x116", "megacore", (2, 2, 1), 5568),
+ "v5p-11264": SystemCharacteristics("tpu", "v5:8x16x44", "megacore", (2, 2, 1), 5632),
+ "v5p-11520": SystemCharacteristics("tpu", "v5:12x20x24", "megacore", (2, 2, 1), 5760),
+ "v5p-11648": SystemCharacteristics("tpu", "v5:4x28x52", "megacore", (2, 2, 1), 5824),
+ "v5p-11776": SystemCharacteristics("tpu", "v5:8x8x92", "megacore", (2, 2, 1), 5888),
+ "v5p-11904": SystemCharacteristics("tpu", "v5:4x12x124", "megacore", (2, 2, 1), 5952),
+ "v5p-12032": SystemCharacteristics("tpu", "v5:4x8x188", "megacore", (2, 2, 1), 6016),
+ "v5p-12160": SystemCharacteristics("tpu", "v5:4x20x76", "megacore", (2, 2, 1), 6080),
+ "v5p-12288": SystemCharacteristics("tpu", "v5:16x16x24", "megacore", (2, 2, 1), 6144),
+ "v5p-13824": SystemCharacteristics("tpu", "v5:12x24x24", "megacore", (2, 2, 1), 6912),
+ "v5p-17920": SystemCharacteristics("tpu", "v5:16x20x28", "megacore", (2, 2, 1), 8960),
+}
+
+
+def compile_trainer_programs(
+ trainer_config: SpmdTrainer.Config, *, topology: str, topology_num_slices: int = 1
+) -> Dict[str, Callable]:
+ """Returns compiled XLA programs for the given trainer.
+
+ Args:
+ trainer_config: The trainer config.
+ topology: A string representing the TPU topology, e.g., "v4-8". Must be a key in
+ USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS.
+ topology_num_slices: The number of TPU slices.
+
+ Returns:
+ A dict containing the following programs:
+ * "train_step": a program to run a single training step.
+ """
+ if topology is not None:
+ target_hardware = USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS[topology] | Since `topology` is user-supplied, consider raising a useful error message if it's an invalid key? |
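The validation the reviewer asks for might look like the following sketch. The dict contents and the function name are hypothetical stand-ins (not the actual axlearn code); only the lookup-and-raise pattern is the point:

```python
# Hypothetical sketch of the reviewer's suggestion: validate the user-supplied
# `topology` key before indexing, and list the valid options in the error.
SYSTEM_CHARACTERISTICS = {  # stand-in for USER_FACING_NAME_TO_SYSTEM_CHARACTERISTICS
    "v4-8": ("tpu", "v4:2x2x1"),
    "v5p-8192": ("tpu", "v5:16x16x16"),
}

def lookup_system_characteristics(topology: str):
    """Returns characteristics for `topology`, raising a helpful error on bad keys."""
    if topology not in SYSTEM_CHARACTERISTICS:
        raise ValueError(
            f"Invalid topology {topology!r}. "
            f"Valid options: {sorted(SYSTEM_CHARACTERISTICS)}"
        )
    return SYSTEM_CHARACTERISTICS[topology]
```

A plain `KeyError` would only echo the bad key; naming the valid options makes the failure actionable for the user.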
axlearn | github_2023 | python | 201 | apple | markblee | @@ -0,0 +1,54 @@
+# Copyright © 2023 Apple Inc.
+
+"""AoT (ahead-of-time) compilation config tests.
+
+pip install 'jax[tpu]==0.4.21' -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
+
+export TPU_SKIP_MDS_QUERY=1
+python axlearn/experiments/aot_test.py
+
+Reference:
+https://docs.google.com/document/d/1Y5IdmvAZA7UtMHAWkRh8k2PscVoG5FvMH9-E6hygsyY/
+"""
+import pickle
+from typing import Optional
+
+from absl.testing import absltest
+from jax.experimental.serialize_executable import serialize
+
+from axlearn.common import test_utils
+from axlearn.common.aot_compilation import compile_trainer_programs
+from axlearn.common.trainer import SpmdTrainer
+from axlearn.experiments.text.gpt import c4_trainer
+
+
+class AoTCompilationTest(test_utils.TrainerConfigTestCase):
+ """Tests ahead-of-time (AoT) compilation."""
+
+ def _jax_backend(self) -> str:
+ return "cpu" | nit: consider removing if it's going to use the default anyway? |
axlearn | github_2023 | python | 280 | apple | apghml | @@ -17,8 +17,8 @@
"resource_type2": 8,
},
"project_resources": {
- "team1": {"resource_type1": 0.5},
- "team2": {"resource_type1": 0.5, "resource_type2": 1.0},
+ "team1": {"resource_type1": 0.3}, | Does this test fail without the change? I get:
```
> 4.8+11.2==16
True
> 12.8+11.2==24
True
> 0.3+0.7==1
True
```
Maybe it would make sense to use:
```
> 0.1 + 0.2 <= 0.3
False
``` |
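The reviewer's point rests on binary floating point: some decimal sums happen to round exactly after IEEE double arithmetic, while others do not, so a test built on exact `==` can pass only by lucky rounding. A standalone check:

```python
import math

# Some sums land exactly on the expected value after rounding...
print(4.8 + 11.2 == 16.0)   # True, matching the reviewer's session
# ...while others do not: 0.1 + 0.2 is 0.30000000000000004 in IEEE doubles.
print(0.1 + 0.2 == 0.3)     # False
print(0.1 + 0.2 <= 0.3)     # False for the same reason
# Tolerance-based comparisons avoid depending on lucky rounding.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```

Using operands like `0.1 + 0.2 <= 0.3` in the test, as suggested, makes the assertion sensitive to the rounding behavior the change is meant to handle.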
axlearn | github_2023 | python | 273 | apple | yqwangustc | @@ -243,6 +248,8 @@ def test_asr_encoder(self, is_training: bool):
self.assertEqual(output_paddings.shape, (batch_size, output_shape[1]))
self.assertTrue(jnp.all(output_paddings[:2] == output_paddings[2:]))
- # If is_training, outputs should always be different due to augmentation.
+ # If is_training and use_augmenter, outputs should always be different due to augmentation.
# Otherwise, outputs should be the same despite differences in padding.
- self.assertEqual(not is_training, jnp.allclose(outputs[:2], outputs[2:]))
+ self.assertEqual(
+ not (is_training and use_augmenter), jnp.allclose(outputs[:2], outputs[2:]) | nit: I am a little confused about why `outputs[:2]` should be the same as `outputs[2:]`, could you add a comment to explain why they are the same in this case ? |
axlearn | github_2023 | python | 251 | apple | markblee | @@ -229,13 +229,25 @@ def _delete(self):
def _execute(self):
cfg: DataflowJob.Config = self.config
-
# Run the setup command locally, but the launch command via docker.
# This is to ensure that the launch environment matches the worker environment.
- cmd = (
- "docker run --rm --entrypoint /bin/bash "
- f"{self._bundler.id(cfg.name)} -c '{cfg.command}'"
- )
+ processor = platform.processor().lower()
+ if "arm" in processor:
+ # Disable running from docker on Mac M1 chip due to quemu core dump bug. | ```suggestion
# Disable running from docker on Mac M1 chip due to qemu core dump bug.
``` |
axlearn | github_2023 | python | 264 | apple | ruomingp | @@ -387,6 +387,8 @@ def process_kv(key: str, val: Any):
default_val = field.default
if val is default_val and default_val in omit_default_values:
return
+ elif field is None and val in omit_default_values: | I wonder if we could apply this logic only to dataclass fields. |
axlearn | github_2023 | others | 264 | apple | ruomingp | @@ -223,7 +221,6 @@ model.decoder.transformer.layer.self_attention.attention.output_linear.param_par
model.decoder.transformer.layer.self_attention.attention.output_linear.param_partition_spec[0][1]: 'fsdp'
model.decoder.transformer.layer.self_attention.attention.output_linear.param_partition_spec[0][2]: 'seq'
model.decoder.transformer.layer.self_attention.attention.output_linear.param_partition_spec[1]: 'model'
-model.decoder.transformer.layer.self_attention.attention.output_linear.param_partition_spec[2]: None | Applying it to list/tuple/dict seems going too far, e.g., here it will not be easy for readers to tell that `param_partition_spec` actually has three elements rather than two. WDYT? |
axlearn | github_2023 | python | 264 | apple | ruomingp | @@ -371,7 +371,14 @@ def to_flat_dict(self, *, omit_default_values: Collection[Any]) -> Dict[str, Any
result = {}
def enter(key: str, val: Any, default_result: Optional[List]) -> Optional[List]:
- if key and isinstance(val, ConfigBase):
+ if dataclasses.is_dataclass(val) and not isinstance(val, type):
+ omitted_result = [] | ```suggestion
kvs_to_traverse = []
``` |
axlearn | github_2023 | python | 264 | apple | ruomingp | @@ -371,7 +371,14 @@ def to_flat_dict(self, *, omit_default_values: Collection[Any]) -> Dict[str, Any
result = {}
def enter(key: str, val: Any, default_result: Optional[List]) -> Optional[List]:
- if key and isinstance(val, ConfigBase):
+ if dataclasses.is_dataclass(val) and not isinstance(val, type):
+ omitted_result = []
+ for cur_key, cur_val in default_result:
+ if cur_val in omit_default_values:
+ continue | We should omit only if cur_val is the default value of the dataclass field, e.g.,
```
@dataclass
class foo:
x: Optional[int] = None
y: Optional[int] = -1
```
We should omit if `x is None` but not if `y is None`. This is to make it consistent with the attribute values below. |
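The distinction the reviewer draws can be sketched independently of the config framework. The names below are illustrative (this is not the actual `to_flat_dict` code): a value is omitted only when it equals that specific field's declared default, not merely because it appears in a global omit set.

```python
# Illustrative sketch: with Foo below, x=None is omitted (None is x's default)
# but y=None is kept, because y's declared default is -1.
import dataclasses
from typing import Optional

@dataclasses.dataclass
class Foo:
    x: Optional[int] = None
    y: Optional[int] = -1

def visible_fields(obj, omit_default_values=(None,)):
    out = {}
    for field in dataclasses.fields(obj):
        value = getattr(obj, field.name)
        if value == field.default and value in omit_default_values:
            continue  # Omit only when it is THIS field's default AND in the omit set.
        out[field.name] = value
    return out
```

For example, `visible_fields(Foo())` yields `{"y": -1}`, while `visible_fields(Foo(y=None))` yields `{"y": None}`, which is the behavior the comment asks for.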
axlearn | github_2023 | python | 264 | apple | ruomingp | @@ -371,7 +371,23 @@ def to_flat_dict(self, *, omit_default_values: Collection[Any]) -> Dict[str, Any
result = {}
def enter(key: str, val: Any, default_result: Optional[List]) -> Optional[List]:
- if key and isinstance(val, ConfigBase):
+ if dataclasses.is_dataclass(val) and not isinstance(val, type):
+ kvs_to_traverse = []
+ default_result_dict = dict(default_result)
+ for field in dataclasses.fields(val): | What if `default_result_dict` contains keys that are not a field? We would've missed them in this loop. It would be more robust to iterate through `default_result_dict` and check they are all fields? |
axlearn | github_2023 | python | 246 | apple | markblee | @@ -28,6 +29,7 @@ class SpeechFeatureLayerTest(TestCase):
"""Tests SpeechFeatureLayer."""
@parameterized.parameters([True, False])
+ @pytest.mark.fp64 | I wonder why you see these changes? Is this rebased to main? |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -28,6 +28,40 @@
from axlearn.common.utils import Nested, Tensor
+def is_valid_ctc_seq(logitpaddings, labels, labelpaddings): | ```suggestion
def _is_valid_ctc_seq(*, paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
```
In general, can we add type+return annotations and follow existing naming conventions (following https://github.com/apple/axlearn/blob/main/CONTRIBUTING.md) here and elsewhere? |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -88,6 +91,80 @@ def test_map_label_sequences(
jit_fn(inputs, blank_id=blank_id, pad_id=pad_id),
)
+class ValidCtcSeqTest(TestCase):
+
+ def get_logits_and_labels(self, batchsize, timesteps, labelsteps, nclasses):
+ logits = np.random.randn(batchsize, timesteps, nclasses)
+ logitpaddings = np.zeros((batchsize, timesteps), dtype=np.int32)
+ labels = np.random.randint(
+ 1, nclasses, size=(batchsize, labelsteps)
+ ).astype(np.int32)
+ labelpaddings = np.zeros((batchsize, labelsteps), dtype=np.int32)
+ return logits, logitpaddings, labels, labelpaddings
+
+ def test_label_longer_than_input(self):
+ batchsize = 4
+ timesteps = 10
+ labelsteps = 11
+ nclasses = 400
+ # generate logits and labels, which has logits shorter than labels
+ logits, logitpaddings, labels, labelpaddings = self.get_logits_and_labels(
+ batchsize, timesteps, labelsteps, nclasses
+ )
+ per_seq_loss = optax.ctc_loss(
+ logits, logitpaddings, labels, labelpaddings, blank_id=0)
+ print(per_seq_loss)
+ # they are very close to `logepsilon` (default in optax is -1e5)
+ per_seq_validality = is_valid_ctc_seq(logitpaddings, labels, labelpaddings)
+ self.assertAllClose(per_seq_validality, [0.0] * batchsize)
+
+ def test_label_shorter_than_input(self):
+ batchsize = 4
+ timesteps = 15
+ labelsteps = 10
+ nclasses = 400
+ logits, logitpaddings, _, labelpaddings = self.get_logits_and_labels(
+ batchsize, timesteps, labelsteps, nclasses
+ )
+ # to make sure there is no duplicate in the labels
+ labels = np.tile(np.arange(labelsteps)[np.newaxis, :], [batchsize, 1])
+
+ per_seq_loss = optax.ctc_loss(
+ logits, logitpaddings, labels, labelpaddings)
+ # per_seq_loss in this case looks normal, it should be around log(400)*15
+ print(per_seq_loss)
+ per_seq_validality = is_valid_ctc_seq(logitpaddings, labels, labelpaddings)
+ self.assertAllClose(per_seq_validality, [1.0] * batchsize)
+
+ def test_label_with_duplicates(self):
+ batchsize = 4
+ timesteps = 12
+ labelsteps = 10
+ nclasses = 400
+ logits, logitpaddings, _, labelpaddings = self.get_logits_and_labels(
+ batchsize, timesteps, labelsteps, nclasses
+ )
+ # there are 12 timesteps, and 10 labels. If the consecutive duplicates in
+ # one sequence is larger than 2, then the pair become non-valid
+ labels = np.array(
+ [
+ [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], # no duplicates
+ [0, 0, 1, 1, 2, 3, 4, 5, 6, 7], # 2 duplicates
+ [0, 0, 0, 1, 1, 2, 3, 4, 5, 6], # 3 duplicates -> invalid seq
+ [0, 0, 1, 1, 2, 3, 4, 5, 6, 6],
+ # 2 duplicates, since the last 6 is a padding
+ ],
+ dtype=np.int32,
+ )
+ labelpaddings[3, 9:] = 1
+ per_seq_loss = optax.ctc_loss(
+ logits, logitpaddings, labels, labelpaddings)
+ # per_seq_loss[0:1] and per_seq_loss[3] should near log(400) * 15, while
+ # per_seq_loss[2] should be around logepsilon
+ print(per_seq_loss)
+ per_seq_validality = is_valid_ctc_seq(logitpaddings, labels, labelpaddings)
+ self.assertAllClose(per_seq_validality, [1.0, 1.0, 0.0, 1.0])
+
class CTCPrefixMergerTest(TestCase): | Shall we also update `test_forward` to make sure we cover the training codepath? |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -230,6 +238,16 @@ def assertNestedEqual(self, a, b):
if hasattr(a_value, "dtype"):
self.assertEqual(a_value.dtype, b_value.dtype)
+ def assertAllClose(self, x, y, check_dtypes=True, rtol=1e-5, atol=1e-5, **kwargs): | Do we need this, or can we use `assertNestedAllClose`? |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -89,6 +91,75 @@ def test_map_label_sequences(
)
+class ValidCtcSeqTest(TestCase):
+ def get_logits_and_labels(self, batchsize, timesteps, labelsteps, nclasses): | Same comment as above re types and naming -- also, should we use `jnp`/`jax.random` here and elsewhere? Besides consistency, we can avoid `np.random` calls that may depend on a global seed? |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -28,6 +28,39 @@
from axlearn.common.utils import Nested, Tensor
+def is_valid_ctc_seq(logitpaddings, labels, labelpaddings):
+ """Returns for per example sequence if it passes validity check.
+
+ Note that the above `ctc_loss_with_alignments` returns logeps
+ (usually a very large number) if the input length is smaller than
+ the label length plus number of consectutive duplications.
+ However, in that case, we should ignore the loss.
+
+ A validity check is passed if for an example when :
+ input.length >= labels.length + num(consecutive dup label tokens)
+
+ Args:
+ logitpaddings: [b, t], 0/1 JTensor.
+ labels: [b, t], int32 JTensor.
+ labelpaddings: [b, t], 0/1 JTensor.
+ Returns:
+ A shape [b] float tensor indicating if each (input, label) pair is valid,
+ with a value of 1.0 indicating valid and 0.0 otherwise. | Fix docstring following rest of repo? |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -28,6 +28,39 @@
from axlearn.common.utils import Nested, Tensor
+def is_valid_ctc_seq(logitpaddings, labels, labelpaddings):
+ """Returns for per example sequence if it passes validity check.
+
+ Note that the above `ctc_loss_with_alignments` returns logeps | What is `ctc_loss_with_alignments` referring to? |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -89,6 +91,75 @@ def test_map_label_sequences(
)
+class ValidCtcSeqTest(TestCase):
+ def get_logits_and_labels(self, batchsize, timesteps, labelsteps, nclasses):
+ logits = np.random.randn(batchsize, timesteps, nclasses)
+ logitpaddings = np.zeros((batchsize, timesteps), dtype=np.int32)
+ labels = np.random.randint(1, nclasses, size=(batchsize, labelsteps)).astype(np.int32)
+ labelpaddings = np.zeros((batchsize, labelsteps), dtype=np.int32)
+ return logits, logitpaddings, labels, labelpaddings
+
+ def test_label_longer_than_input(self):
+ batchsize = 4
+ timesteps = 10
+ labelsteps = 11
+ nclasses = 400
+ # generate logits and labels, which has logits shorter than labels
+ logits, logitpaddings, labels, labelpaddings = self.get_logits_and_labels(
+ batchsize, timesteps, labelsteps, nclasses
+ )
+ per_seq_loss = optax.ctc_loss(logits, logitpaddings, labels, labelpaddings, blank_id=0)
+ print(per_seq_loss) | nit: Remove debugging prints here and elsewhere? |
axlearn | github_2023 | python | 211 | apple | zhiyun | @@ -28,6 +28,43 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(
+ paddings: Tensor,
+ target_labels: Tensor,
+ target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns logeps (default to 1e5) if the
+ input length is smaller than the label length plus number of
+ consectutive duplications. However, in that case, we should
+ ignore the loss.
+
+ A validity check is passed if for an example when :
+ input.length >= labels.length + num(consecutive dup label tokens)
+
+ Args:
+ paddings: [b, t], 0/1 Tensor.
+ target_labels: [b, t], int32 Tensor.
+ target_paddings: [b, t], 0/1 Tensor.
+
+ Returns:
+ A shape [b] float tensor indicating if each (input, label) pair is valid,
+ with a value of 1.0 indicating valid and 0.0 otherwise.
+ """
+ # [b]
+ label_lengths = jnp.sum(1.0 - target_paddings, axis=-1)
+ # [b]
+ input_lengths = jnp.sum(1.0 - paddings, axis=-1)
+ # [b, t-1]
+ dups = (1.0 - target_paddings[:, 1:]) * (target_labels[:, :-1] == target_labels[:, 1:])
+ # [b]
+ num_consecutive_dups = jnp.sum(dups, axis=-1)
+ # [b]
+ is_valid = (label_lengths + num_consecutive_dups) <= input_lengths
+ is_valid = is_valid.astype(jnp.float32) | Do we need to cast type here, since we cast type in forward? |
axlearn | github_2023 | python | 211 | apple | zhiyun | @@ -28,6 +28,43 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(
+ paddings: Tensor,
+ target_labels: Tensor,
+ target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns logeps (default to 1e5) if the | Can we give a reference link to optax implementation on this? |
axlearn | github_2023 | python | 211 | apple | zhiyun | @@ -28,6 +28,39 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns -logeps (default to 1e5) if the
+ input length is smaller than the label length plus number of
+ consectutive duplications. However, in that case, we should
+ ignore the loss.
+
+ A validity check is passed if for an example when :
+ input.length >= labels.length + num(consecutive dup label tokens)
+
+ Args:
+ paddings: [b, t], 0/1 Tensor.
+ target_labels: [b, t], int32 Tensor.
+ target_paddings: [b, t], 0/1 Tensor.
+
+ Returns:
+ A shape [b] float tensor indicating if each (input, label) pair is valid,
+ with a value of 1.0 indicating valid and 0.0 otherwise.
+ """
+ # [b]
+ label_lengths = jnp.sum(1.0 - target_paddings, axis=-1)
+ # [b]
+ input_lengths = jnp.sum(1.0 - paddings, axis=-1)
+ # [b, t-1] | Follow the rest of the file, use `[batch_size, num_frames-1]`? |
axlearn | github_2023 | python | 211 | apple | zhiyun | @@ -28,6 +28,39 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns -logeps (default to 1e5) if the
+ input length is smaller than the label length plus number of
+ consectutive duplications. However, in that case, we should
+ ignore the loss.
+
+ A validity check is passed if for an example when :
+ input.length >= labels.length + num(consecutive dup label tokens)
+
+ Args:
+ paddings: [b, t], 0/1 Tensor. | paddings of input frames. |
axlearn | github_2023 | python | 211 | apple | zhiyun | @@ -28,6 +28,39 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns -logeps (default to 1e5) if the
+ input length is smaller than the label length plus number of
+ consectutive duplications. However, in that case, we should
+ ignore the loss.
+
+ A validity check is passed if for an example when :
+ input.length >= labels.length + num(consecutive dup label tokens) | Add a brief explanation, sth like "to account for the extra blank token inserted between the dup tokens." |
axlearn | github_2023 | python | 211 | apple | zhiyun | @@ -89,6 +91,97 @@ def test_map_label_sequences(
)
+class ValidCtcSeqTest(TestCase):
+ def get_logits_and_labels(
+ self, batch_size: int, time_steps: int, target_steps: int, nclasses: int
+ ) -> tuple[Tensor, Tensor, Tensor, Tensor]: | time_steps -> input_lengths
target_steps -> target_lengths
nclasses -> vocab_size |
axlearn | github_2023 | python | 211 | apple | zhiyun | @@ -89,6 +91,97 @@ def test_map_label_sequences(
)
+class ValidCtcSeqTest(TestCase):
+ def get_logits_and_labels(
+ self, batch_size: int, time_steps: int, target_steps: int, nclasses: int
+ ) -> tuple[Tensor, Tensor, Tensor, Tensor]:
+ prng_key = jax.random.PRNGKey(1234)
+ logits = jax.random.normal(prng_key, (batch_size, time_steps, nclasses), dtype=jnp.float32)
+ paddings = jnp.zeros((batch_size, time_steps), dtype=np.int32)
+ target_labels = jax.random.randint(
+ prng_key,
+ shape=(batch_size, target_steps),
+ minval=1,
+ maxval=nclasses - 1,
+ dtype=jnp.int32,
+ )
+ target_paddings = jnp.zeros(shape=(batch_size, target_steps), dtype=jnp.int32)
+ return logits, paddings, target_labels, target_paddings
+
+ def test_label_longer_than_input(self):
+ batchsize = 4
+ timesteps = 10
+ labelsteps = 11
+ nclasses = 400
+ # generate logits and labels, which has logits shorter than labels | Nit: Use full sentence for comment: start with Capitalize, and end with period `.` |
axlearn | github_2023 | python | 211 | apple | zhiyun | @@ -89,6 +91,97 @@ def test_map_label_sequences(
)
+class ValidCtcSeqTest(TestCase):
+ def get_logits_and_labels(
+ self, batch_size: int, time_steps: int, target_steps: int, nclasses: int
+ ) -> tuple[Tensor, Tensor, Tensor, Tensor]:
+ prng_key = jax.random.PRNGKey(1234)
+ logits = jax.random.normal(prng_key, (batch_size, time_steps, nclasses), dtype=jnp.float32)
+ paddings = jnp.zeros((batch_size, time_steps), dtype=np.int32)
+ target_labels = jax.random.randint(
+ prng_key,
+ shape=(batch_size, target_steps),
+ minval=1,
+ maxval=nclasses - 1,
+ dtype=jnp.int32,
+ )
+ target_paddings = jnp.zeros(shape=(batch_size, target_steps), dtype=jnp.int32)
+ return logits, paddings, target_labels, target_paddings
+
+ def test_label_longer_than_input(self):
+ batchsize = 4
+ timesteps = 10
+ labelsteps = 11
+ nclasses = 400
+ # generate logits and labels, which has logits shorter than labels
+ logits, paddings, target_labels, target_paddings = self.get_logits_and_labels(
+ batchsize, timesteps, labelsteps, nclasses
+ )
+ per_seq_loss = optax.ctc_loss(logits, paddings, target_labels, target_paddings, blank_id=0)
+ for x in per_seq_loss:
+ # because these are invalid sequence loss, the optax.ctc_loss will return | Same here. Use complete sentence for comment. |
axlearn | github_2023 | python | 211 | apple | zhiyun | @@ -230,6 +238,16 @@ def assertNestedEqual(self, a, b):
if hasattr(a_value, "dtype"):
self.assertEqual(a_value.dtype, b_value.dtype)
+ def assertTensorAllClose(self, x, y, rtol=1e-5, atol=1e-5, check_dtypes=True, **kwargs):
+ """assert Tensor are all close."""
+ x = np.asarray(x)
+ y = np.asarray(y)
+ if check_dtypes:
+ self.assertDtypesMatch(x, y)
+ x = x.astype(np.float32) if x.dtype == jnp.bfloat16 else x
+ y = y.astype(np.float32) if y.dtype == jnp.bfloat16 else y
+ np.testing.assert_allclose(x, y, rtol=rtol, atol=atol, **kwargs)
+ | @markblee to comment if we want to add this util. |
axlearn | github_2023 | python | 211 | apple | ruomingp | @@ -28,6 +28,40 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns -logeps (default to 1e5) if the
+ input length is smaller than the label length plus number of
+ consectutive duplications, because we need an blank to token to transit
+ between the same tokens. When this condition is not met, it should be
+ considered as an invalid sequence and the loss should be ignored.
+
+ A validity check is passed if for an example when :
+ input.length >= labels.length + num(consecutive dup label tokens)
+
+ Args:
+ paddings: [batch_size, num_frames], 0/1 Tensor, paddings of input frames.
+ target_labels: [batch_size, num_frames], int32 Tensor.
+ target_paddings: [batch_size, num_frames], 0/1 Tensor.
+
+ Returns:
+ A shape [b] float tensor indicating if each (input, label) pair is valid, | ```suggestion
A float tensor of shape [batch_size] indicating if each (input, label) pair is valid,
``` |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -28,6 +28,40 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns -logeps (default to 1e5) if the
+ input length is smaller than the label length plus number of
+ consectutive duplications, because we need an blank to token to transit
+ between the same tokens. When this condition is not met, it should be
+ considered as an invalid sequence and the loss should be ignored. | ```suggestion
consecutive duplications, because we need a blank token to transition
between the same labels. When this condition is not met, it should be
considered as an invalid sequence and the loss should be ignored.
``` |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -28,6 +28,40 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns -logeps (default to 1e5) if the
+ input length is smaller than the label length plus number of
+ consectutive duplications, because we need an blank to token to transit
+ between the same tokens. When this condition is not met, it should be
+ considered as an invalid sequence and the loss should be ignored.
+
+ A validity check is passed if for an example when : | ```suggestion
A validity check is passed if for an example when:
``` |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -28,6 +28,40 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns -logeps (default to 1e5) if the
+ input length is smaller than the label length plus number of
+ consectutive duplications, because we need an blank to token to transit
+ between the same tokens. When this condition is not met, it should be
+ considered as an invalid sequence and the loss should be ignored.
+
+ A validity check is passed if for an example when :
+ input.length >= labels.length + num(consecutive dup label tokens)
+
+ Args:
+ paddings: [batch_size, num_frames], 0/1 Tensor, paddings of input frames.
+ target_labels: [batch_size, num_frames], int32 Tensor.
+ target_paddings: [batch_size, num_frames], 0/1 Tensor. | ```suggestion
paddings: A 0/1 Tensor of shape [batch_size, num_frames] representing paddings of input frames.
target_labels: An int Tensor of shape [batch_size, num_labels].
target_paddings: A 0/1 Tensor of shape [batch_size, num_labels].
``` |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -89,6 +91,99 @@ def test_map_label_sequences(
)
+class ValidCtcSeqTest(TestCase):
+ def get_logits_and_labels(
+ self, batch_size: int, input_lengths: int, target_lengths: int, vocab_size: int
+ ) -> tuple[Tensor, Tensor, Tensor, Tensor]:
+ prng_key = jax.random.PRNGKey(1234)
+ logits = jax.random.normal(
+ prng_key, (batch_size, input_lengths, vocab_size), dtype=jnp.float32
+ )
+ paddings = jnp.zeros((batch_size, input_lengths), dtype=np.int32)
+ target_labels = jax.random.randint(
+ prng_key,
+ shape=(batch_size, target_lengths),
+ minval=1,
+ maxval=vocab_size - 1,
+ dtype=jnp.int32,
+ )
+ target_paddings = jnp.zeros(shape=(batch_size, target_lengths), dtype=jnp.int32)
+ return logits, paddings, target_labels, target_paddings
+
+ def test_label_longer_than_input(self):
+ batch_size = 4
+ input_lengths = 10
+ target_lengths = 11
+ vocab_size = 400
+ # Generate logits and labels, which has logits shorter than labels.
+ logits, paddings, target_labels, target_paddings = self.get_logits_and_labels(
+ batch_size, input_lengths, target_lengths, vocab_size
+ )
+ per_seq_loss = optax.ctc_loss(logits, paddings, target_labels, target_paddings, blank_id=0)
+ for x in per_seq_loss:
+ # Because these are invalid sequence loss, the optax.ctc_loss will return
+ # -logeps for these sequences (but theoretically, this is not correct).
+ self.assertGreater(x, 1e5)
+ per_seq_validality = _is_valid_ctc_seq(paddings, target_labels, target_paddings).astype(
+ jnp.float32
+ )
+ self.assertTensorAllClose(per_seq_validality, [0.0] * batch_size)
+
+ def test_label_shorter_than_input(self):
+ batch_size = 4
+ input_lengths = 15
+ target_lengths = 10
+ vocab_size = 400
+ logits, paddings, _, target_paddings = self.get_logits_and_labels(
+ batch_size, input_lengths, target_lengths, vocab_size
+ )
+ # to make sure there is no duplicate in the labels
+ labels = jnp.tile(jnp.arange(target_lengths)[jnp.newaxis, :], [batch_size, 1])
+
+ per_seq_loss = optax.ctc_loss(logits, paddings, labels, target_paddings)
+ # per_seq_loss in this case looks normal, it should be around log(400)*15, so
+ # significantly smaller than 1e5
+ for x in per_seq_loss:
+ self.assertLess(x, 1e5)
+ per_seq_validality = _is_valid_ctc_seq(paddings, labels, target_paddings).astype(
+ jnp.float32
+ )
+ self.assertTensorAllClose(per_seq_validality, [1.0] * batch_size)
+
+ def test_label_with_duplicates(self):
+ batch_size = 4
+ input_lengths = 12
+ target_lengths = 10
+ vocab_size = 400
+ logits, paddings, _, target_paddings = self.get_logits_and_labels(
+ batch_size, input_lengths, target_lengths, vocab_size
+ )
+ # there are 12 timesteps, and 10 labels. If the consecutive duplicates in
+ # one sequence is larger than 2, then the pair become non-valid
+ target_labels = np.array(
+ [
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], # no duplicates | Consider having a test w/ duplicates that are not consecutive? |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -89,6 +91,99 @@ def test_map_label_sequences(
)
+class ValidCtcSeqTest(TestCase):
+ def get_logits_and_labels(
+ self, batch_size: int, input_lengths: int, target_lengths: int, vocab_size: int
+ ) -> tuple[Tensor, Tensor, Tensor, Tensor]:
+ prng_key = jax.random.PRNGKey(1234)
+ logits = jax.random.normal(
+ prng_key, (batch_size, input_lengths, vocab_size), dtype=jnp.float32
+ )
+ paddings = jnp.zeros((batch_size, input_lengths), dtype=np.int32)
+ target_labels = jax.random.randint(
+ prng_key,
+ shape=(batch_size, target_lengths),
+ minval=1,
+ maxval=vocab_size - 1,
+ dtype=jnp.int32,
+ )
+ target_paddings = jnp.zeros(shape=(batch_size, target_lengths), dtype=jnp.int32)
+ return logits, paddings, target_labels, target_paddings
+
+ def test_label_longer_than_input(self):
+ batch_size = 4
+ input_lengths = 10
+ target_lengths = 11
+ vocab_size = 400
+ # Generate logits and labels, which has logits shorter than labels.
+ logits, paddings, target_labels, target_paddings = self.get_logits_and_labels(
+ batch_size, input_lengths, target_lengths, vocab_size
+ )
+ per_seq_loss = optax.ctc_loss(logits, paddings, target_labels, target_paddings, blank_id=0)
+ for x in per_seq_loss:
+ # Because these are invalid sequence loss, the optax.ctc_loss will return
+ # -logeps for these sequences (but theoretically, this is not correct).
+ self.assertGreater(x, 1e5)
+ per_seq_validality = _is_valid_ctc_seq(paddings, target_labels, target_paddings).astype(
+ jnp.float32
+ )
+ self.assertTensorAllClose(per_seq_validality, [0.0] * batch_size)
+
+ def test_label_shorter_than_input(self):
+ batch_size = 4
+ input_lengths = 15
+ target_lengths = 10
+ vocab_size = 400
+ logits, paddings, _, target_paddings = self.get_logits_and_labels(
+ batch_size, input_lengths, target_lengths, vocab_size
+ )
+ # to make sure there is no duplicate in the labels | Please see @zhiyun 's comment re. comment formatting here and below. |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -28,6 +28,40 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check. | ```suggestion
def _is_valid_ctc_seq(*, paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
"""Returns whether each input sequence passes validity check.
``` |
axlearn | github_2023 | python | 211 | apple | markblee | @@ -28,6 +28,40 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(paddings: Tensor, target_labels: Tensor, target_paddings: Tensor) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns -logeps (default to 1e5) if the
+ input length is smaller than the label length plus number of
+    consecutive duplications, because we need a blank token to transition
+    between the same tokens. When this condition is not met, it should be
+ considered as an invalid sequence and the loss should be ignored.
+
+    A validity check is passed if for an example when:
+ input.length >= labels.length + num(consecutive dup label tokens)
+
+ Args:
+ paddings: [batch_size, num_frames], 0/1 Tensor, paddings of input frames.
+ target_labels: [batch_size, num_frames], int32 Tensor.
+ target_paddings: [batch_size, num_frames], 0/1 Tensor.
+
+ Returns:
+ A shape [b] float tensor indicating if each (input, label) pair is valid,
+ with a value of 1.0 indicating valid and 0.0 otherwise.
+ """
+ # [batch_size, ]
+ label_lengths = jnp.sum(1.0 - target_paddings, axis=-1)
+ # [batch_size, ]
+ input_lengths = jnp.sum(1.0 - paddings, axis=-1)
+ # [batch_size, num_frames - 1]
+ dups = (1.0 - target_paddings[:, 1:]) * (target_labels[:, :-1] == target_labels[:, 1:]) | I wonder whether we should consider `jnp.logical_or(target_paddings[:, :-1], target_paddings[:, 1:])` when dropping dups? |
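For intuition, here is a minimal standalone NumPy sketch of the validity check described in the docstring above (an illustrative analogue, not the axlearn implementation):

```python
import numpy as np

def is_valid_ctc_seq(paddings, target_labels, target_paddings):
    """Per example: input length >= label length + number of consecutive duplicate pairs."""
    label_lengths = np.sum(1.0 - target_paddings, axis=-1)
    input_lengths = np.sum(1.0 - paddings, axis=-1)
    # Each adjacent, non-padded pair of equal labels needs one blank frame in between.
    dups = (1.0 - target_paddings[:, 1:]) * (target_labels[:, :-1] == target_labels[:, 1:])
    num_dups = np.sum(dups, axis=-1)
    return (input_lengths >= label_lengths + num_dups).astype(np.float32)

paddings = np.zeros((2, 5))  # Both inputs have 5 frames.
labels = np.array([[1, 1, 2, 0, 0], [1, 1, 1, 1, 1]])
label_paddings = np.array([[0, 0, 0, 1, 1], [0, 0, 0, 0, 0]])
# Example 0: 3 labels + 1 duplicate pair -> needs 4 frames, has 5 -> valid.
# Example 1: 5 labels + 4 duplicate pairs -> needs 9 frames, has 5 -> invalid.
print(is_valid_ctc_seq(paddings, labels, label_paddings))  # [1. 0.]
```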
axlearn | github_2023 | python | 211 | apple | markblee | @@ -28,6 +28,47 @@
from axlearn.common.utils import Nested, Tensor
+def _is_valid_ctc_seq(
+ *, paddings: Tensor, target_labels: Tensor, target_paddings: Tensor
+) -> Tensor:
+ """Returns for per example sequence if it passes validity check.
+
+ Note that `optax.ctc_loss` returns -logeps (default to 1e5) if the
+ input length is smaller than the label length plus number of
+ consecutive duplications, because we need a blank label to transition
+ between the same labels. When this condition is not met, it should be
+ considered as an invalid sequence and the loss should be ignored.
+
+ A validity check is passed if for an example when:
+ input.length >= labels.length + num(consecutive dup label tokens)
+
+ Args:
+ paddings: A 0/1 tensor of shape [batch_size, num_frames], indicating whether | nit -- please use 4 space indents. |
axlearn | github_2023 | python | 253 | apple | tgunter | @@ -2358,17 +2373,29 @@ def __init__(self, cfg: Config, *, parent: Module):
raise NotImplementedError(cfg.structure)
self._add_child("stochastic_depth", cfg.stochastic_depth)
+ for value in cfg.add_value_rms_norm_summary:
+ if value != "linear2_outputs":
+ raise NotImplementedError(f"add_value_rms_norm_summary: {value}")
def forward(self, inputs: Tensor) -> Tensor:
cfg = self.config
+
+ def _linear2(x):
+ """Root mean square function.""" | nit:
```suggestion
"""Applies linear2, optionally logging RMS norm of the output."""
``` |
axlearn | github_2023 | python | 223 | apple | markblee | @@ -14,7 +14,17 @@
class Job(Configurable):
- """Base Job definition."""
+ """Base Job definition.
+
+ Job's main API method is `execute`, which sets up the environment according to `bundler`,
+ runs the specified `command`, and retries if necessary.
+
+ Subclasses of `Job` further specify the platform (e.g., TPUs on GCP) on which the job
+ should run.
+
+ The implementation of `execute` should be idempotent---invoking `execute` multiple times | (As a side note, this is mostly necessary for jobs intended to be run by bastion.) |
axlearn | github_2023 | python | 222 | apple | markblee | @@ -350,7 +350,8 @@ def _run_command(self):
# Set env vars, run the command and pipe outputs to run log.
# Depending on command returncode, emit either success or failure flag.
# Note that we use PIPESTATUS[0] to check the returncode of the first command in the pipe.
- cmd = f"""mkdir -p {self._output_dir}; echo "Starting command..." >> {self._run_log};
+ cmd = f"""ulimit -n 100000; | Does it work if we set it in `start_tpu.sh`? |
axlearn | github_2023 | python | 230 | apple | zhiyun | @@ -0,0 +1,149 @@
+# Copyright © 2023 Apple Inc.
+
+"""Base Encoder-Decoder model interface."""
+
+from typing import Callable, Dict, Optional, Sequence, Tuple
+
+from axlearn.common.base_layer import BaseLayer
+from axlearn.common.base_model import BaseModel
+from axlearn.common.config import REQUIRED, ConfigOr, Required, config_class
+from axlearn.common.decoding import BeamSearchOutputs, SampleOutputs
+from axlearn.common.logit_modifiers import LogitsToLogitsFn
+from axlearn.common.utils import Nested, Tensor, get_recursively
+
+
+class BaseEncoderDecoderModel(BaseModel):
+ """Defines the interface for Encoder-Decoder model implementations."""
+
+ @config_class
+ class Config(BaseModel.Config):
+ """Configures BaseEncoderDecoderModel."""
+
+ encoder: Required[BaseLayer.Config] = REQUIRED
+ decoder: Required[BaseLayer.Config] = REQUIRED
+
+ # We drop the kwargs from BaseModel, since they aren't used here.
+ # pylint: disable-next=arguments-differ
+ def forward(
+ self,
+ input_batch: Nested[Tensor],
+ return_aux: bool = False,
+ ) -> Tuple[Tensor, Nested[Tensor]]:
+ """Produces Encoder-Decoder loss and predictions (such as logits and decoder hidden states)
+ in auxiliary outputs.
+
+ Args:
+ input_batch: A dict with the following entries:
+ source: A dict containing keyword arguments for the encoder.
+ target: A dict containing keyword arguments for the decoder.
+ target_labels: An int Tensor of shape [batch_size, target_len] for computing loss.
+ To represent paddings, use target_labels < 0.
+ return_aux: Boolean to determine whether auxiliary outputs and metrics are returned.
+
+ Returns:
+ A tuple (loss, aux_outputs):
+ loss: A scalar float Tensor representing the cross-entropy loss.
+ aux_outputs: A dict containing auxiliary outputs if `return_aux=True`; otherwise, an
+ empty dict.
+ """
+ self._validate_input_batch(input_batch, paths=["source", "target", "target_labels"]) | Is `target_labels` required? Can we remove it here?
`self._validate_input_batch(input_batch, paths=["source", "target"])` |
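As a sketch of what path validation might look like, here is a hypothetical standalone helper (the name and behavior are assumptions for illustration, not the axlearn API):

```python
def validate_input_batch(input_batch, paths):
    """Raises ValueError if any of the given '/'-separated paths is missing from the batch."""
    for path in paths:
        node = input_batch
        for key in path.split("/"):
            if not isinstance(node, dict) or key not in node:
                raise ValueError(f"Input batch is missing path: {path}")
            node = node[key]

batch = {"source": {"input_ids": [1, 2]}, "target": {"input_ids": [3]}}
validate_input_batch(batch, ["source", "target", "source/input_ids"])  # Passes silently.
try:
    validate_input_batch(batch, ["target_labels"])
except ValueError as e:
    print(e)  # Input batch is missing path: target_labels
```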
axlearn | github_2023 | python | 227 | apple | zhiyun | @@ -29,7 +29,7 @@
axlearn gcp dataflow start \
--name=$USER-dataflow \
--bundler_spec=dockerfile=Dockerfile \
- --bundler_spec=base_image=apache/beam_python3.8_sdk:2.38.0 \
+ --bundler_spec=base_image=apache/beam_python3.10_sdk:2.52.0 \ | 3.9_sdk:2.52.0 |
axlearn | github_2023 | python | 206 | apple | markblee | @@ -1877,8 +1971,8 @@ def _compute_logits(self, q_proj: Tensor, k_proj: Tensor) -> Tensor:
# In the original XL-Net code, it applies scale on AC + BD:
#
# https://github.com/zihangdai/xlnet/blob/bbaa3a6fa0b3a2ee694e8cf66167434f9eca9660/modeling.py#L148
- scale = self.per_head_dim() ** -0.5
- logits = logits * scale
+ # with child_context("apply_scale_factor_logits", module=self): | Intended? |
axlearn | github_2023 | python | 206 | apple | ruomingp | @@ -1847,13 +1939,12 @@ def forward(
def _compute_logits(self, q_proj: Tensor, k_proj: Tensor) -> Tensor:
cfg = self.config
- if cfg.per_dim_scale is not None:
- # Applies a per dim scale on q_proj.
- q_proj = self.per_dim_scale(q_proj)
+ with child_context("apply_per_dim_scale", module=self):
+ q_proj = self.scale_query.apply_per_dim_scale(q_proj) | Should we call `scale_query.apply_norm` before `apply_per_dim_scale`? |
axlearn | github_2023 | python | 203 | apple | markblee | @@ -19,6 +19,22 @@
--trainer_dir=$OUTPUT_DIR \
--data_dir=$GS_ROOT/tensorflow_datasets \
--mesh_selector=$INSTANCE_TYPE
+
+wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh; \
+bash Miniconda3-latest-Linux-x86_64.sh; \
+bash
+conda create -n axlearn python=3.10; \
+conda activate axlearn; \
+git clone https://github.com/apple/axlearn.git; \
+cd axlearn; \
+git fetch origin pull/203/head:debug_gpu; \ | Is this intended for merge, or just to share w/ jax team? |
axlearn | github_2023 | python | 198 | apple | ruomingp | @@ -58,6 +59,9 @@ class Config(BaseModel.Config):
# These will be used to constrain the sequence axis of relevant inputs.
# If None, no batch sequence dim constraints are applied.
seq_axis_names: Optional[Tuple[str]] = None
+ # `aux_loss` can only be collected when `aux_loss_regex` is set and there exist paths in
+ # `module_outputs` that fully match the regex.
+ aux_loss_regex: Optional[str] = None | ```suggestion
# If not None, collect Tensors from `module_outputs` whose paths fully match the regular
# expression and compute the sum as the auxiliary loss, which will be added to the overall
# model loss and reported in the summary as `aux_loss`.
#
# This can be used to support regularization losses such as the load balancing loss in MoE
# routing.
aux_loss_regex: Optional[str] = None
``` |
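A toy sketch of regex-based auxiliary-loss collection over a flattened outputs dict (an illustration of the idea only, not the axlearn implementation; the paths below are made up):

```python
import re

def collect_aux_loss(module_outputs, aux_loss_regex):
    """Sums values whose flattened path fully matches the regex; 0.0 if none match."""
    pattern = re.compile(aux_loss_regex)
    matched = [v for path, v in module_outputs.items() if pattern.fullmatch(path)]
    return sum(matched) if matched else 0.0

outputs = {
    "encoder/moe/load_balance_loss": 0.25,
    "decoder/moe/load_balance_loss": 0.5,
    "decoder/logits_norm": 7.0,  # Does not match, so it is not treated as a loss.
}
print(collect_aux_loss(outputs, r".*/load_balance_loss"))  # 0.75
```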
axlearn | github_2023 | python | 197 | apple | ruomingp | @@ -366,20 +393,65 @@ def __init__(
super().__init__(
cfg, parent=parent, model=model, model_param_partition_specs=model_param_partition_specs
)
-
- for name, calculator_cfg in cfg.metric_calculators.items():
- self._add_child(
+ # Maps dst to (one or more) src calculators, forming a DAG.
+ self._calculator_dag: dict[str, set[str]] = defaultdict(set)
+ # Each edge (src, dst) corresponds to a dst_key.
+ self._edge_names: dict[tuple[str, str], str] = {}
+ # Maps dst to (one or more) dst_keys.
+ keys_by_dst: dict[str, set[str]] = defaultdict(set)
+
+ # Given `dependencies` in the form of (src, dst), build the DAG.
+ for src, dst, dst_key in self._dependencies():
+ if src in self._calculator_dag[dst]:
+ raise ValueError(f"Encountered duplicate edge ({src}, {dst}).")
+ self._calculator_dag[dst].add(src)
+ self._edge_names[(src, dst)] = dst_key
+
+ # Make sure we don't have duplicate keys for the same dst.
+ if dst_key in keys_by_dst[dst]:
+ raise ValueError(f"Encountered duplicate key {dst_key} for {dst}.")
+ keys_by_dst[dst].add(dst_key) | Thanks for adding this check. I envision a more strict check where `dst_key` is required to be globally unique.
Strictly speaking it's not necessary, but I think it will make the data model clearer by avoiding confusion about which src produces the dst_key. WDYT?
axlearn | github_2023 | python | 178 | apple | markblee | @@ -176,7 +176,6 @@ def _locate_user_config_file() -> Optional[str]:
config_file = None
for path in search_paths:
if os.path.exists(path):
- logging.log_first_n(logging.INFO, "Found user config at %s", 1, path) | Missing a rebase? |
axlearn | github_2023 | others | 163 | apple | ruomingp | @@ -1,3 +1,19 @@
# Concepts in the AXLearn Library
**This doc is still under construction.**
+
+
+## Input Batch Sharding
+
+WWhen using `SpmdTrainer`, it is common to read and process inputs across all processes and hosts. For the most common use case where you want each process to have an equal portion of the input batch, this process is mostly transparent to the user. For more complex use cases, it can be helpful to have a general idea of the what is happening behind the scenes. | ```suggestion
When using `SpmdTrainer`, it is common to read and process inputs across all processes and hosts.
For the most common use case where you want each process to have an equal portion of the input batch, this process is mostly transparent to the user.
For more complex use cases, it can be helpful to have a general idea of what is happening behind the scenes.
``` |
axlearn | github_2023 | python | 154 | apple | ruomingp | @@ -1528,32 +1595,110 @@ def test_data_types(self, dtype: jnp.dtype, per_dim_scale: Optional[PerDimScale.
dtype=(jnp.float32, jnp.float16, jnp.bfloat16),
per_dim_scale=(None, PerDimScale.default_config()),
atten_logit_cap=(0.0, 20.0),
+ input_linear=(
+ None, # Use the default linear.
+ attention.QKVLinear.default_config(),
+ attention.GroupedQKVLinear.default_config().set(num_kv_heads=4),
+ attention.FusedGroupedQKVLinear.default_config().set(num_kv_heads=4),
+ ),
)
- def test_extend_step(
- self, dtype: jnp.dtype, per_dim_scale: Optional[PerDimScale.Config], atten_logit_cap: float
+ def test_gqa_forward( | Do we test the cases when bias=False? |
axlearn | github_2023 | python | 138 | apple | ruomingp | @@ -0,0 +1,127 @@
+# Copyright © 2023 Apple Inc.
+
+"""Audio frontends for feature extraction."""
+
+from typing import Callable, Dict, Optional, Sequence, Union
+
+import jax.numpy as jnp
+
+from axlearn.audio.frontend_utils import (
+ WindowType,
+ linear_to_log_mel_spectrogram,
+ linear_to_mel_weight_matrix,
+ magnitude_spectrogram,
+ next_power_of_2,
+ pre_emphasis,
+ sliding_window,
+ windowing,
+)
+from axlearn.common.base_layer import BaseLayer
+from axlearn.common.config import (
+ REQUIRED,
+ InstantiableConfig,
+ Required,
+ config_class,
+ maybe_instantiate,
+)
+from axlearn.common.module import Module
+from axlearn.common.utils import Tensor
+
+
+def scale_by_mean_std(
+ x: Tensor, *, mean: Optional[Sequence[float]] = None, std: Optional[Sequence[float]] = None
+) -> Tensor:
+ """Scales the input by subtracting pre-computed `mean` and/or dividing by pre-computed `std`."""
+ if mean is not None:
+ x = x - jnp.array(mean, dtype=x.dtype)
+ if std is not None:
+ x = x / jnp.maximum(jnp.array(std, dtype=x.dtype), jnp.finfo(x.dtype).eps)
+ return x
+
+
+def _ms_to_samples(ms: Union[int, float], *, sample_rate: int) -> float:
+ """Converts time in milliseconds to number of samples under the given sample rate."""
+ return sample_rate / 1000 * ms
+
+
+class LogMelFrontend(BaseLayer):
+ """Computes Log Mel spectrogram features.
+
+ The frontend implements the following stages:
+ `Framer -> PreEmphasis -> Window -> FFT -> FilterBank -> MeanStdDev`.
+ """
+
+ @config_class
+ class Config(BaseLayer.Config):
+ """Configures LogMelFrontend."""
+
+ # Number of filters/bands in the output spectrogram.
+ num_filters: Required[int] = REQUIRED
+ # Number of input samples per second, e.g., 24000 for 24KHz inputs.
+ sample_rate: Required[int] = REQUIRED
+ # Size of each frame in ms.
+ frame_size_ms: Required[float] = REQUIRED
+ # Hop size in ms.
+ hop_size_ms: Required[float] = REQUIRED
+ # Optional output scaling.
+ scaling: Optional[InstantiableConfig[Callable[[Tensor], Tensor]]] = None | ```suggestion
output_transformation: Optional[InstantiableConfig[Callable[[Tensor], Tensor]]] = None
``` |
axlearn | github_2023 | python | 138 | apple | ruomingp | @@ -0,0 +1,127 @@
+# Copyright © 2023 Apple Inc.
+
+"""Audio frontends for feature extraction."""
+
+from typing import Callable, Dict, Optional, Sequence, Union
+
+import jax.numpy as jnp
+
+from axlearn.audio.frontend_utils import (
+ WindowType,
+ linear_to_log_mel_spectrogram,
+ linear_to_mel_weight_matrix,
+ magnitude_spectrogram,
+ next_power_of_2,
+ pre_emphasis,
+ sliding_window,
+ windowing,
+)
+from axlearn.common.base_layer import BaseLayer
+from axlearn.common.config import (
+ REQUIRED,
+ InstantiableConfig,
+ Required,
+ config_class,
+ maybe_instantiate,
+)
+from axlearn.common.module import Module
+from axlearn.common.utils import Tensor
+
+
+def scale_by_mean_std(
+ x: Tensor, *, mean: Optional[Sequence[float]] = None, std: Optional[Sequence[float]] = None
+) -> Tensor:
+ """Scales the input by subtracting pre-computed `mean` and/or dividing by pre-computed `std`."""
+ if mean is not None:
+ x = x - jnp.array(mean, dtype=x.dtype)
+ if std is not None:
+ x = x / jnp.maximum(jnp.array(std, dtype=x.dtype), jnp.finfo(x.dtype).eps)
+ return x
+
+
+def _ms_to_samples(ms: Union[int, float], *, sample_rate: int) -> float:
+ """Converts time in milliseconds to number of samples under the given sample rate."""
+ return sample_rate / 1000 * ms
+
+
+class LogMelFrontend(BaseLayer):
+ """Computes Log Mel spectrogram features.
+
+ The frontend implements the following stages:
+ `Framer -> PreEmphasis -> Window -> FFT -> FilterBank -> MeanStdDev`.
+ """
+
+ @config_class
+ class Config(BaseLayer.Config):
+ """Configures LogMelFrontend."""
+
+ # Number of filters/bands in the output spectrogram.
+ num_filters: Required[int] = REQUIRED
+ # Number of input samples per second, e.g., 24000 for 24KHz inputs.
+ sample_rate: Required[int] = REQUIRED
+ # Size of each frame in ms.
+ frame_size_ms: Required[float] = REQUIRED
+ # Hop size in ms.
+ hop_size_ms: Required[float] = REQUIRED
+ # Optional output scaling. | Give an example? |
axlearn | github_2023 | python | 138 | apple | ruomingp | @@ -0,0 +1,127 @@
+# Copyright © 2023 Apple Inc.
+
+"""Audio frontends for feature extraction."""
+
+from typing import Callable, Dict, Optional, Sequence, Union
+
+import jax.numpy as jnp
+
+from axlearn.audio.frontend_utils import (
+ WindowType,
+ linear_to_log_mel_spectrogram,
+ linear_to_mel_weight_matrix,
+ magnitude_spectrogram,
+ next_power_of_2,
+ pre_emphasis,
+ sliding_window,
+ windowing,
+)
+from axlearn.common.base_layer import BaseLayer
+from axlearn.common.config import (
+ REQUIRED,
+ InstantiableConfig,
+ Required,
+ config_class,
+ maybe_instantiate,
+)
+from axlearn.common.module import Module
+from axlearn.common.utils import Tensor
+
+
+def scale_by_mean_std( | Looks like this is not just scaling, but also shifting. Is `normalize_by_mean_std` a more accurate name? |
axlearn | github_2023 | python | 135 | apple | ruomingp | @@ -0,0 +1,54 @@
+# Copyright © 2023 Apple Inc.
+
+"""Input processing utilities on tf.data for ranking-related tasks."""
+from typing import Dict
+
+import seqio
+import tensorflow as tf
+
+from axlearn.common import input_tf_data
+
+
+def rank_by_value(
+ *, input_key: str, output_key: str, ascending: bool = True, allow_ties: bool = False | Nit: remove default values, as they are not obvious choices, so spelling them out explicitly at call sites will make the code more readable.
```suggestion
*, input_key: str, output_key: str, ascending: bool, allow_ties: bool,
``` |
axlearn | github_2023 | python | 135 | apple | ruomingp | @@ -0,0 +1,54 @@
+# Copyright © 2023 Apple Inc.
+
+"""Input processing utilities on tf.data for ranking-related tasks."""
+from typing import Dict
+
+import seqio
+import tensorflow as tf
+
+from axlearn.common import input_tf_data
+
+
+def rank_by_value(
+ *, input_key: str, output_key: str, ascending: bool = True, allow_ties: bool = False
+) -> input_tf_data.DatasetToDatasetFn:
+ """Returns a DatasetToDatasetFn that stores the ranks of input_field in output_field.
+
+ Note the rank starts at 1.
+
+ Args:
+ input_key: The field whose value will be ranked.
+ output_key: The field to store the ranks into.
+ ascending: True to rank in ascending order or False to rank in descending order.
+ allow_ties: If true, multiple elements could have the same rank. Ranks could have gaps | Comment on how we break ties when allow_ties=False. |
axlearn | github_2023 | python | 122 | apple | ruomingp | @@ -1108,6 +1110,28 @@ def forward(self, x: Tensor) -> Tensor:
return (x * scale).astype(x.dtype)
+ScaleFn = Callable[[int], float] # A function mapping per_head_dim to a scale.
+
+
+def constant_scale_config(value: float) -> InstantiableConfig[ScaleFn]:
+ """A config for a constant scale function for `MultiheadAttention`.
+
+ Args:
+ value: The value to scale by.
+
+ Example:
+ `query_scale = config_for_function(constant_scale).set(value=0.01)`
+
+ Returns:
+ A config that scales by `value`.
+ """
+
+ def constant_function(_: float, value: float) -> float: | ```suggestion
def constant_function(per_head_dim: int, value: float) -> float:
``` |
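For reference, the closure pattern being configured, as a plain-Python sketch (assuming `ScaleFn` maps `per_head_dim` to a float, per the diff):

```python
def constant_scale(value):
    """Returns a ScaleFn that ignores per_head_dim and always yields `value`."""
    def scale_fn(per_head_dim):
        del per_head_dim  # A constant scale does not depend on the head dimension.
        return value
    return scale_fn

def default_scale(per_head_dim):
    """The conventional 1/sqrt(d) attention scale, for comparison."""
    return per_head_dim ** -0.5

print(constant_scale(0.01)(64))  # 0.01
print(default_scale(64))         # 0.125
```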
axlearn | github_2023 | python | 111 | apple | markblee | @@ -8,9 +8,6 @@
#
# ofirpress/attention_with_linear_biases:
# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# facebookresearch/llama: | Why is this reverted? |
axlearn | github_2023 | python | 105 | apple | markblee | @@ -808,6 +808,41 @@ def test_rope_self_attention(self):
)
+class RoFormerSinusoidalPositionalEmbeddingAgainstLLaMATest(TestCase):
+ def llama_ref_precompute_freqs_cis(self, dim: int, end: int, theta: float = 10000.0):
+        """Reference LLaMA-1 implementation.
+
+ Ref:
+ https://github.com/facebookresearch/llama/blob/1076b9c51c77ad06e9d7ba8a4c6df775741732bd/llama/model.py#L47-L52 | We may want to add to header:
```
# facebookresearch/llama:
# Copyright (c) Meta Platforms, Inc. and affiliates.
``` |
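The referenced helper precomputes complex rotary phases; a NumPy transcription (shapes and defaults follow the linked LLaMA code, but treat this as a sketch rather than a verified port):

```python
import numpy as np

def precompute_freqs_cis(dim, end, theta=10000.0):
    """Returns e^(i * t * theta^(-2k/dim)) for positions t < end and k < dim // 2."""
    freqs = 1.0 / (theta ** (np.arange(0, dim, 2)[: dim // 2] / dim))
    t = np.arange(end)
    angles = np.outer(t, freqs)                  # [end, dim // 2]
    return np.cos(angles) + 1j * np.sin(angles)  # Unit-magnitude complex rotations.

freqs_cis = precompute_freqs_cis(dim=8, end=4)
print(freqs_cis.shape)                 # (4, 4)
print(np.allclose(freqs_cis[0], 1.0))  # True: position 0 applies no rotation.
```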
axlearn | github_2023 | python | 102 | apple | ruomingp | @@ -158,6 +159,8 @@ def visit(tree, prefix):
(k, visit(v, _concat(prefix=prefix, suffix=k, separator=separator)))
for k, v in tree.items()
)
+ elif isinstance(tree, flax.struct.PyTreeNode): | Is there a test that can be added to catch the issue? |
axlearn | github_2023 | python | 89 | apple | markblee | @@ -176,6 +188,26 @@ def test_layer_norm(self):
ref_layer=hf_layer,
test_inputs=[inputs],
ref_inputs=as_torch_tensor(inputs),
+ test_torch_to_axlearn=True,
+ )
+ self.assertNestedAllClose(out, hf_out)
+
+ def test_layer_norm_stateless(self): | Consider parameterizing the `test_layer_norm` test? It may also make it easier to check the invalid elementwise_affine codepath. |
axlearn | github_2023 | python | 89 | apple | markblee | @@ -583,7 +615,7 @@ def test_roundtrip(self, arch, repeat, fuse_qkv):
hf_layer_copy(**hf_inputs).logits,
),
)
- self.assertNestedAllClose(expected, actual)
+ self.assertNestedAllClose(expected, actual, atol=1e-5, rtol=1e-3) | Intended change? |
axlearn | github_2023 | others | 57 | apple | markblee | @@ -0,0 +1,72 @@
+# Bastion: Axlearn's Job Scheduler
+
+## Control Flow of Job Submission
+```mermaid
+%% elk seems to be more maintained, see: https://github.com/mermaid-js/mermaid/issues/1969
+%% N.B. elk doesn't stop rendering invisible edge operator, i.e. ~~~
+%%{init: {"flowchart": {"defaultRenderer": "elk"}} }%%
+
+flowchart TB
+
+ subgraph UserMachine ["User's dev machine (e.g. MacBook Pro)"]
+ subgraph AXLearnGithubRepository["AXLearn Github Repository"] | Actually, submit need not start from github repo. We should be able to launch from a pip install in a different CWD. |
axlearn | github_2023 | others | 57 | apple | markblee | @@ -0,0 +1,72 @@
+# Bastion: Axlearn's Job Scheduler
+
+## Control Flow of Job Submission
+```mermaid
+%% elk seems to be more maintained, see: https://github.com/mermaid-js/mermaid/issues/1969
+%% N.B. elk doesn't stop rendering invisible edge operator, i.e. ~~~
+%%{init: {"flowchart": {"defaultRenderer": "elk"}} }%%
+
+flowchart TB
+
+ subgraph UserMachine ["User's dev machine (e.g. MacBook Pro)"]
+ subgraph AXLearnGithubRepository["AXLearn Github Repository"]
+ localAXLearnPackage(["axlearn package \n (built by a user, e.g. Alice)"]):::fileCSS
+ end
+ end
+
+ localAXLearnPackage --"
+ Bundle/upload
+ the user's axlearn dir
+ (minus excluded paths)"--> bastionPrimaryStore
+
+ localAXLearnPackage =="
+ Submit a bastion job
+ (serialized as a job spec)"==> bastionPrimaryStore
+
+ subgraph PublicCloud ["Public Cloud (e.g. Google Cloud Platform)"]
+
+ subgraph BastionVM_shared ["Bastion VM (e.g. 'shared-bastion')"]
+ bastionScheduler_1["Bastion \n Scheduler"]
+ bastionVmAXLearnPackage(["axlearn package \n (running shared docker image)"]):::fileCSS
+ bastionJob_1["Bastion job 1 \n name: notebook-tpu-alice-a59ce1"]
+ bastionJob_2["Bastion job 2 \n name: notebook-tpu-bob-b3b5f1"]
+
+ bastionScheduler_1 --"spawn/kill"--> bastionJob_1 & bastionJob_2
+ end
+
+ bastionPrimaryStore[("Data Store \n (e.g. Google Storage)")]:::storeCSS
+ bastionPrimaryStore =="Download \n Bastion job specs"==> bastionScheduler_1
+
+ bastionPrimaryStore --"Download the user's \n axlearn bundle"--> bastionJob_1
+ bastionJob_1 --"Dockerfile \n (using Alice's bundle)"--> WorkerVM_1
+ bastionJob_2 --"Tarball \n (using Bob's bundle)"--> WorkerVM_2
+
+ subgraph WorkerVM_1 ["Worker VM 1 (name: notebook-tpu-alice-a59ce1)"]
+ workerProcess_1["User-specified process \n e.g. `jupyter lab --port=12345`"]
+ accelerator_1[/"hardware accelerators \n (e.g. TPU v4-8)"\]:::chipCSS
+ bastionWorkerAXLearnPackage_1(["axlearn package \n (built by Alice)"]):::fileCSS
+
+ workerProcess_1 --> accelerator_1
+ end
+
+ subgraph WorkerVM_2 ["Worker VM 2"]
+ workerProcess_2["..."]
+ accelerator_2[/"..."\]:::chipCSS
+ bastionWorkerAXLearnPackage_2(["..."]):::fileCSS
+
+ workerProcess_2 --> accelerator_2
+ end
+
+ bastionLogStore[("Log Store \n (e.g. Google Storage)")]:::storeCSS
+ WorkerVM_2 --"sync logs"--> bastionLogStore
+
+ end
+
+ bastionLogStore--"download logs for debug"-->UserMachine | ```suggestion
bastionLogStore--"Download logs for debug"-->UserMachine
``` |
axlearn | github_2023 | others | 57 | apple | markblee | @@ -0,0 +1,72 @@
+# Bastion: Axlearn's Job Scheduler
+
+## Control Flow of Job Submission
+```mermaid
+%% elk seems to be more maintained, see: https://github.com/mermaid-js/mermaid/issues/1969
+%% N.B. elk doesn't stop rendering invisible edge operator, i.e. ~~~
+%%{init: {"flowchart": {"defaultRenderer": "elk"}} }%%
+
+flowchart TB
+
+ subgraph UserMachine ["User's dev machine (e.g. MacBook Pro)"]
+ subgraph AXLearnGithubRepository["AXLearn Github Repository"]
+ localAXLearnPackage(["axlearn package \n (built by a user, e.g. Alice)"]):::fileCSS
+ end
+ end
+
+ localAXLearnPackage --"
+ Bundle/upload
+ the user's axlearn dir
+ (minus excluded paths)"--> bastionPrimaryStore
+
+ localAXLearnPackage =="
+ Submit a bastion job
+ (serialized as a job spec)"==> bastionPrimaryStore
+
+ subgraph PublicCloud ["Public Cloud (e.g. Google Cloud Platform)"]
+
+ subgraph BastionVM_shared ["Bastion VM (e.g. 'shared-bastion')"]
+ bastionScheduler_1["Bastion \n Scheduler"]
+ bastionVmAXLearnPackage(["axlearn package \n (running shared docker image)"]):::fileCSS
+ bastionJob_1["Bastion job 1 \n name: notebook-tpu-alice-a59ce1"]
+ bastionJob_2["Bastion job 2 \n name: notebook-tpu-bob-b3b5f1"]
+
+ bastionScheduler_1 --"spawn/kill"--> bastionJob_1 & bastionJob_2
+ end
+
+ bastionPrimaryStore[("Data Store \n (e.g. Google Storage)")]:::storeCSS
+ bastionPrimaryStore =="Download \n Bastion job specs"==> bastionScheduler_1
+
+ bastionPrimaryStore --"Download the user's \n axlearn bundle"--> bastionJob_1
+ bastionJob_1 --"Dockerfile \n (using Alice's bundle)"--> WorkerVM_1
+ bastionJob_2 --"Tarball \n (using Bob's bundle)"--> WorkerVM_2
+
+ subgraph WorkerVM_1 ["Worker VM 1 (name: notebook-tpu-alice-a59ce1)"]
+ workerProcess_1["User-specified process \n e.g. `jupyter lab --port=12345`"]
+ accelerator_1[/"hardware accelerators \n (e.g. TPU v4-8)"\]:::chipCSS
+ bastionWorkerAXLearnPackage_1(["axlearn package \n (built by Alice)"]):::fileCSS
+
+ workerProcess_1 --> accelerator_1
+ end
+
+ subgraph WorkerVM_2 ["Worker VM 2"]
+ workerProcess_2["..."]
+ accelerator_2[/"..."\]:::chipCSS
+ bastionWorkerAXLearnPackage_2(["..."]):::fileCSS
+
+ workerProcess_2 --> accelerator_2
+ end
+
+ bastionLogStore[("Log Store \n (e.g. Google Storage)")]:::storeCSS
+ WorkerVM_2 --"sync logs"--> bastionLogStore | ```suggestion
WorkerVM_2 --"Sync logs"--> bastionLogStore
```
Should WorkerVM_1 also have the "sync logs" branch? |
axlearn | github_2023 | others | 57 | apple | markblee | @@ -0,0 +1,72 @@
+# Bastion: Axlearn's Job Scheduler
+
+## Control Flow of Job Submission
+```mermaid
+%% elk seems to be more maintained, see: https://github.com/mermaid-js/mermaid/issues/1969
+%% N.B. elk doesn't stop rendering invisible edge operator, i.e. ~~~
+%%{init: {"flowchart": {"defaultRenderer": "elk"}} }%%
+
+flowchart TB
+
+ subgraph UserMachine ["User's dev machine (e.g. MacBook Pro)"]
+ subgraph AXLearnGithubRepository["AXLearn Github Repository"]
+ localAXLearnPackage(["axlearn package \n (built by a user, e.g. Alice)"]):::fileCSS
+ end
+ end
+
+ localAXLearnPackage --"
+ Bundle/upload
+ the user's axlearn dir
+ (minus excluded paths)"--> bastionPrimaryStore
+
+ localAXLearnPackage =="
+ Submit a bastion job
+ (serialized as a job spec)"==> bastionPrimaryStore
+
+ subgraph PublicCloud ["Public Cloud (e.g. Google Cloud Platform)"]
+
+ subgraph BastionVM_shared ["Bastion VM (e.g. 'shared-bastion')"]
+ bastionScheduler_1["Bastion \n Scheduler"]
+ bastionVmAXLearnPackage(["axlearn package \n (running shared docker image)"]):::fileCSS
+ bastionJob_1["Bastion job 1 \n name: notebook-tpu-alice-a59ce1"]
+ bastionJob_2["Bastion job 2 \n name: notebook-tpu-bob-b3b5f1"]
+
+ bastionScheduler_1 --"spawn/kill"--> bastionJob_1 & bastionJob_2 | ```suggestion
bastionScheduler_1 --"Spawn/kill"--> bastionJob_1 & bastionJob_2
``` |
axlearn | github_2023 | others | 57 | apple | markblee | @@ -0,0 +1,72 @@
+# Bastion: Axlearn's Job Scheduler
+
+## Control Flow of Job Submission
+```mermaid
+%% elk seems to be more maintained, see: https://github.com/mermaid-js/mermaid/issues/1969
+%% N.B. elk doesn't stop rendering invisible edge operator, i.e. ~~~
+%%{init: {"flowchart": {"defaultRenderer": "elk"}} }%%
+
+flowchart TB
+
+ subgraph UserMachine ["User's dev machine (e.g. MacBook Pro)"]
+ subgraph AXLearnGithubRepository["AXLearn Github Repository"]
+ localAXLearnPackage(["axlearn package \n (built by a user, e.g. Alice)"]):::fileCSS
+ end
+ end
+
+ localAXLearnPackage --"
+ Bundle/upload
+ the user's axlearn dir
+ (minus excluded paths)"--> bastionPrimaryStore
+
+ localAXLearnPackage =="
+ Submit a bastion job
+ (serialized as a job spec)"==> bastionPrimaryStore
+
+ subgraph PublicCloud ["Public Cloud (e.g. Google Cloud Platform)"]
+
+ subgraph BastionVM_shared ["Bastion VM (e.g. 'shared-bastion')"]
+ bastionScheduler_1["Bastion \n Scheduler"]
+ bastionVmAXLearnPackage(["axlearn package \n (running shared docker image)"]):::fileCSS | ```suggestion
bastionVmAXLearnPackage(["axlearn package \n (running on shared docker image)"]):::fileCSS
``` |
axlearn | github_2023 | python | 37 | apple | ruomingp | @@ -1387,7 +1387,7 @@ def forward(
)
# Causal mask.
return apply_attention_logit_biases( | Great catch! Could you change `apply_attention_logit_biases` to take `apply_attention_logit_biases` as a keyword arg? This also better matches https://docs.google.com/document/d/1tK3MyfZQgXyrvWNg3rt6NQCItuFnFw_FBd6-C7pE2Wk/edit#heading=h.d1xks5sf41jd.
Feel free to do so in a follow-up PR. |
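The keyword-argument convention the reviewer points to can be sketched as follows. This is a minimal illustration of Python's keyword-only parameters, not axlearn's actual function: the signature and bias semantics here are assumed for the example.

```python
# Hypothetical sketch of the keyword-only convention suggested above. The bare
# `*` forces callers to name the biases argument, so call sites stay unambiguous.
# The function name mirrors the diff; the body is illustrative, not axlearn's.
def apply_attention_logit_biases(logits, *, attention_logit_biases=None):
    """Add optional biases to attention logits; biases must be passed by keyword."""
    if attention_logit_biases is None:
        return logits
    return [l + b for l, b in zip(logits, attention_logit_biases)]

logits = [0.0, 0.0]
# The biases must be named; a positional second argument raises TypeError.
biased = apply_attention_logit_biases(logits, attention_logit_biases=[1.0, 2.0])
```

Calling `apply_attention_logit_biases(logits, [1.0, 2.0])` positionally raises `TypeError`, which is exactly what the convention is meant to enforce.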
axlearn | github_2023 | python | 41 | apple | markblee | @@ -422,6 +422,12 @@ class Config(BaseLayer.Config):
lm_head: Optional[InstantiableConfig] = None
pad_token_id: int = 0 # Int ID of the inputs to be masked for self-attention.
eos_token_id: int = 1 # Int ID of the end of sequence token id.
+ # Specifies how to partition the output logits of shape [batch, max_seq_len, vocab_size].
+ logits_partition_spec: Tuple[Union[Optional[str], Tuple[Optional[str]]], ...] = ( | This is fine for now, but I wonder if the inner Tuple should also support len > 2 in theory. |
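The inner-tuple question above has a direct answer in Python's typing syntax: using `Tuple[Optional[str], ...]` for the inner element accepts any length. A small sketch, with illustrative axis names that are not taken from the real config:

```python
# Sketch of a partition-spec type whose inner tuples may have any length,
# covering the len > 2 case raised above. Axis names are illustrative only.
from typing import Optional, Tuple, Union

PartitionSpec = Tuple[Union[Optional[str], Tuple[Optional[str], ...]], ...]

# A three-element spec whose first entry is itself a three-way sharded axis.
spec: PartitionSpec = (("data", "expert", "fsdp"), "model", None)
```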
axlearn | github_2023 | python | 1,070 | apple | markblee | @@ -390,8 +391,9 @@ def _build_job_submission_deployment(
cfg.builder.accelerator.num_replicas * system.vms_per_slice * system.chips_per_vm
)
user_command += (
- f" --flink_master_address={job_manager_ip}"
- f" --flink_parallelism={flink_parallelism}"
+ f" --flink_master={job_manager_ip}"
+ f" --parallelism={flink_parallelism}"
+ f" --artifacts_dir={os.path.join(self.config.builder.output_dir, 'artifacts_dir')}" | ```suggestion
f" --artifacts_dir={os.path.join(cfg.builder.output_dir, 'artifacts_dir')}"
``` |
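The suggestion above follows a common pattern: once a method binds `cfg = self.config`, later lines in the method should reuse the local alias rather than going back through `self.config`. A minimal stand-in, with class and field names invented for illustration:

```python
# Minimal sketch of the pattern behind the suggestion above. The class and
# field names are illustrative, not the actual axlearn builder.
import os
from dataclasses import dataclass

@dataclass
class Config:
    output_dir: str

class Builder:
    def __init__(self, config: Config):
        self.config = config

    def artifacts_path(self) -> str:
        cfg = self.config  # Bind the config once...
        return os.path.join(cfg.output_dir, "artifacts_dir")  # ...then reuse `cfg`.

builder = Builder(Config(output_dir="/tmp/out"))
path = builder.artifacts_path()
```

Reusing the alias keeps the method consistent and avoids re-reading `self.config` after it has already been captured.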
axlearn | github_2023 | python | 1,070 | apple | markblee | @@ -390,8 +391,9 @@ def _build_job_submission_deployment(
cfg.builder.accelerator.num_replicas * system.vms_per_slice * system.chips_per_vm
)
user_command += (
- f" --flink_master_address={job_manager_ip}"
- f" --flink_parallelism={flink_parallelism}"
+ f" --flink_master={job_manager_ip}"
+ f" --parallelism={flink_parallelism}" | What's the motivation of changing `--flink_parallelism` to `--parallelism`? Is there testing for this change? |
app-store-server-library-python | github_2023 | python | 82 | apple | alexanderjordanbaker | @@ -17,9 +17,9 @@ class ExtendRenewalDateRequest:
extendByDays: Optional[int] = attr.ib(default=None)
"""
The number of days to extend the subscription renewal date.
-
+ The number of days is a number from 1 to 90. | This doesn't follow the style of the rest of the library, where only the first sentence of a field's documentation is listed. Could you please remove this bit. Discussion about the style of documentation would probably be best in a separate issue |
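The style the reviewer describes — keep only the first sentence of a field's documentation — can be expressed mechanically. The helper below is an assumed stand-in for the convention, not part of app-store-server-library-python:

```python
# Assumed helper illustrating the documentation style described above: only the
# first sentence of a field's docstring is kept; details live in the linked docs.
def first_sentence(doc: str) -> str:
    """Return only the first sentence of a documentation string."""
    return doc.strip().split(". ")[0].rstrip(".") + "."

full = ("The number of days to extend the subscription renewal date. "
        "The number of days is a number from 1 to 90.")
trimmed = first_sentence(full)
```

Applied to the diff above, only the first sentence would remain as the field's docstring.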
app-store-server-library-python | github_2023 | python | 82 | apple | alexanderjordanbaker | @@ -66,5 +66,5 @@ class TransactionHistoryRequest:
revoked: Optional[bool] = attr.ib(default=None)
"""
- An optional Boolean value that indicates whether the response includes only revoked transactions when the value is true, or contains only nonrevoked transactions when the value is false. By default, the request doesn't include this parameter.
+ An optional Boolean value that indicates whether the response includes only revoked transactions when the value is true, or contains only non-revoked transactions when the value is false. By default, the request doesn't include this parameter. | This differs from the official Apple documentation |
app-store-server-library-python | github_2023 | python | 63 | apple | alexanderjordanbaker | @@ -15,6 +15,6 @@
long_description_content_type="text/markdown",
packages=find_packages(exclude=["tests"]),
python_requires=">=3.7, <4",
- install_requires=["attrs >= 21.3.0", 'PyJWT >= 2.6.0, < 3', 'requests >= 2.28.0, < 3', 'cryptography >= 40.0.0, < 42', 'pyOpenSSL >= 23.1.1, < 24', 'asn1==2.7.0', 'cattrs==23.1.2'],
+ install_requires=["attrs >= 21.3.0", 'PyJWT >= 2.6.0, < 3', 'requests >= 2.28.0, < 3', 'cryptography >= 40.0.0', 'pyOpenSSL >= 23.1.1, < 25', 'asn1==2.7.0', 'cattrs==23.1.2'], | Please bump the max to 43, not removing the limit |
app-store-server-library-python | github_2023 | python | 52 | apple | izanger | @@ -36,72 +36,405 @@
class APIError(IntEnum):
GENERAL_BAD_REQUEST = 4000000
+ """
+ An error that indicates an invalid request.
+
+ https://developer.apple.com/documentation/appstoreserverapi/generalbadrequesterror
+ """
+
INVALID_APP_IDENTIFIER = 4000002
+ """
+ An error that indicates an invalid app identifier.
+
+ https://developer.apple.com/documentation/appstoreserverapi/invalidappidentifiererror
+ """
+
INVALID_REQUEST_REVISION = 4000005
+ """
+ An error that indicates an invalid request revision.
+
+ https://developer.apple.com/documentation/appstoreserverapi/invalidrequestrevisionerror
+ """
+
INVALID_TRANSACTION_ID = 4000006
+ """
+ An error that indicates an invalid transaction identifier.
+
+ https://developer.apple.com/documentation/appstoreserverapi/invalidtransactioniderror
+ """
+
INVALID_ORIGINAL_TRANSACTION_ID = 4000008
+ """
+ An error that indicates an invalid original transaction identifier.
+
+ https://developer.apple.com/documentation/appstoreserverapi/invalidoriginaltransactioniderror
+ """
+
INVALID_EXTEND_BY_DAYS = 4000009
+ """
+ An error that indicates an invalid extend-by-days value.
+
+ https://developer.apple.com/documentation/appstoreserverapi/invalidextendbydayserror
+ """
+
INVALID_EXTEND_REASON_CODE = 4000010
+ """
+ An error that indicates an invalid reason code.
+
+ https://developer.apple.com/documentation/appstoreserverapi/invalidextendreasoncodeerror
+ """
+
INVALID_IDENTIFIER = 4000011 | Suggest a rename to `INVALID_REQUEST_IDENTIFIER` |
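The pattern in the diff above — an `IntEnum` whose members carry docstrings — looks like this in runnable form, with the reviewer's suggested rename applied. The enum is abbreviated to two members; the values are taken from the diff:

```python
# Abbreviated sketch of the APIError pattern above, including the suggested
# rename of INVALID_IDENTIFIER to INVALID_REQUEST_IDENTIFIER. Values match the
# diff; only two members are shown for illustration.
from enum import IntEnum

class APIError(IntEnum):
    GENERAL_BAD_REQUEST = 4000000
    """An error that indicates an invalid request."""

    INVALID_REQUEST_IDENTIFIER = 4000011
    """An error that indicates an invalid request identifier."""
```

Because it is an `IntEnum`, a raw error code from a response can be mapped straight back to a named member, e.g. `APIError(4000011)`.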
swift-openapi-generator | github_2023 | others | 731 | apple | czechboy0 | @@ -66,7 +66,7 @@ struct Handler: APIProtocol {
@main struct HelloWorldVaporServer {
static func main() async throws {
- let app = Vapor.Application()
+ let app = try await Application.make(.detect()) | Is the `.detect()` part necessary? |
swift-openapi-generator | github_2023 | others | 708 | apple | simonjbeaumont | @@ -78,7 +84,10 @@ extension _GenerateOptions {
/// Returns the naming strategy requested by the user.
/// - Parameter config: The configuration specified by the user.
/// - Returns: The naming strategy requestd by the user.
- func resolvedNamingStrategy(_ config: _UserConfig?) -> NamingStrategy { config?.namingStrategy ?? .defensive }
+ func resolvedNamingStrategy(_ config: _UserConfig?) -> NamingStrategy {
+ if let namingStrategy { return namingStrategy }
+ return config?.namingStrategy ?? Config.defaultNamingStrategy
+ } | Just calling out that we've picked up some asymmetry here with regard to how we handle `accessModifier` and `namingStrategy`.
Both of these have defaults, defined as statics in `Config` and both of these must have a value (unlike other options, e.g. filter, where `nil` is reasonable).
The `resolveAccessModifier` and `resolveNamingStrategy` differ with regard to whether they return an optional.
The `accessModifier` and `namingStrategy` parameters to `Config.init` also differ: the former expects an argument, moving the use of default to the call site; but the latter has the default value in the parameter list of the initializer.
IMO we should clean this up rather than accumulate some unnecessary inconsistency here.
But this doesn't need to be done as part of this PR. Feel free to convert this to an issue. |
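To make the asymmetry concrete, the resolution rule both options arguably want is paraphrased below in Python rather than Swift; all names here are illustrative. The rule is the same for every option: CLI flag wins over the config file, which wins over a shared default.

```python
# Illustrative paraphrase (in Python) of a single, symmetric resolution rule for
# both accessModifier and namingStrategy: CLI flag > config file > default.
# These names and defaults are stand-ins, not the generator's real API.
from typing import Optional

DEFAULT_NAMING_STRATEGY = "defensive"
DEFAULT_ACCESS_MODIFIER = "internal"

def resolve(cli_value: Optional[str], config_value: Optional[str], default: str) -> str:
    """Apply one resolution rule for every option, keeping call sites symmetric."""
    if cli_value is not None:
        return cli_value
    if config_value is not None:
        return config_value
    return default

strategy = resolve(None, "idiomatic", DEFAULT_NAMING_STRATEGY)
modifier = resolve("public", None, DEFAULT_ACCESS_MODIFIER)
```

With a single helper like this, neither resolver returns an optional and neither initializer parameter needs its own inline default, which removes the inconsistency the review describes.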
swift-openapi-generator | github_2023 | others | 709 | apple | simonjbeaumont | @@ -56,4 +55,3 @@ jobs:
with:
name: "Example packages"
matrix_linux_command: "./scripts/test-examples.sh"
- matrix_linux_nightly_main_enabled: false | I'm +1 on us having the nightly main enabled for unit tests, but do we gain much by having the integration test and examples here, other than noise. Especially since the examples pipeline is currently the slowest, so the limiting factor on getting PRs green, and IIUC the nightly main setup time is longer than all others because it's likely it will not have a warm image. |
swift-openapi-generator | github_2023 | others | 706 | apple | simonjbeaumont | @@ -16,36 +16,45 @@ import OpenAPIVapor
import Vapor
import TracingMiddleware
import Tracing
-import OpenTelemetry
-import OtlpGRPCSpanExporting
+import OTel
+import OTLPGRPC
import NIO
struct Handler: APIProtocol {
- func getGreeting(_ input: Operations.getGreeting.Input) async throws -> Operations.getGreeting.Output {
+ func getGreeting(_ input: Operations.GetGreeting.Input) async throws -> Operations.GetGreeting.Output {
let name = input.query.name ?? "Stranger"
return .ok(.init(body: .json(.init(message: "Hello, \(name)!"))))
}
}
@main struct HelloWorldVaporServer {
static func main() async throws {
- let eventLoopGroup = MultiThreadedEventLoopGroup.singleton
- let otel = OTel(
- serviceName: "HelloWorldServer",
- eventLoopGroup: eventLoopGroup,
- processor: OTel.BatchSpanProcessor(
- exportingTo: OtlpGRPCSpanExporter(config: .init(eventLoopGroup: eventLoopGroup)),
- eventLoopGroup: eventLoopGroup
- )
+ let environment = OTelEnvironment.detected()
+ let resourceDetection = OTelResourceDetection(detectors: [
+ OTelProcessResourceDetector(), OTelEnvironmentResourceDetector(environment: environment),
+ ])
+ let resource = await resourceDetection.resource(environment: environment, logLevel: .trace)
+ let exporter = try OTLPGRPCSpanExporter(configuration: .init(environment: environment))
+ let processor = OTelBatchSpanProcessor(exporter: exporter, configuration: .init(environment: environment))
+ let tracer = OTelTracer(
+ idGenerator: OTelRandomIDGenerator(),
+ sampler: OTelConstantSampler(isOn: true),
+ propagator: OTelW3CPropagator(),
+ processor: processor,
+ environment: environment,
+ resource: resource
)
- try await otel.start().get()
- defer { try? otel.shutdown().wait() }
- InstrumentationSystem.bootstrap(otel.tracer())
-
- let app = Vapor.Application()
+ InstrumentationSystem.bootstrap(tracer)
+ let app = try await Vapor.Application.make()
let transport = VaporTransport(routesBuilder: app)
let handler = Handler()
try handler.registerHandlers(on: transport, serverURL: URL(string: "/api")!, middlewares: [TracingMiddleware()])
- try await app.execute()
+ try await withThrowingTaskGroup(of: Void.self) { group in
+ group.addTask { try await app.execute() }
+ group.addTask { try await tracer.run() }
+ group.addTask { try await processor.run() }
+ _ = try await group.next()
+ group.cancelAll()
+ } | Can we drop a comment pointing people to Swift Service Lifecycle here, because this probably isn't what we want folks to do in practice, right?
```suggestion
// Consider using Swift Service Lifecycle — https://github.com/swift-server/swift-service-lifecycle
try await withThrowingTaskGroup(of: Void.self) { group in
group.addTask { try await app.execute() }
group.addTask { try await tracer.run() }
group.addTask { try await processor.run() }
_ = try await group.next()
group.cancelAll()
}
``` |
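The task-group shape in the suggestion — run several long-lived services concurrently and cancel the rest when the first exits — has a close analog in Python's asyncio, sketched below under the assumption that each service is a plain awaitable. As the suggested comment says, production Swift code would reach for Swift Service Lifecycle instead of hand-rolling this.

```python
# asyncio analog of the Swift "run until first exit, then cancelAll" pattern
# shown above. Service bodies here are toy stand-ins for app/tracer/processor.
import asyncio

async def run_until_first_exit(*services):
    tasks = [asyncio.ensure_future(s) for s in services]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # Mirror Swift's group.cancelAll().
    await asyncio.gather(*pending, return_exceptions=True)
    return next(iter(done)).result()

async def short_service():
    await asyncio.sleep(0.01)
    return "server exited"

async def long_service():
    await asyncio.sleep(60)  # Would run "forever"; gets cancelled instead.

result = asyncio.run(run_until_first_exit(short_service(), long_service()))
```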
swift-openapi-generator | github_2023 | others | 679 | apple | simonjbeaumont | @@ -0,0 +1,160 @@
+# SOAR-0013: Optimistic naming strategy
+
+Introduce an alternative naming strategy for more idiomatic Swift identifiers, including a way to provide custom name overrides.
+
+## Overview
+
+- Proposal: SOAR-0013
+- Author(s): [Honza Dvorsky](https://github.com/czechboy0), [Si Beaumont](https://github.com/simonjbeaumont)
+- Status: **Awaiting Review**
+- Issues:
+ - [apple/swift-openapi-generator#112][issuePlugin]
+ - [apple/swift-openapi-generator#107][issue1]
+ - [apple/swift-openapi-generator#503][issue2]
+ - [apple/swift-openapi-generator#244][issue3]
+ - [apple/swift-openapi-generator#405][issue4]
+- Implementation:
+ - [apple/swift-openapi-generator#679][pr]
+- New configuration options:
+ - `namingStrategy`
+ - `nameOverrides`
+- Affected components:
+ - generator
+
+### Introduction
+
+Introduce a new naming strategy as an opt-in feature, instructing the generator to produce more conventional Swift names, and offer a way to completely customize how any OpenAPI identifier gets projected to a Swift identifier.
+
+### Motivation
+
+The purpose of Swift OpenAPI Generator is to generate Swift code from OpenAPI documents. As part of that process, names specified in the OpenAPI document have to be converted to names in Swift code - and there are many ways to do that. We call these "naming strategies" in this proposal.
+
+When Swift OpenAPI Generator 0.1.0 went open-source in May 2023, it had a simple naming strategy that produced relatively conventional Swift identifiers from OpenAPI names, however when tested on a large test corpus of around 3000 OpenAPI documents, it produced an unacceptably high number of non-compiling packages due to naming conflicts. | ```suggestion
When Swift OpenAPI Generator 0.1.0 went open-source in May 2023, it had a simple naming strategy that produced relatively conventional Swift identifiers from OpenAPI names. However, when tested on a large test corpus of around 3000 OpenAPI documents, it produced an unacceptably high number of non-compiling packages due to naming conflicts.
``` |