# Swarm PPO Drone
This repository contains a Proximal Policy Optimization (PPO) model trained for swarm/drone control.
The model was trained using Gymnasium environments with Stable-Baselines3 and exported for use in Bittensor Subnet 124 (Swarm).
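For background, PPO trains by maximizing a clipped surrogate objective over the probability ratio between the updated policy and the rollout policy. A minimal PyTorch sketch of that loss (illustrative only; the actual training here used Stable-Baselines3's built-in PPO implementation, not this function):

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    # Probability ratio pi_new(a|s) / pi_old(a|s), computed in log space
    ratio = torch.exp(log_probs_new - log_probs_old)
    # Unclipped and clipped surrogate terms
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # Negated pessimistic bound, so the result can be minimized
    return -torch.min(unclipped, clipped).mean()

# Tiny worked example with two transitions
lp_new = torch.tensor([0.0, -0.1])
lp_old = torch.tensor([-0.1, 0.0])
adv = torch.tensor([1.0, -1.0])
loss = ppo_clip_loss(lp_new, lp_old, adv)
```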
## Files
- `policy.pth` – Trained PPO policy weights (PyTorch).
- `ppo_policy.zip` – Stable-Baselines3 PPO saved model (reload with `PPO.load()`).
- `safe_policy_meta.json` – Metadata for policy compliance.
- `best/` – Best checkpointed model during training.
- `eval_logs/` – Evaluation logs.
- `tb_logs/` – TensorBoard logs.
## Usage

### Load with Stable-Baselines3
```python
from stable_baselines3 import PPO
import gymnasium as gym

# Load model
model = PPO.load("ppo_policy.zip")

# Example run
env = gym.make("CartPole-v1")
obs, _ = env.reset()
action, _ = model.predict(obs)
print("Predicted action:", action)
```