Use from the 🧨 Diffusers library
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Switch "cuda" to "mps" on Apple silicon devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Muapi/source-engine-aesthetic")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("output.png")
```

Source Engine Aesthetic

Base model: Flux.1 D

Trained words: s0urc3, Computer-generated scene of, skybox backdrop, video game environment, Low-polygon 3D modeling with clean simple geometry, matte textures with minimal specular highlights, baked ambient occlusion, shadow mapping, directional lighting with sharp shadow edges, limited dynamic lighting effects, stylized environmental textures with visible tiling, subtle fog effect in distant areas, hard geometric building edges with minimal beveling, semi-realistic proportions with slightly exaggerated features, high contrast between light and shadow areas
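A typical way to use these trained words is to lead with the trigger token `s0urc3`, wrap the scene in "Computer-generated scene of …", and append style descriptors from the list above. A minimal sketch — the helper name and the particular descriptor subset are illustrative choices, not part of the model card:

```python
# Sketch: compose a prompt around the LoRA trigger word "s0urc3".
# The helper and the chosen descriptor subset are assumptions for illustration.
STYLE_DESCRIPTORS = [
    "video game environment",
    "Low-polygon 3D modeling with clean simple geometry",
    "matte textures with minimal specular highlights",
    "baked ambient occlusion",
]

def build_prompt(scene: str) -> str:
    """Prefix the trigger word and append trained style descriptors."""
    parts = ["s0urc3", f"Computer-generated scene of {scene}"] + STYLE_DESCRIPTORS
    return ", ".join(parts)

prompt = build_prompt("an abandoned industrial hallway")
print(prompt)
```

The resulting string can be passed directly as the `prompt` in either snippet on this page.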

🧠 Usage (Python)

🔑 Get your MUAPI key from muapi.ai/access-keys

```python
import os

import requests

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:1345659@1519757", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1,
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()  # surface HTTP errors instead of printing an error body
print(response.json())
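The request body above can also be built with a small helper, which keeps the LoRA reference and dimensions in one place. A sketch under stated assumptions — the helper name, defaults, and the weight clamp are illustrative; only fields shown in the payload above are used:

```python
# Sketch: assemble a payload for the flux_dev_lora_image endpoint.
# Helper name, defaults, and the [0, 2] weight clamp are assumptions.
def build_payload(prompt, lora="civitai:1345659@1519757", weight=1.0,
                  width=1024, height=1024, num_images=1):
    # Clamp the LoRA weight to a conservative range before sending.
    weight = max(0.0, min(2.0, weight))
    return {
        "prompt": prompt,
        "model_id": [{"model": lora, "weight": weight}],
        "width": width,
        "height": height,
        "num_images": num_images,
    }

payload = build_payload("s0urc3, video game environment, abandoned warehouse")
```

Pass the returned dict as the `json=` argument of `requests.post` exactly as in the snippet above.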