# SuperPicky CoreML Models
Core ML-converted copies of the five machine-learning models used by
SuperPickyMac, a native macOS birding photo-culling app. Each file is the
`weights/weight.bin` payload of the corresponding `.mlmodelc` directory;
the app ships the small scaffold files (`model.mil`, `metadata.json`, …)
in its app bundle and downloads these weight blobs on first launch.
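A compiled model is only loadable once the downloaded blob is in place next to the scaffold files. A minimal sketch of such a completeness check (the file list mirrors the scaffold files named above, but the exact contents of a `.mlmodelc` vary by model, and this helper is illustrative, not the app's actual code):

```python
from pathlib import Path

# Scaffold files the app ships in its bundle, plus the weight blob it
# downloads on first launch. (Assumed layout, for illustration only.)
REQUIRED_FILES = ["model.mil", "metadata.json", "weights/weight.bin"]

def model_is_complete(mlmodelc_dir: Path) -> bool:
    """True once every file Core ML needs to load the model is present."""
    return all((mlmodelc_dir / rel).is_file() for rel in REQUIRED_FILES)
```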
This repository does not introduce any new models. Every model here is a conversion of an existing, independently published network to Apple's Core ML format for native Neural Engine execution on Apple Silicon. Credit and licensing belong to the original authors.
## Models and credits
| File | Architecture | Source / credit | License |
|---|---|---|---|
| `FlightDetector.weight.bin` (41 MB) | EfficientNet-B3 → binary head | Trained by SuperPicky (Jamesphotography) for flying-vs-perched bird classification. Backbone: EfficientNet (Tan & Le, 2019). | See SuperPicky repo |
| `KeypointDetector.weight.bin` (94 MB) | ResNet50 + PartLocalizer head | Trained by SuperPicky on CUB-200-2011 keypoint annotations (left eye, right eye, beak). | See SuperPicky repo |
| `YOLOBirdDetector.weight.bin` (53 MB) | YOLO11l-seg | Ultralytics YOLO11l-seg; SuperPicky filters detections to COCO class 14 (bird). | AGPL-3.0 |
| `OSEAClassifier.weight.bin` (103 MB) | ResNet34 → 10,964 species | OSEA bird classifier by Sun Jiao, trained on ~11 k bird species worldwide; SuperPicky feeds each YOLO crop to it for species identification. | See OSEA repo |
| `AestheticsModel.weight.bin` (266 MB) | CFANet / TOPIQ (ResNet50 backbone + transformer cross-attention) | TOPIQ by Chen et al.; CFANet checkpoint trained on the AVA aesthetics dataset. Paper: "TOPIQ: A Top-Down Approach from Semantics to Distortions for Image Quality Assessment". | NTU S-Lab License |
All source PyTorch checkpoints originate from the `jamesphotography/SuperPicky-models` reference repository; see there for the `.pth` / `.onnx` sources and the corresponding training code.
## What this repo contains
Five files, one per Core ML model, each identical to the `weights/weight.bin`
blob produced by `coremltools.convert(...).save()`:
| File | SHA-256 | Size (bytes) |
|---|---|---|
| `FlightDetector.weight.bin` | `0105ee79ff06f4f40edace40daa275f71126d8d1fb0737f0fff029c611379610` | 42,634,112 |
| `KeypointDetector.weight.bin` | `0ce77aefef957af92ffbc58e23897f7b6127ac79ab1d23f8a0395db9f296d82c` | 98,676,800 |
| `YOLOBirdDetector.weight.bin` | `387b5e33feb8fdaac86e6792ba11cf40d91aaed851bb4ccb0ce04501cbc760ca` | 55,367,168 |
| `OSEAClassifier.weight.bin` | `cd2ca17e7858e3b49647a01e7830d38405e5b605f6c49c5b8f2490c73bd67bf2` | 107,681,472 |
| `AestheticsModel.weight.bin` | `9e3612f51c95331d69cf5aecfff5185f4f7316436f00186713f9656fb211f1b9` | 278,668,800 |
The SuperPicky Mac app bundles a `manifest.json` with exactly these digests and
refuses to install a downloaded file whose SHA-256 doesn't match, so if you
modify any file here, the app will reject it.
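The digest check can be reproduced with nothing but the standard library. A sketch (the function names are illustrative, not the app's code; the expected value shown is the `FlightDetector.weight.bin` digest from the table above):

```python
import hashlib

# Expected digest for one model, copied from the checksum table above.
EXPECTED = {
    "FlightDetector.weight.bin":
        "0105ee79ff06f4f40edace40daa275f71126d8d1fb0737f0fff029c611379610",
}

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so large blobs never sit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify(path, name):
    """Raise if the file's digest differs from the manifest value."""
    digest = sha256_of(path)
    if digest != EXPECTED[name]:
        raise ValueError(f"{name}: digest mismatch ({digest})")
```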
## Reproducing these weights
The conversion scripts live in the SuperPickyMac repo under
`scripts/convert_*.py`.
Each script:

- Loads the original PyTorch checkpoint from the SuperPicky source models (or a pinned Ultralytics release).
- Traces the model with `torch.jit.trace`.
- Converts via `coremltools.convert(..., convert_to='mlprogram', compute_precision=ct.precision.FLOAT32)`.
- Writes a `.mlpackage` directory whose `weights/weight.bin` is the file you see here, and runs a parity check against the PyTorch original (max absolute delta typically ≤ 1e-6).
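The parity check in the last step boils down to an elementwise comparison of the two models' outputs on the same input. A minimal sketch of that comparison (helper names are illustrative, not the actual script code):

```python
import numpy as np

def max_abs_delta(reference, converted):
    """Largest elementwise difference between PyTorch and Core ML outputs."""
    reference = np.asarray(reference, dtype=np.float64)
    converted = np.asarray(converted, dtype=np.float64)
    return float(np.max(np.abs(reference - converted)))

def check_parity(reference, converted, tol=1e-6):
    """Raise if the converted model drifts past the tolerance."""
    delta = max_abs_delta(reference, converted)
    if delta > tol:
        raise ValueError(f"parity check failed: max abs delta = {delta:.3e}")
    return delta
```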
No architectural changes, no re-training, no quantization — just format translation so the models can run on Apple's Neural Engine.
## License
Each model inherits the license of its upstream source (see the table above). This repository packages the CoreML conversion artifacts only; please consult the original projects for terms governing commercial use, redistribution, and derivative works.