CondadosAI committed (verified)
Commit c4e0613 · Parent: d15aea5

docs: acaua mirror model card with upstream provenance

Files changed (1): README.md +59 -46
README.md CHANGED
@@ -1,68 +1,81 @@
  ---
- license: other
  tags:
- - vision
- - image-segmentation
  datasets:
- - coco
- widget:
- - src: http://images.cocodataset.org/val2017/000000039769.jpg
-   example_title: Cats
- - src: http://images.cocodataset.org/val2017/000000039770.jpg
-   example_title: Castle
  ---

- # Mask2Former

- Mask2Former model trained on COCO instance segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
- ](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).

- Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.

- ## Model description

- Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
- [MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance an efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without
- without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

- ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png)

- ## Intended uses & limitations
-
- You can use this particular checkpoint for instance segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
- fine-tuned versions on a task that interests you.

- ### How to use

- Here is how to use this model:

  ```python
- import requests
- import torch
- from PIL import Image
- from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation


- # load Mask2Former fine-tuned on COCO instance segmentation
- processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-coco-instance")
- model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-coco-instance")

- url = "http://images.cocodataset.org/val2017/000000039769.jpg"
- image = Image.open(requests.get(url, stream=True).raw)
- inputs = processor(images=image, return_tensors="pt")

- with torch.no_grad():
-     outputs = model(**inputs)

- # model predicts class_queries_logits of shape `(batch_size, num_queries)`
- # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
- class_queries_logits = outputs.class_queries_logits
- masks_queries_logits = outputs.masks_queries_logits

- # you can pass them to processor for postprocessing
- result = processor.post_process_instance_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
- # we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
- predicted_instance_map = result["segmentation"]
  ```
-
- For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
 
  ---
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: image-segmentation
  tags:
+ - image-segmentation
+ - instance-segmentation
+ - vision
+ - acaua
  datasets:
+ - coco
+ base_model: facebook/mask2former-swin-tiny-coco-instance
  ---

+ # Mask2Former Swin-Tiny (COCO Instance) — acaua mirror

+ An Apache-2.0 mirror hosted under `CondadosAI/` for use with the [acaua](https://github.com/CondadosAI/acaua) computer vision library.

+ This is a **1:1 byte-identical copy** of the upstream Meta AI Research weights at the pinned commit shown below. We do not modify the weights or configuration. The purpose of the mirror is license hygiene: acaua's core promise is that every shipped weight has an auditable, declared Apache-2.0 upstream. Mirroring lets us pin a specific revision, so the audit claim stays verifiable even if upstream rewrites history; a verification sketch follows the provenance table below.

+ ## Provenance

+ | | |
+ |---|---|
+ | Upstream repo | [`facebook/mask2former-swin-tiny-coco-instance`](https://huggingface.co/facebook/mask2former-swin-tiny-coco-instance) |
+ | Upstream commit SHA | `22c4a2f15dc88149b8b8d9f4d42c54431fbd66f6` |
+ | Upstream commit date | 2023-09-11 |
+ | Declared license | Apache-2.0 (upstream YAML frontmatter) |
+ | Paper | Cheng et al., *"Masked-attention Mask Transformer for Universal Image Segmentation"*, CVPR 2022, arXiv:[2112.01527](https://arxiv.org/abs/2112.01527) |
+ | Official code | [`facebookresearch/Mask2Former`](https://github.com/facebookresearch/Mask2Former) (MIT) |
+ | Backbone | Swin-Tiny, pretrained on ImageNet-1k (per upstream model card) |
+ | Mirrored on | 2026-04-17 |
+ | Mirrored by | [CondadosAI/acaua](https://github.com/CondadosAI/acaua) |
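
+ The byte-identical claim can be checked directly. Below is a minimal sketch (the `tree_hashes` helper is illustrative, not part of acaua) that downloads both snapshots with `huggingface_hub` and compares SHA-256 digests of the weight and config files:

+ ```python
+ import hashlib
+ from pathlib import Path

+ from huggingface_hub import snapshot_download

+ def tree_hashes(repo_id, revision=None):
+     """Return {relative_path: sha256} for every file in a repo snapshot."""
+     root = Path(snapshot_download(repo_id, revision=revision))
+     return {
+         str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
+         for p in sorted(root.rglob("*"))
+         if p.is_file()
+     }

+ upstream = tree_hashes(
+     "facebook/mask2former-swin-tiny-coco-instance",
+     revision="22c4a2f15dc88149b8b8d9f4d42c54431fbd66f6",  # pinned SHA from the table above
+ )
+ mirror = tree_hashes("CondadosAI/mask2former_swin_tiny_coco_instance")

+ # Weights and configs must hash identically; README/NOTICE differ by design.
+ for name, digest in upstream.items():
+     if name.endswith((".safetensors", ".bin", ".json")):
+         assert mirror.get(name) == digest, f"hash mismatch: {name}"
+ print("pinned upstream files verified")
+ ```
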
+ ## Usage via acaua

+ ```python
+ import acaua
+ model = acaua.Model.from_pretrained("CondadosAI/mask2former_swin_tiny_coco_instance")
+ results = model.predict("image.jpg")
+ for r in results:
+     print(r.boxes, r.labels, r.scores, r.masks.shape)
+ ```

+ ## Usage via 🤗 Transformers

+ This mirror is drop-in compatible with the upstream Facebook repo:

  ```python
+ from transformers import AutoModelForUniversalSegmentation, AutoImageProcessor
+ model = AutoModelForUniversalSegmentation.from_pretrained(
+     "CondadosAI/mask2former_swin_tiny_coco_instance"
+ )
+ processor = AutoImageProcessor.from_pretrained(
+     "CondadosAI/mask2former_swin_tiny_coco_instance"
+ )
+ ```
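
+ Inference then works exactly as upstream documents it; the sketch below continues from the `model` and `processor` above and is adapted from the upstream card's own example (pass `revision="..."` to the `from_pretrained` calls above if you want to pin a specific mirror commit):

+ ```python
+ import requests
+ import torch
+ from PIL import Image

+ # run the mirrored checkpoint on a COCO validation image
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+ inputs = processor(images=image, return_tensors="pt")

+ with torch.no_grad():
+     outputs = model(**inputs)

+ # post-process the query logits into a per-pixel instance map
+ result = processor.post_process_instance_segmentation(
+     outputs, target_sizes=[image.size[::-1]]
+ )[0]
+ predicted_instance_map = result["segmentation"]
+ ```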

+ ## License and attribution

+ Redistributed under the Apache License 2.0, consistent with the upstream Hugging Face model card declaration. The reference implementation at `facebookresearch/Mask2Former` is MIT-licensed; the weights as distributed by `facebook/*` on Hugging Face are declared Apache-2.0.

+ See [`NOTICE`](./NOTICE) for the required attribution to upstream contributors (Meta AI Research / FAIR, the Mask2Former authors, and the Swin Transformer authors).

+ ## Citation

+ ```bibtex
+ @inproceedings{cheng2022mask2former,
+   title={Masked-attention Mask Transformer for Universal Image Segmentation},
+   author={Cheng, Bowen and Misra, Ishan and Schwing, Alexander G and Kirillov, Alexander and Girdhar, Rohit},
+   booktitle={CVPR},
+   year={2022}
+ }

+ @inproceedings{liu2021swin,
+   title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
+   author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
+   booktitle={ICCV},
+   year={2021}
+ }
  ```