Instructions for using google/owlv2-large-patch14-ensemble with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use google/owlv2-large-patch14-ensemble with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-object-detection", model="google/owlv2-large-patch14-ensemble")

# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

processor = AutoProcessor.from_pretrained("google/owlv2-large-patch14-ensemble")
model = AutoModelForZeroShotObjectDetection.from_pretrained("google/owlv2-large-patch14-ensemble")
- Notebooks
- Google Colab
- Kaggle
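The pipeline snippet above only constructs the detector. A call then takes an image plus free-text candidate labels and returns one dict per detection with `score`, `label`, and `box` keys. The values below are illustrative stand-ins, not real model output (a real call would be something like `pipe("cats.jpg", candidate_labels=["cat", "remote control"])`):

```python
# Shape of what pipeline("zero-shot-object-detection") returns for one image:
# a list of dicts with "score", "label", and "box" (xmin/ymin/xmax/ymax in pixels).
# These numbers are made up for illustration only.
results = [
    {"score": 0.71, "label": "cat",
     "box": {"xmin": 12, "ymin": 40, "xmax": 310, "ymax": 290}},
    {"score": 0.48, "label": "remote control",
     "box": {"xmin": 330, "ymin": 70, "xmax": 370, "ymax": 190}},
]

# Typical consumption loop: read each detection's label, confidence, and box.
for det in results:
    b = det["box"]
    print(f"{det['label']}: {det['score']:.2f} at "
          f"({b['xmin']}, {b['ymin']}, {b['xmax']}, {b['ymax']})")
```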
Update README.md #7
by bornasquare - opened
README.md CHANGED
@@ -64,7 +64,7 @@ def get_preprocessed_image(pixel_values):
 
 unnormalized_image = get_preprocessed_image(inputs.pixel_values)
 
-target_sizes = torch.Tensor([
+target_sizes = target_sizes = torch.Tensor([image.size[::-1]]) # the bounding boxes can be drawn on the original image
 # Convert outputs (bounding boxes and class logits) to final bounding boxes and scores
 results = processor.post_process_object_detection(
     outputs=outputs, threshold=0.2, target_sizes=target_sizes
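The patched line relies on a size convention worth spelling out: PIL's `Image.size` is `(width, height)`, while `post_process_object_detection` expects each entry of `target_sizes` as `(height, width)`, so boxes get scaled back to the original image's coordinates. A minimal sketch of that reversal, using an illustrative size in place of a real image:

```python
# Illustrative stand-in for PIL's image.size, which is (width, height).
width_height = (640, 480)

# post_process_object_detection wants (height, width) per image, hence the
# [::-1] in the patched README line.
target_sizes = [width_height[::-1]]
assert target_sizes == [(480, 640)]
```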