UCSC-VLAA/openvision3-vit-base-patch2-28

Instructions for using UCSC-VLAA/openvision3-vit-base-patch2-28 with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • OpenCLIP

How to use UCSC-VLAA/openvision3-vit-base-patch2-28 with OpenCLIP (an inference sketch follows this list):

    import open_clip

    # Load the model and its train/val image preprocessing transforms
    # directly from the Hugging Face Hub.
    model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:UCSC-VLAA/openvision3-vit-base-patch2-28')

    # Tokenizer for the text tower (if present in this checkpoint).
    tokenizer = open_clip.get_tokenizer('hf-hub:UCSC-VLAA/openvision3-vit-base-patch2-28')
  • Notebooks
  • Google Colab
  • Kaggle
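Once the model is loaded, images can be embedded for retrieval or zero-shot tasks. The following is a minimal sketch, not an official example: it assumes the checkpoint builds a standard OpenCLIP model exposing encode_image, and photo.jpg is a hypothetical local file. Since the repository describes itself as a standalone vision encoder (vit-only), the text tower may be absent, so only the image side is shown.

    import torch
    from PIL import Image
    import open_clip

    model, _, preprocess = open_clip.create_model_and_transforms(
        'hf-hub:UCSC-VLAA/openvision3-vit-base-patch2-28')
    model.eval()

    # "photo.jpg" is a placeholder path for illustration.
    image = preprocess(Image.open('photo.jpg')).unsqueeze(0)

    with torch.no_grad():
        # Encode and L2-normalize so dot products act as cosine similarities.
        feats = model.encode_image(image)
        feats = feats / feats.norm(dim=-1, keepdim=True)

    print(feats.shape)  # (1, embedding_dim)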
openvision3-vit-base-patch2-28
343 MB
  • 1 contributor
History: 2 commits
Letian2003: Upload standalone vision encoder: vit-only first commit (bdb0446, verified, 3 months ago)
  • .gitattributes (1.52 kB): initial commit, 3 months ago
  • open_clip_config.json (512 Bytes): Upload standalone vision encoder: vit-only first commit, 3 months ago
  • open_clip_pytorch_model.bin (343 MB): Upload standalone vision encoder: vit-only first commit, 3 months ago

    Detected Pickle imports (3); a safe-loading sketch follows below:

    • "torch._utils._rebuild_tensor_v2"
    • "collections.OrderedDict"
    • "torch.FloatStorage"
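The checkpoint is a pickled PyTorch state dict, so the Hub lists the Python objects its pickle stream references; the three imports above are the standard classes PyTorch uses to rebuild tensors. Below is a minimal sketch (not from the model card) of downloading and loading the file defensively; huggingface_hub's hf_hub_download and torch.load's weights_only flag are the only APIs assumed.

    import torch
    from huggingface_hub import hf_hub_download

    # Download the checkpoint from the Hub (cached locally on reuse).
    path = hf_hub_download(
        repo_id='UCSC-VLAA/openvision3-vit-base-patch2-28',
        filename='open_clip_pytorch_model.bin',
    )

    # weights_only=True restricts unpickling to tensors and plain
    # containers, which covers the three detected imports and refuses
    # arbitrary code execution from the pickle stream.
    state_dict = torch.load(path, map_location='cpu', weights_only=True)
    print(len(state_dict), 'tensors')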