Instructions for using Jibbscript/privacy-filter-oai with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Jibbscript/privacy-filter-oai with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="Jibbscript/privacy-filter-oai")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Jibbscript/privacy-filter-oai")
model = AutoModelForTokenClassification.from_pretrained("Jibbscript/privacy-filter-oai")
```

- Transformers.js
How to use Jibbscript/privacy-filter-oai with Transformers.js:
```javascript
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate the pipeline
const pipe = await pipeline('token-classification', 'Jibbscript/privacy-filter-oai');
```

- Notebooks
- Google Colab
- Kaggle
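A common follow-up to running the pipeline is redacting the spans it flags. The sketch below assumes the standard Transformers token-classification output format (a list of dicts with `start` and `end` character offsets); the sample text, entity labels, and offsets are illustrative stand-ins, not output from this model.

```python
# Redact character spans of the kind a token-classification pipeline returns.
# The `entities` list is a hypothetical example of pipeline output; real
# results would come from pipe(text) after loading the model.

def redact(text, entities, mask="[REDACTED]"):
    """Replace each detected span with a mask, working right-to-left
    so earlier character offsets stay valid as the string changes."""
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        text = text[:ent["start"]] + mask + text[ent["end"]:]
    return text

text = "Contact Jane Doe at jane@example.com"
entities = [
    {"entity": "NAME", "start": 8, "end": 16},    # "Jane Doe"
    {"entity": "EMAIL", "start": 20, "end": 36},  # "jane@example.com"
]
print(redact(text, entities))  # → Contact [REDACTED] at [REDACTED]
```

Replacing spans from the end of the string backwards avoids recomputing offsets after each substitution.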
- Xet hash: `59f22360b51bf1a137fb80e14df310b4b249bed6cc64565aa0c401ca19ffcd04`
- Size of remote file: 27.9 MB
- SHA256: `0614fe83cadab421296e664e1f48f4261fa8fef6e03e63bb75c20f38e37d07d3`
Xet efficiently stores large files inside Git by splitting files into unique content-defined chunks, accelerating uploads and downloads through deduplication.