---
license: mit
datasets:
- mteb/tweet_sentiment_extraction
language:
- en
library_name: transformers
---

# bart-perspectives
## Overview

The BART-perspectives model is a sequence-to-sequence transformer model. Built on top of Facebook's BART-large (specifically the `philschmid/bart-large-cnn-samsum` finetune), it is designed to extract perspectives from textual data at scale. For each input text, the model analyzes the speaker's identity, their emotions, the object of those emotions, and the reason behind them.
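For intuition, here is a hypothetical illustration of the kind of analysis the model aims to produce (the example text and field values below are invented for this card; the model's actual output wording and formatting may differ):

```python
# Hypothetical illustration only -- not verbatim model output.
text = "I can't believe my flight got cancelled again, this airline is the worst."
# Expected style of analysis:
#   speaker: a frustrated traveler
#   emotion: anger
#   object:  the airline
#   reason:  their flight was cancelled again
```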
## Usage

It is designed to be used with the `perspectives` library:
```python
from perspectives import DataFrame

# Load a DataFrame (the texts below are placeholder examples -- use your own sentences)
df = DataFrame(texts=["I love this new phone!", "My flight got cancelled again."])

# Get perspectives
df.get_perspectives()

# Search
df.search(speaker='...', emotion='...')
```
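If the library isn't installed yet, `pip install perspectives` should fetch it (assuming the PyPI package name matches the import name).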
You can also use this model directly with a `text2text-generation` pipeline (BART is a sequence-to-sequence model, so this is the appropriate pipeline task):
```python
from transformers import pipeline

# Load the model (use the text2text-generation task, since BART is seq2seq)
generator = pipeline('text2text-generation', model='helliun/bart-perspectives')

# Get perspective
perspective = generator("Describe the perspective of this text: <your text>", max_length=1024, do_sample=False)
print(perspective[0]['generated_text'])
```
You can also use it with `transformers.AutoTokenizer` and `transformers.AutoModelForSeq2SeqLM`:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the model
tokenizer = AutoTokenizer.from_pretrained("helliun/bart-perspectives")
model = AutoModelForSeq2SeqLM.from_pretrained("helliun/bart-perspectives")

# Tokenize the sentence
inputs = tokenizer.encode("Describe the perspective for this sentence: <your text>", return_tensors='pt')

# Pass the tensor through the model
results = model.generate(inputs, max_length=1024)

# Decode the first generated sequence
decoded = tokenizer.decode(results[0], skip_special_tokens=True)
print(decoded)
```
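For processing many sentences at once, a minimal batched variant could look like the sketch below (my own example, not part of the original card; it assumes a CUDA device is available and falls back to CPU otherwise):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("helliun/bart-perspectives")
model = AutoModelForSeq2SeqLM.from_pretrained("helliun/bart-perspectives").to(device)

# Example sentences -- substitute your own texts here
sentences = ["I love this new phone!", "My flight got cancelled again."]
prompts = [f"Describe the perspective of this text: {s}" for s in sentences]

# Pad to the longest prompt so the batch forms a single tensor
batch = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True).to(device)
outputs = model.generate(**batch, max_length=1024)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```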
## Training

The model was fine-tuned on a subset of the `mteb/tweet_sentiment_extraction` dataset, with emotional analyses generated synthetically by GPT-4 serving as the training targets.
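If you'd like to inspect the underlying tweets, the base dataset can be loaded from the Hub with a standard `datasets` call (note that the exact fine-tuning subset and the synthetic analyses are not published here):

```python
from datasets import load_dataset

# Load the base dataset referenced in the model card metadata
ds = load_dataset("mteb/tweet_sentiment_extraction")
print(ds)  # shows the available splits and columns
```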
## About me

I'm a recent grad of Ohio State University, where I did an undergraduate thesis on Synthetic Data Augmentation using LLMs. I've worked as an NLP consultant for a couple of awesome startups, and now I'm looking for a role with an inspiring company that is as interested in the untapped potential of LMs as I am! [Here's my LinkedIn.](https://www.linkedin.com/in/henry-leonardi-a63851165/)
## Contributing and Support

Please raise an issue here if you encounter any problems using the model. Contributions like fine-tuning on additional data or improving the model architecture are always welcome!
[Buy me a coffee!](https://www.buymeacoffee.com/helliun)
## License

The model is open source and free to use under the MIT license.