---
library_name: keras-hub
license: apache-2.0
tags:
- text-classification
pipeline_tag: text-classification
---
### Model Overview
BERT (Bidirectional Encoder Representations from Transformers) is a family of language models published by Google. They are intended for text classification and text embedding, not for text generation. See the model card linked below for benchmarks, data sources, and intended use cases.

Weights and Keras model code are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).

## Links

* [BERT Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/bert-quickstart-notebook)
* [BERT API Documentation](https://keras.io/api/keras_hub/models/bert/)
* [BERT Model Card](https://github.com/google-research/bert/blob/master/README.md)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)

## Installation

Keras and KerasHub can be installed with:

```
pip install -U -q keras-hub
pip install -U -q "keras>=3"
```

JAX, TensorFlow, and PyTorch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
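Keras 3 can run on any of these backends. As a minimal sketch (assuming JAX is installed), a backend is selected with the `KERAS_BACKEND` environment variable, which must be set before Keras is first imported:

```python
import os

# Select the backend before the first `import keras`.
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow", "torch"

import keras

print(keras.config.backend())  # prints the active backend, e.g. "jax"
```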
## Presets

The following model checkpoints are provided by the Keras team. Full code examples for each are available below.

| Preset name | Parameters | Description |
|-----------------------------|---------|------------------------------------------------------------------------------------------------|
| `bert_tiny_en_uncased` | 4.39M | 2-layer BERT model where all input is lowercased. |
| `bert_small_en_uncased` | 28.76M | 4-layer BERT model where all input is lowercased. |
| `bert_medium_en_uncased` | 41.37M | 8-layer BERT model where all input is lowercased. |
| `bert_base_en_uncased` | 109.48M | 12-layer BERT model where all input is lowercased. |
| `bert_base_en` | 108.31M | 12-layer BERT model where case is maintained. |
| `bert_base_zh` | 102.27M | 12-layer BERT model. Trained on Chinese Wikipedia. |
| `bert_base_multi` | 177.85M | 12-layer multilingual BERT model where case is maintained. |
| `bert_large_en_uncased` | 335.14M | 24-layer BERT model where all input is lowercased. |
| `bert_large_en` | 333.58M | 24-layer BERT model where case is maintained. |
| `bert_tiny_en_uncased_sst2` | 4.39M | The `bert_tiny_en_uncased` backbone model fine-tuned on the SST-2 sentiment analysis dataset. |

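Any preset name in the table can be passed to `from_preset`. As a quick sketch, the SST-2 fine-tuned preset already includes a two-class sentiment head, so it can predict on raw strings with no further training:

```python
import keras_hub

# Load the SST-2 fine-tuned checkpoint; the two-class head is restored
# from the preset, so `num_classes` is not needed here.
classifier = keras_hub.models.BertClassifier.from_preset(
    "bert_tiny_en_uncased_sst2"
)
# Returns logits over the two sentiment classes.
classifier.predict(["What an amazing movie!", "A total waste of time."])
```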
## Example Usage
```python
import keras
import keras_hub
import numpy as np
```

Raw string data.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]

# Pretrained classifier.
classifier = keras_hub.models.BertClassifier.from_preset(
    "bert_base_multi",
    num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
classifier.predict(x=features, batch_size=2)

# Re-compile (e.g., with a new learning rate).
classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```
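Note that the classifier outputs raw logits rather than probabilities, which is why the re-compiled loss sets `from_logits=True`. With `classifier.backbone.trainable = False`, the second `fit` call updates only the classification head while the pretrained encoder stays frozen.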
Preprocessed integer data.
```python
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_hub.models.BertClassifier.from_preset(
    "bert_base_multi",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
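For reference, a dictionary like `features` above is normally produced by the preset's own preprocessor. A minimal sketch, assuming the default preprocessor attached to a preset classifier is left enabled:

```python
# Load a classifier with its default preprocessor attached.
preprocessing_classifier = keras_hub.models.BertClassifier.from_preset(
    "bert_base_multi",
    num_classes=4,
)
# Map raw strings to "token_ids", "segment_ids", and "padding_mask".
x = preprocessing_classifier.preprocessor(
    ["The quick brown fox jumped.", "I forgot my homework."]
)
```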
## Example Usage with Hugging Face URI

```python
import keras
import keras_hub
import numpy as np
```

Raw string data.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]

# Pretrained classifier.
classifier = keras_hub.models.BertClassifier.from_preset(
    "hf://keras/bert_base_multi",
    num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
classifier.predict(x=features, batch_size=2)

# Re-compile (e.g., with a new learning rate).
classifier.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(5e-5),
    jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```

Preprocessed integer data.
```python
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_hub.models.BertClassifier.from_preset(
    "hf://keras/bert_base_multi",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
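The `hf://` scheme also works for publishing. As a sketch of the workflow described in the KerasHub Model Publishing Guide linked above (the repository name below is hypothetical and uploading requires Hugging Face authentication):

```python
# Save the fine-tuned classifier as a local preset directory.
classifier.save_to_preset("./bert_base_multi_finetuned")
# Upload the preset to a Hugging Face repository (hypothetical name).
keras_hub.upload_preset(
    "hf://your-username/bert_base_multi_finetuned",
    "./bert_base_multi_finetuned",
)
```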