
BERT-Embedded Spam Messages

An extension to the SMS spam messages dataset providing BERT-Embeddings

@kaggle.mrlucasfischer_bertembedded_spam_messages


About this Dataset

BERT-Embedded Spam Messages

Context

This dataset is an extension of the original dataset, a set of English SMS messages tagged as spam or ham.

The dataset was created to make it possible to work with BERT embeddings. Since creating these embeddings in Kaggle kernels is not feasible for memory reasons, I've created them locally and provide the original dataset plus the embeddings. So in this dataset you get the original data plus the embeddings for each SMS message!

Please refer to the original dataset for further clarification.

Content

The dataset contains the same information as the original dataset plus the additional DistilBERT classification embeddings.

This results in a dataset with 5574 rows and 770 columns:

  • spam -> Target column specifying if the message is spam or ham
  • original_message -> The original unprocessed messages
  • 0 through 767 -> 768 columns containing the DistilBERT classification embeddings for each message after preprocessing (a loading sketch follows this list)
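A minimal loading sketch that separates the 768 embedding columns from the two metadata columns. The CSV filename below is an assumption; check the dataset's file listing for the exact name.

import pandas as pd

# Hypothetical filename -- adjust to whatever the CSV is actually called in the file listing.
df = pd.read_csv("spam_encoded.csv")

# Everything that is not the target or the raw text is an embedding dimension.
emb_cols = [c for c in df.columns if c not in ("spam", "original_message")]
print(df.shape)       # (n_messages, 770)
print(len(emb_cols))  # 768

X = df[emb_cols].to_numpy()  # 768-dimensional DistilBERT embeddings
y = df["spam"].to_numpy()    # target: spam vs. ham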

Inspiration

  • Can you classify spam messages using the embeddings? (A baseline sketch follows this list.)
  • Do BERT embeddings work better than TF-IDF?
  • What is the highest ROC-AUC you can get?
  • What features can be derived from the dataset?
  • What are the most common words in spam/ham messages?
  • What are some Spam messages you can't correctly classify?
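As a starting point for the first three questions, here is a hedged baseline sketch: a plain logistic regression on the embedding columns, scored with ROC-AUC. The filename, split, and model choice are assumptions for illustration, not the setup used in my analysis.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("spam_encoded.csv")  # hypothetical filename
emb_cols = [c for c in df.columns if c not in ("spam", "original_message")]

# Stratified hold-out split on the binary spam target.
X_train, X_test, y_train, y_test = train_test_split(
    df[emb_cols], df["spam"], test_size=0.2, random_state=42, stratify=df["spam"]
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Baseline ROC-AUC on the embeddings: {auc:.3f}")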

Procedure for creating the dataset

HuggingFace's DistilBERT is used from their transformers package.

Jay Alammar's tutorial was followed to encode the messages using DistilBERT.

For memory efficiency reasons, all messages are first stripped of punctuation and English stopwords are removed; then only the first 30 tokens are kept, as sketched below.
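A rough sketch of that preprocessing step, assuming NLTK's English stopword list (the published dataset may have used a different stopword list or tokenisation; the GitHub repo linked below has the authoritative script):

import string

import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)
STOPWORDS = set(stopwords.words("english"))
PUNCT_TABLE = str.maketrans("", "", string.punctuation)

def preprocess(message, max_tokens=30):
    """Strip punctuation, drop English stopwords, keep only the first max_tokens tokens."""
    tokens = message.lower().translate(PUNCT_TABLE).split()
    tokens = [t for t in tokens if t not in STOPWORDS]
    return " ".join(tokens[:max_tokens])

print(preprocess("WINNER!! You have been selected to receive a prize reward, call now!"))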

As shown in my analysis of the original dataset, most ham messages have around 10 words and most spam messages around 29 words, excluding stopwords. This means that once stopwords are removed, keeping the first 30 tokens may cause some information loss, but not a critical amount. (Actually, my analysis demonstrates that encoding the messages using only the first 10 tokens after processing is enough to obtain a good encoding, capable of achieving 0.881 ROC-AUC with a baseline random forest.)

To better understand how the embeddings were created, I encourage you to check out the GitHub repo with the script for creating the dataset.
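For reference, the encoding step in that tutorial boils down to something like the sketch below: run each preprocessed message through distilbert-base-uncased and keep the 768-dimensional hidden state of the first ([CLS]-style) token. Treat this as an approximation of the actual script, not a copy of it.

import torch
from transformers import DistilBertModel, DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
model.eval()

def embed(messages):
    """Return one 768-dimensional vector per (already preprocessed) message."""
    batch = tokenizer(messages, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    # Hidden state of the first token, as in Jay Alammar's tutorial.
    return out.last_hidden_state[:, 0, :]

vectors = embed(["free entry wkly comp win fa cup final"])
print(vectors.shape)  # torch.Size([1, 768])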

Acknowledgements

Jay Alammar's tutorial was followed to encode the messages using DistilBERT.

The original dataset is part of the UCI Machine Learning repository and can be found here.

The UCI Machine Learning Repository urges you, if you find the original dataset useful, to cite the original authors, found here.

Almeida, T.A., Gómez Hidalgo, J.M., Yamakami, A. Contributions to the Study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11), Mountain View, CA, USA, 2011

Tables

Spam Encoded

@kaggle.mrlucasfischer_bertembedded_spam_messages.spam_encoded
  • 37.45 MB
  • 5572 rows
  • 770 columns

CREATE TABLE spam_encoded (
  "spam" BIGINT,
  "original_message" VARCHAR,
  "n_0" DOUBLE,
  "n_1" DOUBLE,
  "n_2" DOUBLE,
  "n_3" DOUBLE,
  "n_4" DOUBLE,
  "n_5" DOUBLE,
  "n_6" DOUBLE,
  "n_7" DOUBLE,
  "n_8" DOUBLE,
  "n_9" DOUBLE,
  "n_10" DOUBLE,
  "n_11" DOUBLE,
  "n_12" DOUBLE,
  "n_13" DOUBLE,
  "n_14" DOUBLE,
  "n_15" DOUBLE,
  "n_16" DOUBLE,
  "n_17" DOUBLE,
  "n_18" DOUBLE,
  "n_19" DOUBLE,
  "n_20" DOUBLE,
  "n_21" DOUBLE,
  "n_22" DOUBLE,
  "n_23" DOUBLE,
  "n_24" DOUBLE,
  "n_25" DOUBLE,
  "n_26" DOUBLE,
  "n_27" DOUBLE,
  "n_28" DOUBLE,
  "n_29" DOUBLE,
  "n_30" DOUBLE,
  "n_31" DOUBLE,
  "n_32" DOUBLE,
  "n_33" DOUBLE,
  "n_34" DOUBLE,
  "n_35" DOUBLE,
  "n_36" DOUBLE,
  "n_37" DOUBLE,
  "n_38" DOUBLE,
  "n_39" DOUBLE,
  "n_40" DOUBLE,
  "n_41" DOUBLE,
  "n_42" DOUBLE,
  "n_43" DOUBLE,
  "n_44" DOUBLE,
  "n_45" DOUBLE,
  "n_46" DOUBLE,
  "n_47" DOUBLE,
  "n_48" DOUBLE,
  "n_49" DOUBLE,
  "n_50" DOUBLE,
  "n_51" DOUBLE,
  "n_52" DOUBLE,
  "n_53" DOUBLE,
  "n_54" DOUBLE,
  "n_55" DOUBLE,
  "n_56" DOUBLE,
  "n_57" DOUBLE,
  "n_58" DOUBLE,
  "n_59" DOUBLE,
  "n_60" DOUBLE,
  "n_61" DOUBLE,
  "n_62" DOUBLE,
  "n_63" DOUBLE,
  "n_64" DOUBLE,
  "n_65" DOUBLE,
  "n_66" DOUBLE,
  "n_67" DOUBLE,
  "n_68" DOUBLE,
  "n_69" DOUBLE,
  "n_70" DOUBLE,
  "n_71" DOUBLE,
  "n_72" DOUBLE,
  "n_73" DOUBLE,
  "n_74" DOUBLE,
  "n_75" DOUBLE,
  "n_76" DOUBLE,
  "n_77" DOUBLE,
  "n_78" DOUBLE,
  "n_79" DOUBLE,
  "n_80" DOUBLE,
  "n_81" DOUBLE,
  "n_82" DOUBLE,
  "n_83" DOUBLE,
  "n_84" DOUBLE,
  "n_85" DOUBLE,
  "n_86" DOUBLE,
  "n_87" DOUBLE,
  "n_88" DOUBLE,
  "n_89" DOUBLE,
  "n_90" DOUBLE,
  "n_91" DOUBLE,
  "n_92" DOUBLE,
  "n_93" DOUBLE,
  "n_94" DOUBLE,
  "n_95" DOUBLE,
  "n_96" DOUBLE,
  "n_97" DOUBLE
);
