Data augmentation for NLP

What types of data augmentation are there for NLP? I have read about the idea of randomly replacing words with synonyms to generate new data. I have tried this and personally had no success. Are there any other good ways to use data augmentation for NLP models?

5 Likes

The thesaurus thing is all I've seen. Some positive results were shown here: https://arxiv.org/pdf/1502.01710.pdf . However, the gains were very small, since the datasets were so big.

@anamariapopescug are you aware of other approaches?

Thesaurus-based approaches are all I've come across so far, but I'll look for others and post if I find anything interesting. The problem with thesaurus-based approaches is that (as @ben.bowles saw first hand) you usually can't just use an off-the-shelf thesaurus for most tasks …
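For anyone who wants to try the basic thesaurus-style replacement anyway, here is a minimal sketch using NLTK's WordNet interface (my own example, not from the paper; it assumes the `wordnet` corpus is downloaded and deliberately ignores part-of-speech and word-sense issues, which is exactly why off-the-shelf thesauri often disappoint):

```python
import random
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

def synonym_replace(tokens, p=0.1):
    """Replace each token with a random WordNet synonym with probability p."""
    out = []
    for tok in tokens:
        # Collect all lemma names across all synsets of the token (no sense disambiguation).
        lemmas = {l.name().replace('_', ' ')
                  for syn in wordnet.synsets(tok)
                  for l in syn.lemmas()} - {tok}
        if lemmas and random.random() < p:
            out.append(random.choice(sorted(lemmas)))
        else:
            out.append(tok)
    return out

print(synonym_replace("the movie was really good".split(), p=0.3))
```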

1 Like

Not exactly data augmentation, but in recent research work things like training on "monomodal data" have been tried out and have been effective.

For example, Semantic Parsing with Semi-Supervised Sequential Autoencoders describes a "semi-supervised approach for sequence transduction" applied to semantic parsing.

In Section 3.4, Data Generation, it explains how existing data can be used to generate valid train/test data, valid as in: proper database queries or map directions (since the task here is semantic parsing).

Can you explain it so that a simpleton like me can understand (I haven't read this paper yet)? Based on their introduction, it sounds like they have an autoencoder running side by side with their main task. Is that right? Is there more to the basic technique? How does an autoencoder work in NLP?

Short answer:

In addition to the regular encoder-decoder training, the decoder is also trained separately on a large additional corpus. This dataset is usually obtained (generated) from the original set using various techniques, for example by constructing new valid SQL queries such as SELECT * from a table. This essentially gives the decoder more ground truth for its loss function.

A little longer answer, to make sure I don't miss any information:
In a basic sequence-to-sequence model that generates an output, there are two parts:

  • an encoder, which encodes the input into some representation.
  • a decoder, which takes in the encoder's output while keeping the previous states in memory.

This works well for natural-language models, in terms of dimensionality reduction and capturing the genuinely important bits; however, it struggles with constructing properly structured output.

The part it struggles with is the decoder, where the mapping from the encoded input to the final output happens, because the decoder does not have enough ground truth to drive its loss function.

Among all the other techniques, such as adding attention, RL agents and combined loss functions for the decoder, being able to train the decoder on more valid data has done a lot for performance.
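To make the "more valid data for the decoder" idea concrete, here is a toy sketch of template-based generation in the spirit of the paper's Section 3.4 (the table names, columns and templates below are made up for illustration; the paper generates logical forms/queries appropriate to each of its datasets):

```python
import random

# Hypothetical schema; in practice this would come from the target database.
TABLES = {"users": ["id", "name", "age"], "orders": ["id", "user_id", "total"]}

def random_query():
    """Generate a syntactically valid SQL-like query from simple templates."""
    table = random.choice(list(TABLES))
    column = random.choice(TABLES[table])
    template = random.choice([
        "SELECT * FROM {t}",
        "SELECT {c} FROM {t}",
        "SELECT {c} FROM {t} WHERE {c} > {v}",
    ])
    return template.format(t=table, c=column, v=random.randint(0, 100))

# These synthetic-but-valid outputs can be used to (pre)train the decoder
# before fine-tuning on the small set of real (input, query) pairs.
synthetic_decoder_data = [random_query() for _ in range(5)]
print(synthetic_decoder_data)
```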

@ben.bowles Have you tried any other techniques apart from "synonym replacement" for NLP tasks, and if so, can you share your results?
Have you tried hyponym and hypernym techniques?

Hello there!
I have no background in ML, I've just watched all 7 videos this week, so I could be totally off with this guess:
Considering that replacing a word with a synonym swaps in a word used the same way (e.g. good => well), it may not help because the two "reside" in the same place (have similar weights).

Now, if you are talking about randomly adding some adjectives (instead of synonyms), you may train your network to "resist" overfitting and "comprehend" noisier text, which can help more than adding random words (since random words have less chance of appearing in a real situation than adjectives do, IMHO).

You should try it and see! :slight_smile:

An interesting method is interpolating between two text embeddings. This technique was used to improve performance in the Generative Adversarial Text to Image Synthesis paper by Reed et al.
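A minimal sketch of the interpolation itself, assuming you already have fixed-length embeddings for two texts (in Reed et al. the interpolation is done on learned text embeddings during GAN training; the shapes and alpha values here are placeholders):

```python
import numpy as np

def interpolate(emb_a, emb_b, alpha=0.5):
    """Linearly interpolate between two text embeddings."""
    return alpha * emb_a + (1.0 - alpha) * emb_b

emb_a = np.random.randn(128)  # embedding of sentence A (placeholder)
emb_b = np.random.randn(128)  # embedding of sentence B (placeholder)

# A few synthetic embeddings lying "between" the two real ones.
augmented = [interpolate(emb_a, emb_b, a) for a in (0.25, 0.5, 0.75)]
```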

1 Like

An interesting technique for data augmentation specific to RNNs from "Data Noising as Smoothing in Neural Network Language Models" by Xie et al. (ICLR 2017) (arXiv):

In this work, we consider noising primitives as a form of data augmentation
for recurrent neural network-based language models.
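One of the simplest primitives described there is unigram noising: with some probability, replace each token with a draw from the unigram distribution. A rough sketch of that idea (my own code, not the authors'):

```python
import random
from collections import Counter

def unigram_noise(tokens, vocab_counts, gamma=0.1):
    """With probability gamma, replace each token with a sample from the unigram distribution."""
    vocab, weights = zip(*vocab_counts.items())
    return [random.choices(vocab, weights=weights)[0] if random.random() < gamma else tok
            for tok in tokens]

# Unigram counts would normally come from the full training corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
noisy = unigram_noise("the cat sat on the mat".split(), Counter(corpus), gamma=0.2)
print(noisy)
```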

1 Like

I'm doing my MSc thesis on this topic :blush:

Specifically, I'm looking at various ways of using external data derived from Wikipedia. It's still early days, but essentially I came up with a simple way of linking Wikipedia articles to arbitrary input text. The idea is that if the input text were on Wikipedia, it would have links to other Wikipedia articles (which are semantically related and provide additional info).

The basic procedure is:

  1. break the input text into n-grams
  2. check whether each n-gram exists as a Wikipedia article to create a set of 'candidate links'
  3. prune the candidate links by computing the similarity of the input text and the abstract of each candidate

Once you've got 'wiki-links' for an article, you can use those as additional data in a variety of ways. For example, you could just throw the abstracts of the linked wiki pages into a bag together with your input document for classification. Or you could run a recursive neural net on the sentences in the abstracts, average the sentence representations to get a vector representation for each wiki article, and use a bag of those vectors to represent your input document. I'm also playing around with computing the eigen-centrality of the link graph of the linked documents (up to some link degree) and using that as a feature representation for the input document.
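A rough sketch of steps 1-3, assuming you have a local mapping from Wikipedia article titles to abstracts and using TF-IDF cosine similarity for the pruning step (both choices are mine for illustration; the actual thesis pipeline may differ):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical local snapshot: article title -> abstract text.
wiki_abstracts = {"data augmentation": "...", "natural language processing": "..."}

def wiki_links(text, n_max=3, threshold=0.1):
    tokens = text.lower().split()
    # Step 1-2: n-grams of the input that exist as article titles become candidate links.
    candidates = {" ".join(tokens[i:i + n])
                  for n in range(1, n_max + 1)
                  for i in range(len(tokens) - n + 1)} & set(wiki_abstracts)
    if not candidates:
        return []
    # Step 3: prune by similarity between the input text and each candidate's abstract.
    candidates = sorted(candidates)
    tfidf = TfidfVectorizer().fit_transform([text] + [wiki_abstracts[c] for c in candidates])
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    return [c for c, s in zip(candidates, sims) if s >= threshold]
```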

There's so much info in Wikipedia! :stuck_out_tongue:

3 Likes

Here is an interesting idea that was used in the recently completed Kaggle 'Toxic Comments' competition. A few people used [English - 'intermediate language' - English] translation to augment the data. It changes a few words in translation while keeping the meaning intact. I think this is similar to the synonym replacement strategy. Here is a link to a quick script that does this: https://github.com/PavelOstyakov/toxic/blob/master/tools/extend_dataset.py
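The core of the trick looks something like the sketch below; `translate()` is a placeholder for whatever machine-translation service or library you wire in (I have not reproduced the linked script's actual translation calls):

```python
import random

def translate(text, src, dest):
    """Placeholder for a real machine-translation call (web API or local MT model)."""
    raise NotImplementedError

def back_translate(text, pivots=("fr", "de", "es")):
    """English -> random intermediate language -> English paraphrase."""
    pivot = random.choice(pivots)
    return translate(translate(text, src="en", dest=pivot), src=pivot, dest="en")

# augmented = back_translate("this comment is not toxic at all")
```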

4 Likes

Very interesting approach! Do you have any updates on this? Hope the thesis went well :slight_smile:

Yes, since machine translation has shown impressive results, English -> intermediate language -> English works well. A very good paper along the same lines is by John Wieting: Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext.

3 Likes

As always, it is really problem-dependent. We have faced the need for data augmentation for:

  • a text search model: map manually typed search tokens to a set of tags
  • a text classification model: make the text model more robust to the source of the product's textual description

Case: User types search tokens and you need to return correct tags
Train set: some limited set of search tokens vs correct tags
Rationale vs augmentation:

  1. User can make a mistake in a token - randomly change one letter in a word (white blouse - white blosse)
  2. User can miss a character in a token - randomly delete one letter in a word (white blouse - white blose)
  3. User can use a different order of search tokens - randomise token positions (white blouse - blouse white)
  4. User can use a different number of search tokens (2, 3, 4, etc.) - subsample the number of tokens
  5. User can use other tokens - enrich tokens with synonyms (red - pink - cardinal - cerise, etc.)

We augmented 500 tag/search-token pairs into a 10M-row training dataset. After training for around 50 epochs, the model was absolutely robust to every case we had predicted. Needless to say, it failed every time on the cases we did not augment for :slight_smile:
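For illustration, a condensed sketch of transforms 1-4 from the list above (my own reconstruction, not the original code; the synonym step from point 5 would need a domain-specific dictionary):

```python
import random

def typo(token):          # 1. change one letter at random
    i = random.randrange(len(token))
    return token[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + token[i + 1:]

def drop_char(token):     # 2. delete one letter at random
    i = random.randrange(len(token))
    return token[:i] + token[i + 1:]

def augment_query(tokens):
    # 3 + 4: random subset of tokens in random order
    tokens = random.sample(tokens, k=random.randint(1, len(tokens)))
    return " ".join(typo(t) if random.random() < 0.1 else
                    drop_char(t) if random.random() < 0.1 else t
                    for t in tokens)

print([augment_query("white blouse".split()) for _ in range(5)])
```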

1 Like

Thanks for sharing. Do you have the whole thesis or a paper to share?

Thanks for sharing. Do you have any kind of longer description of your solution, results, etc. that you could share? I'm especially interested in the text classification case, because I'm doing research on that.

Hi everyone,
Another text data augmentation technique that has not been mentioned here yet is sentence shuffling.
It is used in topic modelling though, not translation. The idea is to shuffle the sentences in a paragraph (see the short sketch after the list), and what it does is:

  • the topic remains the same
  • the word order within each sentence is preserved
  • we get different data
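A minimal sketch, splitting naively on full stops (a real implementation would use a proper sentence tokenizer):

```python
import random

def shuffle_sentences(paragraph):
    """Shuffle sentence order within a paragraph; word order inside sentences is untouched."""
    sentences = [s.strip() for s in paragraph.split(".") if s.strip()]
    random.shuffle(sentences)
    return ". ".join(sentences) + "."

print(shuffle_sentences("The topic is sports. The team won. The crowd cheered."))
```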

Hope it helps :smiley:
Thanks.

1 Like

Has anyone thought of using a language model to substitute some words in the example text? This would be especially easy in the transfer-learning framework, since creating a language model is already a requirement.

I guess this would be faster than traditional word substitution (finding the closest embedding), as well as producing richer results.

I have googled this idea for a bit, but found no mentions of anyone who tried it!
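For what it's worth, here is one way this could be sketched with a masked language model via the Hugging Face transformers library; the model choice and the word-level masking loop are my own assumptions, not something anyone in this thread has reported trying:

```python
import random
from transformers import pipeline  # my choice of tooling for illustration

fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")

def lm_substitute(sentence, p=0.15):
    """Randomly mask words and replace them with the masked LM's top prediction."""
    tokens = sentence.split()
    for i in range(len(tokens)):
        if random.random() < p:
            masked = " ".join(tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:])
            tokens[i] = fill_mask(masked)[0]["token_str"].strip()
    return " ".join(tokens)

print(lm_substitute("the movie was surprisingly good and well acted"))
```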