
Enso: An Open Source Library for Benchmarking Embeddings + Transfer Learning Methods

June 26, 2018 | Announcements, Data Science, Indico, Machine Learning


Because Indico has benefited so much from the hard work of the open-source community, we like to make sure a portion of our time is spent giving back. As part of this initiative, we’re releasing Enso, an open-source Python library for benchmarking document embeddings and transfer learning methods.

Enso was created in part to help measure and address industry-wide overfitting to a small number of academic datasets. There needs to be a simple way to separate generic progress from advances that merely exploit dataset-specific features. The former is “publication worthy”; the latter may not be, but it is often hard to distinguish between the two because papers fail to evaluate across a broad enough range of domains. The number of classes, the number of training examples, the level of class imbalance, the average document length, and other dataset attributes can have an enormous influence on the viability of an approach, and evaluating across a broad range of tasks helps to ascertain where a given method is appropriate and where it is not. Through Enso, we hope to make evaluation across a broad range of datasets painless, in an effort to make this practice more common.

In addition to providing a framework for benchmarking embedding quality, we’ve also included 24 open-source datasets for you to use in your own experiments.

Installation

Enso is compatible with Python 3.4+.

You can install enso via pip:

pip install enso

or directly via setup.py:

git clone git@github.com:IndicoDataSolutions/Enso.git
cd Enso
python setup.py install

After installation, you’ll probably also want to download the provided datasets:

python3 -m enso.download
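
If you want a quick sanity check that the package installed correctly, importing it from the command line is a simple smoke test (assuming your interpreter is invoked as python3, as in the download command above):

python3 -c "import enso"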

Usage and Workflow

Although there are other effective ways to apply transfer learning to natural language processing, the current version of Enso assumes the workflow listed below. This workflow is designed to replicate a scenario where a pool of unlabeled data is available and labelers with subject-matter expertise have a limited amount of time to provide labels for a subset of that data.

  • All examples in the dataset are “featurized” via a pre-trained source model (python -m enso.featurize)
  • Re-represented data is separated into train and test sets
  • A fixed number of examples from the train set is selected to use as training data via the selected sampling strategy
  • The training data subset is optionally over- or under-sampled to account for variation in class balance
  • A target model is trained using the featurized training examples as inputs (python -m enso.experiment)
  • The target model is benchmarked on all featurized test examples
  • The process is repeated for all combinations of featurizers, dataset sizes, target model architectures, etc.
  • Results are visualized and manually inspected (python -m enso.visualize)
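
To make the flow above concrete, here is a minimal conceptual sketch of a single benchmark iteration written with scikit-learn stand-ins. This is not Enso’s API (the real entry points are the python -m enso.featurize, python -m enso.experiment, and python -m enso.visualize commands listed above); it only illustrates the featurize, split, sample, train, and evaluate steps:

# Conceptual sketch only -- scikit-learn stand-ins, not Enso's API.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer  # stand-in "source model" featurizer
from sklearn.linear_model import LogisticRegression          # stand-in target model
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])

# 1. "Featurize" every example in the dataset.
features = TfidfVectorizer().fit_transform(data.data)

# 2. Split the re-represented data into train and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    features, data.target, test_size=0.3, random_state=0)

# 3. Select a fixed number of training examples (simple random sampling strategy).
n_labeled = 100
X_small, y_small = X_train[:n_labeled], y_train[:n_labeled]

# 4. Train the target model on the featurized subset.
model = LogisticRegression().fit(X_small, y_small)

# 5. Benchmark on all featurized test examples.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

Enso itself repeats this loop for every combination of featurizer, dataset size, sampling strategy, and target model architecture, and aggregates the results for visualization.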

Documentation

Full documentation and configuration information is available at enso.readthedocs.org.

Future Work

Currently, Enso is limited to benchmarking tasks that rely on static representations produced by pretrained models. We’d eventually like to extend Enso to support benchmarking for model fine-tuning approaches as well. We’re in the process of incorporating the model fine-tuning work of our advisor, Alec Radford, into Enso, so stay tuned!

In addition to supporting model fine-tuning workflows, we’d also like to add support for benchmarking tasks other than classification (comparison, multiple-choice, textual entailment, etc.) and a broader range of input types (image, audio).

We’ve used Enso internally to test whether adding new optimizers or new embeddings to the Indico platform is worthwhile, and we hope it also enables others to determine which methods are a good fit for industry applications.

If you’d like to help add any of this functionality or are looking for a machine learning project to hack on, check out the Enso wishlist or reach out to <madison@indico.io> for more information.

Happy hacking!
