Rasa NLU (Natural Language Understanding) is a tool for understanding what is being said in short pieces of text. For example, taking a short message like:
"I'm looking for a Mexican restaurant in the center of town"
And returning structured data like:
```
intent: search_restaurant
entities:
  - cuisine: Mexican
  - location: center
```
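In code, that structured result is just nested data. A minimal Python sketch of consuming it (the dict shape mirrors the example above, with an invented confidence value; the `summarise` helper is hypothetical, not part of Rasa NLU):

```python
# A parse result shaped like the example above (confidence value invented).
result = {
    "intent": {"name": "search_restaurant", "confidence": 0.92},
    "entities": [
        {"entity": "cuisine", "value": "Mexican"},
        {"entity": "location", "value": "center"},
    ],
}

def summarise(parse):
    """Flatten a parse result into (intent_name, {entity: value})."""
    return parse["intent"]["name"], {e["entity"]: e["value"] for e in parse["entities"]}

print(summarise(result))
```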
Rasa NLU is primarily used to build chatbots and voice apps, where this is called intent classification and entity extraction. To use Rasa, you have to provide some training data. That is, a set of messages which you've already labelled with their intents and entities. Rasa then uses machine learning to pick up patterns and generalise to unseen sentences.
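For example, labelled training data for the restaurant query above might look like this in Rasa NLU's markdown format (the second example sentence is made up for illustration):

```md
## intent:search_restaurant
- I'm looking for a [Mexican](cuisine) restaurant in the [center](location) of town
- show me [Italian](cuisine) places in the [north](location)
```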
You can think of Rasa NLU as a set of high level APIs for building your own language parser using existing NLP and ML libraries.
If you are new to Rasa NLU and want to create a bot, you should start with the tutorial.
- What does Rasa NLU do? 🤔 Read About the Rasa Stack
- I'd like to read the detailed docs 🤓 Read The Docs
- I'm ready to install Rasa NLU! 🚀 Installation
- I have a question ❓ Rasa Community Forum
- I would like to contribute 🤗 How to contribute
For the full installation instructions, please head over to the documentation: Installation
Via Docker Image (from docker hub):

```shell
docker run -p 5000:5000 rasa/rasa_nlu:latest-full
```
(for more docker installation options see Advanced Docker Installation)
Via Python Library (from pypi):

```shell
pip install rasa_nlu
python -m rasa_nlu.server &
```
(for more python installation options see Advanced Python Installation)
The command below works with either installation method used above.
```shell
curl 'https://raw.githubusercontent.com/RasaHQ/rasa_nlu/master/sample_configs/config_train_server_json.yml' | \
curl --request POST --header 'content-type: application/x-yml' --data-binary @- --url 'localhost:5000/train?project=test_model'
```
This will train a simple keyword-based model (not usable for anything but this demo). For better pipelines, consult the documentation.
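To make "keyword based" concrete: a model like this simply fires an intent whenever a trigger word appears in the message. A toy sketch (not Rasa's actual implementation; the keywords here are invented):

```python
# Toy keyword-based intent classifier: first matching trigger word wins.
KEYWORDS = {
    "restaurant": "search_restaurant",
    "hello": "greet",
    "goodbye": "goodbye",
}

def keyword_intent(message):
    for word, intent in KEYWORDS.items():
        if word in message.lower():
            return intent
    return None  # no keyword matched

print(keyword_intent("Any good restaurant nearby?"))
```

A model like this cannot generalise to unseen phrasings at all, which is why the demo suggests consulting the documentation for better pipelines.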
```shell
wget 'https://raw.githubusercontent.com/RasaHQ/rasa_nlu/master/sample_configs/config_train_server_md.yml'
curl --request POST --header 'content-type: application/x-yml' --data-binary @config_train_server_md.yml --url 'localhost:5000/train?project=test_model'
```
The above command fetches the sample training configuration, POSTs that data to the `/train` endpoint, and names the resulting model `test_model`.
Make sure the training request has finished before executing the next command. You can check progress with the `/status` endpoint.
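Once training has finished, you can query the model from any language over HTTP. A sketch in Python using only the standard library (it assumes the server from this quickstart is running on localhost:5000 and the model was named `test_model`; the helper names are made up):

```python
import json
from urllib import request

def build_parse_payload(text, project="test_model"):
    """Build the JSON body for the /parse endpoint."""
    return json.dumps({"q": text, "project": project})

def parse_message(text, project="test_model", host="http://localhost:5000"):
    """POST a message to a running Rasa NLU server and return the parse result."""
    req = request.Request(
        host + "/parse",
        data=build_parse_payload(text, project).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```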
The intended audience is mainly people developing bots, either starting from scratch or looking for a drop-in replacement for wit, LUIS, or Dialogflow. The setup process is designed to be as simple as possible. Rasa NLU is written in Python, but you can use it from any language through an HTTP API. If your project is written in Python, you can simply import the relevant classes. If you're currently using wit/LUIS/Dialogflow, you just swap a single URL: the https call used to parse every message.
These points are laid out in more detail in a blog post. Rasa is a set of tools for building more advanced bots, developed by the company Rasa. Rasa NLU is the natural language understanding module, and the first component to be open-sourced.
It depends. Some things, like intent classification with the `tensorflow_embedding` pipeline, work in any language. Other features are more restricted. See details here.
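For instance, selecting that pipeline for a non-English model is a one-line change in the model configuration. A sketch (check the docs for the exact options your version supports):

```yaml
# config.yml: language-agnostic intent classification
language: "de"
pipeline: "tensorflow_embedding"
```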
We are very happy to receive and merge your contributions. There is some more information about the style of the code and docs in the documentation.
In general the process is rather simple:
Your pull request will be reviewed by a maintainer, who might get back to you about any necessary changes or questions. You will also be asked to sign the Contributor License Agreement.
```shell
git clone git@github.com:RasaHQ/rasa_nlu.git
cd rasa_nlu
pip install -r requirements.txt
pip install -e .
```
For local development make sure you install the development requirements:
```shell
pip install -r alt_requirements/requirements_dev.txt
pip install -e .
```
To test the installation use the following (this will run a very stupid default model; you need to train your own model to do something useful!):
Before you start, ensure you have the latest version of the docker engine on your machine. You can check whether docker is installed by typing `docker -v` in your terminal.
To see all available builds go to the Rasa docker hub, but to get up and running quickest, just run:

```shell
docker run -p 5000:5000 rasa/rasa_nlu:latest-full
```
There are also three volumes which you may want to map, one of which is `/app/data`. It is also possible to override the config file used by the server by mapping a new config file to the volume `/app/config.json`. For complete docker usage instructions, go to the official docker hub readme.
To test, run the command below after the container has started. For more info on using the HTTP API, see here.
Warning: setting up Docker Cloud is quite involved. This method isn't recommended unless you've already configured Docker Cloud Nodes (or swarms).
In order to use the spaCy or MITIE backends, make sure you have one of their pretrained models installed.
```shell
python -m spacy download en
```
To download the MITIE model, run the following and place the extracted files in a location that you can reference in your configuration during model training:

```shell
wget https://github.com/mit-nlp/MITIE/releases/download/v0.4/MITIE-models-v0.2.tar.bz2
tar jxf MITIE-models-v0.2.tar.bz2
```
If you want to run the tests, you need to copy the model into the Rasa folder:
```shell
cp MITIE-models/english/total_word_feature_extractor.dat RASA_NLU_ROOT/data/
```
Here, `RASA_NLU_ROOT` points to your Rasa NLU installation directory.
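Your model configuration then needs to point at the downloaded file. A hypothetical sketch (the `mitie_file` key appeared in older rasa_nlu configs, but key names vary between versions, so check the configuration docs for your version):

```yaml
# config.yml: MITIE backend with the downloaded feature extractor
language: "en"
pipeline: "mitie"
mitie_file: "data/total_word_feature_extractor.dat"
```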
Releasing a new version is quite simple, as the packages are built and distributed by travis. The following things need to be done to release a new version:
Tag the release and push the tag; travis will build this tag and push a package to pypi:

```shell
git tag -f 0.7.0 -m "Some helpful line describing the release"
git push origin 0.7.0
```

Then create a branch for the release series and push it:

```shell
git checkout -b 0.7.x
git push origin 0.7.x
```
In order to run the tests make sure that you have the development requirements installed.
Licensed under the Apache License, Version 2.0. Copyright 2018 Rasa Technologies GmbH. Copy of the license.
A list of the Licenses of the dependencies of the project can be found at the bottom of the Libraries Summary.