
# Customer Review Sentiment Analysis


## Components

## Running the service locally (with Docker)

To launch a standalone instance of the service:

```shell
cd review-sentiment
docker-compose up
```
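The compose setup assumes a `docker-compose.yml` in `review-sentiment`. A minimal sketch of what such a file might look like (the service name, build context, and port mapping below are assumptions for illustration, not taken from the repository):

```yaml
# Hypothetical compose file; the actual file in the repository is authoritative.
version: "3"
services:
  sentiment:
    build: ./sentiment-backend   # assumed build context
    ports:
      - "80:8000"                # host port 80 -> container port 8000
```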

## Running the service locally (without Docker)

During development, it can be desirable to run the service directly rather than as a Docker container; for instance, this gives rapid feedback on changes to the backend code.

```shell
cd review-sentiment
./build_frontend.sh
cd sentiment-backend
uvicorn sentiment.main:app
```
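The `sentiment.main:app` argument uses uvicorn's `module:attribute` target syntax: import the module `sentiment.main` and serve the ASGI application bound to its `app` attribute. A minimal sketch of how such a target string resolves (the resolver function below is illustrative, not uvicorn's actual implementation):

```python
import importlib

def resolve_app(target: str):
    """Resolve a uvicorn-style 'module:attribute' target string."""
    module_name, _, attr = target.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# Demonstrated here with a standard-library target instead of the
# service's own module, which is not importable outside the repo:
join = resolve_app("os.path:join")
```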

## GPU Acceleration with CUDA

- The sentiment service can be run with GPU acceleration using PyTorch's CUDA support.
- To enable GPU support, build the Docker image from `Dockerfile_CUDA` with `docker build -f Dockerfile_CUDA .`. Launch a container from the resulting image with `docker run -p 80:8000 --gpus all *image ID*`. These flags are required to map container port 8000 (where the service is listening) to host port 80 (where HTTP requests arrive) and to grant the container access to the GPU.
- There are compatibility conditions that must be satisfied for CUDA to work:
    - CUDA requires NVIDIA drivers, and the installed driver version must be supported by the CUDA version in use. An overview of the minimum driver required for a specific version of CUDA can be found here.
    - The PyTorch version has to be compatible with the CUDA version. A list of previous PyTorch versions, their supported CUDA versions, and download instructions can be found here.
- The configuration has been tested using GCP's "GPU Optimized Debian m32 (with CUDA 10.0)" image (specifications: NVIDIA driver version 410.104, CUDA 10.0, and PyTorch 1.4.0).
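As an illustration of the driver-version constraint above, a small compatibility check can be sketched as follows. The minimum-driver values in the table are assumptions based on NVIDIA's published requirements; consult NVIDIA's official CUDA compatibility matrix for authoritative numbers:

```python
# Illustrative minimum Linux driver versions per CUDA release.
# These values are assumptions; NVIDIA's compatibility table is authoritative.
MIN_DRIVER = {
    "10.0": (410, 48),
    "10.1": (418, 39),
}

def driver_supports(cuda_version: str, driver_version: str) -> bool:
    """Return True if the installed driver meets the CUDA minimum."""
    required = MIN_DRIVER[cuda_version]
    installed = tuple(int(part) for part in driver_version.split("."))
    return installed >= required

# The tested GCP image ships driver 410.104 with CUDA 10.0:
driver_supports("10.0", "410.104")  # 410.104 meets the 410.48 minimum
```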