Containers are one of the preferred ways to deploy applications: a container bundles everything the application needs to run. As we know, data inside a container does not persist once the container is killed or removed. If there is a need to persist data, bind mounts or volumes can help. With a bind mount, a file or directory on the host machine is mounted into a container, but with limited functionality.
If we are running the container in the cloud and the container needs to read/write data from one of the cloud provider’s storage services (e.g. Azure File Storage), then Docker volumes are the way to go. You can read more about storage types in Docker here. Let’s start with the hands-on setup.
As part of this article, for demo purposes, we will read a CSV file uploaded to Azure File Storage from a Docker container, print the data, make some modifications, and write the file back to Azure File Storage.
Hands-On
Create Azure File Share storage account
Go to Azure Portal → search for Storage Accounts → Create the Azure File share storage account as below.
After creating the storage account, go to that resource → click File shares under Data storage → click File share, fill in the required details, and click Create.
Open the file share that you created and create two directories (i.e. input & output). Upload the CSV file that you want to transform to the input directory. I have uploaded a CSV file which we will access from within the container for demo purposes.
Create & setup Virtual Machine
Search for _Virtual Machines_ on the Azure Portal and create an Azure Virtual Machine.
Next, connect to the VM via Visual Studio Code, since we are going to write some code and Docker config files. You can also simply SSH into the instance instead.
After a successful remote connection, execute the below commands.
sudo apt-get update
sudo apt-get install docker-compose
Now it is time to write some code. Go ahead and create a directory in your VM and copy the code from here.
The folder structure looks like this
Within the service directory, we have an app directory (afs_app), which contains __main__.py (executed when the Docker container is up & running) and app.py (contains the logic to read, modify & write the file from AFS).
The requirements.txt under service lists the Python packages to be installed, and setup.py packages the code and installs it.
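For reference, here is a minimal sketch of what the read-modify-write logic in app.py could look like. The paths, the column handling, and the output naming are assumptions for illustration, not the exact code from the linked repository:

```python
# Hypothetical sketch of app.py: read a CSV from the mounted input
# directory, append a marker column, and write the result to the
# output directory under a unique name.
import csv
import os
import uuid

# Inside the container, "data" is the mounted Azure File Share.
DATA_DIR = "data"


def process_csv(input_path: str, output_dir: str) -> str:
    """Read a CSV, append a 'processed' column, write it back; return the output path."""
    with open(input_path, newline="") as f:
        rows = list(csv.reader(f))

    header, body = rows[0], rows[1:]
    header.append("processed")
    for row in body:
        row.append("true")

    # Unique output name, e.g. "3f2c..._processed.csv".
    out_path = os.path.join(output_dir, f"{uuid.uuid4().hex}_processed.csv")
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows([header] + body)
    return out_path


if __name__ == "__main__":
    result = process_csv(
        os.path.join(DATA_DIR, "input", "demo.csv"),  # assumed file name
        os.path.join(DATA_DIR, "output"),
    )
    print(f"Wrote {result}")
```

Because the container writes through the CIFS-mounted data directory, anything written to data/output lands directly in the Azure File Share.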
Dockerfile contains all the definitions/commands to create an image
docker-compose.yml is used to configure the application service. It has a standard configuration except for the volumes section, where the Azure File Share is mounted as a volume on the data directory (under afs_app) within the container via the Common Internet File System (CIFS) protocol.
version: '3'
services:
  afs_service:
    container_name: afs_service
    build:
      context: ./
      dockerfile: Dockerfile
    restart: always
    volumes:
      - AFSMount:/project/service/afs_app/data
volumes:
  AFSMount:
    driver: local
    driver_opts:
      type: cifs
      o: "mfsymlinks,vers=3.0,username=${AFS_NAME},password=${AFS_KEY},addr=${AFS_NAME}.file.core.windows.net"
      device: "//${AFS_NAME}.file.core.windows.net/${AFS_CONTAINER}"
Once you have the code in place, create a .env file in the same directory as the docker-compose file and configure the environment variables below.
AFS_NAME=containerfilestorage
AFS_CONTAINER=vm-fileshare
AFS_KEY=AFS_ACCESS_KEY
For AFS_KEY, go to the storage account and click Access keys under Security + networking. Reveal & copy the key and paste it as the value of AFS_KEY.
After the environment variables are configured, execute the below command to build the image and run the container.
sudo docker-compose up --build
Once the container is up, the application reads the CSV file and uploads the modified file back to Azure File Storage, into the output folder, as unique_id_processed.csv.
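To confirm the run succeeded from inside the container (or from any machine with the share mounted), a small helper like this can list the processed files. The directory layout is an assumption based on the mount path above:

```python
# List processed output files in the mounted output directory.
# Assumes output files follow the "<unique_id>_processed.csv" convention.
import os


def list_processed(output_dir: str) -> list[str]:
    """Return the processed CSV file names in output_dir, sorted."""
    return sorted(
        name for name in os.listdir(output_dir)
        if name.endswith("_processed.csv")
    )
```

You could call list_processed("data/output") from within the container, or check the output directory directly in the Azure Portal.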
I hope this is helpful and caters to a number of use cases.
Thank you for reading!