Simplified deployment of .NET and Angular applications on Raspberry Pi with Docker.


Quentin Destrade

IBM i News

In this article, discover the process of deploying applications in heterogeneous environments using Docker, and explore a detailed approach to building a network of interoperable, distributed applications.


Deploying applications efficiently, quickly and reproducibly in heterogeneous environments can seem complex. In this article, I invite you to discover the detailed process of deploying a back-end .NET API in Docker containers, coupled with the Angular front-end application that I developed and introduced in a previous article.

Let's take a look at how to use Docker to deploy these applications efficiently on Raspberry Pi computers. Each of these small computers, functioning as an independent server in our architecture, will be connected to an IBM i server.

Our .NET API, built on the NTi data provider, acts as a service layer, exposing data and functionality from a DB2 for i database hosted on IBM i, as well as from a PostgreSQL database running on a Raspberry Pi 4. The Angular application consumes these services to provide the user interface.

Requirements and development environment

  • IDE

The .NET API is developed with Visual Studio, taking advantage of its built-in functionality for Docker container management.

The Angular application is developed with Visual Studio Code, appreciated for its lightness and flexibility, especially for JavaScript / TypeScript projects.


  • Target hardware

The applications are designed to be deployed on Raspberry Pi 4 and 5. These ARM-based nano-computers are connected to an IBM i server.


Docker Desktop, installed on my development machine, lets me containerize the applications, making it easy to deploy them on different hardware architectures, notably that of the Raspberry Pi.

PART 1 - Deploying the .NET API.

Step 1 - Dockerfile creation with Visual Studio.

The first step is to create a Dockerfile for the .NET API. Visual Studio makes this process easy, thanks to its native Docker integration:

right-click on the project > Add > Docker Support.

Visual Studio generates a Dockerfile tailored to the application, pre-configured for the .NET environment.

This file defines the instructions for building the application's Docker image, through base, build, publish and final stages.

# Base: official ASP.NET runtime image (.NET 8 assumed, matching the USER app directive and default ports)
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
USER app
WORKDIR /app
EXPOSE 8080
EXPOSE 8081

# Build: compile the application with the .NET SDK image
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["AccessDataAPI.csproj", "."]
RUN dotnet restore "./AccessDataAPI.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "./AccessDataAPI.csproj" -c $BUILD_CONFIGURATION -o /app/build

# Publish: produce the deployable output
FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "./AccessDataAPI.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false

# Final: copy the published application into the runtime image
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "AccessDataAPI.dll"]
  • Base stage: uses the official ASP.NET Docker image to create the runtime environment for the application. Ports 8080 and 8081 are exposed for HTTP and HTTPS traffic.
  • Build stage: uses the SDK image to compile the application. Dependency restoration and compilation are carried out in this stage.
  • Publish stage: publishes the compiled application, ready for deployment.
  • Final stage: prepares the final image by copying the published application into the runtime environment.

Step 2 - Adding CORS (Cross-Origin Resource Sharing).

using AccessDataAPI.Services;
using Microsoft.AspNetCore.Server.Kestrel.Core;
using Microsoft.AspNetCore.Server.Kestrel.Https;

var builder = WebApplication.CreateBuilder(args);

// Add the CORS configuration to allow requests from the Angular HotelAppManager application.
// Example policy: allow any origin, header and method — restrict as needed in production.
builder.Services.AddCors(options =>
    options.AddPolicy("MyCorsPolicy", policy =>
        policy.AllowAnyOrigin()
              .AllowAnyHeader()
              .AllowAnyMethod()));

var app = builder.Build();

// The registered policy must also be applied to the request pipeline.
app.UseCors("MyCorsPolicy");
To enable our .NET API to accept requests from other origins, such as our Angular application, which will be hosted on a different server, we configure CORS (Cross-Origin Resource Sharing) in the "Program.cs" file.

CORS is essential here, enabling our front-end application to communicate directly with our back-end.

Step 3 - Configuring launchSettings.json


This file defines several profiles for launching the application, including local development via HTTP or HTTPS, as well as Docker.

  • HTTP & HTTPS profiles: launch the application locally, with or without SSL, specifying the launch URL, the ports used and environment variables.
  • Docker profile: tells Visual Studio how to launch the application in Docker, including port mapping and SSL usage. The environment variables defined here match the ports exposed in the "Dockerfile".
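As a sketch, the profiles described above can look like the following. The exact file is generated by Visual Studio; the values shown here (ports, flags) are illustrative, chosen to match the ports used elsewhere in this article:

```json
{
  "profiles": {
    "https": {
      "commandName": "Project",
      "launchBrowser": true,
      "applicationUrl": "https://localhost:7246;http://localhost:7245",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "Docker": {
      "commandName": "Docker",
      "launchBrowser": true,
      "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}",
      "environmentVariables": {
        "ASPNETCORE_HTTPS_PORTS": "8081",
        "ASPNETCORE_HTTP_PORTS": "8080"
      },
      "publishAllPorts": true,
      "useSSL": true
    }
  }
}
```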

Step 4 - DOCKER deployment

Once the Dockerfile, launchSettings.json and CORS have been configured, it's time to build the Docker image of the .NET API using the command line.

In our application's folder, we'll use Docker Buildx to create a multi-architecture image of the .NET API, essential for deploying the application on the Raspberry Pi.

docker buildx build --platform linux/arm64/v8,linux/amd64 -t quentindaumerial/accessdataapi:latest --push -f Dockerfile .

The image is sent directly to my Docker Hub, confirming that the build was successful.
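Note that multi-platform builds require a buildx builder backed by a driver that supports them; if the default builder refuses the --platform list, one can be created first (the builder name here is arbitrary):

```shell
# One-time setup: create and select a builder that supports multi-platform builds.
docker buildx create --name multiarch --use

# Bootstrap it and list the platforms it can target.
docker buildx inspect --bootstrap
```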


To validate the operation of the .NET API in an environment similar to our deployment target, I run the image locally using "docker run".

By mapping the necessary ports and configuring the environment variables for SSL, I simulate the production environment on my development machine.

docker run --rm -it -p 7245:8080 -p 7246:8081 -e ASPNETCORE_HTTPS_PORTS=8081 -e ASPNETCORE_HTTP_PORTS=8080 -e ASPNETCORE_Kestrel__Certificates__Default__Password="password" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v %USERPROFILE%\.aspnet\https:/https/ accessdataapi

By going to https://localhost:7246/HotelCustomer/clients in my web browser, I can access the data exposed by my API directly from DB2 for i. This validation confirms that my .NET API is ready for deployment on the Raspberry Pi 5.


Step 5 - Sending the image to the Raspberry Pi 5.

After pushing the image to my Docker Hub registry, it's now time to connect to the Raspberry Pi over SSH, pull the image from the hub, and launch it in a container.

  1. Connect to the Raspberry Pi 5 via SSH.

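The connection can be made with any standard SSH client; the user name and hostname below are placeholders for your own Pi's credentials:

```shell
# Open an SSH session on the Raspberry Pi 5 (user and hostname are examples).
ssh pi@raspberrypi5.local
```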

  2. Pull the image from my Docker Hub:

docker image pull quentindaumerial/accessdataapi:latest

Step 6 - Launch the Docker container on the Raspberry Pi 5.

I launch my container by mapping its port 8080 to port 5040 on my Raspberry Pi 5:

docker run -it -p 5040:8080 quentindaumerial/accessdataapi:latest

And that's it! My .NET API is now running in a Docker container on my Raspberry Pi 5, itself connected to my IBM i. I can verify this directly from a browser, at the Raspberry Pi's address on port 5040.


PART 2 - Deploying the ANGULAR application.

Step 1 - Preparing the Dockerfile.

As with the .NET API, to dockerize our Angular application, we go into the application folder and create a file named Dockerfile, with no extension.

It consists of two stages: build and run.

### STAGE 1: BUILD ###

# Build the Angular application with Node (pinning a version is preferable to "latest")
FROM node:latest AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build -- --configuration=production

### STAGE 2: RUN ###

# Serve the generated files with NGINX
FROM nginx:latest
COPY --from=build /app/dist/app-data-manager /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80

  • The build stage uses the "node" image to install the dependencies and build the Angular application, specifying the production configuration (detailed in Step 3).
  • The run stage uses the "nginx" image to serve our application. The generated distribution files are copied to /usr/share/nginx/html, the default folder NGINX serves web content from.

Step 2 - Create the nginx.conf file.

This file configures NGINX to correctly serve our Angular application, taking care of client-side routing.


events { }

http {
    include /etc/nginx/mime.types;
    server {
        listen 80;
        root /usr/share/nginx/html/browser;
        index index.html;
        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
  • The "server" block defines where to find application files and how to respond to HTTP requests.

  • The "try_files" directive tells NGINX to serve index.html for all client-side routes.

Step 3 - Configuring development and production environments.

As you can see from the Dockerfile, we've specified a build with "production configuration".

This is because Angular is able to natively differentiate and manage configurations specific to the development and production environments, to ensure that the application behaves as expected in each context (notably connection to my API).

  • For development ("src/environments/environment.ts"):

Contains the configuration used during development, including the API base URL pointing to the development server we've configured in the launchsettings file of our .NET API.

export const environment = {
    production: false,
    apiBaseUrl: 'https://localhost:7246/'
};
  • For production ("src/environments/"):

Contains the configuration used for the production build of the application, with a URL pointing to the production server (here our other Raspberry Pi, reachable at its address on port 5040).

export const environment = {
  production: true,
  apiBaseUrl: ''
};

These files enable simplified switching between development and production configurations, simply by changing the "production" flag when building the application.
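As a small sketch of how the Angular side can consume these files, the helper below joins `apiBaseUrl` with an endpoint path. The function name is illustrative; in the real application the API is called through Angular's HttpClient:

```typescript
// Development environment file, as defined above.
const environment = {
  production: false,
  apiBaseUrl: 'https://localhost:7246/',
};

// Join the base URL and an endpoint path without doubling slashes.
function apiUrl(endpoint: string): string {
  const base = environment.apiBaseUrl.replace(/\/+$/, '');
  const path = endpoint.replace(/^\/+/, '');
  return `${base}/${path}`;
}

console.log(apiUrl('/HotelCustomer/clients'));
// → https://localhost:7246/HotelCustomer/clients
```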

So Angular's ng serve command lets me access my development server on my localhost, port 4200.


Step 4 - Building the Docker Image.

In the same way as our .NET API, it's important to build the image cross-platform to ensure the Angular application's compatibility with different architectures.

So once again we use Docker Buildx to create an image that works on both amd64 (typical of PCs and servers) and arm64/v8 (ARM devices like the Raspberry Pi).

The following command launches the image build and pushes it to my Dockerhub:

docker buildx build --platform linux/amd64,linux/arm64,linux/arm64/v8 -t quentindaumerial/appdatamanager:latest --push --no-cache .


My image is updated on my Docker Hub.


Step 5 - Sending the image to the Raspberry Pi 4.

After building and pushing the image of my Angular application to Docker Hub, I connect to the Raspberry Pi 4 via SSH.


I retrieve the latest version of my image from my hub, and make it available on the Raspberry Pi:

docker pull quentindaumerial/appdatamanager:latest


Step 6 - Launch the Docker container on the Raspberry Pi 4.

Now that the image is downloaded, I launch a Docker container to deploy the Angular application, mapping the container's port 80 to port 5000 on the Raspberry Pi 4:

docker run -it --platform linux/arm64/v8 -p 5000:80 quentindaumerial/appdatamanager:latest
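For an unattended deployment, the container can instead be run detached with a restart policy, so it survives the SSH session ending and reboots. This is a sketch using standard Docker flags; the container name is arbitrary:

```shell
# Run detached (-d) and restart automatically unless explicitly stopped.
docker run -d --restart unless-stopped -p 5000:80 \
  --name appdatamanager quentindaumerial/appdatamanager:latest
```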


The application can now be accessed from a web browser at the Raspberry Pi 4's address, on port 5000.


In particular, we can see that the data is indeed retrieved through our API, which consumes the data exposed by DB2 for i, accessed via NTi from .NET.



By setting up dedicated Dockerfiles, configuring nginx.conf appropriately for Angular, and adjusting the CORS and launch settings of our .NET API, it's possible to simply create an ecosystem where the two applications work together seamlessly while remaining isolated. The result? Smooth and easy data retrieval, illustrating interconnection in a containerized environment.

The use of the NTi connector plays an essential role here in our .NET API, facilitating the connection to DB2 for i. It proves that it is possible to run .NET applications natively, interacting with data on IBM i, while remaining in a Dockerized environment. The ease of deploying a .NET API and client application, combined with the ability of our data provider NTi to operate natively in .NET, underlines not only the simplicity of managing such applications, but also their scalability potential. This project is just one example of the many ways in which we can rethink the use of technology today to create, innovate and optimize development and deployment processes.

I hope this guide has provided you with the knowledge and inspiration you need to explore new methods and tools for your own projects. Docker is here to make application deployment simpler, faster and more reliable, allowing you to focus on what really matters: efficiently meeting the specific needs of each customer project.

Quentin Destrade