Designing a Desktop Application with C#

Layers, MVVM and organizing the Code into Projects

According to the Pluralsight course “Getting Started with Dependency Injection in .NET” by Jeremy Clark, a desktop application can be structured into four layers.

  • The View Layer contains the UI elements of the application such as buttons and list boxes.
  • The Presentation (Logic) Layer contains the business logic that drives the application and controls the View Layer.
  • The Data Access Layer contains code that interacts with the data store. A data store can be a WSDL web service, a REST API, a database, or any other data provider.
  • The Data Store provides the data.

After defining the four layers, Jeremy goes on to map those four layers to the MVVM pattern. MVVM stands for Model – View – ViewModel.

  • The View Layer maps to the View of the MVVM pattern.
  • The Presentation Logic Layer maps to the ViewModel of the MVVM pattern.
  • The Data Access Layer and the Data Store both map to the Model of the MVVM pattern.

Jeremy uses one separate C# project per layer, plus one project for code shared amongst the layers. In total the solution contains five projects.

The project for Jeremy’s Data Store Layer is an ASP.NET Core project that provides a dummy REST API. I can imagine that there are situations in which a separate Data Store Layer project adds no benefit to your specific use case, so I would go on to say that the Data Store Layer project is optional.

The Data Access Layer

The Data Access Layer retrieves data from and sends data to the data store. The integration with the data store and the data format used should be of no concern to the upper layers. The Data Access Layer shields the upper layers from the details of communication and integration as well as possible.

One way to achieve this separation is for the Data Access Layer to talk to the upper layers via domain objects; it then has the responsibility to convert to and from those domain objects whenever it talks to the data stores.

The Data Access Layer contains ServiceReader interfaces and classes that implement those interfaces. The interfaces define an API that uses domain objects. Internally, a ServiceReader implementation uses a WebClient to talk to REST APIs, or an SSH or Telnet client to speak those protocols.

It also contains converters. Converters are generic interfaces and implementations whose task is to convert from and to domain objects. They contain and isolate the mapping logic needed to convert between domain objects and the serialization format used to talk to the data store at hand. A converter can also be unit tested easily, and converters can be nested to deal with complex data structures.

Data is retrieved from the data store via the client, passed through the converter to obtain domain objects, and the domain objects are then returned via the ServiceReader’s API. Sending data to a data store likewise goes through the API, a converter, and finally a client.
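The interplay of ServiceReader interfaces, converters, and domain objects described above can be sketched as follows. All type names (Person, PersonDto, IConverter, IPersonReader) are illustrative assumptions, not code from the course:

```csharp
using System.Collections.Generic;

// Domain object used by the upper layers (hypothetical example type).
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Serialized form as it comes from the data store, e.g. a JSON DTO
// (hypothetical: here the service stores the name as "Last, First").
public class PersonDto
{
    public string Name { get; set; }
}

// Generic converter interface; implementations isolate the mapping logic.
public interface IConverter<TDto, TDomain>
{
    TDomain ToDomain(TDto dto);
    TDto FromDomain(TDomain domain);
}

public class PersonConverter : IConverter<PersonDto, Person>
{
    public Person ToDomain(PersonDto dto)
    {
        // Mapping logic lives here and nowhere else.
        var parts = dto.Name.Split(',');
        return new Person { LastName = parts[0].Trim(), FirstName = parts[1].Trim() };
    }

    public PersonDto FromDomain(Person domain)
        => new PersonDto { Name = $"{domain.LastName}, {domain.FirstName}" };
}

// ServiceReader interface: the API exposed to the upper layers speaks
// domain objects only; the DTOs never leave the Data Access Layer.
public interface IPersonReader
{
    IEnumerable<Person> GetPeople();
}
```

Because the converter is a plain class with no I/O, it can be unit tested in isolation, exactly as described above.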

If JSON is the serialization format, you can use Newtonsoft.Json (or System.Text.Json, which is built into .NET Core 3.0 and later). For HTTP, use the modern asynchronous HttpClient class. For WSDL web services, Telnet, and SSH, suitable client libraries still have to be chosen.
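As a minimal sketch of the JSON case, using System.Text.Json (Newtonsoft.Json offers the equivalent via its JsonConvert class); the PersonDto type is a hypothetical example:

```csharp
using System.Text.Json;

// DTO matching the wire format (hypothetical example type).
public class PersonDto
{
    public string Name { get; set; }
}

public static class JsonDemo
{
    // Deserialize the payload received from the data store into a DTO.
    public static PersonDto Parse(string json) =>
        JsonSerializer.Deserialize<PersonDto>(json);

    // Serialize a DTO before sending it back to the data store.
    public static string Render(PersonDto dto) =>
        JsonSerializer.Serialize(dto);
}
```

In the layering above, such calls would live inside the ServiceReader implementation, next to the converters.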

The Presentation Layer

The Presentation Layer contains ViewModel classes (e.g. EmployeeViewModel, PeopleViewModel, …) that implement INotifyPropertyChanged and other interfaces used to connect them to the View Layer.

The ViewModel classes contain properties (i.e. members) that are data-bound to the View Layer.

Finally, the ViewModel classes contain member variables typed as ServiceReader interfaces; the ServiceReader implementations from the Data Access Layer are injected into them.
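A minimal sketch of such a ViewModel follows. The type names (PeopleViewModel, IPersonReader, Person) are illustrative assumptions, not code from the course; the in-memory reader stands in for a real Data Access Layer implementation:

```csharp
using System.Collections.Generic;
using System.ComponentModel;

// Domain object and reader interface from the Data Access Layer (hypothetical names).
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public interface IPersonReader
{
    IEnumerable<Person> GetPeople();
}

// Simple in-memory implementation, e.g. for unit tests.
public class InMemoryPersonReader : IPersonReader
{
    public IEnumerable<Person> GetPeople() =>
        new[] { new Person { FirstName = "John", LastName = "Doe" } };
}

// ViewModel: exposes data-bindable properties and raises change notifications.
public class PeopleViewModel : INotifyPropertyChanged
{
    private readonly IPersonReader reader; // injected ServiceReader implementation
    private IEnumerable<Person> people;

    public event PropertyChangedEventHandler PropertyChanged;

    public PeopleViewModel(IPersonReader reader)
    {
        this.reader = reader;
    }

    public IEnumerable<Person> People
    {
        get => people;
        private set
        {
            people = value;
            // Notify the View Layer that the bound property has changed.
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(People)));
        }
    }

    public void Refresh() => People = reader.GetPeople();
}
```

Because the ViewModel only depends on the interface, a real ServiceReader or a test double can be injected interchangeably.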

The View Layer

The View Layer contains GUI descriptions in .xaml files. The UI components defined in the .xaml files are backed by window classes, and the UI components are bound to properties of their respective window classes.

The data that the window classes provide to the GUI components via their bound properties comes from the Presentation Layer’s ViewModel classes. The ViewModel objects are injected into the window classes.

Docker and ASP.NET Core

How to dockerize a .NET Core application

Install docker on your machine.
Test the installation
$ docker run hello-world
The expected output is: Hello from Docker!

Read https://docs.docker.com/get-started/

The goal is to build an image that you can copy to the target machine and start to create a running container.
The container will contain all applications needed to run your app.

A container is launched by running an image. An image is an executable package that includes everything needed
to run an application — the code, a runtime, libraries, environment variables, and configuration files.

As an analogy: a Docker image is a class; a Docker container is an instance of that class.

First, you have to create a docker image on the development machine.
Docker images are created from Dockerfiles (a file called Dockerfile, without an extension) which contain statements that Docker
executes to create the image.

Go to the folder that contains the solution (.sln) file.
Create a file called ‘Dockerfile’ without an extension.
Edit the Dockerfile in a text editor.

Here is my example Dockerfile:

FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
#FROM mono:6.0.0.313-slim AS build
WORKDIR /app

# debug output
RUN pwd
RUN hostname
RUN uname -r

# install npm
RUN apt-get update && apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get update && apt-get install -y nodejs

# copy csproj and restore as distinct layers
COPY *.sln .
COPY clone_angular/*.csproj ./clone_angular/
RUN dotnet restore

# copy everything else and build app
COPY clone_angular/. ./clone_angular/
WORKDIR /app/clone_angular

RUN npm install
RUN dotnet publish -c Release -o out

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS runtime
WORKDIR /app
COPY --from=build /app/clone_angular/out ./
ENTRYPOINT ["dotnet", "clone_angular.dll"]

 

Execute docker to build the image from the Dockerfile.
Navigate to the folder that contains the Dockerfile.
$ docker build --tag=<TAG_NAME> .

‘docker build’ executes the statements in the Dockerfile.
During the build on my machine, uname reported the kernel 4.9.184-linuxkit,
so the build actually runs inside a Linux system and apt-get is available for installing software.

The base image comes with only a minimal set of packages installed.
If your build requires any additional tools, you have to install them into the image.
For example, if your application uses Angular, you will need node and npm, so you have to
install those tools before building your app.
To install software, prepare a working installation command, then add a RUN command
to the Dockerfile and paste the install command after it.

Once the Dockerfile has been executed, check whether your Docker installation lists your new image:
$ docker image ls

You should see

REPOSITORY TAG IMAGE ID CREATED SIZE
<TAG_NAME> latest 8797820ed5c5 3 minutes ago 262MB

Start a container from that image:
$ docker run -d -p 8888:80 <TAG_NAME>

In this command, the -d flag detaches the process from the command line: the container runs in the background and
the terminal is free for subsequent input. -p <EXPOSED_PORT>:<INTERNAL_PORT> opens port 8888 on the host system
and binds it to port 80 of the system running inside the container. This is necessary for accessing web apps from
the outside world. The rightmost column of the output of the ‘docker container ls --all’ command shows which
external port is bound to which internal port. In the example above you can now access your web app at
localhost:8888. The last argument is the tag of the image to start the container from.

Errors during image creation

Q: The command ‘/bin/sh -c dotnet publish -c Release -o out’ returned a non-zero code: 1
A: You have to install the missing software during the image build by adding RUN commands to the Dockerfile
https://stackoverflow.com/questions/49088768/dockerfile-returns-npm-not-found-on-build

Deploy to the Remote server

The idea is to prepare an image in your local development environment.
Then create a .tar file of that image and upload the .tar file to the remote system.
https://stackoverflow.com/questions/23935141/how-to-copy-docker-images-from-one-host-to-another-without-using-a-repository

1. Install docker on the remote server:

The assumption is that your remote server is running Ubuntu Linux.
https://docs.docker.com/install/linux/docker-ce/ubuntu/

ssh root@<yourip>
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
apt-cache madison docker-ce
EXAMPLE: sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io
sudo apt-get install docker-ce=18.06.3~ce~3-0~ubuntu docker-ce-cli=18.06.3~ce~3-0~ubuntu containerd.io
sudo apt-get install docker-ce=5:19.03.1~3-0~ubuntu-xenial docker-ce-cli=5:19.03.1~3-0~ubuntu-xenial containerd.io
sudo docker run hello-world

2. On your local machine, build a Dockerfile and an image from that Dockerfile. The steps for dockerizing an application are explained above.

3. On your local machine, create a .tar file from the docker image, upload and import it on the remote machine.

You can list all images available on your local machine:
docker image ls

Then export one of the images to a file on your filesystem:

docker save -o <path for generated tar file> <image name>
docker save -o ./clone_tag_1_image.tar clone_tag_1

Now, compress the file with gzip to save time uploading it (this produces clone_tag_1_image.tar.gz):

gzip clone_tag_1_image.tar

Upload the file to the server using scp:

scp <source> <destination>
scp clone_tag_1_image.tar.gz root@<your_ip>:/temp

Unzip the image on the server:

gunzip clone_tag_1_image.tar.gz

Import the image on the server:

docker load -i <path to image tar file>
docker load -i /temp/clone_tag_1_image.tar

Run the container detached, binding the host port 8888 to the container port 80:

docker run -d -p 8888:80 clone_tag_1

The web app should now be available via the server’s IP on port 8888. If it is not, check the firewall settings of your server.

Example repository

cd /Users/<USER>/dev/dot_net_core
git clone https://github.com/dotnet/dotnet-docker.git


Example

https://github.com/dotnet/dotnet-docker/tree/master/samples/aspnetapp
$ docker run --name aspnetcore_sample --rm -it -p 8000:80 mcr.microsoft.com/dotnet/core/samples:aspnetapp

The app starts and is reachable at http://localhost:8000/


Cheatsheet

Version
$ docker version

General info
$ docker info

List all images downloaded to your local machine
$ docker image ls

List all containers (running and stopped)
$ docker container ls --all

List all containers and their IDs (-a includes stopped ones)
$ docker ps -a

Show the port mappings of a container
$ docker port <ContainerID>

Stop a container
$ docker stop <ContainerID-Prefix>

Find info about a specific docker container
$ docker inspect <containerid>
$ docker inspect <containerid> | grep IPAddress

Build an image from a Dockerfile
$ docker build --tag=clone_tag_1 .

Remove a stopped container
$ docker rm <ContainerID>

Remove all stopped containers
$ docker container prune