When you think of Docker, you probably don’t think of .NET or Windows. But there are a lot of good reasons to use Docker with ASP.NET. Check out our top 10 list of reasons to use Docker with .NET to see if Docker can help you!
1. Pre-Made Runtime Environment
Generally, when you set up a server machine (or virtual machine), you have to install the operating system, install any software needed by the application (web server, SDK, etc.), and set up any environment variables needed by the application. If you use Docker, you can specify all of these in your Dockerfile or docker-compose.yml file, and every time the environment is started, Docker will ensure the proper software is installed in the container.
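As a quick sketch, an ASP.NET Core Dockerfile that bakes in the SDK, runtime, and environment variables might look something like this (the image tags and the `MyApp` project name are illustrative, not from any real project):

```dockerfile
# Build stage: has the full SDK needed to compile and publish the app
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: slimmer image with only the ASP.NET runtime
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
# Environment variables live in the container definition, not on a server
ENV ASPNETCORE_URLS=http://+:8080
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Anyone who checks out the repo gets the exact same environment by running `docker build` — no manual server setup required.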
When updates are required, you can simply update the container definition and redeploy the container. This can eliminate costly maintenance time on servers as well as give you the ability to see what the application looked like in a previous version of the server environment.
2. Version-Controlled Infrastructure
Because you check in your Dockerfile and docker-compose.yml with your code, they are versioned along with the code. This means the environments are versioned too! When you update a version of the software needed to run the application (web server, SDK, etc.), the environment gets updated alongside the application code. This is especially helpful when trying to find out why a previous version of the app acts a certain way: you can just check out that version of the code, spin up the Docker container, and debug as usual!
3. Runtime Consistency
We’ve all heard it. Most of us have said it: “It works on my machine.”
There have probably been several person-decades over the history of software development spent trying to figure out why an app works on the developer’s machine, but not on the production server. Those days are gone. If the developer develops by running the Docker container as their development server, then they are running the code in the same environment as the production server. This means if it works in development, it will work in staging and production because they are the same thing!
4. Securable like VMs
Docker containers use the host machine’s resources, but they have their own runtime environment. They have their own copy of (a slimmed-down version of) the operating system’s user space. This means you can secure a container in much the same way you secure a traditional machine, because it kind of is a real machine.
Not only can containers not access other containers without permission, they can’t access the host machine without permission and network access, either. It’s kind of like chroot jailing a process, except that a container gets its own file system (via UnionFS) to make it look like a full server.
5. Lighter than Virtual Machines
You can also restrict a container’s access to host resources like CPU, RAM, and hard drive space just like any other program. So if a particular container is flooded with activity (like being compromised by a denial of service attack), it won’t hog all the host machine’s resources and bring any other containers on that host down as well.
Unlike virtual machines, where you have to specify how much of the host machine’s resources to allocate at the time the machine is created (making those resources unavailable to the host), a Docker container just uses the resources it needs. So being able to restrict how many resources a container can use is very helpful when hosting several containers on one host.
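Those limits can be declared right in docker-compose.yml. A minimal sketch, assuming a service named `web` (the service name, image, and values are all illustrative):

```yaml
# Sketch: capping a container's CPU and memory in docker-compose.yml
services:
  web:
    image: myapp          # hypothetical application image
    deploy:
      resources:
        limits:
          cpus: "0.50"    # at most half of one CPU core
          memory: 256M    # hard cap on memory use
```

If this container gets flooded with traffic, it can only ever consume what it was given, so its neighbors on the same host keep running.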
6. Cloud Friendly
Amazon Web Services and Microsoft Azure both have great deployment stories for containers. In fact, Azure can support a single container as an application service and as a farm, using Docker Swarm and the Azure Container Service. Amazon Web Services has similar offerings.
7. Continuous Integration/Continuous Delivery
Continuous integration and continuous delivery can make the software process easy and enjoyable, but problems in the CI or CD pipeline can be hard to diagnose. Many CI and CD services now offer ways to build, test, and deploy applications using Docker containers. This means that when your unit and integration tests run on your CI server, they run in the same environment they will run in when in production, eliminating another hard-to-debug place in your software delivery pipeline.
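As one hedged example, here is what that might look like as a GitHub Actions workflow (the `myapp` image name and the `build` stage name are assumptions — they presume a multi-stage Dockerfile whose SDK stage can run the tests):

```yaml
# Sketch of a CI workflow that builds and tests inside Docker,
# so tests run in the same environment that ships to production.
name: ci
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the SDK stage of the image
        run: docker build --target build -t myapp:test .
      - name: Run the test suite inside the container
        run: docker run --rm myapp:test dotnet test
      - name: Build the final runtime image
        run: docker build -t myapp:latest .
```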
8. .NET In Particular
Microsoft is making substantial contributions to the Docker community. They are integrating Docker workflows into the Visual Studio and Team System ecosystems. When you install the Visual Studio Tools for Docker, Docker integration becomes easy: the tools create a base Dockerfile and docker-compose.yml tied into your environment, so when you press F5, Visual Studio builds and runs the container using the Dockerfile, and debugging now takes place in the running container!
9. Docker in Particular?
Docker is a stand-out in the container space because they have standardized the container format, making it easy for operating system makers to support that format. That means it becomes easier and easier to move containers from one host to another, making testing and deployment drop-dead simple.
Turning a single Docker container into a running farm couldn’t be easier either. It is just a matter of updating docker-compose.yml files with instructions on how many instances of a container to run, and creating a container to use as a reverse-proxy and load balancer. NGINX has created lots of prebuilt images for helping with this scenario, so most of the time it’s just a matter of adding that container to the compose file and setting up some configuration.
In fact, if you plan to forego IIS in favor of Kestrel for your web projects, you’ll want a reverse proxy in front of Kestrel anyway, so you might as well just add it to your compose files from the beginning! Docker Swarm was actually built for this reason: if you specify that you want x number of instances of your container running, it will spin up that many instances and watch them. If one of the instances goes down, it will handle spinning up a new container for you!
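Putting that together, a compose file for a small farm might be sketched like this (the service names, image name, and replica count are illustrative, and the NGINX container would still need a config file that proxies requests to the `web` service):

```yaml
# Sketch: several app instances behind an NGINX reverse proxy.
services:
  web:
    image: myapp            # hypothetical Kestrel-hosted app image
    deploy:
      replicas: 3           # Swarm spins up (and maintains) three instances
  proxy:
    image: nginx
    ports:
      - "80:80"             # only the proxy is exposed to the host
    depends_on:
      - web
```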