This blog post compares several docker registries that you can host yourself.

Intended audience: System administrators and DevOps engineers familiar with docker and its ecosystem, or anyone curious about docker registries.

By Quentin Anglade, security freak @ Objectif Libre

Self-hosted docker registries showdown

The docker hub is great if you don’t mind waiting for your images to pull, if you don’t need anything private, or if you’re happy with the docker hub’s private repositories.

Maybe you need a private registry, or a faster one, or one located on your private isolated network, or maybe you just have your own infrastructure and you don’t want to pay for SaaS since you can host it yourself.

Chances are you need a docker registry of your own. Below, we will look at three different open source registries and see which one is best for you.



Harbor

Made by VMware, this “enterprise-class registry server” is more than just a registry. It offers many features:

  • Role based access control: users and repositories are organized via ‘projects’ and a user can have different permissions for images under a project.
  • Policy based image replication: images can be replicated (synchronized) between multiple registry instances, with auto-retry on errors. Great for load balancing, high availability, multi-datacenter, hybrid and multi-cloud scenarios.
  • Vulnerability Scanning: Harbor regularly scans images and warns users of vulnerabilities.
  • LDAP/AD support: Harbor integrates with existing enterprise LDAP/AD for user authentication and management.
  • Image deletion & garbage collection: images can be deleted and their space can be recycled.
  • Notary: image authenticity can be ensured.
  • Graphical user portal: user can easily browse, search repositories and manage projects.
  • Auditing: all the operations to the repositories are tracked.
  • RESTful API: RESTful APIs for most administrative operations, easy to integrate with external systems.
  • Easy deployment: provides both an online and an offline installer.


We would like to temper the “easy deployment” claim. While there is an online installer and an offline installer with built-in images that eases deployment in network-restricted environments, there are a few things you will probably have to do to install Harbor successfully. Unless you have vSphere, in which case you can use the provided OVA.

You can download the installer from the GitHub releases page.

Harbor ships with a Harbor.cfg file and a prepare script that you need to run every time you change your setup; it generates the configuration files and certificates. The problem is: the script does not run in a docker container. So if python2 is not your default interpreter, you will have to change the shebang on the first line of the prepare script. And if you don’t have python at all (CoreOS, or any modern OS that has deprecated python2, maybe?), you will need to install it, build your own Harbor package with everything pre-configured, or dockerize the prepare script.
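If you would rather not install python2 on the host, one workaround is to run the prepare script in a throwaway python2 container. A sketch, assuming the Harbor installer was unpacked into the current directory (untested, adjust paths to your layout):

```
# Run Harbor's prepare script with a containerized python2
# instead of installing python2 on the host.
docker run --rm -v "$(pwd)":/harbor -w /harbor python:2.7 python prepare
```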

I had to edit the prepare script for it to work in my environment: it failed to generate a key, with a not-very-helpful error message. After some intense low-level print-based debugging, I found a GitHub issue detailing the patching needed for the script to work. This is very sad, because working seamlessly in any environment is exactly why docker exists (among other reasons). Note that this step is only necessary if you don’t use your own certificate.

Harbor stores everything in /data on the host by default, and there is no way to change that in Harbor.cfg. If that doesn’t match your needs, you will have to edit all the docker-compose files yourself. All the logs are stored on the host by default too, and changing that also means editing all the docker-compose files.

After that, I launched the script and everything was up and running. With a 502 Bad Gateway error on the reverse proxy. It turned out that I had changed the path of the secretkey file, but the docker-compose files had not picked up the new path. I ended up removing the /data/secretkey directory that had been created from the volume path, and replacing it with the key generated at the path I had originally specified in Harbor.cfg.

If you plan to run Harbor behind a reverse proxy, you will probably need to remove the proxy_set_header X-Forwarded-Proto $$scheme; line in the nginx template configuration files in the common/templates/nginx/ folder.
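For reference, a minimal reverse proxy server block in front of Harbor might look like the sketch below. Hostnames and certificate paths are placeholders for your own setup:

```nginx
server {
    listen 443 ssl;
    server_name domain.tld;

    ssl_certificate     /etc/nginx/certs/domain.tld.crt;
    ssl_certificate_key /etc/nginx/certs/domain.tld.key;

    location / {
        proxy_pass http://harbor-host:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Do not set X-Forwarded-Proto here: Harbor's bundled nginx
        # handles it, and that is the header conflict mentioned above.
        client_max_body_size 0;   # allow large image layer uploads
    }
}
```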

I finally managed to log in to Harbor and change the default password. You can really feel that this was made by VMware and is intended to run in a dedicated (virtual) machine.

Using Harbor

A simple docker login domain.tld and you can use Harbor like any other registry:

docker build -t myawesomeimage .
docker tag myawesomeimage domain.tld/project/image:tag
docker push domain.tld/project/image:tag

The UI is very nice and effective. You can do everything you need in a few clicks.

The integration of clair for vulnerability scanning is very good: you get a nice vulnerability overview, images can be scanned when they are pushed, and pulls can be blocked for images above a configurable severity threshold.

You can use your existing LDAP or AD so that users can easily sign in. Every action of every user is logged:

You can set repositories to be public so that anyone can pull the image and access the repo details in a browser with the same URL.

The REST API looks good, and you even get a swagger file to work with.
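As an illustration, listing projects can be a single curl call. The /api/projects path below is the Harbor 1.x style endpoint and the credentials are placeholders; check the bundled swagger file for the exact paths of your version:

```
# List Harbor projects via the REST API (endpoint per the swagger spec).
curl -s -u admin:MyPassword "https://domain.tld/api/projects"
```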

The integrated notary server is a very nice addition if you want to sign your images, but it is quite difficult to set up with custom certificates (as opposed to self-signed ones).



Portus

Portus is an authorization server and a user interface for the docker registry, made by SUSE. It packs some interesting features:

  • Fine-grained control of permissions
  • User-friendly web interface
  • Synchronization with your private registry in order to fetch which images and tags are available
  • LDAP user authentication
  • OAuth and OpenID-Connect authentication
  • Monitoring of all the activities performed on your private registry and Portus itself
  • Search for repositories and tags inside your private registry
  • ‘Star’ your favorite repositories
  • Disable users temporarily
  • Optionally use Application Tokens for better security


Installation is easier than Harbor’s. The documentation has several deployment examples, the easiest being a docker-compose file. There is also support for Kubernetes and Helm.

We covered the secure deployment of Portus in another article.

Just like Harbor, you will probably run into problems when deploying Portus behind a reverse proxy such as Nginx. Be careful with HTTP headers that might conflict with the Nginx instance bundled with Portus.

Using Portus

Portus is like your own docker hub, but better. It presents a nice UI that works well:

Just like Harbor, a simple docker login domain.tld:5000 gets you up and running. Portus has users, teams and namespaces. A user can have namespaces. A team can have namespaces. And, of course, a team is made of users. A namespace contains repositories, and repositories contain images. To put it simply: if you create a namespace named “qwerty”, you will push and pull images to domain.tld:5000/qwerty/image:tag. See here for more details.
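With the hypothetical “qwerty” namespace above, the workflow mirrors the Harbor one:

```
docker build -t myawesomeimage .
docker tag myawesomeimage domain.tld:5000/qwerty/myawesomeimage:latest
docker push domain.tld:5000/qwerty/myawesomeimage:latest
```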

Clair is not as well integrated in Portus as it is in Harbor. You cannot scan on demand, prevent vulnerable images from being pulled, or see whether the vulnerability databases are up to date. Note that Portus / clair will report no vulnerabilities at first if it hasn’t had enough time to pull all the vulnerability data (and there are lots of vulnerabilities, so it takes time). You can still check whether clair has downloaded the vulnerability databases with a simple docker-compose logs clair. If you don’t want vulnerable images to be pulled, one option is to use the REST API of Portus in your CI/CD workflow and take action when a vulnerability is found.
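To sketch what that gating could look like: assume your pipeline has already fetched a vulnerability report from the Portus API into report.json (the JSON shape below is an assumption for illustration, not Portus’s actual schema), then fail the job when anything High or Critical shows up:

```shell
# Write a fake report to illustrate; in a real pipeline this file would
# come from a call to the Portus API (the schema here is an assumption).
cat > report.json <<'EOF'
{"vulnerabilities": [{"name": "CVE-2018-0000", "severity": "Medium"}]}
EOF

# Fail the job when a High or Critical severity appears in the report.
if grep -qE '"severity": *"(High|Critical)"' report.json; then
  echo "blocking vulnerabilities found" >&2
  exit 1
fi
echo "no blocking vulnerabilities"
```

In a real pipeline the exit code is all you need: CI runners treat a non-zero exit as a failed job.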

The UI is also very simple:

While everything works well, there is little to no customization available, especially when compared to Harbor. Most of the automation will have to be done with the API in your CI/CD workflow. Just remember: Portus is not a registry, it’s more of a front end for a registry.

GitLab registry


GitLab’s docker registry is available with GitLab since version 8.8. It’s tightly integrated with GitLab and has the following features:

  • User authentication is from GitLab itself, so all the user and group definitions are respected.
  • There’s no need to create repositories in the registry; the project is already defined in GitLab.
  • Projects have a new tab, Container Registry, which lists all the images related to the project.
  • Every project can have an image repository, but this can be turned off per-project.
  • Developers can easily upload and download images from GitLab CI.
  • There’s no need to download or install additional software.

The goal of this registry is to be used along with GitLab’s CI/CD to “simplify your development and deployment workflows”, with some examples from GitLab:

  • Easily build Docker images with the help of GitLab CI and store them in the GitLab Container Registry.
  • Easily create images per branch, tag, or any other way suitable to your workflow, and with little effort, store them on GitLab.
  • Use your own built images, stored in your registry to test your applications against these images, allowing you to simplify the Docker-based workflow.
  • Let the team easily contribute to the images, using the same workflow they are already accustomed to. With the help of GitLab CI, you can automatically rebuild images that inherit from yours, allowing you to easily deliver fixes and new features to a base image used by your teams.
  • Have a full Continuous Deployment and Delivery workflow by pointing your CaaS to use images directly from GitLab Container Registry. You’ll be able to perform automated deployments of your applications to the cloud (Docker Cloud, Docker Swarm, Kubernetes and others) when you build and test your images.


There is no “installation” to speak of, as the registry is integrated into GitLab itself; you just need to activate it, which is easy. If you run the GitLab omnibus package, there is only one line to edit in your gitlab.rb:

registry_external_url 'https://registry.domain.tld'

You may have to edit a few more lines if your certificates are not in the usual path. If you have installed GitLab from source, you’ll need to edit your gitlab.yml as described in the GitLab documentation.
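For example, if your certificates live in a non-standard location, the omnibus registry settings can point at them directly. A sketch for gitlab.rb; the paths are placeholders:

```ruby
registry_nginx['ssl_certificate']     = "/etc/gitlab/ssl/registry.domain.tld.crt"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/registry.domain.tld.key"
```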

Using GitLab’s registry

GitLab claims the registry is “seamless and secure”. Like any other registry, a simple docker login is all you need. As the registry is integrated into GitLab, there is no management of repositories, users, teams or namespaces. Every GitLab user can log into the registry with their GitLab credentials, and every project has its own repository to store docker images. You can, however, delete images from the GitLab registry page:

The goal of GitLab’s registry is to be used in CI. So how exactly do we do that? Let’s take a look at an example gitlab-ci.yml from GitLab’s documentation:

  build:
    image: docker:stable
    services:
      - docker:dind
    stage: build
    script:
      - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
      - docker build -t $CI_REGISTRY_IMAGE .
      - docker push $CI_REGISTRY_IMAGE

So you can automagically build docker images in your CI workflow with just a few lines in a gitlab-ci.yml.
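For instance, one of the workflows GitLab suggests above, an image per branch, only requires a different tag. A sketch using GitLab’s predefined CI variables ($CI_COMMIT_REF_SLUG holds a path-safe version of the branch name):

```yaml
build:
  image: docker:stable
  services:
    - docker:dind
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    # One image tag per branch, e.g. registry.domain.tld/group/project:my-branch
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
```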

There is no vulnerability scanning with GitLab unless you are willing to pay for GitLab Ultimate. But you still want security scans, right? You can do it yourself!
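One way to do it yourself is a dedicated scanning job in the pipeline. A sketch using the open source clair-scanner tool, assuming you run a clair server somewhere and ship the clair-scanner binary with your repository; the clair URL and the $RUNNER_IP variable are placeholders for your own setup:

```yaml
scan:
  stage: test
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker pull $CI_REGISTRY_IMAGE
    # clair-scanner exits non-zero when vulnerabilities are found,
    # which fails the job and blocks the pipeline.
    - ./clair-scanner --clair=http://clair.domain.tld:6060 --ip=$RUNNER_IP $CI_REGISTRY_IMAGE
```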

GitLab is customizable, so nothing prevents you from using the same CI workflow with another registry, like Harbor or Portus.

Bonus: OpenShift

OpenShift is an open source container application platform by Red Hat, based on top of Docker and Kubernetes.

If you already use OpenShift, you probably have the integrated OpenShift registry, OCR (OpenShift Container Registry). Using a third-party registry with OpenShift is possible but not very useful, as OCR is well integrated (it uses notifications to inform OpenShift of new images).

In a typical use of OpenShift, you don’t interact directly with the registry. You can access the registry, but you have no reason to.

You can also deploy OCR as a standalone registry, which mainly makes sense when you run multiple OpenShift clusters or have specific security requirements.

And if you don’t use OpenShift, then you probably don’t need OCR.


Conclusion

If you need a secure registry that integrates well with your existing LDAP or AD, has security policies and supports notary to sign images: Harbor is the obvious choice. Once it is set up it works well, but getting it up and running can take some time.

If you need your own docker hub that can use your existing LDAP, where people can interact and collaborate, with a bit of security and easy deployment: Portus is here.

If you use GitLab and just need to get work done, use GitLab’s registry.

If you use Openshift, you should stay with OCR.

For other needs, a mixture of GitLab and an external registry can be a good option.