Previous article -> Introduction to Docker - pt.1

Images and Containers

An image is an ordered set of root filesystem changes and the corresponding execution parameters to be used by a container runtime; it has no state and is immutable.
A typical image is small, requires no external dependencies, and includes all the runtimes, libraries, environment variables, configuration files, scripts and everything else needed to run the application.

A container is the runtime instance of an image, that is, what the image becomes in memory when it is run. By default a container is completely isolated from the underlying host, but access to files and networks can be granted to allow communication with other containers or with the host.


Conceptually, an image is the general idea and a container is the concrete realization of that idea. One of Docker's strengths is the ability to create minimal, lightweight, self-contained images that can be ported across operating systems and platforms: the corresponding container will always run, avoiding any problem related to the compatibility of packages, dependencies, libraries, and so forth. Everything the container needs is already included in the image, and the image itself is fully portable.

An image is created from a Dockerfile, a text document containing the exact instructions that are executed to build the image.

A typical Dockerfile specifies a base image acting as the “skeleton”, labels providing information about the image and its maintainer, commands (run as if from a shell), any host files to be included, the ports to expose, environment variables and the command to be run once the container starts.

Images (and thus containers) created from the same Dockerfile are identical, regardless of the operating system and the architecture of the host: this is how the pernicious problem of application portability and compatibility is solved.
The following example of a Dockerfile comes from the official documentation; it customizes the Python 2.7 base image with the slim tag (the variant with the fewest dependencies) and then runs the application.

# Use an official Python runtime as a parent image
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]


Docker Hub

Docker Hub is the cloud-based image registry provided by Docker: here you can find images uploaded by users and official images maintained by the developers of the applications themselves.
When you create a container, or an image that requires a base image, Docker looks for them here unless you specify another registry (local or cloud-based).
By creating a Docker ID, i.e. an account, you can upload (push) your own images to your own repository (one private repository is provided for free, additional ones are paid) and share them with your colleagues through the “Organizations” feature.

The Automated Build feature automatically builds images from a build context in the repository each time it is modified, that is, every time some work is pushed. A build context is simply a Dockerfile plus any associated files. This way images are always generated as specified, the repository stays synchronized with changes to the code, and the Dockerfile is available to everybody who accesses the repository. Automated Builds are available for private and public repositories on GitHub and Bitbucket.

Official repositories, such as the Nginx one, provide clear and complete documentation, best practices and examples, and they are constantly scanned by Docker's Security Scanning service.
From an operational standpoint, you first need to log in to Docker Hub with the docker login command; after that you can attach identifying tags to images with docker tag and finally upload them to the registry with docker push username/repository:tag.
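The tag-and-push workflow just described can be sketched as follows; the image, username and repository names are hypothetical placeholders.

```shell
# Authenticate against Docker Hub with your Docker ID
docker login

# Attach a registry-qualified tag to a local image
docker tag friendlyhello username/get-started:part2

# Upload the tagged image to your repository on Docker Hub
docker push username/get-started:part2
```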

Basic Operations

Let’s see how to create a container from a Dockerfile. First, the image is built from the Dockerfile with the docker build command; docker images then lists every available image.
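Assuming the Dockerfile above sits in the current directory, a build session might look like this (the image name friendlyhello is hypothetical):

```shell
# Build an image from the Dockerfile in the current directory
# and give it a friendly name with -t
docker build -t friendlyhello .

# List the locally available images
docker images
```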


Then you can run the container with docker run. The assigned parameters map port 80 of the host to port 8080 of the container (-p 80:8080, where the mapping works as host:container), run it in detached mode (i.e. in the background) and specify which image to build the container from. In this case the container runs an app reachable from a local browser.
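A run command matching the description above might be (again, the image name is hypothetical):

```shell
# Map host port 80 to container port 8080 and run detached
docker run -d -p 80:8080 friendlyhello
```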


If you want to use an image from Docker Hub or another repository, you must first download it with docker pull; every Docker Hub page includes a box with the complete command ready to copy and use. The container is then created as usual.
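For instance, pulling and running an official image could look like this:

```shell
# Download the official nginx image from Docker Hub
docker pull nginx

# Create and start a container from it as usual
docker run -d -p 8080:80 nginx
```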

The docker ps command is very useful: it shows information such as running containers, their state, IP addresses, executed commands and port mappings. You can achieve a similar result with docker container ls, which reports IDs, images, commands and creation dates. Adding the -a parameter shows stopped containers too.
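In practice:

```shell
# Show running containers only
docker ps

# Include stopped containers as well
docker container ls -a
```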

Containers can be stopped with the docker container stop command followed by the ID of the container; as the full ID is long, a short unique prefix (typically the first few characters) is enough. Stopped containers can be restarted with docker start.
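For example, with a hypothetical container whose ID starts with 1fa3:

```shell
# Stop the container using a unique prefix of its ID
docker container stop 1fa3

# Restart it later
docker start 1fa3
```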


A stopped container is deleted with docker container rm container-ID; note that the container should be stopped before deletion, although a running container can be forcibly removed with the -f flag. Similarly, docker image rm image-ID deletes an image.
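Using the same hypothetical IDs and names as above:

```shell
# Remove a stopped container
docker container rm 1fa3

# Force-remove a container even if it is running
docker container rm -f 1fa3

# Remove an image
docker image rm friendlyhello
```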

One of the advantages of Docker containers is that you can run commands inside a container without setting up an SSH connection, thus avoiding a possible weak link in the overall system security. The docker container exec container-name shell-command command runs shell-command inside the container and its output is redirected to stdout and shown on screen.
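For example, with a hypothetical container named mycontainer:

```shell
# List the contents of /app inside the running container
docker container exec mycontainer ls /app

# Or open an interactive shell inside it
docker container exec -it mycontainer /bin/bash
```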

For monitoring purposes, Docker offers the logs, top and stats commands, which show, respectively, the logs, the running processes and the computational resources consumed by the container passed as argument.
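Again with the hypothetical mycontainer:

```shell
# Show the container's logs (-f follows them live)
docker logs -f mycontainer

# List the processes running inside the container
docker top mycontainer

# Live CPU, memory and network usage statistics
docker stats mycontainer
```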

Still on the subject of computational resources, you can assign a given amount of a specific resource to a container when creating it, or change the limits of an existing one with the docker update command.
By default a container can use all the resources available on the host, and it is easy to see why in certain cases some constraints must be placed. Further information on the topic is available in the official documentation.
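A sketch of both approaches, with hypothetical limits and names:

```shell
# Cap memory and CPU at container creation time
docker run -d --memory 512m --cpus 1.5 friendlyhello

# Adjust the memory limits of an existing container
docker update --memory 1g --memory-swap 2g mycontainer
```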
