I was once setting up yet another virtualization system to install Web Call Server on it and clone it several times for further deployment of a test CDN. It occurred to me that it would be perfect if the process were self-deploying and did not require my participation.

In addition to saving deployment time, this is also very convenient: if one server cannot cope with the load, another one is deployed automatically and takes over part of the load. Once the load drops below a certain level, the additional server is shut down.

So, I decided to run WCS in a Docker container. There were several reasons and applications:

  1. The need to deploy a large number of WCS servers. For example, to organize a test CDN. Or for a non-test CDN — but in this case, the containers must be deployed on different hosts that are remote from each other.
  2. The need to deploy more WCS servers than there are physical or virtual machines available.
  3. Fast organization of test benches.
  4. And simply because it’s cool and trendy.

 

There were also several solutions to these problems:

I could write an installation script, but that would be too easy. Or too difficult. Either way, a script would not account for all the dependencies and other factors the software needs to work, such as updates to the operating system version and the service libraries.

For multiple deployments and rapid test benches, virtual machines could be used: deploy the server once, make an image, and then clone it in the required quantities. This option has its shortcomings, the most important of which is the size of the image. For a virtual machine, the image includes everything: the operating system and all related programs, so it is often quite large. Deploying virtual machines from an image takes not only time but also a fairly large amount of resources, including CPU and RAM.

And finally, containerization with Docker. Containerization is a cross between running software on a physical server and full virtualization. Docker’s main idea is running applications in isolated environments: containers exist so that the environment of one process does not interfere with another. By default, each container is isolated from other containers and from the machine it runs on. Another important difference between Docker and regular virtual machines is that Docker does not emulate hardware: it uses system resources directly but isolates the process itself. For this reason, it is not recommended to run containers from images built for platforms other than the one where Docker is running.

When it comes to using Docker for test environments, pretty much everything is clear: it is fast and convenient. But I had an idea to use containers not only for CDN test benches, but also for organizing video streaming itself; for example, to build a video surveillance system for the entrances of several apartment buildings.

One house => one Docker container

IP cameras are installed at the entrances of the houses. WCS in a Docker container receives RTSP video streams from the IP cameras and converts them into WebRTC streams, which in turn are played on the management company’s website.

(diagram: RTSP to WebRTC streaming via WCS in a Docker network)

The advantage of implementing this task with Docker containers is the ability to reconfigure the video surveillance system on the fly without affecting the part that already works. To add or disconnect a house, all we need to do is start or stop the corresponding container, and this work can be entrusted to scripts.
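As a sketch of what such a script might look like (the container naming scheme and arguments are my assumptions; the camera streams themselves would still be configured in WCS separately):

#!/bin/bash
# Hypothetical helper: one house = one WCS container.
# Usage: ./house.sh add 42   or   ./house.sh remove 42
ACTION=$1
HOUSE=$2

case "$ACTION" in
  add)
    docker run \
      -e PASSWORD=password \
      -e LICENSE=license_number \
      --name "wcs-house-$HOUSE" --rm -d flashphoner/webcallserver:latest
    ;;
  remove)
    # --rm above means the container is removed automatically once stopped
    docker stop "wcs-house-$HOUSE"
    ;;
  *)
    echo "Usage: $0 {add|remove} <house-number>" >&2
    exit 1
    ;;
esac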

Now we just have to launch WCS in Docker.

Easy peasy!

The Flashphoner Web Call Server image is already available on Docker Hub.

(screenshot: flashphoner/webcallserver on Docker Hub)

Deploying WCS comes down to two commands:

1. Downloading the current build from Docker Hub

docker pull flashphoner/webcallserver

2. Running a Docker container with a trial or commercial license number

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
--name wcs-docker-test --rm -d flashphoner/webcallserver:latest

where:

  • PASSWORD – password for access to the container via SSH. If this variable is not defined, it will not be possible to get into the container via SSH;
  • LICENSE – WCS license number. If this variable is not defined, the license can be activated via the web interface.
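Once the container is up, a couple of standard Docker commands can confirm it started and show the server boot log (the container name matches the --name flag above):

docker ps --filter name=wcs-docker-test
docker logs -f wcs-docker-test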

 

But if everything had been so simple, I wouldn’t have written this article.

It looks beautiful on paper…

I install Docker on my local machine running Ubuntu Desktop 20.04 LTS:

sudo apt install docker.io
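To make sure the installation succeeded, it is worth running a standard smoke test first:

sudo docker --version
sudo docker run hello-world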

Then I create a new internal Docker network called “testnet”:

sudo docker network create \
--subnet 192.168.1.0/24 \
--gateway=192.168.1.1 \
--driver=bridge \
--opt com.docker.network.bridge.name=br-testnet testnet
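The result can be checked with a standard inspect call; the output should list the subnet and gateway configured above:

sudo docker network inspect testnet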

After that, I download an up-to-date WCS build from Docker Hub

sudo docker pull flashphoner/webcallserver

And launch a WCS container

sudo docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=192.168.1.10 \
--net testnet --ip 192.168.1.10 \
--name wcs-docker-test --rm -d flashphoner/webcallserver:latest

Here, the variables are as follows:

  • PASSWORD – password for access to the container via SSH. If this variable is not defined, it will not be possible to get into the container via SSH;
  • LICENSE – WCS license number. If this variable is not defined, the license can be activated via the web interface;
  • LOCAL_IP – the IP address of the container on the Docker network, which will be written to the ip_local parameter in the settings file flashphoner.properties;
  • the --net flag specifies the network in which the launched container will run. Here, the container runs on the testnet network.

 

Then I check the container’s availability by pinging it:

ping 192.168.1.10

(screenshot: pinging the Docker container)

I open the WCS web interface in a local browser at https://192.168.1.10:8444 and test publishing a WebRTC stream using the “Two Way Streaming” example. It works.

(screenshot: publishing and playing a stream on the Ubuntu machine)

Locally, I now have access to the WCS server from the computer where Docker is installed. Now I need to give my colleagues access.

This is where I first face obstacles.

Obstacle #1

Docker’s internal network is isolated: from inside the Docker network there is access “to the world”, but the Docker network itself is not accessible “from the world”.

It turns out that in order to give colleagues access to the test bench in Docker on my machine, I would have to give them console access to my machine. For testing within a development group, this might be acceptable at a pinch. But I really wanted to put it all into production. Do the billions of containers all over the world only work locally?

Of course they don’t. The answer was found by digging through the manuals: you need to forward ports. Moreover, the port forwarding needs to be done not on the network router, but in Docker itself.

Great! The list of ports is known, so we forward them:

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=192.168.1.10 \
-e EXTERNAL_IP=192.168.23.6 \
-d -p8444:8444 -p8443:8443 -p1935:1935 -p30000-33000:30000-33000 \
--net testnet --ip 192.168.1.10 \
--name wcs-docker-test --rm flashphoner/webcallserver:latest

We use the following variables in this command:

  • PASSWORD, LICENSE, and LOCAL_IP have all been covered above;
  • EXTERNAL_IP – the IP address of the external network interface. It is written to the ip parameter in the settings file flashphoner.properties;
  • In addition, the -p flags appear in the command: this is the port forwarding (see the note on UDP below). In this iteration, I use the same “testnet” network that I created earlier.
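One caveat worth adding here (my note, not part of the original port list): docker publishes ports over TCP by default, while WebRTC media travels over UDP, so the media range most likely needs an explicit /udp mapping as well. A sketch of the same command with both protocols forwarded:

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=192.168.1.10 \
-e EXTERNAL_IP=192.168.23.6 \
-d -p 8444:8444 -p 8443:8443 -p 1935:1935 \
-p 30000-33000:30000-33000/tcp \
-p 30000-33000:30000-33000/udp \
--net testnet --ip 192.168.1.10 \
--name wcs-docker-test --rm flashphoner/webcallserver:latest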

 

In a browser on another computer, I open https://192.168.23.6:8444 (the IP address of my Docker machine) and launch the “Two Way Streaming” example.

(screenshot: playing the stream with port forwarding)

The WCS web interface works, and there is even WebRTC traffic.

And everything would be fine if not for the second and third obstacles, which came together this time.

Obstacles #2 and #3

It took about 10 minutes to start the container with port forwarding. In that time, I could have installed a couple of WCS copies manually. The delay occurs because Docker creates a separate binding for every port in the range.
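A mitigation that is often suggested for slow startup with large published port ranges (I note it here as an option; I did not end up relying on it) is to disable Docker’s userland proxy, so that published ports are handled by iptables rules instead of one docker-proxy process per port:

# Caution: this overwrites an existing /etc/docker/daemon.json; merge by hand if you have one
echo '{ "userland-proxy": false }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker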

When I try to start a second container with the same list of ports, I predictably get an error saying the port range is already taken.

It turns out that the port forwarding option does not suit me: the container starts slowly, and the ports have to be changed to start the second and subsequent containers.

After googling, I found a thread on GitHub where a similar problem was discussed. In that discussion, it was recommended to use the host network to run containers that work with WebRTC traffic.

I launch the container on the host network (indicated by the --net host flag):

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=192.168.23.6 \
-e EXTERNAL_IP=192.168.23.6 \
--net host \
--name wcs-docker-test --rm -d flashphoner/webcallserver:latest

Great! The container starts up quickly, and everything works from an external machine: the web interface, and publishing and playing WebRTC traffic.


Then I launch a couple more containers. Fortunately, there are several network cards on my computer.

Here I could draw a line under the whole thing, but I was bothered by the fact that the number of containers on the host would be limited by the number of network interfaces.

The method that I use in production

Since version 1.12, Docker has provided two network drivers: Macvlan and IPvlan. They allow you to assign containers static IPs from the LAN.

Macvlan allows an arbitrary number of containers to share one physical network interface of the host machine, each container with its own MAC address.

Requires a Linux kernel v3.9–3.19 or 4.0+.
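For reference, creating a Macvlan network looks very similar to the IPvlan command shown later in this article; a sketch with my subnet values:

docker network create -d macvlan -o parent=enp0s3 \
--subnet 192.168.23.0/24 \
--gateway 192.168.23.1 \
macvlan-testnet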

IPvlan allows creating an arbitrary number of containers on your host machine that all share the same MAC address.

Requires a Linux kernel v4.2+ (there is support for earlier kernels, but it is buggy).
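The running kernel version is easy to check before choosing a driver:

uname -r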

I used the IPvlan driver in my installation. Partly it happened historically, and partly I expected to transfer the infrastructure to VMware ESXi. The point is that VMware ESXi allows only one MAC address per port, so Macvlan technology is not suitable in this case.

So: I have an enp0s3 network interface that gets an IP address from a DHCP server.

(screenshot: the enp0s3 network adapter)

On my network, addresses are issued by a DHCP server, while Docker chooses and assigns addresses on its own. This can lead to conflicts if Docker picks an address that the DHCP server has already assigned to another host on the network.

To avoid this, we need to reserve part of the subnet range for Docker’s use. This solution has two parts:

  1. Configuring the DHCP service on the network so that it does not assign addresses in a specific range.
  2. Telling Docker about this reserved address range.

 

In this article, I won’t tell you how to configure a DHCP server; I think every IT specialist has come across this more than once in their practice, and there are plenty of manuals online.

But we will analyze in detail how to tell Docker which range is allocated to it.

I have limited the range of DHCP server addresses so that it does not issue addresses higher than 192.168.23.99. Let’s give Docker 32 addresses, starting from 192.168.23.100 (one caveat: CIDR blocks are aligned, so the 192.168.23.100/27 range below effectively means 192.168.23.96/27, i.e. .96–.127; capping DHCP at .95 avoids any overlap).

Then we create a new Docker network called “new-testnet”:

docker network create -d ipvlan -o parent=enp0s3 \
--subnet 192.168.23.0/24 \
--gateway 192.168.23.1 \
--ip-range 192.168.23.100/27 \
new-testnet

where:

  • ipvlan is the network driver type;
  • parent=enp0s3 is the physical network interface (enp0s3) through which the container traffic will go;
  • --subnet is the subnet;
  • --gateway is the default gateway for the subnet;
  • --ip-range is the range of subnet addresses that Docker may assign to containers.
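As before, the result can be verified with an inspect call, which should report the ipvlan driver and the reserved IP range:

docker network inspect new-testnet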

 

Then we launch a container with WCS on this network:

docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=192.168.23.101 \
-e EXTERNAL_IP=192.168.23.101 \
--net new-testnet --ip 192.168.23.101 \
--name wcs-docker-test --rm -d flashphoner/webcallserver:latest

Check the operation of the web interface and publishing/playing WebRTC traffic using the “Two Way Streaming” example.

This approach has one small drawback: when using the IPvlan or Macvlan technologies, Docker isolates the container from the host. If, for example, you try to ping a container from the host, all packets will be lost.

(screenshot: ping from the host to the container fails due to isolation)

But for my current task of running WCS in a container, this is not critical: we can always ping or SSH from another machine.
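That said, if host-to-container access is ever needed, a commonly used workaround is to give the host its own ipvlan interface on the same parent NIC and route the container range through it. A sketch; the interface name and the host-side address are my assumptions:

# Create an ipvlan interface on the host, tied to the same parent NIC
sudo ip link add ipvl0 link enp0s3 type ipvlan mode l2
# Assign a spare address that neither DHCP nor Docker will hand out
sudo ip addr add 192.168.23.200/32 dev ipvl0
sudo ip link set ipvl0 up
# Route the aligned /27 block containing the Docker range through it
sudo ip route add 192.168.23.96/27 dev ipvl0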

(screenshot: SSH into the Docker container from another machine)

Using IPvlan technology, we can raise the required number of containers on a single Docker host. This number is limited only by host resources and, in part, by the network addressing of the particular network.
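For example, a handful of containers with sequential static IPs can be raised in a single loop. A sketch; it assumes the new-testnet network created above and a license that allows several activations:

for i in 1 2 3 4 5; do
docker run \
-e PASSWORD=password \
-e LICENSE=license_number \
-e LOCAL_IP=192.168.23.10$i \
-e EXTERNAL_IP=192.168.23.10$i \
--net new-testnet --ip 192.168.23.10$i \
--name "wcs-docker-$i" --rm -d flashphoner/webcallserver:latest
done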

Conclusion

Running containers in Docker can seem challenging, but only to beginners. Once you understand the technology a little, you come to appreciate how simple and convenient it is. I really hope that my experience will help some of you appreciate containerization.

Links

WCS in Docker