I’ve read that standard containers are optimized for developer productivity and not security, which makes sense.
But then what would be ideal to use for security? Suppose I want to isolate environments from each other for security purposes, to run questionable programs or reduce attack surface. What are some secure solutions?
Something without the performance hit of VMs
It is the application Docker that is not secure. Containers are. In fact, Docker runs a daemon as root to which you communicate from a client. This is what makes it less secure: running with root power. It also has a few shortcomings around privileged containers. This can easily be solved by using podman and SELinux. If you can manage to run Docker rootless, then you are orders of magnitude more secure.
Do you think Podman is ready to take over Docker? My understanding is that Podman is Docker without the root requirement.
Yes it is. I’ve been using it for more than a year now. Works reliably. Has pod support as well.
Great. I don’t know enough to use either, but I think I’m going to try to lean on podman from the get-go. In any case, I know that all podman commands are identical to Docker’s, such that you can replace, say, `docker compose` with `podman compose` and move on with ease.

With the specific exception of `podman compose`, I completely agree. I haven’t tested it for a while, but `podman compose` has had issues with compose file syntax in my experience, especially with network configs.
However, I have been using “docker-compose” with podman’s docker compatible socket implementation when necessary, with great success
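A sketch of that setup, assuming a systemd user session (unit names and socket paths may differ on your distro):

```shell
# Enable podman's Docker-compatible API socket for the current user
# (rootless), then point the regular docker-compose client at it.
systemctl --user enable --now podman.socket

# Docker clients honor DOCKER_HOST; this is podman's rootless socket path
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"

docker-compose up -d
```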
Related to this: can podman completely replace Docker? I.e., can it pull and build containers in addition to running them?
I believe it can but don’t take my word for it
It can pull and build containers fine, but last time I tried there were some differences. Mounts were not usable because user UID/GID mapping behaves quite differently. Tools like Portainer don’t work on podman containers. I haven’t tried out any networking or advanced stuff yet.
But I found that the considerations for writing Dockerfiles are quite different for podman.
Differences you find could be related to containers being run rootless, or to the host system having SELinux enforced. Both problems could be intended behavior and can be solved simply by adding the correct labels, like `:z` or `:Z`, to mount points. This SELinux feature also affects Docker when it is set up.
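As a quick sketch of the relabeling (directory and image are placeholders):

```shell
# :Z relabels the bind mount privately for this container only;
# :z would share the label between multiple containers.
podman run --rm -v ./data:/data:Z alpine ls /data
```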
Portainer tries to connect to a Docker socket path that is not the same as Podman’s. While podman is rootless and does not need a daemon, sockets and the like, it supports them nevertheless. So you can simply adjust the Portainer config to work with podman. I haven’t tried it yet, but I managed to do similar things for other software.
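A sketch of what that adjustment might look like, assuming the rootless podman.socket user unit (container name and port are illustrative):

```shell
# Expose podman's Docker-compatible socket, then hand it to Portainer
# at the path Portainer expects (/var/run/docker.sock inside the container)
systemctl --user enable --now podman.socket

podman run -d --name portainer -p 9000:9000 \
  -v "$XDG_RUNTIME_DIR/podman/podman.sock:/var/run/docker.sock" \
  portainer/portainer-ce
```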
Podman supports dockerfile, right?
Yes.
Unlike docker, podman doesn’t try to do everything on its own. There’s a separate tool known as `buildah` which builds containers from Dockerfiles just fine.

Ps. More generally, they’re called Containerfiles.
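As a quick sketch (the tag name is made up):

```shell
# buildah's "bud" (build-using-dockerfile) reads a Containerfile/Dockerfile
# from the current directory; podman build wraps the same functionality.
buildah bud -t myapp:latest .
# equivalently:
podman build -t myapp:latest .
```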
Gotcha. I use Docker containers on computing clusters at the university, but because of security, I have to convert them to Singularity containers. That is okay, but I was hoping that by running podman I could avoid this extra step.
There can also be old images with, e.g., old OpenSSL versions still in use. It’s not a concern if they are updated frequently, but that’s still manual.
This is a problem of the containerized program and the image itself. This problem affects containers, VMs, and bare metal as well.
I agree. But IMO these use cases are better known and more mature in traditional setups: we could `apt update && apt upgrade` and restart a systemd service, and it’s done. It’s not so obvious, and there are no equivalent mechanisms for containers/images.
(I am not into devops/sysadmin, so this might also be my lack of exposure)
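The traditional flow being described is roughly this (the service name is hypothetical):

```shell
# Refresh package lists, apply pending updates, restart the affected service
sudo apt update && sudo apt upgrade -y
sudo systemctl restart myservice
```

For what it’s worth, podman does ship an `auto-update` subcommand aimed at this gap, though it requires containers to carry an autoupdate label and run under systemd units.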
Most often, images are updated automatically and are managed by the developers themselves so images are usually up to date. If you don’t know how to build images, it may be difficult for you to update the containerized software before the vendor does. But this situation is infrequent.
Many projects just pull in a bunch of images from wherever and never update them. Especially if it’s that one obscure image that happens to package the obscure app you absolutely need.
Where did you read that and which arguments did the authors make?
Many times, the configuration of Docker is the issue, e.g. mounting stuff like files from `/etc/` or the Docker socket from the outside, using insecure file permissions, or running the application as the `root` user.

If you use rootless Docker or Podman, you have already eliminated one of the security risks. The same goes for the other things mentioned.
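A minimal illustration of dropping root inside the container (image and IDs are arbitrary):

```shell
# Run the containerized process as an unprivileged UID/GID instead of root
podman run --rm --user 1000:1000 alpine id
```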
What exactly do you mean by “questionable programs”? If you want to run malware, you shouldn’t do so in an environment where it can break out of anything. There’s the possibility of hardware virtualisation, which prevents many of the possible breakouts, but even then, exploits have been found.
You’re really only secure if you run questionable software on an airgapped computer with no speakers and never run anything else on it.
What would be your use case?
There are multiple use cases I have in mind, but one of them is running proprietary software I don’t outright trust. For example, Zoom video conferencing for work, or Steam for games.
Docker isn’t built to run these types of programs. You should look into sandbox environments to test these apps.
Try firejail and flatseal for that.
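For instance, a hedged one-liner (this assumes a firejail profile exists for the app, which it does for many common desktop programs):

```shell
# --private gives the app a throwaway home directory for this session
firejail --private zoom
```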
How come no speakers? Is it to prevent your ears from being blasted just in case, or is there malware that can be transmitted through audio?
Dang, that’s wild. There is some insane malware out there.
All recent CPUs have native virtualization support, so there’s close to no performance hit on VMs.
That being said, even a VM is subject to exploits and malicious code could break out of the VM down to its hypervisor.
The only secure way of running suspicious programs starts with an air-gapped machine and a cheap HDD/SSD that will go straight under the hammer as soon as testing is complete. And I’d be wondering even after that if maybe the BIOS might have been compromised.
On a lower level of paranoia and/or threat, a VM on an up-to-date hypervisor with a snapshot taken before doing anything questionable should be enough. You’d then only have to fear a zero day exploit of said hypervisor.
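With libvirt/KVM, that workflow might look like this (VM and snapshot names are made up):

```shell
# Take a snapshot of a known-clean state before testing anything
virsh snapshot-create-as testvm clean --description "before testing"

# ... run the suspicious software inside the VM ...

# Roll the VM back to the clean snapshot afterwards
virsh snapshot-revert testvm clean
```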
Each VM needs a complete OS, though. Even at 100% efficiency, that’s still a whole kernel+userspace just idling around and a bunch of caches, loaded libraries, etc. Docker is much more efficient in that regard.
And LXC even more efficient in that regard.
Docker does load a bunch of stuff that most people don’t need for their project.
I don’t know why LXC is always the red-headed stepchild. It works wonderfully.
Docker has an additional issue, but not one unique to Docker. Like Flatpak, pip, Composer, npm, or even back to CPAN and probably further, as a third-party source of installed software, it breaks the single source of truth when we want to examine the installed state of applications on a given host.
I’ve seen iso27002/12.2.1f, I’ve seen supply-chain management in action to massive benefit for uptime, changes, validation and rollback, and it’s simplified the work immensely.
.1.3.6.1.2.1.25.6.3
If anyone remembers dependency hell - which is always self-inflicted - then this should be Old Hat.
HAVING SAID THAT, I’ve seen docker images loaded as the entire, sole running image, apparently over a razor-thin bmc-sized layer, on very small gear, to wondrous effect. But - and this is how VMware did it - a composed bare micro-image with Just Enough OS to load a single container on top, may not violate 27002 in that circumstance.
Try using LXD
Just to add to the already existing and much more comprehensive comments, here’s an interesting solution which Flatpak uses: https://wiki.archlinux.org/title/Bubblewrap
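A minimal Bubblewrap invocation, as a sketch (bind mounts and the shell to launch are adjustable to taste):

```shell
# Read-only view of the host root, fresh /dev, /proc and /tmp, and all
# namespaces (including network) unshared from the host.
bwrap --ro-bind / / \
      --dev /dev \
      --proc /proc \
      --tmpfs /tmp \
      --unshare-all \
      bash
```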