Hi! I'm new to self-hosting. Currently I am running a Jellyfin server on an old laptop, and I am very curious to host other things in the future, like Immich or other services. I see a lot of mention of a program called Docker.

Searching for this on the internet, I am still not very clear on what it does.

Could someone explain this to me like I'm stupid? What does it do, and why would I need it?

Also, what are other services that might be interesting to self-host in the future?

Many thanks!

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

  • Cenzorrll@lemmy.world · 3 days ago

    EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

    This is pretty much what I’ve started doing. Containers have the wonderful benefit that if you don’t like one, you just delete it. If you install on bare metal (at least on Linux), you can end up with a lot of extra packages getting installed and configured that could affect your system in the future. With containers, all those specific extras are bundled together and removed at the same time without having any effect on your base system, so you’re always at your clean OS install.

    I will also add one irritation with Docker containers: anything you create inside a container that isn’t kept on a shared volume gets destroyed when the container is recreated. The container only keeps the maintainer’s setup. For instance, I do occasional encoding of videos in a HandBrake container, and I can’t save any profiles I make within that container, because they live inside the container rather than on a shared volume and get wiped the next time I recreate it.
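
    The usual workaround is to bind-mount a host folder over wherever the app keeps its settings, so they survive the container being recreated. Something like this (the paths and image name are placeholders; the image’s docs list the real config path):

      # keep the app's settings on the host so profiles survive recreation
      docker run -d --name handbrake \
        -v /home/me/handbrake-config:/config \
        example/handbrake:latest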

      • Cenzorrll@lemmy.world · 23 hours ago

        Agreed. I just spent a week (very intermittently) trying to figure out where all my free space had gone; it turned out to be a bunch of abandoned Docker volumes taking it up. I have 32 GB on my laptop, so space is at an absolute premium.

        I guess I learned my lesson about trying out docker containers on my laptop just to check them out.
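
        For anyone else chasing missing space, Docker can report and reclaim it itself; just note that prune deletes data, so only run it once you’re sure nothing you care about lives in an unnamed volume:

          # show how much space images, containers and volumes are using
          docker system df
          # list volumes no longer referenced by any container
          docker volume ls -f dangling=true
          # delete those unreferenced volumes (irreversible)
          docker volume prune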

  • Professorozone@lemmy.world · 3 days ago

    I’ve never posted on Lemmy before. I tried to ask this question of the greater community but I had to pick a community and didn’t know which one. This shows up as lemmy.world but that wasn’t an option.

    Anyway, what I wanted to know is why do people self host? What is the advantage/cost? Sorry if I’m hijacking. Maybe someone could just post a link or something.

    • irmadlad@lemmy.world · 3 days ago

      Anyway, what I wanted to know is why do people self host?

      Wow. That’s a whole separate thread on its own. I self-host a lot of my services because I am a staunch privacy advocate, and I really have a problem with corporations using my data to further bolster their profit margins without giving me due compensation. I also self-host because I love to tinker and learn. The learning aspect is something I really get into. At my age it is good to keep the brain active, so I self-host, create bonsai, garden, etc. I’ve always been into technology, from the early days of thumbing through Pop Sci and Pop Mech magazines, which evolved into thumbing through Byte mags.

    • Domi@lemmy.secnd.me · 3 days ago

      Anyway, what I wanted to know is why do people self host?

      For the warm and fuzzy feeling I get when I know all my documents, notes, calendars, contacts, passwords, movies/shows/music, videos, pictures and much more are stored safely in my basement and belong to me.

      Nobody is training their AI on them, nobody is trying to use them for targeted ads, nobody is selling them. They’re just for me.

    • CocaineShrimp@lemm.ee · 3 days ago

      Yeah, 100% a whole separate post on its own. If you ask the same question in a new post, you’ll get more visibility and more answers

      • Professorozone@lemmy.world · 1 day ago

        As I mentioned, I didn’t really know where to post it. I guess my lemmy-foo isn’t up to snuff. I saw that this appears to be in lemmy.world, but only 10 options came up when I tried to post and none of them really seemed right. Advice?

    • James R Kirk@startrek.website · 3 days ago

      People are talking about privacy, but the big reason is that it gives you, the owner, control over everything, quickly and without ads or other unneeded stuff. We are so used to apps being optimized for revenue and not being interoperable with other services that it’s easy to forget the single biggest advantage of computers: programs and apps can work together quickly and quietly in the background. Companies provide products; self-hosting provides tools.

    • sugar_in_your_tea@sh.itjust.works · 3 days ago

      It usually comes down to privacy and independence from big tech, but there are a ton of other reasons you might want to do it. Here are some more:

      • preservation - no longer have to care if Google kills another service
      • cost - over time, Jellyfin could be cheaper than a Netflix sub
      • speed - copying data on your network is faster than to the internet
      • hobby - DIY is fun for a lot of people

      For me, it’s a mix of several reasons.

  • Professorozone@lemmy.world · 1 day ago

    Thank you for the thorough response. After looking carefully at what you wrote, I didn’t really see a difference between the terms “self-hosting” and “home network.”

    You said you have software that automatically downloads media. The way I see it, using movies as an example: if I own the movies and have them on my machine, then I can stream them over my network and have full control. Whereas if I “own” them on Amazon and stream them from there, Amazon can track the viewing experience, push ads, or even remove the content completely. I understand that. But if I want a NEW movie, I’m back to Amazon to get it in the first place (or Netflix, or Walmart, etc., I get it). In fact, I’ve personally started buying discs of the movies/music I like most so they can’t really be taken away and I can enjoy them even without an internet connection. Am I missing something? Unless, of course, the media you are downloading is pirated.

    I know I’m asking what seems to be a huge question but I’m really only asking for a broad description, sort of an ELI5 thing.

  • Professorozone@lemmy.world · 2 days ago

    Wow! Thank you all for the civilized responses. This all sounds so great. I am older and I feel like I’ve already seen enough ads for one lifetime and I hate all this fascist tracking crap.

    But how does that work? Is it just a network on which you store your stuff in a way that you can download it anywhere or can it do more? I mean, to me that’s just a home network. Hosting sounds like it’s designed for other people to access. Can I put my website on there? If so, how do I go about registering my domain each year? I’m not computer illiterate but this sounds kind of beyond my skill level. I’ll go search Jellyfin, weird name, and see what I can find. Thanks again!

    • y0kai@lemmy.dbzer0.com · 2 days ago

      You’re asking a lot of questions at one time, and you’ll be better served by understanding that you’re knocking at the door of a very deep rabbit hole.

      That said, I’ll try to give you the basic idea here and anyone who can correct me, please do so! I doubt I’ll get everything correct and will probably forget some stuff lol.

      So, self hosting really just means running the services you use on your own machine. There’s some debate about whether hosting on a cloud server - where someone else owns and has physical access to the machine - counts as self hosting. For the sake of education, and because I’m not a fan of gatekeeping, I say it does count.

      Anyway, when you’re running a server (a machine, real or virtualized, that is running a program connected to a network and can usually be accessed by other machines on that network), who and what you share with other machines on your network or other networks is ultimately up to you.

      When using a “hosted” service, where another entity manages the server (not just the hardware, but the software and administration too; think Netflix as opposed to Jellyfin, i.e. the opposite of self-hosting), your data and everything you do on or with that service belongs to the service provider and network owners. Your “saved” info is stored on their disks in their data center. There are of course exceptions, and companies that offer better infrastructure and privacy options, but that’s the gist of non-self-hosted services.

      To your specific questions:

      But how does that work?

      Hopefully the above helps, but this question is pretty open ended lol. Your next few questions are more pointed, so I’ll try to answer them better.

      Is it just a network on which you store your stuff in a way that you can download it anywhere or can it do more?

      Well, kind of. If you’re hosting on a physical machine that you own, your services will be accessible to any other machine on your home network (unless you segment your network, which is another conversation for another time) and should not, by default, be accessible from the internet. You will need to be at home, on your own network to access anything you host, by default.

      As for storage of your data, self-hosted services almost always default to local storage. This means you can save anything you’re doing on the hard drive of the machine the server is running on. Alternatively, if you have a network drive, you can store it on another machine on your network. Some services will also let you connect to cloud storage (on someone else’s machine somewhere else). The beauty is that you decide where your data lives.
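
      As a made-up example, the same service can keep its data on the local disk or on a mounted network share just by changing the host side of the volume mapping (the paths and image name below are placeholders):

        # data on the local disk
        docker run -d -v /srv/photos:/data example/photo-app
        # same service, data on a NAS share mounted at /mnt/nas
        docker run -d -v /mnt/nas/photos:/data example/photo-app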

      I mean, to me that’s just a home network. Hosting sounds like it’s designed for other people to access. Can I put my website on there?

      Like almost anything with computers and networking, the defaults are changeable. You can certainly host a service on the internet for others to access. This usually involves purchasing the rights to a domain name, pointing that domain at your public IP address, and forwarding a port on your router so people can connect to your machine. This can be extremely dangerous if you don’t know what you’re doing and isn’t recommended without learning a lot more about network and cyber security.

      That said, there are safer ways to connect from afar. Personally, I use software called WireGuard. It allows devices I approve (like my phone, or my girlfriend’s laptop) to connect to my network when away from home through what is called an “encrypted tunnel” or a “virtual private network” (VPN). These can be a pain to set up the first time if you’re new to the tech, and there are easier solutions I’ve heard of but haven’t tried, namely Tailscale and Netbird, both of which use WireGuard under the hood but try to make the administration easier.
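
      If it helps to see the shape of it, the WireGuard side is only a couple of commands. This is just the key-generation step and bringing an interface up; the actual config file (addresses, peers, allowed IPs) is the part that takes the reading:

        # generate a private/public key pair for this machine
        umask 077
        wg genkey | tee wg-private.key | wg pubkey > wg-public.key
        # after writing /etc/wireguard/wg0.conf (interface + peers), bring it up
        sudo wg-quick up wg0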

      You can also look into reverse proxies and services like Cloudflare for accessing things away from home. These involve internet hosting, and security should be considered, like above. Anything that allows remote access will come with unique pros and cons that you’ll need to weigh and sort out for yourself.

      If so, how do I go about registering my domain each year?

      Personally, I use Porkbun.com for cheap domains, but there are tons of different providers. You’ll just have to shop around. To actually use the domain, I’m gonna be linking some resources lower in the post. If I remember correctly, landchad.net was a good resource for learning about configuring a domain but idk. There will be a few links below.

      I’m not computer illiterate but this sounds kind of beyond my skill level.

      It was beyond my skill level when I started too. It’s been nearly a year now and I have a service that automatically downloads media I want, such as movies, shows, music, and books. It stores them locally on a stack of hard drives, and I can access it all outside of my house with WireGuard as well. Further, I’ve got some smaller services, like a recipe book I share with my girlfriend and soon with friends and family. I’ve also started hosting my own AI, a network-wide ad-blocker, a replacement for Google Photos, a file-sharing server, and some other things that are escaping me right now.

      The point is that it’s only a steep hill while you’re at the bottom looking up. Personally, the hike has been more rejuvenating than tiresome, though I admit it takes patience, a bit of effort, and a willingness to learn, try new things, and fail sometimes.

      Never sweat the time it takes to accomplish a task. The time will pass either way and at the end of it you can either have accomplished something, or you’ll look back and say, “damn I could’ve been done by now.”

      I’ll go search Jellyfin, weird name, and see what I can find. Thanks again!

      Also check these out, if you’re diving in:

      YouTube:

      Guides:

      Tools:

      Hopefully this helps someone. Good luck!

  • 0^2@lemmy.dbzer0.com · 3 days ago

    Now compare Docker vs LXC vs Chroot vs Jails and the performance and security differences. I feel a lot of people here are biased without knowing the differences (pros and cons).

  • grue@lemmy.world · 3 days ago

    A program isn’t just a program: in order to work properly, the context in which it runs — system libraries, configuration files, other programs it might need to help it such as databases or web servers, etc. — needs to be correct. Getting that stuff figured out well enough that end users can easily get it working on random different Linux distributions with arbitrary other software installed is hard, so developers eventually resorted to getting it working on their one (virtual) machine and then just (virtually) shipping that whole machine.
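
    In practice, “shipping that whole machine” looks like this from the user’s side: one command pulls the app together with everything it was built against (the image name and port are the ones I believe Jellyfin publishes, so double-check their docs):

      # fetch the prepackaged environment and start it;
      # -p maps a port inside the container to one on the host
      docker pull jellyfin/jellyfin
      docker run -d --name jellyfin -p 8096:8096 jellyfin/jellyfin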

      • grue@lemmy.world · 3 days ago

        I’m aware of that, but OP requested “explain like I’m stupid” so I omitted that detail.

        • Pup Biru@aussie.zone · 3 days ago

          a chroot is different, but it’s an easy way to get an idea of what docker is:

          it also contains all the libraries and binaries that reference each other, so that any commands you run use the structure of the chroot rather than the host’s

          this is far more relevant to a basic understanding of what docker does than explaining kernel namespaces. once you have the knowledge of “shipping around applications including dependencies”, then you can delve into isolation and other kinds of virtualisation
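
          for the curious, a bare-bones chroot on a debian-ish host looks something like this (the target directory is arbitrary):

            # install a tiny debian userland into a directory, then "enter" it
            sudo debootstrap stable /srv/demo-root http://deb.debian.org/debian
            sudo chroot /srv/demo-root /bin/bash
            # inside, "ls /" shows the chroot's own filesystem, not the host's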

        • fishpen0@lemmy.world · 3 days ago

          Yes, technically chroot and jails are wrappers around kernel namespaces / cgroups and so is docker.

          But containers were born in a post chroot era as an attempt at making the same functionality much more user friendly and focused more on bundling cgroups and namespaces into a single superset, where chroot on its own is only namespaces. This is super visible in early docker where you could not individually dial those settings. It’s still a useful way to explain containers in general in the sense that comparing two similar things helps you define both of them.

          Also, cgroups have evolved alongside containers and work rather differently now compared to 18 years ago when they were invented, so this differentiation mattered more then than it does now. We’re at the point where differentiating between VMs and containers is getting really hard, since both rely more and more often on the same kernel features that were developed in recent years on top of cgroups.
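
          If you want to poke at the namespace side by hand, util-linux ships a tool for exactly that, no Docker involved:

            # start a shell in its own PID and mount namespaces;
            # "ps aux" inside it sees only this shell, not the host's processes
            sudo unshare --fork --pid --mount-proc bash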

    • I Cast Fist@programming.dev · 3 days ago

      So instead of having problems getting the fucking program to run, you have problems getting docker to properly build/run when you need it to.

      At work, I have one program that fails to build an image because a third-party package’s maintainers forgot to update their PGP signature; one that builds and runs, but for some reason gives a 404 error when I try to access it on localhost; and one that whoever the fuck made it clearly never ran, because the Dockerfile was missing some 7 packages in the apt install line.

      • turmacar@lemmy.world · 3 days ago

        Building from source is always going to come with complications. That’s why most people don’t do it. A Docker Compose file that ‘just’ downloads the stable release from a repo and starts it running is dramatically simpler than cross-referencing all your services to make sure there are no dependency conflicts.

        There’s an added layer of complexity under the hood to simplify the common use case.
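
        For a sense of scale, a whole service definition can be as small as this (the image name and paths are the ones I believe Jellyfin documents; every project lists its own):

          # docker-compose.yml
          services:
            jellyfin:
              image: jellyfin/jellyfin
              ports:
                - "8096:8096"
              volumes:
                - ./config:/config
                - ./media:/media
              restart: unless-stopped

        Then “docker compose up -d” starts it, and “docker compose pull && docker compose up -d” updates it.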

      • Nibodhika@lemmy.world · 3 days ago

        There are two ends here: using Docker and developing with it. As a user, Docker images just work, so you solve almost every problem you’re describing; those would otherwise be your users’ problems, and users who hit them give up on your software.

        Then, as a developer, Docker can get complicated, because you need to build a “system” from scratch to run your program. But an unstable third-party package or missing packages are problems that would otherwise be happening on the deployment servers instead of on your local machine, and each server would have its own set of problems depending on which packages it was missing or had at the wrong version; fixing that for your service might break another service already running there.

      • grue@lemmy.world · 3 days ago

        Yeah, it’s another layer, and so there definitely is an https://xkcd.com/927/ aspect to it… but (at least in theory) only having problems getting Docker (1 program) to run is better than having problems getting N programs to run, right?

        (I’m pretty ambivalent about Docker myself, BTW.)

    • akilou@sh.itjust.works · 3 days ago

      But why can I “just install a program” on my Windows machine or on my phone and have it be that easy?

      • kieron115@startrek.website · 3 days ago

        Caveat: I am not a programmer, just an enthusiast. Windows programs typically package all of the dependency libraries up with each individual program in the form of DLLs (dynamic link libraries). If two programs both require the same dependency, they each just keep a local copy in their directory.

      • GnuLinuxDude@lemmy.ml · 3 days ago

        You might notice that your Windows installation is like 30 gigabytes and there is a huge folder somewhere in the system path called WinSxS. Microsoft bends over backwards to provide you with basically all the versions of all the shared libs ever, resulting in a system that can run programs compiled decades ago just fine.

        In Linux-land usually we just recompile all of the software from source. Sometimes it breaks because Glibc changed something. Or sometimes it breaks because (extremely rare) the kernel broke something. Linus considers breaking the userspace API one of the biggest no-nos in kernel development.

        Even so, depending on what you’re doing you can have a really old binary run on your Linux computer if the conditions are right. Windows just makes that surface area of “conditions being right” much larger.

        As for your phone, all the apps that get built and run for it must target some specific API version (the amount of stuff you’re allowed to do is much more constrained). Android and iOS both basically provide compatibility for that stuff in a similar way to Windows, but the story is much less chaotic than on Linux and Windows (and even macOS), because your phone app is not allowed to do that much by comparison.

        • pressanykeynow@lemmy.world · 3 days ago

          In Linux-land usually we just recompile all of the software from source

          That’s just incorrect. Apart from three guys with nothing better to do, no one in “Linux-land” does that.

      • SirQuack@feddit.nl · 3 days ago

        In the case of phones, there’s less of a variety of operating systems and libraries.

        A typical Android app is (eventually) Java with some bundled dependencies and ties in to known system endpoints (for stuff like notifications and rendering graphics).

        For Windows, these installers are usually responsible for fetching the dependencies, which is why some installers are enormous (and most installers of that size are web installers, so the download looks smaller than it really is).

        Docker is more aimed at developers and server deployment, you don’t usually use docker for desktop applications. This is the area where you want to skip inconsistencies between environments, especially if these are hard to debug.

    • Scrollone@feddit.it · 3 days ago

      Isn’t all of this a complete waste of computer resources?

      I’ve never used Docker, but I want to set up an Immich server, and Docker is the only official way to install it. And I’m a bit afraid.

      Edit: thanks for downvoting an honest question. Wtf.

      • dustyData@lemmy.world · 3 days ago

        On the contrary. It relies on the premise of segregating binaries, config, and data. Since a container is only running one app, it is a bare-minimum version of a system. Most container systems also include elements that deduplicate common required binaries, so containers are usually very small and efficient, whereas a traditional system’s libraries can balloon to dozens of gigabytes, only pieces of which are used at any one time by different software. Containers can be made headless and barebones very easily, cutting the fat and leaving only the most essential libraries, and so they fit on very tiny and underpowered hardware without losing functionality or performance.

        Don’t be afraid of it, it’s like Lego but for software.

      • Nibodhika@lemmy.world · 3 days ago

        It’s not. Imagine Immich required library X to be at version Y, but another service on the server requires it to be at version Z. That will be a PitA to maintain. Not to mention that getting a service to run at all can be difficult for a multitude of reasons: your system is different from the one where the service was developed, so it might just not work, because it makes assumptions about where certain things live or which APIs are available.

        Docker eliminates all of those issues because it’s a reproducible environment: if it runs on one system, it runs on another. There’s a lot of value in that, and I’m not sure which resource you think is being wasted, but Docker is almost seamless, with so little overhead that you won’t feel it even on a Raspberry Pi Zero.

      • PM_Your_Nudes_Please@lemmy.world · 17 hours ago

        It can be, yes. One of the largest complaints with Docker is that you often end up running the same dependencies a dozen times, because each of your dozen containers uses them. But the trade-off is that you can run a dozen different versions of those dependencies, because each image shipped with the specific version they needed.

        Of course, the big issue with running a dozen different versions of dependencies is that it makes security a nightmare. You’re not just tracking exploits for the most recent version of what you have installed. Many images end up shipping with out-of-date dependencies, which can absolutely be a security risk under certain circumstances. In most cases the risk is mitigated by the fact that the services are isolated and don’t really interact with the rest of the computer. But it’s at least something to keep in mind.

      • sugar_in_your_tea@sh.itjust.works · 3 days ago

        The main “wasted” resources here are storage space and maybe a bit of RAM; the actual runtime overhead is very limited. It turns out storage and RAM are some of the cheapest resources on a machine, and you probably won’t notice the extra usage.

        VMs are heavy, Docker containers are very light. You get most of the benefits of a VM with containers, without paying as high of a resource cost.

      • couch1potato@lemmy.dbzer0.com · 3 days ago

        I’ve had immich running in a VM as a snap distribution for almost a year now and the experience has been leaps and bounds easier than maintaining my own immich docker container. There have been so many breaking changes over the few years I’ve used it that it was just a headache. This snap version has been 100% hands off “it just works”.

        https://snapcraft.io/immich-distribution

        • AtariDump@lemmy.world · 3 days ago

          Interesting idea (snap over docker).

          I wonder, does using snap still give you the benefit of not having to maintain specific versions of 3rd party software?

          • couch1potato@lemmy.dbzer0.com · 3 days ago

            I don’t know too much about snap (I literally haven’t had to touch my immich setup) but as far as I remember when I set it up that was snap’s whole thing - it maintains and updates itself with minimal administrative oversight.

          • Colloidal@programming.dev · 3 days ago

            Snap is like Flatpak, so it will store and maintain as many versions of dependencies as your applications need. It gives you that benefit by automating the work for you; the multiple versions still exist if your apps depend on different versions.

      • Encrypt-Keeper@lemmy.world · 3 days ago

        If it were actual VMs, it would be a huge waste of resources. That’s really the purpose of containers. It’s functionally similar to running a separate VM specific to every application, except you’re not actually virtualizing an entire system like you are with a VM. Containers are actually very lightweight. So much so, that if you have 10 apps that all require database backends, it’s common practice to just run 10 separate database containers.

  • state_electrician@discuss.tchncs.de · 3 days ago

    Docker is a set of tools that make it easier to work with some features of the Linux kernel. These kernel features allow several degrees of separation between processes. For example, by default each Docker container you run sees its own file system and is unable to interact (read: mess) with the original file system on the host or with other Docker containers. Each Docker container is, in the end, a single executable with all its dependencies bundled into an archive file, plus some Docker-related metadata.
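
    You can see that isolation with nothing but the stock CLI (alpine is just a convenient tiny image):

      # start a throwaway Alpine container with an interactive shell
      docker run --rm -it alpine sh
      # inside it, "ls /" shows the container's own root filesystem, not the host's,
      # and "ps aux" shows only the container's own processes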

  • Matt@lemmy.ml · 3 days ago

    It’s a platform that runs all of your services in containers, which means they are separated from your system.

    Also what are other services that might be interesting to self host in The future?

    Nextcloud, the Arr stack, your future app, etc etc.

  • jagged_circle@feddit.nl · 3 days ago

    It’s an extremely fast and insecure way to set up services. Avoid it unless you want to download and execute malicious code.

      • jagged_circle@feddit.nl · 1 day ago

        Package managers like apt use cryptography to check signatures on everything they download, to make sure it isn’t malicious.

        Docker doesn’t do this. It has a system called DCT, but it’s horribly broken (not to mention off by default).

        So when you run docker pull, you can’t trust anything it downloads.
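
        (For reference, the off-by-default switch is an environment variable; turning it on makes docker pull refuse tags that aren’t signed, for whatever that’s worth given the above.)

          # opt in to Docker Content Trust for this shell session
          export DOCKER_CONTENT_TRUST=1
          docker pull debian:stable-slim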

        • Darioirad@lemmy.world · 1 day ago

          Thank you very much! I can agree with the off-by-default part, but why is it horribly broken?

          • jagged_circle@feddit.nl · 1 day ago

            PKI.

            Apt and most release signing have a root of trust shipped with the OS, and the PGP keys are cross-signed on keyservers (web of trust).

            DCT is just TOFU (trust on first use). They disable it because it gives a false sense of security. Docker is just not safe. Maybe in 10 years they’ll fix it, but honestly it seems like they just don’t care. The well is poisoned. Avoid it. Use apt or some package manager that actually cares about security.

            • Darioirad@lemmy.world · 23 hours ago

              So, if I understand correctly: rather than using prebuilt images from Docker Hub or untrusted sources, the recommended approach is to start from a minimal base image of a known OS (like Debian or Ubuntu), and explicitly install required packages via apt within the Dockerfile to ensure provenance and security. Does that make sense?
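
              Something like this is what I have in mind (the package and image names are just placeholders for whatever the service actually is):

                # Dockerfile: known Debian base, everything pulled in through apt
                FROM debian:stable-slim
                RUN apt-get update \
                    && apt-get install -y --no-install-recommends some-service \
                    && rm -rf /var/lib/apt/lists/*
                CMD ["some-service"]

              Built locally with “docker build -t local/some-service .”, so the only thing coming from a registry is the Debian base image.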

              • jagged_circle@feddit.nl · 21 hours ago

                Install the package with apt. Avoid docker completely.

                If the Docker image maintainer has a GitHub, open a ticket asking them to publish a Debian package.

                • Darioirad@lemmy.world · 10 hours ago

                  I see your point about trusting signed Debian packages, and I agree that’s ideal when possible. But Docker and APT serve very different purposes: one is for OS-level package management, the other for containerization and isolation. That’s actually where I got a bit confused by your answer; it felt like you were comparing tools with different goals (due to my limited knowledge). My intent isn’t just to install software, but to run it in a clean, reproducible, and isolated environment (maybe more than one on the same host machine). That’s why I’m considering building my own container from a minimal Debian base and installing everything via apt inside it, to preserve trust while still using containers responsibly! Does that make sense to you? Thank you again for taking the time to reply to my messages.

          • ianonavy@lemmy.world · 2 days ago

          A signature only tells you where something came from, not whether it’s safe. Saying APT is more secure than Docker just because it checks signatures is like saying a mysterious package from a stranger is safer because it includes a signed postcard and matches the delivery company’s database. You still have to trust both the sender and the delivery company. Sure, it’s important to reject signatures you don’t recognize—but the bigger question is: who do you trust?

          APT trusts its keyring. Docker pulls over HTTPS with TLS, which already ensures you’re talking to the right registry. If you trust the registry and the image source, that’s often enough. If you don’t, tools like Cosign let you verify signatures. Pulling random images is just as risky as adding sketchy PPAs or running curl | bash—unless, again, you trust the source. I certainly trust Debian and Ubuntu more than Docker the company, but “no signature = insecure” misses the point.

          Pointing out supply chain risks is good. But calling Docker “insecure” without nuance shuts down discussion and doesn’t help anyone think more critically about safer practices.

            • jagged_circle@feddit.nl · 1 day ago

              Oof, TLS isn’t a replacement for signatures. There’s a reason most package managers use release signatures. X.509 is broken.

              And yes, PGP has a web of trust to solve its PKI problem. That’s why we can trust apt sigs and not Docker sigs.

    • festus@lemmy.ca · 3 days ago

      Entirely depends on who’s publishing the image. Many projects publish their own images, in which case you’re running their code regardless.