I’ve got forgejo configured and running as a custom docker app, but I’ve noticed there’s a community app available now. I like using the community apps when available since I can keep them updated more easily than having to check/update image tags.
Making the switch would mean migrating from SQLite to Postgres, plus some amount of file restructuring. It would also tie my setup to TrueNAS, which is a platform I like, but after being bitten by TrueCharts I’m nervous about getting too attached to any platform.
Has anyone made a similar migration and can give suggestions? All I know about the postgres config is where the data is stored, so I’m not even sure how I’d connect to import anything. Is there a better way to get notified about/apply container images for custom apps instead?
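(For reference, one common route for the SQLite-to-Postgres step is a converter tool like pgloader, then pointing Forgejo at the new database in its `app.ini`. The host, names, and password below are placeholders, not anything from this thread:)

```ini
; Hypothetical Forgejo app.ini [database] section after the move --
; all values are placeholders for your own setup
[database]
DB_TYPE = postgres
HOST    = postgres:5432
NAME    = forgejo
USER    = forgejo
PASSWD  = changeme
```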
nah you’re probably not going to get any benefits from it. The best way to make your setup more maintainable is to start putting your compose/kubernetes configuration in git, if you’re not already.
I don’t want to derail this thread, but you piqued my interest in something I’ve always wanted to do, maybe just for the learning aspect, and to see what I could accomplish.
I’ve always wanted to see if I could have all my docker compose/run files, and various associated files, in a git repo so I could just reinitialize a server with everything I already had installed previously. So I could just fire up a script and have it pull all my config files, docker images, the works from the repo, and set up a server with basically one initial script. I have never used GitHub or others of that genre, except for installation instructions for a piece of software, so I’m a little lost on how I would set that up or whether there are better options.
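As a rough sketch of the "one initial script" idea: this assumes a hypothetical layout of one directory per service, each holding a compose file, and a placeholder repo URL — not any standard convention.

```shell
#!/usr/bin/env sh
# Hypothetical bootstrap sketch: walk a one-directory-per-service repo
# and start each compose stack it finds.
# bring_up_stacks TARGET_DIR [RUNNER]
#   RUNNER defaults to "docker compose up -d"; pass something like "true"
#   or "echo" to dry-run without docker installed.
bring_up_stacks() {
    target="$1"
    runner="${2:-docker compose up -d}"
    for dir in "$target"/*/; do
        # only directories that actually hold a compose file
        if [ -f "${dir}compose.yaml" ] || [ -f "${dir}docker-compose.yml" ]; then
            echo "starting stack: $dir"
            ( cd "$dir" && $runner )
        fi
    done
}

# Typical use after cloning the config repo (placeholder URL):
#   git clone https://example.com/you/homelab.git ~/homelab
#   bring_up_stacks ~/homelab
```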
Yeah, what you’re talking about is called GitOps. Using git as the single source of truth for your infrastructure. I have this set up for my home servers.
https://codeberg.org/jlh/h5b

`nodes` has NixOS configuration for my 5 kubernetes servers and a script that builds a flash drive for each of them to use as a boot drive (same setup for `porygonz`, but that’s my dedicated DHCP/DNS/NTP mini server)

`mikrotik` has a dump of my Mikrotik router config and a script that deploys the config from the git repo.

`applications` has all my kubernetes config: containers, proxies, load balancers, config files, certificate renewal, databases, clustered raid, etc. It’s all super automated. A pretty typical “operator” container to run in Kubernetes is ArgoCD, which watches a git repo and automatically deploys any changes or desyncs back to the Kubernetes API so it’s always in sync with git. I don’t use any GUI or console commands to deploy or update a container, I just edit git and commit.

The kubernetes cluster runs about 400 containers, most of them just automatic replicas of services for high-availability. Of course there’s always some manual setup steps outside of git, like partitioning drives, joining the nodes to the cluster, writing hardware-specific config, and bootstrapping ArgoCD to watch git. But overall, my house could burn down tomorrow and I would have everything I need to redeploy using this git repo, the secrets git repo, and my backups of my databases and container `/data` dirs.

I think Portainer supports doing GitOps on Docker compose? Never used it.
https://docs.portainer.io/user/docker/stacks/add
Argocd is really the gold standard for GitOps though. I highly recommend trying out k3s on a server and running ArgoCD on it, it’s super easy to use.
https://argo-cd.readthedocs.io/en/stable/getting_started/
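For a sense of what “watches a git repo and keeps the cluster in sync” looks like in practice, here’s a minimal ArgoCD `Application` manifest — the repo URL, path, and names are placeholders, not anything from the repo above:

```yaml
# Hypothetical ArgoCD Application; repoURL, path, and names are placeholders
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/you/homelab.git
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true     # delete resources that were removed from git
      selfHeal: true  # revert manual changes back to the state in git
```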
Kubernetes is definitely different than Docker Compose, and tutorials are usually written for Docker `compose.yml`, not Kubernetes `Deployments`, but it’s super powerful and automated. Very hard to crash once you have it running. I don’t think it’s as scary as a lot of people think, and you definitely don’t need more than one server to run it.

Man, I really appreciate all this info. Very helpful. It will take me some time to digest everything and put it into an action plan. I just thought, hey, that would be cool and a nice project I can sink my teeth into and learn a lot on the way while deploying. Again, thank you for taking the time to give some direction and inspiration.
Your description does not sound related to git. It sounds more like Nix.
This: https://nixos.org/?
Nix is great for reproducibility
Yes, and Nix is another can of worms. My suggestion is to first try backing up your Docker compose file and the configuration files; you can define in .gitignore which files or dirs to ignore and not back up. You don’t need any automated installation for your server, as it is fairly standard, but you can easily do that if you run it as a VM on top of Proxmox and just create a snapshot of your VM.
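A minimal sketch of that .gitignore idea — the paths are examples, not a standard layout:

```
# keep compose files and config in git; leave runtime data and secrets out
data/
*.db
.env
secrets/
```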
This is exactly what I do. I have a git repo with the config files and docker compose file; thanks to the folder mapping, all I have to do is `docker compose up` and it’s fully set up.
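A compose sketch of that folder-mapping pattern — the image tag, ports, and paths are placeholders, not the poster’s actual stack:

```yaml
# Hypothetical compose.yml: app state lives in a directory next to this
# file, so cloning the repo and running `docker compose up` restores it.
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9   # pin your own version
    volumes:
      - ./forgejo/data:/data   # bind mount keeps state inside the repo dir
    ports:
      - "3000:3000"
    restart: unless-stopped
```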
Awesome! When you were putting it all together, did you find some resources/reading material/tutorials that helped you?
I did. That was the way they had it set up in an *arr stack setup guide I was following. Unfortunately it’s been over a year, so I don’t have a link. But if you’re interested I can send you my docker compose when I get a chance.
If you could be so kind as to DM it to me as well? I’d appreciate it.
Dude, I feel that loud and clear. LOL
That sounds like a lot of work, having to remove secrets and clean it up just for me. If you feel up to it, I would certainly love to have a look see. At your convenience of course.
Dm’d
Thank you so very much.
Yep, we do that at work-ish. CI/CD is really good.
Really? Cool! I am going to have to investigate. Sounds like a great project for me to learn from.
Yep, take a look, there’s quite a few examples, but they use GitHub Actions, CircleCI, GitLab, etc.
Most CI/CD setups that use the above-ish model will use the same kind of scripts (bash or otherwise). Basically, if you can do it on your desktop, you can automate it on a server. Make it work first, then try to make it better.
Most of the time, I’ll throw my Docker/Docker Compose (and/or Terraform if need be) in the root of the repo and do the same steps I do on the development side for building/testing on the CI side. Then switch over to CD with either a new machine (docker build/compose) or throw it all on a new server. At that point, if you script it out correctly, it doesn’t really matter what kind of server you use for CI/CD, since they are all Linux boxes at the end of the day.
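A minimal sketch of that build-and-test step in GitHub Actions syntax — the image name and the test script are placeholders; the same shape ports to GitLab CI or CircleCI:

```yaml
# Hypothetical workflow: build the image from the repo root, then run the
# same tests you would run on your desktop inside it.
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run tests inside the image
        run: docker run --rm myapp:${{ github.sha }} ./run-tests.sh
```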
You can also mix it up by using bare metal, docker alternatives, different password managers, QA tools, linters, etc…etc…
But virtualization will get you quite far. In my opinion start with just trying to get the project to build on another server via a script from scratch, then transfer it over to the CI. Then go with testing/deployment.
GL!
Man, I appreciate the info. Thanks
That’s what I figured, it’s already running without issue and converting the custom app to a standard docker would be trivial. Git sounds like a nice next step, right now my backup script just extracts the app configs from truenas and sticks them in a json file. It’s good enough to recreate the apps, but if I mess something up I have to dive into backups to see what changed.