Hello, I am trying to install my instance of Lemmy with ubergeek77's method. I'm using a Debian server and the Cloudflare proxy.
The troubleshooting section says I should enter my Cloudflare API Token; is that the “Read All” token I create?
I have SSL pre-installed with the server; should I issue a new certificate from Cloudflare, or can I use the pre-installed one?
My server comes with nginx and Apache installed; should I disable them before I continue with the installation?
Basically, at the end I get a 521 Server Down error, as well as this one:
Error response from daemon: driver failed programming external connectivity on endpoint lemmy-easy-deploy-proxy-1 (3ab1001f9db85c9f2a0ffe06718561e7a40c2a201c903c7655570c9342e61f03): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
As other people said, there is already a process running on port 80. To find out exactly what it is, you can use the command
sudo ss -lptn 'sport = :80'
or
sudo netstat -nlp | grep :80
(both require root privileges).
Also, what do you mean by “I have SSL pre-installed with the server”? Is it a self-signed cert, letsencrypt, or similar?
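If you want to check, the issuer of whatever certificate the server is currently presenting will tell you. A rough check (just a sketch; swap in your real server IP and domain for the placeholders, and it assumes the cert is served on port 443):
echo | openssl s_client -connect YOUR_SERVER_IP:443 -servername yourlemmydomain.com 2>/dev/null | openssl x509 -noout -issuer -subject -dates
The issuer line will show whether it's self-signed, Let's Encrypt, or something else.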
Looking at the Ubergeek77 method, I can see in the docker-compose that Caddy is specified to run on ports 80 and 443. So my guess is that you need neither nginx nor Apache (Caddy is a reverse proxy as well). Also, why have you installed both? I guess you selected “web server” during the OS installation.
So remove apache and nginx, and try running the install script again.
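To make sure they stop grabbing ports 80/443, something like the following should do it. This assumes the standard Debian service/package names nginx and apache2; adjust if yours differ:
sudo systemctl disable --now nginx apache2
sudo apt purge nginx apache2
The first command stops the services and keeps them from starting at boot; the second removes the packages entirely.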
Thank you for the replies. Nginx was indeed causing the issue. The installation went through, but now I face the 526 Invalid SSL certificate error.
I tried putting my API Token in the conf.env and redeploying with ./deploy -f, but the error persists. Maybe I should manually enter the Cloudflare certificate somewhere, but I'm not sure where (or if) I should do that.
The registrar I bought the domain from does not let me use their DNS servers to point to anything but their own servers, so I used Cloudflare to set up the DNS records for the domain.
When I first opened https:// on the server after I got it, nginx had an SSL certificate and the connection was secure. I suppose I have to do something manually here, but I'm not sure what.
Looking at the install instructions, it doesn't say you have to use a CF cert, only the API token in the conf.env file. So if you have done that, you should be OK.
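For what it's worth, a 526 from Cloudflare generally means it couldn't validate the certificate your origin is serving (with the SSL mode set to Full (strict)). You can see what Caddy is actually presenting by bypassing the proxy and connecting to the server directly. A rough check, with the placeholders swapped for your real domain and server IP:
curl -vkI --resolve yourlemmydomain.com:443:YOUR_SERVER_IP https://yourlemmydomain.com 2>&1 | grep -iE 'subject:|issuer:|SSL certificate'
If that shows a self-signed, expired, or wrong-hostname cert, that's the source of the 526.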
I'm curious about the DNS thing with your registrar. If they are the authoritative DNS, even putting the right records in CF won't make a difference. But maybe you can tell your registrar that CF DNS is authoritative by creating an SOA DNS record at your registrar pointing at CF DNS (I can only find references to 1.1.1.1 or adam.ns.cloudflare.com).
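One way to see which nameservers the rest of the world currently treats as authoritative for your domain (placeholder domain, swap in your own):
dig NS yourlemmydomain.com +short
If that doesn't list the two *.ns.cloudflare.com hosts Cloudflare assigned to your zone, the records you set up in CF won't be used.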
Looking at the deployment templates, it doesn't say that you have to use ANY certificate. I think Caddy generates one (or imports one from CF) at deployment. If I were you, I'd start from scratch with a new OS installation WITHOUT nginx/apache: base OS, docker/docker-compose, then run the script again (after you fix the DNS). If you want to find out who the SOA for your domain is, I think the command should be
dig @9.9.9.9 SOA yourlemmydomain.com
That should answer with the CF DNS servers you configured.
Also, a
dig @9.9.9.9 yourlemmydomain.com
should answer with the A records you configured in CF.
Thank you for the help, I will try it out!