Since self-hosted clouds seem to be the most common thing ppl host, i'm wondering what else ppl here are self-hosting. Is anyone making use of something like Excalidraw in the workplace? Curious about what apps would be useful to always access over the web that aren't media servers.
Actual Budget for finances, Nextcloud for everything office and organization, Home Assistant for home automation, paperless-ngx for storing and sorting documents, FreshRSS for news, ntfy.sh for notifications.
i don't understand ntfy.sh
you need an app running to get messages? which you already do with home assistant and the companion app, or apprise. what is the use case for ntfy?
Home Assistant notifications and almost all other notification services on phones actually route notifications through a cloud service like Firebase because Apple and Google try to railroad apps into their platforms. Ntfy lets you actually self host notifications without a third party, but also without killing your battery.
That's not the main thing I care about, though. Mainly I use it as a self-hosted replacement for Pushbullet, to share links and files with myself across machines and do some light alerting for servers and stuff (e.g. TrueNAS errors). Some of that could be done with HA, but ntfy is just better for some other uses, with stuff like its web UI.
Plus, apart from that, ntfy is really easy to integrate with other stuff: it's easy to send a notification from a shell script or webhook, so you can hack it into things that don't otherwise support notifications (there are also lots of things that support ntfy natively, e.g. the arrs).
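As a sketch of how simple that integration is: ntfy's publish API is just an HTTP POST, so a one-liner with curl does it from any script. The server URL and topic below are hypothetical placeholders for your own instance.

```shell
#!/bin/sh
# Minimal sketch: send a notification from a shell script via ntfy's
# HTTP publish API. Server URL and topic are hypothetical; point
# NTFY_URL at your own instance.
NTFY_URL="${NTFY_URL:-https://ntfy.example.com/alerts}"

notify() {
    # ntfy takes a plain HTTP POST; optional headers set title/priority
    curl -s -H "Title: $1" -H "Priority: default" -d "$2" "$NTFY_URL"
}

# usage (needs a reachable ntfy server):
#   notify "Backup finished" "Nightly backup completed without errors."
```

Drop `notify` into a cron job or a post-backup hook and anything scriptable can alert your phone.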
i can't follow your first paragraph. at all. i use the companion app from fdroid without gsf and a selfhosted homeassistant. you could as well connect apprise to it, using telegram or whatever… all in homeassistant.
ntfy iphone and google app from playstore do share your data, right? and you use that to share data in LAN? i am confused.
Not entirely sure about the de-google’d version of the Home Assistant companion app, but I know the regular companion app uses Firebase (and whatever the Apple equivalent is called, I forget) to deliver notifications, and it still would using Telegram as Telegram also uses Firebase. Apprise is a bit different as it can use multiple backends. Regardless, there are multiple ways to do things. Ntfy iphone and google app do not route your data through a third party server. I self host the ntfy server on my own machine and domain and my phone connects to it and receives data. It will deliver notifications wherever I am, not just in my LAN. It also provides a nice UI akin to Pushbullet I can use to send myself stuff privately.
You can’t replicate all of what ntfy does with Home Assistant. There’s more to it than just delivering notifications, it’s the whole app frontend and persistent data etc. If it’s not clear to you what it’s for from my description you might have to go look into it yourself. Look at PushBullet, that’s most similar to what I primarily use it for.
ok. but thanks really for the details!
There is a pinned post for this https://lemmy.world/post/60585
I like seeing the same question pop up at least every few months to get fresh opinions. That's like 2 years old; ppl could have scrapped their setups and have new ones now.
- Calibreweb
- FreshRSS
- Grampsweb
- Emacs
- Gitea
- Stirling-PDF
- Vaultwarden
- Pihole
- Pyload
- Glances
- Syncthing
- Homepage
- Karakeep
Web-accessible Emacs? What are you using?
You can use the Linuxserver.io VSCodium Image and replace VSCodium with Emacs in the Dockerfile.
Huh, what?
I see in your link that that image has support for KasmVNC, which is great and you could use to make Emacs work…
But the whole point of VS Code is that it can run in a browser and not use a remote desktop solution, which is always going to be a worse experience than a locally-rendered UI.
I kinda expect someone to package Emacs with a JS terminal, or with a browser-friendly frontend, but I’m always very surprised that this does not exist. (It would be pretty cool to have a Git forge that can spawn an Emacs with my configuration on a browser to edit a repository.)
Exactly. Since KasmVNC can run GUI programs in the browser and the linuxserver.io base image is just Debian, it was trivial to just run it with Emacs instead. I much prefer Emacs over VS Code because of Org Mode. While VS Code works well in a browser, it isn't what I wanted.
Here is where I have posted my Emacs Dockerfile. It might be a little out of date. Emacs Docker
EDIT: The Dockerfile also installs the fonts I like for Emacs along with git and hunspell.
EDIT: You could also probably achieve something similar with a Docker container running Apache Guacamole.
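For anyone curious, a rough sketch of what such a Dockerfile can look like; the base image tag, startup-script path, and font package here are assumptions, not the author's exact file:

```dockerfile
# Sketch: Emacs in the browser on a linuxserver.io KasmVNC base image.
# Base image tag, startup hook path, and package names are assumptions.
FROM lscr.io/linuxserver/baseimage-kasmvnc:debianbookworm

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        emacs git hunspell fonts-hack && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Point the KasmVNC session at Emacs instead of the default app
# (hypothetical startwm.sh that just execs `emacs`)
COPY startwm.sh /defaults/startwm.sh
```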
Mumble and Wireguard
Some of my friends are heading back to mumble because discord is getting too bloated with useless features.
Wireguard is to be able to access my local network when I am away.
Check out Tailscale. It uses Wireguard under the hood, but it’s magic.
Wireguard is quite magic itself
Wireguard + adguard means home ad blocking anywhere I want it.
Or WireGuard + PiHole
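For illustration, the glue between the two is a single line in the client config: point the tunnel's DNS at the ad blocker. Everything below (keys, addresses, endpoint) is a hypothetical placeholder.

```ini
# Hypothetical WireGuard client config for ad blocking on the go:
# all DNS queries inside the tunnel go to a self-hosted Pi-hole/AdGuard.
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.3/32
DNS = 10.0.0.2                 # Pi-hole / AdGuard Home inside the tunnel

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0   # route all traffic through the tunnel
PersistentKeepalive = 25
```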
I hear about people wanting alternatives to Discord, though I never got into using it much personally. Does anyone know whether Revolt chat is a good open-source, self-hostable solution?
I have tried it, and their documentation is too complex and incomplete for self-hosting. Right now, for communication, I have Mumble for VoIP and ngircd as an IRC server.
It pretty much covers 80% of the Discord use case. I am looking for something that supports video chat/screen sharing. Synapse is honestly not bad at all, but it's too power-hungry for my liking. I wish Jitsi had a better UX for the average consumer; it feels too business-like.
- Matrix server
- Element web GUI
- NocoDB for various Mini databases and forms
- Joplin server
- KanBan Board
- Mealie to store recipes
- Grocy as a home ERP
- Grafana for various metrics
- Home Assistant
- Node-RED (non-HA, different node)
- InfluxDB
- Zabbix for monitoring
- Vaultwarden
- etherpad
- Technitium DNS
- A NTP server
- Mesh Central
- A win11 VM with RDP
- paperless NGX
- calibre Web (or does that count as Media already)
- Agent DVR
- Spoolman
- OrcaSlicer via browser (linuxserver.io)
- Omada Controller
- Univention to bring everything together
- netbox to document half of the shit
- wiki.js to document the other half
Honestly, I think I have a problem.
Can confirm you have a problem. I mean, you have two services to document your stuff.
Yeah, but Netbox is really, really neat for documenting cabling, IPAM, and the rack, and it does asset management as well with a plugin.
But it’s really hard to document HOWTOs in it. And wiki.js is really a bad idea for the former.
You have all the solutions lol
It sounds like it, but there are a few things I still need to do.
- AMP Gamemanager to get better control of the servers for the kiddos
- CodeProject AI for better image recognition with Agent DVR
- A proper voice AI setup with HA
- I need to get my PBX setup going again
- I will soon clean up my media and storage solution and move to TrueNAS

And I need to automate more. One day…
@3dmvr @selfhosted I’d say DNS server is the most important self hosted server I have.
Local LLMs, I’m surprised no one brought that up yet. I’ve got an old GPU in my server, and I’m running some local models with openweb-ui for use in the browser and Maid for an Android app to connect to it.
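To sketch what "use in the browser" bolts onto: frontends like openweb-ui usually sit on top of a local HTTP API (an Ollama-style endpoint in this sketch). Host, port, model name, and endpoint are assumptions for illustration, not the poster's actual setup.

```python
import json
import urllib.request

# Hypothetical local endpoint (Ollama-style /api/generate); adjust for
# whatever server you actually run.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "qwen3:8b") -> dict:
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str) -> str:
    """POST the prompt to the local model server and return its reply."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a local instance running, `ask("why self-host?")` returns the model's reply; apps like Maid or openweb-ui talk to the same kind of endpoint.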
You’re a brave one admitting that on here. Don’t you know LLM’s are pure evil? You might as well be torturing children!
I think most people on here are reasonable, and I think local LLMs are reasonable.
The race to AGI and companies trying to shove "AI" into everything is kind of insane, but it's hard to deny LLMs are useful, and running them locally means you don't have privacy concerns.
Interesting, this has not been my experience. Most people on here seem to treat AI as completely black and white, with zero shades of grey.
I see a mix, don’t get me wrong, Lemmy is definitely opinionated lol, but I don’t think it’s quite black and white.
Also, generally, I'm not going to not share my thoughts or opinions because I'm afraid of people that don't understand nuance. Sometimes I don't feel like dealing with it, but I'm going to share my opinion most of the time.
OP asked what you self-host that isn't media; self-hosted LLMs are something I find very useful and didn't see mentioned. Home Assistant, Pi-hole, etc. are all great answers… but those were already mentioned.
I still have positive upvotes on that comment, and no one has flamed me yet, but we will see.
I’ll give my recommendation to local LLMs as well. I have a 1060 super that I bought years ago in 2019 and it’s just big enough to do some very basic auto completion within visual studio. I love it. I wouldn’t trust it to write an entire program on its own, but when I have hit a mental block and need a rough estimate of how to use a library or how I can arrange some code, it gives me enough inspiration to get through that hump.
Ya exactly! Or just sanity checking if you understand how something works, I use it a lot for that, or trying to fill in knowledge gaps.
Also qwen3 is out, check that out, it might fit on a 1060.
Concur. In particular models focused on image output.
I think looking through the comments on this post about AI stuff is a pretty good representation of my experience on lemmy. Definitely some opinions, but most people are pretty reasonable 🙂
The tech itself is great.
But:
- Businesses push that shit where it doesn't belong
- Businesses replace people with AI when it is objectively worse, just to make a buck
- Businesses steal the work of millions of people to train their models
Completely agree
AI's fine as a tool; trying to replace workers and artists while blatantly ripping stuff off is annoying. It can be a timesaver, or just helpful for searching through your own docs/files.
If you agree it’s a time saver, then you agree it makes workers more efficient. You now have a team of 5 doing the work of a team of 6. From a business perspective it’s idiotic to have more people than you need to, so someone would be let go from that team.
I personally don’t see any issue with this, as it’s been happening for the existence of humanity.
Tools are constantly improving that make us more efficient.
Most of people's issue with AI is more an issue with greedy humans, and not the technology itself. Lord knows that new team of 5 is not getting the collective pay of the previous team of 6.
More work can get done and more work can be shown in progress. It's like a marginal timesaver; it'll knock off 25% of a human maybe, if that, not replace a whole one.
If everyone on your team of 6 is 20% faster, you don’t necessarily need the 6th person. Maybe you put that towards more work, but that’s not very American, these days. Cut costs, cash out, fuck 'em
Nor will they get the workload of 6 people. They might for a couple of months, but at some point the KPIs will suddenly say that it's possible to squeeze out the workload of 2 more people. With maybe even 1 worker less!
Are you my project manager??
LLMs are perfectly fine, and cool tech. Problem is they're billed as being actual intelligence or things that can replace humans. Sure they mimic humans well enough, but it would take a lot more than just absorbing content to be good enough at it to replace a human, rather than just aiding them. Either the content needs to be manually processed to add social context, or new tech needs to be made that includes models for how to interpret content in every culture represented by every piece of content, including dead cultures whose work is available to the model. Otherwise, "hallucinations" (e.g. misinterpretation and thus miscategorization of data) will make them totally unreliable without human filtering.
That being said, there are many more targeted uses of the tech that are quite good, but always with the need for a human to verify.
To add to this, I host Confusion for image generation
Depends on what you consider self-hosted. Web applications I use over LAN include Home Assistant, NextRSS, Syncthing, cockpit-machines (VM host), and media stuff (Jellyfin, Kavita, etc). Without web UI, I also run servers for NFS, SMB, and Joplin sync. Nothing but a Wireguard VPN is public-facing; I generally only use it for SSH and file transfer but can access anything else through it.
I’ve had NextCloud running for a year or two but honestly don’t see much point and will probably uninstall it.
I’ve been planning to someday also try out Immich (photo sync), Radicale (calendar), ntfy.sh, paperless-ngx, ArchiveBox (web archive), Tube Archivist (YouTube archive), and Frigate NVR.
Immich and Radicale definitely recommended. I've still got paperless-ng and plan to move to paperless-ngx as soon as I find the time. I've also got Firefly III, which has been a big revolution in how I manage personal finance. Even my 17-year-old son has got into it… he couldn't understand where all his hard earnings were going.
Besides a media server, I self host my email, a blog, an IRC bouncer, syncthing, SPFToolbox, and in my house I run ADS-B plane tracking.
Joplin. I have it as a sync server, but tucked away on a cloud server for the times when I'm traveling, so I always have a way to access my data in case my phone gets stolen/confiscated.
Whoogle, a meta-search that strips away all the nasty things from Google. Can’t live without it tbh.
- Gitlab (version control)
- Bookstack (wiki)
- Joplin (not a webapp, but sync server)
- Semaphore (does all of my infra updating via Ansible)
- Uptime-Kuma (monitoring/alerting)
Been thinking about adding NextCloud mostly for the Google Docs/MS Office replacement at some point.
But honestly most of my stuff is just for me; my family prefers to use whatever commercial thing is out there. So I tend to limit things to infrastructure-type things that are of personal interest to me alone.
Gitlab
This guy has a lot of memory in his server
It is allotted 16GB out of the 62GB total that the host has. Which is the amount their docs call for in a 20 RPS or 1000 user scenario. Since I am the only one doing any commits or pulls, it does fine.
Does take its sweet time to reboot though. 😆
Wow, I would never consider allocating so much memory to a single service I run at home.
It is all running in a Proxmox cluster. 2 nodes have 62GB and one has 32GB. So while it is a good chunk, it's not enough to bottleneck available RAM for other things in the cluster.
Storyteller. Ever wish you could listen to an audiobook and read an ebook at the same time?
Storyteller can combine an audiobook and an ebook into a single ebook that can be read like a normal ebook, or you can listen to it and watch the actively spoken sentences highlighted in real time, like karaoke lyrics.
This is pretty neat!
https://storyteller-platform.gitlab.io/storyteller/docs/intro/what-is-this
Sounds like you need both the audio and the ebook to make it work?
I typically only have one or the other.
Yes, you need to provide both an audiobook and an ebook as inputs. If you only have one of these, you could try getting the other from your local library, or you could sail the seas. It's not a foolproof process, so sometimes you have to try different formats of audiobooks to make it work. Also, depending on how beefy your computer is, it will take some time to process: 1-2 hrs for big books like Stormlight Archive on my laptop.
Forgejo, Jellyfin, Navidrome, PiHole, AudioBookshelf, Manyfold, and sometimes FoundryVTT.
SearXNG, Forgejo, Linkwarden, Vaultwarden, copyparty, all the Servarr apps, qBittorrent and SABnzbd for downloads, Syncthing, Mastodon, and all the various containers like databases and other tools that support the aforementioned.