How about nextcloud with only the bare minimum amount of plugins? Files alone is pretty snappy.
Pydio used to be called ajaxplorer and was a pretty solid and lightweight (although featureful) solution, but then they rewrote the UI with lots of misguided choices (touch controls and Android-inspired interactions on desktop devices) and it became so horrendous, heavy and clunky that I almost forgot about it. I wonder if they reversed the trend (but from the screenshots it doesn’t look like it).
Aren’t those two completely different things?
I’m with you. Hg-git still is to this day the best git UI I know…
I have no idea what this is about, but was kotlin native considered here? And what ruled it out in favour of rust?
I’ve seen multiple JVM languages going the route of AOT/native compilation and now taking the spot of systems languages in some use cases (CLI utils, low footprint “cloud native” stacks, things requiring tight os-level integration) with often outstanding performance.
The problem I’ve observed with XMPP as an outsider is the lack of a standard. Each server or client has its own supported features and I’m not sure which one to choose.
That’s a valid concern, but I wouldn’t call it a problem. There are practically 2 types of clients/servers: the ones which are maintained, and which work absolutely fine and well together, and the rest, the unmaintained/abandoned part of the ecosystem.
And with the protocol being so stable and backwards/forwards compatible in large parts, those unmaintained clients will just work, just not with the latest and greatest features (XMPP has the machinery to let clients and servers advertise about their supported features so the experience is at least cohesive).
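For the curious, the machinery in question is service discovery (XEP-0030, usually paired with entity capabilities, XEP-0115). Here is a minimal sketch of what that exchange looks like on the wire, built with nothing but Python’s standard library; the JIDs and the advertised features are made up for illustration:

```python
# Minimal sketch of an XMPP service discovery (XEP-0030) exchange.
# The JIDs and the advertised features below are illustrative only.
import xml.etree.ElementTree as ET

DISCO_NS = "http://jabber.org/protocol/disco#info"

# A client asks a server (or another client) what it supports:
iq = ET.Element("iq", {"type": "get", "to": "example.org", "id": "disco1"})
ET.SubElement(iq, "query", {"xmlns": DISCO_NS})
print(ET.tostring(iq, encoding="unicode"))
# -> <iq type="get" to="example.org" id="disco1">
#      <query xmlns="http://jabber.org/protocol/disco#info" /></iq>

# The reply lists one <feature/> element per supported extension, which is
# how clients adapt to whatever the other side actually implements:
reply = """\
<iq type="result" from="example.org" id="disco1">
  <query xmlns="http://jabber.org/protocol/disco#info">
    <feature var="urn:xmpp:mam:2"/>
    <feature var="urn:xmpp:carbons:2"/>
  </query>
</iq>"""
features = [el.get("var")
            for el in ET.fromstring(reply).iter(f"{{{DISCO_NS}}}feature")]
print(features)  # ['urn:xmpp:mam:2', 'urn:xmpp:carbons:2']
```

Maintained clients do exactly this kind of negotiation under the hood, which is why old and new implementations can coexist gracefully.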
Which client would you recommend?
Depends on which platform you are on and the type of usage. You should be able to pick one as advertised on https://joinjabber.org , that should keep you away from the fringe/unmaintained stuff. Personally I use gajim and monocles.
They both qualify as “open, federated messaging protocols”, with XMPP being the oldest (about 25 years old) and an internet standard (IETF), but at this point we can consider Matrix to be quite old, too (10 years old). On paper they are quite interchangeable: they both focus on bridging with established protocols, etc.
Where things differ, though, is that Matrix is practically a single-vendor implementation: the same organization (Element/New Vector/however it’s called these days) develops both the reference client and the reference server, the latter being incidentally super complex, not well documented (the code is the documentation), and practically incompatible with the other (semi-official) implementations. This is a red flag because it also happens that this organization was built on venture capital money with no financial stability in sight. XMPP is a much more diverse and accessible ecosystem: there are multiple independent teams and corporations implementing servers and clients, and the protocol itself is very stable, versatile and extensible. This is how you can find XMPP today running the backbone of the modern internet, dispatching notifications to all Android devices, being the signaling system behind millions of IoT devices, and providing messaging to billions of users (WhatsApp is, by the way, based on XMPP).
Another significant difference is that, despite 10 years of existence and millions invested into it, Matrix still has not reached stability (and probably never will): the organization recently announced Matrix 2 as the (yet another) definitive answer to the protocol’s shortcomings, without changing anything about what makes the protocol so painful to work with, and the requirements (compute, memory, bandwidth) to run Matrix at even a small scale are still orders of magnitude higher than XMPP’s. This discouraged many organizations (even serious ones, like Mozilla, KDE, …) from running Matrix themselves and further contributes to the de-facto centralization and single point of control that federated protocols are meant to prevent.
“public Matrix server”

Let’s see how long before it bankrupts you.
It’s part of the reason why I think decentralized services could be the future. Lemmy or Mastodon can have a lot of small servers with reasonable costs spread across many admins, instead of one centralized service that costs a significant amount to run.
Ohh, absolutely, or rather, it is the past. I mean, the internet was built that way, as a resilient federation of networks and protocols. Lemmy could be seen as us just rediscovering email after the tech giants almost succeeded in killing it. We should approach all the services we use by asking ourselves basic sustainability questions:
- is that thing opensource?
- self hostable?
- does it federate/interoperate with equivalent services?
- can I pull my data out of it/relocate to another provider on a whim?
- if not, is this a trustworthy and ethical business?
  - is it profitable?
  - are there open financial records available showing where/for what the money is going?
  - is it at risk of being acquired?
  - is it subject to foreign/unlawful interference?

Etc, etc.
Until I can give a laptop with Linux to my neighbour without also needing to provide support, it’s not there yet.
I mean, isn’t your neighbor already getting Windows support from his son or nephew anyway? Let’s not pretend that there exists a magical and perfect OS for those who don’t want to learn one. Some learning is required, whichever the OS, and it would be hard to convince me that a current preinstalled Linux is more difficult to handle than a current preinstalled Windows.
Windows has going for it that it’s the devil most people know/got exposure to (thanks to Microsoft’s schemes and monopolistic practices); there is nothing inherently better or easier about it (and arguably quite the opposite).
What I found compelling about the sync is that you can have your other machines’ histories there with you, but in the background, behind a different shortcut, just in case you need to re-run or check that command you ran somewhere else a few years ago…
As I said, I haven’t used that yet, but that’s in many ways more appealing than having to SSH onto said machine (assuming it’s even possible).
I figured out starship.rs, but not the CTT part; any pointer to help me?
Been using it for months but haven’t gotten to use the sync yet. My only regret so far is that it doesn’t support case-insensitive search, which is unfortunately a pretty big deal for me.
Mercurial* and no, GitHub never supported hg, that was kind of the distinguishing feature of bitbucket back in the glory days of VCS plurality.
Now, if you need Mercurial hosting, Heptapod (a friendly fork of GitLab with Mercurial support) is a great way to go.
Most containers don’t package DB programs. Precisely so you don’t have to run 10 different database programs. You can have one Postgres container or whatever.
Well, that’s not the case for the official Nextcloud image: https://hub.docker.com/_/nextcloud (it defaults to SQLite, which may well be the reason for so many complaints), and the point about services duplication still holds: https://github.com/docker-library/repo-info/tree/master/repos/nextcloud
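For reference, the official image does read a handful of POSTGRES_* environment variables (documented on that Docker Hub page) to use an external database instead of the SQLite fallback. A minimal sketch with the Python Docker SDK (the docker package); the container names, credentials and image tags are placeholders, and most people would express the same thing as a compose file:

```python
# Sketch: one shared Postgres serving the official nextcloud image,
# instead of falling back to its bundled SQLite default.
# Container names, credentials and image tags are illustrative.
import docker

client = docker.from_env()
client.networks.create("nc-net", driver="bridge")

# A single database container that other services could share too.
client.containers.run(
    "postgres:16",
    name="shared-postgres",
    environment={
        "POSTGRES_DB": "nextcloud",
        "POSTGRES_USER": "nextcloud",
        "POSTGRES_PASSWORD": "change-me",  # placeholder
    },
    network="nc-net",
    detach=True,
)

# The official image picks these POSTGRES_* variables up at install time
# and skips the SQLite default.
client.containers.run(
    "nextcloud",
    name="nextcloud",
    environment={
        "POSTGRES_HOST": "shared-postgres",
        "POSTGRES_DB": "nextcloud",
        "POSTGRES_USER": "nextcloud",
        "POSTGRES_PASSWORD": "change-me",
    },
    network="nc-net",
    ports={"80/tcp": 8080},  # http://localhost:8080
    detach=True,
)
```

The point stands, though: none of this happens by default, so the out-of-the-box experience is the SQLite one.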
You can typically configure the software in a docker container just as much as you could if you installed it on your host OS…
True, but how large do you estimate the intersection to be between “users using docker by default because it’s convenient” and “users using docker and having the knowledge and putting in the effort to fine-tune each and every container, optimizing/rebuilding/recomposing images as needed”?

I’m not saying it’s not feasible, I’m saying that nextcloud’s packaging can be quite tricky due to the breadth of its scope, and by the time you’ve given yourself a fair chance of success, you’ve already thrown away most of the convenience docker brings.
See my reply to a sibling post. Nextcloud can do a great many things, are your dozen other containers really comparable? Would throwing in another “heavy” container like Gitlab not also result in the same outcome?
Well, that is boldly assuming:

- that endlessly duplicating services across containers causes no overhead: you probably already have a SQL server, a Redis server, a PHP daemon, a Web server, … but a docker image doesn’t know, and indeed doesn’t care, about redundancy, and wastes storage and memory
- that the sum of those individual components works as well and as efficiently as a single (highly-optimized) pooled instance: every service/database in its own container duplicates tight event loops, socket communications, JITs, caches, … instead of pooling them and optimizing globally for the whole server, wasting threads, causing CPU cache misses, missing optimization paths, and increasing CPU load in the process
- that those images are configured according to your actual end-users’ needs, and not to some packager’s conception of a “typical user”: do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want them or not
- that those images are properly tuned for your hardware, by somehow betting on the packager knowing in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load and service prioritization
And this is even before assuming that docker abstractions are free (which they are not).
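To make the duplication point a bit more concrete, here is a rough sketch (again with the Python Docker SDK; the daemon name list and the substring matching are arbitrary) that counts how many running containers each ship their own copy of a common service:

```python
# Rough sketch: count how many running containers each carry their own
# copy of a common daemon. The list of daemon names is arbitrary and the
# substring matching is naive; this is an illustration, not a tool.
from collections import Counter
import docker

DAEMONS = ("postgres", "mysqld", "mariadbd", "redis-server", "php-fpm", "nginx")

client = docker.from_env()
copies = Counter()

for container in client.containers.list():       # running containers only
    top = container.top()                        # same data as `docker top`
    cmd_col = top["Titles"].index("CMD") if "CMD" in top["Titles"] else -1
    all_cmds = " ".join(proc[cmd_col] for proc in top["Processes"])
    for daemon in DAEMONS:
        if daemon in all_cmds:                   # count once per container
            copies[daemon] += 1

for daemon, count in copies.most_common():
    print(f"{daemon}: running in {count} container(s)")
```

Every extra copy it reports is a separate event loop, cache and memory arena that a single pooled instance would have shared.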
and why would that be? More abstraction thrown in for the sake of sysadmin convenience doesn’t magically make things more efficient…
You can always give a shot at using a third-party client (possibly one acting as a bridge to other/better protocols, like e.g. slidge.im>xmpp or the buggy Matrix equivalent), but you need to keep in mind that they will all require you to authenticate (and remain authenticated) using a smartphone, and that the use of third-party clients is forbidden by WA’s terms and conditions (which may lead to your account being blocked/deleted).