maegul (he/they)

A little bit of neuroscience and a little bit of computing

  • 83 Posts
  • 1.25K Comments
Joined 2 years ago
Cake day: January 19th, 2023


  • Absolutely. It’s a shit show.

    And interestingly, making the general public more aware of this is likely quite important. Because 1, they have very idealistic views of what research is like, and 2, just about everyone is entering research blind to the realities. It’s a situation that needs some sunlight and rethinking.

    IMO, a root cause is that the heroic genius researcher ideal at the base of the system’s design basically doesn’t exist any more. Things are just too big and complex now for a single person to be that important. Dismantle that ideal and redesign from scratch.




  • It’s definitely an interesting and relevant idea I think! A major flaw here is that communities lack the ability to establish themselves as discrete spaces, separate from the doomscrolling crowd.

    That’s a problem with the fediverse as a whole IMO, since community building is what it should be focusing on.

    Generally, decentralisation makes things like this difficult, AFAIU. Lemmy has features like private and local-only communities in the works that will get you there, but then discovery becomes a problem, which probably requires some additional features too.



  • The catch is that the whole system is effectively centralised on Bluesky’s backend services (basically the relay). So while the protocol may be standardised and open, and implemented with decentralised components, they’ll control the core service (a toy sketch of this topology is below). Which means they can unilaterally decide to introduce profitable things like ads and charging for features.

    The promise of the system though is that it provides for various levels of independence that can all connect to each other, so people with different needs and capabilities can all find their spot in the ecosystem. Whether that happens is a big question. Generally I’d say I’m optimistic about the ideas and architecture, but unsure whether the community around it will take it to where I think it should be.
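
    To illustrate the choke point (a toy sketch only, not real atproto code, with hypothetical host names): many independent data servers publish events, but consumers all read them through one relay’s firehose.

        # Toy illustration of the AT Protocol topology described above:
        # independent personal data servers (PDSes) all publish through one relay.
        class Relay:
            def __init__(self):
                self.firehose = []  # the single stream everyone consumes

            def ingest(self, event):
                self.firehose.append(event)

        class PDS:
            def __init__(self, host, relay):
                self.host, self.relay = host, relay

            def publish(self, post):
                self.relay.ingest((self.host, post))

        relay = Relay()  # effectively centralised: everyone routes through here
        for host in ("pds.alice.example", "pds.bob.example"):  # hypothetical hosts
            PDS(host, relay).publish("hello")
        print(relay.firehose)  # whoever runs the relay controls this view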


  • Fair, but at some point the “dream” breaks down. Python itself is written in C, and plenty of packages, some vital, rely on C or Cython (or Fortran), and now more and more on Rust (a quick way to see this is sketched at the end of this comment). So why not the tooling that’s used all the time, doing hard work, often in build/test cycles?

    If Guido had included packaging and project management in the standard library ages ago, with parts written in C, no one would bat an eyelid over whether users could contribute to that part of the system. Instead, they’d celebrate the “batteries included”, “ease of use” and “zen”-like achievements of the language.

    Somewhere in Simon’s blog post he links to a blog post by Armin on this point, which is that the aim is to “win”, to make a singular tool that is better than all the others and which becomes the standard that everyone uses so that the language can move on from this era of chaos. With that motive, the ability for everyday users to contribute is no longer a priority.
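
    As a quick sketch of how much compiled code already sits under the hood, here’s a check you can run against a standard CPython install (nothing third-party assumed):

        import importlib

        # Several standard-library workhorses are C extension modules (or thin
        # wrappers over them), not pure Python.
        for name in ("_json", "_pickle", "_csv", "zlib", "math"):
            mod = importlib.import_module(name)
            # Extension modules built as shared objects report a __file__ ending
            # in .so/.pyd; modules compiled into the interpreter (like math) don't.
            print(name, "->", getattr(mod, "__file__", "compiled into the interpreter"))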


  • Cool to see so many peeps on the Fedi!

    While I haven’t used uv (been kinda out of Python for a while), and I understand the concerns some have, the Python community getting worried about good package/project management tooling says a lot about how accustomed senior Python devs have become to their ecosystem. Somewhat ditto for the concern over using a more performant language for fundamental tooling (rather than pursuing the dream of writing everything in Python, which is surely dead by now).

    So Simon is probably right in saying (in agreement with others):

    while the risk of corporate capture for a crucial aspect of the Python packaging and onboarding ecosystem is a legitimate concern, the amount of progress that has been made here in a relatively short time combined with the open license and quality of the underlying code keeps me optimistic that uv will be a net positive for Python overall

    Concerns over maintainability should Astral go down may best be served by learning Rust and establishing best practices around writing Python tooling in compiled languages, to ensure future maintainability and composability.
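
    To make the appeal concrete: one workflow uv enables is running a single-file script with PEP 723 inline metadata, where it resolves and installs the dependencies on the fly. A toy sketch (the script and its dependency are just illustrative):

        # /// script
        # requires-python = ">=3.12"
        # dependencies = ["requests"]
        # ///
        # A toy single-file script: given the PEP 723 metadata above, `uv run
        # script.py` can create an environment and fetch requests before running it.
        import requests

        print(requests.get("https://example.com").status_code)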





  • Not a stock market person or anything at all … but NVIDIA’s stock has been oscillating since July and has been falling for about two weeks (see Yahoo Finance).

    What are the chances that this is investors getting cold feet about the AI hype? There were open reports from some major banks/investors about a month or so ago raising questions about the business models (right?). I’ve seen a business/analysis report on AI that, despite trying to trumpet it, actually contained data on growing uncertainty about its capabilities from those actually trying to implement, deploy and use it.

    I’d wager that the situation right now is full of tension, with plenty of conflicting opinions from different groups of people, almost none of whom actually know much about generative AI/LLMs, and all having different and competing stakes and interests.


  • Just recently read your 2017 article on the different parts of the “Free Network”, where it was new to me just how much the Star Trek Federation was used and invoked. So definitely interesting to see that here too!

    Aesthetically, the fedigram is clearly the most appealing out of all of these. For me at least.

    It seems though that using the pentagram may have been a misstep, given how controversial it seems to be (easy to forget if you’re not in those sorts of spaces). I liked the less pentagram-styled versions at the bottom. I wonder if a different geometry could be used?




  • Yea, this highlights a fundamental tension I think: sometimes, perhaps oftentimes, the point of doing something is the doing itself, not the result.

    Tech is hyper-focused on removing the “doing” and reproducing the result. Now that it’s trying to put itself into the “thinking” part of human work, this tension is making itself unavoidable.

    I think we can all take it as a given that we don’t want to hand total control to machines, simply because of accountability issues. Which means we want a human “in the loop” to ensure things stay sensible. But the ability of that human to keep things sensible requires skills, experience and insight. And all of the focus our education system now has on grades and certificates has led us astray into thinking that practice and experience don’t mean that much. In a way, the labour market and employers are relevant here in their insistence on experience (to the point of absurdity sometimes).

    Bottom line is that we humans are doing machines, and we learn through practice and experience, in ways I suspect are much closer to building intuitions. Being stuck on a problem, being confused and getting things wrong are all part of this experience. Making it easier to get the right answer is not making education better. LLMs likely have no good role to play in education, and I wouldn’t be surprised if banning them outright, in what may become a harshly fought battle, is not too far away.

    All that being said, I also think LLMs raise questions about what it is we’re doing with our education and tests and whether the simple response to their existence is to conclude that anything an LLM can easily do well isn’t worth assessing. Of course, as I’ve said above, that’s likely manifestly rubbish … building up an intelligent and capable human likely requires getting them to do things an LLM could easily do. But the question still stands I think about whether we need to also find a way to focus more on the less mechanical parts of human intelligence and education.




  • Sure, but IME it is very far from doing the things that good, well-written and informed human content can do, especially once we’re talking about forums and the like, where you can have good conversations with informed people about your problem.

    IMO, whatever LLMs are doing that older systems can’t isn’t greater than what was lost to SEO, ads-driven slop and shitty search.

    Moreover, the business interest of LLM companies is clearly in dominating and controlling (as that’s just capitalism and the “smart” thing to do), which means the older human-driven system of information sharing and problem solving is vulnerable to being severely threatened and destroyed … when we could just as well enjoy some hybridised system. But because profit is the focus, and the means of making profit are problematic, we’re in rough waters that I don’t think can be trusted to create a net positive (and haven’t been trustworthy for decades now).


  • I really think it’s mostly about getting a big enough data set to effectively train an LLM.

    I mean, yes, of course. But I don’t think there’s any way in which it is just about that, because the business model around providing LLM services is to supplant the very data they were trained on and the services that created that data. What other business model could there be?

    In the case of Google’s AI alongside its search engine, and even ChatGPT itself, this is clearly one of the use cases that has emerged and is actually working relatively well: replacing the internet search engine and giving users “answers” directly.

    Users like it because it feels more comfortable, natural and useful, and is probably quicker too. And in some cases it is actually better. But it’s important to appreciate how we got here … by the internet becoming shittier, and search engines becoming shittier, all in the pursuit of ad revenue and the corresponding tolerance of SEO slop.

    IMO, to ignore the “carnivorous” dynamics here, which I think clearly go beyond ordinary capitalism and innovation, is to miss the forest for the trees. Somewhat sadly, this tech era (roughly Windows 95 to now) has taught people that the latest new thing must be a good idea and that we should all get on board before it’s too late.