• 4 Posts
  • 51 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • Ragnell@kbin.social (OP) to 196@lemmy.blahaj.zone · Venn Diagram Rule · edited · 1 year ago

    @agamemnonymous

    Look, I’ve had to watch it happen to “triggered”, “mansplain”, and “woke.” You’re going to have to accept that it happened to Singularity.

    You don’t honestly think that the improvement of an LLM’s predictive algorithm is going to lead to it taking over the world? All it can do is produce words. Unless we stupidly do everything it says, thinking it’s truly intelligent, it has no power.

    We only have to worry about machine overlords if we PUT machines in charge of stuff, and we’ll only do that if we think they are intelligent enough to make decisions. So yeah, determining whether it has real intelligence is a key thing here.

    (Dammit, we’ve reached the end of the chat tree)





  • Ragnell@kbin.social (OP) to 196@lemmy.blahaj.zone · Venn Diagram Rule · edited · 1 year ago

    That’s because the capital-S Singularity as proposed by Vernor Vinge is what we’re worried about here: the advent of a technological achievement that forever changes humanity, possibly signalling the end of it.

    This does specifically set a barrier, a “point of no return” for technology. That is what most people mean by the Singularity: the moment a program becomes capital-I Intelligent.

    Von Neumann’s original proposal is as limited by mathematics as an LLM itself. The term Singularity has, as is common in the English language, grown into a larger term signifying that a barrier has been crossed. There are other theories beyond the idea that it’s just self-replication gone wild.

    You’re trying to reduce what to most people is a moral quandary to pure mathematics. Since my core point is that pure mathematics is not enough to capture the depth and potential of humanity, I’m not going to be swayed by being told it’s just a mathematical function.

    I will give you a boost for being interesting, though.


  • Ragnell@kbin.social (OP) to 196@lemmy.blahaj.zone · Venn Diagram Rule · edited · 1 year ago

    It’s not an arbitrary barrier. It’s important. Can the computer actually make a decision? Can it be HELD ACCOUNTABLE for that decision? If we’re going to deploy these things to replace human beings, this is a question that needs a “Yes” answer.

    Right now the answer to both is no. They can’t make a decision that takes multiple dimensions of an issue into account. But businesses ARE saying that they can replace human writers, and people ARE using them to write legal briefs and technical instructions.

    I don’t know why you are so insistent that this doesn’t matter. We’re watching something kick its legs, it can’t even crawl yet, but it’s being signed up for a marathon and you’re arguing that it’ll be able to do the marathon eventually so that’s good enough.


  • You underestimate yourself as a complex method of distributing gametes, because you are operating on a much more complicated mathematical base than a computer: you’re analog. Your brain is an analog computing engine that moves faster than any analog machine we’ve ever been able to make; the only way we can transmit faster is by going digital in our computers. Which means that, down to it, while we might just be chemical and electrical signals, the computer itself is just two signals, two voltages: 1 and 0. Our thinking is vastly more complex, even as fast as this thing goes. That’s what instinct and intuition are: our brains processing evidence against memory.


  • Ragnell@kbin.social (OP) to 196@lemmy.blahaj.zone · Venn Diagram Rule · edited · 1 year ago

    @agamemnonymous No, it looks like it beforehand. ChatGPT is just a language prediction engine, but people think it can think. It can only discern the most probable language patterns; it can’t make judgements. Yet people are arguing it is working off inspiration.

    And we’ve KNOWN it will look like it beforehand, that’s why there’s even concepts like a Turing test, to prepare us for discerning the illusion of intelligence from actual intelligence.

    Personally, I suspect social media, and the way that Bigsoc companies hack the human mind using feed algorithms, is an argument for a non-AI Singularity, and a more likely one than a math engine that predicts the next word in an astoundingly natural way.


  • @Yendor The point is that it’s jumping the gun to think we can escape climate change by rocketing to Mars and terraforming the climate there, rather than concentrating on terraforming Earth back to a liveable environment first and THEN worrying about moving elsewhere. If we can’t keep Earth inhabitable, we can’t make Mars inhabitable.

    Just like people who think Large Language Models are genuine AI are completely jumping the gun about what we’re capable of coding right now.