• 1 Post
  • 25 Comments
Joined 1 year ago
Cake day: June 21st, 2023


  • There’s some diagnostic info in-game through the battery sidebar menu, I think. You can use it to see frame rate and other performance metrics.

    I usually just google or YouTube some way to improve whatever game I’m playing on deck. Usually, someone has already done the legwork to figure it out.

  • Maybe more apt for me would be, “We don’t need to teach math, because we have calculators.” Like…yeah, maybe a lot of people won’t need the vast amount of domain knowledge that exists in programming, but all this stuff originates from human knowledge. If it breaks, what do you do then?

    I think someone else in the thread said good programming is about the architecture (maintainable, scalable, robust, secure). Many LLMs are legit black boxes, and it takes humans to understand what’s coming out, why, and whether it’s valid.

    Even if we have a fancy calculator doing things, there still need to be people who do math and can check the work. I’ve worked more with analytics than LLMs, and more times than I can count, the data was bad. You have to validate before everything else; otherwise, garbage in, garbage out.

    It sounds like a poignant quote, but it also feels superficial. Like, something a smart person would say to a crowd to make them go, “Ahh!” but that doesn’t hold water for long.

  • The quote I like the most on this subject is: “The metaverse isn’t a place; it’s a time in history when our digital identity and goods have as much or more importance than our real-life versions.” I don’t think we’re there yet, but it also makes little (rational) sense that people spend money on virtual items in video games.

    I think the closest playable analogues are actually Fortnite and Roblox. Interconnected worlds with external avatars that cross them. You play experiences vs. games. There’s brand integration so Goku can fight John Wick. It’s pretty close?

  • There was a similar study reported the other day about using fMRI imaging and AI to recreate the “thought content” of someone’s brain. It required training the AI on that specific person’s brain, among other training. It does seem these techniques can work with individually tailored models, but yeah, it doesn’t seem like hooking someone’s brain up to this would create a movie of their mind or something.

    I think the more dangerous part is that this is step 0, and this tech would have seemed impossible 10 years ago. Very strange times.