• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: July 24th, 2023

  • Lol… I just read the paper, and Dr. Zhao actually just wrote a research paper on why it’s legally OK to use images to train AI. Hear me out…

    He changes the ‘style’ of input images to corrupt image generators’ ability to mimic them, and even shows that the vast majority of artists can’t tell when this happens with his program, Glaze… Style is explicitly not copyrightable in US case law, so he just provided evidence that what OpenAI and others do with training data is transformative, which would legally mean it falls under fair use.

    No idea if this would actually get argued in court, but it certainly doesn’t support the idea that these image generators are stealing actual artwork.


  • He’s not saying “AI is done, there’s nothing else to do, we’ve hit the limit”, he’s saying “bigger models don’t necessarily yield better results like we had initially anticipated”

    Sam recently went before Congress and advocated for limiting model sizes as a means of regulation because, at the time, he believed bigger would generally always mean better outputs. What we’re seeing now is that if a model is too large, it will have trouble producing truthful output, which is super important to us humans.

    And honestly, I don’t think anyone should be shocked by this. Our own human brains have different sections that control different aspects of our lives. Why would an AI brain be different?



  • That’s kinda why I bring up Deming and his views on the entire purpose of a quality management system. “They should just stop pretending and send their employee the bullet points.” I couldn’t agree more. My bro is sending out the bullet points, but AI is formatting them, so it’s acceptable to his boss.

    In an ideal world, there’d be someone who actually examined the business operation to determine what the benefits of doing individual performance reviews are. Instead, things at his work are done a certain way simply because that’s the way they’ve always been done… and thus, that’s what he’s doing.

    “I’m not asking them to change the system…” That’s not really what I meant; I apologize if I phrased it weirdly. If you’re evaluating a person, they’re already probably not at either extreme. If they were the worst employee ever, you would let them go. If they were the best employee ever, your company would be dependent on them and would suffer if they voluntarily decided to leave. Your ideal employee would, therefore, be somewhere within the norm and would need to conform to your system. An individual review exists simply to enforce this conformity, and the reality is that most employees’ true output is directed more by the operational efficiency of the business than by an individual’s own actions. If an employee is already conforming, then the review is effectively useless.

    Anyways, I’m kinda droning on, but I think the horses have already left the barn with AI. I think the next logical step for many businesses is to really evaluate what they do and why they do it at an administrative level… and this is a good thing!


  • Regardless of what anyone says, I think this is actually a pretty good use case of the technology. The specific verbiage of a review isn’t necessarily important, and ideas can still be communicated clearly if tools are used appropriately.

    If you ask a tool like ChatGPT to write “a performance review for a construction worker named Bob, who could improve on his finer carpentry work and who is delightful to be around because of his enthusiasm for building; make it one page,” the output can still be meaningful and communicate relevant ideas.

    I’m just going to take a page from William Edwards Deming here and state that an employee is largely unable to change the system they work in, and as such, individual performance reviews have limited value. Even if an employee could change the system they work in, this should be interpreted as the organization having a single point of failure.



  • Gonna just buck the trend and say that this AI push has me excited for the future. It’s easy to be a naysayer, but I genuinely believe the leaps made in AI in just the last year are amazing.

    The author clearly doesn’t like AI, and completely mischaracterizes Mistral AI over things its models could say, without ever considering why unaligned models are useful for developing your own.

    The author likes to highlight that sometimes an AI will make things up, a phenomenon known as hallucinating. Hallucinations could also be called “creativity” in certain contexts. This isn’t always a fault, especially when creativity is the intended purpose.

    The author pointed out how it’s possible to prompt-engineer sensitive data out of these models, and how there’s a lack of privacy… which isn’t a problem with the tech, but rather with tech companies.

    The technology behind ChatGPT isn’t exclusively for text generation. I’m seeing it appear in speech-to-text / text-to-speech applications. It’s showing up in image and video editing. It’s showing up in … well … images/movies of an adult nature.

    You’re probably already consuming AI generated content without even realizing it.



  • There’s a ton of stuff ChatGPT won’t answer, which is supremely annoying.

    I’ve tried making Dungeons and Dragons scenarios with it, and it will simply refuse to describe violence. Pretty much a full stop.

    OpenAI is also a complete prude about nudity, so Eilistraee (the Drow goddess who dances with a sword) just isn’t an option for their image generation. Text generation will try to avoid nudity, but also stops short of directly addressing it.

    Sarcasm is, for the most part, very difficult to do… If ChatGPT thinks what you’re trying to write is mean-spirited, it just won’t do it. However, delusional/magical thinking is actually acceptable. Try asking ChatGPT how licking stamps will give you better body positivity, and it’s fine, and often unintentionally very funny.

    There are plenty of topics that LLMs are overly sensitive about, and uncensored models largely correct that. I’m running Wizard 30B uncensored locally, and ChatGPT for everything else. I’d like to think I’m not a weirdo, I just like D&D… a lot, lol… and even with my use case I’m bumping my head on some of the censorship issues with LLMs.


  • I actually did ask my doctor why this happens once. Mainly, it’s because if a patient before you has something that needs more time, it messes up the schedule for every patient after… and this happens every single day. If no one cancels their appointments, the problem just keeps compounding throughout the day. Your best bet for being seen on time is to be the first patient of the day.

    Or just intentionally show up a few minutes late and take the mild scolding from the receptionist. It’s not like they’re going to turn ya away.




  • Man. That AIMS low frequency inverter is nice.

    I actually bought one of those cheaper Chinese pure sine wave inverters, but found that they don’t run motors/power tools that well. The surge current demand just exceeds anything they can provide. They’re great for steady loads like PCs/LEDs/hot plates, but if you wanted to run a table saw or something, the AIMS is the only way.



  • I’m using the Quest 2 and loving it. I recently moved my router (a Netgear Orbi) into my office, and I’ve been using Air Link instead of the tether, and it’s actually working super well. Probably gonna shell out the cash for the Quest 3 when it comes out, because I think the Fresnel lenses are the biggest drawback of the Quest 2.

    For games I’ve been playing Into the Radius, a heavily modded version of Skyrim VR, and Demeo. If you like the S.T.A.L.K.E.R. series, Into the Radius is almost like an unofficial sequel and is soooooo immersive. Skyrim VR is worth the trouble of modding. Feels like a new game. Demeo is just a lot of fun to play with friends, but the amount of time it takes to play a full game usually kills my headset battery.