Google is embedding inaudible watermarks right into its AI generated music

Audio created using Google DeepMind’s AI Lyria model will be watermarked with SynthID to let people identify its AI-generated origins after the fact.
This assumes music is made and enjoyed in a void. It’s entirely reasonable to like music much more if it’s personal to the artist. If an AI writes a song about a very intense and human experience, it will never carry the weight of the same song written by a human.
This isn’t like food, where snobs suddenly dislike something as soon as they find out it’s not expensive. Listening to music often makes the listener feel a deep connection with the artist, and that connection is entirely hollow if an algorithm created the entire work in two seconds.
That’s a parasocial relationship, and it’s not healthy. Sure, Taylor Swift is kinda expressing her emotions from real failed relationships, but you’re not living her life and you never will. Clinging to the fantasy of being her feels good and makes her music feel special to you, but it’s just fantasy.
Personally, I think it would be far better if half the music was AI and people had to actually think about whether what they’re listening to actually sounds good and interesting, rather than it being meaningless mush pumped out by an image-obsessed Scandinavian metal nerd or a pastiche of borrowed riffs thrown together by a drug-frazzled Brummie.
Lol, somehow you took the above commenter expressing the sentiment that a song is better if its message is true to its creator (something a huge percentage of the population would agree with) and equated that to fan obsession.
People on the internet are wild.
What if an AI writes a song about its own experience, like how people won’t take its music seriously?
It will depend on whether or not we can empathize with its existence. For now, I think almost all people consider AI to be just large language models and pattern recognition. Not much emotion in that.
That’s because they are just that. Attributing feelings or thought to LLMs is about as absurd as attributing the same to Microsoft Word. LLMs are computer programs that self-optimise to imitate the data they’ve been trained on. I know ChatGPT is very impressive to the general public, and it seems like you’re having a conversation with the computer, but you’re not. The model doesn’t understand what you’re saying, and it doesn’t understand what it is answering. It’s just very good at generating fitting output for a given input, because that’s what it has been optimised for.
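To make that concrete, here’s a minimal sketch of the only training signal an LLM gets: predict the next token of its training data. Everything below is made up for illustration (PyTorch, toy sizes, random token ids standing in for real text, and a trivial stand-in for a real transformer); the point is that the loop optimises nothing but imitation.

```python
# Toy sketch of next-token-prediction training (all sizes and data hypothetical).
import torch
import torch.nn as nn

vocab_size, embed_dim = 256, 64  # toy sizes, not real model dimensions

# Stand-in "model": embed each token, then predict logits for the next token.
# A real LLM is a transformer, but the training objective is the same.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake "corpus": random token ids standing in for tokenized training text.
tokens = torch.randint(0, vocab_size, (1000,))

for step in range(100):
    inputs, targets = tokens[:-1], tokens[1:]  # shift by one: next-token pairs
    logits = model(inputs)
    loss = loss_fn(logits, targets)  # measures how badly it imitates the data
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # nudge weights so the output imitates the corpus better
```

There’s no term in that loss for understanding, emotion, or experience; “fitting output for a given input” is literally the whole objective.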
Glad you’re at least open to the idea.
“I dunno why it’s hard, this anguish–I coddle / Myself too much. My ‘Self’? A large-language-model.”