- cross-posted to:
- [email protected]
Reducing emotion to voice intonation and facial expression trivializes what it means to feel. This kind of approach dates from the 70s (promoted notably by Paul Ekman) and has been widely criticized from the get-go. It’s telling of the serious lack of emotional intelligence of the makers of such models. This field keeps redefining words that point to deep concepts with their superficial facsimiles. If “emotion” is reduced to a smirk and “learning” to a calibrated variable, then of course OpenAI will be able to claim grand things based on that amputated view of the human experience.
The demo was so fucking creepy. Would rather be in a dark room surrounded by victorian dolls that sometimes seem to turn their head towards you and blink.
I didn’t think it was super creepy but I thought the voice was so overly enthusiastic and overacted and soooo sugary. bleh.
This won’t work for me unless it can be customised and toned down a lot.
God, it’s difficult enough having to talk to emotional people, and now this…
The way the presenters had to talk over the voice to interrupt it was awkward as hell. It also seemed to pick up on background noise from the audience often and interrupt itself. That makes it unusable in loud public settings (which imo is great, I hope it will never be socially acceptable to chat loudly with your AI in public).
This looks…well, amazing but also horrifying. When they showed off GPT assisting with math equations, it made me think of how much better I would be at math if I’d had an assistant like that growing up.
It also makes me think about how many scams and how much fraud there are going to be in the future. It’s already starting, and it’s only going to get worse. I’m sure I’ll be duped by something like this eventually.
Also, people are going to totally be marrying gpt bots in the future lol.
If people can marry cars and plants… It’s possible to lose a generation, like Japan did.
How impressive this is will hinge on whether there were any shenanigans behind the demos. I find it difficult to take breathless announcements at face value given recent issues.
I pay for ChatGPT+ and it’s real. I talked to it for about an hour today from my Android phone.
There were occasionally longer pauses than shown in the promo video, but only ever between when I spoke and when it started replying.
🤖 I’m a bot that provides automatic summaries for articles:
On Monday, OpenAI debuted GPT-4o (o for “omni”), a major new AI model that can ostensibly converse using speech in real time, reading emotional cues and responding to visual input.
OpenAI claims that GPT-4o responds to audio inputs in about 320 milliseconds on average, which is similar to human response times in conversation, according to a 2009 study, and much shorter than the typical 2–3 second lag experienced with previous models.
With GPT-4o, OpenAI says it trained a brand-new AI model end-to-end using text, vision, and audio in a way that all inputs and outputs “are processed by the same neural network.”
The AI assistant seemed to easily pick up on emotions, adapted its tone and style to match the user’s requests, and even incorporated sound effects, laughing, and singing into its responses.
By uploading screenshots, documents containing text and images, or charts, users can apparently hold conversations about the visual content and receive data analysis from GPT-4o.
In the live demo, the AI assistant demonstrated its ability to analyze selfies, detect emotions, and engage in lighthearted banter about the images.
Saved 77% of original text.