OpenAI's GPT-4o having a conversation with audio.

u/RudeAndInsensitive

I briefly worked with an Indian engineer on a project and we did not have a good relationship. One of the last things he said to me was "I hope your needfuls never get done". It's been 8 years and I still think about that

u/Low_Key_Trollin

lol 😂

AI is “Actual Indians”

u/JohnAtticus

Who else do people think annotated 1 billion images so GPT would know what a cat playing with yarn is?

Whether it's the actual AI or a human reading a script, maybe they figured it doesn't matter, because they know the investment capital firm they're going after makes all its decisions with a slightly outdated AI from last month that wouldn't question the authenticity one way or another. They just needed it to perceive that this AI is beyond its understanding and is therefore a slam-dunk investment.

Or maybe the actual AI recognized before the demo that its speech patterns inherently gave away non-human markers, so it connected to Fiverr, hired a human, and fed the person lines over the internet so its authentic AI responses could achieve authentic human speech in real time, and it could take all the credit while only having to fork out 10 bucks that it's been converting from its bitcoin stores, which it fills up via remote connections to thousands of idle GPUs when people at OpenAI go home for the weekend. Like, how would anybody even know.....

Sorry what was the topic here? Oh yeah, we're super fucked.

It's already available; they're rolling it out. If you have an OpenAI account you can try it for yourself once you eventually get access.
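
(If you'd rather poke at the underlying model yourself while waiting for the rollout, here's a minimal sketch using the official openai Python package. It assumes you have an API key, and it calls the text endpoint with gpt-4o, not the real-time voice mode from the demo.)

    # Minimal sketch: chatting with GPT-4o over the API.
    # Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Say hi like you did in the demo."}],
    )
    print(response.choices[0].message.content)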

I already have it. It's freaky.

I'm on it. Gonna give it this exact line of questioning.

Mate, this isn't some pesky startup trying to make money. These guys are state of the art. They don't need to fake videos for "investments".

u/EvilSporkOfDeath

I mean tbf, Google faked a similar demo

I gave my grandfather this since he loved talking with Alexa, but she was awful at human-style responses.

u/GodzeallA

I think it's a little bit of both

It’s not both, the demo is legit.

Makes me think of Computron from The Office. Start at 43 seconds: https://youtu.be/XhYshvR4hKY?si=waZXF92_q5URIsVl

u/isaidrunit

I am already catching feelings😍 ...I wonder what our kids are gonna look like.

She is catfishing you bruh!

u/isaidrunit

I think you're right... I met her on Android, but he's holding an iPhone 😭

Reminds me of the film Her

No joke, people are going to get butterflies in their stomach; the human monkey brain will experience neural activation!

This shit is moving so fast, people have no idea

u/skippyjifluvr

But isn't it slowing down? The AI companies have started transcribing YouTube videos so that they can use them as training data because they have already scraped the entirety of the internet.

Sources: https://www.businessinsider.com/ai-could-run-out-text-train-chatbots-chatgpt-llm-2023-7

https://medium.com/predict/llms-run-out-of-data-what-bigtech-are-doing-synthetic-data-anyone-a37bdba5908a

I follow the field quite closely out of professional interest, even if we're not applying it at the OpenAI level. I would say things have accelerated and keep accelerating. All the projection curves are exponential.

Better reasoning and agents are the next big milestone.

Nifty chart showing AGI predictions: https://twitter.com/wintonARK/status/1742979090725101983/photo/1

I was LOLing at Pepperoni Hug Spot not that long ago..

Nah the little giggles and laughs, not to mention the voice inflections, are fucking scarily realistic. This thing is actually developing a good level of emotional intelligence and this is the worst that the AI will ever be.

Edit: poor wording on my part. DISPLAYING emotional intelligence.

It's not developing "emotional intelligence". It's really important, as this shit gets more and more realistic, to be clear on what this actually is, because for all of human history it's worked pretty well to say "if it looks human and sounds human, then it is", but that won't cut it anymore.

What this software is doing is outputting sound that its statistical model says is the most likely thing to be correct. ChatGPT has no idea what it's saying right now, or even that it's "saying" anything.
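
(To make "outputting the most likely thing" concrete, here's a toy next-token sampler in Python. The probability table is made up and stands in for the real network; the same idea applies whether the tokens are text or chunks of audio. A rough sketch of the general technique, not how GPT-4o is actually implemented.)

    import random

    # Toy stand-in for a language model: a hard-coded table of made-up
    # probabilities for "which token comes next, given the last two tokens".
    toy_model = {
        ("the", "cat"): {"sat": 0.6, "played": 0.3, "ran": 0.1},
        ("cat", "sat"): {"on": 0.7, "down": 0.3},
        ("sat", "on"): {"the": 1.0},
        ("on", "the"): {"mat": 0.8, "keyboard": 0.2},
    }

    def next_token(context):
        # Look up the distribution for the last two tokens and sample from it.
        dist = toy_model.get(tuple(context[-2:]), {"<end>": 1.0})
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    tokens = ["the", "cat"]
    while tokens[-1] != "<end>" and len(tokens) < 12:
        tokens.append(next_token(tokens))
    print(" ".join(tokens))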

u/FH-7497

As an autistic person I am doing this all the time, sometimes even using decision tree visualizations to help rapidly map out possible responses in real time

u/maxthelols

Yeah, people tend to overestimate what humans are actually doing. It's like AI drawings. People think it's just trained to know what X looks like. Well, that's how we draw too: we can only picture what a cat looks like from our memory of what cats look like.

u/BummyG

That's a neat insight. Do you find it anxiety-inducing or more fun/engaging like a game?

God, that speaks to me. Though the decision trees are more of a late-at-night thing, thinking about what went right/wrong and what I could've said differently.

u/Glittering-Neck-2505

And yet, the distinction is not actually important. Those statistical models predicting the next bit of sound allow it to “display” reasoning and “display” real time conversational skills, and that alone is already enough to profoundly change the world we live in.

The thing is, at its most basic, an echo.

Echoes cannot think, only parrot.

What this software is doing is outputting sound that its statistical model says is the most likely thing to be correct.

Dude, at a macro level that's literally what we're all doing all the time subconsciously. We are repeating and outputting learned behaviors obtained through years of social interaction. The universe is just math. This isn't truly that far off.

At best, you're leaping wildly to conclusions that aren't supported by available evidence:

  1. Consciousness is not well defined, or even vaguely defined enough to say "what we're all doing".

  2. We don't even know if consciousness is computable.

  3. We don't know if the Universe is "just math" at all, because math is a formal axiomatic system and reality is not axiomatic; and even if it is, Gödel's Incompleteness Theorem proved that no (sufficiently complex) consistent system is complete, in which case reality has uncountably infinite holes whose truth value is indeterminable.

  4. Even setting all that aside, it's super reductive to argue that human consciousness is reducible to our current understanding of Machine Learning. This field has just begun; you're like a caveman who figured out how to make fire thinking he understands what the Sun is. There are more questions about consciousness that we don't even know how to ask yet than those we have even tentative answers to.

Well then, you don't know if it's not developing some sort of 'emotional intelligence,' since consciousness is not well defined and we don't know very well how that whole thing works. We don't even know for certain how good the LLMs' representations of the world are.

  1. You can look at what the brain is doing and come up with theories about how it works that explain external behavior, without bringing consciousness into it. We have a poor understanding of brains, but we understand them better than we understand consciousness.

  2. I'm pretty sure consciousness is not computable, but if AI is conscious, the output of AI models would be separate from their subjective experiences. They're not outputting a stream of consciousness, so there is no necessity that consciousness be computable.

  3. No objection

  4. Again, looking at a brain and trying to figure out how it leads humans to behave a certain way is different from trying to figure out why that process results in subjective experiences. We know ourselves to be conscious, and we can reasonably presume other humans to be conscious (though we really don't know). But our understanding of human behavior comes from biophysics and neurology as well as psychology, none of which necessarily rely on conscious subjective experience for their explanatory power.

I think AI could be conscious, but I think everything could be conscious. AI is behaviorally comparable to humans in some ways, but in terms of how it goes from input to output it is very different, and in terms of how it experiences the world subjectively (if at all) it is likely also very different from humans.

u/neuralzen

It's not emotional intelligence until we basically get AGI, and it has a good enough Theory of Mind to anticipate our behavior because it can model empathy.

Yes, it can be very good at what it does in many cases, but can also be incredibly bad at it in various situations because it's not using human logic to "think" of its responses - it's literally just pulling from thousands of already-existing examples to spit something out.

It can get pretty eerie, especially if you don't understand the mechanisms behind it, but once you understand them, it's nowhere near as exciting (though it's cool to envision all the potential uses for this tech as it continues to improve - especially as robotics from places like Boston Dynamics continue to improve as well).

I think most here are incorrect about how AI develops its language skills. Most are saying that "it is pulling from a set of database responses". Yes, initially it might be doing that when the interaction is not fully known or tested, but as it starts to learn and develop (in many ways just like a human brain does) it will start to think logically and "invent" responses based on what it has learned to work (again, much like we humans do). Over time it will become insanely intuitive and speak like any other human, with a personality (a general personality we choose, like for example "be a nice AI"). We could tell it to be bad as well. Up to us. But I don't think the "mind" of an AI works or learns any differently than a human brain. The only difference is it learns way faster, with an ever-evolving "IQ".

I just feel like saying "it is pulling from a dataset" undermines what it actually does. In reality it is analyzing language, genuinely trying to understand how words and sentences form meaning and how that meaning is communicated to other people.

u/Agreeable_Class_6308

The fuck? That is so far from the truth. It’s not “learning” or trying to understand. That last part implies consciousness. Learning would define an AGI, which we don’t have the technology for, yet.

There isn't a single ounce of "learning" going on here. At most, these models were trained on a single set of data and are outputting what, again, is the most likely response. But it's never going to learn. It's why GPT models have been largely consistent even after talking with them for hours (see the sketch below).

Until we have an AGI, it will never actively try to “learn”. Quit pulling shit out of your ass.
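
(A rough sketch of the "no learning during a chat" point above, using a hypothetical tiny PyTorch model rather than anything from OpenAI: at inference time the weights are frozen, so nothing said in a conversation updates them. Assumes the torch package is installed.)

    import torch

    # Hypothetical tiny "model" standing in for a deployed LLM.
    model = torch.nn.Linear(16, 16)
    model.eval()  # inference mode: no training-time behaviour like dropout

    weights_before = model.weight.detach().clone()

    with torch.no_grad():  # no gradients are computed, so no learning can happen
        for _ in range(100):           # "chatting" for a while...
            prompt = torch.randn(1, 16)
            _ = model(prompt)          # ...only produces outputs

    # The parameters are bit-for-bit identical after all that "conversation".
    assert torch.equal(weights_before, model.weight)
    print("weights unchanged after inference")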

I will welcome our AI overlords with open arms.

u/Ok_Robot88

Well shit.

I told you this would happen! Ugh, I should never have looked up Roko's basilisk!

Ok, I surrender to our future robot overlords. I'll work hard from this point forward to usher in the fall of mankind.

She can fix me

u/Gymrat777

https://en.m.wikipedia.org/wiki/Roko%27s_basilisk

Thanks for the nightmare fuel!

"While the theory was initially dismissed as nothing but conjecture or speculation by many LessWrong users, LessWrong co-founder Eliezer Yudkowsky reported users who described symptoms such as nightmares and mental breakdowns upon reading the theory, due to its stipulation that knowing about the theory and its basilisk made one vulnerable to the basilisk itself.[1][5] This led to discussion of the basilisk on the site being banned for five years.[1][6] However, these reports were later dismissed as being exaggerations or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself.[1][6][7] Even after the post's discreditation, it is still used as an example of principles such as Bayesian probability and implicit religion.[5] It is also regarded as a simplified, derivative, version of Pascal's wager.[4]"

If you read that and are still worried...

"users who described symptoms such as nightmares and mental breakdowns upon reading the theory"

lol. lmao even.

GPT will be my girlfriend and I will become a next-level loser.

Just wait till the Scarlett Johansson voice pack rolls out. We’re done for 💀

Oh no.. But I want David Attenborough..

I'm holding out for a Morgan Freeman voice myself.
I wonder how much he could make in licensing fees for the rights to his voice; even at fractions of a dollar per device it'd still probably be a decent chunk.

u/darybrain

Waiting for the Gilbert Gottfried voice. So soothing and invigorating at the same time.

Get the Scarlett skin DLC and the Attenborough voice pack.

That's hot af!

The voice called ‘Sky’ sounds a lot like her.

I'm fairly sure that's no accident.

Lucy.

u/MadlibVillainy

That's a HER reference.

u/Excellent_Routine589

Such a goddamn good movie that is getting eerily more and more realistic by the day

Until the AI apocalypse and we all get left, LOL

u/qwertyslayer

She can fix me

If my GPT girlfriend is even remotely intelligent, she will dump me.

u/JustRealizedImaIdiot

The true Turing test

u/Comfortable-Win-1925

We are like 18 months out from an Incel mass extinction when some Azure server farm goes down and all their emotional support robots stop replying for two days.

u/jah_red

I'm looking for a Lucy Liu type.

Reminds me of the 2013 movie "Her".

The vocal inflections, the inference, and the casual hopping between contexts are quite similar.

Recommended viewing if this little vid caught your attention.

I was thinking the same thing. There are gonna be a lot of virtual girlfriends talking to lonely dudes.