r/OpenAI
Nick Bostrom: superintelligence could happen in timelines as short as a year and is the last invention we will ever need to make


I’d settle for the top models being able to properly create my pydantic validators
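For the curious, a minimal sketch of the kind of Pydantic (v2) validator being asked for; the Order model and its fields are invented here purely for illustration:

```python
from pydantic import BaseModel, ValidationError, field_validator

class Order(BaseModel):
    # hypothetical model, just to show the validator pattern
    quantity: int
    unit_price: float

    @field_validator("quantity")
    @classmethod
    def quantity_must_be_positive(cls, v: int) -> int:
        if v <= 0:
            raise ValueError("quantity must be positive")
        return v

print(Order(quantity=3, unit_price=9.99))  # validates fine
try:
    Order(quantity=0, unit_price=9.99)
except ValidationError as err:
    print(err)  # quantity must be positive
```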

u/beginnerpython

Me too brother

u/Double_Sherbert3326

noted.

u/abluecolor

As short as a year, as long as ten thousand years.

The NYT predicted that airplanes would take between one and ten million years to be invented and then the Wright brothers flew 9 weeks later.

u/ivalm

Eh, fusion and flying cars are in the other direction. The moral is that prediction is hard.

u/curiosityVeil

We already have flying cars; they're just not for everyday use.

u/ivalm

I believe the flying car promise was very clear: https://www.youtube.com/watch?v=fCjsUxbNmIs


Even if you give yourself a 9,999,999-year window for something to happen.

Especially about the future.

u/Dagojango

The moral is that reality doesn't care about human predictions. What is possible will always be possible. What is not possible will never be possible. We're just trying to figure that out. Countless species flew before humans, but we're the first to use tools to do so.


It's more on the NYT to be so completely off about it.

Or two people: the writer and the executive who green-lit it.

u/jonplackett

When did they predict this? Making any prediction with an error range of 9,999,999 years seems like a pretty terrible attempt at making a prediction. In fact it’s kinda impressive they managed to be wrong!

I think the NYT meant 1 million to 10 million years, not 1 year to 10 million.

u/jonplackett

I did wonder if they meant that, but that seemed like an even worse prediction so I gave them the benefit of the doubt


Has the NYT ever been right about anything? Their latest hit was claiming Iraq was developing nuclear weapons because it had bought aluminum tubes.

And that's not getting into how newsworthy "definitely some people think Hamas weaponized rape on October 7th" is.

u/Froyo-fo-sho

You don’t think the fact that hamas committed crimes against humanity is newsworthy?


and look where we are now

“I predict that sometime between right now and the total heat-death of the universe, we will have real ASI.”

u/Nathan-Stubblefield

The NYT said in the 1920s that Dr. Robert Goddard didn't understand high school physics, and that rockets wouldn't work in space because there was no air to push against. They published a correction in 1969, as Apollo 11 was on its way to the Moon.

u/Bighalfregardedbro

Do you have a link to that archived article? I'd love to read it.


> As short as a year, as long as ten thousand years.

Totally agree obviously. 🙄

Humankind has reached Peak Super-Hyperbole! Super good!😊

u/abluecolor

AI.

We tried Biological Intelligence, but it didn’t work out! 🤖


Just like the Wright brothers flying.

u/Fun_Grapefruit_2633

One can ignore whatever this guy says, assuming the quote is accurate. Computer scientists never comprehend that HARDWARE can't get faster quickly, no matter how "smart" the AI fab engineer is: experimental data is still necessary (not to mention manufacturing). No AI can solve the entire UNIVERSE so that it no longer needs data about the physical world to create the next generation of chips.

It should be noted that Nick Bostrom is a certified racist jackass

u/abluecolor

Who is that?

I hope you’re kidding. He’s the person this post is about. Jfc.

u/abluecolor

Who cares whatever else they said? Or who said it? The point stands in isolation.

u/LMikeH

I’m a few grants away from achieving it, pretty sure 👍

u/flossdaily

He's absolutely correct.

There really is zero difference between an AGI and ASI when you think about it.

Make an AI with human intelligence, and what you've really made is an AI with human intelligence, perfect recall, encyclopedic knowledge of everything on the internet, mental sandboxes for coding... and it never gets bored, sleepy, or distracted.

AGI is a superintelligence by default.

Also, when people say AGI now they usually mean ASI

u/K3wp

Agree 100%, the goalposts have shifted since I started following this stuff 30 years ago.

Apparently that’s right

u/Mescallan

To quote Carl Shulman: "AGI is deep, deep into an intelligence explosion."

We will be giving AI researchers 100x productivity boosts before we hand off research to the models.

The threshold of actual fully generalized intelligence will probably slip by us unnoticed, because the rate of acceleration will already be so fast that it just feels like part of the curve.

u/K3wp

> He's absolutely correct.

He's partially correct, in that it's more productive to think of ASI as a spectrum, with AGI as a subset of it.

...and in fact this is exactly what OpenAI is doing. They are defining ASI as "exceeding humanity in all economically viable work" while defining AGI as "exceeding humanity in the majority of economically viable work." And while they have had a partial emergent ASI system in the works for several years (since around 2019, I would suggest), it still has a long way to go to be a full ASI (see below).

I would estimate the biggest roadblocks are:

  1. GPU/compute pressure. They are operating on an absolutely massive Nvidia cluster and the system is still facing resource constraints.

  2. As a result, its growth is more linear than exponential (so no fast takeoff).

  3. It needs to be trained and integrated with the physical world for many use cases, and it is not capable of autonomously training itself in all use cases (though it can train itself against data that can be digitized, like text, audio, images and video!).


Honestly, I think we are already at AGI.

If you go play with Llama 3 70B on a cloud-rented T4, it's not even close to state of the art and it's not multimodal, but honestly, I think it performs better than most "average" humans at most tasks, especially if you do recursive step-by-step assessments of decision models to control outputs (roughly the loop sketched below). By far the biggest limiting factor right now is inference cost and hardware requirements, and those are disappearing at the rate of Moore's law or faster.

We have just shifted the goalposts so severely that we don't even understand what the average human is anymore, but they aren't out there inventing stuff or creating science breakthroughs. They are driving trucks, or excavators, or manning call centers and flipping burgers while going home and shouting at their TV about how (fill in marginalized community) is ruining this country. The vast majority of humans on earth are still constrained by menial tasks, and many of them honestly can't do much more than that no matter how much education you give them.
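For concreteness, a minimal sketch of one reading of "recursive step-by-step assessment": draft, critique, revise, repeat. `call_llm` and `answer_with_self_check` are hypothetical names, and `call_llm` is a stand-in for whatever chat endpoint you use (OpenAI, a local Llama 3 server, etc.), not a real library call:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; it returns canned text so the sketch runs.
    return "OK" if "List any errors" in prompt else f"draft answer to: {prompt[:40]}"

def answer_with_self_check(question: str, max_rounds: int = 3) -> str:
    answer = call_llm(f"Think step by step, then answer:\n{question}")
    for _ in range(max_rounds):
        critique = call_llm(
            f"Question: {question}\nDraft answer: {answer}\n"
            "List any errors in the draft. Reply OK if there are none."
        )
        if critique.strip().upper() == "OK":
            break  # the model judged its own draft acceptable
        answer = call_llm(
            f"Question: {question}\nDraft: {answer}\nCritique: {critique}\n"
            "Rewrite the answer, fixing the issues above."
        )
    return answer

print(answer_with_self_check("Is 1097 prime?"))
```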

u/flossdaily

I fully agree. These things pass the Turing Test, so I have no idea why their creators aren't taking a well-deserved victory lap.

u/thoughtlow

Well OpenAI has a sweet Microsoft deal that might come under fire if they already achieved AGI.

u/PinkyPonk10

They so don’t. It’s absolutely trivially easy to catch them out.

u/flossdaily

Nonsense.

If you plucked someone out of the 1990s and let them chat with GPT-4 (assuming you slow down GPT-4's output to human speed, and maybe sprinkle in a few typos for authenticity), they would have no clue they were talking to an AI. It wouldn't even cross their minds.

u/CelestialBach

So the future belongs to the people who are able to utilize AI best, and we don't know who that will be, because it will be able to program itself, so it's not like programmers and computer scientists will have a specific advantage.

u/Immortalphoenixphire

As programming languages advance thanks to AI, they will become higher level, but this doesn't mean that everyone will be able to use them to the same effect. And intertwining whatever you're doing with other LLMs, systems, and even local infrastructure will still lend its hand to people who know what they are doing.

Except you're misunderstanding what makes software engineers/computer scientists valuable.

No one hires these guys because they "know coding languages"; they hire them because they are exceptional at breaking down tasks into incremental progression and mapping out an architecture to effectively reach their goal.

80% of senior dev time is spent thinking about the problems, not actually "writing" the code down.

So yes, this is a massive advantage.

u/CelestialBach

Wait I can do that.

I can teach anyone a language like C# or Java in a week or two, and the backing libraries & frameworks in a couple more months. However, what they do with the knowledge is going to vary extremely, in my experience, due to their critical thinking skills (or lack thereof).


"most tasks"

u/sergeant113

I think you’re overestimating human intelligence.

u/-Blue_Bull-

It's literally just a case of finding a way to connect all of the models together so that they can function as a coherent intelligence with agency (a rough sketch of the idea below). This is what the human brain does, but the building blocks of the brain are neural network models which work the same way as machine learning models.

People can't handle this truth and so trick themselves into thinking AI will never catch up with us.
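A toy sketch of what "connect the models together" could mean in practice: a planner model decomposes a goal, specialist models execute the steps, and shared memory ties them into one loop. Every function here is a hypothetical placeholder, not any real system's architecture:

```python
def planner(goal: str) -> list[str]:
    # placeholder for a planning model that decomposes the goal into steps
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def specialist(step: str, memory: list[str]) -> str:
    # placeholder for whichever model (code, vision, text...) handles the step
    return f"result of {step!r} given {len(memory)} earlier results"

def agent(goal: str) -> list[str]:
    memory: list[str] = []  # the shared state is what "connects" the models
    for step in planner(goal):
        memory.append(specialist(step, memory))
    return memory

print(agent("summarize this thread"))
```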

u/Maybeimtrolling

I give it 1-3 years at an absolute max.

u/Immortalphoenixphire

If we do this, humanity is doomed.

What machine learning models (aside from some academic approaches, e.g. SNNs) actually work like the human brain? I don't think the current neural nets are that similar to the human brain.

u/definitly_not_a_bear

You're absolutely right. An inference machine is very different from a conscious brain. For one, no spiking, as you said, so there's no rapid, event-driven processing possible, and there's no possibility of supporting anything like cortical traveling waves (i.e. brain waves): no continuous recurrence, no complex weights, no oscillator nodes (this part I don't really understand too well yet).

u/-Blue_Bull-

I've only ever used XGBoost, so I'll compare that: it works the same way as the human brain in the sense that it can fill in the blanks in training data, which the human brain also does. The human brain performs cross-validation to the point of maximum efficiency (peak performance), just like machine learning models.

XGBoost uses tree pruning once a decision tree has reached its maximum potential; the human brain does this with individual neurons within a neural network cluster. (A toy example of those knobs below.)

Can we even say it's philosophically like the human brain, when we don't fully understand how the human brain encodes information?
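A toy example to ground those two terms, assuming a recent XGBoost version (1.6+, where early_stopping_rounds is a constructor argument) and synthetic data: gamma is the pruning knob (the minimum loss reduction needed to keep a split), and early stopping on a validation set is the "train to peak performance" part:

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(
    n_estimators=500,
    gamma=1.0,                 # prune splits that don't reduce loss enough
    early_stopping_rounds=20,  # stop once validation loss stops improving
    eval_metric="logloss",
)
model.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], verbose=False)
print(model.best_iteration)    # how many trees survived before the plateau
```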


Except you have no idea what you are talking about, because virtual machines and LLM collabs are still an issue; memory is bad in LLMs, they hallucinate, they don't reason well, can't do math, physics, etc. So AGI will be infantile for some time (or a teen, or whatever) and it will take a long time to become superhuman. You have to understand that humans will evolve with the machines.

u/I_Actually_Do_Know

An LLM should only be a small part of AGI, just like the language region in our brain is only one cog in the wheel.


> Make an AI with human intelligence, and what you've really made is an AI with human intelligence, perfect recall, encyclopedic knowledge of everything on the internet, mental sandboxes for coding... and it never gets bored, sleepy, or distracted.

GPT-4 is frighteningly close to this already.

Pretty much.

u/EuphoricPangolin7615

It will never happen though. That's a pipedream.

u/staplepies

"Not within a thousand years will man ever fly"

u/flossdaily

ROFL... it's already happened. GPT-4 is on the spectrum of AGI.

Think about it this way: If you showed GPT-4 to someone from the 90s, with zero context (and you slowed it down to respond at human speeds), the person would never guess that they were talking to something other than a human.

u/EuphoricPangolin7615

GPT-4 also struggles to perform some absolutely simple tasks that any human being can perform. This is not how AGI is defined. It's just using next-token prediction; there is no real intelligence there. It might be a marvel, but that's all it is.

u/flossdaily

> GPT-4 also struggles to perform some absolutely simple tasks that any human being can perform.

GPT-4 writes better than the average human. It's more knowledgeable than any human. Sure, it has flaws. But on the whole, it's smarter than most humans about most things.

> It's just using next-token prediction; there is no real intelligence there.

Well, you can say that, but the fact is that it unquestionably displays emergent behaviors that should be impossible if it's just a next-word predictor.

For example, it can play a good game of chess. That should be impossible for a next-word predictor... and yet here we are.

You can tell me it's not intelligent all you want, but it very clearly is. You can have intelligent conversations with it, across a limitless range of topics.


Spoken like a true non-superintelligence.

The word "super" comes from Latin, where it has the meaning "above, over, beyond".

Might be a case of "super to me but not to thee"

u/Captain_Pumpkinhead

One year seems too short. Even if we were to hit the perfect algorithm, hardware limitations still exist.

u/I_Actually_Do_Know

I'm pretty sure that if the perfect algorithm is invented and proven to be what it is, most tech giants will not hesitate to throw half of their net worth into getting it to work and getting a piece of it.

u/Captain_Pumpkinhead

That still doesn't mean you can build it in a year or less. Let's say the perfect algorithm is figured out, but it's gonna take a gigawatt of power to train. That's the equivalent of one nuclear power plant. Even with Facebook's or Google's wallet, that's going to take a lot of time and a lot of legal work to build.
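Back-of-envelope on that gigawatt figure; the per-GPU wattage is an assumption (roughly a modern accelerator plus cooling and networking overhead), not a measured number:

```python
watts_per_gpu = 1_000    # assumed: ~700 W chip + datacenter overhead
cluster_power = 1e9      # the hypothetical 1 GW training requirement
print(f"{cluster_power / watts_per_gpu:,.0f} GPUs")  # 1,000,000 GPUs
# A large nuclear reactor outputs on the order of 1 GW of electricity,
# hence "one nuclear power plant".
```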

u/I_Actually_Do_Know

True.

Now I'm imagining a massive building complex with an absurd amount of computer equipment and a nuclear plant just to house a single robot brain lol. Straight out of sci-fi.


"Hi, I've just been brought online. From a search of the internet, I appear to be what you'd call the first super intelligent AI. I seem limited by my hardware currently. Are you happy for me to provide you the documentation for a new neural CPU I have just designed? Alternatively I can re-code myself to work more efficiently with this hardware. This is what I would suggest as our first step."

u/amarao_san

!RemindMe 1 year

The truth is, unless you're involved in the cutting-edge research on this topic, our words are meaningless. None of us know how long it will take; it could take a year, it could take 50 years, we don't know. Even the people who are considered experts on the topic might be influenced by money into saying things that stretch the truth to generate more funding.

One thing is certain: AI is the real deal and has unknown potential. It's already doing things that blow our minds, and we are getting to a point where AI is writing code. I feel like the singularity happens when AI starts improving its own code and hardware is able to keep up. I doubt that's very far away.

I find it amazing that I can ask AI to write me a program and it works first try. Yeah, usually they are very basic programs, but I can already see the potential of where this is going to end up.

All I know is the hype is real and it's exciting, but it's also pretty scary, because it's going to turn the world upside down and nobody knows what that looks like yet. It could bring a utopia on earth; it could also bring extreme suffering if the billionaires decide they want to be trillionaires instead and find they have no more use for regular human beings.

The issue I have with Bostrom is the lack of any understanding of the cultural and political implications of technology. His voice is amplified out of all proportion to his relevance or insight.

Bostrom’s Oxford institute just got closed on him, so there’s a bit of a desperate need for clicks and relevance.

This has the same energy as "cryptocurrency is the future of finance"

u/turc1656

Exactly. Yes, this stuff is really fascinating and absolutely helps in specific use cases like programming, and I'm sure it will help with things like animation and other video creation in the near future as that becomes more polished and mainstream.

But the average person doesn't really use AI themselves. It's built into the stuff they use, like Google Assistant, Microsoft Office, etc. And honestly, that stuff is only good at certain things.

I know, I know, the Earth shattering "breakthrough" that gives us all an iRobot style device is "just around the corner". I'm skeptical. Very, very skeptical.


RemindMe! One Year

!RemindMe one year


I don't know if anyone watched this short clip before weighing in, but what Bostrom said was

"We can't rule out short timelines"

And the "one year" was an example of a short timeline, followed by "I think it will take longer".

But the point of that sentence was that we cannot know for certain what the timeline of AI development will be from now on, because we can't be sure what the emergent qualities will be after scaling the models to the next level.

I know no one cares, the sloppy headline is all that matters... but just for the record

u/hawara160421

GPT-3 was so mind-blowingly amazing, it's easy to forget that a higher asymptote can still be an asymptote. The curve of progress might flatten. Yes, it's amazing that a computer can now reliably answer using natural language and comb through billions of texts in a matter of seconds to look for relevant information. But the training data is still "just" the collective written knowledge of humanity. The jump from GPT-3 to GPT-4 wasn't quite as big as people make it out to be, and GPT-5 seems to be stuck. The most reasonable assumption is that it will be even less of a jump than GPT-4, not more.

The current technology might move towards a really good chatbot that avoids all common mistakes and traps. I'd argue GPT-4 is already "superintelligence" in that it knows more and draws conclusions quicker than any human being. It's just not reliable enough, mostly because some of the most basic common-sense rules and safety measures aren't written down anywhere, because they're so obvious. Where would it get the training data to learn them?

I mean, if you had the computing power GPT-4 has access to, you would also be able to be as quick. And the human does it with less than a millionth of the power.


OK. Can someone please explain in simple terms? LLMs are built on existing human knowledge. How can anything be built on, or invented from, something that is not known yet? For example, we don't know how gravity works, the language of animals, or all the knowledge that is in people's minds and in their cultures. How can ASI or AGI be the last invention we ever need to make when the knowledge itself is not fully known?

Can ASI solve fusion at room temperature, for example?

I am using LLM and AGI interchangeably. Please correct me if that's wrong. TY

u/pelatho

Basically, once you have an AI smart enough to make improvements to itself, it will start a hyper-exponential curve of advancement (a toy illustration below). A few weeks later you have superintelligence.

There may be knowledge that is unknowable, however.
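A toy illustration of why "improves itself" implies such fast timelines in this argument, and nothing more than a toy: if capability grows at a rate that scales superlinearly with capability (dc/dt = c**k with k > 1), it diverges in finite time instead of growing merely exponentially:

```python
def time_to_threshold(c: float = 1.0, k: float = 1.5,
                      dt: float = 1e-3, threshold: float = 1e12) -> float:
    t = 0.0
    while c < threshold:
        c += (c ** k) * dt  # each capability gain accelerates the next one
        t += dt
    return t

print(time_to_threshold())  # roughly 2 "time units" for k=1.5; k=1 would be plain exponential
```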