World's fastest supercomputer vs. the human brain


A supercomputer broke 10 petaflops for the first time ever last week.

 

That's 10 quadrillion (10,000,000,000,000,000) floating point operations per second.

 

It was achieved by the K supercomputer in Japan.

 

The human brain is estimated to operate at about 1 exaflop (that's 1,000 petaflops).

 

So we're about a hundredth of the way to being able to simulate an entire human brain.

 

If the trend in supercomputer advancement continues as it has the past 40 years, then we'll reach human-brain scale late this decade.
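As a rough sanity check on that timeline, here's a back-of-envelope sketch in Python. The 14-month doubling period for the top TOP500 machine is my own assumption, not an official figure:

import math

K_FLOPS = 10e15          # K computer, ~10 petaflops (2011)
BRAIN_FLOPS = 1e18       # ~1 exaflop, the estimate quoted above
DOUBLING_MONTHS = 14     # assumed doubling time for the #1 system

doublings = math.log2(BRAIN_FLOPS / K_FLOPS)    # ~6.6 doublings needed
years = doublings * DOUBLING_MONTHS / 12
print(f"reach brain scale around {2011 + years:.0f}")   # ~2019

Which is indeed late this decade, under those assumptions.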

 


For you DIYers:

 

 

If you wanted to duplicate that setup at home, you'd only need 864 racks, 88,128 processors, and enough cash in your back pocket to cover an electricity bill of about $10 million a year.

 

So we're about a hundredth of the way to being able to simulate an entire human brain.

 

That's a rather disingenuous claim though, isn't it? Factor in things like still-not-fully-realised speech recognition and artificial intelligence (admittedly the two are closely intertwined): even once we've got the raw processing power, we'll still need to crack the algorithms behind abstract thought, lateral thinking, humour, emotion and so on. Those things are, despite the best efforts of the Japanese, still quite a long way off.


flops = FLoating point OPerations per Second... a floating-point number is one with a fractional part, e.g. 3.14159... it's the standard unit for measuring the speed of a cluster of processors, since most workloads don't operate solely on integers...
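If anyone wants a feel for how that gets measured, here's a minimal sketch in Python with NumPy: time a dense matrix multiply, which needs roughly 2·n³ floating point operations, and divide by the elapsed time. The number it prints is only a crude estimate.

import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                              # dense matrix multiply: ~2*n^3 flops
elapsed = time.perf_counter() - start

print(f"~{2 * n**3 / elapsed / 1e9:.1f} gigaflops")   # a desktop manages gigaflops, not petaflops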


I heard this is Microsoft's base configuration for Windows 8 - and even then you'll need to turn off most of the fancy graphics effects...


I designed Terra's cooling-system rack in the late '80s. It was a damn sight taller than me; now my watch has a better cooling system. I'm not going anywhere with this, but I did see two guys fly fishing in the Isar last Sunday. That will always be cool, no matter what language you speak.


I meant The Cray Structure but in Terra's for CPU. I should write these things down.

As a Canadian I should go find the new Irish Pub. See you there.


 

So we're about a hundredth of the way to being able to simulate an entire human brain

No, we're a hundredth of the way to being able to do the same number of floating point operations per second as a human brain has been estimated to carry out. That's not the same as being able to simulate it. We can't even accurately simulate the nervous system of C. elegans, which has around 300 neurons. The problem you describe is one of architecture, not speed.


I read that by 2050 we'd finally have enough processors (at their future speeds) to match the processing power of the human brain. Of course, such predictions never foresee that the Japanese will keep trying to outdo them by building ever more powerful supercomputers when they're not busy making the most bizarre TV shows you have ever seen!


Where did you read that? I think we'll have human brain capability by 2020.

 

Nvidia's GK110 is due to be released Q4 this year. I forget off-hand how many flops it's capable of, but it's of the order of teraflops, and it's built on a 28 nm process. Shrink that down to 11 nm, give it another few iterations of Moore's law, fill enough cabinets with them, and you're in the exascale range.

 

Nvidia have said they expect exascale by 2017/18. Intel have said zettascale by 2029. Human brain scale is thought to be somewhere in between.
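Here's a toy version of that scaling argument; every figure below is a placeholder assumption of mine, not a vendor spec:

gpu_tflops_today = 2.0        # assumed order-of-magnitude figure for a GK110-class card
moore_doublings = 4           # assumed further doublings from shrinks and new architectures
future_gpu_tflops = gpu_tflops_today * 2 ** moore_doublings   # ~32 TF per card

exaflop_in_tflops = 1e6
cards_needed = exaflop_in_tflops / future_gpu_tflops
print(f"~{cards_needed:,.0f} cards for an exaflop")           # ~31,000 cards

So under those guesses it takes a machine room full of cabinets rather than one, but the order of magnitude is within reach on that kind of timescale.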


How many football fields will a machine like that occupy? Unless it weighs roughly 3 lb, fits into 1,300 cm³ and can keep functioning on resources costing about £1 a day, it ain't comparable. Then it also has to know which way up it is and be able to move itself around, let alone do some number crunching.


Such a machine will probably start off very large and get progressively smaller until it fits inside the head of a small mobile robot.


Exascale in 2017

 

 

Improvements in processors from Intel, AMD and Nvidia indicate that a 1U or blade HPC server will have 7 teraflops of peak performance in 2014.

 

15,000 nodes would be needed for 100 petaflops.

 

The next generation of chips for 2017 would then be able to reach exascale performance.
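The arithmetic behind that node count, spelled out using the figures quoted above:

node_tflops = 7                       # projected peak per 1U/blade server in 2014
nodes = 15000
print(nodes * node_tflops / 1000, "petaflops")   # 105.0 petaflops, i.e. roughly 100 PF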


I wonder how they came up with the estimate of how many flops a human brain can perform. Our brains suck at numerical calculation compared to even the earliest computers; our brains are built for pattern recognition. As for simulating the human brain, sure, we could probably do that now, but it would be a really slow simulation: hours of run time for a simulation step of milliseconds, though that's just a wild guess. Maybe they mean a real-time simulation?

 

Assume the chemical and electrical state of one neuron can be modeled with a set of (likely strongly nonlinear) differential equations of whatever order. Then you have a system of 100 billion of these sets of equations (plus a description of how they're coupled). Then you determine the initial conditions of the system. Then you present the system with inputs. Voila! You've simulated the human brain! Any neurophysicists out there want to supply me with the equations? I'll plug 'em into MATLAB and see what happens. Anyone care to donate some RAM? ;)
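No MATLAB to hand, but here's a toy version of that recipe in Python: a few coupled linear "neurons" with made-up coefficients, stepped forward with a crude forward-Euler integrator. A real neuron model would be nonlinear and the system vastly larger.

import numpy as np

N = 1000                      # neurons (the brain has ~100 billion)
dt = 1e-4                     # time step in seconds

rng = np.random.default_rng(0)
x = rng.standard_normal(N)    # state variable 1 for each neuron
y = rng.standard_normal(N)    # state variable 2 for each neuron
W = rng.standard_normal((N, N)) / N   # made-up coupling matrix
u = np.ones(N)                # constant external input

for step in range(1000):      # 0.1 s of simulated time
    dx = -0.5 * x + 0.1 * y + W @ x + 0.2 * u
    dy = 0.3 * x - 0.4 * y + 0.1 * u
    x += dt * dx
    y += dt * dy

print(x[:5])                  # final state of the first few "neurons"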


Yes, the figure assumes real-time emulation, and it's based on the number of synapses and their signalling rate.

 

The human brain has ~86 billion neurons.

Neurons have about 10,000 synapses each on average, although it varies a lot between brain areas.

Synapses signal at up to 100 Hz.

 

Multiply all that together, and it comes out within an order of magnitude of an exaflop.

 

That's off the top of my head. There are more precise calculations out there just a short Google search away.
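Spelling that multiplication out (and note that treating one synaptic event per second as one "operation" is itself a big assumption):

neurons = 86e9
synapses_per_neuron = 1e4
max_rate_hz = 100
print(f"{neurons * synapses_per_neuron * max_rate_hz:.1e} synaptic events per second")   # 8.6e+16

Call it roughly 10^17 events per second; allow a handful of floating point operations per event and you're at the exaflop figure quoted at the top of the thread.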


OK, but it sounds like that only tells you how fast signals can propagate through the brain. Somehow we perceive meaning in patterns of signals, but there are no floating point operations going on. To me it seems like apples and oranges. In order to know how many flops we need to simulate the brain, we need a model of the neuron.

 

Let's take a generic 1st order, 2 variable set of linear differential equations with 1 input:

dx/dt = Ax + By + Cu

dy/dt = Dx + Ey + Fu

 

That's 10 floating point operations to find out what happens at the next instant in time (6 multiplications, 4 additions). Now, I have no idea how complex the model for a neuron is, but if the flops-in-the-brain calculation is not based on even a guesstimated structure of the model, then I don't think we can call the methodology scientific. To me it would be more like quasi-science for a press release aimed at popular media consumption. Which is fine, it gets people interested. I just like to convert apples to oranges, and then compare oranges with oranges.

 

*Edit: 12 operations if the time step is taken as 1, since x at the next time instant is x + dx/dt, and the same for y.


The required complexity, or level of detail, of neural models is unknown. That's a subject of ongoing research.

 

Some simulations treat a neuron as a single point with x thousand inputs and one output. Other simulations are more biologically faithful with multi-compartmental models. The former obviously requires less compute power, but whether it's sufficient to reproduce intelligence, nobody knows for sure.

 

Two of the leading researchers in this field are Eugene Izhikevich in San Diego and Henry Markram in Lausanne. Both Izhikevich and Markram's group have published figures on the compute requirements.
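For a feel of the single-point end of that spectrum, here's a minimal sketch of Izhikevich's well-known two-variable spiking neuron (his 2003 model, with textbook "regular spiking" parameters); it's just an illustration, not the code either lab actually runs:

a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0       # membrane potential (mV) and recovery variable
I = 10.0                      # constant input current
dt = 0.5                      # time step in ms

spikes = []
for step in range(2000):      # 1 second of simulated time
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30:               # spike threshold: reset v, bump u
        spikes.append(step * dt)
        v, u = c, u + d

print(len(spikes), "spikes in 1 s of simulated time")

A multi-compartmental model replaces those two variables with hundreds of coupled compartments per neuron, which is where the compute cost explodes.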

 


 

A supercomputer broke 10 petaflops for the first time ever last week.

 

Six months later, the TOP500 list has been updated today, and the fastest supercomputer now clocks in at over 16 petaflops.

 

Germany's SuperMUC and JuQUEEN occupy the 4th and 8th spots respectively.

 

Hopefully 2013 will see some fairly decent human brain simulations running on JuQUEEN, provided the funding comes through.

 

[Photo: Sequoia, the world's fastest supercomputer]


 

I think we'll have human brain capability by 2020.

 

2020 is for a man's brain. It'll take till 2050 to equal a lady's brain!

