
krisztiant • 9 years ago

“In the science world there is a big data problem,” says Robert Patton, an Oak Ridge computer scientist. “Scientists are now doing simulations that are simply producing too much data to analyze,” he says.
Yep, that's the problem. Our brain can incredibly efficiently ditch the data we don't need for simple pattern-recognition tasks like identifying cat videos. Human brains use ca. 20 watts for this feat and can do it much better than megawatt-scale supercomputers.

A new paper from researchers working in the UK and Germany says: "One of the other differences between existing supercomputers and the brain is that neurons aren't all the same size and they don't all perform the same function. If you've done high school biology you may remember that neurons are broadly classified as motor neurons, sensory neurons, or interneurons. This type of grouping ignores the subtle differences between the various structures; the actual number of different types of neurons in the brain is estimated at anywhere from several hundred to perhaps as many as 10,000, depending on how you classify them."

So even if "we use a GPU-based infrastructure to train our deep learning models" it is still only one type of processing unit, compared to the brain's ~10,000. Thus, that might be a key for next-generation processors, i.e. diversifying their components to better mimic what nature already figured out over billions of years of trial and error (as inefficient brains simply got eaten by other, more efficient life forms).

Conclusion: if we don't want to spend some billion years on trial and error, we have to learn how our brains work. The only problem is that we have to learn it with the very same thing, i.e. use our brains to figure out how our brains work, and that's something like a really hard problem or even a paradox.

Commuted • 9 years ago

Your understanding is dated. Even though computers can do millions of Boolean operations in milliseconds and a human brain can barely work out a few syllogisms that are connected, you identify a brain advantage wherever computers need to solve problems using exhaustive methods. Well, that isn't necessarily true. The semantic web and RDF triple stores change this relationship. The brains of the computer are fast moving from computational power into data organization, like your brain. Computers are learning the ontological relationships and deep understanding of grammar and objects from the precise human perspective. When people realize this they have an Oh !@#$ moment. You'd be surprised what a computer can do after reading 20 years of the Wall Street Journal.
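For the unfamiliar, a triple store holds facts as subject-predicate-object triples that can then be queried. A rough sketch in Python using rdflib, with entities and facts made up purely for illustration:

```python
# A minimal sketch of a triple store using Python's rdflib.
# The namespace, entities, and facts here are made up for illustration only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()

# Knowledge is stored as subject-predicate-object triples.
g.add((EX.Fido, RDF.type, EX.Dog))
g.add((EX.Dog, EX.subClassOf, EX.Mammal))
g.add((EX.Fido, EX.livesWith, Literal("Alice")))

# Query the stored relationships with SPARQL: which things are dogs?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?thing WHERE { ?thing a ex:Dog . }
""")
for row in results:
    print(row.thing)
```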

YeahRight • 9 years ago

And I will believe it when the first machine can actually understand what I just wrote. Until then it's all just hype.

MonkeySeaMonkeyDew • 9 years ago

They'll be able to kill us long before then ;)

YeahRight • 9 years ago

If you have Lebensangst, see a shrink.

krisztiant • 9 years ago

"The brains of the computer are fast moving from computational power into data organization, like your brain." For this, computers need architectural advancements, like your brain / most people's brain (except some of them). Even if computers could organize data better than now, they'd still have the problem of "producing too much data to analyze" (can't prioritize data).
"a human brain can barely work out a few syllogisms that are connected" This is simply not true. Einstein's brain could very successfully find the "connected syllogisms" using much less data than the supercomputers now, also using about 20 watts for this achievement, while computers even today couldn't solve it using gigawatts of power. Supercomputers are indeed better in raw power / memory etc. and can compute things very fast which for humans would take years or even centuries, but cannot solve things (yet) which for us is just a piece of cake (like 'bullet proof' pattern recognition etc.). Also, for computers to really understand the actual problem to solve, requires some kind of 'consciousness', otherwise they will need you forever to program / feed / control them with the necessary data to compute, so they still require our ability to "work out the syllogisms" which are connected, as they don't now which are actually connected in terms of the actual problem.
"Computers are learning the ontological relationships and deep understanding of grammar and objects from the precise human perspective." Even if they understand grammar, they still don't understand us and the problems we face, as it needs - as I mentioned earlier - some kind of consciousness, which they don't have yet, but for them to gain it, we don't have better model as our own brains. That's what my comment was about, and since you could "barely work out a few syllogisms that are connected" in it, therefore, "your understanding is dated" (a bit).

MonkeySeaMonkeyDew • 9 years ago

All the heavy computing is in TRAINING, once it's trained it takes very few Watts to run, hence voice recognition on phones (Siri etc).
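A rough sketch of that split (a made-up toy model, not anyone's actual system): training grinds over the whole dataset many times, while answering one query is a single cheap forward pass.

```python
# Rough sketch (toy, made-up network) of why training costs so much more
# than inference: training repeats passes over the whole dataset many
# times; serving one query is a single forward pass.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 128))          # toy training set
y = (X[:, 0] > 0).astype(float)                 # toy labels
W = rng.standard_normal((128, 1)) * 0.01        # one-layer "model"

def forward(x, W):
    return 1.0 / (1.0 + np.exp(-x @ W))         # logistic output

# TRAINING: many passes over all 10,000 examples.
for epoch in range(100):
    p = forward(X, W)
    grad = X.T @ (p - y[:, None]) / len(X)      # gradient of the log loss
    W -= 0.1 * grad

# INFERENCE: one cheap forward pass for a single new input.
print(forward(rng.standard_normal((1, 128)), W))
```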

As for trial and error, the techniques used evolve towards ever improving solution spaces for the training data, it's not as random as you imply.

krisztiant • 9 years ago

"All the heavy computing is in TRAINING, once it's trained it takes very few Watts to run, hence voice recognition on phones (Siri etc)."
Siri (+ Microsoft Cortana / Google Now etc.) use online "supercomputers" for voice recognition and search (still using a lot of watts); Siri et al. don't even work offline, as the phone's power is way too tiny for these tasks.

"As for trial and error, the techniques used evolve towards ever improving solution spaces for the training data, it's not as random as you imply."
That's exactly what evolution does too, i.e. "evolve towards ever improving solution spaces for the training data, it's not as random as you imply". If it weren't doing this (i.e. were random), we wouldn't be here, but we are, because all our inefficient-brained ancestors (in their trial-and-error attempts) were eaten by other creatures, while the efficient ones, after analyzing the training data (e.g. the deeds of the dead), used evolving techniques to improve solution spaces (run like hell / win the battle using weapons / get nutritious food etc.).
Thus, your reply wasn't really successful, actually it wasn't at all.

MonkeySeaMonkeyDew • 9 years ago

I couldn't work out why you were being hostile then I realised you didn't like me contradicting you! Well sorry buddy but I'm going to have to continue...

krisztiant: "the phone's power is way too tiny for these tasks"

Sorry but “Offline speech recognition” is an option in Android and it only takes a couple of hundred MB, not the terabytes of training data, as I explained in my original comment.

http://thenextweb.com/lifeh...

Clearly you are guessing with your answers, whereas I'm simply trying to set the record straight. What is your motive here?

--

krisztiant: "Thus, your reply wasn't really successful, actually it wasn't at all."

Given that my reply was for everyone on the site and not just you, I'll let them be the judge with their +/- voting...

--

As for your allusions to biomimicry, perhaps I've come across as hostile when I'm simply trying to embellish.

I suggest you read On Intelligence by Jeff Hawkins and look at their work at Numenta for the leading edge on biomimicry and attempts to model the human brain in software. It's a strong effort.

krisztiant • 9 years ago

"I couldn't work out why you were being hostile then I realised you didn't like me contradicting you!"
No, I wasn't hostile, and you didn't contradict me either; actually I contradicted you, so I just pointed out your errors, just like now:
"Sorry but “Offline speech recognition” is an option in Android and it only takes a couple of hundred Mb not the Terabytes of training data, as I explained in my original comment."
Cortana can also do some basic "offline speech recognition" just like Google Now, but if you want the real thing, i.e. natural speech and meaningful answers, you shouldn't use them offline. To prove this, here's Gary's experience with it (your link's uppermost comment):
"I have trouble getting Google Now to work offline. I have the English (US) language pre-installed on my moto g, yet when I turn off WiFi and go offline Google Now doesn't work! The hot word "OK Google" still works, and it listens to my voice command but then it respond with this every time "Didn't get that. Try speaking again"! I know some voice commands need internet connection but I was trying to just set a simple alarm! Would that need an internet connection?"
Exactly as I told you.
"I suggest you read On Intelligence by Jeff Hawkins and look at their work at Numenta for the leading edge on biomimicry and attempts to model the human brain in software. It's a strong effort."
Yep, check out his TED talk, "Jeff Hawkins: How brain science will change computing", and you will see he's basically saying the same as my comments above. So, hopefully we've ironed this out already.

MonkeySeaMonkeyDew • 9 years ago

OK, checked your Disqus and you seem to take this condescending tone with everyone. You should look at the tone of your comments yourself...

You turn useful sharing of facts into points of conflict not for the betterment of the reader but to protect a vastly oversensitive ego

Laters

krisztiant • 9 years ago

"OK checked your discus and you seem to take this condescending tone with everyone."
No, I just "take condescending tone" with people, who state things which are not based on facts, only their opinion / enthusiasm about something, and trying to push it as the reality at all hazards.
If you checked out my Disqus comments a bit more carefully, you would see that I agree with everybody, who state reasonable things, just like I agreed with you, in things, which were reasonable / based on logic or facts.

"You turn useful sharing of facts into points of conflict not for the betterment of the reader but to protect a vastly oversensitive ego"
The only purpose of my commenting here (as I don't even comment anywhere else, and don't plan to keep commenting for too long either, only until the specific purpose is reached) is useful sharing of facts for the betterment of the reader (even if that sometimes means points of conflict). If you think about it, we've agreed on many things already.

Runt1me • 9 years ago

Just reminded me of the Amiga computers from the 1980s & 90s: built-in speech synthesis, speech recognition programs, done on ~0.5 MB RAM & a single ~7 MHz CPU. Also had a full preemptive multitasking OS. Oh how far we've come.

h0cus_P0cus • 9 years ago

Wow, between this and that piss-poor reverse engineering article earlier, it's evident that Wired is 5 years behind the cutting edge, and 2 years behind Nvidia press releases...

Izzy • 9 years ago

It's a pop culture tech magazine. Did you just now realize this?

JHG • 9 years ago

Totally agree. A non-article giving no new info whatsoever.

Commuted • 9 years ago

OpenCL.

Darkness • 9 years ago

Not to mention the obvious limitations of the GPU: massive power budget, limited computational functionality (forget if statements), and if your application doesn't fit that particular box - too bad.

Of course the world's fastest HPC system (see top500.org) is powered by a massive number of multi-core Intel Xeon Phi boards, which were originally the Intel Larrabee GPU (too late, too slow, and now HPC vs GPU). The good news: x86 architecture and if statements work, so they require very little of the code restructuring that all GPU solutions do.

The GPU crowd is up against some hard limitations. It will be interesting to see what they can pull off with their ASIC designs. Their fast iteration is limited by the fab technology as it slowly settles below 20nm. nVidia has complained (loudly) that the newer processes scale neither in power nor economically. In layman's terms that means instead of getting a smaller, faster, cheaper part when they moved from 28nm to 20nm, they got a slightly better part that didn't save power and cost as much as the previous version. Not much gain, and the signs for sub-20nm aren't looking very positive either.

This article is based on old technical information. It does touch, though, on the huge data-collection issue for large science projects, not all of which have the funding to solve that very sticky problem; thus they are required to solve it as cheaply as they can.

JHG • 9 years ago

Dude, you are widely misled:
- GPUs and ASICs are not at all the same thing. ASICs are Application Specific Integrated Circuits, i.e. hardware designed for a specific function. GPUs are quite the contrary: designed for games and used for everything requiring floating-point calculations.
- GPUs do not have a "massive power budget" or "limited computational functionality". Compared to CPUs they are an order of magnitude more energy-efficient per megaflop and are just as easy to program; you just need to learn CUDA or OpenCL (see the sketch after this list for a taste).
- Intel Xeon Phi boards or Nvidia KX boards are neither GPUs nor CPUs in the sense that you can neither play games on them nor boot your operating system on them. They bring additional parallel processing power to the CPU if and when the software used makes a call to certain specific functions.
- "instead of getting a smaller faster cheaper part when they moved from 28nm to 20nm they got a slightly better part that didn't save power and cost as much as the previous version" ... There ARE no 20 nm GPUs yet, and when there are, their power efficiency will necessarily be better, as a smaller process for the same circuit necessarily uses less power...

Darkness • 9 years ago

Sorry to disturb your world. But the modern GPU is more like an ASIC than anything else. This is how they enable their fast engineering cycles.

Look at how branching is dealt with in a GPU. The answer is: do it only when *forced* to, as it is horrifically slow. The reason many of the functions in modern graphics programs are small shaders is that they load the data, do a set of SIMD transforms and exit, leaving branching decisions to the CPU. Don't get caught out by the limited stack space either.
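A CPU-side numpy analogy (not actual GPU code) of why: SIMD hardware handling a divergent branch effectively evaluates both sides for every lane and masks the results, so you pay for both paths.

```python
# CPU-side numpy analogy (not GPU code) of how SIMD hardware handles a
# branch: evaluate BOTH sides for every element, then blend with a mask.
# Divergent branches therefore cost the sum of both paths, which is why
# shader/GPU code avoids them where possible.
import numpy as np

x = np.linspace(-3.0, 3.0, 8)

# Scalar CPU style: each element takes only one branch.
scalar = np.array([xi * xi if xi > 0 else -xi for xi in x])

# SIMD/GPU style: both branches computed for all lanes, then selected.
mask = x > 0
if_side = x * x          # "if" branch, computed for every lane
else_side = -x           # "else" branch, also computed for every lane
simd = np.where(mask, if_side, else_side)

print(np.allclose(scalar, simd))   # True: same answer, different cost model
```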

Also, please note that the modern GPU also has a separate connector *or two* for power. That isn't there for looks. It is there to provide the additional power they need that cannot be provided by the bus. A high-end GPU can draw nearly as much power as the entire rest of the system.

The Xeon Phi runs Linux. How is that not running an OS? It is exactly like running 50 Linux copies in a compute cluster, only in one chip. That was by design: the HPC (High Performance Computing) engineer can port their application from a cluster to a chip with minimal effort; often none is required. That is the opposite of what happens with GPGPU efforts, where getting the highest performance takes effort per application, often a lot. And video gaming has nothing to do with HPC, so running a video game is pointless. However, some engineer usually gets Doom or Quake running on whichever part can access the display.

Sorry, the nVidia CTO was complaining about 22nm. I said 20nm, my mistake. TSMC has yet to get much functional below that. TSMC is actually making more progress on 16nm than on their 20nm fab. The 28nm-to-22nm effort for nVidia did not produce the expected gains. The way the semi business works is that they transition to a smaller process and the result is a part that either runs faster or uses less power, and they get more parts per wafer so the cost is less. However, 22nm at TSMC did not provide good yields for many. nVidia was just the most vocal about it. They still used it, but the benefits were not there. They did not get the expected power reduction or gain sufficient speed, and low yields meant the production parts weren't cheaper because they had to pay for the full wafer regardless of the yield (that is the reason AMD is using other fabs than GlobalFoundries; their initial contract terms only counted functioning parts).

Also note that physics comes very much into play at these scales, all the more so the smaller the process. Quite often in unpleasant ways. Transistor idle power and other power-leakage problems cost more than anticipated. This means that a transistor leaks current when off and/or uses (wastes as heat) more power when on. In the 22nm process that was at play. As mentioned above, the yields were lower than at 28nm, partly because parts were tested and exceeded the power requirement. Or, more accurately, they wouldn't run as fast as planned within the power budget for the chip. So your phrase 'necessarily' should be 'expected', as it isn't a guarantee. This is at least some of the reason the 20nm process has made so little progress, and the reason that nVidia has indicated they are waiting for the 16nm process instead.

Note that Samsung often reserves their best process internally; at least the ~16/14nm process was mostly used by Samsung itself last year, from what I caught in the trades.

Dude, out.

JHG • 9 years ago

Darkness, thanks for the details. I guess we disagree on definitions, unsurprisingly as GPGPU computing is a relatively new thing and the terms used are not always clearly defined.

By GPU I mean a Graphics Processing Unit, the way Nvidia defined it in 1999 with the first GeForce. In that sense, GPGPU add-in boards (a term used by Nvidia for their KX boards) are not GPUs, even though they have the initials GPU in their name and the same processors inside them. By CPU I mean the Central Processing Unit, i.e. the one you boot your system from. In that sense add-in boards are not CPUs, even though in Intel's case they do have x86-based processors in them. A better term, IMHO, for these would be Co-Processor Add-in Board (CPAB?)

I believe the point of the article above was that the author was wowed to discover some researcher had obtained a high level of computing power using only a few boxes with parallel GPUs in them. This of course has been a widely known trick in the computer-rendering and, more recently, cryptocoin-mining communities since the appearance of Nvidia's CUDA programming API in 2007.

Darkness • 9 years ago

So we're good, then. The term GPU has been in use for a long time, BTW. I recall with fondness several of my S3 and ATI cards, but nothing stood higher than the 3dfx cards. They were absolute beasts (for their time).

The good news today is that the interfaces to GPUs are limited to roughly four if you limit the field to PCs: Nvidia/AMD discrete GPUs and Intel/AMD embedded GPUs in their CPUs. Mobile is a different beast (kittens, really) but the basic rules still apply. Using OpenGL allows cross-platform code, with some limitations.

Not that many more programming solutions than back in the '80s and '90s, but at least they are all doing the same thing, or nearly so. Just minor tweaking of API calls for setup and various gotchas when running shader code.

It was funny to read the 'big news, undiscovered facts' that have been true for a couple of decades. I really wanted to use the Nvidia CUDA language but, after Nvidia promised an open API, it refused to even run if an ATI card was present in the system. Really disappointing; I had an ATI card in a Linux workstation and bought an Nvidia card to run CUDA on. I didn't need fancy graphics, I wanted compute power. Sadly never happened. The Nvidia card didn't have drivers for Linux so it was useless.

I seem to recall the Xeon Phi and the Nvidia solutions being referred to as Compute Co-Processors, so your CPAB is pretty close. The only references I find recently are to GPGPU, though, which is what even Nvidia is calling them.

Nice chatting with you.

JHG • 9 years ago

" I didn't need fancy graphics, I wanted compute power. Sadly never
happened. The Nvidia card didn't have drivers for Linux so it was
useless."

CUDA has existed for Linux for some time now. I currently program CUDA on Linux. Have you tried it lately?

Darkness • 9 years ago

The last time I tried CUDA it complained because of the ATI/AMD GPU (primary display) installed in addition to the Nvidia GPU. Refused to run. I couldn't use the Nvidia card for the display because their drivers didn't support that new a card. I gave it to my son, IIRC. He was/is a W* gamer.

Last time I read their website it was very much Nvidia-centric. I just checked, still the same deal. No AMD cards allowed, which probably means none installed in the system, just like before. That limitation never made sense, still doesn't. Like only allowing one SCSI controller in a system.

I only have one Nvidia card now, it is in my SteamBox. I got a deal on it as it came with a $60 Steam code. Newegg is great.

JHG • 9 years ago

Yeah, you do need to uninstall the ATI card. When installing CUDA you actually need to wipe the system of even the default Nouveau Linux drivers for Nvidia, so that only the latest proprietary Nvidia driver is installed.

YeahRight • 9 years ago

Total bull. The central problem of AI is not computational complexity but a data representation that can map the physical world onto a set of solvable (and solved) problems, and that is completely unsolved. To give a human example: running faster would not be equivalent to writing the next great American novel, no matter how fast one could run.

Chinese Gum Jerry • 9 years ago

No one said AI development was finished at this stage; they are just reporting where we're at. Sheesh!

YeahRight • 9 years ago

AI hasn't even started, yet. It has been an empty promise for over sixty years now and it is still an empty promise.

dan1101 • 9 years ago

Yep what we have now is basically storage and logic.

aarondsc • 9 years ago

That's what AI is... It's how the processes are structured that makes it AI.

RedPills • 9 years ago

depends how you define intelligence. does it include wisdom?
how about charisma... hehe
Seriously tho, logic doesn't write a novel, yet that takes intelligence right?

Benny X • 9 years ago

"logic doesn't write a novel, yet that takes intelligence right?"

...or a million monkeys with a million typewriters and some time.

RedPills • 9 years ago

and who will read the trillions of novels to determine when there actually is one? Though there may be an argument in there, I'd have to really stretch to find it.

Benny X • 9 years ago

well now you just took the fun right out of the whole creative exercise. :(

RedPills • 9 years ago

I fear i have not successfully ascertained your meaning
I am still learning what is this human thing called 'hue-more'

Benny X • 9 years ago

yeah, I'm still learning about 'hue-more', too. When choosing a movie to watch, I stay away from comedies. They aren't nearly as funny as they pretend to be.

RedPills • 9 years ago

tough crowd

alanh • 9 years ago

Nope...that takes creativity and the means to convey it.

Guest • 9 years ago

Not to mention

pete • 9 years ago

“But new CPU chips—with many cores—may approach the performance of GPUs in the near future.”

I wonder what this is referring to. It seems multi-core chip development has been stuck at 16 cores for a few years now and I don't hear any news of advancements.

YeahRight • 9 years ago

You can build a 1024 core CPU from stock components, if you are willing to go with a very simple core. A high density FPGA will let you do that (cost approx. $2-4k per chip). These data flow and SIMD machines have existed for a long time and they are even useful for research into neural nets and optimization problems, but they have absolutely nothing to do with AI.

pete • 9 years ago

I thought the person I quoted was talking about ordinary x86 multi-core CPUs that can run normal (parallelized) code, which would spare all the trouble of (re)writing code specifically for GPUs or other specialized cores.
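For what it's worth, a minimal sketch of the kind of "ordinary parallelized code" I mean, using Python's multiprocessing; the workload is made up, the point is just that no GPU-specific rewrite is involved:

```python
# Sketch of ordinary CPU code spread over many cores with no GPU-specific
# rewrite. The "simulation" here is a made-up stand-in workload.
from multiprocessing import Pool
import math

def simulate_chunk(seed):
    # Stand-in for one independent slice of a larger simulation.
    return sum(math.sin(seed * i) for i in range(100_000))

if __name__ == "__main__":
    chunks = range(32)
    with Pool() as pool:                    # one worker per CPU core by default
        results = pool.map(simulate_chunk, chunks)
    print(sum(results))
```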

YeahRight • 9 years ago

That still wouldn't solve the problem. Intelligence has very little to do with how one parallelizes the problem, or Google's server farm could already "think". We have sufficient computational resources; what we don't have is the smallest idea of how "we" work.

pete • 9 years ago

Could be, but I just wanted to know what new many-core CPUs he was talking about. While not having anything to do with AI per se, they are still very useful for High Performance Computing and also virtualization/cloud services.

Carla Rojas M. . • 9 years ago

we need quantum computers now!

Full Metal Pizza • 9 years ago

Kittehs need them.

That Guy Who Goes There • 9 years ago

Or be able to jump to a universe in which we already have them!

Carla Rojas M. . • 9 years ago

omg that would be awesome!

mystixa • 9 years ago

Done a bit of GPGPU stuff, and they are fast... but they're not that fast.

You can look at the relatively recent graduation of Bitcoin and other cryptocoins from GPUs to other technologies that can run the calculations faster. (No, that isn't my only experience with GPU stuff.)

They do a really good job on parallel problems. Still, they have limitations on memory speed, bus speed, bandwidth, and power consumption as well. There is a constant rebalancing of the architecture behind the scenes in computing. As the work progresses, so will the hardware design.

OGwashingtonDomingo • 9 years ago

I thought this article just might save the unfollow after that horrendous "run for your life photography" piece. Switching to TMZ.