Researchers use integrated GPU to boost CPU speed

Researchers at North Carolina State University have found a way to improve CPU performance by more than 20 percent using a GPU built on the same processor die.

"Chip manufacturers are now creating processors that have a ?fused architecture,? meaning that they include CPUs and GPUs on a single chip," said Dr. Huiyang Zhou, who co-authored a new paper based on the research. He explained, "Our approach is to allow the GPU cores to execute computational functions, and have CPU cores pre-fetch the data the GPUs will need from off-chip main memory."



The research was performed in conjunction with AMD, which discussed plans to increase CPU/GPU integration in a presentation to analysts last week. Based on that presentation, the techniques identified in this research could be used in AMD processors within the next two years.

Although this research appears to be focused on current PC technology, most likely AMD's Fusion APU, it also has obvious applications for improving ARM processor performance. ARM's SOC (System On a Chip) design emphasizes power efficiency over speed, making it the standard choice for smartphones, tablets, and other mobile devices.

Along with a plan to transition into SOC processor production, possibly including ARM chips, AMD is promoting standardization across different processor architectures. Its HSA, or Heterogeneous System Architecture, initiative is intended to standardize the way the various components integrated on a single processor interact with each other.

Written by: Rich Fiscus @ 10 Feb 2012 16:06
Tags
AMD CPU GPGPU SOC HSA
  • 13 comments
  • DXR88

    Interesting, however not ideal, as their general purpose is in laptops or other low-profile machines that lack the ability to expand graphics power due to either cost-cutting measures or design issues.


    11.2.2012 01:30 #1

  • LordRuss

    OK, not being a design engineer with all the math degrees & such for any of this... purely coming from the background of implementation & end user dynamics:

    So much of this particular technology has started to rely heavily on the cooperation of the GPU in doing a lot of number crunching. Granted, I've seen a bunch of it being driven in the 3D graphical interfaces & those 'physical' environments, but if memory serves, I would have thought some universities have been looking into folding some of the math of proteins as well.

    So I offer this... If the graphics processors are pounding out numbers at comparatively/drastically higher rates than regular CPUs, why aren't manufacturers using these processors as the basis of their designs (as of recent)?

    Seems to me this would be the next step in the evolution. But I have missed things along the way too.

    http://onlyinrussellsworld.blogspot.com

    11.2.2012 13:39 #2

  • DXR88

    Originally posted by LordRuss: OK, not being a design engineer with all the math degrees & such for any of this... purely coming from the background of implementation & end user dynamics:

    So much of this particular technology has started to rely heavily on the cooperation of the GPU in doing a lot of number crunching. Granted, I've seen a bunch of it being driven in the 3D graphical interfaces & those 'physical' environments, but if memory serves, I would have thought some universities have been looking into folding some of the math of proteins as well.

    So I offer this... If the graphics processors are pounding out numbers at comparatively/drastically higher rates than regular CPUs, why aren't manufacturers using these processors as the basis of their designs (as of recent)?

    Seems to me this would be the next step in the evolution. But I have missed things along the way too.

    Most do. They're called RISC processors (SPARC, PPC, ARM, MIPS), the masters of their designated task, used for things such as radar, laser guidance systems, and many scientific purposes, including protein folding and predictive branching.

    Your general population uses CISC (x86, x64, Intel, AMD, etc.), the jack of all trades but master of none, often found in environments where each piece of hardware does not have its own hardwired chip.





    11.2.2012 14:47 #3

  • LordRuss

    Originally posted by DXR88: Most do. They're called RISC processors (SPARC, PPC, ARM, MIPS), the masters of their designated task, used for things such as radar, laser guidance systems, and many scientific purposes, including protein folding and predictive branching.

    Your general population uses CISC (x86, x64, Intel, AMD, etc.), the jack of all trades but master of none, often found in environments where each piece of hardware does not have its own hardwired chip.

    You're not making much sense. RISC hasn't been made/used for about 12-15 years & your CISC analogy doesn't hold water, as it washes back into itself as the argument for today's current computing technology.

    I'm talking about completely incorporating the different architecture of Tegra or Fusion(?) into just that, rather than the complaints of a stalemate of 'no further gains' in current CPU technology.

    Which, by the way, GPUs are currently being used for some scientific purposes, just in a limited way. So I'm saying: why hasn't every bit of this been pushed over the hump yet?

    Either I wasn't clear or you clarified what I already said with dead & redundant equipment I also knew about.

    http://onlyinrussellsworld.blogspot.com

    11.2.2012 16:51 #4

  • KillerBug

    I think the main problem is that developers just don't always take advantage of everything that is available simply because it isn't always available. For instance, many apps use nVidia CUDA yet refuse to use ATI cards that can do essentially the same thing simply because they are not specifically designed for it (slower workstation cards from ATI ignored). Other apps use neither, as CUDA isn't always there...or simply because they were too lazy to add CUDA support. ATI has something like CUDA on their workstation cards too (I forget the name at the moment)...but it is virtually unused simply because it is not on the run-of-the-mill desktop cards.

    As an i5 owner with a dedicated video card, I would love to see apps using the integrated GPU that I have no use for currently...but I honestly don't know how much of a boost I would see considering that most apps don't even bother to use CUDA.


    12.2.2012 01:04 #5
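
A vendor-neutral alternative to the CUDA/Stream split KillerBug describes is OpenCL, which NVIDIA's and AMD's drivers expose (AMD's Stream/APP SDK is an OpenCL implementation). Below is a minimal sketch of what writing against it looks like, assuming only that an OpenCL SDK and ICD loader are installed (link with -lOpenCL); it simply enumerates whatever GPU platforms the installed drivers report, so one binary covers any vendor.

// Minimal OpenCL device discovery -- one code path for any vendor whose
// driver ships an OpenCL platform. Assumes an OpenCL SDK / ICD loader is
// installed; build with -lOpenCL. Output formatting is illustrative only.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    if (num_platforms == 0) {
        std::printf("No OpenCL platforms found.\n");
        return 0;
    }

    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char name[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);

        // Count GPU devices on this platform; clGetDeviceIDs reports an error
        // (rather than a count of zero) when the platform has no GPUs.
        cl_uint num_gpus = 0;
        if (clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &num_gpus) != CL_SUCCESS)
            num_gpus = 0;

        std::printf("Platform: %-40s GPU devices: %u\n", name, num_gpus);
    }
    return 0;
}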

  • LordRuss

    Originally posted by KillerBug: As an i5 owner with a dedicated video card, I would love to see apps using the integrated GPU that I have no use for currently...but I honestly don't know how much of a boost I would see considering that most apps don't even bother to use CUDA.

    I agree. But I think CUDA & Stream (the ATI equivalent you were looking for) are more of a software 'switch' (if you will) to get the GPU involved in helping with CPU processing.

    Don't get me wrong, like the cereal, "it's great!", but they only seem to want to use it for video processing in the consumer market. And not that CUDA or Stream are the only viable options, they just seem to be the only two out there at the moment.

    So my blathering is about some Tucker or Tesla upstart taking (say) a CUDA & building a quad-core CPU layered off its foundation. If engineers are already writing code for GPUs to fold/unfold protein DNA, my feeble brain doesn't see the reason why it can't direct a little traffic on a motherboard.

    Thus killing this supposed stagnation of CPU processing speeds for a while. Granted, I'm leaving myself open for ridicule that these GPUs use a multi-processor approach to their ability to do their 'thing', thus giving the illusion of a higher MHz rating, but then equally, shouldn't we be able to have similar computers with similar CPUs?

    Thus the reason for the question to keep coming back on itself & the risk of me sounding like a crack smoker.

    http://onlyinrussellsworld.blogspot.com

    12.2.2012 11:15 #6

  • ddp

    LordRuss, RISC is still in use & production. http://en.wikipedia.org/wiki/RISC

    12.2.2012 15:28 #7

  • LordRuss

    Originally posted by ddp: LordRuss, RISC is still in use & production. http://en.wikipedia.org/wiki/RISC

    OOookay... I wasn't right. But Motorola was the biggest manufacturer of the processor. Now Qualcomm is in the game. But your link is saying "for all intents & purposes" the RISC-CISC lines are all but blurred. And if they're still in production, why aren't they calling them RISC processors? In your article they want to refer to them as ARM.

    I don't mean it necessarily like AMD's FX chip being renamed the AM(whatever). I mean the RISC seems to have died somewhere along the way, obviously not in the server market, but it somehow lived under a highway for a long time & now wants to live in the middle of Beverly Hills again.

    Besides, even 'they' adopted engineering rules of the x86 architecture, just like everyone else did. It's the micronization elements where we're starting to see a resurgence of all this.

    Still doesn't change the fact all these guys need to do whatever the video card guys are doing in processing technology & twist their nuts on a bit tighter.

    http://onlyinrussellsworld.blogspot.com

    12.2.2012 15:54 #8

  • ddp

    Was using them on Sun Microsystems boards at Celestica back in 1998-2000 before being transferred to another site to build Nokia cell phones & later Cisco boards.

    12.2.2012 15:59 #9

  • LordRuss

    I forgot about Sun... I knew they were still using them after CrApple dropped them like rats on the proverbial sinking ship... I just lost track after the whole Motorola thing. Then started mumbling to myself when I heard voices around the processors in Blackberries & such (or so I thought).

    I figured it was a bad drug induced flashback. Who knew?

    http://onlyinrussellsworld.blogspot.com

    13.2.2012 12:38 #10

  • ddp

    The processors on the Sun boards were made by TI & did not have pins but pads, like socket 775 & up.

    13.2.2012 13:21 #11

  • LordRuss

    Could I assume they were the trend setter for the x86 market going pinless then? Or were they just the first to naturally migrate?

    http://onlyinrussellsworld.blogspot.com

    13.2.2012 13:41 #12

  • ddp

    They were the 1st, as Intel didn't go pinless till socket 775.

    13.2.2012 14:07 #13
