25 Comments

  • F1N3ST - Monday, February 19, 2007 - link

    800 cores for 10 TFlops I say.
  • jiulemoigt - Wednesday, February 14, 2007 - link

    Maybe 80 un-synced in-order chips are pointless, but that stack as a memory controller...

    80 socketed, un-synced, in-order chips are pointless, since most of the functionality comes from branch logic and out-of-order execution, and not syncing them together means that you could only pass data to them, not through them; even then, passing data around would be a mess.

    Yet that stack sitting underneath a modern CPU, especially if it could be used as a modern memory stack with cache-speed data access for four cores, would be a speedup many corporate customers could use. With the memory controller on the chip in the center controlling the data flow and treating system memory as a virtual extension of the stack, just as modern hard drives are virtual extensions of system memory, now we are talking about accessing data as fast as we can use it. The branch logic is going to have to get even better, though.
  • najames - Monday, February 12, 2007 - link

    Remember the Itanium and the BILLIONS of dollars Intel spent on the thing? Remember how they thought every company would buy them by the truckload? Remember how expensive they were?

    Intel did deliver on the Core 2, but I am still leery of anything they hype up.
  • Brian23 - Monday, February 12, 2007 - link

    I know that this chip won't run x86 code, but how does a Core 2 Duo E6600 compare to this as far as teraflops go?
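
    For rough scale, a back-of-the-envelope comparison (assuming the textbook peak-rate formula and Conroe's 8 single-precision FLOPs per core per cycle; sustained throughput is of course lower):

        2~\text{cores} \times 2.4~\text{GHz} \times 8~\text{FLOPs/cycle} = 38.4~\text{GFLOPS}

    So the research chip's 1 TFLOPS is roughly 1000 / 38.4, or about 26 times, the E6600's theoretical peak.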
  • AnnihilatorX - Monday, February 12, 2007 - link

    I believe that, due to the physical structure of the silicon lattice (its indirect band gap), silicon is just not a good candidate material for an on-chip laser design. It's the exact same reason why blue laser diodes are made of gallium nitride rather than silicon.

    It's time to move on to much faster and better materials than silicon.
  • fitten - Monday, February 12, 2007 - link

    Yes, but silicon has the advantage of being
    a) very cheap, comparatively
    b) plentiful
  • benx - Monday, February 12, 2007 - link

    I think it is time to stop building computers around the von Neumann cycle idea. There will always be the FSB performance hit; to counter the problem, CPU builders just add more L1/L2/L3, and now maybe L4?

    Time to make the Intel cycle without the FSB =)
  • fikimiki - Monday, February 12, 2007 - link

    80 cores sounds great for a webserver, Java, or parallel processing, but how does it stand against the price and performance of 4 x quad-core stacked on a single board?

    Intel is trying to achieve the same thing as Transmeta, or is just showing its marketing muscle once again. I'm sure the Teraflop chip is going to lose to a specialized variety of chips, like NVIDIA, ATI, Cell, or Opteron together; you put 3-4 of those in and that's it.
    We hear that the R580 (ATI) can run some calculations 20x faster than an ordinary x86 chip, the same with Cell, so what the hell is a teraflop chip? Especially with integer-only calculations?
  • JarredWalton - Monday, February 12, 2007 - link

    I think you're missing the point of this article and the processor. Intel has no intention of ever releasing this particular Teraflop chip into the mainstream market. This is an R&D project, nothing more, nothing less. All you have to do is look at the transistor counts to realize that performance isn't going to be competitive right now. Intel chose 80 cores simply because that was what fit within their die size constraints; if they could've fit 100 cores, they would have done that instead.

    In the future, Intel is going to take some of what they've learned with this research project and apply it to other processors that they actually intend to mass produce and sell. That probably won't happen for several more years at least, and when they get around to releasing those chips you can be sure that they won't have 80 cores, and that the cores they do have won't be anything like the simple processing units on this proof of concept.

    How long before anything like this becomes practical in desktop computers? How long before it becomes necessary? Those are both interesting questions, and software obviously has a long way to go first. I have no doubt that someday people are going to have computers with dozens of processor cores sitting on their desktops and in their laptops. Whether that's going to be in 10 years or 100 years... time will tell. I just hope I'm around long enough to see it! :-)
  • Andrwken - Monday, February 12, 2007 - link

    Basically they are just using it as a proving ground to show what can be done when more bandwidth is needed than traditional FSB and HyperTransport can deliver. It would definitely be worthwhile in a configuration with, say, 20 cores, using 8 for CPU, 8 for video, and 2 for physics (one example).

    But my question is: doesn't this kind of go along with the programmable generic cores that Intel supposedly wants to use in its new discrete graphics cards? If so, it could be supposed that the code for this kind of monster is already being worked out, and one multicore chip could be programmed to use each core as necessary, finally eliminating all the discrete cards and leveraging the power of one large multicore chip as needed. (Sony came close with the PS3 but still needed a discrete graphics chip at this point.) They get the programming down with the discrete graphics cards and then use that for single-chip integration down the road. That's just how I am reading into it, and I may be way off base, but this tech may be much closer to viable than we are giving it credit for, especially in a cheap laptop or small form-factor application.
  • creathir - Monday, February 12, 2007 - link

    Along with all of this wonderful multi-core bliss comes the software side of things. Multicore means the software needs to be written asymmetrically, and this will be an incredibly hard challenge, especially for real-time applications such as video games (see the threading sketch below). The concept is fantastic, but the proof is in the pudding, as they say. I do find Intel's routing technology quite interesting, especially the idea of stacking the L1/L2 memory on top of (or below, rather ;)) the cores. How exactly would the interconnect between them work, I wonder? Should be interesting to see what all 3 of these companies come out with in the coming years. I suppose the naysayers of Moore's Law will once again be disproven...

    - Creathir
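
    A minimal sketch of the parallel-programming burden creathir describes, using nothing beyond standard POSIX threads (the sizes, thread count, and names are arbitrary illustration values, not anything from the article): even a trivial array sum has to be hand-partitioned into per-core chunks, with the partial results explicitly joined and combined.

        /* build with: cc -O2 -pthread sum.c */
        #include <pthread.h>
        #include <stdio.h>

        #define N_THREADS 4          /* arbitrary: one thread per core */
        #define N 1000000

        static double data[N];

        struct chunk { int start, end; double partial; };

        /* Each thread sums only its own slice; no sharing, so no locks needed. */
        static void *sum_chunk(void *arg) {
            struct chunk *c = arg;
            c->partial = 0.0;
            for (int i = c->start; i < c->end; i++)
                c->partial += data[i];
            return NULL;
        }

        int main(void) {
            for (int i = 0; i < N; i++) data[i] = 1.0;

            pthread_t tid[N_THREADS];
            struct chunk c[N_THREADS];
            int per = N / N_THREADS;

            /* The programmer, not the compiler, decides how work is split. */
            for (int t = 0; t < N_THREADS; t++) {
                c[t].start = t * per;
                c[t].end = (t == N_THREADS - 1) ? N : (t + 1) * per;
                pthread_create(&tid[t], NULL, sum_chunk, &c[t]);
            }

            /* The join is the synchronization point; combining results is manual. */
            double total = 0.0;
            for (int t = 0; t < N_THREADS; t++) {
                pthread_join(tid[t], NULL);
                total += c[t].partial;
            }
            printf("sum = %.0f\n", total);
            return 0;
        }

    Scaling that hand-partitioning, and the synchronization around it, from 4 threads to 80 is exactly the software problem this thread keeps circling.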
  • Goty - Sunday, February 11, 2007 - link

    So basically it's a Cell processor on steroids. Other than the chip stacking, what's so new about it? People have been talking about 3D packaging for a year or two now, and not just Intel.
  • SocrPlyr - Monday, February 12, 2007 - link

    In a way, yes, and in a lot of ways, no. Yes, the individual tiles are floating-point units, but this chip is not meant to be a functional replacement for anything, the way Cell is trying to be. You really cannot compare this chip to anything available on the market; it is only a proof of concept. The choice of tiles that are floating-point units was probably due to the fact that ultra-high-performance workloads are generally almost completely FP-dependent, and when testing and playing with this thing, those types of applications are easy to come by. To be honest, this chip seems a lot like a DSP chip, and mentioning those, you will realize that the Cell processor is little more than an altered one of those. Really, on all sides there has been little technology that is completely new, just better implementations.
  • oldhoss - Sunday, February 11, 2007 - link

    I'll bet that SOB would give two 8800GTXs a run for their money... CPU-limited be damned! ;-D
  • mino - Sunday, February 11, 2007 - link

    "Since the per-die area doesn't increase, the number of defects don't go up per die."

    Any sensible person knows that the defect rate is (mostly) dependent on the number of functional units (i.e. transistors), provided the defect rate of a single unit is fixed.

    The fact that it is NOW mostly tied to die area is caused exactly by the fact that we do NOT yet use a stacked-die approach (see the yield model below).

    Otherwise a nice news piece. Thanks AT.
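
    In symbols, mino's point follows from the standard first-order Poisson yield model (a textbook approximation, not a figure from the article): per-die yield is

        Y_\text{die} = e^{-A D_0}

    where A is the die area and D_0 is the process defect density. Stacking k dies leaves each individual die's yield unchanged, but the assembled stack works only if every layer does:

        Y_\text{stack} = (Y_\text{die})^k = e^{-k A D_0}

    So total defect exposure scales with the total silicon (roughly, the transistor count), even though per-die defects stay flat.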
  • mino - Sunday, February 11, 2007 - link

    sorry for typos...
  • notposting - Sunday, February 11, 2007 - link

    quote:

    The obvious solution to this problem is to use wider front side and memory buses that run at higher frequencies, but that solution is only temporary. Intel's slide above shows that a 6-channel memory controller would require approximately 1800 pins, and at that point you get into serious routing and packaging constraints. Simply widening the memory bus and relying on faster memory to keep up with the scaling of cores on CPUs isn't sufficient for the future of microprocessors.


    The picture above this shows the Terascale slide:
    http://images.anandtech.com/reviews/cpu/intel/tera...
  • sprockkets - Sunday, February 11, 2007 - link

    We have a solution to the problem of ever-increasing CPU speed. My question is: who here needs it?

    For those who need to open 80 Firefox tabs while video encoding, virus scanning, and watching an HD movie, all at the same time?

    Data sets did need to get bigger, but check this out: music files started out at small sampling rates until, around the Win98 era, they reached the CD standard. It stopped there since no one needs more than that, that is, 44.1 kHz at 16-bit resolution (the arithmetic is worked out below). If you can hear 96/192 kHz 24-bit music better, fine, but we have others saying that 128 kbps MP3 was CD quality.

    Video resolutions made their way from 640x480 to now around 1600x1200, plus widescreen variants of that. Color depth sits at 32-bit. Can you see it improving much beyond that?

    OK, so we can what, go 3D now, holographic?

    Sorry, Intel and AMD, but the vast majority of the people you sell your technology to can live off a $30 processor, $50 of RAM, the smallest HDD, and a $30 optical drive that does everything.

    It would be cool to see a motherboard with built-in DDR3 or DDR4 memory for a CPU/GPU AMD Fusion core, with 2GB of it, and 32GB of flash built on as well. Let's go for silent computing, you know, back in the day when all processors only had tiny heatsinks on them!!!
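
    For scale, the CD-audio claim above is straight arithmetic:

        44.1~\text{kHz} \times 16~\text{bits} \times 2~\text{channels} = 1411.2~\text{kbps} \approx 1.41~\text{Mbps}

    so a 128 kbps MP3 is roughly an 11:1 reduction from the raw CD bit rate.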
  • joex444 - Monday, February 12, 2007 - link

    What part of the article was confusing to you?

    NOT FOR RETAIL SALE, RESEARCH USE ONLY.

    I got the idea, guess you didn't. PWNT!
  • Larso - Monday, February 12, 2007 - link

    So, why did we ever bother inventing plastic materials? Or why invent the laser? The laser is a good example of an invention that was expected to be a useless curiosity but turned out to be hugely useful.

    But this case isn't even comparable to that; there are indeed problems waiting to be solved by this solution. All servers with more than a handful of CPUs could be cut down in size and power usage tremendously, and what about supercomputers? They are going to be extremely powerful when they change to this kind of CPU.

    And by the way, you have to be quite narrow-minded not to see the (sales) potential of supercomputing at home. Let's have computer games with scarily intelligent AIs :)
  • Navitron - Monday, February 12, 2007 - link

    In the words of Bill Gates, "No one will need more than 640 KB of memory for a personal computer." You sound just like him :P Don't bash the technology just because "right now" we don't need it. What about in 10-20 years? Do you still think your Core 2 Duo is gonna cut it in 15 years? Can an IBM 80386 run Doom 3? Will today's AMDs and Intels run -insert game here- 10 years from now?

    So don't assume that just because we don't need it now, we won't need it in 3 years.
  • cscpianoman - Sunday, February 11, 2007 - link

    The average consumer might not need it, but large industries will be grabbing at these things faster than you can imagine. Think of health care, for example: the trend is to move towards genetic manipulation/prescreening. These industries want to download a person's entire genetic profile, process it, and return the results for Alzheimer's, cancer, and heart-problem risk in a matter of minutes. Furthermore, the entertainment industry would love to create more special effects and render them that much faster; I'm sure that, if they could, Pixar would already be placing an order for these. There are hundreds of applications out there that require the power and capability of multi-core. Sure, the consumer may not need it, but the consumer only accounts for something like 5% of what Intel, AMD, or whoever makes.
  • mino - Sunday, February 11, 2007 - link

    They need it (to sell). Period.
  • Justin Case - Sunday, February 11, 2007 - link

    In other words, Intel is doing the same thing that IBM and AMD are (with Cell and Torrenza + Fusion), only with some made-up numbers and more PowerPoint charts. Unless they vastly improve their compilers' parallelization, or come up with a full suite of software optimized for multi-core chips (80? It's hard enough to take full advantage of 4!), this will remain something that "can be done", but which most people will have no use for.
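
    Amdahl's law puts numbers on that parenthetical: with parallel fraction p of the work and n cores, the best-case speedup is

        S(n) = \frac{1}{(1 - p) + p/n}

    Taking p = 0.95 purely as an illustrative assumption, S(80) = 1/(0.05 + 0.95/80) is roughly 16x, and even infinitely many cores top out at 1/0.05 = 20x. The serial fraction, not the core count, is the ceiling.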
  • joex444 - Sunday, February 11, 2007 - link

    attack switch
