Cray CX1000
From Wikipedia, the free encyclopedia

The Cray CX1000 is a family of high-performance computers manufactured by Cray Inc. It consists of two groups of systems: the first is intended for scale-up symmetric multiprocessing (SMP) and comprises the CX1000-SM and CX1000-SC nodes; the second is intended for scale-out cluster computing and comprises the CX1000 Blade Enclosure and the CX1000-HN, CX1000-C and CX1000-G nodes.

The CX1000 line sits between Cray's entry-level CX1 Personal Supercomputer range and Cray's high-end XT-series supercomputers.[1]

CX1000 scale-up symmetric multiprocessing nodes

The CX1000-SM and CX1000-SC nodes can be used for cluster computing, but they are designed for scale-up symmetric multiprocessing (SMP). In a cluster, the CX1000-SM is intended to serve as the master (service) node and the CX1000-SC as a compute node, although either node can take on the other role, and an HPC cluster can be built from either node type or a mixture of both. When used for SMP, the CX1000-SM and CX1000-SC nodes are connected by a cache-coherency interconnect, called the Drawer Interconnect Switch in Cray literature. It is a built-in subassembly of the nodes rather than a standalone device, and is based on Intel QuickPath Interconnect (QPI) technology.
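
Although the CX1000-SM and CX1000-SC are hardware products and Cray does not prescribe a particular programming model for them, the scale-up design is easiest to see from the software side: because the Drawer Interconnect Switch keeps caches coherent across the joined nodes, ordinary shared-memory code runs across all of their sockets unchanged. The following C/OpenMP sketch is purely illustrative and not Cray-specific; it assumes only a compiler with OpenMP support.

    /* Minimal shared-memory (scale-up SMP) sketch in C with OpenMP.
     * On a cache-coherent SMP system such as a QPI-linked pair of
     * drawers, every core sees one address space, so threads on
     * different sockets can safely share `data` and the reduction
     * variable `sum` without any explicit message passing.
     * Compile (e.g.): gcc -fopenmp smp_sum.c -o smp_sum
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const long n = 1L << 24;              /* one shared array of ~16M doubles */
        double *data = malloc(n * sizeof *data);
        if (!data) return 1;
        for (long i = 0; i < n; i++)
            data[i] = 1.0;

        double sum = 0.0;
        /* Threads may be scheduled on any socket; hardware cache
           coherency keeps the shared reduction correct. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += data[i];

        printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
        free(data);
        return 0;
    }

The point of a coherent SMP machine is that the same binary simply sees more cores and more memory; spanning a second drawer requires no source changes.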

CX1000 scale-out cluster computing nodes

[Image: A CX1000 Blade Enclosure populated with eighteen CX1000-C compute nodes. The Local Control Panel is the rectangular object with a blue screen and the Cray logo below it; the two shorter blades just below the panel are the fan blades.]

The CX1000 scale-out cluster computing group consists of the CX1000 Blade Enclosure, the CX1000-C compute node, the CX1000-G GPU node and the CX1000-HN management node. Unlike the CX1000-SM and CX1000-SC, these nodes cannot be used for scale-up SMP, as they were designed without cache-coherency capability between nodes. The CX1000-C and CX1000-G have blade form factors, while the CX1000-HN is a 2U rackmount server. The CX1000-HN is intended to act as the head (service) node of an HPC cluster whose compute nodes are CX1000-C and/or CX1000-G blades.
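
The scale-out side is the mirror image of the SMP design: with no cache coherency between blades, work is distributed as separate processes that communicate by explicit messages, and the head node is typically used to launch and manage the job. The C/MPI sketch below is again illustrative rather than Cray-specific; the launch command, process count and hostfile contents are assumptions for the example.

    /* Minimal scale-out (distributed-memory) sketch in C with MPI.
     * Each rank runs in its own address space on a compute node;
     * data moves only through explicit messages, which is why no
     * cache coherency between nodes is required.
     * Compile and run (e.g.):
     *   mpicc cluster_sum.c -o cluster_sum
     *   mpirun -np 18 --hostfile hosts ./cluster_sum
     * where `hosts` lists the compute-node hostnames (site-specific
     * and purely illustrative here).
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank contributes a local value; MPI_Reduce collects
           the global sum on rank 0, conventionally the process
           launched from the head/service node. */
        double local = 1.0, global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %.0f across %d ranks\n", global, size);

        MPI_Finalize();
        return 0;
    }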

References

  1. Morgan, Timothy Prickett (March 23, 2010). "Cray's midrange line big on Xeons, GPUs". The Register. Retrieved September 1, 2010.
