Late last year, AWS announced the new X1 instance type for extremely memory-intensive workloads. The largest EC2 instance type to date, with eight times more memory than the next largest (and four times more than the largest available Azure VM), it is now available via special request in a number of AWS regions (including Sydney).
With a breathtaking 128 vCPUs and nearly 2 TB of RAM, there aren't many commercial-grade workloads that can't be entertained by this giant.
| Instance Type | vCPUs | Instance Memory (GiB) | Instance Storage (GB) | Network Bandwidth | 3-Year Upfront Effective Hourly* | On Demand* |
| --- | --- | --- | --- | --- | --- | --- |
| x1.32xlarge | 128 | 1,952 | 2 x 1,920 SSD | 10 Gbps | $5.41 | $19.34 p/h |
\* Pricing for the Sydney region, in USD.
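The gap between the two pricing columns is worth quantifying. A back-of-the-envelope sketch using the figures from the table above (assuming 24/7 utilisation over the full three years, and ignoring storage and data transfer costs):

```python
# Rough 3-year cost comparison for x1.32xlarge (Sydney, USD),
# using the rates from the table above. Assumes the instance
# runs 24/7 for the entire term.
HOURS_IN_3_YEARS = 24 * 365 * 3

on_demand_hourly = 19.34          # On Demand rate per hour
reserved_effective_hourly = 5.41  # 3-Year Upfront effective rate per hour

on_demand_total = on_demand_hourly * HOURS_IN_3_YEARS
reserved_total = reserved_effective_hourly * HOURS_IN_3_YEARS
saving = on_demand_total - reserved_total

print(f"3 years On Demand: ${on_demand_total:,.0f}")
print(f"3 years Reserved:  ${reserved_total:,.0f}")
print(f"Saving:            ${saving:,.0f} ({saving / on_demand_total:.0%})")
```

At these rates the three-year upfront commitment works out to roughly a 72% saving over On Demand, which is why an instance this size only makes financial sense for sustained workloads like a resident SAP HANA database rather than ad-hoc jobs.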
The need for such an instance came from some of AWS's largest customers, who had a requirement to run very large in-memory databases, such as SAP HANA, and big data processing engines like Apache Spark or Presto.
As you can imagine, though, this kind of super-compute instance is not something you want to be randomly firing up on a whim. Whilst it may seem like quite an economical way of modelling some fluid dynamics permutations, mapping out the odd genome or punching out a few bitcoins, AWS are being a bit cautious about allowing access to it: the instance type needs to be specially requested and approved on a per-account basis.
If any AWS customers have used the new X1 in either a benchmarking or real world scenario, it would be great to hear about your experiences.