Several years ago Sun began a project to update and consolidate its business intelligence tools. We chose Hyperion, since we already used a variety of Hyperion products, including Brio and Essbase. This was a few years before Oracle's acquisition of Hyperion, and we wanted to run on Solaris.
That made us one of the first customers to run the Hyperion suite on Solaris. I frequently had conversations with other Hyperion experts along the lines of this post's title, and I was often told that Essbase had been designed on Windows and would therefore run best on Windows.
While the original implementation had its difficulties, being one of the first on Sun hardware, software, and operating system, it undoubtedly laid the groundwork for the recent Hyperion Essbase ASO world record on the Sun SPARC Enterprise M5000.
The Oracle Essbase Aggregate Storage (ASO) application used for the benchmark was based on a real customer application in the financial industry, with 13 million members in the customer dimension.
The benchmark system was an M5000 server with four SPARC64 VII 2.53 GHz quad-core processors and 64 GB of RAM, running Solaris 10 Update 8 and Oracle Essbase 184.108.40.206 (64-bit), combined with a Sun Storage F5100 Flash Array providing 40 x 24 GB flash modules in a single zpool.
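As a rough illustration of that storage layout, a single ZFS pool striped across the array's flash modules can be created with standard Solaris commands. The device names below are placeholders, not the benchmark's actual configuration, which is documented in the whitepaper.

```shell
# Sketch only: stripe one ZFS pool across F5100 flash modules.
# cXtYdZ names are hypothetical; substitute the devices your HBA enumerates.
zpool create essbasepool \
  c2t0d0 c2t1d0 c2t2d0 c2t3d0
# ...the remaining flash modules are added the same way.

# Verify the pool and its member devices.
zpool status essbasepool
zfs list essbasepool
```

With no redundancy specified, ZFS stripes data across all the listed devices, which maximizes IOPS for a cache-style workload like Essbase aggregations.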
The benchmark compared 500 users against 20,000 users and showed that usage-based aggregation improved response times; scaling to the larger user count showed similar performance, with no sign of degradation in query timings.
What is interesting in the benchmark is that it seems to be one of the first to combine a variety of Oracle technology and provide a baseline for John Fowler to beat: Solaris ZFS, the SPARC Enterprise M5000 server, the Sun Storage F5100 Flash Array, and Essbase.
For more information, check out the whitepaper here (PDF). Note: the BestPerf blog has an incorrect link since the recent update to the Oracle Technology Network. More details on Hyperion applications are here.
After the announcements from Oracle OpenWorld and the new TPC benchmark, a lot of focus has been on Sun and the innovation DNA that drives the company. The announcements focus on flash and its increasing use in computing:
So what is the secret sauce in these? They are essentially data caches, built from single-level cell (SLC) NAND flash: 96 GB (4 x 24 GB modules) in the F20 card and a staggering 1.92 TB (80 modules) in the F5100 Flash Array.
The F5100 Flash Array has 64 SAS lanes (16 x 4-wide ports), 4 domains, and SAS zoning. It can perform 1.6M read IOPS and 1.2M write IOPS, with a bandwidth of 12.8 GB/sec.
That read IOPS figure is equivalent to roughly 3,000 hard drives in 14 rack cabinets, while the F5100 uses 1/100th of the space and power of such a collection of drives.
This is an amazing database accelerator for Oracle and MySQL. The unit can be zoned into 16 partitions, one for each of up to 16 hosts, and it can form part of a Sun ZFS hybrid storage pool, combining solid-state and hard disk drives.
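A hybrid storage pool of this kind can be sketched with standard ZFS commands (again with hypothetical device names): hard drives hold the bulk of the data, while flash modules serve as the separate intent log (ZIL) for synchronous writes and as the second-level read cache (L2ARC).

```shell
# Sketch of a ZFS hybrid storage pool (placeholder device names):
# mirrored hard drives hold the data, one flash module serves as the
# separate intent log (ZIL) to accelerate synchronous writes, and two
# more flash modules act as the L2ARC read cache.
zpool create hybridpool \
  mirror c1t0d0 c1t1d0 \
  log c3t0d0 \
  cache c3t1d0 c3t2d0

# Show the data, log, and cache vdevs in the resulting pool.
zpool status hybridpool
```

ZFS then directs synchronous writes to the flash log and promotes frequently read blocks into the flash cache automatically, so applications see flash-class latency without any configuration changes of their own.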
Further notes: sequential read = 9.7 GB/sec; read/write latency (1M transfers) = 0.41 ms / 0.28 ms; average power 300 W (idle = 213 W; 100% load = 386 W). More spec info here.
So if you need to speed up your databases, storage grids, HPC workloads, or financial modeling, look at what flash SSDs can offer.
Download the Sun Flash Analyzer, install it on your server, and see where SSDs can help accelerate system performance today.