Several years ago, Sun began a project to update and consolidate the business intelligence tools used at Sun. We decided on Hyperion, as we already had a variety of Hyperion tools in place: Brio, Essbase. This was a few years before the Oracle acquisition of Hyperion, and we wanted to run on Solaris.
This made us one of the first customers to run the Hyperion suite on Solaris. I frequently had conversations with other Hyperion experts along the lines of this post's title, and was often told that Essbase was designed on Windows and would therefore run best on Windows.
While the original implementation had its difficulties, being one of the first on Sun hardware, software and operating system, it undoubtedly laid the groundwork for the recent Hyperion Essbase ASO world record on the Sun SPARC Enterprise M5000.
The Oracle Essbase Aggregate Storage application employed for the benchmark was based on a real customer application in the financial industry, with 13 million members in the customer dimension.
The benchmark system was an M5000 server with 4 x SPARC64 VII 2.53 GHz (quad-core) processors and 64 GB RAM, running Solaris 10 update 8 and Oracle Essbase (64-bit), combined with a Sun Storage F5100 Flash Array consisting of 40 x 24 GB flash modules in a single zpool.
The benchmark compared 500 and 20,000 users, and showed that usage-based aggregation improved response times, while adding the extra users produced similar performance with no sign of degradation in query timings.
What is interesting in this benchmark is that it seems to be one of the first to combine a variety of Oracle technologies, providing a benchmark for John Fowler to beat: Solaris ZFS, the SPARC Enterprise M5000 server, the Storage F5100 Flash Array and Essbase.
For more information, check out the whitepaper here (pdf). Note: the BestPerf blog has an incorrect link since the recent update to the Oracle Technology Network. More details on Hyperion applications here.
One thing I’ve noticed since the Oracle acquisition is the re-focusing on Sun’s strengths, built around the engineering talent that Sun had: Solaris, SPARC, servers, and technology integration and innovation. This talent also developed such cool things as ZFS, DTrace, the F5100 storage array, hybrid storage pools and unified storage.
As my MBA tutor tells me, one way to harness and move a company forward is to focus on the key strengths, or core capabilities, that an organization has. The problem is that if you rely on core capabilities too much, they become core rigidities – as evidenced in Sun’s past: focusing too much on SPARC and proprietary servers, and at one point even dropping Solaris on x86.
It’s taken a while for some information to flow out, but in the last week two items have come out which show that the ongoing work and strategies are there:
Oracle Solaris Podcasts
This is a new monthly podcast series hosted by Dan Roberts, giving a general update on Oracle Solaris, including industry news, events and technology highlights. This episode features Bill Nesheim and Chris Armes, and provides an update on what’s been happening over the last few months, plus details on why Oracle Solaris is the best OS for x86-based servers: scalability, reliability and security. It also includes a brief overview of the new support offering for Oracle Solaris on third-party x86 hardware.
Strategy for Oracle’s Sun Servers, Storage and Complete Systems: 9AM Tuesday, August 10, 2010. Join John Fowler, Executive Vice President, Systems, for a live update on the strategy and roadmap for Oracle’s Sun servers, storage and complete systems, including Oracle Solaris.
Sign up here.
With some of these developments and others, the technology future certainly looks bright at Oracle.
After some of my recent blogs on SSDs, I was excited to have in my hand an OCZ Vertex 60GB SSD. However, I neglected to remember that in my other hand I needed both SATA data and SATA power cables. After a trip to the local store, I came back with the required cables!
Rather than dive straight in – as is my normal process – I knew I should do some planning and put some thought into the installation/upgrade process:
- Remove old/unwanted programs and files.
- Schedule the required downtime for installation and the re-installation/copying of files and applications.
- Read the manuals for the SSD, BIOS and HDD.
So after the required planning I was ready to go:
- Open up the workstation and move the SATA cables from the HDD to the SSD.
- Add new SATA cables and connect the HDD.
- Power up the workstation, enter the BIOS menu and turn off auto-detection of the HDD. (This means the workstation will no longer boot from the HDD but will instead choose the SSD.)
- Install Windows: 20 minutes for the install and 5 reboots later, I had a working machine ready to download the required updates.
- Install OpenSolaris: 20 minutes and 1 reboot later, I had a working machine. As I pulled the latest /dev image, there were no updates – I was ready to work.
- Wow, this is fast! No really I mean this SSD is REALLY fast.
- Boot times are now significantly faster: Windows = 30 seconds, OpenSolaris = 35 seconds
- Applications are 30% to 80% faster to open.
- Benchmarking results to follow; it seems there’s lots to consider from a Solaris perspective. Thanks to Lisa for her post.
UPDATE: Just a note to say that Windows automatically recognised the HDD and created drive letters E: and F:. On OpenSolaris, as I had previously created pools, it was a simple matter of entering the command:
zpool import -R /mnt tank
This mounted the pool and I was able to copy and use it as required. I love it when a plan comes together. You can also just enter the command “zpool import” without any options to discover all the pools available.
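Putting those steps together, the whole procedure is just a couple of commands (the pool name “tank” and the /mnt alternate root are from my setup; adjust for yours):

```shell
# Discover all pools visible on the attached disks (makes no changes)
zpool import

# Import the pool "tank" under an alternate root, so its filesystems
# mount beneath /mnt rather than at their original mountpoints
zpool import -R /mnt tank

# Verify the pool is healthy and see what was mounted
zpool status tank
zfs list -r tank
```

The `-R` option is handy when importing a pool from another system, since it keeps the imported filesystems from colliding with paths already in use.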
These are very noticeable differences, although given the age of the workstation (October 2005) and its components, users with newer machines should expect even greater performance increases:
- AMD Athlon-64 X2 Dual-Core 4200+, GIGABYTE K8NF-9 AMD 939
- 4 GB 400MHZ PC3200 RAM
- NVIDIA GeForce 6600 GPU
Given the age of the components, the workstation is also limited to SATA v1, so 150 MB/s in theory, or more realistically around 130 MB/s. So I’m not really reaching the full capability of the OCZ Vertex SSD, which is rated at up to 230 MB/s read and up to 135 MB/s write.
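A quick sanity check on those bandwidth figures: SATA v1 signals at 1.5 Gbit/s but uses 8b/10b encoding, so 10 line bits carry each data byte, giving the 150 MB/s ceiling.

```python
# SATA v1 line rate and 8b/10b encoding overhead
line_rate_bits_per_s = 1.5e9   # 1.5 Gbit/s signalling rate
bits_per_data_byte = 10        # 8b/10b: 10 line bits per 8-bit data byte

sata1_max_mb_s = line_rate_bits_per_s / bits_per_data_byte / 1e6
print(sata1_max_mb_s)          # 150.0 -- theoretical ceiling in MB/s

# The Vertex's rated 230 MB/s read exceeds what this bus can carry,
# so the drive, not the link, has headroom to spare
print(230 > sata1_max_mb_s)    # True
```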
If you think SSDs could help your desktops, servers or applications look at the following sites for more info:
As with everything in the computing world, SSDs are not standing still. OCZ announced at CES that their second-generation Vertex 2 Pro SSDs are on schedule for Q1 2010 with the new SandForce controller, and have preliminary specs of 270 MB/s for both read and write. AnandTech have a preview here.
OCZ have also produced the first 1TB SSD, under the “Colossus” moniker. Other manufacturers are sure to push the limits too.
Following up my recent posts on SSDs and flash-based disks, there seems to be a growing understanding of the power of SSDs, as well as some confusion over pricing and whether some are faster than others. I’ve compiled a summary of some other posts and info:
Are some SSDs faster/better than others? YES, and it starts with the cell memory: single-level cell (SLC) flash memory is better (and hence more expensive) than multi-level cell (MLC) flash memory. Then there are the other components that make up the SSD. From some recent reviews/blogs, Intel, Samsung, OCZ and RunCore seem to make some fast ones.
ZFS supercharging: the L2ARC for random reads, and the ZIL for writes. OpenSolaris 2009.06 and Solaris 10 U6 with ZFS have super capabilities for very intelligent use of fast storage technology, especially when serving files. Thanks again to Brendan.
Correction: while some items for ZFS were added to Solaris 10 update 6, it was only with the delivery of ZFS version 13 that this support was complete; those changes made it into Solaris 10 update 8.
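On a release with the required ZFS version, attaching fast devices for the L2ARC and ZIL is a one-line operation each. A minimal sketch, assuming a pool named “tank” and SSD device names c2t0d0/c2t1d0 (placeholders for illustration):

```shell
# Add an SSD as an L2ARC cache device (accelerates random reads)
zpool add tank cache c2t0d0

# Add an SSD as a separate ZIL log device (accelerates synchronous writes)
zpool add tank log c2t1d0

# Confirm the new "cache" and "logs" vdevs appear in the pool layout
zpool status tank
```

Both additions happen online – no unmounting or downtime for the filesystems in the pool.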
Setting up a mirrored SSD OpenSolaris system: A very comprehensive how to guide for migrating from an existing system.
Making the most of your SSD: Easy steps to set up SSD on OpenSolaris – thanks Arnaud.
Seeing the performance upgrades that others are getting out of flash makes me want to try it out to see the impact to my 4 year old desktop, which is still going strong (AMD 4200 Dual Core, 4GB Memory and 200GB HDD). Alternatively if anyone has a 64GB SSD they’d like me to test I’d certainly appreciate it 😉
After the announcements from Oracle OpenWorld and the new TPC benchmark, a lot of focus has been on Sun and the innovation DNA that drives the company. The announcements focus on flash and its increasing use in computing:
So what is the secret sauce in these? They are essentially caching data, and are made up of 96 GB (4 x 24 GB modules) of single-level cell NAND flash in the F20 card, and a staggering 1.92 TB (80 modules) in the F5100 flash array.
The F5100 Flash Array has 64 SAS lanes (16 x 4-wide ports), 4 domains and SAS zoning. It can perform 1.6M read IOPS and 1.2M write IOPS, with a bandwidth of 12.8 GB/sec.
This read IOPS figure is equivalent to 3,000 hard drives in 14 rack cabinets. The F5100 uses 1/100th of the space and power of such a collection of hard drives.
This is an amazing database accelerator for Oracle and MySQL. The unit can be zoned into 16 partitions, one for each of up to 16 hosts. The device can form part of a Sun ZFS hybrid storage pool, embracing solid state and hard disk drives.
Further notes: sequential read = 9.7 GB/sec; read/write latency (1M transfers) = 0.41 ms / 0.28 ms; average power 300 watts (idle = 213 W; 100% load = 386 W). More spec info here.
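As a sanity check on the capacities above, both devices are built from the same 24 GB flash modules, and the numbers work out:

```python
module_gb = 24  # capacity of one flash module

f20_card_gb = 4 * module_gb            # F20 card: 4 modules
f5100_array_tb = 80 * module_gb / 1000 # F5100 array: 80 modules

print(f20_card_gb, f5100_array_tb)     # 96 GB and 1.92 TB
```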
So if you need to speed up your databases, storage grids, HPC computing or financial modeling, look at what flash SSDs can offer.
Download the Sun Flash Analyzer, install it on your server, and see where SSDs can help accelerate system performance today.
There have been a few announcements recently (and more to come), and here’s one that can really be a game changer and an enabler for future tech advances:
Hybrid Storage Pools (HSP) are a new innovation designed to provide superior storage through the integration of flash with disk and DRAM. Sun and Intel have teamed up to combine their technologies of ZFS and high performance, flash-based solid state drives (SSDs) to offer enterprises cutting-edge HSP innovation that can reduce the risk, cost, complexity, and deployment time of multitiered storage environments.
Sun’s ZFS file system transparently manages data placement, holding copies of frequently used data in fast SSDs while less-frequently used data is stored in slower, less expensive mechanical disks. The application data set can be completely isolated from slower mechanical disk drives, unlocking new levels of performance and higher ROI. This ‘Hybrid Storage Pool’ approach provides the benefits of high performance SSDs while still saving money with low cost high capacity disk drives.
Solaris ZFS can easily be combined with Intel’s SSDs by simply adding Intel Enterprise SSDs into the server’s disk bays. ZFS is designed to dynamically recognize and add new drives, so SSDs can be configured as cache disks without unmounting a file system that is in use. Once this is done, ZFS automatically optimizes the file system to use the SSDs as high-speed disks that improve read and write throughput for frequently accessed data, and safely cache data that will ultimately be written out to mechanical disk drives.
Intel’s SSDs provide up to 100x the I/O performance of mechanical disk drives with twice the reliability:
- One Intel Extreme SATA SSD (X25-E) can provide the same IOPS as up to 50 high-RPM hard disk drives (HDDs) — handling the same server workload in less space, with no cooling requirements and lower power consumption.
- Intel High-Performance SATA SSDs deliver higher IOPS and throughput performance than other SSDs while drastically outperforming traditional hard disk drives. Intel SATA SSDs feature the latest-generation native SATA interface with an advanced architecture employing 10 parallel NAND flash channels equipped with the latest generation (50nm) of NAND flash memory. With powerful Native Command Queuing to enable up to 32 concurrent operations, Intel SATA SSDs deliver the performance needed for multicore, multi-socket servers while minimizing acquisition and operating costs.
- Intel High-Performance SATA SSDs feature sophisticated “wear leveling” algorithms that maximize SSD lifespan, evening out write activity to avoid flash memory hot-spot failures. These Intel drives also feature low write amplification and a unique wear-leveling design for higher reliability, meaning Intel drives not only perform better, they last longer. The result translates to a tangible reduction in your TCO and dramatic improvements to system performance.
Benefits of HSP
Architectures based on HSPs can consume 1/5 the power at 1/3 the cost of standard monolithic storage pools while providing maximum performance.
For example, if an application environment with a 350 GB working set needs 30,000 IOPS to meet service level agreements, 100 15K RPM HDDs would be needed. If the drives are 300GB, consume 17.5 watts, and cost $750 each, this traditional environment provides the IOPS needed, has 30TB capacity, costs $75,000 to buy, and draws 1.75 kW of electricity.
Using a Hybrid Storage Pool, six 64 GB SSDs (at $1,000 each) provide the 30,000 IOPS required, and hold the 350GB working set. Lower cost, high-capacity drives can be used to store the rest of the data; 30 1TB 7200 RPM drives, at $689 each ($20,670) and consuming 13 watts, provide cost-effective HDD storage. The savings are dramatic:
- Purchase cost is $26,670, a 64-percent savings
- Electricity consumed is 0.392 kW, a 77-percent savings
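The arithmetic behind those savings figures checks out (the HDD wattages are from the example above; per-SSD power draw isn’t stated, so it is treated here as negligible, which accounts for the couple of watts of difference from the 0.392 kW figure):

```python
# Traditional all-HDD configuration: 100 x 15K RPM drives
hdd_cost = 100 * 750                 # $75,000
hdd_power_kw = 100 * 17.5 / 1000     # 1.75 kW

# Hybrid Storage Pool: 6 SSDs for the working set + 30 x 1TB 7200 RPM drives
hsp_cost = 6 * 1000 + 30 * 689       # $26,670
hsp_power_kw = 30 * 13 / 1000        # 0.39 kW (SSD draw assumed negligible)

cost_saving = 1 - hsp_cost / hdd_cost
power_saving = 1 - hsp_power_kw / hdd_power_kw
print(f"cost saving: {cost_saving:.1%}, power saving: {power_saving:.1%}")
# -> cost saving: 64.4%, power saving: 77.7%
```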
Link to docs:
UPDATE: Brendan from the Fishworks team has posted some speed and performance notes here.