I’m always impressed when folks take the time to understand, investigate, review and learn from testing. They even set up a website to share their progress and learnings: http://www.zfsbuild.com/ As they say: “a friendly guide for building ZFS based SAN/NAS solutions”
They cover all three products, highlighting some of the commitments and major improvements in integrating components that provide developers and users with added value:
- improvements to Solaris Containers, enabling workloads to be consolidated and migrated from physical systems to virtual containers, ZFS improvements
- improved application performance for OBIEE, PeopleSoft, MySQL Cluster, enhanced integration with WebLogic and Siebel and enhanced support for InfiniBand and Sun 7000 series storage
John Fowler also gives a high level overview.
More information available from the following (thanks guys):
ZFS Triple Parity Raid: video by George Wilson
Solaris Studio 12.2: blog by Darryl Gove
Developer Licensing changes: blog by Joerg Moellenkamp
Solaris 10, Update 9/10: feature and benefits (pdf)
Solaris Technical Articles: lots, take your pick!
You wait and wait, then all of a sudden three show up. Well that’s not exactly true as I highlighted the first Solaris podcast in early August and had just noticed a new one this week and then lo and behold a third quickly followed on Friday.
Although there do seem to be some gremlins, as the last two don’t show up on the main podcast page; you have to point to the feedburner site to see them: http://feeds.feedburner.com/OracleSolaris. Oracle also seems to have consolidated Solaris into a “Servers, Storage, and Solaris” Podcasts category too. Nothing like making it hard for folks 😉
In the second podcast, SPARC Integration and Optimizations, Dan Roberts is again joined by Bill Nesheim and Chris Armes to give a high-level update on SPARC and highlight some key items:
- By building the SPARC chip itself, Oracle has been able to design features directly into the chip
- Reliability, redundancy, availability and scalability are a combination of chip, system and application level design and integration
- The Sun SPARC Enterprise M9000 server, with SPARC64 VII processors, scales to 512 threads and 2 TB of memory
In the third podcast, Dan is joined by Bill, Chris (as above) and Robert Barrios to give an update on Solaris testing and integration over the last six months:
- Patch testing: now added Oracle Certification Environment and Oracle Applications Test Suite, which means 55,000 test cases are being run per week on patches
- Systems testing: Stress and fault injection tests on a variety of configurations are now being performed on Solaris and Solaris updates
- Global IT: A lot of work has been done migrating and moving servers and applications to Oracle data centres, utilising the best practices. The consolidation means systems are moving to the Utah compute facility, which contains 25,000 square feet at 8 kW per rack. IT are also migrating the backup media servers to Solaris and using the Sun Unified Storage 7000 series.
All of this means that best practices are fully developed, tested and integrated giving real world use cases – which will be documented once completed so customers can get the benefits too. This means that customers can spend their time on business issues, rather than getting their components and IT infrastructure working together.
If you have any feedback on the Solaris podcasts or suggestions for guests, please send an email to solaris_podcast-at-oracle-dot-com.
Also mark your calendars for September 8, to watch the Webcast of John Fowler, Oracle Executive Vice President of Systems, discuss Oracle Solaris. I haven’t seen a link to sign up yet, stay posted for further details.
For those in San Francisco, there is the upcoming Oracle OpenWorld & Oracle Develop, September 19-23. Register here, detailed Oracle Solaris sessions here (pdf). I look forward to seeing the output and I’m hoping for some video of the events 😉
Several years ago Sun began a project to update and consolidate the business intelligence tools used at Sun, and we decided on Hyperion as we already had a variety of Hyperion tools: Brio and Essbase. This was a few years before the Oracle acquisition of Hyperion, and we wanted to run on Solaris.
This meant we were one of the first customers to run the Hyperion suite on Solaris. I frequently had conversations with other Hyperion experts along the lines of this post’s title, and was also told that Essbase was designed on Windows and would therefore run best on Windows.
While the original implementation had its difficulties, being one of the first on Sun hardware, software and operating system, it undoubtedly laid the groundwork for the recent Hyperion Essbase ASO World Record on the Sun SPARC Enterprise M5000.
The Oracle Essbase Aggregate Storage application employed for the benchmark was based on a real customer application in the financial industry, with 13 million members in the customer dimension.
The benchmark system was an M5000 server with 4 x SPARC64 VII 2.53 GHz (quad-core) processors and 64 GB RAM, running Solaris 10 update 8 and Oracle Essbase (64-bit), combined with a Sun Storage F5100 Flash Array consisting of 40 x 22 GB flash modules in a single zpool.
The benchmark compared 500 and 20,000 users and showed that usage based aggregation improved response times, while adding the extra users showed similar performance with no signs of degradation in query timings.
What is interesting in the benchmark is that this seems to be one of the first to combine a variety of Oracle technology and provide a benchmark for John Fowler to beat: Solaris ZFS, the SPARC M5000 server, the Storage F5100 Flash Array and Essbase.
For more information check out the whitepaper here (pdf). Note: the BestPerf blog has an incorrect link since the recent update to the Oracle Technology Network. More details on Hyperion applications here.
OpenOffice 3.2.1 is released today and contains bug and security updates as well as a brand refresh, which can be seen in the splash screen image.
(Thanks to Joost Andrae for the tip).
After some of my recent blogs on SSDs, I was excited to have in my hand an OCZ Vertex 60GB SSD. However, I neglected to remember that in my other hand I needed both SATA data and SATA power cables. After a trip to the local store, I came back with the required cables!
Rather than dive straight in – as is my normal approach – I knew I should do some planning and put some thought into the installation/upgrade process:
- Remove old/unwanted programs and files.
- Schedule the required downtime for the installation and for re-installing/copying files and applications.
- Read the manuals for the SSD, BIOS and HDD.
So after the required planning I was ready to go:
- Open up the workstation and move the SATA cables from the HDD to the SSD.
- Add the new SATA cables and reconnect the HDD.
- Power up the workstation, enter the BIOS menu and turn off auto-detection of the HDD (this means the workstation will no longer boot from the HDD, but will instead choose the SSD).
- Install Windows: 20 minutes for the install and 5 reboots later, I had a working machine ready to download the required updates.
- Install OpenSolaris: 20 minutes and 1 reboot later I had a working machine; as I pulled the latest /dev image there were no updates, and I was ready to work.
- Wow, this is fast! No really I mean this SSD is REALLY fast.
- Boot times are now significantly faster: Windows = 30 seconds, OpenSolaris = 35 seconds
- Applications are 30% to 80% faster to open.
- Benchmarking results to follow, it seems there’s lots to consider from a Solaris perspective. Thanks to Lisa for her post.
UPDATE: Just a note to say that Windows automatically recognised the HDD and created drive letters E: and F:. On OpenSolaris, as I had previously created pools, it was a simple matter of entering the command:
zpool import -R /mnt tank
This mounted the pool and I was able to copy and use files as required. I love it when a plan comes together. You can also just enter the command “zpool import” without any options to discover all pools available for import.
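The import steps above can be sketched as a short shell sequence (the pool name tank and the /mnt alternate root are from my setup; adjust for yours):

```shell
# List pools available for import, without changing anything.
zpool import

# Import the pool under an alternate root, so its filesystems
# mount beneath /mnt instead of their original mountpoints.
zpool import -R /mnt tank

# When finished, export the pool so it can be re-imported cleanly elsewhere.
zpool export tank
```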
These are very noticeable differences, although given the age of the workstation (October 2005) and its components, users with newer machines should expect even greater performance increases:
- AMD Athlon-64 X2 Dual-Core 4200+, GIGABYTE K8NF-9 AMD 939
- 4 GB 400MHZ PC3200 RAM
- NVIDIA GeForce 6600 GPU
Given the age of the components, the workstation is also limited to SATA I, so 150 MB/s (in reality about 130 MB/s). So I’m not really reaching the full capability of the OCZ Vertex SSD, which is rated at up to 230 MB/s read and up to 135 MB/s write.
If you think SSDs could help your desktops, servers or applications look at the following sites for more info:
As with everything in the computing world, SSDs are not standing still. OCZ announced at CES that their Vertex 2 Pro SSDs (2nd generation) are on schedule for Q1 2010, with the new SandForce controller, and have preliminary specs of 270 MB/s for both read and write. AnandTech have a preview here.
OCZ have also produced the first 1TB SSD, under the “Colossus” moniker. Other manufacturers are sure to push the limits too.
Ever get tired of the same old gnome desktop on OpenSolaris? Try the latest KDE version easily, quickly and simply:
- add the required repository:
pfexec pkg set-authority -O http://solaris.bionicmutton.org:10001/ bionicmutton
*** See Note 1 below
- refresh the repository:
pfexec pkg refresh bionicmutton
- add the required packages:
pfexec pkg install KDEbase-apps KDEgdm-integration
*** See Note 3 below
- You should now be able to log out and back in to choose KDE from the login menu; however, it might require a reboot, especially if you installed into a new boot environment (the safest way to test!)
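Putting the steps together, here is a minimal sketch. The repository URL and package names are as given above; creating a spare boot environment first with beadm is my own addition, but it matches the “safest way to test” advice:

```shell
# Optional safety net: clone the current boot environment first,
# so you can activate it again and roll back if KDE misbehaves.
pfexec beadm create pre-kde

# Register the KDE package repository (see Note 1 for alternative versions).
pfexec pkg set-authority -O http://solaris.bionicmutton.org:10001/ bionicmutton

# Refresh the catalogue for the new repository.
pfexec pkg refresh bionicmutton

# Install the KDE distribution plus the login-manager integration.
pfexec pkg install KDEbase-apps KDEgdm-integration
```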
1. There are several versions of KDE available; the latest and greatest, 4.3.85, requires OpenSolaris build 121 or greater:
Use http://solaris.bionicmutton.org:10001/ for KDE version 4.3.1
(This is the version I’ve mostly been using and it seems fine)
Use http://solaris.bionicmutton.org:10002/ for KDE version 4.3.80
(Not tested yet)
Use http://solaris.bionicmutton.org:10003/ for KDE version 4.3.85
(I have only tested this today and Konqueror and KMail keep crashing)
UPDATE: The 02 and 03 repositories are just for bleeding-edge testing by the KDE folks; use at your peril!
3. The first pkg installs the KDE distribution; the second allows the user to choose KDE from the login menu.
So how to set up and test? It’s very easy from a terminal window. I’ve got compression set on the pools, so let’s get some baseline numbers (see Thierry’s PartnerTech blog for a very good write-up/debug on the use of compression):
# zfs get compressratio
NAME                   PROPERTY       VALUE  SOURCE
rootpool               compressratio  1.37x  -
rootpool/ROOT          compressratio  1.50x  -
rootpool/ROOT/os_next  compressratio  1.50x  -
rootpool/dump          compressratio  1.00x  -
rootpool/swap          compressratio  1.00x  -
tank                   compressratio  1.14x  -
tank/home              compressratio  1.14x  -
So this is the baseline compression currently set on the 2 ZFS pools. Let’s turn on dedup. I’ve chosen the sha256 setting with verify: if two blocks produce the same SHA-256 hash, ZFS will do a byte-for-byte comparison before deduplicating, guarding against the small chance that 2 different blocks hash identically. Always good to be cautious 😉
# zfs set dedup=sha256,verify rootpool
# zfs set dedup=off rootpool/swap
# zfs set dedup=off rootpool/dump
# zfs get dedup
NAME                  PROPERTY  VALUE          SOURCE
rootpool              dedup     sha256,verify  local
rootpool/ROOT         dedup     sha256,verify  inherited from rootpool
rootpool/ROOT/os_129  dedup     sha256,verify  inherited from rootpool
rootpool/dump         dedup     off            local
rootpool/swap         dedup     off            local
tank                  dedup     off            default
tank/home             dedup     off            default
Now to reboot and copy some small test databases 😉
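After the copies, the space savings can be checked at the pool level; dedupratio is a pool-wide property, so this is a sketch of what I’d run (pool name from above):

```shell
# The dedup ratio is tracked per pool, not per dataset.
zpool get dedupratio rootpool

# zpool list also shows a DEDUP column alongside size and capacity.
zpool list rootpool
```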
Firefox 3.6 is just around the corner, due to be delivered later this year.
In testing I found that the old Java plugin (libjavaplugin_oji.so) on OpenSolaris was no longer recognised, and hence Java apps didn’t work 😦
So what’s the deal?
Since Java 6 update 10, there is a new implementation of the Java plugin: applets now run in separate Java Virtual Machine instances launched by the plug-in’s code, rather than in a single JVM instance embedded in the web browser as before.
So what do OpenSolaris/Solaris users need to do?
Install Java 6 update 10 (at least), currently update 17 is available.
Remove the current Java plugin from the firefox/plugins directory.
Add a symbolic link to the new plugin:
ln -s /usr/java/jre/lib/i386/libnpjp2.so /export/home/tadpole/firefox/plugins
You should also check the system plugin directory: /usr/lib/firefox/plugins/
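The steps above can be sketched as follows. The paths are from my machine – the Firefox plugins directory under /export/home/tadpole and the i386 JRE path will vary per install:

```shell
# Remove the old OJI plugin so Firefox stops picking it up.
rm -f /export/home/tadpole/firefox/plugins/libjavaplugin_oji.so

# Link in the new NPAPI plugin shipped with Java 6 update 10 and later.
ln -s /usr/java/jre/lib/i386/libnpjp2.so /export/home/tadpole/firefox/plugins

# Check the system-wide plugin directory for stale copies too.
ls -l /usr/lib/firefox/plugins/
```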