Tag Archives: opensolaris

UK Oracle User Group: annual survey & magazine submissions

Having been a member of the UK Oracle User Group before the Sun acquisition, I know the value, information and networking opportunities it provides.  I have no doubt that value will only increase with the London OpenSolaris group becoming part of the UKOUG.

The UKOUG have launched their annual membership survey. The survey gathers members' feedback, which the UKOUG use to help develop and shape the organisation and to ensure the membership is being served in the right way.

Just in case that’s not enough incentive, there is also the chance for one of our members to win either a one-day Conference Series pass or a £200 Amazon voucher – so it is definitely worth the 10-15 minutes to complete.   The membership survey closes at 5pm on Friday 11th June: www.oug.org/survey

Another benefit is the Oracle Scene magazine, which is full of useful information and articles on all Oracle products.

The submission deadline for the next magazine (Nov 2010) is 31 August 2010. If you have an idea for an article and are unsure whether it is the kind of content to be included, you can get feedback from the editor at editor@ukoug.org.uk.

Members can review old editions online at www.oug.org/oraclescene.

last Sun entry

This is my last Sun post, as the Sun Oracle integration takes another step forward tomorrow with the Legal Entity Combination of the UK entities.  In the coming weeks there are new systems to learn and integrate with, as well as finding out what the longer-term goals are and how I fit in.

This road has been long, with the initial Sun-IBM rumours breaking a year ago and the eventual Oracle offer in April.  Having seen other Oracle acquisitions, I know that it takes 12 to 18 months for real change and development to occur, so I expect the same in this case – although I hope that the last 9 months of planning were used very productively 😉

I’m excited that Oracle has a large marketing presence and hope that the Sun technology, innovation and engineers get more than their fair share of deserved exposure.

I also hope that Sun Ray technology is shown to more users and business buyers.  Others think so too.  Just imagine what Oracle could do with this internally for its 85,000 employees (pre-Sun) in terms of power savings – the units use just 5% of the power of a normal desktop computer.

You don’t have to run just Oracle Solaris on them either, as these success stories from ResMed, Screwfix and Microsoft demonstrate.  In the ResMed example, they had a return on investment within 12 months and saved an estimated $270,000, all while providing a variety of users with a highly flexible, highly secure virtual desktop environment.  There are many more Sun Ray stories here.

Better still, if the Oracle users were migrated off Windows, imagine the savings in licence fees!

I’m proud to have been part of the blogs.sun.com community and to have helped grow what has undoubtedly been one of the foremost blogging sites.  I’m also both excited and a little nervous about the future.  I’m not sure if I’ll continue to blog here or move elsewhere as others have done.

To misquote The Bard:  “Alas, poor Sun Microsystems! I knew him.”

SSD experiment

After some of my recent blogs on SSDs, I was excited to have in my hand an OCZ Vertex 60GB SSD. However, I neglected to remember that in my other hand I needed both SATA data and SATA power cables.  After a trip to the local store, I came back with the required cables!

Rather than dive straight in – as is my normal process – I knew I should do some planning and put some thought into the installation/upgrade process:

  • Remove old/unwanted programs and files.
  • Schedule the required downtime for installation, re-installation and copying of files and applications.
  • Read the manuals for the SSD, BIOS and HDD.

So after the required planning I was ready to go:

  1. Open up the workstation and move the SATA cables from the HDD to the SSD.
  2. Add the new SATA cables and reconnect the HDD.
  3. Power up the workstation, enter the BIOS menu and turn off auto-detection of the HDD (this means the workstation will no longer boot from the HDD but will choose the SSD instead).
  4. Install Windows:  20 minutes for the install and 5 reboots later, I had a working machine ready to download the required updates.
  5. Install OpenSolaris: 20 minutes and 1 reboot later, I had a working machine; as I pulled the latest /dev image there were no updates, so I was ready to work.

First Impressions:

  • Wow, this is fast!  No really I mean this SSD is REALLY fast.
  • Boot times are now significantly faster:  Windows = 30 seconds, OpenSolaris = 35 seconds
  • Applications are 30% to 80% faster to open.
  • Benchmarking results to follow; it seems there’s lots to consider from a Solaris perspective (thanks to Lisa for her post) – see the rough sketch below.
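
In the meantime, the kind of quick-and-dirty check I have in mind is simply timing a copy of real data onto the SSD. The source path below is only a placeholder, and ZFS caching and compression will skew simple copies, so treat this as a sanity check rather than a proper benchmark:

ptime cp -r /path/to/testdata /export/home/tadpole/testdata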

UPDATE: Just a note to say that Windows automatically recognised the HDD and created drive letters E: and F:.  On OpenSolaris, as I had previously created pools, it was a simple matter of entering the command:

zpool import -R /mnt tank

This mounted the pool and I was able to copy and use files as required.  I love it when a plan comes together.  You can also just enter the command “zpool import” without any options to discover all the pools available.
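
One more bit of housekeeping: once the copying is finished, the pool can be cleanly detached again before the old HDD is repurposed:

zpool export tank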

These are very noticeable differences, although given the age of the workstation (October 2005) and its components, users with newer machines should expect even bigger performance increases.

Given the old nature of the components, the workstation is also limited to SATA I, so 150 MB/s, or in reality around 130 MB/s. So I’m not really reaching the potential of the OCZ Vertex SSD, which is rated at up to 230 MB/s read and up to 135 MB/s write.

If you think SSDs could help your desktops, servers or applications look at the following sites for more info:

As with everything in the computing world, SSDs are not standing still. OCZ announced at CES that their second-generation Vertex 2 Pro SSDs are on schedule for Q1 2010, with the new SandForce controller and preliminary specs of 270 MB/s for both read and write.  AnandTech have a preview here.

OCZ have also produced the first 1TB SSD, under the “Colossus” moniker.  Other manufacturers are sure to push the limits too.

open source software and survival of the fittest

There have been lots of recent discussions about open source and how it either helps or hinders proprietary software, depending on your point of view.

Open Source Software (OSS) is starting to gain more momentum:

  • Firefox reached almost 25% market share in December, with 440 million downloads of Firefox 3.5
  • OpenOffice has had over 100 million downloads since the launch of version 3.0

Most folks may think they are open source users and contributors, but downloading and using is not contributing: you must actively test and supply bug information with repeatable test cases.  I know from experience that trying to narrow down and confirm bugs takes time and effort, especially when you need to remove all plugins and extensions to test the base program and then add them back, or have to search for the exact nightly build that caused an issue/bug regression.

Similarly, a lot of the proprietary software world thinks it is open source friendly but in reality only supports Firefox 2.0 – and in one bad case I know of, only on Windows!

Granted, a lot of proprietary vendors code only for IE (and old IE at that) and MS Office, given their large business use.  However, to ignore OSS or merely pretend to be OSS friendly is a bad way to do business. Others think so too. At best it’s naive, at worst it’s lazy.  I’ve seen plenty of code that either hard-codes specific browser rules or manually sets document states in code.

It might be a quick way to get code out the door, but it does nothing for the code’s longevity.  It probably does mean fat upgrade fees and unhappy customers, though.

The next big wave is making applications and tools available to a variety of devices, not just computers but PDAs, iPhones, iTablets, eBooks and anything that can connect via wifi.  So while the luddites are making code for IE and MS Office, they’re missing a big growth sector and making more work for themselves in the long run.

The best way for software developers (both open and closed source) to make sure their software works with a variety of browsers or other OSS is to download and test development builds.  Most OSS sites have easy to find info on development builds and how to contribute:

It might take some initial effort to get up to speed, but that effort should save time and user frustration when a new release of the software would otherwise suddenly break your application.

I know from several folks at Sun involved in Firefox, Thunderbird, OpenOffice or OpenSolaris that they’re very happy to have help and are usually generous with their time in answering queries to get you started.  This willingness to help is just one of the reasons I love the Sun community; the other big one is the sharing that goes on, especially across the multitude of topics you can see by browsing the main blogs.sun.com page.

So let’s all help each other to ensure open source is the big winner, so that technology is an enabler for a sharing future and doesn’t exclude anyone.

Bootnote: Although generally thought of as Darwin’s words, “survival of the fittest” actually comes from Herbert Spencer, who used it to summarise Darwin’s “natural selection”.

SSDs to the forefront

Following up my recent posts concerning SSDs and flash-based disks, there seems to be a growing understanding of the power of SSDs and also some confusion over pricing and whether some are faster than others.  I’ve compiled a summary of some other posts and info:

Are some SSDs faster/better than others?  YES. It starts with the cell memory: single-level cell (SLC) flash memory is better (and hence more expensive) than multi-level cell (MLC) flash memory. Then there are the other components that make up the SSD.  From some recent reviews/blogs, Intel, Samsung, OCZ and RunCore seem to make some fast ones.

Check out this comprehensive AnandTech article and another recent article; both are very detailed.

Last.fm: installed SSDs into a Sun Fire X4170 to massively increase the number of streaming customers served, from around 300 on a 7200 rpm SATA disk to 7,000 on a 64GB Intel X25-E SSD.

ZFS super charging: the L2ARC for random reads and the ZIL for writes. OpenSolaris 2009.06 and Solaris 10 U6 with ZFS have super capabilities for very intelligent use of fast storage technology, especially when serving files. Thanks again to Brendan.
Correction: while some items for ZFS were added to Solaris 10 update 6, it was only with the delivery of ZFS version 13 that the support was complete; those changes made it into Solaris 10 update 8.
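
As a rough sketch of what this looks like in practice (the device names below are just examples and will differ on your system), an SSD can be attached to an existing pool as a read cache or as a separate log device:

zpool add tank cache c2t0d0
zpool add tank log c2t1d0

The cache device feeds the L2ARC for random reads, while the log device takes the ZIL’s synchronous writes.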

Setting up a mirrored SSD OpenSolaris system:  A very comprehensive how to guide for migrating from an existing system.

Making the most of your SSD: Easy steps to set up SSD on OpenSolaris –  thanks Arnaud.

Future of Flash: How flash storage is a disruptive technology for enterprises. Hal Stern, VP Global Systems Engineering @ Sun, hosts this very informative podcast.

Seeing the performance upgrades that others are getting out of flash makes me want to try it out and see the impact on my 4-year-old desktop, which is still going strong (AMD 4200 Dual Core, 4GB memory and 200GB HDD).  Alternatively, if anyone has a 64GB SSD they’d like me to test, I’d certainly appreciate it 😉

KDE on OpenSolaris

Ever get tired of the same old GNOME desktop on OpenSolaris?  Try the latest KDE version easily, quickly and simply:

  1. add the required repository:
    pfexec pkg set-authority -O http://solaris.bionicmutton.org:10001/ bionicmutton

    *** See Note 1 below

  2. refresh the repository:
    pfexec pkg refresh bionicmutton
  3. add the required packages:
    pfexec pkg install KDEbase-apps  KDEgdm-integration

    *** See Note 3 below

  4. you should be able to log out and back in to choose KDE from the login menu; however, it might require a reboot, especially if you installed into a new boot environment (the safest way to test! – see the sketch below)
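
If you want that safety net, here is a quick sketch of the boot environment approach (the name "kde-test" is just an example):

    pfexec beadm create kde-test
    pfexec beadm activate kde-test

Reboot into the new boot environment, then run the install steps above. If anything misbehaves, activate your original boot environment again and reboot.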

Notes:

1. There are several versions of KDE available; the latest and greatest, 4.3.85, requires OpenSolaris build 121 or greater:
Use http://solaris.bionicmutton.org:10001/ for KDE version 4.3.1
(This is the version I’ve mostly been using and it seems fine)
Use http://solaris.bionicmutton.org:10002/ for KDE version 4.3.80
(Not tested yet)

Use http://solaris.bionicmutton.org:10003/ for KDE version 4.3.85
(I have only tested this today and Konqueror and KMail keep crashing)

UPDATE: The 02 and 03 repositories are just for bleeding-edge testing by the KDE folks, use at your peril!

3. The first package installs the KDE distribution; the second allows the user to choose KDE from the login menu.

Thanks to the KDE folks and as always check out the release notes/status first, so you know what you’re getting into!

OpenSolaris Dedup details

So ZFS dedup made it into build 128, which is now available from the /dev repository and will therefore be in the 2010.03 release of OpenSolaris.

So how do you set it up and test it? It’s very easy from a terminal window.  I’ve got compression set on the pools, so let’s get some baseline numbers (see Thierry’s PartnerTech blog for a very good write-up/debug of the use of compression):

# zfs get compressratio
NAME                       PROPERTY       VALUE  SOURCE
rootpool                   compressratio  1.37x  -
rootpool/ROOT              compressratio  1.50x  -
rootpool/ROOT/os_next      compressratio  1.50x  -
rootpool/dump              compressratio  1.00x  -
rootpool/swap              compressratio  1.00x  -
tank                       compressratio  1.14x  -
tank/home                  compressratio  1.14x  -

So this is the base compression currently set on the two ZFS pools. Let’s turn on dedup.  I’ve chosen to use the sha256 setting with verify: if two blocks ever have the same sha256 checksum, ZFS will do a byte-for-byte comparison before treating them as duplicates, guarding against the small chance of a hash collision.  Always good to be cautious 😉

# zfs set dedup=sha256,verify rootpool
# zfs set dedup=off rootpool/swap
# zfs set dedup=off rootpool/dump
# zfs get dedup
NAME                       PROPERTY  VALUE          SOURCE
rootpool                   dedup     sha256,verify  local
rootpool/ROOT              dedup     sha256,verify  inherited from rootpool
rootpool/ROOT/os_129       dedup     sha256,verify  inherited from rootpool
rootpool/dump              dedup     off            local
rootpool/swap              dedup     off            local
tank                       dedup     off            default
tank/home                  dedup     off            default

Now to reboot and copy some small test databases 😉
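
Once some data has been copied over, the pool-wide saving can be checked via the dedupratio pool property:

# zpool get dedupratio rootpool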

Firefox 3.6 changes the java plugin

Firefox 3.6 is just around the corner, due to be delivered later this year.

In testing I found that the old Java plugin (libjavaplugin_oji.so) on OpenSolaris was no longer recognised, and hence Java apps didn’t work 😦

So what’s the deal?

Since Java 6 update 10 there is a new implementation of the Java plugin, which means Java applets run in separate Java Virtual Machine instances launched by the plug-in’s code.  With the old plugin, applets are executed in a JVM instance embedded in the web browser’s process.

So what do OpenSolaris/Solaris users need to do?

Install Java 6 update 10 (at least); currently update 17 is available.

Remove the current Java plugin from the firefox/plugins directory:

rm /export/home/tadpole/firefox/plugins/libjavaplugin_oji.so

Add a symbolic link to the new plugin:

ln -s /usr/java/jre/lib/i386/libnpjp2.so  /export/home/tadpole/firefox/plugins

You should also check the system plugin directory: /usr/lib/firefox/plugins/
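
If Firefox is installed system-wide, the same swap applies there too (run with elevated privileges, and only if the old plugin is actually present in that directory):

pfexec rm /usr/lib/firefox/plugins/libjavaplugin_oji.so
pfexec ln -s /usr/java/jre/lib/i386/libnpjp2.so /usr/lib/firefox/plugins/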

More info can be found on the java.com pages here and here.

more innovation – ZFS Deduplication

When asked about Sun Microsystems, one word will always spring to the top of my mind: innovation.

There is such fantastic DNA in this company that looks to push boundaries and make things better – OK, we often do not get the message across well, but the effort and dedication shown by employees always make me proud.

To emphasise this point again, there is great news as told by Jeff Bonwick earlier this week: "ZFS now has built-in deduplication"

Deduplication is a process to remove duplicate copies of data, whether it’s files, blocks or bytes.

It’s probably easier to explain with an example: suppose you have a database with company addresses; the location ‘London’ will exist for quite a few customers, so instead of storing this entry 100 times, there will be one entry and the other 99 will be references to the original. This saves space, and lookup time too, as it’s likely the referenced entry will already be loaded in cache.

How easy is it to set up?

Assuming you have a storage pool named ‘tank’ and you want to use dedup, just type this:

zfs set dedup=on tank

There is more to it, so read Jeff’s blog for the whole story.

I’m guessing this should appear shortly in the OpenSolaris /dev builds, which will feed into the next OpenSolaris release (2010.03) and possibly into a later Solaris 10 update. Once it’s released, I’ll try to run some tests to see what savings I get.

This should also feed into the FreeBSD project. Such a shame OSX has dumped their ZFS project.

ZFS:Hybrid Storage Pools

There have been a few announcements recently (and more to come), and here’s one that can really be a game changer and an enabler for future tech advances:

Hybrid Storage Pools (HSP) are a new innovation designed to provide superior storage through the integration of flash with disk and DRAM. Sun and Intel have teamed up to combine their technologies of ZFS and high performance, flash-based solid state drives (SSDs) to offer enterprises cutting-edge HSP innovation that can reduce the risk, cost, complexity, and deployment time of multitiered storage environments.

Sun’s ZFS

Sun’s ZFS file system transparently manages data placement, holding copies of frequently used data in fast SSDs while less-frequently used data is stored in slower, less expensive mechanical disks. The application data set can be completely isolated from slower mechanical disk drives, unlocking new levels of performance and higher ROI. This ‘Hybrid Storage Pool’ approach provides the benefits of high performance SSDs while still saving money with low cost high capacity disk drives.

Solaris ZFS can easily be combined with Intel’s SSDs by simply adding Intel Enterprise SSDs into the server’s disk bays. ZFS is designed to dynamically recognize and add new drives, so SSDs can be configured as a cache disk without dismounting a file system that is in use. Once this is done, ZFS automatically optimizes the file system to use the SSDs as high-speed disks that improve read and write throughput for frequently accessed data, and safely cache data that will ultimately be written out to mechanical disk drives.

Intel’s SSDs

Intel’s SSDs provide 100x I/O performance improvement over mechanical disk drives with twice the reliability:

  • One Intel Extreme SATA SSD (X25-E) can provide the same IOPS as up to 50 high-RPM hard disk drives (HDDs) — handling the same server workload in less space, with no cooling requirements and lower power consumption.
  • Intel High-Performance SATA SSDs deliver higher IOPS and throughput performance than other SSDs while drastically outperforming traditional hard disk drives. Intel SATA SSDs feature the latest-generation native SATA interface with an advanced architecture employing 10 parallel NAND Flash channels equipped with the latest-generation (50nm) NAND Flash memory. With powerful Native Command Queuing to enable up to 32 concurrent operations, Intel SATA SSDs deliver the performance needed for multicore, multi-socket servers while minimizing acquisition and operating costs.
  • Intel High-Performance SATA SSDs feature sophisticated “wear leveling” algorithms that maximize SSD lifespan, evening out write activity to avoid flash memory hot-spot failures. These Intel drives also feature low write amplification and a unique wear-leveling design for higher reliability, meaning Intel drives not only perform better, they last longer. The result translates to a tangible reduction in your TCO and dramatic improvements to system performance.

Benefits of HSP

Architectures based on HSPs can consume one fifth of the power at one third of the cost of standard monolithic storage pools while providing maximum performance.

For example, if an application environment with a 350GB working set requires 30,000 IOPS to meet service level agreements, 100 15K RPM HDDs would be needed. If the drives are 300GB, consume 17.5 watts and cost $750 each, this traditional environment provides the IOPS needed, has 30TB capacity, costs $75,000 to buy, and consumes 1.75 kW of electricity.

Using a Hybrid Storage Pool, six 64 GB SSDs (at $1,000 each) provide the 30,000 IOPS required, and hold the 350GB working set. Lower cost, high-capacity drives can be used to store the rest of the data; 30 1TB 7200 RPM drives, at $689 each ($20,670) and consuming 13 watts, provide cost-effective HDD storage. The savings are dramatic:

  • Purchase cost is $26,670, a 64 percent saving
  • Electricity consumed is 0.392 kW, a 77 percent saving

Link to docs:

Solaris ZFS Enables Hybrid Storage Pools – Shatters Economic and Performance Barriers

UPDATE: Brendan from the Fishworks team has posted some speed and performance notes here.