NEW: Cloud Safety Box v0.4

Today, I am happy to announce the v0.4 release of the Cloud Safety Box project. About a month ago, I announced the initial public release, and since that time the project has even been highlighted and demonstrated at Sun's CommunityOne event! Not too bad for a new project!

The new version released today is a substantial redesign that improves the overall structure and efficiency of the tools while adding a few key features. The most visible changes include support for compression, splitting of large files into smaller chunks, and support for Solaris key labels. Let's dive into each of these briefly:

  • Compression. Compression is enabled automatically by the Cloud Safety Box (csb) tool and is configurable when using the s3-crypto.ksh utility. When compression is enabled, the input stream or file is compressed first (before encryption and splitting). By default, compression is performed using the bzip2 utility (with the command-line option -9). To enable compression with the s3-crypto.ksh utility, use the -C option as in the following example:
    $ s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile

    Of course, compression can be used along with encryption and file splitting. Decompression is handled on get operations and is the last step to be performed (after file reassembly and decryption). Just as with compression, the bzip2 utility is used (with the command-line options -d -c). To enable decompression with the s3-crypto.ksh utility, use the -C option as in the following example:

    $ s3-crypto.ksh -C -m get -b mybucket -l myfile -r myfile

    The actual compression and decompression methods can be changed using the S3C_COMPRESS_CMD and S3C_DECOMPRESS_CMD environment variables respectively as in the following example:

    $ env S3C_COMPRESS_CMD="gzip -9" S3C_DECOMPRESS_CMD="gzip -d -c" \
       s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile
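
    To make the ordering concrete, a put with compression and the default OpenSSL provider behaves conceptually like the following pipeline. This is only a sketch: the cipher, key file path, and temporary file shown here are illustrative, not necessarily what s3-crypto.ksh uses internally.

    $ bzip2 -9 -c myfile \
        | openssl enc -aes-256-cbc -pass file:/path/to/keyfile \
        > myfile.tmp    # myfile.tmp is then (optionally split and) uploaded

    A get reverses the pipeline: download, reassemble, decrypt, then decompress:

    $ openssl enc -d -aes-256-cbc -pass file:/path/to/keyfile -in myfile.tmp \
        | bzip2 -d -c > myfile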

  • Splitting. It is well known that Cloud Storage services impose file size limits. There are times, however, when you may want to store files that exceed those limits. This is where splitting comes into the picture. Splitting takes an input file and, based upon a size threshold, divides it into a number of smaller files. Splitting is done by default by the csb tool and can be optionally enabled in the s3-crypto.ksh tool. Splitting is accomplished using the GNU split(1) program and is enabled using the -S option. The maximum file size limit is set at 4 GB by default, but it can be adjusted using the -L command-line option (specified in Kbytes). Splitting at 2 GB is enabled in the following example:
    $ s3-crypto.ksh -S -L 2000000 -m put -b mybucket -l myfile -r myfile

    When splitting is enabled and triggered (when a file's size exceeds the limit), the files stored in the Cloud Storage service use the name specified by the remote_file (-r) argument. In the above example, the split files will all begin with the name myfile. Each will have a suffix of a ~ followed by an identification string. For example, files stored in the Cloud may look like:

    myfile~aa
    myfile~ab
    myfile~ac
    The csb and s3-crypto.ksh tools use this naming convention to automatically reassemble files on get operations. Just as with splitting, reassembly is performed automatically by the csb tool and is enabled in the s3-crypto.ksh tool using the -S command-line option. When specifying a file that has been split, you do not need to include the suffix. The tools will discover that the file has been split and automatically reassemble it. Here is an example of reassembly:

    $ s3-crypto.ksh -S -m get -b mybucket -l myfile -r myfile

    The only downsides to splitting are the time it takes to split the files and the additional space needed to accommodate both the original file and the files created during the splitting process. This is unavoidable, however, as complete files must be available locally before they can be uploaded to the Cloud Storage provider.
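
    For reference, since the splitting and reassembly steps are built on standard tools, a rough standalone equivalent looks like this (illustrative only; the actual invocations inside the scripts may differ):

    $ split -b 2000000k myfile myfile~    # GNU split: creates myfile~aa, myfile~ab, ...
    $ cat myfile~* > myfile               # reassembly: concatenate the pieces in order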

  • Key Labels. The last "big" feature added in this new version is support for symmetric keys stored in PKCS#11 tokens (when the Solaris cryptographic provider is used). By default, the Solaris cryptographic provider is not selected (for reasons of portability), but it can easily be enabled in the s3-crypto.ksh tool using the -p solaris command-line option. This setting enables the use of the Solaris encrypt(1) and decrypt(1) commands in place of their OpenSSL counterparts. Using the Solaris cryptographic provider allows you to take advantage of the Solaris Key Management Framework. Today, only the Sun Software PKCS#11 softtoken is supported, but I expect to remove this restriction in a future release.

    Using the pktool(1) command, you can create a key with a specific key label:

    $ pktool genkey keystore=pkcs11 label=my-new-key keytype=aes keylen=256
    Enter PIN for Sun Software PKCS#11 softtoken  : 

    The creation of this new key (with label my-new-key) can be verified:

    $ pktool list objtype=key
    Enter PIN for Sun Software PKCS#11 softtoken  : 
    Found 1 symmetric keys.
    Key #1 - AES:  my-new-key (256 bits)

    This key can be used with the s3-crypto.ksh tool when the Solaris cryptographic provider is selected and the key label is provided using the -K command-line option as in the following example:

    $ s3-crypto.ksh -c -p solaris -m put -b mybucket -K my-new-key -l myfile -r myfile
    Enter PIN for Sun Software PKCS#11 softtoken  : 

    The same approach is used to decrypt files when a get operation is specified.
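
    Under the hood, this maps onto the Solaris encrypt(1) and decrypt(1) commands with a key label. As a rough sketch (the exact arguments used inside s3-crypto.ksh may differ), the equivalent standalone commands would be:

    $ encrypt -a aes -K my-new-key -i myfile -o myfile.enc
    Enter PIN for Sun Software PKCS#11 softtoken  : 
    $ decrypt -a aes -K my-new-key -i myfile.enc -o myfile
    Enter PIN for Sun Software PKCS#11 softtoken  : 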

As always, I am looking for feedback! Let me know if these tools are helpful and how they can be improved. You can find more information about this project on its home page at Project Kenai.

Take care!


More Stories By Glenn Brunette

Glenn Brunette is a Distinguished Engineer and Chief Security Architect at Sun Microsystems. For over 15 years, he has designed and delivered security architectures and solutions supporting a wide array of global customers. Currently, he focuses his efforts on improving security for cloud computing and other highly dynamic and scalable architectures.