Bash rocks

Whenever I get a new Windows development machine, the first thing I do is install some software to actually make it usable. First off, it’s usually Cygwin.
This allows me to work on a Windows filesystem using a proper set of tools, including the venerable Bash shell. This shell has so many tricks and shortcuts up its sleeve, it’s incredible. One of my favourites (and one I always forget how to use between uses) is variable substitution. The Win32 CLI offers a watered-down version of this as well, but Bash allows you to do things like this:

for i in *.MD5; do mv -f "$i" "${i%.MD5}.md5"; done

Note the variable substitution bit in the curly braces: ${i%.MD5} says “return everything except the shortest match of the pattern .MD5, starting from the right-hand side of the variable $i”. (Using %% instead of % would remove the longest match.)

Easy Testing with AbstractTransactionalSpringContextTests

Spring 1.2 has some nice support for database testing. Specifically, it has the class AbstractTransactionalSpringContextTests, which will execute each of its tests within an individual transaction and then roll back at the end of each test, unless you specify otherwise by calling setComplete(). This class in turn extends AbstractDependencyInjectionSpringContextTests, which can use field injection to populate its data. Field injection looks up beans in the current context by name, rather than by type. So, for instance, our PersistentTestCase class can now look like:

import org.hibernate.SessionFactory;
import org.springframework.test.AbstractTransactionalSpringContextTests;

public class PersistentTestCase extends AbstractTransactionalSpringContextTests {

    protected SessionFactory testSessionFactory;   // wired by name via field injection
    protected ReferenceData referenceData;

    protected PersistentTestCase() {
        setPopulateProtectedVariables(true);
    }

    protected String[] getConfigLocations() {
        return new String[] { "application-context.xml" };
    }

    protected void flushSession() {
        // assumes Hibernate 3's getCurrentSession()
        testSessionFactory.getCurrentSession().flush();
    }

    protected void clearSession() {
        testSessionFactory.getCurrentSession().clear();
    }

    public void setReferenceData(ReferenceData p_referenceData) {
        referenceData = p_referenceData;
    }
}

The neat thing about this is that we have a whole bunch of reference data that is injected into the class and populated automatically – just by declaring the data as a protected field, calling setPopulateProtectedVariables(true), and declaring the reference data bean in application-context.xml with the correct name. Field-based injection takes care of the rest. Very easy, and it keeps things nice and neat. Another nice feature is that contexts are cached rather than reloaded and reinitialized for every test, which can save a lot of time if you’re using Hibernate’s create-drop mode and it spends a long time initializing constraints.
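For reference, here is a minimal sketch of the corresponding bean declaration in application-context.xml. The bean id must match the name of the protected field so that field injection can find it; the ReferenceData class and package shown here are hypothetical:

```xml
<!-- id must match the protected field name "referenceData" -->
<bean id="referenceData" class="com.example.ReferenceData"/>
```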

Eclipse package-info bug solved

I finally found the cause of the Eclipse bug I mentioned earlier. I took a look at the .class files generated by IDEA and Eclipse and compared them, first using javap, and then using good old Vim’s built-in hex mode. Using a copy of the class file format specification (updated for Java 5), I was able to decipher the .class files and find a few interesting things.

For starters, here is the class file format:

ClassFile {
    u4 magic;
    u2 minor_version;
    u2 major_version;
    u2 constant_pool_count;
    cp_info constant_pool[constant_pool_count-1];
    u2 access_flags;
    u2 this_class;
    u2 super_class;
    u2 interfaces_count;
    u2 interfaces[interfaces_count];
    u2 fields_count;
    field_info fields[fields_count];
    u2 methods_count;
    method_info methods[methods_count];
    u2 attributes_count;
    attribute_info attributes[attributes_count];
}

First of all, the constant pool layout seems to be arbitrary: there doesn’t seem to be any fixed relationship between where constants appear in the pool and where they are defined in the source code. This stands to reason – the constant pool is a lookup table, after all, and it is irrelevant where entries physically appear, as long as they are indexed correctly.

The second issue was actually tracking down what Eclipse and IDEA did differently. I had a feeling that the problem might reside in the attributes section of the class file, but as it turned out, the error was in the access_flags field. The access_flags field is defined as:

Flag Name       Value    Interpretation
ACC_PUBLIC      0x0001   Declared public; may be accessed from outside its package.
ACC_FINAL       0x0010   Declared final; no subclasses allowed.
ACC_SUPER       0x0020   Treat superclass methods specially when invoked by the invokespecial instruction.
ACC_INTERFACE   0x0200   Is an interface, not a class.
ACC_ABSTRACT    0x0400   Declared abstract; may not be instantiated.
ACC_SYNTHETIC   0x1000   Declared synthetic; not present in the source code.
ACC_ANNOTATION  0x2000   Declared as an annotation type.
ACC_ENUM        0x4000   Declared as an enum type.

The value of this field was 0x1600 for Eclipse, but 0x0200 for IDEA. The Eclipse value was (I presume) constructed as ACC_INTERFACE | ACC_ABSTRACT | ACC_SYNTHETIC (0x0200 | 0x0400 | 0x1000 = 0x1600). The ClassLoader choked when it saw the ACC_SYNTHETIC flag. This bug has now been fixed in Eclipse. Oddly, the “bug” is technically correct – the synthetic flag should be perfectly valid for a package-info class file. Apparently it will be accepted in Java 6.0.

The strange thing is that the IDEA-generated access_flags value is 0x0200 (ACC_INTERFACE), when, strictly in accordance with the spec, it should be 0x0600 (ACC_INTERFACE | ACC_ABSTRACT), so it is technically erroneous. I’m not sure why the VM does not flag it as such.
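As a sanity check of the flag arithmetic above, here is a small throwaway class (my own, not from the original investigation) that decodes a class-level access_flags value into the ACC_* names from the class file specification:

```java
public class DecodeAccessFlags {

    // Class-level access_flags values from the JVM class file format spec.
    static final int[] VALUES = {0x0001, 0x0010, 0x0020, 0x0200,
                                 0x0400, 0x1000, 0x2000, 0x4000};
    static final String[] NAMES = {"ACC_PUBLIC", "ACC_FINAL", "ACC_SUPER", "ACC_INTERFACE",
                                   "ACC_ABSTRACT", "ACC_SYNTHETIC", "ACC_ANNOTATION", "ACC_ENUM"};

    static String decode(int flags) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < VALUES.length; i++) {
            if ((flags & VALUES[i]) != 0) {
                if (sb.length() > 0) sb.append(" | ");
                sb.append(NAMES[i]);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode(0x1600)); // ACC_INTERFACE | ACC_ABSTRACT | ACC_SYNTHETIC
        System.out.println(decode(0x0200)); // ACC_INTERFACE
    }
}
```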

The last point I noticed is that the extensible attribute system that Sun built into the class file format was a stroke of genius. This is what supports the JDK 5 annotation system, and has previously allowed vendors to extend the class format in ways never thought possible.

Generics Problem #1 Solved

I have found a solution for the generics related error I mentioned here. The solution was to parameterize Comparable, e.g. instead of:

ActionState implements Comparable

we now write:

ActionState implements Comparable<ActionState>

This solves the issue for Eclipse. It’s slightly strange that Eclipse flags this as an error while IDEA does not.
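As a self-contained illustration of the fix (the class and field names here are my own, not from the original code), parameterizing Comparable means compareTo takes the concrete type directly, with no cast from Object:

```java
import java.util.Arrays;

// Hypothetical example: implementing Comparable<Foo> rather than raw
// Comparable gives a typed compareTo, with no casting required.
public class Foo implements Comparable<Foo> {
    final int rank;

    public Foo(int rank) { this.rank = rank; }

    public int compareTo(Foo other) {      // not compareTo(Object)
        return this.rank - other.rank;
    }

    public static void main(String[] args) {
        Foo[] foos = { new Foo(3), new Foo(1), new Foo(2) };
        Arrays.sort(foos);                 // uses the typed compareTo
        for (Foo f : foos) System.out.print(f.rank + " ");  // prints: 1 2 3
    }
}
```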

NTLM Proxy Authentication with Neon

Neon is really a great library, and the recently released version 0.25.0 has built-in support for NTLM authentication. Here is a sample program showing how this can be used in practice.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "ne_socket.h"
#include "ne_session.h"
#include "ne_request.h"
#include "ne_auth.h"
#include "ne_utils.h"

typedef struct {
    int challenge_issued;
} user_data;

/* Response reader callback function */
void my_response_reader(void *userdata, const char *buf, size_t len) {
    /* buf is not necessarily NUL-terminated, so bound the print by len */
    printf("Received: %.*s\n", (int)len, buf);
}

/* Authentication callback */
static int my_auth(void *userdata, const char *realm, int attempts,
                   char *username, char *password) {
    user_data *data = (user_data *)userdata;
    data->challenge_issued = 1;
    strncpy(username, "username", NE_ABUFSIZ);
    strncpy(password, "password", NE_ABUFSIZ);
    /* Returning non-zero aborts the request, so this allows one attempt */
    return attempts;
}

int main(int argc, char *argv[]) {
    user_data data = { 0 };

    if (ne_sock_init() != 0) {
        printf("Cannot initialize Neon library\n");
        return 1;
    }

    ne_session *session = ne_session_create("http", "", 80);
    ne_session_proxy(session, "", 8080);
    ne_set_proxy_auth(session, my_auth, (void *)&data);

    ne_request *req = ne_request_create(session, "GET", "/");
    ne_add_request_header(req, "Connection", "Keep-Alive");

    ne_debug_init(stdout, NE_DBG_HTTPAUTH);

    if (ne_request_dispatch(req) != NE_OK) {
        printf("An error occurred: %s\n", ne_get_error(session));
    } else {
        printf("Response status code was %d\n", ne_get_status(req)->code);
    }

    ne_request_destroy(req);
    ne_session_destroy(session);
    ne_sock_exit();
    return 0;
}

JDK-Based NTLM Authentication

When Sun released the 1.4.2 version of the JDK, they slipped in support for native NTLM authentication on Windows. This works by using the logged-on user’s credentials, obtained from the OS, whenever an NTLM challenge is received. The only information that needs to be supplied is the NT domain. Here is an example that authenticates through an MS proxy, using my currently active credentials.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class TestJDKNTLM {

    private void execute() throws IOException {

        URL url = new URL("");
        URLConnection conn = url.openConnection();

        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String body;

        do {
            body = reader.readLine();
        } while (body != null);

        reader.close();
    }

    public static void main(String[] args) throws IOException {
        new TestJDKNTLM().execute();
    }
}

NTLM Proxy Authentication and Jakarta HttpClient

In my current work environment, our Web access is proxied via an MS ISA server, which uses NTLM proxy authentication. I was recently looking at NTLM proxy authentication, as I had problems running Subversion behind the proxy (this should be fixed now with the newest release of Neon, Subversion’s WebDAV layer). I am currently looking at some NTLM providers in the Java space, and one of the obvious ones I came across is the Jakarta HttpClient. Here is an example that will authenticate to an NTLM-based proxy. The code is for HttpClient 3.0-RC2.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpException;
import org.apache.commons.httpclient.NTCredentials;
import org.apache.commons.httpclient.auth.AuthPolicy;
import org.apache.commons.httpclient.auth.AuthScope;
import org.apache.commons.httpclient.methods.GetMethod;

public class TestNTLM {

    private void execute() throws HttpException, IOException {
        HttpClient proxyclient = new HttpClient();

        proxyclient.getHostConfiguration().setProxy("", 8080);   // proxy host elided

        List authPrefs = new ArrayList();
        authPrefs.add(AuthPolicy.NTLM);

        proxyclient.getState().setProxyCredentials(
            new AuthScope(null, 8080, null),
            new NTCredentials("username", "password", "", "MYDOMAIN"));

        proxyclient.getParams().setParameter(AuthPolicy.AUTH_SCHEME_PRIORITY, authPrefs);

        GetMethod get = new GetMethod("/");
        int status = proxyclient.executeMethod(get);
        System.out.println("Status code: " + status);

        BufferedReader bis = new BufferedReader(new InputStreamReader(get.getResponseBodyAsStream()));

        int count;
        int read = 0;
        System.out.println("Content length: " + get.getResponseContentLength());
        char[] body = new char[2048];
        while ((count = bis.read(body)) != -1) {
            read += count;
        }

        System.out.println("Read " + read + " bytes");
        get.releaseConnection();
    }

    public static void main(String[] args) throws HttpException, IOException {
        new TestNTLM().execute();
    }
}

Eclipse 3.1M6 and package-info

Another issue, this time with Eclipse 3.1 M6. This concerns the “package-info” mechanism for package-level annotations. We are using a package-info.java file to declare a package-level sequence generator:



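The original snippet did not survive, but a package-level generator declaration in package-info.java looks roughly like this. The generator name, strategy, and the use of @GenericGenerator are assumptions on my part; only the package name is taken from the stack trace below:

```java
// Hypothetical reconstruction: generator name and strategy are assumptions.
@GenericGenerator(name = "seq_gen", strategy = "sequence")
package uk.co.researchkitchen.hibernate;

import org.hibernate.annotations.GenericGenerator;
```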
This seems to compile OK, but when I try to run some unit tests, it chokes:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'test.sessionFactory' defined in class path resource [application_test_context.xml]: Initialization of bean failed; nested exception is java.lang.ClassFormatError: Illegal class modifiers in class uk/co/researchkitchen/hibernate/package-info: 0x1600
java.lang.ClassFormatError: Illegal class modifiers in class uk/co/researchkitchen/hibernate/package-info: 0x1600
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(Unknown Source)
at java.security.SecureClassLoader.defineClass(Unknown Source)
at java.net.URLClassLoader.defineClass(Unknown Source)
at java.net.URLClassLoader.access$100(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at org.hibernate.util.ReflectHelper.classForName(
at org.hibernate.cfg.AnnotationBinder.bindPackage(
at org.hibernate.cfg.AnnotationConfiguration.addPackage(

Apparently, support for the package-info mechanism was added in M6. I’m presuming this is an Eclipse issue, as colleagues using IDEA have no such problems.

Buggy MemoryMapping in the JDK

I just ran into a problem using memory-mapped byte buffers on Java 5. The basic use case is: read from a socket, write to a file, then map the resulting file into memory to perform digest calculations, etc. across the entire file. It works the first time, but any subsequent attempt to rewrite the file fails with the message “The requested operation cannot be performed on a file with a user-mapped section open”.

After some searching, I found that this is a relatively common problem, and it is a facet of the way that memory mapping actually works on the underlying OS. It seems you can have speed, or safety, but not both. The issue is that it is currently impossible to provide a reliable, platform-independent unmap operation. An example of unmap() at the C level can be seen here.

In fairness to Sun, they can hardly be blamed for the existence of this problem – it seems to be genuinely intractable. The end of their evaluation note in the Bug Parade entry for this bug reads:

We at Sun have given this problem a lot of thought, both during the original
development of NIO and in the time since. We have yet to come up with a way to
implement an unmap() method that’s safe, efficient, and plausibly portable
across operating systems. We’ve explored several other alternatives aside from
the two described above, but all of them were even more problematic. We’d be
thrilled if someone could come up with a workable solution, so we’ll leave this
bug open in the hope that it will attract attention from someone more clever
than we are.

Just for reference, the code that creates the buffer is shown below:

FileInputStream fis = new FileInputStream(requestedFile);
FileChannel fc = fis.getChannel();
int sz = (int) fc.size();
MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, sz);
byte[] buffer = new byte[sz];
bb.get(buffer, 0, sz);
fis.close();   // note: closing the channel does not release the mapping

ClassCastExceptions and Hibernate mappings

I just had an issue where Hibernate was throwing ClassCastExceptions when trying to persist an entity to the database. I tracked the problem down to a specific property – a byte[]. I had just switched my entity mappings from an hbm.xml file to an annotations-based approach. The solution was fairly straightforward, as it turned out – just specify the mapping type explicitly as “binary”. It seems that Hibernate may be attempting to map it as a blob type, which doesn’t map transparently to primitive byte arrays (yet – I see there is a PrimitiveByteArrayBlobType in the Hibernate Annotations API). Meanwhile, I just declare the mapping like so:

  @Type(type = "binary")   // org.hibernate.annotations.Type – map explicitly as "binary" rather than a blob
  public byte[] getRawData() {
    return rawData;
  }