As you may know, there are currently two drivers for the Radeon R300-R500 families of GPUs. There is the classic Mesa driver and the r300g Gallium 3D driver.
The classic Mesa driver has obviously been around longer and has therefore seen more bugfixing and general attention. Naturally, r300g is not as mature. Even so, Gallium 3D is where the future lies, because the potential of multiple state trackers will only grow. Think unified acceleration logic for the X server, client-side accelerated 2D rendering, OpenCL – the possibilities are endless: each of these items simply needs a state tracker, and we can then painlessly hook our driver up to it without any additional driver-side work.
The question is where the cutoff should be. At which point do we "stop caring" about the classic Mesa driver? Here, "stop caring" obviously means stop implementing new features; bugfixing will remain important.
This has become a more important question for me now that I've entered new feature territory again by exploring GLSL. While the shader compiler is shared between classic Mesa and r300g, some more changes will probably be required. Given that we also need to support the rest of OpenGL 2.0 to support GLSL well (a lot of applications only test for OpenGL 2.0 and will not use GLSL otherwise, even if the ARB extensions are there), I now have an even bigger incentive to make the break to Gallium.
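To illustrate why exposing OpenGL 2.0 matters and not just the extensions, here is a rough sketch (my own illustration, not code from any particular application) of how many applications decide whether to use GLSL. The function name `glsl_usable` and the exact checks are hypothetical; the strings stand in for what `glGetString(GL_VERSION)` and `glGetString(GL_EXTENSIONS)` would return:

```c
#include <string.h>

/* Hypothetical sketch: decide whether GLSL can be used, given the
 * GL_VERSION and GL_EXTENSIONS strings from the driver. Many
 * applications only perform the first check. */
static int glsl_usable(const char *version, const char *extensions)
{
    /* The version string begins with "major.minor"; a major version
     * of 2 or higher guarantees GLSL support. */
    if (version && version[0] >= '2' && version[0] <= '9')
        return 1;

    /* Fewer applications bother with the equivalent ARB extension
     * probe, which is why an OpenGL 1.5 driver loses out even when
     * it advertises these extensions. */
    return extensions != NULL
        && strstr(extensions, "GL_ARB_shader_objects") != NULL
        && strstr(extensions, "GL_ARB_vertex_shader") != NULL
        && strstr(extensions, "GL_ARB_fragment_shader") != NULL;
}
```

An application written this way will happily use GLSL on a driver reporting "2.0", but a stricter one that skips the extension fallback entirely will ignore GLSL on a "1.5" driver no matter which ARB extensions are advertised.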
I believe it's a very viable and sane strategy: leave the classic Mesa driver at its current OpenGL 1.5 level and let it become a solid base for conservative users (including the next round or two of Linux distributions). In the meantime, get r300g into good shape, particularly against Piglit, and get cracking on those OpenGL 2.0 features over in Gallium territory.