What level of persistence ignorance can be achieved with SQL (without ORMs)?


There seems to be a consensus that ORMs, compared to vanilla SQL or SQL mappers, are not a good choice when dealing with existing/legacy databases. Check out here and here for example.

I've just started using MyBatis on a business-intensive application against a legacy database. Because MyBatis lacks ORM features such as change tracking, I started by writing update functions in my repositories that try to update the domain entity along with its direct associations. But this made update statements significantly slow!

So we are now considering a new strategy: implementing a separate function for each of the following cases: updating the entity's immediate attributes (e.g. employeesRepository.update(employee)), updating one of the entity's associations (e.g. employeesRepository.updateEmployeeDepartment(employee)), or updating one of the entity's collections (e.g. userRepository.updateUserRoles(user)). With this strategy, every time you change your business logic, you need to make sure that you're persisting exactly the changes you made.
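A minimal sketch of what such a repository contract might look like (the entity and method names here are assumptions built from the examples above, not actual code from my project):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical domain entity with immediate attributes, one
// association (department), and one collection (roles).
class Employee {
    private final int id;
    private String name;
    private String departmentCode;
    private final Set<String> roles = new HashSet<>();

    Employee(int id, String name) { this.id = id; this.name = name; }

    int getId() { return id; }
    String getDepartmentCode() { return departmentCode; }
    void moveToDepartment(String code) { this.departmentCode = code; }
    Set<String> getRoles() { return roles; }
    void grantRole(String role) { roles.add(role); }
}

// One update method per "aspect" of the entity: the calling code must
// know which aspect it changed and invoke the matching method.
interface EmployeeRepository {
    void update(Employee employee);                   // immediate attributes only
    void updateEmployeeDepartment(Employee employee); // the association
    void updateEmployeeRoles(Employee employee);      // the collection
}
```

The drawback is visible right in the interface: persistence knowledge (which columns/tables each method touches) leaks into every caller that mutates the entity.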

The main question here is: up to what extent can you achieve persistence ignorance when using vanilla SQL or SQL Mappers such as MyBatis? Is it worth it to implement a change tracking mechanism to achieve optimized updates and clean repositories?

I would also like to know whether there are better strategies for mapping my domain model to a legacy database.

=============================

UPDATE

Why persistence ignorance? For all the good reasons. Check out: What are the benefits of Persistence Ignorance? for instance. Testability and reusability are the first things that come to mind. More about persistence ignorance and its benefits: msdn.microsoft.com/en-us/magazine/dd882510.aspx.

Basically my code currently looks something like this:

public void assignProfessorAsDean(int academicId)
{
    UnitOfWork uow = UnitOfWorkFactory.create();
    FacultyRepository facultyRepository = new FacultyRepository(uow);
    FacultyMember prof = facultyRepository.getById(academicId);
    /* promoteToDean updates the set of roles possessed by the professor by adding a DeanRole */
    prof.promoteToDean();
    /* now, instead of saying facultyRepository.save(prof), */
    facultyRepository.updateRoles(prof);
    uow.commit();
}

As you can see, although data access logic and business logic appear to be separate, my data access code knows exactly what has changed in the state of my domain objects. Say I also needed to update the professor's prefix to "Dean" inside the same promoteToDean function; then, in addition to facultyRepository.updateRoles(prof);, I would also have to add something like facultyRepository.updateTitle(prof);, and so on. And these are all simplistic scenarios; imagine what the code would look like for complex use cases.



There is 1 best solution below


Oh my, it looks like quite a pickle you're in. But never fear, there is hope. (Well, hopefully, 9 months later, you're not in the same situation.) But this is for future readers too.

What you want to do is have your repositories and unit of work provide a surrogate for your domain objects. The surrogate would carry the persistence logic that you don't want to sully your objects with, such as change tracking. Then, when you call uow.commit(), it would inspect the changes on the object and execute the appropriate SQL as necessary. (This is somewhat close to what a normal ORM solution would provide.)
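As a rough sketch of that idea (every name here is invented for illustration, and real change tracking would use reflection or generated proxies rather than hand-built state maps): the unit of work takes a snapshot of each loaded object's persistent fields, and commit() diffs the current state against the snapshot, emitting an UPDATE only for the columns that actually changed:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Hypothetical snapshot-based change tracker: the repository registers
// each loaded object together with a copy of its persistent state, and
// commit() diffs the current state against that snapshot.
class ChangeTrackingUnitOfWork {
    // Maps each tracked object to a snapshot of its fields at load time.
    private final Map<Object, Map<String, Object>> snapshots = new IdentityHashMap<>();
    private final List<String> executedSql = new ArrayList<>();

    void track(Object entity, Map<String, Object> currentState) {
        snapshots.put(entity, new HashMap<>(currentState));
    }

    // Compare the entity's current state to its snapshot and build an
    // UPDATE that touches only the columns whose values changed.
    void commit(Object entity, String table, String idColumn, Object id,
                Map<String, Object> currentState) {
        Map<String, Object> before = snapshots.get(entity);
        List<String> changed = new ArrayList<>();
        for (Map.Entry<String, Object> e : currentState.entrySet()) {
            if (before == null || !Objects.equals(before.get(e.getKey()), e.getValue())) {
                changed.add(e.getKey() + " = ?");
            }
        }
        if (!changed.isEmpty()) {
            executedSql.add("UPDATE " + table + " SET "
                    + String.join(", ", changed)
                    + " WHERE " + idColumn + " = " + id);
        }
    }

    List<String> executedSql() { return executedSql; }
}
```

With something like this in place, the business method from the question could go back to prof.promoteToDean() followed by a single uow.commit(), with no per-aspect repository calls.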

On that note, are you certain that an ORM is not the best path for you? I know you mentioned there are no surrogate keys and the like, but as long as there ARE keys and they are guaranteed unique, you can still work with it. For instance, Entity Framework (which DB2 supports) can use fluent mappings to map non-standard keys to your domain objects, and it even supports keys that aren't generated by the server.