Thursday, December 16, 2010

End Thought for 2010: Keeping it Vanilla

There has been a common thread running through the CRM implementations I have been involved with this year, and through various other blogs I have read and conferences I have attended: keeping your CRM implementation Vanilla.

This is an IT/CRM term which may sound vaguely amusing if you have never heard it before, but it is one of the more important things you can consider when procuring and implementing a fundraising/membership database or charity CRM system. What it means is that, as far as possible, you implement the database in the first instance without any “Customisation”, or with minimal Customisation; “Configuration”, on the other hand, is fine. (NB: all packages will allow Configuration to some degree, and if they don’t then don’t buy them.)

Why is Vanilla so Important?
Why do this? Quite simply: it makes the implementation simpler, faster and cheaper, lowers risk, reduces the need for specialist/expert resources, makes the data mapping from your existing database simpler and quicker, enables simpler testing, helps you avoid scope creep and more. All these things will mean that your initial implementation will be smoother and have a far higher chance of success. If you are buying a powerful or flexible database then it will be immensely tempting to jump straight in and implement lots of exciting Customisations from the word go, but if you do then you may not receive the benefits listed above.

Defining Configuration & Customisation
So having said that, I should define the two key terms: Configuration and Customisation. These are my definitions below and even if you or some suppliers don’t agree with all my points, the heart of the message is the same. Different suppliers will define these differently and claim different things, so at the end of the day, you need to discuss these issues with any prospective supplier and understand, in some detail if necessary, just what you can and can’t do in each of the following areas.

Configuration. It is Configuration if...
  • You use an application’s built-in tools to make changes to the system which every other organisation using this package could do and recognise if they were to start working at your charity.
  • If an upgrade/new version of the package was released tomorrow, then you could install it without worrying that any of the Configuration you had done would mean that the upgrade wouldn’t work, and equally, knowing that the upgrade would not affect any of the Configuration you had done. In practice, some upgrades might still require some such work, so if that is the case for your system then do spend time to understand just what that work is. (And of course, whatever the case, you always need to test upgrades anyway before going live with them.)
  • The changes could probably (if not always) be expected to be done by a “non-techie”. This doesn’t mean an untrained person and it doesn’t mean someone who isn’t database savvy, but to put it in perspective, I wouldn’t normally expect Configuration to require any programming/coding, i.e. writing code in VBA, XML, C++ etc. This might not always be true, but as soon as you do get to this level of “Configuration” then in my experience it is likely that you are starting to get into “Customisation”. Either way, ensure again that you know the impact.
Customisation. It is Customisation if...
  • The implementation involves bespoke changes, new modules, hard coded programming etc which the supplier or a third-party does for your organisation for your specific needs. 
  • The work means that upgrades/new versions are affected (either because it stops them happening or because your Customisation would need additional work to be done on it).
So should you ever consider Customisations during an implementation?
It is of course very easy for me to write this and say that you shouldn’t do Customisations, but do I think you should ever consider Customisation during an initial implementation? Of course you can! Consider it, but always ask yourself if it is definitely needed. If you have a critical business function which is required immediately on go-live and there is no other way to achieve it, or you will get so much benefit from a Customised approach that it just makes sense, then go ahead and see how you can implement it. In particular, if a Customisation only affects an “isolated” part of the system (or as isolated as one can expect in a CRM system), as opposed to a core area, then that should lower the risk. One thing you could also do to mitigate some risk would be to discuss with the supplier whether they could take your proposed Customisation and build it into their standard product in a future release.

And do remember that you can of course implement Customisations later, after your initial go-live. Why is this better? Because you can implement them in a more structured way, at a better pace, spreading costs, lowering risk, increasing user adoption, and doing it as you learn the package and all its capabilities. In fact, as you gain knowledge of the package, you might even find that some complex Customisations which you were originally planning can be greatly simplified or might not be needed at all. Don’t shy away from those which are needed or which do bring you benefits; just take them on at a pace and in a structure which you can implement more easily.

Are there downsides to this approach?
All that said, let me also say that if you do follow my suggestions here then there may be downsides, or at least issues to overcome, for example:
  • End-users will (hopefully) be excited by the prospect of a new system. If they have been sold on all the wonderful new, whizzy things it can do - which require Customisation… - and then find those things are not included when they first get to use it, they might be disappointed and user acceptance could stall; especially if the new system doesn’t improve their processes or, say, some screens/forms are not as they would ideally want. So you need to address this early on during an implementation and explain to the users exactly what they are going to get and when - and why you are doing this - and ideally give them a roadmap as soon as possible for future developments of the implementation, showing when they can expect the elements which do require Customisation.
  • Those great Customisations you planned for later never quite seem to happen… This is one of the higher risks of this strategy if you do not build in a structured approach from the word go, especially for charities, where money is often tight. If you are not careful, then what you start with will be seen as perfectly okay - and if it’s working, then why do we need to do more work on the system anyway…? To avoid this, ensure you explain and discuss your proposed approach with your Senior Management Team/budget approvers early on in the procurement or implementation phase, and get their full support and a committed budget for the post-live Customisations.
  • And just to re-iterate what I said above: if you do require a specific function and Customisation is the only option or the best option, then don't shy away from it.
Remember, Vanilla is not boring, it’s a great flavour because of its simplicity!

Friday, December 03, 2010

How to Stop Duplicates from Getting into Your Database

I’m starting this blog post by saying I was tempted to just write “You Can’t” and leave it at that! As you will see, there is no perfect answer to stopping duplicate records (a.k.a. dupes) from getting into your system, but that is no reason not to try, and so I hope that the following will help you.

The first thing to confirm, therefore, is that it is almost impossible to have zero dupes in your database. This may be because of human error, unknown pieces of biographical/contact information, individuals not updating you with changes of address, individuals giving you different personal details on different occasions, or from technological problems such as poor database structure, lack of data integrity or weak duplicate checking tools. And data degrades. Fast. Millions of people move address every year and data management and data quality considerations on databases often slide after any initial implementation.

So first, consider how you are inputting new records and the different sources. You will almost certainly be doing some manual data entry, and it is quite probable you might also be importing new records electronically, for example from a fulfilment house, your web site or another system within your organisation. Secondly, don’t forget that it is not only when you enter new records that you might create a duplicate; updating records can also create dupes, and again this might come from manual keying of data or from loading data from an outside source.

The first line of defence against dupes is actually a simple one: user training. Make users aware of the issues of creating dupes and what it can mean to your organisation. Train them how to check for dupes, how to enter data consistently and accurately, how to ask for full and accurate supporter information and so on. But of course they can forget or do it badly or the name/address details may be too different/complex for them to be able to find a dupe simply, so don’t just rely on this approach.

Secondly, if you haven’t already, consider if there are ways of improving data integrity and accuracy through the database technology you currently have. For example, if you store counties, ensure they are in a look-up (a.k.a. drop-down) table; limit post code fields to 8 characters (and ideally, check the data format!); split the name fields; and help users check for dupes by enabling them to use “wildcards” when searching for existing records (e.g. w*wright will find Wainwright, Wainewright, Waynewright etc).
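To illustrate the wildcard idea, here is a minimal sketch in Python (the list of surnames is invented, and your database will have its own wildcard syntax, but the matching behaviour is the same):

```python
from fnmatch import fnmatch

# Hypothetical surnames already on the database
surnames = ["Wainwright", "Wainewright", "Waynewright", "Cartwright", "Smith"]

# "w*wright" means: starts with "w", ends with "wright", anything in between
matches = [s for s in surnames if fnmatch(s.lower(), "w*wright")]
print(matches)  # finds all three Wainwright variants, but not Cartwright
```

Most fundraising databases use % or * as the wildcard character; check which yours supports.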

The next thing to have is a duplicate record checker which is native to your database and which is automatically invoked when you add/update records. At this point of data entry, you actually want such a check to be “broad” rather than “100% precise”; i.e. when you enter a new name and address, you don’t want the system to check for an exact match on all such data fields. You want it to be able to check whether there could be a duplicate record based on some of the criteria you have entered. Take my name for example: Ivan Wainewright. If you keyed a new record for me but forgot that my surname had an e in the middle, then an exact check would not find me. So, the dupe checker should check on a set number of characters within the name and address fields.

Ideally, such dupe checkers should even use fuzzy matching, something which the latest Enterprise level of SQL Server now comes with as standard. With fuzzy matching, a dupe check should find my surname with or without the e, and if I had a vanity address (e.g. Rose Cottage, 1 The Avenue etc), then it should be able to account for that too. It’s a pretty powerful tool and well worth using if you can.
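If your database doesn’t offer fuzzy matching natively, Python’s standard-library difflib gives a rough feel for how it works (this is a simple similarity ratio for illustration, not SQL Server’s algorithm):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Returns a ratio between 0.0 (nothing in common) and 1.0 (identical)
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(similarity("Wainwright", "Wainewright"))  # high: flag as a likely dupe
print(similarity("Wainwright", "Smith"))        # low: almost certainly not
```

A dupe checker would then flag any pair of records whose ratio exceeds some chosen threshold.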

But when you import data electronically you need to consider dupes slightly differently. If you have thousands of records being imported then clearly you can’t check each one by hand. But if you do want your import process to merge dupe records automatically, then too “broad” a dupe check could end up merging two records which the system thinks might be the same but which may well not be. And there are few worse things you can do in terms of supporter management! So, for electronic imports, you may decide that you do need a 100% match on data items (or as close to 100% as we can ever really be) in order to know that there is an existing record on your system; or maybe even consider asking for human interaction on a limited number of records which the system cannot definitely match but where it reports there could be a dupe.
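One way to structure that decision during an import is to triage each incoming record according to its best match score against the existing database; the thresholds below are purely illustrative:

```python
def triage(best_match_score):
    """Decide what to do with an incoming record, given the score of its
    best match against the existing database (thresholds are illustrative)."""
    if best_match_score >= 1.0:
        return "merge automatically"      # effectively an exact match
    if best_match_score >= 0.85:
        return "queue for human review"   # possible dupe: a person decides
    return "create new record"

for score in (1.0, 0.9, 0.3):
    print(score, "->", triage(score))
```

The middle band is the important one: it keeps the human-review workload down to just those records the system genuinely can’t decide on.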

Specialist software can also help with all of the above, the most common such system being PAF software, which will help you add and update addresses accurately using the Royal Mail’s post code system. It isn’t the cheapest of options if you need many licences, but it’s well worthwhile. (And you might only need such software for users who do add/update biographical data.) You can also get online PAF systems to help with data entry on web forms.

You might also want to consider using email addresses and mobile phone numbers as additional dupe checks; just because someone moves house doesn’t mean they get a new email address or mobile number, so see if that can help.
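The trick with email addresses and mobile numbers is to normalise them first, so trivial formatting differences don’t hide a match; here is a sketch (the field names are hypothetical):

```python
def contact_keys(record):
    """Build normalised email/mobile keys for dupe checking.
    Field names here are hypothetical."""
    keys = set()
    email = record.get("email", "").strip().lower()
    if email:
        keys.add(("email", email))
    # Strip everything but digits from the mobile number
    mobile = "".join(ch for ch in record.get("mobile", "") if ch.isdigit())
    if mobile:
        keys.add(("mobile", mobile))
    return keys

a = contact_keys({"email": "Ivan@Example.org", "mobile": "07700 900123"})
b = contact_keys({"email": " ivan@example.org", "mobile": "(07700) 900-123"})
print(a & b)  # both keys match despite the formatting differences
```

Two records sharing either key are strong dupe candidates even if the postal addresses disagree.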

You could also introduce a supporter self-update system on your web site which supporters can fill in if they change address. And if they do then remember, there is very little which is more indicative of a keen supporter than someone who pro-actively tells you they have moved house. Treat them well! And if you do collect data online/electronically, whether it is on your web site or through a third-party, then it’s clearly going to be more accurate if you can add it to your central database electronically rather than re-keying it.

But despite all your best efforts, it is highly likely you will create dupes, so you also need to have a regular data cleaning protocol. Again, your database may be able to help by running a duplicate record report on your records (and at different ‘levels of confidence’ so you can automate some merges and check others manually), and/or you can extract your data to analyse and check it outside your system. There are plenty of good dupe checking software packages and there are lots of agencies and companies who will help you clean your data. You can also use services such as the NCOA (National Change of Address) register to check for people who have moved house and thus identify dupes that way.

And a final thought on a far too common issue for charities: if you have multiple databases in your organisation then, amongst all sorts of other problems, transferring data between them can significantly increase the likelihood of dupes - so cut down on multiple databases whenever possible.