We’ve been doing a lot of research over the last couple of months into how our clients can best access our underlying data. Because we know it’s valuable – after all, we do a lot of work to make it so!
But direct access to an operational database is pretty poor design. There are major issues with performance, there are major issues with ‘tight coupling’ (the idea that if we make changes, your systems break), and some intellectual property issues pop up too.
And lastly – NBV is a giant, often the second-biggest database after metering. So efficient querying over massive datasets is a challenge, one that an older relational database like Oracle isn’t purpose-built to solve.
So what’s the solution? Glad you asked.
The solution is simple (aren’t all good solutions?) – let’s mix Big Data and NBV together – and what do we get?
Introducing the Big Data Connector.
So what it looks like at the moment is that we can:
So we’re tying down some loose ends, but it looks very, very promising. The lag time for NBV changes to reach the Big Data Connector is generally measured in seconds, though under massive loads it may stretch to an hour or two. We think that’s a pretty acceptable trade-off for the performance and access it provides.
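To make the decoupling idea concrete, here’s a minimal sketch of the general pattern a connector like this follows: the operational system pushes changes into a read-only replica, and clients query the replica rather than the live database. Every name here (`ConnectorStore`, the record shapes) is a hypothetical illustration, not the actual Big Data Connector API.

```python
# Hypothetical sketch of the connector pattern: clients read from a
# replicated store, never from the operational database itself.
# All class and field names below are illustrative assumptions.

class ConnectorStore:
    """A read-only replica. The source system applies changes to it;
    clients only ever read, so schema churn is absorbed here rather
    than breaking every downstream consumer."""

    def __init__(self):
        self._records = {}

    def apply_change(self, record_id, record):
        # Replication side: changes arrive seconds (occasionally an
        # hour or two, under heavy load) after they occur at source.
        self._records[record_id] = dict(record)

    def query(self, predicate):
        # Client side: filtering happens against the replica, putting
        # zero query load on the operational database.
        return [r for r in self._records.values() if predicate(r)]


store = ConnectorStore()
store.apply_change("asset-1", {"asset": "pole-17", "value": 1200})
store.apply_change("asset-2", {"asset": "pole-18", "value": 340})

high_value = store.query(lambda r: r["value"] > 500)
print(len(high_value))  # 1
```

The design point is the one the post makes: because clients depend only on the replica’s read interface, changes on the operational side don’t break their systems.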
We’re a few weeks away from finalising the testing and working out the commercial model, but it’s looking like a very strong candidate at the minute.
And in the future – we can see different connectors on our road map across all our products. Wouldn’t that be something?
Written by Adam Kierce.