I do like open data, but the recent talk of big data puts me off – implicit in the language, although often inadvertently, is the implication that you have to be big to get this stuff working for you. It makes me feel excluded – I am just a little guy with a little company who wants to make a difference in a small community. In fact, just like everyone else I’ve met through my community work with Talk About Local. For open data to deliver on its promise it needs to be open and welcoming to the little guy. With the greatest respect, the last people we need showing up to help are IBM or Accenture. I wrote about this a couple of years ago and increasingly I do feel that the big data talk is mainly a load of balls. Rufus Pollock’s article in The Guardian hits the nail on the head.
‘Just as we now find it ludicrous to talk of “big software” – as if size in itself were a measure of value – we should, and will one day, find it equally odd to talk of “big data”. Size in itself doesn’t matter – what matters is having the data, of whatever size, that helps us solve a problem or address the question we have.’
These days, totals, averages and basic charts in Excel are about as much as I can muster. The basic stats from my BA in economics are a distant memory. Although I am in a tiny minority who understand what APIs do, I could never work one myself. Attending a very good NESTA/Lottery/Nominet day yesterday on open data and the voluntary sector (see #vcsdata on Twitter), I wanted to do something practical that didn’t need to involve any coders, nor IBM.
So I reverted to some basic spreadsheets about arson published by the London Data Store on behalf of the London Fire and Emergency Planning Authority. We used to have a huge arson problem in the bit of Kings Cross I lived in – cars on fire on a weekly basis in Caledonian Ward. The data store publishes deliberate fire incidents by ward over a two-year period as a spreadsheet.
The data set was a good size, covering all London wards, but it was arranged by borough, so finding the Islington bit was straightforward. Then some simple copying, pasting and totalling allowed me to do a basic bar chart. This shows that arson has fallen by more than half in Caledonian ward since the data was last published. This is a helpful result – it helps give confidence in measures to tackle anti-social behaviour (ASB) in the area. And somehow it feels a bit more concrete than the police data. The blog post on our Kings Cross site sets it out.
All straightforward, involving simple use of Excel, no coding, no big IT company and satisfyingly small, hyperlocal even. And hopefully of use to people in the community who can say ‘Look, some of the stuff we are doing is working’.
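(For anyone who does prefer a script to a spreadsheet, the same totalling and bar chart can be done in a few lines of Python. This is only a sketch – the file name and column names below are assumptions, so check them against the actual London Data Store download before running.)

```python
# Minimal sketch of the same workflow in Python (pandas + matplotlib).
# The CSV file name and column names (Borough, Ward, Year, Incidents)
# are assumptions - adjust them to match the spreadsheet you download.
import pandas as pd
import matplotlib.pyplot as plt

# Load a CSV export of the deliberate-fires-by-ward spreadsheet
df = pd.read_csv("deliberate_fires_by_ward.csv")

# Keep just the Islington rows, then total incidents per ward per year
islington = df[df["Borough"] == "Islington"]
totals = (
    islington.groupby(["Ward", "Year"])["Incidents"]
    .sum()
    .unstack("Year")
)

# Bar chart for Caledonian ward across the two years covered
totals.loc["Caledonian"].plot(kind="bar", title="Deliberate fires, Caledonian ward")
plt.ylabel("Incidents")
plt.tight_layout()
plt.show()
```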
Well said Will – the talk of Big Data has control running through it – “this is too important and the benefits potentially far too large to be left to anyone but the big boys”. I blogged about the problem that governments don’t seem to trust networks to do things at scale…
http://podnosh.com/blog/2013/03/19/why-dont-we-trust-networks-to-do-things-at-scale-ukgovcamp13-lsis13/
Is this another area where big single solutions are sought because the aggregated effects of small actions are too hard for the top to see or understand?
Where can I find an introduction to these issues? I’m Information Officer at a CVS in north east London. I can see huge potential in connecting together the data we hold about community organisations with data from local and national government. But what is available? What are similar organisations doing?