Petya, Fixing/Breaking/Fixing Servers and Lunchtime Learning.
I’ve been reading…
- How companies are learning to thrive without anyone in charge.
- Detecting riots with Twitter.
- Wetherspoons just deleted their entire customer database.. on purpose.
- A new kind of tech job emphasizes skills, not a college degree.
- Why we always try to recruit from within.
- Full Fact awarded $500,000 to build automated fact checking system.
- Fake news: You ain’t seen nothing yet.
- Tackling the poverty premium through data sharing.
- The problems with tech are invariably people problems.
- Chatbots: Your ultimate prototyping tool.
- Do we still need managers?
- An agile meet-up not a top-down waterfall.
- Smart places have citizens, not customers.
- Human jobs in the future will be the ones that require emotional labour: currently undervalued and underpaid but valuable.
- The ‘Without Me’ mindset.
- Fighting crime with Slack.
- Projects from #homelesshack London.
I’ve been watching/listening to…
- Why you can’t afford to buy a house and how to fix it.
- BBC Analysis: Brexit — The Tales Of Two Cities.
- Basic Income Podcast: Rep. Chris Lee on Basic Income Legislation in Hawaii.
Stuff I’ve been doing…
Ho boy! It’s been a really tough week.
After applying updates on Tuesday evening to specifically guard against the Petya/NotPetya virus, I immediately became aware of some unintended side effects.
There’s a blog post to be written here about the unique tensions of carefully maintaining IT systems whilst supporting legacy applications, whilst also being able to respond rapidly to breaking threats like ransomware that exploit undisclosed security holes in the very things you use.
I worked till midnight on Tuesday before finally admitting that I was going nowhere and giving up till the morning. Unfortunately my brain refused to power down, so I spent much of the night staring at the ceiling thinking about what I’d do when I got back into work.
I did eventually manage to get things back on track the next day, and the net result was about four hours without email (which some people found blissful.. others, less so). I’m in the process of writing it up and looking for ways to mitigate similar problems in the future. There are some good business continuity points to pull out of this too.
Perhaps the most alarming thing about the whole experience was the feeling of anxiety it caused. It was striking because in all the time I’ve worked with technology, I’ve rarely felt such a strong physiological/emotional response when things have gone awry.
That’s something I’m still reflecting on to understand why this was different from previous experiences, if only to ensure that I don’t end up feeling the same way again.
On a more positive note, I got to run my Lunchtime Learning experiment this week. I wrote it up here if you’d like to read why/how I did it and what I learned.
I was *almost* tempted to cancel and re-arrange, as it was the day following the server problems and I felt like I was only just getting back on an even keel. However, I’m glad I pushed on, because it was a really positive experience and it felt good to close the week out with a small tick in the win column.