Security shortcuts? What could possibly go wrong?

There’s a saying amongst NASA’s astronauts: “there is no problem that cannot be made worse”. Whilst that seems rather pessimistic, it serves as an important reminder: when things go wrong, it is vital to recognise that every decision you make to fix the issue, from the point of discovery onwards, has the potential to do more damage.

Nowhere is this saying truer than in the way many large companies have dealt with data breaches and cyber-attacks. Organisations have a history of bad form here, making significant cyber incidents worse at every turn, usually by trying to hide them and then mismanaging the public relations fallout. This article looks at some recent data breaches where some of the biggest companies have done just that, and at what they probably should have done instead.

In March 2018, Google+, the social networking site created by the search and ad giant to compete with Facebook, suffered a data leak that potentially exposed the data of 500,000 users. It was caused by a bug in the API mechanism that third-party applications use to access data belonging to user accounts. At the time, Google chose not to disclose the breach, presumably to avoid public scrutiny and reputational damage, despite customer data potentially having been accessed without valid permission. Avoiding reputational damage is a highly motivating factor in keeping data secure, but owning up to your mistakes should come first. In October 2018, Google let the world know they were closing Google+. Could this lapse in security, and the subsequent loss of trust, have contributed to that decision? Regardless, Google should have started with an apology, and disclosed the breach and the results of their internal investigations sooner and with far more clarity than they did. Keeping user trust after a data breach is hard, but not impossible. In this instance, Google chose to delete the Google+ service entirely (it was never going to be a Facebook killer anyway) and hope the whole mess just went away.
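To illustrate the general class of bug being described here (this is a hypothetical sketch, not Google’s actual code, which was never published), imagine a profile API that is supposed to return only the fields a user has agreed to share with a given third-party application. A missing permission filter of the kind shown below, written in Python purely for illustration, would expose private fields to every connected app:

# Hypothetical sketch of a field-level access-control bug in a profile API.
# All names and data are illustrative; they do not reflect Google's implementation.

PROFILES = {
    "user123": {
        "display_name": "Alice",         # shared publicly
        "email": "alice@example.com",    # private
        "occupation": "Engineer",        # private
    }
}

# Fields each user has agreed to expose to third-party applications
SHARED_FIELDS = {"user123": {"display_name"}}

def get_profile_buggy(user_id: str) -> dict:
    # BUG: returns every stored field, ignoring the user's sharing settings
    return dict(PROFILES[user_id])

def get_profile_fixed(user_id: str) -> dict:
    # Correct behaviour: return only the fields the user chose to share
    allowed = SHARED_FIELDS.get(user_id, set())
    return {k: v for k, v in PROFILES[user_id].items() if k in allowed}

print(get_profile_buggy("user123"))   # leaks email and occupation
print(get_profile_fixed("user123"))   # returns only display_name

The point is not the specific code, but that a single missing check in a widely used data-access path can silently expose data across every connected application.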

Another key reason for big companies to keep data secure is to minimise the financial impact: partly to mitigate the loss of business from customers, but mostly to avoid regulatory enforcement. A good example would be the Facebook scandals of the past two years, the details of which can be found on nearly every major news site. The most recent and most prominent attack, revealed in September 2018, allowed an adversary to log in to users’ personal profiles; it was estimated that over 50 million users were affected. The weakness had existed for at least a year, since June 2017, showing a disturbing lack of concern for security by the world’s largest social media company. Their laissez-faire attitude was only reinforced by the Cambridge Analytica scandal.

It’s hard to believe that Facebook didn’t know about the bugs affecting users’ profiles. All new software undergoes rigorous testing prior to release, not only for functionality but also for security – a practice adopted by most businesses. Facebook also run a huge bug bounty programme to support this.


Only Facebook knows how many advertising clients they lost because of the negative PR around their security, and what the impact was to their bottom line. What you can be sure of is that their handling of the fallout, and Zuckerberg’s frankly robotic, stage-managed appearances, were unhelpful at best, and evasive and damaging at worst.

The attitude some companies have towards data security often correlates closely with how they view the data itself. Mining and monetising user data is to the 21st century what oil was to the 20th: oil drove growth, innovation and untold wealth for those organisations able to capitalise on it, and we are seeing exactly the same with user data now.

It took several environmental disasters and many lost lives before the health and safety and energy regulators gained the teeth to force the petrochemical industry to invest heavily in “doing the right thing”. Data regulators are finally learning that self-regulation does not work for the tech giants either. The recent GDPR-related fines proposed against British Airways (part of IAG) and Marriott, of £183m and £99m respectively, will no doubt send shockwaves through boardrooms.

The consequences of large data breaches, and the legal ramifications that follow, have only become fully apparent in the last few years – so why are companies only now being held to account? Organisations like Facebook and Google have been storing large amounts of personal data for the last 15 years, some dating back even further. Is it possible that companies have been mistreating our data for a considerable amount of time, and we’re only just beginning to hear about it?

However, it’s not all doom and gloom. Some tech giants do consistently protect their customers’ privacy, Apple being the obvious example. Their motto, “what happens on an iPhone, stays on an iPhone”, is something they have remained true to, even when it has put their reputation at risk in high-profile news cases. Apple operate their own ecosystem, which has been designed to maintain both privacy and user protection.

If a data breach does occur, the consequences of not having taken appropriate security precautions, and of any substandard handling of the incident, will be felt by an organisation for years afterwards. Yes, risk can be minimised, but sadly, in most circumstances, it can never be eradicated. Remember, there is no problem that cannot be made worse – we should always be proactive, not reactive, and no company should ever gamble with its security.

Blog by Adam Casey-Rerhaye, Cyber Security Test Consultant
