Interview with Malcolm Shore: cyber security veteran, online educator and former AISA board member

By Nick Moore

Malcolm Shore's LinkedIn Learning videos on cyber security - with a particular focus on pentesting and ethical hacking - have received more than five million views. We caught up with the security architect from his NZ farm and asked him to tackle some of the issues in cyber security today, including:

  • Can we win the 'war on cybercrime'?
  • Is cybersec a dangerous profession?
  • The 2020 Cyber Security Strategy - good, bad or indifferent?
  • Can the government be the nation's pentester?
  • Is the IoT code of practice going to work?
  • How to break into cyber security?
  • How to advance a cyber security career?

(Q) Cybercrime is said to be bigger globally than the drugs trade now. More resources are being dedicated to cyber security and yet the attacks and breaches are not slowing down. We’ve never really won the “war on drugs” - what do you think our chances are of winning the “war on cybercrime”? What needs to be done to achieve this?

MALCOLM SHORE: Cybercrime happens because technology provides the opportunity. I don’t think going out and threatening “consequences” for some anonymous offshore cyber criminal is going to win the war.

But we can slow down the breaches and attacks by removing the opportunity.

In the 1960s, cars were death traps. They had serious technical vulnerabilities but crashes were always the driver’s fault. There was no real incentive to fix them until Ralph Nader came along and published his book Unsafe at Any Speed, which was the trigger for regulatory change that required cars to meet safety standards before they were allowed on the road.

Cybercrime happens because criminals know that individuals are using equipment that is difficult, if not impossible, for them to secure, and they take advantage of that. Nevertheless, the penalty for a data breach always falls on some poor sacrificial lamb when it happens and never on the vendor.

We gave away that opportunity in the terms and conditions. We still need a Ralph Nader for cyber security to make sure vendors don’t make it easy for cyber criminals.

To stop cybercrime we have three options:

  • Continue best efforts prevention but continue to have system flaws and deploy a policing regime to deter and prosecute cyber criminals – but a global trans-jurisdictional regime is difficult in a coherent world, and all but impossible in the balkanised world we’re moving into. This seems to be the approach we’re taking and it isn’t going to work.
  • Remove the system flaws through government technical cyber-safety standards, which apply to all equipment sold – but we’d need a much greater investment in understanding how to test for cyber security and perhaps there’s far too much incentive for nations to want to encourage an unhealthy level of vulnerabilities to maintain their cyber intelligence operations. And, in any case, we’re at the mercy of global vendors and there’s little we can realistically influence.
  • Build systems without flaws. This is a political challenge as much as a technical one. Even if vendors were incentivised and had the software engineering skills to achieve such a Nirvana, the various flavours of the (USA) Patriot Act will put exploitable backdoors in all the systems anyway, and the adversaries will find them.

So, I think the short answer is no, we won’t win the war.

(Q) In Mexico, for example, the police are not infrequently attacked by drug cartels. Given that cybersec professionals stand between criminal organisations and their money, and nation-states and their espionage and IP theft, how do you rate the chances that cybersecs might be targeted?

MS: Firstly, cyber security professionals put in security measures – those security measures go after the criminals, not the security staff. I doubt that a SOC operator responding to an alert would be seen in the same light as a police patrol so I don’t rate these staff as targets.

Similarly, I don’t think there’s much likelihood we’ll see people being trafficked or taken by extraordinary rendition into teams of unethical hackers – there are plenty willing to join the teams without that.

But if we ask the question another way, "Is there a risk that security staff might be blackmailed or subject to extortion in order to get secrets such as credentials, or to take some malware into a site and install it?" – yes, I think that is very much a risk and certainly is the basis for much of our personnel security measures.

(Q) What are your thoughts on the Federal government’s 2020 Cyber Security Strategy, which was released in August?

MS: I think coming into 2020 we’re seeing evidence that, by going on the offensive in many areas of international conduct, Australia is setting itself up for a great deal of retaliatory action, including retaliation in cyberspace. Making a public statement that Australia will employ 500 new cyber warriors to take the attack to our adversaries – with a nod towards one in particular – is somewhat provocative and just serves to increase the threat level.

The government has often mentioned nation-state attacks – and if we look at China, there are over 30,000 police working on its Golden Shield project and I’d be surprised if they didn’t have as many or more on the offensive side – that’s a mighty force to have to defend against. Add to that the fact that the government has failed in many departments and agencies to stand up its cyber defences to the standard required by the PSPF (Protective Security Policy Framework), and the outcome of that cyber conflict, when it comes, will be pretty clear cut.

So this creates an interesting challenge for the Cyber Security Strategy, and sets the scene for what it has to deliver.

The Strategy continues a lot of the same approach that we’ve seen for the past 20 years: that government will invest to build its capabilities and be there to help business. It plans to establish an effective set of cyber eyes and ears to detect and warn business of the cyber threats. But this conversation started in the mid-1990s with the establishment of the Critical Infrastructure Protection centres – and has done little to deliver a safe digital environment to date. Building a billion-dollar technology solution is fine if we know what we’re solving, but it’s certainly not clear to me that anyone understands how to monitor at national scale to detect every potential state-sponsored zero day. So I think the CESAR (Cyber Enhanced Situational Awareness and Response) initiative is a very high-risk investment and unlikely to deliver more than the existing critical infrastructure initiatives have.

The cyber security strategy focuses on government being more directive on industry. But I don’t see any indication that it plans to understand the business problems that need to be solved, nor ensure that the solutions being proposed are a good fit to industry’s business requirements. The point of having a secure digital infrastructure is to enable business to thrive, not to choke business. It’s difficult enough to get industry’s own security teams sufficiently trained and focused to be able to deliver business-enhancing security, and having government involved in directing security at arm’s length and with no understanding of the business is just not a recipe for success.

Setting a minimum security baseline across the economy is an extension of the PSPF approach to security, and it’s unclear how government will gain an effective outcome when it has failed to do so with PSPF in its own community. Much more thought will be required to understand what success looks like, and what is required to achieve it. Making it law, ploughing money into a security-enforcement program, and punishing businesses when they have a breach isn’t the kind of government support that Australian industry needs from its government.

There’s a continuing expectation that we need the community to be more capable of being secure. That’s like saying the cars are unsafe at any speed, so it’s up to the driver to avoid accidents, or that everyone should learn judo because they should expect to be attacked daily on the street. It gets the government off the hook, but it does nothing to solve the problem. What’s really required is an innovative government mindset to change the paradigm – ensure that what my 80-year-old mother-in-law uses to browse the internet provides her with security, and that the internet she browses is safe. So trotting out the same old line that it’s a user problem and setting up agencies to produce more pamphlets isn’t likely to be a path to success.

On the upside, the Strategy takes a positive step forward with government working with telcos to deliver cleaner pipes. If this is a first step towards having a data utility that meets the same level of safe-service standards as power and water, then it’s a step in the right direction. At the same time, prioritising a high level of trustworthiness in the telecommunications sector would be a very good complementary measure. This is a much more likely path to success than government monitoring, and the telcos have the potential to be a very effective first line of defence – and this is an area that should have a much greater level of attention and government investment. This of course was one of the many lost opportunities of the NBN project.

So what do I think is missing in the strategy?

  • It is disappointing that the Strategy does not address the issue of insecure technology with investment in encouraging the use of advanced high-integrity software initiatives, with government being the role model for industry. I think expecting to have hardened government technology by putting cyber security clauses in contracts is a very weak approach, and having more policies and procedures will just destroy more trees. It would be much better to encourage and invest in developing more rigorous engineering practices. As an example, moving from proprietary security-design-pattern libraries, if they’re even used, to a vast commodity source of pre-constructed high-quality software-security components would be a big step forward. Having a cyber security strategy that drives this kind of secure design thinking would be a path to success.
  • I think the Strategy could have focused on delivering some level of fit-for-purpose testing for technology. The Common Criteria regime and the AISEP (Australasian Information Security Evaluation Program) that we run here in Australia set that standard for government systems, but it has proved to be a failure and little of the government technology in use today has Common Criteria certification. It’s not surprising. In 2002, Windows 2000 received Common Criteria EAL4 certification – pretty much the highest level expected for consumer products – yet the CVE (Common Vulnerabilities and Exposures) database lists 91 vulnerabilities that were subsequently found, and there are likely to be hundreds more in subsequent Windows products that trace back to that time. That’s a lot of wheels that have fallen off that car. Nevertheless, we can do better. The techniques applied in the military and aerospace industries for safety-critical systems can and should be applied to good effect in industry. While the 737 MAX failure shows that even safety-critical software design is not perfect, on the whole the safety-critical industry has done better than the rest. There’s now a British Standard on trustworthy technology, and that could be a good starting point for developing resilient systems. The AISEP scheme has failed and should be revamped together with the PSPF into a program that can achieve a real step up for government and critical-infrastructure systems. Perhaps the Cyber Security Best Practice Regulation Task Force can come up with an answer to this.
  • The area of trustworthy technology doesn’t really get a look in, although there is an expected outcome of increasing the level of confidence customers have in products and services. Looking at the new British Standard on Trustworthy Systems, and the work that’s been done on applying this to trustworthy product development, would have been nice to see as a strategic goal of some worth.
  • The Strategy doesn’t really touch on cyber diplomacy, preferring to take the path of cyber warfare and “stronger consequences”. I’d have liked to see cyber diplomacy being a key platform of the Strategy and taking a real part in developing a peaceful and harmonious international digital environment, working collaboratively with all parties and avoiding the need to be on a cyberwar footing. Somehow I think that opportunity is another lost one.


(Q) A lot of the Strategy is devoted to government taking a more active role in defending businesses that it deems to be “critical”. What shape do you think that government involvement should take? Can you see a future where a government agency becomes the nation’s pentester? What are your thoughts on the government, say, scanning a business with its industrial-scale cyber assets, and then issuing an order for that business to install patches on known vulnerabilities that it finds? (For example, Equifax and Apache Struts)

MS: Overall, I think having the government take an active role in defending businesses is almost certain to be a failure, and will do much more harm to industry than good. The government record in defending itself is not good, so why does it think it can do better with private industry? The PSPF is a security obligation on all departments and agencies and has been for a long time, and to my knowledge there’s just a handful of departments that have met the PSPF standard. I’d prefer to see resources stepped up to make the PSPF an effective security instrument before presuming to tell industry what to do. Being a role model rather than taking an active role would be sensible, so I strongly support the Cyber Security Strategy action plan to harden government IT.

The Cyber Security Strategy envisages that critical-infrastructure organisations will be required to participate in cyber security activities in partnership with the government, covering third-party assessments, vulnerability scans and the development of incident response plans. This is a proactive step and much more considered than the bizarre idea that was floated early on of having cyber spooks roaming around in telco networks hunting foreign cyber intruders. But while the aspiration is good, it’s unclear how government will enforce this any more effectively than it has managed with the PSPF in its own community.

Beyond this, the concept of having a national pentesting service appears to have some merit, but there are pentesters in private industry who are just as good if not better than many government pentesters, and time and again it has been shown that private industry is a more effective solution than public service. For sure, the government has access to classified knowledge that private industry doesn’t – but that’s generally been held close by the offensive side, not shared with the defensive side.

My concern with government getting involved in pentesting and vulnerability scanning is that these activities are the ambulance at the bottom of the cliff – the real cyber security failure happens well before the pentester finds a breach, and perhaps a much more effective approach would be to develop strategies to encourage the construction of correct systems rather than focusing on correcting badly constructed systems. Ensuring technology is trustworthy really is the approach to take.

So the implementation of the Strategy could usefully begin by focusing on the government sector and stopping the many government breaches before taking on the scale required for testing the whole of private industry. It should do so by encouraging the design and development of certifiably trustworthy systems. Then government might be in a position to help industry.

(Q) What is your take on the government’s voluntary Code of Practice: Securing the Internet of Things for Consumers? What did you like about it? What did you think was missing or would improve it?

MS: My take is disappointment at an opportunity lost.

The Code of Practice appears to be a lightweight advisory document following in the footsteps of similar initiatives such as the UK's Ten Cyber Essentials. It’s not as detailed as ISO27000’s 134 or the ISM’s 676+ controls, and it doesn’t show the depth of thought that went into the NIST Cybersecurity Framework. It doesn’t take any real account of specific IoT technologies – is LoRa more secure than NB-IoT? It doesn’t say. Nevertheless, it’s all sensible stuff, all been said before, all high level and of limited practical use, so unlikely to ever be effective. Honestly – it’s 2020 and we’re trotting out the same lines we did in 1991. For anyone wanting guidance on IoT security, go look at what the IoT Alliance Australia or the Industrial IoT Consortium has published.

So what would I have liked to see? A government initiative to establish a national program so that products would be tested against a real world relevant IoT security testing standard to ensure that they were secure and fit for purpose. Something that would have a chance of being successful in practice, would establish Australia as a global thought leader, and help create a safer Australia. The work to do this was in fact well under way in IoTAA – but it would seem there’s no government appetite to put in place any real means of making IoT secure, and this means that our Smart City initiatives will be high-risk ventures.

(Q) You have produced many courses for LinkedIn Learning on cyber security and especially penetration testing and ethical hacking. What is your advice for someone wanting to land a job in pentesting or ethical hacking?

MS: I get asked that from time to time by my viewers. The answer is to have an OSCP or CREST qualification and 10 years of practical experience - that pretty much guarantees landing a job. The real problem is getting into the space from a standing start.

Doing a series of structured courses that have a practical component is a good way to start building the foundations. CEH (Certified Ethical Hacker) has evolved from a purely paper-based qualification of minimal benefit to one that has an element of practical skill, and is an accessible qualification. And yes, taking my courses is one option for training up!

The Certificate IV in Cyber Security that I initiated together with Box Hill TAFE is an excellent pathway into pentesting and security operations, and it has now become the blueprint for many TAFEs around the country. It has the feature of engaging with industry and bringing businesses and students together so that students have a job when they graduate. Because TAFE learning is very much vocational, its graduates are immediately productive for their employers. This is a great opportunity for youngsters and people wanting a career change to get a solid foundation as well as an entry path into a job.

There are a number of online penetration labs – the one I spend time in is the European Hack The Box – and these are designed to help beginners build up their skills and experienced pentesters maintain them. In my experience, there are many people in these labs who are not only highly competent but also very supportive and prepared to spend time helping newcomers.

Finally, finding a mentor that can allow you to shadow them on assignments is a very good way to ramp up to a level where you can get the experience to secure a junior testing role.

(Q) What’s the best advice you could give someone working in information security today on how to do their jobs better. It could be a piece of tech, a framework or process, or an attitude or soft skill? Why will this help them to do a better job?

MS: Firstly, make sure that they understand what the business is doing and how security can help make the business successful. The best security in the world is worth naught if it doesn’t help the business, and it’s worth even less if it hinders the business. Take a SABSA course – it's not about doing security architecture, it's about thinking from a business perspective about security.

Secondly, make sure that they master their field. If part of their role is to secure a web application, don’t just blindly do the OWASP Top Ten mitigations; put the effort in to really understand the technology used, its vulnerabilities, and how effective the security design is. Be able not just to talk about it, but to be a hands-on subject-matter expert.
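To make the point concrete, here is a minimal sketch of the difference between ticking the OWASP injection box and understanding the underlying flaw. It uses Python's built-in sqlite3 module; the table, rows and payload are invented purely for illustration:

```python
import sqlite3

# Toy database: two users, one classic injection payload.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"  # attacker-controlled string

# Vulnerable pattern: string concatenation lets the payload
# rewrite the query, so the OR clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe pattern: the ? placeholder binds the payload as a literal
# value, so it is compared as data, never parsed as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # both rows leak
print(safe)        # no rows: no user is literally named "bob' OR '1'='1"
```

Knowing why the placeholder works – the input never reaches the SQL parser – is the kind of hands-on understanding that separates a subject-matter expert from a checklist-follower.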

Thirdly, be meticulous. Make sure that when they do security, they do it properly and with an in depth adversary mindset. I’ve seen countless assurances that security has been implemented properly, only to be resoundingly disproved by a good pentester. Just because we want it to be doesn’t make it so – learn how to self-assure your work before someone comes in to do it for you.