Not a week goes by in which the media does not report on new security-critical software problems. Large software manufacturers release a flood of important updates on a monthly basis. Why is it still so difficult to develop secure software despite the huge advances made in the IT world?
Security is not a feature
Software security is elusive because it is not easily detected at first glance. Unfortunately, the same is true of its absence. Even if locks, green lights, and other graphic elements lead us to believe that our software is secure, we rarely verify this ourselves.
This vagueness makes it difficult to address the issue of security in the software development process. For functional requirements, there are specific tests that can be performed: the calculation is correct, a red button is displayed at the top left, and so on. But how do you test security? Of course, you can use well-known attack patterns or bombard the application with random input (so-called «fuzzing»), and there are tools for this purpose. But that is no guarantee that a creative attacker will not find a vulnerability the tools overlooked. A further difficulty is that, unlike a defined program function, security is never fully implemented. A software developer cannot say, «OK, we've taken care of security, now I'll work on other functions.» In reality, every additional function increases the risk of introducing weaknesses into the system.
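The fuzzing idea can be sketched in a few lines: feed an input handler random strings and record every crash that is not a clean validation error. The `parse_age` function below is a made-up stand-in for real application code, with a bug planted for illustration; a real fuzzer would target actual parsing routines.

```python
import random
import string

def parse_age(text: str):
    """Toy input handler standing in for real application code (assumption)."""
    text = text.strip()
    if text[0] == "+":  # planted bug: crashes on empty input (IndexError)
        text = text[1:]
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def fuzz(target, runs=1000):
    """Feed random strings to `target` and collect unexpected crashes."""
    random.seed(42)  # fixed seed so the run is reproducible
    failures = []
    for _ in range(runs):
        sample = "".join(random.choices(string.printable,
                                        k=random.randint(0, 20)))
        try:
            target(sample)
        except ValueError:
            pass  # rejected input: the expected, safe outcome
        except Exception as exc:  # anything else is a potential vulnerability
            failures.append(f"{sample!r} -> {type(exc).__name__}")
    return failures

crashes = fuzz(parse_age)
```

Even this naive loop quickly finds the planted `IndexError`; dedicated fuzzing tools add coverage feedback and input mutation on top of the same basic idea.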
Security is a process
The issue of security must therefore be taken into consideration not only during development, but throughout the entire software life cycle as well. This is what is known as the Secure Software Development Lifecycle, or S-SDLC for short. The S-SDLC fortifies the steps in a normal development cycle with controls to guarantee that security is incorporated from start to finish:
Requirements: When creating the requirements specification, you should already give some thought to the security requirements. Is sensitive data involved here that needs to be protected? Can misuse result in serious damage? Such questions can be answered with a risk analysis. Security requirements can then be defined on this basis.
Design: In this phase, the security requirements must be considered in the architecture and design. For example, the selection of the technologies to be used, such as programming language, libraries, etc., should be tailored to the requirements.
Implementation: In secure coding, trained developers significantly reduce the probability of security-critical errors. It is therefore important that this training is repeated regularly. Trained developers detect potential sources of error and avoid trivial mistakes because they know what to look for. Specific tools assist them by detecting known error patterns and issuing appropriate warnings (static code analysis). Developers need the knowledge and ability to use these tools correctly: they have to understand the warnings, take the appropriate actions, and also know the tools' limits. Continuous code reviews further increase quality.
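A classic error pattern that both training and static analysis target is SQL injection. The sketch below, using Python's built-in sqlite3 module (the table and data are made up for illustration), contrasts the vulnerable string-concatenation style with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # String concatenation: the classic injection hole that static
    # analyzers flag. The input "' OR '1'='1" returns every row.
    query = "SELECT name, role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats `name` strictly as data,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
leaked = find_user_unsafe(payload)  # both rows leak
safe = find_user_safe(payload)      # no rows match
```

A developer who knows this pattern spots the concatenation at a glance; a static analyzer warns about it mechanically. Both are needed, because each misses cases the other catches.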
Quality assurance/Testing: Before delivery or initial use of the software, it must be checked whether the defined security requirements have been met and whether all security loopholes have been closed. There are various methods for this that are used depending on the technology, existing tools, and knowledge. A distinction is made between automated tests (dynamic scanning, fuzzing) and manual penetration tests, in which an experienced tester tries to find and exploit vulnerabilities in the software. Automated tests reach a result quickly and are cost-effective, but they have yet to attain the quality and depth of a manual test. Here, humans are still superior to automated tools: with their experience, they can better understand how an application should work and find potential vulnerabilities. This could conceivably change in the future with advances in artificial intelligence.
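As a taste of what such an automated test can look like, the sketch below spins up a throwaway local web server and checks its responses for common security headers. The demo handler and the header list are illustrative assumptions, not a complete scanner:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    """Stand-in for the application under test (assumption)."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("X-Content-Type-Options", "nosniff")
        self.end_headers()
        self.wfile.write(b"<h1>demo</h1>")

    def log_message(self, *args):
        pass  # keep the test output quiet

# Example set of headers a scan might require; real policies vary.
REQUIRED = ["X-Content-Type-Options", "X-Frame-Options",
            "Content-Security-Policy"]

def scan_headers(url):
    """Return the required security headers missing from the response."""
    with urllib.request.urlopen(url) as resp:
        return [h for h in REQUIRED if h not in resp.headers]

server = HTTPServer(("127.0.0.1", 0), DemoHandler)  # 0 = ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()
missing = scan_headers(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
```

Checks like this are cheap to run on every build; the manual penetration test then concentrates on the logic flaws no header scan can see.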
Deployment: Even if everything has been done correctly up to this point, errors can threaten security during initial use of new software. Errors in the operating system configuration or the access control could allow the application to be completely circumvented, resulting in unauthorized access to sensitive data. Or, in the case of a web application, confidential information could be intercepted on the network because of missing or incorrectly configured SSL/TLS encryption. For this reason, a review of the server and network configuration is recommended in this phase.
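Parts of such a configuration review can themselves be automated. As a minimal sketch, the function below audits an `ssl.SSLContext` from Python's standard ssl module for weak TLS settings; the specific checks (protocol floor, certificate verification, hostname check) are examples, not an exhaustive checklist:

```python
import ssl

def audit_tls_config(context):
    """Flag weak settings in a TLS configuration (illustrative checks only)."""
    findings = []
    if context.minimum_version < ssl.TLSVersion.TLSv1_2:
        findings.append("legacy protocol versions (< TLS 1.2) allowed")
    if context.verify_mode == ssl.CERT_NONE:
        findings.append("certificate verification disabled")
    if not context.check_hostname:
        findings.append("hostname check disabled")
    return findings

# A deliberately weakened context, as found in code that "just had to work":
lax = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
lax.check_hostname = False       # must be disabled before verify_mode
lax.verify_mode = ssl.CERT_NONE

# A hardened context built from the library's secure defaults:
strict = ssl.create_default_context()
strict.minimum_version = ssl.TLSVersion.TLSv1_2
```

Running `audit_tls_config` on both contexts flags the weakened one and passes the hardened one; the same idea extends to checking file permissions, open ports, or default credentials.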
Security awareness among software manufacturers has grown significantly in recent years. For example, Microsoft has invested a great deal in secure software and systems over the last 14 years with its Trustworthy Computing Initiative. Modern programming languages and frameworks now no longer permit certain errors. Development tools have become more intelligent as well, and are able to flag potential errors before it is too late.
But why are there still so many critical security problems and monthly software updates? One reason is the widespread continued use of outdated software, some of it decades old. These programs were created at a time when security awareness was still low and development tools were less mature. Even the newest operating systems still contain program code from the 1980s and 1990s. Numerous initiatives from software manufacturers show that this burden of unsafe legacy software is gradually being dismantled: bug bounty programs, for example, reward the discovery of programming errors with up to several thousand dollars.
Training for developers, however, remains a central element in secure software development, and nothing will, or should, change in this regard in the near future. This training, together with comprehensive support and a review of security aspects throughout the entire software development life cycle (S-SDLC), should ensure that so-called patch days run more smoothly in the future.