Krzysztof Labuda, security engineer, shares some useful insights on threat models: STRIDE and the OWASP TOP 10, as sources of methodologies that facilitate efficient penetration testing.
As part of this article, I’ll take a deeper dive into threat models that can be successfully applied to develop a security software testing strategy. I have picked two: STRIDE and the OWASP TOP 10 (although the latter is perhaps not a threat model per se, the web application cyber security community treats it as something of an unwritten standard).
Security tests are, despite appearances to the contrary, hard work. It’s not about mindlessly attacking a given system with the exploits offered by, for instance, the Metasploit Framework on default settings, as one may think. Behind every well-executed test (including penetration tests) there is a plan – and that plan comes as a guarantee of predictability and order.
Security test plan – the foundation of secure software
It is advisable to include a timeframe for the exploratory approach in the schedule of activities (especially if we are dealing with some sort of prototype device or a completely new web application). Nevertheless, it is always recommended that all these activities be carried out independently of each other. When developing a test plan, a lot of inspiration for security testing web applications can be found in OWASP, or more precisely in the list compiled by this foundation: the OWASP TOP 10.
The OWASP organization develops and gathers tools, articles, tutorials, and ready-made rules for WAFs (Web Application Firewalls), all offered pro bono as an accessible knowledge base, with the goal of enhancing the security of web applications. But there is more to it than that.
The knowledge base also includes guides for mobile applications and teaching aids that help develop the skills needed for application security testing.
The OWASP TOP 10 takes the form of a 10-point list in which each entry is assigned a natural number from 1 to 10. The smaller the number, the more frequently the problem appears in real-world application implementations.
The OWASP TOP 10 was first published in 2003 and has had seven editions over 19 years (of which the 2003 and 2004 lists are identical – so there have actually been six). The list comes in very handy when planning as well as conducting penetration tests for web applications. The current edition of the OWASP TOP 10 was released for 2021. The list is available on a dedicated domain, but links can also be found on the foundation’s website.
The list is intended to make both software developers and testers aware of common security issues found in web applications.
In the words of the ancient Far Eastern general, thinker and philosopher Sun Tzu: “If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”
Threat awareness is one of the key elements to be able to fend off an attack or even deter an attacker (bear in mind that a penetration test is designed to simulate a real attack on our systems!), while protecting ourselves properly.
Since I have drifted off into military terms here, let me add that, in the literature, one can find a term for the approach to a cyber-attack (adopted from the US Army) under the name of Kill Chain, with the small difference that the keyword Cyber is added. That said, this topic is broad enough to merit a separate article.
Let’s go back to the 2021 OWASP TOP 10 list – across its subsequent editions, the evolution that web applications have undergone cannot go unnoticed. In 2021, three new categories emerged, four were renamed and changed in scope, and several risks were consolidated from previous versions of the document.
The OWASP TOP 10 not only helps to develop test strategies, define their scope, and design test cases, but also to understand the mechanisms governing web applications that contribute to risk from a software cyber security point of view. Making use of these tips helps to avoid errors, thus easing the work in the software development phase. In fact, many vulnerabilities, such as XSS (Cross-Site Scripting), are things that attackers, security researchers, and pentesters have been testing for, finding, and eventually exploiting (i.e., using the vulnerability in question) since the dawn of web applications, when scripting first entered their functional layers.
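To make the XSS example concrete, here is a minimal Python sketch (the `render_comment` helper is hypothetical, invented for illustration) showing how escaping untrusted output neutralizes a classic script-injection payload:

```python
import html

def render_comment(user_input: str) -> str:
    """Render untrusted user input inside an HTML page.

    Escaping special characters prevents the browser from
    interpreting the input as markup or script (reflected XSS).
    """
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

# A classic XSS probe is neutralized into inert text:
payload = '<script>alert("xss")</script>'
print(render_comment(payload))
# → <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Output escaping is only one layer; context-aware encoding and a Content Security Policy complement it in a real application.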
Nonetheless, the root cause of the problems that have lurked under the Injection heading on the OWASP list for the past 19 years, landing at the top of the charts, has been inadequate or entirely absent validation of input to web applications, combined with trusting a priori that the data provided by the user will be correct. It is crucial to understand, though, that these are not the only factors introducing this type of error into systems.
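As an illustration of that validation problem, a minimal sketch using Python’s built-in sqlite3 module (the table and helper names are invented for the example) contrasts naive string concatenation with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is concatenated into the SQL string.
    # Input like "' OR '1'='1" changes the meaning of the query.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats input strictly as data.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: injection succeeded
print(find_user_safe(payload))    # returns []: payload treated as a literal name
```

The same principle (binding data instead of building query strings) applies to any driver or ORM, not just sqlite3.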
The 2017 category XXE falls, in the 2021 list, under the term Security Misconfiguration. Let’s imagine you allow the import of entities in XML documents sent by the user. During the syntactic analysis of such an XML document, you may be exposed to an information leak on the server side (OWASP itself provides ready-made examples of how such an attack can be attempted). Using external entities, an attacker can read files on the victim’s operating system. Equally interesting is the denial of service (DoS) scenario, the so-called billion laughs attack, which can wreak havoc when it suddenly becomes apparent that resolving imported entities in a loop makes the resources required grow mercilessly at an exponential rate, consuming web server resources at lightning speed.
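One pragmatic defence, sketched below in Python (the `parse_untrusted_xml` wrapper is an assumption for illustration, not a standard API), is to reject any uploaded document that declares a DTD, since both XXE and the billion laughs attack depend on one:

```python
import xml.etree.ElementTree as ET

def parse_untrusted_xml(data: str) -> ET.Element:
    """Parse XML from an untrusted source, refusing any DTD.

    External entity resolution (XXE) and entity-expansion bombs
    both require a DOCTYPE declaration, so the simplest hardening
    is to reject documents that contain one before parsing.
    """
    if "<!DOCTYPE" in data or "<!ENTITY" in data:
        raise ValueError("DTDs are not allowed in uploaded XML")
    return ET.fromstring(data)

# A miniature entity bomb: each level multiplies the previous one.
bomb = """<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
]>
<lolz>&lol2;</lolz>"""

try:
    parse_untrusted_xml(bomb)
except ValueError as exc:
    print("rejected:", exc)
```

In production, a hardened parser (e.g., one configured to forbid DTD processing) is preferable to string matching, but the sketch conveys the idea.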
Let’s now take a look at the other threat model that also comes into play to support the development of a penetration test plan – STRIDE.
The STRIDE model was developed in the 1990s by two Microsoft engineers, Loren Kohnfelder and Praerit Garg. The mnemonic expands as follows: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.
Each of these threats is set against the desired security property a secure system should exhibit: authentication, integrity, non-repudiation, confidentiality, availability, and authorization, respectively.
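The standard STRIDE-to-property pairing can be turned into a mechanical checklist generator; the sketch below is illustrative (the helper function and component name are invented, only the mapping itself is canonical):

```python
# STRIDE threats mapped to the security property each one violates.
STRIDE = {
    "Spoofing": "authentication",
    "Tampering": "integrity",
    "Repudiation": "non-repudiation",
    "Information disclosure": "confidentiality",
    "Denial of service": "availability",
    "Elevation of privilege": "authorization",
}

def checklist(component: str) -> list:
    """Generate one review question per STRIDE category for a component."""
    return [
        f"{component}: can an attacker achieve {threat.lower()} "
        f"(violating {prop})?"
        for threat, prop in STRIDE.items()
    ]

for question in checklist("admin REST API"):
    print(question)
```

Iterating the six questions over every component of a system is, in essence, what a STRIDE-driven threat modeling session does.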
When testing a highly specialized network appliance for one of our customers, the OWASP TOP 10 helped us outline the parameters to be tested and further define the attack surface of the web interface.
With the STRIDE model, we identified further, equally security-relevant key elements that needed to be checked.
It was a device with monstrous switching capabilities, positioned in backbone networks. Such a component had to be tested for DoS and DDoS, and we had to verify whether there was a way to make the system under penetration test consume a disproportionate amount of resources while handling inconspicuous, single requests (the idea behind attacks such as the billion laughs attack or a DNS amplification attempt).
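The disproportion between request size and resource consumption is easy to quantify; a few lines of Python (the function name is illustrative) show why a kilobyte-sized entity-bomb document can demand gigabytes of memory:

```python
def expansion_size(levels: int, refs_per_level: int, payload_len: int) -> int:
    """Bytes produced when nested XML entities are fully resolved.

    Each level references the previous one `refs_per_level` times,
    so the final expansion grows exponentially:
    refs_per_level ** levels copies of the base payload.
    """
    return (refs_per_level ** levels) * payload_len

# The classic billion laughs: 9 levels of 10 references each,
# expanding the 3-byte string "lol".
print(expansion_size(9, 10, 3))  # → 3000000000 bytes from a ~1 KB document
```

The same asymmetry (tiny input, huge cost) is what DNS amplification exploits at the network layer.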
We focused our efforts on the search for weak cryptographic algorithms (covered by the I and R mnemonics) in SSH and SSL/TLS stacks. Problems encountered in such situations include errors in the Diffie-Hellman key exchange (e.g., Logjam-type vulnerabilities may appear in SSH or TLS).
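On the defensive side, a minimal Python sketch shows how a TLS client can refuse legacy protocol versions and the weak, non-forward-secret cipher constructions that attacks like Logjam abuse (the exact cipher string below is one reasonable choice for illustration, not a universal recommendation):

```python
import ssl

# Harden a client-side TLS context: refuse legacy protocol versions
# and weak key-exchange/cipher constructions (export-grade DH suites
# were the root cause of the Logjam attack).
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Keep only ECDHE suites with modern AEAD ciphers; explicitly
# exclude anonymous, export-grade, DES, RC4, and MD5 constructions.
context.set_ciphers(
    "ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5"
)

print(context.minimum_version)
```

During a test, the mirror image of this configuration is the interesting part: probing whether the server still accepts what the hardened client refuses.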
Spoofing. In developing this element, we looked for bugs and scenarios exposing confidential data, such as the private key of a particular web server. You can see the ‘overlap’ between the different threats here: an I (Information disclosure) issue can lead to S (Spoofing).
On top of that, we fuzzed selected network protocols offered by the device, addressing the T (Tampering) mnemonic.
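A mutation fuzzer can be as simple as the following Python sketch (the protocol frame shown is hypothetical): take a message the device accepts, corrupt a few bytes at a time, and observe how the parser reacts:

```python
import random

def mutate(message: bytes, flips: int = 4, seed=None) -> bytes:
    """Return a copy of a protocol message with random byte flips.

    A minimal mutation fuzzer: starting from a valid frame, corrupt
    a handful of bytes and watch for crashes, hangs, or anomalous
    error handling in the system under test. A seed makes a given
    mutation reproducible, which matters when triaging findings.
    """
    rng = random.Random(seed)
    data = bytearray(message)
    for _ in range(flips):
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)
    return bytes(data)

valid = b"\x01\x00\x00\x08PING\r\n"   # hypothetical protocol frame
for i in range(3):
    print(mutate(valid, seed=i).hex())
```

Real fuzzing campaigns add coverage feedback and grammar awareness, but even this blind approach has a long track record of shaking bugs out of protocol parsers.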
The last mnemonic, ‘E,’ was the source of test cases checking the authorization mechanisms. By creating users for the test cases, we verified that a given user could perform specific actions on the system under penetration test, and only those actions. In line with good practice (known as the principle of least privilege), users should be given minimal but sufficient privileges to perform their tasks on a given system.
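The idea behind those authorization test cases can be sketched as follows (the role model and action names are invented for illustration):

```python
# Hypothetical role model for authorization test cases: each role gets
# the minimum set of actions it needs (principle of least privilege).
PERMISSIONS = {
    "viewer": {"read"},
    "operator": {"read", "restart_service"},
    "admin": {"read", "restart_service", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Check whether a role is authorized to perform an action."""
    return action in PERMISSIONS.get(role, set())

# Authorization tests must cover the negative path as well as the
# positive one: a denied action that succeeds is the actual finding.
assert is_allowed("operator", "restart_service")
assert not is_allowed("operator", "manage_users")
assert not is_allowed("viewer", "restart_service")
print("least-privilege checks passed")
```

In a penetration test, the same matrix is exercised against the live system: log in as each role and attempt every action, expecting failures exactly where the matrix says “no.”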
The STRIDE model significantly helped and accelerated the overall assessment and gave us the right perspective in security test design. As testing practice reveals, this is as essential as the craft and skills of the test team in finding bugs.