One of the key services Runtime Verification provides to its clients is first-class smart contract analysis and verification. Audits with Runtime Verification take a verification-oriented approach, using mathematical models to prove the correct operation of the smart contract. Audits typically consist of a two-step approach: design modeling and code review. During design modeling, our auditors formally describe the system, review the supplied specifications for flaws in the logic, and state the essential properties (invariants) of the protocol. During the code review portion, the code undergoes a thorough analysis in which we search for mathematical and common coding errors and ensure that the code faithfully implements the design and that the identified invariants hold.
Upon completion of a smart contract audit, RV delivers an audit report. These reports are a valuable resource for developers: they guide adjustments to the smart contracts and can be shared with a project's audience to show that a thorough analysis of the contract(s) has taken place. Findings from the audit process are documented in the report as issues, where each issue is described and classified. The issue classification system is based on two axes: severity and difficulty. This page outlines what each ranking means and how it is used in our audit reports. In doing so, we want to give clients better tools for understanding their audit reports and give potential clients an idea of what they can expect from a Runtime Verification security audit report. In all cases, the RV auditor reserves the right to deviate from this classification system as needed.
The severity ranking on an audit report refers to how bad the consequences could be if the issue were exploited. This means that the effects of an exploit affect its severity, but who can perform the exploit does not. An audit report can tag an issue's severity as high, medium, low, or informative.
An issue can be tagged with high severity if it meets any of the following criteria:
An issue can be tagged with medium severity if it meets any of the following criteria:
An issue can be tagged with low severity if it meets any of the following criteria:
When deciding the classification of a given issue, our auditors will use the most severe classification if the issue fits multiple criteria listed above.
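This "take the most severe match" rule can be sketched in a few lines. The following is an illustrative sketch only (Python, with hypothetical names such as `Severity` and `classify` — it is not code from the report or from RV's tooling): ordering the levels numerically lets `max` pick the most severe classification.

```python
from enum import IntEnum

# Hypothetical sketch: severity levels ordered so that max() yields the
# most severe classification when an issue matches several criteria.
class Severity(IntEnum):
    INFORMATIVE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def classify(matched: list[Severity]) -> Severity:
    """Return the most severe classification among all matched criteria."""
    return max(matched)

# An issue matching both a low- and a medium-severity criterion is tagged medium.
print(classify([Severity.LOW, Severity.MEDIUM]).name)  # MEDIUM
```

The same ordering trick applies to any small, totally ordered rating scale.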
In addition to the severity ratings mentioned above, an audit will include information on purely informative issues. This means that, although such cases will not leave the contract vulnerable to attack, resolving such issues could result in better overall performance or readability.
An issue can be flagged as informative severity if it meets any of the following criteria:
The difficulty ranking on an audit report refers to how hard it is to accomplish a given exploit. Factors that increase difficulty include how expensive the attack is, who can perform it, and how much control over the system is needed to accomplish it. When analyzing how expensive an attack is, we must take flash loans into account, since they let an attacker temporarily borrow large amounts of capital within a single transaction for a small fee.
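To see why flash loans matter for the cost axis, consider the attacker's actual out-of-pocket cost. The sketch below is purely illustrative (the function name and the 9-basis-point fee are assumptions, not figures from the report): with a flash loan, the capital is borrowed and repaid in the same transaction, so the attacker pays only the loan fee.

```python
# Hypothetical illustration: a flash loan collapses an attack's capital
# requirement down to the loan fee, so attacks that look prohibitively
# expensive can still be cheap (and thus lower-difficulty) in practice.
def out_of_pocket_cost(capital_required: int, fee_bps: int) -> int:
    """Attacker's up-front cost when the capital is flash-loaned.

    fee_bps is the flash-loan fee in basis points (1 bps = 0.01%);
    the 9 bps used below is an assumed, illustrative rate.
    """
    return capital_required * fee_bps // 10_000

# Borrowing $10M at an assumed 9 bps fee costs the attacker only $9,000.
print(out_of_pocket_cost(10_000_000, 9))  # 9000
```

This is why an attack requiring large capital cannot automatically be rated high difficulty.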
If an attack fits multiple difficulty categories because of factor X (of difficulty D1) and factor Y (of difficulty D2), then we need to decide:
An issue can be flagged with high difficulty on an audit report if it meets any of the following criteria:
An issue can be flagged with medium difficulty if it meets any of the following criteria:
An issue can be flagged with low difficulty if it meets any of the following criteria:
Our auditors often include a recommended action in a security audit report. The recommended action classification helps emphasize to clients the nature of a vulnerability. The actions are ordered here from most expensive/time-consuming to least expensive/time-consuming for the client: “fix design” is more severe than “fix code,” which is more severe than “document prominently.”
A bug in the design means that even a correct implementation will lead to undesirable behavior. If a design flaw is discovered, there is little point in proceeding to code review until the design is updated and/or fixed to address the finding. Issues discovered in the design of a protocol are usually more severe than code issues, because it will be difficult (or impossible) to implement code correctly for a flawed design.
A bug in the code means that the implementation does not conform to the design, or that a common programming error has been discovered. If the design has already been reviewed and deemed safe and operational, the code should be fixed to conform to the design. In the case of programming errors, the error should be addressed directly. If working on the code reveals design problems, the issue should be escalated to “fix design.”
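The distinction between a sound design and a non-conforming implementation can be made concrete with a small hypothetical (Python for illustration; the 0.3% fee and both function names are invented for this example, not taken from any audited contract): the design is correct, but the code uses the wrong denominator.

```python
# Hypothetical "fix code" scenario: the design specifies a 0.3% fee, but the
# implementation divides by 100 instead of 1000 -- a code bug, because the
# design itself is sound and only the implementation deviates from it.
FEE_NUMERATOR = 3

def fee_buggy(amount: int) -> int:
    return amount * FEE_NUMERATOR // 100   # charges 3%, not the designed 0.3%

def fee_fixed(amount: int) -> int:
    return amount * FEE_NUMERATOR // 1000  # conforms to the design: 0.3%

print(fee_buggy(10_000), fee_fixed(10_000))  # 300 30
```

The recommended action here is to fix the code; the design document needs no change.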
In this case, the protocol design and code work as intended but may behave unintuitively or unsafely when paired with some other financial product (or integration). The client should then document, in material targeted at developers trying to integrate with the product, the assumptions this protocol makes about the other (and vice-versa). This is important for avoiding scenarios where each protocol separately works as intended, but there is a known way to combine them that is unsafe. If the integration problem can be avoided by adding checks to the code, the issue may be escalated to “fix code.” If the integrations can be made safe altogether by reworking the system, the issue may be escalated to “fix design.”
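The escalation from “document prominently” to “fix code” can be sketched as follows (an illustrative Python sketch with invented names; the 18-decimal assumption is a common real-world pitfall used here purely as an example): an integration assumption is enforced as a runtime check rather than merely stated in documentation.

```python
# Hypothetical sketch: an integration assumption (the paired token uses
# 18 decimals) enforced in code. Merely writing the assumption in the
# integration docs corresponds to "document prominently"; adding this
# check escalates the recommended action to "fix code".
def integrate(token_decimals: int) -> None:
    if token_decimals != 18:
        raise ValueError("integration assumes an 18-decimal token")
```

A reworked system that handles any token decimals natively would correspond to the “fix design” escalation instead.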
Inquiries and estimates for audits can be initiated via the contact form on our website. Once the form is submitted, Runtime Verification will reach out with additional information on how the audit can begin.