Smart Contract Audit Methodology

One of the key services Runtime Verification provides to clients is first-class smart contract analysis and verification. Audits with Runtime Verification take a verification-oriented approach, using mathematical models to prove the correct operation of the smart contract. An audit typically consists of two steps: design modeling and code review. During design modeling, our auditors formally describe the system, review the supplied specifications for flaws in the logic, and state the essential properties (invariants) of the protocol. During code review, the code undergoes a thorough analysis in which we search for mathematical and common coding errors and ensure that the code faithfully implements the design and that the identified invariants hold.

Upon completion of a smart contract audit, Runtime Verification delivers an audit report. The report is a valuable resource for developers making adjustments to the smart contracts, and for clients to share with their audience as evidence that a thorough analysis of the contract(s) has taken place. Findings from the audit process are documented in the report as issues, each of which is described and classified. The classification system is based on two axes: severity and difficulty. This page outlines what each ranking means and how it is used in our audit reports, so that clients have better tools to understand their reports and potential clients know what to expect from a Runtime Verification security audit report. In all cases, the RV auditor reserves the right to deviate from this classification system as needed.

Severity Ranking

The severity ranking on an audit report refers to how severe the consequences could be if the issue is exploited. The effects of an exploit determine its severity; who can perform the exploit does not (that is captured by the difficulty ranking). An issue's severity can be tagged as high, medium, low, or informative.

An issue can be tagged with high severity if it meets any of the below criteria:

  • Permanent deadlock of some or all protocol operations.
  • Loss of any non-trivial amount of user or protocol funds.
  • Core protocol properties do not hold.
  • Arbitrary minting of tokens by untrusted users.
  • DoS attacks that make the system (or any vital part of it) unusable.

An issue can be tagged with medium severity if it meets any of the below criteria:

  • Sensible or desirable properties of the protocol do not hold, but there are no known attack vectors as a result (the issue merely "looks risky").
  • Non-responsive or non-functional system is possible, but recovery of user funds can still be guaranteed.
  • Temporary loss of user funds, guaranteed to be recoverable via an external algorithmic mechanism like a treasury.
  • Loss of small amounts of user funds (e.g., bits of gas fees) that serve no protocol purpose.
  • Griefing attacks which make the system less pleasant to interact with and can potentially be used to promote a competitor.
  • System security relies on assumptions about externalities like "valid user input" or "working monitoring server."
  • Deployments are not verifiable, making phishing attacks possible.

An issue can be tagged with low severity if it meets any of the below criteria:

  • Slow processing of user transactions can lead to changed parameters at transaction execution time.
  • Function reverts on some inputs that it could safely handle.
  • Users receive fewer funds than expected in a purely mathematical model, but the bounds of this error are very small.
  • Users are not protected from obviously bad choices (e.g. trading into an asset with zero value).
  • The system accumulates unrecoverable dust (e.g., due to rounding errors).
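
As a concrete illustration of the last point, here is a minimal Python sketch (hypothetical, not drawn from any audited contract) of how integer division, as used on the EVM, leaves behind dust that belongs to no one:

```python
# Hypothetical example: splitting a fee among stakers with integer
# division. The remainder is "dust" that stays locked in the contract.

def split_fee(fee_wei: int, num_stakers: int) -> tuple[int, int]:
    """Return (share per staker, dust left behind in the contract)."""
    share = fee_wei // num_stakers          # integer division rounds down
    dust = fee_wei - share * num_stakers    # remainder cannot be distributed
    return share, dust

share, dust = split_fee(1_000_000_001, 3)
# share = 333_333_333 wei per staker; dust = 2 wei accumulate per split
```

Each individual remainder is tiny, but if no sweep mechanism exists, the dust accumulates forever, which is why this pattern is reported even at low severity.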

If an issue fits multiple of the criteria listed above, our auditors will use the most severe applicable classification.

In addition to the severity ratings mentioned above, an audit report will include purely informative issues. Although such issues do not leave the contract vulnerable to attack, resolving them can improve overall performance or readability.

An issue can be flagged with informative severity if it meets any of the below criteria:

  • Not following best coding practices. Examples include:
    • Missing input validation or state sanity checks
    • Code duplication
    • Bad code architecture
    • Unmatched interfaces or poor use of external interfaces
    • Use of outdated or known problematic toolchains (e.g., outdated compiler version)
    • Domain-specific code smells (e.g. not recycling storage slots on EVM)
  • Gas optimizations.
  • Non-intuitive or overly complicated behaviors (which may lead to users or auditors misunderstanding the code).
  • Lack of documentation or incorrect/inconsistent documentation.
  • Known undesired behaviors when the security model or assumptions do not hold.

Difficulty Ranking

The difficulty ranking on an audit report refers to how hard a given exploit is to carry out. Factors that increase difficulty include how expensive the attack is, who can perform it, and how much control over the system is needed to accomplish it. When analyzing the cost of an attack, we must take flash loans into account, since they can give any attacker temporary access to large amounts of capital.

If an attack fits multiple criteria here because of factors X (of difficulty D1) and Y (of difficulty D2), then we need to decide:

  • If both X and Y are necessary to make the attack happen, use the greater difficulty.
  • If only one of X and Y is necessary, use the lower difficulty.
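
The combination rule above can be sketched in a few lines of Python. The numeric encoding of the levels is our own assumption for illustration:

```python
# Sketch of the rule for combining two difficulty factors.
# Encoding low=1, medium=2, high=3 is an illustrative assumption.

DIFFICULTY = {"low": 1, "medium": 2, "high": 3}
LEVEL_NAME = {v: k for k, v in DIFFICULTY.items()}

def combined_difficulty(d1: str, d2: str, both_required: bool) -> str:
    """Combine the difficulties of two factors X and Y."""
    # Both factors needed -> the attack is as hard as the harder factor.
    # Either factor suffices -> the attacker takes the easier path.
    pick = max if both_required else min
    return LEVEL_NAME[pick(DIFFICULTY[d1], DIFFICULTY[d2])]

combined_difficulty("high", "medium", both_required=True)   # "high"
combined_difficulty("high", "medium", both_required=False)  # "medium"
```

The intuition: requirements that must all hold compound the attacker's burden, while independent alternative routes let the attacker choose the cheapest one.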

An issue can be flagged with high difficulty on an audit report if it meets any of the following criteria:

  • Only trusted authorized users can perform the attack (e.g., core devs).
  • Performing the attack costs significantly more than it yields (e.g., the attack costs 10x what is gained).
  • Performing the attack requires coordinating multiple transactions across different blocks and can be stopped if detected early enough.
  • Performing the attack requires control of the network to delay or censor given messages.
  • Performing the attack requires convincing users to participate (e.g., bribe the users).

An issue can be flagged with medium difficulty if it meets any of the following criteria:

  • Semi-authorized (or allowed) users can perform the attack (e.g. "special" nodes, like validators or staking operators).
  • Performing the attack costs close to how much you benefit (e.g., 0.5x - 2x).
  • Performing the attack requires coordinating multiple transactions across different blocks but cannot be stopped even if detected early.

An issue can be flagged with low difficulty if it meets any of the following criteria:

  • Anyone who can create an account on the network can perform the attack.
  • Performing the attack costs much less than how much you benefit (e.g., < 0.5x).
  • Performing the attack can happen within a single block or transaction (or transaction group).
  • Performing the attack requires access to only a modest amount of capital and a flash-loan system.
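The cost/benefit axis of the three lists above can be sketched as a simple classifier. The thresholds 0.5x and 2x come from the examples in the criteria; where the criteria leave a gap (ratios between 2x and roughly 10x), we default to "high" here purely as an illustrative assumption, and in practice the auditor's judgment applies:

```python
# Rough sketch of the cost/benefit axis of the difficulty ranking.
# Thresholds follow the examples in the criteria; treatment of the
# 2x-10x gap is an assumption for illustration only.

def cost_difficulty(attack_cost: float, attack_benefit: float) -> str:
    ratio = attack_cost / attack_benefit
    if ratio < 0.5:
        return "low"     # attack costs much less than it yields
    if ratio <= 2.0:
        return "medium"  # cost is close to the benefit
    return "high"        # cost significantly exceeds the benefit

cost_difficulty(1, 10)  # "low": spend 1 to gain 10
```

Note that the denominator must account for flash loans: an attack that nominally requires enormous capital may still be cheap if that capital can be borrowed and repaid within one transaction.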

Recommended Action

Our auditors often include a recommended action in a security audit report. The recommended action classification helps emphasize to clients the nature of a vulnerability. The actions are ordered here from most to least expensive and time-consuming for the client to address, so fix design is more severe than fix code, which is more severe than document prominently.

Fix Design

A bug in the design means that even correct implementation will lead to undesirable behavior. If a design flaw is discovered, there is little point in proceeding to code review until the design is updated and/or fixed to address the finding. Issues discovered in the design of a protocol are usually more severe than code issues, because it will be difficult (or impossible) to implement code correctly for a flawed design.

Fix Code

A bug in the code means that the implementation does not conform to the design, or that a common programming error has been discovered. If the design has already been reviewed and deemed safe and operational, the code should be fixed to be conformant with the design. In the case of programming errors, the error should be addressed directly. If working on the code reveals design problems, this should be escalated to fix design.

Document Prominently

The protocol design and code work as intended but may exhibit unintuitive or unsafe behavior when paired with some other financial product (or integration). In that case, the client should prominently state, in documentation targeted at developers trying to integrate with the product, the assumptions this protocol makes about the other (or vice versa). This is important for avoiding scenarios where each protocol separately works as intended, but a known way of combining them is unsafe. If the integration problem can be avoided by adding checks in the code, the issue may be escalated to “fix code.” If the integrations can be made safe altogether by reworking the system, the issue may be escalated to “fix design.”

Inquiries and estimates for audits can be initiated via the contact form on our website. Once the form is submitted, Runtime Verification will reach out with additional information on how the audit can begin.
