Smart Contract Analysis
One of the key services Runtime Verification provides to its clients is first-class smart contract analysis and verification. Audits with Runtime Verification take a verification-oriented approach, using mathematical models to prove the correct operation of the smart contract. Audits typically consist of two steps: design modeling and code review. During design modeling, our auditors formally describe the system, review the supplied specifications for flaws in the logic, and state the essential properties (invariants) of the protocol. During the code review, the code undergoes a thorough analysis in which we search for mathematical and common coding errors and ensure that the code faithfully implements the design and that the identified invariants hold.
Upon completion of a smart contract audit, RV delivers an audit report. Reports are a valuable resource for developers, both for making adjustments to the smart contracts and for sharing with their audience to show that a thorough analysis of the contract(s) has taken place. Findings from the audit process are documented in the report as issues, where each issue is described and classified. The classification system is based on two axes: severity and difficulty. This page explains what each ranking means and how it is used in our audit reports. In doing so, we want to give clients better tools to understand their audit reports and give potential clients an idea of what to expect from a Runtime Verification security audit report. In all cases, the RV auditor reserves the right to deviate from this classification system as needed.
The severity ranking on an audit report refers to how bad the consequences could be if the issue were exploited. This means that the effects of the exploit affect the severity, but who can perform the exploit does not. An audit report can tag an issue's severity as high, medium, low, or informative.
An issue can be tagged with high severity if it meets any of the below criteria -
- Permanent deadlock of some or all protocol operations.
- Loss of any non-trivial amount of user or protocol funds.
- Core protocol properties do not hold.
- Arbitrary minting of tokens by untrusted users.
- DoS (denial-of-service) attacks making the system (or any vital part of the system) unusable.
An issue can be tagged with medium severity if it meets any of the below criteria -
- Sensible or desirable properties of the protocol do not hold, but there are no known attack vectors exploiting this (a "looks risky" feeling).
- A non-responsive or non-functional system is possible, but recovery of user funds can still be guaranteed.
- Temporary loss of user funds, guaranteed to be recoverable via an external algorithmic mechanism like a treasury.
- Loss of small amounts of user funds (e.g., bits of gas fees) that serve no protocol purpose.
- Griefing attacks which make the system less pleasant to interact with and can potentially be used to promote a competitor.
- System security relies on assumptions about externalities like "valid user input" or "working monitoring server."
- Deployments are not verifiable, making phishing attacks possible.
An issue can be tagged with low severity if it meets any of the below criteria -
- Slow processing of user transactions can lead to changed parameters at transaction execution time.
- Function reverts on some inputs that it could safely handle.
- Users receive fewer funds than expected in a purely mathematical model, but the bounds of this error are very small.
- Users are not protected from obviously bad choices (e.g. trading into an asset with zero value).
- The system accumulates unrecoverable dust (e.g., due to rounding errors).
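To illustrate the last point, integer arithmetic of the kind smart contract platforms require truncates division, which can leave a small unclaimable remainder in the contract. A minimal Python sketch (the pool size and stakes are hypothetical numbers for illustration):

```python
# Hypothetical illustration: distributing a reward pool among stakers
# with truncating integer division, as EVM-style contracts must.
def distribute(pool: int, stakes: list[int]) -> tuple[list[int], int]:
    """Split `pool` proportionally to `stakes`; return payouts and dust."""
    total = sum(stakes)
    payouts = [pool * s // total for s in stakes]  # truncating division
    dust = pool - sum(payouts)  # remainder that no participant can claim
    return payouts, dust

payouts, dust = distribute(100, [3, 3, 3])
# Each staker receives 33, and 1 unit of dust is stranded in the contract.
```

Each such distribution strands at most a few base units, which is why this is ranked low severity rather than a loss of funds.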
When classifying a given issue, if it seems to fit multiple criteria listed above, our auditors will use the most severe applicable classification.
In addition to the severity ratings mentioned above, an audit will include information on purely informative issues. This means that, although such cases will not leave the contract vulnerable to attack, resolving such issues could result in better overall performance or readability.
An issue can be flagged with informative severity if it meets any of the below criteria -
- Not following best coding practices. Examples include:
- Missing input validation or state sanity checks
- Code duplication
- Bad code architecture
- Unmatched interfaces or poor use of external interfaces
- Use of outdated or known problematic toolchains (e.g., outdated compiler version)
- Domain-specific code smells (e.g. not recycling storage slots on EVM)
- Gas optimizations.
- Non-intuitive or overly complicated behaviors (which may lead to users or auditors misunderstanding the code).
- Lack of documentation or incorrect/inconsistent documentation.
- Known undesired behaviors when the security model or assumptions do not hold.
The difficulty ranking on an audit report refers to how hard it is to accomplish a given exploit. Difficulty increases with how expensive the attack is, who can perform the attack, and how much control is needed to accomplish it. When analyzing how expensive an attack is, we must account for flash loans, which can give an attacker temporary access to large amounts of capital within a single transaction.
If an attack fits multiple categories here because of factors X (of difficulty D1) and Y (of difficulty D2), then we need to decide:
- If both X and Y are necessary to make the attack happen, use the greater difficulty.
- If only one of X and Y is necessary, use the lower difficulty.
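The combination rule above can be sketched as a small decision procedure. A Python sketch (the numeric ordering of the levels is an assumption made only to compare them):

```python
# Sketch of the difficulty-combination rule described above.
# Levels ordered from easiest (low) to hardest (high) to exploit.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def combined_difficulty(d1: str, d2: str, both_required: bool) -> str:
    """If both factors are needed, the attack is as hard as the harder one;
    if either factor alone suffices, it is as easy as the easier one."""
    pick = max if both_required else min
    return pick(d1, d2, key=LEVELS.get)

combined_difficulty("low", "high", both_required=True)   # -> "high"
combined_difficulty("low", "high", both_required=False)  # -> "low"
```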
An issue can be flagged with high difficulty on an audit report if it meets any of the following criteria -
- Only trusted authorized users can perform the attack (e.g., core devs).
- Performing the attack costs significantly more than how much you benefit (e.g., the attack costs 10x what it wins).
- Performing the attack requires coordinating multiple transactions across different blocks and can be stopped if detected early enough.
- Performing the attack requires control of the network to delay or censor given messages.
- Performing the attack requires convincing users to participate (e.g., bribe the users).
An issue can be flagged with medium difficulty if it meets any of the following criteria -
- Semi-authorized (or allowed) users can perform the attack (e.g. "special" nodes, like validators or staking operators).
- Performing the attack costs close to how much you benefit (e.g., 0.5x - 2x).
- Performing the attack requires coordinating multiple transactions across different blocks but cannot be stopped even if detected early.
An issue can be flagged with low difficulty if it meets any of the following criteria -
- Anyone who can create an account on the network can perform the attack.
- Performing the attack costs much less than how much you benefit (e.g., < 0.5x).
- Performing the attack can happen within a single block or transaction (or transaction group).
- Performing the attack requires access to only a modest amount of capital and a flash-loan system.
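To illustrate the last point, a flash loan reduces an attacker's out-of-pocket cost to roughly the loan fee, because the capital is borrowed and repaid within one transaction. A Python sketch (the capital figure and the 0.09% fee rate are hypothetical numbers for illustration):

```python
# Hypothetical illustration: with a flash loan, the attacker's up-front
# cost is approximately the loan fee, not the capital itself.
def attacker_cost(capital_needed: float, flash_fee_rate: float) -> float:
    """Out-of-pocket cost when the required capital is flash-loaned."""
    return capital_needed * flash_fee_rate

# An attack needing 1,000,000 tokens of capital, at an assumed 0.09%
# flash-loan fee, costs the attacker only 900 tokens up front.
cost = attacker_cost(1_000_000, 0.0009)
```

This is why an attack that is profitable within a single transaction is ranked low difficulty even when it nominally requires a large amount of capital.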
Our auditors often include a recommended action in a security audit report. The recommended action classification helps emphasize to clients the nature of a vulnerability. The actions are ordered here from most to least expensive/time-consuming for the client, so fix design is more severe than fix code, which is more severe than document prominently.
A bug in the design means that even correct implementation will lead to undesirable behavior. If a design flaw is discovered, there is little point in proceeding to code review until the design is updated and/or fixed to address the finding. Issues discovered in the design of a protocol are usually more severe than code issues, because it will be difficult (or impossible) to implement code correctly for a flawed design.
A bug in the code means that the implementation does not conform to the design, or that a common programming error has been discovered. If the design has already been reviewed and deemed safe and operational, the code should be fixed to be conformant with the design. In the case of programming errors, the error should be addressed directly. If working on the code reveals design problems, this should be escalated to fix design.
A document prominently recommendation means that the protocol design and code work as intended but may have unintuitive or unsafe behavior when paired with some other financial product (or integration). In that case, the client should include in the documentation (targeted at developers trying to integrate with the product) the assumptions this protocol makes of the other (or vice versa). This is important for avoiding scenarios where each protocol separately works as intended, but there is a known way to combine them that is unsafe. If the integration problem can be avoided by adding some checks in the code, the issue may be escalated to “fix code”. If the integrations can be made safe altogether by reworking the system a bit, the issue may be escalated to “fix design”.
Inquiries and estimates for audits can be initiated via the contact form on our website. Once the form is submitted, Runtime Verification will reach out with additional information on how the audit can begin.
Smart contract review ("audit") and formal verification team members
Musab A. Alturki is a Senior Research Engineer working on formally specifying and verifying blockchain and distributed systems. Before joining Runtime Verification, he was a Visiting Research Scholar at the University of Pennsylvania from 2017-2018, working on formal modeling and analysis of cyber-physical systems security. Before that, between 2011-2017, he was an Assistant Professor in Computer Science at King Fahd University of Petroleum and Minerals (KFUPM), Dhahran, Saudi Arabia. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2011. His research interests are in the applications of formal methods to distributed systems. He was a recipient of the prestigious King Abdullah Scholar Award for scientific excellence in 2008.
Formal Verification Engineer
Aellison Cassimiro received his M.Sc. degree from the Federal University of Paraíba, Brazil, where he applied formal methods to the NP-complete time-sensitive traffic scheduling problem. Previously, he worked as a software analyst and researcher in various fields, including computer and network security, IoT systems, networks, finance, and monetary fraud detection/prevention. Currently, he is interested in tackling challenging and thought-provoking problems, which led him to join the RV team.
Juan Conejero received his B.Sc. in Mathematics from the University of València and his M.Sc. in Pure and Applied Logic from the University of Barcelona. After finishing his studies, he worked on bringing formal verification to the road transport regulation sector and on creating verified programs that automatically check for legal compliance, using Coq. His interests include logic, the formalization of mathematics, programming language theory, and program verification.
Rikard Hjort is researching formal methods for developing high-assurance smart contracts for blockchains. He worked on formalizing WebAssembly in K, and now uses KWasm to verify smart contracts written in WebAssembly. He has an M.Sc. in Computer Science from the Chalmers University of Technology, Sweden. He was an intern at Google in 2016 and 2017, and studied at the University of Tokyo in 2017-2018 where he combined his research on blockchains with studying coercion-resistant voting protocols.
Qian Hu received her M.Sc. and Ph.D. in Computer Science from McMaster University, where she worked on interactive theorem proving, deontic logic, and formal contracts. Her broader interests include logic, type theory, semantics of programming languages, software and system safety, and the use of formal methods in software engineering. Prior to joining RV, Qian was the co-founder and COO of Trinity Studio in China.
David Kretzmer is generally interested in programming languages and how their design impacts the practicality of formal verification. He received his BSc and MSc in Computer Science from TU Darmstadt in Germany. In his master's thesis, he formalized an executable semantics of Dalvik bytecode in the proof assistant Isabelle/HOL and extensively tested it against an existing Dalvik bytecode runtime to ensure its faithfulness.
Formal Verification Engineer
Yan Liu works for RV from Australia. She received her Ph.D. from the National University of Singapore and worked at ADSC, Singapore/UIUC as a post-doc. She has broad interests in logic and programming languages and is keen on exploring applications of formal methods to real-world problems. Prior to joining Runtime Verification, she worked in the confidential computing team of AntChain and the Data Foundation team of Ant Group.
Georgy Lukyanov is interested in functional programming and formal verification. Prior to joining RV, Georgy was a PhD student at Newcastle University, UK, working on verifying spacecraft control programs by symbolic execution. Georgy is passionate about formal software design and verification, and believes that software engineering can benefit a lot from drawing inspiration from logic and mathematics.
Sr. Verification Consultant
Petar Maksimović received his Ph.D. in Theoretical Computer Science from Inria Sophia Antipolis. His expertise lies in the design, implementation, and real-world application of symbolic analysis tools, as well as the formalization and mechanization of programming-language semantics and their properties. He is passionate about bringing formal methods into the industrial software design and development lifecycle. Petar is also a Research Fellow at Imperial College London.
Tolga Ovatman received his Ph.D. from the Istanbul Technical University (ITU) Department of Computer Engineering. During his Ph.D., he worked on formal methods and model checking as a visiting researcher at the Technical University of Denmark. After his Ph.D., Tolga started working as a lecturer in the ITU Faculty of Computer and Informatics. During his academic career, he applied formal methods and model checking in many different industrial projects, including railway interlocking software and embedded TV software. He advocates that formal methods should be an integral part of software quality assurance efforts.
Raoul Schaffranek studied Computer Science at RWTH Aachen University, Germany, where he obtained B.Sc. and M.Sc. degrees. He wrote his master thesis about compositional modeling and fully automated verification of distributed systems and formalized his findings with the Isabelle proof assistant. Before joining RV, Raoul worked for more than eight years as a software engineer for the German software company graphodata AG. Raoul considers programming a human-centric rather than machine-centric activity, and he firmly believes that we should build modern programming languages and verification tools upon this perspective.
Virgil Nicolae Șerbănuță received a Ph.D., an M.S., and a B.S. from the University of Bucharest, Romania. Previously, he participated in ACM programming competitions (1st place in Southeastern Europe, 15th place worldwide) and National Olympiads of Informatics (2nd and 3rd place). He briefly joined the University of Bucharest as faculty, but he liked developing software used by millions/billions of people more than writing papers. So he joined Google Zurich, Switzerland, where he spent more than 10 years working on Google Maps and Google Timeline before joining RV, where he develops formal software analysis tools. He enjoys playing Go and solving hard problems.
Formal Verification Engineer
Lisandra Silva finished her M.Sc. in Informatics Engineering at the University of Minho in the areas of formal methods and cryptography. During an internship at Oracle Labs in 2019, she contributed to modeling distributed-systems protocols and proving their safety conditions, and developed a framework to prove liveness properties in Agda. A former physiotherapist who practiced for three years, Lisandra returned to her passion for mathematics. Her professional interests include the use of mathematical and logical reasoning to model and formally verify complex systems, protocols, and/or programs.
Stephen Skeirik's research interests include programming languages, formal logic, and program verification. He aims to ground system design on practical logical foundations, enabling formal verification to scale to real-world challenges. He obtained his Bachelor's degree in Computer Science from the University of Tennessee Knoxville and his Ph.D. from the University of Illinois at Urbana-Champaign.
Lucas Martinelli Tabajara received his Ph.D. from Rice University, where he worked on temporal logic, automata, and synthesis from formal specifications. His broader interests include formal verification, program synthesis, type theory and programming languages. He believes that formal verification should be an integral part of software development and that we need a tighter integration between formal tools and programming languages and environments.
Hao Xiao is the Engineering Manager of Runtime Verification APAC office in Singapore. He is broadly interested in applying program analyses and verification techniques to software security problems. He worked as an engineering lead in Veracode to develop an automated program vulnerability remediation solution. He also worked on academic projects to detect and remediate vulnerabilities of smart contracts and Java and C programs. He obtained his Ph.D. from Nanyang Technological University and B.S. and M.S. from East China Normal University, all specialized in Computer Science.
Burak Yalçınkaya received his M.Sc. in Computer Science from the Chalmers University of Technology, where he studied programming languages and formal methods. In his M.Sc. thesis, he worked on executable formal semantics and fuzzing using the K framework. Burak is a fan of functional programming and has development experience in various languages from C to Haskell. Previously, he worked as a back-end developer at a start-up. He enjoys competitive programming, generative art, and photography.
Formal Verification Engineer
Ilja Zakharov's mission is to close the gap between formal methods and users seeking the highest security and reliability guarantees for their products. He received B.S. and M.S. degrees in applied mathematics and physics from the Moscow Institute of Physics and Technology. He received his Ph.D. from the Ivannikov Institute for System Programming of the Russian Academy of Sciences, where he worked on the application of software verification tools. Prior to joining RV, Ilja spent ten years creating and applying formal verification frameworks to operating system components such as device drivers and file systems.
Yi Zhang is interested in runtime verification and programming languages. He worked on DARPA HACMS, where his mission was to generate efficient monitors for cyber physical systems. He worked as a summer intern at Google in 2016 and 2017. In 2016, he joined the Laser team at Google and worked on Protocol Buffer instrumentation. In 2017, he joined the YouTube team and used TensorFlow to cluster regression testing reports. His goal at RV is to formalize the semantics of programming languages and virtual machines for smart contracts to execute on the blockchain, and use such semantics to verify smart contracts. In addition to his full time work at RV, Yi is wrapping up his Ph.D. at UIUC in formal semantics-based program verification.