Copyright © 2020-2023 Enterprise Ethereum Alliance.
This document defines the requirements for EEA EthTrust Certification, a set of certifications that a smart contract has been reviewed and found not to have a defined set of security vulnerabilities.
This section describes the status of this document at the time of its publication. Newer documents may supersede this document.
This document is an EEA Specification, published by the Enterprise Ethereum Alliance, Inc.
This specification is licensed by the Enterprise Ethereum Alliance, Inc. (EEA) under the terms of the Apache License, Version 2.0 [License]. Unless otherwise explicitly authorised in writing by the EEA, you can only use this specification in accordance with those terms.
Unless required by applicable law or agreed to in writing, this specification is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
This is the second version of the EEA EthTrust Security Levels Specification. This specification has been reviewed and approved for publishing by the EEA EthTrust Security Levels Working Group, and the EEA Board.
This version supersedes version 1 of this Specification.
Please send any comments other than vulnerability notifications to the EEA through https://entethalliance.org/contact/, or as issues via the EthTrust-public GitHub Repository. To notify EEA of vulnerabilities, please follow the procedures outlined in § 1.4 Feedback and new vulnerabilities.
The Working Group expects at the time of publication to publish the next version of the Specification in 2025.
This document is the second version of the EEA EthTrust Security Levels Specification, that defines the requirements for granting EEA EthTrust Certification to a smart contract written in Solidity.
This version supersedes the first version of this specification, the EEA EthTrust Security Levels Specification version 1 [EthTrust-sl-v1].
EEA EthTrust Certification is a claim by a security reviewer that the Tested Code is not vulnerable to a number of known attacks or failures to operate as expected, based on the reviewer's assessment against those specific requirements.
No amount of security review can guarantee that a smart contract is secure against all possible vulnerabilities, as explained in § 3. Security Considerations. However reviewing a smart contract according to the requirements in this specification provides assurance that it is not vulnerable to a known set of potential attacks.
This assurance is backed not only by the reputation of the reviewer, but by the collective reputations of the multiple experts in security from many competing organizations, who collaborated within the EEA to ensure this specification defines protections against a real and significant set of known vulnerabilities.
This section is non-normative.
This section describes how to understand this specification, including the conventions used for examples and requirements, core concepts, references, informative sections, etc.
Broadly, the document is structured as follows:
This specification is accompanied by a Checklist, that lists the requirements in a handy table. That checklist can be used to help developers or reviewers familiar with the specification to quickly remind themselves of each individual requirement and track whether they have tested it. In case of any discrepancy, the normative text is in this specification document.
The structure and formatting of requirements is described in detail in § 1.1.3 How to Read a Requirement.
Examples are given in some places. These are not requirements and are not normative. They are distinguished by a background with a border and generally a title, like so:
Some examples are given of vulnerable code, or what NOT to do. It is a very bad idea to copy such examples into production code. These are marked as warnings:
Definitions of terms are formatted Like This and subsequent references to defined terms are rendered as links Like This.
References to other documents are links to the relevant entry in the § B. References, within square brackets: [CWE].
Links to requirements begin with a Security Level: [S], [M] or [Q], and links to § 4.4 Recommended Good Practices begin with [GP]. They then include the requirement or good practice name. They are rendered as links in bold type:
Example of a link to [M] Document Special Code Use.
Variables, introduced to be described further on in a statement or requirement, are formatted as var.
Occasional explanatory notes, presented as follows, are not normative and do not specify formal requirements.
The core of this document is the requirements, that collectively define EEA EthTrust Certification.
Requirements have a Security Level, a name, a URL (linked from the " 🔗 " character), and a statement of what the Tested Code MUST or MUST NOT do.
Some requirements at the same Security Level are grouped in a subsection, because they are related to a particular theme or area of potential attacks.
Requirements are followed by explanation, that can include why the requirement is important, how to test for it, links to Overriding Requirements and Related Requirements, test cases, and links to other useful information.
As well as Requirements, this document includes some § 4.4 Recommended Good Practices, that are formatted similarly with an apparent Security Level of "[GP]". It is not necessary to implement these in order to conform to the specification, but if carefully implemented they can improve the security of smart contracts.
The following requirement is a Security Level [S] requirement, denoted by the "[S]" before its name. Its name is Compiler Bug SOL-2022-5 with .push(). Its URL in this version 2 of the specification, as linked from the " 🔗 " character, is https://entethalliance.org/specs/ethtrust-sl/v2/#req-1-compiler-SOL-2022-5-push. The statement of the requirement is:

Tested code that copies bytes arrays from calldata or memory whose size is not a multiple of 32 bytes, and has an empty .push() instruction that writes to the resulting array, MUST NOT use a Solidity compiler version older than 0.8.15.
Following the requirement is a brief explanation of the relevant vulnerability, and links to further information.
Good Practices are formatted the same way as Requirements, with an apparent level of [GP]. However, as explained in § 4.4 Recommended Good Practices meeting them is not necessary and does not in itself change conformance to this specification.
For some requirements, the statement will include an alternative condition, introduced with the keyword unless, that identifies one or more Overriding Requirements. These are requirements at a higher Security Level, that can be satisfied to achieve conformance if the Tested Code does not meet the lower-level requirement as stated. In some cases it is necessary to meet more than one Overriding Requirement to meet the requirement they override. In this case, the requirements are described as a Set of Overriding Requirements. It is necessary to meet all the requirements in a Set of Overriding Requirements in order to meet the requirement that is overridden.
In a number of cases, there will be more than one Overriding Requirement or Set of Overriding Requirements that can be met in order to satisfy a given requirement. For example, it is sometimes possible to meet a Security Level [S] Requirement either by directly fulfilling it, or by meeting a Set of Overriding Requirements at Security Level [M], or by meeting a Set of Overriding Requirements at Security Level [Q].
Overriding Requirements enable simpler testing for common simple cases. For more complex Tested Code, that uses features which need to be handled with extra care to avoid introducing vulnerabilities, they ensure such usage is appropriately checked.
In a typical case of an Overriding Requirement for a Security Level [S] requirement, they apply in relatively unusual cases or where automated systems are generally unable to verify that Tested Code meets the requirement. Further verification of the applicable Overriding Requirement(s) can determine that the Tested Code is using a feature appropriately, and therefore passes the Security Level [S] requirement.
If there is not an Overriding Requirement for a requirement that the Tested code does not meet, the Tested code is not eligible for EEA EthTrust Certification. However, even for such cases, note the Recommended Good Practice [GP] Meet as Many Requirements as Possible; meeting any requirements in this specification will improve the security of smart contracts.
In the following requirement, the statement names the Overriding Requirement "[Q] Verify tx.origin Usage". The requirement that the tested code does not contain a tx.origin instruction is automatically verifiable. Tested Code that does have a valid use for tx.origin, as decided by the auditor, and that meets the Security Level [Q] Overriding Requirement [Q] Verify tx.origin Usage, conforms to this Security Level [S] requirement.
Requirements that are an Overriding Requirement for another, or are part of a Set of Overriding Requirements, explicitly mention that:
This section is non-normative.
A number of smart contracts that power decentralized applications on Ethereum have been found to contain security issues, and today it is often difficult or impossible in practice to see how secure an address or contract is before initiating a transaction. The DeFi space in particular has exploded with a flurry of activity, with individuals and organizations approving transactions in token contracts, swapping tokens, and adding liquidity to pools in quick succession, sometimes without stopping to check security. For Ethereum to be trusted as a transaction layer, enterprises storing critical data or financial institutions moving large amounts of capital need a clear signal that a contract has had appropriate security audits.
Reviewing early, in particular before production deployment, is especially important in the context of blockchain development because the costs in time, effort, funds, and/or credibility, of attempting to update or patch a smart contract after deployment are generally much higher than in other software development contexts.
This smart contract security standard is designed to increase confidence in the quality of security audits for smart contracts, and thus to raise trust in the Ethereum ecosystem as a global settlement layer for all types of transactions across all types of industry sectors, for the benefit of the entire Ethereum ecosystem.
Certification also provides value to the actual or potential users of a smart contract, and others who could be affected by the use or abuse of a particular smart contract but are not themselves direct users. By limiting exposure to certain known weaknesses through EEA EthTrust Certification, these stakeholders benefit from reduced risk and increased confidence in the security of assets held in or managed by the Tested Code.
This assurance is not complete; for example it relies on the competence and integrity of the auditor issuing the certification. That is generally not completely knowable. Professional reputations can change based on subsequent performance of Tested Code. This is especially so if the Tested Code becomes sufficiently high-profile to motivate exploitation of any known weaknesses remaining after certification.
Finally, smart contract developers and ecosystem stakeholders receive value when others (including direct competitors) complete the certification process, because it means those other contracts are less likely to generate exploitation-related headlines which can lead to negative perceptions of Ethereum technology as insecure or high risk, by the general public including business leaders, prospective customers/users, regulators, and investors.
The value of smart contract security certification is in some ways analogous to the certification processes applicable to aircraft parts. Most directly, it helps reduce risks for part manufacturers and the integrators who use those parts as components of a more complex structure, by providing assurance of a minimum level of quality. Less directly, these processes significantly reduce aviation accidents and crashes, saving lives and earning the trust of both regulators and customers who consider the safety and risk of the industry and its supporting technology as a whole. Many safety certification processes began as voluntary procedures created by a manufacturer, or specified and required by a consortium of customers representing a significant fraction of the total market. Having proven their value, some of these certification processes are now required by law, to protect the public (including ground-based bystanders).
We hope the value of the certification process motivates frequent use, and furthers development of automated tools that can make the evaluation process easier and cheaper.
As new security vulnerabilities, issues in this specification, and challenges in implementation are discovered, we hope they will lead to both change requests and increased participation in the Enterprise Ethereum Alliance's EthTrust Security Levels Working Group or its successors, responsible for developing and maintaining this specification.
This section is non-normative.
Security issues that this specification calls for checking are not necessarily obvious to smart contract developers, especially relative newcomers in a quickly growing field.
By walking their own code through the certification process, even if no prospective customer requires it, a smart contract developer can discover ways their code is vulnerable to known weaknesses and fix that code prior to deployment.
Developers ought to make their code as secure as possible. Instead of aiming to fulfil only the requirements to conform at a particular Security Level, ensuring that code implements as many requirements of this specification as possible, per [GP] Meet as Many Requirements as Possible, helps ensure the developer has considered all the vulnerabilities this specification addresses.
Aside from the obvious reputational benefit, developers will learn from this process, improving their understanding of potential weaknesses and thus their ability to avoid them completely in their own work.
For an organization developing and deploying smart contracts, this process reduces the amount of work required for security reviews, and risks both to their credibility, and to their assets and other capital.
The Working Group seeks feedback on this specification: Implementation experience, suggestions to improve clarity, or questions if a particular section or requirement is difficult to understand.
We also explicitly want feedback about the use of a standard machine-readable format for Valid Conformance Claims, whether being suitable for storing on a blockchain is important for such a format, and about other use cases for it.
EEA members are encouraged to provide feedback through joining the Working Group. Anyone can also provide feedback through the EthTrust-public GitHub Repository (https://github.com/EntEthAlliance/eta-registry/issues/), or via EEA's contact pages at https://entethalliance.org/contact/ and it will be forwarded to the Working Group as appropriate.
We expect that new vulnerabilities will be discovered after this specification is published. To ensure that we consider them for inclusion in a revised version, we welcome notification of them. EEA has created a specific email address to let us know about new security vulnerabilities: [email protected]. Information sent to this address SHOULD be sufficient to identify and rectify the problem described, and SHOULD include references to other discussions of the problem. It will be assessed by EEA staff, and then forwarded to the Working Group to address the issue.
When these vulnerabilities affect the Solidity compiler, or suggest modifications to the compiler that would help mitigate the problem, the Solidity Development community SHOULD be notified, as described in [solidity-reports].
The key words MAY, MUST, MUST NOT, RECOMMENDED, and SHOULD in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
This specification defines a number of requirements. As described in § 1.1.3 How to Read a Requirement, each requirement has a Security Level ([S], [M] or [Q]), and a statement of the requirement that Tested Code MUST meet.
In order to achieve EEA EthTrust Certification at a specific Security Level, the Tested Code MUST meet all the requirements for that Security Level, including all the requirements for lower Security Levels. Some requirements can either be met directly, or by meeting one or more Overriding Requirements that mean the requirement is considered met.
This document does not create an affirmative duty of compliance on any party, though requirements to comply with it could be created by contract negotiations or other processes with prospective customers or investors.
Section § 4.4 Recommended Good Practices, contains further recommendations. Although they are formatted similarly to requirements, they begin with a "level" marker [GP]. There is no requirement to test for these; however careful implementation and testing is RECOMMENDED.
Note that good implementation of the § 4.4 Recommended Good Practices can enhance security, but in some cases incomplete or low-quality implementation could reduce security.
To grant Tested Code EEA EthTrust Certification, an auditor provides a Valid Conformance Claim, that the Tested Code meets the requirements of the Security Level for which it is certified.
There is no required format for a Valid Conformance Claim for this version of this specification, beyond being legible and containing the required information as specified in this section.
A Valid Conformance Claim MUST include:
A Valid Conformance Claim for Security Level [Q] MUST contain a [SHA3-256] hash of the documentation provided to meet [Q] Document Contract Logic and [Q] Document System Architecture.
A Valid Conformance Claim SHOULD include:
A Valid Conformance Claim MAY include:
Valid values of EVM versions are those listed in the Solidity documentation [EVM-version]. As of November 2023 the two most recent are shanghai and paris.
This section is non-normative.
This version of the specification does not place any restrictions on who can perform an audit and provide EEA EthTrust Certification. There is no certification process defined for the auditors or tools that grant certification. This means that auditors' claims of performing accurate tests are made by themselves. There is always a possibility of fraud, misrepresentation, or incompetence on the part of any auditor who offers "EEA EthTrust certification" for this version.
In principle anyone can submit a smart contract for verification. However submitters need to be aware of any restrictions on usage arising from copyright conditions or the like. In addition, meeting certain requirements can be more difficult to demonstrate in a situation of limited control over the development of the smart contract.
The Working Group expects its own members, who wrote the specification, to behave to a high standard of integrity and to know the specification well, and notes that there are many others who also do so.
The Working Group or EEA MAY seek to develop an auditor certification program for subsequent versions of the EEA EthTrust Security Levels Specification.
An EEA EthTrust evaluation is performed on Tested Code, which means the Solidity source code for a smart contract or several related smart contracts, along with the bytecode generated by compiling the code with specified parameters.
If the Tested Code is divided into more than one smart contract, each deployable at a single address, it is referred to as a Set of Contracts.
This section is non-normative.
Security of information systems is a major field of work. There are risks inherent in any system of even moderate complexity.
This specification describes testing for security problems in Ethereum smart contracts. However there is no such thing as perfect security. EEA EthTrust certification means that at least a defined minimum set of checks has been performed on a smart contract. This does not mean the Tested Code definitely has no security vulnerabilities. From time to time new security vulnerabilities are identified. Manual auditing procedures require skill and judgement. This means there is always a possibility that a vulnerability is not noticed in review.
Ethereum is based on a model of account holders authorising transactions between accounts. It is very difficult to stop a malicious actor with a privileged key from using that to cause undesirable or otherwise bad outcomes.
Likewise, in practice users often interact with smart contracts through a "Ðapp" or "distributed app". Web Application Security is its own extensive area of research and development, beyond the scope of this specification.
Smart contracts in Ethereum are immutable by default. However, for some scenarios, it is desirable to modify them, for example to add new features or fix bugs. An Upgradable Contract is any type of contract that fulfills these needs by enabling changes to the code executed via calls to a fixed address.
Some common patterns for Upgradable Contracts use a Proxy Contract: a simple wrapper that users interact with directly that is in charge of forwarding transactions to and from another contract (called the Execution Contract in this document, but also known as a Logic Contract), which contains the code that actually implements the Smart Contract's behaviour.
The Execution Contract can be replaced while the Proxy Contract, acting as the access point, is never changed. Both contracts are still immutable in the sense that their code cannot be changed, but one Execution Contract can be swapped out with another. The Proxy Contract can thus point to a different implementation and in doing so, the software is "upgraded".
This means that a Set of Contracts that follow this pattern to make an Upgradable Contract generally cannot be considered immutable, as the Proxy Contract itself could redirect calls to a new Execution Contract, which could be insecure or malicious. By meeting the requirements for access control in this specification to restrict upgrade capabilities enabling new Execution Contracts to be deployed, and by documenting upgrade patterns and following that documentation per [Q] Implement as Documented, deployers of Tested Code can demonstrate reliability. In general, EthTrust certification of a Proxy Contract does not apply to the internal logic of an Upgradable Contract, so a new Execution Contract needs to be certified before upgrading to it through the Proxy Contract.
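The following non-normative sketch illustrates the Proxy Contract pattern described above. The contract and function names are illustrative, and for simplicity it keeps state in ordinary storage slots; production proxies typically use dedicated storage slots (as in EIP-1967-style patterns) to avoid storage collisions with the Execution Contract, and apply stricter access control on upgrades.

    // Illustrative sketch only: NOT a production proxy implementation.
    pragma solidity ^0.8.20;

    contract MinimalProxy {
        address public implementation; // the current Execution Contract
        address public admin;          // the account allowed to upgrade

        constructor(address _implementation) {
            implementation = _implementation;
            admin = msg.sender;
        }

        // Restricting this function is the access control on upgrade
        // capabilities that this section refers to.
        function upgradeTo(address newImplementation) external {
            require(msg.sender == admin, "not authorized");
            implementation = newImplementation;
        }

        // Forward all other calls to the Execution Contract, which runs
        // with this contract's storage, balance, and address.
        fallback() external payable {
            address impl = implementation;
            assembly {
                calldatacopy(0, 0, calldatasize())
                let ok := delegatecall(gas(), impl, 0, calldatasize(), 0, 0)
                returndatacopy(0, 0, returndatasize())
                switch ok
                case 0 { revert(0, returndatasize()) }
                default { return(0, returndatasize()) }
            }
        }
    }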
There are several possible variations on this core structure, for example having a Set of Contracts that includes multiple Execution Contracts. In the attack known as a Metamorphic Upgrade, a series of Smart Contracts are used to convince people (e.g. voters in a DAO) to approve a certain piece of code for deployment, but one of the proxy contracts in the chain is updated to deploy different, malicious, code.
Other patterns rely on using the CREATE2 instruction to deploy a Smart Contract at a known address. It is currently possible to remove the code at that address using the selfdestruct() method, and then deploy new code to that address. This possibility is sometimes used to save Gas Fees, but it is also used in a Metamorphic Upgrade attack.
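The following non-normative sketch shows why CREATE2 gives a known address: the deployment address is computed from the deployer, a salt, and the hash of the creation code, not from the runtime code that is ultimately deployed. A Metamorphic Upgrade exploits this by reusing identical creation code that fetches different runtime code each time. The function name is illustrative.

    pragma solidity ^0.8.0;

    // Predicts the address CREATE2 will deploy to. Note that only the
    // *creation* code is hashed; after selfdestruct(), different runtime
    // code can reappear at the same address if these inputs are unchanged.
    function predictCreate2Address(
        address deployer,
        bytes32 salt,
        bytes memory creationCode
    ) pure returns (address) {
        return address(uint160(uint256(keccak256(abi.encodePacked(
            bytes1(0xff), deployer, salt, keccak256(creationCode)
        )))));
    }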
A common feature of Ethereum networks is the use of Oracles: functions that can provide information sourced from on-chain or off-chain data. Oracles solve a range of problems, from providing random number generation to asset data, managing the operation of liquidity pools, and enabling access to weather, sports, or other special-interest information. Oracles are used heavily in DeFi and gaming, where asset data and randomization are central to protocol design.
This specification contains requirements to check that smart contracts are sufficiently robust to deal appropriately with whatever information is returned, including the possibility of malformed data that can be deliberately crafted for oracle-specific attacks.
While some aspects of Oracles are within the scope of this specification, it is still possible that an Oracle provides misinformation or even actively produces harmful disinformation.
The two key considerations are the risk of corrupted or manipulated data, and the risk of oracle failure. Vulnerabilities related to these considerations - excessive reliance on TWAP, and unsafe management of oracle failure - have occurred repeatedly leading to the loss of millions of dollars of value on various DeFi protocols.
While many high-quality and trusted Oracles are available, it is possible to suffer an attack even with legitimate data. When calling on an Oracle, data received needs to be checked for staleness to avoid Front-running attacks. Even in non-DeFi scenarios, such as a source of randomness, it is often important to reset the data source for each transaction, to avoid arbitrage on the next transaction.
A common strategy for pricing Oracles is to provide a time-weighted average price (known as TWAP). This provides some level of security against sudden spikes such as those created by a Flashloan attack, but at the cost of providing stale information.
It is important to choose time windows carefully: when a time window is too wide, it won't reflect volatile asset prices, leaking opportunities to arbitrageurs. However the "instantaneous" price of an asset is often not a good data point: It is the most manipulable piece of Oracle data, and in any event it will always be stale by the time a transaction is executed.
Oracles that collate a wide variety of source data, clean outliers from their data, and are well-regarded by the community, are more likely to be reliable. If an Oracle is off-chain, whether it reflects stale on-chain data, or reliable and accurate data that is truly off-chain, is an important consideration.
Even an Oracle using a well-chosen TWAP can enable a liquidity pool or other DeFi structure to be manipulated, especially by taking advantage of flashloans and flashswaps to cheaply raise funds. If an asset targeted for manipulation has insufficient liquidity this can render it vulnerable to large price swings by an attacker holding only a relatively small amount of liquidity.
The second important consideration when using Oracles is that of a graceful failure scenario. What happens if an Oracle no longer returns data, or suddenly returns an unlikely value? At least one protocol has suffered losses due to 'hanging' on a minimum value in the rare event of a price crash rather than truly dropping to zero, with traders who accumulated large amounts of a near zero-priced asset able to sell it back to the protocol. Hardcoding a minimum or maximum value can lead to problems reflecting reality.
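The following non-normative sketch shows one way to fail gracefully, assuming a Chainlink-style AggregatorV3Interface price feed; the contract name and the MAX_AGE threshold are illustrative and need to be chosen per deployment.

    pragma solidity ^0.8.0;

    interface AggregatorV3Interface {
        function latestRoundData() external view returns (
            uint80 roundId, int256 answer, uint256 startedAt,
            uint256 updatedAt, uint80 answeredInRound
        );
    }

    contract OracleConsumer {
        AggregatorV3Interface public immutable feed;
        uint256 public constant MAX_AGE = 1 hours; // illustrative threshold

        constructor(AggregatorV3Interface _feed) {
            feed = _feed;
        }

        function safePrice() public view returns (uint256) {
            (, int256 answer,, uint256 updatedAt,) = feed.latestRoundData();
            // Fail closed instead of 'hanging' on a hardcoded floor value.
            require(answer > 0, "oracle: non-positive price");
            // Reject stale data rather than silently using it.
            require(block.timestamp - updatedAt <= MAX_AGE, "oracle: stale data");
            return uint256(answer);
        }
    }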
Code that relies on external code can introduce multiple attack vectors. This includes cases where an external dependency contains malicious code or has been subject to malicious manipulation through security vulnerabilities. However, failure to adequately manage the possible outcomes of an external call can also introduce security vulnerabilities.
One of the most commonly cited vulnerabilities in Ethereum Smart Contracts is Re-entrancy Attacks. These attacks allow malicious contracts to make a call back into the contract that called it before the originating contract's function call has been completed. This effect causes the calling contract to complete its processing in unintended ways, for example, by making unexpected changes to its state variables.
A Read-only Re-entrancy Attack arises when a view function is reentered. These are a particular additional danger because such functions often lack safeguards since they don't modify the contract's state. However, if the state is inconsistent, incorrect values could be reported. This deception can mislead other protocols into reading inaccurate state values, potentially leading to unintended actions. This issue primarily affects other contracts that rely on the accurate reporting of state from these view functions, rather than the contract itself being reentered.
Some requirements in the document refer to Malleable Signatures. These are signatures created according to a scheme constructed so that, given a message and a signature, it is possible to efficiently compute the signature of a different message - usually one that has been transformed in specific ways. While there are valuable use cases that such signature schemes allow, if not used carefully they can lead to vulnerabilities, which is why this specification seeks to constrain their use appropriately. In a similar vein, Hash Collisions could occur for hashed messages where the input used is malleable, allowing the same signature to be used for two distinct messages.
Other requirements in the document are related to exploits which take advantage of ambiguity in the input used to create the signed message. When a signed message does not include enough identifying information concerning where, when, and how many times it is intended to be used, the message signature could be used (or reused) in unintended functions, contracts, chains, or even at unintended times.
For more information on this topic, and the potential for exploitation, see also [chase].
Gas Griefing is the deliberate abuse of the Gas mechanism that Ethereum uses to regulate the consumption of computing power, to cause an unexpected or adverse outcome much in the style of a Denial of Service attack. Because Ethereum is designed with the Gas mechanism as a regulating feature, it is insufficient to simply check that a transaction has enough Gas; checking for Gas Griefing needs to take into account the goals and business logic that the Tested Code implements.
Gas Siphoning is another abuse of the Gas mechanism that Ethereum uses to regulate the consumption of computing power, where attackers steal Gas from vulnerable contracts either to deny service or for their own gain (e.g. to mint Gas Tokens). Similar to Gas Griefing, checking for Gas Siphoning requires careful consideration of the goals and business logic that the Tested Code implements.
Gas Tokens use Gas when minted and free slightly less Gas when burned. Gas Tokens minted when Gas prices are low can be burned to subsidize Ethereum transactions when Gas prices are high.
In addition, a common feature of Ethereum Network Upgrades is to change the Gas Price of specific operations. EEA EthTrust certification only applies for the EVM version(s) specified; it is not valid for other EVM versions. Thus it is important to recheck code to ensure its security properties remain the same across Network Upgrades, or take remedial action.
MEV, used in this document to mean "Maliciously Extracted Value", refers to the potential for block producers or other participants in a blockchain to extract value that is not intentionally given to them, in other words to steal it, by maliciously reordering transactions, as in Timing Attacks, or suppressing them.
The term MEV is commonly expanded as "Miner Extracted Value", and sometimes "Maximum Extractable Value". As in the example above, sometimes block miners can take best advantage of a vulnerability. But MEV can be exploited by other participants, for example duplicating most of a submitted transaction, but offering a higher fee so it is processed first.
Some MEV attacks can be prevented by careful consideration of the information that is included in a transaction, including the parameters required by a contract.
Other strategies include the use of hash commitment schemes [hash-commit], batch execution, private transactions [EEA-clients], Layer 2 [EEA-L2], or an extension to establish the ordering of transactions before releasing sensitive information to all nodes participating in a blockchain.
The Ethereum Foundation curates up to date information on MEV [EF-MEV].
Censorship Attacks occur when a block processor actively suppresses a proposed transaction, for their own benefit.
Future Block Attacks are those where a block proposer knows they will produce a particular block, and uses this information to craft the block to maliciously extract value from other transactions. See for example [futureblock] or [postmerge-mev].
Timing Attacks are a class of MEV attacks where an adversary benefits from placing their or a victim's transactions earlier or later in a block. They include Front-Running, Back-Running, and Sandwich Attacks.
Front-Running is based on the fact that transactions are visible to the participants in the network before they are added to a block. This allows a malicious participant to submit an alternative transaction, frustrating the aim of the original transaction.
Back-Running is similar to Front-Running, except the attacker places their transactions after the one they are attacking.
In Sandwich Attacks, an attacker places a victim's transaction undesirably between two other transactions.
This version of the specification requires the compiled bytecode as well as the Solidity Source Code that together constitute the Tested Code. Solidity is by a large measure the most common programming language for Ethereum smart contracts, and requiring source code in Solidity brings benefits such as the wide availability of reviewer expertise and analysis tools for it.
Solidity allows the source code to specify the Solidity compiler version used, with a pragma statement. This specification currently has no requirement for a specific pragma, but it is good practice to ensure that the pragma refers to a bounded set of Solidity compiler versions, where it is known that those Solidity compiler versions produce identical bytecode from the given source code.
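For example (non-normative), a floating pragma admits a range of compiler versions, while a pinned pragma identifies a single version with known, reproducible output; a source file uses one or the other:

    pragma solidity ^0.8.0;  // floating: any 0.8.x compiler may be used,
                             // so the produced bytecode can vary by toolchain

    pragma solidity 0.8.22;  // pinned: one known compiler version, making
                             // the produced bytecode reproducible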
There are some drawbacks to requiring Solidity Source code. The most obvious is that some code is not written in Solidity. Different languages have different features and often support different coding styles.
Perhaps more important, it means that a deployed contract written in Solidity cannot be tested directly without someone making the source code available.
Another important limitation introduced by reading source code is that it is subject to Homoglyph Attacks, where characters that look the same but are different such as Latin "p" and Cyrillic "р", can deceive people visually reading the source code, to disguise malicious behaviour. There are related attacks that use features such as Unicode Direction Control Characters or take advantage of inconsistent normalisation of combining characters to achieve the same type of deceptions.
This specification primarily addresses vulnerabilities that arise in Smart Contract code. However it is important to note that the deployment of a smart contract is often a crucial element of protocol operation. Some aspects of smart contract security primarily depend on how the Tested Code gets deployed. Even audited protocols can be easily exploited if deployed naively.
Code written for a specific blockchain might depend on features available in that blockchain, and when the code is deployed to a different chain that is compatible (e.g. it uses the same EVM to process smart contracts), the difference in features can expose a vulnerability. For any contract deployed to a blockchain or parachain that uses a patched fork of the EVM, common security assumptions may no longer apply to that EVM. It is valuable to deploy EthTrust Certified contracts to a testnet for each chain first, and undergo thorough penetration testing.
Of particular concern is the issue of upgradeable proxy-type contracts, and any contract utilizing an initializer function in deployment. Many protocols have been hacked due to accidentally leaving their initializer functions unprotected, or using a non-atomic deployment in which the initializing function is not called in the same transaction as the contract deployment. This scenario is ripe for Front-running attacks, and can result in protocol takeover by malicious parties, and theft or loss of funds. Initializing any initializable contract in the same transaction as its deployment reduces the risk that a malicious actor takes control of the contract.
Moreover, the deployment implications of assigning access roles to msg.sender or other variables in constructors and initializers need careful consideration. This is discussed further in § 4.3.2 Access Control requirements.
Several libraries and tools exist specifically for safe proxy usage and safe contract deployment. From command-line tools to libraries to sophisticated UI-based deployment tools, many solutions exist to prevent unsafe proxy deployments and upgrades.
Using access control for a given contract's initializer, and limiting the number of times an initializer can be called on or after deployment, can enhance safety and transparency for the protocol itself and its users. Furthermore, a function that disables the ability to initialize an Execution Contract can prevent any future initializer calls after deployment, preventing later attacks or accidents.
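The following non-normative sketch shows a hand-rolled guard of the kind such libraries provide (production code commonly uses a maintained library instead, such as an Initializable base contract); the names are illustrative.

    pragma solidity ^0.8.0;

    contract ExecutionContract {
        bool private initialized;
        address public owner;

        // The constructor runs on the Execution Contract's own storage,
        // never the proxy's, so this blocks direct initialization of the
        // logic contract itself.
        constructor() {
            initialized = true;
        }

        // Called through the proxy; callable exactly once. Deployment
        // scripts should call it in the same transaction as the proxy
        // deployment to prevent Front-running.
        function initialize(address _owner) external {
            require(!initialized, "already initialized");
            initialized = true;
            owner = _owner;
        }
    }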
Although this specification does not require that Tested Code has been deployed, some requirements are more easily tested when code has been deployed to a blockchain, or can only be thoroughly tested "in situ".
While monitoring Smart Contracts after deployment is beyond the formal scope of this specification, it is an important consideration for Smart Contract security. New attack techniques arise from time to time, and some attacks can only be prevented by active measures implemented in real time. Monitoring of on-chain activity can help detect attacks before it is too late to stop them.
Monitoring, backed by an automated dataset, can enable identifying an attack that has occurred elsewhere, even on other blockchains.
Automated monitoring can facilitate rapid response, producing alerts or automatically initiating action, improving the security of contracts that might be compromised when security responses are delayed by even a few blocks.
However, it can be difficult to determine the difference between an attack and anomalous behaviour on the part of individuals. Relying purely on automated monitoring can expose a blockchain to the risk that a malicious actor deliberately triggers an automated security response to damage a blockchain or project, analogous to a Denial of Service attack.
The EVM, or Ethereum Virtual Machine, acts as a distributed state machine for the Ethereum network, computing state changes resulting from transactions. The EVM maintains the network state for simple transfers of Ether, as well as more complex Smart Contract interactions. In other words, it is the "computer" (although in fact it is software) that runs the code of Smart Contracts.
From time to time the Ethereum community implements a Network Upgrade, sometimes also called a hard fork. This is a change to Ethereum that is backwards-incompatible. Because they typically change the EVM, Ethereum Mainnet Network Upgrades generally correspond to EVM versions.
A Network Upgrade can affect more or less any aspect of Ethereum, including changing EVM opcodes or their Gas price, changing how blocks are added, or how rewards are paid, among many possibilities.
Because Network Upgrades are not guaranteed to be backwards compatible, a newer EVM version can process bytecode in unanticipated ways. If a Network Upgrade changes the EVM to fix a security problem, it is important to consider that change, and it is a good practice to follow that upgrade.
Because claims of conformance to this specification are only valid for specific EVM versions, a Network Upgrade can mean an updated audit is needed to maintain valid EEA EthTrust Certification for a current Ethereum network.
Network Upgrades typically only impact a few features. This helps limit the effort necessary to audit code after an upgrade: often there will be no changes that affect the Tested Code, or review of a small proportion that is the only part affected by a Network Upgrade will be sufficient to renew EEA EthTrust Certification.
EEA EthTrust Certification is available at three Security Levels. The Security Levels describe minimum requirements for certifications at each Security Level: [S], [M], and [Q]. These Security Levels provide successively stronger assurance that a smart contract does not have specific security vulnerabilities.
The optional § 4.4 Recommended Good Practices, correctly implemented, further enhance the Security of smart contracts. However it is not necessary to test them to conform to this specification.
The vulnerabilities addressed by this specification come from a number of sources, including Solidity Security Alerts [solidity-alerts], the Smart Contract Weakness Classification [swcregistry], TMIO Best practices [tmio-bp], various sources of Security Advisory Notices, discussions in the Ethereum community and academics presenting newly discovered vulnerabilities, and the extensive practical experience of participants in the Working Group.
EEA EthTrust Certification at Security Level [S] is intended to allow an unguided automated tool to analyze most contracts' bytecode and source code, and determine whether they meet the requirements. For some situations that are difficult to verify automatically, usually only likely to arise in a small minority of contracts, there are higher-level Overriding Requirements that can be fulfilled instead to meet a requirement for this Security Level.
To be eligible for EEA EthTrust Certification for Security Level [S], Tested code MUST fulfil all Security Level [S] requirements, unless it meets the applicable Overriding Requirement(s) for any Security Level [S] requirement it does not meet.
[S] Encode Hashes with chainid 🔗
Tested code MUST create hashes for transactions that incorporate chainid values following the recommendation described in [EIP-155].
[EIP-155] describes an enhanced hashing rule, incorporating a chain identifier in the hash. While this only provides a guarantee against replay attacks if there is a unique chain identifier, using the mechanism described provides a certain level of robustness and makes it much more difficult to execute a replay attack.
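At the Smart Contract level, an analogous non-normative sketch binds a signed message digest to one chain and one contract instance so it cannot be replayed elsewhere; the contract and parameter names are illustrative:

    pragma solidity ^0.8.0;

    contract ReplayProtected {
        mapping(address => uint256) public nonces;

        // Including block.chainid and address(this) ties the digest to
        // this chain and deployment; the nonce prevents reuse over time.
        function messageDigest(address to, uint256 amount)
            public view returns (bytes32)
        {
            return keccak256(abi.encode(
                block.chainid, address(this), to, amount, nonces[to]
            ));
        }
    }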
[S] No CREATE2 🔗
Tested code MUST NOT contain a CREATE2 instruction unless it meets the Set of Overriding Requirements.
The CREATE2 opcode provides the ability to interact with addresses that do not exist yet on-chain but could possibly eventually contain code. While this can be useful for deployments and counterfactual interactions with contracts, it can allow external calls to code that is not yet known, and could turn out to be malicious or insecure due to errors or weak protections.
[S] No tx.origin 🔗
Tested code MUST NOT contain a tx.origin instruction unless it meets the Overriding Requirement [Q] Verify tx.origin Usage.
tx.origin is a global variable in Solidity which returns the address of the account that sent the transaction. A contract using tx.origin can allow an authorized account to call into a malicious contract, enabling the malicious contract to pass authorization checks in unintended cases. Use msg.sender for authorization instead of tx.origin.
See also SWC-115 [swcregistry].
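The following non-normative example contrasts the two; as a warning, the first function shows what NOT to do:

    pragma solidity ^0.8.0;

    contract Wallet {
        address public owner = msg.sender;

        receive() external payable {}

        // VULNERABLE: if the owner is lured into calling a malicious
        // contract, that contract can call withdrawAll() and tx.origin
        // will still be the owner's address.
        function withdrawAll(address payable to) external {
            require(tx.origin == owner, "not owner"); // do NOT do this
            to.transfer(address(this).balance);
        }

        // msg.sender is the immediate caller, so an intermediary
        // malicious contract cannot pass this check.
        function withdrawAllSafe(address payable to) external {
            require(msg.sender == owner, "not owner");
            to.transfer(address(this).balance);
        }
    }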
[S] No Exact Balance Check 🔗
Tested code MUST NOT test that the balance of an account is exactly equal to (i.e. ==) a specified amount or the value of a variable, unless it meets the Overriding Requirement [M] Verify Exact Balance Checks.
Testing the balance of an account as a basis for some action has risks associated with unexpected receipt of ether or another token, including tokens deliberately transferred to cause such tests to fail as an MEV attack.
See also the Related Requirements [M] Sources of Randomness, [M] Don't Misuse Block Data, and [Q] Protect against MEV Attacks, subsection § 3.7 MEV (Maliciously Extracted Value) of the Security Considerations for this specification, SWC-132 in [swcregistry], and improper locking as described in [CWE-667].
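The following non-normative sketch illustrates the problem; the contract is illustrative and assumes expectedDeposits is maintained elsewhere in the code. As a warning, the first function shows what NOT to do:

    pragma solidity ^0.8.0;

    contract Settlement {
        uint256 public expectedDeposits; // assumed to be updated on deposits

        // VULNERABLE: anyone can force-send 1 wei (for example via
        // selfdestruct of a funded contract) so the strict equality
        // never holds again, permanently blocking this function.
        function settle() external view {
            require(address(this).balance == expectedDeposits, "unexpected balance");
            // ...
        }

        // More robust: compare against tracked state with an inequality.
        function settleSafe() external view {
            require(address(this).balance >= expectedDeposits, "insufficient balance");
            // ...
        }
    }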
[S] No Conflicting Names 🔗
Tested code MUST NOT include more than one variable, or operative function with different code, with the same name, unless it meets the Overriding Requirement [M] Document Name Conflicts.
In most programming languages, including Solidity, it is possible to use the same name for variables or functions that have different types or (for functions) input parameters. This can be hard to interpret in the source code, meaning reviewers misunderstand the code or are maliciously misled to do so, analogously to Homoglyph Attacks.
This requirement means that unless the Overriding Requirement is met, any function or variable name will not be repeated, to eliminate confusion. It does however allow functions to be overridden, e.g. from a Base contract, so long as there is only one version of the function that operates within the code.
See also the related requirement [M] Compiler Bug SOL-2020-2, and the documentation of function inheritance in [solidity-functions].
[S] No Hashing Consecutive Variable Length Arguments 🔗
Tested Code MUST NOT use abi.encodePacked() with consecutive variable length arguments.
The elements of each variable-length argument to abi.encodePacked() are packed in order prior to hashing. Hash Collisions are possible by rearranging the elements between consecutive, variable length arguments while maintaining that their concatenated order is the same.
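The following non-normative example demonstrates such a collision, and shows abi.encode(), which length-prefixes dynamic types, as the usual alternative:

    pragma solidity ^0.8.0;

    contract PackedCollision {
        function demo() external pure returns (bool collide, bool distinct) {
            // Both produce the packed bytes "aabbcc", so the hashes match
            // even though the argument pairs differ.
            collide = keccak256(abi.encodePacked("aa", "bbcc"))
                   == keccak256(abi.encodePacked("aabb", "cc")); // true
            // abi.encode() includes lengths, so the hashes differ.
            distinct = keccak256(abi.encode("aa", "bbcc"))
                    != keccak256(abi.encode("aabb", "cc"));      // true
        }
    }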
[S] No selfdestruct() 🔗
Tested code MUST NOT contain the selfdestruct() instruction or its now-deprecated alias suicide() unless it meets the Set of Overriding Requirements.
If the selfdestruct() instruction (or its deprecated alternative suicide()) is not carefully protected, malicious code can call it and destroy a contract, sending any Ether held by the contract, thus potentially stealing it. This feature can often break immutability and trustless guarantees, introducing numerous security issues. In addition, once the contract has been destroyed any Ether sent is simply lost, unlike when a contract is disabled, which causes a transaction sending Ether to revert.
selfdestruct() is officially deprecated, and its usage discouraged, since Solidity compiler version 0.8.18 [solidity-release-818].
See also SWC-106 in [swcregistry], [EIP-6049].
[S] No assembly {} 🔗
Tested Code MUST NOT contain the assembly {} instruction unless it meets the Set of Overriding Requirements [M] Avoid Common assembly {} Attack Vectors, and [Q] Verify assembly {}.
The assembly {} instruction allows lower-level code to be included. This gives the authors much stronger control over the bytecode that is generated, which can be used for example to optimise gas usage. However, it also potentially exposes a number of vulnerabilities and bugs that are additional attack surfaces, and there are a number of ways to use assembly {} to introduce deliberately malicious code that is difficult to detect.
[S] No Unicode Direction Control Characters 🔗
Tested code MUST NOT contain any of the Unicode Direction Control Characters U+2066, U+2067, U+2068, U+2069, U+202A, U+202B, U+202C, U+202D, or U+202E, unless it meets the Overriding Requirement [M] No Unnecessary Unicode Controls.
Changing the apparent order of characters through the use of invisible Unicode direction control characters can mask malicious code, even in viewing source code, to deceive human auditors.
More information on Unicode direction control characters is available in the W3C note How to use Unicode controls for bidi text [unicode-bdo].
See also the Related Requirements: [M] Protect External Calls, and [Q] Verify External Calls.
[S] Check External Calls Return 🔗
Tested Code that makes external calls using the Low-level Call Functions (i.e. call(), delegatecall(), staticcall(), and send()) MUST check the returned value from each usage to determine whether the call failed, unless it meets the Overriding Requirement [M] Handle External Call Returns.
Normally, exceptions in calls cause a reversion. This will "bubble up", unless they are handled in a try/catch. However Solidity defines a set of Low-level Call Functions: call(), delegatecall(), staticcall(), and send(). Calls using these functions behave differently. Instead of reverting on failure they return a boolean indicating whether the call completed successfully.
Not testing explicitly for the return value could lead to unexpected behavior in the caller contract. Assuming these calls revert on failure will lead to unexpected behaviour when they are not successful.
See also SWC-104 in [swcregistry], error handling documentation in [error-handling], unchecked return value as described in [CWE-252], and the Related Requirements: [S] Use Check-Effects-Interaction, [M] Handle External Call Returns, and [Q] Verify External Calls.
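The following non-normative sketch shows the check for a low-level call; the contract and function names are illustrative:

    pragma solidity ^0.8.0;

    contract Forwarder {
        // call() does not revert on failure; it returns a boolean that
        // needs to be checked (or otherwise handled, per the Overriding
        // Requirement [M] Handle External Call Returns).
        function forwardPayment(address payable to) external payable {
            (bool ok, ) = to.call{value: msg.value}("");
            require(ok, "external call failed");
        }
    }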
[S] Use Check-Effects-Interaction 🔗
Tested code that makes external calls MUST use the Checks-Effects-Interactions pattern to protect against Re-entrancy Attacks, unless it meets one of the applicable Sets of Overriding Requirements.
The Checks-Effects-Interactions pattern ensures that validation of the request, and changes to the state variables of the contract, are performed before any interactions take place with other contracts. When contracts are implemented this way, the scope for Re-entrancy Attacks is reduced significantly.
As well as checking the particular contract effects, it is possible as part of this pattern to test protocol invariants, to provide a further assurance that a request doesn't produce an unsafe outcome.
See also § 3.4 External Interactions and Re-entrancy Attacks, the explanation of "Checks-Effects-Interactions" [c-e-i] in "Solidity Security Considerations" [solidity-security], "Checks Effects Interactions" in [solidity-patterns], and [freipi].
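The following non-normative sketch applies the pattern to a simple withdrawal; the contract name is illustrative:

    pragma solidity ^0.8.0;

    contract Vault {
        mapping(address => uint256) public balances;

        function deposit() external payable {
            balances[msg.sender] += msg.value;
        }

        function withdraw(uint256 amount) external {
            // Checks: validate the request.
            require(balances[msg.sender] >= amount, "insufficient balance");
            // Effects: update state BEFORE the external interaction, so
            // a re-entrant call sees the reduced balance.
            balances[msg.sender] -= amount;
            // Interactions: the external call comes last.
            (bool ok, ) = msg.sender.call{value: amount}("");
            require(ok, "transfer failed");
        }
    }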
[S] No delegatecall() 🔗
Tested Code MUST NOT contain the delegatecall() instruction unless it meets the Set of Overriding Requirements.
The delegatecall() instruction enables an external contract to manipulate the state of a contract that calls it, because the code is run with the caller's balance, storage, and address.
Implementing the Recommended Good Practice [GP] Use Latest Compiler means that Tested Code passes the requirement in this subsection.
[S] No Overflow/Underflow 🔗
Tested code MUST NOT use a Solidity compiler version older than 0.8.0, unless it meets the Set of Overriding Requirements.
Like most programming languages, the EVM and Solidity represent numbers as a set of bytes that by default has a fixed length. This means arithmetic operations on large numbers can "overflow" the size by producing a result that does not fit in the space allocated. This results in corrupted data, and can be used as an attack on code. The [CWE] registry of generic code vulnerabilities contains many overflow attacks; it is a well-known vector that is exposed in many systems and has regularly been exploited.
There are many ways to check for overflows, or underflows (where a negative number is large enough in magnitude to trigger the same effect). Since Solidity compiler version 0.8.0 there is built-in arithmetic overflow protection. Tested Code compiled with an earlier Solidity compiler version needs to check explicitly to mitigate this potential vulnerability.
See also SWC-101 in [swcregistry].
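The following non-normative example contrasts the default checked arithmetic of Solidity 0.8.0 and later with explicitly unchecked arithmetic, which reintroduces the older wrapping behaviour:

    pragma solidity ^0.8.0;

    contract OverflowDemo {
        // With compiler version 0.8.0 or later this reverts on overflow.
        function add(uint8 a, uint8 b) external pure returns (uint8) {
            return a + b; // add(255, 1) reverts instead of wrapping to 0
        }

        // unchecked {} restores pre-0.8.0 wrapping semantics; code using
        // it needs the same scrutiny as code built with older compilers.
        function addUnchecked(uint8 a, uint8 b) external pure returns (uint8) {
            unchecked { return a + b; } // addUnchecked(255, 1) == 0
        }
    }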
There are a number of known security bugs in different Solidity compiler versions. The requirements in this subsection ensure that Tested Code does not trigger these bugs. The name of the requirement includes the uid first recorded for the bug in [solidity-bugs-json], as a key that can be used to find more information about the bug. [solidity-bugs] describes the conventions used for the JSON-formatted list of bugs.
The requirements in this subsection are ordered according to the latest Solidity compiler versions that are vulnerable.
Implementing the Recommended Good Practice [GP] Use Latest Compiler means that Tested Code passes all requirements in this subsection.
Some compiler-related bugs are in the § 4.2.5 Security Level [M] Compiler Bugs and Overriding Requirements as Security Level [M] requirements, either because they are Overriding Requirements for requirements in this subsection, or because they are part of a Set of Overriding Requirements for Security Level [S] requirements that already ensure that the bug cannot be triggered.
Some bugs were introduced in known Solidity compiler versions, while others are known or assumed to have existed in all Solidity compiler versions until they were fixed.
[S] Compiler Bug SOL-2023-3 🔗
Tested code that includes Yul code and uses the verbatim instruction twice, in each case surrounded by identical code, MUST disable the Block Deduplicator when using a Solidity compiler version between 0.8.5 and 0.8.22 (inclusive).
From Solidity compiler version 0.8.5 until 0.8.22, the block deduplicator incorrectly processed verbatim items, meaning that sometimes it conflated two items based on the code surrounding them instead of comparing them properly.
See also the 8 November 2023 security alert.
[S] Compiler Bug SOL-2022-6 🔗
Tested code that ABI-encodes a tuple (including a struct, return value, or a parameter list) that includes a dynamic component with the ABIEncoderV2, and whose last element is a calldata static array of base type uint or bytes32, MUST NOT use a Solidity compiler version between 0.5.8 and 0.8.15 (inclusive).
From Solidity compiler version 0.5.8 until 0.8.15, ABI encoding a tuple whose final component is a calldata static array of base type uint or bytes32 with the ABIEncoderV2 could result in corrupted data.
See also the 8 August 2022 security alert.
[S] Compiler Bug SOL-2022-5 with .push() 🔗
Tested code that copies bytes arrays from calldata or memory whose size is not a multiple of 32 bytes, and has an empty .push() instruction that writes to the resulting array, MUST NOT use a Solidity compiler version older than 0.8.15.
Until Solidity compiler version 0.8.15, copying memory or calldata whose length is not a multiple of 32 bytes could expose data beyond the data copied, which could be observable by code using assembly {}.
See also the 15 June 2022 security alert and the related requirement [M] Compiler Bug SOL-2022-5 in assembly {}.
[S] Compiler Bug SOL-2022-3 🔗
Tested code that uses memory and calldata pointers for the same function, and changes the data location of a function during inheritance, MUST NOT use a Solidity compiler version between 0.6.9 and 0.8.13 (inclusive).
Solidity compiler versions from 0.6.9 until it was fixed in 0.8.14 had a bug that incorrectly allowed internal or public calls to use a simplification only valid for external calls, treating memory and calldata as equivalent pointers.
See also the 17 May 2022 security alert.
[S] Compiler Bug SOL-2022-2 🔗
Tested code with a nested array that is passed to an external function, used in abi.encode(), or emitted in an event, MUST NOT use a Solidity compiler version between 0.6.9 and 0.8.13 (inclusive).
Solidity compiler versions from 0.5.8 until it was fixed in 0.8.14 had a bug that meant a single-pass encoding and decoding of a nested array could read data beyond the calldatasize().
See also the 17 May 2022 security alert.
[S] Compiler Bug SOL-2022-1 🔗
Tested code that uses number literals for a bytesNN type shorter than 32 bytes, or string literals for any bytesNN type, and passes such literals to abi.encodeCall() as the first parameter, MUST NOT use Solidity compiler version 0.8.11 nor 0.8.12.
Solidity defines a set of types for variables known collectively as bytesNN or Fixed-length Variable types, that specify the length of the variable as a fixed number of bytes, following the pattern bytes1, bytes2, ... bytes10, ... bytes32.
Solidity compiler versions 0.8.11 and 0.8.12 had a bug that meant literal parameters were incorrectly encoded by abi.encodeCall() in certain circumstances.
See also the 16 March 2022 security alert.
[S] Compiler Bug SOL-2021-4 🔗
Tested Code that uses custom value types shorter than 32 bytes MUST NOT use Solidity compiler version 0.8.8.
Solidity compiler version 0.8.8 had a bug that assigned a full 32 bytes of storage to custom types that did not need it. This can be misused to enable reading arbitrary storage, as well as causing errors if the Tested Code contains code compiled using different Solidity compiler versions.
See also the 29 September 2021 security alert.
[S] Compiler Bug SOL-2021-2 🔗
Tested code that uses abi.decode() on byte arrays as memory MUST NOT use the ABIEncoderV2 with a Solidity compiler version between 0.4.16 and 0.8.3 (inclusive).
Solidity compiler version 0.4.16 introduced a bug, fixed in 0.8.4, that meant the ABIEncoderV2 incorrectly validated pointers when reading memory byte arrays, which could result in reading data beyond the array area due to an overflow error in calculating pointers.
See also the 21 April 2021 security alert.
[S] Compiler Bug SOL-2021-1 🔗
Tested code that has 2 or more occurrences of an instruction keccak(mem,length), where the values of mem are the same but the values of length differ and are not multiples of 32 bytes, MUST NOT use the Optimizer with a Solidity compiler version older than 0.8.3.
Solidity compiler versions before 0.8.3 had an Optimizer bug that meant keccak hashes, calculated for the same content but different lengths that were not multiples of 32 bytes, incorrectly used the first value from cache instead of recalculating.
See also the 23 March 2021 security alert.
[S] Compiler Bug SOL-2020-11-push 🔗
Tested code that copies an empty byte array to storage, and subsequently increases the size of the array using push(), MUST NOT use a Solidity compiler version older than 0.7.4.
Solidity compiler versions before 0.7.4 had a bug that meant data would be packed after an empty array, and if the length of the array was subsequently extended by push(), that data would be readable from the array.
See also the 19 October 2020 security alert.
[S] Compiler Bug SOL-2020-10 🔗
Tested code that copies an array of types shorter than 16 bytes to a longer array
MUST NOT use a Solidity compiler version older than 0.7.3.
Solidity compiler versions before 0.7.3 had a bug that meant when array data for types shorter than 16 bytes are assigned to a longer array, the extra values in that longer array are not correctly reset to zero.
See also the 7 October 2020 security alert.
[S] Compiler Bug SOL-2020-9 🔗
Tested code that defines Free Functions MUST NOT use Solidity compiler version 0.7.1.
Solidity compiler version 0.7.1 introduced Free Functions [solidity-functions]: functions that are defined in the source code of a smart contract but outside the scope of the formal contract declaration. Free Functions have internal visibility, and the compiler "inlines" them into the contracts that call them. The Solidity documentation explains that they are:
"executed in the context of a contract. They still have access to the variable this, can call other contracts, send them Ether and destroy the contract that called them, among other things. The main difference to functions defined inside a contract is that free functions do not have direct access to storage variables and functions not in their scope." (https://docs.soliditylang.org/en/latest/contracts.html#functions)
Solidity compiler version 0.7.1 did not correctly distinguish overlapping Free Function declarations, meaning that the wrong function could be called.
See examples of a passing contract and a failing contract for this requirement.
[S] Compiler Bug SOL-2020-8 🔗
Tested code that calls internal library functions with calldata parameters via using for MUST NOT use Solidity compiler version 0.6.9.
Solidity compiler version 0.6.9 incorrectly copied calldata parameters passed to internal library functions called via using for as if they were calls to external library functions, leading to stack corruption and an incorrect jump destination.
See also a GitHub issue with a code example.
[S] Compiler Bug SOL-2020-6 🔗
Tested code that accesses an array slice using an expression for the starting index
that can evaluate to a value other than zero
MUST NOT use the ABIEncoderV2 with a Solidity compiler version between 0.6.0 and 0.6.7 (inclusive).
Solidity compiler version 0.6.0 introduced a bug, fixed in 0.6.8, that incorrectly calculated index offsets for the start of array slices used in dynamic calldata types, when using the ABIEncoderV2.
[S] Compiler Bug SOL-2020-7 🔗
Tested code that passes a string literal containing two consecutive backslash ("\")
characters to an encoding function or an external call
MUST NOT use the ABIEncoderV2 with a Solidity compiler version between 0.5.14 and 0.6.7 (inclusive).
Solidity compiler version 0.5.14 introduced a bug fixed in 0.6.8 that incorrectly encoded consecutive backslash characters in string literals when passing them to an external function, or an encoding function, when using the ABIEncoderV2.
[S] Compiler Bug SOL-2020-5 🔗
Tested code that defines a contract that does not include a constructor, but has a base contract that defines a constructor not defined as payable, MUST NOT use a Solidity compiler version between 0.4.5 and 0.6.7 (inclusive), unless it meets the Overriding Requirement [M] Compiler Bug Check Constructor Payment.
Solidity compiler version 0.4.5 introduced a check intended to make contract creation revert if value is passed to a constructor that is not explicitly marked as payable. If the constructor was inherited from a base contract instead of explicitly defined in the contract, this check did not function properly until Solidity compiler version 0.6.8, meaning the creation would not revert as expected.
[S] Compiler Bug SOL-2020-4 🔗
Tested code that makes assignments to tuples that involve nested tuples, pointers to external functions, or references to a dynamically sized calldata array, MUST NOT use a Solidity compiler version older than 0.6.5.
Solidity compiler version 0.1.6 introduced a bug, fixed in Solidity compiler version 0.6.5, that meant tuple assignments involving nested tuples, pointers to external functions, or references to dynamically sized calldata arrays were corrupted due to incorrectly calculating the number of stack slots.
[S] Compiler Bug SOL-2020-3 🔗
Tested code that declares arrays of size larger than 2^256-1 MUST NOT use a Solidity compiler version older than 0.6.5.
Solidity compiler version 0.2.0 introduced a bug, fixed in Solidity compiler version 0.6.5, that meant no overflow check was performed for the creation of very large arrays, meaning in some cases an overflow error would occur that would result in consuming all gas in a transaction due to the memory handling error introduced in compiling the contract.
[S] Compiler Bug SOL-2020-1 🔗
Tested code that declares variables inside a for loop that contains a break or continue statement MUST NOT use the Yul Optimizer with Solidity compiler version 0.6.0, nor a Solidity compiler version between 0.5.8 and 0.5.15 (inclusive).
A bug in the Yul Optimizer in Solidity compiler versions from 0.5.8 to 0.5.15, and in Solidity compiler version 0.6.0, meant assignments to variables declared inside a for loop that contained a break or continue statement could be removed.
[S] Use a Modern Compiler 🔗
Tested code MUST NOT use a Solidity compiler version older than 0.6.0, unless it meets the requirements from the EEA EthTrust Security Levels Specification Version 1 that apply as Overriding Requirements, including [S] Declare Data Location storage Explicitly (if appropriate).
There are a number of known compiler bugs that affect Solidity Compiler Versions older than 0.6.0, but research into compiler bugs tends to focus on those that affect relatively modern Solidity Compiler versions, so any further bugs in older Solidity Compiler versions are only likely to be discovered and generally known as a result of being exploited.
It is a good practice to use a modern Solidity Compiler Version. In the rare cases where it is not possible to use a Solidity Compiler Version later than 0.6.0, it is possible to achieve EEA EthTrust Certification by conforming to the relevant Overriding Requirements that were defined in version 1 of this specification [EthTrust-sl-v1].
See also the Related Requirement [M] Use a Modern Compiler, covering Solidity Compiler bugs that require review for Security Level [M].
[S] No Ancient Compilers 🔗
Tested code MUST NOT use a Solidity compiler version older than 0.3.
Compiler bugs are not tracked for Solidity compiler versions older than 0.3. There is therefore a risk that unknown bugs create unexpected problems.
See also "SOL-2016-1" in [solidity-bugs-json].
EEA EthTrust Certification at Security Level [M] means that the Tested Code has been carefully reviewed by a human auditor or team, doing a manual analysis, and important security issues have been addressed to their satisfaction.
This level includes a number of Overriding Requirements for cases when Tested Code does not meet a Security Level [S] requirement directly, because it uses an uncommon feature that introduces higher risk, or because in certain circumstances testing that the requirement has been met requires human judgement. Passing the relevant Overriding Requirement tests that the feature has been implemented sufficiently well to satisfy the auditor that it does not expose the Tested Code to the known vulnerabilities identified in this Security Level.
[M] Pass Security Level [S] 🔗
To be eligible for EEA EthTrust certification at Security Level [M],
Tested code MUST meet the requirements for § 4.1 Security Level [S].
[M] Explicitly Disambiguate Evaluation Order 🔗
Tested code MUST NOT contain statements where variable evaluation order can result in different outcomes.
The evaluation order of functions is not entirely deterministic in Solidity, and is not guaranteed to be consistent across Solidity compiler versions. This means that the outcome of a statement calling multiple functions that each have side effects on shared stateful objects can lead to different outcomes if the order that the called functions were evaluated varies.
Also, the evaluation order in events and in the instructions addmod and mulmod generally does not follow the usual pattern, meaning that Tested Code using those instructions could produce unexpected outcomes.
A common approach to addressing this vulnerability is the use of temporary results, to ensure evaluation order will be the same.
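For example, the following non-normative sketch (contract and function names are illustrative) shows a statement whose outcome depends on evaluation order, and the temporary-variable pattern that removes the ambiguity:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract EvaluationOrder {
    uint256 private counter;

    function bump() internal returns (uint256) {
        counter += 1;
        return counter;
    }

    // AVOID: both operands change shared state, and Solidity does not
    // guarantee which call is evaluated first.
    function ambiguous() external returns (uint256) {
        return bump() * 10 + bump();
    }

    // PREFER: temporary variables pin the evaluation order explicitly.
    function unambiguous() external returns (uint256) {
        uint256 first = bump();
        uint256 second = bump();
        return first * 10 + second;
    }
}
```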
See also [richards2022], [solidity-cheatsheet], and the 19 July 2023 Solidity Compiler Security Bug notification for Solidity Compiler Security Bug 2023-2, noted in [solidity-bugs-json].
[M] No Failing assert() Statements 🔗
assert() statements in Tested Code MUST NOT fail.
assert() statements are meant for invariants, not as a generic error-handling mechanism. If an assert() statement fails because it is being used as a mechanism to catch errors, it is better to replace it with a require() statement or a similar mechanism designed for the use case. If it fails due to a coding bug, that bug needs to be fixed.
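For example, the following non-normative sketch (names are illustrative) contrasts require() for error handling with assert() for an invariant that can only fail if the code itself is broken:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract Withdrawal {
    mapping(address => uint256) public balances;

    receive() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external {
        // require() (or a custom error) is the right tool for error
        // handling such as input validation:
        require(balances[msg.sender] >= amount, "insufficient balance");

        uint256 before = balances[msg.sender];
        balances[msg.sender] = before - amount;

        // assert() is reserved for invariants; it is never expected to fail:
        assert(amount == 0 || balances[msg.sender] < before);

        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
    }
}
```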
This requirement is based on [CWE-670] Always-Incorrect Control Flow Implementation.
[M] Verify Exact Balance Checks 🔗
Tested code that checks whether the balance of an account is exactly equal to (i.e. ==) a specified amount or the value of a variable MUST protect itself against transfers affecting the balance tested.
This is an Overriding Requirement for
[S] No Exact Balance Check.
If a Smart Contract checks that an account balance is some particular exact value at some point during its execution, it is potentially vulnerable to an attack, where a transfer to the account can be used to change the balance of the account causing unexpected results such as a transaction reverting. If such checks are used it is important that they are protected against this possibility.
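For example, the following non-normative sketch (names are illustrative) shows why an exact check against the raw account balance is fragile, and internal accounting as one way to protect against forced transfers:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract BalanceCheck {
    uint256 public deposited; // internal accounting, unaffected by forced transfers

    function deposit() external payable {
        deposited += msg.value;
    }

    function settle(uint256 expected) external view {
        // AVOID: a forced transfer (e.g. via selfdestruct) changes the raw
        // balance, so this exact check could be made to revert forever:
        // require(address(this).balance == expected, "balance mismatch");

        // PREFER: compare against internal accounting instead.
        require(deposited == expected, "unexpected amount");
    }
}
```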
The requirements in this section are related to the security advisory [CVE-2021-42574] and [CWE-94], "Improper Control of Generation of Code", also called "Code Injection".
[M] No Unnecessary Unicode Controls 🔗
Tested code MUST NOT use Unicode direction control characters
unless they are necessary to render text appropriately,
and the resulting text does not mislead readers.
This is an Overriding Requirement for
[S] No Unicode Direction Control Characters.
Security Level [M] permits the use of Unicode direction control characters in text strings, subject to analysis of whether they are necessary.
[M] No Homoglyph-style Attack 🔗
Tested code MUST NOT use homoglyphs, Unicode control characters, combining characters, or characters from multiple
Unicode blocks, if the impact is misleading.
Techniques such as substituting characters from different alphabets (e.g. Latin "a" and Cyrillic "а" are not the same) can be used to mask malicious code, for example by presenting variables or function names designed to mislead auditors. These attacks are known as Homoglyph Attacks. Several approaches to successfully exploiting this issue are described in [Ivanov].
In the rare case when there is a valid use of characters from multiple Unicode blocks (see [unicode-blocks]) in a variable name or label (most likely to be mixing two languages in a name), requirements at this level allow them to pass EEA EthTrust certification so long as they do not mislead or confuse.
This level requires checking for homoglyph attacks including those within a single character set, such as the use of "í" in place of "i" or "ì", "ت" for "ث", or "1" for "l". When the reviewer judges that the result is misleading or confusing, the relevant code does not meet the Security Level [M] requirements.
See also the Related Requirement: [S] No Unicode Direction Control Characters.
[M] Protect External Calls 🔗
For Tested code that makes external calls:
- the code called MUST be part of the Tested Code, and
- the Tested Code MUST provide protection against Re-entrancy Attacks equivalent to that required by [S] Use Check-Effects-Interaction,
unless it meets the Set of Overriding Requirements [M] Document Special Code Use and [Q] Verify External Calls.
This is an Overriding Requirement for [S] Use Check-Effects-Interaction.
EEA EthTrust Certification at Security Level [M] allows calling within a set of contracts that form part of the Tested Code. This ensures all contracts called are audited together at this Security Level.
If a contract calls a well-known external contract that is not audited as part of the Tested Code, it is possible to certify conformance to this requirement through the Overriding Requirements, which allow the certifier to claim on their own judgement that the contracts called provide appropriate security. The extended requirements around documentation of the Tested Code that apply when claiming conformance through implementation of the Overriding Requirements in this case reflect the potential for very high risk if the external contracts are simply assumed by a reviewer to be secure because they have been widely used.
Unless the Tested Code deploys contracts, and retrieves their address accurately for calling, it is necessary to check that the contracts are really deployed at the addresses assumed in the Tested Code.
The same level of protection against Re-entrancy Attacks has to be provided to the Tested Code overall as for the Security Level [S] requirement.
[M] Avoid Read-only Re-entrancy Attacks 🔗
Tested Code that makes external calls MUST protect itself against Read-only Re-entrancy Attacks.
As described in § 3.4 External Interactions and Re-entrancy Attacks, code that reads information from a function can end up reading inconsistent or incorrect information. When the Tested Code calls a function in which this possibility arises, the calling code needs an appropriate mechanism to avoid it happening.
One potential mechanism is for view functions to have a modifier that checks whether the data is currently in an inconsistent state, in the manner of a lock function. This enables calling code to explicitly avoid viewing inconsistent data.
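The following non-normative sketch (names are illustrative) outlines the mechanism described above: the re-entrancy lock is also checked by view functions, so that inconsistent mid-update state cannot be read:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract ReadGuard {
    uint256 private locked = 1; // 1 = unlocked, 2 = locked

    modifier nonReentrant() {
        require(locked == 1, "reentrant call");
        locked = 2;
        _;
        locked = 1;
    }

    // View functions check the same lock, so integrating contracts
    // cannot read state while it is mid-update.
    modifier whenConsistent() {
        require(locked == 1, "state is being updated");
        _;
    }

    uint256 public price;

    function update() external nonReentrant {
        // External calls made here run while locked == 2; any re-entrant
        // read through getPrice() reverts instead of seeing partial state.
        price = price + 1; // placeholder update logic
    }

    function getPrice() external view whenConsistent returns (uint256) {
        return price;
    }
}
```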
[M] Handle External Call Returns 🔗
Tested Code that makes external calls MUST reasonably handle possible errors.
This is an Overriding Requirement for
[S] Check External Calls Return.
It is important that Tested Code works as expected, to the satisfaction of the auditor, when the return value is the result of a possible error, such as if a call to a non-existent function triggers a fallback function instead of simply reverting, or an external call using a low-level function does not revert.
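For example, the following non-normative sketch (names are illustrative) shows explicit handling of the success flag and return data of a low-level call, which do not cause a revert on their own:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract CallHandler {
    event CallFailed(address target, bytes reason);

    function safeCall(address target, bytes calldata data) external returns (bytes memory) {
        // Low-level calls do not revert on failure: the boolean result
        // and the returned data both need to be handled explicitly.
        (bool ok, bytes memory result) = target.call(data);
        if (!ok) {
            emit CallFailed(target, result);
            revert("external call failed");
        }
        // Note: a call to a non-existent function on a contract with a
        // fallback can still return ok == true; callers should also verify
        // that `target` contains the expected code where that matters.
        return result;
    }
}
```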
See also the related requirement: [Q] Process All Inputs.
[M] Document Special Code Use 🔗
Tested Code MUST document the need for each instance of:
- CREATE2,
- assembly {},
- selfdestruct() or its deprecated alias suicide(),
- delegatecall(),
- block.number or block.timestamp, or
- external calls,
and MUST describe how the Tested Code protects against misuse or errors in these cases, and the documentation MUST be available to anyone who can call the Tested Code.
This is part of several Sets of Overriding Requirements, one for each of the Security Level [S] requirements that restrict these features.
There are legitimate uses for all of these coding patterns, but they are also potential causes of security vulnerabilities. Security Level [M] therefore requires testing that the use of these patterns is explained and justified, and that they are used in a manner that does not introduce known vulnerabilities.
The requirement to document the use of external calls applies to all external calls in the tested code, whether or not they meet the Related Requirement [S] Use Check-Effects-Interaction.
See also the Related Requirements: [Q] Document Contract Logic, [Q] Document System Architecture, [Q] Implement as Documented, [Q] Verify External Calls, [M] Avoid Common assembly {} Attack Vectors, [M] Compiler Bug SOL-2022-5 in assembly {}, [M] Compiler Bug SOL-2022-4, and [M] Compiler Bug SOL-2021-3.
[M] Ensure Proper Rounding of Computations Affecting Value 🔗
Tested code MUST identify and protect against the exploitation of rounding errors.
Smart Contracts typically implement mathematical formulas over real numbers using integer arithmetic. Such code can introduce rounding errors because integers and rational numbers whose size is bounded cannot precisely represent all real numbers in the same range.
If a procedure that uses rounding produces a predictable error that increases the value returned by a round-trip, it is possible to exploit that difference by repeating the procedure to cumulatively siphon a large sum.
To protect against this vulnerability, the "Keep the Change" approach ensures that any difference created does not provide an advantage to an attacker repeatedly calling a smart contract. It is important to note that differences do still accrue. A contract could use "over-servicing", repeatedly calling a swap protected by the "Keep the Change" approach, to steal from a user.
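As a non-normative illustration of the "Keep the Change" approach (names are illustrative), the following sketch rounds a fee calculation up, so any remainder stays with the protocol rather than accumulating in an attacker's favour:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

library RoundUp {
    // Computes ceil(a * b / d): rounding up, so the remainder stays with
    // the protocol instead of leaking to a caller who could repeat the
    // operation to siphon value.
    function mulDivUp(uint256 a, uint256 b, uint256 d) internal pure returns (uint256) {
        uint256 product = a * b; // reverts on overflow (Solidity >= 0.8)
        return product == 0 ? 0 : (product - 1) / d + 1;
    }
}

contract FeeTaker {
    using RoundUp for uint256;

    uint256 public constant FEE_BPS = 30; // 0.3%

    function feeFor(uint256 amount) public pure returns (uint256) {
        // Rounding the fee down would let many small swaps pay zero fee.
        return amount.mulDivUp(FEE_BPS, 10_000);
    }
}
```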
This vulnerability has been discovered in practice in DeFi protocol Smart Contracts that could have put hundreds of millions of dollars at risk. Further explanation is available in the presentation slides for the DevCon 2023 talk [DevCon-rounding]. An example of a thorough mathematical analysis of integer rounding for an automated market maker is available in [rounding-errors].
This requirement is based on [CWE-1339] Insufficient Precision or Accuracy of a Real Number.
[M] Protect Self-destruction 🔗
Tested code that contains the selfdestruct() or suicide() instructions MUST ensure they can only be triggered by appropriately authorised parties, unless it meets the Overriding Requirement [Q] Enforce Least Privilege.
This is an Overriding Requirement for [S] No selfdestruct().
If the selfdestruct() instruction (or its deprecated alternative suicide()) is not carefully protected, malicious code can call it to destroy a contract and potentially steal any Ether held by the contract. In addition, this can disrupt other users of the contract, since once the contract has been destroyed any Ether sent to it is simply lost, unlike with a contract that is disabled, which causes a transaction sending Ether to revert.
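A minimal non-normative sketch of such protection (names are illustrative; note that selfdestruct() is deprecated in recent Solidity versions) restricts the instruction to an authorised caller:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract Destructible {
    address public immutable owner;

    constructor(address owner_) {
        owner = owner_;
    }

    // selfdestruct() is reachable only by the authorised owner; without
    // such a guard, anyone could destroy the contract and strand funds.
    function shutdown(address payable recipient) external {
        require(msg.sender == owner, "not authorised");
        selfdestruct(recipient);
    }
}
```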
See also SWC-106 in [swcregistry].
This vulnerability led to the Parity MultiSig Wallet Failure that blocked around half a million Ether on mainnet in 2017.
[M] Avoid Common assembly {} Attack Vectors 🔗
Tested Code MUST NOT use the assembly {} instruction to change a variable unless the code cannot:
- overwrite variables defined outside the assembly {} instruction through arbitrary writes to storage or memory, or
- jump to an arbitrary location, such as one derived from a variable with the type function.
This is part of a Set of Overriding Requirements for [S] No assembly {}.
The assembly {} instruction provides a low-level method for developers to produce code in smart contracts. Using this approach provides great flexibility and control, for example to reduce gas costs. However, it also exposes some possible attack surfaces where a malicious coder could introduce attacks that are hard to detect. This requirement ensures that two such well-known attack surfaces are not exposed.
See also SWC-124 and SWC-127 [swcregistry], and the Related Requirements [M] Document Special Code Use, [M] Compiler Bug SOL-2022-7, [M] Compiler Bug SOL-2022-5 in assembly {}, [M] Compiler Bug SOL-2022-4, and [M] Compiler Bug SOL-2021-3.
[M] Protect CREATE2 Calls 🔗
For Tested Code that uses the CREATE2 instruction, any contract to be deployed using CREATE2:
- MUST be included in the Tested Code, and
- MUST NOT contain the selfdestruct(), delegatecall() nor callcode() instructions,
unless it meets the relevant Set of Overriding Requirements.
This is part of a Set of Overriding Requirements for [S] No CREATE2.
The CREATE2 opcode's ability to interact with addresses whose code does not yet exist on-chain makes it important to prevent external calls to malicious or insecure contract code that is not yet known. The Tested Code needs to include any code that can be deployed using CREATE2, to verify protections are in place and the code behaves as the contract author claims. This includes ensuring that opcodes that can change a contract's immutability or forward calls from contracts deployed with CREATE2, such as selfdestruct(), delegatecall() and callcode(), are not present.
If any of these opcodes are present, the additional protections and documentation required by the Overriding Requirements are necessary.
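As a non-normative illustration (names are illustrative), the following sketch deploys known bytecode with CREATE2 and precomputes the deployment address as defined by EIP-1014, so the deployed code can be verified up front; note that it uses assembly {}, which itself triggers the requirements above:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract Deployer {
    // Deploys known bytecode with CREATE2: the resulting address is fixed
    // by (deployer, salt, keccak256(bytecode)) and can be checked up front.
    function deploy(bytes memory bytecode, bytes32 salt) external returns (address addr) {
        assembly {
            addr := create2(0, add(bytecode, 0x20), mload(bytecode), salt)
        }
        require(addr != address(0), "CREATE2 failed");
    }

    // The address a CREATE2 deployment will use, computable in advance.
    function predictAddress(bytes memory bytecode, bytes32 salt) external view returns (address) {
        bytes32 hash = keccak256(
            abi.encodePacked(bytes1(0xff), address(this), salt, keccak256(bytecode))
        );
        return address(uint160(uint256(hash)));
    }
}
```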
[M] No Overflow/Underflow 🔗
Tested code MUST NOT contain calculations that can overflow or underflow, unless such behaviour is intended, documented, and appropriately protected.
This is an Overriding Requirement for [S] No Overflow/Underflow.
There are a few rare use cases where arithmetic overflow or underflow is intended or expected behaviour. It is important that such cases are protected appropriately. Note that these are harder to implement since Solidity compiler version 0.8.0, which introduced overflow protection that causes transactions to revert.
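For example, the following non-normative sketch (names are illustrative) shows an intended, documented wrap-around implemented with an unchecked block in Solidity 0.8 or later:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract NonceSource {
    uint256 public nonce;

    function next() external returns (uint256) {
        // Intended, documented wrap-around: safe here because the nonce
        // is only used for uniqueness, never for accounting.
        unchecked {
            nonce += 1;
        }
        return nonce;
    }
}
```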
See also SWC-101 in [swcregistry].
[M] Document Name Conflicts 🔗
Tested code MUST clearly document the order of inheritance for each function or variable that shares a name with another function or variable.
This is an Overriding Requirement for
[S] No Conflicting Names.
As noted in [S] No Conflicting Names, using the same name for different functions or variables can lead to reviewers misunderstanding code, either inadvertently or due to deliberately malicious code. Explicitly documenting any occurrences of this helps security audits, and makes it clear to others using the code where they need to pay close attention to the scope of variable or function declarations.
See also the related requirement [M] Compiler Bug SOL-2020-2, and the documentation of function inheritance in [solidity-functions].
[M] Sources of Randomness 🔗
Sources of randomness used in Tested Code MUST be
sufficiently resistant to prediction that their purpose is met.
This requirement involves careful evaluation for each specific contract and case. Some uses of randomness rely on no prediction being more accurate than any other. For such cases, values that can be guessed with some accuracy, or controlled by miners or validators, like block difficulty, timestamps, and/or block numbers, introduce a vulnerability. Thus a "strong" source of randomness, such as an oracle service, is necessary.
Other uses are resistant to "good guesses" because using something that is close but wrong provides no more likelihood of gaining an advantage than any other guess.
See also the Related Requirements [S] No Exact Balance Check, [M] Don't Misuse Block Data, and [Q] Protect against MEV Attacks.
[M] Don't Misuse Block Data 🔗
Block numbers and timestamps used in Tested Code MUST NOT introduce vulnerabilities
to MEV or similar attacks.
Block numbers are vulnerable to approximate prediction, although they are generally not reliably precise indicators of elapsed time. block.timestamp is subject to manipulation by malicious actors. It is therefore important that Tested Code does not trust these data to function as if they were highly reliable or random information.
The description of SWC-116 in [swcregistry] includes some code examples of techniques to avoid, for example using block.number / 14 as a proxy for elapsed seconds, or relying on block.timestamp to indicate that a precise time has passed.
For low precision, such as "a few minutes", block.number / 14 > 1000 can be sufficient on mainnet, or on a blockchain with a similarly regular block period of around 14 seconds. But using it to determine that e.g. "exactly 36 seconds" have elapsed fails the requirement.
A contract that relies on a specific block period can introduce serious risks if it is deployed on another blockchain with a very different block frequency. Likewise, because block.timestamp depends on settings that can be manipulated by a malicious node operator, in cases like Ethereum mainnet it is suitable for use as a coarse-grained approximation (on a scale of minutes), but the same code on a different blockchain can be vulnerable to MEV attacks.
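As a non-normative illustration (names and the assumed block period are illustrative and chain-specific), the following sketch uses a difference in block numbers only as a coarse approximation of elapsed time:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract CoarseDeadline {
    uint256 public immutable startBlock = block.number;

    // ASSUMPTION: a roughly 12-second block period; this constant must be
    // reviewed for every chain the contract is deployed on.
    uint256 public constant BLOCKS_PER_DAY = 7200;

    // Acceptable: a coarse, best-effort notion of "about a day later".
    function roughlyADayPassed() public view returns (bool) {
        return block.number - startBlock >= BLOCKS_PER_DAY;
    }

    // NOT acceptable: treating block.number or block.timestamp as a
    // precise clock, e.g. requiring "exactly 36 seconds" to have elapsed.
}
```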
Note that this is related to the use of Oracles, which can also provide inaccurate information.
See also the Related Requirements [S] No Exact Balance Check, [M] Sources of Randomness, and [Q] Protect against MEV Attacks.
[M] Proper Signature Verification 🔗
Tested Code MUST properly verify signatures to ensure authenticity of messages that were signed off-chain.
Some smart contracts process messages that were signed off-chain to increase flexibility, while maintaining authenticity. Smart contracts performing their own signature verification need to verify such messages' authenticity.
When using ecrecover() for signature verification, it is important to validate the address returned against the expected outcome. In particular, a return value of address(0) represents a failure to provide a valid signature.
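For example, the following non-normative sketch (names are illustrative) validates the result of ecrecover() before trusting it:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract SignatureCheck {
    function verify(
        bytes32 digest,
        uint8 v,
        bytes32 r,
        bytes32 s,
        address expectedSigner
    ) public pure returns (bool) {
        address recovered = ecrecover(digest, v, r, s);
        // ecrecover() returns address(0) on failure: without this check,
        // a bogus signature would "verify" when expectedSigner == address(0).
        require(recovered != address(0), "invalid signature");
        return recovered == expectedSigner;
    }
}
```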
See also SWC-122 [swcregistry].
For code that does use ecrecover(), see also the Related Requirement [M] Use a Modern Compiler.
[M] No Improper Usage of Signatures for Replay Attack Protection 🔗
Tested Code using signatures to prevent replay attacks MUST ensure that signatures cannot be reused in a context other than the one intended, unless it meets the Overriding Requirement [Q] Intended Replay. Additionally, Tested Code MUST verify that multiple signatures cannot be created for the same message, as is the case with Malleable Signatures.
In Replay Attacks, an attacker replays correctly signed messages to exploit a system. The signed message needs to include enough identifying information so that its intended setting is well-defined.
Malleable Signatures allow an attacker to create a new signature for the same message. Smart contracts that check against hashes of signatures to ensure that a message has only been processed once could be vulnerable to replay attacks if malleable signatures are used.
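The following non-normative sketch (names are illustrative; production code would typically use EIP-712 structured data and a vetted ECDSA library that also rejects malleable signatures) binds a signature to a chain, a contract, a signer, and a single-use nonce:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract ReplayProtected {
    mapping(address => uint256) public nonces;

    function digestFor(address signer, bytes memory payload) public view returns (bytes32) {
        // Binding the chain id, this contract's address, the signer and a
        // single-use nonce makes the signature valid only here, and only once.
        return keccak256(
            abi.encode(block.chainid, address(this), signer, nonces[signer], payload)
        );
    }

    function consume(address signer, bytes memory payload, uint8 v, bytes32 r, bytes32 s) external {
        address recovered = ecrecover(digestFor(signer, payload), v, r, s);
        require(recovered != address(0) && recovered == signer, "bad signature");
        nonces[signer] += 1; // the same signature can never be replayed
        // A vetted ECDSA library would additionally reject high-s
        // (malleable) signatures before reaching this point.
        // ... perform the authorised action ...
    }
}
```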
Some Solidity compiler bugs described in § 4.1.4 Compiler Bugs have Overriding Requirements at Security Level [M], and some have trigger conditions that are not readily detectable in software.
Implementing the Recommended Good Practice [GP] Use Latest Compiler means that Tested Code passes all requirements in this subsection.
[M] Solidity Compiler Bug 2023-1 🔗
Tested code that contains a compound expression with side effects that uses .selector MUST use the viaIR option with Solidity compiler versions between 0.6.2 and 0.8.20 inclusive.
A bug introduced in Solidity compiler version 0.6.2 and fixed in Solidity compiler version 0.8.21 meant that when compound expressions accessed the .selector member, the expression was not evaluated unless the viaIR pipeline was used. Thus any side effects caused by the expression would not occur.
See also the 19 July 2023 security alert.
[M] Compiler Bug SOL-2022-7 🔗
Tested code that has storage writes followed by conditional early terminations from inline assembly functions containing return() or stop() instructions MUST NOT use a Solidity compiler version between 0.8.13 and 0.8.16 (inclusive).
This is part of the Set of Overriding Requirements for [S] No assembly {}.
A bug fixed in Solidity compiler version 0.8.17 meant that storage writes followed by conditional early terminations from inline assembly functions would sometimes be erroneously dropped during optimization.
See also the 5 September 2022 security alert.
[M] Compiler Bug SOL-2022-5 in assembly {} 🔗
Tested code that copies bytes arrays from calldata or memory whose size is not a multiple of 32 bytes, and has an assembly {} instruction that reads that data without explicitly matching the length that was copied, MUST NOT use a Solidity compiler version older than 0.8.15.
This is part of the Set of Overriding Requirements for [S] No assembly {}.
Until Solidity compiler version 0.8.15, copying memory or calldata whose length is not a multiple of 32 bytes could expose data beyond the data copied, which could be observable using assembly {}.
See also the 15 June 2022 security alert and the related requirements [S] Compiler Bug SOL-2022-5 with .push(), [M] Avoid Common assembly {} Attack Vectors, [M] Document Special Code Use, [M] Compiler Bug SOL-2022-4, and [M] Compiler Bug SOL-2021-3.
[M] Compiler Bug SOL-2022-4 🔗
Tested code that has at least two assembly {} instructions, such that one writes to memory, e.g. by storing a value in a variable, but does not access that memory again, and code in another assembly {} instruction refers to that memory, MUST NOT use the yulOptimizer with Solidity compiler versions 0.8.13 or 0.8.14.
This is part of the Set of Overriding Requirements for [S] No assembly {}.
Solidity compiler version 0.8.13 introduced a yulOptimizer bug, fixed in Solidity compiler version 0.8.15, where memory created in one assembly {} instruction but only read in a different assembly {} instruction was discarded.
See also the 17 June 2022 security alert and the related requirements [M] Avoid Common assembly {} Attack Vectors, [M] Document Special Code Use, [M] Compiler Bug SOL-2022-7, [M] Compiler Bug SOL-2022-5 in assembly {}, and [M] Compiler Bug SOL-2021-3.
[M] Compiler Bug SOL-2021-3 🔗
Tested code that reads an immutable signed integer of a type shorter than 256 bits within an assembly {} instruction MUST NOT use a Solidity compiler version between 0.6.5 and 0.8.8 (inclusive).
This is part of the Set of Overriding Requirements for [S] No assembly {}.
Solidity compiler version 0.6.5 introduced a bug, fixed in Solidity compiler version 0.8.9, that meant immutable signed integer types shorter than 256 bits could be read incorrectly in inline assembly {} instructions.
See also the 29 September 2021 security alert, and the related requirements [M] Avoid Common assembly {} Attack Vectors, [M] Document Special Code Use, [M] Compiler Bug SOL-2022-5 in assembly {}, and [M] Compiler Bug SOL-2022-4.
[M] Compiler Bug Check Constructor Payment 🔗
Tested code that allows payment to a constructor function that is not explicitly defined as payable MUST NOT use a Solidity compiler version between 0.4.5 and 0.6.7 (inclusive).
This is an Overriding Requirement for
[S] Compiler Bug SOL-2020-5.
Solidity compiler versions from 0.4.5 set the expectation that payments to a constructor that was not explicitly denoted as payable would revert. But when the constructor is inherited from a base contract, this reversion does not happen with Solidity compiler versions before 0.6.8.
[M] Use a Modern Compiler 🔗
Tested code MUST NOT use a Solidity compiler version older than 0.6.0, unless it meets the requirements from the EEA EthTrust Security Levels Specification Version 1 that apply as Overriding Requirements.
See also the Related Requirement [S] Use a Modern Compiler, covering Solidity Compiler bugs that require review for Security Level [S].
In addition to automatable static testing verification (Security Level [S]) and a manual audit (Security Level [M]), EEA EthTrust Certification at Security Level [Q] means checking that the intended functionality of the Tested Code is sufficiently well documented that its functional correctness can be verified, and that the code and documentation have been thoroughly reviewed by a human auditor or audit team, carefully enough to identify complex security vulnerabilities, to ensure that they are both internally coherent and consistent with each other.
This level of review is especially relevant for tokens using ERC20 [ERC20], ERC721 [ERC721], and others; [token-standards] identifies a number of other standards that can define tokens.
At this Security Level there are also checks to ensure the code does not contain errors that do not directly impact security, but do impact code quality. Code is often copied, so Security Level [Q] requires code to be as well-written as possible. The risk being addressed is that it is easy, and not uncommon, to introduce weaknesses by copying existing code as a starting point.
[Q] Pass Security Level [M] 🔗
To be eligible for EEA EthTrust certification at Security Level [Q],
Tested code MUST meet the requirements for § 4.2 Security Level [M].
[Q] Code Linting 🔗
Tested code:
- MUST NOT contain unreachable, unused, or redundant code, except for code designed to trap errors, such as assert() statements, and
- MUST declare constructors using the constructor keyword.
Code is often copied from "good examples" as a starting point for development. Code that has achieved Security Level [Q] EEA EthTrust Certification is meant to be high quality, so it is important to ensure that copying it does not encourage bad habits. It is also helpful for review to remove pointless code.
Code designed to trap unexpected errors, such as assert() instructions, is explicitly allowed, because it would be very unfortunate if defensively written code that successfully eliminates the possibility of triggering a particular error could not achieve EEA EthTrust Certification.
[Q] Manage Gas Use Increases 🔗
Sufficient Gas MUST be available to work with data structures in the Tested Code
that grow over time, in accordance with descriptions provided for
[Q] Document Contract Logic.
Some structures such as arrays can grow, and the value of variables is (by design) variable. Iterating over a structure whose size is not clear in advance, whether an array that grows, a bound that changes, or something determined by an external value, can result in significant increases in gas usage.
What is reasonable growth to expect needs to be considered in the context of the business logic intended, and how the Tested Code protects against Gas Griefing attacks, where malicious actors or errors result in values occurring beyond the expected reasonable range(s).
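For example, the following non-normative sketch (names are illustrative) contrasts an unbounded loop, whose gas use grows with the data structure, with a bounded batch that keeps gas use predictable:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract Payouts {
    address payable[] public recipients;

    function register() external {
        recipients.push(payable(msg.sender));
    }

    // AVOID: a loop over the whole array uses gas proportional to
    // recipients.length, and eventually exceeds the block gas limit,
    // making the function unusable.

    // PREFER: bounded batches (or a pull-payment pattern) keep gas use
    // predictable however large the array grows.
    function payBatch(uint256 start, uint256 count, uint256 amount) external payable {
        uint256 end = start + count;
        if (end > recipients.length) end = recipients.length;
        for (uint256 i = start; i < end; i++) {
            (bool ok, ) = recipients[i].call{value: amount}("");
            require(ok, "payment failed");
        }
    }
}
```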
See also SWC-126, SWC-128 [swcregistry] and the Related Requirements in § 4.3.1 Documentation requirements.
[Q] Protect Gas Usage 🔗
Tested Code MUST protect against malicious actors stealing or wasting gas.
Smart contracts allowing "gasless" transactions enable users to submit transactions without having to supply their own gas. They need to be carefully implemented to prevent Denial of Service from Gas Griefing and Gas Siphoning attacks.
See also The Gas Siphon Attack: How it Happened and How to Protect Yourself from the DevCon 2019 talk [DevCon-siphoning].
[Q] Protect against Oracle Failure 🔗
Tested Code MUST protect itself against malfunctions in Oracles it relies on.
Some Oracles are known to be vulnerable to manipulation, for example because they derive the information they provide from information vulnerable to Read-only Re-entrancy Attacks, or manipulation of prices through the use of flashloans to enable an MEV attack, among other well-known attacks.
In addition, as networked software Oracles can potentially suffer problems ranging from latency issues to outright failure, or being discontinued.
It is important to check the mechanism used by an Oracle to generate the information it provides, and the potential exposure of Tested Code that relies on that Oracle to the effects of it failing, or of malicious actors manipulating its inputs or code to enable attacks.
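As a non-normative illustration (the interface shown is a simplified Chainlink-style aggregator; names and the staleness tolerance are illustrative), the following sketch checks an oracle answer for plausibility and staleness instead of trusting it blindly:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

// A Chainlink-style aggregator interface (illustrative subset).
interface IAggregator {
    function latestRoundData()
        external
        view
        returns (uint80 roundId, int256 answer, uint256 startedAt, uint256 updatedAt, uint80 answeredInRound);
}

contract OracleConsumer {
    IAggregator public immutable feed;
    uint256 public constant MAX_AGE = 1 hours; // staleness tolerance (illustrative)

    constructor(IAggregator feed_) {
        feed = feed_;
    }

    function safePrice() external view returns (uint256) {
        (, int256 answer, , uint256 updatedAt, ) = feed.latestRoundData();
        // Guard against a malfunctioning or stale oracle rather than
        // assuming the value is always fresh and sane.
        require(answer > 0, "oracle: bad answer");
        require(block.timestamp - updatedAt <= MAX_AGE, "oracle: stale");
        return uint256(answer);
    }
}
```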
See also the Related Requirements [Q] Protect against Front-running, and [Q] Protect against MEV Attacks.
[Q] Protect against Front-Running 🔗
Tested Code MUST NOT require information
in a form that can be used to enable a Front-Running attack.
In Front-Running attacks, an attacker places their transaction in front of a victim's. This can be done by a malicious miner or by an attacker monitoring the mempool, and preempting susceptible transactions by broadcasting their own transactions with higher transaction fees. Removing incentives to front-run generally means applying mitigations such as hash commitment schemes [hash-commit] or batch execution.
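For example, the following non-normative sketch (names are illustrative) of a hash commitment scheme puts only a commitment on-chain, so mempool observers learn nothing they can front-run:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract CommitReveal {
    mapping(address => bytes32) public commitments;

    // Phase 1: only a hash goes on-chain; the choice and salt stay secret.
    function commit(bytes32 commitment) external {
        commitments[msg.sender] = commitment;
    }

    // Phase 2: reveal once the commit phase has closed.
    function reveal(uint256 choice, bytes32 salt) external {
        bytes32 expected = keccak256(abi.encode(msg.sender, choice, salt));
        require(commitments[msg.sender] == expected, "bad reveal");
        delete commitments[msg.sender];
        // ... act on `choice` ...
    }
}
```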
See also the Related Requirement [Q] Protect against MEV Attacks.
[Q] Protect against MEV Attacks 🔗
Tested Code that is susceptible to MEV attacks MUST follow appropriate
design patterns to mitigate this risk.
MEV refers to the potential that a block producer can maliciously reorder or suppress transactions, or another participant in a blockchain can propose a transaction or take other action to gain a benefit that was not intended to be available to them.
This requirement entails a careful judgement by the auditor, of how the Tested Code is vulnerable to MEV attacks, and what mitigation strategies are appropriate. Some approaches are discussed further in § 3.7 MEV (Maliciously Extracted Value).
Many attack types need to be considered, including at least Censorship Attacks, Future Block Attacks, and Timing Attacks (Front-Running, Back-Running, and Sandwich Attacks).
See also the Related Requirements [S] No Exact Balance Check, [M] Sources of Randomness, [M] Don't Misuse Block Data, [Q] Protect against Oracle Failure, and [Q] Protect against Front-Running.
[Q] Protect Against Governance Takeovers 🔗
Tested Code which includes a governance system MUST protect against one external
entity taking control via exploit of the governance design.
Governance attacks are specific to the system that is exploited. Depending on the governance proposal system, several areas of the design may be vulnerable.
For example, if a staking contract is used to distribute governance tokens as a reward, it is important that the staking contract is not vulnerable to a Flash Loan Attack, where a large amount of tokens are borrowed in a very short-term flash loan, then staked atomically to gain a temporary majority of governance tokens, that are then used to make a governance decision, such as draining all the funds held to an attacker's wallet.
[Q] Process All Inputs 🔗
Tested Code MUST validate inputs, and function correctly whether the input
is as designed or malformed.
Code that fails to validate inputs runs the risk of being subverted through maliciously crafted input that can trigger a bug, or behaviour the authors did not anticipate.
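For example, the following non-normative sketch (names are illustrative) rejects input outside its designed range instead of storing a malformed value:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract FeeConfig {
    address public immutable owner = msg.sender;
    uint256 public feeBps;

    function setFee(uint256 newFeeBps) external {
        require(msg.sender == owner, "not authorised");
        // Validate the input against the designed range, rather than
        // accepting a malformed value and misbehaving later:
        require(newFeeBps <= 1_000, "fee above 10%");
        feeBps = newFeeBps;
    }
}
```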
See also SWC-123 [swcregistry] which notes that it is important to consider whether input requirements are too strict, as well as too lax, [CWE-573] Improper Following of Specification by Caller, and note that there are several Related Requirements that are specific to particular Solidity compiler versions in § 4.1.4 Compiler Bugs.
[Q] State Changes Trigger Events 🔗
Tested code MUST emit a contract event for all transactions that cause state changes.
Events are convenience interfaces that give an abstraction on top of the EVM's logging functionality. Applications can subscribe and listen to these events through the RPC interface of an Ethereum client. See more at [solidity-events].
Events are generally expected to be used for logging all state changes as they are not just useful for off-chain applications but also security monitoring and debugging. Logging all state changes in a contract ensures that any developers interacting with the contract are made aware of every state change as part of the ABI and can understand expected behavior through event annotations, as per [Q] Annotate Code with NatSpec.
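For example, the following non-normative sketch (names are illustrative) emits an event for every state change, so off-chain applications and security monitoring can track the contract's full history:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract OwnedWithEvents {
    address public owner;

    // Emitted on every ownership change, including at deployment.
    event OwnershipTransferred(address indexed previousOwner, address indexed newOwner);

    constructor() {
        owner = msg.sender;
        emit OwnershipTransferred(address(0), msg.sender);
    }

    function transferOwnership(address newOwner) external {
        require(msg.sender == owner, "not authorised");
        emit OwnershipTransferred(owner, newOwner);
        owner = newOwner;
    }
}
```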
[Q] No Private Data 🔗
Tested code MUST NOT store Private Data on the blockchain.
This is a Security Level [Q] requirement primarily because the question of what is private data often requires careful and thoughtful assessment and a reasoned understanding of context. In general, this is likely to include an assessment of how the data is gathered, and what the providers of data are told about the usage of the information.
Private Data is used in this specification to refer to information that is not intended to be generally available to the public. For example, an individual's home telephone number is generally private data, while a business' customer enquiries telephone number is generally not private data. Similarly, information identifying a person's account is normally private data, but there are circumstances where it is public data. In such cases, that public data can be recorded on-chain in conformance with this requirement.
PLEASE NOTE: In some cases regulation such as the [GDPR] imposes formal legal requirements on some private data. However, performing a test for this requirement results in an expert technical opinion on whether data that the auditor considers private is exposed. A statement about whether Tested Code meets this requirement does not represent any form of legal advice or opinion, attorney representation, or the like.
[Q] Intended Replay 🔗
If a signature within the Tested Code can be reused, the replay instance MUST be intended, documented,
and safe for re-use.
This is an Overriding Requirement for [M] No Improper Usage of Signatures for Replay Attack Protection.
In some rare instances, it may be the intention of the Tested Code to allow signatures to be replayed. For example, a signature may be used as permission to participate in a whitelist for a given period of time. In these exceptional cases, the replay must be included in documentation as a known allowance. Further, it must be verified that the reuse cannot be exploited.
Security Level [Q] conformance requires a detailed description of how the Tested Code is intended to behave. Alongside detailed testing requirements to check that it does behave as described with regard to specific known vulnerabilities, it is important that the claims made for it are accurate. This requirement helps ensure that the Tested Code fulfils claims made for it outside audit-specific documentation.
The combination of these requirements helps ensure there is no malicious code, such as malicious "back doors" or "time bombs" hidden in the Tested Code. Since there are legitimate use cases for code that behaves as e.g. a time bomb, or "phones home", this combination helps ensure that testing focuses on real problems.
The requirements in this section extend the coverage required to meet the Security Level [M] requirement [M] Document Special Code Use. As with that requirement, there are multiple requirements at this level that require the documentation mandated in this subsection.
[Q] Document Contract Logic 🔗
A specification of the business logic that the Tested code functionality is intended
to implement MUST be available to anyone who can call the Tested Code.
Contract Logic documented in a human-readable format and with enough detail that functional correctness and safety assumptions for special code use can be validated by auditors helps them assess complex code more efficiently and with higher confidence.
It is important to document how the logic protects against potential attacks such as Flash Loan attacks (especially on governance or price manipulation), MEV, and other complex attacks that take advantage of ecosystem features or tokenomics.
[Q] Document System Architecture 🔗
Documentation of the system architecture for the Tested code MUST be provided that conveys the overall system design, privileged roles, security assumptions and intended usage.
System documentation provides auditor(s) information to understand security assumptions and ensure functional correctness. It is helpful if system documentation is included or referenced in the README file of the code repository, alongside documentation for how the source code can be tested, built and deployed.
See also the Related Requirement [Q] Annotate Code with NatSpec.
[Q] Annotate Code with NatSpec 🔗
All Public Interfaces contained in the Tested code MUST be annotated with inline
comments according to the [NatSpec] format that explain the intent behind each function, parameter,
event, and return variable, along with developer notes for safe usage.
Inline comments are important to ensure that developers and auditors understand the intent behind each function and other code components. Public Interfaces means anything that would be contained in the ABI of the compiled Tested code. It is also recommended to use inline comments for private or internal functions that implement sensitive and/or complex logic.
Following the [NatSpec] format allows these inline comments to be understood by the Solidity compiler for extracting them into a machine-readable format that could be used by other third-party tools for security assessments and automatic documentation, including documentation shown to users by wallets that integrate with source code verification tools like Sourcify. This could also be used to generate specifications that fully or partially satisfy the Requirement to [Q] Document Contract Logic.
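The following non-normative sketch (names are illustrative) shows [NatSpec] annotations on a public function and an event:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract Vault {
    mapping(address => uint256) private balances;

    /// @notice Emitted on every deposit.
    /// @param from The depositor.
    /// @param amount The wei deposited.
    event Deposited(address indexed from, uint256 amount);

    /// @notice Accepts deposits of Ether for the caller.
    function deposit() external payable {
        balances[msg.sender] += msg.value;
        emit Deposited(msg.sender, msg.value);
    }

    /// @notice Withdraws `amount` wei from the caller's balance.
    /// @dev Follows checks-effects-interactions; reverts on failed send.
    /// @param amount The number of wei to withdraw.
    /// @return remaining The caller's balance after the withdrawal.
    function withdraw(uint256 amount) external returns (uint256 remaining) {
        require(balances[msg.sender] >= amount, "insufficient balance");
        balances[msg.sender] -= amount;
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "send failed");
        return balances[msg.sender];
    }
}
```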
[Q] Implement as Documented 🔗
The Tested code MUST behave as described in the documentation provided for
[Q] Document Contract Logic, and
[Q] Document System Architecture.
The requirements at Security Level [Q] to provide documentation are important. However, it is also crucial that the Tested Code actually behaves as documented. If it does not, it is possible that this reflects insufficient care and that the code is also vulnerable due to bugs that were missed in implementation. It is also possible that the difference is an attempt to hide malicious code in the Tested Code.
[Q] Enforce Least Privilege 🔗
Tested code that enables privileged access MUST implement appropriate access control mechanisms that provide the least privilege necessary for those interactions,
based on the documentation provided for
[Q] Document Contract Logic.
This is an Overriding Requirement for
[M] Protect Self-destruction.
There are several common methods to implement access control, such as Role-Based Access Control [RBAC] and [Ownable], and bespoke access control is often implemented for a given use case. Using industry-standard methods can help simplify the process of auditing, but is not sufficient to determine that there are no risks arising either from errors in implementation or due to a maliciously-crafted contract.
It is important to consider access control at both the protocol operation and deployment levels.
If a protocol is deployed in a deterministic manner, for example allowing a multi-chain deployment to have the same address across all chains, it is important to explicitly set an owner rather than defaulting to msg.sender, as defaulting may leave a simple factory deployment contract as the unintended new admin of the protocol.
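For example, the following non-normative sketch (names are illustrative) takes the owner as an explicit constructor argument rather than defaulting to msg.sender:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract Admin {
    address public owner;

    // PREFER: take the owner explicitly. With a deterministic deployment
    // (e.g. via a CREATE2 factory), msg.sender is the factory contract,
    // which would otherwise become the accidental admin.
    constructor(address owner_) {
        require(owner_ != address(0), "owner unset");
        owner = owner_;
    }
}
```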
It is particularly important that appropriate access control applies to payments, as noted in SWC-105, but other actions such as overwriting data as described in SWC-124, or changing specific access controls, also need to be appropriately protected [swcregistry]. This requirement matches [CWE-284] Improper Access Control.
See also "Access Restriction" in [solidity-patterns].
[Q] Use Revocable and Transferable Access Control Permissions 🔗
If the Tested code makes use of Access Control for privileged actions, it MUST implement a mechanism to revoke and transfer those permissions.
Privileged Accounts can perform administrative tasks on the Set of Contracts. If those accounts are compromised or responsibility to perform those tasks is assigned to different people, it is important to have a mechanism to revoke and transfer those permissions.
[Q] No Single Admin EOA for Privileged Actions 🔗
If the Tested code makes use of Access Control for privileged actions, it MUST ensure that all critical administrative tasks require multiple signatures to be executed, unless there is a multisig admin that has greater privileges and can revoke permissions in case of a compromised or rogue EOA and reverse any adverse action the EOA has taken.
Privileged accounts can perform administrative tasks on the Set of Contracts. If a single EOA can perform these actions, and that permission cannot be revoked, the risks to a Smart Contract posed by a compromised or lost private key can be existential.
[Q] Verify External Calls 🔗
Tested Code that contains external calls MUST only make calls to code that the auditor has verified is not a security risk.
This is part of a Set of Overriding Requirements for [S] Use Check-Effects-Interaction, and for [M] Protect External Calls.
At Security Level [Q] auditors have a lot of flexibility to offer EEA EthTrust Certification for different uses of External Calls.
This requirement effectively allows a reviewer to declare that the destination of an external call is not a security risk. It is important to note that any such declaration reflects very closely on the reputation of a reviewer.
It is inappropriate to assume that a smart contract is secure just because it is widely used, and it is unacceptable to assume that a smart contract provided by a user in the future will be secure - this is a known vector that has been used for many serious security breaches.
It is also important to consider how any code referenced and declared safe by the reviewer could be vulnerable to attacks based on its use of external calls.
To take a common example, swap contracts that allow a user to provide any pair of token contracts are potentially at risk if one of those contracts is malicious, or simply vulnerable, in a way the swap contract does not anticipate and protect against.
See also the related requirements [Q] Document Contract Logic, [Q] Document System Architecture, and [Q] Implement as Documented.
[Q] Verify tx.origin Usage 🔗
For Tested Code that uses tx.origin, each instance MUST be verified as a safe and appropriate use.
This is an Overriding Requirement for [S] No tx.origin.
tx.origin can be used to enable phishing attacks, tricking a user into interacting with a contract that gains access to all the funds in their account. It is generally the wrong choice for authorisation of a caller, for which msg.sender is the safer choice.
See also the Related Requirements [Q] Document Contract Logic and [Q] Enforce Least Privilege, the section "tx.origin" in Solidity Security Considerations [solidity-security], and CWE-284: Improper Access Control [CWE-284].
This section describes good practices that require substantial human judgement to evaluate. Testing for and meeting these requirements does not directly affect conformance to this document. Note however that meeting the Recommended Good Practice [GP] Meet as Many Requirements as Possible will in practice mean that Tested Code meets all the Requirements based on Compiler Bugs, including the majority of Requirements for Security Level [S].
[GP] Check For and Address New Security Bugs 🔗
Check [solidity-bugs-json] and other sources for bugs announced after 1 November 2023
and address them.
This version of the specification was finalized late in 2023. New vulnerabilities are discovered from time to time, on an unpredictable schedule. The latest Solidity compiler bug accounted for in this version is SOL-2023-3.
Checking for security alerts published too late to be incorporated into the current version of this document is an important technique for maintaining the highest possible security.
There are other sources of information on new security vulnerabilities, from [CWE] to following the blogs of many security-oriented organizations such as those that contributed to this specification.
[GP] Meet as Many Requirements as Possible 🔗
The Tested Code SHOULD meet as many requirements of this specification as possible
at Security Levels above the Security Level for which it is certified.
While meeting some requirements for a higher EEA EthTrust certification Security Level makes no change to the formal conformance level of the Tested Code, each requirement is specified because meeting it provides protection against specific known attacks. If it is possible to meet a particular requirement, even if it is not necessary for conformance at the Security Level being tested, meeting that requirement will improve the security of the Tested Code and is therefore worth doing.
[GP] Use Latest Compiler 🔗
The Tested Code SHOULD use the latest available stable Solidity compiler version.
The Solidity compiler is regularly updated to improve performance but also specifically to fix security vulnerabilities that are discovered. There are many requirements in § 4.1.4 Compiler Bugs that are related to vulnerabilities known at the time this specification was written, as well as enhancements made to provide better security by default. In general, newer Solidity compiler versions improve security, so unless there is a specific known reason not to do so, using the latest Solidity compiler version available will result in better security.
[GP] Write Clear, Legible Solidity Code 🔗
The Tested Code SHOULD be written for easy understanding.
There are no strict rules defining how to write clear code. It is important to use sufficiently descriptive names, comment code appropriately, and use structures that are easy to understand without causing the code to become excessively large, because that also makes it difficult to read and understand.
Excessive nesting, unstructured comments, complex looping structures, and the use of very terse names for variables and functions are examples of coding styles that can also make code harder to understand.
It is important to note that in some cases, developers can sacrifice easy reading for other benefits such as reducing gas costs - this can be mitigated somewhat by comments in the code.
Likewise, for complex code involving multiple individual smart contracts, the way source is organised into files can help clarify or obscure what's happening. In particular, naming source code files to match the names of smart contracts they define is a common pattern that eases understanding.
This Good Practice extends somewhat the Related Requirement [Q] Code Linting, but judgements about how to meet it are necessarily more subjective than in the specifics that requirement establishes. Those looking for additional guidance on code styling can refer to the [Solidity-Style-Guide].
[GP] Follow Accepted ERC Standards 🔗
The Tested Code SHOULD conform to finalized [ERC] standards when it is
reasonably capable of doing so for its use-case.
An ERC is a category of [EIP] (Ethereum Improvement Proposal) that defines application-level standards and conventions, including smart contract standards such as token standards [ERC20] and name registries [ERC137].
While following ERC standards will not inherently make Solidity code secure, they do enable developers to integrate with common interfaces and follow known conventions for expected behavior. If the Tested Code does claim to follow a given ERC, its functional correctness in conforming to that standard can be verified by auditors.
[GP] Define a Software License 🔗
The Tested Code SHOULD define a software license.
A software license provides legal guidance on how contributors and users can interact with the code, including auditors and whitehats. Because bytecode deployed to public networks can be read by anyone, it is common practice to use an Open-Source license for the Solidity code used to generate it.
It is important to choose a [software-license] that best addresses the needs of the project, and clearly link to it throughout the Tested Code and documentation, e.g. using a prominent LICENSE file in the code repository and referencing it from each source file.
[GP] Disclose New Vulnerabilities Responsibly 🔗
Security vulnerabilities that are not addressed by this specification
SHOULD be brought to the attention of the Working Group
and others through responsible disclosure as described in
§ 1.4 Feedback and new vulnerabilities.
New security vulnerabilities are discovered from time to time. It helps the efforts to revise this specification to ensure the Working Group is aware of new vulnerabilities, or new knowledge regarding existing known vulnerabilities.
The EEA has agreed to manage the specific email address [email protected] for such notifications.
[GP] Use Fuzzing 🔗
Fuzzing SHOULD be used to probe Tested Code for errors.
Fuzzing is an automated software testing method that repeatedly activates a contract, using a variety of invalid, malformed, or unexpected inputs, to reveal defects and potential security vulnerabilities.
Fuzzing can take days or even weeks: it is better to be patient than to stop it prematurely.
Fuzzing relies on a Corpus: a set of inputs for a fuzzing target. It is important to maintain the Corpus to maximise code coverage, and helpful to prune unnecessary or duplicate inputs for efficiency.
Many tools and input mutation methods can help to build the Corpus for fuzzing. Good practice is to build on and leverage community resources where possible, always checking licensing restrictions.
Another important part of fuzzing is the set of specification rules that is checked throughout the fuzzing process. While the Corpus is the set of inputs for fuzzing targets, the specification rules are business-logic checks created specifically for fuzzing and evaluated for each fuzzing input.
For a meaningful and efficient fuzzing campaign, it is not enough to send a large amount of random input to the contracts. An additional set of rules around the contracts should be present, so that it is triggered if fuzzing finds an edge case. The process should not rely only on the checks and reverts already within the contracts and the compiler.
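For example, the following non-normative sketch (names are illustrative) shows an Echidna-style property: a function whose name starts with echidna_ and returns a bool, which the fuzzer evaluates after every call it makes with arbitrary inputs:

```solidity
// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

contract MiniToken {
    mapping(address => uint256) public balanceOf;
    uint256 public immutable totalSupply = 1_000_000e18;

    constructor() {
        balanceOf[msg.sender] = totalSupply;
    }

    function transfer(address to, uint256 amount) external {
        require(balanceOf[msg.sender] >= amount, "insufficient");
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
    }
}

// Echidna-style harness: the fuzzer deploys this contract, calls transfer()
// with arbitrary inputs in arbitrary order, and checks every echidna_
// property after each call. The property encodes a business-logic rule,
// rather than relying on reverts inside the contract itself.
contract MiniTokenFuzz is MiniToken {
    function echidna_balance_never_exceeds_supply() public view returns (bool) {
        return balanceOf[msg.sender] <= totalSupply;
    }
}
```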
As shown above, fuzzing rules and properties can be complex and may depend on specific contracts, functions, variables, their values before and/or after execution, and potentially many other things depending on the fuzzing technology and specification language of choice. If any vulnerabilities are discovered in the Solidity compiler by fuzzing, please disclose them responsibly.
[GP] Use Formal Verification 🔗
The Tested Code SHOULD undergo formal verification.
Formal verification is a family of techniques that can mathematically prove functional correctness of smart contracts. It has been used in other applications such as embedded systems. There are many uses for formal verification in smart contracts, such as testing liveness, protocol invariants for safety at a high level, or proving narrower, more specific properties of a program's execution.
In formal verification, a formal (symbolic or mathematical) specification of the expected or desired outcome of a smart contract is created, enabling a formal mathematical proof of a protocol's correctness. The smart contract itself is often translated into a formal language for this purpose.
Several languages and programs exist for creating formal verification proofs, some with the explicit aim of making formal verification more accessible to casual users and non-mathematicians. Please see [EF-SL] for some examples.
When implemented correctly by a practitioner with experience and skill, formal verification can make guarantees that fuzzing and testing cannot provide. However, that is often difficult to achieve in practice. Formal verification requires substantial manual labor and expertise.
A comprehensive formal verification most likely has a much higher cost and complexity than unit or integration testing, fuzzing, or other methods. Nevertheless, the immutable nature of many smart contracts, and the complexity of upgrading contracts when that is possible, make formal verification appealing to administrators and stakeholders of protocols.
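As a small taste of what such tools do, the Solidity compiler's built-in SMTChecker can attempt to prove simple assert() properties automatically; the contract below is a contrived illustration, not a substitute for a full formal specification.

// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

// Run with the model checker enabled, for example:
//   solc --model-checker-engine chc --model-checker-targets assert Counter.sol
// The checker attempts a mathematical proof that the assert() holds in every
// reachable state, rather than sampling inputs as fuzzing does.
contract Counter {
    uint256 private total;

    function add(uint128 amount) external {
        uint256 previousTotal = total;
        total += amount; // reverts on overflow in Solidity >= 0.8.0
        assert(total >= previousTotal); // proven to hold whenever it is reached
    }
}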
[GP] Select an Appropriate Threshold for Multisig Wallets 🔗
Multisignature requirements for privileged actions SHOULD have a sufficient number of signers, and NOT require "1 of N" nor all signatures.
Requiring multiple signatures for administrative actions has become the standard for many teams. When not managed carefully, however, multisignature setups can become a source of attack even if the smart contract code is secure.
The problem with "1 of N" setups, which enable a single account to execute transactions, is that they are relatively easy to exploit. "N of N" setups, meanwhile, mean that if even one signer loses access to their account or will not approve an action, there is no possibility of approval. This can block necessary operations such as the replacement of one signer with another, for example to ensure operational continuity, which can have a very serious impact.
Choosing a lower number of signatures to meet the requirement allows for quicker response, while a higher value requires stronger majority support. Consider using an "M of N" multisignature where M = (N/2) + 1, in other words, the smallest possible majority of signatures are necessary for approval, as a starting point. However it is important to consider how many potential signers there are, and the specific situations where signatures are needed, to determine a reasonably good value for M in a given case.
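As a sketch of that arithmetic, the starting point M = (N/2) + 1 is a one-line computation with integer division; the helper below is illustrative only, not a complete multisignature implementation.

// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

// Illustrative helper: the smallest strict majority of N signers.
library MultisigThreshold {
    function suggestedThreshold(uint256 signerCount) internal pure returns (uint256) {
        // With fewer than 3 signers a majority means all signatures,
        // which this Good Practice advises against.
        require(signerCount >= 3, "too few signers for M of N");
        uint256 m = signerCount / 2 + 1; // integer division: 4 -> 3, 5 -> 3, 7 -> 4
        return m; // always satisfies 1 < m < signerCount
    }
}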
[GP] Use TimeLock Delays for Sensitive Operations 🔗
Sensitive operations that affect all or a majority of users SHOULD use [TimeLock] delays.
Sensitive operations, such as upgrades and [RBAC] changes, impact all or a majority of users in the protocol. A [TimeLock] delay allows users to exit the system if they disagree with the proposed change, and allows developers to react if they detect a suspicious change.
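A minimal sketch of the pattern follows; the names and the two-day delay are illustrative, and production systems typically use a well-audited timelock implementation rather than writing their own.

// SPDX-License-Identifier: Apache-2.0
pragma solidity ^0.8.20;

// Illustrative timelock: sensitive actions are queued and announced via an
// event, and can only be executed after DELAY has elapsed, giving users time
// to exit and developers time to react to a suspicious change.
contract SimpleTimelock {
    uint256 public constant DELAY = 2 days;
    address public immutable admin;
    mapping(bytes32 => uint256) public readyAt; // action hash => earliest execution time

    event Queued(bytes32 indexed actionHash, uint256 executableAt);

    constructor() {
        admin = msg.sender;
    }

    function queue(address target, bytes calldata data) external {
        require(msg.sender == admin, "not admin");
        bytes32 actionHash = keccak256(abi.encode(target, data));
        readyAt[actionHash] = block.timestamp + DELAY;
        emit Queued(actionHash, readyAt[actionHash]);
    }

    function execute(address target, bytes calldata data) external {
        require(msg.sender == admin, "not admin");
        bytes32 actionHash = keccak256(abi.encode(target, data));
        require(readyAt[actionHash] != 0, "not queued");
        require(block.timestamp >= readyAt[actionHash], "delay not elapsed");
        delete readyAt[actionHash];
        (bool ok, ) = target.call(data);
        require(ok, "action failed");
    }
}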
This section provides a summary of all requirements and recommended good practices in this Specification.
[S] Encode Hashes with chainid 🔗
Tested code MUST create hashes for transactions that incorporate chainid values, following the recommendation described in [EIP-155].
[S] No CREATE2 🔗
Tested code MUST NOT contain a CREATE2 instruction, unless it meets the Set of Overriding Requirements [M] Protect CREATE2 Calls and [M] Document Special Code Use.
[S] No tx.origin 🔗
Tested code MUST NOT contain a tx.origin instruction, unless it meets the Overriding Requirement [Q] Verify tx.origin Usage.
[S] No Exact Balance Check 🔗
Tested code MUST NOT test that the balance of an account is exactly equal to (i.e. ==) a specified amount or the value of a variable, unless it meets the Overriding Requirement [M] Verify Exact Balance Checks.
[S] No Conflicting Names 🔗
Tested code MUST NOT include more than one variable, or operative function with different code, with the same name, unless it meets the Overriding Requirement [M] Document Name Conflicts.
[S] No Hashing Consecutive Variable Length Arguments 🔗
Tested Code MUST NOT use abi.encodePacked() with consecutive variable length arguments.
[S] No selfdestruct() 🔗
Tested code MUST NOT contain the selfdestruct() instruction or its now-deprecated alias suicide(), unless it meets the Set of Overriding Requirements [M] Protect Self-destruction and [M] Document Special Code Use.
[S] No assembly {} 🔗
Tested Code MUST NOT contain the assembly {} instruction, unless it meets the Set of Overriding Requirements [M] Avoid Common assembly {} Attack Vectors, [M] Document Special Code Use, [M] Compiler Bug SOL-2022-7, [M] Compiler Bug SOL-2022-5 in assembly {}, [M] Compiler Bug SOL-2022-4, and [M] Compiler Bug SOL-2021-3.
[S] No Unicode Direction Control Characters 🔗
Tested code MUST NOT contain any of the Unicode Direction Control Characters U+2066, U+2067, U+2068, U+2069, U+202A, U+202B, U+202C, U+202D, or U+202E, unless it meets the Overriding Requirement [M] No Unnecessary Unicode Controls.
[S] Check External Calls Return 🔗
Tested Code that makes external calls using the Low-level Call Functions (i.e. call(), delegatecall(), staticcall(), and send()) MUST check the returned value from each usage to determine whether the call failed, unless it meets the Overriding Requirement [M] Handle External Call Returns.
[S] Use Check-Effects-Interaction 🔗
Tested code that makes external calls MUST use the Checks-Effects-Interactions pattern to protect against Re-entrancy Attacks, unless it meets the Set of Overriding Requirements including [M] Protect External Calls, or it meets the Set of Overriding Requirements including [Q] Verify External Calls.
[S] No delegatecall() 🔗
Tested Code MUST NOT contain the delegatecall() instruction, unless it meets the Set of Overriding Requirements [M] Protect External Calls and [M] Document Special Code Use.
[S] No Overflow/Underflow 🔗
Tested code MUST NOT use a Solidity compiler version older than 0.8.0, unless it meets the Set of Overriding Requirements, including [M] No Overflow/Underflow.
[S] Compiler Bug SOL-2023-3 🔗
Tested code that includes Yul code and uses the verbatim instruction twice, in each case surrounded by identical code, MUST disable the Block Deduplicator when using a Solidity compiler version between 0.8.5 and 0.8.22 (inclusive).
[S] Compiler Bug SOL-2022-6 🔗
Tested code that ABI-encodes a tuple (including a struct, return value, or a parameter list) that includes a dynamic component with the ABIEncoderV2, and whose last element is a calldata static array of base type uint or bytes32, MUST NOT use a Solidity compiler version between 0.5.8 and 0.8.15 (inclusive).
[S] Compiler Bug SOL-2022-5 with .push() 🔗
Tested code that copies bytes arrays from calldata or memory whose size is not a multiple of 32 bytes, and has an empty .push() instruction that writes to the resulting array, MUST NOT use a Solidity compiler version older than 0.8.15.
[S] Compiler Bug SOL-2022-3 🔗
Tested code that uses memory and calldata pointers for the same function, and changes the data location of a function during inheritance, MUST NOT use a Solidity compiler version between 0.6.9 and 0.8.13 (inclusive).
[S] Compiler Bug SOL-2022-2 🔗
Tested code with a nested array that passes it to an external function, or uses abi.encode() on it, MUST NOT use a Solidity compiler version between 0.6.9 and 0.8.13 (inclusive).
[S] Compiler Bug SOL-2022-1 🔗
Tested code that uses literals with a bytesNN type shorter than 32 bytes, or literals cast to a bytesNN type, and passes such literals to abi.encodeCall() as the first parameter, MUST NOT use Solidity compiler version 0.8.11 nor 0.8.12.
[S] Compiler Bug SOL-2021-4 🔗
Tested Code that uses custom value types shorter than 32 bytes MUST NOT use Solidity compiler version 0.8.8.
[S] Compiler Bug SOL-2021-2 🔗
Tested code that uses abi.decode() on byte arrays as memory MUST NOT use the ABIEncoderV2 with a Solidity compiler version between 0.4.16 and 0.8.3 (inclusive).
[S] Compiler Bug SOL-2021-1 🔗
Tested code that has 2 or more occurrences of an instruction keccak(mem,length), where the values of mem are equal but the values of length differ, MUST NOT use the Optimizer with a Solidity compiler version older than 0.8.3.
[S] Compiler Bug SOL-2020-11-push 🔗
Tested code that copies an empty byte array to storage, and subsequently increases the size of the array using push(), MUST NOT use a Solidity compiler version older than 0.7.4.
[S] Compiler Bug SOL-2020-10 🔗
Tested code that copies an array of types shorter than 16 bytes to a longer array
MUST NOT use a Solidity compiler version older than 0.7.3.
[S] Compiler Bug SOL-2020-9 🔗
Tested code that defines Free Functions MUST NOT use Solidity compiler version 0.7.1.
[S] Compiler Bug SOL-2020-8 🔗
Tested code that calls internal library functions with calldata
parameters
called via using for
MUST NOT use Solidity compiler version 0.6.9.
[S] Compiler Bug SOL-2020-6 🔗
Tested code that accesses an array slice using an expression for the starting index
that can evaluate to a value other than zero
MUST NOT use the ABIEncoderV2 with a Solidity compiler version between 0.6.0 and 0.6.7 (inclusive).
[S] Compiler Bug SOL-2020-7 🔗
Tested code that passes a string literal containing two consecutive backslash ("\")
characters to an encoding function or an external call
MUST NOT use the ABIEncoderV2 with a Solidity compiler version between 0.5.14 and 0.6.7 (inclusive).
[S] Compiler Bug SOL-2020-5 🔗
Tested code that defines a contract that does not include a constructor, but has a base contract that defines a constructor not defined as payable, MUST NOT use a Solidity compiler version between 0.4.5 and 0.6.7 (inclusive), unless it meets the Overriding Requirement [M] Check Constructor Payment.
[S] Compiler Bug SOL-2020-4 🔗
Tested code that makes assignments to tuples with components that occupy several stack slots (i.e. nested tuples, pointers to external functions, or references to a dynamically-sized calldata array) MUST NOT use a Solidity compiler version older than 0.6.4.
[S] Compiler Bug SOL-2020-3 🔗
Tested code that declares arrays of size larger than 2^256-1 MUST NOT use a Solidity compiler version older than 0.6.5.
[S] Compiler Bug SOL-2020-1 🔗
Tested code that declares variables inside a for
loop that contains a break
or continue
statement MUST NOT use the Yul Optimizer with Solidity compiler version 0.6.0
nor a Solidity compiler version between 0.5.8 and 0.5.15 (inclusive).
[S] Use a Modern Compiler 🔗
Tested code MUST NOT use a Solidity compiler version older than 0.6.0, unless it meets the applicable requirements from the EEA EthTrust Security Levels Specification Version 1, as Overriding Requirements, including [S] Declare storage Explicitly (if appropriate).
[S] No Ancient Compilers 🔗
Tested code MUST NOT use a Solidity compiler version older than 0.3.
[M] Pass Security Level [S] 🔗
To be eligible for EEA EthTrust certification at Security Level [M],
Tested code MUST meet the requirements for § 4.1 Security Level [S].
[M] Explicitly Disambiguate Evaluation Order 🔗
Tested code MUST NOT contain statements where variable evaluation order can result in different outcomes.
[M] No Failing assert() Statements 🔗
assert() statements in Tested Code MUST NOT fail.
[M] Verify Exact Balance Checks 🔗
Tested code that checks whether the balance of an account is exactly equal to (i.e. ==) a specified amount or the value of a variable MUST protect itself against transfers affecting the balance tested.
This is an Overriding Requirement for [S] No Exact Balance Check.
[M] No Unnecessary Unicode Controls 🔗
Tested code MUST NOT use Unicode direction control characters
unless they are necessary to render text appropriately,
and the resulting text does not mislead readers.
This is an Overriding Requirement for
[S] No Unicode Direction Control Characters.
[M] No Homoglyph-style Attack 🔗
Tested code MUST NOT use homoglyphs, Unicode control characters, combining characters, or characters from multiple
Unicode blocks, if the impact is misleading.
[M] Protect External Calls 🔗
Tested code that makes external calls MUST meet the conditions detailed in the full requirement, unless it meets the Set of Overriding Requirements including [Q] Verify External Calls.
This is an Overriding Requirement for [S] Use Check-Effects-Interaction.
[M] Avoid Read-only Re-entrancy Attacks 🔗
Tested Code that makes external calls MUST protect itself against Read-only Re-entrancy Attacks.
[M] Handle External Call Returns 🔗
Tested Code that makes external calls MUST reasonably handle possible errors.
This is an Overriding Requirement for
[S] Check External Calls Return.
[M] Document Special Code Use 🔗
Tested Code MUST document the need for each instance of CREATE2, assembly {}, selfdestruct() or its deprecated alias suicide(), delegatecall(), or block.number or block.timestamp, and MUST describe how the Tested Code protects against misuse or errors in these cases, and the documentation MUST be available to anyone who can call the Tested Code.
This is part of several Sets of Overriding Requirements, one for each of these instructions.
[M] Ensure Proper Rounding of Computations Affecting Value 🔗
Tested code MUST identify and protect against the exploitation of rounding errors, as detailed in the full requirement.
[M] Protect Self-destruction 🔗
Tested code that contains the selfdestruct() or suicide() instructions MUST protect their use as detailed in the full requirement, unless it meets the Overriding Requirement [Q] Enforce Least Privilege.
This is an Overriding Requirement for [S] No selfdestruct().
[M] Avoid Common assembly {} Attack Vectors 🔗
Tested Code MUST NOT use the assembly {} instruction to change a variable unless the change cannot be accomplished otherwise, as detailed in the full requirement.
This is part of a Set of Overriding Requirements for [S] No assembly {}.
[M] Protect CREATE2 Calls 🔗
For Tested Code that uses the CREATE2 instruction, any contract to be deployed using CREATE2 MUST NOT contain the selfdestruct(), delegatecall(), nor callcode() instructions, and MUST meet the further conditions detailed in the full requirement, unless it meets the Set of Overriding Requirements.
This is part of a Set of Overriding Requirements for [S] No CREATE2.
[M] No Overflow/Underflow 🔗
Tested code MUST NOT contain calculations that can overflow or underflow, unless the conditions detailed in the full requirement are met.
This is an Overriding Requirement for [S] No Overflow/Underflow.
[M] Document Name Conflicts 🔗
Tested code MUST clearly document the order of inheritance for each function or variable that shares a name with another function or variable.
This is an Overriding Requirement for
[S] No Conflicting Names.
[M] Sources of Randomness 🔗
Sources of randomness used in Tested Code MUST be
sufficiently resistant to prediction that their purpose is met.
[M] Don't Misuse Block Data 🔗
Block numbers and timestamps used in Tested Code MUST NOT introduce vulnerabilities
to MEV or similar attacks.
[M] Proper Signature Verification 🔗
Tested Code MUST use proper signature verification to ensure authenticity of messages
that were signed off-chain, e.g. by using ecrecover()
.
[M] No Improper Usage of Signatures for Replay Attack Protection 🔗
Tested Code using signatures to prevent replay attacks MUST ensure that signatures cannot be reused, unless it meets the Overriding Requirement [Q] Intended Replay. Additionally, Tested Code MUST verify that multiple signatures cannot be created for the same message, as is the case with Malleable Signatures.
[M] Solidity Compiler Bug 2023-1 🔗
Tested code that contains a compound expression with side effects that uses .selector
MUST use the viaIR option with Solidity compiler versions between 0.6.2 and 0.8.20 inclusive.
[M] Compiler Bug SOL-2022-7 🔗
Tested code that has storage writes followed by conditional early terminations from inline assembly functions containing return() or stop() instructions MUST NOT use a Solidity compiler version between 0.8.13 and 0.8.17 (inclusive).
This is part of the Set of Overriding Requirements for [S] No assembly {}.
[M] Compiler Bug SOL-2022-5 in assembly {} 🔗
Tested code that copies bytes arrays from calldata or memory whose size is not a multiple of 32 bytes, and has an assembly {} instruction that reads that data without explicitly matching the length that was copied, MUST NOT use a Solidity compiler version older than 0.8.15.
This is part of the Set of Overriding Requirements for [S] No assembly {}.
[M] Compiler Bug SOL-2022-4 🔗
Tested code that has at least two assembly {} instructions, such that one writes to memory (e.g. by storing a value in a variable) but does not access that memory again, and code in another assembly {} instruction refers to that memory, MUST NOT use the yulOptimizer with Solidity compiler versions 0.8.13 or 0.8.14.
This is part of the Set of Overriding Requirements for [S] No assembly {}.
[M] Compiler Bug SOL-2021-3 🔗
Tested code that reads an immutable signed integer of a type shorter than 256 bits within an assembly {} instruction MUST NOT use a Solidity compiler version between 0.6.5 and 0.8.8 (inclusive).
This is part of the Set of Overriding Requirements for [S] No assembly {}.
[M] Check Constructor Payment 🔗
Tested code that allows payment to a constructor function that is not explicitly defined as payable MUST NOT use a Solidity compiler version between 0.4.5 and 0.6.7 (inclusive).
This is an Overriding Requirement for [S] Compiler Bug SOL-2020-5.
[M] Use a Modern Compiler 🔗
Tested code MUST NOT use a Solidity compiler version older than 0.6.0, unless it meets the applicable requirements from the EEA EthTrust Security Levels Specification Version 1, as Overriding Requirements.
[Q] Pass Security Level [M] 🔗
To be eligible for EEA EthTrust certification at Security Level [Q],
Tested code MUST meet the requirements for § 4.2 Security Level [M].
[Q] Code Linting 🔗
Tested code MUST meet the code linting conditions detailed in the full requirement, including the conditions on assert() statements and on use of the constructor keyword.
[Q] Manage Gas Use Increases 🔗
Sufficient Gas MUST be available to work with data structures in the Tested Code
that grow over time, in accordance with descriptions provided for
[Q] Document Contract Logic.
[Q] Protect Gas Usage 🔗
Tested Code MUST protect against malicious actors stealing or wasting gas.
[Q] Protect against Oracle Failure 🔗
Tested Code MUST protect itself against malfunctions in Oracles it relies on.
[Q] Protect against Front-Running 🔗
Tested Code MUST NOT require information
in a form that can be used to enable a Front-Running attack.
[Q] Protect against MEV Attacks 🔗
Tested Code that is susceptible to MEV attacks MUST follow appropriate
design patterns to mitigate this risk.
[Q] Protect Against Governance Takeovers 🔗
Tested Code which includes a governance system MUST protect against one external
entity taking control via exploit of the governance design.
[Q] Process All Inputs 🔗
Tested Code MUST validate inputs, and function correctly whether the input
is as designed or malformed.
[Q] State Changes Trigger Events 🔗
Tested code MUST emit a contract event for all transactions that cause state changes.
[Q] No Private Data 🔗
Tested code MUST NOT store Private Data on the blockchain.
[Q] Intended Replay 🔗
If a signature within the Tested Code can be reused, the replay instance MUST be intended, documented,
and safe for re-use.
This is an Overriding Requirement for [M] No Improper Usage of Signatures for Replay Attack Protection.
[Q] Document Contract Logic 🔗
A specification of the business logic that the Tested code functionality is intended
to implement MUST be available to anyone who can call the Tested Code.
[Q] Document System Architecture 🔗
Documentation of the system architecture for the Tested code MUST be provided that conveys the overall system design, privileged roles, security assumptions and intended usage.
[Q] Annotate Code with NatSpec 🔗
All Public Interfaces contained in the Tested code MUST be annotated with inline
comments according to the [NatSpec] format that explain the intent behind each function, parameter,
event, and return variable, along with developer notes for safe usage.
[Q] Implement as Documented 🔗
The Tested code MUST behave as described in the documentation provided for
[Q] Document Contract Logic, and
[Q] Document System Architecture.
[Q] Enforce Least Privilege 🔗
Tested code that enables privileged access MUST implement appropriate access control mechanisms that provide the least privilege necessary for those interactions,
based on the documentation provided for
[Q] Document Contract Logic.
This is an Overriding Requirement for
[M] Protect Self-destruction.
[Q] Use Revocable and Transferable Access Control Permissions 🔗
If the Tested code makes use of Access Control for privileged actions, it MUST implement a mechanism to revoke and transfer those permissions.
[Q] No Single Admin EOA for Privileged Actions 🔗
If the Tested code makes use of Access Control for privileged actions, it MUST ensure that all critical administrative tasks require multiple signatures to be executed, unless there is a multisig admin that has greater privileges and can revoke permissions in case of a compromised or rogue EOA and reverse any adverse action the EOA has taken.
[Q] Verify External Calls 🔗
Tested Code that contains external calls MUST meet the conditions detailed in the full requirement.
This is part of a Set of Overriding Requirements for [S] Use Check-Effects-Interaction, and for [M] Protect External Calls.
[Q] Verify tx.origin Usage 🔗
For Tested Code that uses tx.origin, each instance MUST meet the conditions detailed in the full requirement.
This is an Overriding Requirement for [S] No tx.origin.
[GP] Check For and Address New Security Bugs 🔗
Check [solidity-bugs-json] and other sources for bugs announced after 1 November 2023
and address them.
[GP] Meet as Many Requirements as Possible 🔗
The Tested Code SHOULD meet as many requirements of this specification as possible
at Security Levels above the Security Level for which it is certified.
[GP] Use Latest Compiler 🔗
The Tested Code SHOULD use the latest available stable Solidity compiler version.
[GP] Write Clear, Legible Solidity Code 🔗
The Tested Code SHOULD be written for easy understanding.
[GP] Follow Accepted ERC Standards 🔗
The Tested Code SHOULD conform to finalized [ERC] standards when it is
reasonably capable of doing so for its use-case.
[GP] Define a Software License 🔗
The Tested Code SHOULD define a software license.
[GP] Disclose New Vulnerabilities Responsibly 🔗
Security vulnerabilities that are not addressed by this specification
SHOULD be brought to the attention of the Working Group
and others through responsible disclosure as described in
§ 1.4 Feedback and new vulnerabilities.
[GP] Use Fuzzing 🔗
Fuzzing SHOULD be used to probe Tested Code for errors.
[GP] Use Formal Verification 🔗
The Tested Code SHOULD undergo formal verification.
[GP] Select an Appropriate Threshold for Multisig Wallets 🔗
Multisignature requirements for privileged actions SHOULD have a sufficient number of signers, and NOT require "1 of N" nor all signatures.
[GP] Use TimeLock Delays for Sensitive Operations 🔗
Sensitive operations that affect all or a majority of users SHOULD use [TimeLock] delays.
The EEA acknowledges and thanks the many people who contributed to the development of this version of the specification. Please advise us of any errors or omissions.
We are grateful to the entire community who develops Ethereum, for their work and their ongoing collaboration.
In particular we would like to thank the contributors to the previous version of this specification; Co-chairs Christopher Cordi and Opal Graham, as well as previous co-chairs David Tarditi and Jaye Herrell; the maintainers of the Solidity Compiler and those who write Solidity Security Alerts [solidity-alerts]; the community who developed and maintained the Smart Contract Weakness Classification [swcregistry]; the Machine Consultancy, for publishing the TMIO Best Practices [tmio-bp]; and the judges and participants in the Underhanded Solidity competitions that have taken place. They have all been very important sources of information and inspiration to the broader community as well as to us in developing this specification.
Security principles have also been developed over many years by many individuals, far too numerous to individually thank for contributions that have helped us to write the present specification. We are grateful to the many people on whose work we build.
This section outlines substantive changes made to the specification since version 1.
The following requirements have been added to the specification since the previous release, among them [S] Encode Hashes with chainid. New § 4.4 Recommended Good Practices have also been added.
The following requirements have been changed in some way since the previous release:
[S] No delegatecall() was updated to require [M] Document Special Code Use as an additional Overriding Requirement at Level [M], and delegatecall() was added to the instances covered by Document Special Code Use.
transfer() was removed from [S] Check External Calls Return because it isn't one of the low-level functions: it reverts on failure.
Further changes affect [S] Declare storage Explicitly, assembly {}, and ecrecover() input handling.
The § 4.4 Recommended Good Practices have also been updated.