Experienced C Developers Can Still Misunderstand Simple Language Concepts

Posted on May 22nd, 2017 by Dwight Guth
Posted in RV-Match

Not long ago, one of our customers came to us confused about an analysis report on a simple program their company used internally to assess knowledge of C. Among the code they sent us was something like the following three lines:

int32_t var = 0xffeeddcc;
int32_t var2 = 0x7f000000;
var2 <<= 8;

They believed this code should be free of defects, and thought that the two defects we reported might be incorrect. Let's take a look at the error reports that result when we insert these lines into a simple program and execute them:

$ kcc example.c
example.c:4:1: warning: Conversion to signed integer outside the range that can be represented.
  Implementation defined behavior (IMPL-CCV2).
   see C11 section 6.3.1.3:3 http://rvdoc.org/C11/6.3.1.3
   see C11 section J.3.5:1 item 4 http://rvdoc.org/C11/J.3.5
   see CERT-C section INT31-C http://rvdoc.org/CERT-C/INT31-C
$ ./a.out
 Signed integer overflow.
   at main(example.c:6:1)
  Undefined behavior (UB-CCV1).
   see C11 section 6.5:5 http://rvdoc.org/C11/6.5
   see C11 section J.2:1 item 36 http://rvdoc.org/C11/J.2
   see CERT-C section INT32-C http://rvdoc.org/CERT-C/INT32-C

What precisely is causing these two error reports? The second is fairly straightforward to understand if you examine the text of the C11 standard for bitwise shift operations. According to C11 6.5.7:4, considering the expression E1 << E2, "If E1 has a signed type and non-negative value, and E1 * 2^E2 is representable in the result type, then that is the resulting value; otherwise the behavior is undefined." In other words, on signed types, left shift is equivalent to multiplication by a power of two, and subject to the same overflow constraints as that multiplication. Here E1 * 2^E2 is 0x7f000000 * 2^8 = 0x7f00000000, which does not fit in 32 bits. This is why kcc reports the second error.
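If the goal is to shift a bit pattern rather than to multiply, one way to avoid the overflow is to perform the shift on an unsigned type, where the result is defined to wrap. Here is a minimal sketch of that approach (my own, not the customer's code, and assuming a platform where int is 32 bits, so uint32_t does not promote to a wider signed type):

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void) {
  uint32_t var2 = 0x7f000000;
  var2 <<= 8;  /* defined behavior: 0x7f00000000 reduced modulo 2^32, i.e. 0 */
  printf("0x%08" PRIX32 "\n", var2);
  return 0;
}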

However, when we explained why the first error occurred, our customer did not believe us at first. As they explained their understanding of the program, the problem became clear. They wanted to initialize var with a particular sequence of bits, in this case 0xffeeddcc, and assumed that the first line in the code above would do this. However, this betrayed a fundamental confusion between two related but distinct concepts in C: integer literals and the object representation of integer types.

A sequence of bits representing a C object is called an object representation. Object representations can be read through pointers to type char and represent a fixed sequence of bytes in a particular order. By contrast, an integer literal represents a mathematical integer, which may be stored in a variety of ways (i.e., representations) on different platforms in terms of its underlying bytes.

In other words, they wanted to initialize the bytes of var with the bytes 0xff, 0xee, 0xdd, 0xcc. On a big-endian architecture, this corresponds to the two's complement negative number -1,122,868. However, what they failed to consider is what this sequence of bytes could mean on other platforms. For example, on a two's complement little-endian platform, it refers to the number -857,870,593. If we were using sign-and-magnitude arithmetic (which is not used on any modern platform, but still exists as an allowable representation of negative numbers in the C standard), it would instead represent the number -2,146,360,780 or -1,289,613,055, depending on the endianness.
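One way to see this concretely (a sketch of my own, not the customer's code) is to print the object representation of the value by reading it through an unsigned char pointer and compare the output across platforms:

#include <stdint.h>
#include <stdio.h>

int main(void) {
  int32_t var = -1122868;  /* the value the bytes ff ee dd cc denote on a
                              big-endian two's complement machine */
  const unsigned char *p = (const unsigned char *)&var;
  for (size_t i = 0; i < sizeof var; i++) {
    printf("%02x ", p[i]);  /* prints "ff ee dd cc" on big-endian,
                               "cc dd ee ff" on little-endian */
  }
  printf("\n");
  return 0;
}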

Conversely, an integer literal represents just that: an integer. As such, the integer literal 0xffeeddcc always represents the positive number 4,293,844,428. Since the maximum value of a 32-bit signed integer is 2,147,483,647, this number is outside the bounds of the 32-bit signed integer type, and the constant actually takes an unsigned 32-bit integer type (unsigned int on a platform where int is 32 bits). When we convert from the unsigned integer type to the signed integer type, the value is outside the range representable by the target type, and therefore the result of the initialization is implementation-defined. The value of the signed integer could become any of the negative numbers in the previous paragraph, depending on the target platform!
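You can check the type the literal actually takes on your own compiler using C11's _Generic selection. A small sketch (assuming a platform where int is 32 bits):

#include <stdio.h>

int main(void) {
  printf("%s\n", _Generic(0xffeeddcc,
      int:          "int",
      unsigned int: "unsigned int",
      long:         "long",
      default:      "some other type"));
  /* Prints "unsigned int" when int is 32 bits: the constant does not fit
     in int, and for hexadecimal constants the next candidate type in the
     list from C11 6.4.4.1 is unsigned int. */
  return 0;
}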

Worse still, C allows an out-of-range conversion to a signed integer type to raise an implementation-defined signal instead of producing a value. Thus it's entirely possible to port your application to a new platform and suddenly face a fatal run-time error in which your application terminates abnormally, potentially costing you significantly if the bug makes it into production before it is detected.

In practice, if you want a particular object to contain a particular sequence of bytes, the only safe mechanism that produces the same sequence of bytes on all platforms is to write its individual bytes through a char pointer. Otherwise, you are left to the whims of the implementation, which can choose any arbitrary implementation-defined representation for the values of a given type. This leads to problems if you rely on a particular sequence of bits being present and then move to a new platform.
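A minimal sketch of that approach (my own; memcpy here is equivalent to a byte-by-byte copy through an unsigned char pointer, and int32_t is guaranteed to be two's complement with no padding bits, so every byte pattern is a valid value):

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>
#include <string.h>

int main(void) {
  const unsigned char bytes[] = { 0xff, 0xee, 0xdd, 0xcc };
  int32_t var;
  memcpy(&var, bytes, sizeof var);  /* the object representation is now
                                       exactly ff ee dd cc on every platform */
  printf("%" PRId32 "\n", var);     /* but the *value* printed still depends
                                       on the platform's byte order */
  return 0;
}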

When writing code for a specific compiler you can rely on implementation-defined behavior, but signed overflow is still problematic. GCC promises that a conversion between integer types will be reduced modulo the appropriate power of two when the value is not representable in the target type. This means that with GCC the conversion above will initialize var to -0x112234 on any architecture that GCC supports. However, only initialization and other conversions are safe. GCC still treats signed overflow in arithmetic as undefined behavior, and optimizes under the assumption that no overflow occurs. This can lead to apparently impossible results when signed values do overflow. Compiled with -O3, the following program prints "Impossible!":

#include <stdio.h>

void check(int x, int y) {
  /* These three tests are mathematically contradictory, but because
     signed overflow is undefined behavior, GCC's optimizer can let
     all of them pass when 2*x overflows. */
  if (2*x == y && y < 0 && 0 <= 2*x) {
    puts("Impossible!");
  }
}

int main() {
  check(0x7F80007F, 0xFF0000FE);
}

By adding apparently-redundant casts to 2*x to give (int)(2*(unsigned int)x), the calculation becomes implementation-defined behavior from an out-of-range conversion instead of undefined behavior. While this code may not be portable between compilers, GCC now guarantees the "impossible" code will not be executed, even with -O3.
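Concretely, the check function above would become something like the following (a sketch; the conversion back to int is still implementation-defined, but GCC documents it as modular reduction, so the branch can never be taken):

#include <stdio.h>

void check(int x, int y) {
  /* The doubling now happens on unsigned int, where wraparound is well
     defined; converting the result back to int is implementation-defined
     rather than undefined. */
  if ((int)(2 * (unsigned int)x) == y && y < 0
      && 0 <= (int)(2 * (unsigned int)x)) {
    puts("Impossible!");
  }
}

int main() {
  check(0x7F80007F, 0xFF0000FE);  /* prints nothing, even at -O3 */
}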

This is an example of how even experienced C developers can still misunderstand basic language concepts if they aren't familiar with the C standard, which defines the behavior of their programs. Relying on your assumptions about how C works without checking the standard can be dangerous, leading to costly bugs and wasted debugging time. It is much better to write ISO-standards-compliant C code and rely on the compiler to provide guarantees that your program's behavior conforms to the C abstract machine described by the standard.

Happy bug hunting,
Dwight Guth
Lead Engineer, RV-Match