Back in the early days of compiler benchmarks, one fancy compiler noticed that the result of a lengthy calculation wasn't being used, and dutifully removed the calculation. That calculation was, of course, the kernel of the benchmark. The solution was to print the result.
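To make that concrete, here's a minimal sketch - the kernel below is a made-up checksum loop, not the original benchmark - showing how printing the result keeps the computation alive:

#include <stdio.h>

int main(void)
{
    unsigned long long sum = 0;

    /* stand-in for the benchmark kernel: a lengthy calculation */
    for (unsigned long long i = 0; i < 100000000ULL; i++)
        sum += i * i;

    /* printing the result means it is "used", so the loop can't be
       removed as dead code */
    printf("%llu\n", sum);
    return 0;
}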
Or you do something like sweeping the memory chunk with bitwise OR from both ends simultaneously, so when the two passes finish, you're guaranteed to have 0xff (all 1's) all over your memory. This is off the top of my head, so bugs may exist, etc.

#include <stdint.h>
#include <stddef.h>

int zapmem(uint8_t *mem, size_t size)
{
    int a = 0xaa, b = 0x55;                   /* complementary bit patterns */
    for (size_t i = 0, j = size - 1; i < size; i++, j--) {
        a |= mem[i];  mem[i] = (uint8_t)a;    /* forward pass spreads 10101010 */
        b |= mem[j];  mem[j] = (uint8_t)b;    /* backward pass spreads 01010101 */
    }
    return a | b;
}

At the end of the loop, every byte of mem will be 0xff. Why? Because we use OR: one pass spreads 01010101 and the other spreads 10101010, so once i and j have passed each other, every byte has been OR'd with both patterns, and 0xaa | 0x55 = 0xff (11111111). You could also use 0xf0 and 0x0f, or any other complementary values, for the initial a and b. There's a return value the compiler has to produce, and thus it can't easily optimize the whole thing out. I don't think there exist compilers smart enough to figure out what this does - if you can write such a compiler that can figure out what you intend in all cases, you will be a very rich man. Of course, if some compiler writer decides to make the above a special case and optimize it out, all bets are off.
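For what it's worth, the caller still has to do something with that return value for the trick to hold. A minimal, hypothetical usage sketch (the buffer and its size are made up):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int zapmem(uint8_t *mem, size_t size);        /* defined above */

int main(void)
{
    size_t size = 4096;                       /* hypothetical buffer size */
    uint8_t *buf = malloc(size);
    if (buf == NULL)
        return 1;

    int r = zapmem(buf, size);                /* overwrite the buffer */
    printf("zapmem returned 0x%x\n", r);      /* use the result, same idea as
                                                 printing the benchmark output */
    free(buf);
    return 0;
}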