According to this post an indeterminate value is:
> 3.17.2
> 1 indeterminate value
> either an unspecified value or a trap representation
According to Google, the definition of indeterminate is:
- Not certain, known, or established
- Left doubtful; vague.
According to thefreedictionary, determinable is:
- capable of being determined
According to Merriam-Webster, to determine (in this particular context) is:
- to find out or come to a decision about by investigation, reasoning, or calculation
So, common sense dictates that even though an indeterminate value is unknown at compile time, it is perfectly determinable at runtime: e.g. you can always read whatever happens to occupy that memory location.
Or am I wrong? If so, why?
EDIT: To clarify, I post this in relation to what became a heated argument with a user who attempted to convince me that an indeterminate value is indeterminable, which I very much doubt.
EDIT 2: To clarify, by "determinable" I don't mean a stable or usable value. Even if it is a garbage value for uninitialized memory, the value of that garbage can still be determined. I mean that trying to determine that value will still yield some value rather than ... no action. So this value must come from some memory, allocated as storage for the still-indeterminate value; I highly doubt a compiler will actually use, say, a random number generator just for the sake of coming up with some arbitrary value.
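For illustration, here is a minimal sketch of what I mean (nothing about the printed value is guaranteed by the standard; the point is only that the read produces some value):

```c
#include <stdio.h>

int main(void)
{
    int x;  /* never initialized: its value is indeterminate */

    /* The premise: x occupies real storage, so reading it yields *some*
       value rather than "no action". What gets printed is anyone's guess. */
    printf("x = %d (stored at %p)\n", x, (void *)&x);
    return 0;
}
```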
The authors of the Standard recognized that there are some cases where it might be expensive for an implementation to ensure that code that reads an indeterminate value won't behave in ways that would be inconsistent with the Standard (e.g. the value of a `uint16_t` might not be in the range 0..65535). While many implementations could cheaply offer useful behavioral guarantees about how indeterminate values behave in more cases than the Standard requires, variations among hardware platforms and application fields mean that no single set of guarantees would be optimal for all purposes. Consequently, the Standard simply punts the matter as a Quality of Implementation issue.

The Standard would certainly allow a garbage-quality-but-conforming implementation to treat almost any use of e.g. an uninitialized `uint16_t` as an invitation to release nasal demons. It says nothing about whether high-quality implementations that are suitable for various purposes can do likewise (and still be viewed as high-quality implementations suitable for those purposes). If one needs to accommodate implementations that are designed to trap on possible unintended data leakage, one may need to explicitly clear objects in some cases where their value will ultimately be ignored but where the implementation couldn't prove that it would never leak information. Likewise, if one needs to accommodate implementations whose "optimizers" are designed on the basis of what low-quality-but-conforming implementations are allowed to do, rather than what high-quality general-purpose implementations should do, such "optimizers" may make it necessary to add otherwise-unnecessary code to clear objects even when the code doesn't care about the value (thus reducing efficiency), in order to avoid having the "optimizers" treat the failure to do so as an invitation to behave nonsensically.

The fact that the value is indeterminate not only means that it is unpredictable at the first read; it also means that it is not guaranteed to be stable. This means that reading the same uninitialized variable twice is not guaranteed to produce the same value. For this reason you cannot really "determine" that value by reading it. (See DR#260 for the initial discussion on the subject from 2004 and DR#451 reaffirming that position in 2014.)
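A minimal sketch of that claim (whether the two reads actually disagree depends entirely on the compiler and optimization level; nothing here is guaranteed):

```c
#include <stdio.h>

int main(void)
{
    int u;  /* uninitialized: indeterminate */

    /* Nothing guarantees these two reads observe the same value;
       per DR#260/DR#451 the value may appear to change between them. */
    printf("first read:  %d\n", u);
    printf("second read: %d\n", u);
    return 0;
}
```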
For example, a variable `a` might be assigned to occupy a CPU register `R1` within a certain timeframe (instead of a memory location). In order to establish the optimal variable-to-register assignment schedule, the language-level concept of "object lifetime" is not sufficiently precise. The CPU registers are managed by an optimizing compiler based on a much more precise concept of "value lifetime". Value lifetime begins when a variable gets assigned a determinate value. Value lifetime ends when the previously assigned determinate value is read for the last time. Value lifetime determines the timeframe during which a variable is associated with a specific CPU register. Outside of that assigned timeframe the same register `R1` might be associated with a completely different variable `b`. Trying to read an uninitialized variable `a` outside its value lifetime might actually result in reading variable `b`, which might be actively changing.
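Consider a code sample along these lines (a minimal sketch of the kind of code being described; the loop bounds and `printf` calls are illustrative assumptions):

```c
#include <stdio.h>

int main(void)
{
    int i, j;

    /* j is read here while still uninitialized... */
    for (i = 0; i < 5; ++i)
        printf("%d\n", j);

    /* ...and only acquires a determinate value in this second loop. */
    for (j = 0; j < 5; ++j)
        printf("%d\n", j);

    return 0;
}
```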
In this code sample the compiler can easily determine that even though the object lifetimes of `i` and `j` overlap, their value lifetimes do not overlap at all, meaning that both `i` and `j` can get assigned to the same CPU register. If something like that happens, you might easily discover that the first cycle prints the constantly changing value of `i` on each iteration. This is perfectly consistent with the idea of the value of `j` being indeterminate.

Note that this optimization does not necessarily require CPU registers. For another example, a smart optimizing compiler concerned with preserving valuable stack space might analyze the value lifetimes in the above code sample and transform it as sketched below, with variables `i` and `j` occupying the same location in memory at different times.
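Written out at the source level, such a transformation could look like this (the union is only a device to make the shared storage explicit; it is an assumed illustration, not actual compiler output):

```c
#include <stdio.h>

int main(void)
{
    /* i and j now share one stack slot */
    union { int i; int j; } u;

    for (u.i = 0; u.i < 5; ++u.i)
        printf("%d\n", u.j);   /* reads the same storage that holds u.i */

    for (u.j = 0; u.j < 5; ++u.j)
        printf("%d\n", u.j);

    return 0;
}
```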
In this case the first cycle might again end up printing the value of `i` on each iteration.

Two successive reads of an indeterminate value can give two different values. Moreover, reading an indeterminate value invokes undefined behavior in case of a trap representation.
In DR#260, the C Committee wrote:
> The C90 standard made it clear that reading from an indeterminate location was undefined behavior. More recent standards are not so clear any more (indeterminate memory is "either an unspecified value or a trap representation"), but compilers still optimize in a way that is only excusable if reading an indeterminate location is undefined behavior, for instance, multiplying the integer in an uninitialized variable by two can produce an odd result.
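As a sketch of that last point (hypothetical; any given compiler may well print an even number here, it just isn't required to):

```c
#include <stdio.h>

int main(void)
{
    int x;          /* uninitialized: indeterminate */
    int y = x * 2;  /* "obviously" even; but since reading x is effectively
                       undefined behavior, a conforming optimizer may
                       legitimately produce an odd value here */
    printf("y = %d\n", y);
    return 0;
}
```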
So, in short, no, you can't read whatever happens to occupy indeterminate memory.
When the standard introduces a term like indeterminate, it is a normative term: the standard's definition applies, and not a dictionary definition. This means that an indeterminate value is nothing more or less than an unspecified value, or a trap representation. Ordinary English meanings of indeterminate are not applicable.
Even terms that are not defined in the standard may be normative, via the inclusion of normative references. For instance, section 2 of the C99 standard normatively includes a document called ISO/IEC 2382-1:1993, Information technology — Vocabulary — Part 1: Fundamental terms.
This means that if a term is used in the standard and is not defined in the text (not introduced in italics and explained, and not given in the terms section), it might nevertheless be a word from the above vocabulary document; in that case, the definition from that standard applies.
We cannot determine the value of an indeterminate value, even under operations that would normally lead to predictable results, such as multiplication by zero. The value is wobbly according to the new language proposed (see edit).
We can find the details for this in defect report #451: Instability of uninitialized automatic variables, which had a proposed resolution about a year after this question was asked.
This defect report covers very similar ground to your question. Three questions were addressed (paraphrasing the DR):

- Can an uninitialized variable with automatic storage duration (of a type that has no trap representations, whose address has been taken, and which is not volatile) change its value without direct action of the program?
- If so, how far can this kind of instability propagate?
- If such "unstable" values can propagate into a called function through its arguments, does the function call exhibit undefined behavior?
and it provided examples with further questions.
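One such example (a paraphrased sketch, not the DR's verbatim code) asks whether the two reads below may legitimately print different values:

```c
#include <stdio.h>

int main(void)
{
    unsigned char x[1];     /* intentionally uninitialized */

    /* unsigned char has no trap representations, yet per the DR the
       value is "wobbly": the two reads need not agree */
    printf("%d\n", x[0]);
    printf("%d\n", x[0]);
    return 0;
}
```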
The proposed resolution, which seems unlikely to change much, is (in substance):

- The answer to question 1 is "yes": an uninitialized value under the conditions described can appear to change its value.
- The answer to question 2 is that any operation performed on indeterminate values will have an indeterminate result.
- The answer to question 3 is that library functions will exhibit undefined behavior when used on indeterminate values.
- These answers are appropriate for all types that do not have trap representations.
Update to address edit
Part of the discussion includes a comment to the effect that an indeterminate ("wobbly") value need not be consistent from one read to the next.
So you will be able to determine a value, but the value could change at each evaluation.