How does the brain process spoken language? It is our thesis that word intelligibility and consonant identification are insufficient by themselves to model how the speech signal is decoded; a finer-grained approach is required. In this study, listeners identified 11 different Danish consonants spoken in a Consonant + Vowel + [l] environment. Each syllable was processed so that only a portion of the original audio spectrum was present. Three-quarter-octave bands of speech, centered at 750, 1500, and 3000 Hz, were presented individually and in combination with each other. The conditional (posterior) probabilities associated with phonetic-feature decoding were computed from confusion matrices in order to deduce the temporal flow of phonetic processing. Decoding the feature Manner-of-Articulation depends on accurate decoding of the feature Voicing (but not vice versa), and decoding Place-of-Articulation requires precise decoding of Manner (but not the converse). From these data, we conclude that Voicing is processed prior to Manner-of-Articulation, and that Manner is decoded prior to Place-of-Articulation. Voicing and Manner cues are often correctly decoded in conditions where Place is not. This asymmetric pattern of feature decoding may provide extra-segmental information useful for speech processing, particularly in adverse listening conditions.
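The conditional-probability computation can be illustrated with a minimal sketch. Assuming per-trial (stimulus, response) pairs and a three-way feature labeling of each consonant, quantities such as P(Manner correct | Voicing correct) can be read directly off the pooled confusion data. The feature table, the toy trial list, and the helper names below are illustrative placeholders, not the study's actual feature assignments or analysis code:

```python
# Hypothetical feature table for illustration only: maps each consonant
# to (voicing, manner, place). The Danish feature assignments used in
# the study may differ.
FEATURES = {
    "p": ("unvoiced", "stop", "labial"),
    "t": ("unvoiced", "stop", "alveolar"),
    "k": ("unvoiced", "stop", "velar"),
    "b": ("voiced", "stop", "labial"),
    "d": ("voiced", "stop", "alveolar"),
    "g": ("voiced", "stop", "velar"),
    "f": ("unvoiced", "fricative", "labial"),
    "s": ("unvoiced", "fricative", "alveolar"),
    "v": ("voiced", "fricative", "labial"),
    "m": ("voiced", "nasal", "labial"),
    "n": ("voiced", "nasal", "alveolar"),
}

VOICING, MANNER, PLACE = 0, 1, 2

def feature_correct(stimulus: str, response: str, index: int) -> bool:
    """True if the given feature (0=voicing, 1=manner, 2=place) was decoded correctly."""
    return FEATURES[stimulus][index] == FEATURES[response][index]

def conditional_prob(trials, target: int, given: int, given_correct: bool = True) -> float:
    """P(target feature correct | given feature correct or incorrect),
    estimated over pooled (stimulus, response) trials."""
    subset = [(s, r) for s, r in trials
              if feature_correct(s, r, given) == given_correct]
    if not subset:
        return float("nan")
    hits = sum(feature_correct(s, r, target) for s, r in subset)
    return hits / len(subset)

# Toy confusion data: (stimulus, response) pairs pooled across listeners and bands.
trials = [("b", "b"), ("b", "p"), ("d", "b"), ("p", "t"), ("m", "n"), ("s", "f")]

print("P(Manner | Voicing correct) =", conditional_prob(trials, MANNER, VOICING))
print("P(Voicing | Manner correct) =", conditional_prob(trials, VOICING, MANNER))
print("P(Place  | Manner correct)  =", conditional_prob(trials, PLACE, MANNER))
```

Under the hierarchy reported above, one would expect P(Manner correct | Voicing incorrect) to be low while P(Voicing correct | Manner incorrect) remains high, and likewise for Manner relative to Place; the asymmetry of these conditional probabilities is what licenses the inferred processing order.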