Recent studies of selective auditory attention have demonstrated that neural responses recorded with electroencephalography (EEG) can be decoded to classify the attended talker in everyday multitalker cocktail-party environments. This process, generally referred to as auditory attention decoding (AAD), could enable a breakthrough for the next generation of hearing aids (HAs): the ability to be cognitively controlled. The aim of this paper is to investigate whether cepstral analysis can provide a more robust mapping between speech and EEG. Our preliminary analysis revealed an average AAD accuracy of 96%. Moreover, we observed a significant increase in auditory attention classification accuracy with our approach over traditional AAD methods (a 7% absolute increase). Overall, our exploratory study could open a new avenue for developing AAD methods that further advance hearing technology. We recognize that additional research is needed to elucidate the full potential of cepstral analysis for AAD.
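Since the abstract does not detail the pipeline, the following is a minimal illustrative sketch of cepstral feature extraction (the real cepstrum: the inverse Fourier transform of the log-magnitude spectrum) and a simple correlation-based comparison between speech and EEG features. The function name, sampling rate, coefficient count, placeholder signals, and the correlation step are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def real_cepstrum(signal, n_coeffs=13):
    """Return the first n_coeffs real-cepstrum coefficients of a 1-D signal.

    The real cepstrum is the inverse Fourier transform of the
    log-magnitude spectrum; its low-order coefficients summarize
    the signal's spectral envelope.
    """
    spectrum = np.fft.rfft(signal)
    log_magnitude = np.log(np.abs(spectrum) + 1e-12)  # offset avoids log(0)
    cepstrum = np.fft.irfft(log_magnitude)
    return cepstrum[:n_coeffs]

# Hypothetical usage: compare cepstral features of a speech segment
# and a time-aligned EEG channel (random placeholders stand in for data).
fs = 64  # Hz, an assumed analysis sampling rate
rng = np.random.default_rng(0)
speech_segment = rng.standard_normal(fs * 5)  # 5 s placeholder speech envelope
eeg_segment = rng.standard_normal(fs * 5)     # 5 s placeholder EEG channel

c_speech = real_cepstrum(speech_segment)
c_eeg = real_cepstrum(eeg_segment)
similarity = np.corrcoef(c_speech, c_eeg)[0, 1]
print(f"Cepstral similarity: {similarity:.3f}")
```

In an AAD setting, one such similarity score per competing talker could be computed over a decision window, with the attended talker classified as the one yielding the higher score; this decision rule is likewise an assumption for illustration.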