Let $S$ be a denumerable state space and let $P$ be a transition probability matrix on $S$. If a denumerable set $\mathcal{M}$ of nonnegative matrices is such that the sum of the matrices equals $P$, then we call $\mathcal{M}$ a partition of $P$. Let $K$ denote the set of probability vectors on $S$. With every partition $\mathcal{M}$ of $P$ we can associate a transition probability function $P_{\mathcal{M}}$ on $K$, defined in such a way that if $p \in K$ and $M \in \mathcal{M}$ are such that $\|pM\| > 0$, then, with probability $\|pM\|$, the vector $p$ is transferred to the vector $pM/\|pM\|$. Here $\|\cdot\|$ denotes the $\ell^1$-norm. In this paper we investigate convergence in distribution for Markov chains generated by transition probability functions induced by partitions of transition probability matrices. The main motivation for this investigation is the application of the convergence results obtained to filtering processes of partially observed Markov chains with denumerable state space.
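As a concrete sketch of the construction above, the following Python snippet simulates one such induced chain on probability vectors. It uses a hypothetical finite state space and a hypothetical two-element partition $\{M_0, M_1\}$ of a $3 \times 3$ transition matrix (the abstract concerns denumerable $S$; a finite example is used here only for illustration). At each step, a matrix $M$ is chosen with probability $\|pM\|$, and $p$ is replaced by $pM/\|pM\|$; since the partition sums to $P$ and rows of $P$ sum to one, the selection probabilities always sum to one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a 3-state transition matrix P, split into two
# nonnegative matrices with M0 + M1 = P (a "partition" of P).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
M0 = np.array([[0.5, 0.0, 0.2],
               [0.1, 0.0, 0.3],
               [0.4, 0.0, 0.4]])
M1 = P - M0          # nonnegative by construction of M0
partition = [M0, M1]

def step(p, partition, rng):
    """One transition of the induced chain on probability vectors:
    pick M with probability ||pM||_1, then move p to pM / ||pM||_1."""
    weights = np.array([np.sum(p @ M) for M in partition])  # each ||pM||_1
    # The weights sum to ||pP||_1 = 1 because the matrices sum to P.
    i = rng.choice(len(partition), p=weights)
    q = p @ partition[i]
    return q / weights[i]

p = np.array([1.0, 0.0, 0.0])  # start from a point mass
for _ in range(5):
    p = step(p, partition, rng)
print(p, p.sum())  # p remains a probability vector at every step
```

In the filtering interpretation, each matrix in the partition corresponds to an observation, $\|pM\|$ is the probability of that observation given the current conditional distribution $p$, and $pM/\|pM\|$ is the Bayesian update.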