Document representation is one of the crucial components determining the effectiveness of text classification. Traditional approaches typically adopt the popular bag-of-words method as the underlying document representation. Although bag-of-words is simple and efficient, its major shortcoming is the assumption that word features are independent. Many researchers have attempted to address this issue by incorporating semantic information into the document representation. In this paper, we study the effect of semantic representation on the effectiveness of text classification systems. We employed a novel semantic smoothing technique to derive semantic information in the form of mapping probabilities between topic signatures and single-word features. Two classifiers, Naïve Bayes and Support Vector Machine, were selected to carry out the classification experiments. Overall, our topic-signature semantic representation significantly outperformed the traditional bag-of-words representation on most datasets.
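To make the idea concrete, the following is a minimal sketch of one common form of semantic smoothing: a maximum-likelihood unigram (bag-of-words) document model is linearly interpolated with probabilities translated from topic signatures through a signature-to-word mapping table. All names here (`bow_model`, `smoothed_model`, the `translation` table, the mixing weight `lam`) are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def bow_model(doc_tokens):
    """Maximum-likelihood unigram model: p(w|d) from raw counts."""
    counts = Counter(doc_tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def smoothed_model(doc_tokens, doc_topics, translation, lam=0.5):
    """Interpolate the unigram model with probabilities mapped from
    topic signatures: p(w|d) = (1-lam)*p_ml(w|d) + lam*sum_t p(w|t)*p(t|d).

    `translation[t]` is an assumed mapping table p(w|t) from topic
    signature t to single-word features (each table sums to 1).
    """
    p_ml = bow_model(doc_tokens)
    topic_counts = Counter(doc_topics)
    topic_total = sum(topic_counts.values()) or 1
    # Words seen in the document plus words reachable via topic signatures.
    words = set(p_ml) | {w for t in doc_topics for w in translation.get(t, {})}
    model = {}
    for w in words:
        mapped = sum(
            translation.get(t, {}).get(w, 0.0) * (c / topic_total)
            for t, c in topic_counts.items()
        )
        model[w] = (1 - lam) * p_ml.get(w, 0.0) + lam * mapped
    return model

# Toy example: the signature "stock market" lends probability mass to
# "finance", a word the document never contains.
doc = ["stock", "market", "rise"]
topics = ["stock market"]
table = {"stock market": {"stock": 0.4, "market": 0.4, "finance": 0.2}}
model = smoothed_model(doc, topics, table)
```

Note that the smoothed model assigns nonzero probability to related words absent from the document, which is exactly the independence-assumption gap that the bag-of-words model cannot close.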