Stacked Generalization is a classification technique that aims to improve the performance of individual classifiers by combining them in a hierarchical architecture. In many applications this technique performs better than other classification schemes, yet in others its performance degrades for reasons that are not well understood. Although it has been applied in several domains to date, it remains unclear under which circumstances Stacked Generalization improves performance. In this work, the performance of the Stacked Generalization technique is analyzed in terms of the performance parameters of the individual classifiers in the architecture. The analysis shows that, for the Stacked Generalization architecture to succeed, the individual classifiers should learn the training set by sharing its members among themselves.
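To make the hierarchical architecture concrete, the following is a minimal sketch of stacked generalization in pure Python. All data and learners here are illustrative assumptions, not taken from the paper: the level-0 learners are simple threshold rules, each attending to a different feature, and the level-1 combiner is a training-accuracy-weighted vote standing in for a trained meta-classifier (a real stacked generalizer would fit, e.g., a logistic regression on out-of-fold level-0 predictions).

```python
def threshold_clf(feature_idx, threshold):
    """Level-0 learner (illustrative): predicts 1 if one feature exceeds a threshold."""
    return lambda x: 1 if x[feature_idx] > threshold else 0

# Toy training set (hypothetical): pairs of (feature vector, label).
train = [([0.2, 0.9], 0), ([0.8, 0.1], 1), ([0.9, 0.7], 1), ([0.1, 0.2], 0)]

# Level-0 classifiers, each specializing in a different part of the input.
level0 = [threshold_clf(0, 0.5), threshold_clf(1, 0.5)]

# Level-1 combiner (sketch): weight each level-0 learner by its training
# accuracy, then take a weighted vote over their predictions.
weights = [
    sum(1 for x, y in train if clf(x) == y) / len(train)
    for clf in level0
]

def predict(x):
    # Level-0 predictions become the meta-features fed to the level-1 combiner.
    preds = [clf(x) for clf in level0]
    score = sum(w * p for w, p in zip(weights, preds)) / sum(weights)
    return 1 if score >= 0.5 else 0

print(predict([0.7, 0.6]))  # both level-0 learners vote 1, so prints 1
```

The key structural point, which the analysis in this work addresses, is that the level-1 combiner can only help when the level-0 classifiers divide the training set among themselves rather than all failing on the same examples.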