The paper presents a study of how the performance of the Bayesian peer-assessment model implemented in OpenAnswer varies, in terms of grade-prediction accuracy. OpenAnswer (OA) models a peer-assessment session as a Bayesian network. For each student, a sub-network contains variables describing relevant aspects of both the individual cognitive state and the state of the current assessment session. The sub-networks are interconnected to form the global network. The evidence propagated through this network consists of all the grades students give to their peers, together with a subset of the teacher’s corrections. Among the possible influencing factors, the paper investigates how grade-prediction performance depends on the quality of the class, i.e., the average proficiency of its students, and on the number of peers assessed by each student. The results show that both factors affect the accuracy of the marks inferred by the Bayesian network, when compared with the available ground truth produced by teachers.
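To make the idea concrete, the following is a minimal toy sketch of the kind of inference described above, not OpenAnswer's actual model: each student has a binary "knowledge" variable, a peer grade depends on both the assessee's and the assessor's knowledge (mirroring the interconnected sub-networks), and posteriors are computed by brute-force enumeration given the peer grades plus a subset of teacher corrections. All priors and conditional probabilities here are hypothetical.

```python
from itertools import product

# Hypothetical prior on a student's knowledge level (0 = low, 1 = high).
P_K = {0: 0.4, 1: 0.6}

def p_grade(g, k_assessee, k_assessor):
    """P(grade | assessee knowledge, assessor knowledge): a competent
    assessor grades accurately, a weak one close to randomly (assumed values)."""
    acc = 0.9 if k_assessor == 1 else 0.6
    return acc if g == k_assessee else 1 - acc

def posterior(n, grades, teacher):
    """Posterior P(K_i = 1 | evidence) for each of n students, by
    enumerating all knowledge assignments consistent with the evidence.
    grades:  {(assessor, assessee): grade} -- the peer grades.
    teacher: {student: knowledge} -- the teacher-corrected subset."""
    post = [[0.0, 0.0] for _ in range(n)]
    for ks in product((0, 1), repeat=n):
        # Teacher corrections are hard evidence: skip inconsistent worlds.
        if any(ks[s] != v for s, v in teacher.items()):
            continue
        w = 1.0
        for s in range(n):
            w *= P_K[ks[s]]                  # prior over knowledge
        for (j, i), g in grades.items():
            w *= p_grade(g, ks[i], ks[j])    # likelihood of each peer grade
        for s in range(n):
            post[s][ks[s]] += w              # accumulate unnormalized posterior
    return [p1 / (p0 + p1) for p0, p1 in post]

# Three students; two peers rate student 1 highly; the teacher
# corrects only student 0. Remaining marks are inferred.
marks = posterior(3,
                  {(0, 1): 1, (2, 1): 1, (1, 0): 1, (1, 2): 0},
                  {0: 1})
```

In `marks`, the teacher-corrected student has posterior 1.0 by construction, while the uncorrected students' posteriors combine the prior with the (assessor-dependent) reliability of the peer grades, which is the mechanism OpenAnswer exploits at scale.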