Combinatorial testing is a model-based black-box testing technique that aims to reduce the number of required test cases while still providing high failure detection capability. To this end, it divides the input and environment of the system under test into a finite number of parameters and values. The combinatorial coverage criterion then only requires a test suite to cover all value combinations of up to t parameters. Since one test case can cover multiple t-way value combinations, this produces far fewer test cases than exhaustive testing. Even though combinatorial test suites are comparatively small, they can still contain hundreds or thousands of test cases. Especially in light of modern development methods such as continuous integration, where fast feedback cycles become increasingly important, test suites of this size should fail as fast as possible. The field of combinatorial test prioritization addresses this problem and attempts to order the generated test suite so that failing test cases appear as early as possible. While researchers have developed multiple algorithms and techniques for prioritization, no framework currently exists that integrates all of them for automated comparison. To this end, this thesis extends coffee4j, an automated combinatorial testing framework, to support different prioritization techniques. After an overview of the current state of research, it first develops a concept for integrating these techniques into a general combinatorial testing framework. Additionally, this thesis introduces new prioritization techniques based on previously failing test cases and their failure-inducing combinations. It then evaluates these techniques to show how researchers can use the integration into coffee4j to compare different prioritization algorithms in future work.
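The t-way coverage criterion described above can be sketched in a few lines. The sketch below is purely illustrative and independent of coffee4j: the parameter names, values, and test suite are hypothetical, chosen to show that four test cases suffice for full pairwise (t = 2) coverage of three binary parameters, where exhaustive testing would need eight.

```python
from itertools import combinations, product

def t_way_combinations(parameters, t):
    """Enumerate every value combination of every t-subset of parameters.

    parameters: dict mapping parameter name -> list of possible values.
    Yields dicts that assign one value to each parameter in the t-subset.
    """
    for names in combinations(sorted(parameters), t):
        for values in product(*(parameters[n] for n in names)):
            yield dict(zip(names, values))

def covers(test_case, combo):
    """A test case covers a combination if it agrees on every parameter in it."""
    return all(test_case.get(name) == value for name, value in combo.items())

# Hypothetical system under test with three binary parameters.
params = {"os": ["linux", "mac"], "browser": ["ff", "chrome"], "net": ["wifi", "lan"]}

# A pairwise covering suite: 4 test cases instead of the 2*2*2 = 8 exhaustive ones.
suite = [
    {"os": "linux", "browser": "ff",     "net": "wifi"},
    {"os": "linux", "browser": "chrome", "net": "lan"},
    {"os": "mac",   "browser": "ff",     "net": "lan"},
    {"os": "mac",   "browser": "chrome", "net": "wifi"},
]

uncovered = [c for c in t_way_combinations(params, 2)
             if not any(covers(tc, c) for tc in suite)]
print(len(uncovered))  # 0: every 2-way value combination is covered
```

Each test case covers three of the twelve 2-way combinations at once, which is exactly why combinatorial suites stay small.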