[Lim et al, 1999, page 13] reports that the Congressional Voting dataset used here is one of the easiest in their study to classify, with error rates between 4% and 6%. Further, [Lim et al, 1999, page 19] shows that most algorithms have training times of less than 30 minutes, with the fastest taking less than 3 minutes.
The fuzzy rules hand-developed for this project were extremely difficult to obtain given the limited time and resources available. This is borne out by the error rates (greater than 40% in all attempts, and around 60% in the worst case) and by the training times (between 2 and 6 hours per attempt).
Each attempt used a different set of fuzzy rules, embodying a different approach to the classification problem. Further, once the rule developer judged the rules satisfactory (based on the form of the rules and their performance on the training set), the rules were run exactly once on the validation set. The reasons for this requirement are discussed in Section 3.2.2.
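To make this workflow concrete, the following is a minimal sketch, in Python, of what one hand-written fuzzy rule set and its evaluation protocol might look like. The attribute names are taken from the UCI Congressional Voting Records dataset, but the specific rules, the membership value assigned to abstentions, and the choice of only two rules are illustrative assumptions, not the rules actually developed for the project.

    # A hypothetical sketch of the hand-written fuzzy rule workflow.
    # The rules and membership values below are illustrative assumptions,
    # not the rules actually used in the project.

    from typing import Dict, List, Tuple

    # Each record maps a vote attribute to 'y', 'n', or '?' (abstention),
    # following the UCI Congressional Voting Records dataset.
    Record = Dict[str, str]

    def membership(record: Record, attribute: str, value: str) -> float:
        """Fuzzy membership of a vote: 1.0 for a match, 0.0 for a
        mismatch, and 0.5 (assumed here) for an abstention ('?')."""
        vote = record.get(attribute, '?')
        if vote == '?':
            return 0.5
        return 1.0 if vote == value else 0.0

    def classify(record: Record) -> str:
        """Apply two hypothetical hand-written rules and return the
        class with the stronger aggregate support."""
        # Rule 1: voting 'n' on physician-fee-freeze suggests democrat.
        democrat = membership(record, 'physician-fee-freeze', 'n')
        # Rule 2: voting 'y' on el-salvador-aid suggests republican.
        republican = membership(record, 'el-salvador-aid', 'y')
        return 'democrat' if democrat >= republican else 'republican'

    def error_rate(records: List[Tuple[Record, str]]) -> float:
        """Fraction of records misclassified by the current rule set."""
        wrong = sum(1 for rec, label in records if classify(rec) != label)
        return wrong / len(records)

    # Development loop: inspect the training error, revise the rules by
    # hand, and repeat. Only when the developer is satisfied are the
    # rules run, exactly once, on the held-out validation set.
    if __name__ == '__main__':
        train = [({'physician-fee-freeze': 'n', 'el-salvador-aid': 'n'}, 'democrat'),
                 ({'physician-fee-freeze': 'y', 'el-salvador-aid': 'y'}, 'republican')]
        validation = [({'physician-fee-freeze': '?', 'el-salvador-aid': 'y'}, 'republican')]
        print('training error:  ', error_rate(train))
        print('validation error:', error_rate(validation))

The key point is the loop structure: the training error drives hand revisions of the rules, while the validation set is consulted exactly once per attempt.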
A clear relationship was observed: the more time and effort put into analysing the data, the better the results. In particular, the more sophisticated the visualisation techniques, and the further the representation was abstracted away from the raw data, the easier it was to obtain rules (and, moreover, to obtain rules which subsequently performed slightly better). The drawback was that sophisticated visualisation techniques were harder to implement and took more time than simple ones. This is discussed in detail in Section 4.2.3.
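As a hedged illustration of what abstracting away from the raw data might involve, the sketch below summarises, for each vote attribute, the fraction of each party voting 'y'; attributes where the two fractions differ sharply are natural candidates for rules. The function name and the toy records are assumptions for illustration only, not part of the project's actual tooling.

    # A hypothetical abstraction over the raw records: per attribute,
    # the fraction of each party voting 'y' (ignoring '?' abstentions).
    # Large gaps between the two fractions point at attributes worth
    # writing rules for. The records here are stand-ins, not real data.

    from collections import defaultdict
    from typing import Dict, List, Tuple

    def yes_fractions(records: List[Tuple[Dict[str, str], str]]
                      ) -> Dict[str, Dict[str, float]]:
        """Per attribute and per class, the fraction of 'y' votes."""
        counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # [yes, total]
        for record, label in records:
            for attribute, vote in record.items():
                if vote == '?':
                    continue
                counts[attribute][label][1] += 1
                if vote == 'y':
                    counts[attribute][label][0] += 1
        return {attr: {label: yes / total
                       for label, (yes, total) in by_class.items()}
                for attr, by_class in counts.items()}

    records = [({'physician-fee-freeze': 'n'}, 'democrat'),
               ({'physician-fee-freeze': 'y'}, 'republican'),
               ({'physician-fee-freeze': 'n'}, 'democrat')]
    print(yes_fractions(records))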
Unfortunately, the large amount of time spent on the Congressional Voting dataset meant that rules could not be constructed for the other datasets described in Section 2.1.