Over the next three posts I will summarize the three papers and a few points of interest from their Q and A sessions. I might as well take this chance to address a few questions that Wachtel and Strutwolf did not have time to answer fully.
Reconstructing the Initial Text in the Editio Critica Maior of the New Testament Using the Coherence-Based Genealogical Method.
Klaus Wachtel
Introduction
This first paper was by far the most challenging of the three, and it served two functions. Wachtel first used some very helpful visual guides to introduce us to the mechanics of the digital NA prototype. After immersing us in the logic of its promising design, he then turned to a lengthy description of the critical process that lies behind this new resource. Wachtel pointed out that the Editio Critica Maior of the Catholic Epistles features 23 readings that differ from the NA27 text, and since the Catholic Epistles are the only texts currently published in the series, one can extrapolate from this figure to predict the number of changes that may appear once the entire NT canon has been evaluated. Thus even at this early stage, the fruit of the Editio Critica Maior's detailed labor is readily apparent and is becoming accessible online.
Section 1: The Resource
In the first section, Wachtel led us step by step through the remarkable features of the digital NA prototype, which will someday be blessed with the full resources of the Editio Critica Maior. With perhaps his only nod to classic text-critical principles, he raised the point that text criticism has two tasks: publishing the evidence of manuscript transmission, and reconstructing the original text. The prototype aims at fulfilling the first task by being a remarkably flexible repository for all the transmission data behind every NT manuscript variant; more on this in the third paper.
Section 2: The Methodology
In the second section, Wachtel addressed the second task in a brief introduction to the complicated Coherence-Based Genealogical Method. It seems that the most basic unique feature of both the prototype and the CBGM is that they allow the two tasks of text criticism (publishing transmission evidence, reconstructing the original text) to interface at every level of critical inquiry. While the CBGM is a process that potentially enables us to make more definitive decisions regarding variant NT readings, it is also a database of coherence-based genealogies that can be visualized and assessed in a variety of formats. Thus there is a constant interaction in this method between the publishing of variants in database form and the use of those variants in reconstructing the most probable original text.
Much more could be said about this that I will reserve for future comment; suffice it to say for now that my initial impression is that the CBGM exists as a result of emerging database technology, and certain emerging database technologies have taken shape based on the inherent logic of the CBGM. Perhaps it is this link between technology and praxis, long discussed in the work of Vattimo, McLuhan, Postman and others, that is reshaping our perspective on NT text criticism, rather than any particular ideology or material discovery. This technological distinctive is what distinguishes the CBGM from other genealogical methods.
As far as the actual Coherence-Based Genealogical Method is concerned, I will thankfully defer to Gerd Mink's online introduction. At first the CBGM seems a bit more technically complicated than other genealogical methods, but it does result in elegant renderings of large amounts of manuscript data. Mink's introduction can be distilled into two points (quoting from the above link):
1. Elements of a genealogical hypothesis are not the manuscripts but the states of the text that they convey and that may be far older than the respective manuscript. The text with its respective state will be referred to here as witness, not the manuscript.
2. A hypothesis is called a stemma if it links witnesses or variants genealogically. For a hypothesis about a genealogical connection not only the connection itself but its quality is relevant. This quality has to be documented by adequate data. This complexity is integrated into this understanding of stemma. Consequently, a stemma in the sense of a graphical connection of witnesses is merely a simplified representation of a stemma in the more complex sense.
I will leave you to read the rest of Mink's introduction at your leisure, as these two introductory points are sufficient for the moment. Wachtel pointed out that this method is specifically geared towards the idiosyncrasies of our current NT manuscript holdings. While there is a great wealth of NT manuscripts available, far more have not survived. This bald fact makes it difficult for us to connect early manuscripts with later readings without relying on a fair bit of conjecture. On the other hand, we must be clear that all surviving witnesses are related to each other in some fashion; there are always elements of coherence. Contamination simply "emerges from those texts at the disposal of the scribe." This leaves us with the working principle of establishing the genealogy of a reading based on every extant permutation of that text, through hypotheses represented stemmatically. In these various print and online projects, the INTF has become uniquely capable of such representations.
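To make the notion of coherence a little more concrete, here is a minimal toy sketch (not from Wachtel's paper; the passage IDs, reading labels, and function name are my own invention) of what "pre-genealogical coherence" amounts to in practice: the simple percentage agreement between two witnesses at the passages where both are extant, with no genealogical direction assumed.

```python
# Toy sketch of pre-genealogical coherence: percentage agreement between
# two witnesses at the passages where both are extant. A "witness" here is
# modeled as a dict mapping an (invented) passage ID to a reading label.

def pregenealogical_coherence(witness_a, witness_b):
    """Return the fraction of shared passages where two witnesses agree."""
    shared = [p for p in witness_a if p in witness_b]  # both extant here
    if not shared:
        return 0.0
    agreements = sum(1 for p in shared if witness_a[p] == witness_b[p])
    return agreements / len(shared)

# Invented toy data: three witnesses over four variant passages.
w1 = {"Jas1.1/4": "a", "Jas1.1/8": "b", "Jas1.2/2": "a", "Jas1.3/6": "a"}
w2 = {"Jas1.1/4": "a", "Jas1.1/8": "b", "Jas1.2/2": "c"}
w3 = {"Jas1.1/4": "d", "Jas1.2/2": "c", "Jas1.3/6": "a"}

print(pregenealogical_coherence(w1, w2))  # agrees at 2 of 3 shared passages
```

Because this measure ignores direction (who copied whom), it is "objective" in exactly the sense invoked later in the paper: it is raw statistical agreement, prior to any genealogical hypothesis.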
The trickier areas of the CBGM involve discerning between pre-genealogical and genealogical coherence, and navigating prior and posterior variants within a given stemma. I don't have the clearest grasp of some of these finer points, but the method allows one to establish the potential ancestors of broad groups of variants for a particular reading and work one's way back to the most probable ancestor for the entire group of readings. The most probable ancestor is usually attested by a relatively small group of manuscripts that can thus be regarded as closest to the original text. I hope that these stemmatic diagrams for key NT variants will be published as an additional resource, as they enable very efficient insight into textual relationships and grant easy access to the legwork behind the Editio Critica Maior.
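The "potential ancestor" idea can likewise be sketched in toy form. The gist, as I understand it: at each passage where two witnesses differ, a local stemma says which reading is prior; a witness that holds the prior reading more often than the posterior one, relative to some other witness, is a candidate ancestor of that witness. Everything below (passage IDs, the shape of the local stemmata, the function names) is my own illustrative assumption, not the INTF's actual data model.

```python
# Toy sketch of "potential ancestor" logic. For each passage where two
# witnesses differ, an (invented) local stemma records which reading each
# reading derives from; the witness more often holding the prior reading
# is a candidate ancestor of the other.

def directed_agreement(witness_a, witness_b, local_stemmata):
    """Count passages where A's reading is prior to B's, and vice versa."""
    a_prior = b_prior = 0
    for passage, prior_of in local_stemmata.items():
        if passage not in witness_a or passage not in witness_b:
            continue  # one witness is lacunose here; no evidence either way
        ra, rb = witness_a[passage], witness_b[passage]
        if ra == rb:
            continue  # agreement carries no genealogical direction
        if prior_of.get(rb) == ra:      # A's reading is the source of B's
            a_prior += 1
        elif prior_of.get(ra) == rb:    # B's reading is the source of A's
            b_prior += 1
    return a_prior, b_prior

def is_potential_ancestor(witness_a, witness_b, local_stemmata):
    a_prior, b_prior = directed_agreement(witness_a, witness_b, local_stemmata)
    return a_prior > b_prior

# Invented local stemmata: for each passage, map a reading to its prior reading.
stemmata = {
    "p1": {"b": "a"},             # at p1, reading b derives from reading a
    "p2": {"b": "a", "c": "a"},
    "p3": {"b": "a"},
}
wA = {"p1": "a", "p2": "a", "p3": "a"}
wB = {"p1": "b", "p2": "c", "p3": "a"}

print(is_potential_ancestor(wA, wB, stemmata))  # True: A is prior at p1 and p2
```

Note how this directed measure depends on the local stemmata, which are themselves revisable, whereas the percentage-agreement measure above does not. That difference is, as far as I can tell, the heart of the pre-genealogical/genealogical distinction.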
Conclusion
Wachtel had an excellent slide that visually summarized the CBGM, but essentially it is a system of checks and balances between Internal Criteria (explanations for given variants) and External Criteria (preconceived text-critical ideologies and pre-genealogical coherence). Both Internal and External Criteria establish local stemmata, then genealogical coherence within these stemmata, and then revise our preconceived notions regarding any particular reading, which leads to clarified relationships between manuscript variants. But as the CBGM is a methodology both linked to and part of a database, we are constantly able to revise our External Criteria based on the evidence of Internal Criteria. And we are also able to consistently reapply our revised External Criteria to the stemmatic diagramming of particular readings. It is a completely iterative process.
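The iterative character of the process can be thought of as a fixed-point loop: local decisions feed the global picture, the global picture feeds back into the local decisions, and the cycle repeats until nothing changes. The following is purely my own schematic gloss on that cycle, with a trivial made-up revision rule standing in for the real critical work.

```python
# Schematic sketch of an iterative revision cycle: keep re-applying a
# revision step to the set of local decisions until a fixed point is
# reached (no decision changes). The revision rule here is a placeholder.

def iterate_until_stable(decisions, revise):
    """Re-apply `revise` until the set of local decisions stabilizes."""
    while True:
        revised = revise(decisions)
        if revised == decisions:   # fixed point: no decision changed
            return revised
        decisions = revised

# Invented toy rule: resolve any passage still marked "undecided" to "a".
def toy_revise(decisions):
    return {p: ("a" if r == "undecided" else r) for p, r in decisions.items()}

print(iterate_until_stable({"p1": "a", "p2": "undecided"}, toy_revise))
```

In the real method, of course, each pass through the loop involves critical judgment, not a mechanical rule; the point of the sketch is only the feedback structure.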
To conclude, Wachtel made two points regarding the methodology:
1. One problem with text criticism is the intrusion of subjective reasoning when gaps in the data emerge. In the CBGM, however, some of the more subjective elements involved in establishing genealogical coherence are offset by the objective facts of pre-genealogical coherence that are represented in this set of statistical databases.
2. The Editio Critica Maior has been criticized for not differing much from NA27. But the CBGM is not just a "mopping up exercise" of clarifying and supporting existing readings based on this brilliant new database. The CBGM shows us what we are actually dealing with in terms of variants, in a variety of statistical and visual formats, and it shows us that we are dealing with probabilities rather than certainties. Wachtel didn't say this, but my impression is that while they are probabilities, they are darn good ones and far more helpful than the "certainties" of past text-critical enterprises. The CBGM is simply "a tool that allows us to be coherent in our argument."
Q and A session
(This is just a sampling.)
1. How does the "original text" or "reconstructed text" relate to the most probable ancestors of a given reading? Are they on the same footing?
No; the most probable ancestor is a hypothetical stemmatic rendering of all the extant data, but it does best explain the variants. The "ancestor" is a hypothesis of what the text looked like before transmission.
2. What about cases where the author himself makes a spelling or grammar error, so that the "ancestor" will be incorrect even though original?
In the Editio Critica Maior of James there are 15 points at which such situations are simply referred to as "lacunae." There is allowance in the process that the original text had spelling or grammar errors. (And it seems that the CBGM is very capable of charting the inevitable corrections stemmatically. This is another point at which the CBGM is simply a way of constructing more coherent transmission histories.)