There have been many turns in the road toward building machines that can match or best their human creators. To see how far we’ve come since the days of The Turk, the infamous chess-playing automaton, please read on.

As many of you know, I had the great pleasure of playing a major role in the development of CDC’s ProofWorks tool, which is designed to support the evaluation of Health Communication initiatives. You can reach it from the new landing page for the suite of HealthCommWorks tools, or visit this page directly to sign up or sign in. A summary of ProofWorks, including a list of the contributors, is on the About page.
I have been using this tool in my teaching and consulting, and I hope you will find many uses for it. As a Decision Support System, it has two components at its core:
– A series of questions that ask the user to think carefully about their context. These are broken into subcategories: defining the audience for the communication program, the communication program itself, evaluation stakeholder interests (the broad purpose or focus of the evaluation), stakeholder preferences and expectations regarding evaluation methods and rigour, and resources (funds and otherwise) for the evaluation. While these questions are all coded and drive the decision rules, I think they dovetail nicely with the types of questions an expert evaluator would ask a client. In total, users answer 12 questions that use check boxes, radio buttons, or short text fields, and another 23 questions answered with Likert scales. The questions are clustered into 5 Steps (a rough sketch of how these pieces might be represented follows this list).


– A series of recommendations based on the answers entered by the user, which the user is free to change at any time. Recommendations are given for focus (with six possibilities, including pre-testing and monitoring), indicators (more than 50 of them, arranged by ecological level), data collection methods (from a list of 12), and design (covering overall design, sampling, comparison points, frequency of measurement, and group makeup). The recommendations are organized into 4 Steps. Users are also encouraged to test out different scenarios (e.g., what would be done with modest resources, and what could be done if more substantial resources were found?).
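To make those two halves a little more concrete, here is a minimal sketch, in Python, of how the context questions and the evaluation options they feed into might be represented. It is a simplified illustration rather than ProofWorks’s actual data model, and the names and example values are chosen just for this post.

```python
from dataclasses import dataclass, field
from enum import Enum

class QuestionType(Enum):
    CHECKBOX = "checkbox"   # 12 questions use check boxes, radio buttons, or short text
    RADIO = "radio"
    TEXT = "text"
    LIKERT = "likert"       # the other 23 questions use Likert scales

@dataclass
class ContextQuestion:
    step: int                # which of the 5 input Steps the question belongs to
    prompt: str              # the question put to the user
    qtype: QuestionType
    choices: list[str] = field(default_factory=list)  # answer options, where applicable

@dataclass
class EvaluationOption:
    category: str            # "focus", "indicator", "data collection method", or "design"
    name: str                # e.g. "pre-testing" (one of the six focus possibilities)

# Illustrative entries drawn loosely from the description above
audience_question = ContextQuestion(
    step=1,
    prompt="Who is the audience for the communication program?",
    qtype=QuestionType.TEXT,
)
pre_testing = EvaluationOption(category="focus", name="pre-testing")
```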



Providing decision support, however, means more than listing options: the program offers advice on the relationship between the user’s context and the evaluation decisions they make. In practice, ProofWorks labels every option as Recommended, Possible, or Not Recommended. Beyond that, users are encouraged to ask, with a single click, why a given recommendation has been made. Asking “why” opens a table listing the contextual inputs the users themselves provided that led to the recommendation, and in many cases the table includes brief text statements further explaining the relationship. After considering the recommendations, the user is free to accept or reject any or all of them by clicking checkboxes, or to go back and reconsider some of the context they provided.
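As a rough illustration of how an option might end up labelled Recommended, Possible, or Not Recommended, and of what the “why” table draws on, here is a short sketch. The combination logic shown is simplified for illustration rather than a description of the real internals.

```python
from dataclasses import dataclass

@dataclass
class TriggeredRule:
    context_input: str   # the user's own answer that fired the rule
    effect: str          # "promote" or "eliminate"
    explanation: str     # brief text shown in the "why" table, where one exists

@dataclass
class Recommendation:
    option: str               # e.g. "pre-testing"
    status: str               # "Recommended", "Possible", or "Not Recommended"
    why: list[TriggeredRule]  # the inputs (and explanations) behind the label

def classify(option: str, triggered: list[TriggeredRule]) -> Recommendation:
    """Combine the rules the user's context has triggered for one option.

    Illustrative policy only: any eliminating rule wins, otherwise any
    promoting rule recommends, and with no applicable rules the option
    stays merely Possible.
    """
    if any(r.effect == "eliminate" for r in triggered):
        status = "Not Recommended"
    elif any(r.effect == "promote" for r in triggered):
        status = "Recommended"
    else:
        status = "Possible"
    return Recommendation(option, status, why=triggered)
```

Keeping the triggered rules attached to the recommendation is what makes the single-click “why” view possible: the table is simply those inputs and their explanations laid out for the user.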

ProofWorks also outputs a tidy summary that can be used as is or expanded if users wish to add more material.

On top of that information, which is delivered in real time and tailored entirely to the user’s own entries, ProofWorks has a host of learning tools, many of which link to other useful sites and documents. And, not surprisingly, there are numerous opportunities for online discussions.


My greatest hope is that ProofWorks serves as a solid framework for project teams to carefully define their context, consider alternatives, and consult with other experts. I hope that the “intelligence” built into the tool is sound and will continue to grow more sophisticated as ProofWorks gets feedback from users and is further evaluated.
Not too long ago, this type of program would have been unthinkable. But unlike The Turk, the infamous chess-playing automaton referred to earlier, ProofWorks has no human inside or otherwise controlling the output, though if there were, Dr. Tom Chapel of CDC, who worked on ProofWorks, might be able to provide expert advice for numerous simultaneous users! So if it is not humans quickly providing recommendations in real time, what is it?
Imagine a very large spreadsheet with all possible answers to the 35 context questions as rows and all possible recommendation options as columns. Each of the several thousand cells in that table represents an if-then inference that could be made, e.g., IF (program is winding down), THEN (pre-testing is NOT recommended). We never did create one big spreadsheet, but instead worked with about 12 more manageable ones. We looked for cases that promoted certain options or eliminated them, and in the end we had about 350 of these decision rules.
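One way to picture that spreadsheet in code is as a sparse mapping from (context answer, evaluation option) pairs to effects, with only the cells that hold one of the roughly 350 decision rules populated. Again, this is a simplified sketch rather than the actual implementation.

```python
# Rows of the imagined spreadsheet are possible answers to the 35 context
# questions; columns are the evaluation options. A populated cell is an
# if-then inference. Only a few hundred of the several thousand possible
# cells ended up holding one of the ~350 decision rules.
RULES: dict[tuple[str, str], str] = {
    # (context answer, evaluation option): effect
    ("program is winding down", "pre-testing"): "eliminate",  # the example from the text
}

def effects_for(answers: set[str], option: str) -> list[str]:
    """Collect the effects ("promote"/"eliminate") the user's answers trigger for one option."""
    return [effect
            for (answer, opt), effect in RULES.items()
            if opt == option and answer in answers]
```

A list of effects like this could then feed the kind of classification step sketched earlier to produce Recommended, Possible, or Not Recommended.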
At a certain point in our process, the talented programming team at ORAU created data tables that read in the spreadsheets, and once they were imported, I worked directly in the admin dashboard to enter inferences. We also created a handy utility with which reviewers could flag issues related to a given inference or explanation, along with utilities that showed all the Recommended and Not Recommended options resulting from any given context input, and a reverse logic table where we could see at a glance all the inputs that promoted or eliminated any given evaluation option. In addition to reviewing the inferences, the logic summaries, and the reverse logic summaries, we tested various scenarios to see whether the recommendations ProofWorks was making aligned with our thinking.
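Those forward and reverse views can be imagined as two look-ups over the same rule table. Here is a small sketch under the same simplified assumptions as above, not the actual ORAU implementation:

```python
from collections import defaultdict

# Reusing the sparse (context answer, evaluation option) -> effect mapping
# sketched earlier, with the single illustrative rule from the text.
RULES: dict[tuple[str, str], str] = {
    ("program is winding down", "pre-testing"): "eliminate",
}

def logic_summary(answer: str) -> dict[str, list[str]]:
    """Forward view: every option promoted or eliminated by one context input."""
    summary = defaultdict(list)
    for (a, option), effect in RULES.items():
        if a == answer:
            summary[effect].append(option)
    return dict(summary)

def reverse_logic(option: str) -> dict[str, list[str]]:
    """Reverse view: every context input that promotes or eliminates one option."""
    summary = defaultdict(list)
    for (answer, opt), effect in RULES.items():
        if opt == option:
            summary[effect].append(answer)
    return dict(summary)

# With the one rule above, reverse_logic("pre-testing") returns
# {"eliminate": ["program is winding down"]}.
```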
So there it is. I hope you will explore ProofWorks, and if you find it useful, that you will use it in your research, teaching, and consultations, and spread the word!
Many thanks again to the wonderful team who worked on this project.