Practitioner Interviews

Interview with Karyn Milligan – Santa Barbara Co. (CA) Probation Department

Photo of Karyn Milligan

Date: November 18, 2019

Karyn recently led a trial to test text-message reminders for probation appointments. The trial was planned to randomly assign 146 clients on probation either to receive a text-message reminder or not, with the outcome being whether the client attended the appointment (link to snapshot). More recently, Karyn has been working with several colleagues on a trial to test GPS tracking for probationers and is also conducting a deep dive into existing data to identify predictors of recidivism. Karyn discusses her experiences developing and implementing research to gather empirical evidence to guide practice.

As the Research & Special Projects Manager for the Department, you have a lot of experience in coordinating and conducting research. Can you tell us about some of the research opportunities implemented in your department?

Like many agencies, my department is committed to evaluating the effectiveness of its programs and policies. As a result, we are constantly engaged in research and in generating new and interesting questions to test; we rely on a variety of partners to help us accomplish this work. In collaboration with an academic partner we recently completed a process evaluation of our Substance Abuse Treatment Court, evaluated client engagement and retention in our Moral Reconation Therapy program, and analyzed client outcomes for those served in our SB678 and AB109 programs. We also contracted with a firm to assist in locally validating our adult risk-assessment tool. In addition, we partnered with BetaGov to test the extent to which receiving a text-message appointment reminder impacted client attendance at office visits. Our partnership with BetaGov has recently expanded to include a randomized controlled trial to test the use of GPS as a method of increasing supervision compliance and reducing recidivism.

Have you participated in both traditional academic research (developed and led by an outside researcher) and in-house research? If so, what do you see as the pros and cons of each?

As my department moves to deeper implementation of evidence-based practices, we’ve recognized the value and specific utility of both traditional academic and in-house research. Conducting in-house research allows us to build on internal knowledge of the program or process we are evaluating and the data available to best inform the research. In-house research also gives us the opportunity to monitor progress, QA the data and related data-entry processes, and quickly pivot if the research design or methodology needs refinement. One of the most influential factors (other than cost) in deciding to conduct research in-house is the access it provides to early results that can inform our practice. We have found outside research teams to be valuable in studies where advanced statistical expertise is required, or in sensitive evaluations where an independent team can lend legitimacy to controversial findings or recommendations.

I think our readers would be interested to learn that, even with years of research experience, trial preparation and implementation can still be a challenge. Can you tell us about some of the experiences that you have had?

Trial implementation continues to be the greatest threat to a successful study and reliable results. To strengthen our process, we have established an iterative engagement and QA practice that creates “touch-point” opportunities to address areas of concern early. We intentionally include staff and stakeholders in the formative research-design process to share concerns, suggestions, and areas for improvement. In our experience, involving key participants has informed the design and methodology, surfaced challenges for which we’ve collectively created workaround strategies, allowed for thorough documentation of the process, and generated a list of frequently asked questions to address before the launch of the trial. After the launch, we schedule check-ins with the team to review any implementation challenges or concerns and to confirm adherence to the research model. We also regularly monitor results to identify areas of concern or where staff may need additional training in the research design. We have found this deliberate approach helpful in maintaining the fidelity of the trial design.

How do you apply what you have learned in a trial to your day-to-day operations?

We share the results and discuss ways to infuse our practice with the knowledge learned. Communicating results can be challenging, especially with academic research that can be dense and complex in the way findings are published in a final report. One of the great benefits of working with BetaGov is that they synthesize trial findings in an easy-to-comprehend, user-friendly, one-page document. Locally, we have used that document and shared it broadly—with external stakeholders, internally with staff, in public presentations, etc. In sharing the results, we prompt conversation about how to better operationalize knowledge gained from the study both in terms of the trial findings and in our approach to trial preparation and implementation.

How do you decide what you want to test? What are your criteria for a trial?

We will test anything that could improve our practice, the services we provide to our clients, or their success on supervision. BetaGov has been a valued partner in generating action-oriented research!

Do you have any advice for others who may see opportunities for testing an idea?

Test it! BetaGov makes testing an idea extremely easy. Having local evidence allows us to confidently determine whether an approach is working and is an effective use of county resources!