Originally published on Discover Rackham.
Solving for the Real World
Imagine having to live your everyday life without access to a cell phone, computer, or tablet. Beyond feeling a prickling annoyance that the information superhighway is just beyond your fingertips, the absence of tech would severely limit your professional opportunities, personal autonomy, and ability to accomplish things in a world dependent on comprehensive connectivity. Without screen readers—technology that interprets and speaks aloud text and describes images on computing devices—that’s what life would be like for people living with blindness.
Rackham Ph.D. candidate and human-computer interaction researcher Jaylin Herskovitz is interested in developing a deep understanding of the needs of blind technology users so that she can co-design more accessible tech with them, leveraging artificial intelligence (AI) to help blind people get precise information efficiently and accurately.
“I am fulfilled in work that solves tangible, real-world problems that are impacting people’s everyday lives,” she says.
Collaboration Across Difference
One area where Herskovitz seeks to create greater accessibility is the collaborative functionality of Google Docs. While screen readers can recite the text in Google Docs, they cannot detect new text being added.

Herskovitz designed audio cues to alert blind users when another collaborator is adding new text to the document. The cues are louder when the collaborator is closer to the blind user’s cursor, and softer when the collaborator is farther away. Users can customize the cues, which range from chimes to verbal indicators.
“We want to provide people with both modes of interaction, because some people prefer one over another — and it’s nice to have a higher verbosity explanation when you’re new to a system,” she explains.
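As a rough illustration of how a proximity-based cue like this might be wired up in a web editor, the sketch below maps cursor distance to loudness using the browser’s Web Audio API. The character-based distance metric, the linear loudness curve, and the chime parameters are all illustrative assumptions, not details of Herskovitz’s implementation:

```typescript
// A minimal sketch of a distance-based audio cue using the Web Audio API.
// The distance metric and loudness curve are hypothetical choices made
// for illustration, not Herskovitz's actual design.

const audioCtx = new AudioContext();

// Map the gap between a collaborator's edit position and the blind user's
// cursor (measured in characters, for simplicity) to a gain in [0.1, 1.0]:
// nearby edits play loudly, distant edits play softly.
function gainForDistance(distanceInChars: number, maxDistance = 2000): number {
  const clamped = Math.min(distanceInChars, maxDistance);
  return 1.0 - 0.9 * (clamped / maxDistance);
}

// Play a short chime whose volume reflects how close the edit is.
function playEditCue(distanceInChars: number): void {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();

  osc.frequency.value = 880; // a simple sine-tone chime
  gain.gain.value = gainForDistance(distanceInChars);

  osc.connect(gain);
  gain.connect(audioCtx.destination);

  osc.start();
  osc.stop(audioCtx.currentTime + 0.15); // 150 ms blip
}

// e.g. a collaborator types 40 characters away from the user's cursor:
playEditCue(40);
```

The mapping function is the interesting design decision here: it controls how quickly a distant edit fades toward the quietest audible level, which is exactly the kind of parameter a verbal-indicator mode or a user preference could override.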
Herskovitz’s Google Docs modifications also include audio cues indicating that someone has used the software’s “comments” function, inviting users to inspect the comment further. Comment and edit summaries likewise offer blind users a more efficient way to collaborate.
“Throughout the design process, we’re conducting research to figure out the ideal way to communicate with blind users. If the results are negative, or if people actually prefer the old way, well, it’s back to the drawing board,” she says.
A Legacy of Access
The user-first spirit of Herskovitz’s work follows a proud legacy of shifting agency away from the limiting use cases that sighted developers can imagine and empowering blind users to create the customizations they need.

In the mid-1980s, Rackham alum and one of U-M’s first doctoral graduates in computer science, Jim Thatcher (Ph.D. 1963), and his U-M advisor-turned-colleague, Jesse Wright, worked together in the Mathematical Sciences Department of IBM Research to create the first computer screen readers.
While Thatcher is often solely credited with the invention, in an interview with the American Foundation for the Blind he cited Wright and other blind employees at IBM as essential collaborators on the project.
“I had no idea it would become an IBM product because I was just having fun, making the PC accessible for Jesse,” he was quoted as saying.
Herskovitz notes that when personal computers first hit the market, IBM hadn’t yet released screen readers for commercial use, leaving people with disabilities to create their own workarounds and supports.
“Accessibility is often people with disabilities taking matters into their own hands because they have no other option,” she affirms.
Exactly What You’re Looking For
While screen reading assistance is one aspect of creating access for people with vision disabilities, technology is also used to help blind people see the offline world.

Be My Eyes is a popular technology that connects blind or low-vision users with volunteers or AI to help them interpret their surroundings using live video captured by their phone’s camera. While this service provides essential support for so many people, it is also one of the platforms that have inspired Herskovitz’s appetite for improvement.
Herskovitz’s research reveals that blind users are frustrated by AI’s current capacity for interpreting imagery, finding it far too general to be efficient. If a blind user wanted to know the expiration date on a jar of salsa, AI wouldn’t necessarily know how to home in on that one detail, forcing the user to sit through a tedious computer readout of every bit of text on the jar’s label to get that information.
To address this issue, Herskovitz used techniques from the field of end-user programming, the practice of designing software so that non-programmers can create and modify it for their own specific needs, to create ProgramAlly, an application that lets blind people write small custom programs targeting exactly what they’d like to see.
“Programs could include things like finding a specific route number on a bus or finding the title of a book — essentially creating simple commands for people to specify exactly what they’re looking for that the AI can understand,” she says.
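To make the building-block idea concrete, here is a toy sketch of what one such user-composed filter program might look like. The TextDetection shape, the filterText block, and the hard-coded camera output are hypothetical stand-ins for illustration, not ProgramAlly’s actual interface:

```typescript
// A toy illustration of the end-user-programming idea behind ProgramAlly.
// The data shapes, block names, and filtering logic here are assumptions;
// the real app's building blocks differ.

interface TextDetection {
  text: string;       // a string the vision model read from the camera frame
  confidence: number; // the model's confidence in that reading
}

// One "building block": keep only detections matching a user-chosen pattern.
function filterText(
  detections: TextDetection[],
  pattern: RegExp,
): TextDetection[] {
  return detections.filter((d) => pattern.test(d.text));
}

// A tiny user "program": find a bus route number in whatever the camera
// sees. In a real system the detections would stream from an on-device
// OCR model; here they are hard-coded so the example is runnable.
const detections: TextDetection[] = [
  { text: "OUT OF SERVICE", confidence: 0.91 },
  { text: "Route 23 - Downtown", confidence: 0.97 },
  { text: "Please stand behind the line", confidence: 0.88 },
];

const routeMatches = filterText(detections, /route\s*\d+/i);
for (const match of routeMatches) {
  console.log(match.text); // in practice, spoken aloud by a screen reader
}
```

The appeal of this structure is that the user only supplies the part they care about, such as the pattern to look for, while the heavy lifting of reading the scene is delegated to existing models underneath.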
Herskovitz’s vision for the application is to use existing models as building blocks that users can customize to create something that fits their unique everyday needs. Her study involving blind participants revealed that even though most did not describe themselves as tech-savvy, they were enthusiastic about ProgramAlly and its capacity for customization.
“A really rewarding part of my work is to co-design and build something with blind participants — and actually get to experience people’s reactions to it, or how they would use it,” Herskovitz says.
