Philly Truce - Phase 3

Project Overview & Background:

Philly Truce is a non-profit organization focused on community safety and violence intervention. It operates multi-faceted violence-reduction programs like Peace Patrol and Safe Path Monitors, through which volunteers actively patrol community streets and neighborhoods to observe and mitigate incidents of violence. As Philly Truce expands its initiatives, the incidents its volunteers encounter have become increasingly difficult to document and manage.

Philly Truce aims to develop a digital platform that allows Safe Path Monitors and Peace Patrol to better document and manage incidents they encounter. The creation of this platform would allow Philly Truce to more effectively manage incidents, improve the deployment of Safe Path personnel, and better engage community members in helping curb violence in and around Philadelphia neighborhoods, improving the overall effectiveness of its violence reduction programs.

Time: July - September 2024 (four two-week sprints)

Role: UX Writer & Content Designer (on team of 4)

Tools: Figma, FigJam, Draw.io, Google Meet, Notion, Flesch-Kincaid Calculator

Problem

Currently, Safe Path Monitor and Peace Patrol volunteers document incidents they encounter in the community using paper-based reports. This makes effectively utilizing the incident-related data difficult and time-consuming.

Solution

The UX Writing Team iterated on the existing Philly Truce app and designed a chatbot feature that allows students to anonymously report incidents through their phone’s messaging app. This solution would make reporting more streamlined and accessible for students as well as for Safe Path Monitors and Peace Patrol members.

My primary focus on the team was designing the chatbot flow and crafting UX copy that met both user needs and project objectives.

Sprint 1: Kickoff & Content Audit

Research & Discovery

This phase was the first time a UX Writing Team was part of the full UX Design team. To begin, the team and I reviewed previous testing and research insights, focusing on current pain points with the MVP. This included a content audit of the existing UX copy, which surfaced minor grammar issues and opportunities to improve clarity in button labels and instructional text.

Understanding the Audience

When it came to planning and ideating on the chatbot, the team’s primary consideration was writing appropriately for the student user base, which ranged from 5th to 12th grade.

After researching reading levels and best practices, we determined our goal was to write at a 5th or 6th grade reading level.

We also used a tool called the Flesch-Kincaid Calculator to ensure the chatbot language was appropriate and accessible for younger students. This calculator gives an estimated reading level and readability score, among other metrics.
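The Flesch-Kincaid grade level is derived from average sentence length and average syllables per word. As a rough sketch of how such a score is computed (the vowel-group syllable counter below is a naive approximation; this is illustrative, not the calculator the team used):

```python
import re

def count_syllables(word):
    """Approximate syllables by counting vowel groups (naive heuristic)."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # drop a typical silent final 'e'
    return max(count, 1)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Shorter sentences and shorter words push the score down toward the 5th-6th grade target.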

Chatbot Ideation

Based on research from previous project phases and the Report Details page in the app, the team determined the key information the chatbot needed to gather: type of incident (e.g., fight, bullying, threat, shooting), location, and date/time.

The chatbot was designed to integrate with the students’ SMS messaging app because anonymity was key, taking into account “snitching” culture and the stigma reporting can carry. With this information, I ideated the general structure of the conversational flow, making sure to include questions that asked for the information needed for the reports.

View full chatbot flow here:

Sprint 2: Chatbot Flow

Drafting & Creating the Chatbot

We finalized the first version of the conversational flow for the chatbot, focusing on three key pieces of information: incident type, time, and location. For some questions, instead of requiring students to type lengthy responses, we designed a quick-reply system using number options (e.g., “1” for fight, “2” for bullying), with an “Other” option to capture incidents outside the predefined categories.
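The number-based quick-reply mapping can be sketched as follows (illustrative Python; the category labels mirror the incident types above, and the function name is hypothetical, not the production implementation):

```python
# Quick-reply options presented to the student, keyed by number.
INCIDENT_TYPES = {
    "1": "Fight",
    "2": "Bullying",
    "3": "Threat",
    "4": "Shooting",
    "5": "Other",
}

def parse_incident_reply(reply):
    """Map an SMS quick-reply to an incident type.

    Accepts either the number or the word itself; returns None when the
    reply is unrecognized so the bot can re-prompt with an error message.
    """
    text = reply.strip().lower()
    if text in INCIDENT_TYPES:
        return INCIDENT_TYPES[text]
    for label in INCIDENT_TYPES.values():
        if text == label.lower():
            return label
    return None
```

Accepting both the number and the word keeps the flow forgiving for users who ignore the numbered format.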

Messaging, Tone & Simplicity

Given the sensitive nature of the reports, the team wrote copy that was empathetic and simple, ensuring that students wouldn’t feel overwhelmed. All chatbot messages were checked with the Flesch-Kincaid Readability Calculator to ensure they were at a 5th or 6th grade reading level.

Sprint 3: Adapting to New Information

New Information for Incident Report Details

During this sprint, the client shared unexpected information about the types of incidents being reported, forcing us to quickly pivot and adjust our chatbot flow to keep the project on track. We learned that nearly all reported incidents were some type of fight. As a result, the chatbot had to shift focus from various incident types to gathering more detailed information about the fights themselves, including:

  • Whether the fight was one-on-one or involved multiple people.

  • Whether weapons were involved.

  • If any gang affiliations were mentioned.

  • The time and/or location of the planned fight.

  • If the fight was an act of retaliation or recurring.

An Expanding User Base

We also learned that the target audience for the chatbot would expand beyond students. While teenagers remained the primary audience, this required keeping the tone appropriate for younger users while ensuring it was clear and accessible to the broader community.

Lastly, we were informed that the users of the Philly Truce app were no longer called Safe Path Monitors but Peace Patrol. This was mostly a simple term change, but we needed to ensure that every mention of Safe Path Monitor in the app was removed or changed to Peace Patrol.

Implementing New Changes

The new report details had a significant impact not only on the writing team but on the entire phase. It was a great opportunity to stay flexible and adapt while remaining aligned with the project’s goals, and it encouraged collaboration across all teams.

We adjusted the chatbot flow accordingly, adding questions and incident options about weapons and gangs. We also wrote error messages for cases in which users gave unexpected responses.
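The error-handling pattern, where an unrecognized response triggers a re-prompt that repeats the original question and options, can be sketched like this (hypothetical Python; `get_reply` and `MAX_RETRIES` are illustrative stand-ins, not the actual implementation):

```python
MAX_RETRIES = 2  # assumed limit for illustration

def ask_with_retry(question, options, get_reply):
    """Ask a question over SMS, re-prompting on unrecognized replies.

    `options` are strings like "1 Yes"; `get_reply` stands in for
    whatever function sends the prompt and reads the user's response.
    Returns the chosen option number, or None after too many retries.
    """
    prompt = f"{question}\n" + "\n".join(options)
    valid = {o.split(" ", 1)[0] for o in options}  # e.g. {"1", "2"}
    for _ in range(MAX_RETRIES + 1):
        reply = get_reply(prompt).strip()
        if reply in valid:
            return reply
        # Error message repeats the original question and options.
        prompt = ("Sorry, I didn't understand that. "
                  f"{question}\n" + "\n".join(options))
    return None
```

Repeating the question inside the error message means users never have to scroll back to recall what was asked.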

Final Chatbot Flow

With the new changes implemented, we created the final chatbot flow for Phase 3. Its main questions covered the type of fight, when and where it would occur, and gang affiliations. Error messages were also added to finalize the flow for the Development Team.

This was then handed off to the Research Team for Sprint 4 to conduct testing.

Sprint 4: Finalizing & Testing

Testing

After finalizing our iterations, the chatbot was handed off to the research team for a pilot test. They conducted a single pilot test with one participant to ensure the logistics ran smoothly before official usability testing. This helped both the research team and the content team identify a few minor adjustments to improve clarity and ease of use.

The official first round of usability testing was completed with 4 participants and analyzed the clarity, efficiency, and empathy of the chatbot.

Key Insights

  • Participants felt comfortable interacting with the chatbot and felt the overall experience was smooth. They also appreciated the anonymity and straightforward responses.

  • The chatbot’s questions were clear and easy to follow.

  • Participants felt the chatbot lacked empathy and was impersonal, as the messages did not address the user’s potential emotional state.

  • While users were able to complete their reports and felt the questions were relevant, there was still feedback on areas for improvement. One consistent piece of feedback concerned what to do in an urgent situation where 911 should be dialed.

Recommendations

  • Be upfront about the chatbot: Let users know from the start of the flow that they are chatting with a bot.

  • Add empathy: Include more empathetic messages like, “I’m sorry you had to witness that,” to show understanding.

  • Offer emergency advice: When users are in immediate danger, have the chatbot adjust its questions or guide users to contact 911 or a nearby adult for help.

  • Make clearer error messages: When errors happen, repeat the original question along with the response options to avoid confusion.

Reflection

  • Staying flexible in Agile was one of my biggest takeaways. Product details and business needs can change unexpectedly, and it is crucial to adapt. Learning to stay flexible and open helped me handle the changes without disrupting my work.

  • Working closely with different teams gave me insight into design sprint cycles and working in an Agile environment. I also learned more about the roles of project management, product strategy, research, design, and development.

  • Working on a chatbot required anticipating how users might respond to and interpret questions. Coming from a marketing background, I understand the need to think about how users might respond to certain messaging and the various ways to gather information. Brainstorming with the team was key here, and I appreciated everyone’s point of view and the perspectives they brought.

Next Steps in Phase 4

  • Continue refining the chatbot and in-app copy based on feedback from testing.

  • Finalize and update the handoff documentation for the next phase of development.