Client Highlight: A Progress Report on AI for Software Teams
As Gorilla Logic continues to explore the potential of AI in software development, we’re excited to see our clients making similar progress. PURE Property Management has been exploring how AI can enhance their software development process, and they’ve shared some great insights from their journey. Below is a recent post written by PURE’s Head of Technology, Ashley Fidler, which dives into what they’ve learned so far and how they’re using AI to reimagine their workflows and empower their teams. Check it out:
We have been working on using AI in our software processes for about three months now at PURE. I wanted to give a more complete update on what we've learned, and where we are so far.
The TL;DR - Our goal is to empower our team with AI, and ultimately reimagine our software development lifecycle (SDLC) in a more AI-first way (if the results warrant it).
So far, we have built a set of (publicly available) VS Code plugins that support coding, refactoring, unit testing, code review, and pull requests; the suite is particularly good at testing. We’ve gotten a lot of initial benefit from this approach and are currently rolling it out across the team.
We’ve focused on breaking down the SDLC to make our AI use more modular and to help the team understand how best to apply it. This seems to be an effective approach so far. Our next step is improving measurement/metrics so we can talk about results more concretely.
What we've done - the details:
To start, we wanted to make sure everyone on the team had an opportunity to try out ChatGPT and Claude (or other tools) officially. About half of our team had already used these tools individually as part of their development process, but a number of people hadn’t (or had only used Copilot). We got team subscriptions to both tools and gave all the developers Claude to start. The product team requested both Claude and ChatGPT so that they could compare.
We then ran two projects in parallel: (1) investigating how to optimize our SDLC and (2) running a Hack Week to get initial outputs.
Optimizing our SDLC part 1: Framing & Prototypes
Based on experience with human crowdsourcing, we started from the hypothesis that we should break tasks down as much as possible. This makes it easier to get good results and, importantly, to quality-control the work as it goes along.
This led us to map out our SDLC and brainstorm what we might be able to automate at each step. We also released a couple of initial tools as a proof-of-concept, which are publicly available here: Mo Commit and Mo Mo Code Reviewer.
Hack Week
Then, after everyone had the tools for a couple of weeks, we ran a hack week where people could come up with their own projects either using AI for development or adding AI features into our platform. Here are some of the projects that resulted:
AI for Software
- AI-driven unit testing tool
- AI-driven test refactoring tool
- AI-driven component refactoring
- Prompt library for product specs and tickets
AI for Product Features
- Instant localization of our UI into multiple languages (Spanish, French, Russian)
- Natural-language custom report builder from our Postgres database
- Recommendation engine to match residents to properties
- Internal chatbot to answer property manager questions
Optimizing our SDLC, Part 2: Mo Coding Assistant
Finally, we looked at everything we had learned during month 1 of this project and decided to bring it all together. This has resulted in our Mo Coding Assistant - VS Code plug-ins for all the major parts of our development processes. This work was done by Felipe Mantilla and Michael Rodriguez.
Overall, this tool works by providing specialized prompts to Claude, with code context, to support specific pieces of the SDLC. Here’s more detail about the key features:
- 🤖 AI-Powered Commit Messages: Generate meaningful commit messages automatically based on your code changes.
- 🔧 Intelligent Code Refactoring: Improve your code quality with AI-suggested refactoring.
- 🧪 Comprehensive Unit Testing Support: Create, update, run, and validate unit tests with ease.
- 📚 Context-Aware Assistance: Add files to the AI's context for more accurate and relevant responses.
- 🎨 Interactive Webview Interface: Access all features through a user-friendly sidebar interface.
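To make the "specialized prompts plus code context" idea concrete, here is a minimal sketch of how a plugin along these lines might assemble a request for a model. The names (`PromptTask`, `buildPrompt`, the instruction text) are illustrative assumptions, not taken from the Mo Coding Assistant source:

```typescript
// Hypothetical sketch: one task-specific instruction per SDLC step,
// combined with whatever code files the user has added to the context.
type PromptTask = "commit" | "refactor" | "unit-test" | "review";

interface ContextFile {
  path: string;
  contents: string;
}

const TASK_INSTRUCTIONS: Record<PromptTask, string> = {
  commit: "Write a concise, conventional commit message for the following changes.",
  refactor: "Suggest refactorings that improve readability without changing behavior.",
  "unit-test": "Write unit tests that probe edge cases and likely failure modes.",
  review: "Review this change as a senior engineer; flag bugs and risky patterns.",
};

// Fence each file so the model can tell prose from code, then prepend
// the instruction for the selected SDLC task.
function buildPrompt(task: PromptTask, files: ContextFile[]): string {
  const context = files
    .map((f) => `File: ${f.path}\n\`\`\`\n${f.contents}\n\`\`\``)
    .join("\n\n");
  return `${TASK_INSTRUCTIONS[task]}\n\n${context}`;
}
```

The resulting string would then be sent to Claude; the plugin's value is mostly in choosing the right instruction for each step and keeping the attached context relevant.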
Applications for Product
At the same time, we also built out a v1 prompt library for product that handles spec creation and ticketing. This work was done by Kelly Kane. We were able to write a mostly complete spec that needed some editing and turn it into an epic full of Jira tickets that the team was able to refine. In our next version, we are planning to revise the prompts and directly plug them into Jira.
What we’ve learned:
Here are some key takeaways from our experiments so far:
- Breaking down the SDLC helped us be faster and more targeted. We knew exactly what problems we wanted to solve most and where we thought we could get value. Testing and refactoring have been pain points for us, so we started there and were able to make a lot of progress.
- Centralizing tools and bringing in more context helps adoption. This is obvious, but it helped the team a lot when we brought everything into our DevOps world and provided the tools and training to use them. Everyone is using the tool now, whereas before, adoption of AI was variable across the team.
- Human-AI interaction can improve code quality. For example, we set up Mo Code to write harder unit tests that focus on behavior it expects should work. This has, unexpectedly, led us to improve our features and catch errors and issues we might otherwise have missed.
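As a hypothetical illustration of what a "harder" generated test can look like in practice (the function and tests here are invented for this sketch, not taken from Mo Code's output): instead of only asserting the happy path, an edge-focused test asserts what the feature *should* do at its margins, which is where it tends to surface real gaps.

```typescript
// Hypothetical feature code: format a resident's display name.
function formatResidentName(first: string, last: string): string {
  return `${last.trim()}, ${first.trim()}`;
}

// A happy-path test would only check typical input:
//   formatResidentName("Ada", "Lovelace") === "Lovelace, Ada"
// The harder tests below probe the margins of the same function.
function testEdgeCases(): void {
  // Whitespace should be normalized, not leak into the output.
  if (formatResidentName(" Ada ", " Lovelace ") !== "Lovelace, Ada") {
    throw new Error("whitespace not trimmed");
  }
  // Single-name residents: the current code yields ", Ada". A test that
  // pins this down forces the team to decide whether that is acceptable,
  // which is exactly the kind of feature question such tests surface.
  if (formatResidentName("Ada", "") !== ", Ada") {
    throw new Error("single-name behavior changed");
  }
}
```

The point of the second assertion is not that the output is right, but that the test makes an implicit behavior explicit so a human can review it.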
Next, PURE is working on measuring the impact of its coding assistant on team velocity. Ashley's next post about this project will cover this topic in depth. Subscribe to her newsletter here.