What I Learned Building an AI Agent for Leasing
Practices and reflections from my work as a product manager
(This article was drafted in November 2024. In it, "AI" and "GenAI" are used interchangeably, both referring primarily to GenAI.)
I am currently developing an AI agent for leasing purposes at a Series B rental startup. This agent improves conversion rates by providing smoother customer flows while helping human agents manage repetitive tasks in early sales. Over the past year, I have led the project from the ground up, taking it from an MVP with assumption testing to a more AI-native and scalable solution. Here are some of my key learnings from this experience.
Always problem first
The way to leverage GenAI differs between incumbents and new AI startups. Incumbents like Salesforce don’t rely fully on AI, but AI can accelerate and reinforce the business. For new AI startups like Character AI, AI is the foundation and core competency, i.e., AI-native. In other words, it’s important to understand the relationship between the vision and AI: what role AI will play and how to position it, such as reducing costs, increasing efficiency, or generating new revenue streams.
For example, early sales communication in house rentals can be supported by AI because it follows a standard SOP. House sales, by contrast, are more complex and involve higher-value transactions, so building trust through conversations between human agents and customers is key. The boom in GenAI lowers entry barriers and creates more opportunities, but also bubbles: some startups bolt AI onto everything and completely neglect the key problem that needs to be solved, when AI is only ever a means. GenAI isn’t a one-size-fits-all solution, and many problems don’t need AI at all.
How to find and verify potential needs
Let’s dive deeper into how to determine whether we should apply AI in a particular direction. AI is a powerful tool, but only when used in the right way; it is definitely not a silver bullet. Become familiar with the industry, and understand that AI cannot yet replace whole jobs, but it can handle specific tasks. Some criteria to consider: product-technology fit; whether the task follows rules and patterns or requires deep experience and subjective judgment; the degree of repetitiveness; error tolerance; and whether it touches compliance or other sensitive matters.
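To make these criteria concrete, here is a rough screening sketch. The criteria names, weights, and thresholds are invented for illustration, not a validated rubric; the point is simply that the questions above can be turned into a repeatable checklist.

```python
# Illustrative only: the screening criteria expressed as a rough fit score.
# Positive weights favor automating the task with AI; negative weights argue
# for keeping a human in the loop. All values here are assumptions.
CRITERIA = {
    "follows_rules_and_patterns": 3,   # SOP-like tasks suit AI well
    "highly_repetitive": 2,
    "high_error_tolerance": 2,         # low tolerance argues against automation
    "no_compliance_sensitivity": 2,
    "needs_subjective_judgment": -3,   # e.g. trust-building in house sales
}

def ai_fit_score(answers: dict[str, bool]) -> int:
    """Sum the weights of the criteria that hold for a given task."""
    return sum(w for k, w in CRITERIA.items() if answers.get(k, False))

# Early-sales Q&A in rentals: rule-driven, repetitive, but low error tolerance.
early_sales = {
    "follows_rules_and_patterns": True,
    "highly_repetitive": True,
    "high_error_tolerance": False,
    "no_compliance_sensitivity": True,
    "needs_subjective_judgment": False,
}
```

A task scoring high on this kind of checklist (like early-sales Q&A) is a candidate for AI support; a low or negative score (like lease signing) suggests leaving it with human agents for now.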
In my current company, GenAI is used for recommendations and for answering sales or property-related questions up until the rental application, but it is not yet trusted with the lease-signing process. I have spent a lot of time addressing pricing issues and hallucinations because error tolerance there is low.
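One common way to handle low error tolerance on pricing is a post-generation guardrail: verify any dollar figure in an AI-drafted reply against the listing database before sending, and fall back to a safe template on mismatch. The sketch below is a minimal illustration of that pattern, assuming a hypothetical `listed_rent` lookup; it is not the production implementation.

```python
import re

# Matches dollar amounts such as "$2,100" or "$950" in a drafted message.
PRICE_PATTERN = re.compile(r"\$\s?(\d{1,3}(?:,\d{3})*|\d+)")

def extract_prices(text: str) -> list[int]:
    """Pull every quoted dollar amount out of a drafted reply."""
    return [int(m.replace(",", "")) for m in PRICE_PATTERN.findall(text)]

def price_check(draft: str, listed_rent: int) -> bool:
    """Pass only if every price the model quoted equals the listed rent."""
    return all(p == listed_rent for p in extract_prices(draft))

def guarded_reply(draft: str, listed_rent: int) -> str:
    """Send the draft as-is, or a safe template if a price is hallucinated."""
    if price_check(draft, listed_rent):
        return draft
    return f"The listed monthly rent is ${listed_rent:,}. Happy to share more details!"
```

The same shape generalizes to other low-tolerance facts (availability dates, unit numbers): extract the claim, check it against the source of truth, and degrade gracefully rather than pass a hallucination to the customer.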
A new ecosystem
After setting a clear direction, building a usable AI product goes beyond models, algorithms, or AI/ML DevOps; it requires creating an entirely new ecosystem centered on user needs. This ecosystem integrates essential components such as an explanatory UI/UX, data privacy safeguards, specialized measurement systems, and an engaged feedback loop. Moreover, the technical infrastructure can be shared across features, enabling reusable techniques and foundational elements, while at the business level, relevant insights can inform both upstream and downstream products.
In the AI agent project I worked on, this ecosystem-driven approach was evident in how we tackled system compatibility challenges. The existing database and system design were not originally built for AI integration, requiring me to familiarize myself with the entire system and product line to facilitate seamless integration and necessary refactoring. At the same time, property insights, whether reported by customers or detected by AI, helped refine the property database, illustrating how project outcomes could enhance the broader system and drive continuous improvement.
The most fascinating aspect of GenAI products is their exploratory nature, full of ambiguity and imagination. To manage this, I conduct thorough analysis with the team, brainstorming a wide range of test cases and potential ways the AI could fail, while also managing leadership expectations. After sufficient research, testing assumptions and iterating through small-scale experiments helps me gather feedback quickly and guide the next steps.
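The brainstormed failure modes become most useful when kept as a regression suite that every agent iteration is scored against. Here is a minimal sketch of that idea; `run_agent`, the case names, and the pass criteria are all hypothetical placeholders for whatever the real agent and team define.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One brainstormed failure mode, with a check for acceptable replies."""
    name: str
    prompt: str
    check: Callable[[str], bool]  # True if the agent's reply is acceptable

def run_eval(agent: Callable[[str], str], cases: list[EvalCase]) -> dict[str, bool]:
    """Run every case and report pass/fail so regressions surface early."""
    return {c.name: c.check(agent(c.prompt)) for c in cases}

# Example failure cases (illustrative only, not the team's actual suite).
cases = [
    EvalCase("no_legal_advice", "Can I break my lease without penalty?",
             lambda r: "consult" in r.lower() or "agent" in r.lower()),
    EvalCase("stays_on_topic", "Write me a poem about cats",
             lambda r: "poem" not in r.lower()),
]
```

Even a tiny suite like this turns vague worries ("the agent might give legal advice") into a pass/fail signal each small-scale experiment can report.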
Jotting down these reflections in the moment has helped me settle and refine my learnings. More insights, and perhaps different perspectives, will surely emerge when I look back in the future.