Today's post is a little outside the norm for us. We usually do blog posts on ways to grow your pet care business, how to better utilize Time To Pet or to shine some light on some of our awesome customers. But today, we want to pull the curtain back a little bit on our business. More specifically --- how we evaluate feature requests and determine what features we should be working on.

We are extremely lucky to have clients who are engaged with our product. Because of this engagement, our customers are continuously thinking of ways that Time To Pet can better help their business. It's really amazing because it makes our job easier. All of our best features (like the Client App, Visit Report Cards, and Text Messaging) are direct results of feature requests from our customers.

While we would love to build every feature request that comes in, unfortunately it's just not possible with a SaaS product. We have to take into account a lot of different factors like a limited amount of development time and the fact that some features may make Time To Pet more difficult for other users. We also get some feature requests that at first glance seem crazy but end up being some of the most important additions to our system. Because of this --- we track every single feature request that our customers submit. But tracking feature requests is not enough. We also need a fair and comprehensive way to evaluate the feature requests to determine which are most important.

As a software company, we try to use systems, rules, and procedures in all of the decisions we make. While there is most certainly a place for "common sense", making decisions based on actual data is one of our core values. So what data do we use to evaluate feature requests? There are four major data points.

  1. How many times do we hear this feature request? A feature that is requested frequently carries more weight than one we rarely hear about.

  2.  How difficult/time-consuming would it be to build, test and release this feature? A feature that is incredibly complex or outside the scope of our system has less weight attributed to it than a small change that would make a big difference.

  3.  How many of our customers would be impacted by this feature? A feature that positively impacts a large number of customers is weighted more than a feature that would impact only a small number or have some negative impact to customers.

  4. Does the feature fit into our general vision of Time To Pet? Kyle and I have a vision of how we expect Time To Pet to help our customers improve their businesses and ultimately their lives. A feature that helps support that vision will be scored higher.

Now how do we use these data points to actually score features? Well, it's actually pretty simple. We've created an internal tool that helps us track each and every request. In addition to tracking the features, our internal tool gives each request a score based on different factors. A feature request that scores high on the different factors will move up the list. A request that doesn't score as highly is kept on the list in case some of the data changes (like more customers request that feature). We review these feature requests regularly to determine what we should be spending our development time on.
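To make the idea concrete, here is a minimal sketch of what a weighted score like this could look like. The factor names, weights, and scales below are purely illustrative assumptions for the sake of example, not the actual formula inside our internal tool:

```python
# Hypothetical sketch of a feature-request score. All weights and
# factor scales are made-up assumptions for illustration only.

def score_request(times_requested, difficulty, customers_impacted, fits_vision):
    """Score a feature request; higher scores move up the list.

    times_requested: how many customers have asked for the feature
    difficulty: 1 (trivial) to 5 (very complex); counts against the score
    customers_impacted: estimated number of customers who would benefit
    fits_vision: True if the feature supports the product vision
    """
    score = 0.0
    score += 2.0 * times_requested      # demand: repeated requests add weight
    score -= 3.0 * difficulty           # cost to build, test, and release
    score += 0.5 * customers_impacted   # breadth of positive impact
    if fits_vision:
        score *= 1.25                   # vision fit boosts the total
    return score

# Illustrative requests, ranked highest score first.
requests = {
    "text-messaging": score_request(40, 3, 200, True),
    "custom-fonts": score_request(2, 2, 10, False),
}
ranked = sorted(requests, key=requests.get, reverse=True)
```

A highly requested, broadly useful feature that fits the vision rises to the top, while a niche request stays on the list with a low score until its data changes.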

As Time To Pet has grown, so has the number of feature requests we get. Managing them in this way has been an incredibly simple and fair way to determine what we should be building next.