Monday, March 29, 2010

Iterative funding of startups - an entrepreneur's perspective

I am a first-time entrepreneur, and I have not yet raised any external funding for my venture. So far it is self-funded (a.k.a. bootstrapped). Recently I started to explore the different funding options available to us. All the blogs and articles I read are written from the VC's (investor's) perspective. So this is a view from the other side of the table: an entrepreneur's (an inexperienced one, I might add) perspective.

As I am dabbling with the "Lean" model of building a startup, what I see missing from the model is the "funding" side of the equation. Traditional sources of external funding are angels and VCs. We all start our ventures with a "big" dream. Yet the success in our minds is subconsciously anchored in getting "funded" instead of building a "sustainable business." As a result, we spend a lot of energy pursuing VC funding (a proxy for success). If we are lucky, we get a sizable chunk of money to build out our business. We get the whole amount upfront based on a "carefully crafted pitch" and a "big plan." Although the investor will try to hold our feet to the fire, once we are funded we get to spend the money in the hope that the plan will come to fruition before we use it all up. If not, we get to work on raising our next round of funding and repeat the cycle, this time along with our first-round investor. It feels like there is a "big waste" hiding in there somewhere. That is why the 37signals guys are so dead set against raising money.

I see this is not an issue of external funding vs. self-funding or customer funding (the customer pays for your product or service, which funds the growth of the business), though. It is an issue of how we earmark funds and spend them. Dave McClure has alluded to it in his presentation. He calls his alternative funding model "Incubator 2.0." I like the idea a lot. The gist of it is that instead of committing a (usually large) amount upfront for a long horizon, we fund the business based on "progressive learning." Dave's funding model has three stages:

1. "Micro seed" (3-6 months) corresponds to the "Customer Discovery" stage of the "Lean Startup" model. Expect to spend $0-$100K.

2. "Seed" (6-12 months) corresponds to the "Customer Validation" stage of the "Lean Startup" model. Expect to spend about $100K-$1M.

3. "Series A" (12-18 months) corresponds to the "Transition to Growth" stage of the "Lean Startup" model. Expect to spend about $1M-$5M.

I would like to extend this funding model further. Why not break the funding into even smaller amounts that each fund a set of experiments (hypotheses)? We make these funding decisions "iteratively." If externally funded, we set up an MoU with the investors to enable "incremental funding" as opposed to "big upfront funding." The whole model would look something like this:

The goal should be "just enough" funding for "sustainable growth." If we can do it through self-funding, great. If not, external funding is fine so long as it is in sync with the iterative model of building a business.
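To make the idea concrete, here is a minimal sketch (in Python, with made-up experiment names and budgets; nothing here comes from an actual term sheet) of how a tranche per experiment might work: each tranche is released only while hypotheses keep getting validated, so a failed experiment pauses spending instead of burning through a big upfront round.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str
    budget: int       # USD earmarked for this experiment only
    validated: bool   # did the experiment confirm the hypothesis?

def release_tranches(experiments, total_committed):
    """Release funding one experiment at a time; pause (preserving the
    remaining commitment) as soon as a hypothesis fails validation."""
    released = 0
    for exp in experiments:
        if released + exp.budget > total_committed:
            break  # commitment exhausted; time to renegotiate
        released += exp.budget
        if not exp.validated:
            break  # pause funding and revisit the plan with the investor
    return released, total_committed - released

# Hypothetical plan: the second hypothesis fails, so the last $50K never burns.
plan = [
    Experiment("Customers will sign up for a free trial", 20_000, True),
    Experiment("Trial users will convert to paid", 30_000, False),
    Experiment("Paid users will refer new customers", 50_000, True),
]
released, preserved = release_tranches(plan, 100_000)
```

In this sketch only $50K of the $100K commitment is spent before the failed validation stops the burn; the "big waste" the post describes is exactly the preserved remainder.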

Am I missing anything that would prevent us from implementing a true iterative funding model that works in lock step with iterative customer and product development model? By the way, we are looking for an Angel investor who would like to try this with us...:-) Seriously.

Friday, March 26, 2010

Startup wisdom is plentiful, but data are scarce

If you are trying to implement the "Lean" startup model or build a SaaS business, you are probably looking for answers to questions like these, as I am:

What is the typical time a company spends in "customer discovery" (problem-solution fit) for a specific category of application in my industry?
What is the typical time a company (in my industry/category) spends in "customer validation" (product-market fit)?
What is a typical customer "acquisition" conversion rate in my industry?
What is a typical customer "activation" conversion rate?
What is a typical customer "churn" or "retention" rate?
What is a typical customer "referral" rate?
What is a typical CAC or CLTV at different stages of development?

You may say the answers to these questions depend on specific situations. Yes, I agree. However, without any shared empirical knowledge, it is like walking in the dark. A shared database of such data would be helpful, don't you think?
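The metrics in the list above are related by simple arithmetic, which is part of why shared benchmarks would be so useful: knowing a typical churn rate for your category immediately implies a typical CLTV. A minimal sketch of the standard back-of-the-envelope formulas (the specific numbers below are purely illustrative, not from any real dataset):

```python
def cltv(arpu_monthly, monthly_churn, gross_margin=1.0):
    """Simple customer lifetime value: expected customer lifetime is
    1/churn months, so lifetime value is margin * ARPU / churn."""
    return gross_margin * arpu_monthly / monthly_churn

def cac_payback_months(cac, arpu_monthly):
    """Months of revenue needed to recover the customer acquisition cost."""
    return cac / arpu_monthly

# Illustrative numbers only: $50/month ARPU, 2.5% monthly churn, $600 CAC.
lifetime_value = cltv(50, 0.025)          # $2,000 expected lifetime revenue
payback = cac_payback_months(600, 50)     # 12 months to recover CAC
```

With industry benchmarks for churn and ARPU by stage, a startup could check whether its own CAC/CLTV ratio is in a healthy range rather than guessing in the dark.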

When it comes to advice on how to start a business, run it, and grow it to success, there is plenty to go around on the net. I appreciate all those insightful words of wisdom. They help me pull through the days when the going gets tough. They show me what I should consider when presented with a situation. However, these wisdoms fall short of actually helping us validate that we are on the right track, or that we are capital efficient. We are as open about sharing wisdom as we are closed about sharing actual data. I understand we are very reluctant to share real data because it exposes us (to our competitors, or to scrutiny/critique by the media or industry experts). As a result, we keep repeating the same mistakes our sage predecessors made themselves. It is a vicious cycle.

The value of these metrics will vary from industry to industry, but for the same industry and similar applications they should be similar. When I started to look for data on SaaS businesses, I came across a great blog by Philippe Botteri of Bessemer Venture Partners. He maintains these data for public SaaS companies. However, it is not helpful for startups. I wish we were tracking these numbers for "SaaS startups" without revealing the identity of individual companies. There are some examples of brave individuals (Peldi of Balsamiq, Dharmesh of HubSpot) who have shared some data in the past. We could create a shared Google spreadsheet where we voluntarily share these data as we progress through the different stages of our ventures. The format for this spreadsheet could be:
Industry | Category | Metric | "Customer Discovery" | "Customer Validation" | "Transition to Growth" | Growth
(the last four columns being the stages of development)

With a shared database of these metrics, we can become more efficient with our capital, and we will all be better off.

What do you think of the idea of sharing these data? Am I missing something that would "really" get in the way of sharing data anonymously? Or do you think it is not an issue and hence not worth solving?

Friday, March 19, 2010

Meet your customer at the door

Recently, I read an interesting blog post by Brant Cooper about the accidental discovery of product-market fit. The gist of it is that if your server goes down for some reason and you don't hear your customers calling you, or even better, screaming at you, you probably do not have product-market fit yet.

We also unknowingly implemented an effective process for exploring product-market fit. Essentially it was embedded in our "customer acquisition" workflow. This is how it worked:

1. A customer would sign up for ScrumPad for a 30-day free trial.
2. At any time during the trial period, the customer could cancel the trial subscription.
3. We would notify them by email of the upcoming end of the trial and remind them to explicitly cancel the subscription.
4. If they chose to cancel the subscription, we would ask them to take a short, optional "exit interview" (users could skip it) with these questions:
                          a. Reason for cancellation?
                                1. Speed, 2. Usability, 3. Lack of feature, 4. Price, 5. Other
                          b. Switching to another tool? If yes, which tool?
                          c. Reconsider in the future?
5. If not canceled before the trial period ended, the account "automatically" rolled into a "paid subscription" (since we did not have any "tiered pricing").
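The workflow above boils down to a small piece of account-state logic. Here is a minimal sketch (function and field names are illustrative, not ScrumPad's actual code) contrasting the original "auto opt-in" policy with the "explicit opt-in" policy described later in the post:

```python
from datetime import date, timedelta

TRIAL_DAYS = 30

def trial_status(signup: date, today: date, cancelled: bool,
                 upgraded: bool, explicit_opt_in: bool) -> str:
    """Decide what happens to a trial account under the two policies."""
    if cancelled:
        return "cancelled"   # -> offer the optional exit interview
    if today <= signup + timedelta(days=TRIAL_DAYS):
        return "in_trial"
    if explicit_opt_in:
        # New policy: taking no action suspends the account, never bills it.
        return "paid" if upgraded else "suspended"
    # Old policy: silence rolled the trial into a paid subscription.
    return "paid"

start = date(2010, 3, 1)
old_policy = trial_status(start, date(2010, 4, 5), False, False, False)  # "paid"
new_policy = trial_status(start, date(2010, 4, 5), False, False, True)   # "suspended"
```

The contrast makes the trade-off visible: the old policy forced every inactive customer through an explicit cancellation (and hence the exit interview), while the new one lets them silently lapse into suspension.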

Requiring explicit cancellation turned out to be a very effective "product-market fit" exploration tool. We got almost 100% participation in the "exit interview." There were users who would forget (even after all the reminders) to cancel. When they got an invoice (the controversial aspect of the process) at the end of the second month, they would come back and cancel the subscription. We would always waive the invoice. Based on the interview answers, we would go back to some of them for further clarification on some of their answers.

However, we also quickly learned that some customers considered this an "annoying," even "deceptive," practice. Although we never intended to use "auto opt-in" as a way to mislead anyone, we understood how it could be misconstrued as a "bad practice." It is important for us to do what feels right to our valued customers. So we have now changed the process from "auto opt-in" to "explicit opt-in" (that is, customers need to explicitly upgrade to a paid subscription before the trial period ends, or the trial account is suspended). This reduced the number of trial customers who go through an explicit cancellation process, and hence the exit interview.

We have now embedded a survey in our application, in addition to the "exit interview" during the cancellation process. However, it is difficult to get people to take the survey or to explicitly cancel their subscription.

How are you exploring product-market fit for your SaaS application? How do you think we could increase participation in our "exit survey"? Your insights will be much appreciated.