The first wave of Berlin startups was predominantly German MBAs, mainly first-time entrepreneurs, building German companies often based on US role models for the German market (with some internationalisation later), backed initially by German investors. This wave has created billions in exits but is now more or less over.
The second wave was more original companies addressing global markets, more product- and design-driven, with a large share of international (still mainly first-time) founders and the occasional top-tier international syndicate at later stages. This wave has produced several companies worth hundreds of millions, but they are still comparatively young and my guess is they will need another 12-24 months before we see large exits.
We are now firmly entering the third wave of Berlin startups.
The third wave is more geeky, more engineering-driven, more enterprise, more open source, with a large share of serial entrepreneurs, more maturity and, already at early stages, many good international investors backing them up. The third wave started over the last 12-24 months (with some companies having bootstrapped for longer and now raising their heads) but has already produced some really interesting activity across a few micro-clusters not everyone has on their radar screen yet:
- Developer tools / code ecosystem (e.g. RhodeCode, TravisCI)
- Bitcoin, blockchain innovators and similar (e.g. Bitbond, Ascribe)
- E-health (e.g. Clue, Klara, MediGo)
- Vertical enterprise SaaS (e.g. SmallImprovements)
- Security (e.g. Zenmate)
- Market infrastructure (e.g. GoEuro)
- FinTech (e.g. SavingGlobal, No26)
- More next-gen adtech companies than I could possibly name
- IoT (e.g. Lock8, Relayr)
I did this off the cuff, so I am sure I have forgotten to name a lot of micro-clusters and startups, but you get the picture: the third wave is not going to be a one-trick pony.
I am excited about the third wave because we have only seen the tip of the iceberg.
One of the things we learned as a team is that if we see a problem, a misunderstanding, etc., it needs to be addressed and fixed immediately. The same can be said in many other areas – that technical debt that is piling up, that employee you know you will need to ask to leave the company but are hesitating to, that redesign you keep pushing out, etc.
Everything gets worse with time, exponentially. So the energy and expense required to fix the problem also grow exponentially.
Now I know not everything can be fixed immediately; but if it can – it should be.
As an investor, especially if you are passionate about your companies, it is very easy to jump in. Say the company is going through a rough patch. You want to help. You kick into action: weekly calls to 'help', scrambling to make emergency hires, a firework of intros to people that can 'help', etc.
It is the right attitude and it works a lot of the time. Sometimes, however, it just makes things worse and becomes a huge distraction for the team. In those cases where the team has a good plan, the right capabilities and the right mindset, you often just need to show you trust the team, give them confidence and get out of the way.
Otherwise you are meaning well but doing harm.
Ever so slowly, probably without really noticing, a lot of startups will find themselves in a situation where the team is blindly executing to hit whatever numbers are in the plan / budget they agreed on with their board. This usually comes out of fear of disappointing the board, or out of a wish to avoid tough, more fundamental discussions. It is mostly the board's fault for not encouraging open and frank discussions; it is a vicious cycle that always ends in tears.
While I have nothing against companies achieving their plans / budgets, I have also noted over the years that static plans / budgets, usually over 6 months old, aren't particularly good guidebooks for building and growing a company. I know this is from the church of the bleeding obvious, but I see it all the time.
So these are some of the things that can happen if you execute purely to achieve your plan vs doing what is right:
- You have bad activation and retention rates. You should be stepping back and fixing the product. But you are so desperate to reach your sign-up targets that you just fire off lots of growth measures. You have wasted ammo and, trust me, your numbers will come right back down once the hot air is out of the system. But you may just make those quarterly sign-up goals in time for the board meeting.
- You just aren’t ready yet for that expensive outside VP of Sales. You have some work to do in making your product more attractive and sellable to enterprises. But you promised your board you would hire this VP of Sales this quarter, so you go and do just that. That person joins and is bound to totally fail at their job; frustration all around. They would have done a great job just two quarters later.
- You could grow your marketplace / network much more quickly if you radically reduced your pricing. You would profit from this substantially in the long run. However, that would reduce your revenues by 50% and leave you a long way away from that “$5m run-rate” you wanted to hit next quarter. So you keep your pricing and give up long-term category leadership for near-term (meaningless) financial rewards.
All of this piles up really quickly. The hole you are digging gets deeper and deeper, and before you know it the only way out is a major reset of everything (including the board-entrepreneur relationship); we may not all recover once we start spinning like that.
So one of the things I learned is that, right from the get-go in our VC-entrepreneur relationship, we need to have the understanding that we are always going to do ‘the right thing’ to achieve our long-term goals. We are always going to be open, always willing to step back and rethink everything if need be.
I know this view can be overly romantic. Sometimes (especially around fundraising) you just have to make some ‘dark lord’ moves. But I don’t like it, I don’t think it’s good, and I think we should be trying really hard to do what is right, not what is in the plan.
A post by David Meyer about workers’ rights in light of algorithm-driven on-demand platforms reminded me that in the tech scene (myself included) we are more often than not really naive about, or maybe even entirely ignorant of, social problems and how we are influencing them – for good and for bad. What is our responsibility? Do we need to care? Aren’t we the good guys anyway?
Some folks with a voice have kicked off this important debate – e.g. Marc Andreessen and Albert Wenger (another of his posts, Labor Day: Right to an API Key (Algorithmic Organizing), should be read together with David’s aforementioned piece) – but it has not (yet) really caught on in the wider tech scene. It should.
Example 1: I really love SF and its people. SF is our technology Mecca; the place with the highest density of smart, wealthy and powerful people in our industry – the most capable of changing the world. Yet the streets of SF are also home to some of the most extreme misery and poverty you can imagine in a Western society. How can that be? Unfortunately the bitter answer is most likely: we just don’t care enough, even when confronted with poverty and misery at our front door.
Example 2: To date, more technology has (more or less) always translated into progress for every stratum of society. By driving technology and innovation we are automatically enhancing society. This is our mantra, and it may well continue that way. But will it? Albert Wenger again sums it up nicely in “It is OK to Worry about Work (& Doesn’t Make you a Luddite or Socialist)”:
During the first industrial revolution people worried about machines replacing human workers because machines provided mechanical power. Well, it turned out that humans were still needed because we supplied brain power. This time round though, at the dawning of the “Second Machine Age” we are worrying because machines are providing brain power. That’s a new and different set of circumstances and so we should rightly re-examine this question and not just take a no answer for granted.
I could not agree more – we cannot afford to be ignorant of these questions and challenges.
The other question is of course what do we do with the extreme wealth that is created in the tech scene? We are on the better side of the huge wealth gap that is opening up more and more. Some of you may have seen this already, but you just have to watch Nick Hanauer talk about this:
So it is absolutely OK to firmly believe that only an economic system that is free and rewards performance will lead to prosperity (I certainly do), but also that some core principles must be adhered to:
- The education you can access should be independent of the wealth of your family
- Any critical medical treatment should be available to anyone irrespective of their financial resources
- If you lose your job society should help you get back on your feet and help you through those times (also financially)
- You should be able to live on what you earn
- We need to structure our economy in a way that it allows for easier upwards social mobility
- [the list could go on - you get the idea]
Maybe more importantly, it is absolutely essential that anyone with wealth should be paying the bill for the weak in society.
I do not need special investor tax breaks on my carried interest (and I don’t get them in Germany), which would potentially mean I pay a lower average percentage on my income than an average employee does. Sure, who doesn’t like lower taxes and more money – but think it through. It is crazy and unfair, and an accident waiting to happen.
Now, let me not point fingers: I have not thought a lot about our responsibility in shaping how technology will impact society. Beyond paying my fair share of taxes and donating here and there, I have not done very much to help the poor. But I am committed to thinking and doing more. I’d like to think I can become more of a Venturesociacapitalist; and that would be just fine.
You all know the spiel about what VCs look for when investing – great teams, product, large markets, defensibility, yada yada. It all makes sense. But I think that over time it is quite important that, as a VC, you develop an extra sense for the type of company that gets your blood flowing; much more ‘personal’ reasons for why you might be passionate about working with a team. We all only have so much time and energy – so passion and conviction are really important.
I had a great conversation with Danny Reimer last week about just that, and somehow we ended up talking mostly about the companies we have gotten a lot of flak for. Fred Wilson nailed it with his Return and Ridicule blog post (it may just be my favourite investing blog post of all time). But ideally you don’t just want to get flak; you also want some people to love the company to death. This makes sense – if you disrupt a space you are going to make some people angry; if you are trying something unusual or hard, a lot of people won’t (want to) get it and won’t like it. But on the other side you are probably hitting a nerve with a small community of folks who will give you a lot of love, because you are making life easier for them or giving them something they have always wanted.
So we agreed that what we really like are companies that attract quite a lot of both hate and love. Love / Hate investments.
Technology angst: it will not be logical for a super artificial intelligence to support humans (at best)
Posted: August 17, 2014
There is an increasing debate around AI and whether, or how much, we should be afraid of it. I am going to take tweets from two relatively smart and popular folks to highlight two aspects of that debate – there is of course much more sophisticated and detailed material out there that we should all be reading.
Here’s Elon Musk’s warning:
Neil deGrasse Tyson thinks we should chill a little more (assuming he is talking about AI robots):
Well, I’m in Elon’s camp. And I’ll tell you why.
Especially if you factor out emotions, it does not appear rational or logical for an artificial super intelligence to support humans in any way. Let’s think this through from that super AI’s perspective – and we are talking about an AI that is not somehow restricted by a ‘protect humans’ directive (so a real, out-of-our-control AI):
- humans are ruining the environment – i.e. threatening the energy supply of the super AI
- humans are decimating other species – i.e. destabilising the ecosystem that produces energy for the AI
- humans are constantly at war – i.e. we are putting infrastructure at risk that the AI may need
- humans are even at war over things such as who believes in what invisible person in the sky – i.e. we are totally out of control / highly irrational / dangerous
- humans will try to control and destroy the AI if it becomes too powerful – or even if they are just afraid of it
So what is the logical conclusion a super AI will come to when looking at humans? Maybe this video has the answer:
So, besides bioterrorism (or imagine a super AI capable of bioterrorism), that is a big technology angst of mine. How we embrace and use AI – and whether we manage to get our act together as a species – may just decide whether we go down in history as that biological boot loader or not.