Founder’s Guide on Product Management — Part Deux
Early-stage product development is hard. Over the years, I have collected a few techniques that helped me navigate the infinite search space and prune it down to a tractable one. Most of these techniques apply to enterprise product management.
See our press release announcing all of our promotions here.
Below you will find two notes from our newly promoted Partners:
Yash Hemaraj, BGV General Partner
Honored and thrilled to announce my promotion to General Partner at BGV, thanks to the unwavering support, trust, and collaboration of our incredible team and visionary founders.
A big thank you to our BGV and Arka Venture Labs portfolio companies and founders for inspiring me with your groundbreaking ideas and relentless passion for what you do. I am committed to being an enabler for your success, and I am excited to continue supporting you on your path to greatness.
Venture capital is a delicate balance of hope, optimism, awareness and caution. It requires us to be both aggressive and vulnerable. Over the past nine years, I have embraced the art of “transparent capital.” It means being honest with our stakeholders about expectations, having open conversations about opportunities and difficulties, and making decisions together. I am grateful to be surrounded by individuals who value this brand of venture capital.
My journey is defined by the extraordinary founders, co-investors, LPs, and my loving family who make all of this possible. I am humbled by the trust and support of our investors and partners, who have joined us in our pursuit of excellence. Your belief fuels my determination to identify and empower exceptional ventures that will shape industries, create better jobs, and transform lives.
Thank you for being a part of my journey, and I am incredibly excited for what lies ahead. Together, let’s continue driving innovation, making a positive impact, and shaping the future of venture capital.
Sarah Benhamou, BGV Partner
I’m grateful and honored to be promoted to Partner at BGV. When I jumped on board as an MBA intern in Israel in 2018, little did I know that this thrilling ride would lead me to managing European activities from a new office in Paris.
The VC industry has already changed a great deal in this short time, shifting from a period of irrational exuberance to a more diligent focus on sustainable growth and value creation. Navigating these changes has been both challenging and rewarding, and I am grateful to share this journey with my experienced colleagues, who provide a rare showcase of discipline and methodical judgment while steering through turbulent market fluctuations.
As I step into this new role, I am filled with optimism and enthusiasm for the future. Making a positive impact on society and the environment is a mission that resonates deeply with me, on a personal level, and I am fortunate to pursue this path professionally. BGV will continue to play a role in supporting purpose-driven ventures that drive both financial returns and meaningful change.
Throughout this journey, I’ve had the privilege of partnering with incredible talent and remarkable portfolio companies that sit at the forefront of enterprise innovation. These startups – including Madkudu, Zelros, Flytrex, Cryptosense and Kardinal – have continually impressed me with groundbreaking novelty, tempered by disciplined grit and determination. Cryptosense, in particular, holds a special place in my heart as it marks my first successful exit as an investor, with its acquisition by SandboxAQ.
None of this would have been possible without the support of the Partners and team at BGV. Special thanks to them for their trust and for extending this opportunity, and also for being inspiring colleagues to learn from and work with!
What role should ethics play in investing in AI?
That’s a question I have begun to hear more frequently as a general partner at BGV and as an investor who cares deeply about responsible tech and AI governance.
Embedding ethics in company culture and product design can become a competitive advantage for AI-first startups. By placing ethics at the core of product design, startups can accelerate market adoption by mitigating risks around bias, explainability and data privacy, and continue to grow in an environment where values and ESG are becoming increasingly important to customers and employees.
A values-driven industry
At BGV, we invest in early-stage immigrant founders who are building AI-centric products for the Enterprise 4.0 market. Whether the technology is robotics, NLP, or computer vision, these organizations have deep tech at their core.
We believe that far more value can be created through AI use cases that augment humans than through use cases that focus solely on replacing humans through automation. The former drives exponential growth in productivity, while the latter commoditizes skilled labor, leading to inequalities in income and wealth distribution. AI used only for labor substitution may make sense for use cases that are dangerous (e.g., mining) or those where there is a shortage of labor (e.g., recycling). That’s why we screen our deal flow to better understand the value creation impact rather than investing purely in automation use cases.
We also screen founders for a prior track record of transparency and ethical behavior. As early-stage investors, it’s vital that we can trust founders to deliver on their vision and promise. At the end of the day, venture capital is a people business: we bet on the integrity and honesty of our founders as much as on the innovations they are bringing to the world.
Red flags and green lights
During our due diligence process, we look for red flags and green lights.
What’s a red flag? We ask how a startup founder has performed in his or her entrepreneurial career, and whether they have engaged in ethical behavior with customers. We also look at a founder’s past experience at established companies. If we find that he or she has a track record of not fulfilling their promises, that’s a big red flag for us.
The flip side of the red flag is the green light. If we find a startup entrepreneur has a consistent record of ethical practices and that their past customers and colleagues praise his or her leadership and integrity, that endorsement speaks volumes.
A human approach
We also believe that our founders have to trust us. It’s a two-way street. Our practice is to introduce founders seeking an investment to other founders in our network. They need to do their own diligence on BGV and hear that we are a values-based firm whose actions match its words, that we are truly committed to integrity, and that we support our founders through good times and bad.
That is vital, because as VCs we are building an 8-10 year relationship with a startup company. There will inevitably be ups and downs. Mistakes will be made. That’s why a relationship built on trust is at the core of our investing strategy.
Ethical AI governance strategy
There’s a difference between saying that you care about AI governance and actively engaging with startups to help them build responsible AI companies. This is one of the reasons we founded EAIGG, a community platform of AI practitioners and investors dedicated to sharing AI governance best practices.
We have been pleasantly surprised that, indeed, many young entrepreneurs care about making the world a better place. Of course every young company wants to be a unicorn. But if a company’s values are lost along the way, and they are purely mercenary in pursuing their financial goals, then something important has been lost.
During our initial conversations with startup founders, we ask them point blank: Do you care about AI governance and data privacy? Do you believe that AI can make humans expendable? We don’t expect that they’ll have everything figured out. But we do expect that the issues are important to them.
It’s important that startups have a roadmap for AI governance. Ten years in the future, when the small startup has become a corporate brand, it will be nearly impossible to retrofit technology and product architectures for AI governance and data privacy. This cannot be an afterthought.
A holistic view
When dealing with AI, it’s important to take a holistic view. I call this approach enlightened self-interest. As a founder, it’s in the entrepreneur’s interest to build a great product. But it’s also in his or her interest to ensure market adoption, which implies eliminating model and data bias and addressing explainability and data privacy concerns, so that AI technology remains human-centric.
We’re excited about the promise of AI but we also believe it’s critical to put humans back in the equation. AI is projected to create $3 trillion of value over the next 10-15 years. Part of that equation is to contribute towards setting the guard rails so that AI development and deployment is democratized and creates value for both employees and owners of capital.
Investors need to prioritise the ethical deployment of AI – too much is at stake if they don’t.
Investors, take note. Your due diligence checklist may be missing a critical element that could make or break your portfolio’s performance: responsible AI. Beyond screening and monitoring companies for future financial returns, growth potential and ESG criteria, it’s time for private equity (PE) and venture capital (VC) investors to start asking hard questions about how firms use AI.
Given the rapid proliferation and uptake of AI in recent years – 75 percent of all businesses already include AI in their core strategies – it’s no surprise that the technology is top-of-mind for PE and VC investors. In 2020, AI accounted for 20 percent or US$75 billion of worldwide VC investments. McKinsey & Company has reported that AI could increase global GDP by roughly 1.2 percent per year, adding a total of US$13 trillion by 2030.
AI now powers everything from online searches to medical advancement to job productivity. But, as with most technologies, it can be problematic. Hidden algorithms may threaten cybersecurity and conceal bias; opaque data can erode public trust. A case in point is the BlenderBot 3 launched by Meta in August 2022. The AI chatbot made anti-Semitic remarks and factually incorrect statements regarding the United States presidential election, and even asked users for offensive jokes.
In fact, the European Consumer Organisation’s latest survey on AI found that over half of Europeans believed that companies use AI to manipulate consumer decisions, while 60 percent of respondents in certain countries thought that AI leads to greater abuse of personal data.
How can firms use AI in a responsible way and work with cross-border organisations to develop best practices for ethical AI governance? Below are some of our recommendations, which are covered in the latest annual report of the Ethical AI Governance Group, a collective of AI practitioners, entrepreneurs and investors dedicated to sharing practical insights and promoting responsible AI governance.
Best practices from the ESG movement
PE and VC investors can leverage lessons from ESG – short for environmental, social and governance – to ensure that their investee companies design and deploy AI that generates value without inflicting harm.
ESG is becoming mainstream in the PE realm and is slowly but surely making its mark on VC. We’ve seen the creation of global industry bodies such as VentureESG and ESG_VC that advance the integration of sustainability into early-stage investments.
Gone are the days when it was enough for companies to deliver financial returns. Now, investors regularly solicit information about a fund portfolio’s compliance with the United Nations Sustainable Development Goals. Significant measures have been taken since 2018 to create comparable, global metrics for evaluating ESG performance. For example, the International Sustainability Standards Board was launched during the UN Climate Change Conference in 2021 to set worldwide disclosure standards.
Beyond investing in carbon capture technologies and developing eco-friendly solutions, firms are being pressed to account for their social impact, including on worker rights and the fair allocation of equity ownership. “Investors are getting serious about ESG,” headlined a 2022 report by Bain & Company and the Institutional Limited Partners Association. According to the publication, 90 percent of limited partners would walk away from an investment opportunity if it presented an ESG concern.
Put simply, investors can no longer ignore their impact on the environment and the communities they engage with. ESG has become an imperative, rather than an add-on. The same can now be said for responsible AI.
The business case for responsible AI
There are clear parallels between responsible AI and the ESG movement: For one thing, both are simply good for business. As Manoj Saxena, chairman of the Responsible Artificial Intelligence Institute, said recently, “Responsible AI is profitable AI.”
Many organisations are heeding the call to ensure that AI is created, implemented and monitored by processes that protect us from negative impact. In 2019, the OECD established AI Principles to promote the use of AI that is innovative, trustworthy and respects human rights and democratic values. Meanwhile, cross-sector partnerships including the World Economic Forum’s Global AI Action Alliance and the Global Partnership on Artificial Intelligence have established working groups and schemes to translate these principles into best practices, certification programmes and actionable tools.
There’s also been the emergence of VC firms such as BGV that focus on funding innovative and ethical AI firms. We believe that early-stage investors have a responsibility to build ethical AI start-ups, and can do so through better diligence, capital allocation and portfolio governance decisions.
The term “responsible AI” speaks to the bottom-line reality of business: Investors have an obligation to ensure the companies they invest in are honest and accountable. They should create rather than destroy value, with a careful eye not only on reputational risk, but also their impact on society.
Here are the three reasons why investors need to embrace and prioritise responsible AI:
- AI requires guardrails
One only has to look at social media, where digital platforms have become vehicles that enable everything from the dissemination of fake news and privacy violations to cyberbullying and grooming, for a taste of what happens when companies seemingly lose control over their own inventions.
With AI, there’s still an opportunity to set rules and principles for its ethical use. But once the genie is out of the bottle, we can’t put it back in, and the repercussions will be sizeable.
- Regulatory pressure imposes strong consequences
Governments worldwide are tightening digital regulations on online safety, cybersecurity, data privacy and AI. In particular, the European Union has passed the Digital Services Act (DSA) and the Digital Markets Act (DMA). The former aims to establish a safe online space where the fundamental rights of all users are protected.
The DSA specifically targets large online platforms (think search engines, social media and online marketplaces), requiring them to be transparent in advertising, protect data privacy and address illegal or harmful content. Coming into effect as soon as 2023, the DSA can impose fines of up to 6 percent of annual turnover for non-compliance, while the DMA, which targets large platforms designated as “gatekeepers”, allows fines of up to 10 percent of worldwide turnover, rising to 20 percent for repeated offences. In extreme cases, regulators may even break up a company.
In a recent study on C-suite attitudes towards AI regulation and readiness, 95 percent of respondents from 17 geographies believed that at least one part of their business would be impacted by EU regulations, and 77 percent identified regulation as a company-wide priority. Regulators in the US and Asia are carefully following the progress made in Europe and will surely follow suit over time.
- Market opportunities
It has been estimated that 80 percent of firms will commit at least 10 percent of their AI budgets to regulatory compliance by 2024, with 45 percent pledging to set aside a minimum of 20 percent. This regulatory pressure generates a huge market opportunity for PE and VC investors to fund start-ups that will make life easier for corporates facing intense pressure to comply.
Investors wondering about AI’s total addressable market should be optimistic. In 2021, the global AI economy was valued at approximately US$59.7 billion, and the figure is forecast to reach some US$422 billion by 2028. The EU anticipates that AI legislation will catalyse growth by increasing consumer trust and usage, and making it easier for AI suppliers to develop new and attractive products. Investors who prioritise responsible AI are strongly positioned to capture these gains.
Worth the effort
The call for investors to integrate responsible AI into their investments may feel like a tall order. It requires specialised talent, new processes and ongoing monitoring of portfolio company performance. Many fund managers, let alone limited partners, don’t yet have the manpower to achieve this.
But AI’s impending regulation and the market opportunities it presents will change how PE and VC firms operate. Some will exit, shifting resources to sectors with less regulation. Others, fortifying themselves against reputational risk while balancing internal capabilities, will add screening tools for AI dangers. Still others will treat responsible AI as mission-critical.
Awareness is the greatest agent for change, and it can be achieved by adopting best practices on ethical AI governance from the community of start-ups, enterprises, investors and policy practitioners. Those who step up before it’s too late and proactively help shape the rules as they are being written will reap the benefits – both economically and in terms of fuelling sustainable growth.
This is an adaptation of an article published in the Ethical AI Governance Group’s 2022 Annual Report.
You’ve recruited a talented team and designed a great product. You’ve set a staged growth plan, and gained initial traction selling to enterprise customers.
Now you’re closing in on a round of institutional financing, and it’s time to focus on your board of directors. It will include key founders, and some initial investors. But who do you really want sitting around that table?
In my experience, an effective board can accelerate a startup’s success or weigh it down like a heavy anchor. Here are a few tips and red flags for early-stage startups as they consider the composition of their board of directors.
Size: Find the Goldilocks Number
The more venture capitalists on your board the better, right? They are the ones willing to take risks, after all, and the only people in the room with experience shepherding young companies to market success.
In fact, there is a correlation between the presence of venture capital board members and a business’s ultimate success. But it’s not the one you might expect. A study conducted by Correlation Ventures found that two VC members was the ideal number on a board of directors. Zero or one VC board member compared favorably, but more than three was found to have a dampening effect on the business.
More specifically, startups with three board members saw exits of 3.6x, while those with six or more board members yielded 1.4x exits. Too many board members = too many cooks in the kitchen. Lowering the temperature, the noise and the competing voices and egos is often critical to decision making, focus and execution.
Takeaway: Choose your VC members carefully, and don’t appoint too many.
Diversity: A Variety of Perspectives
A founder should resist the temptation to stack the board with sycophants. Diversity is important. Ideally, your board will bring a wide range of experience: broad strategic thinkers, niche industry veterans, male and female perspectives, local and global outlooks.
A good CEO will challenge herself to communicate with a broad array of people who can shine a light on blindspots and challenge assumptions. In that way, a founder can test ideas and lean on the advice of veterans. When everyone is united in the same goal, the board, and the company at large, will prosper.
Context Drift: Keep Them Focused
Yes, your board members bring decades of experience, insights and seasoned perspectives. Their accomplishments speak for themselves, and put them in high demand. However, this also means that they’re frequently overextended, advising a handful of companies at once, and quickly drawing patterns and conclusions without fully understanding the context before providing input for key decisions.
As a consequence, their input may be reactive, and fail to add value. This situation is a recipe for disaster, especially when bruised egos may be involved.
To avoid these eventualities, founders must do reference checks on prospective members. Indeed, founders should vet every board candidate at least as thoroughly as an executive hire; there’s a case to be made that CEOs should be even more careful with board-member appointments. Make sure that board members understand the vision and the peculiarities of your business, and that they will devote the necessary bandwidth. It is also important to engage board members between board meetings.
Takeaway: Select your board members carefully; an ineffective board leads to poor governance. Discernment is critical.
Respect: Essential for Trust
So far, we’ve discussed how startup founders can best evaluate and recruit their board members. But it’s important to point out that these individuals also have responsibilities. For one, they must conduct themselves in a manner that is appropriate to their position.
A board member who shows a lack of respect for the CEO or other company executives is bad for business – especially if the critique takes place during a board meeting. There are two negative outcomes here. First, the board member is actively undermining a CEO in front of his or her team, compromising the team’s trust in their CEO. Second, the CEO never trusts that board member again. Whatever good advice he or she may have is now colored by that negative experience.
Takeaway: Board members who show a lack of respect for company executives lose influence and sow discord.
One tip for the CEO: Structure board meetings into open and closed sessions (where execs do not participate), and frame issues for each accordingly. That will provide a confidential forum and help keep oversized egos in check.
Set Clear Objectives: The Value of OKRs
High-functioning board members help the CEO and leadership team see their blind spots and direct them toward successful value creation. Likewise, CEOs should work with the board to set clear objectives and key results (OKRs) for the company to ensure alignment on value creation.
Without clear OKRs, it is difficult to judge a company’s and a CEO’s performance objectively. That can lead to bad surprises and finger-pointing that spell doom for the company, along with lost market and investment opportunities. A smart startup founder will proactively work with the board to set clear OKRs (achieving product-market fit, establishing a repeatable sales motion, hiring key talent, etc.) that result in successful market adoption and refinancings. A forward-thinking CEO is a successful CEO.
Takeaway: Create company-wide OKRs to align your board and the management team and drive value creation.
Communication: Not Just for the Meetings
It’s not enough to see your board members at board meetings. Call them between meetings, and schedule at least a few board dinners ahead of board meetings every year.
Get to know your board members’ strengths and how they can best help. A lack of communication and poor chemistry can lead to bad dynamics and unwanted surprises.
Takeaway: Your board is taking this journey with you. They are your partners; embrace them. Keep in touch, build chemistry and communicate often.