Resources

Generative AI – A Contrarian Perspective

The release of ChatGPT has catalyzed a veritable “Generative AI” storm in the tech industry. Microsoft’s investment in OpenAI has made headlines as a potential challenger to Google’s monopoly in search. According to PitchBook, VCs have increased investment in Generative AI by 425% since 2020, to $2.1bn. “Tech Twitter” blew up, and even the mainstream BBC reported on it. ChatGPT is now at capacity and unable to process queries much of the time.

As a General Partner at BGV, I have made human-centric AI a foundational core of our VC investment thesis, and we have dug deeply into the challenges of building B2B AI businesses in our portfolio, as well as into how disruptive venture-scale businesses can be built around Generative AI.

As a founder of the Ethical AI Governance Group (EAIGG), I share a passion with fellow members for sharing best practices around the responsible development and deployment of AI in our industry. I decided to put ChatGPT to the test on the topic of human-centric AI by asking two simple questions: a) What are the promises and perils of human AI? b) What are important innovations in ethical AI? I contrasted ChatGPT’s responses with what we have learned from subject matter experts on the same questions: Erik Brynjolfsson’s piece in Daedalus (2022), titled “The Turing Trap,” on the promises and perils of AI (see link here), and Abhinav Ragunathan’s market map of human-centric ethical AI startups (EAIDB), published in the EAIGG annual report (2022) (see link here). This analysis, combined with BGV’s investment thesis work around Enterprise 4.0 and ethical AI governance, led me to several conclusions that are contrary to, and more nuanced than, the generic media narrative about ChatGPT and Generative AI:

  • ChatGPT represents a tremendous area of innovation, but it will not replace Google’s search engine overnight:
  • The answers are generic, lack depth and are sometimes wrong. Before trusting ChatGPT responses implicitly, users will need to confirm the integrity and veracity of the information sources. Already, Stack Overflow has banned ChatGPT, saying: “Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to our site and to users who are looking for correct answers.” Given the effort required to verify responses, ChatGPT’s chatbot is not prepared to penetrate enterprise (B2B) use cases. As unverified and inaccurate data sets proliferate, ChatGPT’s “garbage in, garbage out” responses will quickly expose the shaky data foundation on which they rely.
  • Putting aside the matter of accuracy, there is also the question of sustainable business models. While ChatGPT is free today, running GPUs is expensive, so profitably competing at scale with Google’s search engine is a tall order.
  • Google will not stand still; it has already launched Bard, with machine and imitation learning, to “outscore” ChatGPT on conversational services. We’ve only seen the opening act in a much larger showdown.
  • Generative AI tools like ChatGPT can augment subject matter experts by automating repetitive tasks to provide a baseline, but they will never displace those experts (especially in B2B use cases), because the tools lack domain-specific contextual knowledge and their underlying data sets require trust and verification. For this reason, broader adoption of ChatGPT will spur increased demand for authenticated, verifiable ground truth, fueling tailwinds for data integrity and verification solutions, alongside a whole host of ethical AI innovations around privacy, fairness and governance. Ultimately, human-centric AI will emerge as the bridge to Human-Like Artificial Intelligence (HLAI).
  • Generative AI will go well beyond chatbots to cover use cases like creating content, writing and debugging code (automation scripts), generating AI art, and managing and manipulating data. BUT many of the first wave of generative AI startups will fail to build profitable, venture-scale B2B businesses unless they explicitly address a few challenges:
  • Inherent trust and verification issues
  • Lack of defensible moats
  • Unsustainable business models, given the high costs of running GPUs
  • Wearing my VC hat as an enterprise tech investor, I expect the winning Generative AI B2B startups to fall into several categories:
  • Applications – Startups that integrate generative AI models into sticky, user-facing productivity apps, using foundation models or proprietary models as a base to build on (verticals like media, gaming, design and copywriting, or key functions like DevOps, marketing and customer support).
  • Models – Verticalized models will be needed to power the applications highlighted above. Leveraging foundation models and open-source checkpoints can yield productivity and a quicker path to monetization, but may lack defensibility.
  • Infrastructure – Innovations to cost-effectively run training and inference workloads for generative AI models by breaking the GPU cost curve. We will also see AI governance solutions to address the unintended consequences of “disinformation” created by broader adoption of tools like ChatGPT, as well as a wide range of other ethical issues.

Today it is unclear where in the stack most of the value will accrue: infrastructure, models, or apps. Currently, the infrastructure providers (like NVIDIA) are the biggest beneficiaries of OpenAI. It is also unclear where startups can break the oligopoly of infrastructure incumbents like Google, AWS and Microsoft, who touch everything. See the a16z article that does a good job expanding on this here. In the shorter term, BGV believes that most of the winning startups will be in the application layer: those that make use of the democratization of generative AI but take it into everyday workflows, using intelligent workflow automation and leveraging proprietary verticalized data sets to provide the most value to end users in terms of productivity improvements, agility, resiliency and cost savings. There will also be opportunities for startups to innovate at the hardware layer, to break the GPU cost curve, but these are likely to be more capital-intensive investments.

What is increasingly clear, however, is that the surge in Generative AI interest will quickly prompt demands to prioritize human values, guardrails and ethical AI governance norms. As Abhinav Ragunathan writes, “the modern ‘tech stack’ will quickly evolve into an ‘ethics stack’ in which the flow of data moves through companies [that ensure these constraints are in place].” Moreover, the innovation prompted by the Generative AI “boom” is poised to follow a curve similar to that of “ethics predecessors” like cybersecurity in the late 2000s and privacy in the late 2010s.

ChatGPT Queries vs. Thought Leader Perspectives

Query #1. What are the promises and perils of human AI?

ChatGPT Response

The promises and perils of human-AI refer to the potential benefits and risks associated with the integration of artificial intelligence into human societies.

Promises of human-AI include:

  • Increased efficiency and productivity: AI can automate repetitive tasks, freeing up time for humans to focus on more complex and creative work.
  • Improved decision-making: AI can analyze vast amounts of data and provide insights that may lead to better decision-making.
  • Enhanced quality of life: AI can assist with tasks such as diagnosing medical conditions and providing personalized healthcare, making life easier and more convenient for people.

Perils of human-AI include:

  • Job displacement: As AI becomes increasingly capable of performing tasks traditionally done by humans, there may be a risk of job loss and unemployment.
  • Bias and discrimination: AI systems can perpetuate existing biases and discrimination if they are trained on biased data.
  • Privacy concerns: AI systems can collect and store large amounts of personal data, raising concerns about privacy and security.
  • Loss of human control: As AI systems become more autonomous, there is a risk that humans may lose control over their actions and decisions, leading to unintended consequences.

In conclusion, the integration of AI into human societies holds both promises and perils, and it’s important for society to carefully consider the potential impacts and take steps to minimize the risks.

Thought Leader Perspective

Erik Brynjolfsson (Stanford HAI) authored an insightful thought piece in Daedalus (2022), titled “The Turing Trap,” on the promises and perils of AI; see link here.

Key takeaways from the thought piece, summarized by Anik, are:

  • The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly a better understanding of our own minds. But not all types of AI are human-like; in fact, many of the most powerful systems are very different from humans, and an excessive focus on developing and deploying HLAI can lead us into a trap.
  • The breakthroughs in robotics depend not merely on more dexterous mechanical hands and legs, and more perceptive synthetic eyes and ears, but also on increasingly human-like artificial intelligence (HLAI).
  • Powerful AI systems are crossing key thresholds: matching humans in a growing number of fundamental tasks such as image recognition and speech recognition, with applications from autonomous vehicles and medical diagnosis to inventory management and product recommendations.
  • Automation increases productivity. Moreover, there are many tasks that are dangerous, dull, or dirty, and those are often the first to be automated. As more tasks are automated, a fully automated economy could, in principle, be structured to redistribute the benefits from production widely, even to those people who are no longer strictly necessary for value creation.
  • Government policy could do more to steer the economy clear of the Turing Trap. The growing use of AI, even if only for complementing workers, and the further reinvention of organizations around this new general-purpose technology imply a great need for worker training or retraining. In fact, for each dollar spent on machine learning technology, companies may need to spend nine dollars on intangible human capital.
  • The risks of the Turing Trap are increased not by just one group in our society, but by the misaligned incentives of technologists, businesspeople, and policy-makers. The future is not preordained. We control the extent to which AI either expands human opportunity through augmentation or replaces humans through automation.

Query #2: What are important innovations in Ethical AI?

ChatGPT Response

There have been several important innovations in ethical AI, including:

  • Fairness: Techniques to ensure that AI systems are fair and unbiased, such as algorithmic auditing and debiasing.
  • Explainability: Methods to make AI systems more transparent and explainable, such as interpretability and model explanation techniques.
  • Privacy: Tools to protect the privacy of individuals, such as differential privacy and federated learning.
  • Accountability: Frameworks for holding developers and users of AI systems accountable for their actions, such as responsible AI governance.
  • Human-centered design: Incorporating the values and needs of the people who will interact with AI systems into the design process, such as user-centered design and participatory design.
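To ground one of the techniques listed above, here is a minimal, illustrative sketch (an illustration of ours, separate from ChatGPT’s output) of the Laplace mechanism that underpins differential privacy: calibrated noise is added to a query result before release, so that any one individual’s presence in the data has a provably bounded effect on the output. The query and all numbers are hypothetical.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release an epsilon-differentially-private version of a numeric query.

    Noise is drawn from Laplace(0, sensitivity / epsilon): a smaller epsilon
    means stronger privacy and a noisier answer.
    """
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical counting query: how many users match some criterion?
# Adding or removing one person changes a count by at most 1, so sensitivity = 1.
true_count = 1042
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {private_count:.1f}")
```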

Thought Leader Perspective

Abhinav Ragunathan (EAIGG community member) published a market map of human-centric ethical AI startups (EAIDB) in the EAIGG annual report. See link here.

Key takeaways, summarized by Anik, were:

  • Demand for ethical AI has skyrocketed, and public discussion has internalized the challenge, and is moving towards innovative solutions. A nascent, sometimes opaque, ecosystem of ethical AI startups has emerged to drive solutions to ethical challenges.
  • These startups are organized across five broad categories:
  1. Data for AI — any treatment or manipulation of data sets, including generating synthetic data, done to preserve data privacy
  2. ModelOps, Monitoring & Explainability — specific tools that assist in the governance and lifecycle management of production machine learning. This includes detecting bias and offering monitoring and explainability
  3. AI Audits and GRC — consulting firms or platforms that help establish accountability, governance and/or business risk/compliance
  4. Targeted AI Solutions — companies solving ethical AI issues for a particular niche or vertical, e.g. insurtech, fintech, healthtech
  5. Open-Sourced Solutions — fully open-source solutions meant to provide easy access to ethical technologies and responsible AI.
  • As the space grows, we anticipate the following key trends:
  1. Consulting firms may decline in popularity and MLOps / GRC platforms may rise due to the ability to programmatically enforce compliance given concrete metrics.
  2. Incumbents will start incorporating less effective versions of bias-related technology in an effort to keep their platforms viable in this new world of bias-conscious policy. They lack the specialty, expertise, and first-mover advantage of these startups but have a well-established client base to draw from.
  3. The modern “tech stack” will quickly evolve into an “ethics stack” in which the flow of data moves through companies in the aforementioned categories. For example, a company might employ Gretel for data privacy and synthetic data, Arthur for MLOps management and bias detection, then Saidot for model versioning and governance.
  4. The “boom” for ethical AI is estimated to be somewhere from the mid- to late-2020s and will follow a curve similar to “ethics predecessors” like that of cybersecurity in the late-2000s and privacy in the late-2010s.

Scaling With System Integrators

Introduction

System integrators (SI’s) are the handmaidens between innovative startups and large enterprises. Both parties benefit from the relationship, as does the SI itself.

In my two decades of business experience at Capgemini, however, I have noticed that startups do not always understand the role of an SI, how they can use these firms to their advantage, or how to evaluate the pros and cons of the relationship.

System integrators include companies like Accenture, Deloitte, Infosys, TCS or Capgemini that are hired by large corporations to integrate component subsystems that drive automation, further digital transformation, and spur innovation across business platforms.

Startups or scale-ups must often work with SI’s to target an enterprise with their novel software solution. This can be a struggle, however, for solution providers that don’t have much experience working with SI’s or don’t understand their relationship with the enterprise.

Understanding Value Against the Right Backdrop

Startups often offer a single-point solution, which can be of limited value if it cannot be integrated into an enterprise company’s broader IT infrastructure. Most enterprises rely on some level of outsourcing, and often depend on third parties to provide expertise in managing their information. This is precisely where a system integrator creates its value: helping the enterprise conduct an impact assessment to review how and where a given startup solution might be useful in the enterprise tech stack. Startups can benefit by doing some homework here as input to the assessment, which will help them establish a clearer positioning in the context of the enterprise’s architecture.

Since enterprises often lack the in-house staffing required to fully explore the information landscape, a system integrator is called upon to manage IT infrastructure, conduct a digital transformation project, and/or to identify automation solutions that will keep the enterprise up-to-date. As a result, when an enterprise evaluates a solution offered by a startup, the company will often ask a system integrator for its guidance.

It’s of paramount importance, therefore, to understand the interests of the SI in this process and how these entities charge for their services. SI’s are fundamentally about deploying people, and an SI will also assess the impact of the startup solution on its own economics.

SI contracts are typically structured in the following ways:

  1. Service Level Agreement: If the SI is hired by way of an SLA, typically for ongoing managed services, the SI’s economics are aligned with those of the enterprise: the savings that a startup solution generates either funnel to the SI or are shared with the client. This alignment bodes well for the startup.
  2. Fixed Price contracts, typically utilized for software development, are also fairly well aligned with the startup, although the SI will need to find ways to improve margin and quality of delivery. In these arrangements, the ability to scale is key.
  3. Time and Material contracts, typically utilized for “resource provider” engagements, are those in which the SI is paid for the people it deploys to the enterprise, billed as logged time and materials. This scenario is more complex for the startup vendor: the economics may not align with, and could even oppose, those of the SI, which would lose billable hours if the enterprise adopts the startup solution. To make matters worse, many SI’s contract on this basis (the sketch below puts rough numbers on these incentives).
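As a back-of-the-envelope illustration of why these structures matter, the sketch below puts hypothetical numbers on the three arrangements. Every figure is invented for illustration; nothing here is drawn from a real engagement.

```python
# Illustrative sketch (hypothetical numbers): how the same startup-driven
# automation affects an SI under the three contract structures above.

HOURS_SAVED = 2_000      # annual billable hours the startup's tool eliminates
BILL_RATE = 150          # USD/hour the SI charges the enterprise (assumed)
COST_RATE = 90           # USD/hour the SI pays its staff (assumed)
SAVINGS_SHARE = 0.5      # assumed gain-share under an SLA

# 1. SLA / managed services: the SI keeps a share of the savings -> aligned.
sla_gain = HOURS_SAVED * BILL_RATE * SAVINGS_SHARE

# 2. Fixed price: revenue is fixed, so avoided delivery cost becomes margin -> aligned.
fixed_price_gain = HOURS_SAVED * COST_RATE

# 3. Time & materials: fewer billable hours means lost margin -> opposed.
tm_gain = -HOURS_SAVED * (BILL_RATE - COST_RATE)

for label, gain in [("SLA", sla_gain),
                    ("Fixed price", fixed_price_gain),
                    ("Time & materials", tm_gain)]:
    print(f"{label:16s} SI impact: {gain:+,.0f} USD/yr")
```

Under these assumptions the SI gains $150K a year under an SLA and $180K under fixed price, but loses $120K of margin under time and materials, which is exactly why a startup must learn which structure governs the account.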

Enterprises will usually reveal the structure of their contracts with the SI. It is, therefore, incumbent on the startup vendor to understand the economic incentives of the SI, and to work with that firm to discover how its input will improve the business case for the client (the enterprise) and benefit the SI.

The Utilization Consideration

Utilization is a key metric for a system integrator.

Once a startup or scaleup has identified how its solution aligns with the business case of the system integrator, it must begin to explore the marginal value it offers. In other words, to successfully recruit the SI as a champion in the sales process, the startup vendor must demonstrate that the value it creates is worth the investment of the SI.

Consider that top system integrators deploy most of their staff on ongoing projects, and don’t have a deep bench of idle labor waiting for the next client. Therefore, if an SI is to expend time and energy to learn a new solution, evaluate it, stress test it, deploy it, and then train its team (and the enterprise) on this solution, the value must be clear.

If the SI doesn’t see the potential to deploy the solution broadly, and likely across a large base of its clients, then it could pose a challenge for the SI to justify the business expense of investing in that particular solution. The onboarding costs are rarely covered by one client.

The Pressure to Innovate

Another major driver for SI’s is the pressure to innovate. For an SI to justify its contract with the enterprise, it must demonstrate that it has its finger on the pulse of the innovation landscape and is equipped to deploy cutting-edge technology solutions among at least some portion of its client base.

Customers are increasingly savvy, so SI’s often feel pressure to offer new services and innovate whenever and wherever possible. For example, when the trend of Robotic Process Automation (RPA) surged, SI’s felt a keen pressure to learn and deploy these solutions, if only to burnish their innovation credentials and capabilities.  

When a trend is peaking and appears destined to win the future, this pressure toward innovation can override an SI’s direct economic interest. In rare instances, an SI might even adopt a solution and scale it up across its broader service offerings if only to demonstrate the firm’s forward thinking and bend toward innovative practices.

TCS, for example, drives billions of dollars of revenue each year from startups, and occupies one end of the spectrum in terms of appetite for innovation. Others tend to follow the innovation curve, adopting new solutions only if they’re offered by unicorn startups, or if they feel pressured or forced into action. Startups should research an SI to understand its appetite for innovation.

The Cemetery of Proof of Concept: Bear the End in Mind

SI’s may sometimes leverage their enterprise clients’ appetite for innovation, and do so in ways that are less constructive for startups.

Large SI’s carry a certain level of overhead on their enterprise contracts: staff who must justify their salary and value to their enterprise clients. After all, if an SI is hired to point its clients to innovative solutions and vendors, then a Proof of Concept (POC) with a startup vendor becomes an easy way to show activity and justify a salary, even if the firm has no intention of ever truly adopting the solution. The cemetery of proof of concept is a classic (and very real) challenge for startups, both with enterprises and with SI’s.

Startups offering the POC must understand what it takes to move the concept into production, where it can ultimately generate revenue. Otherwise, a POC exercise can result in a costly showcase experiment with little prospect of graduating into a revenue-generating arrangement. When targeting a solution for an enterprise client, startups must identify which entity has requested the POC. It’s important to know whether the person on the enterprise or the SI side has the power to advance the POC to the next step.

Since POC’s are costly for startups to create, founders must carefully evaluate the risk and reward of pursuing such an option. In certain instances, even if it’s clear to a startup that a POC has almost no chance of moving into production, it may still be worth pursuing: a) to gain exposure to the enterprise; b) to use it as a proof point with other potential clients; or c) simply to learn more about a corporation’s business processes and onboarding funnel.

If, however, the POC is conducted simply to serve the SI’s innovation agenda, the startup may choose to charge the SI for serving as a consulting specialist. In that case, it’s important for the startup to be upfront with the SI, and vice versa. A startup must bear the end in mind and monitor the risk before entering into this kind of engagement.

Knowing When to Engage

SI’s tend to favor startup solutions that have established product-market fit. Therefore, these firms prefer to engage with startup/scale-up vendors that have raised enough financing and have the ability to scale their solution broadly.

If the startup is less mature, it would be wise to target the SI’s innovation program. That path will allow the startup to demonstrate the viability of its solution, showcase its architecture to targeted audiences, and gain general exposure for the business on roadshows, podcasts, and conferences.

The economics of joining an innovation program, however, may not be particularly compelling since they are not designed to create instant business outcomes. If the startup chooses to engage the SI at this stage, therefore, it may choose to work with the SI as a paid consultant, as previously discussed. For their part, SI’s tend to engage with startups with which they can see a scaling opportunity over the next 12 months.

Never Stop Selling  

Oftentimes, startup vendors underestimate the time it takes to secure a contract with an SI.  In fact, making the sale to an SI can be as long and laborious as dealing directly with the large enterprise. It takes time to build awareness, to entice a business unit within the SI to fall in love with your product, commit to it, and become your champion within the SI. In addition, other business units within the SI must buy in to escalate the sales process to the firm’s top echelon. But even after you’ve closed the contract with the SI, you haven’t finished your work.  

All you’ve accomplished is securing the “right to sell.” This initial buy-in does not automatically generate outcomes. Startup vendors must continue to push the SI to implement and deploy the solution with its enterprise clients. This means hands-on monitoring and relationship management to see the solution through to the deployment stages, and then tracking progress and customer success. It is the same process by which a customer success team works to ensure a client is successfully leveraging a tool to meet its goals.

Conclusion

Selling into SI’s can be tremendously helpful in securing broad buy-in from large enterprise clients. These actors are uniquely positioned within the enterprise to adopt and deploy innovative automation solutions that are scalable, especially when related to IT infrastructure and digital transformation.

However, SI’s have their own unique challenges, and for a startup to successfully engage with an SI, it must not only understand the value of its solution within the target enterprise’s tech stack, but also have a clear view of the SI’s economics. To make a strong business case, the startup should be clear-eyed about where it sits on a particular SI’s priority list and innovation agenda. The process can be challenging, but the rewards are mighty.

A Founder’s Board of Directors: Six Tips for Success

You’ve recruited a talented team and designed a great product. You’ve set a staged growth plan, and gained initial traction selling to enterprise customers.

Now you’re closing in on a round of institutional financing, and it’s time to focus on your board of directors. It will include key founders and some initial investors. But who do you really want sitting around that table?

In my experience, an effective board can accelerate a startup’s success or weigh it down like a heavy anchor. Here are a few tips and red flags for early-stage startups as they consider the composition of their board of directors.

Size: Find the Goldilocks Number

The more venture capitalists on your board the better, right? They are the ones willing to take risks, after all, and the only people in the room with experience shepherding young companies to market success.

In fact, there is a correlation between the presence of venture capital board members and a business’s ultimate success. But it’s not the one you might expect. A study conducted by Correlation Ventures found that two VC members was the ideal number on a board of directors. Zero or one VC board member compared favorably, but any more than three was found to have a dampening effect on the business.

More specifically, startups with three board members led to exits of 3.6x, while those with six or more board members yielded 1.4x exits. Too many board members means too many cooks in the kitchen. Lowering the temperature, the noise, and the competing voices and egos is often critical to decision-making, focus and execution.

Takeaway: Choose your VC members carefully, and don’t appoint too many.

Diversity: A Variety of Perspectives

A founder should resist the temptation to stack the board with sycophants. Diversity is important. Ideally, your board will be composed of members with a broad range of experience: strategic thinkers, niche industry veterans, male and female perspectives, local and global outlooks.

A good CEO will challenge herself to communicate with a broad array of people who can shine a light on blindspots and challenge assumptions. In that way, a founder can test ideas and lean on the advice of veterans. When everyone is united in the same goal, the board, and the company at large, will prosper.

Context Drift: Keep Them Focused

Yes, your board members bring decades of experience, insights and seasoned perspectives. Their accomplishments speak for themselves and put them in high demand. However, this also means they’re frequently overextended, advising a handful of companies at once and quickly drawing patterns and conclusions without fully understanding the context before providing input on key decisions.

As a consequence, their input may be reactive, and fail to add value.  This situation is a recipe for disaster, especially when bruised egos may be involved.  

To avoid these eventualities, founders must do reference checks on prospective members. Indeed, founders should vet every board candidate at least as thoroughly as an executive hire; there’s a case to be made that CEOs should be even more careful with board-member appointments. Furthermore, ensure that board members understand the vision and the peculiarities of your business, and that they will devote the necessary bandwidth. It is also important to engage board members in between formal board meetings.

Takeaway: Select your board members carefully; an ineffective board leads to poor governance. Discernment is critical.

Respect: Essential for Trust

So far, we’ve discussed how startup founders can best evaluate and recruit their board members. But it’s important to point out that these individuals also have responsibilities. For one, they must conduct themselves in a manner that is appropriate to their position.

A board member who shows a lack of respect for the CEO or other company executives is bad for business – especially if the critique takes place during a board meeting. There are two negative outcomes here. First, the board member is actively undermining a CEO in front of his or her team, compromising the team’s trust in their CEO. Second, the CEO never trusts that board member again. Whatever good advice he may have is now colored by that negative experience.

Takeaway: Board members who show a lack of respect for company executives lose influence and sow discord.

One tip for the CEO: Make sure to split the issues you bring before the board into open and closed sessions (where execs do not participate). That will provide a confidential forum and help keep oversized egos in check.

Set Clear Objectives: The Value of OKRs

High-functioning board members help the CEO and leadership team see their blind spots and direct them toward successful value creation. Likewise, CEOs should work with the board to set clear objectives and key results (OKRs) for the company to ensure alignment on value creation.

Without clear OKRs, it is difficult to judge a company’s and a CEO’s performance objectively. This can lead to a difficult situation of bad surprises and finger-pointing that spells doom for the company, and to lost market and investment opportunities. A smart startup founder will proactively work with the board to set clear OKRs (achieving product-market fit, establishing a repeatable sales motion, hiring key talent, etc.) that result in successful market adoption and refinancings. A forward-thinking CEO is a successful CEO.

Takeaway: Create company-wide OKRs to align your board and the management team and drive value creation.

Communication: Not Just for the Meetings

It’s not enough to see your board members at board meetings. Call them in between board meetings, and schedule at least a few board dinners ahead of board meetings every year.

Get to know your board members’ strengths and how they can best help. A lack of communication and poor chemistry can lead to bad dynamics and unwanted surprises.

Takeaway: Your board is taking this journey with you. They are your partners; embrace them. Keep in touch, build chemistry and communicate often.

Responsible AI Has Become Critical for Business

Investors need to prioritise the ethical deployment of AI – too much is at stake if they don’t.

Investors, take note. Your due diligence checklist may be missing a critical element that could make or break your portfolio’s performance: responsible AI. Beyond screening and monitoring companies for future financial returns, growth potential and ESG criteria, it’s time for private equity (PE) and venture capital (VC) investors to start asking hard questions about how firms use AI.

Given the rapid proliferation and uptake of AI in recent years – 75 percent of all businesses already include AI in their core strategies – it’s no surprise that the technology is top-of-mind for PE and VC investors. In 2020, AI accounted for 20 percent or US$75 billion of worldwide VC investments. McKinsey & Company has reported that AI could increase global GDP by roughly 1.2 percent per year, adding a total of US$13 trillion by 2030.

AI now powers everything from online searches to medical advancement to job productivity. But, as with most technologies, it can be problematic. Hidden algorithms may threaten cybersecurity and conceal bias; opaque data can erode public trust. A case in point is the BlenderBot 3 launched by Meta in August 2022. The AI chatbot made anti-Semitic remarks and factually incorrect statements regarding the United States presidential election, and even asked users for offensive jokes.

In fact, the European Consumer Organisation’s latest survey on AI found that over half of Europeans believed that companies use AI to manipulate consumer decisions, while 60 percent of respondents in certain countries thought that AI leads to greater abuse of personal data.

How can firms use AI in a responsible way and work with cross-border organisations to develop best practices for ethical AI governance? Below are some of our recommendations, which are covered in the latest annual report of the Ethical AI Governance Group, a collective of AI practitioners, entrepreneurs and investors dedicated to sharing practical insights and promoting responsible AI governance.

Best practices from the ESG movement

PE and VC investors can leverage lessons from ESG – short for environmental, social and governance – to ensure that their investee companies design and deploy AI that generates value without inflicting harm.

ESG is becoming mainstream in the PE realm and is slowly but surely making its mark on VC. We’ve seen the creation of global industry bodies such as VentureESG and ESG_VC that advance the integration of sustainability into early-stage investments.

Gone are the days when it was enough for companies to deliver financial returns. Now, investors regularly solicit information about a fund portfolio’s compliance with the United Nations Sustainable Development Goals. Significant measures have been taken since 2018 to create comparable, global metrics for evaluating ESG performance. For example, the International Sustainability Standards Board was launched during the UN Climate Change Conference in 2021 to set worldwide disclosure standards.

Beyond investing in carbon capture technologies and developing eco-friendly solutions, firms are being pressed to account for their social impact, including on worker rights and the fair allocation of equity ownership. “Investors are getting serious about ESG,” headlined a 2022 report by Bain & Company and the Institutional Limited Partners Association. According to the publication, 90 percent of limited partners would walk away from an investment opportunity if it presented an ESG concern.

Put simply, investors can no longer ignore their impact on the environment and the communities they engage with. ESG has become an imperative, rather than an add-on. The same can now be said for responsible AI.

The business case for responsible AI

There are clear parallels between responsible AI and the ESG movement: For one thing, both are simply good for business. As Manoj Saxena, chairman of the Responsible Artificial Intelligence Institute, said recently, “Responsible AI is profitable AI.”

Many organisations are heeding the call to ensure that AI is created, implemented and monitored by processes that protect us from negative impact. In 2019, the OECD established AI Principles to promote the use of AI that is innovative, trustworthy and respects human rights and democratic values. Meanwhile, cross-sector partnerships including the World Economic Forum’s Global AI Action Alliance and the Global Partnership on Artificial Intelligence have established working groups and schemes to translate these principles into best practices, certification programmes and actionable tools.

There’s also been the emergence of VC firms such as BGV that focus on funding innovative and ethical AI firms. We believe that early-stage investors have a responsibility to build ethical AI start-ups, and can do so through better diligence, capital allocation and portfolio governance decisions.

The term “responsible AI” speaks to the bottom-line reality of business: Investors have an obligation to ensure the companies they invest in are honest and accountable. They should create rather than destroy value, with a careful eye not only on reputational risk, but also their impact on society.

Here are the three reasons why investors need to embrace and prioritise responsible AI:

  1. AI requires guardrails

One only has to look at social media, where digital platforms have become vehicles that enable everything from the dissemination of fake news and privacy violations to cyberbullying and grooming, for a taste of what happens when companies seemingly lose control over their own inventions.

With AI, there’s still an opportunity to set rules and principles for its ethical use. But once the genie is out of the bottle, we can’t put it back in, and the repercussions will be sizeable.

  2. Regulatory pressure imposes strong consequences

Governments worldwide are tightening digital regulations on online safety, cybersecurity, data privacy and AI. In particular, the European Union has passed the Digital Services Act (DSA) and the Digital Markets Act (DMA), which together aim to establish a safe online space where the fundamental rights of all users are protected.

These acts specifically target large platforms, including the “gatekeepers” (think search engines, social media and online marketplaces), requiring them to be transparent in advertising, protect data privacy and address illegal or harmful content. Coming into effect as soon as 2023, the rules can impose fines of up to 6 percent of annual sales for non-compliance, and as much as 20 percent for repeated offences. In extreme cases, regulators may even break up a company.

In a recent study on C-suite attitudes towards AI regulation and readiness, 95 percent of respondents from 17 geographies believed that at least one part of their business would be impacted by EU regulations, and 77 percent identified regulation as a company-wide priority. Regulators in the US and Asia are carefully following the progress made in Europe and will surely follow suit over time.

  3. Market opportunities

It has been estimated that 80 percent of firms will commit at least 10 percent of their AI budgets to regulatory compliance by 2024, with 45 percent pledging to set aside a minimum of 20 percent. This regulatory pressure generates a huge market opportunity for PE and VC investors to fund start-ups that will make life easier for corporates facing intense pressure to comply.

Investors wondering about AI’s total addressable market should be optimistic. In 2021, the global AI economy was valued at approximately US$59.7 billion, and the figure is forecast to reach some US$422 billion by 2028. The EU anticipates that AI legislation will catalyse growth by increasing consumer trust and usage, and making it easier for AI suppliers to develop new and attractive products. Investors who prioritise responsible AI are strongly positioned to capture these gains.

Worth the effort

The call for investors to integrate responsible AI into their investments may feel like a tall order. It requires specialised talent, new processes and ongoing monitoring of portfolio company performance. Many fund managers, let alone limited partners, don’t yet have the manpower to achieve this.

But AI’s impending regulation and the market opportunities it presents will change how PE and VC firms operate. Some will exit, shifting resources to sectors with less regulation. Others, fortifying themselves against reputational risk while balancing internal capabilities, will add screening tools for AI dangers. Still others will see responsible AI as mission critical.

Awareness is the greatest agent for change, and it can be achieved by adapting best practices on ethical AI governance from the community of start-ups, enterprises, investors and policy practitioners. Those who step up before it’s too late and proactively help shape the rules as they are being written will reap the benefits, both economically and in terms of fuelling sustainable growth.

This is an adaptation of an article published in the Ethical AI Governance Group’s 2022 Annual Report.

The Role of Ethics When Investing in AI

What role should ethics play in investing in AI?

That’s a question I have begun to hear more frequently as a general partner at BGV and as an investor who cares deeply about responsible tech and AI governance.

Embedding ethics in company culture and product design can become a competitive advantage for AI-first startups. By placing ethics at the core of product design, startups can accelerate market adoption by mitigating risks around bias, explainability and data privacy, and can continue to grow in an environment where values and ESG matter increasingly to customers and employees.

A values-driven industry

At BGV, we invest in early-stage immigrant founders who are building AI-centric products for the Enterprise 4.0 market. Whether the technology is robotics, NLP, or computer vision, these organizations have deep tech at their core.

We believe that far more value can be created through AI use cases that augment humans than through use cases that focus solely on replacing humans with automation. The former drives exponential growth in productivity, while the latter commoditizes skilled labor, leading to inequalities in income and wealth distribution. AI used only for labor substitution may make sense for use cases that are dangerous (e.g. mining) or where there is a shortage of labor (e.g. recycling). That’s why we screen our deal flow to better understand the value-creation impact, rather than investing purely in automation use cases.

We also screen founders for a prior track record of transparency and ethical behavior. As early-stage investors, it’s vital that we can trust founders to deliver on their vision and promise. At the end of the day, the VC business is a people business: we bet on the integrity and honesty of our founders as much as on the innovations they are bringing to the world.

Red flags and green lights

During our due diligence process, we look for red flags and green lights.

What’s a red flag? How has a startup founder performed in his or her entrepreneurial career? Have they engaged in ethical behavior with customers? We also look at a founder’s past experience at established companies. If we find that he or she has a track record of not fulfilling promises, that’s a big red flag for us.

The flip side of the red flag is the green light. If we find a startup entrepreneur has a consistent record of ethical practices and that their past customers and colleagues praise his or her leadership and integrity, that endorsement speaks volumes.

A human approach

We also believe that our founders have to trust us. It’s a two-way street. Our practice is to introduce founders seeking an investment to other founders in our network. They need to do their own diligence on BGV and hear that we are a values-based firm whose actions match its words, that we are truly committed to integrity, and that we support our founders through the good times and the bad.

That is vital, because as VCs we are building an 8-10 year relationship with a startup company. There will inevitably be ups and downs. Mistakes will be made. That’s why a relationship built on trust is at the core of our investing strategy.

Ethical AI governance strategy

There’s a difference between saying that you care about AI governance and actively engaging with startups to help them build responsible AI companies.  This is one of the reasons we founded EAIGG, a community platform of AI practitioners and investors dedicated to sharing AI governance best practices.

We have been pleasantly surprised that, indeed, many young entrepreneurs care about making the world a better place. Of course every young company wants to be a unicorn. But if a company’s values are lost along the way, and they are purely mercenary in pursuing their financial goals, then something important has been lost.

During our initial conversation with startup founders, we ask them point-blank: Do you care about AI governance and data privacy? Do you believe that AI can make humans expendable? We don’t expect that they’ll have everything figured out. But we do expect that these issues are important to them.

It’s important that startups have a roadmap for AI governance, because 10 years in the future, when the small startup has become a corporate brand, it will be nearly impossible to retrofit technology and product architectures for AI governance and data privacy. This cannot be an afterthought.

A holistic view

When dealing with AI, it’s important to take a holistic view. I call this approach enlightened self-interest. As a founder, it’s in the entrepreneur’s interest to build a great product. But it’s also in his or her interest to ensure market adoption, which implies eliminating model and data bias and addressing explainability and data privacy concerns so that AI technology remains human-centric.

We’re excited about the promise of AI, but we also believe it’s critical to put humans back in the equation. AI is projected to create $3 trillion of value over the next 10-15 years. Part of that equation is to contribute towards setting the guardrails so that AI development and deployment is democratized and creates value for both employees and owners of capital.

Yash Hemaraj, Partner, BGV in conversation with Sramana Mitra

Yash Hemaraj, Founding Partner at Arka Venture Labs and Partner at Benhamou Global Ventures (BGV), discusses Arka’s recent partnership with 1Mby1M to accelerate Indian B-to-B SaaS companies.

Celebrating Women's History Month | Arka Talks w/ Sonal Puri & Usha Amin

In this month’s episode of Arka Talks, we celebrate Women’s History Month, featuring Sonal Puri (CEO, Webscale) & Usha Amin (Co-founding Partner, SAHA Fund) in a panel moderated by Dhanasree Molugu (Arka alum & MBA Associate at Menlo Ventures), sharing inspiring stories from women on the frontlines of tech and VC. Arka Talks is our monthly fireside chat where we feature founders, operators, VCs and corporates discussing enterprise trends in the cross-border space and exploring ways to build and scale a successful cross-border startup.


Sonal Puri serves as the Chief Executive Officer of Webscale, which she has led from pre-product to recently closing $26M in growth financing. Prior to Webscale, she was the Chief Marketing Officer at Aryaka Networks, where she led sales, marketing and alliances for the pioneer in SaaS for global enterprise networks from pre-product to Series D. Sonal has more than 20 years of experience with internet infrastructure, across four startups, in sales, marketing, corporate and business development, and channels. Previously, Sonal headed corporate strategy for the Application Acceleration business unit and the Western US Corporate Development team at Akamai, working on partnerships, mergers and acquisitions. Sonal also ran global business operations and alliances from pre-product to Series C and exit, as well as the acquisition process, for Speedera (AKAM). She has held additional key management roles in sales, marketing and IT at Inktomi, CAS and Euclid.

Usha co-founded SAHA Asset Advisor, the first venture capital fund registered with SEBI to empower women’s entrepreneurship in technology. The fund has invested in 11 companies, seven of them active, mentoring them to excel in their sectors, scale successfully, create employment and growth opportunities in the ecosystem, and advance gender parity. The companies have focused not only on growing their business but also on women’s welfare, implementing policies that make it favourable for women to continue working despite domestic challenges, especially during the current situation.

Arka Spring Showcase - Jan '21

The Arka Spring Showcase is an invite-only gathering of the best minds in the US-India cross-border startup ecosystem, where our founders showcase their work and we host a set of insightful discussions on the US-India cross-border ecosystem.

We started the year with the Arka Spring Showcase, where the best minds of the US-India cross-border community came together for the bi-annual event. The session opened with a keynote by Eric Benhamou (Founder, Benhamou Global Ventures), who spoke about the transformation of the global enterprise and the enterprise trends to look forward to in 2021. Arka startups then presented their work and spoke about their progress so far.

We then had Sanjay Nath (Co-founder, Blume Ventures), Rashmi Gopinath (General Partner, B Capital Group) & Ankur Jain (Founder, Emergent Ventures) for a riveting panel discussion on “The Emergence of Enterprise Innovation in the US-India Ecosystem”.

This was followed by a great panel discussion on “Building and Scaling a Cross-Border Startup,” led by Yashwanth Hemaraj (Partner, Benhamou Global Ventures) in conversation with Rajoshi Ghosh (Co-founder, Hasura) & Shiv Agarwal (Co-founder, Arkin, acquired by VMware).

Arka Talks w/ Gaurav Manglik | The Art and Science of Building a Global Enterprise Startup

In this episode of Arka Talks, we had Ankur Jain (Advisor, Arka, and Founder, Emergent Ventures) in conversation with Gaurav Manglik (Co-founder, CliQr, acquired by Cisco, and GP, WestWave Capital) on the 24th of Feb at 9.30 PM IST / 8 AM PT. Arka Talks is our monthly fireside chat where we feature founders, operators, VCs and corporates discussing enterprise trends in the cross-border space and exploring ways to build and scale a successful cross-border startup. We were excited to have Gaurav on Arka Talks to explore his journey and the art and science of building a global enterprise startup. Gaurav is currently a General Partner at WestWave Capital, where he focuses on investments in early-stage enterprise B2B startups. He has been a key driver of innovation in the cloud computing industry: he co-founded CliQr Technologies in 2010 and served as its Chief Executive Officer until it was acquired by Cisco for $260M in 2016. At Cisco, Gaurav led cloud engineering for the Cloud Platform and Solutions Group and advised on Cisco’s cloud and container strategy and related investments and acquisitions.

Watch the full episode below –

Arka Talks ft. Obviously AI | No-Code & AI - Discussing the Endless Possibilities

In this episode, we feature Arka startup founder and INK fellow Nirman Dave of Obviously AI in a panel discussion with Rafael Ugolini, a Sr. Engineering Manager at Collibra (a leading data intelligence company) and an angel investor, along with Avantika Mohapatra, an ex-BMW engineer who now leads the charge in the no-code ML landscape as partnerships head at AIxDesign.

Obviously AI enables anyone to quickly build and run AI models in minutes, without writing code. Crafted for the citizen analyst, Obviously AI sits at the heart of 3,000 BI teams across the world and has delivered over 82,000 models since its launch in Feb 2020. It is a no-code AI tool recommended by Forbes and was named among the top 5 no-code AI tools by Analytics Insight. Learn more at https://obviously.ai

Planning for the Best and Worst Case Scenario during a Pandemic

Radhesh Kanumury, Managing Partner, Arka Venture Labs, was in conversation with Mahesh Krishnamurti, an investor and transformational leader, who gave timely advice on how enterprise startups can plan for the best- and worst-case scenarios during a pandemic.

The 3 Lean Marketing Principles of Highly Effective Startups

In this episode of Arka Talks, Elias Rubel of Mattermade talks about the 3 lean marketing principles of highly effective enterprise startups and how early-stage enterprise startups can leverage them.

Topics that were covered:

  • Lead Demand Gen: Is it smart to leverage ABM while running a hyper-lean marketing program? How should we think about budgeting and channel testing in the post-COVID economy? What cost-free demand gen strategy will always outperform companies with a bigger budget?
  • Growth Foundations: How do you set the right growth goals for your team? What does a best-in-class MarTech stack look like on a leaned-out budget? What are the highest-leverage growth channels that don’t cost a dime?
  • Marketing, the right way: What’s the most effective way for Sales and Marketing to partner? What’s the best way to focus your marketing efforts and messaging? How should we approach product marketing and leverage it to close more deals?