Resources

16 Takeaways from the 2023 Securing AI Summit in San Francisco

While GenAI adoption is now on the fast track, widespread adoption will take time. For perspective, it took six years for 50% of US households to adopt mobile internet; we expect a comparable level of widespread AI adoption to take 3+ years. This is consistent with the halving time scales of technology adoption we’ve seen with the PC era, the desktop era, and the mobile internet era.

Generative AI: The Macro Opportunity in Harnessing AI and Automation

#1 While GenAI adoption is now on the fast track, widespread adoption will take time. For perspective, it took six years for 50% of US households to adopt mobile internet; we expect a comparable level of widespread AI adoption to take 3+ years. This is consistent with the halving time scales of technology adoption we’ve seen with the PC era, the desktop era, and the mobile internet era.

  • The drivers of adoption hinge on three main factors: ROI (productivity growth versus cost), friction reduction, and budget alignment. Encouragingly, ROI and budget allocations show positive trends. Reducing friction requires developing robust data organization and privacy standards, which could limit the pace of adoption. On the other hand, digitally native employees have already begun readily embracing Generative AI and are enhancing workforce productivity.

#2 The swift uptake of GenAI introduces new cybersecurity challenges. For instance, the introduction of GenAI systems like ChatGPT amplifies the risk of phishing attacks. A shortage of cybersecurity talent compounds this challenge, elevating it to a critical concern for enterprises.

  • The macro-opportunity in harnessing AI and automation, despite gradual adoption, is undeniable. In terms of opportunity size, GenAI’s cybersecurity potential represents a $34 billion Total Addressable Market (TAM), with productivity gains acting as a driving force.  It is important for organizations to proactively address the implications and maintain a strong focus on AI cybersecurity.

Securing the Future: Demystifying LLMs and Threats

#3 There are three broad areas of LLM threats worth noting: prompt injection, data poisoning, and data leakage (not from the LLMs but from agents and vector databases).

  • Prompt injection can be compared to confusing the model, e.g., instructing it to behave as someone else to access information it wouldn’t provide otherwise. This tactic is not new in cybersecurity. The reason it works lies in the machine’s inability to distinguish between the control plane and the data plane. In simpler terms, it can’t differentiate between system code and user input. Prompt injection can occur through various means, including images or text, and it can prompt actions by agents, making it a more potent threat (e.g., a bad actor can inject an action into an email to “delete all” of an inbox). Potential risks include damage to an entity’s brand, data losses and financial losses.
  • Data poisoning involves intentionally manipulating data to undermine a model’s behavior. This can occur in different forms, depending on where the intrusion takes place within the tech stack. The two primary forms are:
  • Input Poisoning (most common): Adversaries alter trusted data sources used by LLMs to skew their learning (e.g., Wikipedia, or expired domains).
  • Model Editing: This form of data poisoning involves modifying LLMs to spread misinformation. For example, adversaries might tweak facts within an LLM and upload the altered model to a public repository like Hugging Face. From there, LLM builders integrate these manipulated models into their solutions, thus spreading false information to end users.

#4 Looking ahead, we anticipate that data poisoning will evolve to become more advanced, and possibly the hardest area to address with modern cybersecurity.

As LLMs increasingly incorporate diverse data sources, data poisoning becomes more sophisticated and practical, thereby expanding the potential attack surface. This evolution is fueled by an expanding array of data sources, shorter training cycles, and a shift towards smaller, specialized models. As a result, data poisoning is likely to become an increasingly prominent threat.

  • Data leakage – Enterprises are understandably concerned about employees sending sensitive data to LLMs, as malicious actors could exploit this information. However, it’s important to recognize that LLMs are not data stores; they’re data generators so this threat is a bit over hyped. Extracting data from LLMs requires several prerequisites:
  • Sufficient References: To extract data, there must be a substantial number of references to it, enabling memorization.
  • Knowledge of the Secret: Adversaries need to possess enough knowledge about the secret or its format to generate it accurately (e.g., first 4 digits of SSN).
  • Verification of Accuracy: The attacker must verify that the response is accurate and not a hallucination.

However, data leakage emerges as a deeper concern when data is extracted not from the LLM itself, but from agents or vector databases. This highlights the critical importance of access control rules to safeguard information security.

Unveiling Generative AI: Navigating Risks and Opportunities

#5 Prioritize use cases to reinforce trust in LLM deployments from the very beginning.

  • A staggering number of opportunities exist across industries and enterprise functions to leverage GenAI, but use cases need to be prioritized based on expected productivity gains and anticipated risks.  One approach is to prioritize use cases with lower risk profiles. This can be facilitated through a framework that considers the criticality of the desired outcome and the level of accuracy required for the task. Many internal use cases may fall into the “low-low” risk category, such as coding assistance. For riskier or externally focused cases, like underwriting in financial services, human oversight becomes essential. However, it’s vital not to discount potential high impact but higher risk initiatives and give them proper consideration.

#6 Ensure that the right guardrails are in place to navigate the risks associated with deploying GenAI technology.

  • This includes compliance with regulations like GDPR, the California Consumer Privacy Act, and the EU AI Act, amongst others, and being thoughtful about how enterprises handle data and inform users about its use. Adopting the NIST AI Risk Management Framework can also play an important role if enterprises do not already have a robust framework in place. If an enterprise decides to build a private LLM, it becomes a team’s responsibility to embed ethics and moral principles that align with the organization’s values.

#7 Engage cross-functional teams on GenAI deployment because this is a team sport that will often require that different stakeholders like the CTO, CIO, CISO, product managers, HR, and legal are all involved.

  • The CEO is a starting point whose buy-in of the potential gains and understanding of GenAI technology is crucial. But getting to scale and efficiency requires broad collaboration and communication. This is often more important amongst those executing the vision than those seeding it.

Armor for AI: Implementing AI Security While Enabling AI Quality

#8 The cybersecurity talent gap is wide and continues to widen. As the barrier to deploying AI applications lowers, the difficulty in properly securing these systems grows disproportionately.

  • Many organizations lack the necessary resources and data to train their own models, making external sources an attractive option. Outsourcing AI models, however, often results in reduced control and limited understanding of how they function. Consequently, many organizations are adopting a “build and buy” strategy, which is traditional in cybersecurity, as opposed to the “build vs. buy” approach.    

#9 There is rapidly growing demand for anyone selling solutions. In the startup space, AI Security (AISec) has been one of the strongest categories for fundraising this year.

#10  Innovation in AI does not discriminate between defender and attacker. As GenAI models become stronger, they also become more dangerous tools for malicious intent.

  • “ML to attack ML” (which has been around for a long time now) has already evolved into “AI to attack AI.” Whose responsibility is it to ensure that new foundational models, open-source models, etc. are not used maliciously?

Table Stakes: Exploring Guardrails for Large Language Models

#11 There is simply no way to prevent bad actors from using Gen AI for advanced phishing attacks, this is in their DNA and motivation.

#12 Having a clear LLM Data Privacy policy in place and an advanced AI roadmap with corresponding safety measures is essential.

  • Enterprises can start with limited access controls, small private models, and involve humans in the loop to ensure protection against risks. However, merely formulating a policy isn’t enough;  collaboration, governance and a cultural shift are required to fully implement and enforce the right measures for success.

#13 To effectively establish data privacy guardrails for LLMs, it’s crucial to approach it as a process rather than a static product.

  • If an enterprise is building a private model, you can implement various guardrails at different stages, including training, fine-tuning, and prompt crafting. Numerous existing technologies can assist in this process. For instance, sensitive data can be replaced during the training stage.

#14 While defining guardrail policies for LLMs constitutes a strong first step, the point of failure often lands at enforcement. Several best practices are emerging to ensure data privacy. These include output scanning techniques such as anonymization, data minimization, differential privacy, encryption and federated learning.

The Hierarchy of Unknowns and What Do We Do Now

#15 GenAI is a technology breakthrough beyond precedent; for the first time, humans will need to reckon with the eventual existence of super-intelligence.

  • This brings a hierarchy of unknowns, of questions that span all facets of human life as we know it, from the technology implications to corporate, social, legislative, political, national, environmental, philosophical and existential factors.
  • As we consider the many productivity and efficiency gains from GenAI, how can humankind prevent its weaponization and institute the necessary protections to preserve humanity and human-in-the-loop controls? We are at the beginning of asking and addressing these fundamental questions.

Communities: Embracing the complexities of an AI driven Future and charting a course that is Ethical

#16 The impact of communities is accelerating:

  • EAIGG– an AI-practitioner community with 1,800+ global members- released its 2023 Annual Report at the event. In this ever-evolving landscape, opinions on the opportunities and risks presented by AI are as diverse as they are passionate. It’s easy to get swayed by the cacophony of voices, each asserting its version of the truth. EAIGG has made a conscious effort not to lean too heavily in one direction. Instead, this annual report presents a collection of thought-provoking articles that aims to elevate the discourse, offering insights that can illuminate the path forward and foster a platform for meaningful dialogue.
  • The Ethical AI Database– developed in partnership with the EAIGG- remains the only publicly available, vetted database of AI startups providing ethical services. The database is updated live, with market maps and reports published semiannually.
  • The Markkula Center of Applied Ethics at Santa Clara University promoted its framework for ethical decision-making enterprise Boards and C level executives at the event.

Conclusion

Enterprises have begun to recognize the productivity opportunities associated with GenAI, but they must be ready to innovate without being paralyzed by fear because this is where the future is headed. Technology breakthroughs have always produced equal levels of optimism and pessimism, excitement and anxiety. A few frameworks and strategies can help enterprises navigate the path. Prioritizing the right use cases and putting guardrails in place early are crucial steps. Guardrails include protecting against threats such as prompt injection, data poisoning, data leakage as well as ensuring compliance with regulations while engaging cross-functional teams on deployment to achieve the benefits of scale and efficiency. Innovation from AISec startups will play a strong role to secure AI as enterprises may lack the resources to invest in this themselves and there are simply no ways to completely prevent bad actors from exploiting GenAI to launch advanced phishing attacks. Finally, while defining guardrail policies for LLMs represent a good first step, enforcement is often the Achilles heel, so leveraging emerging best practices around output scanning is of critical importance to ensuring data privacy, and a secure information environment for the enterprise.  Perhaps the ultimate conclusion is that we are still in the early days of GenAI and we realize that more collective discussion and cross-industry collaboration are vital to ensure we are Securing AI while advancing innovation and growth.

Many thanks to our honored guests and speakers who shared their expertise and perspective with us:

Journey of Becoming a VC

Venture capital is a delicate balance of hope, optimism, awareness and caution. It requires us to be both aggressive and vulnerable. Over the past nine years, I have embraced the art of “transparent capital.” It means being honest with our stakeholders about expectations, having open conversations about opportunities and difficulties, and making decisions together. I am grateful to be surrounded by individuals who value this brand of venture capital.

See our Press Release Announcing all of our Promotions here.

Below you will find two notes from our newly promoted Partners:

Yash Hemaraj, BGV General Partner

Yashwanth Hemaraj

Honored and thrilled to announce my promotion to General Partner at BGV, thanks to the unwavering support, trust, and collaboration of our incredible team and visionary founders.

A big thank you to our BGV and Arka Venture Labs portfolio companies and founders for inspiring me with your groundbreaking ideas and relentless passion for what you do. I am committed to being an enabler for your success, and I am excited to continue supporting you on your path to greatness.

Venture capital is a delicate balance of hope, optimism, awareness and caution. It requires us to be both aggressive and vulnerable. Over the past nine years, I have embraced the art of “transparent capital.” It means being honest with our stakeholders about expectations, having open conversations about opportunities and difficulties, and making decisions together. I am grateful to be surrounded by individuals who value this brand of venture capital.

My journey is defined by the extraordinary founders, co-investors, LPs, and my loving family who make all of this possible. I am humbled by the trust and support of our investors and partners, who have joined us in our pursuit of excellence. Your belief fuels my determination to identify and empower exceptional ventures that will shape industries, create better jobs, and transform lives.

Thank you for being a part of my journey, and I am incredibly excited for what lies ahead. Together, let’s continue driving innovation, making a positive impact, and shaping the future of venture capital.

Sarah Benhamou, BGV Partner

Sarah Benhamou

I’m grateful and honored to be promoted to a Partner of BGV. When jumping onboard as an MBA intern in Israel in 2018, little did I know that this thrilling ride would lead me to managing European activities from a new office in Paris.

The VC industry has already changed a great deal in this short time, shifting from a period of irrational exuberance to a more diligent focus on sustainable growth and value creation. Navigating through these changes has been both challenging and rewarding, and I am grateful to share this journey with my experienced colleagues, who provide a rare showcase of discipline and methodical judgment while steering turbulent market fluctuations.

As I step into this new role, I am filled with optimism and enthusiasm for the future. Making a positive impact on society and the environment is a mission that resonates deeply with me, on a personal level, and I am fortunate to pursue this path professionally. BGV will continue to play a role in supporting purpose-driven ventures that drive both financial returns and meaningful change.

Throughout this journey, I’ve had the privilege of partnering with incredible talent and remarkable portfolio companies that sit at the forefront of enterprise innovation. These startups – including Madkudu, Zelros, Flytrex, Cryptosense and Kardinal – have continually impressed me with groundbreaking novelty, chastened by a disciplined grit and determination. Cryptosense, in particular, holds a special place in my heart as it marks my first successful exit as an investor, with its acquisition by SandboxAQ.

None of this would have been possible without the support of the Partners and team at BGV. Special thanks to them for their trust and for extending this opportunity, and also for being inspiring colleagues to learn from and work with!

Competitive Positioning and Messaging - BGV Talk Shop

BGV/Arka recently hosted a 9-Day Interactive Workshop Series with an audience joining us from across the world – India, Israel, France, Singapore, and the United States.

BGV/Arka recently hosted a 9-Day Interactive Workshop Series with an audience joining us from across the world – India, Israel, France, Singapore, and the United States. What started as a cross-learning opportunity for our portfolio companies as well as a way for them to interact with industry experts, led us to some great discussions and insights that we’d love to share with you all. We’ve tried our best to summarize these discussions. This workshop series has been hosted by June Bower. June Bower is a seasoned marketing professional with over three decades of experience in the industry, June began her career with marketing at Apple, working alongside Steve Jobs as one of the company’s first 300 employees. She has also served as Head of Brand Marketing at Adobe, Cisco (WebEx) and Alcatel to name a few. Over the last five years, she has focused on working with start-ups and venture capital firms, most recently Lightspeed Ventures, where she served as a Partner for marketing, helping them navigate the complex world of marketing and overcoming their unique challenges. The key theme and takeaway from the first session was that a company’s messaging is crucial to its success in marketing. Without a powerful and persuasive message, any marketing efforts and money spent will not yield the desired results. Good messaging is not about the founder or the product, good messaging is about one’s prospects and customers. To create a persuasive message, the following components need to be in place:

  1. A company must first own the problem, or "own the playing field”, by choosing a customer problem they can solve in a way that provides them a competitive advantage. A problem that none of their competitors can credibly claim to solve. This is a simple but powerful concept. Typically, founders are driven to talk about everything they do diluting the one that gives them that advantage over their competitors.
  2. A company must also know who its target audience is - identify who cares about the problem the most. Tailoring the message to a specific, narrow audience is crucial to the success of any marketing campaign. A great example is Nike’s ad campaign from 2020 starring Collin Kaepernick - “Believe in something, even if it means sacrificing everything.” A perfect example of provocative messaging only intended for the target audience – the athlete. Nike emphasizes the fact that they only care about their target audience and neglect the remaining opinions. They received a lot of publicity and PR at the time of launch of this ad campaign, turning out positively for their business.
  3. The third component is to create a sense of urgency by shaping evidence to make them believe it is the most important problem. When the company is in its pre-seed stages, the founders may have a strong hypothesis on this evidence (manufactured evidence to begin with) that they can work with their design partners to validate.
  4. Finally, the company must present its solution in a clear and effective way, crafting a compelling and influential story. A company’s story needs to be embraced by everyone in the team as a shared responsibility, including senior staff and sales representatives. This also has a multiplicative on your marketing operations with network effects. While it is crucial to listen to everyone's feedback, the story should not lose its impact. The primary focus should be on the customer's narrative, and then tweak for other target audiences.

The best message

  • Tells a story,
  • Is targeted,
  • Is emotional,
  • Is Persuasive - not descriptive,
  • Is evergreen - not constantly changing to avoid losing sense of identity,
  • Is specific
  • Is difficult for competitors to credibly claim, and
  • Is urgent.
  • While marketing and sales go hand in hand, they are different. You can target one market but sell to everybody. Messaging also ties product decisions to marketing.

In conclusion, a company's messaging should focus on the problem it solves and the target audience it serves, rather than just its product or service. A persuasive message has a significant impact on business, and companies should focus on creating a compelling story that is targeted, emotional, and persuasive.

Here are some good resources on this topic:

https://www.aha.io/roadmapping/guide/marketing-templates/messaging-templates

https://www.nandinijammi.com/blog/product-messaging-framework

Special: Harnessing the Power of Generative AI in Cybersecurity

Over the past decade, BGV and Forgepoint have been investing in innovative AI and Cybersecurity startups. Alberto Yépez (Managing Director at Forgepoint Capital) and Anik Bose (Managing Partner at BGV and Founder of the Ethical AI Governance Group (EAIGG)) share their perspectives on how cybersecurity innovation will be impacted by Generative AI. Their joint theses is that Generative AI will enhance the capabilities of malevolent actors, increase the need for guardrails, with innovation advantage going to the incumbents in the near term and to startups for the longer term.

Over the past decade, BGV and Forgepoint have been investing in innovative AI and Cybersecurity startups. Alberto Yépez (Managing Director at Forgepoint Capital) and Anik Bose (Managing Partner at BGV and Founder of the Ethical AI Governance Group (EAIGG)) share their perspectives on how cybersecurity innovation will be impacted by Generative AI. Their joint theses is that Generative AI will enhance the capabilities of malevolent actors, increase the need for guardrails, with innovation advantage going to the incumbents in the near term and to startups for the longer term.

Artificial Intelligence is currently experiencing its “Netscape” moment, propelled by the advent of potent Generative AI models such as Chat GPT. Research conducted by McKinsey estimates that generative AI could contribute an equivalent of $2.6 trillion to $4.4 trillion annually to the global economy. (To put this into perspective, the United Kingdom’s total GDP in 2021 was approximately $3.1 trillion.) According to their analysis, about 75% of the potential value generative AI use cases could deliver is concentrated in four areas: customer operations, marketing and sales, software engineering, and R&D across industries. Unsurprisingly, AI is dominating conversations across the cyber world as businesses rapidly adopt and develop AI-based technologies- and/or react to their sudden rise and accessibility. So what are the implications on AI and Cybersecurity?

AI AND GENERATIVE AI: CONTEXT AND DEFINITIONS

Let’s begin with our context. AI is hardly new despite the intense hype cycle we find ourselves within. AI was first defined as an academic discipline in the mid-1950’s and has since gone through its own boom and busts – periods of intense interest (and funding) followed by “AI winters” and so on. Before the advent of Generative AI, our understanding of AI’s impact on cybersecurity was twofold. First, we recognized the application of AI for protection and detection, either as part of new solutions or as a means to bolster more conventional countermeasures. Second, we acknowledged the necessity to secure AI itself- both as a protective technology and as a tool used by threat actors to develop new attack vectors. Use cases varied from Transaction Fraud Detection, Botnet detection, File-based Malware detection, Network risk assessment, Vulnerability remediation, user authentication, endpoint protection (XDR), and spam filtering.

Today, with the release of several Generative AI platforms, we anticipate the Cybersecurity sector to be profoundly impacted in additional ways including:

  1. Amplifying the capabilities of malevolent actors through attack vectors such as evasion, extraction, and enumeration attacks.
  2. Bridging the cyber skills gap with powerful AI assistants, to boost the productivity of enterprise cyber teams. These include those launched by incumbents like Crowdstrike and Microsoft.
  3. Elevating compliance guardrails around data privacy and output data verification to ensure responsible AI deployment.

Before delving deeper, it’s essential to clarify a few key definitions:

  1. AGI (Artificial General Intelligence): AGI refers to highly autonomous systems that can outperform humans at most economically valuable work. AGI encompasses general intelligence and is capable of understanding, learning, and applying knowledge across a wide range of tasks. The goal is to replicate human-level intelligence, with the potential to exhibit self-awareness and consciousness. Our hypothesis is that Threat Intelligence Platforms (TIP) will shift towards GPT-like chats as a more effective information source for users, either as auto prompts and API feeds based on detection Indicators of Compromise (IOCs), or interactive for R&D, similar to how Microsoft Copilot is used forapp development, Security, and M365, and GitHub Copilot is used for programming.
  2. GPT (Generative Pre-trained Transformer): GPT is a specific type of AI model developed by OpenAI (for clarity, the popular ChatGPT is an AI chatbot app powered by GPT, similar to how a Lenovo or Dell laptop might be powered by Intel). Models such as GPT-3 and GPT-4 are designed for language generation tasks. They are pre-trained on large volumes of text data and can generate human-like responses given a prompt. These models excel at tasks like natural language understanding, text completion, and language translation. Our hypothesis is that AGI will improve interpretive systems (SOAR and Anti-Fraud) as Large Language Models (LLMs) and Small Language Models (SLMs) are harnessed for their most suitable functions.

NEW ATTACK VECTORS: ENHANCING THE CAPABILITIES OF MALEVOLENT ACTORS

Generative AI is a double-edged sword. While it holds immense potential for improving cybersecurity defenses, it also amplifies the capabilities of malevolent actors. By exploiting the capabilities of sophisticated AI models, attackers can devise new attack vectors that traditional security measures may struggle to counter:

  1. Evasion Attacks: In evasion attacks, the adversary uses generative AI to create inputs that are designed to be misclassified by AI-based detection systems. For example, they could manipulate malware so it appears benign to the security system, thereby evading detection. Generative AI, with its ability to understand and generate data patterns, can significantly improve the success rate of these evasion attempts.
  2. Extraction Attacks: Extraction attacks refer to scenarios where an adversary trains a model to extract sensitive information from a system, leading to potential data breaches. The advent of Generative AI means that attackers can train models to mimic the behavior of legitimate users or systems, thus tricking security measures and gaining unauthorized access.
  3. Enumeration Attacks: Enumeration attacks involve using generative AI to discover system vulnerabilities. Hackers can automate the process of testing different attack vectors, rapidly identifying weak points in a system that they can then exploit.
  4. Influence Attacks on Classifiers: Influence campaigns have been demonstrated in social media and securities/commodities trading systems’ reliance on AI repeatedly over the past decade or more – including election cycle and quarantine era mis/disinformation as well as the manipulation of market pricing and performance news. As generative AI is used for more specific, yet broader contexts and concepts in organizational functions, those same techniques will be exercised to exploit the dependencies on knowledge offered to organizations and consumers.
  5. Poisoning Attacks on Data: One simple example is in Copilot and generative AI code samples that hallucinate functions or resources that hackers may take advantage of to create malicious resources that are subsequently called by that code. This vulnerability requires code validation and testing before production release, which is generally a common activity in modern CI/CD development. This means that even development systems can be compromised and offer back doors for more nefarious software supply chain compromises, especially cine those development systems are rarely subject to network isolation or security controls levied on production systems.

As Generative AI continues to evolve, we anticipate an increase in these types of sophisticated attacks. Therefore, it is imperative for both incumbent and startup entities in the cybersecurity sector to remain vigilant and proactive, developing countermeasures that anticipate these new forms of threats.

While this may seem daunting, we believe it is also an opportunity for cybersecurity innovation. The challenges posed by generative AI-powered cyberattacks necessitate novel solutions, opening new frontiers in the cyber defense sector. Our discussions with key industry players reveal a robust willingness and preparedness to address these concerns.

BROAD YET PRECISE: GENERATIVE AI’S IMPACT ON CYBERSECURITY INNOVATION

Generative AI has significant potential to influence cybersecurity innovation, both in established companies (incumbents) and startups. Here’s how generative AI is shaping cybersecurity:

  1. Anomaly Detection and Analysis: Generative AI models, trained on substantial datasets of known malware and cyber threats, can identify patterns and generate new threat signatures. This aids real-time threat detection and analysis, empowering security systems to proactively identify and respond to emerging threats. Generative AI models are used to detect adversarial attacks, where bad actors attempt to manipulate or deceive AI systems.
  2. Security Testing and Vulnerability Assessment: Generative AI can automate security testing by generating and executing various attack scenarios to identify vulnerabilities in software, networks, or systems.
  3. Password and Credential Security: Startups are using generative AI to develop password and credential security solutions.
  4. Malware Generation and Defense: Generative AI can be employed to generate new malware samples for research purposes and to strengthen antivirus and anti-malware systems.
  5. Security Operations Automation: Generative AI models can automate routine security operations while augmenting SOC analyst productivity.

THE NEED FOR GUARDRAILS: THE GENERATIVE AI ACCURACY PROBLEM

Generative AI has its limitations- primarily around consistently providing accurate outputs.  Therefore, what guardrails are needed to reduce risks and ensure success with broader adoption? Generative AI tools like ChatGPT can augment subject matter experts by automating repetitive tasks. However, they are unlikely to displace experts entirely in B2B use cases due to AI’s lack of domain-specific contextual knowledge and the need for trust and verification of underlying data sets. Broader adoption of Generative AI will stimulate an increased demand for authenticated, verifiable data, free of AI hallucinations. This appetite will spur advancements in data integrity and verification solutions, alongside a number of other ethical AI issues such as privacy, fairness, and governance innovations. Boards of Directors now more vocally demand the responsible use of AI to improve operational efficiency, customer satisfaction and innovation, while safeguarding customer, employee and supplier data and protecting intellectual property assets.

ON NEAR-TERM INNOVATION: INCUMBENTS’ EDGE

Incumbents carry the advantage of pre-existing infrastructure, high-compute resources, and access to substantial datasets. Consequently, we anticipate a surge of innovation from these entities in the near term. Industry stalwarts such as Crowdstrike, Palo Alto Networks, Microsoft, Google, IBM and Oracle are already harnessing Generative AI to bolster their security solutions. Here’s an exploration of their endeavors:

Crowdstrike:

  • Threat Detection and Response: Crowdstrike employs generative AI to detect and respond to advanced threats in real-time. Their AI-integrated platform, Falcon, scrutinizes large amounts of data to discern patterns and threat indicators, enabling swift detection and response to cyber threats.
  • Adversarial Attack Detection: Utilizing generative AI models, Crowdstrike can detect and counter adversarial attacks like fileless malware and ransomware. Their AI algorithms are capable of pinpointing suspicious behavior, anomalies, and threat indicators.
  • AI-Driven Security Analytics: By leveraging generative AI, Crowdstrike enhances its security analytics capabilities, thereby enabling the identification of intricate attack patterns, threat prediction, and the generation of actionable insights for security teams.

Palo Alto Networks:

  • Threat Intelligence and Automation: The company integrates generative AI into their security platform, Cortex XSOAR, automating threat intelligence and incident response processes. Their AI algorithms sift through extensive threat data, equipping security teams with actionable insights and automated playbooks for efficient threat response.
  • Malware Analysis: Generative AI models power advanced malware analysis. This helps companies understand emerging threats, devise effective countermeasures, and fortify cybersecurity solutions.
  • Behavioral Analytics: Generative AI aids in developing behavioral analytics models that learn standard user, device, and network behaviors to detect anomalies and potential security breaches.
  • Security Policy Optimization: By using generative AI, Palo Alto Networks optimizes security policies through the analysis of network traffic patterns, user behavior, and threat intelligence data, dynamically adjusting security policies for robust protection against emerging threats.

Microsoft

  • SOC Automation: MS’s Security Copilot is a large language AI model powered by OpenAI’s GPT-4, combined with a Microsoft security-specific model that incorporates what Microsoft describes as a growing set of security-specific skills informed by its global threat intelligence and vast signals volume. Security Copilot integrates with the Microsoft Security products portfolio, which means that it offers the most value to those with a significant investment in the Microsoft security portfolio.
  • Human-in-the-Loop Augmentation – While Security Copilot calls upon its existing security skills to respond, it also learns new skills thanks to the learning system with which the security-specific model has been equipped. Users can save prompts into a “Promptbook,” a set of steps or automations that users have developed. This introduction is likely to be resonant and disruptive because of the human aspect that remains — and will remain — so vital to security operations. The ability of large language AI models to comb through vast amounts of information and present it conversationally addresses one of the primary use cases of automation in SecOps: gathering the context of incidents and events to help analysts triage and escalate those that pose a significant threat.

Google:

  • Vulnerability and Malware Detection: Google announced the release of Cloud Security AI Workbench powered by a specialized “security” AI language model called Sec-PaLM. An offshoot of Google’s PaLMmodel, Sec-PaLM is “fine-tuned for security use cases,” Google says — incorporating security intelligence such as research on software vulnerabilities, malware, threat indicators and behavioral threat actor profiles.
  • Threat Intelligence: Cloud Security AI Workbench also spans a range of new AI-powered tools, like Mandiant’s Threat Intelligence AI, which will leverage Sec-PaLM to find, summarize and act on security threats. VirusTotal, another Google property, will use Sec-PaLM to help subscribers analyze and explain the behavior of malicious scripts.

IBM:

  • Threat Detection and Response: IBM’s QRadar Suite is a subscription-based (SaaS) offering that combines AI-enhanced versions of IBM’s existing threat detection and response solutions into a comprehensive global product. The new QRadar Suite goes beyond traditional security information and event management (SIEM) capabilities, aiming to provide a unified experience for security management. Its goal is to assist organizations in managing extended detection and response (EDR/XDR) capabilities, SIEM functionalities, and Security Orchestration Automation and Response (SOAR) in cybersecurity.
  • Security Compliance: IBM’s approach to security and compliance in highly regulated industries, such as financial services, emphasizes the importance of continuous compliance within a cloud environment. By integrating the Security and Compliance Center, organizations can minimize the risks associated with historically challenging and manual compliance processes. The solution enables the integration of daily, automatic compliance checks into the development lifecycle, ensuring adherence to industry standards and protecting customer and application data.

Oracle, SAP, Salesforce and other enterprise application providers are beginning to provide comprehensive AI service portfolios integrating their cloud applications and their existing AI infrastructure with state-of-the-art generative innovations.  Their unique approach and differentiation means their customers will have complete control and ownership of their own data inside their “wall gardens” to derive insights and avoid data loss and contamination.

The incumbents not only have the company and customer install base and diverse platform to develop, test, and secure the safe and productive use of Generative AI / AI in general – but also having their own first party security products (Google’s Mandiant and Microsoft Security/Sentinel along with IBM’s Q Labs and Resilient acquisitions) that are using generative AI to power automated threat intel and security…while needing to retain human in the loop decision-making throughout the SDLC (and modern SOCs).

LONGER TERM INNOVATION: ADVANTAGE STARTUPS

Startups offer innovative, agile solutions in the realm of generative AI for cybersecurity. However, the investment climate for generative AI-driven cyber solutions is still nascent, given the limited number of attacks witnessed to date involving the AI attack surface.

The pivotal role of data cannot be overstated. For startups to flourish, they must leverage open-source LLMs while enriching data with proprietary information. We anticipate that synthetic data innovation and Robotic Process Automation (RPA) will play crucial roles, especially in regulated sectors like financial services and healthcare that have unique data privacy requirements. However, synthetic data is not expected to significantly influence decision support automation, such as privileged access management.

Another key area for startup innovation exists around Verification and Testing, driven by mounting enterprise demand to harness Large Language Models (LLMs). Other noteworthy areas of opportunity include Explainability, ModelOps, Data Privacy for Generative AI applications, Adversarial AI/Data Poisoning, Autonomous Security Operations Centers (SOCs), Differential Data Privacy, and Fraud Detection.

Capital efficient startups will need to utilize existing infrastructure (foundational models) and concentrate on applications that add value through Single Language Models (SLM) via contextual data enrichment. Acquiring proprietary datasets may also be a strategic move for startups aiming to establish a competitive edge.

Furthermore, we posit that the compliance and regulatory environment shaped by the EU Act will direct startup innovation toward responsible AI and Governance, Risk Management, and Compliance (GRC). Notably, the founder DNA in this space will require a unique blend of cybersecurity domain expertise paired with generative AI technical prowess.

IN CONCLUSION

We anticipate strong innovation at the intersection of Cybersecurity and Generative AI, fueled by incumbents in the near term and startups in the long term. Automating repetitive tasks with Security Co-pilots will go a long way towards addressing the cyber skills gap, while newfound protection and defense capabilities enabled by Generative AI will help secure large enterprise datasets and enable more effective identity orchestration to prevent breaches amid expanding attack surfaces. Morgan Stanley predicts that Cybersecurity is ripe for AI automation representing a $30Bn market opportunity. The bar on compliance guardrails will be raised in this space given the ethical concerns around the accuracy of Generative AI outputs (hallucinations), increasing the need for human-in-the-loop, regulations and raising the stakes to build an “ethics stack” to complement and safeguard the explosive AI technology stack.  Finally, enterprise CTA’s (committees of technology and architecture) will increasingly need to embrace responsible application of Generative AI to succeed and compete.

Board of Directors will play an important role to demand good governance and the use of responsible AI, while protecting the key information assets of every business.

The Art and Science of Building an Enterprise Stack Startup w/ Jon Gelsey (Xnor & Auth0)

Yash Hemaraj in conversation with Jon Gelsey. The Art and Science of Building an Enterprise Stack Startup with Jon Gelsey (Ex-CEO - Xnor.ai)

INTERVIEW TRANSCRIPT
Yash: Welcome everyone to our opening keynote. We are fortunate to have with us, John Gelsey. John most recently was the CEO of Xnor.ai, which was a computer vision and ML spinoff of the Allen Institute for AI and University of Washington. Xnor was acquired by Apple for a publicly reported price of $200 million in January 2020. Before Xnor, he was the founding CEO of Auth0, an industry leading identity-as-a-service platform, which he grew from 3 employees to nearly 300 over 4 years. Okta announced its acquisition of Auth0 for $6.5 billion dollars in March 2021. John's previous experience includes responsibility for strategy, acquisitions, and investments in Microsoft's corporate development and strategy teams, venture investments at Intel Capital and product management at Mentor Graphics. John started his illustrious career as a computer designer at Convex Computer, which was acquired by Hewlett Packard in 1995. John, thank you for kicking off this year's Arka showcase.

Jon: Thank you for having me.

Yash - Question 1 - You were CEO of an identity-as-a-service platform at Auth0, which you grew from 3 employees to 300 over four years, and then CEO of a computer vision and machine learning spinoff in Xnor. Could you compare and contrast being CEO of these two companies?

Jon: Sure, there are a lot of similarities and a lot of differences. Mostly, a big part of the difference is around the stage that the market was at for sort of their excitement and interest in the technology. If we start with Auth0, it was essentially an abstraction layer to make it easier to integrate authentication and authorization into your application. That for the environment that you were given, you're at an enterprise and they've got an existing environment they built up over the decades, or it's a consumer application and you've got an environment that you're already working in, and so it made it easier for that. Well, the nice thing about that is everybody understood the problem, whereas everybody hated dealing with a problem. It was very painful, but everybody understood the problems. There's little customer education that had to be done. My favourite quote for what the problem was from the CTO of a very large utility in Asia Pacific, who said identity is a tar ball covered with razor blades. Every time you touch it, your fingers are bloody. It's been great with Auth0 because you're insulating us. You're the thick, heavy gloves that makes it easier to deal with identity and our speeding our digital transformation. So that was great. We didn't have to do any market education about why identity was a good thing. Instead it was market education as to why we, out of the other sort of 20 different approaches, active directory or Ping or ForgeRock or whoever, why we might be a better path forward. And for that, we focused heavily on what the VCs now are calling product led growth. For us, in terms of being able to demonstrate hands-on to people with free samples and have their friends talk about it and such about why we might be a good, a better path than whatever they might have used before or whatever they had been considering. Now when I compare that to Xnor, Xnor was doing some of the most advanced machine learning capabilities in the world. Xnor’s founder had actually invented one of the most commonly used machine learning models for the last four or five years called YOLO (You Only Look Once), which says a multi object recognition. Our founders had invented a number of technologies used in sort of the foundation of modern machine learning and were particularly focused with Xnor on edge machine learning, on low end processors at the edge that were power constrained and compute constrained being able to do very accurate pattern recognition. Machine learning, of course, is just pattern recognition, be it images or text or speech or whatever the pattern might be. I'd say sort of one of the biggest differences that struck me between the two companies as we were going into sort of our go-to market efforts to accelerate our revenue, which we were lucky with, we were in, eight digits revenue by the time the Apple deal closed. With identity, everybody had done it before, everybody hated it, everybody knew how hard it was, and so was eager to find a better path to doing identity. With machine learning, very few people had done it before, or if they'd done it, they'd used PI Torch or TensorFlow, and it's like, oh, you know I've got a model here. It's working fine. I mean, it's only 80% accurate, but I'm sure soon I can get it to 90% accurate. So what do I need you guys for? Well, it turns out that it's actually really hard to get your models highly accurate and also fit within sort of the compute constraints and such that you have. So we had a lot of sort of customer education to do about why you might want to work with Xnor as opposed to just doing it yourself. There’s also the dynamic, of course, with machine learning instead of the new and sexy things, everybody's like, oh yeah, I'm a machine learning engineer, look what I've done here, and just a little bit more work and it'll be great. And so we had more education to do with Xnor. But frankly, to tell you the truth, the sort of the way this kind of education works is that people just need to be burned enough times to say, well, you know, maybe I should turn to a third party. Again, it's just like the people who tried to write their own authentication solutions in 2005- 2010, you know, that works great until it doesn't, and so I'd say that was maybe the biggest difference was the maturity of the market and accepting the technologies that were the foundation for the products for each company.

Yash - Question 2 - Thank you. So, you mentioned about PLG, product-led growth. This has definitely emerged as a very popular go-to-market approach for a lot of start-ups. Auth0, in my opinion, was one of the companies that did it right. You led the company during the most critical stage of going from 0 to 300. Could you share some biggest learnings of applying a robust PLG strategy as you were growing Auth0?

Jon: Sure. We actually didn't call it PLG. Again, we just sort of did stuff that we thought kind of made sense. One of the things I've learned actually in conversations with VCs, especially over the last few years, is how much it's misunderstood. I think there's sort of a popular belief that PLG is great, because what it means is you offer a free sample and fire your enterprise sales team and everything's wonderful and sadly that doesn't work. What PLG really is, it's a way to reduce the cost of qualifying leads for your sales team so that once your enterprise sales team, which is critical, has the lead, it's a much more qualified lead, and therefore their productivity is much higher and you can scale much more quickly. Ultimately, PLG is all about building a sentiment and reputation online. So that, effectively, the web and Google are marketing you rather than you having to do all the heavy lifting of buying advertisements or extensive trade shows or something like that. We would say, sort of as a joke at Auth0, but it's kind of accurate, which is, I don't care if you're buying a new toaster or you're buying a luxury yacht, you always start with Google. It's like, “What's out there?”. The number one metric for good PLG is have a high Google SEO ranking. You want to show up in the top half of the first page with the Google search results. Google's pretty good now at detecting, when you're stuffing keywords in a webpage or things like that, you can't “ defraud” Google. The only way to get good rankings is to actually have very solid and organic content that's viewed as authoritative across the web. You know, lots and lots of sites pointing to you and when you show up on a search result, people clicking on you and then being happy with the results. So, what that really means is a lot of very, very good content marketing is required to have good PLG. Now, content marketing, sort of back when I was a kid meant you write a white paper for download or something like that and I guess maybe that's kind of a component. Content marketing from the PLG perspective is that you create the content that will be surfaced by Google, that'll be discovered by Google that generates positive sentiment. We were very successful in early days kicking things off by having a very well written blog. You curate your social media presence not in a sort of spammy way. You do it as people are talking about you on Twitter or Reddit or whatever, that you respond in a supportive and authentic way. We actually could have decided that our ICP, our ideal customer persona, maybe a better term, is sort of the ideal influencer as a developer. We were selling a security product, but we used the developer as our Trojan to get access to the security or VP of Engineering's budget and we did that by selling an awesome developer tool that would make a developer more productive. He or she could do in a day what would take weeks or months. That was the goal. So what are other forms of content that would appeal to that developer who would get us get all excited and then say, oh, we have to have these guys as part of our product and unleash the budget. Having your documentation on the other side of the firewall so it's searchable, that's actually really powerful. We talked about the sort of, here's a free sample as one of the misunderstandings. That was sort of maybe not quite accurate. A free sample is super important, and in fact, one of the most powerful techniques is the free sample. So I can try it for myself and say, oh yeah, this is a lot, actually a lot better than the other stuff I've tried. And then of course, your trusted friends saying great things, you should try Auth0. When I was a new grad engineer, you know, try to figure my way through. I’d just walk down the hall and talk to some of the grey beards, the people that were 10 years older than me and say, “Hey, you know, do you like PI or emax?. Oh, I like emax.” Okay, cool. I'll use emax. You go with your trusted friend cause you don't have the time, or probably not even the expertise to really evaluate. So having the free sample, Google saying nice things about you is, well, that all plays into here's an online environment where your reputation is high and multiple sources are saying nice things about you. It's like, yeah, this is pretty good. You should try it. And you can even try it yourself and say, oh yeah, this seems to work. I'm trying it with a trivial example, but that was easy. And so when I now get into my complex enterprise environment that's riddled with corner cases. I think I see how I can make this work fairly quickly. So PLG is, if you can do it right, is a super powerful form of marketing to generate, self-qualified leads where you're not spending the money to qualify the leads, they're qualifying themselves by trying to sample and such, and is a key to super-fast, inexpensive growth if you can get that flywheel coin. I will point out a critical part of it, you got to have an awesome product so that people actually love your product. But if you have that product that everybody loves then, PLG is a super-efficient way to go to market.

Yash- Question 3 - So quick follow up to that. So can pure PLG motions work or do you need an enterprise sales team to assist PLG?

Jon: No, you have to have an enterprise sales team. Because see, at the end of the day, you're asking for, fifty, a hundred, a million, a thousand dollars, a million dollars a year. And people are like, oh my God, that's a lot of money - If I spend this wrong, I'm going to get fired. You need your traditional enterprise sales team to do it. What salespeople are good at, let me understand what your concerns are, let me address your objections, let me convey that you should trust my company, that we're going to be great partners for you and by the way, let me negotiate the price with you, the legal terms and all of the sort of mundane steps that's required to actually get somebody to put their career on the line and write you a big check. Okay. That being said, PLG does have the nice side effect is that you can change the balance of your sales team to have more inside salespeople than relationship salespeople. Inside sales is the guy or girl on the phone, online and saying, oh yeah, you know, I'll essentially say I'll take your order. I mean, they're doing more than that. They're answering questions about their product and such, but they're at the end of the day, I know I want this product and let's see if I can get the best deal and ask for the special term and thing like that. The relationship salesperson, the paradigm is maybe that expensive Oracle salesperson who's taking you out to dinner, taking the customer out to dinner and playing golf, and what they're really doing actually, is not bribing them with dinner or something like that, it's establishing a reputation and a level of trust. So it’s historically, “I don't know about company XYZ, but you know, Bob, the sales guy, he's smart he's paying attention when we had this problem at 2:00 AM. He answered the phone and got it fixed for us and I'm going to make my big bet on XYZ corp., because Bob, I trust Bob.” The awesome thing about PLG is that your reputation is now online as opposed to Bob, the sales guy with your reputation. So people will buy from you because you've got a great online reputation rather than because they trust the salesperson. So that means that you can have more, less expensive, but still expensive because they're salespeople, less expensive inside salespeople selling million dollar deals than you would've had to with a traditional enterprise sales motion. It's a change in the ratio as opposed to an elimination of the expensive salespeople. You still need those. There's many customers that are super conservative and they still need that salesperson there, a big bank or a big telecom or something like that. But it does allow you to rejig your salesforce to be more efficient and ultimately at the end of the day, lower cost of sales means higher gross margins, and gross margin is a big component of your evaluation. So PLG at the end of the day, drives a much higher exit valuation, be it you going public or getting acquired. And so it has those very fortuitous side effects of how you construct your sales team.

Yash - Question 4- Thanks, John. When you look at the ICP between the two companies, Auth0 and Xnor, were there any differences between building early GTM teams that I'm talking about between the two companies and also like as you look, what do you look for before hitting the gas button to scale things up?

Jon: The difference in the GTM teams were sort of reflective of the different states of the market. We had to spend a lot more effort on education at Xnor than we did at Auth0. I'd say that was maybe the key difference and there's a lot of experimentation, at Xnor, what do people really want? As opposed to Auth0 where we knew what they wanted, which was just less pain. I'm sorry, what was the second part of the question?

Yash: When do you hit the gas button?

Jon: Oh, thank you. Of course. It's when you have signs that you actually have a value proposition that people are willing to pay for in a replicable way. So with Auth0, we started with, we had the free sample and we had the PLG, a few blog posts, tweets to other developers in the open source community and it's like, Hey, try this. We think it might be helpful to you. And so we were, fairly rapidly to, get to hundreds of users, that were saying nice things about us. Ah, this is great. Most of them for free, like 98-99%. But we had more than a handful, we had dozen or more of paying users that were paying some amount - $19/month or $99/month. Those were our sort of proof points. It's awesome. We have something that people are willing to pay for. Let's double down our outreach and our messaging to the kinds of people who have already picked us up to try us for free. Many of the people trying us for free were not “trying us for free”, they were evaluating us because they wanted to use us in their enterprise. It was them doing all the work of self-qualifying. When you have those early signs, ideally through revenue or some close proxy to revenue, like lots of users, that's when it's time to time to double down and spend big on your marketing to try and accelerate this. The sort of the converse, and this is especially when you have product, I've got this enterprise infrastructure thing, and it's a million dollars a year and 12 months of professional services getting installed. You have lots of people who are willing to meet with you and talk with you and say I can see a lot of issues that this could solve and help me in my enterprise, but nobody's actually paying you yet. Until you've seen people go through that entire cycle and get budget allocated to you really don't know if you have a product market fit. I'm a big fan of figuring out what you can do sort of inexpensively. So perceived as a low, in a low risk way to get those signs that somebody's willing to pay you or no, they're not and then reacting with your product development plans based on that.

Yash - Question 5- Got it. Now moving on to getting towards the exit, right? You know, in both companies you had amazing exits. Could you kind of share some things on when and what factors led to the decision to exit?

Jon: Well, there's never a clear answer about what's the right time to exit. There's a good target to aim for every start up, every enterprise, which is going public. Whether or not you actually go public well you figure that out later, but you want to go public because it drives a couple of great behaviours within the organization. So first off, the public markets tend to pay the highest valuation of any kind of exit simply because you've got the markets are highly liquid and lots and lots of disclosures and therefore you can’t have less sophisticated investors that are like, I'll buy this cause everybody else is buying it. That tends to drive the price up. So the public market is awesome. Public market is highly regulated and you have to really be buttoned up. The hygiene that it drives – the compliance hygiene, the financial reporting hygiene, the CEO making sure the financial ratios make sense. That's really beneficial. It’s a great thing to do regardless if you're going to go public or not, because those are actually the metrics the public market values that you should value in growing your company in a healthy way. My general advice for people (start-ups) is first off, think what you need to do to go public. Start doing that because that's going to be a great thing. Then inevitably, in the life of a start up, you always have folks coming to you and saying would you be interested in acquisition? My response to that has always been, we're rational capitalists and when what you offer makes sense, we'll consider it if we don't think it makes sense because we think we they were better staying standalone then no. Because you never know when you might have an acquirer that will have an economic proposition that they understand internally and might not be publicly obvious that where they would pay a very fair amount. One of the things that sort of amazed me when I moved out of the VC world or less in the VC world. I'm still doing investments but in doing M&A at Microsoft was how often an acquisition or how much of an acquisition this decision was actually an emotional decision. In fact, very analogous to a VC - you look at the team and the market and the business but at the end of the day, I feel this is going to be good, I’m going to make the bet. Before at Microsoft acquisitions, you've got a detailed spreadsheet and it's much more buttoned up. There's certainly that as well, but a lot of it also, I think that with this technology and our distribution channels and a little bit of additional work here, it's going to be amazing. And actually acquisitions tend to fail not too far off the rate that start-ups fail. Like 70% of acquisitions don't achieve the value that was articulated as a justification for the acquisition. Well, that works in your favour as a start-up where Company XYZ comes to you and the CEO has made some public statements about how great things are going to be, and they realize that they've got big gaps in technology and they could develop it themselves, but it'll take three years but they could buy you and have it immediately. They might suddenly pay up a lot. And you don't know. You don't know until you engage with them. And so it's always worthwhile to engage. Always as an informal, let's have a discussion. You do have to be careful with potential acquirers coming in and here's my 50 page due diligence list, and can you go do this? To me the right response is this is awesome. It's great how excited you are. But you know, we're busy growing really quickly and we don't have time for a 50 page due diligence. I can't distract my team, and so why don't you make me an offer? I'll discuss it with the board and if it makes sense, we'll talk more. If it doesn't make sense, then no harm to foul. At the end, you always want to sort of have somebody tell you, I think you're worth, 50 million, I think you're worth a billion, whether you take that acquisition offer or you hold out for going public it really is a judgment call. Going public is great until suddenly you're in a terrible stock market and then going public isn't great, and suddenly you have to delay things or this acquisition looks great until, you find out that company was sort of cooking the books. I friend that that happened to, and so it's like, oh my God, they're going to pay us in in stock that actually isn't going to be worth very much in a few years. You really don't know until you get into it and make a judgment call, but you should always, as a CEO engage being very careful about the amount of time you spend, just to kind of see what is the market telling me right now. But at the end of the day, never get too excited about anybody who's coming in because there's lots of people that are out there just sort of fishing and doing market surveys and such. As they talk about acquisition

Yash - Question 6 - That's some amazing insights for our CEOs and so some rapid questions - If you were to lead an early stage company today, what would you do differently?

Jon: Lots of things I would do differently. I think one of the things that I would call out is marketing. One of the things I've observed, we were super fortunate with Auth0, with the marketing talent that we had.We made mistakes there. Marketing of the day is figuring out what messages work and what communication channels to your prospects. We neglected really powerful channels like account based marketing, we should have done more of that earlier, and we didn't really know. In our defence, we were growing really quickly without it, which is great but we could have done a lot more if we'd had the additional bandwidth to be able to run those experiments earlier than we did. So that would be one thing. Another thing, and this is sort of a general observation across the board for start-ups that I'm working with now, I've got a bunch of start-ups that have asked me to advise them and such is that it's really important when you have a complex technical product selling to the enterprise, to have a marketing leader, ideally marketing team that actually are deep domain experts. I'd say more important when you have a marketing strategy because an important part of PLG of course, is content marketing and content means that you need to be able to talk to your customers as a peer. I'll say ordinary marketing people, they don't understand the deep domain problems that a JavaScript developer has or a security professional has, and they kind of understand superficiality that they hear about from Gartner or buzzword that they discover and such. They have a lot of difficulty in communicating in an attractively authentic way. I think where I'm going is that all start-ups are a series of hopefully low cost experiments. I was really successful in actually not having a marketing person on marketing, but instead a super skilled developer who was willing to take on the marketing burden and that was great for authentic conversations. I now advise CEOs that I work with - you're selling a deeply technical product, hire developers to do marketing. You're selling to a security professional, hire an ex CSO to run marketing, have domain experts, not marketing experts, because it's always easier to learn marketing techniques that you apply your domain expertise to than try and teach a marketing person domain expertise, that takes a decade to actually really understand.

Yash - Question - 7 - John, I've seen some of the companies that you’ve been advising maybe a couple of takeaways for our entrepreneurs, especially those that are managing remote teams, with building cross-border companies, navigating also this current macroeconomic headwinds that they're facing. Maybe a couple of quick takeaways.

Jon: Sure. Let me talk in particular about having a distributed company, which I think is an awesome benefit to scaling. It allows you to scale much more quickly because it gives you access to talent based on availability worldwide as opposed to particular location. You're not stuck in one geo. The problem with building a distributed company that many people run into is the communication culture. My team in Bangalore and my team in Seattle, it's a 12 hour difference. They’re not talking as much as they should. For a distributed company to work well, you have to build a culture of communication that's modelled at the very top of the company. Ideally your executive team is distributed as well, and have the tools available so teams that are distributed can have asynchronous communication and all feel like they're clearly up to speed and there's not two guys in London who’ve white boarded it out and telling the team to just do this. I’ve found Slack to be a super helpful tool where you can have those asynchronous communications so that when the remote person, sorry the person not in your geo comes online six hours later, they can look through the conversation. To have those conversations and that disclosure of information happen on Slack, that's a cultural thing that you really have to be careful about as a leader to say, we should make sure that everybody's involved. Let's not have conversations off on their own without the contents of the conversations being disclosed through some asynchronous channel. Another thing we did culturally to help with communications, especially at Auth0, was we declared a policy of radical transparency. I had actually observed this, I'd experienced it as an employee at this first company I worked for, Convicts Computer, and it was acquired by HP to be the high end of their server line in the nineties. The CEO there would tell us, I have company meetings every month or so and tell me sort of everything - In fact, he'd say, here's the gold bucket. You know, we won this deal and projects are advancing well, and here's the shit bucket. You know, we lost this deal and we lost some people and whatever else. And I remember talking to this guy, Bob Pollack, 10, 15 years later at a conference and telling him how impressed we were, he gave the impression that he was telling us everything. He said, actually, I was telling you everything, other than like personnel issues, I'm not going to disclose PII. You’re hiring people, they're brilliant and engineers are really good at taking weak signal and drawing conclusions from weak signal, that's part of being an engineer and if you don't tell them, they're going to figure it out anyway. So you might as well tell them, one, to make it easier, because it's a hassle to keep secrets, you know? And two, People love being treated like adults. I'm telling you everything's going on. We’re moving forward as adults, people respond to love You reduce turnover, because people love a culture where they're treated as the responsible adults they are. You also have conveyed that, Hey, I'm the CEO and I'm sharing the board memo with you the day after the board meeting and we're having a company meeting so we can discuss it and you know everything that's going on. In your team, make sure that everybody knows everything that's going on. Don't try and conceal information for political purposes. That’s old and culturally frowned upon. That worked as well. I think that's something that's also an essential foundation to an effectively managed distributed team.

Yash: This is some wonderful insight, especially, as everybody understands that there are macroeconomic head headwinds, so being very transparent about where things are, communicating it in the right way, I think those are some really wonderful insights. John, thank you so much. We are so fortunate to have you with us and share your insights with us today.

Jon: Thank you so much. Of course. Well, thank you for having me. I enjoyed and I look forward to seeing what the current cohort of attendees deliver over the coming years.

Yash: Thank you John, really means a lot.

In Conversation with AMD: Building a Foundation for our AI Future

In a recent interview with Eric Buatois, General Partner at BGV, Matt Hein, Chief Strategy Officer at AMD, discussed his company’s strategy and approach to industry collaboration. The interview covered a wide range of topics, including AI infrastructure, partnerships with large and small companies, customer engagement, geopolitical concerns, licensing, manufacturing, and talent.

In a recent interview with Eric Buatois, General Partner at BGV, Matt Hein, Chief Strategy Officer at AMD, discussed his company’s strategy and approach to industry collaboration. The interview covered a wide range of topics, including AI infrastructure, partnerships with large and small companies, customer engagement, geopolitical concerns, licensing, manufacturing, and talent.

One of the main topics of discussion was the infrastructure buildout of AI, which Hein likened to the deployment of the internet and mobile infrastructure. He emphasized that AI will affect every element of the industry, including AI training and models. AMD, as a high-performance adaptive computing company, sells to the data center, PC client, and gaming markets. Hein noted that over time, there will be more emphasis on inference at the edge of the network, with AI being deployed across all elements of the ecosystem.

Matt Hein and Eric Buatois
Matt Hein and Eric Buatois at GCVI Monterey 2023

In terms of competition, AMD has successfully competed against Intel in the CPU market and is now focused on competing against NVIDIA. Hein noted that as AMD gains share in the CPU market, it will go up the stack and partner more closely with customers, such as Microsoft and Sony, on their gaming consoles. AMD is more partner-focused and less focused on full stack integration.

Hein provided examples of both large and small company partnerships. AMD has partnered with Sony and Microsoft on their gaming consoles and with hyperscalers. Samsung has also licensed AMD’s GPU for their mobile processes. On the smaller side, AMD looks for potential customers that will need to deploy a lot of its infrastructure or tie into its graphics engine. Some of these partnerships may become acquisitions, but that is not the primary strategy.

When asked about learning from customers, Hein emphasized the need for a deep level of engagement. Customers are using their own platforms and software, so AMD needs to tie into that and optimize for it. This can take months to years to align roadmaps. There is also a lot of discussion around AI being seen as a national interest, with every country building its own AI strategies.

Regarding talent, Hein noted that AMD recruits from the market and has been fortunate to perform very well. The talent pool is universally educated graduate students, including low-level software, high-level software, and business people. AMD partners closely with foundries and is highly supportive of the CHIPS Act, which aims to build out more manufacturing in the US.

Overall, Hein emphasized the importance of collaboration and partnership in the industry. He noted that advanced technology partnerships are based on fitting into the roadmap, while business unit partnerships are focused on revenue. AMD looks for companies going to market in an innovative way using its products, and while it may never buy them, it will partner with them. Hein concluded by saying that AMD’s approach is to get the partnership right, customize the approach, and move away from non-standard investment with a lot of hooks.

Celebrating Women's History Month | Arka Talks w/ Sonal Puri & Usha Amin

In this month’s episode of Arka Talks, we are celebrating Women History Month featuring Sonal Puri (CEO, Webscale) & Usha Amin (Co-founding Partner, SAHA fund) in a panel moderated by Dhanasree Molugu (Arka Alum & MBA Associate at Menlo Ventures) to hear inspiring stories of women from the frontlines of being in Tech & VC. Arka Talks is our monthly fireside chat where we feature founders, operators, VCs and corporates discussing enterprise trends in the cross-border space and explore ways to build and scale a successful cross-border startup.


Sonal Puri serves as the Chief Executive Officer of Webscale, from pre-product to recently closing $26M in growth financing. Prior to Webscale, she was the Chief Marketing Officer at Aryaka Networks and led sales, marketing and alliances for the pioneer in SaaS for global enterprise networks from pre-product to Series D. Sonal has more than 20 years of experience with internet infrastructure, across four startups, in sales, marketing, corporate and business development and channels. Previously, Sonal headed corporate strategy for the Application Acceleration business unit, and the Western US Corporate Development team at Akamai, working on partnerships, mergers and acquisitions. Sonal also ran global business operations and alliances from pre-product to Series C and exit, as well as the acquisition process for Speedera (AKAM). She has held additional key management roles in sales, marketing and IT at Inktomi, CAS and Euclid.Usha has co-founded SAHA Asset Advisor – First Venture Capital Fund registered with SEBI to Empower women entrepreneurship in Technology. Invested in 11 companies, seven active, mentoring them to excel in their sectors, scaled successfully, created employment and growth opportunities in the ecosystem and impacted gender parity. The companies have focussed not only on growing their business but also on women’s welfare by implementing policies that make it favourable for them to continue work despite domestic challenges, especially during the current situation.

Arka Spring Showcase - Jan '21

Arka spring showcase is an invite-only coming together of the best minds in the US-India cross-border startup ecosystem where we have our founders showcase their work and a set of insightful discussions on the US-India cross-border eco-system.

We started the year off with the Arka Spring Showcase where we had the best minds of the US-India cross border come and be a part of the bi-annual showcase. The session started off with a keynote by Eric Benhamou, (Founder, Benhamou Global Ventures). Eric spoke about the transformation of the global enterprise and enterprise trends to look forward to in 2021.  Arka startups did a showcase where they presented their work and spoke about their progress so far.

We then had Sanjay Nath (Co-founder, Blume Ventures), Rashmi Gopinath (General Partner, B Capital Group) & Ankur Jain (Founder, Emergent Ventures) for a riveting panel discussion on “The Emergence of Enterprise Innovation in the US-India Eco-System”.

This was followed by a great panel discussion on “Building and Scaling a cross-border startup led by Yashwanth Hemaraj (Partner, Benhamou Global Ventures) in conversation with  Rajoshi Ghosh (Co-founder, Hasura) & Shiv Agarwal (Co-founder, Arkin (acq. by VMWare).

Arka Talks w/ Gaurav Manglik | The Art and Science of Building a Global Enterprise Startup

In this Episode of Arka Talks, we had Ankur Jain (Advisor, Arka & Founder, Emergent Ventures) in conversation with Gaurav Manglik, Co-founder, CliQr (acq. by Cisco) & GP, WestWave Capital on the 24th of Feb at 9.30 PM IST/ 8 AM PT. Arka Talks is our monthly fireside chat where we feature founders, operators, VCs and corporates discussing enterprise trends in the cross-border space and explore ways to build and scale a successful cross-border startup. We’re excited to have Gaurav on Arka Talks and we will explore his journey and talk about the art and science of building a global enterprise startup. Gaurav is currently a General Partner at WestWave Capital where he focuses on investments in early-stage Enterprise B2B startups. Gaurav has been a key driver of innovation in the cloud computing Industry. He co-founded CliQr Technologies in 2010 and served as its Chief Executive Officer until it was acquired by Cisco for $260M in 2016. At Cisco, Gaurav led cloud engineering for the Cloud Platform and Solutions Group and advised on Cisco’s cloud and container strategy and related investments and acquisitions.

Watch the full episode below –

Arka Talks ft. Obviously AI | No-Code & AI - Discussing the Endless Possibilities

In this episode, we feature our Arka startup founder & INK fellow Nirman Dave from Obviously AI in a panel discussion with Rafael Ugolini – a Sr. Engineering Manager at Collibra (a leading Data Intelligence company) and an Angel Investor. Along with, Avantika Mohapatra an ex-BMW engineer who now leads the charge in No-Code ML landscape as the partnerships head at AIxDesign.

Obviously AI enables anyone to quickly build and run AI models in minutes, without writing code. Crafted for a citizen analyst, Obviously AI sits in the heart of 3,000 BI teams across the world, delivering over 82,000 models since it’s launch in Feb 2020. It is a recommended no-code AI tool by Forbes and named among the top 5 no-code AI tools by Analytics Insight. Learn more on https://obviously.ai

Planning for the Best and Worst Case Scenario during a Pandemic

Radhesh Kanumury, Managing Partner, Arka Venture Labs was in conversation with Mahesh Krishnamurti, an investor and transformational leader where he gave timely advice on how enterprise startups can plan for the best and worst-case scenarios during a pandemic.

The 3 Lean Marketing Principles of Highly Effective Startups

In this episode of Arka Talks, Elias Rubel of Mattermade talks about the 3 lean marketing principles of highly effective enterprise startups and how early-stage enterprise startups can leverage them.

Topics that were covered:

  • Lead Demand Gen: Is it smart to leverage ABM while running a hyper lean marketing program? How should we think about budgeting and channel testing in the post-covid economy? What cost-free demand gen strategy will always out preform companies with a bigger budget?
  • Growth Foundations: How do you set the right growth goals for your team? What does a best in class MarTech stack look like on a leaned out budget? What are the highest leverage growth channels that don’t cost a dime?
  • Marketing, the right way: What’s the most effective way for Sales and Marketing to partner? What’s the best way to focus your marketing efforts and messaging? How should we approach product marketing and leverage it to close more deals?