Resources
Data science startup Jovian raises $450K led by Arka Venture Labs
Founded by Aakash NS and Siddhant Ujjain in 2018, Jovian plans to use the capital to grow its engineering team and further develop the product.
Founded by Aakash NS and Siddhant Ujjain in 2018, Jovian plans to use the capital to grow its engineering team and further develop the product.
California and Bengaluru-based Jovian, a platform for data scientists to track and reproduce machine learning experiments, collaborate with teammates, and automate repetitive tasks, announced ithas raised $450,000 led by Arka Venture Labs. The other investors who participated in the round includeBetter Capital, SenseAI, Axilor Ventures, and other individual angel investors from Silicon Valley.
Commenting on the investment,Radhesh Kanumury, CEO and Managing Partner of Arka Venturessaid,
“With the proliferation of data scientists across the globe and Jovian providing DevOps capability in that area, it is a great space to be in.”
Owned and operated by SwiftAce Inc., Jovian was part of the Axilor Accelerator programme in 2018.Get connected to Revvsales
“Based on our experience and also of our portfolio firms, we had seen a huge gap for continuous collaboration between dispersed data science teams as well access to models under experimentation. We feel Swiftace is well positioned to be a true interactive platform for data scientists,” saidVinish Kathuria, Managing Partner, SenseAI.
The startup,founded by Aakash NS and Siddhant Ujjain in 2018,plans to use the funding to grow its engineering team and further develop the product. It is also looking for community development via meetups, webinars, online courses and hackathons, and customer development in India and the Silicon Valley.Get connected to Revvsales
Jovian is building the tools, workflows and collaboration stack to power the future of artificial intelligence (AI) and machine learning (ML). As per the startup, the platform is language, framework, and cloud-provider agnostic, and easy to try out, and it tracks everything (datasets, source code, hyper-parameters, trained models, etc.) in a simple yet powerful online dashboard.
16 Takeaways from the 2023 Securing AI Summit in San Francisco
While GenAI adoption is now on the fast track, widespread adoption will take time. For perspective, it took six years for 50% of US households to adopt mobile internet; we expect a comparable level of widespread AI adoption to take 3+ years. This is consistent with the halving time scales of technology adoption we’ve seen with the PC era, the desktop era, and the mobile internet era.
Generative AI: The Macro Opportunity in Harnessing AI and Automation
#1 While GenAI adoption is now on the fast track, widespread adoption will take time. For perspective, it took six years for 50% of US households to adopt mobile internet; we expect a comparable level of widespread AI adoption to take 3+ years. This is consistent with the halving time scales of technology adoption we’ve seen with the PC era, the desktop era, and the mobile internet era.
- The drivers of adoption hinge on three main factors: ROI (productivity growth versus cost), friction reduction, and budget alignment. Encouragingly, ROI and budget allocations show positive trends. Reducing friction requires developing robust data organization and privacy standards, which could limit the pace of adoption. On the other hand, digitally native employees have already begun readily embracing Generative AI and are enhancing workforce productivity.
#2 The swift uptake of GenAI introduces new cybersecurity challenges. For instance, the introduction of GenAI systems like ChatGPT amplifies the risk of phishing attacks. A shortage of cybersecurity talent compounds this challenge, elevating it to a critical concern for enterprises.
- The macro-opportunity in harnessing AI and automation, despite gradual adoption, is undeniable. In terms of opportunity size, GenAI’s cybersecurity potential represents a $34 billion Total Addressable Market (TAM), with productivity gains acting as a driving force. It is important for organizations to proactively address the implications and maintain a strong focus on AI cybersecurity.
Securing the Future: Demystifying LLMs and Threats
#3 There are three broad areas of LLM threats worth noting: prompt injection, data poisoning, and data leakage (not from the LLMs but from agents and vector databases).
- Prompt injection can be compared to confusing the model, e.g., instructing it to behave as someone else to access information it wouldn’t provide otherwise. This tactic is not new in cybersecurity. The reason it works lies in the machine’s inability to distinguish between the control plane and the data plane. In simpler terms, it can’t differentiate between system code and user input. Prompt injection can occur through various means, including images or text, and it can prompt actions by agents, making it a more potent threat (e.g., a bad actor can inject an action into an email to “delete all” of an inbox). Potential risks include damage to an entity’s brand, data losses and financial losses.
- Data poisoning involves intentionally manipulating data to undermine a model’s behavior. This can occur in different forms, depending on where the intrusion takes place within the tech stack. The two primary forms are:
- Input Poisoning (most common): Adversaries alter trusted data sources used by LLMs to skew their learning (e.g., Wikipedia, or expired domains).
- Model Editing: This form of data poisoning involves modifying LLMs to spread misinformation. For example, adversaries might tweak facts within an LLM and upload the altered model to a public repository like Hugging Face. From there, LLM builders integrate these manipulated models into their solutions, thus spreading false information to end users.
#4 Looking ahead, we anticipate that data poisoning will evolve to become more advanced, and possibly the hardest area to address with modern cybersecurity.
As LLMs increasingly incorporate diverse data sources, data poisoning becomes more sophisticated and practical, thereby expanding the potential attack surface. This evolution is fueled by an expanding array of data sources, shorter training cycles, and a shift towards smaller, specialized models. As a result, data poisoning is likely to become an increasingly prominent threat.
- Data leakage – Enterprises are understandably concerned about employees sending sensitive data to LLMs, as malicious actors could exploit this information. However, it’s important to recognize that LLMs are not data stores; they’re data generators so this threat is a bit over hyped. Extracting data from LLMs requires several prerequisites:
- Sufficient References: To extract data, there must be a substantial number of references to it, enabling memorization.
- Knowledge of the Secret: Adversaries need to possess enough knowledge about the secret or its format to generate it accurately (e.g., first 4 digits of SSN).
- Verification of Accuracy: The attacker must verify that the response is accurate and not a hallucination.
However, data leakage emerges as a deeper concern when data is extracted not from the LLM itself, but from agents or vector databases. This highlights the critical importance of access control rules to safeguard information security.
Unveiling Generative AI: Navigating Risks and Opportunities
#5 Prioritize use cases to reinforce trust in LLM deployments from the very beginning.
- A staggering number of opportunities exist across industries and enterprise functions to leverage GenAI, but use cases need to be prioritized based on expected productivity gains and anticipated risks. One approach is to prioritize use cases with lower risk profiles. This can be facilitated through a framework that considers the criticality of the desired outcome and the level of accuracy required for the task. Many internal use cases may fall into the “low-low” risk category, such as coding assistance. For riskier or externally focused cases, like underwriting in financial services, human oversight becomes essential. However, it’s vital not to discount potential high impact but higher risk initiatives and give them proper consideration.
#6 Ensure that the right guardrails are in place to navigate the risks associated with deploying GenAI technology.
- This includes compliance with regulations like GDPR, the California Consumer Privacy Act, and the EU AI Act, amongst others, and being thoughtful about how enterprises handle data and inform users about its use. Adopting the NIST AI Risk Management Framework can also play an important role if enterprises do not already have a robust framework in place. If an enterprise decides to build a private LLM, it becomes a team’s responsibility to embed ethics and moral principles that align with the organization’s values.
#7 Engage cross-functional teams on GenAI deployment because this is a team sport that will often require that different stakeholders like the CTO, CIO, CISO, product managers, HR, and legal are all involved.
- The CEO is a starting point whose buy-in of the potential gains and understanding of GenAI technology is crucial. But getting to scale and efficiency requires broad collaboration and communication. This is often more important amongst those executing the vision than those seeding it.
Armor for AI: Implementing AI Security While Enabling AI Quality
#8 The cybersecurity talent gap is wide and continues to widen. As the barrier to deploying AI applications lowers, the difficulty in properly securing these systems grows disproportionately.
- Many organizations lack the necessary resources and data to train their own models, making external sources an attractive option. Outsourcing AI models, however, often results in reduced control and limited understanding of how they function. Consequently, many organizations are adopting a “build and buy” strategy, which is traditional in cybersecurity, as opposed to the “build vs. buy” approach.
#9 There is rapidly growing demand for anyone selling solutions. In the startup space, AI Security (AISec) has been one of the strongest categories for fundraising this year.
- According to the Ethical AI Database (EAIDB), 75% of active AISec startups raised funding in 2023.
#10 Innovation in AI does not discriminate between defender and attacker. As GenAI models become stronger, they also become more dangerous tools for malicious intent.
- “ML to attack ML” (which has been around for a long time now) has already evolved into “AI to attack AI.” Whose responsibility is it to ensure that new foundational models, open-source models, etc. are not used maliciously?
Table Stakes: Exploring Guardrails for Large Language Models
#11 There is simply no way to prevent bad actors from using Gen AI for advanced phishing attacks, this is in their DNA and motivation.
#12 Having a clear LLM Data Privacy policy in place and an advanced AI roadmap with corresponding safety measures is essential.
- Enterprises can start with limited access controls, small private models, and involve humans in the loop to ensure protection against risks. However, merely formulating a policy isn’t enough; collaboration, governance and a cultural shift are required to fully implement and enforce the right measures for success.
#13 To effectively establish data privacy guardrails for LLMs, it’s crucial to approach it as a process rather than a static product.
- If an enterprise is building a private model, you can implement various guardrails at different stages, including training, fine-tuning, and prompt crafting. Numerous existing technologies can assist in this process. For instance, sensitive data can be replaced during the training stage.
#14 While defining guardrail policies for LLMs constitutes a strong first step, the point of failure often lands at enforcement. Several best practices are emerging to ensure data privacy. These include output scanning techniques such as anonymization, data minimization, differential privacy, encryption and federated learning.
The Hierarchy of Unknowns and What Do We Do Now
#15 GenAI is a technology breakthrough beyond precedent; for the first time, humans will need to reckon with the eventual existence of super-intelligence.
- This brings a hierarchy of unknowns, of questions that span all facets of human life as we know it, from the technology implications to corporate, social, legislative, political, national, environmental, philosophical and existential factors.
- As we consider the many productivity and efficiency gains from GenAI, how can humankind prevent its weaponization and institute the necessary protections to preserve humanity and human-in-the-loop controls? We are at the beginning of asking and addressing these fundamental questions.
Communities: Embracing the complexities of an AI driven Future and charting a course that is Ethical
#16 The impact of communities is accelerating:
- EAIGG– an AI-practitioner community with 1,800+ global members- released its 2023 Annual Report at the event. In this ever-evolving landscape, opinions on the opportunities and risks presented by AI are as diverse as they are passionate. It’s easy to get swayed by the cacophony of voices, each asserting its version of the truth. EAIGG has made a conscious effort not to lean too heavily in one direction. Instead, this annual report presents a collection of thought-provoking articles that aims to elevate the discourse, offering insights that can illuminate the path forward and foster a platform for meaningful dialogue.
- The Ethical AI Database– developed in partnership with the EAIGG- remains the only publicly available, vetted database of AI startups providing ethical services. The database is updated live, with market maps and reports published semiannually.
- The Markkula Center of Applied Ethics at Santa Clara University promoted its framework for ethical decision-making enterprise Boards and C level executives at the event.
Conclusion
Enterprises have begun to recognize the productivity opportunities associated with GenAI, but they must be ready to innovate without being paralyzed by fear because this is where the future is headed. Technology breakthroughs have always produced equal levels of optimism and pessimism, excitement and anxiety. A few frameworks and strategies can help enterprises navigate the path. Prioritizing the right use cases and putting guardrails in place early are crucial steps. Guardrails include protecting against threats such as prompt injection, data poisoning, data leakage as well as ensuring compliance with regulations while engaging cross-functional teams on deployment to achieve the benefits of scale and efficiency. Innovation from AISec startups will play a strong role to secure AI as enterprises may lack the resources to invest in this themselves and there are simply no ways to completely prevent bad actors from exploiting GenAI to launch advanced phishing attacks. Finally, while defining guardrail policies for LLMs represent a good first step, enforcement is often the Achilles heel, so leveraging emerging best practices around output scanning is of critical importance to ensuring data privacy, and a secure information environment for the enterprise. Perhaps the ultimate conclusion is that we are still in the early days of GenAI and we realize that more collective discussion and cross-industry collaboration are vital to ensure we are Securing AI while advancing innovation and growth.
Many thanks to our honored guests and speakers who shared their expertise and perspective with us:
- Andres Andreu, CISSP-ISSAP, QTE, 2U, Inc.
- Abhinav Raghunathan, Ethical AI Database
- Betsy Greytok, IBM
- Caleb Sima, Cloud Security Alliance
- Carolyn Crandall, Marticulate
- Hailey Buckingham, HiddenLayer
- Hamza Fodderwala, Morgan Stanley
- Ilya Katsov, Grid Dynamics
- Joe Levy, Sophos
- Katharina Koerner, Tech Diplomacy Network
- Patrick Trinkler, CYSEC SA
- Sandeep Mehta, The Hartford
- Shayak Sen, Truera
- Tobias Yergin, Major Global Retailer
- Tom Kelly, Markkula Center for Applied Ethics
Special: Harnessing the Power of Generative AI in Cybersecurity
Over the past decade, BGV and Forgepoint have been investing in innovative AI and Cybersecurity startups. Alberto Yépez (Managing Director at Forgepoint Capital) and Anik Bose (Managing Partner at BGV and Founder of the Ethical AI Governance Group (EAIGG)) share their perspectives on how cybersecurity innovation will be impacted by Generative AI. Their joint theses is that Generative AI will enhance the capabilities of malevolent actors, increase the need for guardrails, with innovation advantage going to the incumbents in the near term and to startups for the longer term.
Over the past decade, BGV and Forgepoint have been investing in innovative AI and Cybersecurity startups. Alberto Yépez (Managing Director at Forgepoint Capital) and Anik Bose (Managing Partner at BGV and Founder of the Ethical AI Governance Group (EAIGG)) share their perspectives on how cybersecurity innovation will be impacted by Generative AI. Their joint theses is that Generative AI will enhance the capabilities of malevolent actors, increase the need for guardrails, with innovation advantage going to the incumbents in the near term and to startups for the longer term.
Artificial Intelligence is currently experiencing its “Netscape” moment, propelled by the advent of potent Generative AI models such as Chat GPT. Research conducted by McKinsey estimates that generative AI could contribute an equivalent of $2.6 trillion to $4.4 trillion annually to the global economy. (To put this into perspective, the United Kingdom’s total GDP in 2021 was approximately $3.1 trillion.) According to their analysis, about 75% of the potential value generative AI use cases could deliver is concentrated in four areas: customer operations, marketing and sales, software engineering, and R&D across industries. Unsurprisingly, AI is dominating conversations across the cyber world as businesses rapidly adopt and develop AI-based technologies- and/or react to their sudden rise and accessibility. So what are the implications on AI and Cybersecurity?
AI AND GENERATIVE AI: CONTEXT AND DEFINITIONS
Let’s begin with our context. AI is hardly new despite the intense hype cycle we find ourselves within. AI was first defined as an academic discipline in the mid-1950’s and has since gone through its own boom and busts – periods of intense interest (and funding) followed by “AI winters” and so on. Before the advent of Generative AI, our understanding of AI’s impact on cybersecurity was twofold. First, we recognized the application of AI for protection and detection, either as part of new solutions or as a means to bolster more conventional countermeasures. Second, we acknowledged the necessity to secure AI itself- both as a protective technology and as a tool used by threat actors to develop new attack vectors. Use cases varied from Transaction Fraud Detection, Botnet detection, File-based Malware detection, Network risk assessment, Vulnerability remediation, user authentication, endpoint protection (XDR), and spam filtering.
Today, with the release of several Generative AI platforms, we anticipate the Cybersecurity sector to be profoundly impacted in additional ways including:
- Amplifying the capabilities of malevolent actors through attack vectors such as evasion, extraction, and enumeration attacks.
- Bridging the cyber skills gap with powerful AI assistants, to boost the productivity of enterprise cyber teams. These include those launched by incumbents like Crowdstrike and Microsoft.
- Elevating compliance guardrails around data privacy and output data verification to ensure responsible AI deployment.
Before delving deeper, it’s essential to clarify a few key definitions:
- AGI (Artificial General Intelligence): AGI refers to highly autonomous systems that can outperform humans at most economically valuable work. AGI encompasses general intelligence and is capable of understanding, learning, and applying knowledge across a wide range of tasks. The goal is to replicate human-level intelligence, with the potential to exhibit self-awareness and consciousness. Our hypothesis is that Threat Intelligence Platforms (TIP) will shift towards GPT-like chats as a more effective information source for users, either as auto prompts and API feeds based on detection Indicators of Compromise (IOCs), or interactive for R&D, similar to how Microsoft Copilot is used forapp development, Security, and M365, and GitHub Copilot is used for programming.
- GPT (Generative Pre-trained Transformer): GPT is a specific type of AI model developed by OpenAI (for clarity, the popular ChatGPT is an AI chatbot app powered by GPT, similar to how a Lenovo or Dell laptop might be powered by Intel). Models such as GPT-3 and GPT-4 are designed for language generation tasks. They are pre-trained on large volumes of text data and can generate human-like responses given a prompt. These models excel at tasks like natural language understanding, text completion, and language translation. Our hypothesis is that AGI will improve interpretive systems (SOAR and Anti-Fraud) as Large Language Models (LLMs) and Small Language Models (SLMs) are harnessed for their most suitable functions.
NEW ATTACK VECTORS: ENHANCING THE CAPABILITIES OF MALEVOLENT ACTORS
Generative AI is a double-edged sword. While it holds immense potential for improving cybersecurity defenses, it also amplifies the capabilities of malevolent actors. By exploiting the capabilities of sophisticated AI models, attackers can devise new attack vectors that traditional security measures may struggle to counter:
- Evasion Attacks: In evasion attacks, the adversary uses generative AI to create inputs that are designed to be misclassified by AI-based detection systems. For example, they could manipulate malware so it appears benign to the security system, thereby evading detection. Generative AI, with its ability to understand and generate data patterns, can significantly improve the success rate of these evasion attempts.
- Extraction Attacks: Extraction attacks refer to scenarios where an adversary trains a model to extract sensitive information from a system, leading to potential data breaches. The advent of Generative AI means that attackers can train models to mimic the behavior of legitimate users or systems, thus tricking security measures and gaining unauthorized access.
- Enumeration Attacks: Enumeration attacks involve using generative AI to discover system vulnerabilities. Hackers can automate the process of testing different attack vectors, rapidly identifying weak points in a system that they can then exploit.
- Influence Attacks on Classifiers: Influence campaigns have been demonstrated in social media and securities/commodities trading systems’ reliance on AI repeatedly over the past decade or more – including election cycle and quarantine era mis/disinformation as well as the manipulation of market pricing and performance news. As generative AI is used for more specific, yet broader contexts and concepts in organizational functions, those same techniques will be exercised to exploit the dependencies on knowledge offered to organizations and consumers.
- Poisoning Attacks on Data: One simple example is in Copilot and generative AI code samples that hallucinate functions or resources that hackers may take advantage of to create malicious resources that are subsequently called by that code. This vulnerability requires code validation and testing before production release, which is generally a common activity in modern CI/CD development. This means that even development systems can be compromised and offer back doors for more nefarious software supply chain compromises, especially cine those development systems are rarely subject to network isolation or security controls levied on production systems.
As Generative AI continues to evolve, we anticipate an increase in these types of sophisticated attacks. Therefore, it is imperative for both incumbent and startup entities in the cybersecurity sector to remain vigilant and proactive, developing countermeasures that anticipate these new forms of threats.
While this may seem daunting, we believe it is also an opportunity for cybersecurity innovation. The challenges posed by generative AI-powered cyberattacks necessitate novel solutions, opening new frontiers in the cyber defense sector. Our discussions with key industry players reveal a robust willingness and preparedness to address these concerns.
BROAD YET PRECISE: GENERATIVE AI’S IMPACT ON CYBERSECURITY INNOVATION
Generative AI has significant potential to influence cybersecurity innovation, both in established companies (incumbents) and startups. Here’s how generative AI is shaping cybersecurity:
- Anomaly Detection and Analysis: Generative AI models, trained on substantial datasets of known malware and cyber threats, can identify patterns and generate new threat signatures. This aids real-time threat detection and analysis, empowering security systems to proactively identify and respond to emerging threats. Generative AI models are used to detect adversarial attacks, where bad actors attempt to manipulate or deceive AI systems.
- Security Testing and Vulnerability Assessment: Generative AI can automate security testing by generating and executing various attack scenarios to identify vulnerabilities in software, networks, or systems.
- Password and Credential Security: Startups are using generative AI to develop password and credential security solutions.
- Malware Generation and Defense: Generative AI can be employed to generate new malware samples for research purposes and to strengthen antivirus and anti-malware systems.
- Security Operations Automation: Generative AI models can automate routine security operations while augmenting SOC analyst productivity.
THE NEED FOR GUARDRAILS: THE GENERATIVE AI ACCURACY PROBLEM
Generative AI has its limitations- primarily around consistently providing accurate outputs. Therefore, what guardrails are needed to reduce risks and ensure success with broader adoption? Generative AI tools like ChatGPT can augment subject matter experts by automating repetitive tasks. However, they are unlikely to displace experts entirely in B2B use cases due to AI’s lack of domain-specific contextual knowledge and the need for trust and verification of underlying data sets. Broader adoption of Generative AI will stimulate an increased demand for authenticated, verifiable data, free of AI hallucinations. This appetite will spur advancements in data integrity and verification solutions, alongside a number of other ethical AI issues such as privacy, fairness, and governance innovations. Boards of Directors now more vocally demand the responsible use of AI to improve operational efficiency, customer satisfaction and innovation, while safeguarding customer, employee and supplier data and protecting intellectual property assets.
ON NEAR-TERM INNOVATION: INCUMBENTS’ EDGE
Incumbents carry the advantage of pre-existing infrastructure, high-compute resources, and access to substantial datasets. Consequently, we anticipate a surge of innovation from these entities in the near term. Industry stalwarts such as Crowdstrike, Palo Alto Networks, Microsoft, Google, IBM and Oracle are already harnessing Generative AI to bolster their security solutions. Here’s an exploration of their endeavors:
Crowdstrike:
- Threat Detection and Response: Crowdstrike employs generative AI to detect and respond to advanced threats in real-time. Their AI-integrated platform, Falcon, scrutinizes large amounts of data to discern patterns and threat indicators, enabling swift detection and response to cyber threats.
- Adversarial Attack Detection: Utilizing generative AI models, Crowdstrike can detect and counter adversarial attacks like fileless malware and ransomware. Their AI algorithms are capable of pinpointing suspicious behavior, anomalies, and threat indicators.
- AI-Driven Security Analytics: By leveraging generative AI, Crowdstrike enhances its security analytics capabilities, thereby enabling the identification of intricate attack patterns, threat prediction, and the generation of actionable insights for security teams.
Palo Alto Networks:
- Threat Intelligence and Automation: The company integrates generative AI into their security platform, Cortex XSOAR, automating threat intelligence and incident response processes. Their AI algorithms sift through extensive threat data, equipping security teams with actionable insights and automated playbooks for efficient threat response.
- Malware Analysis: Generative AI models power advanced malware analysis. This helps companies understand emerging threats, devise effective countermeasures, and fortify cybersecurity solutions.
- Behavioral Analytics: Generative AI aids in developing behavioral analytics models that learn standard user, device, and network behaviors to detect anomalies and potential security breaches.
- Security Policy Optimization: By using generative AI, Palo Alto Networks optimizes security policies through the analysis of network traffic patterns, user behavior, and threat intelligence data, dynamically adjusting security policies for robust protection against emerging threats.
Microsoft
- SOC Automation: MS’s Security Copilot is a large language AI model powered by OpenAI’s GPT-4, combined with a Microsoft security-specific model that incorporates what Microsoft describes as a growing set of security-specific skills informed by its global threat intelligence and vast signals volume. Security Copilot integrates with the Microsoft Security products portfolio, which means that it offers the most value to those with a significant investment in the Microsoft security portfolio.
- Human-in-the-Loop Augmentation – While Security Copilot calls upon its existing security skills to respond, it also learns new skills thanks to the learning system with which the security-specific model has been equipped. Users can save prompts into a “Promptbook,” a set of steps or automations that users have developed. This introduction is likely to be resonant and disruptive because of the human aspect that remains — and will remain — so vital to security operations. The ability of large language AI models to comb through vast amounts of information and present it conversationally addresses one of the primary use cases of automation in SecOps: gathering the context of incidents and events to help analysts triage and escalate those that pose a significant threat.
Google:
- Vulnerability and Malware Detection: Google announced the release of Cloud Security AI Workbench powered by a specialized “security” AI language model called Sec-PaLM. An offshoot of Google’s PaLMmodel, Sec-PaLM is “fine-tuned for security use cases,” Google says — incorporating security intelligence such as research on software vulnerabilities, malware, threat indicators and behavioral threat actor profiles.
- Threat Intelligence: Cloud Security AI Workbench also spans a range of new AI-powered tools, like Mandiant’s Threat Intelligence AI, which will leverage Sec-PaLM to find, summarize and act on security threats. VirusTotal, another Google property, will use Sec-PaLM to help subscribers analyze and explain the behavior of malicious scripts.
IBM:
- Threat Detection and Response: IBM’s QRadar Suite is a subscription-based (SaaS) offering that combines AI-enhanced versions of IBM’s existing threat detection and response solutions into a comprehensive global product. The new QRadar Suite goes beyond traditional security information and event management (SIEM) capabilities, aiming to provide a unified experience for security management. Its goal is to assist organizations in managing extended detection and response (EDR/XDR) capabilities, SIEM functionalities, and Security Orchestration Automation and Response (SOAR) in cybersecurity.
- Security Compliance: IBM’s approach to security and compliance in highly regulated industries, such as financial services, emphasizes the importance of continuous compliance within a cloud environment. By integrating the Security and Compliance Center, organizations can minimize the risks associated with historically challenging and manual compliance processes. The solution enables the integration of daily, automatic compliance checks into the development lifecycle, ensuring adherence to industry standards and protecting customer and application data.
Oracle, SAP, Salesforce and other enterprise application providers are beginning to provide comprehensive AI service portfolios integrating their cloud applications and their existing AI infrastructure with state-of-the-art generative innovations. Their unique approach and differentiation means their customers will have complete control and ownership of their own data inside their “wall gardens” to derive insights and avoid data loss and contamination.
The incumbents not only have the company and customer install base and diverse platform to develop, test, and secure the safe and productive use of Generative AI / AI in general – but also having their own first party security products (Google’s Mandiant and Microsoft Security/Sentinel along with IBM’s Q Labs and Resilient acquisitions) that are using generative AI to power automated threat intel and security…while needing to retain human in the loop decision-making throughout the SDLC (and modern SOCs).
LONGER TERM INNOVATION: ADVANTAGE STARTUPS
Startups offer innovative, agile solutions in the realm of generative AI for cybersecurity. However, the investment climate for generative AI-driven cyber solutions is still nascent, given the limited number of attacks witnessed to date involving the AI attack surface.
The pivotal role of data cannot be overstated. For startups to flourish, they must leverage open-source LLMs while enriching data with proprietary information. We anticipate that synthetic data innovation and Robotic Process Automation (RPA) will play crucial roles, especially in regulated sectors like financial services and healthcare that have unique data privacy requirements. However, synthetic data is not expected to significantly influence decision support automation, such as privileged access management.
Another key area for startup innovation exists around Verification and Testing, driven by mounting enterprise demand to harness Large Language Models (LLMs). Other noteworthy areas of opportunity include Explainability, ModelOps, Data Privacy for Generative AI applications, Adversarial AI/Data Poisoning, Autonomous Security Operations Centers (SOCs), Differential Data Privacy, and Fraud Detection.
Capital efficient startups will need to utilize existing infrastructure (foundational models) and concentrate on applications that add value through Single Language Models (SLM) via contextual data enrichment. Acquiring proprietary datasets may also be a strategic move for startups aiming to establish a competitive edge.
Furthermore, we posit that the compliance and regulatory environment shaped by the EU Act will direct startup innovation toward responsible AI and Governance, Risk Management, and Compliance (GRC). Notably, the founder DNA in this space will require a unique blend of cybersecurity domain expertise paired with generative AI technical prowess.
IN CONCLUSION
We anticipate strong innovation at the intersection of Cybersecurity and Generative AI, fueled by incumbents in the near term and startups in the long term. Automating repetitive tasks with Security Co-pilots will go a long way towards addressing the cyber skills gap, while newfound protection and defense capabilities enabled by Generative AI will help secure large enterprise datasets and enable more effective identity orchestration to prevent breaches amid expanding attack surfaces. Morgan Stanley predicts that Cybersecurity is ripe for AI automation representing a $30Bn market opportunity. The bar on compliance guardrails will be raised in this space given the ethical concerns around the accuracy of Generative AI outputs (hallucinations), increasing the need for human-in-the-loop, regulations and raising the stakes to build an “ethics stack” to complement and safeguard the explosive AI technology stack. Finally, enterprise CTA’s (committees of technology and architecture) will increasingly need to embrace responsible application of Generative AI to succeed and compete.
Board of Directors will play an important role to demand good governance and the use of responsible AI, while protecting the key information assets of every business.
In Conversation with AMD: Building a Foundation for our AI Future
In a recent interview with Eric Buatois, General Partner at BGV, Matt Hein, Chief Strategy Officer at AMD, discussed his company’s strategy and approach to industry collaboration. The interview covered a wide range of topics, including AI infrastructure, partnerships with large and small companies, customer engagement, geopolitical concerns, licensing, manufacturing, and talent.
In a recent interview with Eric Buatois, General Partner at BGV, Matt Hein, Chief Strategy Officer at AMD, discussed his company’s strategy and approach to industry collaboration. The interview covered a wide range of topics, including AI infrastructure, partnerships with large and small companies, customer engagement, geopolitical concerns, licensing, manufacturing, and talent.
One of the main topics of discussion was the infrastructure buildout of AI, which Hein likened to the deployment of the internet and mobile infrastructure. He emphasized that AI will affect every element of the industry, including AI training and models. AMD, as a high-performance adaptive computing company, sells to the data center, PC client, and gaming markets. Hein noted that over time, there will be more emphasis on inference at the edge of the network, with AI being deployed across all elements of the ecosystem.
In terms of competition, AMD has successfully competed against Intel in the CPU market and is now focused on competing against NVIDIA. Hein noted that as AMD gains share in the CPU market, it will go up the stack and partner more closely with customers, such as Microsoft and Sony, on their gaming consoles. AMD is more partner-focused and less focused on full stack integration.
Hein provided examples of both large and small company partnerships. AMD has partnered with Sony and Microsoft on their gaming consoles and with hyperscalers. Samsung has also licensed AMD’s GPU for their mobile processes. On the smaller side, AMD looks for potential customers that will need to deploy a lot of its infrastructure or tie into its graphics engine. Some of these partnerships may become acquisitions, but that is not the primary strategy.
When asked about learning from customers, Hein emphasized the need for a deep level of engagement. Customers are using their own platforms and software, so AMD needs to tie into that and optimize for it. This can take months to years to align roadmaps. There is also a lot of discussion around AI being seen as a national interest, with every country building its own AI strategies.
Regarding talent, Hein noted that AMD recruits from the market and has been fortunate to perform very well. The talent pool is universally educated graduate students, including low-level software, high-level software, and business people. AMD partners closely with foundries and is highly supportive of the CHIPS Act, which aims to build out more manufacturing in the US.
Overall, Hein emphasized the importance of collaboration and partnership in the industry. He noted that advanced technology partnerships are based on fitting into the roadmap, while business unit partnerships are focused on revenue. AMD looks for companies going to market in an innovative way using its products, and while it may never buy them, it will partner with them. Hein concluded by saying that AMD’s approach is to get the partnership right, customize the approach, and move away from non-standard investment with a lot of hooks.
Generative AI – A Contrarian Perspective
The release of ChatGPT has catalyzed a literal “Generative AI” storm in the tech industry. Microsoft’s investment in OpenAI has made headlines as a potential challenger to Google’s monopoly in search. According to Pitchbook, VCs have increased investment in Generative AI by 425% since 2020 to $2.1bn. “Tech-Twitter” blew up and even mainstream BBC reported on it. Now ChatGPT is too full to process queries most of the time.
The release of ChatGPT has catalyzed a literal “Generative AI” storm in the tech industry. Microsoft’s investment in OpenAI has made headlines as a potential challenger to Google’s monopoly in search. According to Pitchbook, VCs have increased investment in Generative AI by 425% since 2020 to $2.1bn. “Tech-Twitter” blew up and even mainstream BBC reported on it. Now ChatGPT is too full to process queries most of the time.
As a General Partner at BGV, Human-centric AI is a foundational core of our vc investment thesis, and we have dug deeply into the challenges of building B2B AI businesses in our portfolio as well as how disruptive venture scale businesses can be built around Generative AI.
As a founder of the Ethical AI Governance Group (EAIGG), I share a mutual passion with fellow members around best practice sharing for the responsible development and deployment of AI in our industry. I decided to put ChatGPT to the test on the topic of human centric AI by asking two simple questions: a) What are the promises and perils of Human AI?; b) What are important innovations in Ethical AI? I contrasted ChatGPT’s responses with what we have learned from subject matter experts on the same set of questions (Erik Brynjolffson in Daedelus (2022) titled the Turing Trap on the promises and perils of AI, see link here) and (Abhinav Ragunathan published a market map of Human Centric Ethical AI startups (EAIDB) in the EAIGG annual report (2022). See link here).This analysis combined with BGV investment thesis work around enterprise 4.0 and ethical AI governance led me to several conclusions that are contrary and more nuanced than what the generic media narrative about ChatGPT and Generative AI in general suggests:
- ChatGPT represents a tremendous area of innovation but it will not replace Google search engine overnight:
- The answers are generic, lack depth and are sometimes wrong. Before trusting ChatGPT responses implicitly, users will need to confirm the integrity and veracity of the information sources. Already, StackOverflow has banned ChatGPT, saying: “Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to our site and to users who are looking for correct answers.” Given the effort required to verify responses, ChatGPT’s chatbot is not prepared to penetrate enterprise (B2B) use cases. As unverified and inaccurate data sets increasingly proliferate, ChatGPTs “garbage in equals garbage out” output will quickly expose the shaky data foundation on which its answers rely
- Putting aside the matter of accuracy, there is also the question of sustainable business models. While ChatGPT is free today, running GPUs is expensive, so profitably competing at scale with Google search engine is a tall order.
- Google will not stand still, they have already launched Bard with machine and imitation learning to “outscore” ChatGPT on conversational services. We’ve only seen the opening act in a much larger showdown.
- Generative AI tools like ChatGPT can augment subject matter experts by automating repetitive tasks to provide a baseline but will never displace them (especially in B2B use cases) because of the lack of domain specific contextual knowledge as well as the need for trust and verification of underlying data sets. For this reason, broader adoption of ChatGPT will spur increased demand for authenticated, verifiable, ground truth, fueling tailwinds for data integrity and verification solutions, alongside a whole host of ethical AI issues, like privacy, fairness and governance innovations. Ultimately, human-centric AI will emerge as the bridge to Human-Like Artificial Intelligence (HLAI).
- Generative AI will go well beyond chatbots to cover use cases like creating content, writing code (automation scripts) & debugging, generating AI art and managing and manipulating data BUT many of the first wave of generative AI startups will fail to build profitable venture scale B2B businesses unless they explicitly address a few challenges:
- Address inherent trust and verification issues
- Lack of defensible moats
- Unsustainable business models given the high costs of running GPU’s
- Wearing my vc hat of an enterprise tech investor, winning Generative AI B2B startups will fall into several categories:
- Applications – That integrated generative AI models into a user facing sticky productivity apps. Using foundation models or proprietary models as a base to build on (verticals like media, gaming, design, copywriting etc OR key functions like DevOps, marketing, customer support etc). Models – To power the applications highlighted above verticalized models will be needed. Leveraging foundation models, using open-source checkpoints can yield productivity and a quicker path to monetization but may lack defensibility.
- Infrastructure – Innovations to cost effectively run training and inference workloads for generative AI models by breaking the GPU cost curve. We will also see AI Governance solutions to address the unintended consequences of “disinformation” that will be created by broader adoption of tools like ChatGPT, as well as a wide range of ethical issues.
Today it is unclear where in the stack most of the value will accrue – Infrastructure, models, or apps. Currently, the infrastructure providers (like NVIDIA) are the biggest benefactors of OpenAI. It is also unclear where startups can break the oligopoly of the infrastructure incumbents like Google, AWS and Microsoft who touch everything. See a16Z article that does a good job expanding on this here. In the shorter term, BGV believes that most of the winning startups will be in the application layer – which make use the of the democratization of generative AI, but take it to everyday workflows by using intelligent workflow automation and leveraging proprietary verticalized data sets to provide most value to the end users in terms of productivity improvements, agility, resiliency and cost savings. There will also be opportunities for startups to innovate at the hardware layer – to break the GPU cost curve but these are likely to be more capital intensive investments.
What is increasingly clear, however, is that the surge in Generative AI interest will quickly prompt demands to prioritize human values, guardrails and ethical AI governance norms. As Abhinav Ragunathan writes, ”the modern “tech stack” will quickly evolve into an “ethics stack” in which the flow of data moves through companies [that ensure these constraints are in place].” Moreover, the subsequent innovation prompted by the Generative AI “boom” — is poised to follow a curve similar to “ethics predecessors” like that of cybersecurity in the late-2000s and privacy in the late-2010s.
Chat GPT Queries vs Thought Leader Perspectives
Query #1. What are the promises and perils of human AI?
Chat GPT Response
The promises and perils of human-AI refer to the potential benefits and risks associated with the integration of artificial intelligence into human societies.
Promises of human-AI include:
- Increased efficiency and productivity: AI can automate repetitive tasks, freeing up time for humans to focus on more complex and creative work.
- Improved decision-making: AI can analyze vast amounts of data and provide insights that may lead to better decision-making.
- Enhanced quality of life: AI can assist with tasks such as diagnosing medical conditions and providing personalized healthcare, making life easier and more convenient for people.
Perils of human-AI include:
- Job displacement: As AI becomes increasingly capable of performing tasks traditionally done by humans, there may be a risk of job loss and unemployment.
- Bias and discrimination: AI systems can perpetuate existing biases and discrimination if they are trained on biased data.
- Privacy concerns: AI systems can collect and store large amounts of personal data, raising concerns about privacy and security.
- Loss of human control: As AI systems become more autonomous, there is a risk that humans may lose control over their actions and decisions, leading to unintended consequences.
In conclusion, the integration of AI into human societies holds both promises and perils, and it’s important for society to carefully consider the potential impacts and take steps to minimize the risks.
Thought Leader Perspective
Erik Brynjolffson (Stanford HAI) authored an insightful thought piece in Daedelus (2022) titled the Turing Trap on the promises and perils of AI, see link here.
Key takeaways summarized by Anik from the thought piece are:
- The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly a better understanding of our own minds. But not all types of AI are human-like–in fact, many of the most powerful systems are very different from humans–and an excessive focus on developing and deploying HLAI can lead us into a trap.
- The breakthroughs in robotics depend not merely on more dexterous mechanical hands and legs, and more perceptive synthetic eyes and ears, but also on increasingly human-like artificial intelligence (HLAI). – Powerful AI systems are crossing key thresholds: matching humans in a growing number of fundamental tasks such as image recognition and speech recognition, with applications from autonomous vehicles and medical diagnosis to inventory management and product recommendations.
- Automation increases productivity. Moreover, there are many tasks that are dangerous, dull, or dirty, and those are often the first to be automated. As more tasks are automated, a fully automated economy could, in principle, be structured to redistribute the benefits from production widely, even to those people who are no longer strictly necessary for value creation.
- Government policy could do more to steer the economy clear of the Turing Trap. The growing use of AI, even if only for complementing workers, and the further reinvention of organizations around this new general-purpose technology imply a great need for worker training or retraining. In fact, for each dollar spent on machine learning technology, companies may need to spend nine dollars on intangible human capital.
- The risks of the Turing Trap are increased not by just one group in our society, but by the misaligned incentives of technologists, businesspeople, and policy-makers. The future is not preordained. We control the extent to which AI either expands human opportunity through augmentation or replaces humans through automation.
Query #2: What are important innovations in Ethical AI?
Chat GPT Response
There have been several important innovations in ethical AI, including:
- Fairness: Techniques to ensure that AI systems are fair and unbiased, such as algorithmic auditing and debiasing.
- Explainability: Methods to make AI systems more transparent and explainable, such as interpretability and model explanation techniques.
- Privacy: Tools to protect the privacy of individuals, such as differential privacy and federated learning.
- Accountability: Frameworks for holding developers and users of AI systems accountable for their actions, such as responsible AI governance.
- Human-centered design: Incorporating the values and needs of the people who will interact with AI systems into the design process, such as user-centered design and participatory design.
Thought Leader Perspective
Abhinav Ragunathan (EAIGG community member) published a market map of Human Centric Ethical AI startups (EAIDB) in the EAIGG annual report. See link here.
Key takeaways summarized by Anik were:
- Demand for ethical AI has skyrocketed, and public discussion has internalized the challenge, and is moving towards innovative solutions. A nascent, sometimes opaque, ecosystem of ethical AI startups has emerged to drive solutions to ethical challenges.
- They are organized across 5 broad category areas:
- Data for AI — any treatment or manipulation done to data sets, including generating synthetic data, done to preserve data privacy
- ModelOps, Monitoring & Explainability — specific tools that assist in the governance and lifecycle management of production machine learning. This includes detecting bias and offering monitoring and explainability
- AI Audits and GRC — consulting firms or platforms that help establish accountability, governance and/or business risk/compliance
- Targeted AI Solutions — companies solving ethical AI issues for a particular niche or vertical, i.e. insuretech, fintech, healthtech, etc5. Open-Sourced Solutions — fully open-source solutions meant to provide easy access to ethical technologies and responsible AI.
- As the space grows, we anticipate the following key trends:
- Consulting firms may decline in popularity and MLOps / GRC platforms may rise due to the ability to programmatically enforce compliance given concrete metrics.
- Incumbents will start incorporating less effective versions of bias-related technology in an effort to keep their platforms viable in this new world of bias-conscious policy. They lack the specialty, expertise, and first-mover advantage of these startups but have a well-established client base to draw from.
- The modern “tech stack” will quickly evolve into an “ethics stack” in which the flow of data moves through companies in the aforementioned categories. For example, a company might employ Gretel for data privacy and synthetic data, Arthur for MLOps management and bias detection, then Saidot for model versioning and governance.
- The “boom” for ethical AI is estimated to be somewhere from the mid- to late-2020s and will follow a curve similar to “ethics predecessors” like that of cybersecurity in the late-2000s and privacy in the late-2010s.
Scaling With System Integrators
System integrators (SI’s) are the handmaidens between innovative startups and large enterprises. Both parties benefit from the relationship, as does the SI itself.
Introduction
System integrators (SI’s) are the handmaidens between innovative startups and large enterprises. Both parties benefit from the relationship, as does the SI itself.
In my two decades-long business experience at Capgemini, however, I have noticed that startups do not always understand the role of an SI, how they can use these firms to their advantage, or how they must evaluate the pros and cons of the relationship.
System integrators include companies like Accenture, Deloitte, Infosys, TCS or Capgemini that are hired by large corporations to integrate component subsystems that drive automation, further digital transformation, and spur innovation across business platforms.
Startups or scale-ups must often work with SI’s to target an enterprise with their novel software solution. This can be a struggle, however, for solution providers that don’t have much experience working with SI’s or understand their relationship with the enterprise.
Understanding Value Against the Right Backdrop
Startups often offer a single-point solution, which can be of limited value if it cannot be integrated into an enterprise company’s broader IT infrastructure. Most enterprises rely on some level of outsourcing, and often depend on third parties to provide expertise in managing their information. This is precisely where a system integrator creates its value: helping the enterprise conduct an impact assessment to review how and where a given startup solution might be useful in the enterprise tech stack. Startups can benefit by doing some homework here as input to the assessment, which will help it to establish a clearer positioning in the context of the enterprise’s architecture.
Since enterprises often lack the in-house staffing required to fully explore the information landscape, a system integrator is called upon to manage IT infrastructure, conduct a digital transformation project, and/or to identify automation solutions that will keep the enterprise up-to-date. As a result, when an enterprise evaluates a solution offered by a startup, the company will often ask a system integrator for its guidance.
It’s of paramount importance, therefore, to understand the interests of the SI in this process and how these entities charge for their services. SI’s are fundamentally about deploying people and the SI will also assess the impact of the startup solution on its own economics.
SI contracts are typically structured in the following ways:
- Service Level Agreement: If the SI is hired by way of an SLA, typically for ongoing managed services, the startup’s economics are aligned with that of the enterprise. Under this arrangement, the savings that a startup solution generates either funnels to the SI or is shared with the client. This alignment bodes well for the startup.
- Fixed Price contracts, typically utilized for software development, are also fairly well aligned with the startup, although the SI will need to find ways to improve margin and improve quality of delivery. In these arrangements, the ability to scale is key.
- Time and Material contracts, typically utilized for “resource provider” engagements, are those in which the SI is paid for the people it deploys to the enterprise – logged in time and material. This scenario is more complex for the startup vendor, as the economics may not align, and could even oppose the SI, which would lose billable hours with the enterprise if the startup solution is adopted by the enterprise. To make matters worse, many SI’s contract on this basis.
Enterprises will usually reveal the structure of their contracts with the SI. It is, therefore, incumbent on the startup vendor to understand the economic incentives of the SI, and work with that firm to discover how their input will improve the business case for the client (the enterprise) and benefit to the SI.
The Utilization Consideration
Utilization is a key metric for a system integrator.
Once a startup or scaleup has identified how its solution aligns with the business case of the system integrator, it must begin to explore the marginal value it offers. In other words, to successfully recruit the SI as a champion in the sales process, the startup vendor must demonstrate that the value it creates is worth the investment of the SI.
Consider that top system integrators deploy most of their staff on ongoing projects, and don’t have a deep bench of idle labor waiting for the next client. Therefore, if an SI is to expend time and energy to learn a new solution, evaluate it, stress test it, deploy it, and then train its team (and the enterprise) on this solution, the value must be clear.
If the SI doesn’t see the potential to deploy the solution broadly, and likely across a large base of its clients, then it could pose a challenge for the SI to justify the business expense of investing in that particular solution. The onboarding costs are rarely covered by one client.
The Pressure to Innovate
Another major driver for SI’s is the pressure to innovate. For an SI to justify its contract with the enterprise, it must demonstrate it has its finger on the pulse of the innovation landscape, and is equipped to deploy cutting edge technology solutions amongst at least some portion of its client base.
Customers are increasingly savvy, so SI’s often feel pressure to offer new services and innovate whenever and wherever possible. For example, when the trend of Robotic Process Automation (RPA) surged, SI’s felt a keen pressure to learn and deploy these solutions, if only to burnish their innovation credentials and capabilities.
When a trend is peaking and appears destined to win the future, this pressure toward innovation can override an SI’s direct economic interest. In rare instances, an SI might even adopt a solution and scale it up across its broader service offerings if only to demonstrate the firm’s forward thinking and bend toward innovative practices.
TCS, for example, drives billions of dollars of revenue each year from startups. They occupy one end of the spectrum, in terms of appetite for innovation. Others tend to follow the innovation curve, and only adopt new solutions if they’re offered by unicorn startups, or if they feel pressured or forced into action. Startups should research the SI to understand their appetite for innovation.
The Cemetery of Proof of Concept: Bear the End in Mind
SI’s may sometimes leverage their enterprise clients‘ appetite for innovation, and do so in less constructive ways for startups.
Large SI’s enjoy a certain level of overhead from their enterprise contracts, who must justify their salary and value to their enterprise clients. After all, if an SI is hired to point their clients to innovative solutions and vendors, then a Proof of Concept (POC) with a startup vendor becomes an easy way to show an activity and justify a salary – even if the firm has no intention of ever truly adopting it. The cemetery of proof of concept is a classic (and very real) challenge for startups both with enterprises and SI’s.
Startups offering the POC must understand what it takes to move the concept into production, where it can ultimately generate revenue. Otherwise, a POC exercise can result in a costly showcase experiment with little prospect of graduating into a revenue-generating arrangement. When targeting a solution for an enterprise client, startups must identify which entity has requested the POC. It’s important to know whether the person on the enterprise or the SI side has the power to advance the POC to the next step.
Since POC’s are costly for startups to create, the founders must carefully evaluate the risk and reward of pursuing such an option. In certain instances, even if it’s clear to a startup that a POC has almost no chance of moving into production, but it may be worth pursuing it a) to gain exposure to the enterprise; b) to use it as a proof point with other potential clients; or c) simply to learn more about a corporation’s business processes and onboarding funnel.
If, however, the POC is conducted simply to serve the SI’s innovation agenda, the startup may choose to charge the SI for serving as a consulting specialist. In that case, it’s important for the startup to be upfront with the SI, and vice versa. A startup must bear the end in mind and monitor the risk before entering into this kind of engagement.
Knowing When to Engage
SI’s tend to favor startup solutions that have an established Product Market Fit. Therefore, these firms prefer to engage with startup/scale-up vendors that have raised enough financing and have the ability to scale their solution broadly.
If the startup is less mature, it would be wise to target the SI’s innovation program. That path will allow the startup to demonstrate the viability of its solution, showcase its architecture to targeted audiences, and provide a general exposure for the business on roadshows, podcasts, and conferences.
The economics of joining an innovation program, however, may not be particularly compelling since they are not designed to create instant business outcomes. If the startup chooses to engage the SI at this stage, therefore, it may choose to work with the SI as a paid consultant, as previously discussed. For their part, SI’s tend to engage with startups with which they can see a scaling opportunity over the next 12 months.
Never Stop Selling
Oftentimes, startup vendors underestimate the time it takes to secure a contract with an SI. In fact, making the sale to an SI can be as long and laborious as dealing directly with the large enterprise. It takes time to build awareness, to entice a business unit within the SI to fall in love with your product, commit to it, and become your champion within the SI. In addition, other business units within the SI must buy in to escalate the sales process to the firm’s top echelon. But even after you’ve closed the contract with the SI, you haven’t finished your work.
All you’ve accomplished is that you’ve secured the “right to sell.” This initial buy-in does not automatically generate outcomes. Startup vendors must continue to push the SI to implement and deploy its solutions with their enterprise clients. This means hands-on monitoring and relationship management to see the solution through to the deployment stages, and then tracking progress and customer success. This is the same process by which a customer success team works to ensure a client is successfully leveraging a tool to meet its goals.
Conclusion
Selling into SI’s can be tremendously helpful to securing broad buy-in from large enterprise clients. These actors are uniquely positioned with the enterprise to adopt and deploy innovative automation solutions that are scalable, especially when related to IT infrastructure and digital transformation.
However, the SI’s have their own unique challenges, and for a startup to successfully engage with an SI, it must not only understand the value of its solution within the target enterprise’s tech stack, but also have a clear view of the SI’s economics. To make a strong business case, the startup should be clear-eyed about where they sit on a particular SI’s priority list and its innovation agenda. The process can be challenging, but the rewards are mighty.
Celebrating Women's History Month | Arka Talks w/ Sonal Puri & Usha Amin
In this month’s episode of Arka Talks, we are celebrating Women History Month featuring Sonal Puri (CEO, Webscale) & Usha Amin (Co-founding Partner, SAHA fund) in a panel moderated by Dhanasree Molugu (Arka Alum & MBA Associate at Menlo Ventures) to hear inspiring stories of women from the frontlines of being in Tech & VC. Arka Talks is our monthly fireside chat where we feature founders, operators, VCs and corporates discussing enterprise trends in the cross-border space and explore ways to build and scale a successful cross-border startup.
Sonal Puri serves as the Chief Executive Officer of Webscale, from pre-product to recently closing $26M in growth financing. Prior to Webscale, she was the Chief Marketing Officer at Aryaka Networks and led sales, marketing and alliances for the pioneer in SaaS for global enterprise networks from pre-product to Series D. Sonal has more than 20 years of experience with internet infrastructure, across four startups, in sales, marketing, corporate and business development and channels. Previously, Sonal headed corporate strategy for the Application Acceleration business unit, and the Western US Corporate Development team at Akamai, working on partnerships, mergers and acquisitions. Sonal also ran global business operations and alliances from pre-product to Series C and exit, as well as the acquisition process for Speedera (AKAM). She has held additional key management roles in sales, marketing and IT at Inktomi, CAS and Euclid.Usha has co-founded SAHA Asset Advisor – First Venture Capital Fund registered with SEBI to Empower women entrepreneurship in Technology. Invested in 11 companies, seven active, mentoring them to excel in their sectors, scaled successfully, created employment and growth opportunities in the ecosystem and impacted gender parity. The companies have focussed not only on growing their business but also on women’s welfare by implementing policies that make it favourable for them to continue work despite domestic challenges, especially during the current situation.
Arka Talks ft. Obviously AI | No-Code & AI - Discussing the Endless Possibilities
In this episode, we feature our Arka startup founder & INK fellow Nirman Dave from Obviously AI in a panel discussion with Rafael Ugolini – a Sr. Engineering Manager at Collibra (a leading Data Intelligence company) and an Angel Investor. Along with, Avantika Mohapatra an ex-BMW engineer who now leads the charge in No-Code ML landscape as the partnerships head at AIxDesign.
Obviously AI enables anyone to quickly build and run AI models in minutes, without writing code. Crafted for a citizen analyst, Obviously AI sits in the heart of 3,000 BI teams across the world, delivering over 82,000 models since it’s launch in Feb 2020. It is a recommended no-code AI tool by Forbes and named among the top 5 no-code AI tools by Analytics Insight. Learn more on https://obviously.ai
Arka Spring Showcase - Jan '21
Arka spring showcase is an invite-only coming together of the best minds in the US-India cross-border startup ecosystem where we have our founders showcase their work and a set of insightful discussions on the US-India cross-border eco-system.
We started the year off with the Arka Spring Showcase where we had the best minds of the US-India cross border come and be a part of the bi-annual showcase. The session started off with a keynote by Eric Benhamou, (Founder, Benhamou Global Ventures). Eric spoke about the transformation of the global enterprise and enterprise trends to look forward to in 2021. Arka startups did a showcase where they presented their work and spoke about their progress so far.
We then had Sanjay Nath (Co-founder, Blume Ventures), Rashmi Gopinath (General Partner, B Capital Group) & Ankur Jain (Founder, Emergent Ventures) for a riveting panel discussion on “The Emergence of Enterprise Innovation in the US-India Eco-System”.
This was followed by a great panel discussion on “Building and Scaling a cross-border startup led by Yashwanth Hemaraj (Partner, Benhamou Global Ventures) in conversation with Rajoshi Ghosh (Co-founder, Hasura) & Shiv Agarwal (Co-founder, Arkin (acq. by VMWare).
The 3 Lean Marketing Principles of Highly Effective Startups
In this episode of Arka Talks, Elias Rubel of Mattermade talks about the 3 lean marketing principles of highly effective enterprise startups and how early-stage enterprise startups can leverage them.
Topics that were covered:
- Lead Demand Gen: Is it smart to leverage ABM while running a hyper lean marketing program? How should we think about budgeting and channel testing in the post-covid economy? What cost-free demand gen strategy will always out preform companies with a bigger budget?
- Growth Foundations: How do you set the right growth goals for your team? What does a best in class MarTech stack look like on a leaned out budget? What are the highest leverage growth channels that don’t cost a dime?
- Marketing, the right way: What’s the most effective way for Sales and Marketing to partner? What’s the best way to focus your marketing efforts and messaging? How should we approach product marketing and leverage it to close more deals?
Arka Talks w/ Gaurav Manglik | The Art and Science of Building a Global Enterprise Startup
In this Episode of Arka Talks, we had Ankur Jain (Advisor, Arka & Founder, Emergent Ventures) in conversation with Gaurav Manglik, Co-founder, CliQr (acq. by Cisco) & GP, WestWave Capital on the 24th of Feb at 9.30 PM IST/ 8 AM PT. Arka Talks is our monthly fireside chat where we feature founders, operators, VCs and corporates discussing enterprise trends in the cross-border space and explore ways to build and scale a successful cross-border startup. We’re excited to have Gaurav on Arka Talks and we will explore his journey and talk about the art and science of building a global enterprise startup. Gaurav is currently a General Partner at WestWave Capital where he focuses on investments in early-stage Enterprise B2B startups. Gaurav has been a key driver of innovation in the cloud computing Industry. He co-founded CliQr Technologies in 2010 and served as its Chief Executive Officer until it was acquired by Cisco for $260M in 2016. At Cisco, Gaurav led cloud engineering for the Cloud Platform and Solutions Group and advised on Cisco’s cloud and container strategy and related investments and acquisitions.
Watch the full episode below –
Talent Lifecycle Management and its Impact Post Covid-19
Shifting work trends in the pandemic era reflect a clear acceleration of certain forces, a disruptive and disrupted trend in others- and the emergence of totally new drivers of change. In this episode of Arka Talks, our portfolio founder Ganesh Iyer of dotin was in conversation with Aadesh Goyal – Chief Human Resource Officer at Tata Communications and Venkatesan Sukumaran – Head of Business Analytics A&I at Tata Consultancy Services on how Covid-19 has impacted the talent lifecycle management/ workforce and the tools/best practices put in place to address the same.