Blog Posts Process Management

Unlocking AI potential for CISOs: A framework for safe adoption

Blog: OpenText

Finger touching a shield that has an AI symbol in the middle with lines coming out of it. The lines have labels reading Zero Trust, Cloud, IoT, Ransomware, Supply Chain, Identity, DevSecOps, and Privacy

Many organizations are in the exploratory phases of using Artificial Intelligence (AI). Some of these AIs include machine learning (ML), and natural language processing (NLP) in the AI category, which makes AI more inclusive. Remember, AI has been around for a long time but with different semantics. 

The benefits of AI for some industries like healthcare, banking, and telecomm will drive major strategy changes, and the impact will be vast. Developers are eager to build LLMs for their business applications and system integrators are eager to integrate AI into their existing job functions. AI is quickly becoming ubiquitous, which means CISOs must know how to manage, guide, and lead AI’s adoption. 

ChatGPT has given a glimpse of GenAI and generated quite a buzz about its astounding progress to the entire world.  As per Forbes, the global artificial intelligence market is projected to reach $1.8k billion by 2030. 

eMarketer mentioned GenAI adoption will climb to 77.8 million users in the two years following the November 2022 release of ChatGPT, more than doubling the adoption rate of both tablets and smartphones. 

PwC found that 14% of enterprises with the adoption of AI and machine learning in product development earn more than 30% of their revenues from fully digital products or services. 

Challenges for organizations in adopting AI

While enterprises like to move to adopt AI faster to drive growth, automation, and security, there are a few concerns that CISOs and their enterprises are struggling with. 

  1. Visibility – many teams in the organization are using or building AI applications right now, some have the knowledge, resources, and security awareness to do it right, but others don’t. 
  1. Uncontrolled LLMs -the AI model is a mixture of instructions and data, and end users usually add further instructions to get any result from that model. Now, a bad actor could change these instructions to let AI produce a biased or wrong response.  
  1. Building secure AI applications – creating custom actions with AI workflow needs to be validated from a security perspective, sometime addition of a vulnerable python library in the AI application makes a faulty software supply chain. 
  1. Data – there may be no visibility of proprietary data while training models  
  1. Security controls – code assistant apps/plugins are impersonating existing roles, so user security controls need to be strict. 

Recently, some of the security research team uncovered evidence that users of ChatGPT are being misled into installing malicious open-source software packages that they believe are legitimate. It’s difficult to make predictions, especially about the future. As AI development and usage continue to evolve, the security landscape is bound to evolve with them. 

Overall solutions within processes, people, and technology 

When it comes to integrating AI into organizational frameworks, the challenges can seem daunting. Yet, the answer to the question “Is it safe to adopt AI?” is a resounding YES! The key lies in implementing the right adoption framework. AI, when harnessed correctly, holds immense potential. Here’s a structured approach to the framework: 

  1. Planning
    As the organization embarks on its AI journey, the first task is to identify the most suitable AI use cases. These use cases should align directly with the desired business outcomes. Without a clear grasp of these mappings, investing in AI technologies might lead to low/no Return on Investment (ROI). Begin by addressing crucial questions such as which use cases yield the greatest business impact, and which ones are viable for AI integration, and so on.
    The role of a CISO: The CISO collaborates closely with business stakeholders to assess AI use cases, ensuring alignment with security objectives and minimal impact on existing infrastructure.
  2. Strategizing
    Integrating AI risk management into the overarching security strategy is of paramount importance. Foster a culture of security awareness across the organization by educating employees on potential AI risks and their roles in mitigating them.
    The role of a CISO: Demand transparency from teams exploring AI. Ensure every level of organization is aware of how the AI works and the rationale behind its decisions. This awareness will help teams to go through the Transformation easily without thinking AI can cut jobs. Engage in open dialogue with stakeholders like developers, legal teams, and business leaders. Share concerns and work together to develop comprehensive risk management strategies. And develop collaborative policies that embrace every department and every level of the organization in comprehensive security controls including data, application, encryption, and permission policies.
  3. Project initiation
    Every project initiation phase should adhere to qualifying criteria established in the planning stage, based on defined use cases and business outcomes. Criteria may involve assessing if the current technology landscape supports AI implementation and the availability of relevant data points for project delivery.
    The role of a CISO: The CISO must oversee project governance, ensuring quantifiable milestones are achieved. A “fail fast” approach may be necessary to either pivot from the original plan or proceed to the next viable project.
  4. Project development
    During AI project or application development, strict adherence to security best practices is crucial. Top security practices that should be used include identifying sensitive data used to train the model and assess the protection requirements, integrating security testing, code review, and vulnerability assessments in development, subjecting third-party components to white label processes, and validating model development through bias detection and robustness testing. It should also include MFA for AI application access, role-based access controls to restrict access to sensitive data and functionalities, a comprehensive incident response plan tailored to AI applications, outlining procedures for data breaches or AI model failures. Organizations should also employ a threat intelligence system to continuously monitor emerging AI threats and update security measures accordingly.
    The role of a CISO: The CISO oversees all of this development and should make sure all of these best practices are being used.

Adopting responsible Artificial Intelligence (AI) is inevitable. All should embrace it. It will not solve all of our problems with no effort, but, with the right framework in place, AI has the potential to make a significant impact.

How OpenText and TechMahindra can help

DevSecOps is the normal way of any application development today and the AI-enabled DevSecOps only makes it more powerful to develop secure applications. With appropriate machine learning and LLM models, AI-enabled DevSecOps platforms make the journey of building secure software with improved efficiency and very minimal vulnerabilities.  OpenText, in conjunction with TechMahindra has launched a cloud-based MSSP solution “FastTrack to Application Security” covering automated DAST, SAST, software composition analysis, and API security.  This helps developers to test the security of the software quickly and accurately.  The turnaround time to perform the scans is reduced by 75% than one-time scans as a part of security testing. With this solution, we can run the scan when its code is being written and at runtime with minimal false positives.

AI-enabled DevSecOps helps CISOs with more accurate information, well built co-relations from the threat feeds/intel and the technology-agnostic vulnerabilities. They bring in accuracy, speed, proactive security, and enhanced collaborations. The executive dashboards from the platform can be further leveraged as intel feed to SOC operations for better Detect and Respond.

Conclusion

CISOs need to be part of a cross-functional team of leaders in a company that lays out guidance for employees. A governance framework and an inventory of existing AI use should be developed. Embracing AI in DevSecOps allows CISO to build secure and resilient software systems while enabling faster and more efficient development practices. Stay secure and embrace the power of AI!

Co-written by Rohit Baryha, Application Security Solution, OpenText Cybersecurity and Suchitra Krishnagiri, AppSec & DevSecOps Head of CoE, TechMahindra

The post Unlocking AI potential for CISOs: A framework for safe adoption appeared first on OpenText Blogs.

Leave a Comment

Get the BPI Web Feed

Using the HTML code below, you can display this Business Process Incubator page content with the current filter and sorting inside your web site for FREE.

Copy/Paste this code in your website html code:

<iframe src="https://www.businessprocessincubator.com/content/unlocking-ai-potential-for-cisos-a-framework-for-safe-adoption/?feed=html" frameborder="0" scrolling="auto" width="100%" height="700">

Customizing your BPI Web Feed

You can click on the Get the BPI Web Feed link on any of our page to create the best possible feed for your site. Here are a few tips to customize your BPI Web Feed.

Customizing the Content Filter
On any page, you can add filter criteria using the MORE FILTERS interface:

Customizing the Content Filter

Customizing the Content Sorting
Clicking on the sorting options will also change the way your BPI Web Feed will be ordered on your site:

Get the BPI Web Feed

Some integration examples

BPMN.org

XPDL.org

×