    The use of Generative AI (GenAI) has moved from exploratory and piloting phases to an operational reality across Hong Kong’s financial services sector. A recent report from the Hong Kong Institute for Monetary and Financial Research (HKIMR) covering banks, insurers, and wealth and asset managers shows that 75% of surveyed firms have already implemented at least one GenAI use case or are actively piloting and designing use cases. Among large firms, 83% have rolled out at least one GenAI use case. The same study also shows how firms are thinking about risk: 95% flagged model performance and accuracy as a primary risk-management factor, with transparency and explainability (65%) and data privacy and security (64%) also among the top considerations.

    Hong Kong’s regulators, the Hong Kong Monetary Authority (HKMA), which supervises banks, and the Hong Kong Securities and Futures Commission (SFC), which oversees licensed corporations in the securities and futures markets, are responding to GenAI use in a predictable yet evolving way. They are broadly supportive of innovation, but are making it clear that responsibility does not shift to the AI model, the vendor, or the underlying algorithms; it stays with the firm. For financial firms, responsible use of AI means stronger governance, well-defined accountability, deeper model validation, tighter data controls, and more deliberate safeguards, particularly where customer outcomes are concerned.

    A key point regulators keep returning to is human judgement. Hong Kong’s supervisory messaging, across banking, securities and futures, and privacy, consistently reiterates the importance of human oversight and controls, along with the expectation that firms can explain what their AI systems are doing, why they are doing it, and how errors are prevented and corrected.

    Our article breaks down what the use of AI in finance looks like in practice, from the shifting AI landscape to what regulators expect, the risks they are concerned about, and the practical steps financial firms can take now.

    Key Highlights

    • GenAI is no longer confined to niche use cases. 75% of surveyed financial institutions (FIs) in Hong Kong report at least one GenAI use case in production or in active pilot or design stages, with adoption expected to rise to 87% in the next three to five years.
    • AI adoption right now is mainly internal and non-customer-facing, with employee virtual assistants the most common use case. Firms are more cautious of higher-risk and customer-facing deployments due to accuracy and trust concerns.
    • The HKMA’s expectations for customer-facing GenAI centre on governance and accountability, fairness, transparency and disclosure, and data privacy and protection, with ‘human-in-the-loop’ controls and oversight.
    • The SFC highlights that GenAI language models may amplify traditional AI risks and expects firms to employ stronger risk mitigations, including disclosures, validation, monitoring, and human oversight.
    • Skills and accountability are key regulatory considerations, underscoring the need for qualified staff and ownership of the full AI lifecycle across business, risk, compliance, and technology.

    What’s Happening with AI Adoption in Hong Kong's Financial Services Industry?

    The HKIMR reports that GenAI adoption is “progressing steadily across the financial services industry in Hong Kong”. And while 75% of surveyed FIs have already implemented at least one GenAI use case, or are currently piloting and designing use cases, the HKIMR expects this figure to increase to 87% within the next three to five years. Large firms are also taking the lead, with 83% having rolled out at least one GenAI use case or taking steps towards adoption, versus 63% of small firms.

    The shift towards GenAI implementation isn’t confined to one niche. It’s happening across multiple financial sectors, including banking, insurance, and wealth and asset management. However, deployment patterns remain cautious. The most common implementations are internal and non-customer-facing, with additional deployments including coding support, enhanced AML/KYC capabilities, and natural-language search or enquiry for company information and internal policies. Firms are using GenAI to raise productivity, manage information overload, and reduce repetitive work, rather than to automate client-critical decisions at scale.

    The Regulatory Perspective on Financial Firms’ Adoption of AI

    The HKMA’s Perspective on AI

    In an October 2025 keynote speech at the “Risk & Compliance Re-Imagined” Seminar, HKMA Executive Director (Banking and Conduct) Alan Au noted that globally, regulators have been reviewing and refining their frameworks to ensure responsible and ethical deployment of AI. He went on to say that “The HKMA shares this commitment, supporting the integration of new technologies by banks through a 'risk-based' and 'technology-neutral' supervisory approach.”

    The use of AI in Hong Kong’s banking industry has been in focus for the HKMA for some time now. In November 2019, it issued a circular outlining four guiding principles, namely governance and accountability, fairness, transparency and disclosure, and data privacy and protection. In August 2024, it issued further guidance regarding the use of GenAI in customer-facing activities by banks. This guidance provided clear direction for firms to ensure AI applications are transparent, explainable, and fair. Safeguard measures outlined include:

    Governance and accountability: the boards of directors and senior management of banks must be responsible for all decisions and programmes driven by GenAI. The HKMA also requires banks to adopt a “human-in-the-loop” approach, retaining human control in the decision-making process to ensure that model-generated results are accurate, reliable, and free from bias.

    Fairness: banks must ensure GenAI outputs do not unfairly bias or disadvantage any customers. The HKMA also requires banks to provide customers with the option to opt out of using GenAI and to allow customers to request human intervention in decisions made by GenAI.

    Transparency and disclosure: banks are required to disclose to customers the purposes and limitations of using GenAI.

    Data privacy and protection: banks must comply with Hong Kong’s privacy laws, in line with guidance from the Office of the Privacy Commissioner for Personal Data (PCPD), which published a checklist of guidelines on the use of GenAI by employees in March 2025, providing a clear view of privacy considerations.
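    One common control behind the data-privacy expectations above is redacting personal data from free text before it is submitted to a GenAI tool. The sketch below illustrates the idea; the regex patterns and the `redact` helper are hypothetical examples, not a vetted PII-detection implementation, and a real deployment would use patterns tuned to local identifiers such as HKID numbers.

    ```python
    import re

    # Illustrative patterns only -- a production system would rely on a
    # vetted PII-detection library and locally tuned identifier formats.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "HKID": re.compile(r"\b[A-Z]{1,2}\d{6}\(?[0-9A]\)?"),
        "PHONE": re.compile(r"\b\d{4}[ -]?\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace detected personal data with typed placeholders
        before the text is sent to a GenAI tool."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text
    ```

    For example, `redact("Contact chan@example.com")` returns `"Contact [EMAIL]"`, so the model never sees the underlying value while the text remains usable.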

    The SFC’s Perspective on AI

    The SFC echoes the HKMA’s supportive stance, encouraging licensed corporations (LCs) to implement responsible AI technologies to innovate, deliver products or services more effectively, or enhance operational efficiency. However, it also warns that AI language models may “amplify” existing risks and create additional risks. In its November 2024 circular to licensed corporations on the use of GenAI language models (AI LMs), the regulator notes, “AI LMs democratize access to AI as they take natural language instructions from users as input such that very little technical proficiency is required to use them. The lower entry barriers for firms without the technical expertise in traditional AI to use AI LMs may result in firms deploying such technology before proper risk mitigation measures are put in place. Furthermore, the ability of AI LMs to output human-like responses may result in over-reliance, with users accepting their outputs without critical evaluation.”

    Firms should also keep in mind that, generally speaking, the SFC considers using an AI LM to provide investment recommendations, investment advice, or investment research to investors or clients a high-risk use case. It urges LCs to adopt additional risk-mitigation measures for such use cases, including:

    Conducting model validation: undertaking ongoing review and monitoring of AI LM performance to improve factual accuracy to a level commensurate with the specific use case.

    Taking a ‘human-in-the-loop’ approach: engaging in oversight to address hallucination risk and ensure AI LM outputs are factually accurate before providing responses to users.

    Testing output robustness to prompt variations: responding to reports that AI LMs may generate different outputs for text inputs that have the same meaning.

    Informing users: providing prominent disclosures to users so they understand they are interacting with AI rather than humans and that the outputs generated may not be accurate.
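    To make the validation and robustness points above concrete, the sketch below shows one way a firm might test output consistency across paraphrased prompts and route divergent cases to human review. `query_model` is a hypothetical stand-in for a call to the firm's deployed AI LM, and the 0.8 threshold is an illustrative assumption, not an SFC figure.

    ```python
    def consistency_score(query_model, paraphrases, normalise=str.strip):
        """Share of paraphrased prompts that yield the modal (most common) answer."""
        answers = [normalise(query_model(p)) for p in paraphrases]
        modal = max(set(answers), key=answers.count)
        return answers.count(modal) / len(answers)

    def needs_human_review(query_model, paraphrases, threshold=0.8):
        """Flag for 'human-in-the-loop' review when paraphrases of the
        same question produce materially different answers."""
        return consistency_score(query_model, paraphrases) < threshold
    ```

    A battery like this would run as part of ongoing monitoring: prompts that express the same question in several ways should produce the same substantive answer, and sets that fall below the threshold are escalated rather than released.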

    The SFC’s guidance also focuses on four core principles of GenAI LM use. Each reinforces a consistent message that responsibility remains with LCs at every stage of the AI model lifecycle.

    1. Senior Management Responsibility

    Accountability for the use of GenAI sits squarely with senior management. LCs must establish effective policies, internal controls, and governance arrangements covering the full AI model lifecycle, from development and customisation through to deployment and ongoing monitoring. Oversight must be performed by suitably qualified and experienced individuals, with competence spanning AI, data science, model risk management, and the relevant business domains. Specific tasks, such as validation, may be delegated, but regulatory responsibility cannot, and firms must be able to demonstrate robust oversight at senior management level.

    2. AI Model Risk Management

    GenAI must be subject to disciplined model risk management. Where firms undertake development, fine-tuning, retrieval-augmented generation, or other enhancements, development and validation functions should be segregated where applicable. The SFC expects LCs to undertake end-to-end testing, covering inputs, outputs, system integrations, and related controls. Once deployed, models must also be subject to ongoing review to ensure they remain fit for purpose.

    For higher-risk use cases, including those with potential investor impact, the SFC expects additional safeguards. These include validation to improve factual accuracy, testing against prompt variations, appropriate ‘human-in-the-loop’ review before outputs are deemed reliable, and clear disclosure to users about when they are interacting with AI.
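    The ‘human-in-the-loop’ safeguard described above can be sketched as a simple gate: outputs from higher-risk use cases are held until a named reviewer signs off, and every decision is recorded for audit. The class and field names below are illustrative assumptions, not a prescribed design.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class PendingOutput:
        """A model output held back until a human reviewer releases it."""
        use_case: str
        output: str
        released: bool = False
        audit: list = field(default_factory=list)

        def review(self, reviewer: str, approved: bool, note: str = "") -> None:
            # Record the decision with a timestamp so oversight is evidenced,
            # then release the output only if the reviewer approved it.
            self.audit.append({
                "reviewer": reviewer,
                "approved": approved,
                "note": note,
                "at": datetime.now(timezone.utc).isoformat(),
            })
            self.released = approved

    item = PendingOutput("investment research summary", "Draft summary ...")
    item.review("j.chan", approved=False, note="Figures need source check")
    ```

    The point of the audit trail is that a firm can later show who reviewed which output, when, and why it was or was not released.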

    3. Cybersecurity and Data Risk Management

    Firms must also address the heightened cybersecurity and data risks associated with GenAI. Guarding against adversarial attacks, data leakage, and attempts to manipulate model outputs is imperative in the AI context. LCs should also uphold high quality standards for training data, identify and mitigate biases that may affect the use case, and ensure they adhere to existing guidelines on data protection in the context of AI.

    4. Third-Party Provider Risk Management

    Use of third-party or open-source models does not absolve LCs of regulatory responsibility. They must conduct due diligence on third-party providers, assess the effectiveness of their risk management frameworks, and evaluate whether model performance is appropriate for the intended use case. Ongoing monitoring is required, along with contingency planning for service disruption or dependency risk. Where open-source models are used, and no clear provider exists, firms remain responsible for ensuring compliance with model development and management standards.

    Taken together, the SFC’s four principles reinforce a straightforward position: innovation is encouraged, but governance, documentation, and accountability must underpin deployment. Read more about The Age of AI and the New Frontier of Conflicts of Interest.

    The Human Element in Effective Use of AI

    The evolving regulatory landscape for GenAI use in Hong Kong makes clear that talent and governance are inseparable. While firms can purchase AI models, tools, and managed services, the SFC explicitly states that oversight and risk management should be performed by fit and proper staff. The regulator also expects senior management to ensure that responsible staff across business, risk, compliance, and technology functions hold the relevant competencies in the AI context.

    Industry data reinforces the importance of skilled talent overseeing AI deployments. The HKIMR report shows that 80% of surveyed FIs identified technical skills relating to GenAI use and development as a top skills gap, with 60% highlighting compliance skills as a key gap for supporting GenAI initiatives.

    Regulatory messaging is increasingly direct on talent reskilling and upskilling. In April 2025, HKMA Deputy Chief Executive Darryl Chan noted in his opening remarks at the HKMA-AoF/HKIMR-CEPR International Conference on Generative Artificial Intelligence that the “human-in-the-loop” element is a necessary safeguard, applied according to the risks associated with an AI use case. Chan detailed the HKMA’s emphasis on the need for banks to “set a clear future direction for manpower development to support business priorities, and the need to draw up effective strategies that address talent needs, supported by the deployment of sufficient financial resources to staff reskilling and upskilling.”

    Privacy expectations also lean heavily on people, not just tools. The PCPD checklist calls for transparency and internal communications around GenAI policies, alongside employee education on how to use GenAI safely and responsibly. As GenAI continues to become common operational infrastructure, AI and compliance literacy are an increasingly essential component of FIs’ oversight and controls.

    How Can AI Enhance Firms’ Use of RegTech Tools?

    As revealed in the HKIMR report, firms are often using GenAI to help employees more efficiently find and use information rather than to replace expertise. Among cited use cases are employee virtual assistants and natural-language search functions to locate company information and policies.

    RegTech systems already excel at helping firms manage attestations, approvals, monitoring, and recordkeeping requirements. For example, the MCO (MyComplianceOffice) platform is a complete compliance solution that includes a Know Your Employee (KYE) suite, covering personal trading, gifts, entertainment and hospitality, outside business activities, attestations, and ‘fit and proper’, along with a Know Your Obligations (KYO) suite, which includes policies and procedure management, compliance risk assessment, and evidencing compliance.

    MCO’s single platform for compliance empowers FIs to meet regulatory obligations with confidence and transparency. MCO incorporates AI across its compliance solutions to strengthen risk detection, monitoring, and decision‑making. All AI capabilities are implemented responsibly, transparently, and securely to meet evolving regulatory expectations.

    This combination of AI-supported employee decision-making and stronger evidencing directly supports what Hong Kong regulators expect: accountability, transparency, and human judgement where it matters most.


    Next Steps for Financial Firms to Meet Regulatory Expectations of Responsible AI Use

    Firms should treat GenAI implementation as a governance exercise rather than simply a technology rollout. Starting from this position, firms are well placed to meet regulatory expectations: a clear inventory of GenAI use cases, classification that flags high-risk activities, and documented controls mapped to those risks. A robust governance and oversight framework becomes imperative wherever GenAI touches customer communications, investment decisioning, product recommendations, or any workflow that could create suitability or consumer-protection issues. Learn more about Preparing for the Agentic Era: Stay Ahead of AI Conflicts of Interest.

    Operationally, firms should approach GenAI deployment with five evidence points in mind that regulators keep signalling:

    1. Accountable ownership, with senior management oversight and defined responsibilities across business, technology, risk, and compliance.
    2. A GenAI risk framework that covers validation, end-to-end testing, monitoring, and documentation that stands up to regulatory scrutiny.
    3. Data and privacy controls that specify what information can be entered into tools, how data is anonymised or restricted, how outputs are stored, and how data breaches will be reported and handled.
    4. Cybersecurity and third-party controls that explicitly address adversarial threats, encryption, vendor due diligence, and contingency planning for service disruption or dependency risk.
    5. Talent reskilling and upskilling, AI literacy, and role-based training to ensure the human element supports AI initiatives.
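    The inventory and classification steps above can be sketched as a simple rule-based tagger over a use-case register. The trigger attributes and the two-tier scheme below are illustrative assumptions; a real classification would follow the firm's own risk taxonomy and governance process.

    ```python
    # Hypothetical rule: any use case touching customers or investment advice
    # is high risk and attracts the additional SFC safeguards.
    HIGH_RISK_TRIGGERS = {"customer_facing", "investment_advice",
                          "product_recommendation"}

    def classify(use_case: dict) -> str:
        """Tag a GenAI use case 'high' or 'standard' risk from its attributes."""
        if HIGH_RISK_TRIGGERS & set(use_case["attributes"]):
            return "high"
        return "standard"

    # Illustrative inventory entries, mirroring the deployment patterns
    # described in the article.
    inventory = [
        {"name": "employee virtual assistant", "attributes": ["internal"]},
        {"name": "client research chatbot",
         "attributes": ["customer_facing", "investment_advice"]},
    ]
    for uc in inventory:
        uc["risk"] = classify(uc)
    ```

    Even a register this simple gives firms a defensible starting point: every use case is recorded, its risk tier is explicit, and the additional controls for high-risk tiers can be mapped and evidenced against it.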

    Finally, also consider supervisory engagement as a risk management tool. The SFC encourages firms, particularly in high-risk use cases, to discuss plans early, while the HKMA has framed sandboxes and toolkits as mechanisms to test and learn in a controlled way.

    In Hong Kong’s current environment, firms have a significant opportunity to leverage GenAI innovation, provided they can retain human oversight, focus on customer protection, and evidence their controls. Those taking an informed, considered, and strategic approach to AI deployment will be best placed to continue meeting regulatory expectations around the responsible use of AI in finance.

    See the MCO (MyComplianceOffice) complete compliance suite in action to help your firm meet evolving regulatory requirements.