
Suspicious activity reporting (SAR)

Characteristics

  • Ease of Implementation
  • Impact
  • Impact type: Efficiency/Optimization
  • Primary Users: Internal
  • Type: RAG, summarization
  • Includes Predictive AI: Yes

What it is

Financial organizations monitor suspicious activity as part of the industry's regulatory framework. The industry already uses predictive AI to improve the accuracy of suspicious activity detection, but the suspicious activity reports (SARs) that must be prepared for each incident still require meticulous work. Fraud analysts spend considerable time preparing these reports for delivery to regulators.

Combining the existing predictive AI workflows for suspicious activity monitoring with a generative AI component streamlines the reporting process, improving the efficiency of Fraud/BSA/AML analysts.

The existing predictive AI system flags a transaction as suspicious based on its parameters (amount of money, location, type of transaction, venue, etc.) and serves it to the fraud analyst. The analyst reviews the transaction and related context, such as previous alert history and check images, to determine whether it is fraud. Whatever the determination, the analyst instructs the generative AI application tied into the predictive AI workflow to automatically create the necessary, case-specific narrative for the report.

How it works

Prediction Explanations (a quantitative indicator of the effect variables have on the predictions) from the predictive model are fed to the generative AI model. This data is formatted as a JSON dictionary for the generative AI model to process; there is no vector database that the generative AI model interacts with. The JSON file is the source of truth for the LLM. A system prompt in the background controls the format of the output, based on the predefined template for how the financial institution needs to format its reports. A simplified transformation of predictive insights into report narrative looks like this: variable A exceeds threshold X, which indicates that this transaction is fraudulent.
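To make that data flow concrete, here is a minimal sketch of how Prediction Explanations might be packaged as JSON and paired with a templated system prompt. The field names, template wording, and the final `llm_client.generate` call are illustrative assumptions, not DataRobot's actual schema or API.

```python
import json

# Illustrative Prediction Explanations payload from the predictive model.
# Field names are hypothetical, not DataRobot's actual output schema.
explanations = {
    "transaction_id": "TX-1029",
    "prediction": "suspicious",
    "probability": 0.93,
    "explanations": [
        {"feature": "amount", "value": 18500.00, "strength": 0.41},
        {"feature": "location", "value": "offshore", "strength": 0.27},
        {"feature": "transaction_type", "value": "wire", "strength": 0.19},
    ],
}

# Background system prompt enforcing the institution's report template
# (wording invented for illustration).
SYSTEM_PROMPT = (
    "You draft suspicious activity report narratives for a bank. "
    "Use ONLY the JSON provided as your source of truth; do not invent facts. "
    "Follow the template: 1) Summary, 2) Triggering factors, 3) Disposition."
)

def build_user_prompt(payload: dict) -> str:
    """Serialize the explanations so the JSON is the LLM's sole source of truth."""
    return "Draft the SAR narrative from this data:\n" + json.dumps(payload, indent=2)

# The prompt pair would then be sent to whichever LLM endpoint the institution
# uses, e.g. llm_client.generate(SYSTEM_PROMPT, build_user_prompt(explanations)).
```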

The analyst can then review the report and make any necessary amendments, such as elaborating on certain values that the predictive model highlighted, before passing the report along to the Financial Crimes Enforcement Network (FinCEN), which routes it to the appropriate law enforcement agency for further investigation.

User experience

The fraud investigator in this scenario interacts with a pre-built application that connects to their existing suspicious activity alerting system, retrieves known relevant context such as transaction history and check images, and generates output from a separate predictive model based on the results of the alerting system.

For each flagged alert, the analyst reads a natural-language summary, created by generative AI, detailing why the alert was tripped, the likelihood that it represents genuine suspicious activity, and a compilation of pre-prepared relevant documents. Drawing on human expertise, and potentially seeking out additional information, the analyst reviews the case and makes the final determination on whether the alert was correct. Once the analyst confirms the decision, the generative AI constructs the final narrative from all of the information provided and formats it into the bank's predefined template, explaining why the alert was or was not truly suspicious. Once the report is generated within the app, the user can review it, make amendments if needed, and then generate the file in the format required by the reporting workflow. That file is then submitted to a separate system tied into FinCEN.
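As a rough sketch of that application flow, the review loop might look like the following. The types and function names are hypothetical and stand in for the real alerting and LLM integrations; they are meant only to show the sequence of alert, analyst determination, templated narrative, and export.

```python
from dataclasses import dataclass, field

@dataclass
class AlertCase:
    """One flagged alert plus the context gathered for the analyst (illustrative)."""
    alert_id: str
    genai_summary: str   # natural-language explanation of why the alert tripped
    probability: float   # model's estimated likelihood of true suspicious activity
    documents: list[str] = field(default_factory=list)  # pre-prepared supporting docs

def review_case(case: AlertCase, analyst_decision: bool, amendments: str = "") -> str:
    """Apply the analyst's determination, then assemble the templated narrative.

    In a real system the narrative would come from the LLM under the bank's
    system prompt; plain string formatting stands in for that call here.
    """
    disposition = "suspicious" if analyst_decision else "not suspicious"
    narrative = (
        f"Alert {case.alert_id} was determined to be {disposition} "
        f"(model probability: {case.probability:.0%}).\n"
        f"Basis: {case.genai_summary}\n"
    )
    if amendments:  # analyst edits are appended before export
        narrative += f"Analyst notes: {amendments}\n"
    return narrative  # in practice, exported in the format the FinCEN-tied system expects
```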

Why it might benefit your business

A single suspicious activity report can take hours to write, review, and finalize, especially given the variety of information that must be reviewed. Large financial institutions receive tens of thousands of suspicious activity alerts every day, so even a small improvement in per-report processing time has a powerful cumulative effect: shaving 15 minutes from each of 10,000 daily alerts, for example, would free roughly 2,500 analyst hours per day. This also lets analysts devote more of their time to actual investigation rather than being driven by a growing backlog of alerts, which improves their performance and minimizes human error. A reduction in processing time would also expedite approval for legitimate transactions.

Potential risks

Risks associated with this use case span both generative and predictive AI components of the solution.

  • Inaccurate flagging of transactions can result in inaccurate reports.

  • Generated reports may misrepresent predictive data or draw incorrect conclusions.

  • A poorly tuned system prompt can produce reports with unconventional wording and structure, requiring extensive manual amendments by analysts.

Baseline mitigation tactics

  • Model monitoring for the predictive solution to ensure that it flags only the most relevant transactions.

  • Extensive pre-production testing of the LLM and its parameters, such as the system prompt and response temperature.

  • Consider a retraining regimen that uses grounding data to improve the model's outputs. This might require a new process in which the user can amend the automated report, and the amended version is then fed into a retraining database, as sketched below.
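One minimal way to capture that feedback loop, assuming a simple append-only store and hypothetical record fields, might look like this:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

FEEDBACK_STORE = Path("sar_feedback.jsonl")  # hypothetical retraining database

def record_amendment(alert_id: str, draft: str, amended: str) -> None:
    """Append the auto-generated draft and the analyst's amended version
    as a pair, so it can later ground retraining or prompt refinement."""
    record = {
        "alert_id": alert_id,
        "draft_narrative": draft,
        "amended_narrative": amended,
        "amended_at": datetime.now(timezone.utc).isoformat(),
    }
    with FEEDBACK_STORE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```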


Updated July 9, 2024