Total fraud losses in 2020 amounted to 56 billion USD, and U.S. businesses lose an average of 5% of gross revenue to fraud, making fraud mitigation a core area of focus for businesses of every kind. At the same time, detecting fraud in real time is difficult, and late detection of fraudulent behavior translates directly into higher monetary losses: in one survey, more than 50% of organizations said they recover less than 25% of fraud losses. Beyond the monetary impact, late fraud detection causes lasting damage to brand equity and user trust alike. Users of online services face increased risk in the form of personal data exposure, account takeover for fraud, and monetary loss, making fraud detection crucial to the larger trust and safety goal.
Banks, retailers, insurance companies, online communities, and marketplaces alike need tools and techniques both to detect fraud and abuse in real time and to take appropriate mitigation measures in the moment, not after it is too late. Rapidly evolving adversarial patterns make early detection of fraud particularly challenging, and the problem is exacerbated by a lack of tooling to detect those patterns in real time through rules or Machine Learning (ML) models. Oscilar can change that!
Instant fraud decisioning powered by a combination of Machine Learning and rules is the future of fraud mitigation.
Real-time fraud detection: Rules or Machine Learning?
Until recently, the primary approach to fraud prevention centered on detection based on human-defined rules, coupled with manual review of possibly fraudulent transactions. While human review of fraudulent transactions might lead to fewer false positives, it leaves a window of opportunity for advanced and fast-evolving adversarial techniques, leading to substantial financial and reputational damage. Effective fraud mitigation therefore requires accurate and instant decisioning with low false positives. Instant fraud decisioning involves ingesting hundreds of signals, some from the incoming transaction and some from historical analytics, to decide the appropriate course of action for a transaction as it is happening.
Why is a pure rules-based decisioning approach insufficient?
Historically, fraud mitigation primarily centered around heuristics defined as rules which, when satisfied, trigger a corresponding action. In a rules-based approach, you mark a data point as anomalous if it exceeds a preset, human-defined boundary. This approach requires significant domain knowledge of the incoming data and can become less effective as the underlying fraud signals change.
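To make this concrete, here is a minimal sketch of such a rule. The field name and the 10,000 USD threshold are hypothetical, chosen only to illustrate the preset-boundary idea.

```python
# A minimal sketch of a rules-based check: flag a transaction as anomalous
# when a value crosses a preset, human-defined boundary. The field name and
# the 10,000 USD threshold are hypothetical, for illustration only.

def is_anomalous(transaction: dict, amount_threshold: float = 10_000.0) -> bool:
    """Return True when the transaction amount exceeds the preset boundary."""
    return transaction["amount_usd"] > amount_threshold

# Example usage
print(is_anomalous({"amount_usd": 12_500.0}))  # True -> route to review
```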
As new adversarial patterns emerge, decisioning must evolve instantly to address them. Rules lend themselves to change easily, making them a good fit for mitigating new adversarial patterns. However, as adversarial patterns adapt to human-defined thresholds, the rules and their thresholds must adapt quickly as well. This necessitates tuning rules based on continuous rule analytics, as well as using Machine Learning.
Tuning heuristic-based rules might work well for certain high-confidence signals, but can fall short of discerning new multi-dimensional patterns.
For example, the billing address entered for a credit card transaction differing from the billing address on file for the card is a high-confidence signal of increased risk. But discerning trends across many transaction signals and their historical patterns (transaction amount, past transaction trends, GPS location of the transaction, transaction time, merchant account history, and so on) quickly exceeds what rules driven by human insight can capture.
Why is a pure Machine Learning approach insufficient?
Machine Learning is effective at deriving predictions from a vast variety of historical signals, but it is fundamentally not agile enough to respond to rapidly evolving adversarial patterns in the moment. Furthermore, a pure Machine Learning approach to fraud and abuse detection requires substantial expertise that might not scale well if applied everywhere. For instance, ML models might operate at high recall and a relatively higher false positive rate, but can be tuned for a lower false positive rate at the cost of lower recall.
Finding the right operating point on the ROC curve is necessary to strike the appropriate balance between false positive rate and recall.
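As a sketch of how that operating point might be chosen, the snippet below uses scikit-learn's roc_curve on a handful of made-up labels and model scores, and picks the highest-recall threshold that stays within an assumed false positive budget. The data and the 2% budget are illustrative, not a prescription.

```python
# A minimal sketch of picking an operating point on the ROC curve with
# scikit-learn. `y_true` and `y_score` stand in for labeled historical
# transactions and model probabilities; the 2% false positive budget is
# a hypothetical business constraint.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1])  # 1 = confirmed fraud
y_score = np.array([0.1, 0.2, 0.15, 0.4, 0.8, 0.3, 0.55, 0.9, 0.05, 0.7])

fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Choose the highest-recall threshold that keeps the false positive rate
# within the allowed budget.
max_fpr = 0.02
candidates = [(t, r) for f, r, t in zip(fpr, tpr, thresholds) if f <= max_fpr]
threshold, recall = max(candidates, key=lambda x: x[1])
print(f"operating threshold={threshold:.2f}, recall={recall:.2f}")
```

Tightening the false positive budget pushes the threshold higher and trades away recall, which is exactly the balance the ROC operating point makes explicit.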
Machine Learning models are also substantially less explainable than rules, limiting the set of people who can effectively tune them.
It's about complexity of the decision logic and rate of change
Application code is the starting point for lightweight rules that are simple and don't change often, but it is also less approachable to most risk teams given the requirement to write code. As the degree of complexity and the rate of change increase, application code is no longer suitable for business logic. A rules engine is a good fit for logic that changes often, but is less effective at decisioning as the logic grows in complexity. Machine Learning is a better fit for synthesizing complex relationships among hundreds of data points into a probability score when the rate of change in fraud patterns is relatively low. As both decision complexity and the rate of change increase, a decision engine that integrates Machine Learning and rules offers the best fit for holistic and instant decisioning.
Machine Learning vs Rules: The Fit
Beyond complexity and rate of change, other vectors for assessing the fit of rules vs. Machine Learning include explainability, precision of output, and the origin of the logic.
Rules vs Machine Learning: The Fit
Effective fraud mitigation requires both ML and rules
Accurate real-time risk decisioning using an appropriate combination of Machine Learning and rules is a journey. This evolution starts by replacing a subset of existing rules with ML models, followed by backstopping ML model risk scores with rules, and finally combining several ML models, some internal and others external, in the final decision.
This evolution is characterized by the following three-step journey.
Step 1: Replace a subset of rules with new Machine Learning models
The first step in applying Machine Learning to risk decisioning is to replace a subset of rules with an ML model, for easier maintenance and higher recall.
Step 1: Replace a subset of well-tuned rules using Machine Learning
Over time, risk organizations accumulate a large number of rules with manually tuned thresholds. Not only is this less maintainable, it also makes the tuning process too cumbersome to keep up with the evolution of fraud patterns. To make the transition, the features used in the rules serve as training data for the Machine Learning model. For instance, a rule might deny a user request if the user fails to log in three times in the last 30 minutes and the account age is less than 2 days. Another rule might step up authentication if the user's transaction ZIP code does not match the one in their customer profile. The features used in these rules, namely the number of failed login attempts, account age, and ZIP code match, serve as training data for a Machine Learning model that outputs a probability of risk for the particular user transaction. A human typically defines the decision and recommended action based on thresholds over that probability score.
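A minimal sketch of this step, assuming a small table of labeled historical transactions: the feature columns mirror the rules above (failed logins, account age, ZIP code match), and the model and data are illustrative rather than a recommended configuration.

```python
# A minimal sketch of Step 1: the features behind hand-tuned rules become
# training data for an ML model that outputs a probability of risk.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical training data: one row per historical transaction.
# Columns: [failed_logins_last_30m, account_age_days, billing_zip_matches_profile]
X = np.array([
    [0, 420, 1],
    [3, 1,   0],
    [1, 90,  1],
    [4, 0,   0],
    [0, 365, 1],
    [5, 2,   0],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = confirmed fraud label

model = GradientBoostingClassifier().fit(X, y)

# The model replaces the hand-tuned thresholds with a probability of risk;
# a human still maps that score to a recommended action.
incoming = np.array([[2, 1, 0]])  # 2 failed logins, 1-day-old account, ZIP mismatch
risk_score = model.predict_proba(incoming)[0, 1]
action = "deny" if risk_score > 0.8 else "step_up_auth" if risk_score > 0.5 else "allow"
print(round(risk_score, 2), action)
```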
Step 2: Backstop new Machine Learning models using well-tuned rules
The next step in this evolution is to backstop these newly created ML models using well-tuned rules.
Step 2: Backstop new ML models with well-tuned rules
Backstopping ML models with rules is especially effective when launching a new Machine Learning model, or when a signal used in the rules was not available during model training. Offsetting the accuracy of a relatively less tuned ML model with some well-tuned, heuristics-based rules serves as a reasonable starting point. ML models trained on other, related fraud problems also help protect the rollout of a new one. For instance: if the account fraud ML model's output is > 0.65, the IP reputation ML model's score is < 0.76, and the number of user requests in the last 30 minutes is > 3, then block the transaction.
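A minimal sketch of that backstop rule, with the model scores and request counter passed in as plain values; the names and thresholds are placeholders taken from the example above.

```python
# A minimal sketch of a backstop rule: a new, less-tuned model score is
# only acted on when corroborated by other signals. Names and thresholds
# are hypothetical, taken from the example in the text.

def backstop_decision(account_fraud_score: float,
                      ip_reputation_score: float,
                      requests_last_30m: int) -> str:
    """Block only when the new model score is backed by the other signals."""
    if (account_fraud_score > 0.65
            and ip_reputation_score < 0.76
            and requests_last_30m > 3):
        return "block"
    return "allow"

print(backstop_decision(0.70, 0.60, 5))  # -> "block"
```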
Machine Learning works well when the probability score of a well-tuned model is sufficiently high. However, a probability score in the gray area can lead to a high false positive rate that is hard to explain or reason about. This is also where backstopping the ML model score with well-defined rules effectively increases the overall accuracy of the decision. For example, let's say your fraud detection ML model produces a risk score of 0.75 for user account fraud. If, at the same time, the payment transaction amount is unusually high compared to the user's previous transactions, you can increase the risk score.
Another reason to apply this pattern is when the effectiveness of the ML model might be deteriorating due to a shift in data or user behavior patterns. Similarly, the model might be trained on labels that are different from, but correlated with, the label being predicted. When this happens, adding well-tuned, heuristics-based rules can increase the overall accuracy of the decision. For instance: block the transaction if the credit card transaction ML model score is > 0.81 and the user account's age is > 10 days, or if the score is between 0.55 and 0.81 and the account's age is ≤ 10 days. In this case, the ML model might already be trained on the user account's age, but the fraud patterns influenced by account age might change in a way the model is unaware of. You subsequently train a new ML model that incorporates the new behavior of account age along with the other features, and eventually replace this rule.
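The drift backstop above can be sketched the same way; the thresholds come straight from the example and would in practice be tuned from continuous rule analytics.

```python
# A minimal sketch of the drift backstop described above: the score band
# that triggers a block shifts with account age. Thresholds are taken from
# the example in the text and are illustrative only.

def drift_backstop(cc_txn_score: float, account_age_days: int) -> str:
    if cc_txn_score > 0.81 and account_age_days > 10:
        return "block"
    if 0.55 <= cc_txn_score <= 0.81 and account_age_days <= 10:
        return "block"
    return "allow"

print(drift_backstop(0.62, 3))   # newer account, mid-band score -> "block"
print(drift_backstop(0.62, 45))  # older account, same score -> "allow"
```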
Step 3: Integrate multiple ML models into a more advanced ML model
The final step in the evolution swings the pendulum from rules to the Machine Learning side. This step integrates probability scores from several well-tuned ML models—some internal and some third-party—into rules and other ML models.
Over time, as the organization's ML prowess increases, risk scores coming from specialized third-party tools and internal ML models alike need to be integrated for holistic fraud decisioning. For instance, you might have a third-party tool offering a risk score for account fraud and an internal ML model offering a risk score for transaction fraud. Both risk scores must be integrated when assessing the overall risk of the transaction. Such an integration starts with writing rules that combine the various risk scores and corresponding thresholds to recommend the appropriate course of action, and evolves into training new ML models on the outputs of existing, well-tuned ones.
Step 3: Integrate several ML model risk scores into rules at first, followed by a more advanced ML model
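A minimal sketch of this final stage, assuming you have historical transactions with both a third-party account fraud score and an internal transaction fraud score, plus confirmed fraud labels: a simple logistic regression stacks the two scores into one combined risk score. The data and model choice are illustrative.

```python
# A minimal sketch of Step 3: stacking the outputs of existing models into
# a new model that produces a single combined risk score. Scores and labels
# are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [third_party_account_fraud_score, internal_txn_fraud_score]
scores = np.array([
    [0.10, 0.20],
    [0.85, 0.40],
    [0.30, 0.90],
    [0.75, 0.80],
    [0.05, 0.15],
    [0.60, 0.70],
])
labels = np.array([0, 1, 1, 1, 0, 1])  # confirmed fraud outcomes

meta_model = LogisticRegression().fit(scores, labels)

# Combined risk score for a new transaction's pair of model outputs.
combined = meta_model.predict_proba([[0.7, 0.3]])[0, 1]
print(round(combined, 2))
```

In practice this replaces the hand-written score-combination rules from the start of Step 3 once enough labeled outcomes have accumulated to train the stacking model reliably.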
Oscilar for real-time fraud prevention, powered by ML
Real-time fraud prevention is an increasingly complex data processing problem. It requires multi-faceted data integration to backstop Machine Learning risk scores with rules when required, and online training capability to continuously learn new and evolving adversarial patterns that threaten the business and its users. Oscilar solves real-time fraud detection with a no-code decision engine that integrates Machine Learning and rules to enable accurate, real-time fraud prevention.