When it comes to fraud, an ounce of prevention is worth a pound of cure. The full impact of fraud can be devastating, well beyond the financial loss. It has the potential to damage customer relationships, tarnish reputations and depress internal morale. Fraud can be committed by anyone inside or outside the company, even C-suite executives.
For accounting firms, the risks are even higher. External auditors who fail to discover fraud can face legal action. Luckily, advanced data mining is helping auditors and accountants search through high volumes of data to pick out irregularities and inconsistencies that may indicate fraud.
Most of us are familiar with the four-phase fraud audit program (assess, identify, respond and conclude), a process built from the fraud scenarios identified in the fraud risk assessment. Actual investigations of fraud allegations range from rather simple to highly sophisticated. Fraud investigation expert Leonard Vona has expanded on these four steps to provide a more robust plan of action for detecting and preventing fraud using data mining techniques.
“Data mining must be driven by the fraud scenario versus the data mining routine.”
Leonard Vona, Fraud Auditing, Inc.
You can’t audit what you don’t understand. Databases exist to help make decisions. The first time you work with a data file, familiarize yourself with the data, including the underlying business processes it represents. The goal is to test business facts you believe to be true against the data to gain assurance about its accuracy. Normalize and harmonize the data; if you identify discrepancies, determine why they exist.
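One simple way to test business facts against the data is to reconcile the extract against control figures you believe to be true, such as the record count and dollar total reported by the source system. The following is a minimal sketch; the field names and figures are illustrative assumptions, not from the article.

```python
# Hypothetical sketch: reconcile a data extract against business facts
# believed to be true (record count and control total from the source system).
# Field names and values are illustrative assumptions.

def reconcile(records, expected_count, expected_total, amount_field="amount"):
    """Return a list of discrepancy messages (empty list = extract agrees)."""
    issues = []
    if len(records) != expected_count:
        issues.append(f"record count {len(records)} != expected {expected_count}")
    total = round(sum(r[amount_field] for r in records), 2)
    if total != expected_total:
        issues.append(f"control total {total} != expected {expected_total}")
    return issues

extract = [{"amount": 100.00}, {"amount": 250.50}, {"amount": 75.25}]
print(reconcile(extract, 3, 425.75))   # agrees: no discrepancies
print(reconcile(extract, 4, 500.00))   # count and total both mismatch
```

Any non-empty result is a discrepancy to investigate before drawing conclusions from the data.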
One of the greatest impediments to fraud data mining is the identification and extraction of the data from the IT environment. While CaseWare IDEA® can be used to convert the data format, you must also consider storage capacity, table identification, data location and IT cooperation to build an effective data mining environment, and some cleanup of the data may be necessary.
Data mapping is the process of starting with each field in the database, understanding how the data correlates to the fraud scenario and how to search the data for indicators that link to the scheme. Essentially, it is the process of drawing a picture of a fraud scenario with data. Auditors should focus on both master file data and the transactional data associated with the business system.
To illustrate this concept, consider the following scenario, in which non-existent companies with false billing addresses are used to commit fraud. Fields in the vendor master file, such as vendor name, address, telephone number, government identification number and bank account number, are useful for identifying false vendors.
The auditor would search for vendors with missing key information, illogical information, and information that matches other key databases. An inherent assumption is that the accounting department populates the database and the information has integrity. In our scenario, we believe an operations manager with vendor master file update access has set up phony vendors in order to submit payments for work that was never done.
The vendor telephone number illustrates the concept of data mapping to identify fraud. A missing telephone number may be an indicator of a false vendor. Matching vendor telephone numbers to employee information can help uncover this fraud scheme, and area codes that are not consistent with the vendor address are an indicator of the mail-forward technique of a false vendor.
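The telephone-number tests described above can be sketched in a few lines of code. This is an illustrative example only; the field names, flag wording, and the tiny area-code lookup table are assumptions, not part of any real reference data.

```python
# Illustrative data-mapping tests on the vendor master file: flag vendors
# with a missing phone number, a phone number matching an employee, or an
# area code inconsistent with the vendor's state. Field names and the
# area-code table are assumptions for this sketch.

AREA_CODES = {"212": "NY", "713": "TX"}  # toy lookup, not a real reference

def flag_vendors(vendors, employee_phones):
    flags = []
    for v in vendors:
        phone = (v.get("phone") or "").strip()
        if not phone:
            flags.append((v["id"], "missing phone number"))
            continue
        if phone in employee_phones:
            flags.append((v["id"], "phone matches an employee"))
        area = phone[:3]
        if AREA_CODES.get(area) and AREA_CODES[area] != v.get("state"):
            flags.append((v["id"], "area code inconsistent with address"))
    return flags

vendors = [
    {"id": "V1", "phone": "2125550101", "state": "NY"},  # clean
    {"id": "V2", "phone": "",           "state": "TX"},  # missing phone
    {"id": "V3", "phone": "7135550199", "state": "NY"},  # suspicious
]
print(flag_vendors(vendors, employee_phones={"7135550199"}))
```

Vendor V3 is flagged twice (employee match and area-code mismatch), which is exactly the kind of compounding indicator a fraud auditor would prioritize.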
Two types of filters can be used to restrict the records extracted or included to those meeting specified criteria. The Include/Exclude filter is case sensitive and looks for exact matches between text entered in the list and the data in the field to determine whether a record should be included. It can be applied to any data type. Range filters can only be applied to a field with a Numeric data type.
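Outside of IDEA, the behavior of these two filter types can be sketched as plain functions. The function names and record fields below are illustrative assumptions, but the semantics mirror the description: a case-sensitive exact-match include/exclude filter for any field, and a range filter for numeric fields.

```python
# Sketch of the two filter types: a case-sensitive include/exclude filter
# (exact matches against a list of values) and a numeric range filter.

def include_exclude(records, field, values, include=True):
    """Case-sensitive exact-match filter; keeps or drops matching records."""
    return [r for r in records if (r[field] in values) == include]

def range_filter(records, field, low, high):
    """Keep records whose numeric field falls within [low, high]."""
    return [r for r in records if low <= r[field] <= high]

txns = [{"vendor": "ACME",   "amount": 900},
        {"vendor": "acme",   "amount": 4999},
        {"vendor": "Globex", "amount": 120}]

print(include_exclude(txns, "vendor", {"ACME"}))  # case-sensitive: only "ACME"
print(range_filter(txns, "amount", 500, 5000))    # 900 and 4999 qualify
```

Note how case sensitivity matters: "acme" is not matched by "ACME", which is itself a reason to normalize text fields before filtering.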
The inclusion portion of the theory starts with a database of transactions where the data is categorized into similar groups. The purpose of doing this categorization is twofold: examining a smaller database is more manageable, and an anomaly is easier to spot when all the transactions are uniform. The grouping of data is dependent on the fraud scenario. Some logical groupings are:
Geographical business divisions or territories
Account numbers used by multiple entities, which may be temporary or one-time vendors
Dollar value of the account or transaction
Active entities vs. inactive entities
Transaction codes
False entities vs. real entities
Major category of revenue or expenditure
Class of transactions
Transactions with or without control documents
Transactions specific to a person, entity or account
Company anomalies, house accounts, overrides
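The categorization step above can be sketched as a simple grouping routine: split the transaction file into uniform populations so anomalies stand out within each smaller group. The grouping keys (transaction code and dollar band) and field names here are illustrative assumptions chosen to match two of the logical groupings listed.

```python
# Illustrative sketch: categorize transactions into uniform groups by
# transaction code and dollar band, so each smaller population is easier
# to examine and anomalies are easier to spot. Bands are assumptions.

from collections import defaultdict

def band(amount):
    if amount < 1000:
        return "under_1k"
    if amount < 10000:
        return "1k_to_10k"
    return "10k_plus"

def categorize(txns):
    groups = defaultdict(list)
    for t in txns:
        groups[(t["code"], band(t["amount"]))].append(t)
    return groups

txns = [{"code": "AP", "amount": 500},
        {"code": "AP", "amount": 9500},
        {"code": "PY", "amount": 250}]
print(sorted(categorize(txns)))
```

Each resulting group can then be examined on its own, which is the point of the exercise: a $9,500 payment stands out far more inside a uniform band than inside the full mixed population.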
When looking for fraud, a significant problem is differentiating between legitimate transactions and nearly identical fraudulent ones. By definition, a false positive is a transaction that meets the fraud data profile but is not in and of itself fraudulent. The challenge is deciding how to reduce or eliminate false positives without overlooking actual fraud (false negatives), because each missed loss may be very costly.
IDEA’s built-in @functions can be used to clean up messy data fields that may be generating false positives.
Tip: For more @functions, access Passport from the Files tab within IDEA.
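The kind of field cleanup these functions perform can be illustrated outside IDEA as well. The helpers below are Python equivalents of common cleanup operations (trimming, whitespace collapsing, case normalization, digit extraction); they are not a mapping of specific @functions, just an illustration of the idea.

```python
# Illustrative field-cleanup helpers to reduce false positives caused by
# formatting differences rather than genuine data differences.

import re

def normalize_text(value):
    """Trim, collapse internal whitespace, and upper-case a text field."""
    return re.sub(r"\s+", " ", value.strip()).upper()

def digits_only(value):
    """Strip everything but digits (useful for phone or ID comparisons)."""
    return re.sub(r"\D", "", value)

print(normalize_text("  Acme   Supply co. "))  # "ACME SUPPLY CO."
print(digits_only("(713) 555-0199"))           # "7135550199"
```

Applied consistently before matching, such normalization lets "Acme Supply Co." match " ACME  SUPPLY CO. " instead of flagging them as different vendors.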
Sampling is based on discovery sampling rather than a random selection process. The purpose is to include the fraud possibilities while reducing the population file to a more manageable size (from millions to thousands or hundreds). The sampling process for a fraud audit is the most extensive step and can be high risk, so only the most experienced fraud auditors should be in control of it. Typical data mining techniques include filtering or displaying data or text, extraction, duplicate key detection, sequence checks, summarization, stratification, Benford’s Law, and text and number searches.
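Benford’s Law, one of the techniques listed above, tests whether the leading digits of amounts follow the expected logarithmic distribution. The sketch below is a minimal first-digit test; the threshold and sample data are illustrative assumptions, and the tiny sample is deliberately stuffed with amounts just under a supposed approval limit.

```python
# Hedged sketch of a Benford's Law first-digit test: compare observed
# leading-digit frequencies against the expected log10(1 + 1/d) shares
# and report digits whose deviation exceeds a threshold.

import math
from collections import Counter

def benford_deviations(amounts, threshold=0.05):
    """Return {digit: observed - expected} for digits deviating > threshold."""
    digits = [str(abs(a))[0] for a in amounts if abs(a) >= 1]
    counts = Counter(digits)
    n = len(digits)
    flagged = {}
    for d in "123456789":
        expected = math.log10(1 + 1 / int(d))
        observed = counts.get(d, 0) / n
        if abs(observed - expected) > threshold:
            flagged[d] = round(observed - expected, 3)
    return flagged

# A population stuffed with amounts just under a (hypothetical) approval
# limit, so leading 9s are heavily over-represented.
amounts = [9200, 9400, 9800, 9100, 9900, 9600, 1200, 2400, 130, 9700]
print(benford_deviations(amounts))
```

With such a tiny sample many digits deviate, but the digit 9 shows by far the largest positive excess, which is exactly the pattern that invoices manipulated to stay under an approval threshold would produce. In practice Benford tests need large populations to be meaningful.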
The collective knowledge gathered during the investigation is pulled together to develop the search routines; by this point the investigation team has a complete profile of the fraud. Data mining methods that can be used to identify patterns include: sequence of events (date and time); charts and graphs; association; clustering (differences and similarities); classification; and historical data. Understand the mining patterns, incorporate new knowledge, and repeat as necessary, and don’t rely on just one technique.
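As one illustration of a pattern-based search routine, consider a pass-through fraud scheme, in which a middleman entity re-bills goods from a real supplier at a markup. A simple routine can pair invoices from different vendors for the same item and date and report the markup. The field names and the pairing rule below are assumptions for this sketch, not a prescribed method.

```python
# Illustrative pass-through search: pair invoices from two different
# vendors for the same item and date with differing amounts, and report
# the markup between them. Field names are assumptions for this sketch.

from itertools import combinations

def passthrough_candidates(invoices):
    pairs = []
    for a, b in combinations(invoices, 2):
        if (a["item"] == b["item"] and a["date"] == b["date"]
                and a["vendor"] != b["vendor"] and a["amount"] != b["amount"]):
            low, high = sorted((a, b), key=lambda r: r["amount"])
            pairs.append((low["vendor"], high["vendor"],
                          round(high["amount"] - low["amount"], 2)))
    return pairs

invoices = [
    {"vendor": "Real Supply", "item": "PIPE-10", "date": "2024-03-01", "amount": 1000.0},
    {"vendor": "Shell Co",    "item": "PIPE-10", "date": "2024-03-01", "amount": 1400.0},
    {"vendor": "Globex",      "item": "VALVE-2", "date": "2024-03-02", "amount": 300.0},
]
print(passthrough_candidates(invoices))  # [('Real Supply', 'Shell Co', 400.0)]
```

A hit here is only a candidate, not a conclusion; as the next section notes, always test your analysis before announcing results.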
“Something doesn’t feel right,” is a good place to start inquiries and searches, but always test your conclusions before announcing the results of tests. If the tests indicate fraud is going on, the best course of action is to always assume something is wrong with the analysis and double check your approach.
The investigation team will need to develop an action plan to respond to a specific scenario. In the case of possibly non-existent vendors, a covert action plan would include calling the vendor phone number or conducting a website search. An overt procedure would be an unannounced site visit to the vendor’s indicated address.
With the latest release of IDEA, new features help quickly identify trends and patterns that may point to fraud:
Visualize, a data visualization tool in the Analytics tab, helps you see outliers, distributions and trends across multiple databases. Findings are presented in dashboards, which can be saved and shared through the Library.
Visualize also provides the ability to drill down with a grid view of your data and extract insights from any particular slice, bar, column or area of a chart. Additionally, the new Auto-Stratification feature stratifies on numeric and date fields by automatically setting appropriate ranges within your graphs and charts.
Discover uses pre-written algorithms to automatically populate visual dashboards that identify specific data types and help perform analytics to uncover trends, outliers and anomalies. Dashboards can be modified, saved and shared via the Library. Discover also allows you to identify key fields within a database and extract their statistical information, as well as identify outlier distributions and flag areas of interest on numeric and date fields.
Even the best organizations are subject to fraud risks, which can cause serious financial and reputational damage. The key is using technology and proven techniques to stay ahead of fraud and build stronger controls.
Interested in using data analytics to uncover fraud schemes? Check out these articles on our website:
Best Kept Secrets of Fraud
The High Cost of Fraud (includes analytic tests)
See how IDEA was used to shut down a $1.5 million fraud run by a ring of oil cartel thieves.
Read the story here.