CERT Insider Threat Blog

Common Sense Guide to Mitigating Insider Threats - Best Practice 11 (of 19)

By CERT Insider Threat Center on 01/25/2013

Hello, this is Todd Lewellen, Cybersecurity Threat and Incident Analyst for the CERT Program, with the eleventh of 19 blog posts that describe the best practices fully documented in the fourth edition of the Common Sense Guide to Mitigating Insider Threats.

The CERT Program announced the public release of the fourth edition of the Common Sense Guide to Mitigating Insider Threats on December 12, 2012. The guide describes 19 practices that organizations should implement across the enterprise to mitigate (prevent, detect, and respond to) insider threats, as well as case studies of organizations that failed to do so. The eleventh of the 19 best practices follows.
 
Practice 11: Institutionalize system change controls.

This best practice could have prevented a substantial number of the insider attacks we've researched. While change controls are most often thought of as a way to prevent IT sabotage, many cases of insider fraud could also have been detected and prevented if change control systems had been in place. For example, let's look at a case from our recent Insider Threat Study: Illicit Cyber Activity Involving Fraud in the U.S. Financial Services Sector:

 

The insider was employed as a lead software developer at a prominent credit card company. This credit card company had a rewards points program where customers could earn points based on the volume and frequency of their credit card usage. These points could later be "cashed in" for gift cards, services, and other items of monetary value. Because of the high transaction volume of corporate accounts, a typical corporate account would have accumulated an immense number of rewards points, so the rewards points program was set up in such a way that the back-end software would not allow corporate accounts to earn points. At an unknown date, the insider devised a scheme to earn fraudulent rewards points by bypassing the back-end checks in the software and linking his personal accounts to the corporate business credit card accounts of third-party companies. After compromising a coworker's domain account by guessing the password, he temporarily changed the database security configuration, enabling him to link his personal accounts to several corporate accounts. The insider would cash in the rewards points for items of value, such as gift cards to popular chain stores, and then sell them in online auctions for cash. In all, the insider accumulated approximately 46 million rewards points, about $300,000 worth of which he converted to cash before being caught by internal fraud investigators. The insider admitted to the scheme and bargained with investigators for a reduced sentence in exchange for explaining how he changed the security configuration and how a similar occurrence could be prevented in the future.

 

As you can see, a change control system could have easily and effectively detected the insider's temporary change to the database security configuration. Had the organization audited changes to critical configurations, the insider could have been caught well before 46 million rewards points had accumulated in his personal account.

So what type of system would be considered a "change control system"? Many organizations run homebrew solutions, but the most prominent class of tools is the Host-Based Intrusion Detection System (HIDS). Whereas the more commonly known Network Intrusion Detection System (NIDS) analyzes network traffic for suspicious activity, a HIDS monitors for suspicious activity on any host that runs its agent, whether that host is a workstation, server, network appliance, or something else.

A simple and common feature of HIDS is the ability to constantly monitor a file’s integrity, usually done through cryptographic hash algorithms. If a file’s cryptographic hash changes, it must be because its data changed. There are many configuration files on systems that should rarely (if ever) change, so having a HIDS agent monitor these files for modification is an effective way to ensure the integrity (and therefore security) of a system.
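To make the file-integrity idea concrete, here is a minimal Python sketch, not a production HIDS; the watched paths and baseline location are hypothetical. It records SHA-256 hashes of a few configuration files and, on later runs, reports any file whose hash no longer matches the recorded baseline.

    import hashlib
    import json
    from pathlib import Path

    # Hypothetical configuration files that should rarely, if ever, change.
    WATCHED_FILES = ["/etc/ssh/sshd_config", "/etc/sudoers"]
    # Hypothetical location for the stored baseline of known-good hashes.
    BASELINE_PATH = Path("/var/lib/fim/baseline.json")

    def sha256_of(path):
        """Return the SHA-256 hex digest of a file's contents."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def build_baseline():
        """Record the current hash of every watched file as the known-good state."""
        BASELINE_PATH.parent.mkdir(parents=True, exist_ok=True)
        baseline = {path: sha256_of(path) for path in WATCHED_FILES}
        BASELINE_PATH.write_text(json.dumps(baseline, indent=2))

    def check_integrity():
        """Compare current hashes against the baseline and flag any differences."""
        baseline = json.loads(BASELINE_PATH.read_text())
        for path, known_hash in baseline.items():
            current = sha256_of(path)
            if current != known_hash:
                print(f"ALERT: {path} changed "
                      f"(expected {known_hash[:12]}..., got {current[:12]}...)")

    if __name__ == "__main__":
        if BASELINE_PATH.exists():
            check_integrity()
        else:
            build_baseline()

A real HIDS agent does the same comparison continuously, protects the baseline itself from tampering, and forwards alerts to a central console rather than printing them locally.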

If a system or application has configurations that change regularly, such as antivirus signature databases, monitoring that data with a change control system will generate too much "noise" to effectively surface suspicious change events. For critical systems with stable configurations, however, change control systems are ideal.
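One way to keep that noise manageable, sketched below with purely hypothetical paths, is to scope monitoring explicitly: enumerate the stable, critical configuration files you want alerts for and exclude locations that are expected to change routinely.

    # Hypothetical monitoring scope: alert only on stable, critical configurations
    # and deliberately exclude data that changes routinely.
    MONITORED_PATHS = {
        "/etc/ssh/sshd_config",
        "/etc/pam.d/common-auth",
        "/opt/app/db/security.conf",
    }
    EXCLUDED_PREFIXES = (
        "/var/lib/antivirus/signatures/",  # updated daily; changes here are expected
        "/var/log/",                       # constant churn, not configuration
    )

    def should_alert(changed_path):
        """Return True only for change events worth a human's attention."""
        if changed_path.startswith(EXCLUDED_PREFIXES):
            return False
        return changed_path in MONITORED_PATHS

Most HIDS products express the same idea through include and exclude rules in their agent configuration.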

For example, have your change control system monitor critical network appliance settings, such as firewall and router configurations, for changes. These configurations likely don't change all that often, and when they do, the impact can be high. Therefore, have a HIDS or other change control system generate an alert each time a critical configuration is modified. These alerts are an effective way to detect abnormal or irregular changes that have a direct impact on your organization's security posture.
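As a rough sketch of that alerting loop, assuming configuration backups are already being pulled to a local directory by whatever mechanism your appliances support (the file locations below are hypothetical), the following Python code diffs the latest backup against the last reviewed snapshot and produces an alert containing the changed lines.

    import difflib
    from pathlib import Path

    # Hypothetical paths: where the backup job writes the current config,
    # and the last configuration snapshot that was reviewed and approved.
    LATEST_BACKUP = Path("/srv/netbackups/edge-firewall/latest.cfg")
    APPROVED_SNAPSHOT = Path("/srv/netbackups/edge-firewall/approved.cfg")

    def config_change_alert():
        """Return a readable diff if the device config drifted from the approved snapshot."""
        latest = LATEST_BACKUP.read_text().splitlines()
        approved = APPROVED_SNAPSHOT.read_text().splitlines()
        diff = list(difflib.unified_diff(approved, latest,
                                         fromfile="approved", tofile="latest",
                                         lineterm=""))
        if not diff:
            return None  # no drift; nothing to alert on
        return "Firewall configuration changed:\n" + "\n".join(diff)

    if __name__ == "__main__":
        alert = config_change_alert()
        if alert:
            # In practice, forward this to a SIEM, ticketing system, or on-call channel.
            print(alert)

Feeding these alerts into a log correlation engine ties this practice directly to best practice 12, discussed in the next post.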

Insider threats can severely damage an organization's reputation, competitive advantage, and bottom line, so be sure to take proactive measures to prevent these costly events from occurring. Refer to the complete fourth edition of the Common Sense Guide to Mitigating Insider Threats for a comprehensive understanding of the issues and recommendations mentioned here, and see how change control systems can effectively assist you in your battle against insider threats.

Check back in a few days to read about best practice 12, "Use a log correlation engine or security information and event management (SIEM) system to log, monitor, and audit employee actions," or subscribe to a feed of CERT Program blogs to be alerted when a new post is available.

If you have questions or want to share experiences you've had with insider threats, send email to insider-threat-feedback@cert.org.

Topics: Best Practices, Insider Threat