Auditing Time…

Time is critical in security systems; in particular, systems need to know the correct time. Adequate clock synchronization is important for:

  • Operational Integrity (things happen when they are supposed to happen – backups, tasks, etc.)
  • Reproducibility of events (meaningful logs and records)
  • Validation of SSL certificate expiration (or other tokens, etc.)
  • Correct application of time restricted controls
  • Etc.

So, the big question is: what is “adequate clock synchronization”, and how do we achieve it?

But First, What Time Is It?

Time itself is of course a natural phenomenon. Just like distance, volume, and weight, the measurements for time are artificial and man-made.  The dominant time standard (especially from a computer and therefore Information Security perspective) is Coordinated Universal Time (UTC). This could probably have been called Universal Compromise Time, as it turns out that getting the whole world to drop their cultural biases, deployed technology, etc. and move to a single time system has been a long and complicated road (and it isn’t over yet).

One major component of UTC is agreement on what time it actually is, and how that is determined. There are also questions surrounding how to handle leap seconds, leap years, and other “measurement vs. reality” anomalies. Time (and its measurement) is quite complex in itself, but for the purposes of Information Security (system operation, log correlation, certificate expiration, etc.), the good news is that UTC provides a solid time standard.

Now, all we need to do is synchronize our clocks to UTC !
(and adjust for our local time zone…)

Network Time Protocol (NTP)

Network Time Protocol (NTP) is a well-established, but often misconfigured and misunderstood, internet protocol. NTP utilizes Marzullo’s Algorithm to synchronize clocks in spite of the fact that:

  • The travel time for information passed between systems via a network is constantly changing
  • Remote clocks themselves may contain some error (noise) vs UTC
  • Remote clocks may themselves be using NTP to determine the time

Despite all this, a properly configured NTP client can synchronize its clock to within 10 milliseconds (1/100 s) of UTC over the public internet. Servers on the same LAN can synchronize much more closely. For Information Security purposes, clock synchronization among systems and to UTC within 1/5 or 1/10 of a second should be sufficient.
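You can check how well a client is synchronized with the ntpq command. The output below is hypothetical, but shows the shape of the report (delay, offset, and jitter are in milliseconds, and the asterisk marks the currently selected peer):

ntpq -p

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*clock.example.net .GPS.            1 u   33   64  377   12.340    1.234   0.456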

Classic Misconfiguration Mistakes (and how to avoid them)

The misconfiguration mistakes that folks make tend to be the result of:

  • Overestimating the importance of Stratum 1 servers
  • Over-thinking the NTP configuration

NTP servers are divided into strata based on their time source. A Stratum 1 server is directly connected to a device that provides a time reference. Some examples of reference time sources include:

  • Atomic clocks
  • GPS receivers
  • Radio or CDMA time signals

NTP servers which synchronize with a Stratum 1 time source are Stratum 2 servers, with the Stratum number increasing by one for each level.

Big Mistake – Using a Well Known NTP Reference

The most frequent mistake people make when configuring NTP on a server is assuming that they need (or will get the best time synchronization from) one of the well known atomic clock sources. This tends (though not always) to be a bad idea, because it overloads a small number of servers. Also, a server with a simpler network access path will generally provide better synchronization than a more remote one.

When configuring NTP, it is a good idea to specify several servers; the general rule of thumb is 2-4 NTP servers. If everyone specifies the same servers, those servers become overloaded and their response times become erratic (which doesn’t help things). In some cases, the result is an unintended denial of service attack.

Both Trinity College Dublin in Ireland and the University of Wisconsin at Madison experienced unintended denial of service attacks caused by misconfigured product deployments. In the case of the University of Wisconsin at Madison, NETGEAR shipped over 700,000 routers that were all set up to pull time references from the university’s servers. NETGEAR is not the only router or product manufacturer to have made such an error.

Enter the NTP Pool…

“The pool.ntp.org project is a big virtual cluster of timeservers striving to provide reliable easy to use NTP service for millions of clients without putting a strain on the big popular timeservers.” quoted from pool.ntp.org

Basically, the NTP pool is a set of over 1500 time servers, all of which are volunteering to participate in a large load-balanced virtual time service. The quality and availability of the time service provided by each of the NTP servers in the pool is monitored, and servers are removed if they fail to meet certain guidelines.

Unless a system is itself going to be an NTP server, the NTP Pool is your best bet 100% of the time. It is a good idea to use the sub-pool associated with your region of the globe. Here is a sample configuration (/etc/ntp.conf file):

server 0.us.pool.ntp.org
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
server 3.us.pool.ntp.org

It may not be necessary for you to run the NTP service itself. Running the ntpdate command at boot, and then in a cron job once or twice a day, may be sufficient. The command would look like:

ntpdate 0.us.pool.ntp.org 1.us.pool.ntp.org 2.us.pool.ntp.org 3.us.pool.ntp.org
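For the cron approach, a root crontab entry along these lines should do the job (a minimal sketch; the schedule and server list are illustrative):

# sync the clock at 06:17 and 18:17 every day, logging via syslog
17 6,18 * * * /usr/sbin/ntpdate -s 0.us.pool.ntp.org 1.us.pool.ntp.org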

If you do need to install ntp on Ubuntu, the command is:

sudo apt-get install ntp

and then edit the /etc/ntp.conf file and add the server lines from above. On my OSX workstation, the entire /etc/ntp.conf file is:

driftfile /var/ntp/ntp.drift

server 0.us.pool.ntp.org
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
server 3.us.pool.ntp.org
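After editing /etc/ntp.conf, restart the daemon so it picks up the new servers. On Ubuntu, something like the following should work (the init script name can vary by distribution):

sudo /etc/init.d/ntp restart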

Overthinking the Configuration

The “server” parameter in the configuration file accepts a number of additional options. These are almost never needed, and they can generate a lot of extra traffic on the NTP server. Avoid overthinking the server configuration; in particular, avoid prefer, iburst, and burst.
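For illustration, here is the kind of over-configured server line I am suggesting you avoid; each extra option either increases query traffic or skews server selection:

server 0.us.pool.ntp.org iburst prefer minpoll 4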

When Should I Run the NTP Service Rather Than Use the ntpdate Command?

There is almost no downside to running the NTP service. It is very low overhead and generates almost no network traffic. That being said, the only downside to running the ntpdate command a few times a day is that the clock can drift between runs. If I were performing an audit, and the shop practice was to use ntpdate on everything except infrastructure service machines (directory servers, syslog concentrators, etc.), I would accept that practice. I would be more concerned about how time synchronization was being managed on HSMs, directory services, NIDS, firewalls, etc.

When Should I Run My Own NTP Server?

There are two cases when you should consider running your own server:

  • You have a large number of machines that need time services
  • You wish to participate in the NTP Pool

In both cases, your options for running a server are:

  1. Purchase a time reference (such as a GPS card)
  2. Arrange for authenticated NTP from a Stratum 1 server
  3. Sync with local (short network hop) servers

A Stratum 1 time server appliance or a GPS/CDMA card can be purchased for a cost similar to a rack mounted server (and of course you will need two). If that is just out of the (budgetary) question, then I would look for the time servers to use authenticated time sources. NIST and several other Stratum 1 NTP providers have servers which are only available to folks who have requested access and who authenticate to the server. If time accuracy is critical to risk management, and GPS/CDMA is not available, then I would push for authenticated NTP.
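As a rough sketch, an authenticated server entry in /etc/ntp.conf uses a shared symmetric key; the key file path, key ID, and server name below are illustrative, and the actual key material would come from the provider:

keys /etc/ntp/keys
trustedkey 1
server time.example.gov key 1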

Option 3 is acceptable in the vast majority of situations, including cases where logs and events are only correlated locally, or where no compelling need exists.
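If you do run your own server for a large number of machines, a minimal server-side /etc/ntp.conf might look like this sketch (hostnames are hypothetical; the restrict lines let LAN clients read the time without modifying or querying the daemon):

server ntp1.example.com
server ntp2.example.com

restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1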

NTP and Network Security

NTP uses UDP port 123. In a DMZ or other secure network zone, this traffic should be restricted so it can only reach authorized NTP servers; tools like hping can be used to turn any open port into a file transfer gateway or tunnel.
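As a sketch, egress filtering on a Linux firewall might look like this (10.0.0.5 stands in for an authorized NTP server):

# allow NTP only to the authorized server; drop all other 123/UDP traffic
iptables -A FORWARD -p udp --dport 123 -d 10.0.0.5 -j ACCEPT
iptables -A FORWARD -p udp --dport 123 -j DROP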

One option is to set up a transparent proxy on your firewalls and direct all 123/UDP traffic to your NTP server, or to one you trust. (The risk of the open port is providing a data path out of the organization, not rogue clocks…)
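On a Linux firewall, that redirection can be sketched with a NAT rule (again, 10.0.0.5 stands in for the trusted NTP server):

# transparently rewrite all outbound NTP to the trusted server
iptables -t nat -A PREROUTING -p udp --dport 123 -j DNAT --to-destination 10.0.0.5:123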


Cheers,

Erik


Risk ROI for –Some– Provisioning Solutions…

Today I ran into an interesting post on Matt Flynn’s Identity Management Blog entitled Extending the ROI on Provisioning, in which he discusses the fact that, in addition to the “traditional” value propositions centered on increased efficiency and cost reduction, there are also significant risk management and oversight capabilities to be gained.

All provisioning solutions provide some facilities for:

  • Reduction of paper-based processes in favor of electronic requests and work flows
  • Reduction of manual updates in favor of automated entitlement updates

All provisioning solution providers strive to have a compelling story for these items. Additionally, these were the focus of the first generation of solutions, which emerged in the ’90s.

For the Identity Management programs with which I have been involved, automation and risk management have been equally important. This is somewhat reflected in the definition I use for provisioning:

Provisioning is the processes and systems which:

  • Manage the entire Lifecycle of an Entitlement from request, through approval processes, onto issuance, and eventual revocation
  • Provide transparent views of the status and history of each step in the Entitlement Lifecycle through the creation of durable and detailed records, which include all the information required to provide non-repudiation and event reconstruction for each step in an Entitlement Lifecycle

Note: Fulfilling these objectives always involves a mix of manual and automated activities, technical and procedural controls.

Based on my experience preparing several product selection scorecards in this space, there are two major approaches (philosophies) that provisioning products take:

The provisioning system “sees itself as”…

  • Coordinating identity and entitlement activities among systems with the objective of providing automation

– – – OR – – –

  • Maintaining a single centralized record of reference for identity and entitlement, as well as providing tools to automate approval, issuance, revocation, and reconciliation

The “Centralized Record of Reference” concept is the watershed between these two. Systems designed purely for automation tend to focus on “coordination” of external events, and often do not contain an internal store of entitlements. Systems that take the “Centralized Record of Reference” approach have the ability, through reconciliation, to validate that the entitlements in the “wild” (e.g., in AD, LDAP, within local applications, etc.) match the “official” state (which they maintain). This enables these systems to detect changes and take action (e.g., drop the privilege, report the discrepancy, trigger a follow-up work flow, etc.).
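As a toy illustration of the reconciliation idea (not any particular product’s mechanism): given sorted text exports of the official entitlement record and of a target system such as AD, standard tools can surface the discrepancies. The file names here are hypothetical:

# entitlements present in AD but absent from the official record (rogue grants)
comm -13 official_entitlements.txt ad_entitlements.txt

# entitlements in the official record but missing from AD (failed issuance)
comm -23 official_entitlements.txt ad_entitlements.txt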

Which system is right for you?

This really depends on what percentage of your systems requires tight oversight. If you are in an industry with little IT regulation, and your core business data is low risk, then it may make more sense to invest in routine manual audits of a few systems rather than monitoring your entire IT world. On the other hand, if you are in a highly regulated industry with high-risk data, then the automated oversight and reconciliation capabilities are likely a good fit for you.

FYI, last week I co-taught a one-day class on Identity and Access Management Architecture at RSA 2008. For the last third of the class, Dan Houser and I had a list of advanced topics for the class to vote on. I prepared a module on Provisioning, but alas, it was number 4 out of 7 options and we only had time to cover 3… As a result, a Provisioning slidecast is “coming soon” to the Art of Information Security podcast.

Cheers, Erik