Category Archives: Analysis and Insight

Auditing Time…

Time is critical in security systems; specifically, having systems know the correct time is very important. Adequate clock synchronization is important for:

  • Operational Integrity (things happen when they are supposed to happen – backups, tasks, etc.)
  • Reproducibility of events (meaningful logs and records)
  • Validation of SSL certificate expiration (or other tokens, etc.)
  • Correct application of time restricted controls
  • Etc.

So, the big question is, what is “adequate clock synchronization”, and how do we achieve it ?

But First, What Time Is It ?

Time itself is of course a natural phenomenon. Just like distance, volume, and weight, the measurements for time are artificial and man-made.  The dominant time standard (especially from a computer and therefore Information Security perspective) is Coordinated Universal Time (UTC). This could probably have been called Universal Compromise Time, as it turns out that getting the whole world to drop their cultural biases, deployed technology, etc. and move to a single time system has been a long and complicated road (and it isn’t over yet).

One major component of UTC is an agreement on what time it in fact is, and how that is determined. There are also questions surrounding how to handle leap seconds, leap years, and other “measurement vs reality” anomalies. Time (and its measurement) is quite complex in itself, but for the purposes of Information Security (system operation, log correlation, certificate expiration, etc.), the good news is that UTC provides a solid time standard.

Now, all we need to do is synchronize our clocks to UTC !
(and adjust for our local time zone…)

Network Time Protocol (NTP)

Network Time Protocol (NTP) is a well established, but often misconfigured and misunderstood, internet protocol. NTP utilizes Marzullo’s Algorithm to synchronize clocks in spite of the fact that:

  • The travel time for information passed between systems via a network is constantly changing
  • Remote clocks themselves may contain some error (noise) vs UTC
  • Remote clocks may themselves be using NTP to determine the time

In spite of this, a properly configured NTP client can synchronize its clock to within 10 milliseconds (1/100 s) of UTC over the public internet. Servers on the same LAN can synchronize much more closely. For Information Security purposes, clock synchronization among systems and to UTC, within 1/5 or 1/10 of a second, should be sufficient.

Classic Misconfiguration Mistakes (and how to avoid them)

The misconfiguration mistakes that folks make tend to be the result of:

  • Overestimating the importance of Stratum 1 servers
  • Over-thinking the NTP configuration

NTP servers are divided into strata based on their time source. A Stratum 1 server is directly connected to a device that provides a reference time, such as an atomic clock or a GPS or CDMA receiver.

NTP servers which synchronize with a Stratum 1 time source are Stratum 2 servers, with the Stratum number increasing by one for each level.

Big Mistake – Using a Well Known NTP Reference

The most frequent mistake people make when configuring NTP on a server is assuming that they need to use (or will get the best time synchronization from) one of the well known atomic clock sources. This tends (though not always) to be a bad idea because it overloads a small number of servers. Also, a server with a simpler network path will generally provide better synchronization than a more remote one.

When configuring the NTP protocol, it is a good idea to specify several servers. The general rule of thumb is 2-4 NTP servers. If everyone specifies the same servers, then those servers become overloaded and their response times become erratic (which doesn’t help things). In some cases, an unintended denial of service attack is caused.

Both Trinity College of Dublin, Ireland and the University of Wisconsin at Madison experienced unintended denial of service attacks caused by misconfigured product deployments. In the case of the University of Wisconsin at Madison, NETGEAR shipped over 700,000 routers which were set-up to all pull time references from the university’s servers. NETGEAR is not the only router or product manufacturer to have made such an error.

Enter the NTP Pool…

“The pool.ntp.org project is a big virtual cluster of timeservers striving to provide reliable easy to use NTP service for millions of clients without putting a strain on the big popular timeservers.” quoted from pool.ntp.org

Basically, the NTP pool is a set of over 1500 time servers, all of which are volunteering to participate in a large load-balanced virtual time service. The quality and availability of the time service provided by each of the NTP servers in the pool is monitored, and servers are removed if they fail to meet certain guidelines.

Unless a system itself is going to be an NTP server, using the NTP Pool is your best bet 100% of the time. It is a good idea to use the sub-pool associated with your region of the globe. Here is a sample configuration (/etc/ntp.conf file):

server 0.us.pool.ntp.org
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
server 3.us.pool.ntp.org

It may not be necessary for you to run the NTP service itself. Running the ntpdate command at boot and then in a cron job once or twice a day may be sufficient. The command would look like:

ntpdate 0.us.pool.ntp.org 1.us.pool.ntp.org 2.us.pool.ntp.org 3.us.pool.ntp.org
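
If you go the cron route, entries along these lines are usually all that is needed (a sketch; the schedule and the -s flag, which sends ntpdate’s output to syslog, are just my suggestions):

# /etc/crontab – sync at boot and twice a day
@reboot         root    ntpdate -s 0.us.pool.ntp.org 1.us.pool.ntp.org 2.us.pool.ntp.org
15 5,17 * * *   root    ntpdate -s 0.us.pool.ntp.org 1.us.pool.ntp.org 2.us.pool.ntp.org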

If you do need to install ntp on Ubuntu, the commands are:

sudo apt-get install ntp

and then edit the /etc/ntp.conf file and add the server lines from above. On my OSX workstation, the entire /etc/ntp.conf file is:

driftfile /var/ntp/ntp.drift

server 0.us.pool.ntp.org
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
server 3.us.pool.ntp.org

Overthinking the Configuration

The “server” parameter in the configuration file has a number of additional directives that can be specified. These are almost never needed, but can generate a lot of extra traffic on the NTP server. Avoid over thinking the server configurations and avoid using prefer, iburst, or burst.

When Should I Run NTP Service Rather Than Use The NTPDate Command ?

There is almost no downside to running the NTP service. It is very low overhead and generates almost no network traffic. That being said, the only downside to running the ntpdate command a few times a day is that the clock can drift more between updates. If I were performing an audit, and the shop practice was to use ntpdate on everything except infrastructure service machines (directory servers, syslog concentrators, etc.), I would accept that practice. I would be more concerned about how time synchronization was being managed on HSMs, directory services, NIDS, firewalls, etc.

When Should I Run My Own NTP Server ?

There are two cases when you should consider running your own server:

  • You have a large number of machines that need time services
  • You wish to participate in the NTP Pool
In both cases, your options for running a server are:
  1. Purchase a time reference (such as a GPS card)
  2. Arrange for authenticated NTP from a Stratum 1 server
  3. Sync your servers with local (short network hop) upstream time sources

A Stratum 1 time server appliance or a GPS/CDMA card can be purchased for costs similar to a rack mounted server (of course you will need two). If that is just out of the (budgetary) question, then I would look for the time servers to use authenticated time sources. NIST and several other Stratum 1 NTP providers have servers which are only available to folks who have requested access, and are authenticating to the server. If time accuracy is critical to risk management, and GPS/CDMA is not available, then I would push for authenticated NTP.

Option 3 is acceptable in the vast majority of situations, including cases where logs and events are only correlated locally, or where no compelling need exists.

NTP and Network Security

NTP uses UDP on port 123. This traffic should be restricted in DMZ or other secure network zones to only route to authorized NTP servers. Tools like hping can be used to turn any open port into a file transfer gateway or tunnel.

One option is to set up a transparent proxy on your firewalls and direct all 123/UDP traffic to your NTP server or to one you trust. (The risk of the open port is that it provides a data path out of the organization, not rogue clocks…)
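
On a Linux-based firewall, a minimal sketch of both ideas might look something like this (the NTP server address 192.0.2.10 and the rules themselves are illustrative, not drop-in):

# Permit DMZ hosts to reach only the authorized NTP server on 123/UDP
iptables -A FORWARD -p udp --dport 123 -d 192.0.2.10 -j ACCEPT
iptables -A FORWARD -p udp --dport 123 -j DROP

# …or transparently redirect all outbound NTP to a server you trust
iptables -t nat -A PREROUTING -p udp --dport 123 -j DNAT --to-destination 192.0.2.10:123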


Cheers,

Erik


Optimize Your RSA, Part 3 – Network, Network, Network…

Probably the single most significant advantage to attending a conference is the fact that it pulls so many people with a common interest into one place and time. If the interaction amongst participants weren’t important, it would be very difficult to make a compelling argument for in-person attendance.

Talk to People – Join in the Conversation

In the last year, I can think of 10 times when I was able to call (or I was called by) a colleague whom I met at a past RSA. In the professional development series with Lee Kushner (link), ideas about developing, having, and being able to utilize your professional network are going to be a recurring theme. If you are attending RSA (or any large event), don’t pass up the opportunity to meet and connect with new people.

It can be Easy…

Don’t be misled into thinking you need to “work the room” to meet people at RSA. 90% of the people who will be in Moscone Center are there because Information Security is important to them, either as a practitioner or as a provider. (The other 10% are there to make sure everything runs smoothly.)

So, you will be surrounded by people who share at least that one thing in common with you. Reaching out can be very easy. The people you are in line with, or waiting for a session to start with, almost all do something connected to what you do. Just saying hello is all it takes.

Leverage Events

There are a number of events that can make networking even more effective. The conference itself has roundtable sessions that are 100% focused on establishing peer-to-peer communication on targeted topics. Any vendor sponsored dinner or event also creates easy opportunities.

New to Networking? 

The RSA conference understands the value of the networking opportunity it is creating. As a result, there is a “Networking 101” session on Monday evening at 5:15, immediately following the First-Time Delegate Orientation. Each year the conference brings in someone who has professional training experience in helping people network – helping people connect. This is always a great session to attend if you have the time, and are around the conference center on Monday evening.

Cheers, Erik

Optimize Your RSA, Part 2 – Session Tips…

There is a TON of stuff to do at RSA if you are going, and managing all of that can be quite difficult. One of the things that I find difficult to do every year is select the sessions that I am going to. There are a few tools that the conference provides to make this easier.

Let’s take a look at the Session Catalog.

See Who’s Speaking

I have my own personal list of folks who always have great presentations and really pack a lot of punch for me. But the attendance at the conference is so diverse that my list would certainly not work for everyone. The conference itself measures and tracks speaker performance. You know those forms they hand you as you walk into the session? Turns out that they use that data, and they even share it with you. When using the Session Catalog and the printed materials, you may notice a star next to some of the names. These are the folks who have had the strongest feedback during past conferences.

If this is your first RSA, it may be worth your while to ask folks who have attended in the past and who have similar interests which speakers stood out to them. If you are a member of the RSA Conference group on LinkedIn (link), you could even post a question about “Best Session for X”. (Which I have done…)

Preview The Slides

RSA has always made the slides available in advance. Usually this was on media (CD/USB) handed out at the conference. (So, “in advance” meant the day before…) But now they are available for most sessions right in the Session Catalog. (Note: you need to be logged in to the site before you visit the page to see these.)

Post Session…

There is a lot of time and energy that goes into being a speaker. Please, help your speaker and the conference, and complete the evaluation forms. And, if a session clicks for you – don’t be shy – meet the speaker. Most of the speakers are presenting because they are committed to the mission and the profession. Participation and feedback are the biggest rewards any speaker can ask for from the audience – don’t hold back.

Hope this is helpful – see you in SFO.

Cheers, Erik

Optimize Your RSA, Part 1 – Expo Management

It is one week until RSA, and now is the time to start planning to make the most of your trip. RSA has one of the largest (if not the single largest) vendor Expositions for Information Security. Every year I use this as a one-week refresher course on the products and services that are available. Frequently the class sessions are very valuable to me in terms of my long-term professional development, but (for my employer) the information I collect on the Expo floor is valuable almost immediately.

Screen Now and Benefit All Year

I am very selective about the vendors with whom I have meetings. Sure, I am missing out on free lunches, but the fact is that I don’t have endless time to meet with people. As a result, I screen and, whenever possible, pre-qualify vendors. Most of the time I spend on the RSA Expo floor is spent identifying who I don’t need to meet with, and establishing whom I definitely do want to meet with in the following year.

Understand Your Organization’s or Client’s Needs !

In general, you should have a good understanding of your employer’s or client’s needs… Some key things to understand before heading out to the exposition:

Q: What are the emerging needs of your organization?

What are the areas of concern for your CISO, Risk Mgmt., LOB partners, or other important constituents? In the week or two leading up to RSA, I ping my CISO, key LOB partners, etc. to find out what concerns they have, what vendors have been hounding them for meetings, what alternatives they may need, etc.

Q: What products or services are subject to change?

I feel that, even for our deployed products, it is incumbent on me as a good corporate citizen to make sure those products are still competitive in the market. Information about the competition is especially important during contract renewals. No one negotiates a win-win deal without being fully informed.

Q: Who are your key partners, and what new offerings do they have?

Who are the top vendors whose products you have, and love? Make sure to take the opportunity to visit them, understand emerging features, and make sure that you are getting the most out of your existing investment.

Q: Who will your organization generally buy or not buy from?

Many organizations have firm rules about the types of organizations they will purchase from; know what these are. My experience is that if a product is truly compelling, there is always a way for purchasing to see that and make a deal happen. But if you sense a weak offering from a company that is going to be a hard sell to your organization, save time for both yourself and the vendor – tell them, and move on.

Be There Monday Night

Monday evening at RSA, the Expo opens to Delegates only. The combination of fewer people on the expo floor, booth staff who are not yet burned out, and free food makes this the ideal Expo floor time.

Arrange Key Visits In Advance

As I already mentioned, I try to pre-qualify vendor meetings. There are folks whom I know I need to be meeting with (established relationships, emerging solutions, emerging risk needs, etc.), there are a number of folks I know I don’t want to waste time on (lack of a compelling product story, people who wasted my time in the past, etc.), and there are also a number of folks in the gray area in between.

From November on, I start asking folks in the gray area if they are going to have an Expo presence at RSA. If they are, I ask them to follow up with me before the show with a booth # and contact name. After I arrive on-site and have the conference book in hand, I add to the list. I avoid setting up specific times, because with everything that happens at the show my schedule is too dynamic.

For each of these “quick meet and greets”, I prep one of my business cards in advance. I put the booth #, contact name, and a subject clue on the back of the card. If my contact isn’t at the booth, I leave the card. When you do in fact follow up, you build credibility and the relationship, even if there is no service-to-need fit at this time.

Be Quick and Targeted

If the printed information, name, etc. on the booth catches my eye, I stop for a quick visit. I try to get the facts quickly, in 3-6 min. The secret is to not be afraid to ask tough questions quickly (but politely), such as:

  • What’s compelling about your offering?
  • Who is your primary competition?
  • Do you have hard data, or a case study you can forward to me?
  • Do you have reference accounts for the use cases that are most important to my organization?
  • What industry analysis (Gartner, Burton, etc.) has been published on this space? Was your product included?

Be Specific About Follow-up

If I have an immediate need, I ask for contact info and I initiate the follow-up before I leave the show. If I am interested in follow-up for the long term, the next budget cycle, etc., then I usually ask them to follow up later in the year (e.g., Q3/Q4). Q2 is always a very busy time for me and the people around me, so I try to defer long-term information and knowledge capture until later in the year.

Hope this is helpful – see you in SFO.

Cheers, Erik

Secure Your Linux Host – Part 2: Secure SSH

SSH is the preferred (perhaps de facto) remote login service for all things UNIX. The old-school remote login was telnet. But telnet was completely insecure.  Not only was the confidentiality of the session not protected, but the password wasn’t protected at all – not weak protection – no protection.

And so SSH (aka Secure Shell) was developed… But it has not been without its failings. There are two “flavors” of SSH: Protocol 1 and 2. Protocol 1 turned out to have pretty serious design flaws. The hack of SSH using the Protocol 1 weaknesses was featured in the movie Matrix Reloaded (Trinity hacking SSH with nmap). So, by 2003, the flaws and the script kiddie attack were understood well enough for the Wachowski Brothers to immortalize them.

Another concern to watch out for is that SSH has port-forwarding capabilities built into it. So, it can be used to circumvent web proxies and pierce firewalls.

All in all though, SSH is very powerful and can be a very secure way to remotely access either the shell or (via port forwarding) the services on your host.

To give you a taste of SSH’s port-forwarding capabilities, here are two quick examples (the host names are placeholders):
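
# Forward local port 8080 to a web server that sits behind the remote host
$ ssh -L 8080:intranet.example.com:80 YourUser@gateway.example.com

# Turn the SSH session into a local SOCKS proxy on port 1080
$ ssh -D 1080 YourUser@gateway.example.com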

Be aware that SSH is part of a family of related utilities; check out SCP, too.

Configuration

After installing the SSH server (perhaps: apt-get install openssh-server), you will want to turn your attention to the configuration file /etc/ssh/sshd_config

Here are a few settings to consider:

Protocol 2
PermitRootLogin no
Compression yes
PermitTunnel yes
Ciphers aes256-cbc,aes256-ctr,aes128-cbc,aes192-cbc,aes128-ctr
MACs hmac-sha1,hmac-sha1-96
Banner /etc/issue.net

  1. The “Protocol” setting should not include “Protocol 1”. It’s broken; don’t use it.
  2. PermitRootLogin should never be “yes” (so, of course that is the default !). The best option here is “no”, but if you need or want to have direct remote root access (perhaps as a rescue account), then the “without-password” option is better than “yes”. The without-password option will force you to set up and use certificate (key based) authentication for root access.
  3. Unless your host’s CPU is straining to keep up, turn on compression. Turn it on especially if you are ever using a slow network connection (and who isn’t).
  4. If you are not going to access services remotely using SSH as sort of a micro-VPN, then set this to “no”. Because I use the tunneling feature, I have it turned on.
  5. OK; I work and consult on cryptographic controls, so I restrict SSH to the FIPS 140-2 acceptable encryption algorithms.
  6. Likewise, I restrict the Message Authentication Codes (MACS) to stronger hashes.
  7. Some jurisdictions seem to not consider hacking a crime unless you explicitly forbid unauthorized access, so I use a banner.
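
After editing sshd_config, it is a good idea to check the file for errors and restart the service while you still have a working session open. On an Ubuntu box it looks roughly like this (the exact service command varies by distribution):

$ sudo sshd -t
$ sudo /etc/init.d/ssh restart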

Sample Banner

It seems that (at least at one point in the history of law & the internet) systems which did not have a login banner prohibiting unauthorized use may have had difficulty punishing those that abused their systems. (Of course, it is pretty hard to do so anyway, but…) Here is the login banner that I use:
* - - - - - - - W A R N I N G - - - - - - - - - - W A R N I N G - - - - - - - *
*                                                                             *
* The use of this system is restricted to authorized users. All information   *
* and communications on this system are subject to review, monitoring and     *
* recording at any time, without notice or permission.                        *
*                                                                             *
* Unauthorized access or use shall be subject to prosecution.                 *
*                                                                             *
* - - - - - - - W A R N I N G - - - - - - - - - - W A R N I N G - - - - - - - *

Account Penetration Countermeasures

Within hours of establishing an internet accessible host running SSH, your logs will start to show failed attempts to log into root and other accounts. Here is a sample from a recent Log Watch report:

--------------------- SSHD Begin ------------------------
Failed logins from:
58.222.11.2: 6 times
211.156.193.131: 1 time
Illegal users from:
60.31.195.66: 3 times
203.188.159.61: 1 time
211.156.193.131: 3 times
Users logging in through sshd:
myaccount name:
xx.xx.xxx.xx: 3 times
---------------------- SSHD End -------------------------

One of the most effective controls against password guessing attacks is locking out accounts after a predetermined and limited number of password attempts. This has a tendency to turn out to be a “three strikes and you’re out” rule.

The problem with applying such a policy to a remote service like SSH, as opposed to your desktop login/password, is that blocking the password guessing attack turns into a Denial of Service attack. Any known (or guessed) login ID on the remote machine will end up being locked out due to the remote attacks.

Enter Fail2ban: Rather than lock out the account, Fail2ban blocks the IP address. Fail2ban will monitor your logs, and when it detects login or password failures that are coming from a particular host, it blocks future access (to either that service or your entire machine) from that host for a period of time. (Oh, and you may notice I said blocks access to the “service”, and not “SSH” – that’s because Fail2ban can detect and block Brute Force Password attacks against SSH, apache, mail servers, and so on…)

How to Forge has a great article on setting up Fail2ban – Preventing Brute Force Attacks With Fail2ban – check it out.

One tweak for now. As I tend to use certificate authentication with SSH (next topic), I am rarely logging in with a password. As a result, I tend to use a bantime that is long, ranging from a few hours on up. Three guesses every few hours really slows down a Brute Force Attack! Also, check out the ignoreip option, which can be used to make sure that at least one host doesn’t get locked out. (You can lock yourself out with Fail2ban… I have done it…)
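
For reference, here is a sketch of the relevant jail settings (the option names are standard Fail2ban settings; the values, the 192.0.2.5 address, and the jail name are examples – check your distribution’s jail.conf):

[DEFAULT]
ignoreip = 127.0.0.1 192.0.2.5
bantime  = 14400
maxretry = 3

[ssh]
enabled = true
port    = ssh
filter  = sshd
logpath = /var/log/auth.log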

SSH Certificate Based Authentication Considerations

Secure Shell offers the ability to use certificate based authentication with a self-signed certificate. There are two ways you might consider using this:

  1. With a password protecting the private key
  2. With no password required

Please note: When you establish certificate based authentication with SSH, you will generate a public/private key pair on your local computer. Only the public key is copied up to the server you wish to access. The private key always stays on your local computer.

During the process of generating the private and public key pair, you will be asked if you want to password protect the private key. Some things to consider:

  • Will this ID be used for automated functional access ?

If you are creating the certificate based authentication so that a service can access data or run commands on the remote machine, then you will not want to password protect the local file. (If you do, you will end up including the password in the scripts anyway, so what would be the point?)

Personally, I have backup scripts which pull either data or snapshots on a regular basis. Google “rsync via ssh” for tips on this, or “remote commands with ssh” for tips and ideas. (Also, I may cover my obsessive compulsive backups in a later post.)
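
As a sketch of what such an automated pull can look like (the paths, user, and host are placeholders, and backup_key is a private key created without a passphrase):

$ rsync -az -e "ssh -i /home/backup/.ssh/backup_key" backupuser@www.example.com:/var/www/ /backups/www/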

  • This ID will be used for a rescue account

In this case the certificate is usually created to avoid password expiration requirements. If it is a rescue account, it often logs into root. Any time you use certificate access for root, the private key should be password protected. Rescue accounts are often stored on centralized “jump boxes” and are expected to only be used during a declared emergency of some kind (such as a full system lockout due to a password mis-synchronization.)

These private keys should always be password protected.

If someone has access to backups or disk images of the jump box, or otherwise gets access to your .ssh directory, and you have not password protected the private key, then they own the account (e.g., they can use the public/private key pair from any box).

  • Convenient remote logons…

The most common use of certificate based authentication for SSH is in fact to log you into the remote box without having to type passwords. (I do this, too…) But there are a few things to think about (these are all good general recommendations, but I consider them requirements when using an automated login…)

  1. Automatic login should never be used on a high-privilege account (e.g., root)
  2. If those accounts have sudo privileges, sudo should require a password
  3. A new certificate (public and private key pair) should be created for each machine you want to access the remote server from (e.g., desktop, laptop, etc.).  Do not reuse the same files.
  4. The certificate should be replaced occasionally (perhaps every 6 months).
  5. Use a large key and use the RSA algorithm option (e.g., ssh-keygen -b 3608 -t rsa)

SSH Certificate Based Authentication Instructions

So, without further ado… Let’s set up a Certificate for authentication.

Part 1 – From the client (e.g. your workstation, etc…)

First, confirm that you can generate a key.

$ ssh-keygen --help

The options that are going to be of interest are:

  • -b bits  Number of bits in the key to create
  • -t type  Specify type of key to create

DSA type keys, you will note, have a key length of exactly 1024. As a result, I choose RSA with a long key. My recommendation is that you take 2048 as a minimum length. I am pretty paranoid, and I have a strong background in cryptography, but I have never used a key longer than 4096.

The longer the key, the more math the computer must perform while establishing the session. After the session is established, then one of the block-ciphers discussed above performs all of the crypto. If you are making a key for a slow device (like a PDA) or a microcontroller based device, then use a shorter key length. Regardless, actually changing the keys regularly is a more secure practice than making a large one that is never changed.

$ ssh-keygen -b 3608 -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/erikheidt/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/erikheidt/.ssh/id_rsa.
Your public key has been saved in /Users/erikheidt/.ssh/id_rsa.pub.
The key fingerprint is:
43:69:d8:8e:c4:af:f8:8b:5a:2d:db:75:91:fd:06:be erikheidt@Trinity.local
The key's randomart image is:
+--[ RSA 3608]----+
|                 |
|     . o .       |
|      + =        |
|     . *   o     |
|      . S o o    |
|     o . . o o   |
|    + o . . . o  |
|   . * . .   o   |
|  ..o +.    E    |
+-----------------+

Now, make sure your .ssh directory is secured properly…

$ chmod 700 ~/.ssh

Next, you need to copy the public key (only) to the server or remote host you wish to login to.

$ cd ~/.ssh

$ scp id_rsa.pub YourUser@Hostname:

Now we have copied the file up to the server….

Part 2 – On the Server or remote host….

Logon to the target system (probably using a password) and then set things up on that end…

$ ssh YourUser@Hostname

$ mkdir .ssh
$ chmod 700 .ssh
$ cat id_rsa.pub >> ~/.ssh/authorized_keys
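
Depending on the server’s StrictModes setting, sshd may also refuse keys kept in a file with loose permissions, so it does not hurt to tighten it as well:

$ chmod 600 ~/.ssh/authorized_keys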

Done ! Your next login should use certificate based authentication !

I hope this posting on SSH was useful.

Cheers, Erik

Being Probed for phpMyAdmin ?

In Secure Your Linux Host – Part 1 I recommended using Log Watch to keep an eye on what may be happening with your host. Well, today’s review of my own Log Watch indicates that I am being probed for phpMyAdmin. (Someone wants to abuse my database…)

Here is a sample from the log:

401 Unauthorized
/admin/mysql/main.php: 1 Time(s)
/admin/php-my-admin/main.php: 1 Time(s)
/admin/php-myadmin/main.php: 1 Time(s)
/admin/phpMyAdmin-2.2.3/main.php: 1 Time(s)
/admin/phpMyAdmin-2.2.6/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.1/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.4/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5-pl1/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5-rc1/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5-rc2/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.6-rc1/main.php: 1 Time(s)

Now I have seen activity like this before, but I thought this provided a good example of the increased awareness that scanning through the Log Watch report can provide.

This also provides some solid data in support of having some other controls in place if you are in fact running phpMyAdmin (or even MySQL). Most of the time the passwords that are used to access the content of databases are not used by humans – they are stored in the properties files of the applications that are using the database.

Ok, So Your Logs are Letting You Know What is Being Probed, Now What ?

This awareness allows you to make sure that you are adequately protecting that which is being attacked.  In this case, I already have controls in place to manage this risk. Let’s discuss them.

Lock Down Web Access to Administrative Tools

phpMyAdmin (usually) requires a password (more on that in a second), but you can also add an additional layer of security to your web-based administrative services by adding authentication at the http server itself.

Apache has a nice tutorial: Authentication, Authorization and Access Control

If you run web-based administrative tools, you may wish to lock down the web paths that contain them. In addition to providing a first line of defense, this will reduce the information available to attackers during the reconnaissance portion of their attacks.

If you lock down “mywebsite.com/admin” as described in the Apache How-To above, and you have additional directories under it, such as “mywebsite.com/admin/phpMyAdmin” and “mywebsite.com/admin/keys2Kingdom”, they will not be visible to the attacker (until they guess the password…).
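
As a rough sketch of what that looks like with the standard Apache directives from that How-To (the filesystem paths and the user name here are placeholders):

htpasswd -c /etc/apache2/.htpasswd adminuser

<Directory "/var/www/mywebsite.com/admin">
    AuthType Basic
    AuthName "Administrative Tools"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Directory>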

Confirm Strong Passwords

Functional IDs (also called service accounts) are used for application-to-application (e.g. WordPress to MySQL) authentication, and are (and should be) only handled by humans during installation and maintenance activities. Functional IDs should be long, very random, and not contain words or memorable substrings. (Functional accounts often do not have password retry limits, which heightens the importance of the strength of the password.)

I sometimes use the GRC Strong Password Generator (ah yeah, my own site gotentropy.artofinfosec.com is down right now…). You can also generate strong passwords using OpenSSL from the Linux command line:

openssl rand -base64 60

In both cases I prefer to cut and paste a long substring of 40 to 50 characters (dropping a few characters off both ends, along with any “=” base64 padding the openssl command produces), and then add a few characters of my own.
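
If you would rather script that trimming step, a one-liner in the same spirit (the character range is arbitrary) would be:

openssl rand -base64 60 | tr -d '\n=' | cut -c 5-48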

Now, I would never expect an application user to type a 40+ character password. But for a Functional ID – why not ? The MySQL root password and every db user’s password should be very long and complex, especially if the host is internet accessible. (If you are using phpMyAdmin, it has a very good password generator included in the “Add User” functionality.)

We will be discussing other ways to protect password-based systems from remote attacks in “Secure Your Linux Host – Part 2”… Out soon…

Cheers, Erik

( Part of the Secure Your Linux Host series…)

Risk ROI for –Some– Provisioning Solutions…

Today I ran into an interesting post on Matt Flynn’s Identity Management Blog entitled Extending the ROI on Provisioning in which he discusses the fact that, in addition to the “traditional” value propositions centered around increased efficiency and cost reduction, there are also significant risk management and oversight capabilities that can be had.

All provisioning solutions provide some facilities for:

  • Reduction of paper-based processes in favor of electronic requests and work flows
  • Reduction of manual updates in favor of automated entitlement updates

All provisioning solution providers strive to have a compelling story for these items. Additionally, these were the focus of the first generation of solutions which emerged in the ’90s.

For the Identity Management programs with which I have been involved, automation and risk management have been equally important. This is somewhat reflected in the definition I use for provisioning:

Provisioning is the processes and systems which:

  • Manage the entire Lifecycle of an Entitlement from request, through approval processes, onto issuance, and eventual revocation
  • Provide transparent views of the status and history of each step in the Entitlement Lifecycle through the creation of durable and detailed records, which include all the information required to provide non-repudiation and event reconstruction for each step in an Entitlement Lifecycle

Note: Fulfilling these objectives always involves a mix of manual and automated activities, technical and procedural controls.

Based on my experience preparing several product selection scorecards in this space, there are two major approaches (philosophies) that provisioning products take:

The provisioning system “sees itself as”…

  • Coordinating identity and entitlement activities among systems with the objective of providing automation

– – – OR – – –

  • Maintaining a single centralized record of reference for identity and entitlement, as well as providing tools to automate approval, issuance, revocation, and reconciliation

The “Centralized Record of Reference” concept is the watershed between these two. The systems that are designed purely for automation tend to focus on “Coordination” of external events. These systems often do not contain an internal store of entitlements. The systems that maintain a “Centralized Record of Reference” approach have the ability, through reconciliation, to validate that the entitlements in the “wild” (e.g., in AD, LDAP, within local applications, etc.) match the “official” state (which they maintain). This enables these systems to detect changes and take action (e.g., drop the privilege, report the discrepancy, trigger a follow-up work flow, etc.)

Which system is right for you?

This really depends on what percentage of your systems require tight oversight. If you are in an industry with low-IT regulation, and the data of your core business is low risk, then it may make more sense to invest in routine manual audits of a few systems, rather than monitoring your entire IT world. On the other hand, if you are in an industry that is highly regulated, with high-risk data, then the automated oversight and reconciliation capabilities are likely a good fit for you.

FYI, last week I co-taught a one-day class on Identity and Access Management Architecture at RSA 2008. For the last third of the class, Dan Houser and I had a list of advanced topics for the class to vote on. I prepared a module on Provisioning, but alas it was number 4 out of 7 options, and we only had time to cover 3… As a result, a Provisioning slidecast is “coming soon” to the Art of Information Security podcast.

Cheers, Erik

What do the Cold Boot Crypto Attack, DVD Players, and MiFare tell us about the Future of Biometrics?

Last week Slashdot pointed me to an “interesting” article in The Standard:
Understanding anonymity and the need for biometrics.

In fact, I found the article to be rather upsetting. Not because of the article’s thesis that strong authentication through a national ID program would not necessarily pose a threat to privacy; but rather, because of their naive (and irresponsible) handling of the realities of the biometric authentication challenge. They gloss over the real security challenges with creating a national biometric infrastructure. Here are the two quotes that are most misleading:

  • “Confusing privacy with anonymity has delayed implementation of robust, virtually tamper-proof biometric authentication to replace paper-based forms of ID that neither assure privacy nor reliably prove identity.”
  • “This emerging technology makes it virtually impossible to assume someone else’s unique identity.”

The problem that the authors are glossing over is that no such technology exists today, and it is unlikely to ever exist. Now, to be fair, I am assuming that a critical success factor for any national biometric program, as described, would be that the authentication devices have to be available, and usable, anyplace paper-based IDs can be used today. This of course implies that the authenticator must be an inexpensive, commodity device, easy to purchase, maintain, and operate. Such a device would have to be even more ubiquitous than the electronic credit card machine.

The problem is that the authenticator itself may be in the possession of the attacker (Perhaps after you authenticate your legitimate purchase the clerk desires to use your identity herself…). In the history of security controls, when the attacker has unsupervised at-will physical access, the attacker wins. Here are a few examples:

  • Defeated copy protection on DVDs ( more & more info)
  • Cold Boot Crypto Attack on hard disk encryption (more info)
  • MiFare RFID Cards (more info)
  • Skimming devices attached to ATM machines to steal card and PIN data (more info)

Of course, all of these systems worked in the lab. But when a security system is widely deployed, it has to withstand an enormous amount of scrutiny, and minor flaws will be exploited. And of course, the greater the financial gain, the greater the time and energy attackers invest in trying to defeat the system. The authors of the article ignore these issues, idealistically assuming biometrics will just work.

Now, of course there are lots of examples where biometrics work very effectively. But I would propose that biometric authentication is most useful when the authentication device is physically secure and the authentication itself is supervised. The MiFare example above also demonstrates two other issues:

  • The system chose not to implement a reviewed and standard cryptographic algorithm – always a bad idea
  • MiFare was able to sell 1 billion cards and authenticators before the system failed

The cost of investing in a national biometric authentication program, and then having the security fail, is enormous. Can you imagine deploying a biometric authentication infrastructure to every bank, police car, restaurant, shop, etc. and then having video on YouTube of it being defeated ?

– Erik

BTW, Maybe the attacker doesn’t even need to tamper with the device -> ftp://ftp.ccc.de/pub/video/Fingerabdruck_Hack/fingerabdruck.mpg