Category Archives: Site Info

AoIS Resurrection… to blogs.Gartner.com

As you may have noticed there has been no activity on Art of Information Security for a long time. Things got really busy in my work and personal lives, and well, something had to give.

One of those changes is a move to the Security and Risk Management Strategies team at Gartner. I will be blogging at blogs.gartner.com/erik-heidt. So, if you have been a fan of the content on Art of Information Security, please keep an eye on that blog.

My current coverage areas include:

1. IT GRC practice strategy
2. IT Risk Management (and measurement)
3. Assessing cloud risk decisions
4. Cryptographic controls and key management
5. Application security

All the best.

Cheers, Erik


((AoIS Webcast)) Cryptography: Issues and Insight from Practical Implementations

Kevin Flanagan and I delivered a presentation on Cryptography at this year’s RSA 2010. Now, doing a cryptography presentation at RSA is a bit like putting a target on yourself that says “please shoot me down!”. Well, the presentation was very well received, and the RSA conference folks have asked Kevin and me to do an encore presentation via Webcast. A few quick facts:

This is not your math teacher’s Cryptography presentation!
The core of this presentation is about discussing the various points in an application where a cryptographic control, primarily encryption, can be applied. Kevin and I walk through an expanded version of the 3-tier application architecture. We go beyond discussing the encryption controls available to the web server, application server, and database backends, expanding our scope to include the PC, storage, backup, and file systems. At each point we discuss the kinds of controls that can be applied, the risks that those controls help manage, and the risks which are often overlooked and remain unaddressed.

This presentation is more focused than the RSA Version from March.
In our presentation in March we tried to also include an introduction to Key Management. This proved to be too much to bite off, so we have pruned that material from the presentation that is planned for the Webcast. Kevin and I may be submitting a presentation proposal for RSA 2011, 100% dedicated to Key Management. (Feedback on that idea would be of great value… Feel free to comment below.)

In fact, I am always interested in feedback from readers of AoIS. So, if you tune in to the Webcast, please drop me a note. I personally find web and teleconference presentations much more difficult than the in-person kind…

When and Where?
The Webcast is this Wednesday (June 23, 2010) at 1:00 PM EDT, 10:00 AM PDT, 5:00 PM GMT.
Here is a link to the registration: Webcast: Cryptography: Issues and Insight from Practical Implementations

Cheers, Erik

Add Some Architecture to RSA 2010

Once again the RSA Conference is giving Dan Houser and me the opportunity to provide a one-day Identity Management Architecture tutorial. One-day tutorials can be added to your RSA Conference registration for a small fee. These sessions are designed to provide more depth and detail on particularly important topics.

This year’s program is titled “Foundations for Success: Enterprise Identity Management Architecture”, and the content follows the successful pattern of past years. The morning will focus on establishing a base of understanding, and the afternoon will be spent covering modules selected by the attendees (the description from the RSA website is attached below).

This year I am especially excited, as I am leading a major Information Security infrastructure initiative that involves the complete build-out of the Information Security stack for a new company (actually a $2.4B spin-off). I have just completed the full requirements, RFP, and product selection cycle for an Identity Management solution. At the time of the class, I will be at the mid-point of the provisioning system’s deployment, and will have Password Vaulting in production. This project has been a source of great challenges and new insights, all of which I hope to bring with me on March 1st (well, the insights anyway).

Identity Management is at the core of a successful Information Security program. In many ways, it is the primary technical control for policy enforcement and oversight. In addition to the important role Identity Management plays in risk management and oversight, many of your business partners think of Identity Management “as” Information Security. The question of “how do I get access to X” is a question near and dear to the heart of your business partners. Many of the security controls we all work with day to day are largely invisible to business partners, but password problems, access request delays, and audit findings are very visible to them.

Information about the tutorial is available from the RSA 1-Day Tutorials page, but here is a copy of the tutorial description:

Tutorial ID : TUT-M21

Foundations for Success: Enterprise Identity Management Architecture

Identity and Access Management is the foundation for access controls in the Enterprise, a mission-critical IT function that is both the lifeblood of your business and a frustrating, difficult beast to tame. Your IdM infrastructure is more complicated, with more moving parts and more partners across the enterprise, than any other security-related service.

This interactive session, taught by experienced IdM veterans and practitioners, provides an architectural view to resolving identity challenges, and will provide detailed and informative discussions on directory services, web access management, Single Sign-On, federated identity, authorization, provisioning, and more. The morning session will provide an overview of the foundations of IdM, while the afternoon will provide a customized, detailed, and interactive session focused on the specific identity disciplines attendees find most challenging.

This workshop will cover:

  • Principles of Identity and Access Management and implementation strategies
  • Infrastructure architecture — critical underlying processes to run a successful enterprise
  • Web-based authentication & Web Access Management
  • Selling Identity strategy in the C-suite
  • Directory Services – Enterprise, meta-directories and virtual directories
  • Provisioning – managing the processes of Identity and Access Management
  • Identity mapping and roll-up
  • Detailed Single Sign-on strategies: Getting off Identity islands
  • Detailed Federated Identity discussion and case studies
  • Gritty Reality of Federation SSO: Lessons learned from 14 major federation projects
  • Multi-factor authentication: biometrics, tokens & more
  • Functional IDs – real world considerations of this often forgotten access control
  • User Access Audit: Proving only authorized users have access
  • Auditing the identity systems

Key Learning Objectives:
Participants should have a basic background in Information Security, IT systems, and identity management. After the class, participants should feel well grounded in identity management, understand the broad landscape from both a technical as well as a business perspective, and have gained practical insight into the strategies which will enable them to meet identity challenges in their organization.

Cheers,
Erik

AoIS Interviews Michael Rash, Part 3

The Art of Information Security continues our interview with Michael Rash, Network Security expert and the driving force behind several open source security tools, including PSAD, FWSnort, and FWKnop.

In Part 2 of the interview Michael discussed how network threats and network countermeasures have been evolving. He also touched on the development of his book. Here is the final installment in this series…

Erik: What would be your recommendations for folks who are adopting Linux (either enthusiasts or corporations) in terms of properly protecting their hosts and networks from network attacks?

Michael: I think that deploying host and network firewalls is a great first step here, and iptables functions admirably. Many people in corporate environments are concerned about the questions of performance, manageability, scalability, and support, and iptables together with some third party software have decent answers to these concerns. For example, the fwbuilder project provides good graphical support for the display and manipulation of iptables policies, and large Linux distributions such as Red Hat and SuSE offer commercial support.

Beyond having proper firewalls deployed, intrusion detection systems are a critical piece to point the way to attempted (and sometimes successful) compromises. Also, strong security mechanisms such as SELinux can provide a powerful barrier to attempted malicious usages of hosts. Finally, patch early and patch often.

Erik: Do you have any tool or reference recommendations for debugging iptables firewalls?

Michael: For debugging iptables policies and maintaining tight controls on the type of packets that are allowed to traverse those policies, one of the best techniques is to use tcpdump either on the end points or on the firewall itself (and these may be the same system) and watch how network traffic is allowed to progress. For example, a SYN packet to a port that is filtered will not respond either with a SYN/ACK or a RST, and seeing this behavior with tcpdump is quite easy. At the same time, understanding where in an iptables policy packets are getting dropped (or otherwise messed with) is usually made clear by watching how packet and byte counters are incremented on particular iptables rules. Use ‘iptables -v -n -L’ for this, and couple this with the ‘watch’ command to see how things change. Beyond this, if you have a kernel compiled with support for the iptables TRACE target, then you can use an iptables TRACE rule that causes all packets hitting this rule to be logged. Lastly, for really advanced debugging of iptables code itself, the nfsim project provides a simulator for running Netfilter code within userspace (and hence the ability to test code before running it within the kernel itself where a bug can have dire consequences). The nfsim project can be found here:

http://ozlabs.org/~jk/projects/nfsim/
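
To make those techniques concrete, here is a hedged sketch of each. The interface name, port, and rule placement are my illustrative assumptions, not part of Michael’s answer:

    # Watch whether a SYN to port 443 draws a SYN/ACK, a RST, or silence (filtered):
    tcpdump -ni eth0 'tcp port 443 and (tcp[tcpflags] & (tcp-syn|tcp-rst) != 0)'

    # Watch per-rule packet/byte counters change as traffic hits the policy:
    watch -n 1 'iptables -v -n -L INPUT'

    # If the kernel supports the TRACE target (raw table), log a packet's path
    # through the ruleset:
    iptables -t raw -A PREROUTING -p tcp --dport 443 -j TRACE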

Erik: So, you obviously are deeply connected to all things Network IDS/IPS. What kinds of trends have you seen in 2008? Were there any new attack styles that surprised you? Do you have any ideas about what 2009 may hold?

Michael: Well, 2008 will certainly go down in history as the year that people were forced to really pay attention to DNS by the Kaminsky attack. One thing Dan did really well is make it clear just how important DNS is for literally everything on the Internet, and how a flaw there has implications that are difficult to overestimate. Online banking, acquiring SSL certificates, SMTP, “forgot my password” links, and countless other infrastructures depend on DNS information being correct. But, then there were also serious issues in 2008 with BGP and with SSL, so if there was any trend in 2008 I would say that it was the year of security flaws in big Internet infrastructures. In 2009, it will be interesting to see whether this trend remains true for as-yet undiscovered vulnerabilities in other important systems.

Erik: Has your support for open source helped you professionally?

Michael: Absolutely. My current position as a Security Architect on the Dragon IDS/IPS developed by Enterasys Networks is a role that my open source work helped me to acquire. Many forward looking innovations are created by the open source community, and understanding this community helps to guide many companies and the products they develop. Companies are recognizing the power of open source software more and more, and this translates to better professional positions for open source developers and technology enthusiasts.

Many Thanks to Michael!

Thanks a ton for the time and energy you put into this, the first of what I hope will be many interviews with notables from around the Information Security community.

Thanks, Erik

Secure Your Linux Host – Part 3: Why a Host Firewall?

This post is going to focus on building and applying a Host Firewall using the iptables functionality that is built into Linux. (If you are already lost, try googling “securing linux with iptables”, and check out the resources section below.)

Please note: This Secure Your Linux Host series is very hands-on.  The tools and tips that will enable you to use a Host Firewall are coming, but let’s lay the foundation for using them first…

What is a Host Firewall?

When the concept of a Firewall is mentioned, the most common meaning that comes to mind is a network services control between networks. Most of the information that you can find on Firewalls is targeted at people who want to protect systems on one network (such as their corporate or home LAN) from systems on another network (generally the Internet), while permitting a list of known services to be accessed by one network from the other. There are in fact several effective strategies for using Network Firewalls as boundaries between networks, or network segments. For a detailed introduction (or tune-up) on this subject, please refer to the NIST document in the resources section below, or click here for a great SANS introduction.

A Host Firewall is different in that it exists to protect and control access to a single system from all others. Common scenarios a Host Firewall is well suited to address:

  • Host is in direct contact with the Internet (or other hostile network)
  • Host is located in a DMZ
  • Host cannot trust systems on its network segment
  • Host has high control expectations due to legal, regulatory, audit, or risk requirements

If you have servers that are hosted in a data center or directly connected to a broadband/DSL connection and, as a result, are in direct contact with the internet, then I highly recommend configuring a Host Firewall. Systems that are in this situation will be attacked from other systems all over the globe all of the time. There are so many attackers who are running probing scans across the entire network space of the Internet that you will get scanned. The recent log information that I supplied on http scans and ssh password attempts is an example of how any host (no matter how insignificant) will be regularly attacked.
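
To make this concrete, here is a minimal sketch of a default-drop inbound iptables policy for an Internet-facing host. The permitted ports (22 and 80) are placeholder assumptions; allow only the services your host actually offers:

    # Default-drop inbound; allow outbound.
    iptables -P INPUT DROP
    iptables -P FORWARD DROP
    iptables -P OUTPUT ACCEPT

    # Permit loopback traffic and return traffic for established connections.
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Explicitly expose only the services this host is meant to offer.
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # ssh
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT    # http

(The actual artofinfosec.com rules are covered in a later installment; this is only a sketch of the default-drop idea.)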

OK – so what if the host is behind a firewall in a DMZ with other hosts (such as www and SMTP servers)? Most DMZ networks do not provide protection against attacks from other “peer” hosts in the DMZ. The problem that this presents is that, in the event that one host in the DMZ becomes exploited, it can be used to probe and attack all of the hosts in the DMZ. Even worse, if a single host in the DMZ falls prey to a Worm or other self-propagating threat, then all similar hosts in the DMZ can be rapidly infected.

The “Host cannot trust systems on its network segment” argument for a Host Firewall is almost identical to the DMZ argument. Why provide access to services on the box to systems that do not need them?

The last point is about high-risk or highly-regulated systems. The rules on a Host Firewall are much simpler to review and understand (but perhaps not manage) than the rule set on a network boundary Firewall. This can have two major  advantages. First, it can make it much easier to provide complete and frequent reviews of the Firewall rule set. Second, it can remove confusion, limit scope, and simplify formal audits of the network access that the given Host has.

Isn’t Linux Secure by Default?

Many Linux distributions and commercial operating systems advertise that they ship in a “fail safe” or at least “start safe” mode; let’s assume that to be the case. When you install any operating system, the first thing you do is start installing software and applications. With each application that you install, you may be exposing services to the network.

With a Host Firewall, you will know precisely what services you are and are not exposing. As you know from Part 1, I run a Mail Transfer Agent so that email to root, events, etc. is in fact delivered to an email account I actually use. Running a Host Firewall dramatically raises my confidence that I am not a SPAM relay – sure, I think I configured the MTA properly… But with the Host Firewall I know that only services on my host (via 127.0.0.1) can send email. Running a LAMP server provides a very similar situation. With the Host Firewall in place, I know that MySQL isn’t accessible on its native ports to the world.
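
For example, here is the kind of quick check and belt-and-suspenders rule I mean. The SMTP and MySQL ports are the standard 25 and 3306; the exact netstat flags may vary by distribution:

    # Confirm the MTA and MySQL are bound to loopback only:
    netstat -tlnp | egrep ':(25|3306) '   # Local Address should be 127.0.0.1

    # Even if a daemon someday binds too widely, drop non-loopback access:
    iptables -A INPUT -p tcp --dport 25 ! -i lo -j DROP
    iptables -A INPUT -p tcp --dport 3306 ! -i lo -j DROP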

So, What is the Downside?

The reason that more systems are not running a Host Firewall is a lack of management tools. If you have a small number of hosts that you are administering, then adding and managing a Host Firewall is not much work at all. But, if you have a hundred servers with a mix of operating systems, split across several data centers, suddenly managing Host Firewalls is not only a nightmare but may create more operational risk than is acceptable.

Every modern operating system (Linux, Unix-*, Windows, System/Z, OpenBSD, etc.) comes with a built-in Host Firewall capability. What is needed is tooling that enables both centralized management and harmonization with network boundary Firewalls. (Unfortunately, I won’t be able to provide that in this series!) The vendors with the best management of network boundary Firewalls tend to be the manufacturers of those Firewalls, and they would be the most logical group to expand their existing management capabilities into the Host Firewall space. But, I do not think that anyone has developed a revenue model that justifies the investment. (Hope springs eternal!)

What’s Next?

In the next installment, I am going to walk through the actual artofinfosec.com Firewall. (No B.S. “Security Through Obscurity” here!) And then in the following segment, I am going to discuss tools for monitoring and adding countermeasures to the Host Firewall.

Resources

  • Securing Linux Systems With Host-Based Firewalls Implemented With Linux iptables (html, pdf)

This is a great introduction to building a Host Firewall. (The HTML version appears to be a paraphrase of the Sun Blueprint PDF.) It is a resource that I return to time and again. The firewall example provided here includes full egress control, and the article walks the reader through the firewall step-by-step. The description is for a very controlled Host Firewall, so controlled that I in fact found myself moving to a simpler implementation.

  • NIST: Guidelines on Firewalls and Firewall Policy (pdf)

The NIST documentation (as usual) provides a great 360-degree, medium-depth introduction to the topic. If you currently manage, or are about to manage, firewalls as part of your network security function, then read this guide!

Cheers, Erik

AoIS Interviews Michael Rash, Part 1


The Art of Information Security has the great pleasure of interviewing Michael Rash. Michael holds a Master’s Degree in applied mathematics with a concentration in computer security from the University of Maryland. He is the founder of cipherdyne.org, a website dedicated to open source security software for Linux systems, and works professionally as a Security Architect on the Dragon IDS/IPS for Enterasys Networks. He is also the author of “Linux Firewalls: Attack Detection and Response with iptables, psad, and fwsnort” (sample chapter and more information here), published by No Starch Press.

When I started the Art of Information Security blog, I felt that it was important to appropriately lock down the host. It would be an unfortunate irony to have the server hosting a security blog “owned” by some script kiddie. So, of course AoIS runs a firewall. I had been using iptables firewalls on Linux for a while, and there were a few things that I felt were lacking from the set-ups I had used in the past. One was the ability to verify that the firewall is working. A solid firewall generates logs – but what do you do with those? And what do they tell you? Second, I knew that I should be able to detect certain types of automated attacks and block those IPs. There are so many improperly configured hosts to attack that a few simple countermeasures go a long way. Third, I have also been very interested in running host IDS/IPS, but the requirements to run Snort for a single host seemed a bit too much. So, I turned to cipherdyne.org and the great tools sponsored (and authored) by Michael.

Erik: So, Michael, Network Security is obviously more than just a job for you. How did you come to be involved so deeply in Network Security and Intrusion Countermeasures?

Michael: During the late 1990’s I was introduced to intrusion detection on a large ISP’s network, and that experience coupled with learning networking protocols sparked a deep and abiding interest in network security. This interest eventually led me to systems programming on Linux, and to the internals of systems that need to be protected. The constant game of cat and mouse played by attackers and defenders in the network security world never ceases to provide new directions for security research, and thanks to the open source development model, many of the techniques to defend systems can be investigated and contributed to by anyone.

Erik: So when did you get the idea for PSAD?

Michael: In 1999 I started working with Jay Beale on the Bastille Linux project. At the time, both portsentry and Snort were around and were designed to detect network attacks (with the former only focused on port scans). Because Bastille was designed to harden the security stance of the host, a strong iptables policy was built in by Peter Watkins. With the strategy implemented by portsentry of listening on sockets in order to detect port scans (see this link for why this is less than ideal from many perspectives), we needed a way to detect port scans in a manner compatible with Bastille’s iptables policy. The result was a portion of Bastille initially called “Bastille-NIDS”, but I eventually split it off as a dedicated project and called it “PSAD”. An option would also have been to just write a configuration utility for Snort, but there would still have been a void, since no tool really analyzed iptables log messages for suspicious activity. I made it my goal to try and fill this void, mostly because the data source provided by iptables logs is quite rich and has a lot to say.

Erik:  On your website you identify three principles around which PSAD was developed. Why are these important? How does PSAD accomplish them?

  1. Good network security starts with a properly configured firewall
  2. A significant amount of intrusion detection data can be gleaned from firewalls logs
  3. Suspicious traffic should not be detected at the expense of trying to also block such traffic

Michael: Network security is more relevant for more people today than at any other point in Internet history. Important infrastructure is increasingly being put online (such as online banking access), and the threats are evolving to compromise this infrastructure. The default stance of many operating systems is to listen on several services to make things easier for users, and while many OS’s (particularly mainstream Linux distributions) offer to configure firewall policies, many users elect not to go through with this step. Sometimes people are too busy to maintain a properly configured firewall, or they reason that the local border firewall is sufficient. Firewalls should always be configured in a default-drop stance in order to provide an additional layer of protection for any vulnerable services that may be listening. For Linux systems, psad helps to verify that the local iptables policy is configured in this manner.

Firewall logs are also an important area to pay attention to. Although firewall logs cannot replace the full packet capture and logging capability of many intrusion detection systems, they can still be a valuable source of data to highlight efforts to break into systems. With a logging format as complete as the one provided by the iptables logging infrastructure, it is possible to detect and differentiate most types of nmap scans, passively fingerprint remote operating systems, detect probes for back doors, and more. The process of parsing iptables logs to look for these kinds of activities is automated by psad.
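
(Editor’s note: to make that concrete, psad works from iptables log messages, so the policy must actually log what it drops. A common pattern, shown here as a hedged sketch rather than a quote from Michael, is to end the INPUT chain with a LOG rule ahead of the default drop, and then ask psad what it has seen:

    # Log everything that is about to fall through to the default-drop policy:
    iptables -A INPUT -j LOG --log-prefix "DROP "
    iptables -P INPUT DROP

    # Ask psad for its current status and detected scans:
    psad -S
)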

Finally, just detecting malicious traffic will always play second fiddle to an effective mechanism for also blocking such traffic. The iptables firewall is a well-tested piece of code that runs inline to the packet data path. Hence, it is a strong weapon to block suspicious traffic with a default drop stance before such traffic is allowed to target internal systems. By using the iptables string match extension, iptables blocking actions can even be tied to the inspection of application layer data.
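
(Editor’s note: as a hedged illustration of the string match extension Michael mentions, a rule like the following drops packets whose application-layer payload contains a given pattern. The pattern and port are placeholders, not a recommended signature:

    # Drop inbound HTTP packets containing a suspicious application-layer string:
    iptables -A INPUT -p tcp --dport 80 -m string --string "/etc/passwd" --algo bm -j DROP
)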

Stay Tuned for Part 2

Part 2 of this series is coming soon, with more discussion about network security and open source security tools. More information is available on PSAD at http://www.cipherdyne.org/psad/. (Oh, and PSAD will be featured in an upcoming installment of the AoIS Secure Your Linux Host series!)

Cheers, Erik

Pro Dev: Who are We? What is Our Role?

I was recently in New York for a two-day briefing on emerging technologies from a key technology partner. During the morning session the presenter asked a number of questions of the room as he worked through his deck.

At one point he asked: “Who likes their Information Security guy?”

I raised my hand, to which he quipped: “Well, they aren’t doing their job then!”

To which I quipped: “Actually, I do my job quite well.”

Stereotypes…

In ancient times, skillful warriors first made themselves invincible, and then watched for vulnerability in their opponents…

“Formation”, Art of War, Sun Tzu, 6th century B.C.

The core of Information Security is Risk Management. The pursuit isn’t an “invincible” password policy, but one that provides reasonable protection against known threats. The goal is often not an “invincible” application, but one which is hardened appropriately and also still usable.

But all too often, many practitioners jump right to NO – I WON’T ALLOW IT. This leap is made without understanding the whole of the problem, or the real risks that are specific to the situation.

Now, there are folks in Information Security (and HR, accounting, etc.) who have to say NO because corporate policy, procedure, etc. require them to. This is really not the case that I am exploring here. Here, I want to focus on the role of the Information Security Architect, Consultant, Vulnerability Manager, Risk Manager, CISO, etc. when they are working with the business and IT partners.

Solid Risk Management requires a partnership between the folks who are the Subject Matter Experts in the risk space, and the folks who have a business or organizational need that must be met.  The right or proper answer often isn’t the Black-and-White “We never allow X” (sometimes it is 😉 ), but generally “We usually avoid X, due to these risks, but in this case we can compensate by applying these additional controls” or “We usually don’t permit X, but in this situation it isn’t problematic due to Y”.

I spent a lot of 2007 learning this lesson.

The lesson had taken hold enough that I started researching some of the business literature on this topic. It was then that I ran into Organizational Consulting: How to Be an Effective Internal Change Agent by Alan Weiss, and this definition on page 4:

Organizational Consultants are basically advisers to management who must provide objective, pragmatic, and honest advice to their clients. If there is a trusting relationship, then the clients will always be confident that their best interests are being served, no matter how threatening, contrarian, or painful that advice may be.

Organizational Consulting is a book on becoming an effective internal change agent. In a way, when I am acting in an Information Security (Architect, Consultant, Advisor, fill in the blank…) role, I see myself as responsible for not just managing the risk issue at hand, but engaging my IT/LOB/etc. partners in such a way that they can understand why and how the final state came to be.

So, let’s paraphrase Alan’s definition some…

Information Security Consultants are basically advisors to Information Technology and Line of Business partners who must provide objective, pragmatic, and honest advice to their clients, with the objective of managing risk for the benefit of the organization as a whole.

If there is a trusting relationship, then the clients will always be confident that their best interests are being served, no matter how threatening, contrarian, or painful that advice may be.

It has been my experience that when I take the time to…

  • Listen and demonstrate genuine interest in the business problem at hand
  • Educate the key players about the risks that various approaches contain
  • Make those risks tangible, using examples and data when available
  • Work with them, not against them

…that my success rate is very high! “Success” being defined as getting the Information Security risks managed, getting the underlying business need met, and being re-engaged proactively by the people I worked with the next time around.

Of course, all of these are relationship-building behaviors. All too often, relationship-building is thought of as lunches and golf games, neither of which I do much of. Relationship building is about how you treat people when you are working with them. No one cares that you played golf with them once if you won’t help them solve the problem at hand. Helping them find a way to meet their business needs in a risk-appropriate manner builds relationships.

Of course, saying NO is a lot less work… for a while….

Cheers, Erik

( If you enjoyed this, check out more Professional Development on AoIS )

Are You in Central Ohio Wednesday January 21st, 2009?

A colleague and I are co-presenting at the Central Ohio ISSA chapter on Wednesday morning…

Information Security Awareness Raising – An Example for Critique and Discussion

The aim of this presentation is to provide ISSA attendees with fresh ideas for increasing the awareness of Information Security issues among their internal customers and partners. The presentation will have two parts. In the first part, Justin and Erik will present an Information Security Awareness presentation targeted at an audience of business and IT partners.

During the second part of the presentation, preliminary information regarding the vital role of Information Security Awareness Raising will be discussed. After this initial introduction, everyone will be asked to participate in a dialog about whether or not the materials were effective Awareness Raising materials, and to share their experiences and insights.

If you read this post, and then attended the presentation – please let me know. (This will be my tip-off that highly unlikely events are occurring in my world, and that I should purchase a lottery ticket… 😉 )

Cheers, Erik

The Internet Never Forgets — your mistakes!

My apologies for this “phantom” posting… “Pro Dev: Who are We? What is Our Role?”

While editing that posting, I published it way prematurely. (Can you say mis-click?) Now, I corrected this within minutes, but due to the magic of Google and Feedburner that fragment was whisked onto the net (and perhaps will live forever… 😦 )

Now, you would think that you could just delete the post, and all would be well – Wrong!

So, that fragment (which was on-line for less than 3 minutes) was cached by Google Reader and other blog aggregators, and has (embarrassingly) set a record for views in the first 24 hours.

The good news is that it looks like the Professional Development series I have planned for AoIS is going to be a hit! The bad news is I need to find a WordPress plugin that adds an “are you sure” idiot confirmation to the publish button…

BTW, it appears that 2009 will be the year of the series on AoIS. Currently in the pipeline are:

  • The Secure Your Linux Host Series
  • Professional Development Series
  • Cryptographic Controls Series 
  • Interviews with Information Security, Risk Management, and Privacy Luminaries!

I hope to have at least one installment for all of these series posted by the end of January.

Again, my apologies for the draft fragment – the actual posting (Part 1 in the Professional Development series) is being proofed and should be up in a few days.

Cheers, Erik

Got Entropy?

So I have been planning a series of podcasts on Cryptographic Controls. In the process of this planning, I fell into one of the classic traps that crypto-geeks fall into: obsessing about random number generators (RNGs).

(FYI, for the impatient, click here.)

There are two ways to generate random numbers on computers: (1) use a software program called a Pseudorandom Number Generator (PRNG) or (2) use a hardware random number generator. A Pseudorandom Number Generator uses a seed value to generate a sequence of numbers that appear random. The problem is that the same seed generates the same “random” sequence. A hardware-based RNG observes and samples some physical phenomenon which is random, such as cosmic rays, RF noise, etc. (aka Entropy).
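
To see the “same seed, same sequence” problem concretely, here is a tiny shell demonstration using bash’s built-in seeded PRNG ($RANDOM). This is purely illustrative; $RANDOM is nowhere near cryptographic quality:

    # Seeding the PRNG twice with the same value yields the same 'random' numbers.
    RANDOM=42; echo "$RANDOM $RANDOM $RANDOM"
    RANDOM=42; echo "$RANDOM $RANDOM $RANDOM"   # prints the identical sequence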

RNGs are important in Information Security because they are used to generate encryption keys, salts, etc. Historically, attacking RNGs has proven effective, such as the defeat of Netscape’s HTTPS sessions.

Most operating systems utilize a hybrid approach, implementing a Pseudorandom Number Generator whose seed is regularly updated through the collection of random hardware events. This process is called Entropy Collection or Entropy Harvesting. For most applications, this approach should be completely sufficient. However, one of the key assumptions is that the operating system has been up and running long enough for the seed value itself to become hard to predict through the collection of Entropy. Also, many of the Entropy-collecting events come from properties of hardware devices, such as the minor variations in a hard drive’s rate of rotation. As such, there are a few circumstances where the OS RNG may not be good enough for strong cryptographic key generation:

  • Live Boot CD ( The start state of the RNG may be predictable. )
  • Virtualized Hosts ( OS may be dependent on simulated events for randomness. )

( Given the exploding popularity of virtualization, this is an area worthy of research. Stay tuned. )
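
On Linux you can actually watch the kernel’s harvested Entropy pool, which makes the Live CD and virtualization concerns above tangible. A freshly booted or idle virtual host will often report a low figure here:

    # How many bits of harvested entropy the kernel believes it has:
    cat /proc/sys/kernel/random/entropy_avail

    # /dev/random blocks when the pool runs dry (unlike /dev/urandom), so a
    # key-sized read can stall on an entropy-starved host:
    dd if=/dev/random of=/dev/null bs=16 count=1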

Design of the Got Entropy Service

Many RNGs (such as the one included in Linux, as well as OpenSSL’s) allow the addition of entropy from outside sources. So I started looking for Entropy sources I could use to bolster the RNGs on my virtual hosts (and other uses…). While I was looking into this, it occurred to me that I had an unused TV tuner card, a PVR-350.

When a TV is tuned to a channel with no local station, the ‘snow’ on the screen is RF noise (the same as the static between stations on AM radios). But, for reasons beyond our scope, you never use a direct physical observation as the RNG. You have to ‘de-skew and whiten’ the data prior to sampling it. Here is the process that I use (a shell sketch follows the list):

  1. Collect about 3 minutes of video ( about 130 MB data ).
  2. Using a random key and IV, encrypt the data ( using openssl & AES-128-CBC ).
  3. Discard the first 32k of the file.
  4. Use each of the following 32k blocks as samples.
  5. Compress each sample with SHA-256.
  6. Discard the last block.
  • Steps 2 and 3 remove any patterns, such as MPEG file formatting, from the data.
  • Steps 4 and 5 generate a 32-byte random value ( 1024 to 1 compression in the hash ).
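
Here is a hedged shell sketch of that pipeline. The file names are placeholders, and capture.raw stands in for the ~3 minutes of tuner ‘snow’ captured from the PVR-350:

    # Random key and IV for the whitening pass (step 2).
    KEY=$(openssl rand -hex 16)
    IV=$(openssl rand -hex 16)

    # Step 2: encrypt to destroy MPEG structure and other patterns in the capture.
    openssl enc -aes-128-cbc -K "$KEY" -iv "$IV" -in capture.raw -out capture.enc

    # Steps 3-5: skip the first 32k, split the rest into 32k samples, and
    # compress each sample to a 32-byte value with SHA-256.
    dd if=capture.enc bs=32k skip=1 2>/dev/null | split -b 32k - block.
    for b in block.*; do
        sha256sum "$b" | cut -d' ' -f1    # one 256-bit random value per block
    done
    # Step 6: in the real process the final (possibly short) block is discarded.
    rm -f block.*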

Check it out at http://gotentropy.artofinfosec.com

Can an Attacker Broadcast a Signal to Undermine This?

Such an attacker could not remove RF noise from the received signal. Our eyes and brains are good at filtering out the noise in the TV video, but there is a lot of it. Part of the noise comes from the atmospheric background RF, but there are also flaws (noise) in the tuner’s radio and analog-to-digital capture circuitry.

I think this is a pretty strong RNG, and I have provided an interface for pulling just the values.

Also, I have written a script ( getEntropy.sh ) that will pull Entropy from the service and seed it into /dev/random on Linux.
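
The script itself is not reproduced here, but a minimal sketch in its spirit might look like the following. The endpoint URL is a placeholder, and note the caveat in the comments: writing to /dev/random mixes bytes into the kernel pool but does not credit the kernel’s entropy estimate (that requires the RNDADDENTROPY ioctl, which daemons like rngd use):

    #!/bin/sh
    # Hypothetical sketch, not the actual getEntropy.sh.
    URL="http://gotentropy.artofinfosec.com/values"   # placeholder endpoint
    # Mix the fetched bytes into the kernel pool (no entropy credit is given).
    curl -s "$URL" > /dev/random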

Results from ENT

Here are results from a sample run of the Got Entropy service, analyzed by ENT ( A Pseudorandom Number Sequence Test Program provided by John Walker of http://www.fourmilab.ch – Thanks, John! ).

  • Entropy = 7.999987 bits per byte
  • Optimum compression would reduce the size of this 13366112 byte file by 0 percent.
  • Chi square distribution for 13366112 samples is 233.85, and randomly would exceed this value 82.48 percent of the time.
  • Arithmetic mean value of data bytes is 127.4767 (127.5 = random).
  • Monte Carlo value for Pi is 3.143054786 (error = 0.05 percent).
  • Serial correlation coefficient is -0.000078 (totally uncorrelated = 0.0).
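
For reference, figures like these come from pointing ENT at a captured sample file; the file name below is a placeholder:

    ent entropy_sample.bin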

Resources for the Curious…

Cheers, Erik