Monthly Archives: January 2009

Lie Detector Libel

I noticed a posting on Slashdot (link) this morning regarding a gag order suppressing an article that was to be published in a peer-reviewed scientific journal. The article was critical of lie detector technology, and evidently provided information debunking it.

More information is available here: Stockholm University article.

The thing I find most interesting about this is that the US Supreme Court has already determined that lie detectors are unreliable. From the Wikipedia article on the polygraph:

In the 1998 Supreme Court case, United States v. Scheffer, the majority stated that “There is simply no consensus that polygraph evidence is reliable” and “Unlike other expert witnesses who testify about factual matters outside the jurors’ knowledge, such as the analysis of fingerprints, ballistics, or DNA found at a crime scene, a polygraph expert can supply the jury only with another opinion…”.

One of the things I find most interesting about the challenge of “testing” lie detectors is that no tests, such as those performed by Emily Rosa to debunk Therapeutic Touch, have ever been offered that can objectively demonstrate that they even work.

Cheers, Erik


Secure Your Linux Host – Part 2: Secure SSH

SSH is the preferred (perhaps de facto) remote login service for all things UNIX. The old-school remote login was telnet. But telnet was completely insecure.  Not only was the confidentiality of the session not protected, but the password wasn’t protected at all – not weak protection – no protection.

And so SSH (aka Secure Shell) was developed… But it has not been without its failings. There are two “flavors” of SSH: Protocol 1 and Protocol 2. Protocol 1 turned out to have pretty serious design flaws. A hack of SSH using the Protocol 1 weaknesses was even featured in the movie Matrix Reloaded (Trinity cracks an SSH server after an nmap scan). So, by 2003, the flaws and the script kiddie attack were understood well enough for the Wachowski Brothers to immortalize them.

Another concern to watch out for is that SSH has port-forwarding capabilities built into it. So, it can be used to circumvent web proxies and pierce firewalls.
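As a quick sketch of that port-forwarding capability, a single local forward carries one remote service over the encrypted session. The hostname and ports below are invented for illustration, and the command is wrapped in echo so that, as written, it only prints the invocation:

```shell
# Local forward: connections to localhost:3307 on this machine travel through
# the SSH session to port 3306 on the remote host (-N skips the login shell).
# Drop the leading echo to actually open the tunnel.
echo ssh -N -L 3307:localhost:3306 user@remote.example.com
```

Note that this is exactly how such a tunnel slips past a web proxy: the proxy sees only an outbound SSH session, not the service being carried inside it.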

All in all though, SSH is very powerful and can be a very secure way to remotely access either the shell or (via port forwarding) the services on your host.

For additional information on SSH’s port-forwarding capabilities, the ssh(1) man page (the -L and -R options) is a good starting point.

Be aware that SSH is part of a family of related utilities; check out SCP, too.

Configuration

After installing the SSH server (perhaps: apt-get install openssh-server), you will want to turn your attention to the configuration file /etc/ssh/sshd_config

Here are a few settings to consider:

Protocol 2
PermitRootLogin no
Compression yes
PermitTunnel yes
Ciphers aes256-cbc,aes256-ctr,aes128-cbc,aes192-cbc,aes128-ctr
MACs hmac-sha1,hmac-sha1-96
Banner /etc/issue.net

  1. The “Protocol” setting should not include “Protocol 1”. It’s broken; don’t use it.
  2. PermitRootLogin should never be “yes” (so, of course, that is the default !). The best option here is “no”, but if you need or want to have direct remote root access (perhaps as a rescue account), then the “without-password” option is better than “yes”. The without-password option will force you to set up and use key-based authentication for root.
  3. Unless your host’s CPU is straining to keep up, turn on compression. Turn it on especially if you are ever using a slow network connection (and who isn’t).
  4. If you are not going to access services remotely using SSH as sort of a micro-VPN, then set this to “no”.  Because I use the tunneling feature, I have it turned on.
  5. OK; I work and consult on cryptographic controls, so I restrict SSH to the FIPS 140-2 acceptable encryption algorithms.
  6. Likewise, I restrict the Message Authentication Codes (MACs) to stronger hashes.
  7. Some jurisdictions seem to not consider hacking a crime unless you explicitly forbid unauthorized access, so I use a banner.

Sample Banner

It seems that (at least at one point in the history of law & the internet) systems which did not have a login banner prohibiting unauthorized use may have had difficulty punishing those who abused them. (Of course, it is pretty hard to do so anyway, but…) Here is the login banner that I use:
* - - - - - - - W A R N I N G - - - - - - - - - - W A R N I N G - - - - - - - *
*                                                                             *
* The use of this system is restricted to authorized users. All information   *
* and communications on this system are subject to review, monitoring and     *
* recording at any time, without notice or permission.                        *
*                                                                             *
* Unauthorized access or use shall be subject to prosecution.                 *
*                                                                             *
* - - - - - - - W A R N I N G - - - - - - - - - - W A R N I N G - - - - - - - *

Account Penetration Countermeasures

Within hours of establishing an internet accessible host running SSH, your logs will start to show failed attempts to log into root and other accounts. Here is a sample from a recent Log Watch report:

--------------------- SSHD Begin ------------------------
Failed logins from:
58.222.11.2: 6 times
211.156.193.131: 1 time
Illegal users from:
60.31.195.66: 3 times
203.188.159.61: 1 time
211.156.193.131: 3 times
Users logging in through sshd:
myaccount name:
xx.xx.xxx.xx: 3 times
---------------------- SSHD End -------------------------

One of the most effective controls against password-guessing attacks is locking out accounts after a predetermined and limited number of failed attempts. This tends to turn out to be a “three strikes and you’re out” rule.

The problem with applying such a policy to a remote service like SSH, as opposed to your desktop login, is that the lockout itself becomes a Denial of Service attack: any known (or guessed) login ID on the remote machine will end up locked out by the remote attacks.

Enter Fail2ban: Rather than lock out the account, Fail2ban blocks the IP address. Fail2ban will monitor your logs, and when it detects login or password failures that are coming from a particular host, it blocks future access (to either that service or your entire machine) from that host for a period of time. (Oh, and you may notice I said blocks access to the “service”, and not “SSH” – that’s because Fail2ban can detect and block Brute Force Password attacks against SSH, apache, mail servers, and so on…)

How to Forge has a great article on setting up Fail2ban – Preventing Brute Force Attacks With Fail2ban – check it out.

One tweak for now. As I tend to use certificate authentication with SSH (next topic), I rarely log in with a password. As a result, I tend to use a bantime that is long, ranging from a few hours on up. Three guesses every few hours really slows down a Brute Force Attack! Also, check out the ignoreip option, which can be used to make sure that at least one host never gets locked out. (You can lock yourself out with Fail2ban… I have done it…)
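For reference, a minimal SSH jail implementing a long bantime and an ignoreip safety net might look like this fragment of /etc/fail2ban/jail.local. The option names follow Fail2ban’s jail.conf format; the values are examples, not recommendations:

```
[ssh]
enabled  = true
port     = ssh
filter   = sshd
maxretry = 3
# ban the offending IP for four hours (value is in seconds)
bantime  = 14400
# never ban localhost or a trusted admin host (example address)
ignoreip = 127.0.0.1 192.0.2.10
```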

SSH Certificate Based Authentication Considerations

Secure Shell offers the ability to use certificate based authentication with a self-signed certificate. There are two ways you might consider using this:

  1. With a password protecting the private key
  2. With no password required

Please note: When you establish certificate based authentication with SSH, you will generate a public/private key pair on your local computer. Only the public key will be copied up to the server which you wish to access. The private key always stays on your local computer.

During the process of generating the private and public key pair, you will be asked if you want to password protect the private key. Some things to consider:

  • Will this ID be used for automated functional access ?

If you are creating the certificate based authentication so that a service can access data or run commands on the remote machine, then you will not want to password protect the local file. (If you do, you will end up including the password in the scripts anyway, so what would be the point?)

Personally, I have backup scripts which either pull data or snapshots on a regular basis. Google “rsync via ssh” for tips on this, or “remote commands with ssh” for tips and ideas. (Also, I may cover my obsessive compulsive backups in a later post.)
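As a hedged sketch of such a pull backup (the hostname and paths are invented, and the command is wrapped in echo so that, as written, it only prints the invocation):

```shell
# Nightly pull of a remote directory over SSH: -a preserves permissions and
# times, -z compresses in transit, and -e forces the SSH transport.
# Drop the leading echo to actually run the backup.
echo rsync -az -e ssh backup@remote.example.com:/var/www/ /backups/www/
```

With a password-free key in place, a line like this can run unattended from cron.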

  • This ID will be used for a rescue account

In this case the certificate is usually created to avoid password expiration requirements. If it is a rescue account, it often logs into root. Any time you use certificate access for root, the private key should be password protected. Rescue accounts are often stored on centralized “jump boxes” and are expected to only be used during a declared emergency of some kind (such as full system lockout due to a password mis-synchronization).

These private keys should always be password protected.

If someone has access to backups or disk images of the jump box, or otherwise gets access to your .ssh directory, and you have not password protected the private key, then they own the account (e.g., they can use the public/private key pair from any box).

  • Convenient remote logons…

The most common use of certificate based authentication for SSH is in fact to log you into the remote box without having to type passwords. (I do this, too…) But there are a few things to think about (these are all good general recommendations, but I consider them requirements when using an automated login…)

  1. Automatic login should never be used on a high-privilege account (e.g., root)
  2. If those accounts have sudo privileges, sudo should require a password
  3. A new certificate (public and private key pair) should be created for each machine you want to access the remote server from (e.g., desktop, laptop, etc.).  Do not reuse the same files.
  4. The certificate should be replaced occasionally (perhaps every 6 months).
  5. Use a large key and use the RSA algorithm option (e.g., ssh-keygen -b 3608 -t rsa)

SSH Certificate Based Authentication Instructions

So, without further ado… Let’s set up a Certificate for authentication.

Part 1 – From the client (e.g. your workstation, etc…)

First, confirm that you can generate a key.

$ ssh-keygen --help

The options that are going to be of interest are:

  • -b bits  Number of bits in the key to create
  • -t type  Specify type of key to create

DSA type keys, you will note, have a key length of exactly 1024 bits. As a result, I choose RSA with a long key. My recommendation is that you take 2048 bits as a minimum length. I am pretty paranoid, and I have a strong background in cryptography, but even I have never used a key longer than 4096.

The longer the key, the more math the computer must perform while establishing the session. After the session is established, then one of the block-ciphers discussed above performs all of the crypto. If you are making a key for a slow device (like a PDA) or a microcontroller based device, then use a shorter key length. Regardless, actually changing the keys regularly is a more secure practice than making a large one that is never changed.

$ ssh-keygen -b 3608 -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/erikheidt/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/erikheidt/.ssh/id_rsa.
Your public key has been saved in /Users/erikheidt/.ssh/id_rsa.pub.
The key fingerprint is:
43:69:d8:8e:c4:af:f8:8b:5a:2d:db:75:91:fd:06:be erikheidt@Trinity.local
The key's randomart image is:
+--[ RSA 3608]----+
|                 |
|     . o .       |
|      + =        |
|     . *   o     |
|      . S o o    |
|     o . . o o   |
|    + o . . . o  |
|   . * . .   o   |
|  ..o +.    E    |
+-----------------+

Now, make sure your .ssh directory is secured properly…

$ chmod 700 ~/.ssh

Next, you need to copy the public key (only) to the server or remote host you wish to login to.

$ cd ~/.ssh

$ scp id_rsa.pub YourUser@Hostname:

Now we have copied the file up to the server….

Part 2 – On the Server or remote host….

Logon to the target system (probably using a password) and then set things up on that end…

$ ssh YourUser@Hostname

$ mkdir .ssh
$ chmod 700 .ssh
$ cat id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys

Done ! Your next login should use certificate based authentication !

I hope this posting on SSH was useful.

Cheers, Erik

Pro Dev: Who are We? What is Our Role?

I was recently  in New York for a two-day briefing on emerging technologies from a key technology partner. During the morning session the presenter asked a number of questions of the room as he worked through his deck.

At one point he asked: “Who likes their Information Security guy ?”

I raised my hand, to which he quipped: “Well, they aren’t doing their job then!”

To which I quipped: “Actually, I do my job quite well.”

Stereotypes…

In ancient times, skillful warriors first made themselves invincible,

and then watched for vulnerability in their opponents…

“Formation”, Art of War, Sun Tzu, 6th century B.C.

The core of Information Security is Risk Management. The pursuit isn’t an “invincible” password policy, but one that provides reasonable protection against known threats. The goal is often not an “invincible” application, but one which is hardened appropriately and also still usable.

But all too often, many practitioners jump right to NO – I WON’T ALLOW IT. This leap is made without understanding the whole of the problem, or the real risks that are specific to the situation.

Now, there are folks in Information Security (and HR, accounting, etc.) who have to say NO because corporate policy, procedure, etc. require them to. This is really not the case that I am exploring here. Here, I want to focus on the role of the Information Security Architect, Consultant, Vulnerability Manager, Risk Manager, CISO, etc. when they are working with the business and IT partners.

Solid Risk Management requires a partnership between the folks who are the Subject Matter Experts in the risk space, and the folks who have a business or organizational need that must be met.  The right or proper answer often isn’t the Black-and-White “We never allow X” (sometimes it is 😉 ), but generally “We usually avoid X, due to these risks, but in this case we can compensate by applying these additional controls” or “We usually don’t permit X, but in this situation it isn’t problematic due to Y”.

I spent a lot of 2007 learning this lesson.

This lesson was taking hold enough that I started researching some of the business literature on this topic. It was then that I ran into Organizational Consulting: How to Be an Effective Internal Change Agent by Alan Weiss, and this definition on page 4:

Organizational Consultants are basically advisers to management who must provide objective, pragmatic, and honest advice to their clients. If there is a trusting relationship, then the clients will always be confident that their best interests are being served, no matter how threatening, contrarian, or painful that advice may be.

Organizational Consulting is a book on becoming an effective internal change agent. In a way, when I am acting in an Information Security (Architect, Consultant, Advisor, fill in the blank…) role, I see myself as responsible for not just managing the risk issue at hand, but engaging my IT/LOB/etc. partners in such a way that they can understand why and how the final state came to be.

So, let’s paraphrase Alan’s definition some…

Information Security Consultants are basically advisors to Information Technology and Line of Business partners who must provide objective, pragmatic, and honest advice to their clients, with the objective of managing risk for the benefit of the organization as a whole.

If there is a trusting relationship, then the clients will always be confident that their best interests are being served, no matter how threatening, contrarian, or painful that advice may be.

It has been my experience that when I take the time to…

  • Listen and demonstrate genuine interest in the business problem at hand
  • Educate the key players about the risks that various approaches contain
  • Make those risks tangible, using examples and data when available
  • Work with them, not against them

…that my success rate is very high ! “Success” being defined as getting the Information Security risks managed, getting the underlying business need met, and being re-engaged proactively by the people I worked with the next time around.

Of course, all of these are relationship-building behaviors. All too often, relationship-building is thought of as lunches and golf games, neither of which I do much of. Relationship building is about how you treat people when you are working with them. No one cares that you played golf with them once if you won’t help them solve the problem at hand. Helping them find a way to meet their business needs in a risk-appropriate manner builds relationships.

Of course, saying NO is a lot less work… for a while….

Cheers, Erik

( If you enjoyed this, check out more Professional Development on AoIS )

Are You in Central Ohio Wednesday January 21st, 2009 ?

A colleague and I are co-presenting at the Central Ohio ISSA chapter on Wednesday morning…

Information Security Awareness Raising – An Example to Critique and Discuss

The aim of this presentation is to provide ISSA attendees with fresh ideas for increasing the awareness of Information Security issues with their internal customers and partners. The presentation will have two parts. In the first part, Justin and Erik will present an Information Security Awareness presentation targeted at an audience of business and IT partners.

During the second part of the presentation, preliminary information regarding the vital role of Information Security awareness raising will be discussed. After this initial introduction, everyone will be asked to participate in a dialog on whether or not the materials were effective awareness-raising materials, and to share their experiences and insights.

If you read this post, and then attended the presentation – please let me know. (This will be my tip-off that highly unlikely events are occurring in my world, and that I should purchase a lottery ticket… 😉 )

Cheers, Erik

The Internet Never Forgets — your mistakes !

My apologies for this “phantom” posting… “Pro Dev: Who are We? What is Our Role?”

While editing that posting, I published it way prematurely. (Can you say mis-click?)  Now, I corrected this within minutes, but due to the magic of Google and Feedburner that fragment was whisked onto the net (and perhaps will live forever… 😦 )

Now, you would think that you could just delete the post, and all would be well – Wrong !

So, that fragment (which was on-line for less than 3 minutes) was cached into Google Reader and other blog aggregators, and has (embarrassingly) set a record for views in the first 24 hours.

The good news is that it looks like the Professional Development series I have planned for AoIS is going to be a hit ! The bad news is I need to find a WordPress plugin that asks for an “are you sure” idiot confirmation on the publish button…

BTW, It appears that 2009 will be the year of the series on AoIS. Currently in the pipeline are:

  • The Secure Your Linux Host Series
  • Professional Development Series
  • Cryptographic Controls Series 
  • Interviews with Information Security, Risk Management, and Privacy Luminaries !

I hope to have at least one installment for all of these series posted by the end of January.

Again, my apologies for the draft fragment – the actual posting (Part 1 in the Professional Development series) is being proofed and should be up in a few days.

Cheers, Erik

Being Probed for phpMyAdmin ?

In Secure Your Linux Host – Part 1 I recommended using Log Watch to keep an eye on what may be happening with your host. Well, today’s review of my own Log Watch indicates that I am being probed for phpMyAdmin. (Someone wants to abuse my database…)

Here is a sample from the log:

401 Unauthorized
/admin/mysql/main.php: 1 Time(s)
/admin/php-my-admin/main.php: 1 Time(s)
/admin/php-myadmin/main.php: 1 Time(s)
/admin/phpMyAdmin-2.2.3/main.php: 1 Time(s)
/admin/phpMyAdmin-2.2.6/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.1/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.4/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5-pl1/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5-rc1/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5-rc2/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.6-rc1/main.php: 1 Time(s)

Now I have seen activity like this before, but I thought this provided a good example of the increased awareness that scanning through the Log Watch report can provide.

This also provides some solid data in support of having some other controls in place if you are in fact running phpMyAdmin (or even MySQL). Most of the time the passwords that are used to access the content of databases are not used by humans – they are stored in the properties files of the applications that are using the database.

Ok, So Your Logs are Letting You Know What is Being Probed, Now What ?

This awareness allows you to make sure that you are adequately protecting that which is being attacked.  In this case, I already have controls in place to manage this risk. Let’s discuss them.

Lock Down Web Access to Administrative Tools

phpMyAdmin (usually) requires a password (more on that in a second), but you can also add an additional layer of security to your web-based administrative services by adding authentication at the http server itself.

Apache has a nice tutorial: Authentication, Authorization and Access Control

If you run web-based administrative tools, you may wish to lock down the web paths that contain them. In addition to providing a first line of defense, this will reduce the information available to attackers during the reconnaissance portion of their attacks.

If you lock down “mywebsite.com/admin” as described in the Apache How-To above, and you have additional directories under this, such as “mywebsite.com/admin/phpMyAdmin” and “mywebsite.com/admin/keys2Kingdom”, they will not be visible to the attacker (until they guess the password…).
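The Apache How-To boils down to a password file plus a directory stanza. As a hedged sketch (the paths, realm name, and password-file location here are examples, not taken from the How-To):

```
# Create the password file first, e.g.: htpasswd -c /etc/apache2/.htpasswd adminuser
<Directory "/var/www/mywebsite.com/admin">
    AuthType Basic
    AuthName "Administrative Area"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Directory>
```

Because the stanza covers the /admin directory itself, everything underneath it inherits the same protection.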

Confirm Strong Passwords

Functional IDs (also called service accounts) are used for application to application (e.g. wordpress to MySQL) authentication, and are (and should be) only handled by humans during installation and maintenance activities. Functional IDs should be long, very random, and not contain words or memorable substrings. (Functional accounts often do not have password retry limits, which heightens the importance of the strength of the password.)

I sometimes use the GRC Strong Password Generator (ah yeah, my own site gotentropy.artofinfosec.com is down right now…). You can also generate strong passwords using OpenSSL from the Linux command line:

openssl rand -base64 60

In both cases I prefer to cut and paste a long substring of 40 to 50 characters (dropping a few characters off both ends, especially the “==” base64 termination marker from the openssl command), and then add a few characters of my own.
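That cut-and-trim step can be done right in the pipeline; a sketch, with an arbitrary character range:

```shell
# Generate 60 random bytes, base64-encode them (removing line wraps), and keep
# a 45-character interior slice, dropping both ends and any trailing padding.
openssl rand -base64 60 | tr -d '\n' | cut -c6-50
```

You would still want to append a few characters of your own, as described above.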

Now, I would never expect an application user to type a 40+ character password. But for a Functional ID – why not ? The root MySQL password and all database users’ passwords should be very complex and long, especially if the host is internet accessible. (If you are using phpMyAdmin, it has a very good password generator included in the “Add User” functionality.)

We will be discussing other ways to protect password based systems from remote attacks in “Secure Your Linux Host – Part 2”… Out soon…

Cheers, Erik

( Part of the Secure Your Linux Host series…)

Secure Your Linux Host – Part 1: Foundations…

Bot-net masters, phishers, spammers, hackers, etc., all need insecure hosts on the internet that they can find, break into, and bend to their will. I was in a meeting one day with a very frustrated retail banking executive who wanted to know who was providing all of the computers, all over the globe, to the phishing and spamming teams that were attacking his bank’s on-line services. The bad news that I had to give him was that the root causes of the problem were people not taking security basics seriously – basics like patching their systems promptly, and using strong passwords.

The most shocking statistics that come up over and over again in the Information Security and Risk Management research:

  • 75% of Data Loss events involve an insider
  • 75% of the insider’s actions were negligent and not self-serving or malicious

This means that over half (about 56%) of Data Loss events would not have occurred but for incompetent or naive personnel.
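The arithmetic behind that 56% figure is just the product of the two quoted rates:

```shell
# 0.75 (events involving an insider) x 0.75 (negligent share of those)
awk 'BEGIN { print 0.75 * 0.75 }'   # 0.5625, i.e. about 56% of all events
```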

As an Information Security professional, I have no delusions that the internet underground is going to someday run out of computers susceptible to the script kiddie attacks that are dependent on weak security practices, but I do believe you can protect your host – if you choose to. So I decided to kick off this series on AoIS, from one Linux enthusiast to another. Oh, and FYI, currently I am running Ubuntu as my ‘distribution of the moment’.

Foundations

The low-hanging fruit in securing your host is all pretty basic stuff. Here is the list:

  • Set the root password to something very long and complex
  • Forbid remote access to root (Part 2 will cover this for ssh)
  • Update and patch your host early and often
  • Set-up Mail Transfer Agent (MTA), and forward root’s email…
  • Install Log Watch

Complex Root Passwords

Ok – you would think that this has been beaten into the ground, but the data shows that there are lots of systems which (1) allow remote root access and (2) aren’t using very hard-to-guess passwords.

  • Set a long and complex password
  • Stay tuned for properly secured remote access with SSH (part 2)

Of course, this should include not just root but also accounts with root-like privileges (e.g., accounts that can run sudo). Many attacks on privileged accounts make assumptions about account names. Set up a personalized account on the host, and then grant it sudo privileges.  (Oh, and “admin” is a great name for that account, but your name will work just fine. It seems that the attack scripts don’t yet troll the public information about the box for names…)
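In sudoers terms, that personalized account needs just one entry. A sketch (the username is an example; edit with visudo, and note that this default form still prompts for the user’s own password):

```
# Hypothetical /etc/sudoers entry: full sudo for a personal account
erik    ALL=(ALL) ALL
```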

Early and often?

The first thing I do after I install and boot a new host is update it.

# apt-get update
# apt-get upgrade

No brainer there…

I use the following variation in a Cron job to regularly update all of the packages on the box. There are a few tutorials on applying only the security patches, but I choose to go ahead and update all packages.

su --login --command 'apt-get update -qq; apt-get upgrade -q -y; exit'

The -q and -qq flags suppress messages, shortening the output. The Cron facility will automatically forward that output to the root user. The -y option tells apt-get to assume a Yes answer to any questions.

The “su --login --command” wrapper provides login context to the command so that it can operate properly (and this results in the “stdin: is not a tty” notice below…).

This results in an email from Cron that looks something like this:

From: Cron Daemon
Date: Tue, Dec 23
Subject: Cron su --login --command 'apt-get update -qq; apt-get upgrade -q -y; exit'
To: root

stdin: is not a tty
Reading package lists…
Building dependency tree…
Reading state information…
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

So, how often? Automated beats manual. A weekly scheduled task is so much better than the ‘intention to log in routinely and update the box’. You have to balance the risks associated with the update itself interfering with the services the box provides against the risks of not having up-to-date patches.
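Such a weekly task can be expressed as a single root crontab entry. The day and hour below are arbitrary examples; the command matches the one shown above:

```
# m h dom mon dow   command -- run the update every Sunday at 04:15
15 4 * * 0  su --login --command 'apt-get update -qq; apt-get upgrade -q -y; exit'
```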

One thing that is critical: if you are using automated patch updates, you need to either get emails with the update output or log this data to a fixed location. In the event that one of these updates causes you a problem, you want to know exactly where to look to aid in your troubleshooting.

Set-up Mail Transfer Agent (MTA)

System services will send email to root whenever something isn’t right. So, unless you are going to do some very aggressive log monitoring, getting those messages forwarded to your actual email box will be invaluable. Not just from a security perspective, but also to understand the operational integrity (aka health) of the system.

Unfortunately, details of properly setting up a Mail Transfer Agent are beyond the scope of what I can explore in a blog posting. For our purposes, the goals are:

  • Set-up an MTA that will work with your ISP/hosting provider
  • Configure .forward file for the root account
  • Make sure the MTA is NOT an “open relay”

I use Postfix in a pretty trivial configuration. If you research “install postfix” on Google, it will at first seem like you are embarking on a complex journey. Not so. Those posts are directed toward people who want to set up complete email solutions. The only goal from a security and operational awareness point of view is that email messages sent to root get forwarded to you, the system admin. To accomplish that, the steps should go something like:

  • Install Postfix
  • Select either “Internet Site” or “Smart Host”
  • Continue through the configuration wizard

(and of course, the devil is in the details of that last step…)
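Once the MTA is working, forwarding root’s mail is the easy part: the .forward file contains nothing but the destination address (the address below is, of course, an example). Placed at /root/.forward, it relays all of root’s local mail:

```
you@example.com
```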

Install Log Watch

# apt-get install logwatch

Now that you have the required facilities to have email sent from your host, install Logwatch. The default installation on Ubuntu will automatically send a daily email with tons of useful information, including:

  • Summaries of the logs of utilities and services (postfix, httpd, etc.)
  • PAM unix authentication attempts and failures
  • SSH session activity
  • Uses of the SUDO command
  • Disk space levels

After a few days, you will be able to scan through the email in a few seconds and understand that your host is operating normally. And of course, if you detect any problems, having a few of these emails to look back through for changes is also very valuable.

Well, the fundamentals are never super exciting. But these steps will put you well on your way to securing and hardening your host. In part 2, properly configuring and hardening SSH will be discussed.

Cheers, Erik