Category Archives: Tools, Tips, and Techniques

Auditing Time…

Time is critical in security systems; specifically, having systems know the correct time is very important. Adequate clock synchronization is important for:

  • Operational Integrity (things happen when they are supposed to happen – backups, tasks, etc.)
  • Reproducibility of events (meaningful logs and records)
  • Validation of SSL certificate expiration (or other tokens, etc.)
  • Correct application of time restricted controls
  • Etc.

So, the big question is, what is “adequate clock synchronization”, and how do we achieve it ?

But First, What Time Is It ?

Time itself is of course a natural phenomenon. Just like distance, volume, and weight, the measurements for time are artificial and man-made.  The dominant time standard (especially from a computer and therefore Information Security perspective) is Coordinated Universal Time (UTC). This could probably have been called Universal Compromise Time, as it turns out that getting the whole world to drop their cultural biases, deployed technology, etc. and move to a single time system has been a long and complicated road (and it isn’t over yet).

One major component of UTC is agreement on what time it in fact is, and how that is determined. There are also questions surrounding how to handle leap seconds, leap years, and other “measurement vs reality” anomalies. Time (and its measurement) is quite complex in itself, but for the purposes of Information Security (system operation, log correlation, certificate expiration, etc.), the good news is that UTC provides a solid time standard.

Now, all we need to do is synchronize our clocks to UTC !
(and adjust for our local time zone…)

Network Time Protocol (NTP)

Network Time Protocol (NTP) is a well established, but often misconfigured and misunderstood, internet protocol. NTP utilizes Marzullo’s Algorithm to synchronize clocks in spite of the fact that:

  • The travel time for information passed between systems via a network is constantly changing
  • Remote clocks themselves may contain some error (noise) vs UTC
  • Remote clocks may themselves be using NTP to determine the time

In spite of this, a properly configured NTP client can synchronize its clock to within 10 milliseconds (1/100 s) of UTC over the public internet. Servers on the same LAN can synchronize much more closely. For Information Security purposes, clock synchronization among systems and to UTC within 1/5 or 1/10 of a second should be sufficient.

Classic Misconfiguration Mistakes (and how to avoid them)

The misconfiguration mistakes that folks make tend to be the result of:

  • Overestimating the importance of Stratum 1 servers
  • Over-thinking the NTP configuration

NTP servers are divided into strata based on their time source. A Stratum 1 server is directly connected to a device that provides a reference time. Some examples of reference time sources include:

  • Atomic clocks
  • GPS receivers
  • CDMA (cellular network) time receivers

NTP servers which synchronize with a Stratum 1 time source are Stratum 2 servers, with the Stratum number increasing by one for each level.

Big Mistake – Using a Well Known NTP Reference

The most frequent mistake people make when configuring NTP on a server is assuming that they need to use (or will get the best time synchronization from) one of the well known atomic clock sources. This tends (though not always) to be a bad idea because it overloads a small number of servers. Also, a server with a simpler network access path will generally provide better synchronization than a more remote one.

When configuring the NTP protocol, it is a good idea to specify several servers. The general rule of thumb is 2-4 NTP servers. If everyone specifies the same servers, then those servers become overloaded and their response times become erratic (which doesn’t help things). In some cases, an unintended denial of service attack is caused.

Both Trinity College of Dublin, Ireland and the University of Wisconsin at Madison experienced unintended denial of service attacks caused by misconfigured product deployments. In the case of the University of Wisconsin at Madison, NETGEAR shipped over 700,000 routers which were set-up to all pull time references from the university’s servers. NETGEAR is not the only router or product manufacturer to have made such an error.

Enter the NTP Pool…

“The pool.ntp.org project is a big virtual cluster of timeservers striving to provide reliable easy to use NTP service for millions of clients without putting a strain on the big popular timeservers.” quoted from pool.ntp.org

Basically, the NTP pool is a set of over 1500 time servers, all of which are volunteering to participate in a large load-balanced virtual time service. The quality and availability of the time service provided by each of the NTP servers in the pool is monitored, and servers are removed if they fail to meet certain guidelines.

Unless a system itself is going to be an NTP server, use of the NTP Pool is your best bet 100% of the time. It is a good idea to use the sub-pool that is associated with your region of the globe. Here is a sample configuration (/etc/ntp.conf file):

server 0.us.pool.ntp.org
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
server 3.us.pool.ntp.org

It may not be necessary for you to run the NTP service itself. Running the ntpdate command at boot and then in a cron job once or twice a day may be sufficient. The command would look like:

ntpdate 0.us.pool.ntp.org 1.us.pool.ntp.org 2.us.pool.ntp.org 3.us.pool.ntp.org
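
For example, a root crontab entry (added via crontab -e) that runs this twice a day might look like the following; the schedule is an assumption, pick times that suit you:

# sync the clock at 05:17 and 17:17 every day
17 5,17 * * * ntpdate 0.us.pool.ntp.org 1.us.pool.ntp.org 2.us.pool.ntp.org 3.us.pool.ntp.org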

If you do need to install ntp on Ubuntu, the commands are:

sudo apt-get install ntp

and then edit the /etc/ntp.conf file and add the server lines from above. On my OSX workstation, the entire /etc/ntp.conf file is:

driftfile /var/ntp/ntp.drift

server 0.us.pool.ntp.org
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
server 3.us.pool.ntp.org

Overthinking the Configuration

The “server” parameter in the configuration file has a number of additional directives that can be specified. These are almost never needed, but can generate a lot of extra traffic on the NTP server. Avoid over thinking the server configurations and avoid using prefer, iburst, or burst.

When Should I Run NTP Service Rather Than Use The NTPDate Command ?

There is almost no downside to running the NTP service; it is very low overhead and generates almost no network traffic. That being said, the only real downside to running the ntpdate command a few times a day instead is that the clock can drift more between runs. If I were performing an audit, and the shop practice was to use ntpdate on everything except infrastructure service machines (directory servers, syslog concentrators, etc.), I would accept that practice. I would be more concerned about how time synchronization was being managed on HSMs, directory services, NIDS, firewalls, etc.

When Should I Run My Own NTP Server ?

There are two cases when you should consider running your own server:

  • You have a large number of machines that need time services
  • You wish to participate in the NTP Pool

In both cases, your options for sourcing time for your server are:

  1. Purchase a time reference (such as a GPS card)
  2. Arrange for authenticated NTP from a Stratum 1 server
  3. Sync with local (short network hop) servers

A Stratum 1 time server appliance or a GPS/CDMA card can be purchased for costs similar to a rack mounted server (of course you will need two). If that is just out of the (budgetary) question, then I would look for the time servers to use authenticated time sources. NIST and several other Stratum 1 NTP providers have servers which are only available to folks who have requested access, and are authenticating to the server. If time accuracy is critical to risk management, and GPS/CDMA is not available, then I would push for authenticated NTP.
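
For reference, symmetric-key authentication in /etc/ntp.conf looks roughly like the following sketch; the server name and key id are placeholders, and the actual key material would come from the time provider:

# file containing "keyid type secret" entries, e.g. "1 M sharedsecret"
keys /etc/ntp/keys
# mark key id 1 as trusted
trustedkey 1
# authenticate packets from this server with key 1
server time.example.gov key 1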

Option 3 is acceptable in the vast majority of situations, including cases where logs and events are only correlated locally, or where no compelling need exists.

NTP and Network Security

NTP uses UDP on port 123. This traffic should be restricted in DMZ or other secure network zones to only route to authorized NTP servers. Tools like hping can be used to turn any open port into a file transfer gateway or tunnel.

One option is to set-up a transparent proxy on your firewalls and to direct all 123/UDP traffic to your NTP server or to one you trust. (The risk of the open port involves providing a data path out of the organization, not rogue clocks…)
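
As a rough iptables sketch of the “only route to authorized NTP servers” idea (the address 192.0.2.10 is a placeholder for your approved time server):

# allow NTP only to the approved time server, drop all other outbound 123/udp
iptables -A OUTPUT -p udp --dport 123 -d 192.0.2.10 -j ACCEPT
iptables -A OUTPUT -p udp --dport 123 -j DROP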

Cheers,

Erik

AoIS Interviews Michael Rash, Part 3

The Art of Information Security continues our interview with Michael Rash, Network Security expert and the driving force behind several open source security tools including PSAD, FWSnort, and FWKnop.

In Part 2 of the interview Michael discussed how network threats, and network counter measures have been evolving. He also touched on the development of his book. Here goes the final installment in this series…

Erik: What would be your recommendations for folks who are adopting Linux (either enthusiasts or corporations) in terms of properly protecting their hosts and networks from network attacks?

Michael: I think that deploying host and network firewalls is a great first step here, and iptables functions admirably. Many people in corporate environments are concerned about the questions of performance, manageability, scalability, and support, and iptables together with some third party software have decent answers to these concerns. For example, the fwbuilder project provides good graphical support for the display and manipulation of iptables policies, and large Linux distributions such as Red Hat and SuSE offer commercial support.

Beyond having proper firewalls deployed, intrusion detection systems are a critical piece to point the way to attempted (and sometimes successful) compromises. Also, strong security mechanisms such as SELinux can provide a powerful barrier to attempted malicious usages of hosts. Finally, patch early and patch often.

Erik:  Do you have any tool or reference recommendations for debugging IP tables firewalls?

Michael: For debugging iptables policies and maintaining tight controls on the type of packets that are allowed to traverse those policies, one of the best techniques is to use tcpdump either on the end points or on the firewall itself (and these may be the same system) and watch how network traffic is allowed to progress. For example, a SYN packet to a port that is filtered will not respond either with a SYN/ACK or a RST, and seeing this behavior with tcpdump is quite easy. At the same time, understanding where in an iptables policy packets are getting dropped (or otherwise messed with) is usually made clear by watching how packet and byte counters are incremented on particular iptables rules. Use ‘iptables -v -n -L’ for this, and couple this with the ‘watch’ command to see how things change. Beyond this, if you have a kernel compiled with support for the iptables TRACE target, then you can use an iptables TRACE rule that causes all packets hitting this rule to be logged. Lastly, for really advanced debugging of iptables code itself, the nfsim project provides a simulator for running Netfilter code within userspace (and hence the ability to test code before running it within the kernel itself where a bug can have dire consequences). The nfsim project can be found here:

http://ozlabs.org/~jk/projects/nfsim/
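
To try the counter-watching and TRACE techniques described above, a minimal sketch (the port and interface names are assumptions):

# watch per-rule packet and byte counters refresh once per second
watch -n 1 'iptables -v -n -L INPUT'

# log every packet destined for port 22 via the TRACE target (raw table, requires kernel support)
iptables -t raw -A PREROUTING -p tcp --dport 22 -j TRACE

# confirm that a SYN to a filtered port gets neither a SYN/ACK nor a RST back
tcpdump -ni eth0 'tcp port 22'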

Erik: So, you obviously are deeply connected to all things Network IDS/IPS. What kinds of trends have you seen in 2008? Were there any new attack styles that surprised you? Do you have any ideas about what 2009 may hold?

Michael: Well, 2008 will certainly go down in history as the year that people were forced to really pay attention to DNS by the Kaminsky attack. One thing Dan did really well is make it clear just how important DNS is for literally everything on the Internet, and how a flaw there has implications that are difficult to overestimate. Online banking, acquiring SSL certificates, SMTP, “forgot my password” links, and countless other infrastructures depend on DNS information being correct. But, then there were also serious issues in 2008 with BGP and with SSL, so if there was any trend in 2008 I would say that it was the year of security flaws in big Internet infrastructures. In 2009, it will be interesting to see whether this trend remains true for as-yet undiscovered vulnerabilities in other important systems.

Erik: Has your support for open source helped you professionally?

Michael: Absolutely. My current position as a Security Architect on the Dragon IDS/IPS developed by Enterasys Networks is a role that my open source work helped me to acquire. Many forward looking innovations are created by the open source community, and understanding this community helps to guide many companies and the products they develop. Companies are recognizing the power of open source software more and more, and this translates to better professional positions for open source developers and technology enthusiasts.

Many Thanks to Michael !

Thanks a ton for the time and energy you put into this, the first of what I hope will be many interviews with notables from around the Information Security community.

Thanks, Erik

Even more SSH – Great Article on /dev/random

Quick update to Part 2 of the AoIS Secure Your Linux Host Series on SSH.

I noticed a great article today on Xavier Mertens’ /dev/random blog (which, by the way, has several great posts that have caught my eye…), on SSH tunneling -> “Keep an Eye on SSH Forwarding“.

In addition to providing a solid introduction to SSH Port Forwarding, Xavier also discusses:

  • Using SSH as a SOCKS Proxy via the SSH Server
  • Logging port forwarding
  • Restricting  ports that can be forwarded

Check it out.

Cheers, Erik

AoIS Interviews Michael Rash, Part 2


The Art of Information Security continues our interview with Michael Rash, Network Security expert and the driving force behind several open source security tools including PSAD, FWSnort, and FWKnop.

In Part 1 of the interview Michael discussed how he came to be involved in Network Security and Intrusion Detection system design. Here in Part 2 we get a little deeper into Michael’s philosophy on Network Intrusion Protection and discuss more of the open source tools whose development and support he is involved in.

Erik: How do you see network based attacks changing ?

Michael: Over time, I think network based attacks will continue to become more automated and therefore accessible and deployable by more people. When it comes to educating oneself on the details of network insecurity, excellent projects such as Metasploit, Nessus, and Nmap point the way – and this is essential for people trying to defend networks too. We will see more attacks delivered over IPv6, and we will see ever more clever ways to exploit the natural tendency of people to trust data in ways they shouldn’t. For me, as a person trying to protect networks, the latter is the most worrisome. A good example of a new and clever attack is “in-session phishing” as described here (Arstechnica link).

Erik: The firewalls that I run are utilized as host based protection. As you see network security becoming increasingly important, do you see the firewall “concept” become a hybrid of network protection layered over host based network controls?

Michael: With good firewall implementations (such as iptables) that do not place undue burdens on network processing that takes place on hosts, I do believe that firewalls will be viewed more and more as an essential protection mechanism for the host. The network perimeter will also continue to be an important deployment point for large firewalls to enforce global policy, but limiting the damage a successful exploit against an internal system can do is a problem that such an external firewall is not well-suited to address. Having a hardened network security stance on each host can provide an important benefit in this area. Further, as firewalls offer more application layer processing features, hosts can deploy customized policies that define sets of application layer data (derived from Snort rules) that are unfit for communicating with local sockets.

There are challenges though regarding managing all of those host-level firewall policies, and this is where some patience and scripting ability can play a role.

Erik: And then came FWSnort? What were the principles that drove the development of FWSnort ?

Michael: The fwsnort project was inspired originally by the snort2iptables script written by William Stearns. This was back in the Linux 2.4 days when the string match extension was still distributed within the patch-o-matic system from the Netfilter project. Being interested in intrusion detection and firewalls at the same time, it was a goal of mine to see how far iptables could be taken in the direction of detecting (and blocking) malicious traffic. The snort IDS had a well-developed signature language, and at that time the signatures were still free and released under the GPL. So, it was natural to try and extend the snort2iptables code, and fwsnort was created.

The main goal of fwsnort is to use facilities provided by iptables to recast Snort signature sets within iptables policies. A clean translation is not always possible particularly with complex Snort signatures that use regular expression matching (because no regex engine is available to the iptables code running in the kernel), but many Snort signatures can faithfully be translated.
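
As a hand-written illustration (not actual fwsnort output) of recasting a simple Snort-style content match as an iptables rule, where the pattern and port are assumptions:

# drop web requests whose payload contains a suspicious content string,
# using the iptables string match extension with the Boyer-Moore algorithm
iptables -A INPUT -p tcp --dport 80 -m string --string "/etc/passwd" --algo bm -j DROP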

Erik: Was your vision that PSAD and fwsnort would team up as a host IDS dynamic duo, or more as services that strengthen network firewalls?

Michael: Ideally I would say both here. The difference between the two types of deployments is negligible from psad and fwsnort’s perspectives – both can be deployed just as effectively against the iptables INPUT chain (for packets directed at the local system) as the FORWARD chain (for packets directed through a network firewall). The effect of not deploying host firewalls is that the outside of the network may be protected by a crunchy shell, but the inside is a chewy center. If any system can be compromised internally on such a network, an attacker is presented with few barriers to additional actions once the perimeter is breached.

 Erik: But wait – there’s more ! You are also the driving force behind FWKnop !

Michael: Thanks for mentioning fwknop. This project has received a large percentage of my attention in the last year or so. It was started originally in 2004 as the first port knocking system that added passive OS fingerprinting as an authentication parameter, but in 2005 Single Packet Authorization was added. SPA solves many of the protocol limitations that are built into port knocking (ease of replay attacks, lack of decent data transmission, and difficulty of scaling to many users), and takes the idea of “default-drop” to a new level. That is, a service such as SSH is itself made completely inaccessible before the lightweight SPA packet is passively sniffed and the firewall is reconfigured to allow access only if the SPA packet is valid. This essentially combines techniques from the IDS world (passive packet sniffing) with techniques from the authentication and authorization world (encryption and the like).
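
For readers curious what SPA looks like in practice, a typical fwknop client invocation is along these lines (the host name and IP are placeholders, and the exact option letters may vary by version – check the fwknop documentation):

# request access to SSH on the SPA server; the packet is sniffed passively
# and, if valid, the server's firewall opens for this source IP only
fwknop -A tcp/22 -a 1.2.3.4 -D spa-server.example.com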

Erik: And how did the book come to be ?

Michael: I have generally tried to capture my thoughts on computer security by writing them down. In 2001 I started writing articles, and wrote a few for the Linux Journal after working with Jay Beale on the Bastille Linux project. From there, I joined Jay with writing material for Snort books for Syngress. My open source development interest has always remained in IDS and firewall technologies, so I eventually decided to write a book about the two together. The result was the No Starch book. Let me just mention here that if any of your readers is interested in writing a book, I can wholeheartedly recommend No Starch as an absolutely fantastic publisher to work with.

Stay Tuned for Part 3

Part 3 of this series is coming soon, with more discussion about network security as well as the impact that contributing to open source tools has had on Michael’s professional opportunities.

Cheers, Erik

 

More SSH Anyone ?

Two Quick updates to Part 2 of the AoIS Secure Your Linux Host Series on SSH.

Interesting Series by ISS X-Force on SSH

Just this morning I ran across a three part series on SSH published last year in IBM’s Internet Security Systems X-Force Threat Insight in the following issues:

X-Force expresses a slightly different set of concerns, and solutions. One topic that I did not touch on was the use of ssh agents for the management of sessions. Part 3 (June) is almost entirely focused on that.

Logwatch Samples

One of the great things about the script kiddies is that they keep testing your security for you ! 😉 Below is a mash-up and edit-down of the last few days of ssh related items from my logwatch logs. Logwatch really has become one of my favorite tools. I don’t have tons of attacks on my servers, but there is always enough activity in the logs to let me know that the controls and countermeasures are up and running. After installing fail2ban, I always have some activity in any 24 hour period of time.

And a tip for the paranoid – if you have Failed logins and Illegal users but no fail2ban activity – then fail2ban has stopped running (or worse…).

——————— fail2ban-messages Begin ————————
Banned services with Fail2Ban:
Bans:Unbans  
ssh: [ 6:6 ]  
ssh: [ 4:7 ]  
ssh: [ 6:5 ]
ssh: [ 5:3 ]
———————- fail2ban-messages End ————————-

——————— SSHD Begin ————————
Failed logins from:
75.xxx.109.82 (75-xxx-109-82-Indianapolis.hfc.comcastbusiness.net): 1 time
79.xxx.248.27 (host27-xxx-static.38-79-b.business.telecomitalia.it): 1 time
200.xxx.209.156 (dedint-200-xx-209-156.mexdf.axtel.net): 3 times
59.xxx.92.26: 6 times
88.xxx.16.23 (…): 7 times
119.xxx.154.57: 6 times
203.xxx.198.3 (…): 6 times

Illegal users from:
60.xxx.249.90 (…): 3 times
75.xxx.109.82 (…): 3 times
79.xxx.248.27 (…): 3 times
200.xxx.209.156 (…): 2 times
202.xxx.28.244 (…): 3 times
85.xxx.133.177: 4 times
193.xxx.161.136: 4 times
———————- SSHD End ————————-

Cheers, Erik

Secure Your Linux Host – Part 2: Secure SSH

SSH is the preferred (perhaps de facto) remote login service for all things UNIX. The old-school remote login was telnet. But telnet was completely insecure.  Not only was the confidentiality of the session not protected, but the password wasn’t protected at all – not weak protection – no protection.

And so SSH (aka Secure Shell) was developed… But it has not been without its failings. There are two “flavors” of SSH: Protocol 1 and 2. Protocol 1 turned out to have pretty serious design flaws. The hack of SSH using the Protocol 1 weaknesses was featured in the movie Matrix Reloaded (Trinity hacking ssh with nmap). So, by 2003, the flaws and the script kiddie attack were understood well enough to have the Wachowski Brothers immortalize them.

Another concern to watch out for is that SSH has port-forwarding capabilities built into it. So, it can be used to circumvent web proxies and pierce firewalls.

All in all though, SSH is very powerful and can be a very secure way to remotely access either the shell or (via port forwarding) the services on your host.

For additional information on SSH’s port-forwarding capabilities:

Be aware that SSH is part of a family of related utilities; check out SCP, too.

Configuration

After installing the SSH server (perhaps: apt-get install openssh-server), you will want to turn your attention to the configuration file /etc/ssh/sshd_config

Here are a few settings to consider:

Protocol 2
PermitRootLogin no
Compression yes
PermitTunnel yes
Ciphers aes256-cbc,aes256-ctr,aes128-cbc,aes192-cbc,aes128-ctr
MACS hmac-sha1,hmac-sha1-96
Banner /etc/issue.net

  1. The “Protocol” setting should not include “Protocol 1”. It’s broken; don’t use it.
  2. PermitRootLogin should never be “yes” (so, of course that is the default !). The best option here is “no”, but if you need or want to have direct remote root access (perhaps as a rescue account), then the “without-password” option is better than “yes”. The without-password option will force you to set up and use a certificate (key pair) to authenticate access.
  3. Unless your host’s CPU is straining to keep up, turn on compression. Turn it on especially if you are ever using a slow network connection (and who isn’t).
  4. If you are not going to access services remotely using SSH as sort of a micro-VPN, then set PermitTunnel to “no”. Because I use the tunneling feature, I have it turned on.
  5. OK; I work and consult on cryptographic controls, so I restrict SSH to the FIPS 140-2 acceptable encryption algorithms.
  6. Likewise, I restrict the Message Authentication Codes (MACS) to stronger hashes.
  7. Some jurisdictions seem to not consider hacking a crime unless you explicitly forbid unauthorized access, so I use a banner.

Sample Banner

It seems that (at least at one point in the history of law & the internet) systems which did not have a login banner prohibiting unauthorized use may have had difficulty punishing those that abused their systems. (Of course, it is pretty hard to do so anyway, but…) Here is the login banner that I use:
* - - - - - - - W A R N I N G - - - - - - - - - - W A R N I N G - - - - - - - *
*                                                                             *
* The use of this system is restricted to authorized users. All information   *
* and communications on this system are subject to review, monitoring and     *
* recording at any time, without notice or permission.                        *
*                                                                             *
* Unauthorized access or use shall be subject to prosecution.                 *
*                                                                             *
* - - - - - - - W A R N I N G - - - - - - - - - - W A R N I N G - - - - - - - *

Account Penetration Countermeasures

Within hours of establishing an internet accessible host running SSH, your logs will start to show failed attempts to log into root and other accounts. Here is a sample from a recent Log Watch report:

--------------------- SSHD Begin ------------------------
Failed logins from:
58.222.11.2: 6 times
211.156.193.131: 1 time
Illegal users from:
60.31.195.66: 3 times
203.188.159.61: 1 time
211.156.193.131: 3 times
Users logging in through sshd:
myaccount name:
xx.xx.xxx.xx: 3 times
---------------------- SSHD End -------------------------

One of the most effective controls against password guessing attacks is locking out accounts after a predetermined and limited number of password attempts. This has a tendency to turn out to be a “three strikes and you’re out” rule.

The problem with applying such a policy with a remote service, like SSH, as opposed to your desktop login/password, is that blocking the password guessing attack becomes a Denial of Service attack. Any known (or guessed) login ID on the remote machine will end up being locked out due to the remote attacks.

Enter Fail2ban: Rather than lock out the account, Fail2ban blocks the IP address. Fail2ban will monitor your logs, and when it detects login or password failures that are coming from a particular host, it blocks future access (to either that service or your entire machine) from that host for a period of time. (Oh, and you may notice I said blocks access to the “service”, and not “SSH” – that’s because Fail2ban can detect and block Brute Force Password attacks against SSH, apache, mail servers, and so on…)

How to Forge has a great article on setting up Fail2ban – Preventing Brute Force Attacks With Fail2ban – check it out.

One tweak for now. As I tend to use certificate authentication with SSH (next topic), I rarely am logging in with a password. As a result, I tend to use a bantime that is long, ranging from a few hours on up. Three guesses every few hours really slows down a Brute Force Attack! Also, check out the ignoreip option, which can be used to make sure that at least one host doesn’t get locked out. (You can lock yourself out with Fail2ban… I have done it…)
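
As a rough sketch of what those settings look like in /etc/fail2ban/jail.local (the values and trusted host shown are assumptions – tune them for your environment):

[DEFAULT]
# never ban localhost or a trusted admin host
ignoreip = 127.0.0.1 10.0.0.5
# ban offending IPs for four hours (value is in seconds)
bantime = 14400
maxretry = 3

[ssh]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log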

SSH Certificate Based Authentication Considerations

Secure Shell offers the ability to use certificate based authentication with a self-signed certificate. There are two ways you might consider using this:

  1. With a password protecting the private key
  2. With no password required

Please note: When you establish certificate based authentication with SSH, you will generate a public/private key pair on your local computer. Only the public key is copied up to the server you wish to access. The private key always stays on your local computer.

During the process of generating the private and public key pair, you will be asked if you want to password protect the private key. Some things to consider:

  • Will this ID be used for automated functional access ?

If you are creating the certificate based authentication so that a service can access data or run commands on the remote machine, then you will not want to password protect the local file. (If you do, you will end up including the password in the scripts anyway, so what would be the point?)

Personally, I have backup scripts which either pull data or snapshots on a regular basis. Google “rsync via ssh” for tips on this, or “remote commands with ssh” for tips and ideas. (Also, I may cover my obsessive compulsive backups in a later post.)
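
As a hypothetical example of the kind of rsync-over-SSH pull those scripts perform (the user, host, and paths are made up):

# pull a copy of the remote web root over SSH, preserving permissions and timestamps
rsync -avz -e ssh backupuser@www.example.com:/var/www/ /backups/www/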

  • This ID will be used for a rescue account

In this case the certificate is usually created to avoid password expiration requirements. If it is a rescue account, it often logs into root. Any time you use certificate access for root, the private key should be password protected. Rescue accounts are often stored on centralized “jump boxes” and are expected to only be used during a declared emergency of some kind (such as a full system lockout due to a password mis-synchronization).

These private keys should always be password protected.

If someone has access to backups or disk images of the jump box, or otherwise gets access to your .ssh directory, and you have not password protected the private key, then they own the account (e.g., they can use the public/private key pair from any box).

  • Convenient remote logons…

The most common use of certificate based authentication for SSH is in fact to log you into the remote box without having to type passwords. (I do this, too…) But there are a few things to think about (these are all good general recommendations, but I consider them requirements when using an automated login…)

  1. Automatic login should never be used on a high-privilege account (e.g., root)
  2. If those accounts have sudo privileges, sudo should require a password
  3. A new certificate (public and private key pair) should be created for each machine you want to access the remote server from (e.g., desktop, laptop, etc.).  Do not reuse the same files.
  4. The certificate should be replaced occasionally (perhaps every 6 months).
  5. Use a large key and use the RSA algorithm option (e.g., ssh-keygen -b 3608 -t rsa)

SSH Certificate Based Authentication Instructions

So, without further ado… Let’s set up a Certificate for authentication.

Part 1 – From the client (e.g. your workstation, etc…)

First, confirm that you can generate a key.

$ ssh-keygen --help

The options that are going to be of interest are:

  • -b bits  Number of bits in the key to create
  • -t type  Specify type of key to create

DSA type keys, you will note, have a key length of exactly 1024. As a result, I choose RSA with a long key. My recommendation is that you take 2048 as a minimum length. I am pretty paranoid, and I have a strong background in cryptography, but I have never used a key longer than 4096.

The longer the key, the more math the computer must perform while establishing the session. After the session is established, then one of the block-ciphers discussed above performs all of the crypto. If you are making a key for a slow device (like a PDA) or a microcontroller based device, then use a shorter key length. Regardless, actually changing the keys regularly is a more secure practice than making a large one that is never changed.

$ ssh-keygen -b 3608 -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/erikheidt/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/erikheidt/.ssh/id_rsa.
Your public key has been saved in /Users/erikheidt/.ssh/id_rsa.pub.
The key fingerprint is:
43:69:d8:8e:c4:af:f8:8b:5a:2d:db:75:91:fd:06:be erikheidt@Trinity.local
The key's randomart image is:
+--[ RSA 3608]----+
|                 |
|     . o .       |
|      + =        |
|     . *   o     |
|      . S o o    |
|     o . . o o   |
|    + o . . . o  |
|   . * . .   o   |
|  ..o +.    E    |
+-----------------+

Now, make sure your .ssh directory is secured properly…

$ chmod 700 ~/.ssh

Next, you need to copy the public key (only) to the server or remote host you wish to login to.

$ cd ~/.ssh

$ scp id_rsa.pub YourUser@Hostname:

(Note the trailing colon – it tells scp to place the file in your home directory on the remote host rather than creating a local file named after the host.)

Now we have copied the file up to the server….

Part 2 – On the Server or remote host….

Logon to the target system (probably using a password) and then set things up on that end…

$ ssh YourUser@Hostname

$ mkdir .ssh
$ chmod 700 .ssh
$ cat id_rsa.pub >> ~/.ssh/authorized_keys

Done ! Your next login should use certificate based authentication !

I hope this posting on SSH was useful.

Cheers, Erik

Being Probed for phpMyAdmin ?

In Secure Your Linux Host – Part 1 I recommended using Log Watch to keep an eye on what may be happening with your host. Well, today’s review of my own Log Watch indicates that I am being probed for phpMyAdmin. (Someone wants to abuse my database…)

Here is a sample from the log:

401 Unauthorized
/admin/mysql/main.php: 1 Time(s)
/admin/php-my-admin/main.php: 1 Time(s)
/admin/php-myadmin/main.php: 1 Time(s)
/admin/phpMyAdmin-2.2.3/main.php: 1 Time(s)
/admin/phpMyAdmin-2.2.6/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.1/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.4/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5-pl1/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5-rc1/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5-rc2/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.5/main.php: 1 Time(s)
/admin/phpMyAdmin-2.5.6-rc1/main.php: 1 Time(s)

Now I have seen activity like this before, but I thought this provided a good example of the increased awareness that scanning through the Log Watch report can provide.

This also provides some solid data in support of having some other controls in place if you are in fact running phpMyAdmin (or even MySQL). Most of the time the passwords that are used to access the content of databases are not used by humans – they are stored in the properties files of the applications that are using the database.

Ok, So Your Logs are Letting You Know What is Being Probed, Now What ?

This awareness allows you to make sure that you are adequately protecting that which is being attacked.  In this case, I already have controls in place to manage this risk. Let’s discuss them.

Lock Down Web Access to Administrative Tools

phpMyAdmin (usually) requires a password (more on that in a second), but you can also add an additional layer of security to your web-based administrative services by adding authentication at the http server itself.

Apache has a nice tutorial: Authentication, Authorization and Access Control

If you run web-based administrative tools, you may wish to lock down the web paths that contain them. In addition to providing a first line of defense, this will reduce the information available to attackers during the reconnaissance portion of their attacks.

If you lock down “mywebsite.com/admin” as described in the Apache How-To above, and you have additional directories under this “mywebsite.com/admin/phpMyAdmin” and “mywebsite.com/admin/keys2Kingdom” , they will not be visible to the attacker (until they guess the password…).
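
For reference, a minimal Basic Auth lock-down along those lines might look like the following (the paths and user name are assumptions; see the Apache tutorial above for the full details):

# create the password file with a first user
htpasswd -c /etc/apache2/.htpasswd adminuser

# then, in the Apache configuration (or an .htaccess file) for the admin directory:
<Directory "/var/www/mywebsite.com/admin">
    AuthType Basic
    AuthName "Restricted Admin Area"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user
</Directory>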

Confirm Strong Passwords

Functional IDs (also called service accounts) are used for application to application (e.g. wordpress to MySQL) authentication, and are (and should be) only handled by humans during installation and maintenance activities. Functional IDs should be long, very random, and not contain words or memorable substrings. (Functional accounts often do not have password retry limits, which heightens the importance of the strength of the password.)

I sometimes use the GRC Strong Password Generator (ah yeah, my own site gotentropy.artofinfosec.com is down right now…). You can also generate strong passwords using openSSL from the Linux command line:

openssl rand -base64 60

In both cases I prefer to cut and paste a long substring of 40 to 50 characters  (dropping a few characters off both ends, especially the “==” base64 termination marker from the openssl command), and then adding a few characters of my own.

Now, I would never expect an application user to type a 40+ character password. But for a Functional ID – why not ? The passwords for the MySQL root account and all db user IDs should be very complex and long, especially if the host is internet accessible. (If you are using phpMyAdmin, it has a very good password generator included in the “Add User” functionality.)

We will be discussing other ways to protect password based systems from remote attacks in “Secure Your Linux Host – Part 2”… Out soon…

Cheers, Erik

( Part of the Secure Your Linux Host series…)

Secure Your Linux Host – Part 1: Foundations…

Bot-net masters, phishers, spammers, hackers, etc., all need insecure hosts on the internet that they can find, break into, and bend to their will. I was in a meeting one day with a very frustrated retail banking executive who wanted to know who was providing all of the computers, all over the globe, to the phishing and spamming teams that were attacking his bank’s on-line services. The bad news that I had to give him was that the root causes of the problem were people not taking security basics seriously – basics like patching their systems promptly, and using strong passwords.

The most shocking statistics that come up over and over again in the Information Security and Risk Management research:

  • 75% of Data Loss events involve an insider
  • 75% of the insider’s actions were negligent and not self-serving or malicious

This means that over half (0.75 × 0.75 ≈ 56%) of Data Loss events would not have occurred but for incompetent or naive personnel.

As an Information Security professional, I have no delusions that the internet underground is going to someday run out of computers susceptible to the script kiddie attacks that are dependent on weak security practices, but I do believe you can protect your host – if you choose to. So I decided to kick off this series on AoIS, from one Linux enthusiast to another. Oh, and FYI, currently I am running Ubuntu as my ‘distribution of the moment’.

Foundations

The low-hanging fruit in securing your host is all pretty basic stuff. Here is the list:

  • Set the root password to something very long and complex
  • Forbid remote access to root (Part 2 will cover this for ssh)
  • Update and patch your host early and often
  • Set-up Mail Transfer Agent (MTA), and forward root’s email…
  • Install Log Watch

Complex Root Passwords

Ok – you would think that this has been beaten into the ground, but the data shows that there are lots of systems which (1) allow remote root access and (2) aren’t using very hard-to-guess passwords.

  • Set a long and complex password
  • Stay tuned for properly secured remote access with SSH (part 2)

Of course, this should include not just root but also accounts with root-like privileges (e.g., those that can run sudo). Many attacks on privileged accounts make assumptions about account names. Set up a personalized account on the host, and then grant it sudo privileges. (Oh, and “admin” is a great name for that account, but your name will work just fine. It seems that the attack scripts don’t yet troll the public information about the box for names…)

Early and often?

The first thing I do after I install and boot a new host is update it.

# apt-get update
# apt-get upgrade

No brainer there…

I use the following variation in a Cron job to regularly update all of the packages on the box. There are a few tutorials on applying only the security patches, but I choose to go ahead and update all packages.

su --login --command 'apt-get update -qq; apt-get upgrade -q -y; exit'

The -q and -qq flags suppress messages, shortening the output. The Cron facility will automatically  forward that output to the root user.  The -y option tells apt-get to assume a Yes answer for any of the questions.

The “su –login –command” provides context to the command so that it can operate properly (and this results in the “stdin: is not a tty” notice below…).
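
For reference, a weekly root crontab entry (added via crontab -e as root) for this might look like the following; the day and time are assumptions:

# run the unattended update every Sunday at 04:00
0 4 * * 0 su --login --command 'apt-get update -qq; apt-get upgrade -q -y; exit'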

This results in an email from Cron that looks something like this:

From: Cron Daemon
Date: Tue, Dec 23
Subject: Cron su –login –command ‘apt-get update -qq; apt-get upgrade -q -y; exit’
To: root

stdin: is not a tty
Reading package lists…
Building dependency tree…
Reading state information…
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

So, how often? Automated beats manual. A weekly scheduled task is so much better than the ‘intention to log in routinely and update the box’. You have to balance the risks associated with the update itself interfering with the services the box provides against the risks of not having up-to-date patches.

One thing that is critical -> if you are using automated patch updates, you need to either get emails with the update output or log this data to a fixed location. In the event that one of these updates causes you a problem, you want to know exactly where to look to aid in your trouble-shooting.

Set-up Mail Transfer Agent (MTA)

System services will send email to root whenever something isn’t right. So, unless you are going to do some very aggressive log monitoring, getting those messages forwarded to your actual email box will be invaluable. Not just from a security perspective, but also to understand the operational integrity (aka health) of the system.

Unfortunately, details of properly setting up a Mail Transfer Agent are beyond the scope of what I can explore in a blog posting. For our purposes, the goals are:

  • Set-up an MTA that will work with your ISP/hosting provider
  • Configure .forward file for the root account
  • Make sure the MTA is NOT an “open relay”

I use Postfix in a pretty trivial configuration. If you research “install postfix” on Google, it will at first seem like you are embarking on a complex journey. Not so. Those posts are directed toward people who want to set up complete email solutions. The only goal from a security and operational awareness point of view is that email messages that get sent to root get forwarded to you, the system admin. To accomplish that, the steps should go something like:

  • Install Postfix
  • Select either “Internet Site” or “Smart Host”
  • Continue through the configuration wizard

(and of course, the devil is in the details of that last step…)
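
Once Postfix is working, forwarding root’s mail is a one-liner; either of these approaches should do (the email address is a placeholder):

# option 1: a .forward file in root's home directory
echo "you@example.com" > /root/.forward

# option 2: add or edit the root entry in /etc/aliases...
#   root: you@example.com
# ...and then rebuild the alias database
newaliases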

Install Log Watch

# apt-get install logwatch

Now that you have the required facilities to have email sent from your host, install Logwatch. The default installation on Ubuntu will automatically send a daily email with tons of useful information, including:

  • Summaries of the logs of utilities and services (postfix, httpd, etc.)
  • PAM unix authentication attempts and failures
  • SSH session activity
  • Uses of the SUDO command
  • Disk space levels

After a few days, you will be able to scan through the email in a few seconds and understand that your host is operating normally. And of course, if you detect any problems, having a few of these emails to look back through for changes is also very valuable.

Well, the fundamentals are never super exciting. But these steps will put you well on your way to securing and hardening your host. In part 2, properly configuring and hardening SSH will be discussed.

Cheers, Erik