Friday, December 19, 2014

Shell script "libraries"

If you are a security professional (or an IT professional), probably you, like us, are constantly writing shell scripts to automate certain tasks in your Linux (or Unix) environment.

We don't usually use shell scripting to write complex applications (although some shell scripts become quite big), but we do use it extensively to create some "utilities" or little tools to quickly fulfill certain needs that arise along the way.

This happens to us all the time when doing pentesting. Very often, we have to write a shell script very quickly just to solve a particular problem, so we write it as fast as possible, without regard to any software design aspect. When you do this, you know that it is not the right way to write programs, but you accept it because you think the extra work that doing it well would entail is not worth it, and you prefer a quick working result over well-designed code.

An obvious consequence is that you end up writing the same piece of code again and again. One of the most infamous examples that applies to our case is the argument parsing function: we cannot count the number of times we have written a function to handle script options and arguments and display usage help in a way that is reasonably comfortable for us.

During the last few months, we have been working on a job that has required us to write (and use) many shell scripts, and this time, since we suspected in advance that this would be the case, we decided to take a, let's say, cleaner approach: we decided to write what we call "shell script libraries", which turned out to be a big help in the aforementioned situation.

These "shell script libraries" are sets of shell functions that you can import and use from within your shell scripting code, and some of the functions can be useful even if invoked directly from the shell command line.

In this article we present the following shell libraries:

  • An option parser library
  • A library of mathematical utilities
  • A library of networking-related utilities

We started out by writing an option parser library. If your shell script needs to behave in different ways depending on its invocation, or if you need to pass information to it, you usually achieve this through the use of options and/or arguments. We liked the way this is handled in libraries available for languages like C or Python, so we tried to write something similar. The library we have written is intended to be generic and easy to use.

Note: Perhaps there is something similar out there, but none of the code we found and tested matched exactly what we were looking for.

To use the library, you have to download it and put it in a directory that is in your PATH environment variable (or in the same directory as the invoking shell script).

Then, source it from within your code, for example as follows:

. || exit 1
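As a self-contained illustration of this sourcing pattern (using a made-up library file `/tmp/mylib.sh` and function `greet`, neither of which is part of the actual library):

```shell
#!/bin/bash
# Illustration of the sourcing pattern only; "mylib.sh" and "greet" are
# made-up names, not part of the actual library.
cat > /tmp/mylib.sh <<'EOF'
greet() { echo "hello from the library"; }
EOF

# Source the library, aborting if it cannot be loaded.
. /tmp/mylib.sh || exit 1
greet   # prints "hello from the library"
```

Once the file has been sourced, every function it defines is available to the rest of the script, exactly as if it had been written inline.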

Then, call add_program_option once for each option you need to handle, in this way:

Note: In this context we use option and argument as synonyms; see the considerations below.

add_program_option "-h" "--help" "Shows this help." "NO" "NO"

  • "-h" is the short flag of the option
  • "--help" is the long flag of the option
  • "Shows this help." is the explanation that will appear when the usage is shown
  • the first "NO" means that this is not a mandatory option
  • the second "NO" means that this option doesn't have an associated value

After you have all your options added, you just call:

parse_program_options "$@"

And then you may call:

show_program_usage "-h" && exit 0

This will test whether "-h" (or "--help") is present and, in that case, will show the program usage and then exit. You can also call show_program_usage with no arguments, in which case no test will be performed.

If later in your code you want to know whether an option is present, you can do it like this:

if is_option_present "-h"

And if you want to get the value for a specific option, you can do it in this way:

_myvar=`get_option_value "-h"`

_myvar will take the value associated with the option. A value is everything between the option and the next short or long option, or the end of the command line. Obviously, in this example _myvar will simply be assigned an empty string.
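To make those semantics concrete, here is a minimal, self-contained sketch of how is_option_present and get_option_value could work. It is a simplified illustration of the idea, not the code of the actual library (which also handles long flags, mandatory options and usage generation):

```shell
#!/bin/bash
# Simplified sketch, not the real library: an option's value is everything
# between the flag and the next flag (anything starting with "-") or the
# end of the command line.

is_option_present() {          # usage: is_option_present FLAG [ARGS...]
    local _flag="$1"; shift
    local _arg
    for _arg in "$@"; do
        [ "$_arg" = "$_flag" ] && return 0
    done
    return 1
}

get_option_value() {           # usage: get_option_value FLAG [ARGS...]
    local _flag="$1"; shift
    local _found=0 _value="" _arg
    for _arg in "$@"; do
        if [ "$_found" = 1 ]; then
            case "$_arg" in
                -*) break ;;                           # next flag ends the value
                *)  _value="${_value:+$_value }$_arg" ;;
            esac
        elif [ "$_arg" = "$_flag" ]; then
            _found=1
        fi
    done
    printf '%s\n' "$_value"
}

# Example invocation:
set -- -i eth0 -v
is_option_present "-v" "$@" && echo "verbose mode"     # prints "verbose mode"
echo "interface: $(get_option_value "-i" "$@")"        # prints "interface: eth0"
```

Note that in this sketch a value may span several words: everything up to the next flag is collected into it.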

That's _almost_ everything you need to know to use the library! The code comments contain a deeper explanation of the functions, although you probably won't need it.

Let us add just a couple of considerations we think you should be aware of if you are considering using the library:

  • The library is written for bash, because that is the shell interpreter that we use, and we haven't tested it on other interpreters. Perhaps it could be re-written in a more universal way, but we have no plans to move in that direction because, at least for now, bash is enough for us.
  • We know there is much discussion about the right terminology regarding arguments, options and parameters. Please note that, arbitrarily, we decided to use the terms "option", "argument" and "parameter" as synonyms in the context of our shell scripting libraries. Also arbitrarily, we decided that all options would always include an explicit switch (e.g. "-h", "--help"), some of them with an associated value (e.g. "-i INTERFACE") and some without (e.g. "-h" for help or "-v" for verbose). Finally, we decided that each option would be either mandatory (its presence will be required) or optional. Please note that, therefore, in this context "option" does not mean "optional" :-)
The library worked so well for us that we decided to take the same approach to tackle other problems, and so we started two more libraries, which are described in the following sections. They are far from complete, but our idea is to continue expanding them, and any new libraries we may find interesting to create, with ever-growing functionality.

The first one is a library of mathematical utilities. At the present moment, it just includes the following functions:

  • get_random_uint
  • get_random_hex_digits
  • hex2dec
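As a rough illustration of what these functions do, here is a self-contained sketch. The interfaces (and the default digit count of get_random_hex_digits) are our guesses based on the usage examples below, not the actual library code:

```shell
#!/bin/bash
# Rough, self-contained sketch of the three functions; not the actual
# library code. Interfaces are guessed from the usage examples.

get_random_uint() {            # get_random_uint MIN MAX
    local _min="$1" _max="$2"
    [ "$_min" -le "$_max" ] 2>/dev/null || return 1    # reject e.g. MIN > MAX
    # Combine two $RANDOM draws for ~30 bits of entropy; note that the
    # modulo introduces a slight bias, acceptable for a sketch.
    echo $(( _min + (RANDOM * 32768 + RANDOM) % (_max - _min + 1) ))
}

get_random_hex_digits() {      # get_random_hex_digits [COUNT] (default 8, our guess)
    local _n="${1:-8}" _out="" _i
    for (( _i = 0; _i < _n; _i++ )); do
        _out="$_out$(printf '%x' $(( RANDOM % 16 )))"
    done
    echo "$_out"
}

hex2dec() {                    # hex2dec HEXDIGITS
    case "$1" in
        ''|*[!0-9A-Fa-f]*) return 1 ;;                 # reject empty or non-hex input
    esac
    printf '%d\n' "0x$1"
}

hex2dec FA                     # prints 250
get_random_uint 0 10           # prints a random integer between 0 and 10
get_random_hex_digits 20       # prints 20 random hex digits
```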

The following is an example of use:

jl:~ root # .
jl:~ root # get_random_uint 0 -1
jl:~ root # get_random_uint 0 10
jl:~ root # get_random_uint 0 10
jl:~ root # get_random_uint 200 100000
jl:~ root # get_random_uint 200 100000
jl:~ root # get_random_uint 200 1000000
jl:~ root # get_random_uint 200 1000000
jl:~ root #
jl:~ root # get_random_hex_digits
jl:~ root # get_random_hex_digits 20
jl:~ root # get_random_hex_digits 20
jl:~ root #
jl:~ root # hex2dec x
jl:~ root # hex2dec
jl:~ root # hex2dec FA
jl:~ root # hex2dec 10
16

The second one is a library of networking-related utilities. At this moment it just includes the following functions:

  • is_mac_address
  • generate_rand_mac

Here are some usage examples:

jl:~ root # .
jl:~ root #
jl:~ root # is_mac_address "This is not a MAC"; echo $?
jl:~ root # is_mac_address "XX:XX:XX:XX:XX:XX"; echo $?
jl:~ root # is_mac_address "0A:1B:2C:3D:4E:5X"; echo $?
jl:~ root # is_mac_address "0A:1B:2C:3D:4E:5F"; echo $?
jl:~ root #
jl:~ root # generate_rand_mac
jl:~ root # generate_rand_mac FULL
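The behaviour shown above could be sketched as follows. This is a self-contained illustration with guessed semantics (in particular, the meaning of the FULL argument and the fixed prefix 00:11:22 are our assumptions), not the actual library code:

```shell
#!/bin/bash
# Self-contained sketch of the two functions; behaviour inferred from the
# usage examples, not taken from the actual library code.

is_mac_address() {             # returns 0 if $1 looks like AA:BB:CC:DD:EE:FF
    local _re='^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$'
    [[ "$1" =~ $_re ]]
}

generate_rand_mac() {          # with "FULL", randomize all 6 bytes; otherwise
                               # keep a fixed (hypothetical) OUI prefix
    local _prefix="00:11:22" _i _mac
    if [ "$1" = "FULL" ]; then
        _mac=$(printf '%02X' $(( RANDOM % 256 )))
        for _i in 1 2 3 4 5; do
            _mac="$_mac:$(printf '%02X' $(( RANDOM % 256 )))"
        done
    else
        _mac="$_prefix:$(printf '%02X:%02X:%02X' \
              $(( RANDOM % 256 )) $(( RANDOM % 256 )) $(( RANDOM % 256 )))"
    fi
    echo "$_mac"
}

is_mac_address "0A:1B:2C:3D:4E:5F"; echo $?   # prints 0 (valid)
is_mac_address "0A:1B:2C:3D:4E:5X"; echo $?   # prints 1 (invalid)
generate_rand_mac                             # random MAC with fixed prefix
generate_rand_mac FULL                        # fully random MAC
```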

Conclusion and future work

We found these small shell libraries to be really useful, and so we thought we would share them. We hope you find them useful too. You are free to use them in almost any way you see fit, since we are publishing them under the GPLv3 license.

Obviously, the code can be improved and expanded, and while we will certainly do so, we would also be more than happy to get your comments and contributions, which we would study and eventually include in the code, giving you the appropriate credit, of course.

Thursday, May 29, 2014

Book: Mobile communications Hacking and Security - SECOND EDITION

Back in November 2011 we published our first book about mobile communications security... After more than a thousand copies sold, we are proud to announce that the second edition of the book is now available.

During these two and a half years, like other researchers, we have maintained our activity in this field. The aim of this second edition of the book is to collect and synthesize most of that information. So, what has changed during this period and has been added to this second edition?

In the 2G field, new inexpensive technologies have arisen, allowing anyone to perform most of the published practical attacks. New attacks have also been published: denial of service, subscriber impersonation and geolocation of subscribers, among others.

We have also expanded the theoretical study of the 3G protocols and covered attack techniques that were not included in the first edition, including the ones we explained at RootedCON 2014.

A first approach to the study of the security of the 4G protocols, including a review of the state of the art of 4G attacks, has also been added.

The index of the book is available here and you can get it through the publishing house 0xWord.

Wednesday, March 26, 2014

3G Attacks at RootedCON 2014

Some time ago we learned that a subset of the attacks that were possible for 2G mobile communications using a fake base station were also possible in 3G, particularly:
  • IMSI Catching: or how to know whether an IMSI is active in the range of the attacker's base station
  • Geo-location of mobile devices, with high accuracy and reliability. You can find more details on this attack in the associated materials of two of our previous talks: RootedCON 2013 (Spanish) [slides] and [video], and BruCON 2013 (English) [slides] and [video], where we demonstrated in practice how an attacker could use a fake base station to know the location of a device (identified by its IMSI or its IMEI).
  • Denial of service: there are many flavours of denial of service attacks. We explained some of them and we demonstrated in practice the "LUR Reject Cause codes" one at RootedCON 2012. [Slides] and [video] of the talk are available (Spanish).
  • Selective downgrade to 2G: this attack allows an attacker to force a mobile device to choose 2G service instead of 3G, regardless of the availability of the 3G service.

In our recent talk at the fantastic RootedCON 2014, we explained the protocol concepts and issues behind these attacks and how an attacker could theoretically exploit the underlying vulnerabilities to perform the attacks (slides available in our lab page). As we clarified during the talk, this was actually a summary of information that was already in the public domain, though not very publicized.

Our goal is to test the aforementioned attacks, and our first step in that direction has been the development of a 3G software modem. During the talk we demonstrated the modem decoding the bits of the BCH (Broadcast Channel) of a real cell.

This practical work is only a first -but necessary- step towards our goal. We continue our research activities in this area, so stay tuned!

[*** UPDATE March 26, 2014 ***] An English version of the slides has been added to our lab page.

Tuesday, January 7, 2014

Using easy-rsa certificates for authentication within IPsec in standalone Windows systems


In this article and the white paper that accompanies it, we describe how to use easy-rsa, the free and open source certification authority software based on OpenSSL, to generate digital certificates that can be used to mutually authenticate IPsec connections between standalone Windows systems. First we describe the problem to be solved, then we discuss different approaches to the generation and distribution of certificates, and finally, in the white paper, we provide two illustrative examples in the form of step-by-step guides to generate, install and use those digital certificates, using two different methods: generating certificate signing requests (CSRs) on the hosts and signing those requests with easy-rsa, and generating the full certificates directly with easy-rsa, including their private keys and CSRs.

We hope this contributes in helping people to protect at least some of the insecure traffic still flowing through their networks.

Insecure communication protocols

Some communication protocols, like HTTP or TELNET, are vulnerable to man-in-the-middle attacks, because they exchange data in the clear, and because they do not provide mutual authentication of both ends of the communication. A subset of those insecure protocols have secure counterparts that we can use to replace them, like HTTPS instead of HTTP, or SSH instead of TELNET, but some of them do not have a secure alternative.

A clear example of the latter is the SMB (Server Message Block) protocol, which is used to access shared files and folders in Windows (and other) environments. Its variants SMBv1 and SMBv2 provide authentication of the user attempting to access resources in the shared folders, but they do not provide a mechanism for the server to prove its identity to the client, and they do not offer the possibility of encrypting the traffic in transit over the network. Version 3 of the protocol, SMBv3, introduces traffic encryption as a new feature of the protocol, which is an improvement, although it still seems to fail to address the problem of server authentication. However, since SMBv3 is only available in systems running Windows 8, Windows Server 2012, or later, the vast majority of SMB traffic in our networks today is still SMBv1 or SMBv2 and is therefore exchanged in the clear. That means that any attacker who can position himself between the client and the server can obtain a copy of all files transferred, as demonstrated by the SMB export plug-in that we implemented for Wireshark some time ago.

Nevertheless, SMB is just an example. In many, if not most, of the networks we have analyzed, both in the distant past and in very recent times, we have found custom applications that use completely insecure communication protocols, with a total lack of mutual authentication and/or encryption.

When trying to protect the communications of these insecure protocols, IPsec may come in handy.

IPsec in Windows

IPsec (IP security) is a set of protocols that allow two IP entities to authenticate each other, negotiate cryptographic keys, and use those keys to authenticate and encrypt each IP packet they exchange, thus protecting any communication carried out between those two endpoints at any higher level in the TCP/IP stack.

When compared to other security protocols that operate at higher levels in the TCP/IP stack, like TLS or SSH, IPsec offers the advantage of being transparent to any applications communicating through it: applications do not need to be aware of the existence of IPsec, whereas for an application to be able to use higher level security protocols, like TLS for example, the application must be designed or modified to support those protocols.

In the case of insecure protocols that cannot be replaced with a secure alternative, and which cannot be modified (be it because the source code is not available or because the cost would be prohibitive), IPsec may be a good option, if not the only option, to protect their communications.

IPsec has been available in Windows, included in the operating system (no need to install extra software), since Windows 2000. While the early versions could be considered difficult to configure and manage, that is hardly the case in recent Windows systems like Windows 7 or Windows Server 2008, where the configuration of IPsec rules is integrated in the Windows Firewall with Advanced Security console.

Figure 1 - Windows Firewall with Advanced Security

Windows' implementation of IPsec offers several authentication methods, including using digital certificates issued by trusted certification authorities (CA). Other methods of authentication include Kerberos (in a Windows domain environment) or NTLMv2, but these articles will concentrate on the use of digital certificates because it is the strongest authentication method available both to domain and standalone computers.

In a Windows domain environment, managing the generation and installation of the appropriate certificates in the appropriate systems is really easy. An Enterprise (online) Certificate Authority can be set up in any domain member server (the CA software is included in Windows Server), and all the appropriate computers can be configured to automatically request from it, and install, the appropriate certificates. Such configuration may (should) be carried out centrally, using Group Policy, and applied automatically to the appropriate organizational units. Furthermore, IPsec may (should) also be configured using Group Policy, making the management of the whole solution very convenient, even for hundreds of systems.

However, configuring two standalone Windows systems to communicate using IPsec is a little more tedious, because the certificates will have to be generated and installed manually, and IPsec will need to be configured manually on each system.

Easy-rsa: A free and open source CA based on OpenSSL

Easy-rsa is a lightweight, free and open source software application that allows the user to set up and manage a certification authority (CA). It constitutes a subproject of OpenVPN. Programmed in POSIX shell and based on OpenSSL, which it requires, easy-rsa is available for Linux and Windows among other platforms.

The installation of easy-rsa on the Linux platform is especially simple, because all the external utilities that it requires are usually already there or can easily be installed using the appropriate package manager.

Easy-rsa constitutes a great alternative to Microsoft Active Directory Certificate Services (MS ADCS, Microsoft's CA) if you don't need an online CA and you don't want to dedicate a Windows Server to this function. On top of that, using easy-rsa you also get more flexibility, because its behavior can be tailored in much more detail.

Digital certificates

A digital certificate is a file that contains the public key and other relevant data (but not the private key) of some entity, all of which is signed by a CA. Anyone in possession of the public key of the CA will be able to validate its signature and therefore be assured that the information contained in the certificate was validated by the CA.

In order to generate a digital certificate, first a certificate signing request (CSR) must be generated. An entity needs to generate a private-public key pair and then include the public key in a so-called certificate signing request (CSR), along with whatever information it wants to have included in the certificate (e.g. common name, intended uses of its keys, etc.).

Then, the CSR must be brought to a certification authority (CA), which, after validating the information contained in the CSR, will use its own private key to sign the contents of the CSR, producing the corresponding digital certificate.

The certificate then needs to be given back to the requesting entity, so that it can use it in tandem with the corresponding private key, which never left the requesting entity.

Finally, the requesting entity may or may not want to export the private key together with the certificate so that it can be backed up or transported to a different system.
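As a rough, self-contained sketch of this whole flow using plain OpenSSL commands (easy-rsa wraps equivalent operations; all file names, subject names and validity periods here are made up for illustration):

```shell
#!/bin/sh
# Sketch of the CSR workflow with plain OpenSSL; easy-rsa wraps the same
# underlying operations. File names and subjects are made up.
set -e
cd "$(mktemp -d)"

# 1. The CA's own key and self-signed certificate (created once):
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
        -subj "/CN=Example CA" -days 30

# 2. The requesting entity generates a key pair and a CSR containing
#    its public key and identifying information:
openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr \
        -subj "/CN=host1"

# 3. The CA signs the CSR with its private key, producing the certificate:
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -out host.crt -days 30

# 4. Anyone holding the CA's public certificate can validate the result:
openssl verify -CAfile ca.crt host.crt    # prints "host.crt: OK"
```

Note that host.key (the private key) never needed to reach the CA: only the CSR and the resulting certificate travel between the two parties.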

Certificate generation and distribution: different approaches

Given the general process of generating certificates described in the previous section, a system administrator who wants to use digital certificates for the authentication of IPsec communication among some systems that fall under his control, could use at least two different strategies to generate and install the certificates. One way to do it would be to generate each key pair and CSR on each system, have them signed by the CA and the resulting certificates installed back in their corresponding systems. An alternative method would be to generate a bunch of key pairs and their corresponding CSRs on a central system, have them signed by the CA, get the certificates back, and then export together each private key with its corresponding certificate, and install each whole set on each appropriate system.

The first method has the advantage of the private key never leaving the single system where it will be used, but has the disadvantage that the system administrator needs to visit each system twice: first to generate the key pair and CSR, and then to install the certificate signed by the CA. However, the fact that the private key never leaves the system may not be too relevant if all systems are managed by the same administrator.

The second method has the advantage of allowing the system administrator to generate all keys and certificates at once, having to visit each system only once, to install both the certificate and the private key at the same time.

Additionally, the administrator also has the option to deal with X.509 extensions in different ways. Extensions in a certificate are fields that, among other things, may declare the uses for which such certificate should be trusted. Examples are "Server authentication", "IP security end system" or "Code Signing". CSRs may contain a list of desired extensions, which the CA may then decide to include or not in the final certificates, and the CA may or may not include additional extensions (not specified in the CSRs) to the final certificates. Thus, for a given set of required extensions, the administrator may include those extensions in the CSRs and configure the CA to include those extensions in the certificates, or generate the CSRs without extensions, and configure the CA to add those extensions when generating the certificates, or any combination of the above.

Step by step guides

In the accompanying white paper, available from our lab, we include two step by step guides that illustrate two different ways to set up an IPsec connection between two standalone Windows systems, using digital certificates for authentication, issued using easy-rsa.

In the first guide, each key pair and certificate signing request (CSR) is generated on its corresponding Windows system, including the desired extensions in the CSR, and easy-rsa is configured to sign those CSRs, including the requested extensions, to generate the certificates.

In the second guide, however, all CSRs are generated in the Linux host and they do not contain information about any desired extensions; then, the CSRs are signed using easy-rsa, configured to add the appropriate extensions to the corresponding certificates, and then the certificates, including their corresponding private keys, are distributed to the Windows systems.

If you want to experiment with the exact same files that were created and used when writing the white paper, you may download a zipped copy of all of them from our lab.

Please note that by no means are the two procedures presented in these two step by step guides the only possible methods that could be followed. Indeed, a combination of both techniques, and possibly many other variations, could also be successfully applied. These step by step guides should be regarded just as illustrative examples.

For more detailed information, please refer to the accompanying white paper.

Wednesday, November 13, 2013

Hello world! from Layakk

With this post we start the blog associated with a new stage in the professional project that we have been developing over the past few years.

After the closing of our previous chapter in Taddong (more information here), Jose Pico and David Perez have created a new company, called Layakk, with the aim of reinforcing and expanding our areas of work.

In this blog we will communicate the most significant part of our professional activity with regards to research, articles and publications, tools, etc. and we will share as much as possible of our work with the community.

We hope that the contents of this new blog will be of interest to you. If you want to subscribe to it you can do it by using the syndication buttons.

If you would like to follow us on Twitter or get in touch with us via e-mail, please check out our contact information on our web site: