Pentesting Linux Thick Client Applications

A thick client is a software application that performs most of its processing locally, typically on a personal computer. Thick clients follow the client-server architecture. Thick client applications are divided into two categories: 

  • 2-tier architecture:  The application has two components: client-side resources and a server-side database. The client-side interface is responsible for rendering resources and handling user input. The server-side database, or in some cases a local database, stores user data. The client communicates directly with the database to retrieve or update the data. 
  • 3-tier architecture:  Here the application adds an additional layer between the client and the database, called the application server. The client handles the user interface and user input and sends requests to the server for processing. The server processes each request from the client, performs the necessary operations, and communicates with the database. 

3-tier applications offer better scalability and can handle concurrent requests more easily. Changes to the application logic can also be made without interfering with user interface resources. 

As a general overview, the Windows OS, developed by Microsoft, has historically dominated the desktop operating system market, so naturally most thick client (desktop) applications were developed for Windows. But Linux-based operating systems have also gained their share of the desktop market over time, which has led developers to build applications for Linux alongside Windows to reach a wider audience. 

In this blog post we will delve into the process of testing Linux thick client applications to identify and address potential security vulnerabilities, exploring various testing techniques and tools. 

Suggested Reads:  Windows Thick Client Testing  

1. Information Gathering

Reconnaissance is crucial for gathering information about your target. First, use the application as a normal user would and get familiar with its features and capabilities. 

1.1 Open-Source vs Proprietary thick client application:

Now, after using the application, it is time to gather additional information from external resources regarding the technology stack and architecture employed by the application. Begin by determining whether the application is open source or proprietary. If it is open source, examining the source code will help in identifying the technology stack utilised by the application. 

1.2 "file" command:

If the application is proprietary, we will explore the use of the "file" command to identify the binary's architecture and associated libraries. It also helps in analysing how the application was compiled, such as whether Position-Independent Executable (PIE) is enabled or disabled and whether the binary is stripped or not. 

When PIE is enabled at compile time, the resulting binary can be loaded at any memory address in the system. This makes it harder for attackers to exploit memory corruption vulnerabilities, as they cannot rely on fixed memory addresses for their attack. On the other hand, when PIE is disabled, the resulting binary is loaded at a specific memory address. This makes it easier for attackers to exploit memory corruption vulnerabilities, as they can rely on the fixed memory addresses of certain functions and data in the binary. With this information, we know whether a memory corruption attack would be easy to conduct. 

Similarly, when a binary is "stripped", it means that certain debugging and symbol information has been removed from the executable file to reduce the binary size and make it harder for reverse engineers to understand the workings of the binary. 

Stripped information includes: 

Function names: The names of functions or methods in code that help in identifying the purpose and behaviour of code blocks. 

Variable names: The names of variables or objects, which provide information on the data being stored or manipulated. 

Type information: Information on the data types used in the program like structures, classes, arrays, etc. 

 Line information: The mapping between source code lines and corresponding machine level code. 

Debugging information: Additional information like breakpoints, stack frames or function call traces etc. 

So, when you see that the binary is not stripped, it means you can more easily reverse engineer the application and understand the code base using tools like Ghidra, IDA, etc. 

First find the binary path and then run the "file" command on it, e.g., analysing the Firefox binary. 
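As a sketch of what this looks like on a stock system binary (paths and exact output vary per distribution):

```shell
# Locate the binary and inspect it; -L follows symlinks (e.g. /bin -> /usr/bin).
# The output reports the architecture, whether it is a PIE ("pie executable"),
# whether it is dynamically linked, and whether it is stripped.
file -L "$(command -v ls)"
```

The same command works on any thick client binary once you know its path.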

1.3 "ldd" command:

Next, we will investigate the "ldd" command for assessing the security of Linux applications. Running the "ldd" command shows the libraries required for successful execution, and it can provide valuable insights into potential security issues: 

  • If the application uses a library with known vulnerabilities, those issues can be identified and mitigated. 
  • It can identify whether the application is loading any unintended library, as in the case of the LD_PRELOAD attack, which we will discuss in detail in a later section. 

For example, Firefox loads the following shared libraries on my machine for successful execution; ldd shows the libraries in the order they are loaded. 
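A quick sketch against a common binary (output varies; on a glibc system you should at least see libc resolved):

```shell
# List the shared libraries a binary needs, in load order. Each line of the
# form "libc.so.6 => /lib/.../libc.so.6 (0x...)" maps a requested library to
# the file the dynamic loader actually resolved it to.
ldd "$(command -v ls)"
```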

1.4 Reversing the application:

As discussed in the "file" command section, non-stripped binaries can be easy to reverse engineer for the reasons mentioned earlier. But stripped binaries can also be reverse engineered to a certain extent. 

Tools like Ghidra and IDA Pro are good for reversing Linux applications. Ghidra can also help in identifying the technology stack used in creating the application. 

Open your application with the tool of your choice. E.g., we can try to reverse engineer a simple binary present on every Linux machine called "file". Yes, the "file" command is actually an executable binary that you run, and you can reverse engineer it. Locate the path of the binary using the "which file" command. After that, open that binary in Ghidra. 

1.5 "strings" command:

Compiled binaries can contain human-readable strings. "strings" is a Linux tool that can help in extracting those readable strings. 

e.g., running "strings" on the "file" binary reveals some text. 
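A minimal sketch, assuming binutils' strings is installed (using /bin/sh as a stand-in target):

```shell
# Print printable strings of at least 8 characters; raising the minimum
# length with -n cuts down on meaningless byte-sequence noise.
strings -n 8 /bin/sh | head -n 10
```

Grepping the full output for terms like "password", "http", or "key" is often a quick win.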

1.6 Package managers:

Linux distributions utilise package managers to simplify the process of installing, updating, and managing software packages e.g. apt, dnf, yum, Pacman etc.  

If software is installed through one of these package managers, the package manager can also provide some information about the application. 

e.g., on a Debian-based system, we can use the "apt" package manager to gather some information about a package. 

1. The "apt show <package_name>" command will show information about the package version, maintainer, dependencies, repository, etc.

2. "apt depends <package_name>" shows a list of dependencies for the specified package, including each dependency's name and version. 
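A sketch of the metadata query on a Debian-based box ("curl" is just an example package; substitute whichever package owns your target binary):

```shell
# Query package metadata via apt, guarding for non-Debian systems.
if command -v apt >/dev/null 2>&1; then
    pkg_info=$(apt show curl 2>/dev/null)   # version, maintainer, depends, ...
fi
# Fall back gracefully if apt (or the package) is unavailable here.
[ -n "$pkg_info" ] || pkg_info="apt metadata not available on this system"
echo "$pkg_info" | head -n 10
```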

1.7 File System Interactions by the Thick Client Application:

When an application is running, it interacts with the file system to perform operations such as creating, updating, or deleting files. It is crucial to monitor such activity, as the application might temporarily create files containing sensitive information. Monitoring the process's activity is also valuable for tracking calls to external binaries or libraries. Are they fetched using absolute paths? If not, can we hijack the binary/library call? An application also makes system calls for successful execution, like taking input from stdin and writing output to stdout. All these syscalls should be monitored. 

Tools like "strace", "ltrace", "htop", and "pspy" can help in monitoring the process for various activities and the syscalls made by the application. 

strace: This command is used for debugging applications. It allows you to trace and analyse the system calls and signals made by a process, providing detailed information about its interactions with the operating system. strace intercepts and displays the syscalls made by the process. E.g., strace showing the syscalls made while trying to read a file. 

As you can see, the first syscall made was "execve". It is a syscall in Unix-like operating systems that executes a program in the current process. Similarly, you can observe the other syscalls made, like "read", "mprotect", etc. 

You can try to read about each syscall in detail by searching here: https://man7.org/linux/man-pages/man2/syscalls.2.html 

A complex application like a browser, Microsoft Teams, or Slack will make thousands of syscalls, so it is a good idea to learn how to filter for specific syscalls in strace. Read the strace manual page for a better understanding. Analysing syscalls can help in the following ways: 

  1. strace can monitor third-party libraries loaded at run time, which can help uncover insecure library usage or dependency issues. 
  2. strace can help in identifying unauthorized calls to potentially risky functions, unexpected file access, or excessive privilege usage. 

E.g., strace captures syscalls such as "open", "read", and "write", which involve file operations. You can check if the application is trying to read or write a file that it shouldn't. Permission-related syscalls like "chmod", "setuid", and "setgid" can reveal when an application escalates or drops its privileges, which can indicate security risks. 
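As a sketch, tracing only file-related syscalls of a short-lived command (strace must be installed; plain cat stands in for the target application):

```shell
# -f follows child processes; -e trace= restricts output to the named syscalls.
if command -v strace >/dev/null 2>&1; then
    trace_out=$(strace -f -e trace=openat,read,write cat /etc/hostname 2>&1)
fi
[ -n "$trace_out" ] || trace_out="strace not installed on this system"
echo "$trace_out" | head -n 10
```

For a long-running thick client you would attach instead with `strace -f -p <PID>`.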

ltrace: A dynamic tracing tool used to monitor library calls made by a program. It attaches to the process until the process finishes, intercepts the calls to the shared libraries that the program uses, and displays them. ltrace can also show syscalls made by the libraries. 

pspy:  This tool can help monitor processes on a Linux OS without root permission. By design, everyone can list all running processes by default. You can check what happens when you run your application. Check if the application is calling external binaries with relative paths; these can then be hijacked to execute arbitrary commands. Check if your application takes command line arguments, and whether it tries to read or write to some file. Review whether the application creates temporary files or reads a configuration file that was unknown to you; configuration files can often contain sensitive information. pspy is a powerful tool for in-depth monitoring of a process. 

Github pspy: https://github.com/DominicBreuker/pspy 

2. Forensics

When the application executes, it leaves traces on the system. It becomes crucial to follow the trail of an application, as it can sometimes reveal sensitive information. 

2.1 Logging:

Developers log information to the console for debugging purposes, and sometimes they forget to strip this debugging code from production builds. We can test for that by running the application and redirecting all console output to a file, which can later be analysed easily. 

E.g., running Firefox and saving all console messages (if there are any) into a file. 

Sometimes these log files contain decrypted messages, passwords, API keys, etc., which can cause security issues. 
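A minimal sketch of this workflow, with a stand-in command playing the role of a chatty application:

```shell
# Redirect both stdout and stderr of the target into a log file.
# "sh -c 'echo ...'" stands in for the real application binary.
sh -c 'echo "DEBUG: session token=abc123"' > app_console.log 2>&1

# Later, sweep the log for anything sensitive the developers left behind.
findings=$(grep -iE 'password|token|secret|api[_-]?key' app_console.log)
echo "$findings"
```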

2.2 Inspecting the /proc/[PID]/ directory:

In Linux systems, /proc/ is a special directory that provides access to information about running processes and the system itself. In /proc/, each process on the system is represented by a numbered directory; that number is the Process ID (PID) of the process. 

Within these process directories, various files and directories provide information about the respective process. It is always a good idea to inspect this directory. Find the PID of your running application. 

E.g. Finding PID of Firefox on my machine. 

In the above image you can see PID 75932 is Firefox’s parent process, which later spawned many child processes. Now if you go to /proc/75932/ directory, you can find all the information related to this process and its child processes. 

cmdline: The command and command line arguments that were used to start the process. 

cwd: The working directory of the process. 

environ: The environment variables of the process. 

exe: A symbolic link to the executable that was used to start the process. 

fd: This directory contains the list of file descriptors opened by the process. E.g., for Firefox, file descriptor 91 points to a socket, which means that this process can send and receive data over a network connection. 

You can read more about file descriptors in Unix-like system here: https://en.wikipedia.org/wiki/File_descriptor 

status: A text-based summary of process’s status. 

wchan: The kernel function that the process is currently executing. 

mountinfo: Information about the filesystems mounted by the process. 

task: This directory contains information about all the threads (subtasks) of the process. 

limits: This file contains information about the resource limits of the process. 

net: This directory contains information about the network connections. 

Similarly, each file and directory in this directory can give some information about the process. It is advisable to check it out and dedicate some time to understanding it. 
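These entries can be read with ordinary file tools. A quick sketch, using the current shell ($$) as a stand-in for your application's PID:

```shell
pid=$$                                   # substitute your application's PID
tr '\0' ' ' < /proc/$pid/cmdline; echo   # argv entries are NUL-separated on disk
readlink /proc/$pid/exe                  # the executable behind the process
head -n 4 /proc/$pid/status              # name, state, and IDs
ls /proc/$pid/fd                         # currently open file descriptors
```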

2.3 LD_PRELOAD Attack on Linux Thick Client Application

As we know, a library is a collection of code that later another program can reuse without rewriting the same piece of code. Take the example of importing libraries when you write your first code in different programming languages, e.g., C/C++, Python, Java etc. Similarly, Operating Systems also make use of libraries. 

If the library code is included in the program itself, then it is known as a static library, and if it is linked at run time, then it is known as a shared library. Programs built with a shared library require runtime linker/loader support. Before executing a program, all required libraries are loaded, and the program is prepared for execution.  

In Linux, you will often see the file names ld.so or ld-linux.so. These are the dynamic linker/loader, which loads the required libraries for a program before execution. LD_PRELOAD is an environment variable in Linux that specifies shared libraries to load before any others. When the LD_PRELOAD variable is set, the dynamic linker/loader will prioritize loading the specified library before any other library. 

For example, you can write your own shared library implementing a specific function and specify it in the LD_PRELOAD variable. The loader will load this library, and your function definition will override the original one. The primary use case of this behaviour is debugging and testing without needing to rewrite the whole program. 

An example of shared libraries loaded by ls command – 

For attackers, this presents an opportunity to hijack legitimate libraries and run malicious code under the name of a legitimate process, for example by loading a shared library that has nothing to do with the ls command. 

If the LD_PRELOAD variable is set in a process, then all commands executed under that process will load the library specified. 

More than one library can be specified in the LD_PRELOAD variable, separated by colons. You can remove the variable using the unset command. 
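A minimal end-to-end sketch of the interposition (assumes a C compiler is available; the file names and the fake PID value are arbitrary):

```shell
# hook.so interposes getpid() so every caller sees a fake PID.
cat > hook.c <<'EOF'
/* Fake getpid(): every caller now sees PID 41 */
#include <sys/types.h>
pid_t getpid(void) { return 41; }
EOF
# victim stands in for a legitimate dynamically linked program.
cat > victim.c <<'EOF'
#include <stdio.h>
#include <unistd.h>
int main(void) { printf("pid=%d\n", (int)getpid()); return 0; }
EOF

if command -v cc >/dev/null 2>&1; then
    cc -shared -fPIC -o hook.so hook.c
    cc -o victim victim.c
    real=$(./victim)                        # the genuine PID
    faked=$(LD_PRELOAD=./hook.so ./victim)  # interposed: reports pid=41
fi
[ -n "$faked" ] || faked="no C compiler available for the demo"
echo "real: $real / preloaded: $faked"
```

The same mechanism lets an attacker shadow functions like open() or SSL_write() in a real application.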

This attack can be prevented by regularly auditing your binaries for such malicious, unknown libraries. Using the "ldd" command, check which libraries are being loaded and whether there is any arbitrary library that you don't recognise. You should also check for the presence of the /etc/ld.so.preload file on your system. It is a system-wide configuration file containing paths to shared libraries that should be loaded before any program executes. On most systems, this file is not present by default, or it is empty. 

3. Memory Analysis

Memory analysis is crucial in penetration testing thick client applications. It involves examining the contents of computer memory when the process is running, which can disclose sensitive information like usernames, passwords, session tokens, secret keys, etc.  

Core dump analysis is a form of memory analysis. When a program experiences a critical error or crash, it may generate a core dump file containing the process's memory state at that moment. This memory dump includes data, stack traces, and register values, which can reveal sensitive information. 

For testing, we will have to force the application to create a core dump while it is running. Analysing core dump files requires some knowledge of assembly language. 

Generating Core dumps

  1. gcore from the gdb suite is one such tool that can create core dump files without killing the process: "gcore <PID>". 

If your dump file is not created, run the "ulimit -c unlimited" command. 

ulimit in Linux is used for setting resource limits for a process. ulimit -c sets the size of the core dump file after a process crashes. By default, it is set to 0 to save space by totally disabling the core dump file creation. You may want to set it to unlimited for the successful creation of a crash dump. Use the command ulimit -c to check your limit and if it’s 0, then set it to unlimited.  
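The check-then-raise sequence looks like this (the limit applies only to the current shell session):

```shell
ulimit -c            # show the current soft limit for core files (often 0)
ulimit -c unlimited  # lift the limit so crashing processes can dump core
limit=$(ulimit -c)
echo "core limit is now: $limit"
```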

Note: The ulimit -c unlimited command should be executed in the terminal where the process is running, as "ulimit" sets the resource limit for the current shell session only. It does not change the system-wide resource limits. If you want to make system-wide changes, you will typically need to modify /etc/security/limits.conf. 

  2. Using the kill command, we can create dump files manually, but unlike gcore this kills the original process completely. There are multiple SIGNALS in the kill command which can be used for killing a process. E.g., the SIGTERM signal won't create a dump file, while signals like SIGBUS and SIGABRT will. 

      Signal -3 is equivalent to -SIGQUIT. 

 Read about all kill signals:   https://unix.stackexchange.com/questions/317492/list-of-kill-signals 

  3. ProcDump 

Procdump from sysinternals is also a great tool for creating dump files.  

Procdump Github: https://github.com/Sysinternals/ProcDump-for-Linux 

  4. GDB 

You can create dump files from gdb itself by running generate-core-file after you hook the process in gdb with gdb -p <pid> command. 

Analysing Core dumps 

  1. Using gdb 

You can open the dump file in gdb by running gdb <binary_path> <core_file_path> and gather the following information: 

  • info variables prints global and static variables, info locals prints only local variables, and you can check the value of a variable with print <variable name>. 
  • bt or backtrace lists the functions leading up to the crash, showing the function name, memory address, and arguments passed to each function. 
  • info registers can help in retrieving CPU register values. 
  • info proc mappings gives details of the memory mappings of the process. 
  • info auxv gives information about the system environment and libraries that were present when the process was running. This can help in identifying if the process escalates privileges in the background. 
  2. strings command 

Running strings command on the core dump file can reveal plain text data, which can sometimes reveal sensitive information. 

4. Traffic analysis

All thick client applications exchange information between client and server, so capturing and analysing the data exchanged between the two can be a valuable technique. 

Network traffic capture 

Network traffic capture tools like Wireshark, tshark, and tcpdump can help capture the data exchanged between the thick client and the server. Look through each captured packet for: 

  • The IP address the client is connecting to. 
  • The protocol in use. 
  • Whether the communication is encrypted or plain text. 

Proxying Network traffic 

Thick client applications are generally non-proxy aware, which means there is no option to set up a proxy from within the application, or it is not aware of the system proxy settings. But this doesn't stop us from trying different ways to proxy the thick client application's traffic. 

  • Proxychains 

Proxychains is a command-line tool on Linux used for routing traffic through a proxy server. 

Proxychains uses a configuration file called proxychains.conf or proxychains4.conf depending on your installation. In this file you can specify the address and the port of your proxy server. 

You might see socks4 instead of socks5; socks4 is just an older version of the SOCKS protocol. socks5 supports both TCP and UDP connections, which is why it is preferable. 

Now from your terminal you can run your application using proxychains. 
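A sketch of the relevant pieces, assuming Burp (or another SOCKS-capable proxy) is listening on 127.0.0.1:1080. Writing a local copy of the config keeps the demo self-contained; proxychains-ng's -f flag points it at a specific file:

```shell
# Minimal proxychains configuration pointing at the assumed local listener.
cat > proxychains_demo.conf <<'EOF'
strict_chain
[ProxyList]
socks5 127.0.0.1 1080
EOF

# The target would then be launched like this (binary name is a placeholder):
# proxychains4 -f proxychains_demo.conf ./target-app
echo "wrote $(wc -l < proxychains_demo.conf) config lines"
```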

It is important to note that not all applications can be proxied this way, as modern applications have advanced protections like encryption, CA certificate validation, etc. to defend against such attacks. 

  • System-wide proxy 

For some applications, setting a system-wide proxy can help. From your OS's network manager, you can set a proxy, which will route all traffic from your system through the proxy server. 

  • Environment Variables 

In Linux, there are certain environment variables, e.g. http_proxy, https_proxy, and ftp_proxy, which you can set. 

Run your application from that terminal, and if the application respects the proxy environment variables, it will route its HTTP and HTTPS traffic to your server. 
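A quick sketch, assuming the proxy (e.g. Burp) listens on 127.0.0.1:8080:

```shell
# Point the common proxy environment variables at the local listener.
export http_proxy="http://127.0.0.1:8080"
export https_proxy="http://127.0.0.1:8080"

# Then launch the target from this same shell (binary name is a placeholder):
# ./target-app
echo "proxy variables set to $http_proxy"
```

Lowercase and uppercase variants (HTTP_PROXY, HTTPS_PROXY) are honoured inconsistently across programs, so setting both is common.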

Adding Burp Suite Certificate to OS Trusted Certificate Store 

As discussed in the case of the Spotify application, modern applications have advanced protection against such attacks, forcing the application to use only secure channels for communication. 

Now, even though you have set up a system-wide proxy, for HTTPS traffic to go through Burp, Burp's CA certificate needs to be installed in Linux's /etc/ssl/certs/ directory. It is easier to understand when you compare it to capturing traffic from your browser: you install the Burp CA certificate in the browser's trusted CA certificate list, which allows Burp to read SSL/TLS-encrypted traffic and forward it to the server. 

The /etc/ssl/certs directory contains certificates issued by certificate authorities. When a client tries to connect to a server using HTTPS or any other secure protocol, the client checks whether the server's certificate is signed by one of the trusted certificate authorities. 

Different Linux distributions may use different paths for storing SSL/TLS certificates. 

First, export the certificate in DER format from Burp and save it. 

The content of this certificate will look like gibberish, because DER is a binary encoding (not an encrypted format). We need to convert it to the PEM format that the system certificate store expects: 

openssl x509 -in <cert_name.der> -inform DER -out <cert_name.crt> 

Now simply put this certificate in the /etc/ssl/certs/ directory or the /usr/local/share/ca-certificates/ directory and run the update-ca-certificates command to update the system's certificate list. 
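A self-contained sketch of the DER-to-PEM round trip, with a throwaway self-signed certificate standing in for Burp's exported CA certificate (all file names are examples):

```shell
# Generate a throwaway CA-style certificate to play the role of Burp's export.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
        -keyout demo.key -out demo.pem 2>/dev/null
openssl x509 -in demo.pem -outform DER -out demo.der    # binary, like Burp's DER export
openssl x509 -in demo.der -inform DER -out demo.crt     # convert back to readable PEM
head -n 1 demo.crt                                      # -----BEGIN CERTIFICATE-----
```

Note that update-ca-certificates only picks up files with a .crt extension from /usr/local/share/ca-certificates/.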

This will let you capture the application's encrypted traffic in Burp, as Burp's certificate is now stored as a trusted certificate on the system. But this doesn't guarantee that you will be able to capture all of the application's traffic, as some applications implement a security mechanism called certificate pinning. 

Certificate pinning is a technique used to enhance the security of SSL/TLS connections. Instead of relying solely on trust established by the certificate authorities, the application saves (pins) the server certificate by embedding its public key or hash within the codebase. This ensures the client is talking to a legitimate server and stops man-in-the-middle attacks. 

Bypassing certificate pinning for thick client applications is a topic for a future post. 

Suggested Reads: Windows Thick client Analysis Roadmap 

Conclusion 

I hope this article has helped you better understand how to approach pentesting a Linux thick client application. I did my best to demonstrate every potential avenue through which a security risk could arise in a Linux thick client application, and I believe this provides a foundation for anyone interested in researching Linux thick client applications. 
