Identifying detection opportunities in cryptojacking attacks

Cryptojacking/cryptomining (T1496) is a well-known threat to the security industry. While frequently dismissed as an annoyance rather than a true security incident, cryptomining is often deployed alongside additional offensive tooling. In some cases this includes userland or kernel rootkits, credential-theft bash scripts, and the ever-popular Mirai and Kinsing bots. The threat is becoming more severe as former ransomware authors publicly shift their operations to cryptojacking. Simply put, overlooking cryptojacking is like disregarding someone breaking into your home because they only stole silverware. This blog post will examine the obstacles adversaries must overcome to succeed in their cryptojacking campaigns and the detection opportunities that arise from them, with corresponding Atomic Red Team tests.

Anatomy of a Cryptomining Attack

Lacework Labs observes cryptojacking in nearly all of the cloud compromises it investigates. The standard execution flow begins with threat actors exploiting recent or legacy vulnerabilities to compromise a public-facing application and gain shell access. The images below show a generic attack flow Lacework Labs traditionally observes: Figure 1 provides a high-level overview of how the attacker gains initial access, and Figure 2 shows the execution of the initial access payload followed by additional payloads. This workflow will serve as a model for discussing the challenges an adversary must overcome to succeed and the detection opportunities defenders have along the way.

 

Figure 1 – Cryptojacking initial access

Figure 2 – Initial access payload execution (shortened for readability)

Adversary Challenge – Defenses on by Default

The high-level objective of cryptomining attacks is to bring as many hosts as possible under attacker control to maximize mining operations and generate cryptocurrency. This "breadth vs. depth" strategy may appear simple compared to the more sophisticated adversaries that traditionally make headlines, but the reality is that access is just the beginning. Lacework Labs often sees remote access methods co-deployed with cryptomining malware (IRC bots, dropped SSH keys, etc.). Additionally, the secondhand market for selling access to cloud accounts is a growing economic opportunity for attackers: cloud environment misconfigurations often result in unintentional privilege escalation or unintended access to resources, and threat groups focused on ransomware or extortion campaigns can purchase and abuse that access for monetary gain.

Before mining can begin, an attacker needs to stage their malware on a target host. The first payload (e.g., a bash script) therefore typically runs host modification and reconnaissance commands to pave the way for the rest of the attack. This often comes in the form of disabling (T1562.001) host-based firewalls, SELinux, AppArmor, and some cloud monitoring agents such as Aliyun (the default cloud agent on Alibaba Cloud). Monitoring for commands that outright disable defenses such as iptables -F or ufw disable, modify kernel runtime settings via sysctl, or write underneath /proc/sys/kernel can provide early indicators of potentially malicious behavior. When these commands execute under a web server account such as apache or nginx, detection fidelity increases further and the likelihood of false positives drops for these opportunistic attacks.

 

$> iptables -F

$> ufw disable

$> setenforce 0

Red Canary Atomic Test: disable host defenses
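
To test visibility into these commands, auditd can record executions of the relevant binaries. Below is a minimal auditd sketch, assuming the binaries live at the paths shown (locations vary by distribution, e.g., /sbin vs. /usr/sbin); the -k key names are arbitrary labels to filter on downstream.

$> auditctl -w /usr/sbin/iptables -p x -k fw_tamper   # any execution of iptables

$> auditctl -w /usr/sbin/ufw -p x -k fw_tamper   # any execution of ufw

$> auditctl -w /usr/sbin/setenforce -p x -k selinux_tamper   # any execution of setenforce

$> auditctl -w /usr/sbin/sysctl -p x -k kernel_param_change   # any execution of sysctl

These events fire on legitimate administration as well, which is why pairing them with the executing user (e.g., apache or nginx) sharply improves fidelity.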

Adversary Challenge – Hiding Increased CPU Load

The nature of cryptocurrency mining results in processes running with sustained high CPU load. This is one reason the nmi_watchdog parameter is commonly disabled in the initial bash payloads. The Linux kernel watchdog subsystem monitors the CPU for lockups and can reset a system that appears hung. Disabling the subsystem via sysctl at runtime should trigger an alert to notify your SOC that something fishy is going on, especially on a production system where (hopefully) kernel parameter changes are not made ad hoc. You can use the command below to test your visibility into this type of attack.

$> sysctl -w kernel.nmi_watchdog=0

Atomic Test: disable watchdog
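
Beyond auditing sysctl executions, you can watch the parameter itself. The one-liner below is a minimal sketch, assuming nmi_watchdog is expected to remain enabled (value 1) on your hosts; the syslog message is a placeholder for whatever alerting path you use.

$> test "$(cat /proc/sys/kernel/nmi_watchdog)" -eq 1 || logger -p authpriv.warning "nmi_watchdog disabled on $(hostname)"   # syslog alert if the watchdog is off

Scheduled via cron and shipped with the rest of your syslog, this acts as a simple tripwire until richer telemetry is in place.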

Adversary Challenge – Hiding Payloads

Once an attacker achieves access, they must hide their payload from a nosy sysadmin. Lacework Labs sees this accomplished in two ways: userland (T1574.006) and kernel (T1014) rootkits. For userland rootkits, a shared object (SO) is downloaded and the /etc/ld.so.preload file or LD_PRELOAD environment variable is modified so the SO loads before any other library on the system. The SO typically hooks the library calls used to list files and running processes, hiding the attacker's artifacts. Lacework Labs published a detailed walkthrough on analyzing SOs observed in a real-world attack. Identifying modifications to /etc/ld.so.preload, or detecting the LD_PRELOAD environment variable being set, is an excellent way to surface the suspicious behavior that aligns with attacks observed by Lacework Labs.

$> LD_PRELOAD=/tmp/notareal.so whoami

Atomic Test: LD_PRELOAD environment testing

 

$> echo "#accessing ld.so.preload" >> /etc/ld.so.preload

Atomic Test: ld.so.preload testing
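
File integrity monitoring of the preload path is a low-cost complement to the tests above. A minimal auditd sketch follows, assuming auditd is installed (on some versions the watched path must already exist before the rule loads); the -k key name is an arbitrary label.

$> auditctl -w /etc/ld.so.preload -p wa -k ld_preload_mod   # any write or attribute change to the preload file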

For kernel rootkits (T1014), the attacker must compile a kernel module that matches the kernel running on the victim machine. This is a significantly more complex scenario, and Lacework Labs has observed it from only one threat actor: the actor installed kernel headers on the victim machine, downloaded the kernel module's source code, then compiled and installed the module. This behavior presents numerous detection opportunities and is an excellent candidate for anomaly detection. It's also possible for an attacker to precompile modules for a smaller subset of popular LTS kernels rather than building on the victim. The following detection opportunity is built around the insmod and modprobe commands, which are used to install kernel modules on Linux hosts.

 

$> insmod /tmp/notareal.ko;

$> modprobe -l;

Atomic Test: kernel module loading
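
Because an attacker can load a module through several front ends (insmod, modprobe, or a custom loader), auditing the underlying syscalls is more robust than watching the binaries. Below is a minimal sketch, assuming a 64-bit host; the -k key is an arbitrary label.

$> auditctl -a always,exit -F arch=b64 -S init_module -S finit_module -S delete_module -k kmod_activity   # record every module load and unload

Comparing these events against a periodic lsmod baseline makes an unexpected module stand out quickly.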

Adversary Challenge – Hiding Outbound Network Connections

Cryptocurrency miners must communicate with a mining pool, and this is often the first point of detection for defenders in network telemetry (DS0029). DNS logs that reveal outbound resolution of mining pool domains offer an early detection opportunity; direct connections to mining pool IP addresses have also been observed. Numerous open source mining pool lists aggregate the latest pool domains and their associated IP addresses. Cloud service providers also offer DNS filtering to prevent cloud workloads from resolving and connecting to known bad domains, and Lacework Labs wrote a detailed article on how to use threat intel feeds in AWS via the AWS DNS Firewall.

Some sophisticated adversaries leverage a mining proxy to avoid the common detections described above. A mining proxy lets miners connect to an attacker-controlled server, which in turn communicates with the mining pool. This gives the attacker more flexibility to control and update the coin being mined and removes the telltale connection to a well-known pool, potentially evading detection. However, it also adds resource management burden and operational risk to a cryptojacking campaign.

$> curl xmr.f2pool.com

Atomic Test: Generate network telemetry to known mining pools
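
If DNS query logs are already collected, even a simple pattern match against a pool list can confirm whether you would see this traffic. The sketch below is illustrative only: the log path and the handful of domains are assumptions, so substitute your resolver's log location and a maintained mining pool list.

$> grep -iE 'f2pool|nanopool|supportxmr|minexmr' /var/log/named/queries.log   # hypothetical resolver log path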

Adversary Challenge – Persistence

After an attacker achieves access, the last thing they want to do is lose it. Attackers must establish a persistence method (TA0003) to retain access in the event that defenders attempt to remove them from an environment. Systemd services (T1543.002) and cron jobs (T1053.003) are easily implemented methods for continued access. Observed cron entries are often simple bash one-liners that curl a script and pipe it to bash to kick off the entire infection process. Sometimes this runs every minute on a host, so the deployed tools are killed and redeployed frequently.

$> echo "* * * * * echo lacework-labs > /tmp/lw-pwn.log" > tmpcron;

$> crontab tmpcron;

$> crontab -l;

Atomic Test: Adding a cronjob
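
Watching the cron configuration paths with auditd surfaces this persistence quickly. A minimal sketch, assuming standard cron locations (these vary slightly across distributions); the -k key is an arbitrary label.

$> auditctl -w /etc/crontab -p wa -k cron_mod

$> auditctl -w /etc/cron.d/ -p wa -k cron_mod

$> auditctl -w /var/spool/cron/ -p wa -k cron_mod   # per-user crontabs on Red Hat-style systems (Debian uses /var/spool/cron/crontabs/)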

Beyond systemd services and cron jobs, dropping SSH keys into user accounts on the host is a common technique attackers use to retain shell access to a victim machine. The initial bash script often contains an attacker's public key, which is added to the victim machine's root account. Disabling root SSH access and monitoring for modifications to authorized_keys files presents a detection opportunity in production environments where new SSH keys are rarely, if ever, added. The example below illustrates fetching SSH keys from GitHub, but this could be substituted with simply writing a public SSH key into the authorized_keys file.

$> curl https://github.com/$USER_NAME.keys >> ~/.ssh/authorized_keys

Atomic Test: Adding an SSH key to authorized_keys file
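
On hosts where SSH keys should rarely change, watching root's authorized_keys file and verifying that root login is disabled is a cheap win. A minimal sketch, assuming default OpenSSH paths and that the file already exists (auditd watches generally require the path to be present); the -k key is an arbitrary label.

$> auditctl -w /root/.ssh/authorized_keys -p wa -k ssh_key_mod   # any write to root's authorized_keys

$> grep -i '^PermitRootLogin' /etc/ssh/sshd_config   # show the current PermitRootLogin setting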

Conclusion

Lacework Labs consistently observes attackers adopting the latest proof-of-concept exploit scripts for recently published CVEs as a way to achieve access and deploy their offensive payloads in victim environments. Proactively planning and simulating these attacks internally in a purple team exercise can help security teams understand their individual roles during an incident, where potential visibility gaps exist, and what their existing security tooling can accomplish. Grounding the exercise in how real attackers operate makes it far more beneficial than running ad hoc commands. For content like this and more, follow us on Twitter, LinkedIn, and YouTube!