Cybersecurity on a Linux PC — Stage 1 of 7

Threat Hunting on a Live Linux Machine

Ben Santora — March 2026

Most people wait until something breaks before they look. The correct approach — especially on a machine that has been online, running services, and accumulating history — is to assume that something may already be wrong. Stage 1 is about looking before you have a reason to. It's forensic work on a live system, done from the inside.

So, the starting premise: assume the machine is already compromised. Then go look for evidence. This is how real threat hunters work — not waiting for an alert, but actively hunting artifacts that don't belong.

The Sweep Order

The sequence matters. You work from most volatile to most persistent:

#  Target                           Why this order
1  Running processes                Gone on reboot — must be checked live
2  Active network connections       Change constantly — must be attributed to processes
3  Persistence mechanisms           Cron, systemd, autostart, shell init files
4  Credentials and sensitive files  History, dotfiles, backup directories
5  Filesystem anomalies             SUID binaries, recently modified system files
6  Login and auth records           Who authenticated, when, from where
7  Snapshot comparison              What changed since a known-good state

Step 1 — Running Processes

ps aux --sort=-%cpu

ps aux lists every process on the system. --sort=-%cpu puts the highest CPU consumers first — malware doing active work (cryptomining, beaconing, brute-forcing) shows up here.

What you're looking for: processes running from unusual paths (/tmp, /dev/shm, /home/user/.cache), processes with names that mimic system tools but run from the wrong location, and anything with suspiciously high CPU for a process that should be idle.

What I found: Standard desktop stack — Firefox, PulseAudio, NetworkManager, Bluetooth. Nothing masquerading, no unusual parent/child relationships, no unexpected binaries.
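The path check can be automated. A minimal sketch that walks /proc and flags any process whose executable lives in one of the paths called out above — the path list is illustrative; extend it to taste:

```shell
# Flag processes whose binary runs from a path malware commonly uses.
scan_proc_paths() {
  for pid in /proc/[0-9]*; do
    # Resolve the running binary; skip kernel threads and vanished PIDs.
    exe=$(readlink "$pid/exe" 2>/dev/null) || continue
    case "$exe" in
      /tmp/*|/dev/shm/*|*/.cache/*)
        printf 'suspicious: pid %s -> %s\n' "${pid#/proc/}" "$exe" ;;
    esac
  done
}
# Usage: scan_proc_paths
```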

Step 2 — Active Network Connections

# Listening sockets — what's waiting for connections
ss -tulpn

# Established connections — what's actively talking
ss -tupn

ss replaced the deprecated netstat. The flags: -t TCP, -u UDP, -l listening, -p show process, -n don't resolve names. That last flag matters — reverse name resolution slows the output, and the DNS queries it generates are network noise of their own during an investigation.

What you're looking for: listening ports you didn't intentionally open, established connections to unfamiliar IPs on non-standard ports, and services bound to 0.0.0.0 (all interfaces) that should only be on 127.0.0.1.

Port           Service        Assessment
0.0.0.0:22     SSH            Open to all interfaces including WiFi
127.0.0.1:631  CUPS printing  Local only — fine
0.0.0.0:123    NTP            Expected for time sync

All established connections traced to Firefox and Claude, both on port 443. Nothing phoning home to unexpected destinations. The SSH finding — 0.0.0.0:22 — means every device on the local network can attempt authentication. This gets a full audit in Stage 3.
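On a long listing, a small filter helps. A sketch that reads ss -tulpn output on stdin and keeps only the sockets bound to all interfaces — it assumes the default column layout, where the local address is the fifth field:

```shell
# Print sockets reachable from the whole LAN (bound to 0.0.0.0 or [::]).
flag_wide_open() {
  awk 'NR > 1 && ($5 ~ /^0\.0\.0\.0:/ || $5 ~ /^\[::\]:/) { print $5, $7 }'
}
# Usage: ss -tulpn | flag_wide_open
```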

Step 3 — Persistence Mechanisms

Cron

crontab -l              # your cron jobs
sudo crontab -l         # root's cron jobs
cat /etc/crontab        # system-wide cron
ls /etc/cron.d/         # package-installed jobs

Cron is the most common persistence mechanism on Linux. An attacker who gains access will almost always drop a cron job — it survives reboots, runs silently, and is easy to overlook in a busy listing.

What you're looking for: jobs you don't recognize, jobs pointing to unusual paths, scripts that no longer exist (payload ran and was deleted — the hook remains), and unusually frequent intervals.

Pattern to know

A 1-minute cron interval for a "utility" script is a red flag. Real maintenance tasks run every 5–30 minutes. Every-minute execution is a beacon interval — a script checking in with a remote server, waiting for commands. If you find */1 * * * * on a script you didn't write, treat it as high priority.
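That pattern can be grepped for directly. A sketch that reads crontab text on stdin and flags entries whose minute field fires every minute:

```shell
# Flag cron entries with an every-minute schedule (* or */1 in the
# minute field) — the beacon-interval red flag.
find_beacons() {
  grep -E '^[[:space:]]*(\*|\*/1)[[:space:]]'
}
# Usage: crontab -l | find_beacons
```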

Systemd Services

systemctl list-units --type=service --state=running

# Find recently added or modified service files
find /etc /usr/lib/systemd ~/.config/systemd \
  -name "*.service" -newer /etc/passwd 2>/dev/null

Attackers increasingly use systemd for persistence because it runs as a proper daemon, restarts on failure, and is harder to spot than a cron entry in a long listing. Using /etc/passwd as the timestamp reference is a standard trick — anything newer than that was modified after the system was set up.
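A complementary check is to look inside the unit files themselves. A sketch that greps ExecStart lines for commands living in the throwaway filesystems attackers favor — the directory arguments are illustrative:

```shell
# Print ExecStart lines whose command lives in /tmp or /dev/shm.
scan_units() {  # usage: scan_units /etc/systemd/system ~/.config/systemd/user
  grep -RhE '^ExecStart=.*(/tmp/|/dev/shm/)' "$@" 2>/dev/null
}
```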

Shell Init Files

diff ~/.bashrc ~/.bashrc.bak     # compare against a backup
cat ~/.xinitrc                   # X session startup script
ls ~/.config/autostart/          # GUI autostart entries

Shell init files run every time a shell opens. If you have a backup copy of .bashrc, diffing it against the current version reveals exactly what changed — and whether you made those changes.

The Hidden Directory Trick

ls -la ~/
file ~/.profile       # should say "ASCII text" not "directory"

During this sweep, ~/.profile appeared as a directory instead of a file. On a standard Linux system, .profile is a shell script sourced at login. A directory with that name does two things: bash silently skips sourcing it (breaking login shell initialization), and the directory name makes any files stored inside look like normal system infrastructure.

In this case the content was a benign Codeberg README. In a real intrusion, this is a staging area for payloads — hidden in plain sight behind a filename that looks like it belongs.
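The trick is easy to test for mechanically. A sketch that warns when a login-init path exists but is not a regular file:

```shell
# Warn when an init-file path exists but is a directory (or anything
# else that isn't a regular file) — the hidden-directory trick above.
check_init_files() {
  for f in "$@"; do
    if [ -e "$f" ] && [ ! -f "$f" ]; then
      echo "WARNING: $f exists but is not a regular file"
    fi
  done
}
# Usage: check_init_files ~/.profile ~/.bash_profile ~/.bashrc
```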

Step 4 — Credentials and Sensitive Files

cat ~/.bash_history
ls -la ~/.ssh/
find ~ \( -name "*.token" -o -name "*credential*" \) 2>/dev/null

Shell history is a goldmine for both attackers and defenders. It records API tokens passed on the command line, SSH destinations, internal IP addresses, and the full operational picture of what this machine has been used for.
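A rough heuristic grep catches the obvious cases. The patterns below are illustrative, not an exhaustive secret scanner:

```shell
# Grep history files for token-shaped strings: key/token assignments
# and long Bearer values passed on the command line.
scan_history() {  # usage: scan_history ~/.bash_history
  grep -nE '(api[_-]?key|secret|token)[=: ]|Bearer [A-Za-z0-9._/-]{16,}' "$@"
}
```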

Finding — API token in bash history

A live Cloudflare API token appeared in ~/.bash_history from a curl command run during testing. A second copy existed at ~/backup/cloudflare_token with 0644 permissions — readable by any local user or process. Any malware running as this user can read both files with a single command and take full control of the associated domain.

The fix:

# Step 1 — revoke the token at cloudflare.com/profile/api-tokens
# (deleting the file is not enough — the token still works)

# Step 2 — remove from history
history -d <line_number>
# or clear all history:
history -c && > ~/.bash_history

# Step 3 — fix backup file permissions
chmod 600 ~/backup/cloudflare_token

Step 5 — Filesystem Anomalies

SUID Binaries

find / -perm -4000 -type f 2>/dev/null | sort

SUID (Set User ID) binaries run with the file owner's permissions regardless of who executes them. /usr/bin/sudo is SUID root — that's how it grants elevated access. An attacker who adds a SUID shell or modifies an existing SUID binary has a permanent privilege escalation path.

What I found: Standard set — sudo, passwd, mount, su, pkexec. No additions, all timestamps consistent with package installs.
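Timestamps can be forged, so a baseline comparison is stronger. A sketch assuming you captured a known-good SUID inventory earlier — the baseline file name and path are illustrative:

```shell
# One-time, on a known-good system:
#   find / -perm -4000 -type f 2>/dev/null | sort > ~/suid.baseline
# On later sweeps, diff the current inventory against it.
suid_sweep() {  # usage: suid_sweep BASELINE_FILE [SEARCH_ROOT]
  find "${2:-/}" -perm -4000 -type f 2>/dev/null | sort | diff "$1" -
}
# Lines starting with > are new SUID files; < are ones that vanished.
```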

Recently Modified System Binaries

find /usr/bin /usr/sbin /usr/local/bin -newer /etc/passwd -ls 2>/dev/null

/etc/passwd as a reference timestamp is a standard forensic technique. It was last modified when the system was installed or when users were added. Any system binary newer than that should be traceable to a known package update.

One anomaly found: /usr/bin/copy_by_ext.sh was owned by the user, not root. It was a legitimate file organizer script in the wrong location. The risk: if an attacker gains user-level access, they can silently rewrite it — and it will look like a standard system utility to anyone scanning the directory.
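Ownership checks like this one generalize. A sketch listing regular files in the system binary directories that are not owned by root:

```shell
# List files in system binary dirs not owned by root — the
# copy_by_ext.sh class of finding.
nonroot_bins() {  # usage: nonroot_bins /usr/bin /usr/sbin /usr/local/bin
  find "$@" -maxdepth 1 -type f ! -user root 2>/dev/null
}
```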

Step 6 — Login and Auth Records

last -20                       # recent logins and reboots
lastb                          # failed login attempts (needs root)
journalctl -u ssh --since "7 days ago" --no-pager   # unit is "sshd" on some distros
grep "Failed\|Accepted" /var/log/auth.log

last reads from /var/log/wtmp. lastb reads failed attempts from /var/log/btmp. Together they give you a timeline of who authenticated, from where, and how many times someone failed before succeeding.

What I found: Two boot records. No SSH login events. No authorized_keys file — no backdoor key installed. The absence of login events is itself informative: this machine hasn't been accessed remotely.
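When there are login events, tallying them per source IP makes brute-force attempts obvious. A sketch that reads auth.log text on stdin — assuming the standard sshd log format, where the source IP is the fourth field from the end:

```shell
# Count failed vs accepted SSH auths per source IP.
auth_tally() {
  awk '/Failed password/ { fail[$(NF-3)]++ }
       /Accepted/        { ok[$(NF-3)]++ }
       END {
         for (ip in fail) printf "%s failed=%d accepted=%d\n", ip, fail[ip], ok[ip]+0
         for (ip in ok) if (!(ip in fail)) printf "%s failed=0 accepted=%d\n", ip, ok[ip]
       }'
}
# Usage: auth_tally < /var/log/auth.log
```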

Step 7 — Snapshot Comparison

# List available snapshots
ls /timeshift/snapshots/

# Compare a system file between snapshot and current
diff /timeshift/snapshots/DATE/localhost/etc/crontab /etc/crontab

# Find files that changed between two snapshots
diff -rq \
  /timeshift/snapshots/OLDER/localhost/etc/ \
  /timeshift/snapshots/NEWER/localhost/etc/

Timeshift snapshots are your ground truth. They let you answer: was this cron job here three weeks ago? Did this binary change? Was this user account added after the last snapshot? This is where "assume compromise" becomes a time-bounded investigation.

Lesson learned: Timeshift was configured to exclude the home directory. For future sweeps, configure it to include /home — that's where most user-space persistence lives.

Findings Summary

Finding                                    Severity  Action
API token in bash history                  Critical  Revoke token, clear history
Cloudflare token world-readable in backup  Medium    chmod 600
SSH on all interfaces                      Medium    Addressed in Stage 3
copy_by_ext.sh user-owned in /usr/bin      Low       Move to ~/bin
.profile is a directory                    Low       Breaks login shell init
/etc/hosts immutable                       Info      Intentional
Second machine at 10.0.0.x                 Info      Noted for Stage 3

Three Perspectives

What the attacker sees

An SSH port open on the LAN. A bash history file with a cloud provider token that controls a live domain. Two cron slots pre-wired at high frequency — drop a script to the right path and you have persistent, silent code execution without touching a single system file. A user-owned script in /usr/bin that can be overwritten without elevated privileges.

What the defender sees

No active intrusion indicators in processes or network state. No backdoor keys, no injected users, no suspicious kernel modules, no active command-and-control connections. But credential hygiene is poor and several persistence paths exist that would be trivially exploitable if the machine were breached at the network level.

At organizational scale

In a real company environment, one workstation with a cloud provider token in its shell history is a supply chain risk. The attacker reads the history, rotates DNS to point the domain at their infrastructure, harvests inbound traffic, then cleans up. The "sleeper hook" pattern — cron entries pointing to nonexistent scripts — is a documented persistence technique in enterprise threat intelligence. The hook stays dormant until a payload is dropped. Security teams scan for exactly this pattern when investigating a suspected breach.
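The sleeper-hook pattern is mechanically checkable. A sketch that reads crontab text and flags entries whose command path no longer exists on disk — assuming the user-crontab format, where the command starts at the sixth field:

```shell
# Flag cron entries pointing at scripts that no longer exist.
check_hooks() {
  while read -r m h dom mon dow cmd _; do
    # Skip comments, blank lines, and env assignments (no 6th field).
    case "$m" in \#*|"") continue ;; esac
    if [ -n "$cmd" ] && [ ! -e "$cmd" ]; then
      echo "sleeper hook: $m $h $dom $mon $dow $cmd"
    fi
  done
}
# Usage: crontab -l | check_hooks
```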

What Comes Next

Each stage of this series builds on the last. We have the baseline. Now we go deeper:

Stage    Focus
Stage 2  Log analysis — 24 hours of system logs, hunting auth anomalies and reconnaissance patterns
Stage 3  Attack surface mapping — nmap scan of localhost and the LAN, full port audit
Stage 4  Live traffic capture — 60 seconds of tcpdump, everything analyzed
Stage 5  Honeypot — a fake listener that logs everything that connects to it
Stage 6  Malware sandbox — a controlled environment for detonating suspicious files
Stage 7  CVE hunting — scanning running services for known vulnerabilities