Linux Fundamentals | Hack the Box Academy Walkthrough

Bishal Ray -#GxbNt
44 min read · Oct 3, 2023


System Information

System information refers to various details and statistics about the computer’s hardware, software configuration, and network settings. This information is essential for system administrators, developers, and users to understand and manage the system effectively.

The questions below can each be answered using essential Linux command-line tools for system information and management:

Find out the machine hardware name and submit it as the answer.

The machine hardware name can be viewed using the command uname -m; the related uname -i prints the hardware platform, which shows the same value on many systems.

What is the path to htb-student’s home directory?

The user home directory can be accessed using the cd command, and the current path can be viewed using the pwd command
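
For example, assuming you are logged in as htb-student:

cd ~
pwd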

What is the path to the htb-student’s mail?

User mail is stored in the /var/mail directory, so the path to htb-student’s mail is /var/mail/htb-student.

Which shell is specified for the htb-student user?

The shell specified for a user is recorded in the last field of their /etc/passwd entry; for the current session, the echo $SHELL command prints the active shell.
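
For example:

grep htb-student /etc/passwd

The shell is the last colon-separated field of the output.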

Which kernel version is installed on the system? (Format: 1.22.3)

The kernel version can be found using the command uname -r, which helps identify the current kernel version of the system.

What is the name of the network interface whose MTU is set to 1500?

Network interfaces can be viewed using the ifconfig command; look for the interface whose output shows an MTU of 1500.
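
One possible check (the interface name in the answer varies per target):

ifconfig | grep "mtu 1500"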

Navigation

Navigation in Linux refers to the process of moving around the file system using command-line commands. Here’s a short explanation of essential commands:

  1. pwd (Print Working Directory): Displays the current directory’s path.
  2. ls (List): Lists the contents of the current directory.
  3. cd (Change Directory): Allows you to change your current directory. For example, cd /home/user changes to the "/home/user" directory.
  4. cd .. (Change to Parent Directory): Moves up one directory level.
  5. mkdir (Make Directory): Creates a new directory. For example, mkdir myfolder creates a folder named "myfolder."
  6. touch (Create Empty File): Creates a new empty file. For example, touch myfile.txt creates a file named "myfile.txt."
  7. rm (Remove): Deletes files or directories. Be cautious when using this command.
  8. cp (Copy): Copies files or directories. For example, cp file.txt /destination copies "file.txt" to the "/destination" folder.
  9. mv (Move or Rename): Moves files or directories from one location to another or renames them.
  10. find (Find Files and Directories): Searches for files and directories based on specified criteria.
  11. grep (Search Text): Searches for text patterns within files.
  12. ln (Create Links): Creates symbolic or hard links to files or directories.
  13. chmod (Change File Permissions): Modifies file permissions to control access.
  14. chown (Change Ownership): Changes the owner of files or directories.
  15. history (Command History): Shows a list of previously executed commands.

What is the name of the hidden “history” file in the htb-user’s home directory?

First, navigate to the user’s home directory with the cd command. Then use ls -la to list all files, including hidden ones, in long-list format.
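
For example:

ls -la /home/htb-student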

What is the index number of the “sudoers” file in the “/etc” directory?

The -i option of the ls command displays the index (inode) number of each file in the output.
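
For example:

ls -i /etc/sudoers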

Working with Files and Directories

Working with files and directories in Linux involves various command-line operations for managing and manipulating them. Mastering these basic commands is crucial for efficient file and directory management in Linux. Here’s a brief overview of essential tasks:

  1. Navigation: Use cd to change directories and pwd to display the current directory.
  2. Listing: Employ ls to list files and directories in the current directory. Use ls -a to show hidden files (those starting with a dot).
  3. Creating: Use touch to create empty files and mkdir to create directories.
  4. Copying and Moving: Utilize cp to copy files or directories and mv to move or rename them.
  5. Deleting: Use rm to delete files and rmdir or rm -r to remove directories. Be cautious with the rm command, as it's irreversible.
  6. Viewing File Content: Use cat, less, or more to view file contents. For text searching, use grep.
  7. Editing Files: Employ text editors like nano, vim, or emacs to create and modify text files.
  8. Permissions: Adjust file and directory permissions with chmod. Change ownership with chown.
  9. Symbolic Links: Create symbolic links with ln -s to reference files or directories.
  10. File Search: Use find to search for files based on various criteria.
  11. File Compression and Archiving: Compress files with gzip or tar, and extract archives with tar.
  12. File Transfer: Transfer files between local and remote systems using tools like scp (Secure Copy) or rsync.
  13. File Information: Use stat to display detailed file information and file to determine a file's type.

What is the name of the last modified file in the “/var/backups” directory?

The file that was last modified can be viewed using the -t option with the ls command, which sorts files by modification time in descending order.
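
For example, the first name printed is the most recently modified file:

ls -t /var/backups | head -1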

What is the inode number of the “shadow.bak” file in the “/var/backups” directory?

The inode number can be viewed using the -i option with the ls command.
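
For example:

ls -i /var/backups/shadow.bak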

Find Files and Directories

The importance of finding files and directories on a Linux system cannot be overstated. Accessing vital configuration files, user-created scripts, and other essential resources requires efficient searching, without manually exploring every folder or checking each file’s modification time. Several tools simplify this task, “which” being one of them. It prints the path of the file or link that would be executed, letting you verify whether a program is available on the operating system. For instance, the command `which python` can determine the existence and location of the Python program.

Another valuable tool for file and directory searches is “find.” In addition to locating files and directories, it offers various filtering capabilities based on criteria such as file size, modification date, and file type. The syntax for using “find” includes specifying the location and desired options. For example:

find / -type f -name "*.conf" -user root -size +20k -newermt 2020-03-03 -exec ls -al {} \; 2>/dev/null

In this command:

- `-type f` defines the searched object type as a file.
- `-name "*.conf"` filters files with the “.conf” extension (the pattern is quoted so the shell does not expand it).
- `-user root` selects files owned by the root user.
- `-size +20k` restricts results to files larger than 20 KiB.
- `-newermt 2020-03-03` identifies files newer than the specified date.
- `-exec ls -al {} \;` executes the “ls -al” command on each result, with curly brackets serving as placeholders.

`2>/dev/null` redirects standard error (STDERR) to the null device, preventing error messages from displaying in the terminal.

For quicker system-wide searches, the “locate” command offers a more efficient alternative. Unlike “find,” “locate” relies on a local database containing information about existing files and directories. Updating this database can be accomplished with the “sudo updatedb” command. Subsequently, searching for files with a specific extension, such as “.conf,” using “locate” yields significantly faster results. However, “locate” lacks the extensive filtering options of “find,” making it essential to consider the appropriate tool based on specific search requirements.

What is the name of the config file that has been created after 2020-03-03 and is smaller than 28k but larger than 25k?
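
A possible search using the filters described above (note that find’s -newermt compares modification time):

find / -type f -name "*.conf" -newermt 2020-03-03 -size +25k -size -28k 2>/dev/null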

How many files exist on the system that have the “.bak” extension?
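
One way to count them:

find / -type f -name "*.bak" 2>/dev/null | wc -l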

Submit the full path of the “xxd” binary.
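
For example:

which xxd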

File Descriptors and Redirections

File descriptors in Linux are numbers used to identify open files, with three standard descriptors: 0 (stdin for input), 1 (stdout for output), and 2 (stderr for error messages). Redirection allows you to change where data is read from or written to. For example, you can use > to redirect stdout to a file, 2> for stderr, and < to read input from a file. These techniques are essential for managing input and output in the command line.
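
A minimal sketch combining these redirections (results.txt is an arbitrary file name) — the first command writes matches to a file while discarding errors, and the second reads that file as stdin:

find /etc -name "*.conf" > results.txt 2>/dev/null
wc -l < results.txt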

How many files exist on the system that have the “.log” file extension?
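
One way to count them, discarding permission errors as shown earlier:

find / -type f -name "*.log" 2>/dev/null | wc -l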

How many total packages are installed on the target system?
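
On a Debian-based target, one possible approach (dpkg -l prints a few header lines, which slightly inflate a naive count):

dpkg -l | wc -l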

Filter Contents

Filtering contents in the context of Linux or Unix-like operating systems refers to the process of selectively extracting or manipulating data from a text-based input, such as a file or the output of a command, based on specific criteria or patterns. This is typically done using command-line utilities and text-processing tools. Here are some common tools and techniques for filtering contents:

grep: The grep command is used to search for specific patterns or text within a file or input stream. It allows you to filter lines that match a given regular expression or keyword.

Example:

grep "error" logfile.txt

sed: The sed (stream editor) command is used for text manipulation. It can be used to find and replace text, delete lines, and perform various transformations on the input.

Example:

sed 's/old_text/new_text/g' input.txt

awk: The awk command is a versatile text-processing tool that allows you to specify patterns and actions to perform on each line of input. It's often used for data extraction and transformation.

Example:

awk '/pattern/ {print $2}' data.txt

cut: The cut command is used to extract specific columns or fields from lines of text. It's particularly useful for working with structured data separated by delimiters like spaces or tabs.

Example:

cut -f 1,3 -d ',' data.csv

sort: The sort command arranges lines of text in a specified order, such as alphabetical or numerical. It's helpful for sorting data before further processing.

Example:

sort -r names.txt

uniq: The uniq command removes duplicate lines from sorted input. It's often used in combination with sort.

Example:

sort data.txt | uniq

head and tail: These commands are used to display the first few or last few lines of a file, respectively. They are useful for quickly inspecting the beginning or end of a large file.

Example:

head -n 10 bigfile.txt

grep, sed, and awk in Combination: These tools can be combined in complex ways to perform advanced text processing tasks, such as searching for a pattern in a file and then extracting specific information from matching lines.
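
A small sketch combining two of these tools, listing the usernames of accounts whose shell is bash:

grep "bash" /etc/passwd | awk -F":" '{print $1}'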

How many services are listening on the target system on all interfaces? (Not on localhost and IPv4 only)
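
One possible approach using ss, limited to IPv4 TCP listeners and excluding localhost (depending on how you count UDP services, your filter may differ):

ss -l -4 | grep -v "127.0.0" | grep -i listen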

Determine what user the ProFTPd server is running under. Submit the username as the answer.
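
For example, the first column of the ps output shows the user each process runs as:

ps aux | grep -i proftpd | grep -v grep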

Use cURL from your Pwnbox (not the target machine) to obtain the source code of the “https://www.inlanefreight.com” website and filter all unique paths of that domain. Submit the number of these paths as the answer.
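
One possible pipeline (the grep pattern is just one way to extract the domain’s paths):

curl -s https://www.inlanefreight.com | grep -o 'https://www.inlanefreight.com/[^"]*' | sort -u | wc -l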

User Management

User management is a critical aspect of Linux administration, encompassing a range of tasks aimed at creating, modifying, and controlling user accounts and their access to resources. It is fundamental for maintaining security, access control, and organizational structure within a Linux-based system. Here are key aspects of user management:

Creating and Modifying Users:

  • User Creation: Administrators can create new user accounts, specifying usernames, passwords, and user attributes.
  • Group Membership: Users can be added to specific groups, which helps in defining access permissions.
  • User Modification: User attributes, including the home directory and shell, can be updated as needed.

Access Control:

  • User Permissions: Users can be granted permissions to access files and directories, ensuring that only authorized individuals can view or edit specific resources.
  • Group Permissions: Groups are used to manage permissions more efficiently, with users sharing common access rights.
  • Root Access: Administrators can execute commands with superuser privileges using sudo or switch to the root user with su to perform system-wide tasks.

Password Management:

  • Password Policies: Password policies can be enforced, requiring users to follow specific rules for password complexity and expiration.
  • Password Reset: Administrators can reset user passwords in case of forgotten passwords or security concerns.

User Groups:

  • Group Management: Administrators can create and manage groups to simplify user access control and group-specific tasks.
  • Group Deletion: Groups that are no longer needed can be removed.

User Shell and Environment:

  • Shell Configuration: Users can customize their shell environment, selecting from various available shells (e.g., bash, zsh).
  • Environment Customization: Users can configure shell profiles and settings according to their preferences.

User Management Commands:

Here are some common Linux commands and utilities used for user management:

  • sudo: Executes commands with elevated privileges, allowing administrators to perform actions as different users.
  • su: Switches user credentials via PAM and starts a new shell session with the specified user's identity.
  • useradd: Creates new user accounts or updates default new user information.
  • userdel: Deletes user accounts and associated files.
  • usermod: Modifies existing user account attributes.
  • addgroup: Adds a new group to the system.
  • delgroup: Removes a group from the system.
  • passwd: Allows users to change their passwords.

Which option needs to be set to create a home directory for a new user using “useradd” command?
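
The -m option creates the home directory, for example (newuser is a placeholder):

sudo useradd -m newuser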

Which option needs to be set to lock a user account using the “usermod” command? (long version of the option)
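
For example, the long form of -L (newuser is a placeholder):

sudo usermod --lock newuser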

Which option needs to be set to execute a command as a different user using the “su” command? (long version of the option)
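
For example, the long form of -c:

su --command "id" htb-student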

Package Management

Linux package management is a critical aspect of system administration, whether you’re maintaining servers, home machines, or configuring penetration testing distributions. It involves using package managers to install, update, or remove software packages, which are archives containing binaries, configuration files, dependency information, and update tracking.

Key Features of Package Management Systems:

  1. Package Downloading: Package managers download software packages from repositories or remote sources.
  2. Dependency Resolution: They handle dependencies automatically, ensuring that all required libraries and components are installed.
  3. Standard Binary Package Format: Packages use standardized formats (e.g., .deb for Debian-based systems, .rpm for Red Hat-based systems).
  4. Common Installation and Configuration Locations: Packages follow conventions for installation paths and configuration files.
  5. Additional System-Related Configuration: Package managers configure system settings, ensuring software integration.
  6. Quality Control: Packages undergo quality checks and testing before being made available in repositories.

Package Management Systems:

Different package management systems exist, catering to various Linux distributions. Here are examples:

  • dpkg: Manages Debian packages. Commonly used with the front-end tool “aptitude.”
  • apt: Provides a high-level command-line interface for package management.
  • aptitude: An alternative to “apt” that offers a high-level package management interface.
  • snap: Installs, configures, and manages snap packages for secure software distribution.
  • gem: Front-end to RubyGems, the Ruby package manager.
  • pip: Python package installer, useful for packages not available in the Debian archive.
  • git: Revision control system for downloading software and scripts from repositories.

Using Package Managers:

Package management typically involves working with repositories that store software packages. Here’s a basic overview of package management using APT (Advanced Package Tool), common in Debian-based systems:

  • Updating Repositories: Repositories are updated regularly. Check your system’s repository list (e.g., /etc/apt/sources.list) to ensure it points to the desired repository.
  • APT Cache: APT maintains a local cache that provides package information offline. You can search for packages, view package details, and list installed packages using apt-cache commands.
  • Installing Packages: Use the apt install command followed by the package name to install software packages. APT resolves dependencies automatically.
  • Manual Package Installation: You can also download packages manually using tools like “wget” and install them with “dpkg.”

Example:

  • To install a package with APT: sudo apt install <package-name>
  • To search for packages: apt-cache search <package-name>
  • To view package details: apt-cache show <package-name>
  • To list installed packages: apt list --installed
  • To manually install a package with “dpkg”: sudo dpkg -i <package-file>.deb

Service and Process Management

In the realm of Linux systems, two primary categories of services exist: internal services that are integral to system startup, responsible for hardware-related tasks, and user-installed services, which often include various server services running in the background, known as daemons. Daemons can be identified by the ‘d’ at the end of their program names, such as ‘sshd’ or ‘systemd.’

Most modern Linux distributions have transitioned to using ‘systemd’ as their init system. ‘systemd’ is the first process (PID 1) started during boot and is responsible for managing the orderly start and stop of other services. Each process is assigned a Process ID (PID), which can be found under ‘/proc/’ with its corresponding number. Processes can also have Parent Process IDs (PPID) if they are child processes.

Apart from ‘systemctl,’ ‘update-rc.d’ is another tool used to manage SysV init script links. Here are some examples using the OpenSSH server:

systemctl:

  1. Start the OpenSSH service:
systemctl start ssh
  2. Check the service status:
systemctl status ssh
  3. Enable the service to run on startup:
systemctl enable ssh
  4. List all services:
systemctl list-units --type=service
  5. View service logs using ‘journalctl’:
journalctl -u ssh.service --no-pager

Process Management:

  • Processes can be controlled using signals sent to them. Commonly used signals include:
  • SIGHUP (1): Sent when the controlling terminal is closed.
  • SIGINT (2): Sent when a user interrupts a process using [Ctrl] + C.
  • SIGQUIT (3): Sent when a user quits a process using [Ctrl] + \.
  • SIGKILL (9): Immediately terminates a process without clean-up.
  • SIGTERM (15): Requests graceful program termination.
  • SIGSTOP (19): Stops a process; this signal cannot be caught or ignored.
  • SIGTSTP (20): Suspends a process, allowing the user to resume it later.

For example, to forcefully terminate a frozen program:

kill -9 <PID>

Background and Foreground Processes:

  • To put a process in the background and continue using the terminal, press [Ctrl] + Z to suspend it and then use the bg command to move it to the background.
  • To automatically start a process in the background, add an ampersand (&) to the command.
  • View background processes with the jobs command.
  • Bring a background process to the foreground using fg.
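
A minimal sketch of this workflow (the ping target is arbitrary):

ping -i 10 www.hackthebox.eu
# press [Ctrl] + Z to suspend the process, then:
bg
jobs
fg %1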

Executing Multiple Commands:

  • You can run multiple commands on a single line using semicolons (;), double ampersands (&&), or pipes (|).
  • Semicolons separate commands and execute them regardless of the previous command’s result.
  • Double ampersands execute commands sequentially but stop if any command fails.
  • Pipes pass the output of one command to the next command as its input, so each stage depends on the previous one’s output.

For example:

echo '1' && ls MISSING_FILE && echo '3'

In this example, ‘1’ is printed, but because ls fails to find ‘MISSING_FILE’, the final echo command never runs and ‘3’ is not printed.

Use the “systemctl” command to list all units of services and submit the unit name with the description “Load AppArmor profiles managed internally by snapd” as the answer.

systemctl list-units --type=service | grep "Load AppArmor profiles managed internally by snapd"

Task Scheduling

Task scheduling in Linux is a valuable feature that empowers users to automate and schedule tasks, eliminating the need for manual execution. It is available on Linux and Unix-like systems such as Ubuntu, Red Hat Linux, and Solaris, and serves as a versatile tool for managing diverse tasks. These tasks range from routine activities like software updates, script execution, and database maintenance to the automation of backups. Scheduling ensures that repetitive tasks run consistently on predefined schedules, and alerts can be set up for specific events to promptly notify administrators or users. The automation possibilities are extensive, encompassing a wide range of use cases.

Systemd: systemd is an init system and service manager used by distributions such as Ubuntu and Red Hat Linux, offering a means to initiate processes and scripts at scheduled times. With systemd, users can configure processes and scripts to launch at specified intervals or in response to particular events or triggers. The setup involves several steps and precautions before scripts or processes become automated by the system.

Creating a Timer: To create a systemd timer, a directory for the timer script needs to be established first:

sudo mkdir /etc/systemd/system/mytimer.timer.d
sudo vim /etc/systemd/system/mytimer.timer

The timer script should define options including “Unit” (description for the timer), “Timer” (specifying when and how often the timer activates), and “Install” (indicating where to install the timer).

[Unit]
Description=My Timer
[Timer]
OnBootSec=3min
OnUnitActiveSec=1hour
[Install]
WantedBy=timers.target

The configuration depends on whether the script should run once after system boot or at regular intervals. Next, the service needs to be created.

Creating a Service: Create the service by defining a description and specifying the full path to the script that needs to be executed. “multi-user.target” designates the unit system activated during normal multi-user mode startup, specifying services to launch at system startup.

sudo vim /etc/systemd/system/mytimer.service

[Unit]
Description=My Service
[Service]
ExecStart=/full/path/to/my/script.sh
[Install]
WantedBy=multi-user.target

To incorporate the changes, systemd must reload its configuration:

sudo systemctl daemon-reload

Finally, you can manually start the service and enable automatic startup

sudo systemctl start mytimer.service
sudo systemctl enable mytimer.service

Cron: Cron is another tool available in Linux systems for scheduling and automating processes. It enables users and administrators to execute tasks at specified times or intervals. Similar tasks to those automated with systemd can also be accomplished using Cron. To do this, create a script and configure the Cron daemon to execute it at specific times. Cron uses a different setup process compared to systemd.

To set up Cron, tasks are stored in a file called “crontab,” and the daemon is configured to run them according to specified schedules. Cron’s structure includes time components such as minutes, hours, days of the month, months, and days of the week. For example:

# System Update
0 */6 * * * /path/to/update_software.sh
# Execute scripts
0 0 1 * * /path/to/scripts/run_scripts.sh
# Cleanup DB
0 0 * * 0 /path/to/scripts/clean_database.sh
# Backups
0 0 * * 7 /path/to/scripts/backup.sh

This crontab specifies when tasks should run based on the time components. Notifications for task execution status can be set up, and logs can be created for monitoring task execution.

Systemd vs. Cron: Systemd and Cron are both tools used for task scheduling and automation in Linux systems. The key distinction lies in how they are configured. Systemd requires creating timer and service scripts to define when tasks should execute, while Cron relies on crontab files specifying task schedules for the Cron daemon to follow.

What is the type of the service of the “syslog.service”?

 systemctl show syslog.service --property=Type

Network Services

Working within a Linux environment requires proficiency in handling various network services, a skill crucial for multiple reasons. This competency enables users to communicate with other computers over the network, establish connections, transfer files, scrutinize network traffic, and configure services to identify potential vulnerabilities in subsequent penetration tests. Understanding these services also enhances comprehension of network security, as it unveils the intricacies of each service and its associated configurations.

Imagine conducting a penetration test and encountering a Linux host under examination. By monitoring network activities, it becomes evident that a user from this Linux host connects to another server through an unencrypted FTP server, inadvertently exposing their credentials in plaintext. Awareness of the lack of encryption in FTP could have prevented this scenario. As a Linux administrator, such lapses could have severe repercussions, shedding light not only on network security but also on the competence of the administrators responsible for safeguarding the network.

While it’s impossible to cover all network services comprehensively, we’ll focus on the most critical ones, as they offer substantial benefits for both administrators and users, as well as penetration testers examining interactions between hosts.

SSH (Secure Shell): SSH is a network protocol that ensures secure data and command transmission over networks. It’s widely employed to manage remote systems securely, enabling the execution of commands and file transfers. Connecting to a Linux host via SSH necessitates the availability and operation of an SSH server. OpenSSH is the most commonly used SSH server, known for its open-source nature.

Penetration testers leverage OpenSSH to access remote systems securely during network audits. To confirm the SSH server’s status, the following command can be used:

systemctl status ssh

To establish an SSH connection, you can employ the following command, replacing the placeholders with appropriate values:

ssh username@hostname_or_ip

Customizing OpenSSH can be achieved by modifying the /etc/ssh/sshd_config file, granting control over various settings like concurrent connections, login methods (passwords or keys), host key verification, and more. Changes to the OpenSSH configuration should be handled cautiously.

NFS (Network File System): NFS is a network protocol facilitating the storage and management of files on remote systems as if they were local. It streamlines file management across networks, promoting collaboration and data management. Administrators use NFS for centralized file storage and access, including Linux and Windows systems.

To install NFS on Linux, execute the following command:

sudo apt install nfs-kernel-server -y

Check the NFS server’s status with:

systemctl status nfs-kernel-server

NFS configuration is handled through the /etc/exports file, specifying shared directories and access rights. NFS access rights determine which users and systems can access shared directories and the actions permitted. Example access rights include "rw" (read and write), "ro" (read-only), "no_root_squash" (root user not restricted), and "root_squash" (root user restricted), among others.
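
A sketch of an /etc/exports entry, assuming a hypothetical shared folder and the subnet used elsewhere in this walkthrough:

/srv/nfs 10.129.14.0/24(rw,sync,no_root_squash)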

To mount an NFS share on a target system, you can use a command like this:

mount server_ip:/path/to/shared/folder /local/mount/point

Web Server: Web servers are essential for web applications and are often targeted by penetration testers. These servers facilitate data and document delivery over the Internet, using HTTP to send data to clients (web browsers) and receiving requests in return. Popular web servers for Linux include Apache, Nginx, Lighttpd, and Caddy.

As penetration testers, web servers serve various purposes, such as facilitating file transfers and conducting phishing attacks. Apache, a widely-used web server, offers features for hosting secure and efficient web applications. It allows configuration adjustments for directory accessibility, access control, and other settings via the /etc/apache2/apache2.conf file.

To install Apache on Linux, use:

sudo apt install apache2 -y

Customizing folder accessibility can be achieved by editing the Apache configuration. For example:

<Directory /var/www/html>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>

Python Web Server: Python Web Server is a lightweight alternative to Apache, enabling single-folder hosting with a simple command. It’s useful for transferring files to other systems.

To install Python Web Server, first install Python3:

sudo apt install python3 -y

Start a Python Web Server with:

python3 -m http.server

You can specify a different directory and port if needed. For example:

python3 -m http.server 8000 --directory /path/to/directory

VPN (Virtual Private Network): VPNs create secure connections between clients and servers, encrypting data during transmission. They are used by companies to provide secure remote access to internal networks and can also anonymize traffic.

OpenVPN, among other options, is available for Linux servers. It offers features like encryption, tunneling, and routing. Install OpenVPN with:

sudo apt install openvpn -y

Customize OpenVPN by editing the /etc/openvpn/server.conf file. To connect to an OpenVPN server, use a command like:

sudo openvpn --config config.ovpn

These essential network services and protocols are indispensable for Linux system administrators, users, and penetration testers. They underpin secure communication, data management, and system administration across diverse network environments.

Working with Web Services

Effective communication with web servers is a fundamental aspect of working within Linux environments. Various methods exist for setting up web servers on Linux operating systems. Among the most widely used and popular web servers, alongside IIS and Nginx, Apache stands out. Apache allows for the utilization of specific modules that facilitate essential functions such as encrypting communication between the browser and web server (mod_ssl), functioning as a proxy server (mod_proxy), and executing complex manipulations of HTTP header data (mod_headers) and URLs (mod_rewrite).

Apache also offers the capability to create dynamic web pages using server-side scripting languages like PHP, Perl, or Ruby. Additional languages, such as Python, JavaScript, Lua, and .NET, can also serve this purpose. Installing the Apache web server is as straightforward as executing the following command:

apt install apache2 -y

Once installed, you can access the default page by navigating to http://localhost in your web browser. This page confirms the correct functioning of the web server.

cURL: cURL is a versatile tool that facilitates file transfers through the shell, supporting protocols like HTTP, HTTPS, FTP, SFTP, FTPS, and SCP. It empowers users to control and remotely test websites, enabling the examination of both the content of remote servers and individual requests between clients and servers. cURL is typically pre-installed on most Linux systems, making it a valuable tool for simplifying various processes.

You can use cURL to access web content like this:

curl http://localhost

By analyzing the source code of the website retrieved through cURL, you can gather valuable information.

Wget: Wget, an alternative to cURL, serves as a robust download manager and allows you to download files from FTP or HTTP servers directly from the terminal. Unlike cURL, Wget downloads and stores website content locally. Here’s an example:

wget http://localhost

Python 3: Python 3 presents another option for data transfer. The directory in which the command is executed becomes the web server’s root. For example, if you’re in a directory containing a “readme.html” file and want to start a Python 3 web server, use the following command:

python3 -m http.server

This will start the server on port 8000. You can access the content hosted by the Python 3 web server through a web browser.

Find a way to start a simple HTTP server inside Pwnbox or your local VM using “npm”. Submit the command that starts the web server on port 8080 (use the short argument to specify the port number).

http-server -p 8080

Find a way to start a simple HTTP server inside Pwnbox or your local VM using “php”. Submit the command that starts the web server on the localhost (127.0.0.1) on port 8080.

php -S 127.0.0.1:8080

Backup and Restore

Linux systems provide a range of efficient and secure software tools for data backup and restoration, ensuring data protection and accessibility. When performing data backup on an Ubuntu system, you can employ the following tools:

  1. Rsync: Rsync is an open-source tool designed for quick and secure file and folder backup to remote locations. It excels in transferring only the modified parts of files, making it suitable for large data transfers over networks. Rsync can be used for both local and remote backups.
  2. Duplicity: Duplicity is a command-line backup tool for Ubuntu that offers comprehensive data protection and secure backups. It builds on the rsync delta-transfer mechanism and supports encryption of backup copies. Duplicity allows storing backups on remote storage media like FTP servers or cloud storage services such as Amazon S3.
  3. Deja Dup: Deja Dup is a user-friendly graphical backup tool for Ubuntu, simplifying the backup process. It provides an intuitive interface for creating backups on local or remote storage media. Deja Dup uses Duplicity as its backend and supports data encryption.

To ensure the security and integrity of backups, encryption is recommended. Encryption safeguards sensitive data from unauthorized access. Ubuntu systems offer various tools like GnuPG, eCryptfs, and LUKS for encrypting backups.
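
For example, GnuPG can symmetrically encrypt a backup archive (the file name is illustrative); the result, backup.tar.gz.gpg, can later be decrypted with gpg -d:

gpg -c backup.tar.gz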

To install Rsync on Ubuntu, use the apt package manager:

sudo apt install rsync -y

Once installed, you can use the following command to back up an entire directory using Rsync:

rsync -av /path/to/mydirectory user@backup_server:/path/to/backup/directory

This command copies the specified directory to a remote host, preserving original file attributes, such as permissions and timestamps, and providing a verbose output of the process.

Additional options can be included for customization, such as compression and incremental backups:

rsync -avz --backup --backup-dir=/path/to/backup/folder --delete /path/to/mydirectory user@backup_server:/path/to/backup/directory

This command backs up the directory, enables compression, creates incremental backups, and removes files from the remote host that are no longer present in the source directory.

To restore a directory from the backup server to the local system, use the following command:

rsync -av user@remote_host:/path/to/backup/directory /path/to/mydirectory

For secure file transfers between your local host and the backup server, you can combine Rsync with SSH:

rsync -avz -e ssh /path/to/mydirectory user@backup_server:/path/to/backup/directory

This command ensures encrypted data transfer, enhancing confidentiality and integrity.

To automate synchronization using Rsync, you can create a script and use cron jobs. For example, create a script named RSYNC_Backup.sh:

#!/bin/bash
rsync -avz -e ssh /path/to/mydirectory user@backup_server:/path/to/backup/directory

Make the script executable:

chmod +x RSYNC_Backup.sh

Set up a cron job to run the script at regular intervals:

0 * * * * /path/to/RSYNC_Backup.sh

This cron job executes the script hourly, ensuring automatic synchronization of data between your local directory and the remote host. Adjust the timing to suit your needs.

File System Management

Managing the file system on Linux is a multifaceted process involving the organization and maintenance of data stored on storage devices. Linux supports a wide array of file systems, such as ext2, ext3, ext4, XFS, Btrfs, NTFS, and more, each with unique features. The choice of file system depends on specific application or user requirements. For instance, ext2 suffices for basic file management, while Btrfs excels in data integrity and snapshots. NTFS is ideal for Windows compatibility. Prior to selecting a file system, it’s crucial to analyze the application or user’s needs thoroughly.

The Linux file system derives from the Unix file system, characterized by a hierarchical structure. At its apex lies the inode table, the bedrock of the entire file system. Inodes contain metadata about files and directories, including permissions, size, type, owner, etc. The inode table serves as a database of information about all files and directories on a Linux system, enabling rapid file access and management. Files can be stored in two ways:

  1. Regular Files: These are the most common type of file and can be stored anywhere in the directory hierarchy.
  2. Directories: These are used to group collections of files. When a file resides in a directory, that directory becomes the parent directory of the file.

Additionally, Linux supports symbolic links, which are references to other files or directories, aiding quick access to files in different locations. Each file and directory requires permission management to control access — permissions dictate who can read, write, and execute the file, and are defined separately for the owner, the owning group, and all other users.

Managing Disks & Drives on Linux involves handling physical storage devices like hard drives, SSDs, and removable storage. The primary tool for disk management is fdisk, which facilitates partition creation, deletion, and management. It also provides information about partition tables, including size and type. Disk partitioning on Linux involves segmenting physical storage into logical partitions, each with its file system (e.g., ext4, NTFS, FAT32). Common partitioning tools include fdisk, gpart, and GParted.
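
For example, listing all disks and their partition tables (requires root):

sudo fdisk -l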

To manage these partitions, they must be mounted, attaching them to specific directories, making them accessible in the file system hierarchy. The mount tool serves this purpose, with default file systems defined in the /etc/fstab file.

Linux’s fstab File:

UUID=3d6a020d-...SNIP...-9e085e9c927a /              btrfs   subvol=@,defaults,noatime,nodiratime,nodatacow,space_cache,autodefrag 0 1
UUID=3d6a020d-...SNIP...-9e085e9c927a /home btrfs subvol=@home,defaults,noatime,nodiratime,nodatacow,space_cache,autodefrag 0 2
UUID=21f7eb94-...SNIP...-d4f58f94e141 swap swap defaults,noatime 0 0

To view mounted file systems, use the mount command without arguments. It lists currently mounted file systems along with their device names, file system types, mount points, and options.

Unmounting involves detaching a file system from its mount point using the umount command. Adequate permissions and the absence of running processes using the file system are prerequisites for unmounting. To prevent a file system from being mounted automatically at boot, add the noauto option to its /etc/fstab entry.
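
For example, assuming a hypothetical mount point:

sudo umount /mnt/usb-drive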

Swap space is vital for Linux memory management, acting as an extension of physical memory when RAM is insufficient. The kernel transfers inactive memory pages to swap when physical memory is depleted, a process known as swapping. Swap space can be created during installation or afterward using mkswap and swapon. Proper placement, separate from the main file system, prevents fragmentation. Encryption of swap space is recommended due to potential temporary storage of sensitive data. Swap space also facilitates hibernation, a power management feature where the system saves its state to disk and powers off, later resuming from the swap space.
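
A minimal sketch, assuming a hypothetical spare partition /dev/sdb1:

sudo mkswap /dev/sdb1
sudo swapon /dev/sdb1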

How many disks exist in our Pwnbox? (Format: 0)
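
Block devices of type “disk” can be counted with lsblk:

lsblk | grep disk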

Containerization

Containerization is the process of encapsulating and executing applications within isolated environments, such as containers, virtual machines, or serverless platforms. Technologies like Docker, Docker Compose, and Linux Containers enable this process on Linux systems, facilitating the rapid, secure, and efficient creation, deployment, and management of applications. These tools provide flexibility in configuring applications to specific requirements. Containers are lightweight, ideal for concurrent execution of multiple applications, offering scalability and portability. Containerization enhances application management, deployment, and security.

Container security is a vital aspect of containerization, offering a secure environment for running applications by isolating them from the host system and other containers. This isolation safeguards the host system from potential malicious activities within the container, bolstering application security. Containers’ lightweight nature makes them less susceptible to compromise than traditional virtual machines. Moreover, their easy configurability ensures secure application execution.

Apart from security benefits, containers offer numerous advantages, simplifying application deployment and management while efficiently supporting multiple concurrent applications. However, potential privilege escalation vulnerabilities exist.

Docker: Docker is an open-source platform automating application deployment via self-contained units called containers. It employs a layered filesystem and resource isolation, promoting flexibility and portability. Docker streamlines container creation, deployment, and management through a robust set of tools.

Installing Docker-Engine: To install Docker, a straightforward process on Ubuntu includes:

#!/bin/bash
# Preparation
sudo apt update -y
sudo apt install ca-certificates curl gnupg lsb-release -y
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt update -y
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
# Add user htb-student to the Docker group
sudo usermod -aG docker htb-student
echo '[!] You need to log out and log back in for the group changes to take effect.'
# Test Docker installation
docker run hello-world

Creating a Docker Image: Creating a Docker image is achieved through a Dockerfile, which defines container instructions. Example Dockerfile:

# Use the latest Ubuntu 22.04 LTS as the base image
FROM ubuntu:22.04
# Update the package repository and install the required packages
RUN apt-get update && \
apt-get install -y \
apache2 \
openssh-server \
&& \
rm -rf /var/lib/apt/lists/*
# Create a new user called "docker-user"
RUN useradd -m docker-user && \
echo "docker-user:password" | chpasswd
# Permissions and ports
RUN chown -R docker-user:docker-user /var/www/html && \
chown -R docker-user:docker-user /var/run/apache2 && \
chown -R docker-user:docker-user /var/log/apache2 && \
chown -R docker-user:docker-user /var/lock/apache2 && \
usermod -aG sudo docker-user && \
echo "docker-user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
# Expose ports
EXPOSE 22 80
# Start services
CMD service ssh start && /usr/sbin/apache2ctl -D FOREGROUND

Building a Docker Image: Build a Docker image using:

docker build -t fs_docker .

(Image tags must be lowercase, and the trailing dot tells Docker to use the current directory as the build context.)

Running a Docker Container: Run a Docker container:

docker run -p 8022:22 -p 8080:80 -d fs_docker

Docker Management: Docker offers various management commands, such as:

  • docker ps: List running containers.
  • docker stop: Stop a container.
  • docker start: Start a stopped container.
  • docker restart: Restart a running container.
  • docker rm: Remove a container.
  • docker rmi: Remove a Docker image.
  • docker logs: View container logs.

Remember, these commands can be customized with options for specific needs.

Linux Containers (LXC): Linux Containers (LXC) provide lightweight virtualization, isolating multiple Linux systems on a single host. It uses Linux kernel features like cgroups and namespaces. LXC offers a set of tools and APIs for container management, combining the advantages of LXC with Docker for a comprehensive containerization experience.

LXC and Docker have differences in approach, image building, portability, ease of use, and security. LXC is lightweight but may require manual image creation, while Docker streamlines image creation and offers more user-friendliness.

Securing LXC: To secure LXC, implement measures like restricting access, limiting resources (CPU, memory, disk space), isolating containers from the host, enforcing mandatory access control, and keeping containers updated.

Resource limits can be set using cgroups in LXC configuration files. For example:

lxc.cgroup.cpu.shares = 512
lxc.cgroup.memory.limit_in_bytes = 512M

Ensure these settings take effect by restarting the LXC service:

sudo systemctl restart lxc.service

Namespaces in LXC: LXC uses namespaces to provide isolation for processes, networks, and file systems. Namespaces ensure processes, network interfaces, routing tables, and firewall rules are isolated from the host system. Containers have unique process IDs (pids), network interfaces (net), and file systems (mnt).

Practice exercises to enhance LXC skills include container creation, network configuration, custom image creation, resource limit setup, using LXC management commands, configuring SSH access, enabling persistence, and testing software in controlled environments.

Containerization and LXC offer powerful tools for application management, security, and testing in a controlled, efficient manner.

Network Configuration

As a penetration tester, one of the essential skills you need is the ability to configure and manage network settings on Linux systems. This skill is invaluable when setting up testing environments, controlling network traffic, or identifying and exploiting vulnerabilities. A solid grasp of Linux’s network configuration options allows you to tailor your testing approach to meet specific requirements and optimize results.

One of the primary tasks in network configuration is configuring network interfaces. This encompasses tasks such as assigning IP addresses, configuring network devices like routers and switches, and setting up network protocols. It’s crucial to have a comprehensive understanding of network protocols and their specific use cases, including TCP/IP, DNS, DHCP, and FTP. Additionally, familiarity with different network interfaces, including wireless and wired connections, and the ability to troubleshoot connectivity issues are essential skills.

Another critical aspect of network configuration is network access control (NAC). As a penetration tester, you should be well-versed in the importance of NAC for network security and the various NAC technologies available, including:

  1. Discretionary access control (DAC)
  2. Mandatory access control (MAC)
  3. Role-based access control (RBAC)

Understanding these NAC technologies and their enforcement mechanisms is essential. You should know how to configure Linux network devices for NAC, which includes setting up SELinux policies, configuring AppArmor profiles, and using TCP wrappers to control access.

Monitoring network traffic is also a crucial part of network configuration. Therefore, you should be proficient in configuring network monitoring and logging and analyzing network traffic for security purposes. Tools like syslog, rsyslog, ss, lsof, and the ELK stack can be instrumental in monitoring network traffic and identifying security issues.

Furthermore, a strong knowledge of network troubleshooting tools is crucial for identifying vulnerabilities and interacting with other networks and hosts. In addition to the tools mentioned, tools like ping, nslookup, and nmap can be used to diagnose and enumerate networks. These tools provide valuable insights into network traffic, packet loss, latency, DNS resolution, and more. Knowing how to use these tools effectively enables you to quickly pinpoint the root cause of network problems and take the necessary steps to resolve them.

When it comes to configuring network interfaces on Ubuntu, you have two commonly used commands: ifconfig and ip. These commands allow you to view and configure your system’s network interfaces. Whether you need to make changes to your existing network setup or check the status of your interfaces, these commands simplify the process. In today’s interconnected world, having a deep understanding of network interfaces is crucial due to rapid technological advancements and increasing reliance on digital communication.

To gather information about network interfaces, including IP addresses, netmasks, and status, you can use the ifconfig command. This command provides a clear and organized view of available network interfaces and their attributes. While ifconfig is widely used in many Linux distributions, it’s important to note that it has been deprecated in newer versions of Linux and replaced by the more advanced ip command. However, ifconfig remains a reliable tool for network management.

Activating network interfaces is a common task, and you can use either the ifconfig or ip commands for this purpose. These commands allow you to modify and activate settings for specific interfaces like eth0. You can adjust network settings to suit your needs by using the appropriate syntax and specifying the interface name.

Assigning an IP address to a network interface is essential when setting up a network connection. An IP address serves as a unique identifier for the interface, enabling communication between devices on the network. To assign an IP address, you can use the ifconfig command by specifying the interface name and IP address as arguments.

Setting the netmask for a network interface is also important, and you can achieve this by using the appropriate ifconfig command with the interface name and netmask as arguments.
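
For example, with a hypothetical interface and addressing scheme:

sudo ifconfig eth0 192.168.1.2 netmask 255.255.255.0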

Configuring the default gateway for a network interface is crucial for routing traffic to destinations outside the local network. You can use the route command with the add option to set the default gateway, specifying the gateway’s IP address and the network interface to which it should apply.
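
For example, continuing the hypothetical addressing above:

sudo route add default gw 192.168.1.1 eth0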

Proper DNS server configuration is vital for network functionality, as DNS servers translate domain names into IP addresses. Without correct DNS settings, devices may experience connectivity issues and be unable to access certain online resources. You can update the /etc/resolv.conf file with the appropriate DNS server information to ensure proper DNS resolution.
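
For example, /etc/resolv.conf entries pointing at Google’s public resolvers:

nameserver 8.8.8.8
nameserver 8.8.4.4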

After making changes to the network configuration, it’s essential to save these changes to persist across reboots. You can edit the /etc/network/interfaces file to define network interfaces for Linux-based operating systems and ensure that the changes are saved.
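
A sketch of a static stanza in /etc/network/interfaces, reusing the hypothetical values above (the dns-nameservers line requires the resolvconf package):

auto eth0
iface eth0 inet static
  address 192.168.1.2
  netmask 255.255.255.0
  gateway 192.168.1.1
  dns-nameservers 8.8.8.8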

Remote Desktop Protocols in Linux

Remote desktop protocols facilitate graphical remote access to systems across various operating systems such as Windows, Linux, and macOS. These protocols serve multiple purposes for administrators, including troubleshooting, software or system upgrades, and remote systems administration. To administer a remote system, administrators select the appropriate protocol and establish a connection. Different protocols may be used for specific tasks, like installing applications on remote systems. Two common protocols for this purpose are RDP (used in Windows) and VNC (used in Linux).

XServer: The XServer is the user-side component of the X Window System network protocol (X11 / X), a system comprising protocols and applications that enable the management of application windows in graphical user interfaces. While X11 is primarily used on Unix systems, X servers are also available for other operating systems. Today, an X server is typically included in desktop installations of Ubuntu and its derivatives, requiring no separate installation.

When a Linux desktop is initiated, the graphical user interface communicates with the operating system through an X server, even if the computer is not on a network. The key feature of the X protocol is network transparency. It primarily employs TCP/IP for transport but can also use Unix sockets. X servers listen on TCP ports starting at 6000: display :n corresponds to port 6000+n, so when a new desktop session starts, TCP port 6000 is opened for the initial X display :0. These ports enable the server to host applications and provide services to clients, including remote access, allowing users to reach applications and data remotely. However, it’s important to note that X11 lacks encryption, which can be mitigated by tunneling through the SSH protocol.

To enable X11 forwarding in the SSH configuration file (/etc/ssh/sshd_config) on the server, set the “X11Forwarding” option to “yes.”

X11Forwarding yes

With this configuration, you can start applications from the client using the following command:

ssh -X htb-student@10.129.23.11 /usr/bin/firefox

X11 Security: X11 is inherently insecure due to its lack of encryption. An open X server allows anyone on the network to read its window contents without detection. This makes it possible to capture keystrokes, take screenshots, move the mouse cursor, and send keystrokes remotely. Vulnerabilities in XServer have been exploited in the past to execute arbitrary code with user privileges on UNIX and Linux systems.

XDMCP: The X Display Manager Control Protocol (XDMCP) is employed by the X Display Manager for communication via UDP port 177 between X terminals and Unix/Linux computers. It is used to manage remote X Window sessions on other machines and is sometimes utilized by Linux system administrators to provide remote desktop access. However, XDMCP is considered insecure and should not be used in highly secure environments, as it can be susceptible to man-in-the-middle attacks.

VNC: Virtual Network Computing (VNC) is a remote desktop sharing system based on the RFB protocol. VNC allows users to control a computer remotely, viewing and interacting with its desktop environment over a network connection. VNC is generally considered secure, employing encryption and requiring authentication before access is granted. It is widely used for tasks such as troubleshooting, server maintenance, accessing applications, and screen sharing on Linux hosts.

VNC servers traditionally listen on TCP port 5900, with additional displays offered on ports like 5901, 5902, and so on. Various VNC server and viewer programs are available for different operating systems, with UltraVNC and RealVNC being popular choices for their encryption and security features.

To set up a VNC server on Linux, you can use tools like TigerVNC, and you may need to install additional packages, such as XFCE4 desktop manager, for a stable connection. After installation, configuration involves creating xstartup and config files and granting execute permissions to xstartup. The VNC server can then be started, and sessions can be listed to identify the appropriate display.
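
For example, with TigerVNC installed, a typical sequence to start a server and list its sessions:

vncserver
vncserver -list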

To enhance security, an SSH tunnel can be established for encrypted communication. This can be done by creating an SSH tunnel and connecting to the VNC server through the tunnel using a VNC viewer like xtightvncviewer.
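
A sketch of that workflow, assuming a hypothetical target IP and display :1 (VNC port 5901):

ssh -L 5901:127.0.0.1:5901 -N -f -l htb-student 10.129.14.130
xtightvncviewer localhost:5901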

Please note that security measures should be taken seriously when using remote desktop protocols, and encryption and authentication should be implemented wherever possible to safeguard your systems and data.

Linux Security

Inherent vulnerabilities exist within all computer systems, although the degree of risk varies. For example, an internet-facing web server hosting a complex web application poses a far higher security risk than an isolated host. Linux systems are also less plagued by the viruses that affect Windows and present a smaller attack surface than Active Directory domain-joined hosts. Nonetheless, fundamental security practices are essential for safeguarding any Linux system.

One of the most critical security measures for Linux operating systems is ensuring that both the OS and installed packages remain up to date. This can be accomplished with a command such as:

sudo apt update && sudo apt dist-upgrade

Inadequate network-level firewall rules can be supplemented with Linux firewall and/or iptables configurations to control inbound and outbound traffic effectively.

If SSH access is enabled on the server, the configuration should prohibit password-based logins and disallow SSH access for the root user. Additionally, it’s crucial to minimize root user interactions and implement access control rigorously. The principle of least privilege should guide user access: instead of granting full sudo rights, specific commands requiring root access should be defined in the sudoers configuration. Fail2ban is a valuable tool for protecting against brute-force attacks by monitoring failed login attempts and applying predefined rules.
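
To make this concrete, here is a hedged sketch of the relevant settings; the username, sudoers entry, and permitted command are illustrative only:

# /etc/ssh/sshd_config (excerpt): disable root login and password authentication
PermitRootLogin no
PasswordAuthentication no

# /etc/sudoers (edit with visudo): grant one specific command instead of full sudo
htb-student ALL=(root) /usr/bin/systemctl restart ssh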

Regular system auditing is necessary to identify potential issues that could lead to privilege escalation. This includes monitoring for outdated kernels, managing user permissions, securing against world-writable files, reviewing cron job configurations, and ensuring that services are correctly configured. Some kernel versions may require manual updates.
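
A few commands that support this kind of audit, as a non-exhaustive sketch:

# Check the running kernel version against known-vulnerable releases
uname -r
# Find world-writable files and SUID binaries
find / -type f -perm -o+w 2>/dev/null
find / -type f -perm -4000 2>/dev/null
# Review system-wide cron jobs
cat /etc/crontab
ls -la /etc/cron.d/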

Enhancing Linux system security can involve the use of Security-Enhanced Linux (SELinux) or AppArmor. These kernel security modules enforce access control policies by assigning labels to every process, file, directory, and system object. Policy rules are then established to govern access between labeled processes and objects, providing fine-grained control over user and application access to resources.
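
Whether either module is active can be checked with its status tools, assuming the framework is installed on the host:

# AppArmor (common on Ubuntu/Debian)
sudo aa-status
# SELinux (common on RHEL/Fedora derivatives)
getenforce
sestatus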

Other security tools and practices, such as Snort, chkrootkit, rkhunter, Lynis, and more, can contribute to Linux security. Additionally, certain security settings should be implemented:

  1. Removal or disabling of unnecessary services and software.
  2. Elimination of services relying on unencrypted authentication methods.
  3. Enabling Network Time Protocol (NTP) and ensuring Syslog is operational.
  4. Individual user accounts for each user.
  5. Enforcement of strong password usage.
  6. Implementation of password aging and restriction of previous passwords (illustrated in the sketch after this list).
  7. Account locking after login failures.
  8. Disabling of unwanted SUID/SGID binaries.
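
As a sketch of items 6 and 8, where the username, aging values, and binary path are illustrative assumptions:

# Enforce password aging: minimum 7 days, maximum 60 days, 14-day warning
sudo chage -m 7 -M 60 -W 14 htb-student
# Review the resulting policy
sudo chage -l htb-student
# Remove the SUID bit from a binary that does not need it (hypothetical path)
sudo chmod u-s /usr/local/bin/legacy-tool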

These security practices form the foundation of Linux system security, but it’s important to recognize that security is an ongoing process. Administrators must continuously adapt and improve their security measures based on their familiarity with the system and ongoing training.

TCP Wrappers TCP wrappers are a security mechanism used in Linux systems to control which services can access the system. System administrators can restrict access to services based on the hostname or IP address of the requesting user. When a client attempts to connect to a service, the system checks the rules defined in TCP wrappers configuration files to determine if the client’s IP address meets the criteria. If the criteria are met, access to the service is granted; otherwise, access is denied, adding an extra layer of security.

TCP wrappers use two configuration files:

  1. /etc/hosts.allow: Specifies which services and hosts are allowed access to the system.
  2. /etc/hosts.deny: Specifies which services and hosts are denied access.

Administrators can define specific rules in these files to control access to services.

Example /etc/hosts.allow:

# Allow access to SSH from the local network
sshd : 10.129.14.0/24
# Allow access to FTP from a specific host
ftpd : 10.129.14.10
# Allow access to Telnet from any host in the inlanefreight.local domain
telnetd : .inlanefreight.local

Example /etc/hosts.deny:

# Deny access to all services from any host in the inlanefreight.com domain
ALL : .inlanefreight.com
# Deny access to SSH from a specific host
sshd : 10.129.22.22
# Deny access to FTP from hosts with IP addresses in the range of 10.129.22.0 to 10.129.22.255
ftpd : 10.129.22.0/24

The order of rules in these files matters, as the first rule that matches the requested service and host will be applied. It’s important to note that TCP wrappers are not a substitute for a firewall, as they can only control access to services, not ports.

Firewall Setup

Firewalls serve as a critical security measure by controlling and monitoring network traffic across different network segments, ensuring the protection of computer networks against unauthorized access, malicious activities, and other security threats. Linux, a widely-used operating system for servers and network devices, offers built-in firewall capabilities that enable the management of network traffic. Essentially, these firewalls filter incoming and outgoing data according to predefined rules, protocols, ports, and other criteria, preventing unauthorized access and mitigating security risks. The primary purpose of firewall implementation may vary depending on an organization’s specific requirements, which can include preserving the confidentiality, integrity, and availability of network resources.

A notable milestone in the history of Linux firewalls is the introduction of the iptables tool, replacing earlier tools like ipchains and ipfwadm. iptables was first introduced in the Linux 2.4 kernel in 2000, offering a flexible and efficient approach to filtering network traffic. iptables rapidly became the standard firewall solution for Linux systems and gained widespread adoption among organizations and users.

iptables introduced a straightforward yet powerful command-line interface for configuring firewall rules, enabling the filtration of traffic based on diverse criteria such as IP addresses, ports, protocols, and more. iptables was designed for high customizability, allowing the creation of complex firewall rule sets capable of safeguarding against various security threats, including denial-of-service (DoS) attacks, port scans, and network intrusion attempts.

In Linux, firewall functionality is typically implemented through the Netfilter framework, an integral part of the kernel. Netfilter offers hooks to intercept and modify network traffic as it traverses the system, with iptables serving as the common tool for configuring firewall rules on Linux systems.

Iptables The iptables utility furnishes a versatile set of rules for filtering network traffic based on criteria such as source and destination IP addresses, port numbers, protocols, and more. Alternative solutions like nftables, ufw, and firewalld are also available. Nftables, in particular, provides a modern syntax and improved performance over iptables, but transitioning to nftables may require some effort due to syntax differences. UFW, short for “Uncomplicated Firewall,” offers a user-friendly interface for firewall rule configuration, built atop the iptables framework, simplifying rule management. On the other hand, FirewallD offers a dynamic and adaptable firewall solution, supporting complex configurations, custom firewall zones, and services. It comprises several components collaborating to provide a potent and versatile firewall solution.
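
For comparison with raw iptables, here is a minimal UFW example; the rule set is illustrative rather than a recommended policy:

# Enable UFW, allow inbound SSH, and review the active rules
sudo ufw enable
sudo ufw allow 22/tcp
sudo ufw status verbose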

Iptables primarily consists of the following key components:

  1. Tables: Tables categorize and organize firewall rules, each responsible for specific tasks.
  2. Chains: Chains group firewall rules that apply to specific types of network traffic.
  3. Rules: Rules specify criteria for filtering network traffic and actions for matching packets.
  4. Matches: Matches define specific criteria for network traffic filtering, such as source or destination IP addresses, ports, protocols, and more.
  5. Targets: Targets dictate actions for packets that match specific rules, like accepting, dropping, rejecting, or modifying packets.

Tables Understanding how tables function in iptables is crucial when working with Linux firewalls. Tables in iptables are used to classify and organize firewall rules based on the type of traffic they handle. Each table serves a specific set of functions.

  1. filter: Used for filtering network traffic based on IP addresses, ports, and protocols. Built-in chains: INPUT, OUTPUT, FORWARD.
  2. nat: Used for modifying the source or destination IP addresses of network packets. Built-in chains: PREROUTING, POSTROUTING.
  3. mangle: Used for modifying header fields of network packets. Built-in chains: PREROUTING, OUTPUT, INPUT, FORWARD, POSTROUTING.

Additionally, iptables provides a raw table for configuring special packet processing options. The raw table encompasses two built-in chains: PREROUTING and OUTPUT.

Chains Chains in iptables organize rules defining how network traffic is filtered or altered. Two types of chains exist:

  1. Built-in chains: These chains are predefined and automatically generated when a table is created. Each table has a distinct set of built-in chains. For example, the filter table incorporates three built-in chains — INPUT, OUTPUT, and FORWARD — which respectively handle incoming, outgoing, and forwarded network traffic. Similarly, the nat table comprises PREROUTING and POSTROUTING chains, altering the destination and source IP addresses of incoming and outgoing packets.
  2. User-defined chains: These chains enable the grouping of firewall rules based on specific criteria, simplifying rule management. User-defined chains can be added to any of the primary tables. For instance, rules for multiple web servers with similar firewall requirements can be grouped within a user-defined chain, as sketched below.
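
A hedged sketch of a user-defined chain; the chain name and rules are illustrative:

# Create a user-defined chain and route inbound HTTP traffic through it
sudo iptables -N WEB_TRAFFIC
sudo iptables -A INPUT -p tcp --dport 80 -j WEB_TRAFFIC
sudo iptables -A WEB_TRAFFIC -j ACCEPT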

Rules and Targets Iptables rules define criteria for filtering network traffic and specify actions for packets matching those criteria. Rules are added to chains using the ‘-A’ option followed by the chain name, and they can be modified or deleted using various other options.

Each rule comprises criteria or matches and a target indicating the action for packets that meet those criteria. Matches allow the selection of specific network traffic characteristics like source or destination IP addresses, protocol, source or destination port numbers, and more. Targets, on the other hand, determine actions for packets matching specific rules. Common targets employed in iptables rules include:

  1. ACCEPT: Allows the packet to pass through the firewall.
  2. DROP: Discards the packet, blocking it from traversing the firewall.
  3. REJECT: Discards the packet and sends an error message to the source address, indicating that the packet was blocked.
  4. LOG: Records packet information in the system log.
  5. SNAT: Modifies the source IP address of the packet, typically used for Network Address Translation (NAT).
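
Putting chains, matches, and targets together, here is a brief sketch; the interface name and drop policy are assumptions for illustration, not a hardened configuration:

# Accept inbound SSH
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Log, then drop, all other inbound traffic arriving on eth0
sudo iptables -A INPUT -i eth0 -j LOG --log-prefix "IPT-DROP: "
sudo iptables -A INPUT -i eth0 -j DROP
# Review the resulting chain with rule numbers
sudo iptables -L INPUT -n --line-numbers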

System Logs

System logs in the Linux environment consist of a collection of files that store information about the system’s activities and operations. These logs serve a critical role in monitoring and diagnosing system performance, application activities, and security-related incidents. Linux system logs can offer valuable insights into potential security vulnerabilities and weaknesses within the system. Analyzing these logs on target systems allows us to gain a deeper understanding of system behavior, network traffic, and user actions. This information can be instrumental in detecting unusual or suspicious activities, including unauthorized logins, attempted attacks, exposure of plain-text credentials, or unusual file access, all of which could indicate a security breach.

As penetration testers, we also rely on system logs to assess the effectiveness of our security testing activities. By reviewing logs after conducting security tests, we can determine if our actions triggered any security events, such as intrusion detection alerts or system warnings. This post-assessment log analysis helps us refine our testing strategies and enhance overall system security.

Configuring system logs correctly is crucial to ensure the security of a Linux system. This entails setting appropriate log levels, configuring log rotation to manage log file size, and securing logs to prevent unauthorized access. Regularly reviewing and analyzing logs is essential to identify potential security risks and promptly respond to security incidents. Linux systems maintain various types of system logs, including:

  1. Kernel Logs: These logs contain information about the Linux kernel, including hardware drivers, system calls, and kernel events. They are typically stored in the /var/log/kern.log file. For instance, kernel logs may reveal outdated or vulnerable drivers that attackers could exploit. Additionally, they provide insights into system crashes, resource limitations, and events that may lead to security issues or denial-of-service incidents.
  2. System Logs: These logs capture system-level events, such as service startups and shutdowns, login attempts, and system reboots. They are stored in the /var/log/syslog file. By analyzing system logs, we can detect unauthorized access attempts, anomalous activity, and other system-level events that might indicate vulnerabilities or security threats.
  3. Authentication Logs: Authentication logs record user authentication attempts, including both successful and failed attempts. They are stored in the /var/log/auth.log file. These logs are specifically focused on user authentication and are invaluable for identifying security threats and potential compromises.
  4. Application Logs: Application logs contain details about specific application activities on the system. Each application may have its own log file, such as /var/log/apache2/error.log for the Apache web server or /var/log/mysql/error.log for the MySQL database server. Analyzing application logs is crucial when targeting specific applications, as they can reveal vulnerabilities or misconfigurations that may be exploited by attackers.
  5. Security Logs: Security logs record events related to security applications or tools. These logs are often found in various files, depending on the specific security application in use. For example, Fail2ban logs failed login attempts in /var/log/fail2ban.log, while the UFW firewall logs activities in /var/log/ufw.log. Security logs help in monitoring and identifying potential security incidents or attack patterns.

It is important to be aware of the default log file locations on Linux systems, as this knowledge is beneficial when conducting security assessments or penetration testing. By understanding where security-related events are logged, testers can efficiently analyze log data to uncover potential security issues and vulnerabilities.

Accessing and analyzing system logs can be accomplished using various tools, including built-in log viewers in Linux desktop environments and command-line utilities like tail, grep, and sed. Effective log analysis enables the detection and resolution of system issues and aids in the identification of security breaches and other noteworthy events.
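
A couple of quick, hedged examples using the default Debian/Ubuntu log paths mentioned above:

# Follow authentication events in real time
sudo tail -f /var/log/auth.log
# Count failed SSH password attempts
sudo grep -c "Failed password" /var/log/auth.log
# Show the five most recent failures
sudo grep "Failed password" /var/log/auth.log | tail -n 5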

Solaris

Solaris, initially developed by Sun Microsystems in the 1990s and later acquired by Oracle Corporation, is a Unix-based operating system renowned for its robustness, scalability, and compatibility with high-end hardware and software solutions. This operating system is widely adopted in enterprise settings, particularly for critical tasks such as database management, cloud computing, and virtualization. Notably, Solaris includes an integrated hypervisor known as Oracle VM Server for SPARC, enabling multiple virtual machines to operate on a single physical server. Solaris excels in managing vast data volumes while delivering reliable and secure services. Consequently, it’s a prevalent choice in enterprise environments that prioritize security, performance, and stability.

The primary objective of Solaris is to furnish a dependable, secure, and expandable platform for enterprise-level computing. Solaris incorporates built-in functionalities that ensure high availability, fault tolerance, and efficient system management, making it particularly suitable for mission-critical applications. It enjoys extensive adoption across sectors such as finance, banking, and government, where security, reliability, and performance are paramount. Moreover, Solaris finds application in large-scale data centers, cloud computing environments, and virtualization platforms. Recognized industry leaders like Amazon, IBM, and Dell incorporate Solaris into their products and services, underscoring its significance in the IT industry.

Comparing Solaris and Linux Distributions: Solaris and Linux distributions exhibit significant distinctions, with one of the key disparities being that Solaris is a proprietary operating system owned by Oracle Corporation, while most Linux distributions are open source, granting public access to their source code. This divergence extends to various aspects, such as the utilization of the advanced Zettabyte File System (ZFS) by Solaris, featuring data compression, snapshots, and scalability capabilities. Additionally, Solaris employs a Service Management Facility (SMF) for service management, enhancing the reliability and availability of system services.

Filesystem: Both Solaris and Linux follow the Unix filesystem model, but they differ in permission-related command syntax. For example, Solaris requires a hyphen before the permission value in commands such as find / -perm -4000.

Process Management: Solaris employs the pfiles command for process file analysis, whereas Linux uses the lsof command for a similar purpose.

Package Management: Ubuntu, a Linux distribution, uses the apt-get command for package management, while Solaris relies on pkgadd.

NFS Configuration: Solaris has its own NFS implementation, utilizing the share and mount commands to manage NFS shares. Ubuntu, on the other hand, uses standard NFS commands.

Permission Management: Both operating systems use the chmod command to manage file permissions, but the syntax differs slightly between them.

System Call Tracing: Solaris utilizes the truss command for tracing system calls, whereas Ubuntu employs strace for similar purposes. Truss can additionally trace the signals sent to a process and follow its child processes.
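
The contrasts above can be summarized with a few paired commands; the PIDs, package names, and paths are illustrative assumptions:

# Open files of a running process
lsof -p 1234                       # Linux
pfiles 1234                        # Solaris
# Installing a package
sudo apt-get install wget          # Ubuntu
pkgadd -d /path/to/SUNWwget.pkg    # Solaris
# Tracing system calls of a running process
strace -p 1234                     # Linux
truss -p 1234                      # Solaris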
