CompTIA XK0-005 Linux+ Exam Dumps and Practice Test Questions, Set 2 (Questions 21-40)


Question 21:

A Linux administrator needs to create a persistent firewall rule using firewalld to allow incoming HTTPS traffic on port 443. Which command will correctly add this rule so that it persists after reboot?

A) firewall-cmd --add-port=443/tcp
B) firewall-cmd --add-service=https
C) firewall-cmd --permanent --add-port=443/tcp
D) iptables -A INPUT -p tcp --dport 443 -j ACCEPT

Answer:

C

Explanation:

The correct command is firewall-cmd --permanent --add-port=443/tcp because firewalld manages firewall rules dynamically and persistently through the firewall-cmd interface. When administrators want changes to last across reboots, they must use the --permanent flag. Without it, the rule is applied only to the runtime configuration and disappears after reloading or rebooting. The question requires a persistent firewall rule that allows incoming HTTPS traffic on port 443. The syntax --add-port=443/tcp identifies the port and protocol exactly. After issuing a permanent rule, an administrator must run firewall-cmd --reload to activate persistent rules.
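
As a practical illustration, a minimal sequence for the default zone might look like this:

# Add the rule to the permanent configuration
firewall-cmd --permanent --add-port=443/tcp
# Load the permanent configuration into the running firewall
firewall-cmd --reload
# Verify that the port is now allowed
firewall-cmd --list-ports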

Option A adds the port but does not include --permanent, meaning the rule applies only to the running configuration. Without persistence, the rule will not survive a reboot. This makes it incomplete for the requirement.

Option B adds the service https but without --permanent, the rule again is only runtime. While adding a service is valid, the lack of persistence makes the rule unsuitable because the administrator explicitly wants a rule that survives reboot. Although using services instead of ports is sometimes preferred for readability, the absence of --permanent disqualifies this choice.

Option D uses iptables, which is deprecated or replaced on many modern distributions by nftables or firewalld. Even if the iptables command works, it does not create a persistent rule unless saved and restored through service configurations like iptables-save and iptables-restore. The question specifically mentions firewalld, so using iptables is not appropriate in this scenario.

Firewalld is designed to manage firewall rules dynamically through zones and services. It uses direct rules, service definitions, and port settings to manage inbound and outbound traffic. The distinction between runtime and permanent configurations is essential: runtime rules apply immediately and temporarily until reloaded, while permanent rules persist across system restarts. Therefore, when a requirement states persistence, the command must include --permanent. Because only option C does this, it is the correct answer.

Question 22:

A Linux systems engineer needs to troubleshoot boot issues and wants to list all GRUB2 menu entries configured on the system. Which command displays the currently available GRUB2 boot entries?

A) grub2-mkconfig -o /boot/grub2/grub.cfg
B) cat /etc/default/grub
C) grep menuentry /boot/grub2/grub.cfg
D) update-grub

Answer:

C

Explanation:

The correct answer is grep menuentry /boot/grub2/grub.cfg because this command directly inspects GRUB2’s main configuration file and extracts all menuentry definitions. GRUB2 stores its actual menu entries within the compiled configuration file located in /boot/grub2/grub.cfg on most RPM-based distributions (such as RHEL, CentOS, AlmaLinux, Rocky Linux, and Fedora). The menuentry lines specify the titles and boot configurations for each bootable kernel or OS entry. Using grep allows the administrator to quickly list all defined entries without manually scanning through the entire file.
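
For example, either of the following lists the entries; the awk variant is a hedged convenience that assumes titles are enclosed in single quotes, as they typically are in generated grub.cfg files:

# Show the full menuentry lines
grep menuentry /boot/grub2/grub.cfg
# Print just the top-level entry titles
awk -F"'" '/^menuentry/ {print $2}' /boot/grub2/grub.cfg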

Option A regenerates the GRUB2 configuration file rather than listing existing entries. While grub2-mkconfig -o /boot/grub2/grub.cfg compiles a new configuration based on templates and settings found in /etc/default/grub and /etc/grub.d, it does not display entries. Running it unnecessarily may overwrite custom GRUB changes, making it unsuitable for diagnostic listing.

Option B displays GRUB default settings such as timeout, theme, kernel parameters, and boot order preferences. However, /etc/default/grub does not contain specific menuentry definitions. It influences what appears in the final grub.cfg file but does not list entries directly.

Option D, update-grub, is used primarily in Debian-based systems such as Ubuntu. Although it triggers regeneration of grub.cfg, it also does not list menu entries. The command may not even exist on non-Debian systems. Even where it exists, it does not provide the requested listing of menu entry definitions.

Troubleshooting boot issues often requires verifying boot entries, especially when kernels have been removed, misconfigured, or corrupted. Administrators may need to confirm the presence of fallback kernels, rescue entries, or custom boot scripts. GRUB menu entries define which kernel and initramfs files are used during boot. Viewing them helps diagnose kernel loading errors, mis-specified root filesystems, or missing boot images. The only option that directly lists menu entries is option C.

Question 23:

A Linux administrator must expand an existing ext4 filesystem on a logical volume after increasing the LV size. The LV has already been extended using lvextend. Which command finalizes the filesystem expansion while it remains online?

A) fdisk /dev/vgdata/lvfiles
B) resize2fs /dev/vgdata/lvfiles
C) mkfs.ext4 /dev/vgdata/lvfiles
D) mount -o remount,rw /dev/vgdata/lvfiles

Answer:

B

Explanation:

The correct command is resize2fs /dev/vgdata/lvfiles because resize2fs expands an ext2/3/4 filesystem to fill the newly available space provided by the logical volume extension. Since the LV was already extended with lvextend, the filesystem must now be grown to match the new block device size. Resize2fs is capable of resizing ext4 filesystems online, meaning the administrator does not need to unmount the filesystem if it is mounted and in use, provided it is being extended rather than shrunk. This supports continuous availability and minimizes downtime.
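
A typical online-growth sequence, assuming free extents exist in the volume group, might be:

# Grow the logical volume by 10 GiB
lvextend -L +10G /dev/vgdata/lvfiles
# Grow the ext4 filesystem to fill the new space (safe while mounted)
resize2fs /dev/vgdata/lvfiles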

Option A, fdisk, is used for partition management, not for resizing filesystems on logical volumes. LVM abstracts away partitions, and fdisk cannot operate on logical volumes structured within LVM. Because the filesystem resides directly on an LV, fdisk is irrelevant and potentially dangerous.

Option C, mkfs.ext4, formats a filesystem. Running it on an existing LV containing data would wipe all content. It is used only when creating a new filesystem. Administrators must avoid formatting devices when resizing is needed.

Option D remounts the filesystem but does not resize it. Remounting can change permissions or mount options but cannot grow a filesystem. Therefore, it does not accomplish the necessary step of expanding the ext4 filesystem.

Properly resizing storage on a live system requires two main steps: expanding the block device and expanding the filesystem. LVM simplifies storage management by allowing logical volume resizing with commands such as lvextend -L +10G or lvextend -r, where the -r flag triggers an automatic resize. When lvextend is done without -r, administrators must run resize2fs manually. Ext4 supports online expansion, making resize2fs ideal for production servers where downtime must be minimized. Thus, option B is the correct choice.

Question 24:

A Linux engineer wants to run a containerized application using Podman and needs to start a container named webapp from the image nginx:latest, mapping port 80 on the host to port 80 in the container. Which command accomplishes this?

A) podman start nginx:latest -p 80:80 --name webapp
B) podman run --name webapp -p 80:80 nginx:latest
C) podman create nginx:latest 80:80 webapp
D) podman exec -it webapp nginx:latest

Answer:

B

Explanation:

The correct command is podman run --name webapp -p 80:80 nginx:latest because podman run both creates and starts a new container. The --name flag assigns a custom container name, and -p 80:80 maps host port 80 to container port 80. Podman uses syntax similar to Docker but is rootless by default, offering enhanced security and improved isolation. The run command automatically starts the container, ensuring that the nginx web server becomes immediately available.
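
A hedged example, adding the common -d flag so the container runs in the background (the question's form runs it in the foreground):

# Create and start the container with the required port mapping
podman run -d --name webapp -p 80:80 nginx:latest
# Confirm it is running and the mapping is in place
podman ps
# Test the web server from the host
curl -I http://localhost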

Option A incorrectly uses podman start, which only starts an already-created container. It cannot start an image directly, and using nginx:latest in this context is invalid. The start command requires a container name or ID that already exists.

Option C incorrectly structures arguments. Podman create builds a stopped container, but the placement of port mapping and name arguments is not correct. Additionally, create does not start the container. The engineer must run podman start afterward. Even then, the syntax shown would not work.

Option D uses podman exec, which executes commands inside an already running container. However, the container must already exist and be running. Exec is not used to start or create containers.

Podman provides drop-in Docker compatibility for most commands while offering systemd integration, rootless operation, and OCI compliance. Running web services like nginx inside containers is common for testing, development, and production deployments. Mapping host ports ensures external access. Option B is the only command satisfying the requirement to create and start the container with correct port mapping.

Question 25:

A Linux administrator needs to add an entry to the /etc/fstab file for an XFS filesystem located on /dev/sdb1 so that it mounts automatically at /data on boot. The mount options should include defaults and noatime. Which fstab entry is correct?

A) /dev/sdb1 /data ext4 defaults,noatime 0 0
B) /dev/sdb1 /data xfs defaults,noatime 0 2
C) /dev/sdb1 /data xfs defaults,noatime 1 1
D) /dev/sdb1 /data xfs defaults 0 0

Answer:

B

Explanation:

The correct entry is /dev/sdb1 /data xfs defaults,noatime 0 2 because it uses the correct filesystem type, correct mount point, and correct fsck order. The first field identifies the device, /dev/sdb1. The second identifies the mount point, /data. The third field specifies the filesystem type, xfs. The fourth field lists mount options: defaults and noatime. The noatime option disables updates to access timestamps on files, improving performance for read-heavy workloads. The fifth field (dump) is usually set to 0. The sixth field controls filesystem checks on boot; XFS filesystems should use a value of 2 because they do not follow standard fsck ordering but still require proper indexing within fstab.
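
The complete entry, followed by a safe way to test it before rebooting, might look like this (UUID-based device references are often preferred in production, but the device path matches the question):

/dev/sdb1 /data xfs defaults,noatime 0 2

# Mount everything listed in fstab that is not already mounted
mount -a
# Confirm the mount and its options
findmnt /data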

Option A uses ext4, which is incorrect for an XFS filesystem. Filesystem mismatch prevents boot mounting and causes errors.

Option C incorrectly sets both dump and fsck fields to 1. XFS does not use fsck in the same manner as ext4 or other filesystems; system tools expect fsck order 2 for secondary filesystems.

Option D fails to include noatime, which is part of the requirement. While defaults may work, it does not satisfy the administrator’s request.

The /etc/fstab file automates filesystem mounting. Proper formatting is essential to avoid boot failures. Using the correct fsck order ensures the system handles filesystem checks gracefully. Because option B satisfies all mounting requirements, it is the correct answer.

Question 26:

A Linux administrator needs to analyze disk I/O performance on a production server and identify which processes are causing the highest read/write operations. Which command provides a real-time, process-level breakdown of disk I/O usage?

A) vmstat 5
B) iotop
C) df -i
D) du -sh /var/*

Answer:

B

Explanation:

The correct command is iotop because it provides real-time monitoring of disk I/O usage broken down by process, allowing administrators to identify which applications consume the most read/write bandwidth. Iotop is an invaluable tool when diagnosing performance problems involving storage subsystems. Many production systems experience slow response times due to disk saturation rather than CPU or memory constraints. Iotop samples kernel I/O accounting data and displays which processes are performing I/O operations, how heavily they are utilizing the disk, and how their behavior changes over time.

Option A, vmstat 5, provides system-level statistics such as memory usage, context switching, CPU utilization, and some block I/O summaries, but it does not display per-process I/O. Vmstat shows aggregated metrics for the entire system, making it unsuitable when an administrator needs to pinpoint which particular process is generating heavy disk utilization.

Option C, df -i, reports inode usage for filesystems. While inode exhaustion can cause application failures, df -i does not relate to performance monitoring and cannot identify processes causing high disk reads or writes. It is essentially a capacity measurement tool rather than a real-time monitoring utility.

Option D, du -sh /var/*, calculates directory sizes, showing disk usage of filesystem paths rather than real-time performance. It does not track I/O operations nor identify active processes performing the I/O. It is useful for identifying large directories but not for diagnosing I/O bottlenecks.

Iotop is powerful because it provides a live view similar to tools like top or htop, but focused specifically on I/O usage. It displays columns such as Read, Write, Swap-in, and IO% for each process. This allows administrators to identify problematic applications that repeatedly perform heavy writes, such as database servers stuck in checkpoint loops, logging processes writing excessively, misconfigured backup routines, or runaway background jobs.

Additionally, iotop often requires root privileges because it relies on kernel accounting. It reads data from /proc and uses taskstats or cgroups to track disk I/O. The tool can be filtered to show only processes actively performing I/O rather than all processes. This makes it easier to isolate the cause of disk contention and take corrective action such as adjusting process I/O scheduling, migrating workloads, resizing storage, or tuning kernel I/O settings.
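
A typical invocation, assuming iotop is installed and run as root, might be:

# -o: show only processes actually performing I/O
# -P: show processes rather than individual threads
# -a: show I/O accumulated since iotop started
iotop -oPa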

In production systems, identifying disk-heavy processes is crucial because storage is frequently the largest bottleneck. Solid-State Drives (SSDs), high-performance RAID arrays, or NVMe technologies reduce latency, but poorly optimized applications can still saturate I/O. Using iotop allows the administrator to react quickly by stopping or modifying the offending service.

Therefore, iotop is the only option that meets the requirement for real-time, process-level disk I/O monitoring.

Question 27:

A Linux security administrator wants to configure auditing so that all executions of the /usr/bin/passwd command are logged for compliance purposes. Which auditd rule should be added to ensure every invocation is recorded?

A) -a exit,always -F path=/usr/bin/passwd -F perm=rw
B) -a always,exit -F path=/usr/bin/passwd -F perm=x -k passwd_changes
C) auditctl -x /usr/bin/passwd
D) -w /usr/bin/passwd -p w -k passwd_changes

Answer:

B

Explanation:

The correct rule is -a always,exit -F path=/usr/bin/passwd -F perm=x -k passwd_changes because it instructs auditd to record all executions of the passwd command. This rule uses the -a flag to append a syscall rule, specifying always,exit so that the audit system captures events when the syscall exits. The -F path= filter locks the rule to a specific executable, and -F perm=x triggers logging when the file is executed. This ensures that every invocation—successful or not—is recorded. The keyword -k passwd_changes tags audit entries with an identifiable label for easier searching using ausearch or aureport.

Option A incorrectly uses perm=rw, which monitors read/write operations rather than execution. The passwd command’s compliance requirement focuses on when users execute the program to change passwords, not when someone reads or writes the file. Therefore, perm=x is needed.

Option C is invalid because auditctl does not have an -x flag for execution logging. Auditctl is used to load audit rules temporarily, but its syntax does not match what is shown. The rule must specify syscalls or filesystem watches using proper auditd syntax.

Option D uses -w, which creates a watch rule for attribute changes, not execution. The -p w permission only captures writes to the file, such as modifications to the passwd binary. This does not satisfy the requirement because the administrator must track executions, not modifications.

Auditing passwd execution is an essential compliance task because changing passwords directly impacts authentication systems, user security, and audit trails. Many organizations must track who attempted to change passwords and when, ensuring policy enforcement and accountability. The rule in option B captures each invocation, and the -k key provides a convenient label for querying audit logs. Administrators can then use ausearch -k passwd_changes or aureport to summarize events.
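
A sketch of the full workflow, assuming the standard auditd layout (the rules file name is illustrative):

# Persist the rule so it loads at boot
echo '-a always,exit -F path=/usr/bin/passwd -F perm=x -k passwd_changes' >> /etc/audit/rules.d/passwd.rules
# Load rules from /etc/audit/rules.d
augenrules --load
# Query matching events by key
ausearch -k passwd_changes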

Thus, option B is the correct answer.

Question 28:

A Linux administrator needs to restore a file named config.yaml from a Git repository to the version committed three revisions ago. Which Git command accomplishes this?

A) git revert HEAD~3 config.yaml
B) git checkout HEAD~3 -- config.yaml
C) git restore --staged HEAD~3 config.yaml
D) git merge HEAD~3 config.yaml

Answer:

B

Explanation:

The correct command is git checkout HEAD~3 -- config.yaml because this restores the working copy of the file to the version that existed three commits prior to the current HEAD. The syntax checkout HEAD~3 -- filename instructs Git to extract that historical version without altering the entire repository or changing branch pointers. After running this, the file will appear in the working directory in that older state. From there, the administrator can commit the restored file if desired.
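
For example (the -- separator tells Git that what follows is a path, not a branch or commit):

# Restore config.yaml as it existed three commits ago
git checkout HEAD~3 -- config.yaml
# Review and commit the restored version
git status
git commit -m "Restore config.yaml to the version from three commits ago"

On newer Git releases, git restore --source=HEAD~3 config.yaml achieves the same working-tree restoration.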

Option A, git revert HEAD~3 config.yaml, attempts to revert a specific commit. However, revert applies to whole commits, not individual files, and creates a new commit that undoes the changes from a specific commit. Because revert cannot target only one file from an old commit, this option is not appropriate.

Option C, git restore --staged, modifies the staging area and does not affect working directory content in the way required. Additionally, pairing restore with HEAD~3 in this format is incorrect. Git restore typically works with the --source flag for pulling file versions.

Option D, git merge HEAD~3 config.yaml, makes no sense conceptually. Git merge merges branches or commits, not individual files, and the syntax shown is invalid.

Git’s ability to extract specific files from previous commits is a powerful feature. This allows administrators to revert configuration changes without affecting unrelated code. This is particularly useful for Linux configuration management, where Git often tracks application and service settings. Restoring a single file avoids rolling back the entire project. Therefore, option B is the correct choice.

Question 29:

A Linux cloud engineer wants to view CPU, memory, disk, and network statistics in a single, interactive, curses-based interface. Which tool provides this multi-resource system monitoring capability?

A) sar
B) htop
C) nload
D) atop

Answer:

D

Explanation:

The correct answer is atop because it is a comprehensive, interactive performance monitoring tool that displays CPU, memory, disk, and network statistics simultaneously in a colorful, structured interface. Atop stands out from other monitoring tools because it provides an aggregated view of system performance combined with per-process breakdowns for key resources. It also logs activity for later analysis, helping administrators troubleshoot past performance anomalies.

Option A, sar, collects system activity data but is not interactive and requires running specific sar commands to view different resources. Sar excels at historical reporting, not consolidated real-time monitoring.

Option B, htop, provides CPU, memory, and process-related statistics but cannot display detailed disk or network performance in the way atop does. While htop is more intuitive for process management, it is not as comprehensive for multi-resource monitoring.

Option C, nload, focuses only on network throughput. Although it is useful for identifying bandwidth bottlenecks, it cannot show CPU load, disk utilization, or memory consumption.

Atop provides detailed metrics for disk I/O per process, network usage per process, thread activity, swap behavior, and more. Administrators can sort by various resource columns, diagnose bottlenecks, and examine resource spikes. Atop logs allow retrospective diagnostics, which is essential for cloud environments where intermittent issues are common. Its ability to present multiple subsystems in one interface makes it the best answer.

Question 30:

A systems administrator needs to configure a persistent kernel parameter that increases the maximum number of inotify watches available to applications. Which configuration file should be modified to ensure the parameter persists after reboot?

A) /etc/sysctl.conf
B) /proc/sys/fs/inotify/max_user_watches
C) /etc/fstab
D) /etc/modules-load.d/inotify.conf

Answer:

A

Explanation:

The correct file is /etc/sysctl.conf because persistent kernel parameters must be defined in sysctl configuration files to ensure they are applied at boot. To increase inotify watches, an administrator would add a line such as fs.inotify.max_user_watches=524288 to sysctl.conf or to a custom file in /etc/sysctl.d/. Running sysctl -p applies the changes immediately.
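
A minimal sketch using a dedicated drop-in file (the file name 99-inotify.conf is illustrative):

# Persist the setting
echo 'fs.inotify.max_user_watches=524288' > /etc/sysctl.d/99-inotify.conf
# Apply it immediately
sysctl -p /etc/sysctl.d/99-inotify.conf
# Verify the running value
sysctl fs.inotify.max_user_watches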

Option B shows the runtime sysctl interface located under /proc/sys/. Modifying it with echo works temporarily (e.g., echo 524288 > /proc/sys/fs/inotify/max_user_watches), but changes do not persist after reboot.

Option C, /etc/fstab, controls filesystem mounting and cannot configure kernel parameters.

Option D, /etc/modules-load.d, loads kernel modules and has no relation to sysctl parameters.

Inotify watches are essential for tools that monitor file changes, such as IDEs, configuration management systems, and real-time log processors. Without increasing this limit, applications may fail with errors indicating insufficient watch resources. Persisting kernel tunables ensures stable system behavior across reboots, making /etc/sysctl.conf the correct choice.

Question 31:

A Linux administrator needs to perform a scheduled system shutdown in 30 minutes to apply maintenance updates. Which command will correctly schedule the shutdown with a warning message to all logged-in users?

A) shutdown now "System maintenance"
B) shutdown -r +30 "System will reboot in 30 minutes"
C) poweroff -f
D) halt --no-wall

Answer:

B

Explanation:

The correct command is shutdown -r +30 "System will reboot in 30 minutes" because it schedules a reboot in exactly 30 minutes and broadcasts the provided warning message to all logged-in users. The shutdown command supports timed operations where +minutes indicates the delay before the action occurs. The -r flag triggers a reboot after shutdown completes. The shutdown utility also automatically sends a wall message to all users connected locally or through SSH, warning them about impending system downtime. This makes it ideal for controlled maintenance operations.
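
For example:

# Schedule a reboot in 30 minutes with a broadcast warning
shutdown -r +30 "System will reboot in 30 minutes"
# Cancel a pending shutdown if plans change
shutdown -c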

Option A, shutdown now "System maintenance", initiates an immediate shutdown rather than scheduling one. Although it includes a message, the keyword now means the shutdown begins at once, contradicting the requirement for a 30-minute schedule.

Option C, poweroff -f, forces an immediate shutdown without warning users. This is dangerous in production environments because it terminates running processes abruptly, potentially causing corruption, data loss, or service interruption. It does not support messages or timed delays, making it unsuitable.

Option D, halt --no-wall, stops the system but suppresses warning messages. The --no-wall flag prevents notifications from being sent, violating the requirement to alert users. Additionally, halt may not fully power off or reboot depending on system configuration.

Scheduling controlled reboots is essential in environments where applications, databases, or user sessions are active. Administrators must give users time to save work and exit safely. The shutdown command has predictable behavior, handles messaging automatically, and provides timestamps in system logs. It is widely used for planned outages, update cycles, kernel patching, and hardware maintenance. Because option B is the only one that schedules the action correctly and includes a notification message, it is the correct answer.

Question 32:

A system administrator needs to inspect SELinux denials that occurred recently on a server to identify blocked operations affecting an application. Which command provides a filtered report of SELinux AVC denials?

A) ausearch -m avc
B) seinfo
C) setenforce 0
D) getenforce

Answer:

A

Explanation:

The correct command is ausearch -m avc because it searches the audit logs for Access Vector Cache (AVC) messages, which record SELinux denials. AVC messages provide detailed information about which process attempted an action, the target it tried to access, and the SELinux context mismatch that caused the denial. The ausearch tool is part of the auditd suite and is designed to parse audit logs efficiently.

Option B, seinfo, displays SELinux policy information such as types, classes, booleans, and roles. While seinfo is useful for examining the structure of SELinux policies, it does not display denial events or runtime logs. It cannot troubleshoot blocked operations.

Option C, setenforce 0, sets SELinux to permissive mode, temporarily disabling enforcement. It does not display historical denials. Using setenforce 0 can help during diagnostics by allowing operations while still logging denials, but it does not meet the requirement of viewing past denials.

Option D, getenforce, reports the current SELinux mode (Enforcing, Permissive, or Disabled). It does not display logs or denial events. It is a simple status command with no diagnostic capability.

SELinux is a mandatory access control system that enforces strict policy-based access. When an application experiences unexpected failures, administrators often discover SELinux is preventing actions such as file access, port binding, or process execution. AVC denials provide details about the denial and suggested remediation steps. Using ausearch -m avc allows administrators to quickly identify patterns, troubleshoot issues, and determine whether policy adjustments or SELinux booleans are necessary.

This level of auditing is essential for security, compliance, and forensic investigations. Because ausearch directly queries SELinux denial events from audit logs, it is the correct tool.
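
A few common invocations, assuming auditd is running and logging to its default location:

# Show all AVC denial records
ausearch -m avc
# Narrow the search to recent events
ausearch -m avc -ts recent
# Summarize denials with aureport
aureport -a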

Question 33:

A Linux administrator needs to find all running processes owned by a user named developer. Which command will list these processes effectively?

A) ps -u developer
B) who developer
C) grep developer /etc/passwd
D) last developer

Answer:

A

Explanation:

The correct command is ps -u developer because it lists all processes owned by the specified user. The ps utility provides detailed process information, including PID, CPU usage, memory usage, and command names. Using the -u option filters the process list by user, ensuring that only processes belonging to developer are displayed. This is essential when troubleshooting user-specific issues such as resource exhaustion, runaway scripts, or permission conflicts.
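
For example, sorting the user's processes by CPU consumption (the -o and --sort options are GNU ps features):

# List developer's processes with useful columns, highest CPU first
ps -u developer -o pid,%cpu,%mem,start,cmd --sort=-%cpu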

Option B, who developer, is syntactically incorrect. The who command lists logged-in users, not processes, and it does not take a username as an argument in this manner. It only displays user login sessions from UTMP records.

Option C, grep developer /etc/passwd, retrieves user account information but does not list processes. It is useful for identifying UID, GID, and home directory paths but cannot monitor running applications.

Option D, last developer, lists login history for the user but not their active processes. It is useful for forensic investigation or tracking login patterns but irrelevant for real-time process monitoring.

Process management is fundamental to Linux administration. When systems slow down, behave unpredictably, or crash, identifying user-level processes helps narrow down the cause. Using ps -u developer allows administrators to correlate specific tasks with resource usage and take corrective action through kill, renice, or service troubleshooting. Because option A is the only one that lists running processes for a specific user, it is the correct answer.

Question 34:

A Linux engineer needs to configure persistent environment variables for a specific user so that variables are applied at every login session. Which file should the variables be added to?

A) /etc/environment
B) ~/.bash_profile
C) /etc/skel/.bashrc
D) ~/.bash_history

Answer:

B

Explanation:

The correct answer is ~/.bash_profile because this file executes at login for users using bash as their shell. Environment variables placed in ~/.bash_profile apply to all login sessions, including SSH logins and console logins. This makes it ideal for persistent per-user configuration such as PATH updates, application environments, or custom variables.
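
A minimal example appended to the target user's ~/.bash_profile (the variable values are illustrative):

# Custom application environment
export APP_ENV=production
export PATH="$HOME/bin:$PATH"

The variables take effect at the next login, or immediately in the current session after running source ~/.bash_profile.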

Option A, /etc/environment, sets global environment variables for all users. While functional, it does not apply specifically to one user and does not allow conditional logic or shell scripting.

Option C, /etc/skel/.bashrc, provides template files for new users created in the future. Editing it does not affect existing user accounts, including the engineer’s target user. It only influences users created later through useradd.

Option D, ~/.bash_history, stores command history and cannot apply environment variables. It is a log, not a configuration file.

Linux shells execute specific initialization files depending on login type. For bash, ~/.bash_profile is executed at login, while ~/.bashrc executes for non-login interactive sessions. Administrators typically place export VAR=value statements in ~/.bash_profile when they require environment variables to apply consistently across sessions. Because the requirement is per-user persistence at every login, option B is correct.

Question 35:

A Linux administrator needs to monitor system journal logs in real time to observe service events as they occur. Which command achieves this behavior?

A) journalctl -n 100
B) journalctl -f
C) systemctl status
D) dmesg

Answer:

B

Explanation:

The correct command is journalctl -f because it follows the systemd journal in real time, displaying new log entries as they are added. The -f option functions similarly to tail -f for text logs. Journalctl integrates logs across the system, including kernel messages, service output, user sessions, and system events. This allows administrators to observe live activity during service restarts, troubleshooting, or debugging application behavior.
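
Typical usage, optionally narrowed to a single unit or priority (the unit name is illustrative):

# Follow all journal entries in real time
journalctl -f
# Follow one service's log stream
journalctl -f -u nginx.service
# Follow only error-level messages and above
journalctl -f -p err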

Option A, journalctl -n 100, shows the last 100 log entries but does not follow new events. It is useful for historical review but not real-time monitoring.

Option C, systemctl status, provides the status of a single systemd service and includes recent logs, but it does not stream live logs continuously.

Option D, dmesg, shows kernel ring buffer messages but does not monitor system logs from all services. It is useful for hardware or kernel issues but not for comprehensive journal monitoring.

Monitoring logs is essential during troubleshooting. Administrators may watch journal logs during service reloads, network outages, authentication failures, or application debugging. Journalctl -f consolidates logs from multiple sources, making diagnosis more efficient. Because it meets the requirement for real-time monitoring, option B is correct.

Question 36:

A Linux administrator needs to override only the Environment= variables of an existing systemd service called myapp.service without modifying the original unit file in /usr/lib/systemd/system. Which of the following is the best way to apply this persistent override?

A) Edit /usr/lib/systemd/system/myapp.service directly and add new Environment= lines
B) Create /etc/systemd/system/myapp.service.d/override.conf and define updated Environment= lines there
C) Add the new Environment= lines to /etc/rc.local and restart the service
D) Run systemctl set-environment followed by systemctl restart myapp.service

Answer:
B

Explanation:

When working with systemd service management on a modern Linux distribution such as RHEL, CentOS Stream, Fedora, Ubuntu, Debian, or openSUSE, administrators must follow a strict configuration hierarchy that separates vendor-supplied unit definitions from administrator-supplied customizations. The correct answer is option B because creating a drop-in override file located within the /etc/systemd/system directory is the officially supported, persistent, and safe method for customizing only parts of an existing service configuration such as Environment variables.

Systemd is designed around the principle that files under /usr/lib/systemd/system (or for some distributions, /lib/systemd/system) are the authoritative service definitions installed by packages. These files must remain untouched to ensure system integrity and smooth package upgrades. If an administrator edits these vendor-supplied unit files directly, those modifications can easily be overwritten during system updates, causing unpredictable service behavior or configuration drift. Furthermore, modifying vendor unit files makes it difficult for future administrators to understand which settings belong to the software provider and which are local customizations.

To solve this problem, systemd provides an elegant layering approach. The system administrator may place override configuration files in a directory structure such as /etc/systemd/system/myapp.service.d. Any file ending with the .conf extension placed in this directory automatically merges with and overrides the corresponding unit file. This approach keeps vendor files intact while giving administrators full control over customization. For example, if the original unit file contains Environment variables pointing to development settings, an administrator can override only those variables by supplying new ones in an override configuration file. Systemd automatically applies the new values during the next daemon reload and subsequent service restart.
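
A minimal sketch of the drop-in approach (the variable names are illustrative); running systemctl edit myapp.service creates the directory and file automatically:

# /etc/systemd/system/myapp.service.d/override.conf
[Service]
Environment="APP_MODE=production"
Environment="APP_LOG_LEVEL=info"

# Then reload systemd's configuration and restart the service
systemctl daemon-reload
systemctl restart myapp.service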

Option A is incorrect because directly editing the vendor unit file violates best practices and creates long-term maintainability problems. Changes made directly in /usr/lib/systemd/system are likely to be lost during package updates, resulting in unexpected service behavior.

Option C, adding entries to /etc/rc.local, is outdated. Rc.local was used in SysV-style initialization systems but does not integrate with modern systemd workflows. Even if rc.local were executed, it cannot override systemd’s internal Environment settings for a service unit. The systemd process would have already read its configuration long before rc.local executes.

Option D, using systemctl set-environment, only applies temporary environment variables to the current systemd manager instance. These values disappear on reboot and do not persist. They also do not apply specifically to myapp.service but instead become global environment entries for systemd itself. This makes it unsuitable for controlling per-service environment variables in a persistent manner.

Therefore, creating a drop-in override configuration file inside /etc/systemd/system/myapp.service.d is the correct, safe, persistent, and future-proof method for adjusting environment variables without altering the original unit file. It preserves system integrity, ensures updates do not overwrite customizations, and provides clear separation between vendor and local settings. For these reasons, option B is the only correct choice.

Question 37:

A backup operator needs to create a compressed archive of /var/www while excluding the cache subdirectory and store it as /backups/www-$(date +%F).tar.gz. Which command correctly accomplishes this?

A) tar czf /backups/www-$(date +%F).tar.gz /var/www
B) tar czf /backups/www-$(date +%F).tar.gz --exclude=/var/www/cache /var/www
C) tar xzf /backups/www-$(date +%F).tar.gz --exclude=/var/www/cache /var/www
D) gzip -r /var/www -o /backups/www-$(date +%F).tar.gz --exclude=cache

Answer:
B

Explanation:

This question tests understanding of backup operations, file archiving, exclusion handling, and proper use of traditional Unix tools. The correct answer is option B because it uses the tar utility to create a gzip-compressed archive while excluding a specific directory that should not be included in the backup. Tar is the standard tool for filesystem archiving, heavily used in enterprise backup scripts, automation routines, and disaster recovery operations.

The key requirement in the question is excluding the cache directory, which is typically used to store temporary or easily regenerated files, making it unnecessary or even undesirable to include in a regular backup. Including cache data would unnecessarily inflate archive size and extend backup duration. Using tar with the correct flag ensures that the archive contains only important data while leaving out files that serve no long-term value.

Option B satisfies all elements of the requirement. The tar czf portion of the command instructs tar to create an archive, compress it with gzip, and save it to the provided filename. The path /backups/www-$(date +%F).tar.gz uses shell expansion to include the current date in YYYY-MM-DD format, which is a widely accepted naming convention that facilitates organization, retention rotation, and tracking of backups. The --exclude=/var/www/cache option tells tar to skip that directory entirely when gathering files.
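
The full command, with a quick check that the cache directory was skipped:

# Create the dated, compressed archive without the cache directory
tar czf /backups/www-$(date +%F).tar.gz --exclude=/var/www/cache /var/www
# List the archive contents; this should print no cache entries
tar tzf /backups/www-$(date +%F).tar.gz | grep cache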

Option A fails because it creates the archive without excluding the cache directory. Although it produces a valid compressed archive, it does not meet the requirement of excluding unnecessary data.

Option C is incorrect because it uses the extract flag (x). The x option extracts an archive; it does not create one. Therefore, this option cannot possibly produce a new backup file.

Option D attempts to use gzip directly on a directory, which is not how gzip works. Gzip can compress individual files, not directories, and cannot create structured archives. Additionally, the --exclude option shown is not a valid gzip flag.

Therefore, option B is the only correct and technically valid solution, aligning with best practices for file backup and exclusion management.

Question 38:

A Linux systems engineer needs to run a maintenance script /usr/local/bin/db_maintenance.sh every Sunday at 03:15 using cron. Which crontab entry correctly schedules this job?

A) 15 3 * * 0 /usr/local/bin/db_maintenance.sh
B) 3 15 * * 7 /usr/local/bin/db_maintenance.sh
C) 15 3 0 * * /usr/local/bin/db_maintenance.sh
D) 0 3 7 * * /usr/local/bin/db_maintenance.sh

Answer:
A

Explanation:

Cron is one of the most widely used scheduling systems in Unix-like operating environments, including Linux. It allows administrators to automate recurring tasks such as backups, log rotations, cleanup routines, monitoring scripts, and in this case, weekly maintenance operations. The correct answer is option A because it correctly follows the cron syntax and places the script at the correct day and time.

Cron entries follow a strictly ordered syntax consisting of five fields followed by a command: minute, hour, day of month, month, and day of week. In this question, the engineer needs to schedule a job at 03:15 every Sunday. Using the cron format, 15 corresponds to the 15th minute, 3 corresponds to 3 AM, and the day-of-week value for Sunday is typically represented as 0 (although 7 is also accepted on some systems). Using an asterisk for fields such as day-of-month and month ensures the task runs every month and every day-of-month, but only when the correct day-of-week matches.

Option A correctly expresses this schedule. It places 15 in the minute field, 3 in the hour field, and 0 in the day-of-week field. This ensures the script runs specifically on Sundays at the exact time required. It is also a typical pattern used across many Linux distributions and cloud platforms.
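
The entry as it would appear in the crontab (opened with crontab -e):

# min  hour  day-of-month  month  day-of-week  command
15 3 * * 0 /usr/local/bin/db_maintenance.sh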

Option B incorrectly switches the minute and hour fields. A cron entry beginning with 3 15 means 3 minutes past 3 PM, not 3:15 AM. Additionally, while 7 may represent Sunday in some cron implementations, the misplaced hour-minute ordering makes this option invalid.

Option C incorrectly places 0 in the day-of-month field. Days of the month begin at 1, meaning 0 is not a valid entry. This would prevent the job from running, and cron would either ignore it or treat it as misconfigured syntax. Cron cannot execute tasks based on an invalid day-of-month value.

Option D incorrectly sets the day-of-month to 7. This creates a monthly schedule that triggers only on the 7th day of each month at 03:00, regardless of what weekday it occurs, which does not match the requirement.

Therefore, option A is the only correct schedule that runs exactly on Sundays at 03:15, complying fully with standard cron syntax.

Question 39:

A storage administrator has created an LVM snapshot logical volume lvdata_snap of lvdata in volume group vgapps to test a risky application upgrade. After confirming the upgrade failed, the administrator wants to roll back the original logical volume to the state captured by the snapshot. Which sequence of commands correctly performs this rollback?

A) lvremove /dev/vgapps/lvdata ; lvrename /dev/vgapps/lvdata_snap /dev/vgapps/lvdata
B) lvconvert --merge /dev/vgapps/lvdata_snap
C) vgconvert --merge /dev/vgapps/lvdata /dev/vgapps/lvdata_snap
D) lvextend -r /dev/vgapps/lvdata /dev/vgapps/lvdata_snap

Answer:
B

Explanation:

Logical Volume Manager (LVM) snapshots are powerful tools used by enterprise administrators to capture point-in-time versions of logical volumes for testing, backups, and safe experimentation. When an administrator creates a snapshot of a logical volume, they freeze the original volume’s state while allowing changes to be recorded separately. If testing or changes go wrong, those changes can be discarded, and the original volume can be restored by merging the snapshot back into the origin. The correct answer is option B because lvconvert --merge is the official LVM method for snapshot rollback.

The merge operation works by instructing LVM to replace the current contents of the original logical volume with the saved snapshot data. LVM performs this by copying over the preserved blocks from the snapshot, restoring the logical volume to its exact earlier condition. This is a safe and reliable process that ensures the original LV state is fully recovered.
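
A hedged sketch of the rollback, assuming the origin is mounted at /data (the mount point is illustrative); if the origin stays open, LVM defers the merge until the volume is next activated:

# Unmount the origin so the merge can begin immediately
umount /data
# Merge the snapshot back into the origin (the snapshot is removed when the merge completes)
lvconvert --merge /dev/vgapps/lvdata_snap
# Remount the restored volume
mount /dev/vgapps/lvdata /data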

Option A is a dangerous and incorrect approach. Removing the original LV and renaming the snapshot cannot guarantee system consistency because the original LV may have mount points, filesystem labels, or dependencies tied to it. Additionally, snapshot volumes are not equivalent to fully independent logical volumes until merged. Simply renaming a snapshot does not replicate the original volume’s metadata and risks catastrophic data loss.

Option C is invalid because vgconvert is not a command used for merging snapshots. The shown syntax does not exist. LVM merging requires lvconvert, not vgconvert, which serves entirely different purposes.

Option D attempts to extend the original logical volume using the snapshot, which makes no logical sense. Snapshots are not used for extension; they are used to preserve earlier states and cannot be applied in this manner.

Therefore, only lvconvert --merge correctly rolls back the snapshot, making option B the correct and safe choice.

Question 40:

A Linux administrator notices unusually high network usage and wants to identify which processes are generating the most outbound and inbound traffic on the server. Which tool provides real-time, per-process network bandwidth monitoring?

A) tcpdump
B) ss -tuna
C) nethogs
D) ip addr show

Answer:
C

Explanation:

Network diagnostics are critical in Linux server administration, especially in production environments where bandwidth usage directly affects application performance, user experience, and operational stability. The correct answer is option C because nethogs is one of the few tools capable of showing per-process bandwidth consumption in real time, allowing administrators to instantly identify which process is responsible for heavy traffic.

Unlike general traffic analysis tools that show packets or raw interfaces, nethogs groups traffic by process ID and displays data transfer rates for each process. This allows an administrator to immediately spot misbehaving applications, runaway daemons, malware, unauthorized transfers, or unexpected traffic patterns.
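
Typical usage, assuming nethogs is installed and run with root privileges (the interface name is illustrative):

# Monitor all interfaces
nethogs
# Monitor a specific interface
nethogs eth0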

Option A, tcpdump, is a packet-capture tool. It is excellent for deep inspection, troubleshooting protocol-level issues, or security analysis. However, tcpdump does not aggregate traffic by process, nor does it show bandwidth usage in terms of bytes per second. It requires detailed interpretation and is unsuitable for quick identification of “who is using the network.”

Option B, ss -tuna, lists active sockets and their connection states. While useful for inspecting open ports, connected clients, and listening services, ss does not show bandwidth usage or per-process throughput metrics. It provides static connection information rather than dynamic bandwidth monitoring.

Option D, ip addr show, displays interface configuration details such as IP addresses, hardware addresses, and operational status. It is an administrative reporting tool and provides no bandwidth measurement or per-process insights.

Only nethogs provides real-time, human-friendly monitoring that ties network usage directly to responsible processes. It is invaluable when diagnosing unexplained network saturation, compromised applications, or resource-intensive service behavior.

 
