Question 61:
A Linux administrator needs to identify which systemd service is responsible for opening a specific port after noticing unexpected network activity. Which command will list the service names associated with open sockets managed by systemd?
A) systemctl status
B) systemd-analyze
C) systemctl list-sockets
D) netstat -rn
Answer:
C
Explanation:
The correct command is systemctl list-sockets because it displays all active sockets managed by systemd, along with the services associated with them. Systemd can manage socket activation, meaning that services may not be running initially but systemd monitors specific ports or sockets and starts the associated service automatically when activity occurs. This behavior requires administrators to trace which sockets correspond to which services, and systemctl list-sockets provides this mapping clearly.
When unexpected network activity appears on a server, one of the first tasks is identifying the source of the open port. Traditional commands such as lsof or ss can show which processes are using a port, but in systems with socket-activated services, the process may not yet be running. Instead, systemd itself listens on the port until activity triggers service activation. Systemctl list-sockets shows each listening socket along with the unit responsible for accepting connections.
Option A, systemctl status, is useful for checking the status of a specific service but does not enumerate all sockets. It must be used with a service name already known, and it does not directly show socket-to-service mappings.
Option B, systemd-analyze, focuses on boot performance and system state analysis. It cannot list sockets or reveal which services correspond to open ports.
Option D, netstat -rn, displays routing tables, not listening ports or service associations. It cannot identify which service owns a port, nor can it work with systemd socket activation mechanisms.
Systemctl list-sockets displays output with columns similar to:
LISTEN                 UNIT              ACTIVATES
socket-path-or-port    socket-unit       activated-service
This makes it easy to identify, for example, that a socket file or network port belongs to a specific web server, remote shell, logging service, or custom application. Administrators can then inspect or disable the unit if necessary.
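For example, the investigation might proceed as follows (the unit names are hypothetical):
systemctl list-sockets --all             # map every socket to the unit it activates
systemctl status example.socket          # inspect the suspicious socket unit
systemctl disable --now example.socket   # stop and disable it if it is unwanted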
Socket activation is efficient because it prevents services from consuming resources unnecessarily when idle. However, it can complicate troubleshooting unless the administrator understands how to inspect socket-managed units. Using systemctl list-sockets ensures full visibility into which sockets are present, why they exist, and which service they belong to. For this reason, option C is the correct answer.
Question 62:
A Linux engineer needs to create a new logical volume named lvdata with a size of 10 gigabytes inside an existing volume group called vgmain. Which command correctly performs this action?
A) lvextend -L 10G vgmain/lvdata
B) lvcreate -n vgmain -L 10G lvdata
C) lvcreate -L 10G -n lvdata vgmain
D) mkfs.ext4 /dev/vgmain/lvdata
Answer:
C
Explanation:
The correct command is lvcreate -L 10G -n lvdata vgmain because this command creates a new logical volume named lvdata with a size of 10 gigabytes inside the volume group vgmain. The lvcreate command is the standard tool for provisioning logical volumes within the Linux Logical Volume Manager infrastructure. Administrators use lvcreate when allocating storage for databases, application directories, container storage, virtual machines, or any area where flexible disk management is required.
Option A, lvextend -L 10G vgmain/lvdata, attempts to extend an existing logical volume. It is used only when increasing the size of an already existing LV, not when creating a new one. If lvdata does not yet exist, this command fails.
Option B, lvcreate -n vgmain -L 10G lvdata, swaps the logical volume name and the volume group name. The -n flag specifies the LV name, not the volume group, so this command would try to create a logical volume called vgmain inside a volume group called lvdata. That does not match the requirement and fails if no volume group named lvdata exists.
Option D, mkfs.ext4 /dev/vgmain/lvdata, formats a filesystem on a logical volume. It does not create the logical volume itself. This step occurs only after the LV has been created.
Creating logical volumes allows flexible resizing and snapshot creation. Because logical volumes are abstracted from physical disks, administrators can extend or shrink storage by modifying LV sizes rather than manipulating partitions directly. Lvcreate accepts size arguments in various formats, allowing precise storage allocation.
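For instance, assuming vgmain has sufficient free extents, either of the following forms would work; the second is shown only to illustrate percentage-based sizing:
lvcreate -L 10G -n lvdata vgmain         # allocate exactly 10 gigabytes
lvcreate -l 100%FREE -n lvdata vgmain    # alternatively, use all remaining free space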
Since lvcreate -L 10G -n lvdata vgmain is the only correct command for creating a new logical volume with the required parameters, option C is correct.
Question 63:
A Linux administrator needs to extract the contents of a compressed archive named archive.tar.gz into the current directory while preserving file permissions and directory structure. Which command should be used?
A) unzip archive.tar.gz
B) gunzip archive.tar.gz
C) tar -xvzf archive.tar.gz
D) gzip -d archive.tar.gz
Answer:
C
Explanation:
The correct command is tar -xvzf archive.tar.gz because tar handles both archiving and extraction of .tar.gz files. The x flag extracts files, the v flag enables verbose output, the z flag applies gzip decompression, and the f flag specifies the archive filename. This combination extracts the full directory structure along with file metadata such as permissions and timestamps (ownership is also restored when the extraction is performed as root), ensuring a faithful restoration of the archived content.
Option A, unzip archive.tar.gz, is incorrect because unzip works with .zip files, not .tar.gz archives. Attempting to unzip a .tar.gz file results in errors, as the file format is entirely different.
Option B, gunzip archive.tar.gz, removes the gzip compression and produces archive.tar but does not extract the archive. A second command would be required to extract the tar file, making this option incomplete.
Option D, gzip -d archive.tar.gz, decompresses the archive but does not extract the tar portion. It is equivalent to gunzip and leaves a plain .tar file in place.
Tar archives are widely used on Linux for packaging files due to their ability to preserve file attributes and directory hierarchies. The addition of gzip compression (.gz extension) reduces file size, making it ideal for backups, software distribution, and transfers. Using tar with the proper flags ensures accurate extraction without disrupting file permissions or directory structure.
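A cautious workflow might list the archive contents before extracting them (the archive name matches the question):
tar -tvzf archive.tar.gz   # list contents without extracting
tar -xvzf archive.tar.gz   # extract, preserving structure and permissions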
Because tar -xvzf archive.tar.gz is the only complete and correct extraction command, option C is the correct answer.
Question 64:
A Linux administrator needs to analyze overall system performance and wants to observe CPU, memory, swap, I/O, and system activity in a continuous display updated every few seconds. Which command provides this multicolumn, real-time monitoring interface?
A) vmstat
B) sar
C) uptime
D) killall
Answer:
A
Explanation:
The correct command is vmstat because it provides a continuous, real-time snapshot of system performance, including CPU use, memory utilization, swap behavior, block I/O, system interrupts, context switches, and process queues. Vmstat accepts an interval value to specify how frequently data is refreshed. For example, vmstat 3 prints performance information every three seconds, giving administrators a near real-time view of system behavior.
Option B, sar, is a powerful reporting tool for historical system activity but does not provide an interactive live-monitoring interface by default. It is often used with logs to display performance trends over time, not for active monitoring.
Option C, uptime, displays load averages and system uptime but provides no multicolumn performance metrics. It cannot reveal memory or I/O problems.
Option D, killall, is used to terminate processes by name and does not provide any monitoring functionality.
Vmstat helps administrators diagnose performance bottlenecks such as CPU saturation, memory exhaustion, excessive swapping, or I/O overload. It displays key metrics including:
r (run queue)
b (blocked processes)
swpd (swap usage)
si/so (swap in/out)
bi/bo (block I/O)
us/sy/wa (CPU user, system, wait time)
These metrics give a clear understanding of how heavily loaded a system is and how resources are being utilized. By analyzing vmstat output, administrators can determine whether slowness is due to insufficient memory, high I/O latency, CPU starvation, or kernel overhead.
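For example, the following invocation prints five samples at three-second intervals; the first line reports averages since boot:
vmstat 3 5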
Because vmstat provides the required multicolumn, continuous performance display, option A is correct.
Question 65:
A Linux systems engineer needs to execute a command that modifies multiple files and wants to record all terminal output, including errors, into a single log file for review. Which command enables recording the entire terminal session?
A) tee logfile
B) script logfile
C) history > logfile
D) echo logfile
Answer:
B
Explanation:
The correct command is script logfile because it records the entire terminal session, including output, input, error messages, interactive command responses, and prompt text. When the engineer finishes executing commands, they exit the script session, and all recorded content is saved into the specified file.
The script utility is especially useful for capturing logs during troubleshooting, documenting complex administrative procedures, or reviewing system changes after the fact. It captures exact output that appears on the terminal, ensuring nothing is lost. This includes diagnostic messages, error output, verbose command output, or unexpected behavior.
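A minimal session might look like this (the log file name is illustrative):
script /tmp/maintenance.log   # start recording; a new shell opens
# ... run the commands that modify the files ...
exit                          # leave the recorded shell; everything is saved to /tmp/maintenance.log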
Option A, tee logfile, only captures standard output and cannot capture error output unless specifically redirected. It also does not record the full session or interactive prompts.
Option C, history > logfile, saves previously run commands but not their output. It misses errors, command results, prompts, and any real-time interaction.
Option D, echo logfile, does nothing meaningful in this context. It simply prints text to the screen and does not record terminal activity.
The script command is often used when performing risky operations such as system migrations, database maintenance, or complex troubleshooting. Capturing everything ensures administrators can retrace steps, confirm actions taken, and diagnose failures with accuracy. Because script logfile records the complete session exactly as required, option B is the correct answer.
Question 66:
A Linux administrator needs to ensure that a script runs automatically whenever a specific directory receives a new file. The script should be triggered instantly upon file creation. Which tool is designed to monitor directories for such real-time filesystem events?
A) cron
B) atd
C) inotifywait
D) wall
Answer:
C
Explanation:
The correct tool is inotifywait because it listens for file system events in real time, making it ideal for triggering scripts when new files appear in a directory. Inotify-based monitoring provides immediate reaction to events such as file creation, modification, deletion, or movement. This fulfills the requirement of executing a script instantly upon the arrival of a new file.
Option A, cron, allows recurring scheduled tasks but cannot react to file system changes. Cron operates based on time intervals and cannot be used to detect when new files appear. This makes it unsuitable for event-driven automation.
Option B, atd, schedules one-time tasks at specific times. Like cron, it has no ability to monitor file system changes. Its purpose is delayed or scheduled execution, not real-time monitoring.
Option D, wall, sends messages to logged-in users. It does not monitor file systems or trigger scripts.
Inotifywait allows administrators to set up real-time triggers. A common usage example involves monitoring a directory such as /incoming and executing a processing script whenever a new file arrives. The administrator runs a command such as:
inotifywait -m -e create /incoming
The -m flag keeps the command running continuously, and -e create specifies the event type the administrator wants to monitor. When a new file appears, inotifywait prints output that another script can capture for execution logic. Administrators often combine inotifywait with loops to automate processing pipelines for logs, uploads, data ingestion, or backups.
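A simple sketch of such a loop, assuming a hypothetical processing script named process.sh, might look like this:
inotifywait -m -e create --format '%f' /incoming | while read -r newfile; do
    /usr/local/bin/process.sh "/incoming/$newfile"   # hypothetical script that handles the new file
done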
Real-time monitoring is important for systems that rely on event-driven workflows. For example, automated backup pipelines may rely on detecting newly uploaded files. Security monitoring may require alerts when unauthorized modifications occur. Continuous data flow systems may require immediate ingestion of newly arrived content.
Inotify-based tools offer high accuracy and immediate reaction times. Because inotifywait directly supports continuous monitoring for file creation events, it is the correct answer.
Question 67:
A Linux engineer wants to determine which users have active SSH sessions on the server along with their login times and originating IP addresses. Which command provides this information in a clear, session-based format?
A) who
B) lastb
C) id
D) lslogins
Answer:
A
Explanation:
The correct command is who because it displays active user sessions, including their login time, terminal, and originating host. This tool shows which users are currently logged in through SSH or local terminals. When administrators need to verify who is connected at a particular moment, who presents concise information in real time.
Option B, lastb, displays failed login attempts from the system’s security logs. It does not show current sessions and is focused on authentication failures rather than active connections.
Option C, id, shows user and group information for a specific user but does not reveal active sessions or login sources.
Option D, lslogins, provides detailed information about user accounts, including login history and shell configuration, but it does not display only active sessions. It is more suitable for user account auditing rather than session tracking.
The who command displays each logged-in user in a simple format such as:
username terminal date/time host
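For instance, a single SSH session might appear as (values are illustrative):
alice    pts/1        2025-03-14 09:12 (203.0.113.25)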
This helps administrators monitor active connections, detect unauthorized access, identify idle sessions, or confirm whether maintenance windows can proceed safely.
Monitoring active SSH sessions is crucial for security and operational awareness. Administrators must ensure no one is using the system before restarting services, applying patches, or executing disruptive tasks. The who command makes real-time user monitoring straightforward without requiring log analysis or additional tools.
Because who directly lists active sessions with login source details, it is the correct answer.
Question 68:
A Linux administrator needs to identify which process is currently writing heavily to disk to diagnose performance issues. Which command provides a live, process-level view of read and write operations?
A) free
B) ss
C) iotop
D) uname
Answer:
C
Explanation:
The correct command is iotop because it provides real-time monitoring of disk I/O usage on a per-process basis. When disk activity spikes unexpectedly, iotop helps administrators identify which process or user is generating heavy read or write workloads. The tool displays columns such as actual disk read and write speeds, accumulated I/O time, and process ownership.
Option A, free, shows memory statistics such as RAM and swap usage. Although it is vital for diagnosing memory-related problems, it does not show disk activity or process-level I/O consumption.
Option B, ss, reports socket statistics and network connections. It has nothing to do with file or disk I/O and cannot detect which processes are writing heavily to storage.
Option D, uname, prints kernel and system information. It is useful for verifying architecture, kernel version, or system identity but cannot provide process-level performance statistics.
Iotop is invaluable during performance troubleshooting. Many applications create heavy I/O workloads that cause the server to become sluggish. For instance, log rotation processes, database flush operations, compression tasks, or backup systems may write large amounts of data. Iotop shows which processes are performing these operations at any given moment.
Administrators often observe that high I/O wait times cause CPU performance degradation. Iotop helps verify whether the disk subsystem is saturated by read/write activity. Equipped with this information, the administrator can decide whether to terminate processes, adjust scheduling priorities, allocate more storage throughput, or debug inefficient applications.
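A typical invocation, run with elevated privileges, might be:
sudo iotop -o -d 5   # show only processes currently performing I/O, refreshing every 5 seconds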
Because iotop uniquely provides real-time, process-level disk I/O data, option C is the correct answer.
Question 69:
A Linux system administrator needs to create a new partition table on a disk and wants a modern, GPT-based layout instead of MBR. Which tool provides an interactive interface specifically designed for managing GPT partitions?
A) fdisk
B) parted
C) mkfs
D) blkid
Answer:
B
Explanation:
The correct tool is parted because it supports modern partitioning schemes such as GPT and provides an interactive or command-based interface for creating, resizing, and deleting partitions. GPT partitioning is preferred for large disks, UEFI-based systems, and configurations requiring flexibility in partition counts or sizes. Parted allows administrators to create GPT partition tables using commands such as mklabel gpt and then define individual partitions.
Option A, fdisk, traditionally supports only MBR-style partitioning. Some modern versions support GPT, but they lack the intuitive GPT-focused interface of parted. Fdisk is primarily intended for MBR layouts and may not work as smoothly for GPT tasks.
Option C, mkfs, creates filesystems on partitions but does not manage partition tables or create partitions. It is used only after partitions have been defined.
Option D, blkid, identifies block devices and their associated filesystems, labels, and UUIDs. It does not provide any functionality for partition creation or modification.
Parted is especially useful for managing large disks exceeding the size limitations of MBR layouts. GPT supports significantly more partitions and avoids the restrictive primary and extended partition structure required by MBR. Parted enables precise alignment of partitions, which improves performance for SSDs and RAID systems.
Administrators use parted for disk provisioning tasks such as preparing storage for servers, configuring OS installations, or adding new disks to virtualized environments. It provides clear commands and can be run interactively or scripted.
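A sketch of preparing a new disk, assuming the device is /dev/sdb, could look like this:
parted -s /dev/sdb mklabel gpt                                # write a new GPT label (destroys any existing table)
parted -s -a optimal /dev/sdb mkpart primary ext4 1MiB 100%   # create one aligned partition spanning the disk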
Because parted is the preferred tool designed specifically for GPT management, option B is correct.
Question 70:
A Linux administrator needs to adjust the priority of a running process to give it lower CPU scheduling priority. Which command will change the process’s nice value while it is still running?
A) renice
B) chown
C) kill
D) nohup
Answer:
A
Explanation:
The correct command is renice because it changes the nice value of a running process. The nice value determines CPU scheduling priority, where higher values result in lower priority. Renice allows administrators to deprioritize CPU-intensive processes to ensure other applications receive more processing time.
Option B, chown, changes file ownership and is unrelated to CPU scheduling or process priorities.
Option C, kill, sends signals to processes, typically used to terminate or control them. It does not modify scheduling priority.
Option D, nohup, allows a process to continue running after the user logs out. It does not influence CPU scheduling or nice values.
Renice is particularly useful when administrators observe a process consuming excessive CPU resources and want to reduce its impact without terminating it. By adjusting its nice value to something higher, such as renice 15 -p processID, the process yields CPU time more readily, allowing higher-priority workloads to perform better.
Renice supports modifying priorities for individual processes, entire process groups, or all processes belonging to a user. It requires elevated privileges when attempting to raise priority or adjust other users’ processes.
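For example, assuming a process ID of 2345 (illustrative):
renice -n 15 -p 2345        # lower the scheduling priority of PID 2345
ps -o pid,ni,comm -p 2345   # confirm the new nice value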
Because renice directly modifies the CPU scheduling priority of active processes, option A is correct.
Question 71:
A Linux administrator needs to ensure that a specific systemd service automatically restarts if it fails unexpectedly. Which option should be configured inside the service’s unit file to enable automatic recovery?
A) RestartAlways=yes
B) Restart=on-failure
C) AutoStart=yes
D) FailureRestart=true
Answer:
B
Explanation:
The correct option is Restart=on-failure because this directive instructs systemd to automatically restart a service whenever it exits with a non-zero status or terminates due to a signal such as a crash. This configuration is commonly used for services that must remain available at all times, such as web servers, monitoring agents, DNS services, background daemons, and various application services.
Option A, RestartAlways=yes, appears similar but is not a valid systemd directive. Systemd does not support RestartAlways as an actual keyword. The administrator must use Restart with a valid parameter.
Option C, AutoStart=yes, is not part of the systemd unit file syntax. Systemd does not recognize this directive for controlling service restart behavior.
Option D, FailureRestart=true, is also invalid. Systemd unit files require specific directives, and this syntax is not recognized.
System administrators use Restart=on-failure because it provides fault tolerance without causing unnecessary restarts. If a service exits cleanly with a status of zero or is manually stopped, systemd will not restart it. However, if the service crashes, receives an unexpected signal, or exits with an error code, systemd immediately attempts a restart. This ensures resilience without interfering with maintenance procedures.
Administrators often pair Restart=on-failure with other options such as:
RestartSec, which defines how long systemd waits before restarting
StartLimitInterval, controlling how often the service may attempt restarts
StartLimitBurst, preventing runaway restart loops
These additional directives help prevent system churn and give administrators time to address deeper issues if a service enters a repeated crash state.
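A minimal sketch of the relevant unit-file sections might look like this (the values are illustrative, and on newer systemd releases the start-limit directives belong in the [Unit] section as shown, while older releases used slightly different names in [Service]):
[Unit]
# Count restart attempts over 5 minutes and allow at most 5 in that window
StartLimitIntervalSec=300
StartLimitBurst=5

[Service]
# Restart only after a crash or non-zero exit, waiting 5 seconds between attempts
Restart=on-failure
RestartSec=5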
Using Restart=on-failure is essential in environments where service availability is critical. For example, infrastructure supporting business operations, automation, or continuous monitoring relies on predictable availability. Without automatic restarts, temporary glitches or unexpected behavior could leave services offline until human intervention occurs. Systemd’s automatic recovery avoids downtime, reduces operational overhead, and strengthens overall system robustness.
Since Restart=on-failure is the correct and valid systemd directive for enabling automatic service recovery, option B is the correct answer.
Question 72:
A Linux engineer needs to identify the default gateway currently in use on the server to troubleshoot network routing issues. Which command provides the most direct way to display the active default route?
A) ip route show
B) hostnamectl
C) ethtool
D) ifconfig -s
Answer:
A
Explanation:
The correct command is ip route show because it provides a clear view of the system’s routing table, including the default gateway. When diagnosing network connectivity issues such as unreachable external hosts or improper routing, administrators must verify the default route. The output typically contains a line beginning with default, followed by the gateway address and network interface through which traffic is forwarded.
Option B, hostnamectl, is used to view and configure system hostname information. It provides details such as the operating system version and kernel but does not display routing or network gateway data.
Option C, ethtool, queries or modifies Ethernet device settings such as speed, duplex, link status, and driver statistics. It cannot show routing information or gateway configuration.
Option D, ifconfig -s, shows interface statistics and packet counters. Although ifconfig may display basic interface information, it does not show routes or default gateways.
The ip route show command is part of the modern ip suite, replacing older networking tools. It displays routes in a clear and structured format, listing destination networks, gateway addresses, routing metrics, and associated interfaces. The default gateway entry, for example, appears as:
default via gateway-address dev interface-name
This tells the administrator which device the system uses to reach external networks. Identifying the correct gateway is critical in diagnosing issues such as failed external pings, unreachable DNS servers, and communication problems affecting remote hosts.
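Two related commands illustrate this (the destination address is an arbitrary public IP used only as an example):
ip route show           # full routing table, including the default route
ip route get 8.8.8.8    # show which gateway and interface would be used for that destination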
Routing issues commonly arise from misconfigured network interfaces, incorrect DHCP settings, incorrect static routes, VPN conflicts, or changes in network infrastructure. Using ip route show allows administrators to pinpoint the exact routing configuration quickly and determine if adjustments are necessary.
Because this command directly displays the default gateway and related routing entries, option A is correct.
Question 73:
A Linux administrator needs to uncover why a service failed to start and wants to inspect detailed logs specifically for that systemd unit. Which command displays only the logs associated with a particular service?
A) cat /var/log/boot.log
B) journalctl -u servicename
C) systemctl isolate servicename
D) service servicename status
Answer:
B
Explanation:
The correct command is journalctl -u servicename because it displays logs recorded specifically for the given systemd unit. Systemd stores service logs within the system journal, and journalctl allows administrators to filter entries by unit, date, priority, or boot session. When diagnosing why a service failed to start, examining these logs is essential for identifying missing files, permission issues, configuration errors, port conflicts, or unexpected exceptions.
Option A, cat /var/log/boot.log, shows messages only from the boot process and does not include detailed service logs generated at other times. It also excludes services that start manually after boot.
Option C, systemctl isolate servicename, switches the system to a different systemd target, which can disrupt the system significantly. It does not show logs and is unrelated to troubleshooting startup failures.
Option D, service servicename status, displays the service status and may include a short log snippet, but it does not provide full, detailed logs. It lacks the depth and granularity required for thorough debugging.
Using journalctl -u servicename allows an administrator to view all log entries related to that specific unit. Logs may contain messages indicating dependency failures, environment variable issues, invalid configuration directives, or incompatible components. Administrators can include flags such as:
-b to see logs from the current boot
-f to follow logs in real time
-p to filter by priority
These options enable focused troubleshooting and help isolate problems without wading through unrelated logs.
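For example, assuming the failing unit is named myapp.service (hypothetical), the flags can be combined:
journalctl -u myapp.service -b -p err   # errors from this unit logged since the current boot
journalctl -u myapp.service -f          # follow new entries live while retrying the start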
Because journalctl -u servicename provides the full, detailed logs needed to diagnose service failures, option B is correct.
Question 74:
A Linux administrator needs to configure a swap file of 2 gigabytes to extend system memory. Which sequence correctly creates the swap file and activates it?
A) touch /swapfile; swapon /swapfile
B) dd if=/dev/zero of=/swapfile bs=1M count=2048; chmod 600 /swapfile; mkswap /swapfile; swapon /swapfile
C) mkfs.ext4 /swapfile; swapon /swapfile
D) fallocate --remove /swapfile; swapon /swapfile
Answer:
B
Explanation:
The correct sequence is dd if=/dev/zero of=/swapfile bs=1M count=2048; chmod 600 /swapfile; mkswap /swapfile; swapon /swapfile because it properly creates a swap file, secures it, formats it as swap space, and then enables it.
Option A incorrectly uses touch, which creates an empty file of zero bytes. A swap file must have actual allocated storage space, so using touch is insufficient.
Option C incorrectly formats the file as a filesystem instead of swap. Swap space requires mkswap, not mkfs.
Option D contains invalid syntax because fallocate --remove is not used to create swap files. While fallocate alone can create a swap file, it must still be formatted with mkswap, and the example does not follow correct procedures.
The correct steps are:
Allocate space for the file using dd.
Restrict permissions with chmod 600 to prevent unauthorized access.
Initialize the file as swap using mkswap.
Activate it using swapon.
Swap files are used to extend system memory when RAM is insufficient. They provide additional virtual memory space, helping prevent out-of-memory conditions. Ensuring the swap file is properly formatted and secured is critical, as improper permissions may expose sensitive memory contents.
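After activation, the change can be verified, and persistence across reboots is commonly achieved with an /etc/fstab entry such as the following (shown as a sketch of a typical convention):
swapon --show   # confirm the swap file is active
free -h         # check the total swap now available
# line to append to /etc/fstab for persistence:
/swapfile none swap sw 0 0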
Because option B includes all necessary steps in the correct order, it is the correct answer.
Question 75:
A Linux engineer needs to mount a USB drive formatted with the ext4 filesystem located at /dev/sdb1 onto the directory /mnt/usb. Which command completes this task?
A) mount -t vfat /dev/sdb1 /mnt/usb
B) mount /mnt/usb /dev/sdb1
C) mount -t ext4 /dev/sdb1 /mnt/usb
D) mount /dev /mnt/usb
Answer:
C
Explanation:
The correct command is mount -t ext4 /dev/sdb1 /mnt/usb because it mounts the ext4-formatted partition at the specified mount point. The mount command attaches a filesystem to the system’s directory structure, allowing files on the device to be accessed.
Option A specifies the vfat filesystem, which is incorrect because the drive is formatted with ext4.
Option B reverses the arguments, placing the mount point first and the device second, which is invalid syntax.
Option D attempts to mount the /dev directory itself, which is not the target device.
Mounting removable drives is a common administrative task. The administrator must ensure the mount point exists before use. Once mounted, the files become accessible within /mnt/usb. Proper unmounting with umount is required before removal to prevent data loss.
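A complete, cautious sequence might look like this:
mkdir -p /mnt/usb                   # make sure the mount point exists
mount -t ext4 /dev/sdb1 /mnt/usb    # attach the filesystem
findmnt /mnt/usb                    # verify the mount succeeded
umount /mnt/usb                     # detach cleanly before removing the drive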
Because option C correctly identifies the filesystem type and mount order, it is the correct answer.
Question 76:
A Linux administrator needs to determine which kernel modules are currently loaded to diagnose a hardware issue involving device drivers. Which command directly lists all loaded kernel modules?
A) modprobe
B) lsmod
C) insmod
D) depmod
Answer:
B
Explanation:
The correct command is lsmod because it lists all kernel modules currently loaded in the running kernel. Kernel modules act as dynamically loadable components, such as hardware drivers, filesystem support modules, virtualization drivers, and system extensions. When diagnosing hardware issues, administrators must verify whether the appropriate driver module is active. The lsmod command displays module names, memory usage, and dependency counts, allowing clear visibility into what the kernel is using at any moment.
Option A, modprobe, loads or unloads kernel modules and resolves module dependencies, but it does not list currently loaded modules. Its purpose is to manipulate module states, not to display them.
Option C, insmod, inserts a module manually without handling dependencies. It also cannot list which modules are already loaded.
Option D, depmod, rebuilds the module dependency database. It is used to help modprobe find correct dependencies but does not list active modules.
Kernel modules allow Linux to remain lightweight by loading only necessary functionality. When a device malfunctions or is unrecognized, administrators examine whether its driver loaded correctly. If a module expected to be present does not appear in lsmod output, troubleshooting steps may include manual loading using modprobe, rebuilding initramfs, checking compatible kernel versions, or reviewing dmesg logs for error messages.
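For instance, when checking whether a network driver is loaded (the module name e1000e is only an example):
lsmod | grep e1000e    # is the driver currently loaded?
modinfo e1000e         # show the module's version, parameters, and supported devices
sudo modprobe e1000e   # load it, with its dependencies, if it is missing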
Because lsmod directly lists all loaded modules in a clean and readable format, it is the correct answer.
Question 77:
A Linux system engineer needs to copy an entire directory structure, including hidden files, symbolic links, and file permissions, from /var/webdata to /backup/webdata. Which command ensures that all attributes and file types are preserved?
A) cp /var/webdata /backup/webdata
B) cp -r /var/webdata /backup/webdata
C) cp -a /var/webdata /backup/webdata
D) cp -p /var/webdata /backup/webdata
Answer:
C
Explanation:
The correct command is cp -a /var/webdata /backup/webdata because the -a (archive) option ensures all file attributes, ownership, timestamps, symbolic links, and directory recursion are preserved. Archive mode is designed for making complete, faithful copies of directory trees without altering their structure or metadata. This is essential for backups, service migrations, or duplicating application data exactly.
Option A copies files non-recursively and will not correctly handle directories.
Option B copies recursively but does not preserve attributes like ownership or symbolic links. It may follow symlinks rather than copying them as symlinks, potentially breaking configurations.
Option D preserves permissions and timestamps but does not recurse automatically and does not correctly preserve symlinks.
Many directory structures include hidden configuration files, symbolic links that link components of an application, and permission-sensitive data. A regular copy operation risks losing this structure, potentially causing applications to break or fail to run. Using cp -a ensures:
All hidden files are included
Ownership and permissions remain correct
Symlinks remain symlinks
Directory tree structure remains identical
Special device files are preserved
This makes cp -a the safest and most accurate method for copying full directory hierarchies.
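A quick check after the copy can confirm that nothing was missed (this compares file contents, not ownership or timestamps):
cp -a /var/webdata /backup/webdata
diff -r /var/webdata /backup/webdata && echo "contents match"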
Question 78:
A Linux administrator needs to verify that a backup script executed successfully earlier in the day by reviewing scheduled job activity. The script is executed by cron. Which file should be checked to confirm cron job execution results?
A) /etc/cron.allow
B) /var/spool/cron
C) /var/log/cron
D) /etc/crontab
Answer:
C
Explanation:
The correct file is /var/log/cron because it contains records of cron daemon activity, including timestamps and execution attempts of scheduled jobs. When a job fails silently or an administrator suspects a scheduled task did not execute, reviewing this log provides clear evidence of whether the cron subsystem triggered the job.
Option A, /etc/cron.allow, controls which users are permitted to schedule cron jobs but does not record execution results.
Option B, /var/spool/cron, stores individual user crontab definitions but does not contain logs or execution reports.
Option D, /etc/crontab, defines system-wide schedules but, again, contains no execution logs.
Cron logs include entries identifying:
The precise time the job ran
The user account running the job
Whether the cron daemon attempted execution
Errors emitted during execution
Indicators of successful job triggering
Backup scripts are critical to system reliability, and their execution must be verified. If a job does not appear in /var/log/cron, issues may include incorrect crontab formatting, incorrect script paths, execution permissions, or disabled cron services. Reviewing cron logs helps administrators trace failures and confirm operational integrity.
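For example, assuming the backup script is named backup.sh (hypothetical), and noting that some distributions send these messages to the journal or to /var/log/syslog instead:
grep backup.sh /var/log/cron   # find entries showing when cron ran the script
tail -n 50 /var/log/cron       # review the most recent cron activity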
Because /var/log/cron is the authoritative execution log for cron jobs, option C is correct.
Question 79:
A Linux engineer needs to determine the amount of free and used memory on a system, including buffer and cache usage. Which command provides a clear summary of system memory usage?
A) free -h
B) df -h
C) du -sh /
D) procinfo
Answer:
A
Explanation:
The correct command is free -h because it displays a full summary of system memory, including total RAM, used RAM, free RAM, buffers, cache, and swap usage. The -h option formats numbers in a human-readable form such as megabytes or gigabytes, making it easier for administrators to interpret.
Option B, df -h, reports disk space usage, not memory.
Option C, du -sh /, calculates disk space consumption for directories, not RAM.
Option D, procinfo, provides system statistics but is not standard on all systems and does not present memory details as clearly.
The free command helps administrators quickly assess:
Whether the system is running low on free memory
How much memory is cached and available for reclamation
Swap usage levels
Whether excessive swapping is occurring
Total memory pressure on the system
Linux uses otherwise idle memory aggressively for caching, so a low “free” value on a healthy system is normal. The buff/cache figure represents memory the kernel can reclaim when applications need it, and the “available” column gives the most accurate picture of how much memory can actually be allocated without swapping.
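Illustrative output (the numbers are made up) looks like this:
               total        used        free      shared  buff/cache   available
Mem:            15Gi       2.1Gi       8.4Gi       312Mi       5.2Gi        12Gi
Swap:          2.0Gi          0B       2.0Gi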
Because free -h provides the most accurate and complete snapshot of memory usage, option A is the correct answer.
Question 80:
A Linux administrator needs to extract lines from a large text file that contain the exact word error in lowercase only. Which command accomplishes this precisely?
A) grep ERROR logfile
B) grep -i error logfile
C) grep -w error logfile
D) grep -v error logfile
Answer:
C
Explanation:
The correct command is grep -w error logfile because the -w option matches whole words exactly. This ensures that only lines containing the exact word error (in lowercase, and not as part of another word) are displayed. For example, it excludes variations such as errors, erroneous, or ERROR.
Option A searches for uppercase ERROR and will not match lowercase error.
Option B performs case-insensitive matching, which violates the requirement that only lowercase error should be matched.
Option D prints lines that do not contain the target word, which is the opposite of what is required.
The -w option is essential for ensuring precise word boundaries. Without it, grep would match the substring error inside longer words, leading to inaccurate results. Administrators often use grep to filter logs, configuration files, scripts, and database exports where exact matches are required for troubleshooting or auditing. The ability to isolate exact keywords helps identify specific incidents, such as actual error messages versus unrelated text.
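A small demonstration with a throwaway file shows the difference (the file name and contents are illustrative):
printf 'error\nerrors found\nERROR: disk full\nfatal error here\n' > sample.log
grep -w error sample.log   # prints "error" and "fatal error here" only
grep error sample.log      # would also print "errors found"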
Because grep -w error logfile uniquely satisfies the requirement for exact lowercase matching, option C is correct.