CompTIA XK0-005 Linux+ Exam Dumps and Practice Test Questions: Set 6 (Questions 101-120)


Question 101:

A Linux administrator needs to recursively locate all files larger than 500MB within the directory /data/records and then delete them after verification. Which command correctly identifies these files based on size?

A) find /data/records -type f -name "500MB"
B) find /data/records -size +500k
C) find /data/records -type f -size +500M
D) find /data/records -perm 500

Answer:

C

Explanation:

The correct command is find /data/records -type f -size +500M because it correctly uses the -size option to filter files based on their actual size in megabytes. The +500M parameter tells find to return files larger than 500 megabytes, making this the appropriate solution when searching for large files that may be consuming excessive disk space. The -type f option ensures only regular files are included, preventing directories or other filesystem objects from being mistakenly matched. This is essential when performing cleanup tasks or preparing for system maintenance involving storage analysis.

Option A attempts to match files with a literal name of “500MB,” which does not represent size filtering. Naming conventions rarely use such literal names, and even if they did, this option would match only filenames, not actual file sizes. Therefore, option A cannot satisfy the requirement of identifying files larger than 500MB.

Option B uses +500k, which specifies kilobytes, not megabytes. Five hundred kilobytes represent only a small amount of data, far below the threshold required. Using this option would produce an overwhelmingly large list of small files, making it unsuitable and incorrect for locating very large files. Misusing size units leads to inaccurate results, and administrators must carefully differentiate between K, M, and G when applying size filters.

Option D uses -perm 500, which filters files by permission bits rather than size. While permission filtering is useful in other contexts, such as auditing security configurations or tracking improperly permissioned files, it has nothing to do with detecting large files.

Large file identification is an essential task in disk space management. When a filesystem begins reaching capacity, administrators must track down oversized files to evaluate whether they need to be removed, archived, compressed, or relocated. Large logs, database dumps, backup archives, and temporary files frequently accumulate unnoticed. Using find with the -size option allows efficient scanning without manually inspecting directories.

In addition to identification, administrators often combine the find command with optional actions such as:

printing file paths for review

deleting confirmed unnecessary files

summarizing file sizes

executing custom cleanup scripts

Administrators may run a command such as:

find /data/records -type f -size +500M -exec ls -lh {} \;

to list files and verify their sizes before deletion.
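Once that listing has been reviewed and the files confirmed as unnecessary, GNU find’s -delete action can remove them in a single pass; this sketch assumes every match is genuinely safe to delete:

find /data/records -type f -size +500M -delete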

Using accurate size parameters is critical in environments where large files can accumulate rapidly, such as data warehouses, log servers, video processing systems, or applications generating large binary artifacts. When diagnosing storage problems, filtering by size ensures that cleanup efforts focus on impactful data rather than minor files.

Because find /data/records -type f -size +500M precisely selects all files larger than 500MB, option C is correct.

Question 102:

A Linux engineer needs to create a new group named financeops and assign a user named jordan to this group without affecting jordan’s existing primary group. Which command correctly completes this task?

A) groupadd jordan financeops
B) useradd -g financeops jordan
C) usermod -aG financeops jordan
D) passwd -g financeops jordan

Answer:

C

Explanation:

The correct command is usermod -aG financeops jordan because it appends the user jordan to the supplementary group financeops without disturbing existing group memberships. The -a option stands for append, and -G specifies one or more secondary groups. This combination ensures that jordan retains all current group associations while gaining access to the new group. This is crucial for maintaining access privileges, especially in systems where group-based permissions control file access, shared directories, and application behavior.

Option A, groupadd jordan financeops, is improperly formatted: groupadd expects a single argument naming the new group, so passing two names causes the command to fail. More fundamentally, groupadd only creates groups; it does not assign users to them, and misunderstanding this behavior could lead to incorrect system configurations.

Option B, useradd -g financeops jordan, attempts to create a new user with financeops as the primary group. In this scenario, the user jordan already exists, and using useradd again would either fail or create an additional unwanted account. Furthermore, changing a primary group is different from adding a user to a supplementary group.

Option D, passwd -g financeops jordan, is invalid. The passwd command manages user passwords, not group membership. Attempting to use passwd for group management reflects a misunderstanding of Linux account tools.

Supplementary group membership is essential for granting users access to additional resources without altering their main group identity. Many workflows, directories, and applications rely on group-based access control. For example, shared project directories may require membership in a specific group to read, write, or execute files. Using usermod -aG ensures that administrators can update group associations seamlessly.

Administrators must exercise caution when modifying group memberships. Forgetting the -a flag results in replacing all existing supplementary groups with only those specified after -G. This can remove important permissions and disrupt access. To avoid this, usermod -aG is always the correct syntax for adding groups to an existing account.
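A minimal sketch of the complete workflow, assuming the financeops group does not yet exist:

groupadd financeops
usermod -aG financeops jordan
id jordan    # financeops should now appear in the groups list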

Because usermod -aG financeops jordan is the proper command to add jordan to the financeops group without changing the primary group, option C is correct.

Question 103:

A Linux system administrator needs to monitor and analyze incoming and outgoing network packets at the interface level to troubleshoot packet drops. Which tool provides detailed packet statistics per interface?

A) ss -ant
B) ip -s link
C) tcpdump -i eth0
D) traceroute eth0

Answer:

B

Explanation:

The correct tool is ip -s link because it displays detailed packet statistics for each network interface. These statistics include packet counts, errors, dropped packets, overruns, frame errors, and collisions. This level of detail is essential for diagnosing low-level network performance issues, including faulty network cables, driver problems, incorrect MTU settings, or hardware faults.

Option A, ss -ant, lists socket connections and related statistics but does not display interface-level packet counters. It is useful for connection-related insights but cannot diagnose physical-layer or driver-level packet loss.

Option C, tcpdump -i eth0, captures live network traffic for inspection. While helpful for analyzing packet contents, it does not summarize packet drop statistics or hardware-level errors. Tcpdump is better suited for examining packet payloads and protocol-level troubleshooting.

Option D, traceroute eth0, is invalid because traceroute expects a destination hostname or IP, not an interface name. Additionally, traceroute examines routing paths rather than interface statistics.

The ip command suite is the modern replacement for older networking commands. The -s flag means statistics, and link instructs ip to show link-layer interface details. When running ip -s link, administrators see structured output that includes:

RX packet counts

RX dropped packets

RX errors

RX overruns

TX errors

TX dropped

TX carrier issues

Interface state and configuration

This allows administrators to isolate problems such as:

high packet drop rates

collisions on half-duplex links

misconfigured speed or duplex settings

buffer overruns

noisy network segments

kernel queue issues

Monitoring packet statistics is critical in environments supporting VoIP, high-throughput workloads, or latency-sensitive applications. Interface errors can degrade performance significantly, and ip -s link provides reliable visibility into hardware and driver behavior.
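For example, to focus on a single interface (eth0 here is illustrative) or to watch the counters change over time:

ip -s link show dev eth0
watch -n 2 ip -s link show dev eth0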

Because ip -s link gives precise interface statistics required for packet-drop troubleshooting, option B is correct.

Question 104:

A Linux administrator needs to adjust file access so that members of the devgroup group have read and write permissions to the directory /srv/devdata, while the directory owner retains full control. Which command sets these permissions correctly?

A) chmod 755 /srv/devdata
B) chmod 770 /srv/devdata
C) chmod 775 /srv/devdata
D) chmod 750 /srv/devdata

Answer:

B

Explanation:

The correct command is chmod 770 /srv/devdata because this grants full permissions (read, write, execute) to both the directory owner and members of the group while removing all access for others. In octal notation, 7 represents full permissions (rwx). Therefore, 770 means:

owner: rwx

group: rwx

others: no access

Option A, chmod 755, gives full permissions to the owner but only read and execute to the group. This does not allow group members to write to the directory, which is required.

Option C, chmod 775, gives read and execute permissions to others, which is not desired when the directory must be restricted to owner and group members.

Option D, chmod 750, removes write access for the group, preventing them from modifying or creating files.

Directories require execute permissions to allow users to enter them. Write permissions are necessary for creating, renaming, or deleting files within the directory. For group-collaborative directories, 770 ensures both owner and group members have identical rights, enabling unrestricted collaboration.

Administrators often pair chmod 770 with chgrp devgroup /srv/devdata to assign group ownership. This ensures that the directory belongs to the correct group and that the group permissions actually take effect.
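A minimal sketch of that pairing, assuming devgroup already exists:

chgrp devgroup /srv/devdata
chmod 770 /srv/devdata
ls -ld /srv/devdata    # expect drwxrwx--- with group devgroup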

Using correct permissions helps enforce security boundaries while enabling controlled collaboration. Allowing unnecessary access to others can expose sensitive data or violate segmentation policies, while insufficient permissions can disrupt workflows.

Because chmod 770 provides the correct combination of owner and group permissions with no access for others, option B is correct.

Question 105:

A Linux administrator needs to scan a system for open ports and detect which services are listening to help identify a potential unauthorized service. Which tool performs a comprehensive port scan and service enumeration?

A) netstat -rn
B) ss -ln
C) nmap
D) arp -n

Answer:

C

Explanation:

The correct tool is nmap because it performs detailed port scanning, service enumeration, banner grabbing, and host analysis. Nmap is widely used in security auditing, vulnerability assessment, and network exploration. It can detect open ports, identify running services, determine service versions, and even perform advanced probing for security assessment.

Option A, netstat -rn, displays routing tables and does not perform port scanning or service enumeration.

Option B, ss -ln, lists listening ports on the local machine but does not actively scan the system or enumerate service versions across the network. It is useful but limited to local inspection.

Option D, arp -n, shows MAC address resolution entries and has no relationship to port scanning or service probing.

Nmap supports numerous scan techniques, such as:

TCP SYN scan

TCP connect scan

UDP scan

Version detection

OS fingerprinting

Administrators use nmap to detect unauthorized services that may have been started intentionally or accidentally. Unauthorized open ports may indicate:

misconfigured software

malware

outdated services still running

rogue applications

unintended container exposure

Nmap’s accuracy and depth make it an ideal tool for identifying security exposures. It can differentiate between open, closed, and filtered ports, helping administrators interpret firewall behavior and access controls.
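As an illustration, a SYN scan with version detection against a hypothetical host (SYN scans require root privileges) might look like:

sudo nmap -sS -sV 192.0.2.10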

Because nmap provides comprehensive port scanning and service identification, option C is correct.

Question 106:

A Linux administrator needs to create a new filesystem on the partition /dev/sdd2 using the ext4 filesystem type. Before creation, the administrator has verified the partition is unmounted and ready. Which command correctly formats the partition as ext4?

A) mkfs -t xfs /dev/sdd2
B) mkfs.ext4 /dev/sdd2
C) format -ext4 /dev/sdd2
D) mke2fs -b ext4 /dev/sdd2

Answer:

B

Explanation:

The correct command is mkfs.ext4 /dev/sdd2 because it directly creates an ext4 filesystem on the specified partition. Ext4 is one of the most widely used Linux filesystems due to its reliability, performance, journaling capabilities, and backward compatibility with ext3 and ext2. When an administrator needs to prepare a partition for general-purpose storage, application data, or server workloads, using mkfs.ext4 is the recommended procedure.

Option A, mkfs -t xfs /dev/sdd2, would create an XFS filesystem, not ext4. While XFS is suitable for large-file workloads and specific environments, it does not meet the requirement of creating ext4.

Option C, format -ext4, is not a valid Linux command. Some administrators mistakenly assume there is a generic “format” command similar to other operating systems, but Linux uses mkfs utilities to create filesystems.

Option D, mke2fs -b ext4, is incorrect because the -b option specifies block size, not filesystem type; since ext4 is not a valid block size, the command fails. Although mke2fs can create ext4 filesystems when passed the appropriate parameter (-t ext4), the provided syntax does not request ext4. The most reliable and recommended method is to use mkfs.ext4 directly.

Creating an ext4 filesystem involves several operations, including initializing metadata structures, creating superblocks, journaling areas, group descriptors, and inode tables. Ext4 provides enhanced scalability, reduced fragmentation, extended timestamps, and improved recovery capabilities compared to earlier filesystem types. Using mkfs.ext4 ensures that all required ext4 structures are correctly initialized.

Before creating a filesystem, administrators must ensure the partition is unmounted; otherwise, the creation process could damage active data. Tools such as lsblk, blkid, mount, and df can verify whether a device is currently mounted.
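For example, lsblk can confirm that the partition carries no mountpoint before formatting:

lsblk -f /dev/sdd2    # the mountpoint column should be empty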

Once the filesystem is created, administrators often perform additional steps, such as:

creating a mount point directory

mounting the new filesystem

adding an entry to /etc/fstab for persistent mounting

For example:

mount /dev/sdd2 /mnt/newdata

and in fstab:

/dev/sdd2 /mnt/newdata ext4 defaults 0 2

This ensures that the filesystem becomes available during each system boot.

In many system administration tasks, creating new filesystems is part of provisioning new storage arrays, configuring virtual machine disks, preparing removable media, or setting up workloads requiring isolated data partitions. The mkfs.ext4 command remains one of the most essential tools for such tasks.

Because mkfs.ext4 /dev/sdd2 is the proper and direct method to format a partition as ext4, option B is correct.

Question 107:

A Linux system engineer needs to control which users can run scheduled tasks through cron. To explicitly allow only selected users to access cron, which file should be modified?

A) /etc/cron.allow
B) /etc/cron.daily
C) /etc/cron.block
D) /etc/cron.timer

Answer:

A

Explanation:

The correct file is /etc/cron.allow because this file determines which users are explicitly permitted to run cron jobs. If this file exists, only users listed in it may execute cron tasks using the crontab command. This granular control is necessary in high-security environments, multi-user servers, or systems where administrators must restrict scheduled task creation.

Option B, /etc/cron.daily, contains scripts that run daily but does not control user permissions. Adding a user here does nothing related to cron access.

Option C, /etc/cron.block, is not a standard Linux file and has no connection to cron permission management.

Option D, /etc/cron.timer, is incorrect; cron does not use timer-based configuration files. That concept is associated with systemd timers, not cron.

The cron.allow file functions in a simple, strict manner: if it exists, only users listed inside may use cron. If it does not exist, cron checks cron.deny instead. Administrators can use these two files to apply inclusive or exclusive policies:

If cron.allow exists → only users listed inside are allowed

If cron.allow does not exist → users in cron.deny are blocked

If neither file exists → only root can use cron

Entries in cron.allow should contain one username per line, with no additional formatting. Administrators typically include accounts such as:

root
adminuser
automation

This ensures that general users cannot schedule tasks that could consume resources, disrupt services, or introduce security risks.
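A hedged way to create and lock down such a file (the usernames above are illustrative):

printf '%s\n' root adminuser automation > /etc/cron.allow
chmod 600 /etc/cron.allow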

Misconfigured cron permissions can lead to serious issues, such as:

unauthorized job execution

resource exhaustion

accidental deletion or modification of files

repeated execution of harmful scripts

users bypassing policies using cron loops

Cron security plays an important role in regulated industries, shared hosting environments, and educational institutions.

Because /etc/cron.allow is the correct mechanism for controlling cron access, option A is correct.

Question 108:

A Linux administrator needs to detect filesystem changes, such as file creation, deletion, modification, or attribute changes, within the directory /srv/logs. Which tool continuously monitors directories for such activity?

A) chmod
B) inotifywait
C) tar
D) rsync

Answer:

B

Explanation:

The correct tool is inotifywait because it uses the inotify subsystem to monitor file and directory events in real time. This tool can detect actions such as creation, deletion, modification, renaming, attribute changes, and more. Administrators depend on inotifywait when they need immediate notification or automated reactions to filesystem changes.

Option A, chmod, modifies file permissions but cannot monitor changes.

Option C, tar, creates archives but does not track modifications.

Option D, rsync, synchronizes files and directories but does not monitor runtime changes. Although rsync may detect differences during synchronization, it does not provide real-time event detection.

Inotifywait is part of the inotify-tools suite, which allows administrators to automate workflows, trigger scripts, or log activity as soon as filesystem events occur. For example, an administrator may run:

inotifywait -m -e modify,create,delete /srv/logs

This produces real-time output whenever a log file is updated, created, or removed. Such monitoring is invaluable for:

debugging log rotation problems

monitoring applications that write logs unexpectedly

tracking suspicious activity on monitored directories

triggering backup or synchronization processes

automation pipelines where new files initiate processing

Inotify-based monitoring is efficient because it relies on kernel event notifications rather than continuous polling. This reduces resource consumption and increases accuracy.
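A minimal event-driven sketch, assuming a handler script exists at the hypothetical path /usr/local/bin/process-log:

inotifywait -m -e create /srv/logs | while read -r dir event file; do
    /usr/local/bin/process-log "$dir$file"    # hypothetical handler
done

Each line inotifywait emits contains the watched directory, the event name, and the filename, which read splits into the three variables used above.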

Because inotifywait provides continuous, event-driven filesystem monitoring, option B is correct.

Question 109:

A Linux administrator needs to determine which process is locking a particular file, preventing other applications from accessing it. Which command identifies the process using that file?

A) ls -l
B) ps -ef
C) lsof
D) fg

Answer:

C

Explanation:

The correct command is lsof because it lists open files and shows which processes are associated with them. Open files include regular files, directories, sockets, pipes, and devices. When an application cannot access a file because another process is holding it open, lsof reveals the blocking process so the administrator can decide whether to stop it, kill it, or modify system behavior.

Option A, ls -l, lists file metadata such as permissions and ownership but does not show active file usage or which processes are interacting with the file.

Option B, ps -ef, lists processes but provides no information about which files they have open. Without correlation to file descriptors, it cannot identify file locks.

Option D, fg, brings background jobs to the foreground but is limited to shell job control and not relevant to filesystem locks.

Lsof is essential when diagnosing problems such as:

log files failing to rotate because a process still has them open

devices refusing to unmount due to active file handles

applications failing to write to files due to locks

unexpected behavior caused by daemons holding open temporary files

storage systems refusing cleanup because files remain open

Running lsof filename displays the PID, user, type of file descriptor, and full command using the file. Administrators can then determine appropriate action, such as restarting a service or terminating a runaway process.
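For example, with a hypothetical path:

lsof /srv/data/report.db    # shows COMMAND, PID, USER, and FD for each holder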

Because lsof directly identifies which process is using a file, option C is correct.

Question 110:

A Linux engineer must analyze memory usage by each running process to identify which applications are consuming excessive RAM. Which tool provides a real-time, interactive view of process-level memory consumption?

A) free
B) top
C) fdisk
D) uptime

Answer:

B

Explanation:

The correct tool is top because it provides an interactive, real-time display of system processes, including CPU usage, memory usage, swap usage, and various performance metrics. Top allows administrators to sort processes by memory consumption, making it easy to identify applications using large amounts of RAM.

Option A, free, displays overall memory statistics but cannot break usage down by process.

Option C, fdisk, handles disk partitioning and is completely unrelated to memory monitoring.

Option D, uptime, shows system load averages and uptime duration but does not display memory or process information.

Top is an essential diagnostic tool. It provides key insights such as:

resident memory usage (RES)

virtual memory usage (VIRT)

shared memory usage (SHR)

actual CPU consumption

process identifiers

thread counts

process owners

dynamic updates every few seconds

Administrators rely on top when troubleshooting memory pressure, swap thrashing, or slow system performance. If an application leaks memory, its RES value steadily increases, making top invaluable for identification.
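For instance, in procps-ng versions of top, the initial sort field can be set on the command line:

top -o %MEM    # start top already sorted by memory share

Pressing Shift+M inside a running top produces the same ordering interactively.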

In addition to observation, top allows administrators to interactively manage processes by sending signals, adjusting priorities, or terminating tasks.

Because top offers real-time, process-level memory analysis, option B is correct.

Question 111:

A Linux administrator needs to check the current SELinux security mode to confirm whether the system is in enforcing, permissive, or disabled mode. Which command displays the active SELinux mode immediately?

A) selinuxstatus
B) getenforce
C) setsebool
D) sestatus -h

Answer:

B

Explanation:

The correct command is getenforce because it provides an immediate, concise output showing whether SELinux is currently operating in enforcing, permissive, or disabled mode. This command is essential when administrators need a quick assessment of the system’s security state without retrieving additional detailed information. In many environments, SELinux plays a key role in restricting access to system resources, controlling process behaviors, and implementing mandatory access control. Therefore, administrators frequently check the active mode before testing services, installing applications, or debugging access issues.

Option A, selinuxstatus, is not a valid command. Although its name appears related to SELinux, it is not part of typical SELinux utility sets and cannot be used to check mode information.

Option C, setsebool, modifies SELinux boolean values, which control optional security behaviors. While setsebool is useful when adjusting policy configurations or tuning SELinux rules, it does not provide information on the system’s active SELinux mode.

Option D, sestatus -h, displays help information for the sestatus command rather than SELinux status details. The help option does not report mode information and is therefore not useful for determining whether SELinux is enforcing or permissive.

Getenforce displays a simple output such as:

Enforcing
Permissive
Disabled

This allows immediate validation of security level. SELinux modes have specific implications:

Enforcing: SELinux actively blocks unauthorized actions. Policies are fully enforced.

Permissive: SELinux logs policy violations but does not block operations. Useful for troubleshooting.

Disabled: SELinux functionality is turned off entirely.

Administrators often use getenforce before and after modifying SELinux modes. For example, to temporarily switch to permissive mode for troubleshooting, an administrator may run setenforce 0 and then immediately confirm the change using getenforce. Because SELinux policies can be complex, especially for services such as web servers, mail servers, or database applications, verifying mode is crucial before attempting to solve application access failures.
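That check-and-confirm sequence might look like the following (setenforce requires root and does not survive a reboot):

setenforce 0
getenforce    # now prints: Permissive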

Checking SELinux mode is also important before system hardening audits, migration tasks, or service deployments where compliance frameworks require certain configurations. Some applications behave differently depending on SELinux mode, and administrators need reliable feedback on the current environment.

Because getenforce directly displays the system’s active SELinux mode with minimal output and no additional options required, it is the correct answer.

Question 112:

A Linux engineer needs to trace system calls made by a running process to diagnose unexpected behavior, such as failures when calling system libraries. Which command attaches to an existing process and displays real-time system call activity?

A) dmesg -k
B) strace -p PID
C) lscpu
D) file PID

Answer:

B

Explanation:

The correct command is strace -p PID because strace allows administrators to attach to a running process and observe every system call it makes in real time. System calls include file operations, network operations, memory operations, permission checks, and interactions with the kernel. When an application behaves unexpectedly—such as failing to read configuration files, encountering permission errors, or failing to open sockets—strace reveals precisely where the breakdown occurs.

Option A, dmesg -k, displays kernel ring buffer messages. While dmesg helps diagnose kernel-level events such as driver issues or hardware problems, it does not provide detailed insights into system calls made by user-space applications.

Option C, lscpu, shows CPU architecture information but is irrelevant to diagnosing system call behavior.

Option D, file PID, is invalid because file is used to inspect file types, not process behavior.

Strace allows administrators to observe granular operations such as open, read, write, close, socket creation, memory allocation requests, and permission checks. When troubleshooting, strace can reveal issues including:

missing configuration files

permission-denied errors due to incorrect user contexts

dependency failures where shared libraries cannot be loaded

incorrect path references in applications

system resource exhaustion

unexpected signals terminating processes

For example, if a program fails to start and displays a generic error message but strace shows repeated attempts to open a nonexistent file, the administrator now has a direct lead on what to fix.
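A hedged invocation for that kind of investigation, using a hypothetical PID and a filter that limits output to file-open calls:

sudo strace -p 4321 -e trace=openat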

Strace also helps in debugging applications that stall. If a process hangs, attaching strace reveals whether it is waiting on network input, file I/O, or a blocking system call. This is critical when diagnosing:

deadlocks

infinite loops

slow network responses

timeouts

Administrators must use strace carefully because attaching it to heavily loaded production processes may slow performance. However, for investigative troubleshooting, strace is one of the most effective tools for diagnosing low-level interactions.

Because strace -p PID attaches to a running process and displays system calls in real time, it is the correct answer.

Question 113:

A Linux administrator needs to view current firewall rules configured through nftables, including chains, tables, and counters. Which command provides a full listing of the active nftables configuration?

A) nft flush ruleset
B) nft list ruleset
C) nft new table filter
D) iptables -L

Answer:

B

Explanation:

The correct command is nft list ruleset because it outputs the entire active nftables configuration, including all tables, chains, counters, and rules. This command is the nftables equivalent of viewing the entire rule structure at once. Nftables is the modern replacement for iptables and provides a more efficient and flexible firewalling architecture with improved performance, a unified structure, and more granular control.

Option A, nft flush ruleset, clears all firewall rules. This is destructive and does not display configuration.

Option C, nft new table filter, creates a new table but does not display existing structures.

Option D, iptables -L, lists iptables rules, not nftables rules. Modern systems using nftables may emulate iptables commands through a compatibility layer, but this does not reflect actual nftables structures accurately.

The nft list ruleset command provides detailed information such as:

defined tables (filter, nat, raw, mangle, etc.)

custom chains

hook points (such as input, forward, output)

rule counters

traffic statistics

matching conditions and actions

This is critical for auditing firewall behavior, diagnosing connectivity issues, or confirming that policies have been applied correctly. Administrators often view rulesets when investigating:

blocked traffic

unexpected firewall behavior

NAT translation issues

interface-specific filtering

rate-limiting behavior
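On busy systems the full ruleset can be long, so output can also be narrowed to a single table; the inet filter table shown here is a common default layout, not guaranteed on every distribution:

nft list table inet filter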

Because nft list ruleset provides the complete active configuration, option B is correct.

Question 114:

A Linux engineer needs to temporarily elevate priority for a specific running process, giving it more CPU time during heavy workload operations. Which command increases the scheduling priority (lowers the nice value) of a running process?

A) renice -n 10 PID
B) renice -n -5 -p PID
C) nice –adjust=20 PID
D) top -a PID

Answer:

B

Explanation:

The correct command is renice -n -5 -p PID because lowering the nice value increases the scheduling priority of the process. Linux nice values range from -20 (highest priority) to +19 (lowest priority). Renice allows administrators to change this value for a running process, and using a negative nice value effectively increases the CPU share that the process receives.

Option A, renice -n 10, increases the nice value, resulting in lower priority, not higher.

Option C, nice –adjust=20, applies only to new processes, not running ones.

Option D, top -a PID, is invalid because top is used for monitoring and cannot set priorities directly. Although top has interactive renice capabilities for selected processes, top -a PID does nothing useful.

Raising priority is important for processes requiring rapid execution, such as:

data compression tasks

scientific computations

high-value analytics workloads

major batch transactions

system-critical applications needing smooth performance

However, administrators must use renice cautiously. Increasing priority for one process can starve others, especially on systems with limited CPU resources.
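A hedged example with a hypothetical PID, followed by verification of the new nice value (negative values require root):

sudo renice -n -5 -p 4821
ps -o pid,ni,comm -p 4821    # the NI column should now read -5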

Because renice -n -5 -p PID raises priority by lowering the nice value, option B is correct.

Question 115:

A Linux administrator needs to determine which services are configured to start automatically at boot using systemd. Which command displays a list of enabled systemd services?

A) systemctl list-timers
B) systemctl list-unit-files --state=enabled
C) systemctl status
D) systemctl isolate multi-user.target

Answer:

B

Explanation:

The correct command is systemctl list-unit-files --state=enabled because it provides a list of all systemd unit files that are configured to start automatically when the system boots. This includes essential services, custom services, and any additional units that administrators have enabled manually. The output shows unit names and their enabled state, giving administrators a clear overview of the system’s startup configuration.

Option A, systemctl list-timers, shows active timers, not services.

Option C, systemctl status, displays the status of a specific service but does not list all enabled services.

Option D, systemctl isolate multi-user.target, switches the system to a different target and does not list services. This can disrupt system operation and should not be used for inspection.

Reviewing which services are enabled is crucial for:

startup troubleshooting

performance optimization

security auditing

identifying unnecessary or unwanted services

verifying system hardening compliance

Enabled services start automatically at boot and often remain running, consuming resources or providing functionality. Administrators must ensure only necessary services run to minimize potential attack surfaces and optimize system performance.
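To narrow the output further, the --type filter can be combined with --state so that only services appear:

systemctl list-unit-files --type=service --state=enabled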

Because systemctl list-unit-files --state=enabled provides a clear list of all startup-enabled services, option B is correct.

Question 116:

A Linux administrator needs to ensure that a service starts only after the network is fully online, not merely available at a basic level. Which systemd directive within a unit file guarantees that the service waits for complete network connectivity before starting?

A) After=network.target
B) Wants=multi-user.target
C) After=network-online.target
D) Before=network-pre.target

Answer:

C

Explanation:

The correct directive is After=network-online.target because this ensures that the service begins only when the network is fully initialized and functional. Systemd provides multiple targets to define dependency ordering. The basic network.target indicates the network stack is configured at a minimal level, but it does not guarantee that network interfaces have acquired IP addresses, resolved routes, or completed DHCP assignments. Many services depend on complete, stable network availability. Therefore, they must wait for network-online.target rather than the more generic network.target.

Option A, After=network.target, is insufficient for services requiring full network readiness. This target merely ensures that basic networking is brought up, but it does not guarantee that interfaces are fully operational. For example, DHCP-based systems may still be negotiating IP addresses when network.target is reached.

Option B, Wants=multi-user.target, has no relevance to network readiness. Instead, it indicates that the service desires the multi-user target but does not enforce ordering or connectivity readiness requirements.

Option D, Before=network-pre.target, is incorrect because network-pre.target is executed early in the boot sequence, before most network configuration tasks occur. Services requiring full connectivity must not run before primary networking configurations complete.

Network-online.target manages dependencies through units such as network-online.service, provided by network managers like NetworkManager or systemd-networkd. These services wait until interfaces have successfully completed their configuration processes. For example, DHCP clients must request and receive IP leases, domain name resolution may need to be validated, and routing tables may need to be applied. Only after these tasks complete does systemd consider the network fully online, thereby triggering network-online.target.

Administrators frequently encounter situations requiring full network readiness before starting a service. These include:

Applications requiring database connectivity

Services relying on remote APIs

Distributed logging systems

Backup servers connecting to remote storage

Cluster services that depend on remote nodes

Time synchronization daemons contacting external servers

Cloud configuration tools requiring full network routes

Using the correct target avoids issues such as services starting too early, failing due to missing connectivity, crashing on startup, or looping during reconnection attempts. It also ensures predictable behavior during boot.

Unit files calling After=network-online.target often also include Wants=network-online.target to ensure that the network-online process itself is brought up correctly.
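A minimal unit-file sketch showing that pairing (the service name and binary path are illustrative):

[Unit]
Description=Example service that needs full network connectivity
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/example-sync

[Install]
WantedBy=multi-user.target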

Because After=network-online.target ensures complete network readiness before service startup, option C is the correct answer.

Question 117:

A Linux engineer must configure a firewall rule using firewalld to allow only HTTPS traffic from subnet 10.10.5.0/24 into the public zone. Which command properly adds this rule permanently?

A) firewall-cmd --zone=public --add-port=443/tcp
B) firewall-cmd --add-rich-rule="rule family='ipv4' port port='443' protocol='tcp' accept"
C) firewall-cmd --permanent --zone=public --add-rich-rule="rule family=ipv4 source address=10.10.5.0/24 port port=443 protocol=tcp accept"
D) firewall-cmd --zone=trusted --add-service=https

Answer:

C

Explanation:

The correct command is firewall-cmd --permanent --zone=public --add-rich-rule="rule family=ipv4 source address=10.10.5.0/24 port port=443 protocol=tcp accept" because this rule restricts access to HTTPS traffic originating specifically from subnet 10.10.5.0/24, places the rule inside the public zone, and persists across system reboots by using the --permanent option. Firewalld rich rules provide granular control over source networks, ports, protocols, and actions.

Option A, firewall-cmd –zone=public –add-port=443/tcp, opens port 443 to all sources, not just the specified subnet. It also lacks the –permanent option unless paired with additional commands.

Option B adds a rich rule but does not specify the source network. This opens HTTPS access to all IPv4 clients, which violates the requirement to restrict access only to the subnet.

Option D changes the trusted zone, not the public zone, and applies to all incoming HTTPS traffic rather than restricting by subnet. The trusted zone bypasses filtering entirely, making it excessively permissive.

Firewalld rich rules are powerful tools because they allow precise control not achievable with simple port commands. With rich rules, administrators can enforce:

source-based filtering

port-based conditions

interface-specific conditions

logging actions

rate limits

acceptance or rejection behavior

Using --permanent ensures the rule persists beyond system reboots, and administrators must reload firewalld after applying permanent rules:

firewall-cmd --reload

Without reloading, the rule exists only in firewalld’s configuration but does not actively apply to runtime traffic.
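Once reloaded, the active rule can be confirmed for the zone:

firewall-cmd --zone=public --list-rich-rules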

Restricting HTTPS traffic to a specific subnet is common in controlled environments such as:

internal administrative panels

secure internal-only web services

limited-access application endpoints

environment isolation between departments

compliance environments requiring subnet-based segmentation

Using the correct rich rule protects the system from unauthorized traffic while allowing necessary subnet access. Because option C meets all requirements, it is the correct answer.

Question 118:

A Linux administrator must identify all shared library dependencies of a compiled application named dataengine before deploying it to production. Which command displays the shared libraries required by this application?

A) cat dataengine | grep lib
B) readelf -h dataengine
C) ldd dataengine
D) nm dataengine

Answer:

C

Explanation:

The correct command is ldd dataengine because it prints all shared libraries that the executable depends on. This includes their resolved paths, whether they are found on the system, and their memory addresses at runtime linkage. Understanding shared library dependencies is crucial for deploying applications correctly, especially when moving binaries between systems or preparing containers.

Option A, piping dataengine through cat and grep, is meaningless because binary files contain unreadable data. This method does not reliably identify library dependencies.

Option B, readelf -h, displays ELF header information but not shared library dependencies. The ELF header shows basic metadata only.

Option D, nm, lists symbols from object files but does not show which shared libraries the executable depends on.

Using ldd allows administrators to determine:

whether required libraries exist on the system

whether library paths are correct

whether a missing dependency will cause runtime errors

whether symlinks or library versions are mismatched

whether additional packages need to be installed

For instance, ldd reveals items like:

libc.so.6 => /lib64/libc.so.6
libm.so.6 => /lib64/libm.so.6

Administrators often use this information to install missing dependencies using package managers or to statically compile applications when dynamic linking is problematic.
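A quick pre-deployment check for unresolved dependencies might be:

ldd dataengine | grep 'not found'    # any output indicates a missing library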

Because ldd provides clear shared library dependency information, option C is correct.

Question 119:

A Linux engineer needs to copy only files that have changed from /source/data to /backup/data while preserving ownership, permissions, and timestamps. Which command performs this synchronization efficiently?

A) cp -r /source/data /backup/data
B) rsync -av /source/data /backup/data
C) mv /source/data /backup/data
D) dd if=/source/data of=/backup/data

Answer:

B

Explanation:

The correct command is rsync -av /source/data /backup/data because rsync efficiently copies only changed files, preserves metadata such as permissions and ownership, and provides incremental synchronization capabilities. The -a flag enables archive mode, which ensures attribute preservation, while -v provides verbose output.

Option A, cp -r, performs a full recursive copy every time and does not optimize by copying only changed files. It also may not preserve all attributes without using additional flags.

Option C, mv, relocates files rather than synchronizing them, making it inappropriate for backup purposes.

Option D misuses dd, which performs raw data copying and is not suited for directory-based file synchronization.

Rsync is ideal for backup operations, replication tasks, and maintaining mirror directories. It uses file checksums, timestamps, and optional block-level comparison to minimize data transfer. Administrators often create cron jobs or automation scripts utilizing rsync to maintain consistent backup sets.
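A typical scheduled invocation might be a nightly cron entry such as the following (the 2 a.m. schedule and log path are illustrative):

0 2 * * * rsync -av /source/data /backup/data >> /var/log/backup-rsync.log 2>&1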

Because rsync -av is optimized for incremental, attribute-preserving copying, option B is correct.

Question 120:

A Linux administrator must verify that a systemd timer named cleanup.timer is correctly configured and determine when it is next scheduled to run. Which command displays this timer’s status?

A) systemctl reload cleanup.timer
B) systemctl list-units –type=service
C) systemctl status cleanup.timer
D) timedatectl status cleanup.timer

Answer:

C

Explanation:

The correct command is systemctl status cleanup.timer because it shows the timer’s active state, last activation, next scheduled run, and linked service. Systemd timers are used to schedule task execution in place of cron, offering dependency control, precise time control, and integration with systemd service units.

Option A reloads the timer but does not display status.

Option B lists service units only, not timers.

Option D is used for system time management and has no relevance to timers.

Systemctl status cleanup.timer provides information including:

whether the timer is active

the last time the timer triggered

the next time it will run

triggers such as OnCalendar or OnUnitActiveSec

links between timer and service unit

This ensures administrators can confirm schedule accuracy and verify that automation is functioning.
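The next elapse time can also be viewed through list-timers, which accepts a unit name or pattern:

systemctl list-timers cleanup.timer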

Because systemctl status cleanup.timer reveals all required scheduling details, option C is correct.

 
