Question 141:
A Linux administrator needs to determine which systemd services are currently queued for start-up, stop, or reload operations but have not yet completed their transitions. Which command displays the list of active systemd jobs waiting to be processed?
A) systemd-analyze jobs
B) systemctl list-jobs
C) journalctl -u jobs
D) systemctl queued
Answer:
B
Explanation:
The correct command is systemctl list-jobs because it displays all currently scheduled systemd jobs, including those pending execution, in progress, or waiting for dependencies. A systemd job represents an action systemd must perform on a unit, such as starting, stopping, restarting, reloading, or isolating it. By listing these jobs, administrators gain insight into tasks that systemd is currently processing or is waiting to process once prerequisites are met.
Option A, systemd-analyze jobs, is not a valid systemd-analyze subcommand. While systemd-analyze provides boot performance metrics and dependency visualizations, it does not list pending jobs or unit transitions.
Option C, journalctl -u jobs, attempts to query logs for a unit named “jobs,” which does not exist. Journalctl filters logs by unit name, not by systemd job queue status, making this option irrelevant.
Option D, systemctl queued, is not an actual systemctl subcommand. The systemctl tool uses specific subcommands like status, start, stop, restart, enable, disable, and list-jobs; there is no “queued” command available.
Systemctl list-jobs provides essential output for diagnosing:
delayed services
stuck services waiting on dependencies
slow boot sequences
conflicting unit operations
services blocked due to ordering constraints
failed jobs preventing additional activations
The job queue is particularly important when diagnosing why a service has not started when expected. A unit might be waiting for network initialization, dependent mounts, device availability, or completion of another job. Because systemd parallelizes many operations, it intelligently waits for certain events or other units to reach stable states before proceeding. The job list exposes these wait states and helps administrators determine where bottlenecks exist.
A typical job might be shown as:
123 start network-online.target waiting
124 start sshd.service running
125 stop postfix.service waiting
Each job contains a numerical ID, the action (start, stop, reload), the target unit, and the current state (waiting, running). Administrators can analyze these states to identify misconfigurations or unexpected delays. For example, if a service continually waits on network-online.target, it may indicate a slow DHCP server or misconfigured interface. If a job never transitions because a dependency unit fails, the administrator can resolve the dependency failure to allow the queued job to proceed.
Systemctl list-jobs also plays a role when administrators issue commands like:
systemctl isolate rescue.target
This triggers a large set of jobs to stop unnecessary services and start the rescue environment. Listing jobs during this process helps track progress.
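If a job never leaves the waiting state, it can also be removed from the queue by its numeric ID; the ID below matches the sample output above and is purely illustrative:
systemctl list-jobs
systemctl cancel 123
Running systemctl cancel with no argument clears every queued job, which is occasionally useful when a broken transaction blocks all further activations.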
Because systemctl list-jobs is the correct and only command that displays the active job queue, option B is correct.
Question 142:
A Linux engineer needs to set a sticky bit on the shared directory /shared/data so multiple users can collaborate while preventing users from deleting each other’s files. Which command applies the correct permissions?
A) chmod 777 /shared/data
B) chmod 755 /shared/data
C) chmod 1777 /shared/data
D) chmod 2777 /shared/data
Answer:
C
Explanation:
The correct command is chmod 1777 /shared/data because the leading 1 sets the sticky bit, while the remaining 777 ensures full read, write, and execute permissions for all users. The sticky bit is a security and collaboration mechanism that ensures users can delete only their own files—even when the directory is world-writable.
Option A, chmod 777, makes the directory fully writable by all users but does not prevent users from deleting files owned by others. This leads to potential data loss, making it unsuitable for collaborative environments.
Option B, chmod 755, only gives write access to the owner of the directory. Other users can read or enter the directory but cannot create or remove files, which defeats the purpose of providing a shared writable workspace.
Option D, chmod 2777, sets the setgid bit rather than the sticky bit. While the setgid bit ensures that new files inherit the group of the directory (useful in group-collaboration scenarios), it does not prevent users from deleting one another’s files. Although setgid is often paired with sticky bits in shared directories, it alone does not meet the requirements.
A directory with the sticky bit set is recognizable by a t in the permission string:
drwxrwxrwt
This is seen in directories such as /tmp, which allow all users to create files but prohibit deletion of others’ files. The sticky bit ensures that:
the file owner
the directory owner
root
are the only users permitted to delete or rename files within the directory.
This mechanism is vital in multi-user environments such as:
shared project workspaces
university lab systems
collaborative development environments
public directories where users exchange files
shared storage in enterprise systems
Without the sticky bit, users could overwrite or delete important files belonging to others, leading to security risks, accidental data removal, and significant disruptions.
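A brief sketch of applying and then verifying the permission on the directory from the question:
chmod 1777 /shared/data
ls -ld /shared/data    # the permission string should now end in t, e.g. drwxrwxrwt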
Thus, applying chmod 1777 /shared/data ensures the directory is both world-writable and protected by the sticky bit. Because this meets the requirement to allow collaboration while preventing file deletion across users, option C is correct.
Question 143:
A Linux administrator must configure a systemd service to start only after a specific mount point, /mnt/storage, becomes available. Which directive should be added in the service unit file to enforce this ordering requirement?
A) Before=mnt-storage.mount
B) Wants=mnt-storage.mount
C) Requires=mnt-storage.mount
D) After=mnt-storage.mount
Answer:
D
Explanation:
The correct directive is After=mnt-storage.mount because the After= directive ensures that the specified unit reaches an active state before the current service is started. In systemd, mount points are treated as units, typically named according to their path, with slashes converted to dashes (for example, /mnt/storage becomes mnt-storage.mount). By specifying After=mnt-storage.mount, the service will not begin until the mount point is fully initialized.
Option A, Before=mnt-storage.mount, reverses the ordering and makes the service start before the mount point, which is the opposite of what is required.
Option B, Wants=mnt-storage.mount, creates a weak dependency but does not guarantee ordering. Wants= pulls the mount unit into the same transaction as the service, but the mount may be started in parallel with, or even after, the service unless an ordering directive is also applied.
Option C, Requires=mnt-storage.mount, enforces that the mount point must exist for the service to run, but like Wants=, it does not impose startup order. Requires= ensures the service fails if the mount is absent but does not ensure that the mount is active before the service launches.
Correct dependency ordering in systemd usually requires two directives:
Requires=mnt-storage.mount
After=mnt-storage.mount
The Requires= directive ensures the mount exists, while After= controls the start order. Without After=, the system may attempt to start the service and mount simultaneously, risking failures if the service references the directory before the mount is ready.
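A minimal unit-file sketch combining both directives; the description and executable path are hypothetical:
[Unit]
Description=Example service that reads data from /mnt/storage
Requires=mnt-storage.mount
After=mnt-storage.mount

[Service]
ExecStart=/usr/local/bin/storage-consumer
Systemd also offers the RequiresMountsFor=/mnt/storage directive, which adds both the requirement and the ordering dependency on the mount unit in a single line.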
Mount dependencies are crucial in scenarios where services access storage directly for:
reading configuration files
writing logs
storing database data
performing backup operations
accessing shared or remote filesystems
If the service starts prematurely, it may:
write to the wrong directory
create directories that block mount operations
fail due to missing files
experience corrupted configuration
generate misleading error messages
Using After=mnt-storage.mount guarantees predictable startup behavior, ensuring that the mount point is fully operational before the service interacts with it. Thus, option D is correct.
Question 144:
A Linux engineer needs to force a running process to immediately write all modified memory pages to disk rather than waiting for the kernel’s writeback mechanism. Which command triggers this action system-wide?
A) sync
B) flushmem
C) dropcache --write
D) fsflush
Answer:
A
Explanation:
The correct command is sync because sync instructs the kernel to flush all buffered data to disk immediately. This forces the writeback of dirty pages from memory into persistent storage. The kernel normally delays writes for performance reasons, batching operations and reducing disk I/O overhead. However, in situations requiring data integrity, administrators can use sync to prevent data loss.
Option B, flushmem, is not a Linux command and provides no memory or disk flushing functionality.
Option C, dropcache --write, is invalid because drop_caches is a kernel parameter used for clearing filesystem buffers but does not trigger write-out of pending memory pages. It also is not executed through a dropcache command but instead via echoing values into /proc/sys/vm/drop_caches, which serves a different purpose entirely.
Option D, fsflush, is not a standard Linux command and does not exist in typical Linux distributions.
Sync forces the flushing of:
filesystem journals
dirty buffers
delayed write pages
metadata waiting for commits
Administrators use sync before:
shutting down systems in recovery mode
unmounting filesystems
performing risky operations on disks
testing filesystems
power-off in embedded or remote systems
copying large amounts of critical data
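For example, before manually unmounting a filesystem (one of the scenarios above), a minimal sketch is:
sync
umount /mnt/backup    # the mount point is illustrative
On recent GNU coreutils versions, sync also accepts file arguments, and sync -f FILE flushes the whole filesystem containing FILE rather than every filesystem on the machine.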
Because sync forces immediate flush of in-memory data to disk, option A is correct.
Question 145:
A Linux administrator must view all kernel modules currently loaded into the system, including their memory sizes and dependencies. Which command provides this comprehensive list?
A) lsmod
B) modinfo
C) modprobe -l
D) insmod -v
Answer:
A
Explanation:
The correct command is lsmod because it lists all loaded kernel modules, showing their names, sizes, and dependent modules. Kernel modules provide hardware support, filesystem handlers, networking protocols, and other kernel-level features. Monitoring loaded modules helps administrators understand which components the kernel is actively using.
Option B, modinfo, provides details about a single module but does not list all loaded modules.
Option C, modprobe -l, lists modules available on the system but not those actively loaded into the kernel.
Option D, insmod -v, loads a module but does not display loaded modules.
Lsmod displays modules in a tabular form with:
module name
size in memory
list of dependent modules
Administrators use lsmod to diagnose:
driver issues
hardware detection problems
unnecessary modules consuming memory
conflicts between modules
troubleshooting missing dependencies
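A representative excerpt of lsmod output; the module names are real, but the sizes shown are illustrative:
Module                  Size  Used by
ext4                  749568  1
mbcache                16384  1 ext4
jbd2                  114688  1 ext4
Here the Used by column shows that ext4 is currently in use once (a mounted filesystem) and that mbcache and jbd2 are held in memory because ext4 depends on them.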
Because lsmod provides a full list of currently loaded modules, option A is correct.
Question 146:
A Linux administrator must configure a swap file on a system that currently has no swap space. The swap file should be 4GB in size and activated permanently so it persists across reboots. Which sequence of commands correctly creates and enables the swap file?
A) dd if=/dev/zero of=/swapfile bs=1M count=4000; chmod 600 /swapfile; mkswap /swapfile; swapon /swapfile
B) fallocate -l 4M /swapfile; chmod 600 /swapfile; mkswap /swapfile; swapon /swapfile
C) mkdir /swapfile; chmod 777 /swapfile; swapon /swapfile
D) swapon /dev/swap4g; systemctl enable swap.target
Answer:
A
Explanation:
The correct answer is option A because it provides the full and proper sequence required to create a 4GB swap file, secure it, format it for swap usage, and enable it immediately. Configuring swap space is a foundational administrative task in Linux, and doing it correctly is essential for system stability, performance, and memory management. The sequence in option A follows best practices used across all major Linux distributions.
The first step uses dd to create the swap file by writing zeros. Using dd if=/dev/zero of=/swapfile bs=1M count=4000 generates a fully allocated file of 4000 MiB, approximately 4 GB (count=4096 would make it exactly 4 GiB). Many administrators prefer dd over fallocate because some filesystems or kernel versions may produce swap file incompatibilities when using fallocate; dd therefore ensures broad compatibility.
After the file is created, chmod 600 /swapfile secures the file’s permissions. This is critical because swap space may contain sensitive information such as memory fragments, passwords, encryption keys, or login data. With mode 600, only the root user may read or write to the file. Not applying proper permissions would pose a security risk.
The next step, mkswap /swapfile, formats the file as a valid swap area. Without this step, the kernel does not recognize the file as swap space. Mkswap writes necessary metadata to make the file usable for virtual memory operations.
Then, swapon /swapfile activates the swap file immediately. This means the system will begin using it without requiring a reboot. Administrators can verify the successful activation by checking output from free -h or checking the /proc/swaps file.
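A quick verification sketch after activation:
swapon --show    # lists active swap areas with size and current usage
free -h          # the Swap total should now reflect the new file
The swapon --show option is provided by util-linux on current distributions; cat /proc/swaps gives equivalent information everywhere.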
To make the swap permanent, the administrator must add an entry to /etc/fstab such as:
/swapfile swap swap defaults 0 0
This step is not shown in the answers but is standard practice for persistence.
Option B is incorrect primarily because fallocate -l 4M creates a 4-megabyte file, not 4 gigabytes. This is far too small to be useful as swap. Additionally, fallocate is not always reliable for swap files depending on the underlying filesystem.
Option C is incorrect because mkdir creates a directory rather than a file. Swap cannot be a directory; it must be a file or a dedicated partition. Furthermore, permissions set to 777 are insecure and dangerous.
Option D assumes the existence of a block device named /dev/swap4g, which is nonstandard unless manually created. It also incorrectly assumes that enabling swap.target automatically activates swap space, which is not true.
Swap files help systems prevent out-of-memory events, support hibernation, and improve performance in workloads that occasionally exceed RAM. Incorrect swap configuration may lead to instability, kernel OOM events, or performance degradation. Option A is the only answer that follows correct, secure, and functional procedures, making it the correct choice.
Question 147:
A Linux engineer needs to identify which system component is generating abnormal disk write activity. The administrator suspects a process is writing excessively to storage, causing performance degradation. Which tool provides real-time per-process disk I/O statistics?
A) vmstat 5
B) iotop
C) free -h
D) sar -d 1
Answer:
B
Explanation:
The correct answer is iotop because it provides detailed, real-time per-process disk I/O statistics, allowing administrators to identify exactly which processes are performing significant read or write operations. This is essential when diagnosing storage-related performance issues or identifying misbehaving applications that excessively or unexpectedly write to disk.
Iotop displays metrics such as process ID, user, disk read rate, and disk write rate. It also shows I/O priority levels and whether operations are caused by user processes or kernel threads. This level of detail allows administrators to pinpoint the root cause of unusual disk activity quickly.
Option A, vmstat 5, reports general system activity including memory, process, and I/O statistics, but it does not break down disk activity by process. Therefore, an administrator using vmstat would only see overall disk waits or throughput, not which process is responsible.
Option C, free -h, displays memory usage information including RAM and swap consumption, but it provides no insight into disk I/O operations.
Option D, sar -d 1, provides device-level statistics such as disk utilization percentages and read/write rates per block device. Although sar is powerful for long-term performance trend analysis, it also does not provide per-process insight.
Disk I/O bottlenecks can be caused by many issues including runaway log files, poorly optimized applications, misconfigured databases, constant writing to temporary directories, or backup utilities running unexpectedly. Without a per-process view, these issues can be difficult to diagnose.
Iotop is particularly useful in environments where:
virtual machines are hosted
container workloads perform unexpected writes
databases and log-heavy services operate
high-performance systems depend on fast I/O
Administrators can use iotop to see whether an application is continuously writing small amounts of data, performing large sequential writes, or generating bursts of activity. Once identified, the administrator may adjust application settings, reduce logging verbosity, or examine why an application is writing excessively.
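A common hedged invocation looks like this; both flags are standard iotop options:
sudo iotop -o -P    # -o shows only processes actually doing I/O, -P aggregates threads into processes
Adding -a switches the display to accumulated I/O since iotop started, which is convenient for catching processes that write in short bursts.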
For systems experiencing slow performance caused by high I/O wait percentages, iotop becomes an essential diagnostic tool. Because it is the only option that provides per-process disk write visibility in real time, option B is correct.
Question 148:
A Linux administrator must diagnose an issue where certain system calls intermittently fail due to insufficient permissions. The administrator wants a tool that can log specific system call invocations with fine-grained control and rule-based auditing, rather than just tracing them. Which tool should be used?
A) strace
B) auditd
C) psacct
D) perf trace
Answer:
B
Explanation:
The correct tool is auditd because it provides rule-based, kernel-level auditing that captures specific system call activity, including failures caused by permission issues. Auditd allows administrators to define rules that precisely specify which syscalls should be logged, under what circumstances, for which users or processes, and with what detail. This is essential for diagnosing intermittent syscall failures that may not be easily reproducible.
Strace, although useful for debugging live system call behavior, is not practical for long-term monitoring or intermittent issues because it requires actively attaching to a process and cannot persist auditing rules. Strace also captures every system call, which may be too noisy unless filtered, and is not designed for persistent security auditing.
Psacct, or process accounting, maintains logs of executed commands, CPU usage, and login durations but does not monitor system calls or permission-related failures.
Perf trace, while capable of showing syscall events, is intended for performance analysis and does not serve as a forensic auditing tool. It lacks the persistent, rule-driven nature of auditd.
Auditd allows administrators to create detailed audit rules. For example, an administrator can instruct auditd to log syscalls such as open, chmod, or execve when they fail with specific error codes such as permission denied. Audit logs stored by auditd provide historical information for extended troubleshooting, investigation, or compliance reporting.
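As a sketch, a rule that records failed openat calls caused by permission errors might look like this; the key name perm-denied is arbitrary:
auditctl -a always,exit -F arch=b64 -S openat -F exit=-EACCES -k perm-denied
ausearch -k perm-denied    # review matching events later
To survive reboots, the same rule is normally placed in a file under /etc/audit/rules.d/ rather than issued with auditctl alone.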
Auditd is especially useful for environments requiring strict tracking of system activities such as secure servers, financial systems, regulated healthcare systems, or other locations where administrators must verify system call validity or confirm unauthorized access attempts.
Because auditd provides the required security, precision, and auditing capabilities, option B is correct.
Question 149:
A Linux engineer must isolate why a service fails during boot only when the root filesystem is mounted as read-only. The engineer suspects the service is attempting to write to disk during initialization. Which tool can record and analyze system calls to confirm write attempts during this early boot stage?
A) lsof
B) journalctl -k
C) strace
D) top
Answer:
C
Explanation:
The correct tool is strace because it records system calls and reveals exactly when a process attempts actions such as writing to disk, opening files, creating directories, or modifying configuration files. When a filesystem is read-only, services that try to write to areas like /var, /etc, or temporary directories may fail during boot. By running a service under strace, administrators can capture detailed logs of all system calls, including those that fail.
Option A, lsof, only shows open files, not system calls or failures.
Option B, journalctl -k, shows kernel logs but does not trace specific syscalls.
Option D, top, shows process performance metrics and cannot reveal system-level write attempts.
Strace logs system calls such as write, open, creat, unlink, and rename. It shows return values, errno codes, and file paths involved in each attempt. This allows administrators to pinpoint exactly which operations fail when the filesystem is read-only.
Strace is especially useful when debugging early boot stages, as many services rely on writable directories for logging, PID files, socket creation, and lock files. If the system boots in emergency or rescue mode with a read-only filesystem, strace can confirm whether the service’s failure results from specific write attempts.
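A hedged invocation for tracing such a service binary directly; the binary path is hypothetical, -f follows forked children, and -e limits output to file-related calls:
strace -f -e trace=openat,write,mkdir,unlink -o /tmp/service-trace.log /usr/sbin/exampled
# the -o log location assumes a writable /tmp, which is often a tmpfs even when the root filesystem is read-only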
Because strace provides granular syscall visibility, option C is correct.
Question 150:
A Linux administrator wants to monitor memory allocations by kernel subsystems, including slab allocator usage, to troubleshoot suspected kernel memory leaks. Which file or interface provides detailed slab allocation statistics?
A) /proc/meminfo
B) /proc/slabinfo
C) /sys/kernel/mm/slab
D) top
Answer:
B
Explanation:
The correct interface is /proc/slabinfo because it provides detailed statistics collected from the kernel’s slab allocator. These metrics include total objects, active objects, memory consumption, and cache utilization for each slab cache used by the kernel. This is critical for diagnosing kernel memory leaks or unusual kernel memory usage.
Option A, /proc/meminfo, shows general memory statistics but does not provide granular slab usage.
Option C, /sys/kernel/mm/slab, contains configuration parameters but does not present comprehensive allocation statistics.
Option D, top, monitors only user-space memory and cannot reveal kernel memory allocator behavior.
Slabinfo displays data for kernel caches such as inode_cache, dentry_cache, buffer_head, and others. By monitoring this data, administrators can determine whether a cache is growing abnormally, indicating a potential memory leak.
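Viewing the file requires root privileges; many distributions also ship slabtop (part of procps), which summarizes the same data interactively:
sudo head -3 /proc/slabinfo    # version header plus the column layout
sudo slabtop -o                # one-shot summary of the slab caches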
Because /proc/slabinfo delivers the required slab allocation details, option B is correct.
Question 151:
A Linux administrator needs to configure a system so that a service is automatically restarted not only after failures, but also when it exits normally. Which systemd directive ensures that a service restarts regardless of the exit status, including clean exits?
A) Restart=never
B) Restart=on-success
C) Restart=always
D) Restart=on-abort
Answer:
C
Explanation:
The correct directive is Restart=always because it instructs systemd to restart a service regardless of the reason it stopped. This means the service will restart whether it exits with a success code, a failure code, a signal, or any other termination scenario. Restart=always is commonly used for long-running daemons that must remain active continuously, regardless of normal or abnormal exit behavior.
Option A, Restart=never, means the service will not restart under any condition. This is the exact opposite of the required behavior.
Option B, Restart=on-success, seems like it might apply, but it restarts the service only when it exits normally. It will not restart the service when it fails or crashes. Therefore, it does not meet the requirement to restart under all conditions.
Option D, Restart=on-abort, restarts the service only when it terminates abnormally due to signals such as abort signals or segmentation faults. It will not restart the service after a normal exit.
Restart=always is especially useful for:
persistent monitoring agents
background daemons that are expected to run indefinitely
containers or isolated processes requiring high availability
services providing continuous network functionality
watchdog-like or supervisory services
A service that exits normally may still need to restart because its termination is part of the expected workflow. Some services process tasks and exit once complete, but administrators may want them to immediately run again. In these situations, Restart=always ensures that systemd interprets all exits as restart triggers.
Systemd controls service behavior through several related directives, including:
RestartSec: specifies delay before restarting
StartLimitBurst and StartLimitIntervalSec: prevent runaway restart loops
Type: defines how systemd interprets service readiness
When Restart=always is used in combination with proper StartLimit values, administrators can prevent infinite restart loops from flooding logs or consuming resources.
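A minimal [Service] sketch combining these directives; the executable path is hypothetical:
[Service]
ExecStart=/usr/local/bin/worker-daemon
Restart=always
RestartSec=5
RestartSec=5 inserts a five-second pause between exit and restart, which keeps a rapidly exiting service from restarting as fast as the CPU allows.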
Because Restart=always is the only directive that restarts services on all exit outcomes, it is the correct answer.
Question 152:
A Linux engineer must ensure that a system service begins only after the network stack is fully configured and all interfaces have obtained valid IP addresses. Which systemd unit should the service depend on to guarantee this state?
A) network.target
B) network-pre.target
C) network-online.target
D) multi-user.target
Answer:
C
Explanation:
The correct dependency is network-online.target because this target represents the point at which the system networking stack is fully initialized and all network interfaces are online. This includes waiting for DHCP leases or other dynamic configurations to complete before the service starts.
Option A, network.target, only guarantees that the basic networking services are present, not that the network is actually usable or fully configured. For example, network.target may be reached while interfaces are still negotiating DHCP leases. Services that require actual connectivity may fail if they start too early.
Option B, network-pre.target, is reached before networking is brought up, making it unsuitable when waiting for interfaces to become ready.
Option D, multi-user.target, represents the general multi-user operating state, similar to the old runlevel 3. It does not guarantee networking availability.
Using network-online.target is essential for services requiring:
DNS resolution
remote database access
authentication against network services
mounting network filesystems
communication with cloud APIs
receiving configurations through network provisioning
To enforce this dependency, administrators use:
Requires=network-online.target
After=network-online.target
This combination ensures that systemd starts the service only after the networking components report readiness through network configuration tools such as NetworkManager or systemd-networkd.
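A minimal [Unit] sketch reflecting these two directives; the description is illustrative:
[Unit]
Description=Service requiring full network connectivity
Requires=network-online.target
After=network-online.target
Note that network-online.target only delays startup meaningfully when a wait-online service is enabled, such as NetworkManager-wait-online.service or systemd-networkd-wait-online.service, depending on which network manager the system uses.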
Using network-online.target avoids timing issues that commonly occur when services attempt network access before interfaces are ready. Such timing issues can lead to:
connection failures
DNS resolution errors
authentication timeouts
failed mounts
service startup delays
Because network-online.target confirms complete network readiness, option C is correct.
Question 153:
A Linux administrator needs to identify which shared libraries a dynamically linked executable depends on before it runs. Which tool displays the list of shared objects required by an executable?
A) ldconfig
B) ldd
C) nm
D) objdump -S
Answer:
B
Explanation:
The correct tool is ldd because it prints the shared library dependencies of an executable. It reveals which dynamic libraries the program will load at runtime, allowing administrators to diagnose issues related to missing libraries, version conflicts, or incorrect library paths.
Option A, ldconfig, maintains the system’s shared library cache but does not display dependencies of individual executables.
Option C, nm, lists symbols within object files but does not identify shared library dependencies.
Option D, objdump -S, provides mixed assembly and source code but not shared library information.
Ldd is commonly used to diagnose:
missing .so files
misconfigured LD_LIBRARY_PATH
failed dynamic linking
dependency loops
applications failing to start due to runtime library issues
For example, if an application fails to start with errors stating that a library cannot be found, ldd helps pinpoint exactly which library is missing and where the loader is searching for it. This is especially important in systems where custom software or compiled binaries rely on nonstandard library paths.
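A hedged example against a common binary; the exact libraries, paths, and load addresses vary by system:
ldd /bin/ls
# each output line has the form: <needed library> => <resolved path> (<load address>)
# a dependency the loader cannot resolve is shown as "not found" instead of a path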
Because ldd displays the required shared objects, option B is correct.
Question 154:
A Linux engineer wants to ensure that a script runs automatically whenever a user logs in through any login method, including console, SSH, or graphical sessions. Which file should be modified to execute commands for every user at login?
A) ~/.bash_profile
B) ~/.bashrc
C) /etc/profile
D) /etc/skel/.bashrc
Answer:
C
Explanation:
The correct file is /etc/profile because it is executed for all users during login sessions across multiple shell environments. It applies to console logins, SSH logins, and graphical sessions that load shell environments. Editing /etc/profile affects all users system-wide.
Option A, ~/.bash_profile, is per-user and applies only to bash login shells. It cannot enforce system-wide behavior.
Option B, ~/.bashrc, is executed for non-login interactive shells and will not apply to all login methods.
Option D, /etc/skel/.bashrc, is only copied into new user home directories during account creation. Editing it does not affect existing users.
Scripts requiring universal execution include:
setting system-wide environment variables
loading shared functions
initiating security banners
configuring global paths
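As a sketch, lines appended to /etc/profile might look like the following; the variable name and script path are hypothetical:
# system-wide login environment additions
export PROJECT_ROOT=/srv/projects
[ -x /usr/local/bin/login-banner.sh ] && /usr/local/bin/login-banner.sh
Many distributions prefer dropping such snippets into a separate file under /etc/profile.d/, which /etc/profile sources automatically.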
Because /etc/profile runs for all login sessions, it is the correct answer.
Question 155:
A Linux administrator needs to verify whether a kernel module is currently loaded and also identify which modules depend on it. Which command provides this information?
A) insmod
B) lsmod
C) modprobe -l
D) modinfo
Answer:
B
Explanation:
The correct answer is lsmod because it lists all loaded kernel modules, including the size and reference count, which indicates how many other modules depend on it. This makes it useful for determining dependencies and understanding kernel module interactions.
Option A, insmod, loads a module but does not display module lists.
Option C, modprobe -l, lists available modules on disk, not loaded ones.
Option D, modinfo, shows details about a specific module but cannot list currently loaded modules.
Lsmod provides:
module names
memory size
usage count
dependent modules
Administrators use lsmod to troubleshoot driver issues, verify hardware module loading, and observe module relationships.
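To focus on a single module and its reference count, the listing can be filtered; the module name is illustrative:
lsmod | grep -w ext4
# shows the ext4 entry itself plus the entries of modules that list ext4 in their Used by column, i.e. its dependencies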
Because lsmod shows loaded modules and dependency counts, option B is correct.
Question 156:
A Linux administrator needs to create a bind mount to make the directory /srv/data accessible at /mnt/data without copying data or creating symbolic links. The mount must persist across system reboots. Which entry should be added to /etc/fstab to configure this persistent bind mount?
A) /srv/data /mnt/data ext4 defaults 0 0
B) /srv/data /mnt/data none bind 0 0
C) /srv/data /mnt/data auto defaults 0 0
D) bind /srv/data /mnt/data auto 0 0
Answer:
B
Explanation:
The correct entry is option B because persistent bind mounts require the fstab entry format that uses none as the filesystem type and bind as the mount option. A bind mount allows a directory to appear in multiple locations on the filesystem without duplicating its content. This is especially useful for rearranging directory structures, consolidating storage access, granting access to applications expecting files in specific locations, or exposing directories inside chroot or container environments.
Option A incorrectly specifies ext4 as the filesystem type. This would attempt to treat the source directory as an ext4 filesystem, which is not correct. Ext4 refers to block devices, not directories. Using ext4 here would cause mount errors because a directory cannot be mounted as a block device.
Option C uses auto as the filesystem type and defaults as the options. Again, this is incorrect because auto causes the system to attempt to detect a filesystem type on the directory itself, which will not work. Defaults apply standard filesystem mounting behavior but are not applicable for bind mounts.
Option D uses bind as the source and attempts to treat bind as a device, which is incorrect. The correct positioning is source directory first, destination directory second, filesystem type third, and bind as the mount option. The ordering in option D does not follow the required fstab schema.
A correct persistent bind mount entry must follow the form:
/source /target none bind 0 0
In this case:
/srv/data /mnt/data none bind 0 0
This tells the kernel to mount /srv/data at /mnt/data using a bind mount, with none indicating that the filesystem type should not be interpreted as a traditional block-device filesystem.
Bind mounts differ from symbolic links in several ways. First, they maintain original permissions and ownership. Second, they work transparently for applications that might not handle symbolic links correctly. Third, they preserve filesystem boundaries and do not appear as links but as actual directories. This makes bind mounts ideal for exposing directories to isolated environments such as chroots, containers, jails, or restricted service accounts.
Bind mounts are also commonly used when reorganizing filesystem structures without altering the underlying disk layout. For example, an administrator may want to make a directory under /srv available under /var/www for a web server. A bind mount allows both paths to reference the same data.
To activate the mount immediately after editing fstab, an administrator can run:
mount -a
This applies all fstab entries, including bind mounts.
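A short verification sketch, along with the one-off equivalent that does not persist across reboots:
findmnt /mnt/data                     # confirms the bind mount and shows its source
mount --bind /srv/data /mnt/data      # ad-hoc alternative when no fstab entry is wanted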
Because option B correctly follows the required fstab syntax for a persistent bind mount, it is the correct answer.
Question 157:
A Linux engineer is asked to enforce resource limits so that a specific user named devuser cannot run processes using more than 30% CPU time. Which mechanism should be used to restrict CPU utilization for a single user in a controlled and persistent manner?
A) nice value adjustments
B) renice applied to all processes of the user
C) cgroups configured for CPU limits
D) ulimit commands in the user’s shell profile
Answer:
C
Explanation:
The correct answer is cgroups configured for CPU limits because control groups allow administrators to apply fine-grained, kernel-level restrictions to system resources including CPU, memory, I/O, and more. Cgroups provide enforceable and persistent resource constraints that operate independently of user actions, processes, or shells.
Option A, adjusting the nice value, influences CPU scheduling priority but does not cap CPU usage. A lower priority process may still consume high CPU if no competing workload exists. Nice cannot enforce a maximum limit.
Option B, applying renice to all processes for the user, suffers from the same limitations. It reprioritizes tasks relative to others but cannot restrict CPU percentage. Renice also requires continuous monitoring because new processes launched by the user revert to default nice values unless forcibly changed again.
Option D, ulimit commands, control shell-level resource limits such as open files, stack sizes, and core dump generation, but they cannot limit CPU usage as a percentage. Ulimit does prevent excessive resource abuse in some cases, but CPU quotas are not among the standard ulimit constraints.
Cgroups, whether using cgroup v1 controllers or the cgroup v2 unified hierarchy, allow administrators to place all processes of a user inside a CPU-limited control group. A CPU quota (unlike CPU shares, which only set relative scheduling weight) caps the user at 30% of available CPU time, regardless of how many processes they spawn, and the kernel enforces the limit directly.
Administrators can target cgroups by user through PAM integration, systemd user slices, or manual assignment. With systemd-managed cgroups, each user already receives their own slice, making it straightforward to apply CPU limits at the slice level. The CPUQuota directive can enforce absolute percentages such as CPUQuota=30%.
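With systemd-managed slices, a hedged sketch of applying the cap to devuser looks like this; the UID 1001 is illustrative and must match the actual account:
id -u devuser                                      # confirm the UID behind the user slice name
systemctl set-property user-1001.slice CPUQuota=30%
Without the --runtime flag, set-property writes a persistent drop-in, so the limit survives reboots.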
Cgroups support:
CPU bandwidth limiting
process grouping
isolation for services
container resource control
hierarchical enforcement
They are widely used in containerized environments such as Docker, Kubernetes, and systemd-managed services.
Because cgroups are the only mechanism listed that can enforce strict CPU usage caps for a specific user, option C is correct.
Question 158:
A Linux administrator is troubleshooting slow system performance and suspects that excessive swapping is occurring. Which command provides real-time visibility into swap paging activity, including the si and so fields that reflect swap-in and swap-out rates?
A) free -h
B) vmstat 1
C) top
D) dmesg
Answer:
B
Explanation:
The correct answer is vmstat 1 because vmstat displays real-time system performance metrics including swap-in (si) and swap-out (so) fields. These metrics represent the rate at which memory pages are moved between RAM and swap space.
Option A, free -h, shows only total and used swap but not active swapping activity. It does not show how fast swap operations are occurring.
Option C, top, displays overall system performance but does not show swap paging metrics like si and so. Although top shows memory usage, it does not report paging rates directly.
Option D, dmesg, displays kernel messages but does not provide continuous real-time swap activity metrics.
Vmstat is ideal for diagnosing memory pressure. In vmstat output:
si shows pages swapped in from disk
so shows pages swapped out to disk
If either value is consistently above zero, the system is actively swapping, which is a sign of memory shortage or excessive memory allocation by processes.
Excessive swap activity can degrade performance because accessing disk is far slower than accessing RAM. Sustained swapping can cause sluggishness, slow application responses, and high I/O delays.
Vmstat also reports:
process queue length
CPU usage
disk activity
memory statistics
context switches
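A short sampling sketch; the first output line reports averages since boot, so the later samples are the ones to read:
vmstat 1 5    # one-second interval, five samples; watch the si and so columns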
Because vmstat 1 reports swap activity continuously in a clear, real-time format, option B is correct.
Question 159:
A Linux engineer must verify that a recently added cron job runs at the correct time and logs all of its output. Which log file should be checked to confirm cron execution and detect any failures or run-time errors?
A) /var/log/messages
B) /var/log/cron
C) /var/log/jobs.log
D) /etc/cron.d/cron.log
Answer:
B
Explanation:
The correct log file is /var/log/cron because it records all cron activity including job execution times, script start events, job completion times, and cron daemon behavior. This file is the authoritative source for verifying whether a cron job ran as scheduled.
Option A, /var/log/messages, may contain general system information but does not consistently log cron activity across all distributions.
Option C, /var/log/jobs.log, is not a standard log file in Linux environments and is not used by cron.
Option D, /etc/cron.d/cron.log, is incorrect because /etc/cron.d contains cron job definitions, not logs.
Cron jobs may fail silently if scripts contain errors or permissions are incorrect. Examining /var/log/cron allows administrators to confirm whether cron attempted to run a job and whether it encountered issues such as:
missing executable permissions
incorrect shebang lines
environment variable conflicts
bad path references
syntax errors in cron entry formats
Cron logs help identify misfires and highlight whether the cron daemon encountered internal scheduling errors.
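A quick hedged check; on systemd-based distributions the cron daemon's journal carries the same information, though the unit name varies (crond on RHEL-family systems, cron on Debian-family systems):
tail -f /var/log/cron                 # follow cron activity live
journalctl -u crond --since today     # journal view; substitute the unit name used by the distribution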
Because /var/log/cron is the standard and correct log location for cron execution information, option B is correct.
Question 160:
A Linux administrator needs to determine why a system is running low on available file descriptors. Which command reveals the system-wide file descriptor limits and how many are currently in use?
A) ulimit -n
B) cat /proc/sys/fs/file-max
C) cat /proc/sys/fs/file-nr
D) lsof
Answer:
C
Explanation:
The correct command is cat /proc/sys/fs/file-nr because this file shows three important values: the number of allocated file descriptors, the number currently unused, and the total system-wide limit. These values help administrators diagnose file descriptor exhaustion, which occurs when the system runs out of available descriptors due to high process counts, network connections, or file operations.
Option A, ulimit -n, shows the per-shell soft limit, not the system-wide usage or availability.
Option B, cat /proc/sys/fs/file-max, displays only the maximum allowed file descriptors, not the current usage.
Option D, lsof, lists open files but does not show system-wide limits or usage counts.
File descriptors are essential kernel resources used by processes to represent files, sockets, pipes, and device handles. When file descriptor exhaustion occurs, applications may fail to open new files or sockets.
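A hedged reading of both files side by side; the three fields of file-nr are allocated handles, allocated-but-unused handles, and the maximum:
cat /proc/sys/fs/file-nr     # e.g. 9824  0  1620279  (values are illustrative)
cat /proc/sys/fs/file-max    # the maximum alone, for comparison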
Using file-nr allows administrators to understand the relationship between allocated descriptors and the system maximum. Because it provides both usage and limits simultaneously, option C is correct.