Question 81:
You are creating a calculated field that concatenates multiple text fields. Some fields may be empty. How should you handle null values?
A) Use IF(ISBLANK()) to check each field
B) Concatenate directly; nulls are handled automatically
C) Use COALESCE function
D) Set default values on source fields
Answer: A) Use IF(ISBLANK()) to check each field
Explanation:
Using IF(ISBLANK()) to check each field before concatenating is the correct approach for handling potentially null or empty fields in calculated field formulas in Dataverse. When working with text fields, null values can cause unexpected behavior, such as displaying the word “null” in the result or creating extra spaces between text segments. By explicitly checking each field with ISBLANK and conditionally including it in the concatenation, you ensure that the output is clean, readable, and accurately represents the available data.
For instance, consider a scenario where you want to concatenate a person's first name, middle name, and last name into a full name field. If the middle name is empty or null, concatenating without checking could leave a stray double space between the first and last names, or in some systems display "John null Doe." Using a formula like IF(ISBLANK(middlename), firstname & " " & lastname, firstname & " " & middlename & " " & lastname) ensures that the middle name is only included if it contains data. This simple conditional check preserves proper spacing and formatting and avoids the display of unwanted null values.
This approach can be extended to multiple fields. For example, if you are building an address field from street, apartment, city, and postal code, each component can be checked with ISBLANK before concatenation. You can include only the fields that have data, automatically handling cases where certain fields are missing. A formula like IF(ISBLANK(apartment), street & ", " & city & ", " & postalcode, street & ", " & apartment & ", " & city & ", " & postalcode) ensures that addresses without apartment numbers remain formatted correctly. This method provides flexibility in producing consistent, readable outputs regardless of how many optional fields are populated.
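As a concrete sketch of checking two optional components, the conditions can be nested; the column names middlename and suffix are assumptions:

    IF(ISBLANK(middlename),
        IF(ISBLANK(suffix),
            firstname & " " & lastname,
            firstname & " " & lastname & " " & suffix),
        IF(ISBLANK(suffix),
            firstname & " " & middlename & " " & lastname,
            firstname & " " & middlename & " " & lastname & " " & suffix))

Each additional optional component doubles the number of branches, which is why explicit checks are usually limited to the few fields that are genuinely optional.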
While some systems automatically handle nulls in concatenation, explicitly using ISBLANK provides more control over the final result. Other functions like COALESCE, which are available in some SQL environments, are not supported in Dataverse calculated fields, and setting default values on source fields would require changes to the underlying data model, which may not always be feasible or desirable. Using IF(ISBLANK()) within calculated field formulas is a declarative, no-code way to control how null or empty values are treated while building concatenated outputs.
Question 82:
You need to configure a Power Apps portal so that users can edit their own profile information. What should you implement?
A) Entity form with profile management
B) Web form for contact updates
C) Profile page configuration
D) Self-service portal template
Answer: C) Profile page configuration
Explanation:
Profile page configuration is the correct approach for enabling users to view and edit their own information in Power Apps portals. Portals provide built-in profile management functionality that is purpose-built for this scenario, allowing authenticated users to access and update their own contact records without requiring complex customization or security configuration. Each profile page is automatically associated with the logged-in user’s contact record, which ensures that users can only modify their own data while maintaining the integrity and security of other users’ information.
When configuring a profile page, you can select which fields are visible and editable. Common fields include first name, last name, email, phone number, mailing address, and other contact-related information. You can also define fields as read-only when necessary, such as unique identifiers or fields that should not be changed by the user. This flexibility allows organizations to tailor the profile page to their specific requirements, presenting only relevant information and preventing inadvertent modification of sensitive fields. The profile page can also be styled to match the look and feel of the portal, ensuring a seamless user experience that aligns with corporate branding.
The profile page integrates automatically with portal authentication. When a user logs in, the system identifies their associated contact record and retrieves the corresponding data. This built-in linkage simplifies self-service scenarios, as there is no need to manually apply user-level security or custom logic to restrict access. Any changes a user makes are updated directly in Dataverse, maintaining real-time synchronization between the portal and backend systems. Additionally, validation rules and business logic defined in Dataverse, such as required fields or field-level security, continue to apply, ensuring data consistency and compliance.
Alternative approaches are available but less appropriate. Entity forms could allow record editing, but they do not automatically enforce user-specific access, meaning additional configuration would be required to prevent users from editing other records. Web forms are designed primarily for creating new records rather than managing existing user data. Self-service portal templates provide a starting point for building a portal, but they do not specifically address profile management. Using these alternatives often requires extra effort and custom logic to achieve the same level of security and ease of use provided by the built-in profile page.
Question 83:
You are creating a Power Automate flow that needs to retry an HTTP action up to three times if it fails. What should you configure?
A) Retry policy on the HTTP action
B) Do until loop with error handling
C) Multiple HTTP actions with conditions
D) Try-catch scope
Answer: A) Retry policy on the HTTP action
Explanation:
Retry policy on the HTTP action is the correct built-in configuration for implementing automatic retry logic in Power Automate flows. Many actions in Power Automate, including HTTP actions and various connector actions, support retry policy settings that allow flow designers to define how the system should respond when an action fails due to transient issues. Transient failures are temporary problems such as network connectivity glitches, intermittent service unavailability, or brief timeouts. By configuring a retry policy, the flow can automatically attempt the failed action multiple times without requiring complex manual logic or intervention.
When configuring a retry policy, you can specify several parameters. These include the retry strategy, which determines how the delay between retry attempts is calculated, the maximum number of retry attempts, and the interval between retries. Common strategies include fixed interval and exponential interval. In a fixed interval strategy, retries occur at consistent, predetermined time gaps. In an exponential interval strategy, the wait time between retries increases exponentially, allowing services additional time to recover from temporary issues. For example, an HTTP action might be configured to retry up to three times using an exponential interval strategy, meaning the first retry occurs after a short delay, the second after a longer delay, and the third after an even longer delay. This approach ensures the flow does not overload a struggling service and provides a higher likelihood of successful execution.
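For reference, a sketch of how a three-retry exponential policy appears when viewed through the action's Peek code option; the interval value here is an assumption:

    "retryPolicy": {
        "type": "exponential",
        "count": 3,
        "interval": "PT20S"
    }

The interval is expressed as an ISO 8601 duration, so PT20S means the first retry waits roughly twenty seconds before the exponential back-off takes over.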
Retry policies in HTTP actions are more efficient and maintainable than manually implementing retry logic. Without built-in retry policies, users might attempt to create loops such as “Do until” to repeatedly call the action until it succeeds. While possible, this requires additional steps to track the number of attempts, implement delays between retries, handle failures, and exit the loop appropriately. Multiple duplicated HTTP actions with conditional checks would also complicate the flow design and increase maintenance overhead. Power Automate’s retry policy handles all of this internally, automatically triggering retries according to the configured strategy and only allowing the flow to fail if all retry attempts are exhausted. This significantly reduces flow complexity and improves reliability.
Additionally, retry policies integrate seamlessly with error handling. If the HTTP action fails even after all retries, the flow can route execution to a configured error-handling scope, log the error, or notify users, ensuring predictable and robust flow behavior. The retry policy does not require additional expressions or variables and works consistently across supported connectors and services.
Question 84:
You need to create a business rule that makes a field read-only when a record is in Approved status. What should you configure?
A) Condition on status with Lock/Unlock field action
B) JavaScript to disable the field
C) Field-level security based on status
D) Form properties for read-only
Answer: A) Condition on status with Lock/Unlock field action
Explanation:
Condition on status with Lock/Unlock field action is the correct business rule configuration for making fields read-only based on the value of a record’s status in Dataverse. Business rules provide a no-code way to implement dynamic field behavior, allowing fields to become read-only or editable based on conditions without writing custom scripts. The Lock and Unlock actions are specifically designed for this purpose. By creating a business rule that evaluates the status field and applies the Lock action when the status equals Approved, you can ensure that critical fields cannot be modified once a record reaches a certain stage. Conversely, using the Unlock action in the else branch ensures that fields remain editable when the record is in a status that allows updates, maintaining flexibility for in-progress records.
This approach ensures that data governance and business process compliance are enforced consistently. When a user opens a record with an Approved status, the fields protected by the Lock action automatically appear read-only. Users cannot manually override this restriction, preventing accidental or unauthorized modifications to important or finalized data. If the status changes back to a value other than Approved, the Unlock action automatically restores editability, making the field writable again. This eliminates the need for manual intervention or separate security roles to control field accessibility, streamlining the management of conditional field behavior while preserving data integrity.
Using business rules for this scenario has several advantages. It is fully declarative, meaning administrators can configure it entirely through the Power Platform interface without writing code. It also applies consistently across forms for the same entity, so the behavior is predictable for all users interacting with the record. Additionally, business rules evaluate in real-time as the user interacts with the form, ensuring that the field’s read-only or editable status updates immediately when the underlying condition changes. This is more user-friendly than static read-only settings, which cannot respond to changing record data.
Alternative approaches exist but are less optimal. JavaScript can replicate this behavior, but it requires custom code, registration on form events, and ongoing maintenance. Field-level security provides broad access control but cannot dynamically respond to changing record values. Form properties allow fields to be set as read-only statically but do not offer conditional logic. Therefore, for scenarios where field editability must depend on specific record conditions such as status, business rules with conditions and Lock/Unlock actions are the most efficient, maintainable, and user-friendly solution.
Question 85:
You are configuring a model-driven app form with multiple sections. You need to ensure a section is visible only to users with a specific security role. What should you implement?
A) JavaScript with user role check
B) Form-level security
C) Section properties visibility
D) Enable security roles on the form
Answer: A) JavaScript with user role check
Explanation:
JavaScript with user role check is the correct implementation for controlling section visibility in model-driven app forms based on security roles. Model-driven apps do not provide an out-of-the-box configuration to conditionally show or hide individual sections on a form depending on the user’s role. To achieve role-based section visibility, a JavaScript web resource must be implemented that evaluates the logged-in user’s security roles and programmatically adjusts the visibility of specific sections. This enables tailored form experiences where certain information or controls are visible only to users with the appropriate permissions, improving usability and ensuring sensitive data is exposed only to authorized users.
To implement this solution, a JavaScript web resource is created and registered on the form's OnLoad event. The script uses Xrm.Utility.getGlobalContext().userSettings to retrieve information about the current user, including the assigned security roles. The user's roles are then compared against the target role names or IDs to determine whether the user should see specific sections. Based on this check, formContext.ui.tabs.get("tabName").sections.get("sectionName").setVisible() is called to show or hide individual sections as needed. Role IDs can be stored in web resources or configuration tables to avoid hardcoding, which simplifies maintenance and ensures the script can easily adapt to role changes or additions in the future. The visibility logic executes each time the form loads, dynamically presenting a customized interface based on the user's security role.
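A minimal sketch of such a web resource, registered on OnLoad; the tab, section, and role names are assumptions:

    function onFormLoad(executionContext) {
        var formContext = executionContext.getFormContext();
        // Collect the current user's security roles (objects with id and name)
        var roles = Xrm.Utility.getGlobalContext().userSettings.roles;
        var isManager = false;
        roles.forEach(function (role) {
            if (role.name === "Sales Manager") { isManager = true; }
        });
        // Show the section only to users holding the role
        var tab = formContext.ui.tabs.get("general_tab");
        if (tab) {
            var section = tab.sections.get("manager_section");
            if (section) { section.setVisible(isManager); }
        }
    }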
Using JavaScript for this purpose has several advantages. It provides a flexible, maintainable, and centralized approach to UI customization that can accommodate multiple roles and complex visibility conditions. It also ensures that users see only the information relevant to their role, which reduces clutter, enhances user experience, and strengthens security. Administrators can configure different visibility rules for different roles without creating multiple forms or duplicating logic.
Alternative approaches are less effective for this scenario. Form-level security controls access to the entire form rather than individual sections. Section properties in the form designer do not provide native support for filtering based on security roles. Enabling security roles on the form controls which users can open the form but does not allow dynamic section-level visibility based on roles. Therefore, relying on these features alone cannot achieve the desired role-based section behavior.
Question 86:
You need to create a canvas app button that when clicked saves the current form data to Dataverse and navigates to a different screen. What functions should you use?
A) Patch() then Navigate()
B) SubmitForm() then Navigate()
C) SaveData() then Navigate()
D) Collect() then Navigate()
Answer: A) Patch() then Navigate()
Explanation:
The Patch function followed by the Navigate function is the correct approach for saving data and navigating in canvas apps. Patch writes data to a data source like Dataverse, creating new records or updating existing ones. After successfully patching the data, Navigate changes the currently displayed screen. The formula would be structured as Patch(DataSource, RecordToUpdate, {Field1: Value1, Field2: Value2}); Navigate(TargetScreen, ScreenTransition.None) to save the data and then move to another screen.
This pattern gives you explicit control over the save operation and allows you to handle the navigation only after confirming the save succeeded. You can also add error handling between these functions to check if Patch completed successfully before navigating. This is commonly used on custom save buttons that perform data operations and then redirect users to list screens or confirmation screens after successful save.
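A sketch of a save button's OnSelect formula with a basic error check; the data source, control, and screen names are assumptions:

    Patch(Accounts, Defaults(Accounts), { 'Account Name': txtName.Text });
    If(
        IsEmpty(Errors(Accounts)),
        Navigate(ConfirmScreen, ScreenTransition.None),
        Notify("The record could not be saved.", NotificationType.Error)
    )

The Errors function surfaces any failure from the preceding Patch, so navigation only happens after a successful save.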
SubmitForm is used with edit forms but the question implies a button with custom logic. SaveData is for local storage, not Dataverse. Collect adds to collections rather than saving to Dataverse. For creating custom save functionality in canvas apps that writes data to Dataverse and then navigates to another screen, using Patch to save the data followed by Navigate to change screens provides the explicit control pattern for sequential save-and-navigate operations.
Question 87:
You are configuring a Power Automate flow that receives data from a webhook. The data structure varies depending on the source system. How should you handle the variable JSON structure?
A) Parse JSON with schema for each structure
B) Compose action to transform data
C) Dynamic content without parsing
D) Condition to check structure then parse appropriately
Answer: D) Condition to check structure then parse appropriately
Explanation:
Using Condition to check the structure then parsing appropriately is the correct approach for handling variable JSON structures in Power Automate. When webhook payloads vary by source, you need logic to identify which structure you received, then parse it with the appropriate schema. You might check for the presence of specific fields, a source identifier, or other indicators, then use conditions to route to different Parse JSON actions configured with schemas matching each possible structure.
This pattern allows handling multiple data formats in a single flow endpoint. Each condition branch includes a Parse JSON action with the schema for that specific source system, making subsequent actions able to access the parsed data correctly. After parsing, you can include source-specific processing logic or converge to common actions that work with normalized data.
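As a sketch, each branch's condition can test a marker property before the matching Parse JSON action runs; the "source" property name is hypothetical:

    equals(triggerBody()?['source'], 'systemA')

The ? operator makes the lookup null-safe, so the condition simply evaluates to false rather than failing when the property is absent from a payload.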
Parse JSON with a single schema only works for consistent structures. Compose doesn’t parse JSON into accessible properties. Dynamic content without parsing limits your ability to access nested properties. For handling webhooks with varying JSON structures from different sources, using conditional logic to identify the structure type then routing to appropriate Parse JSON actions with source-specific schemas provides the flexible approach that accommodates multiple data formats in a single flow.
Question 88:
You need to configure a lookup field on a form that filters available records based on another field’s value on the same form. What should you implement?
A) addCustomFilter using JavaScript
B) Filtered lookup configuration
C) View filter on the related table
D) Business rule with conditions
Answer: A) addCustomFilter using JavaScript
Explanation:
JavaScript using addPreSearch with addCustomFilter is the correct implementation for creating filtered lookups based on other form field values. You would create a JavaScript web resource that is registered on form load and on the controlling field's OnChange event. When the controlling field value changes, your script uses the addPreSearch method to register a handler and the addCustomFilter method on the lookup control to dynamically filter the available records in the lookup based on the controlling field's current value.
For example, if you have a Country field and a State lookup, you would filter the State lookup to show only states belonging to the selected country. The JavaScript retrieves the country value and applies a filter to the state lookup’s query before it executes, ensuring users see only relevant state options. This provides cascading dropdown behavior that improves data quality by preventing invalid combinations and simplifies data entry by showing only applicable choices.
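A minimal sketch of the country/state pattern; the column and table names (new_country, new_state, new_countryid) are assumptions:

    function onCountryChange(executionContext) {
        var formContext = executionContext.getFormContext();
        var stateControl = formContext.getControl("new_state");
        stateControl.addPreSearch(function () {
            var country = formContext.getAttribute("new_country").getValue();
            if (country) {
                // Restrict the lookup query to states in the selected country
                var filter = "<filter type='and'>" +
                    "<condition attribute='new_countryid' operator='eq' value='" +
                    country[0].id + "'/></filter>";
                stateControl.addCustomFilter(filter, "new_state");
            }
        });
    }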
Filtered lookup configuration is not a native form feature. View filters on related tables affect all lookups globally, not form-specifically. Business rules cannot dynamically filter lookup options. For implementing dependent lookups where available options in one field depend on the value selected in another field, using JavaScript with addPreSearch and addCustomFilter provides the dynamic lookup filtering capability that creates context-aware cascading lookups based on form data.
Question 89:
You are creating a solution that needs to include both managed and unmanaged customizations. What type of solution should you create?
A) Unmanaged solution only
B) Managed solution only
C) This scenario is not supported
D) Separate managed and unmanaged solutions
Answer: D) Separate managed and unmanaged solutions
Explanation:
Separate managed and unmanaged solutions is the correct approach because a single solution cannot be both managed and unmanaged simultaneously. Solutions are packaged as either managed (locked, production-ready) or unmanaged (editable, development-ready) when exported. If you need both types of customizations, you must create separate solutions: one for components that should be managed in the target environment, and another for components that should remain unmanaged and editable in the target environment.
This typically occurs in scenarios where you deploy base functionality as managed while allowing environment-specific customizations as unmanaged. For example, you might deploy a managed solution with core tables and processes while providing an unmanaged solution with environment-specific forms or views that administrators can modify. You would export and import these as separate solution packages, with each serving its distinct purpose in the target environment.
A single unmanaged solution cannot provide the protection and clean uninstall characteristics of managed solutions. A single managed solution prevents the customization flexibility of unmanaged components. Since solutions have a single managed/unmanaged state at export time, scenarios requiring both types necessitate separate solutions. For deploying customizations that need both the protection of managed solutions and the flexibility of unmanaged solutions, creating and deploying separate solutions with appropriate managed/unmanaged designation provides the architectural approach that accommodates both requirements.
Question 90:
You need to create a chart in a model-driven app that shows opportunity count grouped by stage and owner. What type of chart should you create?
A) Column chart with series grouping
B) Pie chart with legend
C) Tag chart with multiple dimensions
D) Funnel chart with stages
Answer: A) Column chart with series grouping
Explanation:
Column chart with series grouping is the correct chart type for displaying data across two dimensions like stage and owner. Column charts can display series (one dimension like owner) on the X-axis with grouped or stacked columns representing the other dimension (stage), showing counts or sums for each combination. This allows comparing opportunity distribution across stages for each owner simultaneously, providing insights into both stage progression and workload distribution.
When configuring the chart, you would set the horizontal axis to Owner, use opportunity count as the vertical axis measure, and configure series grouping by Stage. This creates grouped columns where each owner has a set of columns representing their opportunity counts in each stage. Users can quickly visualize which owners have opportunities in which stages and compare distribution patterns across the team.
Pie charts show proportions of a whole but don’t effectively display two-dimensional groupings. Tag charts are not a standard chart type in model-driven apps. Funnel charts typically show stage progression for a single dimension. For visualizing data grouped by two dimensions showing counts or sums for each combination of values, column charts with series grouping provide the appropriate visualization that displays multi-dimensional data in an easily interpretable format showing patterns across both grouping dimensions.
Question 91:
You are configuring a canvas app gallery that should allow users to select multiple items. What property should you set?
A) SelectMultiple = true
B) Selection Mode = Multiple
C) Multi-select = Enabled
D) AllowMultipleSelection = true
Answer: A) SelectMultiple = true
Explanation:
SelectMultiple property set to true is the correct configuration for enabling multi-select functionality in canvas app galleries. When this property is enabled, users can select multiple gallery items by clicking or tapping them, with each selected item remaining highlighted. The selected items are available through the gallery’s AllItemsSelected property, which returns a table of all selected items that you can use in other controls or formulas.
This functionality is useful for scenarios where users need to perform bulk operations like deleting multiple records, exporting selected items, or applying changes to multiple selections. You can add buttons that operate on the gallery’s selected items using formulas like ForAll(Gallery1.AllItemsSelected, Remove(DataSource, ThisRecord)) to process all selected items in batch operations.
SelectionMode, Multi-select, and AllowMultipleSelection are not valid gallery property names. For enabling users to select multiple items simultaneously in a gallery control to facilitate bulk operations or multi-item processing, setting the SelectMultiple property to true activates the built-in multi-selection capability that tracks and exposes all selected items for use in app logic.
Question 92:
You need to create a Power Automate flow that reads data from an Excel file where the structure changes frequently. What is the most maintainable approach?
A) Parse Excel file directly
B) Convert Excel to JSON
C) Use Excel table with dynamic columns
D) Import Excel to Dataverse first
Answer: C) Use Excel table with dynamic columns
Explanation:
Using Excel tables with dynamic column handling is the most maintainable approach for frequently changing Excel structures. Excel tables formatted with headers can be accessed through Power Automate’s Excel connector, and you can implement flows that adapt to column changes by referencing columns dynamically rather than hardcoding column names. The “List rows present in a table” action returns data where columns are accessible as properties, and you can use expressions to handle optional or varying columns gracefully.
When the Excel structure changes, flows using dynamic references continue functioning or fail gracefully with clear error messages rather than breaking entirely. You can implement logic that checks for column existence before accessing values, uses column mappings stored in configuration, or processes only columns that are present. This approach balances structure with flexibility, allowing the flow to handle known columns while adapting to additions or changes.
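Inside an Apply to each loop over the action's value output, expressions can read columns defensively; the column name Status and the default value are assumptions:

    coalesce(item()?['Status'], 'Unknown')

The ? operator returns null instead of failing when the column is missing, and coalesce substitutes a default; contains(item(), 'Status') can likewise confirm a column exists before it is used.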
Parsing Excel files directly is complex and doesn’t address structure changes. Converting to JSON adds unnecessary steps. Importing to Dataverse first adds complexity and latency. For working with Excel files whose structure changes frequently in Power Automate flows, using Excel tables with dynamic column references and defensive coding practices provides the maintainable approach that adapts to structure changes without requiring flow modifications for every Excel update.
Question 93:
You are configuring a security role that should allow users to read all accounts but only create accounts in their own business unit. Which configuration should you use?
A) Read: Organization, Create: Business Unit
B) Read: Business Unit, Create: User
C) Read: Organization, Create: User
D) Create: Business Unit, Read: Organization
Answer: A) Read: Organization, Create: Business Unit
Explanation:
Read at Organization level and Create at Business Unit level is the correct configuration for this access pattern. Read privilege at Organization level allows users to view all account records throughout the entire organization regardless of ownership or business unit. Create privilege at Business Unit level means when users create new accounts, those accounts are owned within the user’s business unit hierarchy, enforcing organizational boundaries on new record creation while maintaining broad visibility.
This configuration supports scenarios where users need organization-wide visibility for reference and coordination while ensuring new accounts are created within appropriate organizational boundaries. When a user creates an account, they become the owner and the account is associated with their business unit. This maintains data governance around account creation while providing the visibility needed for business operations.
Option B restricts read access unnecessarily. Option C restricts create to user-owned only which doesn’t align with the requirement. Option D lists privileges in non-standard order but represents the same configuration as A. For providing organization-wide read access while restricting account creation to the user’s business unit context, configuring Read at Organization level with Create at Business Unit level provides the appropriate combination of visibility and creation boundaries.
Question 94:
You need to create a calculated field that displays “High,” “Medium,” or “Low” based on numeric values in another field. What formula structure should you use?
A) IF with nested conditions checking ranges
B) SWITCH function with cases
C) CASE statement with ranges
D) Multiple IF statements separately
Answer: A) IF with nested conditions checking ranges
Explanation:
IF functions with nested conditions checking ranges is the correct approach for categorizing numeric values in calculated fields. You would create a formula like IF(field >= 1000, "High", IF(field >= 500, "Medium", "Low")) that checks the numeric field against threshold values and returns the appropriate category text. The nested IF structure evaluates conditions sequentially from highest to lowest priority, returning the first matching category.
This pattern is effective for range-based categorization where values fall into buckets defined by thresholds. The formula evaluates the highest tier first, then progressively lower tiers, with a default value for anything below all thresholds. You can extend this pattern to any number of categories by adding more nested IF statements with appropriate threshold checks and return values.
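A sketch extending the pattern to a fourth tier; the column name score and the thresholds are assumptions:

    IF(score >= 1000, "High",
        IF(score >= 500, "Medium",
            IF(score >= 100, "Low", "Minimal")))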
SWITCH and CASE statements are not available in Dataverse calculated field formulas. Multiple separate IF statements wouldn’t properly evaluate ranges. For creating calculated fields that categorize numeric values into text categories based on range thresholds, using nested IF functions that check value ranges in sequence provides the appropriate formula structure that evaluates thresholds and returns category labels based on where values fall in the defined ranges.
Question 95:
You are configuring a business process flow and need to allow users to enter data in fields that are not visible on the form. What should you configure?
A) Add data steps for the hidden fields
B) Make fields visible on form
C) Use quick create form
D) This is not possible
Answer: D) This is not possible
Explanation:
This scenario is not possible because business process flow data steps can only reference fields that are visible on the form. This is a fundamental limitation of business process flows – data steps must correspond to fields that exist and are visible on the form where the business process flow runs. If a field is not on the form, it cannot be added as a data step in the business process flow, and users cannot interact with it through the process flow interface.
The rationale for this limitation is that business process flows guide users through completing form fields, so those fields must be accessible on the form. If you need users to provide data through the business process flow, you must first add those fields to the form and ensure they’re visible. You can control field visibility with business rules or JavaScript if needed, but fields must at least be present on the form to be available as data steps.
Adding data steps requires the fields to be on the form first. Making fields visible is the solution to enable data steps, not a separate option. Quick create forms are different forms with their own field selection. For including fields in business process flow data steps, those fields must be present and accessible on the form where the business process flow runs, as data steps cannot reference fields not included in the form definition.
Question 96:
You need to create a Power Automate flow that processes attachments from Dataverse notes and extracts text using AI Builder. Which actions should you use?
A) List rows for notes, Get file content, AI Builder extract text
B) Get attachments, Download file, Form processing
C) List notes, Read file, Text recognition
D) Get row for note, Parse JSON, OCR processing
Answer: A) List rows for notes, Get file content, AI Builder extract text
Explanation:
List rows for notes (or annotations), Get file content, and AI Builder text recognition actions form the correct sequence for processing Dataverse note attachments with AI. You would first use “List rows” action to query the Annotation table for notes related to your target records, filtering for records with attachments. Then use “Get file content” or the file content from the annotation record to retrieve the actual file data. Finally, pass that content to AI Builder’s text recognition model to extract text from images or documents.
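A sketch of the Filter rows value for the List rows action on the Notes (annotation) table; the parent record GUID shown is a placeholder:

    isdocument eq true and _objectid_value eq 00000000-0000-0000-0000-000000000000

The isdocument flag limits results to notes that actually carry a file attachment, and _objectid_value scopes the query to notes attached to a specific parent record.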
This pattern enables automating document processing workflows where files attached to records need text extraction for data entry, compliance, or analysis. The extracted text can then be parsed, stored in fields, or used in subsequent flow logic. This is commonly used for processing scanned documents, receipts, business cards, or any image-based content attached to Dataverse records.
Get attachments is not the correct Dataverse action name. Form processing is for structured forms rather than general text extraction. Read file is not a standard action. OCR processing is achieved through AI Builder text recognition. For automatically processing file attachments from Dataverse notes and extracting text content using AI capabilities, the sequence of listing annotation rows, retrieving file content, and applying AI Builder text recognition provides the complete workflow for intelligent document processing automation.
Question 97:
You are configuring a model-driven app and need to display a warning message when users navigate away from a form with unsaved changes. What should you implement?
A) OnSave event handler
B) Form OnLoad event
C) OnChange event for fields
D) No implementation needed; built-in functionality
Answer: D) No implementation needed; built-in functionality
Explanation:
No custom implementation is needed because model-driven apps include built-in unsaved changes detection and warning functionality. When users make changes to a form and attempt to navigate away without saving, the system automatically displays a warning dialog asking if they want to leave without saving changes. This default behavior protects against accidental data loss without requiring any custom configuration or code.
The platform automatically tracks when form data has been modified and hasn’t been saved, marking the form as “dirty.” Any navigation attempt including closing the form, switching records, or clicking links triggers the unsaved changes warning. This functionality works consistently across all model-driven app forms and doesn’t require JavaScript or customization. Users can choose to stay and save their changes or proceed and lose unsaved data.
OnSave events occur during save operations. Form OnLoad runs when forms load. OnChange events fire when field values change. None of these are needed for navigation warnings. The built-in form dirty tracking and navigation warning is automatic platform behavior. For warning users about unsaved changes when navigating away from forms in model-driven apps, the platform’s built-in functionality provides this protection automatically without requiring any custom implementation or configuration.
Question 98:
You need to create a view that shows accounts with no related opportunities. What filter configuration should you use?
A) Related records filter: Opportunities equals 0
B) No related records filter for Opportunities
C) Rollup field count equals 0
D) This requires a custom view with FetchXML
Answer: B) No related records filter for Opportunities
Explanation:
No related records filter for Opportunities is the correct configuration for finding accounts without related opportunities. The view designer supports a “No related” filter type that checks for the absence of related records through relationships. You would configure a filter on the Account view that checks for “No related” opportunities, which returns only accounts that have zero related opportunity records.
This filter type specifically handles the absence-of-relationship scenario without requiring rollup fields or complex FetchXML. It’s useful for identifying accounts that need attention, such as active customers without current sales opportunities. You can combine this with other filters to create targeted views like “Active accounts without opportunities in the last 90 days” by combining the no-related-records filter with other criteria.
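For reference, the same result expressed in FetchXML (a sketch) uses an outer join with a null check, though as noted below this is not required for the view designer approach:

    <fetch>
      <entity name="account">
        <attribute name="name" />
        <link-entity name="opportunity" from="parentaccountid" to="accountid"
                     link-type="outer" alias="opp" />
        <filter>
          <condition entityname="opp" attribute="opportunityid" operator="null" />
        </filter>
      </entity>
    </fetch>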
While a rollup field counting opportunities could be created, it’s not necessary for view filtering. The related records filter with equals 0 is not the correct syntax. FetchXML could accomplish this but isn’t required. For creating views that show parent records without any related child records through a specific relationship, using the built-in “No related” records filter type provides the straightforward configuration that identifies records lacking related records without additional customization.
Question 99:
You are creating a Power Automate flow that needs to handle errors gracefully and continue processing remaining items even if one fails. What pattern should you implement?
A) Scope with configure run after settings
B) Try-catch block
C) Error handling action
D) Condition to check success
Answer: A) Scope with configure run after settings
Explanation:
Scope with configure run after settings is the correct pattern for implementing error handling in Power Automate flows. You place actions that might fail inside a Scope control, then add subsequent actions configured to run after the scope based on specific conditions like “has failed” or “has timed out.” This allows you to implement error handling logic that executes when errors occur while allowing the flow to continue rather than terminating on first failure.
For processing multiple items, you would use Apply to each with scopes inside the loop. Each iteration’s actions go in a scope, and you configure parallel branches with one handling success and another handling failure through run-after settings. This ensures that if processing one item fails, the flow logs the error and continues processing remaining items rather than stopping entirely. This pattern is essential for robust bulk processing flows.
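In the flow's code view, the catch branch's run-after configuration looks like this sketch, where Scope_Try is an assumed name for the scope being monitored:

    "runAfter": {
        "Scope_Try": [ "Failed", "TimedOut" ]
    }

Listing only Failed and TimedOut means the catch branch is skipped when the scope succeeds.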
Try-catch is not a native Power Automate construct; scope with run-after provides this functionality. Error handling action is not specific enough. Condition checks could detect failures but don’t provide the structured error handling pattern. For implementing comprehensive error handling in flows that continues processing after errors, catches failures for logging or remediation, and provides resilient automation, using Scope controls with configure run after settings provides the proper error handling pattern that makes flows fault-tolerant and resilient.
Question 100:
You need to configure a field that automatically generates unique sequential numbers for new records. What type of field should you create?
A) Text field with default value
B) Autonumber field
C) Calculated field with formula
D) Whole number field with increment
Answer: B) Autonumber field
Explanation:
Autonumber field is the correct field type for generating unique sequential numbers automatically. Autonumber fields are specifically designed to create system-generated unique identifiers using customizable patterns. You can configure the format to include prefixes, suffixes, and the number of digits, such as CASE-{SEQNUM:5} which generates CASE-00001, CASE-00002, and so on. The system automatically assigns the next number when records are created.
Autonumber fields ensure uniqueness across all records and handle concurrency correctly when multiple users create records simultaneously. They’re commonly used for case numbers, order numbers, invoice numbers, or any scenario requiring human-readable unique identifiers that follow a consistent pattern. The numbers are assigned during record creation and never change, providing stable reference numbers for records throughout their lifecycle.
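Some example format strings (sketches) built from the documented SEQNUM, DATETIMEUTC, and RANDSTRING placeholders; the sample outputs are illustrative:

    CASE-{SEQNUM:5}                        ->  CASE-00042
    INV-{DATETIMEUTC:yyyyMM}-{SEQNUM:4}    ->  INV-202406-0007
    {RANDSTRING:4}-{SEQNUM:6}              ->  KX7Q-000123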
Text fields with default values cannot guarantee uniqueness or sequential numbering. Calculated fields are read-only and don’t generate sequences. Whole number fields don’t have automatic increment functionality. For creating fields that automatically generate unique sequential numbers with optional formatting patterns for new records, autonumber fields provide the purpose-built field type that handles sequence generation, uniqueness enforcement, and concurrency safely with customizable number formats.