412 Precondition Failed

The HyperText Transfer Protocol (HTTP) 412 Precondition Failed client error response code indicates that access to the target resource has been denied. This happens with conditional requests on methods other than GET or HEAD, when the condition defined by the If-Unmodified-Since or If-Match headers is not fulfilled. In that case, the request, usually an upload or a modification of a resource, cannot be made, and this error response is sent back.

Examples

Avoiding mid-air collisions

With the help of the ETag and the If-Match headers, you can detect mid-air edit collisions.

For example, when editing MDN, the current wiki content is hashed and put into an ETag header in the response.

When saving changes to a wiki page (posting data), the request will contain an If-Match header containing the ETag values to check freshness against.

If the hashes don't match, it means that the document has been edited in between, and a 412 Precondition Failed error is thrown.
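
The mid-air collision check can be sketched in Python. This is a minimal illustration of ETag/If-Match semantics, not MDN's actual implementation; etag_for and handle_save are hypothetical names.

```python
import hashlib

def etag_for(content: str) -> str:
    # Derive an ETag-style value by hashing the current document content.
    return '"' + hashlib.sha256(content.encode()).hexdigest()[:16] + '"'

def handle_save(stored_content: str, if_match: str, new_content: str):
    """Apply an edit only if the client's If-Match value still matches
    the stored document's ETag; otherwise signal 412 Precondition Failed."""
    if if_match != etag_for(stored_content):
        return 412, stored_content   # mid-air collision: reject the edit
    return 200, new_content

# A client loads the page and records the ETag it was served.
doc = "original wiki text"
tag = etag_for(doc)

# Someone else edits the document in the meantime.
doc = "edited by someone else"

# The first client's save now fails with 412 Precondition Failed.
status, doc = handle_save(doc, tag, "first client's edit")
```

The client can then re-fetch the document, merge its changes, and retry with the fresh ETag.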


Troubleshoot Azure Data Factory and Synapse pipelines


APPLIES TO: Azure Data Factory Azure Synapse Analytics

This article explores common troubleshooting methods for external control activities in Azure Data Factory and Synapse pipelines.

Connector and copy activity

For connector issues, such as an error encountered while using the copy activity, refer to the Troubleshoot Connectors article.

Azure Databricks

Error code: 3200

  • Message: Error 403.

  • Cause: The Azure Databricks access token has expired.

  • Recommendation: By default, the Azure Databricks access token is valid for 90 days. Create a new token and update the linked service.

Error code: 3201

  • Message:

  • Cause:

  • Recommendation: Specify the notebook path in the Databricks activity.


  • Message:

  • Cause:

  • Recommendation: Verify that the Databricks cluster exists.


  • Message:

  • Cause:

  • Recommendation: Specify either absolute paths for workspace-addressing schemes, or for files stored in the Databricks File System (DFS).


  • Message:

  • Cause:

  • Recommendation: Verify the linked service definition.


  • Message:

  • Cause:

  • Recommendation: Verify the linked service definition.


  • Message:

  • Cause:

  • Recommendation: Refer to the error message.


Error code: 3202

  • Message:

  • Cause:

  • Recommendation: Check all pipelines that use this Databricks workspace for their job creation rate. If pipelines launched too many Databricks runs in aggregate, migrate some pipelines to a new workspace.


  • Message:

  • Cause:

  • Recommendation: Inspect the pipeline JSON and ensure that all parameters in the notebook's baseParameters specify a nonempty value.


  • Message: SimpleUserContext{userId=..., [email protected], orgId=...}

  • Cause: The user who generated the access token isn't allowed to access the Databricks cluster specified in the linked service.

  • Recommendation: Ensure the user has the required permissions in the workspace.


  • Message:

  • Cause: The job has not initialized.

  • Recommendation: Wait and try again later.

Error code: 3203

  • Message:

  • Cause: The cluster was terminated. For interactive clusters, this issue might be a race condition.

  • Recommendation: To avoid this error, use job clusters.

Error code: 3204

  • Message:

  • Cause: Error messages indicate various issues, such as an unexpected cluster state or a specific activity. Often, no error message appears.

  • Recommendation: N/A

Error code: 3208

  • Message:

  • Cause: The network connection to the Databricks service was interrupted.

  • Recommendation: If you're using a self-hosted integration runtime, make sure that the network connection is reliable from the integration runtime nodes. If you're using Azure integration runtime, retry usually works.
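
The "retry usually works" advice for transient interruptions can be sketched as a retry loop with exponential backoff. This is an illustrative client-side pattern, not part of the Azure integration runtime; run_with_retries and flaky_submit are hypothetical names.

```python
import time

def run_with_retries(action, max_attempts=4, base_delay=1.0):
    """Retry a flaky call with exponential backoff, a sketch of the
    'retry usually works' advice for transient network interruptions."""
    for attempt in range(max_attempts):
        try:
            return action()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise                              # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Simulate a submission that drops the connection twice, then succeeds.
calls = {"n": 0}
def flaky_submit():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("connection to the service was interrupted")
    return "run-42"

result = run_with_retries(flaky_submit, base_delay=0.01)
```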

The Boolean run output starts coming as a string instead of the expected int

  • Symptoms: Your Boolean run output starts coming as a string (for example, "0" or "1") instead of the expected int (for example, 0 or 1).

    Screenshot of the Databricks pipeline.

    You noticed this change on September 28, 2021 at around 9 AM IST when your pipeline relying on this output started failing. No change was made on the pipeline, and the Boolean output had been coming as expected before the failure.

    Screenshot of the difference in the output.

  • Cause: This issue is caused by a recent change, which is by design. Previously, if the result was a number that starts with zero, Azure Data Factory would convert the number to an octal value, which is a bug. This number is always 0 or 1, so it never caused issues before the change. To fix the octal conversion, the string output is now passed from the notebook run as is.

  • Recommendation: Change the if condition to something like .
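
The octal pitfall, and why comparing the run output as a string sidesteps it, can be shown with a short sketch. The parse_legacy helper is hypothetical, standing in for the old number-parsing behavior described above.

```python
def parse_legacy(value: str) -> int:
    # Hypothetical stand-in for the old behavior: a numeric string with
    # a leading zero is interpreted as base 8 (octal).
    base = 8 if value.startswith("0") and len(value) > 1 else 10
    return int(value, base)

# "010" silently becomes 8 instead of 10 under octal interpretation...
assert parse_legacy("010") == 8
# ...while "0" and "1" are unaffected, which is why Boolean outputs
# never tripped over the bug before the change.
assert parse_legacy("0") == 0 and parse_legacy("1") == 1

# The safe pattern after the change: compare the run output as a string.
run_output = "1"            # the value now arrives as a string
is_true = run_output == "1"
```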

Azure Data Lake Analytics

The following table applies to U-SQL.

Error code: 2709

  • Message:

  • Cause: Incorrect Azure Active Directory (Azure AD) tenant.

  • Recommendation: Verify that the correct Azure AD tenant is configured in the linked service.


  • Message:

  • Cause: This error is caused by throttling on Data Lake Analytics.

  • Recommendation: Reduce the number of submitted jobs to Data Lake Analytics. Either change triggers and concurrency settings on activities, or increase the limits on Data Lake Analytics.


  • Message:

  • Cause: This error is caused by throttling on Data Lake Analytics.

  • Recommendation: Reduce the number of submitted jobs to Data Lake Analytics. Either change triggers and concurrency settings on activities, or increase the limits on Data Lake Analytics.
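
One way to reduce the number of concurrently submitted jobs is a client-side concurrency cap. The sketch below is illustrative only; JobGate is a hypothetical name, not a Data Lake Analytics or Data Factory API.

```python
import threading

class JobGate:
    """Cap how many jobs run at once; an illustrative client-side
    throttle mirroring the 'reduce submitted jobs' recommendation."""
    def __init__(self, max_concurrent: int):
        self._slots = threading.Semaphore(max_concurrent)
        self._lock = threading.Lock()
        self.active = 0
        self.peak = 0

    def submit(self, job):
        with self._slots:                 # blocks once the cap is reached
            with self._lock:
                self.active += 1
                self.peak = max(self.peak, self.active)
            try:
                return job()
            finally:
                with self._lock:
                    self.active -= 1

gate = JobGate(max_concurrent=2)
results = []
threads = [
    threading.Thread(target=lambda i=i: results.append(gate.submit(lambda: i)))
    for i in range(6)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All six jobs complete, but no more than two hold a slot at any moment.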

Error code: 2705

  • Message:

  • Cause: The service principal or certificate doesn't have access to the file in storage.

  • Recommendation: Verify that the service principal or certificate that the user provides for Data Lake Analytics jobs has access to both the Data Lake Analytics account, and the default Data Lake Storage instance from the root folder.

Error code: 2711

  • Message:

  • Cause: The service principal or certificate doesn't have access to the file in storage.

  • Recommendation: Verify that the service principal or certificate that the user provides for Data Lake Analytics jobs has access to both the Data Lake Analytics account, and the default Data Lake Storage instance from the root folder.


  • Message:

  • Cause: The path to the U-SQL file is wrong, or the linked service credentials don't have access.

  • Recommendation: Verify the path and credentials provided in the linked service.

Error code: 2704

  • Message:

  • Cause: The service principal or certificate doesn't have access to the file in storage.

  • Recommendation: Verify that the service principal or certificate that the user provides for Data Lake Analytics jobs has access to both the Data Lake Analytics account, and the default Data Lake Storage instance from the root folder.

Error code: 2707

  • Message:

  • Cause: The Data Lake Analytics account in the linked service is wrong.

  • Recommendation: Verify that the right account is provided.

Error code: 2703

  • Message:

  • Cause: The error is from Data Lake Analytics.

  • Recommendation: The job was submitted to Data Lake Analytics, and the script failed there. Investigate in Data Lake Analytics. In the portal, go to the Data Lake Analytics account and look for the job by using the Data Factory activity run ID (don't use the pipeline run ID). The job there provides more information about the error and will help you troubleshoot.

    If the resolution isn't clear, contact the Data Lake Analytics support team and provide the job Uniform Resource Locator (URL), which includes your account name and the job ID.

Azure functions

Error code: 3602

  • Message:

  • Cause: The HTTP method specified in the activity payload isn't supported by the Azure Function activity.

  • Recommendation: The supported HTTP methods are: PUT, POST, GET, DELETE, OPTIONS, HEAD, and TRACE.
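
A pipeline author could guard against this error by validating the method before building the activity payload. The sketch below uses the method list from the recommendation above; validate_method is a hypothetical helper, not an ADF API.

```python
# The HTTP methods the Azure Function activity accepts, per the text above.
SUPPORTED_METHODS = {"PUT", "POST", "GET", "DELETE", "OPTIONS", "HEAD", "TRACE"}

def validate_method(method: str) -> str:
    """Reject an unsupported HTTP method before building the activity
    payload (hypothetical helper mirroring the 3602 check)."""
    normalized = method.upper()
    if normalized not in SUPPORTED_METHODS:
        raise ValueError(f"unsupported HTTP method: {method}")
    return normalized
```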

Error code: 3603

  • Message:

  • Cause: The Azure function that was called didn't return a JSON payload in the response. The Azure Function activity in Azure Data Factory and Synapse pipelines supports only JSON response content.

  • Recommendation: Update the Azure function to return a valid JSON payload; for example, a C# function may return a serialized JSON object as its result.
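
To see what the activity expects, here is a sketch that checks a response body parses as a JSON object; ensure_json_payload is a hypothetical helper, and the assumption that the payload must be a JSON object (not an array or scalar) follows the cause described above.

```python
import json

def ensure_json_payload(body: str) -> dict:
    """Verify that a function's response body is a JSON object, the only
    response content the activity accepts (hypothetical helper)."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        raise ValueError("response body is not valid JSON") from None
    if not isinstance(payload, dict):
        raise ValueError("response JSON must be an object")
    return payload
```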

Error code: 3606

  • Message: Azure function activity missing function key.

  • Cause: The Azure function activity definition isn't complete.

  • Recommendation: Check that the input Azure function activity JSON definition has a property named .

Error code: 3607

  • Message:

  • Cause: The Azure function activity definition isn't complete.

  • Recommendation: Check that the input Azure function activity JSON definition has a property named .

Error code: 3608

  • Message:

  • Cause: The Azure function details in the activity definition may be incorrect.

  • Recommendation: Fix the Azure function details and try again.

Error code: 3609

  • Message:

  • Cause: The Azure function activity definition isn't complete.

  • Recommendation: Check that the input Azure Function activity JSON definition has a property named .

Error code: 3610

  • Message:

  • Cause: The function URL may be incorrect.

  • Recommendation: Verify that the value for in the activity JSON is correct and try again.

Error code: 3611

  • Message:

  • Cause: The Azure function activity definition isn't complete.

  • Recommendation: Check that the input Azure function activity JSON definition has a property named .

Error code: 3612

  • Message:

  • Cause: The Azure function activity definition isn't complete.

  • Recommendation: Check that the input Azure function activity JSON definition has linked service details.

Azure Machine Learning

Error code: 4101

  • Message:

  • Cause: Bad format or missing definition of property .

  • Recommendation: Check if the activity has the property defined with correct data.

Error code: 4110

  • Message:

  • Cause: The AzureMLExecutePipeline activity definition isn't complete.

  • Recommendation: Check that the input AzureMLExecutePipeline activity JSON definition has correct linked service details.

Error code: 4111

  • Message:

  • Cause: Incorrect activity definition.

  • Recommendation: Check that the input AzureMLExecutePipeline activity JSON definition has correct linked service details.

Error code: 4112

  • Message:

  • Cause: Bad format or missing definition of property '%propertyName;'.

  • Recommendation: Check if the linked service has the property defined with correct data.

Error code: 4121

  • Message:

  • Cause: The Credential used to access Azure Machine Learning has expired.

  • Recommendation: Verify that the credential is valid and retry.

Error code: 4122

  • Message:

  • Cause: The credential provided in Azure Machine Learning Linked Service is invalid, or doesn't have permission for the operation.

  • Recommendation: Verify that the credential in Linked Service is valid, and has permission to access Azure Machine Learning.

Error code: 4123

  • Message:

  • Cause: The properties of the activity such as are invalid for the Azure Machine Learning (ML) pipeline.

  • Recommendation: Check that the value of activity properties matches the expected payload of the published Azure ML pipeline specified in Linked Service.

Error code: 4124

  • Message:

  • Cause: The published Azure ML pipeline endpoint doesn't exist.

  • Recommendation: Verify that the published Azure Machine Learning pipeline endpoint specified in Linked Service exists in Azure Machine Learning.

Error code: 4125

  • Message:

  • Cause: There is a server error on Azure Machine Learning.

  • Recommendation: Retry later. Contact the Azure Machine Learning team for help if the issue continues.

Error code: 4126

  • Message:

  • Cause: The Azure ML pipeline run failed.

  • Recommendation: Check Azure Machine Learning for more error logs, then fix the ML pipeline.

Azure Synapse Analytics

Error code: 3250

  • Message:

  • Cause: Insufficient resources

  • Recommendation: Try ending the running jobs in the workspace, reducing the number of vCores requested, increasing the workspace quota, or using another workspace.

Error code: 3251

  • Message:

  • Cause: Insufficient resources

  • Recommendation: Try ending the running jobs in the pool, reducing the number of vCores requested, increasing the pool maximum size, or using another pool.

Error code: 3252

  • Message:

  • Cause: Insufficient vCores.

  • Recommendation: Try reducing the number of vCores requested or increasing your vCore quota. For more information, see Apache Spark core concepts.

Error code: 3253

  • Message:

  • Cause: Throttling threshold was reached.

  • Recommendation: Retry the request after a wait period.

Error code: 3254

  • Message:

  • Cause: Bad format or missing definition of property '%propertyName;'.

  • Recommendation: Check if the linked service has property '%propertyName;' defined with correct data.

Common

Error code: 2103

  • Message:

  • Cause: The required value for the property has not been provided.

  • Recommendation: Provide the value from the message and try again.

Error code: 2104

  • Message:

  • Cause: The provided property type isn't correct.

  • Recommendation: Fix the type of the property and try again.

Error code: 2105

  • Message:

  • Cause: The value for the property is invalid or isn't in the expected format.

  • Recommendation: Refer to the documentation for the property and verify that the value provided includes the correct format and type.

Error code: 2106

  • Message:

  • Cause: The connection string for the storage is invalid or has incorrect format.

  • Recommendation: Go to the Azure portal and find your storage, then copy-and-paste the connection string into your linked service and try again.

Error code: 2110

  • Message:

  • Cause: The linked service specified in the activity is incorrect.

  • Recommendation: Verify that the linked service type is one of the supported types for the activity. For example, the linked service type for HDI activities can be HDInsight or HDInsightOnDemand.
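
The supported-type check can be sketched as a simple lookup. Only the HDInsight row below comes from the text above; the mapping shape and helper name are illustrative, not the service's actual validation code.

```python
# Illustrative mapping of activity type to allowed linked service types;
# only the HDInsight entry is taken from the recommendation above.
SUPPORTED_LINKED_SERVICES = {
    "HDInsight": {"HDInsight", "HDInsightOnDemand"},
}

def check_linked_service(activity_type: str, linked_service_type: str) -> bool:
    # Return True when the linked service type is valid for the activity.
    return linked_service_type in SUPPORTED_LINKED_SERVICES.get(activity_type, set())
```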

Error code: 2111

  • Message:

  • Cause: The type of the provided property isn't correct.

  • Recommendation: Fix the property type and try again.

Error code: 2112

  • Message:

  • Cause: The cloud type is unsupported or couldn't be determined for storage from the EndpointSuffix.

  • Recommendation: Use storage in another cloud and try again.

Custom

The following table applies to Azure Batch.

Error code: 2500

  • Message:

  • Cause:

  • Recommendation: Ensure that the executable file exists. If the program started, verify that stdout.txt and stderr.txt were uploaded to the storage account. It's a good practice to include logs in your code for debugging.

Error code: 2501

  • Message:

  • Cause: Incorrect Batch access key or pool name.

  • Recommendation: Verify the pool name and the Batch access key in the linked service.

Error code: 2502

  • Message:

  • Cause: Incorrect storage account name or access key.

  • Recommendation: Verify the storage account name and the access key in the linked service.

Error code: 2504

  • Message:

  • Cause: Too many files in the of the custom activity. The total size of can't be more than 32,768 characters.

  • Recommendation: Remove unnecessary files, or zip them and add an unzip command to extract them.

    For example, use
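
The bundling step can be sketched with Python's standard zipfile module: collect the folder into one archive, ship that single resource file, and extract it on the node. File and folder names here are hypothetical, and the actual unzip command the activity would run is not shown in this text.

```python
import pathlib
import tempfile
import zipfile

def bundle_folder(folder: pathlib.Path, archive: pathlib.Path) -> int:
    """Zip every file under a folder into one archive so the custom
    activity ships a single resource file instead of many."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(folder.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(folder))
    with zipfile.ZipFile(archive) as zf:
        return len(zf.namelist())

# Demo: bundle a small folder, then extract it again, standing in for
# the unzip step the activity would run on the Batch node.
with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / "src").mkdir()
    (root / "src" / "a.txt").write_text("one")
    (root / "src" / "b.txt").write_text("two")
    archive = root / "pkg.zip"
    count = bundle_folder(root / "src", archive)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(root / "out")
    restored = (root / "out" / "a.txt").read_text()
```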

Error code: 2505

  • Message:

  • Cause: Custom activities support only storage accounts that use an access key.

  • Recommendation: Refer to the error description.

Error code: 2507

  • Message:

  • Cause: No files are in the storage account at the specified path.

  • Recommendation: The folder path must contain the executable files you want to run.

Error code: 2508

  • Message:

  • Cause: Multiple files of the same name are in different subfolders of folderPath.

  • Recommendation: Custom activities flatten folder structure under folderPath. If you need to preserve the folder structure, zip the files and extract them in Azure Batch by using an unzip command.

    For example, use

Error code: 2509

  • Message:

  • Cause: Batch URLs must be similar to

  • Recommendation: Refer to the error description.

Error code: 2510

  • Message:

  • Cause: The batch URL is invalid.

  • Recommendation: Verify the batch URL.

HDInsight

Error code: 206

  • Message:

  • Cause: There was an internal problem with the service that caused this error.

  • Recommendation: This issue could be transient. Retry your job after some time.

Error code: 207

  • Message:

  • Cause: There was an internal error while trying to determine the region from the primary storage account.

  • Recommendation: Try another storage.

Error code: 208

  • Message:

  • Cause: There was an internal error while trying to read the Service Principal or instantiating the MSI authentication.

  • Recommendation: Consider providing a service principal that has permissions to create an HDInsight cluster in the provided subscription, and try again. Verify that the managed identities are set up correctly.

Error code: 2300

  • Message:

  • Cause: The error message contains a message similar to . The provided cluster URI might be invalid.

  • Recommendation: Verify that the cluster hasn't been deleted, and that the provided URI is correct. When you open the URI in a browser, you should see the Ambari UI. If the cluster is in a virtual network, the URI should be the private URI. To open it, use a Virtual Machine (VM) that is part of the same virtual network.

    For more information, see Directly connect to Apache Hadoop services.


  • Cause: If the error message contains a message similar to , the job submission timed out.

  • Recommendation: The problem could be either general HDInsight connectivity or network connectivity. First confirm that the HDInsight Ambari UI is available from any browser. Then check that your credentials are still valid.

    If you're using a self-hosted integration runtime (IR), perform this step from the VM or machine where the self-hosted IR is installed. Then try submitting the job again.

    For more information, read Ambari Web UI.


  • Cause: When the error message contains a message similar to or , the credentials for HDInsight are incorrect or have expired.

  • Recommendation: Correct the credentials and redeploy the linked service. First verify that the credentials work on HDInsight by opening the cluster URI on any browser and trying to sign in. If the credentials don't work, you can reset them from the Azure portal.

    For an ESP cluster, reset the password through self-service password reset.


  • Cause: When the error message contains a message similar to , this error is returned by HDInsight service.

  • Recommendation: A 502 error often occurs when your Ambari Server process was shut down. You can restart the Ambari Services by rebooting the head node.

    1. Connect to one of your nodes on HDInsight using SSH.

    2. Identify your active head node host by running .

     3. Connect to your active head node using SSH, as the Ambari Server sits on the active head node.

    4. Reboot the active head node.

      For more information, look through the Azure HDInsight troubleshooting documentation. For example:


  • Cause: When the error message contains a message similar to or , too many jobs are being submitted to HDInsight at the same time.

  • Recommendation: Limit the number of concurrent jobs submitted to HDInsight. Refer to activity concurrency if the jobs are being submitted by the same activity. Change the triggers so the concurrent pipeline runs are spread out over time.

    Refer to HDInsight documentation to adjust as the error suggests.

Error code: 2301

  • Message:

  • Cause: HDInsight cluster or service has issues.

  • Recommendation: This error occurs when the service doesn't receive a response from HDInsight cluster when attempting to request the status of the running job. This issue might be on the cluster itself, or HDInsight service might have an outage.

    Refer to HDInsight troubleshooting documentation, or contact Microsoft support for further assistance.

Error code: 2302

  • Message:

  • Cause: The job was submitted to the HDI cluster and failed there.

  • Recommendation:

  1. Check Ambari UI:
    1. Ensure that all services are still running.
    2. From Ambari UI, check the alert section in your dashboard.
      1. For more information on alerts and resolutions to alerts, see Managing and Monitoring a Cluster.
    3. Review your YARN memory. If your YARN memory is high, the processing of your jobs may be delayed. If you do not have enough resources to accommodate your Spark application/job, scale up the cluster to ensure the cluster has enough memory and cores.
  2. Run a Sample test job.
    1. If you run the same job on HDInsight backend, check that it succeeded. For examples of sample runs, see Run the MapReduce examples included in HDInsight
  3. If the job still failed on HDInsight, check the application logs and collect the following information to provide to Support:
    1. Check whether the job was submitted to YARN. If the job wasn't submitted to YARN, use .
    2. If the application finished execution, collect the start time and end time of the YARN application. If the application didn't complete the execution, collect the start time/launch time.
    3. Check and collect the application log with .
    4. Check and collect the YARN Resource Manager logs under the directory.
    5. If these steps are not enough to resolve the issue, contact the Azure HDInsight team for support and provide the above logs and timestamps.

Error code: 2303

  • Message:

  • Cause: The job was submitted to the HDI cluster and failed there.

  • Recommendation:

  1. Check Ambari UI:
    1. Ensure that all services are still running.
    2. From Ambari UI, check the alert section in your dashboard.
      1. For more information on alerts and resolutions to alerts, see Managing and Monitoring a Cluster.
    3. Review your YARN memory. If your YARN memory is high, the processing of your jobs may be delayed. If you do not have enough resources to accommodate your Spark application/job, scale up the cluster to ensure the cluster has enough memory and cores.
  2. Run a Sample test job.
    1. If you run the same job on HDInsight backend, check that it succeeded. For examples of sample runs, see Run the MapReduce examples included in HDInsight
  3. If the job still failed on HDInsight, check the application logs and collect the following information to provide to Support:
    1. Check whether the job was submitted to YARN. If the job wasn't submitted to YARN, use .
    2. If the application finished execution, collect the start time and end time of the YARN application. If the application didn't complete the execution, collect the start time/launch time.
    3. Check and collect the application log with .
    4. Check and collect the YARN Resource Manager logs under the directory.
    5. If these steps are not enough to resolve the issue, contact the Azure HDInsight team for support and provide the above logs and timestamps.

Error code: 2304

  • Message:

  • Cause: The storage linked services used in the HDInsight (HDI) linked service or HDI activity, are configured with an MSI authentication that isn't supported.

  • Recommendation: Provide full connection strings for storage accounts used in the HDI linked service or HDI activity.

Error code: 2305

  • Message:

  • Cause: The connection information for the HDI cluster is incorrect, the provided user doesn't have permissions to perform the required action, or the HDInsight service has issues responding to requests from the service.

  • Recommendation: Verify that the user information is correct, and that the Ambari UI for the HDI cluster can be opened in a browser from the VM where the IR is installed (for a self-hosted IR), or can be opened from any machine (for Azure IR).

Error code: 2306

  • Message:

  • Cause: The JSON provided for the script action is invalid.

  • Recommendation: The error message should help to identify the issue. Fix the json configuration and try again.

    Check Azure HDInsight on-demand linked service for more information.

Error code: 2310

  • Message:

  • Cause: The service tried to create a batch on a Spark cluster using Livy API (livy/batch), but received an error.

  • Recommendation: Follow the error message to fix the issue. If there isn't enough information to get it resolved, contact the HDI team and provide them the batch ID and job ID, which can be found in the activity run Output in the service Monitoring page. To troubleshoot further, collect the full log of the batch job.

    For more information on how to collect the full log, see Get the full log of a batch job.

Error code: 2312

  • Message:

  • Cause: The job failed on the HDInsight Spark cluster.

  • Recommendation: Follow the links in the activity run Output in the service Monitoring page to troubleshoot the run on HDInsight Spark cluster. Contact HDInsight support team for further assistance.

    For more information on how to collect the full log, see Get the full log of a batch job.

Error code: 2313

  • Message:

  • Cause: The batch was deleted on the HDInsight Spark cluster.

  • Recommendation: Troubleshoot batches on the HDInsight Spark cluster. Contact HDInsight support for further assistance.

    For more information on how to collect the full log, see Get the full log of a batch job, and share the full log with HDInsight support for further assistance.

Error code: 2328

  • Message:

  • Cause: The error message should show the details of what went wrong.

  • Recommendation: The error message should help to troubleshoot the issue.

Error code: 2329

  • Message:

  • Cause: The error message should show the details of what went wrong.

  • Recommendation: The error message should help to troubleshoot the issue.

Error code: 2331

  • Message:

  • Cause: The provided file path is empty.

  • Recommendation: Provide a path for a file that exists.

Error code: 2340

  • Message:

  • Cause: The HDInsightOnDemand linked service doesn't support execution via SelfHosted IR.

  • Recommendation: Select an Azure IR and try again.

Error code: 2341

  • Message:

  • Cause: The provided URL isn't in correct format.

  • Recommendation: Fix the cluster URL and try again.

Error code: 2342

  • Message:

  • Cause: Either the provided credentials are wrong for the cluster, or there was a network configuration or connection issue, or the IR is having problems connecting to the cluster.

  • Recommendation:

    1. Verify that the credentials are correct by opening the HDInsight cluster's Ambari UI in a browser.

    2. If the cluster is in a Virtual Network (VNet) and a self-hosted IR is being used, the HDI URL must be the private URL in the VNet, and should have -int listed after the cluster name.

      For example, change https://<clustername>.azurehdinsight.net to https://<clustername>-int.azurehdinsight.net. Note the -int after <clustername>, but before .azurehdinsight.net.

    3. If the cluster is in VNet, the self-hosted IR is being used, and the private URL was used, and yet the connection still failed, then the VM where the IR is installed had problems connecting to the HDI.

      Connect to the VM where the IR is installed and open the Ambari UI in a browser. Use the private URL for the cluster. This connection should work from the browser. If it doesn't, contact HDInsight support team for further assistance.

    4. If self-hosted IR isn't being used, then the HDI cluster should be accessible publicly. Open the Ambari UI in a browser and check that it opens up. If there are any issues with the cluster or the services on it, contact HDInsight support team for assistance.

      The HDI cluster URL used in the linked service must be accessible for the IR (self-hosted or Azure) in order for the test connection to pass, and for runs to work. This state can be verified by opening the URL from a browser either from VM, or from any public machine.

Error code: 2343

  • Message:

  • Cause: Either the user name or the password is empty.

  • Recommendation: Provide the correct credentials to connect to HDI and try again.

Error code: 2345

  • Message:

  • Cause: The script file doesn't exist or the service couldn't connect to the location of the script.

  • Recommendation: Verify that the script exists, and that the associated linked service has the proper credentials for a connection.

Error code: 2346

  • Message:

  • Cause: The service tried to establish an Open Database Connectivity (ODBC) connection to the HDI cluster, and it failed with an error.

  • Recommendation:

    1. Confirm that you correctly set up your ODBC/Java Database Connectivity (JDBC) connection.
      1. For JDBC, if you're using the same virtual network, you can get this connection from:
      2. To ensure that you have the correct JDBC set up, see Query Apache Hive through the JDBC driver in HDInsight.
      3. For Open Database Connectivity (ODBC), see Tutorial: Query Apache Hive with ODBC and PowerShell to ensure that you have the correct setup.
    2. Verify that Hiveserver2, Hive Metastore, and Hiveserver2 Interactive are active and working.
    3. Check the Ambari user interface (UI):
      1. Ensure that all services are still running.
      2. From the Ambari UI, check the alert section in your dashboard.
        1. For more information on alerts and resolutions to alerts, see Managing and Monitoring a Cluster .
    4. If these steps are not enough to resolve the issue, contact the Azure HDInsight team.

Error code: 2347

  • Message:

  • Cause: The service submitted the hive script for execution to the HDI cluster via ODBC connection, and the script has failed on HDI.

  • Recommendation:

    1. Confirm that you correctly set up your ODBC/Java Database Connectivity (JDBC) connection.
      1. For JDBC, if you're using the same virtual network, you can get this connection from:
      2. To ensure that you have the correct JDBC set up, see Query Apache Hive through the JDBC driver in HDInsight.
      3. For Open Database Connectivity (ODBC), see Tutorial: Query Apache Hive with ODBC and PowerShell to ensure that you have the correct setup.
    2. Verify that Hiveserver2, Hive Metastore, and Hiveserver2 Interactive are active and working.
    3. Check the Ambari user interface (UI):
      1. Ensure that all services are still running.
      2. From the Ambari UI, check the alert section in your dashboard.
        1. For more information on alerts and resolutions to alerts, see Managing and Monitoring a Cluster .
    4. If these steps are not enough to resolve the issue, contact the Azure HDInsight team.

Error code: 2348

  • Message:

  • Cause: The storage linked service properties are not set correctly.

  • Recommendation: Only full connection strings are supported in the main storage linked service for HDI activities. Verify that you are not using MSI authorization or applications.

Error code: 2350

  • Message:

  • Cause: The credentials provided to connect to the storage where the files should be located are incorrect, or the files do not exist there.

  • Recommendation: This error occurs when the service prepares for HDI activities, and tries to copy files to the main storage before submitting the job to HDI. Check that files exist in the provided location, and that the storage connection is correct. As HDI activities do not support MSI authentication on storage accounts related to HDI activities, verify that those linked services have full keys or are using Azure Key Vault.

Error code: 2351

  • Message:

  • Cause: The file doesn't exist at specified path.

  • Recommendation: Check whether the file actually exists, and that the linked service with connection info pointing to this file has the correct credentials.

Error code: 2352

  • Message:

  • Cause: The file storage linked service properties are not set correctly.

  • Recommendation: Verify that the properties of the file storage linked service are properly configured.

Error code: 2353

  • Message:

  • Cause: The script storage linked service properties are not set correctly.

  • Recommendation: Verify that the properties of the script storage linked service are properly configured.

Error code: 2354

  • Message:

  • Cause: The storage linked service type isn't supported by the activity.

  • Recommendation: Verify that the selected linked service has one of the supported types for the activity. HDI activities support AzureBlobStorage and AzureBlobFSStorage linked services.

    For more information, read Compare storage options for use with Azure HDInsight clusters.

Error code: 2355

  • Message:

  • Cause: The provided value for is incorrect.

  • Recommendation: Verify that the provided value is similar to:

    Also verify that each variable appears in the list only once.

Error code: 2356

  • Message:

  • Cause: The provided value for is incorrect.

  • Recommendation: Verify that the provided value is similar to:

    Also verify that each variable appears in the list only once.

Error code: 2357

  • Message:

  • Cause: The provided credentials are incorrect.

  • Recommendation: Verify the connection information in the ADLS Gen 1 linked service, and verify that the test connection succeeds.

Error code: 2358

  • Message:

  • Cause: The provided value for the required property has an invalid format.

  • Recommendation: Update the value to the suggested range and try again.

Error code: 2359

  • Message:

  • Cause: The provided value for the property is invalid.

  • Recommendation: Update the value to be one of the suggestions and try again.

Error code: 2360

  • Message:

  • Cause: The provided connection string for the is invalid.

  • Recommendation: Update the value to a correct Azure SQL connection string and try again.

Error code: 2361

  • Message:

  • Cause: The cluster creation failed, and the service did not get an error back from the HDInsight service.

  • Recommendation: Open the Azure portal, try to find the HDI resource with the provided name, and check the provisioning status. Contact the HDInsight support team for further assistance.

Error code: 2362

  • Message:

  • Cause: The provided additional storage was not Azure Blob storage.

  • Recommendation: Provide an Azure Blob storage account as an additional storage for HDInsight on-demand linked service.

SSL error when linked service using HDInsight ESP cluster

  • Message:

  • Cause: The issue is most likely related to the System Trust Store.

  • Resolution: Navigate to the path Microsoft Integration Runtime\4.0\Shared\ODBC Drivers\Microsoft Hive ODBC Driver\lib and open DriverConfiguration64.exe to change the setting.

    Uncheck Use System Trust Store

HDI activity stuck in preparing for cluster

If the HDI activity is stuck in preparing for cluster, follow the guidelines below:

  1. Make sure the timeout is greater than the durations described below, wait for the execution to complete or time out, and then wait for the Time To Live (TTL) period before submitting new jobs.

    The maximum default time it takes to spin up a cluster is 2 hours, and any init script can add up to another 2 hours on top of that.

  2. Make sure the storage and HDI are provisioned in the same region.

  3. Make sure that the service principal used for accessing the HDI cluster is valid.

  4. If the issue still persists, as a workaround, delete the HDI linked service and re-create it with a new name.
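Putting the numbers from step 1 together, a reasonable lower bound for the activity timeout can be computed explicitly. The 1-hour job headroom below is an illustrative assumption, not a figure from this article:

```python
from datetime import timedelta

# Worst-case on-demand HDI provisioning per the guidance above:
cluster_spin_up = timedelta(hours=2)  # max default cluster creation time
init_scripts = timedelta(hours=2)     # init scripts can add up to 2 more hours

# The activity timeout should exceed the provisioning budget plus headroom
# for the job itself (1 hour here is an illustrative assumption).
job_headroom = timedelta(hours=1)
minimum_timeout = cluster_spin_up + init_scripts + job_headroom
print(minimum_timeout)  # 5:00:00
```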

Web Activity

Error code: 2128

  • Message:

  • Cause: This issue is due to network connectivity, a DNS failure, a failed server certificate validation, or a timeout.

  • Recommendation: Validate that the endpoint you are trying to hit is responding to requests. You may use tools like Fiddler/Postman/Netmon/Wireshark.

Error code: 2108

  • Message:

  • Cause: The request failed due to an underlying issue such as network connectivity, a DNS failure, a failed server certificate validation, or a timeout.

  • Recommendation: Use Fiddler/Postman/Netmon/Wireshark to validate the request.
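Before reaching for a capture tool, the first two failure classes (malformed URL, DNS failure) can be ruled out with a few lines of standard-library Python. This is a rough first-pass sketch, not a replacement for Fiddler or Wireshark; the function name is made up for illustration:

```python
import socket
from urllib.parse import urlparse

def diagnose_endpoint(url: str) -> str:
    """Rough first-pass diagnosis for a failing Web activity URL:
    checks the URL shape and DNS resolution. Certificate and timeout
    issues still need a tool like Fiddler, Postman, or Wireshark."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return "malformed URL"
    default_port = 443 if parsed.scheme == "https" else 80
    try:
        socket.getaddrinfo(parsed.hostname, parsed.port or default_port)
    except socket.gaierror:
        return "DNS failure"
    return "DNS ok; check connectivity, certificate, and timeout next"

print(diagnose_endpoint("not-a-url"))          # malformed URL
print(diagnose_endpoint("https://localhost/"))
```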

More details

To use Fiddler to create an HTTP session of the monitored web application:

  1. Download, install, and open Fiddler.

  2. If your web application uses HTTPS, go to Tools > Fiddler Options > HTTPS.

    1. In the HTTPS tab, select both Capture HTTPS CONNECTs and Decrypt HTTPS traffic.


  3. If your application uses TLS/SSL certificates, add the Fiddler certificate to your device.

    Go to: Tools > Fiddler Options > HTTPS > Actions > Export Root Certificate to Desktop.

  4. Turn off capturing by going to File > Capture Traffic. Or press F12.

  5. Clear your browser's cache so that all cached items are removed and must be downloaded again.

  6. Create a request:

    1. Select the Composer tab.

    2. Set the HTTP method and URL.

    3. If needed, add headers and a request body.

    4. Select Execute.

  7. Turn on traffic capturing again, and complete the problematic transaction on your page.

  8. Go to: File > Save > All Sessions.

For more information, see Getting started with Fiddler.

General

REST continuation token NULL error

Error message: {"token":null,"range":{"min":..}

Cause: When querying across multiple partitions or pages, the backend service returns the continuation token in JObject format with three properties: token, and the min and max key ranges, for instance, {"token":null,"range":{"min":"05C1E9AB0DAD76","max":"05C1E9CD673398"}}. Depending on the source data, a query page can return 0 results even though there is more data to fetch, which makes the inner token look missing.

Recommendation: When the continuationToken is non-null, as in the string {"token":null,"range":{"min":"05C1E9AB0DAD76","max":"05C1E9CD673398"}}, call the queryActivityRuns API again, passing the full continuation token string from the previous response. The remaining activities will be returned in subsequent pages of the query result. Ignore an empty array in a given page; as long as the full continuationToken value != null, you need to continue querying. For more details, refer to the REST API for pipeline run query.
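The paging rule above can be sketched as a loop that only stops when the full top-level token is null, even if a page comes back empty. `query_page` below is a hypothetical stand-in for the real queryActivityRuns call, and the stubbed pages are made-up data for illustration:

```python
import json

def query_all_activity_runs(query_page):
    """Drain a paged query that uses a continuation token.

    `query_page` stands in for the queryActivityRuns call: it takes the
    previous continuation token (or None) and returns (activities, token).
    Keep querying while the *full* token string is non-null, even when a
    page is empty, because the token is a JSON object like
    {"token":null,"range":{...}} whose inner "token" may be null while
    more key ranges remain.
    """
    runs, token = [], None
    while True:
        page, token = query_page(token)
        runs.extend(page)   # a page may legitimately be empty
        if token is None:   # only a null top-level token ends paging
            return runs

# Stubbed pages simulating the service: the empty middle page with a
# non-null continuation token must not stop the loop.
pages = [
    (["run1"], json.dumps({"token": None, "range": {"min": "05C1", "max": "05C2"}})),
    ([],       json.dumps({"token": None, "range": {"min": "05C2", "max": "05C3"}})),
    (["run2"], None),
]
it = iter(pages)
print(query_all_activity_runs(lambda tok: next(it)))  # ['run1', 'run2']
```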

Activity stuck issue

When you observe that an activity is running much longer than your normal runs with little to no progress, it may be stuck. You can try canceling it and retrying to see if that helps. If it's a copy activity, you can learn about performance monitoring and troubleshooting from Troubleshoot copy activity performance; if it's a data flow, see the Mapping data flows performance and tuning guide.

Payload is too large

Error message:

Cause: The payload for each activity run includes the activity configuration, the associated dataset(s), and linked service(s) configurations if any, and a small portion of system properties generated per activity type. The limit of such payload size is 896 KB as mentioned in the Azure limits documentation for Data Factory and Azure Synapse Analytics.

Recommendation: You hit this limit likely because you pass in one or more large parameter values from either upstream activity output or external, especially if you pass actual data across activities in control flow. Check if you can reduce the size of large parameter values, or tune your pipeline logic to avoid passing such values across activities and handle it inside the activity instead.
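To spot an oversized parameter before the service rejects it, you can estimate the serialized size of a payload locally. This is a rough illustration (the service's exact accounting may differ), and the payload shapes below are invented for the example:

```python
import json

PAYLOAD_LIMIT_BYTES = 896 * 1024  # per-activity limit from the Azure limits doc

def payload_size_bytes(activity_payload: dict) -> int:
    """Approximate the serialized size of an activity payload. A local
    estimate for spotting oversized parameters, not the service's exact
    accounting."""
    return len(json.dumps(activity_payload).encode("utf-8"))

# A parameter carrying ~1 MB of actual data blows the limit; passing a
# reference (for example, a blob path) keeps the payload small.
too_big = {"name": "CopyStep", "parameters": {"rows": "x" * 1_000_000}}
ok = {"name": "CopyStep", "parameters": {"rowsBlobPath": "container/rows.json"}}
print(payload_size_bytes(too_big) > PAYLOAD_LIMIT_BYTES)  # True
print(payload_size_bytes(ok) > PAYLOAD_LIMIT_BYTES)       # False
```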

Unsupported compression causes files to be corrupted

Symptoms: You try to unzip a file that is stored in a blob container. A single copy activity in a pipeline has a source with the compression type set to "deflate64" (or any unsupported type). This activity runs successfully and produces the text file contained in the zip file. However, there is a problem with the text in the file, and the file appears corrupted. When the file is unzipped locally, it is fine.

Cause: Your zip file is compressed with the "deflate64" algorithm, while the internal zip library of Azure Data Factory only supports "deflate". If the zip file is compressed by the Windows system and the overall file size exceeds a certain threshold, Windows uses "deflate64" by default, which is not supported in Azure Data Factory. If the file size is smaller, Windows uses "deflate" by default; third-party zip tools that let you specify the compression algorithm can also produce "deflate" archives.
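You can check an archive's compression method before the copy activity ever runs. The sketch below uses Python's zipfile module, which can read the method id of each entry (deflate64 is method 9) even though it cannot decompress deflate64 itself; the function name is made up for illustration:

```python
import io
import zipfile

# ZIP method ids: 0 = stored, 8 = deflate, 9 = deflate64 (Enhanced Deflate).
def unsupported_entries(zip_bytes: bytes) -> list[str]:
    """List entries whose compression method is neither 'stored' nor
    'deflate', such as deflate64, which the service can't decompress."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [info.filename for info in zf.infolist()
                if info.compress_type not in (zipfile.ZIP_STORED,
                                              zipfile.ZIP_DEFLATED)]

# Build an ordinary deflate zip in memory: nothing should be flagged.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("data.txt", "hello" * 1000)
print(unsupported_entries(buf.getvalue()))  # []
```

An archive produced by Windows Explorer from a large file would show up here with method 9 and should be re-zipped with a tool that supports plain "deflate".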

Next steps

For more troubleshooting help, try these resources:


Unable to update data with Laravel does not show any error

Your update route is using the wrong "verb" and URL. If you take a look at Laravel's Resource Controllers documentation, you can see the different actions and route names available for editing, updating, deleting, etc. when creating a "CRUD" controller.

You can see the route for an "update" action and its "verb".

Change your routes to

Or if you want to add a complete CRUD controller use the short form:

This will create all required routes in one convenient line of code. Use php artisan route:list to check all routes.

HTML forms only support GET and POST methods; Laravel uses the @method directive (a hidden _method field) in Blade to add the other verbs (PUT, PATCH, DELETE):

Edit:

your form uses id attributes on some inputs. Those values will not be sent along with your request. Here's an updated form:

  • id attributes have been swapped with name attributes.
  • The route uses Laravel's route model binding.
  • Removed the loop, since you only have one item to be edited.

edit.blade.php:

Since it's using route model binding, you can change your method to:

Laravel will resolve the Contact model for you.
