Using Data Mapper in WSO2 ESB

I followed the document below on using the Data Mapper in WSO2 ESB:
https://docs.wso2.com/display/ESB500/Using+Data+Mapper+Mediator+in+WSO2+ESB
When I run the application on the server in the WSO2 Developer Studio ESB Tool, the application is deployed.
But when I try to create the .car file (CompositeApplication -> right-click on pom.xml -> Export Configuration), I get the window below:
I filled in the required fields and clicked 'Finish'. Then I checked the "Export location" to see whether the file had been created, but no file was there. Please suggest a proper solution.
Thanks in advance.

In order to create the .car file, you should right-click on your composite application project and choose Export Composite Application Project. According to the screenshot, you are triggering a different exporter, the Visual Data Mapper one, instead.

Related

The document creation or update failed because of invalid reference

I am having trouble completing an exercise on the Microsoft Learn platform.
https://learn.microsoft.com/en-us/learn/modules/examine-components-of-modern-data-warehouse/5-exercise-azure-synapse
I have followed the instructions, but I get the following error:
Error message
Source settings
Does anyone know what's causing this, and how I can fix the issue?
Regards,
Anders
I ran into this issue too. Unless I missed a step in the Explore Azure Synapse Analytics exercise (https://learn.microsoft.com/en-us/learn/modules/examine-components-of-modern-data-warehouse/5-exercise-azure-synapse), the problem was that the source linked service was not being created in Steps 3 and 4 for some reason. I suggest doing the following in a different tab, so you can come back to the Ingest Data wizard later and pick up where you left off. I created the source linked service by going to my workspace and clicking on the "Manage" icon (the one with the wrench on the briefcase). After that, click on "Linked services" and then on "+ New". At this point, a number of connection icons should appear. Type "HTTP" in the search bar, click the icon, and click "Continue". This brings up the same form you saw in Step 3 of the exercise. Fill it out as directed, and you should end up with a source linked service connection of type "HTTP".
Screenshot of Buttons to Click on to Create Source Linked Service
As long as you've completed the rest of the steps of the exercise, going back to the Deployment step and proceeding should now allow it to complete successfully.
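If you'd rather script that workaround than click through Synapse Studio, a minimal sketch along these lines should create the HTTP linked service through the Synapse Artifacts REST API. This is not part of the exercise; the workspace name, linked service name, and source URL below are placeholders you'd replace with your own values.

    # Sketch: create the missing HTTP source linked service via the
    # Synapse Artifacts REST API instead of the Studio UI.
    # Assumes: pip install azure-identity requests, and an identity
    # (e.g. from `az login`) with rights on the workspace.
    import requests
    from azure.identity import DefaultAzureCredential

    workspace = "myworkspace"   # placeholder: your Synapse workspace name
    ls_name = "HttpSource"      # placeholder: any linked service name
    endpoint = f"https://{workspace}.dev.azuresynapse.net"

    # Token for the Synapse data-plane API.
    token = DefaultAzureCredential().get_token(
        "https://dev.azuresynapse.net/.default"
    ).token

    # Same fields the Studio form in Step 3 of the exercise asks for.
    body = {
        "properties": {
            "type": "HttpServer",
            "typeProperties": {
                "url": "https://example.com/data",  # placeholder: the exercise's source URL
                "authenticationType": "Anonymous",
            },
        }
    }

    resp = requests.put(
        f"{endpoint}/linkedservices/{ls_name}",
        params={"api-version": "2020-12-01"},
        headers={"Authorization": f"Bearer {token}"},
        json=body,
    )
    # Creation may complete asynchronously; 200/202 both indicate success here.
    print(resp.status_code, resp.text)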
I followed the same document, and when I tried it in my environment I got the same error.
Solution:
I was able to solve it by connecting with GitHub and troubleshooting the Git integration.
Note: Disconnect from the existing Git repo, reconnect to the same repo, and create a new Git branch. Then use Git to create more commits on top of that branch.
Sample dataset:
Deployment status:
Linked service:
For more detail, please refer to the links below:
https://learn.microsoft.com/en-us/azure/data-factory/source-control#connect-to-a-git-repository
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-delivery#best-practices-for-cicd

Unable to Modify Azure DevOps Project "Process". I just provisioned a free cloud server instance. I am an admin

I just created a new Azure DevOps cloud organization/instance. I am the project admin, but I am unable to update Organization Settings -> "Process". When I try to add a custom field to a work item or try to change the "States", it seems I don't have permission to do so.
Please help.
I figured it out. I created a Process that inherits from the default Process, then changed my Project to use the new Process. Poof, now I can make changes to my Project's Process, e.g. add new work item types.
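For what it's worth, the same fix can be scripted against the Azure DevOps REST API instead of clicking through Organization Settings. A rough sketch, assuming a personal access token with process-management scope; the organization, PAT, and process names below are placeholders:

    # Sketch: create an inherited Process via the Azure DevOps REST API,
    # then switch the Project over to it in Organization Settings.
    import requests

    org = "myorg"      # placeholder: your Azure DevOps organization
    pat = "..."        # placeholder: a PAT with process-management scope
    auth = ("", pat)   # a PAT goes in the password slot of basic auth
    base = f"https://dev.azure.com/{org}/_apis/work/processes"
    api = {"api-version": "7.1-preview.2"}

    # Find the system process (Agile/Scrum/CMMI) to inherit from.
    processes = requests.get(base, params=api, auth=auth).json()["value"]
    parent = next(p for p in processes if p["name"] == "Agile")

    # Create a new process that inherits from it; this one is editable.
    resp = requests.post(base, params=api, auth=auth, json={
        "name": "My Inherited Agile",            # placeholder name
        "parentProcessTypeId": parent["typeId"],
        "description": "Editable copy of Agile",
    })
    resp.raise_for_status()
    print(resp.json()["typeId"])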

Data Factory - Cannot connect to SQL Database only when triggered from Blob

Also on the Microsoft Docs site here
I have a Data Factory pipeline that should use a Copy Data activity to insert rows from a CSV file in a blob into Azure SQL.
If I run the pipeline by clicking the "Debug" button in the designer window, it all works great. However, if I trigger the pipeline by copying the sample CSV to the blob container, I get the following error:
    ErrorCode=SqlFailedToConnect,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot connect to SQL Database: '', Database: '', User: ''. Check the linked service configuration is correct, and make sure the SQL Database firewall allows the integration runtime to access
I have checked that the target SQL server has the "Allow Azure services and resources to access this server" option enabled.
Any ideas gratefully received!
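For anyone who wants to double-check that firewall setting from code rather than the portal: the toggle is backed by a special server-level rule. A minimal sketch with the azure-mgmt-sql package, placeholder names throughout:

    # Sketch: list server-level firewall rules to confirm the toggle.
    # "Allow Azure services..." corresponds to the special rule
    # "AllowAllWindowsAzureIps" with start/end IP 0.0.0.0.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.sql import SqlManagementClient

    client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
    for rule in client.firewall_rules.list_by_server("my-resource-group", "my-sql-server"):
        print(rule.name, rule.start_ip_address, rule.end_ip_address)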
The problem was that I was missing a connection string value in the "Override template parameters" section of the release pipeline.
This meant that, after deployment, the linked service had no connection string.
The key to understanding this was discovering the "Switch to live mode" button in the Data Factory pipeline editor view:
After clicking it, I was able to browse the status of the linked service as actually deployed, rather than as it appears in "development" mode.
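The same live-mode check can also be done without the portal: the deployed definition of the linked service can be fetched with the azure-mgmt-datafactory package. A minimal sketch, with placeholder resource names:

    # Sketch: fetch the linked service as actually deployed (live mode)
    # to check whether the release left it without a connection string.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
    ls = client.linked_services.get("my-resource-group", "my-data-factory", "AzureSqlSink")
    # An empty or template-placeholder connection string here reproduces
    # the SqlFailedToConnect error at trigger time.
    print(ls.properties)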

Visual Studio LightSwitch HTML project will not deploy database schema to Azure SQL Database

I ran into deployment issues, so I created a test app to prove out the deployment process. I've kept everything as "out of the box" as possible:
I've created a simple (one table and one screen) VS2013 LightSwitch HTML client app, but the deployment fails because it will not deploy the database schema.
I've created the Azure website and linked it to my Azure SQL Database, and the "Deploy database schema" checkbox is checked in the wizard.
It seems that my only option at the moment is to manually create the DB objects, which seems kind of absurd.
I have found a workaround to this issue.
It seems that the problem stems from not having the deployment credentials. What I have found is that if I attempt to deploy the server project, there is a drop-down list box that is supposed to be populated with available destinations. On the first attempt the list comes up blank, but if I proceed to publish, a message flashes up confirming that a new set of credentials has been downloaded. After that, I found I was able to publish the main project itself, database objects and all.
In short: make sure you are properly logged in, even if you have to log out and back in again, and that your deployment credentials are up to date.

Access PME Metadata in Pivot4J/Saiku

I am using Pentaho BI Server 5.0.1 with the Pivot4J plug-in for Pentaho and Pentaho Metadata Editor (PME) 5.1.0. I created a domain in PME with tables, relationships, security, etc., and published it to the Pentaho server. I can see this Domain as a datasource of type 'Metadata' when I go to 'Manage Datasources' in the Admin console. However, my Domain does not appear as a source when I try to create a new Pivot4J view.
I do see the Pentaho demos (SampleData and Steel Wheels) in Pivot4J, but I do not see my domain/metadata. I also edited the pentahoObjects.spring.xml file to remove security on metadata objects, as suggested in the Pentaho docs. The same applies to Saiku. What step am I missing here?