Looking for suggestions
We have a requirement to automate a test scenario where we get details from a poller-based service (it fetches data whenever an existing record is updated or a new record is created); once a record is picked up, we process it internally.
I would like to know how we can automate the process of picking up the record as part of the testing.
I am trying to create a dashboard app for my company that displays data from a few different sources that they use. I am starting with an in-house system that stores data in MSSQL. I'm struggling to decide how I can display real-time (or at least regularly updated) data based on this database.
I was thinking of writing a Node server to poll the company database and check for updates, then store a copy of the relevant tables in my own database. Then I'd create another Node server that computes metrics (average delivery time, turnover, etc.) from my database, and a frontend (probably React) to display these metrics nicely and trigger the backend logic whenever a user loads the page.
This is my first project, so I just need some guidance on whether this is the right way to go about it or whether I'm overcomplicating it.
Thanks
One solution is to implement a cron job in Node.js (or on your frontend side); it can then retrieve new data inserted into your database on a schedule.
You can refer to this link for more information about the cron job:
https://www.npmjs.com/package/cron
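Here is a minimal sketch using the cron package linked above. The schedule, table, and the fetchNewRows helper are illustrative assumptions; you would replace them with a real query against your MSSQL database (e.g. via the mssql package).

```typescript
import { CronJob } from 'cron';

// Hypothetical helper: query your MSSQL database for rows changed since
// the last poll (e.g. with the `mssql` package). Name and query are
// illustrative.
async function fetchNewRows(since: Date): Promise<unknown[]> {
  // SELECT * FROM SomeTable WHERE ModifiedAt > @since ...
  return [];
}

let lastPoll = new Date(0);

// Poll every five minutes; the callback copies fresh rows into your own DB.
const job = new CronJob('*/5 * * * *', async () => {
  const rows = await fetchNewRows(lastPoll);
  lastPoll = new Date();
  console.log(`Fetched ${rows.length} new/updated rows`);
  // ...store the rows, recompute metrics, etc...
});

job.start();
```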
If you are using MySQL, you can use the mysql-events listener: it watches a MySQL database and runs callbacks on matched events.
https://www.npmjs.com/package/mysql-events
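Roughly, usage looks like the sketch below, based on the package's README; it works by tailing the MySQL binary log, so binary logging must be enabled on the server. Treat the exact API as an assumption and verify it against the package docs; the database and table names are made up.

```typescript
// Based on the mysql-events README; verify the exact API against the
// package docs. The package reads the MySQL binlog, so binary logging
// must be enabled on the server.
const MySQLEvents = require('mysql-events');

const dsn = { host: 'localhost', user: 'root', password: 'secret' };
const watcher = MySQLEvents(dsn);

// Watch one database/table; the callback runs on matched
// INSERT/UPDATE/DELETE events.
watcher.add('myDb.orders', (oldRow: any, newRow: any, event: any) => {
  if (oldRow === null) {
    // row inserted: push it to the dashboard, enqueue processing, etc.
  } else if (newRow === null) {
    // row deleted
  } else {
    // row updated
  }
});
```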
I currently have an ETL job that reads a source table with over 1 million records and processes them sequentially into a target table. Both source and target are in the same schema, but in between there is an external REST endpoint call to post some data from the source table, and this job is performing very badly right now. Can someone please let me know some ways to improve performance, in terms of how to parallelize this, reducing the fetch size, etc., to cut the job's running time?
Check if your REST endpoint supports batching, and then implement that. Most APIs do these days. (In this case, you send multiple records in one JSON/XML payload to the endpoint.)
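For illustration, here's a rough sketch of the idea in TypeScript; the URL, payload shape, and batch size are assumptions, since every API defines its own batch format.

```typescript
// Hypothetical batched POST: instead of one HTTP call per record, send
// the records to the endpoint in chunks. The URL, chunk size, and payload
// shape are assumptions -- check what your API's batch format expects.
async function postInBatches(records: object[], batchSize = 100): Promise<void> {
  for (let i = 0; i < records.length; i += batchSize) {
    const batch = records.slice(i, i + batchSize);
    const res = await fetch('https://api.example.com/items/batch', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(batch), // one request carries many records
    });
    if (!res.ok) throw new Error(`Batch starting at ${i} failed: ${res.status}`);
  }
}
```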
Otherwise you simply need to run multiple copies of the REST client step. You should be able to get away with 8-10 at least, but check that you're not limited in some way at the other end.
Finally, if none of that helps, try concocting your own HTTP client in the Java class step (not the JavaScript one), and be sure that you only authenticate with the REST endpoint once, not on every request, by keeping the session open. I'm not 100% convinced the REST client step does this, and authentication is often the most expensive part.
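A hedged sketch of the authenticate-once idea (in TypeScript rather than the Java class step, just to show the shape): log in a single time, then reuse the token on every call. The endpoints and credentials are made up; note that Node's built-in fetch already reuses TCP connections via keep-alive by default.

```typescript
// Illustrative only (endpoints and credentials are made up): the point
// is to pay the authentication cost once, not per record.
async function postAll(records: object[]): Promise<void> {
  // Authenticate a single time up front.
  const auth = await fetch('https://api.example.com/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ user: 'etl', password: 'secret' }),
  });
  const { token } = await auth.json();

  for (const record of records) {
    // Reuse the same token for every request; no re-authentication.
    const res = await fetch('https://api.example.com/items', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify(record),
    });
    if (!res.ok) throw new Error(`Record failed: ${res.status}`);
  }
}
```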
I need to create an interaction between two SQL tables using BizTalk Server.
The simplest example is when a new record is added to one table. Is it possible to call BizTalk, transfer this row to the BizTalk solution, where the row will be processed, and then transfer it to another SQL table?
I found some information about BizTalk-to-SQL interaction, but I cannot find any information or examples about SQL-to-BizTalk interaction.
If it is possible, can you tell me how, or point me to some instructions?
Yes, it is possible. But it is not possible for us to give detailed instructions based on your question.
You would have to have a receive location that polls for records in that table, using either inline SQL in the receive location or a call to a stored procedure.
Then you would do whatever transformation etc. you needed using maps (possibly orchestrations), and have a send port that would insert the result into the other table.
How to Configure a port using the WCF-SQL adapter
However, as others have said in the comments, you have to consider whether BizTalk is the best fit for this. That depends on the frequency, what sort of processing is needed, how quickly after insert the record needs to be processed, the number of rows, and whether each row is a discrete message or part of a large group of records.
Some other possibilities to consider include:
An insert trigger on the first table, if you need records processed instantly
An SSIS package running on a schedule, if it is a large batch of records that needs to be processed on a scheduled basis
In a media management system, my task is to create workflow automation. Currently, I have built it using SQL Server triggers, with a UI in ASP.NET and jQuery.
For example:
When a new file enters the system, the trigger fires and updates the database metadata table with some data for that file.
Millions of assets pass through the system. Is it ideal to have triggers do this processing?
Is there a better way to create this automation?
Is there a "best practice" for this kind of work?
I'm having the same issue, and data enters my central asset database in several ways (which may differ from client to client).
So I also want to create an easily customizable workflow in the data layer (no other dependencies).
As the other people mention, triggers may affect the parent activity.
That is overcome by having the trigger write the action that should be performed to a queue table instead of executing it directly.
Example trigger: when Hardware.Status = 'Issue Work Order':
INSERT INTO Queue (Created, Task, Completed) VALUES (GETUTCDATE(), 'EXEC dbo.IssueWorkOrder 123', 0);
The insert of a record into your queue table avoids the problems highlighted in the other users' comments.
Then you build a scheduling tool (Hangfire, SQL jobs, or whatever) that executes the tasks in the queue in the order they were added.
Now, of course in practice it's not as simple as that. You will have to address the following:
What if the step fails?
Dependencies on previous steps having completed first
Multiple operators changing a record (the window between the job step being executed and another person updating the same record)
I guess #2 and #3 are an issue with any workflow engine / pipeline. To address them, a locking mechanism must be put in place, e.g. claiming a queue row atomically before executing it; a minimal sketch follows.
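This is a sketch only, with several assumptions: the Queue table above plus an Id identity column, and the Completed column used as a status flag (0 = pending, -1 = claimed, 1 = done). It uses the mssql Node package, but the same T-SQL works from C# or a SQL job.

```typescript
import sql from 'mssql';

// Minimal queue worker. UPDLOCK + READPAST lets several workers poll
// concurrently without ever claiming the same row. Schema assumptions:
// Queue(Id identity, Created, Task, Completed) with Completed as a
// status flag (0 = pending, -1 = claimed, 1 = done).
async function processNextTask(pool: sql.ConnectionPool): Promise<boolean> {
  // Claim the oldest pending row atomically.
  const claim = await pool.request().query(`
    WITH next AS (
      SELECT TOP (1) Id, Task, Completed
      FROM dbo.Queue WITH (UPDLOCK, READPAST)
      WHERE Completed = 0
      ORDER BY Created
    )
    UPDATE next SET Completed = -1
    OUTPUT inserted.Id, inserted.Task;`);

  if (claim.recordset.length === 0) return false; // queue is empty

  const { Id, Task } = claim.recordset[0];
  try {
    await pool.request().query(Task); // e.g. EXEC dbo.IssueWorkOrder 123
    await pool.request()
      .input('id', sql.Int, Id)
      .query('UPDATE dbo.Queue SET Completed = 1 WHERE Id = @id');
  } catch {
    // Step failed (point 1): release the claim so a later run retries it.
    await pool.request()
      .input('id', sql.Int, Id)
      .query('UPDATE dbo.Queue SET Completed = 0 WHERE Id = @id');
  }
  return true;
}
```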
I am using C#.NET; the DB is MS SQL 2008 R2.
I have a question that seems to have been asked a lot in the forums here. I want to use a database table as a queue... but the processing of these messages cannot be done from the database.
I have a table that stores the requests I get from a .NET component. I now have to read the data from this table and make HTTP calls to two web services. Based on the response received from the web services, the data gets archived or deleted.
I had a few specific questions:
1. How do I make sure that if I pick a record for processing and the HTTP call fails, I can move on to the next record and come back to the failed one at the end of the run?
2. Is there an alternative to using the database as a queue (like MSMQ, etc.), and which option is better?
3. I want to maintain an audit trail of the record status. Is creating a trigger to log the changes before the edit the best way to do it?
Regards
Leo
Use Service Broker!
I have been using it for a while and think it's a great thing, although it takes time to understand how it works. I used a book to learn it.
Service Broker solves:
concurrency
application state
... many, many other things