Hangfire: launch queued job for tests

I have this piece of code:
var jobId = jobClient.Enqueue<IComposerJob>(c => c.Compose(email.Id));
Inside Compose I enqueue another job:
var jobId = BackgroundJob.Enqueue<ISenderJob>(s => s.Send(emailId));
For the test I have a mock to verify the enqueue:
jobclient.Verify(x => x.Create( It.Is<Job>(job => job.Method.Name == "Compose" && job.Args[0].ToString() == composed.Id.ToString()), It.IsAny<EnqueuedState>()));
Here I want to actually launch the Compose job so I can check that the internal Send job gets created, and that is where I am stuck.
Thanks in advance.
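One way to do this (a rough sketch, not tested: it assumes Moq, and that the composer receives an IBackgroundJobClient through its constructor instead of calling the static BackgroundJob class; ComposerJob and the sut.ComposeAndQueue entry point are made-up names) is to capture the Job the mock receives and invoke it by hand:

Job capturedJob = null;
jobClient
    .Setup(x => x.Create(It.IsAny<Job>(), It.IsAny<IState>()))
    .Callback<Job, IState>((job, state) => capturedJob = job)
    .Returns("job-1");

// run the code under test, which enqueues Compose on the mocked client
sut.ComposeAndQueue(email.Id);

// execute the captured Compose job against a real composer whose own
// job client is a second mock, then verify it enqueued Send
var innerClient = new Mock<IBackgroundJobClient>();
var composer = new ComposerJob(innerClient.Object); // hypothetical IComposerJob implementation
capturedJob.Method.Invoke(composer, capturedJob.Args.ToArray());

innerClient.Verify(x => x.Create(
    It.Is<Job>(job => job.Method.Name == "Send" && job.Args[0].ToString() == email.Id.ToString()),
    It.IsAny<EnqueuedState>()));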

Related

Hangfire executes job twice

I am using Hangfire.AspNetCore 1.7.17 and Hangfire.MySqlStorage 2.0.3 for software that is currently in production.
Now and then, we get a report of jobs being executed twice, despite the usage of the [DisableConcurrentExecution] attribute with a timeout of 30 seconds.
It seems that as soon as those 30 seconds have passed, another worker picks up that same job again.
The code is fairly straightforward:
public async Task ProcessPicking(HttpRequest incomingRequest)
{
    var filePath = await StoreStreamAsync(incomingRequest, TriggerTypes.Picking);
    var picking = await XmlHelper.DeserializeFileAsync<Picking>(filePath);

    // Delay by 20 minutes so outbound-out gets the chance to be sent first
    BackgroundJob.Schedule(() => StartPicking(picking), TimeSpan.FromMinutes(20));
}

[TriggerAlarming("[IMPORTANT] Failed to parse picking message to **** object.")]
[DisableConcurrentExecution(30)]
public void StartPicking(Picking picking)
{
    var orderlinePickModels = picking.ToSalesOrderlinePickQuantityRequests().ToList();
    var orderlineStatusModels = orderlinePickModels.ToSalesOrderlineStatusRequests().ToList();
    var isParsed = DateTime.TryParse(picking.Order.UnloadingDate, out var unloadingDate);

    for (var i = 0; i < orderlinePickModels.Count; i++)
    {
        // Prevents bugs with usage of i in the background jobs
        var index = i;

        var id = BackgroundJob.Enqueue(() => SendSalesOrderlinePickQuantityRequest(orderlinePickModels[index], picking.EdiReference));
        BackgroundJob.ContinueJobWith(id, () => SendSalesOrderlineStatusRequest(
            orderlineStatusModels.First(x => x.SalesOrderlineId == orderlinePickModels[index].OrderlineId),
            picking.EdiReference, picking.Order.PrimaryReference, isParsed ? unloadingDate : DateTime.MinValue));
    }
}

[TriggerAlarming("[IMPORTANT] Failed to send order line pick quantity request to ****.")]
[AutomaticRetry(Attempts = 2)]
[DisableConcurrentExecution(30)]
public void SendSalesOrderlinePickQuantityRequest(SalesOrderlinePickQuantityRequest request, string ediReference)
{
    var audit = new AuditPostModel
    {
        Description = $"Finished job to send order line pick quantity request for item {request.Itemcode}, part of ediReference {ediReference}.",
        Object = request,
        Type = AuditTypes.SalesOrderlinePickQuantity
    };

    try
    {
        _logger.LogInformation($"Started job to send order line pick quantity request for item {request.Itemcode}.");

        var response = _service.SendSalesOrderLinePickQuantity(request).GetAwaiter().GetResult();
        audit.StatusCode = (int)response.StatusCode;
        if (!response.IsSuccessStatusCode) throw new TriggerRequestFailedException();

        audit.IsSuccessful = true;
        _logger.LogInformation("Successfully posted sales order line pick quantity request to ***** endpoint.");
    }
    finally
    {
        Audit(audit);
    }
}
It schedules the main task (StartPicking) that creates the objects required for the two subtasks:
Send picking details to customer
Send status update to customer
The first job is duplicated. Perhaps the second job as well, but that is not important enough to care about, as it just concerns a status update. However, the first job causes the customer to think that more items have been picked than in reality.
I would assume that Hangfire updates the state of a job to e.g. in progress, and checks this state before starting a job. Is my timeout on the disabled concurrent execution too low? Is it possible in this scenario that the database connection used to update the state takes about 30 seconds (to be fair, it is running on a slow server with ~8 GB RAM, 6 vCores), due to which the second worker is already picking up the job again?
Or is this a Hangfire-specific issue that must be tackled?
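For what it is worth, a detail that is easy to miss: in the core attribute the argument to [DisableConcurrentExecution] is the number of seconds a worker waits to acquire the distributed lock, not how long the job is protected; how long the underlying MySQL lock is actually held or expires depends on the storage implementation. A minimal sketch of the two knobs one could experiment with (the InvisibilityTimeout option name is an assumption to verify against Hangfire.MySqlStorage):

// Wait up to 10 minutes for the distributed lock instead of 30 seconds
[AutomaticRetry(Attempts = 2)]
[DisableConcurrentExecution(timeoutInSeconds: 600)]
public void SendSalesOrderlinePickQuantityRequest(SalesOrderlinePickQuantityRequest request, string ediReference) { /* ... */ }

// If the storage re-queues jobs it considers abandoned, the invisibility
// timeout is the setting to look at (option name assumed, check the package)
GlobalConfiguration.Configuration.UseStorage(
    new MySqlStorage(connectionString, new MySqlStorageOptions
    {
        InvisibilityTimeout = TimeSpan.FromMinutes(60)
    }));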

Old Sequelize migration did not run, how to re-run it

I have a database that is interfaced through Sequelize. I have multiple environments, and for some reason a specific migration did not run on production, but seemingly went through as expected on our development, staging and test databases. Since then, multiple migrations have been run. Is it possible to safely rerun a specific migration after it has been recognized as having been run?
The migration name is 20210316102540-delete_clothes_columns_expired_in_webshop_and_in_store.js
The migration has no down function, so maybe I would be able to run db:migrate:undo --name 20210316102540-delete_clothes_columns_expired_in_webshop_and_in_store.js and then db:migrate afterwards? Is that a safe approach?
20210316102540-delete_clothes_columns_expired_in_webshop_and_in_store.js
'use strict';

module.exports = {
  up: async (queryInterface, Sequelize) => {
    queryInterface.sequelize.transaction(async (t) => {
      await queryInterface.sequelize.query(`UPDATE clothes SET status = 'ACTIVE' WHERE in_store = TRUE AND status = 'IDLE';`, { transaction: t })
      await queryInterface.removeColumn('clothes', 'in_webshop', { transaction: t })
      await queryInterface.removeColumn('clothes', 'expired', { transaction: t })
      await queryInterface.removeColumn('clothes', 'in_store', { transaction: t })
    })
  },

  down: async (queryInterface, Sequelize) => {
    /**
     * Add reverting commands here.
     *
     * Example:
     * await queryInterface.dropTable('users');
     */
  }
};
To make sure it has been run:
production=# select * from "SequelizeMeta";
name
-----------------------------------------------------------------------------------------
...
20210316102540-delete_clothes_columns_expired_in_webshop_and_in_store.js
...
(11 rows)
I am still not sure why the migration had not run. But I ended up deleting the row in SequelizeMeta and rerunning it.
delete from "SequelizeMeta" where name = '20210316102540-delete_clothes_columns_expired_in_webshop_and_in_store.js';
sequelize-cli db:migrate

BigQuery PHP client library query result shows isComplete false, but I can confirm the query is successful

$queryResults = $this->bigQuery->runQuery($query, ['parameters' => ['id' => $id]]);
$info = $queryResults->info();
var_dump($info);
// var_dump($queryResults);
$isComplete = $queryResults->isComplete();
if ($isComplete) {
    exit("Insert. Done!");
}
The query is an "insert select" statement which I had run successfully on the BigQuery cloud console directly.
When I use the PHP client library here, the query finishes successfully too, and I can confirm it does the same thing as when I run the query directly on the Google Cloud console.
But the info of the query shows isComplete as false.
array(3) {
  ["kind"]=>
  string(22) "bigquery#queryResponse"
  ["jobReference"]=>
  array(2) {
    ["projectId"]=>
    string(26) "myprojectid"
    ["jobId"]=>
    string(31) "job_OHcckVSSwAI7pHXijmmUqK5H4XE"
  }
  ["jobComplete"]=>
  bool(false)
}
There are no errors reported from $queryResults->info(). How can I find out why it shows isComplete false?
Probably you still need to run $queryResults->reload() to update the job status, as in the docs:
$isComplete = $queryResults->isComplete();
while (!$isComplete) {
    sleep(1); // small delay between requests
    $queryResults->reload();
    $isComplete = $queryResults->isComplete();
}

EPiServer 9 - Add block to new page programmatically

I have found some suggestions on how to add a block to a page, but can't get it to work the way I want, so perhaps someone can help out.
What I want to do is have a scheduled job that reads through a file, creates new pages of a certain page type, and adds some blocks to a content property on each new page. The blocks' fields will be updated with data from the file that is read.
I have the following code in the scheduled job, but it fails at
repo.Save((IContent) newBlock, SaveAction.Publish);
giving the error
The page name must contain at least one visible character.
This is my code:
public override string Execute()
{
    //Call OnStatusChanged to periodically notify progress of job for manually started jobs
    OnStatusChanged(String.Format("Starting execution of {0}", this.GetType()));

    //Create Person page
    PageReference parent = PageReference.StartPage;

    //IContentRepository contentRepository = EPiServer.ServiceLocation.ServiceLocator.Current.GetInstance<IContentRepository>();
    //IContentTypeRepository contentTypeRepository = EPiServer.ServiceLocation.ServiceLocator.Current.GetInstance<IContentTypeRepository>();
    //var repository = EPiServer.ServiceLocation.ServiceLocator.Current.GetInstance<IContentRepository>();
    //var slaegtPage = repository.GetDefault<SlaegtPage>(ContentReference.StartPage);

    IContentRepository contentRepository = EPiServer.ServiceLocation.ServiceLocator.Current.GetInstance<IContentRepository>();
    IContentTypeRepository contentTypeRepository = EPiServer.ServiceLocation.ServiceLocator.Current.GetInstance<IContentTypeRepository>();

    SlaegtPage slaegtPage = contentRepository.GetDefault<SlaegtPage>(parent, contentTypeRepository.Load("SlaegtPage").ID);
    if (slaegtPage.MainContentArea == null)
    {
        slaegtPage.MainContentArea = new ContentArea();
    }
    slaegtPage.PageName = "001 Kim";

    //Create block
    var repo = ServiceLocator.Current.GetInstance<IContentRepository>();
    var newBlock = repo.GetDefault<SlaegtPersonBlock1>(ContentReference.GlobalBlockFolder);
    newBlock.PersonId = "001";
    newBlock.PersonName = "Kim";
    newBlock.PersonBirthdate = "01 jan 1901";
    repo.Save((IContent) newBlock, SaveAction.Publish);

    //Add block
    slaegtPage.MainContentArea.Items.Add(new ContentAreaItem
    {
        ContentLink = ((IContent) newBlock).ContentLink
    });

    slaegtPage.URLSegment = UrlSegment.CreateUrlSegment(slaegtPage);
    contentRepository.Save(slaegtPage, EPiServer.DataAccess.SaveAction.Publish);

    _stopSignaled = true;

    //For long running jobs periodically check if stop is signaled and if so stop execution
    if (_stopSignaled)
    {
        return "Stop of job was called";
    }

    return "Change to message that describes outcome of execution";
}
You can set the Name by
((IContent) newBlock).Name = "MyName";
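Applied to the code above, that means giving the block a non-empty name before it is published; blocks are saved as IContent, and the "page name" in the error message is that IContent.Name (the name format below is just an example):

var newBlock = repo.GetDefault<SlaegtPersonBlock1>(ContentReference.GlobalBlockFolder);
newBlock.PersonId = "001";
newBlock.PersonName = "Kim";
newBlock.PersonBirthdate = "01 jan 1901";

// Give the block content a visible name before publishing it
((IContent) newBlock).Name = newBlock.PersonId + " " + newBlock.PersonName;

repo.Save((IContent) newBlock, SaveAction.Publish);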

Get test outcome/result using TFS API

Using the TFS API, how can I get the outcome/result of a specific test case in a given test suite and plan?
With outcome/result I mean the value that tests are grouped by in MTM:
Passed, failed, active, in progress or blocked
This is how I do it.
To get the passed and total test counts I use an ITestRun:
run.PassedTests and run.TotalTests
To see the run state I use:
TestRunState.Aborted and TestRunState.InProgress
To see whether a test failed or is inconclusive I use:
TestOutcome.Failed or TestOutcome.Inconclusive
At first I only used ITestRun because its results are easy to read, but it lacks any kind of "failed" count there, which I find very disturbing.
So to send the right numbers to my test report that is mailed to the product owner, I do the following when talking to the TFS API:
var tfs = Connect(optionsModel.CollectionUri);
var tcm = GetService<ITestManagementService>(tfs);
var wis = GetService<WorkItemStore>(tfs);

_testProject = tcm.GetTeamProject(optionsModel.TeamProjectName);
var plan = _testProject.TestPlans.Find(optionsModel.PlanId);
if (plan == null)
    throw new Exception("Could not find plan with that id.");

var run = plan.CreateTestRun(true);

var testSuite = _testProject.TestSuites.Find(optionsModel.SuiteId);
if (testSuite == null)
    throw new Exception("Could not find suite with that id.");

AddTestCasesBySuite(testSuite, optionsModel.ConfigId, plan, run);

run.Title = optionsModel.Title;
run.Save();

var failedTests = run.QueryResultsByOutcome(TestOutcome.Failed).Count;
var inconclusiveTests = run.QueryResultsByOutcome(TestOutcome.Inconclusive).Count;
Hope this helps.
optionsModel is the information I take in from the user running the tests.
I was trying to do the same thing, but using the REST API.
Just in case it helps someone, I managed to do that by obtaining the test points from the suite:
https://dev.azure.com/{organization}/{project}/_apis/testplan/Plans/{planId}/Suites/{suiteId}/TestPoint?api-version=5.1-preview.2
More info: https://learn.microsoft.com/en-us/rest/api/azure/devops/testplan/test%20point/get%20points%20list?view=azure-devops-rest-5.1
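For completeness, a rough sketch of calling that endpoint from C# with a personal access token (the basic-auth header trick is standard for Azure DevOps; the exact shape of the JSON, with each test point carrying its latest outcome, should be checked against the docs linked above):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static async Task<string> GetSuiteTestPointsAsync(string organization, string project, int planId, int suiteId, string pat)
{
    using var client = new HttpClient();

    // Azure DevOps accepts a personal access token as basic auth with an empty user name
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
        "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

    var url = $"https://dev.azure.com/{organization}/{project}/_apis/testplan/Plans/{planId}/Suites/{suiteId}/TestPoint?api-version=5.1-preview.2";

    // Returns the raw JSON listing every test point in the suite, including its outcome
    return await client.GetStringAsync(url);
}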
You can use ITestManagementService and a TestPlan query to get the result of a specific test plan:
var server = new Uri("http://servername:8080/tfs/collectionname");
var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(server);
var service = tfs.GetService<ITestManagementService>();
var testProject = service.GetTeamProject(teamProject);

// Find a single plan by name...
var myPlan = testProject.TestPlans.Query("SELECT * FROM TestPlan").Where(tp => tp.Name == YOURTESTPLANNAME).FirstOrDefault();

// ...or walk every plan, suite and test case
ITestPlanCollection plans = testProject.TestPlans.Query("Select * From TestPlan");
foreach (ITestPlan plan in plans)
{
    if (plan.RootSuite != null && plan.RootSuite.Entries.Count > 0)
    {
        foreach (ITestSuiteEntry suiteEntry in plan.RootSuite.Entries)
        {
            var suite = suiteEntry.TestSuite as IStaticTestSuite;
            if (suite != null)
            {
                ITestSuiteEntryCollection suiteEntries = suite.TestCases;
                foreach (ITestSuiteEntry testcase in suiteEntries)
                {
                    // Write code to get the test case
                }
            }
        }
    }
}
I hope this may help you.
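If the goal at the "// Write code to get the test case" placeholder is the outcome per test case, one option is to query the plan's test points; a sketch (QueryTestPoints and MostRecentResultOutcome come from the TestManagement client API, but the exact query is worth double-checking):

// For a given suite, read the most recent outcome of each test point
var points = plan.QueryTestPoints("SELECT * FROM TestPoint WHERE SuiteId = " + suite.Id);
foreach (ITestPoint point in points)
{
    // Outcome values include Passed, Failed, Blocked, Inconclusive, None
    Console.WriteLine("Test case {0}: {1}", point.TestCaseId, point.MostRecentResultOutcome);
}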