I have found some suggestions on how to add a block to a page, but can't get it to work the way I want, so perhaps someone can help out.
What I want to do is have a scheduled job that reads through a file, creating new pages of a certain page type and adding some blocks to a content property on each new page. The block's fields will be populated with data from the file being read.
I have the following code in the scheduled job, but it fails at
repo.Save((IContent) newBlock, SaveAction.Publish);
giving the error
The page name must contain at least one visible character.
This is my code:
public override string Execute()
{
    // Call OnStatusChanged to periodically notify progress of job for manually started jobs
    OnStatusChanged(String.Format("Starting execution of {0}", this.GetType()));

    // Create Person page
    PageReference parent = PageReference.StartPage;
    IContentRepository contentRepository = EPiServer.ServiceLocation.ServiceLocator.Current.GetInstance<IContentRepository>();
    IContentTypeRepository contentTypeRepository = EPiServer.ServiceLocation.ServiceLocator.Current.GetInstance<IContentTypeRepository>();
    SlaegtPage slaegtPage = contentRepository.GetDefault<SlaegtPage>(parent, contentTypeRepository.Load("SlaegtPage").ID);
    if (slaegtPage.MainContentArea == null)
    {
        slaegtPage.MainContentArea = new ContentArea();
    }
    slaegtPage.PageName = "001 Kim";

    // Create block
    var repo = ServiceLocator.Current.GetInstance<IContentRepository>();
    var newBlock = repo.GetDefault<SlaegtPersonBlock1>(ContentReference.GlobalBlockFolder);
    newBlock.PersonId = "001";
    newBlock.PersonName = "Kim";
    newBlock.PersonBirthdate = "01 jan 1901";
    repo.Save((IContent) newBlock, SaveAction.Publish);

    // Add block to the page's content area
    slaegtPage.MainContentArea.Items.Add(new ContentAreaItem
    {
        ContentLink = ((IContent) newBlock).ContentLink
    });
    slaegtPage.URLSegment = UrlSegment.CreateUrlSegment(slaegtPage);
    contentRepository.Save(slaegtPage, EPiServer.DataAccess.SaveAction.Publish);

    _stopSignaled = true;

    // For long running jobs periodically check if stop is signaled and if so stop execution
    if (_stopSignaled)
    {
        return "Stop of job was called";
    }

    return "Change to message that describes outcome of execution";
}
The block fails to save because its Name is empty; a block is also IContent, and every content item needs a non-empty name. You can set the Name by
((IContent) newBlock).Name = "MyName";
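For instance, a minimal sketch based on the code above (set the name before calling Save):

    var newBlock = repo.GetDefault<SlaegtPersonBlock1>(ContentReference.GlobalBlockFolder);
    ((IContent) newBlock).Name = "001 Kim"; // any name with at least one visible character
    newBlock.PersonId = "001";
    newBlock.PersonName = "Kim";
    newBlock.PersonBirthdate = "01 jan 1901";
    repo.Save((IContent) newBlock, SaveAction.Publish);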
I am using Hangfire.AspNetCore 1.7.17 and Hangfire.MySqlStorage 2.0.3 for software that is currently in production.
Now and then, we get a report of jobs being executed twice, despite the usage of the [DisableConcurrentExecution] attribute with a timeout of 30 seconds.
It seems that as soon as those 30 seconds have passed, another worker picks up that same job again.
The code is fairly straightforward:
public async Task ProcessPicking(HttpRequest incomingRequest)
{
    var filePath = await StoreStreamAsync(incomingRequest, TriggerTypes.Picking);
    var picking = await XmlHelper.DeserializeFileAsync<Picking>(filePath);
    // delay by 20 minutes so outbound-out gets the chance to be sent first
    BackgroundJob.Schedule(() => StartPicking(picking), TimeSpan.FromMinutes(20));
}
[TriggerAlarming("[IMPORTANT] Failed to parse picking message to **** object.")]
[DisableConcurrentExecution(30)]
public void StartPicking(Picking picking)
{
var orderlinePickModels = picking.ToSalesOrderlinePickQuantityRequests().ToList();
var orderlineStatusModels = orderlinePickModels.ToSalesOrderlineStatusRequests().ToList();
var isParsed = DateTime.TryParse(picking.Order.UnloadingDate, out var unloadingDate);
for (var i = 0; i < orderlinePickModels.Count; i++)
{
// prevents bugs with usage of i in the background jobs
var index = i;
var id = BackgroundJob.Enqueue(() => SendSalesOrderlinePickQuantityRequest(orderlinePickModels[index], picking.EdiReference));
BackgroundJob.ContinueJobWith(id, () => SendSalesOrderlineStatusRequest(
orderlineStatusModels.First(x=>x.SalesOrderlineId== orderlinePickModels[index].OrderlineId),
picking.EdiReference, picking.Order.PrimaryReference, isParsed ? unloadingDate : DateTime.MinValue));
}
}
[TriggerAlarming("[IMPORTANT] Failed to send order line pick quantity request to ****.")]
[AutomaticRetry(Attempts = 2)]
[DisableConcurrentExecution(30)]
public void SendSalesOrderlinePickQuantityRequest(SalesOrderlinePickQuantityRequest request, string ediReference)
{
var audit = new AuditPostModel
{
Description = $"Finished job to send order line pick quantity request for item {request.Itemcode}, part of ediReference {ediReference}.",
Object = request,
Type = AuditTypes.SalesOrderlinePickQuantity
};
try
{
_logger.LogInformation($"Started job to send order line pick quantity request for item {request.Itemcode}.");
var response = _service.SendSalesOrderLinePickQuantity(request).GetAwaiter().GetResult();
audit.StatusCode = (int)response.StatusCode;
if (!response.IsSuccessStatusCode) throw new TriggerRequestFailedException();
audit.IsSuccessful = true;
_logger.LogInformation("Successfully posted sales order line pick quantity request to ***** endpoint.");
}
finally
{
Audit(audit);
}
}
It schedules the main task (StartPicking), which creates the objects required for the two subtasks:
Send picking details to the customer
Send a status update to the customer
The first job is the one being duplicated. Perhaps the second one is as well, but that is not important enough to care about, as it only concerns a status update. The first job, however, causes the customer to think that more items have been picked than in reality.
I would assume that Hangfire updates the state of a job to e.g. "in progress", and checks this state before starting a job. Is my timeout on [DisableConcurrentExecution] too low? Is it possible in this scenario that the database call that updates the state takes about 30 seconds (to be fair, it is running on a slow server with ~8 GB RAM and 6 vCores), so that a second worker picks the job up again?
Or is this a Hangfire-specific issue that must be tackled differently?
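A possibly relevant knob is the storage's invisibility timeout: a fetched job that is not confirmed within that window becomes eligible for other workers again, which would match the observed pattern if it were set around 30 seconds. A minimal sketch, assuming Hangfire.MySqlStorage's MySqlStorageOptions exposes an InvisibilityTimeout property (verify the name and default against the version in use):

    // Sketch only: lengthen the window during which a fetched job stays hidden
    // from other workers, so a slow state update does not cause a re-fetch.
    GlobalConfiguration.Configuration.UseStorage(
        new MySqlStorage(connectionString, new MySqlStorageOptions
        {
            InvisibilityTimeout = TimeSpan.FromMinutes(30) // assumption: availability varies by version
        }));

Note that the 30 in [DisableConcurrentExecution(30)] is only how long a second invocation waits for the distributed lock, not a guard against the storage handing the same job out again.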
I am trying to mimic the autosave function in GMS v3 so that I can use it in versions 1 and 2. I would like to first acknowledge that the bulk of the script originates from Dr Bernhard Schaffer's "How to script... Digital Micrograph Scripting Handbook". I have modified it a bit so that any new image recorded by the camera is autosaved to file. However, I ran into a problem: if I click on the live-view image and move it around, or use live-FFT, the live-view image or the FFT image gets saved as well. One idea I have is to use tag-group information such as "Acquisition:Parameters:Parameter Set Name", because for live view or live-FFT this would be either Search or Focus mode. Another idea is to use the document ID, e.g. iDocID = idoc.ImageDocumentGetID(), to locate the ID of the live image. However, I am then clueless about how to use this information to exclude those images from autosaving. Can anyone point me to how I can proceed with this script?
Below is the script
Class PeriodicAutoSave : Object
{
    Number output

    PeriodicAutoSave(Object self) Result("\n Object ID " + self.ScriptObjectGetID() + " created.")
    ~PeriodicAutoSave(Object self) Result("\n Object ID " + self.ScriptObjectGetID() + " destroyed")

    Void Init2(Object self, Number op) output = op

    Void AutoSave_SaveAll(Object self)
    {
        String path, name, targettype, targettype1, area, mag, mode, search, result1
        ImageDocument idoc
        Number nr_idoc, count, index_i, index, iDocID, iDocID_search
        path = "c:\\path\\"
        name = "test"
        targettype = "Gatan Format (*.dm4)" // was '==' (a comparison); assignment intended
        targettype1 = "dm4"
        If (output) Result("\n AutoSave...")
        nr_idoc = CountImageDocuments()
        For (count = 1; count < nr_idoc; count++)
        {
            idoc = GetImageDocument(count) // imagedocument
            index = 1 // user decides the index to start with
            index_i = nr_idoc - index
            If (idoc.ImageDocumentIsDirty())
            {
                idoc = GetFrontImageDocument()
                iDocID = idoc.ImageDocumentGetID()
                TagGroup tg = ImageGetTagGroup(idoc.ImageDocumentGetImage(0)) // cannot declare an 'img' for this line as it will prompt an error?
                tg.TagGroupGetTagAsString("Microscope Info:Formatted Indicated Mag", mag)
                Try
                {
                    idoc.ImageDocumentSaveToFile("Gatan Format", path + index_i + "-" + name + "-" + mag + ".dm4")
                    idoc.ImageDocumentSetName(index_i + "-" + name + "-" + mag + ".dm4")
                    idoc.ImageDocumentClean()
                    If (output) Result("\n\t saving: " + idoc.ImageDocumentGetCurrentFile())
                }
                Catch
                {
                    Result("\n image cannot be saved at the moment: " + GetExceptionString())
                    Break
                }
                Result("\n Continue autosave...")
            }
        }
    }
}

Void LaunchAutoSave()
{
    Object obj = Alloc(PeriodicAutoSave)
    obj.Init2(2)
    Number task_id = obj.AddMainThreadPeriodicTask("AutoSave_SaveAll", 6)
    // Sleep(10)
    While (!ShiftDown()) 1 == 2
    RemoveMainThreadTask(task_id)
}

LaunchAutoSave()
Thank you very much for your pointers! I have tried them and they work very well with my script. As 'TagGroupDoesTagExist' only checks for the tag group, I modified it further to filter on the tag values I care about, e.g. "Search" or "Focus", and it seems to work well. My modification to your code is below:
If (idoc.ImageDocumentIsDirty())
{
    // now find out whether a filter condition applies and skip if it does
    skip = 0
    TagGroup tg = idoc.ImageDocumentGetImage(0).ImageGetTagGroup()
    tg.TagGroupGetTagAsString("Microscope Info:Formatted Indicated Mag", mag)
    tg.TagGroupGetTagAsString("Acquisition:Parameters:Parameter Set Name", mode)
    skip = tg.TagGroupDoesTagExist("Acquisition:Parameters:Parameter Set Name")
    if (skip && (mode == "Search" || mode == "Focus")) continue
Your idea of filtering is a good one, but there is something strange with your for loop.
in
nr_idoc = CountImageDocuments()
For (count = 1; count < nr_idoc; count++)
{
    idoc = GetImageDocument(count) // imagedocument
you iterate over all currently open imageDocuments (except the first one!?) and get them one by one, but then in
If (idoc.ImageDocumentIsDirty())
{
    idoc = getfrontimagedocument()
you actually fetch the front-most (selected) document each time instead. Why are you doing this?
Why not go with:
number nr_idoc = CountImageDocuments()
for (number count = 0; count < nr_idoc; count++)
{
    imagedocument idoc = GetImageDocument(count)
    If (idoc.ImageDocumentIsDirty())
    {
        // now find out whether a filter condition applies and skip if it does
        number skip = 0
        TagGroup tg = idoc.ImageDocumentGetImage(0).ImageGetTagGroup()
        skip = tg.TagGroupDoesTagExist("Microscope Info:Formatted Indicated Mag")
        if (skip) continue
        // do saving
    }
}
I'm manually creating a Job using Kettle from Java, but I get the error message Couldn't find starting point in this job.
KettleEnvironment.init();
JobMeta jobMeta = new JobMeta();
JobEntrySpecial start = new JobEntrySpecial("START", true, false);
start.setStart(true);
JobEntryCopy startEntry = new JobEntryCopy(start);
jobMeta.addJobEntry(startEntry);
JobEntryTrans jet1 = new JobEntryTrans("first");
Trans trans1 = jet1.getTrans();
jet1.setFileName("file.ktr");
JobEntryCopy jc1 = new JobEntryCopy(jet1);
jobMeta.addJobEntry(jc1);
jobMeta.addJobHop(new JobHopMeta(startEntry, jc1));
Job job = new Job(null, jobMeta);
job.setInteractive(true);
job.start();
I've discovered that I was missing
job.setStartJobEntryCopy(startEntry);
The org.pentaho.di.job.JobMeta class has a findJobEntry method. You can use it to look up the entry point called "START". This is how the original Kettle (PDI) source does it:
private JobMeta jobMeta;
....
// Where do we start?
JobEntryCopy startpoint;
....
if ( startJobEntryCopy == null ) {
    startpoint = jobMeta.findJobEntry( JobMeta.STRING_SPECIAL_START, 0, false );
// and then
JobEntrySpecial jes = (JobEntrySpecial) startpoint.getEntry();
I am learning to use the Rx extensions for a Silverlight 4 app I am working on. I created a sample app to nail down the process and I cannot get it to return anything.
Here is the main code:
private IObservable<Location> GetGPSCoordinates(string Address1)
{
    var gsc = new GeocodeServiceClient("BasicHttpBinding_IGeocodeService") as IGeocodeService;
    Location returnLocation = new Location();
    GeocodeResponse gcResp = new GeocodeResponse();
    GeocodeRequest gcr = new GeocodeRequest();
    gcr.Credentials = new Credentials();
    gcr.Credentials.ApplicationId = APP_ID2;
    gcr.Query = Address1;
    var myFunc = Observable.FromAsyncPattern<GeocodeRequest, GeocodeResponse>(gsc.BeginGeocode, gsc.EndGeocode);
    gcResp = myFunc(gcr) as GeocodeResponse;
    if (gcResp.Results.Count > 0 && gcResp.Results[0].Locations.Count > 0)
    {
        returnLocation = gcResp.Results[0].Locations[0];
    }
    return returnLocation as IObservable<Location>;
}
gcResp comes back as null. Any thoughts or suggestions would be greatly appreciated.
The observable source you are subscribing to is asynchronous, so you can't access the result immediately after subscribing. You need to access the result in the subscription.
Better yet, don't subscribe at all and simply compose the response:
private IObservable<Location> GetGPSCoordinates(string Address1)
{
    IGeocodeService gsc =
        new GeocodeServiceClient("BasicHttpBinding_IGeocodeService");
    GeocodeRequest gcr = new GeocodeRequest();
    gcr.Credentials = new Credentials();
    gcr.Credentials.ApplicationId = APP_ID2;
    gcr.Query = Address1;
    var factory = Observable.FromAsyncPattern<GeocodeRequest, GeocodeResponse>(
        gsc.BeginGeocode, gsc.EndGeocode);
    return factory(gcr)
        .Where(response => response.Results.Count > 0 &&
                           response.Results[0].Locations.Count > 0)
        .Select(response => response.Results[0].Locations[0]);
}
If you only need the first valid value (the location of the address is unlikely to change), then add a .Take(1) between the Where and Select.
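For instance, a minimal sketch of that placement:

    return factory(gcr)
        .Where(response => response.Results.Count > 0 &&
                           response.Results[0].Locations.Count > 0)
        .Take(1) // complete after the first valid response
        .Select(response => response.Results[0].Locations[0]);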
Edit: If you want to specifically handle the address not being found, you can either return results and have the consumer deal with it or you can return an Exception and provide an OnError handler when subscribing. If you're thinking of doing the latter, you would use SelectMany:
return factory(gcr)
    .SelectMany(response => (response.Results.Count > 0 &&
                             response.Results[0].Locations.Count > 0)
        ? Observable.Return(response.Results[0].Locations[0])
        : Observable.Throw<Location>(new AddressNotFoundException())
    );
If you expand out the type of myFunc you'll see that it is Func<GeocodeRequest, IObservable<GeocodeResponse>>.
Func<GeocodeRequest, IObservable<GeocodeResponse>> myFunc =
Observable.FromAsyncPattern<GeocodeRequest, GeocodeResponse>
(gsc.BeginGeocode, gsc.EndGeocode);
So when you call myFunc(gcr) you have an IObservable<GeocodeResponse> and not a GeocodeResponse. Your code myFunc(gcr) as GeocodeResponse returns null because the cast is invalid.
What you need to do is either take the last value of the observable or subscribe to it. Calling .Last() will block. If you call .Subscribe(...), your response will come through on the callback thread.
Try this:
gcResp = myFunc(gcr).Last();
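Or, a minimal sketch of the Subscribe route:

    myFunc(gcr).Subscribe(response =>
    {
        // this runs later, on the callback thread, once the async call completes;
        // use the response here rather than after Subscribe returns
    });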
Let me know how you go.
Richard (and others),
So I have the code returning the location and I have the calling code subscribing. Here is (hopefully) the final issue. When I call GetGPSCoordinates, the next statement executes immediately, without waiting for the subscription to produce a value. Here's an example in a button OnClick event handler.
Location newLoc = new Location();
GetGPSCoordinates(this.Input.Text).ObserveOnDispatcher().Subscribe(x =>
{
    if (x.Results.Count > 0 && x.Results[0].Locations.Count > 0)
    {
        newLoc = x.Results[0].Locations[0];
        Output.Text = "Latitude: " + newLoc.Latitude.ToString() +
                      ", Longitude: " + newLoc.Longitude.ToString();
    }
    else
    {
        Output.Text = "Invalid address";
    }
});
Output.Text = " Outside of subscribe --- Latitude: " + newLoc.Latitude.ToString() +
              ", Longitude: " + newLoc.Longitude.ToString();
The Output.Text assignment outside of Subscribe executes before the subscription has produced a value, so it displays zeros; the one inside the Subscribe then displays the new location info.
The purpose of this process is to get location info that will then be saved in a database record, and I am processing multiple addresses sequentially in a foreach loop. I chose the Rx extensions as a way around the usual async-callback coding trap, but it seems I have exchanged one trap for another.
Thoughts, comments, suggestions?
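One way to stay inside Rx here is to compose the per-address observables and react in the OnCompleted handler rather than after Subscribe returns. A minimal sketch, assuming an Rx build that has Observable.Concat over a sequence of observables; addresses and SaveLocation are hypothetical stand-ins for the foreach body:

    var addresses = new[] { "address 1", "address 2" }; // hypothetical input list
    addresses
        .Select(a => GetGPSCoordinates(a)) // one observable per address
        .Concat()                          // run them one after another, in order
        .ObserveOnDispatcher()
        .Subscribe(
            loc => SaveLocation(loc),      // hypothetical per-result database save
            () => Output.Text = "All addresses processed."); // fires after the last result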
For Actionscript 2.0
Let's say this page
www.example.com/mypage
returns some HTML that I want to parse in ActionScript.
How do I call this page from ActionScript and get the response back in a string variable?
Use LoadVars():
var lv = new LoadVars();
// if you want to pass some variables, then:
lv.var1 = "BUTTON";
lv.var2 = "1";
// assign the handler before firing the request
lv.onLoad = loadedDotNetVars;
lv.sendAndLoad("http://www.example.com/mypage.html", lv, "POST");

function loadedDotNetVars(success)
{
    if (success)
    {
        // operation was a success
        trace(lv.varnameGotFromPage);
    }
    else
    {
        // operation failed
    }
}
// if you don't want to send data, just read from the page, then use lv.load(...) instead of sendAndLoad(...)
I understand. Use this code then:
var docXML:XML = new XML();
docXML.ignoreWhite = true; // skip whitespace-only text nodes when parsing
docXML.parseXML(msg);
var XMLDrop = docXML.childNodes;
var XMLSubDrop = XMLDrop[0].childNodes;
_root.rem_x = parseInt(XMLSubDrop[0].firstChild);
_root.rem_y = parseInt(XMLSubDrop[1].firstChild);
_root.rem_name = XMLSubDrop[2].firstChild;
var htmlFetcher:LoadVars = new LoadVars();
htmlFetcher.onData = function(thedata) {
trace(thedata); //thedata is the html code
};
Use:
htmlFetcher.load("http://www.example.com/mypage");
to call.
Note that
page = getURL("www.example.com/mypage.html");
will not work: getURL() only tells the player or browser to open the URL and does not return the page contents, so nothing is loaded into the page variable. Use one of the LoadVars approaches above instead.