How to save information to LevelDB and then process this information in another route? - kotlin

I want to save the information being processed to a LevelDB; right now what I have is:
from(sourceUri)
.transacted()
.unmarshal().json(JsonLibrary.Jackson, HashMap::class.java)
.log(LoggingLevel.INFO, "input \${body}") // escape the $ so Camel, not Kotlin, expands ${body}
<not sure what to put here>
.end()
So what do I put there to save the incoming data into LevelDB? I don't want to use the aggregator, as it doesn't fit what I'm doing, so for the moment I just need to save the data into the database. Then I want to make another route using a quartz2 timer to process everything inside this database every minute:
from("quartz2://myGroup/myTimerName?cron=30+*+*+?+*+*+*")
.bean(processData)
.to("activemq:output");

Trying to use adminp.DeleteReplicas followed by adminp.ApproveReplicaDeletion gives error "Invalid Approval Request note"

I am trying to delete a database and any associated replicas using LotusScript adminp calls. This is basically the code:
Dim session As New NotesSession
Dim adminp As NotesAdministrationProcess
Set adminp = session.CreateAdministrationProcess("Software_Server")
noteid$ = adminp.DeleteReplicas("Software_Server", "Guys1")
If noteid$ <> "" Then
    Call adminp.ApproveReplicaDeletion(noteid$) 'This is where the error is thrown
End If
The first adminp call is successful and returns a noteid, and if I look in the admin requests database I can see the document. The next call, to ApproveReplicaDeletion, results in the error "Invalid Approval Request note".
The documentation doesn't contain any examples for the adminp approve methods. I have a feeling that maybe the second request cannot be called until much later when adminp has processed the first request?
Also, a related question: do I only have to make this request on a single server and it will remove replicas on all other servers, or do I need to make this request for each server?
So this is a bit more complicated than the help gives any indication of, which might explain why I couldn't find any examples on the internet of how to do it. The workflow for using AdminP is as follows (a rough code sketch follows the steps):
1. Create the initial request and keep the noteID it returns: noteid$ = adminp.DeleteReplicas("Software_Server", "Guys1"). This returned noteID is not the one that is used to approve the delete replica request.
2. This creates a document in the admin requests database for the initial request, but the approval documents don't exist yet; for the server to create those, the adminp process must run.
3. So, in the code, send a console command to the server: "tell adminp process now".
4. Sleep the agent for a few seconds to give adminp time to process the request (unfortunately this will also fire off any other waiting adminp requests).
5. New documents will now have been created in the admin database that are awaiting approval by an admin. These documents contain the noteIDs that should be passed to ApproveReplicaDeletion.
6. To get them, first look up the request document in the admin database by the noteID obtained in step 1.
7. From that document, get the value of the ProxyOriginatingRequestUNID field.
8. Using this UNID value, perform a GetAllDocumentsByKey on the view ($AllRequestsbyOriginatingUNID).
9. If a returned document has a ProxyAction field with a value of "82", it is an approval request document. That document's noteID can be passed to ApproveReplicaDeletion to have adminp remove the database the next time it processes requests.
10. You can either send a console command to process adminp again, or just wait for the database deletions etc. to happen next time around.
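For illustration, here is the same workflow sketched in Kotlin against the Domino Java back-end classes (lotus.domino), which map one-to-one onto the LotusScript calls above. The admin requests database filename (admin4.nsf), the fixed ten-second sleep, and the assumption that ProxyAction is stored as text are mine; treat it as a sketch of the steps, not tested code:

import lotus.domino.NotesFactory
import lotus.domino.Session

fun deleteReplicasAndApprove(server: String, dbFile: String) {
    val session: Session = NotesFactory.createSession()
    val adminp = session.createAdministrationProcess(server)

    // Step 1: file the initial request; the returned noteID is not the one to approve.
    val requestNoteId = adminp.deleteReplicas(server, dbFile)
    if (requestNoteId.isNullOrEmpty()) return

    // Steps 3-4: make adminp run, then give it a moment to create the approval documents.
    session.sendConsoleCommand(server, "tell adminp process now")
    Thread.sleep(10_000)

    // Steps 6-7: find the original request document and read its originating UNID.
    val adminDb = session.getDatabase(server, "admin4.nsf")
    val requestDoc = adminDb.getDocumentByID(requestNoteId)
    val originatingUnid = requestDoc.getItemValueString("ProxyOriginatingRequestUNID")

    // Steps 8-9: look up the approval documents by that UNID and approve the "82" ones.
    val view = adminDb.getView("(\$AllRequestsbyOriginatingUNID)")
    val approvalDocs = view.getAllDocumentsByKey(originatingUnid, true)
    var doc = approvalDocs.getFirstDocument()
    while (doc != null) {
        if (doc.getItemValueString("ProxyAction") == "82") {
            adminp.approveReplicaDeletion(doc.noteID)
        }
        doc = approvalDocs.getNextDocument(doc)
    }
}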

Resume interrupted uploads via filepond

I'm using FilePond to handle chunked uploads. Everything works fine except one thing: is there any way to continue interrupted uploads? For example, a customer starts uploading a large video over a mobile connection but terminates it at around 40%. A few hours later she wants to continue the upload over wifi, with the same file but a different browser and a different IP address. In this case I'd like to continue the upload from the last completed chunk, not from the beginning.
As the documentation says:
If one of the chunks fails to upload after the set amount of retries in chunkRetryDelays the user has the option to retry the upload.
In my case there are no failed chunk uploads; the customer simply selects the same file to upload again.
This is exactly what I want:
As FilePond remembers the previous transfer id the process now starts off with a HEAD request accompanied by the transfer id (12345) in the URL. The server responds with Upload-Offset set to the next expected chunk offset in bytes. FilePond marks all chunks with lower offsets as complete and continues with uploading the chunk at the requested offset.
During upload I send a custom header with a unique hash identifying the file/user, and store it in the db. When the customer wants to upload the same file and an incomplete version has already been uploaded, I am able to find it and send back an Upload-Offset header. This part is clear to me. But I couldn't get FilePond to send a HEAD/GET request before starting the chunk upload in order to get the correct offset. It always starts from zero.
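For reference, the HEAD handling described above could look roughly like the sketch below. It assumes a Ktor server, and the /upload/{transferId} path and the storedOffsets map are stand-ins for the real endpoint and the database lookup mentioned in the question:

import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

// Hypothetical stand-in for the DB lookup keyed by the custom hash / transfer id.
val storedOffsets = mutableMapOf<String, Long>()

fun main() {
    embeddedServer(Netty, port = 8080) {
        routing {
            // FilePond resumes by sending HEAD with the transfer id in the URL;
            // answering with Upload-Offset tells it which chunk to continue from.
            head("/upload/{transferId}") {
                val transferId = call.parameters["transferId"]
                val offset = transferId?.let { storedOffsets[it] }
                if (offset == null) {
                    call.respond(HttpStatusCode.NotFound)
                } else {
                    call.response.header("Upload-Offset", offset.toString())
                    call.respond(HttpStatusCode.OK)
                }
            }
        }
    }.start(wait = true)
}

The missing piece is still the client side: getting FilePond to issue that HEAD request for a freshly added file instead of starting a new transfer.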
I already checked this question, but my case is different. I don't want to continue a paused upload, I'd like to handle an abandoned but later re-uploaded file.
Looking at the filepond.js (4.30.3) source code, I can create a workaround by simply assigning a value to state.serverId. In that case requestTransferOffset will fire and the upload continues from the given offset.
// let's go!
if (!state.serverId) {
    requestTransferId(function(serverId) {
        // stop here if aborted, might have happened in between request and callback
        if (state.aborted) return;
        // pass back to item so we can use it if something goes wrong
        transfer(serverId);
        // store internally
        state.serverId = serverId;
        processChunks();
    });
} else {
    requestTransferOffset(function(offset) {
        // stop here if aborted, might have happened in between request and callback
        if (state.aborted) return;
        // mark chunks with lower offset as complete
        chunks
            .filter(function(chunk) {
                return chunk.offset < offset;
            })
            .forEach(function(chunk) {
                chunk.status = ChunkStatus.COMPLETE;
                chunk.progress = chunk.size;
            });
        // continue processing
        processChunks();
    });
}
...but I think this is not a clean way.
Has anybody faced this issue yet? Or did I miss something, and is there a simpler way to continue interrupted uploads?

attributes.headers getting lost after an HTTP Request call in Mulesoft?

I am getting some attributes in an API but they all get lost after an HTTP Request connector in Mule 4.
Why is this happening?
Look in the connector's configuration properties, in the Advanced tab of the operation (in this case the HTTP connector's Request operation), and you'll find a target variable and a target value. If you fill in the target with a name, the operation acts as an enrichment and avoids overwriting the Mule message. If you leave it blank (the default), it saves the message (attributes and payload) over the top of the existing one, which is what you're seeing now. This mirrors the old Mule 3 behaviour, but sometimes you want the operation to leave what you already have alone.
With the target value you get to pick exactly what gets saved. If you want just the payload, put payload in. If you want both payload and attributes, I'd use "message", as that means both the payload and the attributes get saved in the variable. Of course you may not want that much saved, so feel free to put in whatever DataWeave expression you like; you could even build something with bits from anywhere, like:
{
    statusCode: attributes.statusCode,
    headers: attributes.headers,
    payload: payload
}
A connector operation may replace the attributes with those of the operation. If you need to preserve the previous attributes you need to save them to a variable.
This is the default behaviour of Mule. Whenever the request crosses a transport barrier it loses the existing attributes. You need to preserve the attributes (for example by saving them to a variable) before the HTTP Request.

Passing multiple values between ThreadGroups - JMeter

I have ThreadGroup1, which performs a login operation, getting credentials from a CSV file using a CSV Data Set Config, and saves the username and password into two global properties like:
${__setProperty(USERNAMEGlobal, ${USERNAME})}
${__setProperty(PASSWORDGlobal, ${PASSWORD})}
Now in ThreadGroup2 I read these credentials using:
${__property(USERNAMEGlobal)} and ${__property(PASSWORDGlobal)}
This works fine for a single user, but if I try multiple users (requests), the last value overrides all the previous values and ThreadGroup2 receives only the last credentials set.
I want all the credentials to be passed one by one to ThreadGroup2, so that the requests in ThreadGroup2 run with each of those credentials in turn.
How can this be done?
PS: I defined ramp-up period=1, Number of Users=3, loop=1.
There are some options:
1. The Inter-Thread Communication plugin.
2. Put them into different properties:
${__setProperty(USERNAMEGlobal1, ${USERNAME1})}
${__setProperty(USERNAMEGlobal2, ${USERNAME2})}
etc.
3. Initialize an array with all the usernames, stringify it and then put it into a property. However, this looks like a hack that will slow down your test plan.
4. It also looks like you could save all the username-password pairs into a CSV file in ThreadGroup1 and then re-use them in ThreadGroup2, e.g. by reading them with a CSV Data Set Config.
I'm wondering if you really need two separate ThreadGroups?
It seems like you need only one ThreadGroup, inside which you perform your login actions and then save the user/pass parameters in vars, not in props. Vars are thread-local, so the values of one thread won't override the values of another.
You can set a variable within a script with vars.put("var_name", "var_value") and then use it like ${var_name}. There are other ways to set variables as well.

ExtJs 4 Store's AJAX proxy is not called on Store add — what is missing?

I have a Grid, a Store and a Model for its data, and an AJAX proxy for the Store pointing to my self-written PHP back-end. The PHP back-end writes to a log each time it is called.
The system works OK for Read, Update and Delete calls. However, now I need to add a new record to the Store, which I do like this:
(here, some new data were generated...)
var newEntry = Ext.ModelManager.create({
    id: id,
    title: title,
    url: '/php/' + fname,
    minithumb: '/php/' + small,
    thumb: '/php/' + thumb
}, 'MyApp.model.fileListModel');
var store = Ext.getCmp('currGallery').getStore();
store.add(newEntry);
store.sync();
The new row appears in the Grid.
But with or without the sync() call, no requests go to my PHP back-end (it does, however, perform one more read). The Store has autoSync: true and updates the data just fine automatically when I edit an existing row in the Grid.
What am I missing?
Try not setting the id when creating the new record.
In fact I was missing the
newEntry.phantom = true;
flag. After setting it before adding the record to the Store, the Store and its Proxy started sending data to the server.
Maybe the id solution also works, I don't know.