Default deserializeFormats for DateTime in JMS Serializer - jmsserializerbundle

I see default_format and default_timezone configs for deserializing datetime values, but I don't see a config to specify a list of alternate formats. I would like my API to accept timestamps with or without a timezone (assuming UTC if not specified) and with or without fractional seconds (microseconds). The annotation example below accomplishes this, but I'd rather not have to copy/paste it into the myriad of input types I use.
/**
 * @JMS\Type("DateTime<'Y-m-d\TH:i:s.uP', '+00:00', ['Y-m-d\TH:i:sP', 'Y-m-d\TH:i:s.uP', 'Y-m-d\TH:i:s', 'Y-m-d\TH:i:s.u']>")
 */
protected \DateTimeInterface $timestamp;
Does anyone have an example override to accomplish this? Maybe a feature request to add support for a new default_deserialize_format config?

Typical... Finally break down, post a question, figure it out an hour later. A handler with its priority set high enough to override the built-in handlers was what I was looking for.
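For reference, a rough sketch of such a handler (untested; the class name is illustrative, and depending on your jms/serializer version the GraphNavigator constant and visitor interface may differ):
use JMS\Serializer\GraphNavigatorInterface;
use JMS\Serializer\Handler\SubscribingHandlerInterface;
use JMS\Serializer\Visitor\DeserializationVisitorInterface;

class FlexibleDateTimeHandler implements SubscribingHandlerInterface
{
    private const FORMATS = ['Y-m-d\TH:i:sP', 'Y-m-d\TH:i:s.uP', 'Y-m-d\TH:i:s', 'Y-m-d\TH:i:s.u'];

    public static function getSubscribingMethods()
    {
        return [[
            'direction' => GraphNavigatorInterface::DIRECTION_DESERIALIZATION,
            'format' => 'json',
            'type' => \DateTime::class,
            'method' => 'deserializeDateTime',
        ]];
    }

    public function deserializeDateTime(DeserializationVisitorInterface $visitor, $data, array $type): \DateTime
    {
        foreach (self::FORMATS as $format) {
            // assume UTC when the input carries no timezone
            $date = \DateTime::createFromFormat($format, (string) $data, new \DateTimeZone('UTC'));
            if ($date !== false) {
                return $date;
            }
        }
        throw new \RuntimeException(sprintf('Unparseable datetime "%s".', $data));
    }
}
The service is then tagged with jms_serializer.subscribing_handler in the container so it takes precedence over the built-in DateTime handler; this is where the priority comes into play.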

Related

Quarkus: Using placeholders in @Path

With Spring MVC, it is possible to use placeholders in path configurations: in @RequestMapping("${myapp.path}") the placeholder will be replaced with the myapp.path property.
Is there an equivalent method for the @Path annotation in Quarkus? There, the {myapp.path} part is interpreted as a path parameter.
To prevent an X-Y Problem: I'm looking for a way to configure a path on application startup and looked towards the solution I've known from Spring.
So, after struggling for a bit, I found this solution. Tell me if it helped.
I used the brackets in the @Path annotation (in my service interface class).
Then I specified what my id parameter is with the @PathParam annotation in my method.
@GET
@Path("api/{myId}")
@Produces(MediaType.APPLICATION_JSON)
public Response findArticlePriceById(@HeaderParam("header") String myHeader, @PathParam("myId") String myId, @QueryParam("name") String myName);
You can then just send that parameter as a string from where you call your method.
Response jsonResponse = myService.findArticlePriceById("my-header", "133", "my-name");
My GET request will be: localhost:8080/api/133?name=my-name
As it turns out, this is currently impossible. I've opened up an issue, so we'll see if it will be possible in the future.

Enable Impala Impersonation on Superset

Is there a way to make the logged-in user (on Superset) run the queries on Impala?
I tried to enable the "Impersonate the logged on user" option on Databases, but with no success: all the queries run on Impala as the superset user.
I'm trying to achieve the same! This will not completely answer the question since it still does not work, but I want to share my research in order to maybe help another soul that is trying to use this instrument outside very basic use cases.
I went deep into the code and found out that impersonation is not implemented for Impala, so you cannot achieve this from the UI. I found this PR https://github.com/apache/superset/pull/4699 that for whatever reason was never merged into the codebase, and tried to copy and paste its code into my Superset version (1.1.0), but it didn't work. Adding some logs, I can see that the configuration with the impersonation is updated, but the actual Impala query still runs as the user I used to start the process.
As you can imagine, I am a complete noob at this. However, I found out that the impersonation happens when you create a cursor, and there is a constructor parameter through which you can pass the impersonation configuration.
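For reference, this is roughly what that looks like when talking to Impala directly with impyla (hostname and user are placeholders):
from impala.dbapi import connect

conn = connect(host='impala-host', port=21050)
# `configuration` is passed through to the HiveServer2 session;
# impala.doas.user asks Impala to run queries on behalf of that user
cursor = conn.cursor(configuration={'impala.doas.user': 'some_end_user'})
cursor.execute('SELECT 1')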
I managed to correctly (at least to my understanding) implement impersonation for the SQL lab part.
In sql_lab.py you have to add the following lines to the execute_sql_statements method:
with closing(engine.raw_connection()) as conn:
# closing the connection closes the cursor as well
cursor = conn.cursor(**database.cursor_kwargs)
where cursor_kwargs is defined in db_engine_specs/impala.py as the following
@classmethod
def get_configuration_for_impersonation(cls, uri, impersonate_user, username):
    logger.info(
        'Passing Impala execution_options.cursor_configuration for impersonation')
    return {'execution_options': {
        'cursor_configuration': {'impala.doas.user': username}}}

@classmethod
def get_cursor_configuration_for_impersonation(cls, uri, impersonate_user,
                                               username):
    logger.debug('Passing Impala cursor configuration for impersonation')
    return {'configuration': {'impala.doas.user': username}}
Finally, in models/core.py you have to add the following bit in the get_sqla_engine def
params = extra.get("engine_params", {})  # that was already there, just for you to find the line
self.cursor_kwargs = self.db_engine_spec.get_cursor_configuration_for_impersonation(
    str(url), self.impersonate_user, effective_username)  # this is the line I added
...
params.update(self.get_encrypted_extra())  # already there
# new stuff
configuration = {}
configuration.update(
    self.db_engine_spec.get_configuration_for_impersonation(
        str(url),
        self.impersonate_user,
        effective_username))
if configuration:
    params.update(configuration)
As you can see, I just shamelessly pasted the code from the PR. However, as I already said, this kind of works only for SQL Lab. For the dashboards there is an entirely different way of querying Impala that I still have not figured out.
This means that queries for the dashboards are handled in a different way, and there isn't something like this:
with closing(engine.raw_connection()) as conn:
# closing the connection closes the cursor as well
cursor = conn.cursor(**database.cursor_kwargs)
My gut (and debugging) feeling is that you need to first understand the SQLAlchemy part and extend a new ImpalaEngine class that uses a custom cursor with the impersonation conf. Or something like that; however, it is not as simple (if we want to call this simple) as the sql_lab part. So, the trick is to find out where the query is executed and create a cursor with the impersonation configuration. Easy, isn't it?
I hope that this sheds some light for you and the others that have this issue. Let me know if you find another way to solve it, or if this comment was useful.
Update: something really useful
A colleague of mine successfully implemented impersonation with Impala without touching anything Superset-related, instead working directly with the impyla lib. A PR was opened with the code to change. You can apply the patch directly in the impyla source used by Superset. You have to edit both dbapi.py and hiveserver2.py.
As a reminder: we are still testing this and we do not know if it works with different accounts using the same Superset instance.

Changing the GemFire query ResultSender batch size

I am experiencing a performance issue related to the default batch size of the query ResultSender using a client/server config. I believe the default value is 100.
If I run a simple query to get keys (with some order-by columns due to the PARTITION region type), this default batch size causes too many chunks to be sent back, even for 1000 records. In my tests, even though the total query time is less than 100 ms, the app takes more than 10 seconds to process those chunks.
Reading between the lines in your problem statement, it seems you are:
Executing an OQL query on a PARTITION Region (PR).
Running the query inside a Function as recommended when executing queries on a PR.
Sending batch results (as opposed to streaming the results).
I also assume since you posted exclusively in the #spring-data-gemfire channel, that you are using Spring Data GemFire (SDG) to:
Execute the query (e.g. by using the SDG GemfireTemplate; Of course, I suppose you could also be using the GemFire Query API inside your Function directly, too)?
Implement the server-side Function using SDG's Function annotation support?
And possibly (indirectly) use SDG's BatchingResultSender, as described in the documentation?
NOTE: The default batch size in SDG is 0, NOT 100. Zero means stream the results individually.
Regarding #2 & #3, your implementation might look something like the following:
@Component
class MyApplicationFunctions {

    @GemfireFunction(id = "MyFunction", batchSize = "1000")
    public List<SomeApplicationType> myFunction(FunctionContext functionContext) {

        RegionFunctionContext regionFunctionContext =
            (RegionFunctionContext) functionContext;

        Region<?, ?> region = regionFunctionContext.getDataSet();

        if (PartitionRegionHelper.isPartitionRegion(region)) {
            region = PartitionRegionHelper.getLocalDataForContext(regionFunctionContext);
        }

        GemfireTemplate template = new GemfireTemplate(region);

        String OQL = "...";

        SelectResults<?> results = template.query(OQL); // or `template.find(OQL, args);`

        List<SomeApplicationType> list = ...;

        // process results, convert to SomeApplicationType, add to list

        return list;
    }
}
NOTE: Since you are most likely executing this Function "on Region", the FunctionContext type will actually be a RegionFunctionContext in this case.
The batchSize attribute on the SDG #GemfireFunction annotation (used for Function "implementations") allows you to control the batch size.
Instead of using SDG's GemfireTemplate to execute queries, you can, of course, use the GemFire Query API directly, as mentioned above.
If you need even more fine-grained control over "result sending", then you can simply "inject" the ResultSender provided by GemFire into the Function, even if the Function is implemented using SDG, as shown above. For example:
@Component
class MyApplicationFunctions {

    @GemfireFunction(id = "MyFunction")
    public void myFunction(FunctionContext functionContext, ResultSender resultSender) {
        ...
        SelectResults<?> results = ...;
        // now process the results and use the `resultSender` directly, e.g.
        // resultSender.sendResult(batch) per chunk, then resultSender.lastResult(finalBatch)
    }
}
This allows you to "send" the results however you see fit, as required by your application.
You can batch/chunk results, stream, whatever.
Although, you should be mindful of the "receiving" side in this case!
The one thing that might not be apparent to the average GemFire user is that GemFire's default ResultCollector implementation collects "all" the results first before returning them to the application. This means the receiving side does not support streaming or batching/chunking of the results in a way that would allow them to be processed immediately as the server sends them (whether streamed, batched/chunked, or otherwise).
Once again, SDG helps you out here since you can provide a custom ResultCollector on the Function "execution" (client-side), for example:
@OnRegion("SomePartitionRegion", resultCollector = "myResultCollector")
interface MyApplicationFunctionExecution {
    void myFunction();
}
In your Spring configuration, you would then have:
@Configuration
class ApplicationGemFireConfiguration {

    @Bean
    ResultCollector myResultCollector() {
        return ...;
    }
}
Your "custom" ResultCollector could return results as a stream, a batch/chunk at a time, etc.
In fact, I have prototyped a "streaming" ResultCollector implementation that will eventually be added to SDG, here.
Anyway, this should give you some ideas on how to handle the performance problem you seem to be experiencing. 1000 results is not a lot of data so I suspect your problem is mostly self-inflicted.
Hope this helps!
John,
Just to clarify, I use a client/server topology (actually WAN, but that is not important here). My client is a Spring Boot web app which has a Kendo grid as its UI. Users can filter/sort on any combination of the columns, which is passed to the Spring Boot app to generate dynamic OQL and create the pagination. Until now, except for being dynamic, my OQL queries are quite straightforward. I do not want to introduce server-side functions due to the complexity of our global deployment process. But I can if you think that is something I have to do.
Again, thanks for your answers.

Activiti BPMN - How to pass the username of the user who completed a task in variables/expressions?

I am very new to Activiti BPMN. I am creating a flow diagram in Activiti. I'm looking for how the username (of whoever completed the task) can be passed into shell task arguments, so that I can fetch and save in the DB which user completed that task.
Any Help would be highly appreciated.
Thanks in advance...
Here's something I prepared for Java developers, based on (I think) a blog post I saw.
Edit: https://community.alfresco.com/thread/224336-result-variable-in-javadelegate
RESULT VARIABLE
Option (1) – use expression language (EL) in the XML
<serviceTask id="serviceTask"
activiti:expression="#{myService.toUpperCase(myVar)}"
activiti:resultVariable="myVar" />
Java
public class MyService {
    public String toUpperCase(String val) {
        return val.toUpperCase();
    }
}
The returned String is assigned to activiti:resultVariable
HACKING THE DATA MODEL DIRECTLY
Option (2) – use the execution environment
Java
public class MyService implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        String myVar = (String) execution.getVariable("myVar");
        execution.setVariable("myVar", myVar.toUpperCase());
    }
}
By contrast here we are being passed an ‘execution’, and we are pulling values out of it and twiddling them and putting them back.
This is somewhat analogous to a Servlet taking the values passed in the HTTP request and then, based on them, doing different things in the response. (A stronger analogy would be a servlet Filter.)
So in your particular instance (depending on how you are invoking the shell script), using the Expression Language (EL) might be simplest and easiest.
Of course the value you want to pass has to be one that the process knows about (otherwise how can it pass a value it doesn't have a variable for?)
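For example, a shell service task could receive the value as an argument, along these lines (a sketch; ${completedBy} is assumed to be a process variable you have populated beforehand, e.g. via a task listener):
<serviceTask id="recordUser" name="Record completing user" activiti:type="shell">
    <extensionElements>
        <activiti:field name="command" stringValue="/path/to/record-user.sh" />
        <activiti:field name="arg1">
            <activiti:expression>${completedBy}</activiti:expression>
        </activiti:field>
    </extensionElements>
</serviceTask>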
Hope that helps. :D
Usually in BPM engines you have a way to hook a listener into these kinds of events. In Activiti, if you are embedding it inside your service, you can add an extra event listener and then record the taskCompleted events, which will contain the currently logged-in user.
https://www.activiti.org/userguide/#eventDispatcher
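A minimal sketch of such a listener (the class name is illustrative; assumes an embedded Activiti engine):
import org.activiti.engine.delegate.event.ActivitiEvent;
import org.activiti.engine.delegate.event.ActivitiEventListener;
import org.activiti.engine.delegate.event.ActivitiEventType;
import org.activiti.engine.impl.identity.Authentication;

public class TaskCompletedListener implements ActivitiEventListener {

    @Override
    public void onEvent(ActivitiEvent event) {
        if (event.getType() == ActivitiEventType.TASK_COMPLETED) {
            // the user authenticated on the current thread completed the task
            String userId = Authentication.getAuthenticatedUserId();
            // persist userId together with, e.g., event.getExecutionId()
        }
    }

    @Override
    public boolean isFailOnException() {
        return false; // a failing listener should not fail the task itself
    }
}
You can register it programmatically via runtimeService.addEventListener(new TaskCompletedListener()) or through the engine configuration's eventListeners property.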
Hope this helps.
I have used an activiti:taskListener from the Activiti app; you need to configure the properties below:
1. I changed the properties in the task listener.
2. I used a JavaScript variable to hold the task.assignee value.
Code snippet:
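In plain Java, a task listener that holds the task.assignee value might look like this (a minimal sketch, not the original snippet; class and variable names are illustrative):
import org.activiti.engine.delegate.DelegateTask;
import org.activiti.engine.delegate.TaskListener;

public class CompletedByListener implements TaskListener {

    @Override
    public void notify(DelegateTask delegateTask) {
        // store the assignee in a process variable for later use (e.g. by a shell task)
        delegateTask.setVariable("completedBy", delegateTask.getAssignee());
    }
}
It would be attached to the user task's complete event, e.g. <activiti:taskListener event="complete" class="com.example.CompletedByListener" />.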

How can you use SessionAsSigner in a Java Bean called from an XPage?

According to Philippe Riand (see: discussion on openNTF) this is not possible... The runtime needs to know the design element to find out who signed it; therefore, it is only available in SSJS.
There are 2 ways that I know of to use the sessionAsSigner object in Java beans:
1. By resolving the sessionAsSigner object:
FacesContext context = FacesContext.getCurrentInstance();
Session sessionAsSigner = (Session) context.getApplication().getVariableResolver()
    .resolveVariable(context, "sessionAsSigner");
2. By using the getCurrentSessionAsSigner() function from the com.ibm.xsp.extlib.util.ExtLibUtil class in the Extension Library.
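For example, assuming the Extension Library is installed on the server, the second approach is a one-liner:
import com.ibm.xsp.extlib.util.ExtLibUtil;
import lotus.domino.Session;

Session signerSession = ExtLibUtil.getCurrentSessionAsSigner();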
To be able to use it (in Java as well as SSJS) you'll want to make sure that all design elements were signed by the same user ID. If that's not the case, the sessionAsSigner object will not be available ('undefined').
I found that the solution is right at hand :-)
I changed my XPage (in this example an XAgent) to:
<xp:view xmlns:xp="http://www.ibm.com/xsp/core" rendered="false">
This is an xAgent returning json data...
<xp:this.afterRenderResponse><![CDATA[#{javascript:Controller.verify(sessionAsSigner)}]]></xp:this.afterRenderResponse>
and in the bean I simply used the session from the argument when I needed to open a database/document as the signer. Sometimes the solution is so simple :-)
/John
This is quite an old post that I just stumbled upon. Tried some of the solutions mentioned above:
resolveVariable did not work for me, at least not for sessionAsSigner as this throws a runtime error (I can resolve plain old session, though...)
to be honest I didn't quite understand the Controller.verify(sessionAsSigner) method; is Controller something specific to XAgents? If so, I don't have an XAgent here, so can't use it
didn't feel like importing extra ExtLib classes here...
So I came up with another solution that appears to be very simple:
created a method in my javaBean that takes a session object as argument; since sessionAsSigner belongs to the same class as session I don't have to import something new.
Java code is:
public void testSession(Session s) throws Exception {
    System.out.println(" > test effective user for this session = "
        + s.getEffectiveUserName());
}
This is called from SSJS as either
myBean.testSession(session);
or
myBean.testSession(sessionAsSigner);
Maybe this helps others, too.