Using VRS for concurrent requests - OptaPlanner

We need to use a single instance of VRS to support concurrent requests.
We have a requirement where multiple users should be able to create route plans for different vehicles and locations at the same time. However, looking at the VRS functionality, I cannot see how the application supports this. In the demo, when I create a different route from a different browser, it always merges the first and second requests and returns one combined result.
A little more elaboration on the question:
We are aiming to expose the requests as REST API endpoints which will be invoked by different users at the same time for their own use cases.
E.g. Request 1: Vehicles 1 & 2 with 50 locations. VRS calculates the route and returns one message with all detailed calculations for request 1.
Request 2: Vehicles 3 & 4 with 40 locations. VRS calculates the route, which we can later retrieve as one message with all detailed calculations limited to request 2.
Both requests can be submitted at the same time, and the application should treat them as separate requests without merging them.
Is there a way to add a request ID or any other parameter to achieve this?

For multi-tenant solving, the SolverManager API is ideal:
public class TimeTableService {

    // tenantId is Long, but it can also be String or UUID
    private SolverManager<TimeTable, Long> solverManager;

    // Returns immediately, call it for every dataset
    public void solveBatch(Long tenantId) {
        solverManager.solve(tenantId,
                // Called once, when solving starts
                this::findById,
                // Called once, when solving ends
                this::save);
    }

    public TimeTable findById(Long tenantId) {...}

    public void save(TimeTable timeTable) {...}
}
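Applied to the vehicle routing question above, a minimal sketch (not from the original answer, so treat it as an illustration only): each incoming REST request is submitted under its own problem ID, so the datasets are solved independently and never merged. The RoutePlanService class, the VehicleRoutingSolution type and the job map are assumed names for illustration.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import org.optaplanner.core.api.solver.SolverJob;
import org.optaplanner.core.api.solver.SolverManager;

public class RoutePlanService {

    private SolverManager<VehicleRoutingSolution, UUID> solverManager;
    private final Map<UUID, SolverJob<VehicleRoutingSolution, UUID>> jobs = new ConcurrentHashMap<>();

    // Called once per REST request; returns the ID the caller later uses to fetch its own result.
    public UUID submit(VehicleRoutingSolution problem) {
        UUID requestId = UUID.randomUUID();
        jobs.put(requestId, solverManager.solve(requestId, problem));
        return requestId;
    }

    // Blocks until the plan for exactly this request is finished; other requests are untouched.
    public VehicleRoutingSolution getResult(UUID requestId)
            throws InterruptedException, ExecutionException {
        return jobs.get(requestId).getFinalBestSolution();
    }
}

A non-blocking variant would expose a status endpoint backed by solverManager.getSolverStatus(requestId) instead of blocking on getFinalBestSolution().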

How to push Salesforce Order to an external REST API?

I have experience in Salesforce administration, but not in Salesforce development.
My task is to push an Order in Salesforce to an external REST API if the order is in the custom status "Processing" and the Order Start Date (EffectiveDate) is in 10 days.
The order will then be processed in the downstream system.
If the order was successfully pushed to the REST API, the status should be changed to "Activated".
Can anybody give me some example code to get started?
There's a very good guide for picking the right mechanism; I've been studying it for one of the SF certifications: https://developer.salesforce.com/docs/atlas.en-us.integration_patterns_and_practices.meta/integration_patterns_and_practices/integ_pat_intro_overview.htm
A lot depends on whether the endpoint is accessible from Salesforce (if it isn't, you might have to pull data instead of pushing) and what authentication it needs.
For a push out of Salesforce you could use:
Outbound Message - an XML document sent when a (time-based, in your case?) workflow fires. It's not REST, but it's just clicks, no code. The downside is that there's only one object per message, so you can send the Order header but no line items.
External Service - also code-free, and you could build a flow with it.
You could always push data with Apex code (something like the example below). We'd split the solution into two parts.
The part that gets the actual work done: at a high level you'd write a function that takes a list of Order ids as a parameter, queries them and calls req.setBody(JSON.serialize([SELECT Id, OrderNumber FROM Order WHERE Id IN :ids]));... If the API needs some special authentication, look into "Named Credentials". Hard to say what you'll need without knowing more about your target.
And the part that calls this Apex when the time comes. It could be more code (a nightly scheduled job that makes these callouts a minute after midnight?): https://salesforce.stackexchange.com/questions/226403/how-to-schedule-an-apex-batch-with-callout
Or it could be a flow / Process Builder (again, you probably want time-based flows) that calls this piece of Apex. The "worker" code would have to "implement an interface" (a fancy way of saying the code promises there will be a function "suchAndSuchName" that takes "suchAndSuch" parameters). Check out Process.Plugin.
For pulling data... well, the target application could log in to SF (SOAP, REST) and query the Order table once a day. Lots of integration tools have Salesforce plugins; do you already use Azure Data Factory? Informatica? BizTalk? Mulesoft?
There's also something called "long polling", where the client app subscribes to notifications and SF pushes info to it. You might have heard of CometD? In SF-speak, read up on Platform Events, the Streaming API and Change Data Capture (although that last one fires on change and sends only the changed fields, so it's not great for pushing a complete order plus line items). You can send platform events from flows too.
So... don't dive straight into coding the solution. Plan a bit; the maintenance will be easier. The code below is untested, written in Notepad (I don't have an org with orders handy), but in theory you should be able to schedule it to run at 1 AM, for example, or trigger it from the dev console with Database.executeBatch(new OrderSyncBatch(), 1);
public class OrderSyncBatch implements Database.Batchable<sObject>, Database.AllowsCallouts, Schedulable {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        Date cutoff = System.today().addDays(10);
        return Database.getQueryLocator([SELECT Id, Name, Account.Name, GrandTotalAmount, OrderNumber, OrderReferenceNumber,
                (SELECT Id, UnitPrice, Quantity, OrderId FROM OrderItems)
            FROM Order
            WHERE Status = 'Processing' AND EffectiveDate = :cutoff]);
    }

    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        Http h = new Http();
        List<Order> toUpdate = new List<Order>();
        // Assuming you want 1 order at a time, not a list of orders?
        for (Order o : (List<Order>) scope) {
            HttpRequest req = new HttpRequest();
            HttpResponse res;
            req.setEndpoint('https://example.com'); // your API endpoint here, or maybe something that starts with "callout:" if you'd be using Named Credentials
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serializePretty(o));
            res = h.send(req);
            if (res.getStatusCode() == 200) {
                o.Status = 'Activated';
                toUpdate.add(o);
            } else {
                // Error handling? Maybe just debug it, maybe make a Task for the user or look into
                // Database.RaisesPlatformEvents
                System.debug(res);
            }
        }
        update toUpdate;
    }

    public void finish(Database.BatchableContext bc) {}

    public void execute(SchedulableContext sc) {
        Database.executeBatch(new OrderSyncBatch(), Limits.getLimitCallouts()); // there's a limit on callouts per transaction
        // and by default batches process 200 records at a time, so we want smaller chunks
        // https://developer.salesforce.com/docs/atlas.en-us.apexref.meta/apexref/apex_methods_system_limits.htm
        // You might want to tweak the parameter even down to 1 order at a time if processing takes a while at the other end.
    }
}
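To actually run it nightly, the class can be scheduled, since it also implements Schedulable (added in the class declaration above). A minimal, untested snippet for Anonymous Apex; the job name and cron expression are just examples:

// Schedules the batch to run every day at 1 AM (cron fields: seconds minutes hours day month weekday).
System.schedule('Nightly order sync', '0 0 1 * * ?', new OrderSyncBatch());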

How to call more than 1000 web APIs from a main API with very low response time

I have to implement an MVC .NET Web API (say the "main" API) which includes two parts:
1) A database call to fetch the records.
2) More than 1,000 calls to another web API (response time 100 ms on average for each), which use the records returned by the above DB call.
Also, the main API will be called every 3 seconds continuously. I tried implementing it using async/await but didn't make much progress, and when testing it with the Apache Benchmark tool it throws a "timeout specified has expired" error.
Is there any way to achieve this? Please suggest.
Code snippet
[HttpGet]
public async Task<string> doTaskasync()
{
    TripDetails obj = new TripDetails();
    GPSCoordinates objGPS = new GPSCoordinates();
    try
    {
        /* uriArray contains more than 1000 API URIs which need to be executed. */
        string[] uriArray = await dolongrunningtaskasync();
        IEnumerable<Task<GPSCoordinates>> allTasks = uriArray.Select(u => GetLocationsAsync(u));
        IEnumerable<GPSCoordinates> allResults = await Task.WhenAll(allTasks);
    }
    catch (Exception ex)
    {
        return ex.Message;
    }
    return "success";
}
(ab.exe test output omitted)
There is nothing programmatically wrong with your code, except that this is practically a DoS attack on the second (location) API. You should definitely add caching to avoid at least part of the 1,000 API calls, especially since you call the main API so often.
What I would do is make the inner API calls a centralized operation instead of making them individually inside your Web API method. For example, you could keep a central list of location API calls (tasks) that have been started but not yet finished, and another list of results that have already completed. Both lists could be concurrent dictionaries keyed by the unique URLs you use for the location API calls, as in the sketch below.
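A minimal, untested sketch of that idea; it reuses the GPSCoordinates type and the GetLocationsAsync name from the question, and the HttpClient/System.Text.Json usage is an assumption:

using System.Collections.Concurrent;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class LocationCache
{
    private static readonly HttpClient http = new HttpClient();

    // URL -> in-flight or completed task. Concurrent callers asking for the same URL share one call.
    private readonly ConcurrentDictionary<string, Task<GPSCoordinates>> calls =
        new ConcurrentDictionary<string, Task<GPSCoordinates>>();

    public Task<GPSCoordinates> GetLocationsAsync(string url)
    {
        // Only the first caller starts the HTTP request; everyone else awaits the same task
        // (or gets the already-completed, cached result).
        return calls.GetOrAdd(url, FetchAsync);
    }

    private async Task<GPSCoordinates> FetchAsync(string url)
    {
        string json = await http.GetStringAsync(url);
        return JsonSerializer.Deserialize<GPSCoordinates>(json);
    }
}

You would register something like this as a singleton so the dictionary outlives individual requests, and add some expiry so failed or stale entries get refreshed.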

Does Laravel query database each time I call Auth::user()?

In my Laravel application I use Auth::user() in multiple places. I am just worried that Laravel might be running a query on each call of Auth::user().
Kindly advise.
No, the user model is cached. Let's take a look at Illuminate\Auth\Guard#user:
public function user()
{
    if ($this->loggedOut) return;

    // If we have already retrieved the user for the current request we can just
    // return it back immediately. We do not want to pull the user data every
    // request into the method because that would tremendously slow an app.
    if ( ! is_null($this->user))
    {
        return $this->user;
    }
As the comment says, after retrieving the user for the first time, it will be stored in $this->user and just returned back on the second call.
For the same request, if you call Auth::user() multiple times, it will only run one query, not several.
But if you then call Auth::user() during another request, it will run one query again.
This cannot be cached across requests after the first one, from a security point of view.
So it runs one query per request, irrespective of the number of times you call it.
I have seen the session used to avoid running that query on every request, so you can try the code here: http://laravel.usercv.com/post/16/using-session-against-authuser-in-laravel-4-and-5-cache-authuser
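A rough, untested sketch of that session-caching idea (the 'cached_user' key and the cached fields are made up for illustration; keep the security trade-off mentioned above in mind):

// At login (or on first access), stash the fields you actually need in the session.
if (Auth::check() && ! session()->has('cached_user')) {
    session(['cached_user' => Auth::user()->only(['id', 'name', 'email'])]);
}

// Later, read from the session instead of triggering the per-request user query.
$user = session('cached_user');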
Thanks

Multi-tenant/shared application system, how to maintain multiple tenant-specific identifiers?

I have a multi-tenant system where each tenant shares the same instance of the codebase, but has its own database.
I'm using RavenDB for persistence, with a standard C# facade/BLL backend wrapped with ASP.NET Web API, and I'm finding that every lower-level operation (deep within my business logic classes) that touches the database needs an identifier passed in so that my RavenDB client session knows which database to operate against.
When the user authenticates, I resolve the appropriate database identifier and store it in the session manager. Every call against the Web API layer passes in a session ID, which resolves to the database ID in the backend, which is then passed into every single facade/BLL call.
All my dependencies are handled via an IoC container at the Web API level, but I can't pass in the database ID at that point because it can be different for every user that is logged in.
This, of course, is getting tedious.
Can someone give me some guidance on what I can do to alleviate this? Perhaps some sort of policy injection/AOP solution?
A rough sample of my backend code looks like this:
public class WidgetService
{
    private WidgetBLL _widgetBLL;
    private ISessionManager _sessionManager;

    public WidgetService(WidgetBLL widgetBLL, ISessionManager sessionManager)
    {
        _widgetBLL = widgetBLL;
        _sessionManager = sessionManager;
    }

    public Widget GetWidget(string sessionId, string widgetId)
    {
        string dbId = _sessionManager.ResolveDbId(sessionId);
        return _widgetBLL.GetWidget(dbId, widgetId);
    }
}

public class WidgetManager
{
    public Widget GetWidget(string dbId, string widgetId)
    {
        using (IDocumentSession session = documentStore.OpenSession(dbId))
        {
            var widget = session.Load<Widget>(widgetId);
            return widget;
        }
    }
}
The dbId is the identifier of the particular tenant that this particular user is a member of.
You need to change how you are using the session.
Instead of opening and closing the session yourself, do that in the IoC code.
Then you pass a session that is already opened for the right db.
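A minimal, untested sketch of that idea (Autofac is used purely as an example container, and ITenantResolver is a made-up abstraction that maps the current request or session ID to a database name):

using Autofac;
using Raven.Client;

public interface ITenantResolver
{
    string ResolveCurrentDbId();   // e.g. looks at the auth token / session ID of the current request
}

public static class TenantCompositionRoot
{
    public static IContainer Build(IDocumentStore documentStore)
    {
        var builder = new ContainerBuilder();

        builder.RegisterInstance(documentStore).As<IDocumentStore>();

        // The container opens the session against the right tenant database,
        // so lower layers just take IDocumentSession as a dependency.
        builder.Register(c =>
        {
            string dbId = c.Resolve<ITenantResolver>().ResolveCurrentDbId();
            return c.Resolve<IDocumentStore>().OpenSession(dbId);
        })
        .As<IDocumentSession>()
        .InstancePerRequest();   // one session per Web API request, disposed by the container

        builder.RegisterType<WidgetManager>().AsSelf();
        return builder.Build();
    }
}

// WidgetManager then no longer needs a dbId parameter:
public class WidgetManager
{
    private readonly IDocumentSession _session;

    public WidgetManager(IDocumentSession session)
    {
        _session = session;
    }

    public Widget GetWidget(string widgetId)
    {
        return _session.Load<Widget>(widgetId);
    }
}

The exact registration depends on your container, but the point of the answer stands either way: resolve the tenant once per request and hand the already-opened session down, instead of threading dbId through every call.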

How can I load balance FastAGI?

I am writing multiple AGIs in Perl that will be called from the Asterisk dialplan. I expect to receive numerous simultaneous calls, so I need a way to load balance them. I have been advised to use FastAGI instead of AGI. The problem is that my AGIs will be distributed over many servers, not just one, and I need my entry-point Asterisk box to dispatch the calls among those servers (where the AGIs reside) based on their availability. So I thought of providing the FastAGI application with multiple IP addresses instead of one. Is that possible?
Any TCP reverse proxy would do the trick, HAProxy being one and nginx with the TCP (stream) module being another.
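For example, a minimal HAProxy sketch (untested; the backend IPs and names are made up, and FastAGI is assumed to be on its default TCP port 4573):

frontend fastagi_in
    bind *:4573
    mode tcp
    default_backend fastagi_servers

backend fastagi_servers
    mode tcp
    balance roundrobin                  # or leastconn to prefer the least busy AGI server
    server agi1 10.0.1.100:4573 check
    server agi2 10.0.1.101:4573 check

The dialplan then points at the proxy (e.g. AGI(agi://proxy-host/yourscript)) and HAProxy spreads the TCP connections across the AGI servers.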
A while back, I crafted my own FastAGI proxy using node.js (nodast) to address this very specific problem and a bit more, including the ability to run the FastAGI protocol over SSL and route requests based on the AGI request location and parameters (such as $dnis, $channel, $language, ...).
Moreover, as the proxy configuration is basically JavaScript, you can load balance in really interesting ways.
A sample config would look as follows:
var config = {
    listen : 9090,
    upstreams : {
        test : 'localhost:4573',
        foobar : 'foobar.com:4573'
    },
    routes : {
        'agi://(.*):([0-9]*)/(.*)' : function() {
            if (this.$callerid === 'unknown') {
                return ('agi://foobar/script/' + this.$3);
            } else {
                return ('agi://foobar/script/' + this.$3 + '?callerid=' + this.$callerid);
            }
        },
        '.*' : function() {
            return ('agi://test/');
        },
        'agi://192.168.129.170:9090/' : 'agi://test/'
    }
};

exports.config = config;
I have a large IVR implementation using FastAGI (24 E1s all doing FastAGI calls, peaking at about 80%, so that's nearly 600 Asterisk channels calling FastAGI). I didn't find an easy way to do load balancing, but in my case there are different FastAGI calls: one at the beginning of the call to validate the user in a database, then a different one to check the user's balance or their most recent transactions, and another one to perform a transaction.
So what I did was send all the validation and simple queries to one application on one server and all the transaction calls to a different application on a different server.
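In dialplan terms, that split looks something like this (untested; the server hostnames and script names are made up):

; Validation / simple queries go to one FastAGI server...
exten => s,1,AGI(agi://validation-server.example.com/validate_user)
; ...and transactions go to a different application on a different server.
exten => s,n,AGI(agi://transaction-server.example.com/perform_transaction)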
A crude way to do load balancing, if you have a lot of incoming calls on zaptel/dahdi channels, would be to use different groups for the channels. For example, suppose you have 2 FastAGI servers and 4 E1s receiving calls. You can put 2 E1s in group g1 and the other 2 E1s in group g2. Then you declare global variables like this:
[globals]
serverg1=ip_of_server1
serverg2=ip_of_server2
Then on your dialplan you call FastAGI like this:
AGI(agi://${server${CHANNEL(callgroup)}}/some_action)
On channels belonging to group g1, CHANNEL(callgroup) resolves to g1, so ${serverg1} resolves to ip_of_server1; on channels belonging to group g2, CHANNEL(callgroup) resolves to g2, so you get ${serverg2}, which resolves to ip_of_server2.
It's not the best solution, because calls usually start coming in on one span and then another, etc., so one server will get more work, but it's something.
To get real load balancing I guess we would have to write a FastAGI load balancing gateway, not a bad idea at all...
Mehhh... use the same constructs that would apply to load balancing something like web page requests.
One way is round-robin DNS. So if you have vru1.example.com at 10.0.1.100 and vru2.example.com at 10.0.1.101, you put two entries in DNS like...
fastagi.example.com 10.0.1.100
fastagi.example.com 10.0.1.101
... then from the dialplan, AGI(agi://fastagi.example.com/youagi) should in theory alternate between 10.0.1.100 and 10.0.1.101, and you can add as many hosts as you need.
The other way to go is a bit too complicated to explain fully here, but proxy tools like HAProxy should be able to route between multiple servers, with the added benefits of being able to "take one out" of the mix for maintenance and of doing more advanced balancing, such as distributing requests based on current load.