Current MongoDB server time in VB.Net

How do I get MongoDB's server time, or use it in a query, from VB.NET?
For example, in the Mongo shell I would do:
db.Cookies.find({ expireOn: { $lt: new Date() } });
In PHP I can easily do something like this:
$model->expireOn = new MongoDate();
How do I approach this in VB.Net? I don't want to use the local machine's time. This obviously doesn't work...
MongoDB.Driver.Builders.Query.LT("expireOn", "new Date()")

If you merely want to remove expired cookies from your collection, you could use the TTL collection feature, which automatically removes expired entries using a background worker on the server and therefore uses the server's time:
db.Cookies.ensureIndex( { "expireOn": 1 }, { expireAfterSeconds: 0 } )
If you really need to query, either run the query from a service program on the server itself or make sure your clocks are reasonably synchronized. Clocks that are considerably off cause a plethora of problems, especially for web servers and mail servers (consider HTTP headers like Date, Last-Modified and If-Modified-Since, email timestamps, HMAC/timestamp validation against replay attacks, etc.).
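With the legacy 1.x C# driver (the one that exposes MongoDB.Driver.Builders, as in the question; the calls translate almost one-to-one to VB.NET), the two approaches look roughly like the sketch below. Treat it as an untested sketch and check EnsureIndex / IndexOptions.SetTimeToLive against your driver version; the connection string and names are placeholders:
using System;
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;
// Untested sketch against the legacy 1.x driver.
var client = new MongoClient("mongodb://localhost");
var db = client.GetServer().GetDatabase("test");
var cookies = db.GetCollection<BsonDocument>("Cookies");
// TTL index: the server expires documents itself, using its own clock.
cookies.EnsureIndex(IndexKeys.Ascending("expireOn"), IndexOptions.SetTimeToLive(TimeSpan.Zero));
// Or query with the client's UTC time (only reasonable if the clocks are synchronized).
var expired = cookies.Find(Query.LT("expireOn", DateTime.UtcNow));
Note that DateTime.UtcNow is still the application machine's clock; only the TTL index genuinely uses the MongoDB server's time.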

How to push Salesforce Order to an external REST API?

I have experience in Salesforce administration, but not in Salesforce development.
My task is to push an Order in Salesforce to an external REST API if the order is in the custom status "Processing" and the Order Start Date (EffectiveDate) is in 10 days.
The order will then be processed in the downstream system.
If the order was successfully pushed to the REST API, the status should be changed to "Activated".
Can anybody give me some example code to get started?
There's a very cool guide for picking the right mechanism; I've been studying from this PDF for one of the SF certifications: https://developer.salesforce.com/docs/atlas.en-us.integration_patterns_and_practices.meta/integration_patterns_and_practices/integ_pat_intro_overview.htm
A lot depends on whether the endpoint is accessible from Salesforce (if it isn't, you might have to pull data instead of pushing) and on what authentication it needs.
To push data out of Salesforce you could use:
Outbound Message - it'd be an XML document sent when a (time-based, in your case?) workflow fires. It's not REST, but it's just clicks, no code. The downside is that it's just one object per message, so you can send the Order header but no line items.
External Service would be code-free and you could build a flow with it.
You could always push data with Apex code (something like this). We'd split the solution into two parts.
The part that gets the actual work done: at a high level you'd write a function that takes a list of Order ids as a parameter, queries them, and calls req.setBody(JSON.serialize([SELECT Id, OrderNumber FROM Order WHERE Id IN :ids]));... If the API needs some special authentication, you'd look into "Named Credentials". Hard to say what you'll need without knowing more about your target.
And the part that would call this Apex when the time comes. It could be more code (a nightly scheduled job that makes these callouts 1 minute after midnight?): https://salesforce.stackexchange.com/questions/226403/how-to-schedule-an-apex-batch-with-callout
It could be a flow / Process Builder (again, you probably want time-based flows) that calls this piece of Apex. The "worker" code would have to "implement an interface" (a fancy way of saying that the code promises there will be a function "suchAndSuchName" that takes "suchAndSuch" parameters). Check out Process.Plugin.
For pulling data... well, the target application could log in to SF (SOAP, REST) and query the table of orders once a day. Lots of integration tools have Salesforce plugins; do you already use Azure Data Factory? Informatica? BizTalk? Mulesoft?
There's also something called "long polling", where the client app subscribes to notifications and SF pushes info to it. You might have heard about CometD? In SF-speak, read up on Platform Events, Streaming API and Change Data Capture (although that last one fires on change and sends only the changed fields, so it's not great for pushing a complete order + line items). You can send platform events from flows too.
So... don't dive straight into coding the solution. Plan a bit; the maintenance will be easier. The code below is untested, written in Notepad, and I don't have an org with orders handy... but in theory you should be able to schedule it to run at 1 AM, for example, or trigger it from the dev console with Database.executeBatch(new OrderSyncBatch(), 1);
public class OrderSyncBatch implements Database.Batchable<sObject>, Database.AllowsCallouts, Schedulable {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        Date cutoff = System.today().addDays(10);
        return Database.getQueryLocator([SELECT Id, Name, Account.Name, GrandTotalAmount, OrderNumber, OrderReferenceNumber,
                (SELECT Id, UnitPrice, Quantity, OrderId FROM OrderItems)
            FROM Order
            WHERE Status = 'Processing' AND EffectiveDate = :cutoff]);
    }

    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        Http h = new Http();
        List<Order> toUpdate = new List<Order>();
        // Assuming you want 1 order at a time, not a list of orders?
        for (Order o : (List<Order>) scope) {
            HttpRequest req = new HttpRequest();
            HttpResponse res;
            req.setEndpoint('https://example.com'); // your API endpoint here, or maybe something that starts with "callout:" if you'd be using Named Credentials
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serializePretty(o));
            res = h.send(req);
            if (res.getStatusCode() == 200) {
                o.Status = 'Activated';
                toUpdate.add(o);
            } else {
                // Error handling? Maybe just debug it, maybe make a Task for the user or look into
                // Database.RaisesPlatformEvents
                System.debug(res);
            }
        }
        update toUpdate;
    }

    public void finish(Database.BatchableContext bc) {}

    public void execute(SchedulableContext sc) {
        Database.executeBatch(new OrderSyncBatch(), Limits.getLimitCallouts()); // there's a limit on callouts per single transaction, so keep the batch size at or below it
        // and by default batches process 200 records at a time so we want smaller chunks
        // https://developer.salesforce.com/docs/atlas.en-us.apexref.meta/apexref/apex_methods_system_limits.htm
        // You might want to tweak the parameter even down to 1 order at a time if processing takes a while at the other end.
    }
}

My SQL Server Can Only Handle 2 players?

I am developing a game using TCP. The clients send to and listen to the server over TCP. When the server receives a request, it consults the database (SQL Server Express / Entity Framework) and sends a response back to the client.
I'm trying to make an MMORPG, so I need to know all the players' locations frequently, so I used a System.Timer to ask the server for the locations of the players around me.
The problem:
If I configure the timer to trigger, every 500 ms, a method that asks the server for the current players' locations, I can open 2 instances of the client app, but it's laggy. If I configure it to trigger every 50 ms, then when I open the second instance SQL Server often throws this exception:
"The connection was not closed. The connection's current state is open."
I mean, what the hell? I know I am requesting A LOT of things from the database in a short period, but how do real games deal with this?
Here is the code that throws the error (on the second line of the method) when SQL Server seems to be overloaded:
private List<CharacterDTO> ListAround()
{
    List<Character> characters = new List<Character>();
    characters = ObjectSet.Character.AsNoTracking().Where(x => x.IsOnline).ToList();
    return GetDto(characters);
}
Your real problem is that ObjectSet is not thread safe. You should be creating a new database context inside ListAround and disposing of it when you are done, not re-using the same context over and over again.
private List<CharacterDTO> ListAround()
{
    List<Character> characters = new List<Character>();
    using (var ObjectSet = new TheNameOfYourDataContextType())
    {
        characters = ObjectSet.Character.AsNoTracking().Where(x => x.IsOnline).ToList();
        return GetDto(characters);
    }
}
I resolved the problem by changing the strategy. Now I don't update the players' positions in the database in real time. Instead, I created a list in the server's memory (RAM) and manage only that list; eventually I update the information in the database.
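For what it's worth, a rough C# sketch of that in-memory approach might look like the code below (untested; the PosX/PosY properties, the DbContext-style Character set with Find, and the flush interval are assumptions, while TheNameOfYourDataContextType is borrowed from the answer above):
using System;
using System.Collections.Concurrent;
using System.Threading;

public class PlayerPositionCache
{
    // Character id -> last reported position; written by the network threads, read by ListAround().
    private readonly ConcurrentDictionary<int, Tuple<float, float>> _positions =
        new ConcurrentDictionary<int, Tuple<float, float>>();
    private readonly Timer _flushTimer;

    public PlayerPositionCache(TimeSpan flushInterval)
    {
        // Persist the in-memory state on a timer instead of hitting SQL Server for every packet.
        _flushTimer = new Timer(_ => Flush(), null, flushInterval, flushInterval);
    }

    // Called whenever a client reports a new position.
    public void Update(int characterId, float x, float y)
    {
        _positions[characterId] = Tuple.Create(x, y);
    }

    private void Flush()
    {
        // Fresh, short-lived context per flush, as recommended in the answer above.
        using (var db = new TheNameOfYourDataContextType())
        {
            foreach (var entry in _positions)
            {
                var character = db.Character.Find(entry.Key); // assumes a DbContext-style set
                if (character == null) continue;
                character.PosX = entry.Value.Item1; // placeholder property names
                character.PosY = entry.Value.Item2;
            }
            db.SaveChanges();
        }
    }
}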

Can Flash be integrated with SQL?

Can Flash be used together with SQL? I have a Flash form and I need to connect it to SQL. Is there any example on the net about this topic? I can't find one.
You don't use ActionScript directly with an SQL database. Instead you make HTTP requests from ActionScript to a server, specifying the correct parameters. A typical open-source setup is a PHP script communicating with a MySQL DB, but you could use Java with Oracle, Ruby with CouchDB, .NET with SQL Server or any other possible configuration. The important point is that you must be able to call a server script and pass variables... typically a RESTful setup.
Once your PHP script has been properly configured, you can use HTTP POST or HTTP GET to send values from ActionScript.
PHP:
<?php
$updateValue = $_POST["updateValue"];
$dbResult = updateDB( $updateValue ); //This should return the db response
echo( $dbResult );
?>
To call this script from ActionScript, you need to create a variables object.
var variables:URLVariables = new URLVariables();
variables.updateValue = "someResult";
The variable name, updateValue, must match the PHP variable exactly.
Now create a URLRequest object, specifying the location of your script. For this example the method must be set to POST. Assign the variables object created above to the request's data property.
var request:URLRequest = new URLRequest( "yourScript.php" );
request.method = URLRequestMethod.POST;
request.data = variables;
Now create a URLLoader and add an event listener. Do not pass the request created above to the constructor, but to the load method.
var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, onComplete );
loader.load( request );
The handler would look something like this.
private function onComplete( e:Event ) : void
{
    trace( URLLoader( e.target ).data.toString() );
}
This example shows how to update and receive a response from a server / DB combo. However, you can also query a DB through the script and parse the result. So in the PHP example above, you could output JSON, XML or even a pipe-delimited string, and this can be consumed by ActionScript.
XML is a popular choice, as ActionScript's E4X support treats XML like a native object.
To treat the response above like an XML response, use the following in the onComplete handler.
private function onComplete( e:Event ) : void
{
    var result:XML = XML( URLLoader( e.target ).data );
}
This will throw an error if your xml is poorly formed, so ensure the server script always prints out valid XML, even if there is a DB error.
The problem with giving someone a Flash file that directly accesses the SQL server is that it's very insecure. Even if it's possible (I have seen Socket classes out there that do this for MySQL, though I've never used them), allowing users to connect to your DB remotely is insecure because they can sniff the login information.
In my opinion, the best way to do this is to create a client/server script. You can easily do this with PHP or ASP.NET by using sendAndLoad to send the data you need to pass to SQL via POST fields. You can then send the values back from PHP with:
echo 'success=' . urlencode($data);
With this, Flash can access the data via the success field.
I don't personally code Flash, but I work with a company that develops kiosk applications for dozens of tradeshow companies, and my job is to store the data and return it to them. This is the method we use. You can make it even cleaner by using actual web services such as SOAP, but this method gets the job done if it's just you using it.
You should look into Zend Amf, or even the Zend Framework, for server-side communication with Flash. As far as I know Zend Amf is the fastest way to communicate with PHP (and therefore your database), and you can pass and return complex objects between the client and the server.
Consider this, for instance: you have a bunch of data in your database, and you implement functions in ZF where this data is formatted and set up as a group of value objects. From Flash, you query ZF; ZF queries the database, retrieves and formats your data, and returns your value objects as a JSON string (for instance). In Flash, you receive the JSON string, decode it and assign your value objects to whatever relevant classes you have.
There are plenty of tutorials out there regarding Flash communication with the Zend Framework.
Here's an example:
http://gotoandlearn.com/play.php?id=90

How can I load balance FastAGI?

I am writing multiple AGIs in Perl that will be called from the Asterisk dialplan. I expect to receive numerous simultaneous calls, so I need a way to load balance them. I have been advised to use FastAGI instead of AGI. The problem is that my AGIs will be distributed over many servers, not just one, and I need my entry-point Asterisk box to dispatch the calls among those servers (where the AGIs reside) based on their availability. So I thought of providing the FastAGI application with multiple IP addresses instead of one. Is that possible?
Any TCP reverse proxy would do the trick, HAProxy being one and nginx with the TCP module being another.
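For example, a minimal haproxy.cfg fragment for round-robin TCP balancing of FastAGI might look like the sketch below (untested; the server names, backend IPs and port 4573 are placeholders for your own setup):
listen fastagi
    bind 0.0.0.0:4573
    mode tcp
    balance roundrobin
    server agi1 10.0.1.100:4573 check
    server agi2 10.0.1.101:4573 check
The dialplan then points at the proxy address instead of an individual AGI server, and HAProxy spreads the TCP connections across the backends, skipping any that fail the health check.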
A while back I crafted my own FastAGI proxy using node.js (nodast) to address this very specific problem and a bit more, including the ability to run the FastAGI protocol over SSL and to route requests based on the AGI request location and parameters (such as $dnis, $channel, $language, ...).
Moreover, as the proxy configuration is basically javascript, you could actually load balance in really interesting ways.
A sample config would look as follows:
var config = {
    listen : 9090,
    upstreams : {
        test : 'localhost:4573',
        foobar : 'foobar.com:4573'
    },
    routes : {
        'agi://(.*):([0-9]*)/(.*)' : function() {
            if (this.$callerid === 'unknown') {
                return ('agi://foobar/script/' + this.$3);
            } else {
                return ('agi://foobar/script/' + this.$3 + '?callerid=' + this.$callerid);
            }
        },
        '.*' : function() {
            return ('agi://test/');
        },
        'agi://192.168.129.170:9090/' : 'agi://test/'
    }
};
exports.config = config;
I have a large IVR implementation using FastAGI (24 E1s all doing FastAGI calls, peaking at about 80%, so that's nearly 600 Asterisk channels calling FastAGI). I didn't find an easy way to do load balancing, but in my case there are different FastAGI calls: one at the beginning of the call to validate the user in a database, then a different one to check the user's balance or their most recent transactions, and another one to perform a transaction.
So what I did was send all the validation and simple queries to one application on one server and all the transaction calls to a different application on a different server.
A crude way to do load balancing if you have a lot of incoming calls on zaptel/dahdi channels would be to use different groups for the channels. For example suppose you have 2 FastAGI servers, and 4 E1's receiving calls. You can set up 2 E1's in group g1 and the other 2 E1's in group g2. Then you declare global variables like this:
[globals]
serverg1=ip_of_server1
serverg2=ip_of_server2
Then on your dialplan you call FastAGI like this:
AGI(agi://${server${CHANNEL(callgroup)}}/some_action)
On channels belonging to group g1, that will resolve to ${serverg1}, which resolves to ip_of_server1; on channels belonging to group g2, CHANNEL(callgroup) resolves to g2, so you get ${serverg2}, which resolves to ip_of_server2.
It's not the best solution because calls usually start coming in on one span and then the next, etc., so one server will get more work, but it's something.
To get real load balancing I guess we would have to write a FastAGI load balancing gateway, not a bad idea at all...
Mehhh... use the same constructs that would apply to load balancing something like web page requests.
One way is to round-robin in DNS. So if you have vru1.example.com at 10.0.1.100 and vru2.example.com at 10.0.1.101, you put two entries in DNS like...
fastagi.example.com 10.0.1.100
fastagi.example.com 10.0.1.101
... then from the dial plan agi(agi://fastagi.example.com/youagi) should in theory alternate between 10.0.1.100 and 10.0.1.101. And you can add as many hosts as you need.
The other way to go is a bit too complicated to explain here, but proxy tools like HAProxy should be able to route between multiple servers, with the added benefit of being able to take one out of the mix for maintenance or to do more advanced balancing, like distributing requests based on each server's current load.

How to send out email at a user's local time in .NET / Sql Server?

I am writing a program that needs to send out an email every hour on the hour, but at a time local to the user.
Say I have 2 users in different time zones. John is in New York and Fred is in Los Angeles. The server is in Chicago. If I want to send an email at 6 PM local to each user, I'd have to send the email to John at 5 PM server time and to Fred at 8 PM server time.
What's a good approach to this in .NET / Sql Server? I have found an xml file with all of the time zone information, so I am considering writing a script to import it into the database, then querying off of it.
Edit: I used “t4znet.dll” and did all comparisons on the .NET side.
You have two options:
Store the adjusted time for the mail action into the database for each user. Then just compare server time with stored time. To avoid confusion and portability issues, I would store all times in UTC. So, send mail when SERVER_UTC_TIME() == storedUtcTime.
Store the local time for each mail action into the database, then convert on-the-fly. Send mail when SERVER_UTC_TIME() == TO_UTC_TIME(storedLocalTime, userTimeZone).
You should decide what makes the most sense for your application. For example, if the mailing time is always the same for all users, it makes more sense to go with option (2). If the event times can differ between users, and even change per user, it may make development and debugging easier to choose option (1). Either way you will need to know the user's time zone.
*These function calls are obviously pseudo, since I don't know their invocations in T-SQL, but they should exist.
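In T-SQL the server-side UTC time is GETUTCDATE(). On the .NET side, option (2) can be done without any extra DLL using the built-in TimeZoneInfo class (.NET 3.5+); a rough sketch, assuming you store a Windows time-zone id per user:
using System;

public static class MailScheduler
{
    // True when the user's local clock is in the hour of their stored local send time.
    public static bool IsDue(TimeSpan localSendTime, string userTimeZoneId, DateTime utcNow)
    {
        TimeZoneInfo tz = TimeZoneInfo.FindSystemTimeZoneById(userTimeZoneId); // e.g. "Pacific Standard Time"
        DateTime userLocalNow = TimeZoneInfo.ConvertTimeFromUtc(utcNow, tz);
        return userLocalNow.Hour == localSendTime.Hours;
    }
}
An hourly job would then call something like IsDue(new TimeSpan(18, 0, 0), user.TimeZoneId, DateTime.UtcNow) for each user to decide whether the 6 PM mail is due (user.TimeZoneId being a hypothetical column on the Users table).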
I'm a PHP developer so I'll share what I know from PHP. I'm sure .NET will include something similar.
In PHP you can get time-zone differences relative to the server time - as you've suggested, you'd send the emails at different times on the server.
Every time you add a user, save their time offset from the server time (or better, their time zone, in case the server's time zone changes).
Then, when an update is due, have an automated task (cron, for LAMP people) that runs each hour checking whether an email needs to be sent. Do this until there are no emails left to send.
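For the hourly task itself, the cron entry would be something along these lines (the script path is a placeholder); on the Windows/.NET side the equivalent would be a Task Scheduler job or a SQL Server Agent job:
0 * * * * php /path/to/send_pending_mail.php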
You can complement your solution with the excellent article "World Clock and the TimeZoneInformation class". I made a web service that sent a file whose information included the local time and the receiver's time; what I did was modify this class so I could handle that issue, and it worked perfectly, exactly as I needed.
I think you could take this class, obtain each user's time zone from the "Users" table and "calculate" the appropriate time. My code went like this:
//Get correct destination time
DateTime thedate = DateTime.Now;
string destinationtimezone = null;
//Load the time zone where the file is going
TimeZoneInformation tzi = TimeZoneInformation.FromName(this.m_destinationtimezone);
//Calculate
destinationtimezone = tzi.FromUniversalTime(thedate.ToUniversalTime()).ToString();
This class has an issue on Windows Vista that crashes the FromIndex(int index) function, but you can modify the code. Instead of using this function:
public static TimeZoneInformation FromIndex(int index)
{
    TimeZoneInformation[] zones = EnumZones();
    for (int i = 0; i < zones.Length; ++i)
    {
        if (zones[i].Index == index)
            return zones[i];
    }
    throw new ArgumentOutOfRangeException("index", index, "Unknown time zone index");
}
you can change it to:
public static TimeZoneInformation FromName(string name)
{
    TimeZoneInformation[] zones = EnumZones();
    foreach (TimeZoneInformation tzi in zones)
    {
        if (tzi.DisplayName.Equals(name))
            return tzi;
    }
    throw new ArgumentOutOfRangeException("name", name, "Unknown time zone name");
}