Trigger an API call in ServiceNow when a new Incident is created

I am trying to trigger an API call with the Incident sys_id as a parameter when a new Incident gets created in ServiceNow.
(function executeRule(current, previous /*null when async*/) {
    if (current.operation() == 'insert') {
        // Add your code here
    }
})(current, previous);
How can I achieve this?
My API:
https://Demand.jitterbit.cc/defaultUrlPrefix/v1.0/snowSalesforceCaseCreate

You need to create a REST Message; there you can choose the HTTP method you want to use (e.g. POST) and add the parameters you need (e.g. sys_id).
After that, create a Business Rule, and in that Business Rule call the REST Message and pass the parameters.
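For example, here is a minimal sketch of an async Business Rule on the Incident table (When: after, Insert checked). The endpoint is the URL from the question; the JSON body shape and header are assumptions, so adjust them to whatever your API actually expects. If you create a REST Message record instead of hard-coding the endpoint, swap the constructor for new sn_ws.RESTMessageV2('<REST Message name>', '<method name>').
(function executeRule(current, previous /*null when async*/) {
    if (current.operation() == 'insert') {
        try {
            var r = new sn_ws.RESTMessageV2();
            r.setEndpoint('https://Demand.jitterbit.cc/defaultUrlPrefix/v1.0/snowSalesforceCaseCreate');
            r.setHttpMethod('POST');
            r.setRequestHeader('Content-Type', 'application/json');
            // Assumed payload: just the new incident's sys_id
            r.setRequestBody(JSON.stringify({ sys_id: current.getUniqueValue() }));
            var response = r.execute();
            gs.info('snowSalesforceCaseCreate returned status ' + response.getStatusCode());
        } catch (ex) {
            gs.error('snowSalesforceCaseCreate call failed: ' + ex.message);
        }
    }
})(current, previous);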

Related

How to push Salesforce Order to an external REST API?

I have experience in Salesforce administration, but not in Salesforce development.
My task is to push an Order in Salesforce to an external REST API if the order is in the custom status "Processing" and the Order Start Date (EffectiveDate) is in 10 days.
The order will then be processed in the downstream system.
If the order was successfully pushed to the REST API, the status should be changed to "Activated".
Can anybody give me some example code to get started?
There's a very cool guide for picking the right mechanism; I've been studying from this PDF for one of the SF certifications: https://developer.salesforce.com/docs/atlas.en-us.integration_patterns_and_practices.meta/integration_patterns_and_practices/integ_pat_intro_overview.htm
A lot depends on whether the endpoint is accessible from Salesforce (if it isn't, you might have to pull data instead of pushing) and what authentication it needs.
For a push out of Salesforce you could use:
Outbound Message - it'd be an XML document sent when a (time-based in your case?) workflow fires. It's not REST, but it's just clicks, no code. The downside is that it's just 1 object per message, so you can send the Order header but no line items.
An External Service would be code-free and you could build a flow with it.
You could always push data with Apex code (something like this). We'd split the solution into 2 bits.
The part that gets the actual work done: at a high level you'd write a function that takes a list of Order ids as a parameter, queries them, and calls req.setBody(JSON.serialize([SELECT Id, OrderNumber FROM Order WHERE Id IN :ids]));... If the API needs some special authentication, you'd look into "Named Credentials". Hard to say what you'll need without knowing more about your target.
And the part that would call this Apex when the time comes. Could be more code (a nightly scheduled job that makes these callouts 1 minute after midnight?) https://salesforce.stackexchange.com/questions/226403/how-to-schedule-an-apex-batch-with-callout
Could be a flow / process builder (again, you probably want time-based flows) that calls this piece of Apex. The "worker" code would have to "implement an interface" (a fancy way of saying that the code promises there will be a function "suchAndSuchName" that takes "suchAndSuch" parameters). Check out Process.Plugin.
For pulling data... well, the target application could log in to SF (SOAP, REST) and query the table of orders once a day. Lots of integration tools have Salesforce plugins; do you already use Azure Data Factory? Informatica? BizTalk? Mulesoft?
There's also something called "long polling" where the client app subscribes to notifications and SF pushes info to them. You might have heard about CometD? In SF-speak, read up on Platform Events, Streaming API, Change Data Capture (although that last one fires on change and sends only the changed fields, not great for pushing a complete order + line items). You can send platform events from flows too.
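If you went the Platform Events route, publishing from Apex is a one-liner. A hedged sketch, assuming you've defined a custom platform event Order_Pushed__e with a text field OrderId__c (both names are made up here; subscribers would pick the event up via CometD):
// Order_Pushed__e and OrderId__c are hypothetical - define the platform event object first
Order o = [SELECT Id FROM Order WHERE Status = 'Processing' LIMIT 1];
Database.SaveResult sr = EventBus.publish(new Order_Pushed__e(OrderId__c = o.Id));
System.debug('Event published: ' + sr.isSuccess());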
So... don't dive straight to coding the solution. Plan a bit; the maintenance will be easier. This is untested, written in Notepad, and I don't have an org with orders handy... But in theory you should be able to schedule it to run at 1 AM, for example. Or from the dev console you can trigger it with Database.executeBatch(new OrderSyncBatch(), 1);
public class OrderSyncBatch implements Database.Batchable<sObject>, Database.AllowsCallouts, Schedulable {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        Date cutoff = System.today().addDays(10);
        return Database.getQueryLocator([
            SELECT Id, Name, Account.Name, GrandTotalAmount, OrderNumber, OrderReferenceNumber,
                (SELECT Id, UnitPrice, Quantity, OrderId FROM OrderItems)
            FROM Order
            WHERE Status = 'Processing' AND EffectiveDate = :cutoff
        ]);
    }

    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        Http h = new Http();
        List<Order> toUpdate = new List<Order>();
        // Assuming you want 1 order at a time, not a list of orders?
        for (Order o : (List<Order>) scope) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://example.com'); // your API endpoint here, or maybe something that starts with "callout:" if you'd be using Named Credentials
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serializePretty(o));
            HttpResponse res = h.send(req);
            if (res.getStatusCode() == 200) {
                o.Status = 'Activated';
                toUpdate.add(o);
            } else {
                // Error handling? Maybe just debug it, maybe make a Task for the user or look into
                // Database.RaisesPlatformEvents
                System.debug(res);
            }
        }
        update toUpdate;
    }

    public void finish(Database.BatchableContext bc) {}

    // Schedulable entry point so the batch can be run as a nightly job
    public void execute(SchedulableContext sc) {
        Database.executeBatch(new OrderSyncBatch(), Limits.getLimitCallouts()); // there's a limit of 10 callouts per single transaction
        // and by default batches process 200 records at a time so we want smaller chunks
        // https://developer.salesforce.com/docs/atlas.en-us.apexref.meta/apexref/apex_methods_system_limits.htm
        // You might want to tweak the parameter even down to 1 order at a time if processing takes a while at the other end.
    }
}
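To actually schedule it nightly (the cron expression below fires at 1 AM; adjust to taste), something like this from anonymous Apex should do, assuming the class above is deployed:
// Cron fields: Seconds Minutes Hours Day_of_month Month Day_of_week
System.schedule('Nightly order sync', '0 0 1 * * ?', new OrderSyncBatch());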

Why is .call() necessary when I want to see returned values from a smart contract function?

In my contract I have this function (solc 0.8.4):
function makeDecision(address person) external returns (string memory name, bool approved) {
    require(msg.sender == loanOfficer, "Only the loan officer can initiate a decision.");
    require(bytes(applicants[person].name).length != 0, "That person is not in the pool of applicants.");
    if (applicants[person].credScore > 650 && applicants[person].credAge > 5) {
        applicants[person].approved = true;
    }
    return (applicants[person].name, applicants[person].approved);
}
When I go into my Truffle console and call my function this way, loanContract.makeDecision(accounts[1]), everything works fine, but I get a tx receipt as the response.
When I call my function this way via the Truffle console, loanContract.makeDecision.call(accounts[1]), I get the expected response from my function. I want an explanation of why this difference in response occurs, so that I understand what is going on at a deeper level. I hate using things without understanding why they work.
If it helps, my contract (which is named LoanDisbursement) was initialized in the console like so: let loanContract = await LoanDisbursement.deployed() and my accounts variable: let accounts = await web3.eth.getAccounts()
Any tips would help since I am still learning and diving into this ecosystem. I've not been able to find any decent documentation on this functionality as of yet. Thanks.
Truffle contract functions create a transaction - and return the transaction data.
The call function doesn't create a transaction, it just makes a call. So it cannot return a transaction receipt, and the authors of Truffle decided to return the function value instead.
Without a transaction, the state of your contract is not changed, which is probably not what you want; you should always create a transaction when you need to save state changes to the blockchain.
Truffle doesn't return the function value when you're creating a transaction. Using Truffle, there are two approaches that they recommend:
Reading event logs that the transaction produced
Add an event to your function, emit MadeDecision(applicants[person].name, applicants[person].approved);, and then access it in your JS code in result.logs (see the sketch after this list).
Calling a getter in a subsequent call.
Tx setValue(5) and then call getValue(). Or in your case:
Tx makeDecision(0x123) and then call applicants[0x123] (assuming applicants is public).
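For the first approach, a rough sketch of the console/JS side, assuming you declare and emit an event MadeDecision(string name, bool approved) in the contract:
const result = await loanContract.makeDecision(accounts[1]); // sends a transaction
const decision = result.logs.find(l => l.event === 'MadeDecision'); // assumes the event above was emitted
console.log(decision.args.name, decision.args.approved);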

Creating a trigger dynamically in Apex

I want to create a trigger dynamically in my Apex class.
Can anyone here help me?
Please guide me on this.
I am a fresher with Visualforce pages.
You cannot create a trigger dynamically in Apex, because Apex code has no access to the Trigger object, so you cannot create triggers programmatically. Anyway, we never really need to create a trigger dynamically. Look here: http://boards.developerforce.com/t5/Apex-Code-Development/Create-Trigger-dynamically/td-p/667868
Sample Apex code to create a trigger via the Tooling API endpoint using a REST callout:
// JSON body describing the trigger to create
String json = '{ "Name" : "COTrigger", ' +
    '"TableEnumOrId" : "Custom_Object__c", ' +
    '"Body" : "trigger COTrigger on Custom_Object__c (after insert) { /* Do Something */ }" }';
HttpRequest req = new HttpRequest();
req.setEndpoint('https://[salesforce instance].salesforce.com/services/data/v27.0/tooling/sobjects/ApexTrigger');
req.setMethod('POST');
req.setHeader('Content-Type', 'application/json');
req.setHeader('Authorization', 'Bearer ' + sessionId); // e.g. UserInfo.getSessionId()
req.setBody(json);
Http http = new Http();
HttpResponse res = http.send(req);
System.debug(res.getBody());
I corrected some syntax errors. The Tooling API is basically a set of objects and components accessible through it. Try this code; actually I used this code to create an Apex class, not an Apex trigger, and here I just changed the body and endpoint to make it work for a trigger. If it doesn't work, it means creating a trigger from the Tooling API is still not supported.
Read this guide: http://www.salesforce.com/us/developer/docs/api_toolingpre/api_tooling.pdf. It covers the Tooling API, and no complex configuration is required to do this. You only need a REST callout to the endpoint URL to create the trigger. The endpoint URLs are provided in the guide linked above.

WCF Unique ID for each service method call

I'm logging using log4net, and I want to log an ID that is unique for each service method call. I don't need it to be unique across service calls, just within a method call. Is there any built-in ID I can use in WCF? I don't want to manually create a GUID or something at the start of the method call.
e.g.
void wcfMethod(int x)
{
    log("xxx");
    // some work
    log("yyy");
}
private void log(string message)
{
    var frame = new StackFrame(1);
    var method = frame.GetMethod();
    var type = method.DeclaringType;
    var name = method.Name;
    var log = LogManager.GetLogger(type);
    // LOG ID HERE
    ThreadContext.Properties["MessageId"] = OperationContext.Current.IncomingMessageHeaders.MessageId; // SOMETHING HERE
}
I've tried OperationContext.Current.IncomingMessageHeaders.MessageId but that's always null.
I've read about WCF instance correlation but I don't need something that complicated (e.g. unique across different method calls).
If anyone can help, that would be much appreciated. Thanks in advance.
Plain SOAP or REST has no such identification included in messages. You must use some additional feature or a transport protocol (for example MSMQ) that supports message identification. In the case of MessageId you have to use a SOAP service with WS-Addressing, and this information must be passed from the client.
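For example, with a wsHttpBinding endpoint (which speaks WS-Addressing) the client can stamp a MessageId on the outgoing request, and the service should then see it in OperationContext.Current.IncomingMessageHeaders.MessageId instead of null. A rough sketch, where MyServiceClient is a placeholder for your generated proxy and wcfMethod is the operation from the question:
using (var client = new MyServiceClient())
using (new OperationContextScope(client.InnerChannel))
{
    // WS-Addressing MessageID header, set explicitly by the caller
    OperationContext.Current.OutgoingMessageHeaders.MessageId =
        new System.Xml.UniqueId(Guid.NewGuid());
    client.wcfMethod(42);
}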

WCF routing -- how to correctly add filter table programmatically

I am using the WCF 4 routing service, and need to configure the service programmatically (as opposed to via config). The examples I have seen of doing so, which are rare, create a MessageFilterTable as follows:
var filterTable=new MessageFilterTable<IEnumerable<ServiceEndpoint>>();
But isn't the generic parameter to that method supposed to be TFilterData (the type of data you are filtering on)? I have my own custom filter that accepts a string -- can I still create the filter table this way?
If this will work...will the routing infrastructure create client endpoints out of the list I pass in?
I have created a WCF 4 routing service and configured it programmatically. My code is a bit more spaced out than it needs to be (maintainability for others being a concern, hence the comments), but it definitely works. This has two filters: one filters some specific Actions to a given endpoint, and the second sends the remaining actions to a generic endpoint.
// Create the message filter table used for routing messages
MessageFilterTable<IEnumerable<ServiceEndpoint>> filterTable = new MessageFilterTable<IEnumerable<ServiceEndpoint>>();

// If we're processing a subscribe or unsubscribe, send to the subscription endpoint
filterTable.Add(
    new ActionMessageFilter(
        "http://etcetcetc/ISubscription/Subscribe",
        "http://etcetcetc/ISubscription/KeepAlive",
        "http://etcetcetc/ISubscription/Unsubscribe"),
    new List<ServiceEndpoint>()
    {
        new ServiceEndpoint(
            new ContractDescription("ISubscription", "http://etcetcetc/"),
            binding,
            new EndpointAddress(String.Format("{0}{1}{2}", TCPPrefix, HostName, SubscriptionSuffix)))
    },
    HighRoutingPriority);

// Otherwise, send all other packets to the routing endpoint
MatchAllMessageFilter filter = new MatchAllMessageFilter();
filterTable.Add(
    filter,
    new List<ServiceEndpoint>()
    {
        new ServiceEndpoint(
            new ContractDescription("IRouter", "http://etcetcetc/"),
            binding,
            new EndpointAddress(String.Format("{0}{1}{2}", TCPPrefix, HostName, RouterSuffix)))
    },
    LowRoutingPriority);

// Then attach the filter table as part of a RoutingBehavior to the host
_routingHost.Description.Behaviors.Add(
    new RoutingBehavior(new RoutingConfiguration(filterTable, false)));
You can find a good example on MSDN here: How To: Dynamic Update Routing Table
Note how they don't directly create an instance of the MessageFilterTable, but instead use the FilterTable property provided by a new RoutingConfiguration instance.
If you have written a custom filter, then you will add it like this:
rc.FilterTable.Add(new CustomMessageFilter("customStringParameter"), new List<ServiceEndpoint> { physicalServiceEndpoint });
The CustomMessageFilter will be your filter, and the "customStringParameter" is the string that (I believe) you are talking about.
When the Router receives a connection request, it will attempt to match it via this table entry; if this is successful, then you are right, the router will create a client endpoint to talk to the ServiceEndpoint that you provided.
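For reference, a custom filter is just a MessageFilter subclass that overrides both Match overloads. A minimal sketch follows; the header-based match rule is purely an assumption about how you might want to route:
public class CustomMessageFilter : MessageFilter
{
    private readonly string _value;

    public CustomMessageFilter(string value)
    {
        _value = value;
    }

    public override bool Match(Message message)
    {
        // Hypothetical rule: route when a custom header equals the configured string
        int index = message.Headers.FindHeader("RoutingKey", "http://etcetcetc/");
        return index >= 0 && message.Headers.GetHeader<string>(index) == _value;
    }

    public override bool Match(MessageBuffer buffer)
    {
        using (Message message = buffer.CreateMessage())
        {
            return Match(message);
        }
    }
}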