I'm creating a message-thread schema where users join a thread and fetch the messages for that particular thread.
Front end: each message should show a small avatar and name.
This means I need to fetch the user who sent each message (the current implementation runs a DB query to get the user for every single message).
One solution was to grab the users at the thread level. This would mean matching message.user_id against thread.thread_users['user_id']. Would it be weird if I did this on the front end? I feel like I should be attaching the user for that particular message on the backend.
Is there a way I could pass my thread.thread_users array down into my messages connection?
Here is the current query I have:
thread(input...) {
  id
  # THIS HOLDS THE USERS WHO ARE IN THIS THREAD
  threadUsers {
    name
    firstName
    avatar {
      original
      thumbnail
      medium
    }
  }
  lastReadMessageId
  messages {
    edges {
      cursor
      node {
        from {
          firstName
          lastName
        }
        messageContent
      }
    }
  }
}
I guess the question is:
Is it possible to grab the thread_users array from inside the Message type, so I can match the thread_user to the message.user_id?
Or should I be matching the thread_users to the message.from_id on the front end?
The database looks like this:
thread:
  id
  created_at
  owner_id
messages:
  from_id (the user who sent the message)
  messageContent (the message content)
thread_users: (simply there to record which users are in which threads)
  thread_id
  user_id
What I ended up doing was letting the front end handle this. I could have used DataLoader with GraphQL, but there was no need.
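Whichever side does it, the matching boils down to building a map from user id to user and doing one lookup per message. A minimal Kotlin sketch of the idea (the data classes and field names are made up to mirror the schema above; the author did the equivalent in front-end code):
// Hypothetical shapes, mirroring the schema above.
data class ThreadUser(val userId: String, val firstName: String, val avatar: String)
data class Message(val fromId: String, val messageContent: String)

// Build the lookup once per thread, then resolve each message's sender in O(1).
fun attachSenders(
    threadUsers: List<ThreadUser>,
    messages: List<Message>
): List<Pair<Message, ThreadUser?>> {
    val usersById = threadUsers.associateBy { it.userId }
    return messages.map { it to usersById[it.fromId] }
}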
I have experience in Salesforce administration, but not in Salesforce development.
My task is to push an Order in Salesforce to an external REST API if the order is in the custom status "Processing" and the Order Start Date (EffectiveDate) is in 10 days.
The order will then be processed in the downstream system.
If the order was successfully pushed to the REST API, the status should be changed to "Activated".
Can anybody give me some example code to get started?
There's a very cool guide for picking the right mechanism; I've been studying it for one of the SF certifications: https://developer.salesforce.com/docs/atlas.en-us.integration_patterns_and_practices.meta/integration_patterns_and_practices/integ_pat_intro_overview.htm
A lot depends on whether the endpoint is accessible from Salesforce (if it isn't, you might have to pull data instead of pushing) and what authentication it needs.
For a push out of Salesforce you could use:
Outbound Message - an XML document sent when a (time-based, in your case?) workflow fires. Not REST, but it's just clicks, no code. The downside is that it's just one object per message, so you can send the Order header but no line items.
External Services would be code-free, and you could build a flow with it.
You could always push data with Apex code (something like this). We'd split the solution into two parts.
The part that gets the actual work done: at a high level, you'd write a function that takes a list of Order ids as a parameter, queries them, and calls req.setBody(JSON.serialize([SELECT Id, OrderNumber FROM Order WHERE Id IN :ids]));... If the API needs some special authentication, look into "Named Credentials". Hard to say what you'll need without knowing more about your target.
And the part that calls this Apex when the time comes. It could be more code (a nightly scheduled job that makes these callouts one minute after midnight?): https://salesforce.stackexchange.com/questions/226403/how-to-schedule-an-apex-batch-with-callout
It could be a flow / Process Builder (again, you probably want time-based flows) that calls this piece of Apex. The "worker" code would have to "implement an interface" (a fancy way of saying the code promises there will be a function "suchAndSuchName" that takes "suchAndSuch" parameters). Check out Process.Plugin.
For pulling data... well, the target application could log in to SF (SOAP, REST) and query the orders table once a day. Lots of integration tools have Salesforce plugins - do you already use Azure Data Factory? Informatica? BizTalk? Mulesoft?
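If pulling fits, the daily query can be a plain REST call. A rough Kotlin sketch (untested; the instance URL, token and API version are placeholders - /services/data/vXX.X/query is Salesforce's standard SOQL query endpoint):
import java.net.HttpURLConnection
import java.net.URL
import java.net.URLEncoder

// Rough sketch of the "pull" option: an external app hits Salesforce's
// standard REST query endpoint once a day. Instance URL and token are placeholders.
fun fetchProcessingOrders(instanceUrl: String, accessToken: String): String {
    val soql = "SELECT Id, OrderNumber FROM Order WHERE Status = 'Processing'"
    val url = URL("$instanceUrl/services/data/v57.0/query?q=" + URLEncoder.encode(soql, "UTF-8"))
    val conn = url.openConnection() as HttpURLConnection
    conn.setRequestProperty("Authorization", "Bearer $accessToken")
    return conn.inputStream.bufferedReader().use { it.readText() } // raw JSON response
}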
There's also "long polling", where the client app subscribes to notifications and SF pushes info to it. You might have heard about CometD? In SF-speak, read up on Platform Events, the Streaming API and Change Data Capture (although that last one fires on change and sends only the changed fields - not great for pushing a complete order plus line items). You can send platform events from flows too.
So... don't dive straight into coding the solution. Plan a bit; the maintenance will be easier. The code below is untested, written in Notepad - I don't have an org with orders handy... But in theory you should be able to schedule it to run at 1 AM, for example. Or from the dev console you can trigger it with Database.executeBatch(new OrderSyncBatch(), 1);
public class OrderSyncBatch implements Database.Batchable<sObject>, Database.AllowsCallouts, Schedulable {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        Date cutoff = System.today().addDays(10);
        return Database.getQueryLocator([SELECT Id, Name, Account.Name, GrandTotalAmount, OrderNumber, OrderReferenceNumber,
                (SELECT Id, UnitPrice, Quantity, OrderId FROM OrderItems)
            FROM Order
            WHERE Status = 'Processing' AND EffectiveDate = :cutoff]);
    }

    public void execute(Database.BatchableContext bc, List<sObject> scope) {
        Http h = new Http();
        List<Order> toUpdate = new List<Order>();
        // Assuming you want 1 order at a time, not a list of orders?
        for (Order o : (List<Order>) scope) {
            HttpRequest req = new HttpRequest();
            HttpResponse res;
            req.setEndpoint('https://example.com'); // your API endpoint here, or maybe something that starts with "callout:" if you'd be using Named Credentials
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serializePretty(o));
            res = h.send(req);
            if (res.getStatusCode() == 200) {
                o.Status = 'Activated';
                toUpdate.add(o);
            } else {
                // Error handling? Maybe just debug it, maybe make a Task for the user or look into
                // Database.RaisesPlatformEvents
                System.debug(res);
            }
        }
        update toUpdate;
    }

    public void finish(Database.BatchableContext bc) {}

    public void execute(SchedulableContext sc) {
        // Run the batch in small chunks: there's a governor limit on callouts per
        // transaction (Limits.getLimitCallouts() returns it), and by default batches
        // process 200 records at a time, so we want smaller chunks.
        // https://developer.salesforce.com/docs/atlas.en-us.apexref.meta/apexref/apex_methods_system_limits.htm
        // You might want to tweak the parameter even down to 1 order at a time if processing takes a while at the other end.
        Database.executeBatch(new OrderSyncBatch(), Limits.getLimitCallouts());
    }
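    // To kick off the nightly run, the class implements Schedulable (see execute(SchedulableContext) above).
    // Schedule it once from Anonymous Apex - the job name and cron expression below are just examples (1:00 AM daily):
    //   System.schedule('OrderSyncBatch nightly', '0 0 1 * * ?', new OrderSyncBatch());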
}
Currently I'm working on a lot of network-related features. At the moment I'm dealing with a network channel that allows me to send a single piece of information at a time, and I have to wait for it to be acknowledged before I can send the next piece. I'm implementing the server side, with 1..n connected clients.
Some of these messages I have to send in chunks, which is fairly easy to do with RxJava. Currently my "writing" method looks sort of like this:
fun write(bytes: ByteArray, ignoreMtu: Boolean) =
    server.deviceList()
        .first(emptyList())
        .flatMapObservable { devices ->
            Single.fromCallable {
                if (ignoreMtu) {
                    bytes.size
                } else {
                    devices.minBy { device -> device.mtu }?.mtu ?: DEFAULT_MTU
                }
            }
                .flatMapObservable { minMtu ->
                    Observable.fromIterable(bytes.asIterable())
                        .buffer(minMtu)
                }
                .map { it.toByteArray() }
                .doOnNext { server.currentData = bytes }
                .map { devices }
            // part I've left out: waiting for each device acknowledging the message, timeouts, etc.
        }
What's missing here is the part where I only allow one piece of information to be sent at a time. I also require that when I add a message to my queue, I can observe the status of just that message (completed, error).
I've thought about the most elegant way to achieve this. Solutions I've come up with include, for example, a PublishSubject<ByteArray> into which I push the messages (queue-like), then subscribe and observe it - but this would, for example, emit onError if the previous message failed.
Another idea was to give each message a number when creating/queueing it, and have a global "message-counter" Observable that I'd hook into the beginning of the chain with a filter for currently sent message == MY_MESSAGE_ID. But this feels fragile. I could increment the counter whenever the subscription terminates, but I'm sure there must be a better way to achieve my goal.
Thanks for your help.
For future reference: the most straightforward approach I've found is to use a scheduler that works on a single thread, so each task is processed sequentially.
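A minimal Kotlin sketch of that idea, assuming RxJava 2; transmit() is just a stand-in for the real chunking/acknowledgement chain from the question:
import io.reactivex.Completable
import io.reactivex.Scheduler
import io.reactivex.schedulers.Schedulers
import java.util.concurrent.Executors

// One single-threaded scheduler shared by all writes: work submitted to it
// runs strictly one task after another, in submission order.
val writeScheduler: Scheduler = Schedulers.from(Executors.newSingleThreadExecutor())

// Stand-in for the real chunking/acknowledge chain from the question.
fun transmit(bytes: ByteArray): Completable =
    Completable.fromAction { /* send bytes, block until acknowledged */ }

// Each caller gets a Completable for its own message only, so the status
// (completed / error) of one message can be observed independently.
fun write(bytes: ByteArray): Completable =
    transmit(bytes).subscribeOn(writeScheduler)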
The documentation for Spring WebSockets states:
4.4.13. User Destinations
An application can send messages targeting a specific user, and Spring’s STOMP support recognizes destinations prefixed with "/user/" for this purpose. For example, a client might subscribe to the destination "/user/queue/position-updates". This destination will be handled by the UserDestinationMessageHandler and transformed into a destination unique to the user session, e.g. "/queue/position-updates-user123". This provides the convenience of subscribing to a generically named destination while at the same time ensuring no collisions with other users subscribing to the same destination so that each user can receive unique stock position updates.
Is this supposed to work in a multi-server environment with RabbitMQ as broker?
As far as I can tell, the queue name for a user is generated by appending the simpSessionId. When using the recommended client library stomp.js this results in the first user getting the queue name "/queue/position-updates-user0", the next gets "/queue/position-updates-user1" and so on.
This in turn means the first users to connect to different servers will subscribe to the same queue ("/queue/position-updates-user0").
The only reference to this I can find in the documentation is this:
In a multi-application server scenario a user destination may remain unresolved because the user is connected to a different server. In such cases you can configure a destination to broadcast unresolved messages to so that other servers have a chance to try. This can be done through the userDestinationBroadcast property of the MessageBrokerRegistry in Java config and the user-destination-broadcast attribute of the message-broker element in XML.
But this only makes it possible to communicate with a user from a different server than the one where the WebSocket session is established.
Am I missing something? Is there any way to configure Spring so that MessagingTemplate.convertAndSendToUser(principal.getName(), destination, payload) can be used safely in a multi-server environment?
If they need to be authenticated (I assume their credentials are stored in a database), you can always use their unique database user id in the subscription.
What I do: when a user logs in, they are automatically subscribed to two topics - an account|system topic for system-wide broadcasts and an account|<userId> topic for user-specific broadcasts.
You could try something like notification|<userId> for each person to subscribe to, then send messages to that topic and they will receive them.
Since user ids are unique to each user, you shouldn't have an issue in a clustered environment, as long as each environment hits the same database information.
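With Spring's SimpMessagingTemplate, that convention could look like this minimal Kotlin sketch (notifyUser and the destination naming are just illustrations of the approach described above):
import org.springframework.messaging.simp.SimpMessagingTemplate

// Sends a payload to one user's personal topic; userId is the unique database id,
// so the destination is stable across all servers in the cluster.
fun notifyUser(template: SimpMessagingTemplate, userId: Long, payload: Any) {
    template.convertAndSend("/topic/notification|$userId", payload)
}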
Here is my send method:
public static boolean send(Object msg, String topic) {
    try {
        String payload = toJson(msg); // jsonify the message
        Message<byte[]> message = MessageBuilder.withPayload(payload.getBytes("UTF-8")).build();
        template.send(topic, message);
        return true;
    } catch (Exception ex) {
        logger.error(CommService.class.getName(), ex);
        return false;
    }
}
My destinations are preformatted, so if I want to send a message to the user with id 1, the destination looks something like /topic/account|1.
I've created a ping-pong controller that tests websockets for users who connect, to see if their environment allows websockets. I don't know if this will help you, but it does work in my clustered environment.
/**
 * Play ping pong between the client and server to see if web sockets work
 * @param input the ping pong input
 * @return the return data to check for connectivity
 * @throws Exception exception
 */
@MessageMapping("/ping")
@SendToUser(value = "/queue/pong", broadcast = false) // send only to the session that sent the request
public PingPong ping(PingPong input) throws Exception {
    int receivedBytes = input.getData().length;
    int pullBytes = input.getPull();
    PingPong response = input;
    if (pullBytes == 0) {
        response.setData(new byte[0]);
    } else if (pullBytes != receivedBytes) {
        // create a random byte array
        byte[] data = randomService.nextBytes(pullBytes);
        response.setData(data);
    }
    return response;
}
I am trying to figure out which fragments are related to each of these objects:
operation
managedObject
event
measurement
alarm
So, is there a way to get all these fragments?
Also, there are additional properties whose field name is defined as * and whose value can be an Object or anything else (*). I have gone through the device management library and the sensor library in the Cumulocity documentation, but they do not contain all the possible fragments, and it is not clear which object a fragment goes into, i.e. does it go in the operation, the managedObject, or both?
Since every user, device and application can contribute such fragments, there is no "global list" of them that you could refer to. Normally a client (application, device) knows what data it sends or requests, so in most cases such a list is also not required.
Regarding the relationship between operations and managed objects, there are some typical design patterns. Let's say you want to configure something in a device, like a polling interval:
"mydevice_Configuration": { "pollingRate": 60 }
What your application would do is to send that fragment as an operation to a device:
POST /devicecontrol/operations HTTP/1.1
...
{
  "deviceId": "12345",
  "mydevice_Configuration": { "pollingRate": 60 }
}
The device would accept the operation (http://cumulocity.com/guides/rest/device-integration/#step-6-finish-operations-and-subscribe) and change its configuration. When it does that successfully, it will update its managed object to contain the new configuration:
PUT /inventory/managedObjects/12345 HTTP/1.1
{
  "mydevice_Configuration": { "pollingRate": 60 }
}
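To finish the cycle, the device would also mark the operation itself as successful - the "finish operations" step in the guide linked above (the operation id 67890 below is made up):
PUT /devicecontrol/operations/67890 HTTP/1.1
...
{
  "status": "SUCCESSFUL"
}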
This way, your inventory reflects as closely as possible the true state of devices.
Hope that helps ...
Say I use the deferred messaging feature to send a message at some future point in time, but then later I might want to cancel that message.
Question 1 - When making the original bus.Defer(...) call, how do I get a unique identifier back for that message? I would expect there to be a message id or a timeout id of some sort.
Question 2 - Short of calling the RavenDB database directly, is there a way to query the bus to get back all pending deferred messages?
Question 3 - Is there some way to cancel a deferred message? I would expect something like bus.CancelDeferred(messageid)
Is any of this available, or are there any other mechanisms I can use to achieve similar results?
I've had the need to abandon deferred messages and outstanding replies a few times, and I did it by "incrementing the correlation ID" on my saga. You don't mention sagas, though, so I'm not sure this solution will work for you. I do think, however, that it falls under the "any other mechanisms" you ask about :)
Check out this example - here I have the state of my saga which, among other things, contains a custom CorrelationId:
public class MySagaData : ISagaData
{
    // ... the usual stuff here
    public string CorrelationId { get; set; }
}
and then, each time I defer a message or request something, I correlate the deferred message and/or reply with the current value of the correlation ID:
bus.Defer(time, new Something { CorrelationId = Data.CorrelationId });
bus.Send(new SomeRequest { CorrelationId = Data.CorrelationId });
thus conceptually correlating the deferred message and the reply with the current state of the saga.
And then, in cases where I want to abandon all outstanding messages, I simply reset the saga's correlation ID to a new value - I usually use something like somethingWithBusinessMeaning/timestamp.
This way, abandoned messages will not correlate with any saga instance, effectively being ignored.
Does it make sense?
1) There is currently no way to do this. You could add your own header with an app-specific id if you need to keep track of them.
2) No, you have to query the storage, as you mention. That said, what would be the use case for this?
3) No, and this is by design. Given that you can't assume when a message will arrive, you can't rely on timeouts being cancelled, since a deferred message might be stuck in a queue and processed right after you cancel. The "cancel" message might also get lost.
In short: your code needs to be prepared to discard "invalid" messages no matter what.