Function running locally but not when deployed to Azure - azure-function-app

I have an HTTP trigger function. When I send a message to the URL, it logs data to the queue storage as required:
http://localhost:7071/api/xxxx?message=89000
However, when I do the same in Azure on the function URL
https://yyyyy.azurewebsites.net/api/xxxx?message=89000
nothing is logged. How can I resolve this?
Another question: the underlying code is
import logging
import azure.functions as func

def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    input_msg = req.params.get('message')
    logging.info(input_msg)
    msg.set(input_msg)
    return func.HttpResponse(
        "This is a test.",
        status_code=200
    )
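For the msg queue output binding to be invoked at all, it must be declared in the function's function.json. A minimal sketch for illustration (the queue name and connection setting name here are assumptions, not taken from the question):

{
    "scriptFile": "__init__.py",
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "methods": ["get", "post"]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        },
        {
            "type": "queue",
            "direction": "out",
            "name": "msg",
            "queueName": "outqueue",
            "connection": "AzureWebJobsStorage"
        }
    ]
}

The connection value names an application setting that holds the storage connection string, which is exactly what tends to go missing after deployment (see the answer below).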
It is expected to receive the following payload
{
    "layerId": 0,
    "serviceName": "myService",
    "changeType": "FeaturesCreated",
    "orgId": "myorgId",
    "changesUrl": "https://olserver/services/myService/FeatureService/extractChanges?serverGens=[1122, 1124]"
}
When I call http://localhost:7071/api/xxxx?message=89000, it logs to the queue storage just fine, but not when this payload is delivered. How can I configure this?
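Note that the code above only reads the query string (req.params), so a POSTed JSON body like the one shown will never populate input_msg. One possible way to handle both, sketched under the assumption that the changesUrl field is what should be queued:

input_msg = req.params.get('message')
if input_msg is None:
    try:
        # fall back to the webhook payload in the request body
        payload = req.get_json()
        input_msg = payload.get('changesUrl')
    except ValueError:
        # body was empty or not valid JSON
        pass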

An Azure Function App doesn't get environment variables from local.settings.json; that file is used only for local development and is not deployed.
You need to add the same settings (in particular, the storage connection string used by the queue output binding) to the Function App's application settings in the portal.
And in order to prevent messages failing to be saved due to certain settings of the existing queue, you can try creating a new queue to rule that situation out.
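For illustration, settings defined locally like this (placeholder values) exist only on the development machine:

{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>",
        "FUNCTIONS_WORKER_RUNTIME": "python"
    }
}

After deploying, recreate each key under the Function App's Configuration > Application settings in the portal, or with the Azure CLI:

az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings "AzureWebJobsStorage=<connection-string>"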

Related

What is the order of HTTP responses, data, and error, and are they guaranteed to be in that order?

I am debugging/testing the part of my app that sends an HTTP POST, with a file to upload, to a 3rd party server and need to be sure of the order of the info that server sends back.
Below is a very trimmed down sample of how I handle what is returned by the server.
At the moment, I am debugging for files that exceed the LimitRequestBody size set in Apache. Yes, I do check the file size in my app before sending, but I am trying to account for anything possible, e.g. a malicious bot sending data outside of my app.
What I can't seem to find online is the lifecycle of what a server will send back in terms of the response, data, and error, and I need to be sure I will get them back in this order:
Response
Data
and if there's an error:
Error (and then nothing else)
uploadSession.dataTask(with: upFile.toUrl!) { (data, response, error) in
    if let response = response {
        upLoadInvClass.upResp(resp: response)
    }
    if let error = error {
        upLoadInvClass.upErr(error: error)
    }
    if let data = data {
        upLoadInvClass.upData(data: data)
    }
}.resume()

Akka HTTP Source Streaming vs regular request handling

What is the advantage of using Source Streaming vs the regular way of handling requests? My understanding is that in both cases:
The TCP connection will be reused
Back-pressure will be applied between the client and the server
The only advantage of Source Streaming I can see is if there is a very large response and the client prefers to consume it in smaller chunks.
My use case is that I have a very long list of users (millions), and I need to call a service that performs some filtering on the users, and returns a subset.
Currently, on the server side I expose a batch API, and on the client, I just split the users into chunks of 1000, and make X batch calls in parallel using Akka HTTP Host API.
I am considering switching to HTTP streaming, but cannot quite figure out what the value would be.
You are missing one other huge benefit: memory efficiency. With a streamed pipeline (client → server → client), all parties safely process data without the risk of exhausting memory. This is particularly useful on the server side, where you always have to assume the clients may do something malicious...
Client Request Creation
Suppose the ultimate source of your millions of users is a file. You can create a stream source from this file:
val userFilePath : java.nio.file.Path = ???
val userFileSource = akka.stream.scaladsl.FileIO.fromPath(userFilePath)
This source can then be used to create your HTTP request, which will stream the users to the service:
import akka.http.scaladsl.model.HttpEntity.{Chunked, ChunkStreamPart}
import akka.http.scaladsl.model.{RequestEntity, ContentTypes, HttpRequest}

val httpRequest : HttpRequest =
  HttpRequest(uri = "http://filterService.io",
              entity = Chunked.fromData(ContentTypes.`text/plain(UTF-8)`, userFileSource))
This request will now stream the users to the service without reading the entire file into memory. Only chunks of data are buffered at a time; therefore, you can send a request with a potentially unbounded number of users and your client will be fine.
Server Request Processing
Similarly, your server can be designed to accept a request with an entity that can potentially be of infinite length.
Your question says the service will filter the users. Assuming we have a filtering function:
val isValidUser : (String) => Boolean = ???
This can be used to filter the incoming request entity and create a response entity which will feed the response:
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.model.HttpResponse
import akka.http.scaladsl.model.HttpEntity.Chunked
import akka.stream.scaladsl.Source
import akka.util.ByteString

val route = extractDataBytes { userSource =>
  val responseSource : Source[ByteString, _] =
    userSource
      .map(_.utf8String)
      .filter(isValidUser)
      .map(ByteString.apply)
  complete(HttpResponse(entity = Chunked.fromData(ContentTypes.`text/plain(UTF-8)`,
                                                  responseSource)))
}
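One caveat with this sketch: it assumes each incoming ByteString chunk holds exactly one user, but chunk boundaries in a stream are arbitrary. In practice you would frame the byte stream before parsing, e.g. with Framing.delimiter, assuming newline-delimited user records (the delimiter and frame length here are assumptions):

import akka.stream.scaladsl.Framing
import akka.util.ByteString

// re-chunk the raw byte stream into complete lines before parsing users
val framedUsers =
  userSource
    .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024, allowTruncation = true))
    .map(_.utf8String)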
Client Response Processing
The client can similarly process the filtered users without reading them all into memory. We can, for example, dispatch the request and send all of the valid users to the console:
import akka.http.scaladsl.Http

// assumes an implicit ActorSystem, Materializer, and ExecutionContext are in scope
Http()
  .singleRequest(httpRequest)
  .map { response =>
    response
      .entity
      .dataBytes
      .map(_.utf8String)
      .runForeach(println) // Source has no foreach; run the stream with runForeach
  }

User destinations in a multi-server environment? (Spring WebSocket and RabbitMQ)

The documentation for Spring WebSockets states:
4.4.13. User Destinations
An application can send messages targeting a specific user, and Spring’s STOMP support recognizes destinations prefixed with "/user/" for this purpose. For example, a client might subscribe to the destination "/user/queue/position-updates". This destination will be handled by the UserDestinationMessageHandler and transformed into a destination unique to the user session, e.g. "/queue/position-updates-user123". This provides the convenience of subscribing to a generically named destination while at the same time ensuring no collisions with other users subscribing to the same destination so that each user can receive unique stock position updates.
Is this supposed to work in a multi-server environment with RabbitMQ as broker?
As far as I can tell, the queue name for a user is generated by appending the simpSessionId. When using the recommended client library stomp.js, this results in the first user getting the queue name "/queue/position-updates-user0", the next getting "/queue/position-updates-user1", and so on.
This in turn means the first users to connect to different servers will subscribe to the same queue ("/queue/position-updates-user0").
The only reference to this I can find in the documentation is this:
In a multi-application server scenario a user destination may remain unresolved because the user is connected to a different server. In such cases you can configure a destination to broadcast unresolved messages to so that other servers have a chance to try. This can be done through the userDestinationBroadcast property of the MessageBrokerRegistry in Java config and the user-destination-broadcast attribute of the message-broker element in XML.
But this only makes it possible to communicate with a user from a different server than the one where the WebSocket is established.
I feel I'm missing something. Is there any way to configure Spring so that MessagingTemplate.convertAndSendToUser(principal.getName(), destination, payload) can be used safely in a multi-server environment?
If the users need to be authenticated (I assume their credentials are stored in a database), you can always use their unique database user id in the destination they subscribe to.
What I do is, when a user logs in, they are automatically subscribed to two topics: an account|system topic for system-wide broadcasts and an account|<userId> topic for user-specific broadcasts.
You could try something like notification|<userId> for each person to subscribe to, then send messages to that topic and they will receive them.
Since user ids are unique to each user, you shouldn't have an issue in a clustered environment, as long as each environment is hitting the same database information.
Here is my send method:
public static boolean send(Object msg, String topic) {
    try {
        String destination = topic;
        String payload = toJson(msg); // jsonify the message
        Message<byte[]> message = MessageBuilder.withPayload(payload.getBytes("UTF-8")).build();
        template.send(destination, message);
        return true;
    } catch (Exception ex) {
        logger.error(CommService.class.getName(), ex);
        return false;
    }
}
My destinations are preformatted, so if I want to send a message to the user with id 1, the destination looks something like /topic/account|1 (see the usage sketch below).
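For example, a broadcast to the user whose database id is 1 (Notification here is a hypothetical payload class, just for illustration):

// destination is preformatted as /topic/account|<userId>
send(new Notification("stock position updated"), "/topic/account|1");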
I've created a ping pong controller that tests websockets for users who connect, to see if their environment allows websockets. I don't know if this will help you, but it does work in my clustered environment.
/**
 * Play ping pong between the client and server to see if web sockets work
 * @param input the ping pong input
 * @return the return data to check for connectivity
 * @throws Exception exception
 */
@MessageMapping("/ping")
@SendToUser(value = "/queue/pong", broadcast = false) // send only to the session that sent the request
public PingPong ping(PingPong input) throws Exception {
    int receivedBytes = input.getData().length;
    int pullBytes = input.getPull();
    PingPong response = input;
    if (pullBytes == 0) {
        response.setData(new byte[0]);
    } else if (pullBytes != receivedBytes) {
        // create random byte array
        byte[] data = randomService.nextBytes(pullBytes);
        response.setData(data);
    }
    return response;
}

Azure Queue, AddMessage then UpdateMessage

Is it possible to Add a message to an Azure queue then, in the same flow, update or delete that message?
The idea would be to use the queue to ensure that some work gets done - there's a worker role monitoring that queue. But, the Web role which added the message may be able to make some progress toward (and sometimes even to complete) the transaction.
The worker would already be designed to handle double-delivery and reprocessing partially handled messages (from previous, failed worker attempts) - so there isn't a technical problem here, just time inefficiency and some superfluous storage transactions.
So far, it seems like adding the message allows for a delivery delay, giving the web role some time, but doesn't give back a pop receipt, which it seems we'd need in order to update/delete the message. Am I missing something?
It seems this feature was added as part of the "2016-05-31" REST API:
we now make pop receipt value available in the Put Message (aka Add Message) response which allows users to update/delete a message without the need to retrieve the message first.
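With a client library that supports that service version, the message object passed to AddMessage comes back with its Id and PopReceipt populated, so the same message can then be updated or deleted without retrieving it. A minimal sketch using the classic .NET storage client (the queue object is created as in the answer below; the content and timings are made up):

// Add the message; the client fills in message.Id and message.PopReceipt on return.
CloudQueueMessage message = new CloudQueueMessage("process order 42");
queue.AddMessage(message);

// If the web role makes progress, update the content and push visibility out...
message.SetMessageContent("order 42: partially processed");
queue.UpdateMessage(message, TimeSpan.FromSeconds(60),
    MessageUpdateFields.Content | MessageUpdateFields.Visibility);

// ...or delete the message outright if the web role completed the work itself.
queue.DeleteMessage(message);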
I suggest you follow these steps, as they worked for me.
How to: Create a queue
A CloudQueueClient object lets you get reference objects for queues. The following code creates a CloudQueueClient object. All code in this guide uses a storage connection string stored in the Azure application's service configuration. There are also other ways to create a CloudStorageAccount object. See CloudStorageAccount documentation for details.
// Retrieve storage account from connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the queue client
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
Use the queueClient object to get a reference to the queue you want to use. You can create the queue if it doesn't exist.
// Retrieve a reference to a queue
CloudQueue queue = queueClient.GetQueueReference("myqueue");
// Create the queue if it doesn't already exist
queue.CreateIfNotExists();
How to: Insert a message into a queue
To insert a message into an existing queue, first create a new CloudQueueMessage. Next, call the AddMessage method. A CloudQueueMessage can be created from either a string (in UTF-8 format) or a byte array. Here is code which creates a queue (if it doesn't exist) and inserts the message 'Hello, World':
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the queue client.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
// Retrieve a reference to a queue.
CloudQueue queue = queueClient.GetQueueReference("myqueue");
// Create the queue if it doesn't already exist.
queue.CreateIfNotExists();
// Create a message and add it to the queue.
CloudQueueMessage message = new CloudQueueMessage("Hello, World");
queue.AddMessage(message);
For more details, refer to this link:
http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-queues/

Prevent RabbitMQ web-stomp client from sending

I have RabbitMQ + WebStomp. I would like to completely restrict the ability to send messages to a queue from JavaScript code; only the server side should be able to do this.
In other words, I would like to allow the following code:
...
client.subscribe("/queue/My-One-Way-Queue", function(m) {
...
client.onreceive = function(message) {
    console.log(message);
}
And prevent malicious software from doing the following:
client.send('/queue/My-One-Way-Queue',
            {'reply-to': '/temp-queue/My-One-Way-Queue'}, text);
You need to create a user for the JavaScript client that has read permissions only. See: https://www.rabbitmq.com/access-control.html
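For example (a sketch; the user name, password, and vhost are placeholders), give the browser-facing user read permission but no configure or write permission:

rabbitmqctl add_user web_client s3cret
rabbitmqctl set_permissions -p / web_client "^$" "^$" ".*"

The three patterns are configure, write, and read, in that order; "^$" matches nothing, so this user can consume but not publish or declare anything. One caveat: subscribing to a queue that doesn't exist yet makes the Web STOMP plugin try to declare it, which requires configure permission, so create the queue from the server side first.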