I am building a digital signage application and I need to allow my users to upload large images/videos. I have looked at the streaming mode for uploading and downloading files, which seems to be the way to go. My problem is figuring out the proper approach to uploading: I need to put an entry into my database, then upload the file to a specific folder on the server (each customer has his own folder, where the file needs to be placed). It doesn't seem possible to send any more information along with the file than the stream itself. All I need is some metadata: the name of the file and the customer ID. Does anyone have a working example of this, or can point me in the right direction?
Sincerely
/Brian
Well, you're not saying what you've tried and how it failed, but here's a basic outline of how we're doing it:
[ServiceContract]
public interface IMyStreamingService
{
    [OperationContract]
    void Upload(FileUploadRequest request);
}

[MessageContract]
public class FileUploadRequest
{
    [MessageHeader(MustUnderstand = true)]
    public string Path;

    [MessageBodyMember(Order = 1)]
    public Stream FileData;

    public FileUploadRequest(string path, Stream fileData)
    {
        this.Path = path;
        this.FileData = fileData;
    }
}
I answered a similar question a few days ago. You have to declare an operation which accepts and returns message contracts, and you have to create those message contracts. For streaming, the contract can contain only a single body member, which must be of type Stream; all other contract members must be declared as headers. The linked question contains a full example for downloading; you just need to do the same for uploading.
Currently I am able to see the streaming values exposed by the code below, but only one HTTP client receives the continuous stream of values; the others do not.
The code, a modified version of the Quarkus quickstart for Kafka reactive streaming, is:
@Path("/migrations")
public class StreamingResource {

    private volatile Map<String, String> counterBySystemDate = new ConcurrentHashMap<>();

    @Inject
    @Channel("migrations")
    Flowable<String> counters;

    @GET
    @Path("/stream")
    @Produces(MediaType.SERVER_SENT_EVENTS) // denotes that server-sent events (SSE) will be produced
    @SseElementType("text/plain") // denotes that the data contained within this SSE stream is regular text/plain data
    public Publisher<String> stream() {
        Flowable<String> mainStream = counters.doOnNext(dateSystemToCount -> {
            String key = dateSystemToCount.substring(0, dateSystemToCount.lastIndexOf("_"));
            counterBySystemDate.put(key, dateSystemToCount);
        });
        return fromIterable(counterBySystemDate.values().stream().sorted().collect(Collectors.toList()))
                .concatWith(mainStream)
                .onBackpressureLatest();
    }
}
Is it possible to make any modification that would allow multiple clients to consume the same data, in a broadcast fashion?
I guess this implies letting go of backpressure, because that would imply a state per consumer?
I saw that Observable is not accepted as a return type in resteasy-rxjava2 for the Server-Sent Events media type.
Please let me know any ideas,
Thank you
Please find the full code in "Why in multiple connections to PricesResource Publisher, only one gets the stream?"
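A broadcast needs a hot, shared source: with a cold stream, each subscription triggers its own pass over the data, and only one consumer ends up attached to the channel. As a language-level sketch of the multicast idea (plain JDK, not Quarkus or SmallRye APIs), java.util.concurrent.SubmissionPublisher delivers every submitted item to all current subscribers:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BroadcastSketch {

    // A trivial subscriber that records everything it receives.
    static class Collector implements Flow.Subscriber<String> {
        final List<String> received = new CopyOnWriteArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);

        public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
        public void onNext(String item) { received.add(item); }
        public void onError(Throwable t) { done.countDown(); }
        public void onComplete() { done.countDown(); }
    }

    public static void main(String[] args) throws InterruptedException {
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();

        // Two "clients" subscribe to the same hot source...
        Collector clientA = new Collector();
        Collector clientB = new Collector();
        publisher.subscribe(clientA);
        publisher.subscribe(clientB);

        // ...and each submitted item is delivered to both of them.
        publisher.submit("2020-01-01_1");
        publisher.submit("2020-01-01_2");
        publisher.close();

        clientA.done.await();
        clientB.done.await();
        System.out.println(clientA.received); // [2020-01-01_1, 2020-01-01_2]
        System.out.println(clientB.received); // [2020-01-01_1, 2020-01-01_2]
    }
}
```

In the Quarkus setup above, the analogous move is to multicast the channel before handing it to subscribers, for example via RxJava's counters.publish().autoConnect(), or a broadcast-enabled channel in SmallRye Reactive Messaging; check the documentation of your exact version, since these APIs have changed between releases.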
I have a RESTful web service that is receiving a POST with JSON data. Here is a JSON sample, with the third key/value pair having a forward slash in the key name.
{
    "_notes": "Test",
    "_received": true,
    "item/id": "8a69d38fba4c40d5a3d730807db87859"
}
Here is my Post method
Public Sub Post(value As Testing)
And here is the Testing Class definition
Public Class Testing
    Public _notes As String
    Public _received As Boolean
    Public item/ID As String
End Class
I get a compiler error since I can't have the forward slash in the variable name. Is there a different way I'm supposed to be capturing the data on my side? Unfortunately I can't control the key name in the json.
Assuming you're using the .NET Web API framework's built-in de-serialization, you should spend some time learning about those serializers and how to control them. Here is a good introductory point in the documentation.
The built-in stuff uses JSON.NET by default for JSON serialization, which has a number of attributes that allow you to control it. The one you will be interested in for this problem is JsonPropertyAttribute. For instance:
Public Class Testing
    Public _notes As String
    Public _received As Boolean

    <JsonProperty("item/id")>
    Public ItemID As String
End Class
Trying a very simple camel route:
from("aws-s3://javatutorial1232boomiau?amazonS3Client=#s3client&deleteAfterRead=true&fileName=My2.jsp")
    .process(Empty2)
    .log(LoggingLevel.INFO, "Replay Message Sent to file:s3out ${in.header.CamelAwsS3Key}")
    .to("stream:out");
I am using version 2.20.2 (the latest as of today). The file is not getting deleted from the bucket. I have done some research, and by the looks of it the exchange passed into the processCommit method lacks any headers. The headers it is looking for are the bucket name and key:
String bucketName = exchange.getIn().getHeader(S3Constants.BUCKET_NAME, String.class);
String key = exchange.getIn().getHeader(S3Constants.KEY, String.class);
I've also tried to("file://Users/user/out.txt"); the file is still not deleted, and the headers appear to be those of the file component.
EDIT:
I noticed that if I remove the .process(Empty2) the file is deleted from the bucket. The processor does not do any work:
@Override
public void process(Exchange exchange) throws Exception {
    Object body = exchange.getIn().getBody();
    System.out.println("1: " + body);
    Object body2 = exchange.getOut().getBody();
    System.out.println("2: " + body2);
}
So why would it work without it but not with a processor? How should I process a message if processor cannot be used?
As Claus pointed out, accessing exchange.getOut() creates a new, empty outgoing message on the exchange. None of the headers are copied to it, so they are all lost; by the time processCommit runs, the bucket name and key headers are gone.
So either do not access getOut() at all, or copy all headers from the In message to the Out message.
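The in/out semantics can be illustrated with a tiny stand-in for the exchange (a toy model, not the real Camel Exchange API): once getOut() has been touched, the rest of the route continues with a fresh message whose header map is empty, unless the headers are copied over explicitly:

```java
import java.util.HashMap;
import java.util.Map;

// A toy stand-in for Camel's Exchange/Message, only to illustrate the
// semantics described above, not the real Camel API.
public class ExchangeSketch {

    static class Message {
        final Map<String, Object> headers = new HashMap<>();
        Object body;
    }

    static class Exchange {
        final Message in = new Message();
        private Message out; // lazily created, starts empty, like Camel's getOut()

        Message getIn() { return in; }

        Message getOut() {
            if (out == null) out = new Message(); // fresh message: no headers, no body
            return out;
        }

        boolean hasOut() { return out != null; }

        // What the next step in the route sees: Out if it exists, otherwise In.
        Message effectiveMessage() { return hasOut() ? out : in; }
    }

    public static void main(String[] args) {
        Exchange exchange = new Exchange();
        exchange.getIn().headers.put("CamelAwsS3BucketName", "javatutorial1232boomiau");
        exchange.getIn().headers.put("CamelAwsS3Key", "My2.jsp");

        // Merely touching getOut() replaces the effective message with an empty one...
        exchange.getOut().body = exchange.getIn().body;
        System.out.println(exchange.effectiveMessage().headers); // {}  (headers lost)

        // ...unless the headers are copied across explicitly.
        exchange.getOut().headers.putAll(exchange.getIn().headers);
        System.out.println(exchange.effectiveMessage().headers.containsKey("CamelAwsS3Key")); // true
    }
}
```

In the real route, the simplest fix is to work only with exchange.getIn() inside the processor; if an Out message is genuinely needed, copy the headers first, e.g. exchange.getOut().getHeaders().putAll(exchange.getIn().getHeaders()).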
All of my data contract objects in my service inherit from BaseMessage...
[DataContract(Name = "BaseMessage", Namespace = "http://www...")]
public class BaseMessage
{
    [DataMember]
    public Guid MessageId { get; set; }
}
I am familiar with using Message Inspectors to look at the actual SOAP payload that goes across the wire. However, what I want to do is to somehow hook into the message pipeline to do the following:
Look at an incoming message and read the MessageId field out of it, ideally without searching the whole message for a string match, unless there is a fast way to do this.
Extract the MessageId out of a message with a view to creating a header inside the message containing the MessageId. Again, I don't really want to search the whole message for a string match.
I am familiar with using IClientMessageInspector and IDispatchMessageInspector to look at the messages, but I think at this point in the pipeline I don't have access to the actual object to access its fields.
Thanks
If you want to determine which members go in the body of the message versus its headers, you need a message contract.
I have a resource that looks something like this:
/users/{id}/summary?format={format}
When format is "xml" or "json" I respond with a user summary object that gets automagically encoded by WCF - fine so far. But when format equals "pdf", I want my response to consist of a trivial HTTP response body and a PDF file attachment.
How is this done? Hacking on WebOperationContext.Current.OutgoingResponse doesn't seem to work, and wouldn't be the right thing even if it did. Including the bits of the file in a CDATA section or something in the response isn't safe. Should I create a subclass of Message, then provide a custom IDispatchMessageFormatter that responds with it? I went a short distance down that path but ultimately found the documentation opaque.
What's the right thing?
It turns out that what I need is WCF "raw" mode, as described here. Broadly speaking, I want to do this:
[OperationContract, WebGet(UriTemplate = "/users/{id}/summary?format={format}")]
public Stream GetUserSummary(string id, string format)
{
    if (format == "pdf")
    {
        WebOperationContext.Current.OutgoingResponse.ContentType = "application/pdf";
        return new MemoryStream(CreatePdfSummaryFileForUser(id));
    }
    else
    {
        // XML or JSON serialization. I can't figure out a way to not do this
        // explicitly, but my use case involved custom formatters anyway, so I don't care.
    }
}
In theory you could do it with a multi-part MIME content type (see http://www.faqs.org/rfcs/rfc2387.html). However, it would be much easier to return a URL in the XML/JSON response and let the client do a GET on that link to retrieve the file.