In a Windows 8 Metro application written in JS, I open a file, get the stream, and write some image data to it using the promise/.then pattern. It works fine - the file is successfully saved to the file system - except that after using the BitmapEncoder to flush the stream to the file, the stream is still open. That is, I can't access the file until I kill the application, but the 'stream' variable is out of scope for me to reference, so I can't close() it. Is there something comparable to the C# using statement that could be used here?
...then(function (file) {
    return file.openAsync(Windows.Storage.FileAccessMode.readWrite);
})
.then(function (stream) {
    // Create image encoder object
    return Imaging.BitmapEncoder.createAsync(Imaging.BitmapEncoder.pngEncoderId, stream);
})
.then(function (encoder) {
    // Set the pixel data in the encoder ('canvasImage.data' is an existing image stream)
    encoder.setPixelData(Imaging.BitmapPixelFormat.rgba8, Imaging.BitmapAlphaMode.straight, canvasImage.width, canvasImage.height, 96, 96, canvasImage.data);
    // Go do the encoding
    return encoder.flushAsync();
    // file saved successfully,
    // but stream is still open and the stream variable is out of scope.
});
This simple imaging sample from Microsoft might help. Copied below.
It looks like, in your case, you need to declare the stream before the chain of then calls, make sure you don't name-collide with the parameter of the function that accepts the stream (note the part where they do _stream = stream), and add a then call that closes the stream; a sketch adapting this to your code follows the sample.
function scenario2GetImageRotationAsync(file) {
    var accessMode = Windows.Storage.FileAccessMode.read;
    // Keep data in-scope across multiple asynchronous methods
    var stream;
    var exifRotation;
    return file.openAsync(accessMode).then(function (_stream) {
        stream = _stream;
        return Imaging.BitmapDecoder.createAsync(stream);
    }).then(function (decoder) {
        // irrelevant stuff to this question
    }).then(function () {
        if (stream) {
            stream.close();
        }
        return exifRotation;
    });
}
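Applied to your original chain, that would look roughly like this. It is only a sketch: it reuses your canvasImage object and introduces an imageStream variable purely to keep the stream in scope for the final then call.

// Declare the stream outside the chain so the last .then can close it
var imageStream;

...then(function (file) {
    return file.openAsync(Windows.Storage.FileAccessMode.readWrite);
})
.then(function (stream) {
    imageStream = stream; // keep a reference for later
    return Imaging.BitmapEncoder.createAsync(Imaging.BitmapEncoder.pngEncoderId, imageStream);
})
.then(function (encoder) {
    encoder.setPixelData(Imaging.BitmapPixelFormat.rgba8, Imaging.BitmapAlphaMode.straight, canvasImage.width, canvasImage.height, 96, 96, canvasImage.data);
    return encoder.flushAsync();
})
.then(function () {
    // The file has been written; release the handle so other code can open it
    if (imageStream) {
        imageStream.close();
    }
});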
I know that it is currently possible to download objects by byte range from Google Cloud Storage buckets.
const options = {
    destination: destFileName,
    start: startByte,
    end: endByte,
};
await storage.bucket(bucketName).file(fileName).download(options);
However, I would need to read by line as the files I deal with are *.csv:
await storage
    .bucket(bucketName)
    .file(fileName)
    .download({ destination: '', lineStart: number, lineEnd: number });
I couldn't find any API for it, could anyone advise on how to achieve the desired behaviour?
You cannot read a file line by line directly from Cloud Storage, as it stores files as objects, as shown in this answer:
The string you read from Google Storage is a string representation of a multipart form. It contains not only the uploaded file contents but also some metadata.
To read the file line by line as desired, I suggest loading it into a variable and then parsing the variable as needed. You could use the sample code provided in this answer:
const { Storage } = require("@google-cloud/storage");
const Papa = require("papaparse");

const storage = new Storage();

// Read file from Storage
var downloadedFile = storage
    .bucket(bucketName)
    .file(fileName)
    .createReadStream();

// Concat data
let fileBuffer = "";
downloadedFile
    .on("data", function (data) {
        fileBuffer += data;
    })
    .on("end", function () {
        // CSV file data
        //console.log(fileBuffer);

        // Parse data using the newline character as delimiter
        var rows;
        Papa.parse(fileBuffer, {
            header: false,
            delimiter: "\n",
            complete: function (results) {
                // Shows the parsed data on the console
                console.log("Finished:", results.data);
                rows = results.data;
            },
        });
    });
To parse the data, you could use a library like PapaParse, as shown in this tutorial.
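If you then only need the lines between lineStart and lineEnd (the parameters from your hypothetical download call), you can slice the parsed rows. A minimal sketch, reusing results from the complete callback above and assuming the line numbers are 0-based and inclusive:

// Keep only the requested line range from the parsed CSV rows;
// lineStart and lineEnd are assumed to be 0-based and inclusive
var requestedRows = results.data.slice(lineStart, lineEnd + 1);
console.log("Requested lines:", requestedRows);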
For one call, I am replying with a huge JSON object, which sometimes causes the Node event loop to become blocked. As such, I'm using the Big Friendly JSON package to stream the JSON instead. My issue is that I cannot figure out how to actually reply with the stream.
My original code was simply
let searchResults = s3Access.getSavedSearch(guid).Body;
searchResults = JSON.parse(searchResults.toString());
return reply(searchResults);
This works great but bogs down on huge payloads.
I've tried things like using the Big Friendly JSON package (https://gitlab.com/philbooth/bfj):
const stream = bfj.streamify(searchResults);
return reply(stream); // according to docs it's a readable stream
But then my browser complained about an empty response. I then tried to add the below to the reply, same result.
.header('content-encoding', 'json')
.header('Content-Length', stream.length);
I also tried return reply(null, stream); but that produced a ton of node errors
Is there some other way I need to organize this? My understanding was I could just reply with a readable stream and Hapi would take care of it, but the response keeps showing up as empty.
Did you try to use h.response? Here h is the response toolkit (what used to be reply).
Example:
handler: async (request, h) => {
    const { limit, sortBy, order } = request.query;
    const queryString = {
        where: { status: 1 },
        limit,
        order: [[sortBy, order]],
    };

    let userList = {};
    try {
        userList = await _getList(User, queryString);
    } catch (e) {
        // throw new Boom(e);
        throw Boom.badRequest(i18n.__('controllers.user.fetchUser'), e);
    }
    return h.response(userList);
}
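For the streaming part of the question: hapi v17+ also accepts a readable stream as the response source, so something along these lines might work. This is only a sketch: it reuses s3Access.getSavedSearch, guid and bfj.streamify from the question and assumes bfj.streamify returns a standard readable stream.

handler: async (request, h) => {
    const searchResults = JSON.parse(s3Access.getSavedSearch(guid).Body.toString());

    // bfj.streamify serializes the object incrementally and returns a readable stream
    const stream = bfj.streamify(searchResults);

    // hapi streams the response body; set the content type explicitly,
    // since it cannot be inferred from a raw stream
    return h.response(stream).type('application/json');
}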
This one is hard to explain, so I'll give you some actual and pseudo code:
try
{
    // If source (a string) points towards a file that is available with
    // StorageFile.GetFileFromPathAsync(), just open the file that way.
    // If that is not possible, use the path to look up an Access Token
    // and use the file from the StorageFolder gotten via that token.
    StorageFile file = await GetFileFromAccessList(source);
    if (file != null)
    {
        bitmap = new BitmapImage();
        using (IRandomAccessStream fileStream = await file.OpenAsync(FileAccessMode.Read))
        {
            await bitmap.SetSourceAsync(fileStream);
        }
    }
}
catch (Exception e)
{
    string s = e.Message;
    bitmap = null;
}
with the following method:
public async Task<StorageFile> GetFileFromAccessList(string path)
{
    StorageFile result = null;
    if (String.IsNullOrEmpty(path) == false)
        try
        {
            // Try to access the file directly...
            result = await StorageFile.GetFileFromPathAsync(path);
        }
        catch (Exception)
        {
            result = null;
            try
            {
                // See if the folder this thing is in is in the access list...
                StorageFolder folder = await GetFolderFromAccessList(Path.GetFullPath(path));
                // If there is a folder, try that.
                if (folder != null)
                    result = await folder.GetFileAsync(Path.GetFileName(path));
            }
            catch (Exception)
            {
                result = null;
            }
        }
    return result;
}
The resulting bitmap is used in Image.SetSource() as an ImageSource.
Now what kills me: this call works perfectly, fast and rock solid, for files stored within the app's folder or KnownFolders. So it works like a charm when I don't need an access token. (The folder lookup in GetFolderFromAccessList goes through Windows.Storage.AccessCache.StorageApplicationPermissions.FutureAccessList.GetFolderAsync(token).)
However, it breaks if I have to use an access token - just not all the time.
This code does not break immediately: it breaks when I try to open more than 5-7 source files at the same time.
Let me repeat that: this works if I display 5-7 images. If I try to open more, it freezes the PC. No such problem occurs when I open StorageFiles without tokens.
I can access such files using normal file operations. I can create bitmaps from them, process them, the works.
I just cannot make them a source of an XAML Image.
Any thoughts?
Ah clarity.
So it turns out that using the DataContextChanged event to refresh the bitmap through Image.SetSource() is the murder weapon.
The solution: declare a property of type BitmapSource, bind Image.Source to that property, and update the property with the loaded bitmap upon Image.Loaded and Image.DataContextChanged. It now works stably and fast in all conditions I was able to test.
I am trying to read the body in a middleware for authentication purposes, but when the request gets to the API controller the object is empty, as the body has already been read. Is there any way around this? I am reading the body like this in my middleware:
var buffer = new byte[ Convert.ToInt32( context.Request.ContentLength ) ];
await context.Request.Body.ReadAsync( buffer, 0, buffer.Length );
var body = Encoding.UTF8.GetString( buffer );
If you're using application/x-www-form-urlencoded or multipart/form-data, you can safely call context.Request.ReadFormAsync() multiple times as it returns a cached instance on subsequent calls.
If you're using a different content type, you'll have to manually buffer the request and replace the request body with a rewindable stream like MemoryStream. Here's how you could do that using an inline middleware (you need to register it early in your pipeline):
app.Use(next => async context =>
{
    // Keep the original stream in a separate
    // variable to restore it later if necessary.
    var stream = context.Request.Body;

    // Optimization: don't buffer the request if
    // there was no stream or if it is rewindable.
    if (stream == Stream.Null || stream.CanSeek)
    {
        await next(context);
        return;
    }

    try
    {
        using (var buffer = new MemoryStream())
        {
            // Copy the request stream to the memory stream.
            await stream.CopyToAsync(buffer);

            // Rewind the memory stream.
            buffer.Position = 0L;

            // Replace the request stream by the memory stream.
            context.Request.Body = buffer;

            // Invoke the rest of the pipeline.
            await next(context);
        }
    }
    finally
    {
        // Restore the original stream.
        context.Request.Body = stream;
    }
});
You can also use the BufferingHelper.EnableRewind() extension, which is part of the Microsoft.AspNet.Http package: it's based on a similar approach but relies on a special stream that starts buffering data in memory and spools everything to a temp file on disk when the threshold is reached:
app.Use(next => context =>
{
    context.Request.EnableRewind();

    return next(context);
});
FYI: a buffering middleware will probably be added to vNext in the future.
Usage for PinPoint's mention of EnableRewind
Startup.cs
using Microsoft.AspNetCore.Http.Internal;

Startup.Configure(...)
{
    ...
    // It's important that the rewind is added before UseMvc
    app.Use(next => context => { context.Request.EnableRewind(); return next(context); });
    app.UseMvc();
    ...
}
Then in your middleware you just rewind and reread:
private async Task GenerateToken(HttpContext context)
{
    context.Request.EnableRewind();
    string jsonData = new StreamReader(context.Request.Body).ReadToEnd();
    ...
}
This works with .Net Core 2.1 and higher.
Today I ran into a similar issue. Long story short, what used to work with
Body.Seek(0, SeekOrigin.Begin);
resulted in an exception today, at least in my case. This happened after the code was migrated to the latest version of .NET Core.
The workaround for me was to add this:
app.Use(next => context => { context.Request.EnableBuffering(); return next(context); });
Add this before setting up controllers or MVC. This seems to have been added as part of .NET Core 2.1.
Hope this helps someone!
Cheers and happy coding.
I am a newbie to WebRTC. I am building an application that enables users to view each other's video stream, as well as exchange files. The audio/video part is implemented and working. The problem is I need to add the ability to exchange files now. I am using the below code to initialize the PeerConnection object
var connection = _getConnection(partnerId);
console.log("Initiate offer")
// Add our audio/video stream
connection.addStream(stream);
// Send an offer for a connection
connection.createOffer(function (desc) { _createOfferSuccess(connection, partnerId, desc) }, function (error) { console.log('Error creating session description: ' + error); });
_getConnection creates a new RTCPeerConnection object using
var connection = new RTCPeerConnection(iceconfig);
i.e., with no explicit constraints. It also initializes the different event handlers on it. Right after this, I attach the audio/video stream to this connection. I also cache these connections using the partner ID, so I can use them later.
The question is, can I later recall the connection object from the cache, add a data channel to it using something like
connection.createDataChannel("DataChannel", dataChannelOptions);
and use it to share files, or do I have to create a new RTCPeerConnection object and attach the data channel to it?
You certainly do not have to create another PeerConnection for file transfer alone. The existing PeerConnection can utilize an RTCDataChannel, which behaves much like a traditional WebSocket mechanism (i.e. two-way communication), only without a central server.
var PC = new RTCPeerConnection();

// Options for the data channel
var dataChannelOptions = {
    ordered: false, // unguaranteed sequence
    // maxRetransmitTime: 2000, // maximum time (ms) to try and retransmit failed messages;
    //                          // mutually exclusive with maxRetransmits, so only one can be set
    maxRetransmits: 5 // number of times to try to retransmit failed messages; other options are negotiated, id, protocol
};

// Creating a data channel named DC1 using the RTCDataChannel API
var dataChannel = PC.createDataChannel("DC1", dataChannelOptions);

dataChannel.onerror = function (error) {
    console.log("DC Error:", error);
};

dataChannel.onmessage = function (event) {
    console.log("DC Message:", event.data);
};

dataChannel.onopen = function () {
    dataChannel.send("Sending 123"); // you can send strings, Blobs or ArrayBuffers here, so file data works too
};

dataChannel.onclose = function () {
    console.log("DC is Closed");
};
PS: when sending files over the data channel API, it is advisable to break the files into small chunks beforehand. I suggest a chunk size of about 10-15 KB.
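A rough sketch of what that chunking could look like in the browser, assuming the dataChannel from the snippet above is open and that file is a File or Blob (for example from an <input type="file"> element); the chunk size and the "done" end marker are just illustrative choices:

var CHUNK_SIZE = 15 * 1024; // ~15 KB, in line with the suggestion above
var offset = 0;
var reader = new FileReader();

reader.onload = function (event) {
    dataChannel.send(event.target.result); // send one ArrayBuffer chunk
    offset += CHUNK_SIZE;
    if (offset < file.size) {
        readNextChunk();
    } else {
        dataChannel.send("done"); // tell the receiver the transfer is complete
    }
};

function readNextChunk() {
    reader.readAsArrayBuffer(file.slice(offset, offset + CHUNK_SIZE));
}

readNextChunk();

Note that for an actual file transfer you would normally want the channel to stay reliable and ordered (the defaults), rather than the unordered, retransmit-limited options shown above, otherwise chunks can arrive out of order or be dropped.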