RavenDB faceted search results formatting - asp.net-mvc-4

Currently I'm playing with faceted search after reading the RavenDB documentation about it.
The result returned is OK, but there's a small problem. Since the result comes as
IDictionary<string, IEnumerable<FacetValue>>
it's necessary to iterate over it and do some fancy string manipulation to format the result and show it in a PartialView. More specifically, this facet:
new Facet
{
    Name = "Value_Range",
    Mode = FacetMode.Ranges,
    Ranges =
    {
        "[NULL TO Dx500.0]",
        "[Dx500.0 TO Dx1000.0]",
        "[Dx1000.0 TO Dx2500.0]",
        "[Dx2500.0 TO Dx5000.0]",
        "[Dx5000.0 TO NULL]",
    }
}
View code:
@fv.Range
This is the "beautiful" string that gets output in the view: [Dx400.0 TO Dx600.0]
RavenDB uses the Dx prefix above to do a number-to-string conversion.
Controller code where the facet result is passed to a specific ViewModel:
var facetResults = DocumentSession.Query<Realty>("RealtyFacets")
    //.Where(x => x.Value >= 100 && x.Value <= 1000)
    .ToFacets("facets/RealtyFacets").ToArray();

var model = new RealtyFacetsViewModel();
model.Cities = facetResults[0];
model.Purposes = facetResults[1];
model.Types = facetResults[2];
model.Values = facetResults[3];

return PartialView("RealtyFacets", model);
Is there any other/better way of getting results from a faceted search so that no string manipulation must be done to format the returned data?
After Ayende's suggestion, I did this in my controller:
foreach (var val in facetResults[3].Value)
{
    switch (val.Range)
    {
        case "[Dx0.0 TO Dx200.0]":
            val.Range = string.Format("{0:C2} {1} {2:C2}",
                0, @Localization.to, 200);
            break;
        case "[Dx200.0 TO Dx400.0]":
            val.Range = string.Format("{0:C2} {1} {2:C2}",
                200, @Localization.to, 400);
            break;
        case "[Dx400.0 TO Dx600.0]":
            val.Range = string.Format("{0:C2} {1} {2:C2}",
                400, @Localization.to, 600);
            break;
        case "[Dx600.0 TO Dx800.0]":
            val.Range = string.Format("{0:C2} {1} {2:C2}",
                600, @Localization.to, 800);
            break;
        case "[Dx800.0 TO Dx1000000.0]":
            val.Range = string.Format("{0:C2} {1} {2:C2}",
                800, @Localization.to, 1000000);
            break;
    }
}
model.Values = facetResults[3];

As per @MattWarren's suggestion, I ended up using:
foreach (var val in facetResults[3].Value)
{
    // Original string format: [Dx5000.0 TO Dx10000.0]
    var limits = val.Range.Split(new string[] { "TO", "[", "]", " " },
        StringSplitOptions.RemoveEmptyEntries);

    // Leveraging RavenDB NumberUtil class...
    val.Range = string.Format("{0:C0} {1} {2:C0}",
        Raven.Abstractions.Indexing.NumberUtil.StringToNumber(limits.ElementAt(0)),
        @Localization.to,
        Raven.Abstractions.Indexing.NumberUtil.StringToNumber(limits.ElementAt(1)));
}

Leniel,
In your code, create a dictionary that maps between the facet value and the display string.
RavenDB currently has no way to influence the facet value.
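A minimal sketch of that idea, reusing the facetResults and Localization references from the code above (the range strings here are just examples and would need to match the facet definition exactly):

// Hypothetical lookup from RavenDB's raw range string to a localized display string.
var rangeDisplay = new Dictionary<string, string>
{
    { "[NULL TO Dx500.0]",      string.Format("{0:C0} {1} {2:C0}", 0, @Localization.to, 500) },
    { "[Dx500.0 TO Dx1000.0]",  string.Format("{0:C0} {1} {2:C0}", 500, @Localization.to, 1000) },
    { "[Dx1000.0 TO Dx2500.0]", string.Format("{0:C0} {1} {2:C0}", 1000, @Localization.to, 2500) },
    // ... one entry per range defined in the facet
};

foreach (var val in facetResults[3].Value)
{
    string display;
    if (rangeDisplay.TryGetValue(val.Range, out display))
        val.Range = display;
}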


Repast: query an agent set and count the number of agents in a while loop

I want to achieve a logic like this:
while (count loading_docks with [status == "free"] > 0 and trucks with [status == "free" and type == "20'" and capacity < 1000] > 0) {
match a truck satisfying the above 3 conditions to a free dock for unloading cargo;
}
As can be seen, the query needs to be repeatedly called and updated in the while loop, and the second query is composed of 3 conditions (which is not easy with the AndQuery() method).
This is very easy to implement in NetLogo. What is a suitable, shorter way to achieve this in Repast?
UPDATE - the initial attempt
public void match_dock() {
    for (Truck t : this.getTruck_queue()) {
        if (this.Count_freeDock() > 0) {
            Query<Object> fit_dock = new AndQuery(
                    new PropertyEquals(context, "status", 1),
                    new PropertyGreaterThanEquals(context, "max_veh", t.getTruck_type()));
            double min = 10000;
            Dock match = null;
            for (Object o : fit_dock.query()) {
                if (((Dock) o).getMax_veh() < min) {
                    match = (Dock) o;
                }
            }
            match.setStatus(2);
            match.getServe_list().add(t.getReq_id());
            t.setServe_dock(match.getId());
            // if (t.getServe_dock() != -1) {
            //     this.getTruck_queue().remove(t);
            // }
        }
    }
}

public int Count_freeDock() {
    List<Dock> free_list = new ArrayList<Dock>();
    Query<Object> free_dock = new PropertyEquals<Object>(context, "status", 1);
    for (Object o : free_dock.query()) {
        if (o instanceof Dock) {
            free_list.add((Dock) o);
        }
    }
    return free_list.size();
}
There are three issues to fix:
1) The query of a particular agent set has to consider three conditions, but AndQuery only composes two conditions. Is there a Query method which allows more than two conditions to be considered at the same time?
Current problem:
Query<Object> pre_fit = new AndQuery(
        new PropertyEquals(context, "status", 1),
        new PropertyGreaterThanEquals(context, "max_veh", t.getTruck_type()));
Query<Object> fit_dock = new AndQuery(pre_fit, new PropertyEquals(context, "ops_type", 3));
The initial composition of two conditions works fine and queries fast. However, when I add the third condition "ops_type", the query becomes hugely slow. What is the reason behind this? And is this a correct way to compose three conditions?
2) Is there a simpler way to query the size (count) of a particular agent set, other than writing a custom count function (as shown in the example)?
3) What is the shortest way to add (or copy) the queried agent set into a list for related list operations?
UPDATE - the entire code block:
public void match_dock() {
Iterator<Truck> truck_list = this.getTruck_queue().iterator();
while(truck_list.hasNext() && this.Count_freeDock() > 0) {
Truck t = truck_list.next();
// Query<Object> pre_fit = new AndQuery(
// new PropertyEquals(context, "status", 1),
// new PropertyGreaterThanEquals(context, "max_veh", t.getTruck_type()));
// Query<Object> ops_fit = new OrQuery<>(
// new PropertyEquals(context, "ops_type", 3),
// new PropertyEquals(context, "ops_type", this.getOps_type(t.getOps_type())));
// Query<Object> fit_dock = new AndQuery(pre_fit, new PropertyEquals(context, "ops_type", 3));
// Query<Object> fit_dock = new AndQuery(pre_fit, ops_fit);
Query<Object> pre_fit = new AndQuery(
new PropertyEquals(context, "status", 1),
new PropertyGreaterThanEquals(context, "max_veh", t.getTruck_type()));
Query<Object> q = new PropertyEquals(context, "ops_type", 3);
double min = 10000;
Dock match = null;
for (Object o : q.query(pre_fit.query())) {
// for(Object o: fit_dock.query()) {
if (((Dock)o).getMax_veh() < min) {
match = (Dock)o;
}
}
try {
match.setStatus(2);
match.getServe_list().add(t.getReq_id());
t.setServe_dock(match.getId());
if (t.getServe_dock() != -1) {
System.out.println("truck id " + t.getReq_id() + "serve dock: " + t.getServe_dock());
t.setIndock_tm(this.getTick());
truck_list.remove();
}
}
catch (Exception e){
// System.out.println("No fit dock found");
}
}
}
public int Count_freeDock() {
List<Dock> free_list = new ArrayList<Dock>();
Query<Object> free_dock = new PropertyEquals<Object>(context, "status", 1);
for (Object o : free_dock.query()) {
if (o instanceof Dock) {
free_list.add((Dock)o);
}
}
// System.out.println("free trucks: " + free_list.size());
return free_list.size();
}
UPDATE on 5/5
I have moved the query outside the while loop to make the problem easier to detect.
I found that the slow speed is largely due to the use of PropertyGreaterThanEquals:
When querying with PropertyGreaterThanEquals, the query runs very slowly regardless of whether the queried field is an int or a double, but it returns the correct result.
When querying with PropertyEquals, the query runs in less than a second regardless of whether the queried field is an int or a double, but the result is not correct, since the condition needs to be ">=".
public void match_dock() {
    System.out.println("current tick is: " + this.getTick());
    Iterator<Truck> truck_list = this.getTruck_queue().iterator();
    Query<Object> pre_fit = new AndQuery(
            new PropertyEquals(context, "status", 1),
            new PropertyGreaterThanEquals(context, "max_veh", 30));
            //new PropertyEquals(context, "max_veh", 30));
    Query<Object> q = new PropertyEquals(context, "hv_spd", 240);
    for (Object o : q.query(pre_fit.query())) {
        if (o instanceof Dock) {
            System.out.println("this object is: " + ((Dock) o).getId());
        }
    }
}
For 1, you could try chaining the queries like so:
Query<Object> pre_fit = new AndQuery(
new PropertyEquals(context, "status", 1),
new PropertyGreaterThanEquals(context, "max_veh", t.getTruck_type()));
Query<Object> q = new PropertyEquals(context, "ops_type", 3);
for (Object o : q.query(pre_fit.query())) { ...
I think this can be faster than embedding the AndQuery, but I'm not entirely sure.
For 2, I think some of the Iterables produced by a Query are in fact Java Sets. You could try to cast to one of those and then call size(). If it's not a Set, then you do in fact have to iterate, as the query filter conditions are actually applied as part of the iteration.
For 3, I think there are some Java methods for this, e.g. the ArrayList(Collection) constructor, and some helpers in Collections.

Proper storage/retrieval of termVector

I'm using Lucene.NET 4.8-beta00005.
I have a "name" field in my documents defined as follows:
doc.Add(CreateField(NameField, entry.Name.ToLower()));
writer.AddDocument(doc);
Where CreateField is implemented as follows
private static Field CreateField(string fieldName, string fieldValue)
{
    return new Field(fieldName, fieldValue, new FieldType()
    {
        IsIndexed = true,
        IsStored = true,
        IsTokenized = true,
        StoreTermVectors = true,
        StoreTermVectorPositions = true,
        StoreTermVectorOffsets = true,
        StoreTermVectorPayloads = true
    });
}
The "name" field is assigned a StandardAnalyzer.
Then in my CustomScoreProvider I'm retrieving the terms from the term vector as follows:
private List<string> GetDocumentTerms(int doc, string fieldName)
{
    var indexReader = m_context.Reader;
    var termVector = indexReader.GetTermVector(doc, fieldName);
    var termsEnum = termVector.GetIterator(null);
    BytesRef termBytesRef;
    termBytesRef = termsEnum.Next();
    var documentTerms = new List<string>();
    while (termBytesRef != null)
    {
        // removing trailing \0 (padded to 16 bytes)
        var termText = Encoding.Default.GetString(termBytesRef.Bytes).Replace("\0", "");
        documentTerms.Add(termText);
        termBytesRef = termsEnum.Next();
    }
    return documentTerms;
}
Now I have a document where the value of the "name" field is "dan gertler diamonds ltd."
So the terms from the term vector I'm expecting are:
dan gertler diamonds ltd
But my GetDocumentTerms gives me the following terms:
dan diamonds gertlers ltdtlers
I'm using a StandardAnalyzer with the field, so I'm not expecting it to do much transformation to the original words in the field (and I did check this particular name with StandardAnalyzer).
What am I doing wrong here, and how can I fix it?
Edit: I'm extracting the terms manually with each field's Analyzer and storing them in a separate String field as a workaround for now.
If you want to get the terms in the correct order, you must also use the positional information. Test this code:
Terms terms = indexReader.GetTermVector(doc, fieldName);
if (terms != null)
{
    var termIterator = terms.GetIterator(null);
    BytesRef bytestring;
    var documentTerms = new List<Tuple<int, string>>();
    while ((bytestring = termIterator.Next()) != null)
    {
        var docsAndPositions = termIterator.DocsAndPositions(null, null, DocsAndPositionsFlags.OFFSETS);
        docsAndPositions.NextDoc();
        int position;
        for (int left = docsAndPositions.Freq; left > 0; left--)
        {
            position = docsAndPositions.NextPosition();
            documentTerms.Add(new Tuple<int, string>(position, bytestring.Utf8ToString()));
        }
    }
    documentTerms.Sort((word1, word2) => word1.Item1.CompareTo(word2.Item1));
    foreach (var word in documentTerms)
    {
        Console.WriteLine("{0} {1} {2}", fieldName, word.Item1, word.Item2);
    }
}
This code also handles the situation where you have the same term (word) in more than one place.
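As a side note on the GetDocumentTerms method in the question: fragments like "ltdtlers" come from decoding the whole BytesRef.Bytes array, which Lucene reuses as a shared buffer between terms. Only the slice from Offset to Offset + Length belongs to the current term, so the decode line could be replaced with something like:

// Decode only the valid slice of the shared buffer...
var termText = System.Text.Encoding.UTF8.GetString(
    termBytesRef.Bytes, termBytesRef.Offset, termBytesRef.Length);
// ...or simply: var termText = termBytesRef.Utf8ToString();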

Google BigQuery returns only partial table data with C# application using .net Client Library

I am trying to execute a query (a basic SELECT statement with 10 fields). My table contains more than 500k rows, but my C# application returns a response with only 4,260 rows, whereas the Web UI returns all the records.
Why does my code return only partial data? What is the best way to select all the records and load them into a C# DataTable? A code snippet would be very helpful.
using Google.Apis.Auth.OAuth2;
using System.IO;
using System.Threading;
using Google.Apis.Bigquery.v2;
using Google.Apis.Bigquery.v2.Data;
using System.Data;
using Google.Apis.Services;
using System;
using System.Security.Cryptography.X509Certificates;
namespace GoogleBigQuery
{
public class Class1
{
private static void Main()
{
try
{
Console.WriteLine("Start Time: {0}", DateTime.Now.ToString());
String serviceAccountEmail = "SERVICE ACCOUNT EMAIL";
var certificate = new X509Certificate2(@"KeyFile.p12", "notasecret", X509KeyStorageFlags.Exportable);
ServiceAccountCredential credential = new ServiceAccountCredential(
new ServiceAccountCredential.Initializer(serviceAccountEmail)
{
Scopes = new[] { BigqueryService.Scope.Bigquery, BigqueryService.Scope.BigqueryInsertdata, BigqueryService.Scope.CloudPlatform, BigqueryService.Scope.DevstorageFullControl }
}.FromCertificate(certificate));
BigqueryService Service = new BigqueryService(new BaseClientService.Initializer()
{
HttpClientInitializer = credential,
ApplicationName = "PROJECT NAME"
});
string query = "SELECT * FROM [publicdata:samples.shakespeare]";
JobsResource j = Service.Jobs;
QueryRequest qr = new QueryRequest();
string ProjectID = "PROJECT ID";
qr.Query = query;
qr.MaxResults = Int32.MaxValue;
qr.TimeoutMs = Int32.MaxValue;
DataTable DT = new DataTable();
int i = 0;
QueryResponse response = j.Query(qr, ProjectID).Execute();
string pageToken = null;
if (response.JobComplete == true)
{
if (response != null)
{
int colCount = response.Schema.Fields.Count;
if (DT == null)
DT = new DataTable();
if (DT.Columns.Count == 0)
{
foreach (var Column in response.Schema.Fields)
{
DT.Columns.Add(Column.Name);
}
}
pageToken = response.PageToken;
if (response.Rows != null)
{
foreach (TableRow row in response.Rows)
{
DataRow dr = DT.NewRow();
for (i = 0; i < colCount; i++)
{
dr[i] = row.F[i].V;
}
DT.Rows.Add(dr);
}
}
Console.WriteLine("No of Records are Readed: {0} # {1}", DT.Rows.Count.ToString(), DateTime.Now.ToString());
while (true)
{
int StartIndexForQuery = DT.Rows.Count;
Google.Apis.Bigquery.v2.JobsResource.GetQueryResultsRequest SubQR = Service.Jobs.GetQueryResults(response.JobReference.ProjectId, response.JobReference.JobId);
SubQR.StartIndex = (ulong)StartIndexForQuery;
//SubQR.MaxResults = Int32.MaxValue;
GetQueryResultsResponse QueryResultResponse = SubQR.Execute();
if (QueryResultResponse != null)
{
if (QueryResultResponse.Rows != null)
{
foreach (TableRow row in QueryResultResponse.Rows)
{
DataRow dr = DT.NewRow();
for (i = 0; i < colCount; i++)
{
dr[i] = row.F[i].V;
}
DT.Rows.Add(dr);
}
}
Console.WriteLine("No of Records are Readed: {0} # {1}", DT.Rows.Count.ToString(), DateTime.Now.ToString());
if (null == QueryResultResponse.PageToken)
{
break;
}
}
else
{
break;
}
}
}
else
{
Console.WriteLine("Response is null");
}
}
int TotalCount = 0;
if (DT != null && DT.Rows.Count > 0)
{
TotalCount = DT.Rows.Count;
}
else
{
TotalCount = 0;
}
Console.WriteLine("End Time: {0}", DateTime.Now.ToString());
Console.WriteLine("No. of records readed from google bigquery service: " + TotalCount.ToString());
}
catch (Exception e)
{
Console.WriteLine("Error Occurred: " + e.Message);
}
Console.ReadLine();
}
}
}
This sample query gets its results from a public data set. The table contains 164,656 rows, but the response returns only ~85,000 rows the first time, so I have to query again to get the next set of results (I don't know whether this is the only way to get all the results).
This sample contains only 4 fields, and even then it does not return all rows. In my case the table contains more than 15 fields, and I get a response of ~4,000 rows out of ~10k rows, so I need to query again and again to get the remaining results. Selecting 1,000 rows takes up to 2 minutes with my approach, so I am looking for the best way to select all the records within a single response.
Answer from user @Pentium10:
There is no way to run a query and get a large response in a single shot. You can either paginate the results, or, if you can, create a job to export to files and then use the generated files in your app. Exporting is free.
Steps to run a large query and export the results to files stored on GCS:
1) Set allowLargeResults to true in your job configuration. You must also specify a destination table with the allowLargeResults flag.
Example:
"configuration":
{
"query":
{
"allowLargeResults": true,
"query": "select uid from [project:dataset.table]"
"destinationTable": [project:dataset.table]
}
}
2) Now your data is in a destination table you set. You need to create a new job, and set the export property to be able to export the table to file(s). Exporting is free, but you need to have Google Cloud Storage activated to put the resulting files there.
3) In the end you download your large files from GCS.
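For reference, a rough sketch of step 1 using the same Google.Apis.Bigquery.v2 client as in the question; the dataset and table names are placeholders, and Service is the BigqueryService instance created earlier:

// Insert a query job that allows large results and writes them to a destination table.
var job = new Google.Apis.Bigquery.v2.Data.Job
{
    Configuration = new Google.Apis.Bigquery.v2.Data.JobConfiguration
    {
        Query = new Google.Apis.Bigquery.v2.Data.JobConfigurationQuery
        {
            Query = "SELECT * FROM [publicdata:samples.shakespeare]",
            AllowLargeResults = true,
            DestinationTable = new Google.Apis.Bigquery.v2.Data.TableReference
            {
                ProjectId = "PROJECT ID",
                DatasetId = "DATASET",      // placeholder
                TableId = "query_output"    // placeholder
            }
        }
    }
};
job = Service.Jobs.Insert(job, "PROJECT ID").Execute();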
Now it's my turn to design the solution for better results.
Hoping this might help someone. One can retrieve the next set of paginated results using PageToken; here is sample code showing how to use it. That said, I liked the idea of exporting for free. Here I write the rows to a flat file, but you could add them to your DataTable instead. Obviously, it is a bad idea to keep a large DataTable in memory, though.
public void ExecuteSQL(BigqueryService bqservice, String ProjectID)
{
    string sSql = "SELECT r.Dealname, r.poolnumber, r.loanid FROM [MBS_Dataset.tblRemitData] R left join each [MBS_Dataset.tblOrigData] o on R.Dealname = o.Dealname and R.Poolnumber = o.Poolnumber and R.LoanID = o.LoanID Order by o.Dealname, o.poolnumber, o.loanid limit 100000";
    QueryRequest _r = new QueryRequest();
    _r.Query = sSql;
    QueryResponse _qr = bqservice.Jobs.Query(_r, ProjectID).Execute();
    string pageToken = null;
    if (_qr.JobComplete != true)
    {
        // job not finished yet! expecting more data
        while (true)
        {
            var resultReq = bqservice.Jobs.GetQueryResults(_qr.JobReference.ProjectId, _qr.JobReference.JobId);
            resultReq.PageToken = pageToken;
            var result = resultReq.Execute();
            if (result.JobComplete == true)
            {
                WriteRows(result.Rows, result.Schema.Fields);
                pageToken = result.PageToken;
                if (pageToken == null)
                    break;
            }
        }
    }
    else
    {
        List<string> _fieldNames = _qr.Schema.Fields.ToList().Select(x => x.Name).ToList();
        WriteRows(_qr.Rows, _qr.Schema.Fields);
    }
}
The Web UI automatically flattens the data. This means that you see multiple rows for each nested field.
When you run the same query via the API, it won't be flattened, so you get fewer rows, as the nested fields are returned as objects. You should check whether this is the case for you.
The other point is that you do indeed need to paginate through the results; the "Paging through list results" documentation explains this.
If you want to run only one job, you should write your query output to a table, then export the table as JSON, and download the export from GCS.
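A sketch of that export step with the same client library (the bucket, dataset and table names are placeholders, and in practice you would poll the returned job until it is DONE before downloading the files from GCS):

// Extract the destination table to newline-delimited JSON files on Google Cloud Storage.
var extractJob = new Google.Apis.Bigquery.v2.Data.Job
{
    Configuration = new Google.Apis.Bigquery.v2.Data.JobConfiguration
    {
        Extract = new Google.Apis.Bigquery.v2.Data.JobConfigurationExtract
        {
            SourceTable = new Google.Apis.Bigquery.v2.Data.TableReference
            {
                ProjectId = "PROJECT ID",
                DatasetId = "DATASET",      // placeholder
                TableId = "query_output"    // placeholder
            },
            DestinationUris = new[] { "gs://YOUR_BUCKET/query_output_*.json" },
            DestinationFormat = "NEWLINE_DELIMITED_JSON"
        }
    }
};
extractJob = Service.Jobs.Insert(extractJob, "PROJECT ID").Execute();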

Instagram & Processing - Real time view

I'm working on a small app similar to Instaprint and need some help. I'm using the source code from Globalgram by Andrew Haskin; it searches Instagram for a particular hashtag and displays the most recent image posted with that hashtag. The problem is it only does this once. I need it to continuously search for the hashtag and display an image when a new one is added, i.e. a refresh. I've been tinkering with it but to no avail. Any help would be greatly appreciated.
Code below:
import com.francisli.processing.http.*;
PFont InstagramFont;
PImage backgroundimg;
PImage brand;
PImage userphoto;
PImage profilepicture;
String username;
String tag;
String[] tagStrings;
com.francisli.processing.http.HttpClient client;
void setup() {
size(580, 900);
smooth();
backgroundimg = loadImage("iso_background.jpg");
brand = loadImage("iso.jpg");
InstagramFont = loadFont("Helvetica-Bold-36.vlw");
client = new com.francisli.processing.http.HttpClient(this, "api.instagram.com");
client.useSSL = true;
//// instantiate a new HashMap
HashMap params = new HashMap();
//// put key/value pairs that you want to send in the request
params.put("access_token", "------ACCESS TOKEN HERE------");
params.put("count", "1");
client.GET("/v1/tags/coffee/media/recent.json", params);
}
void responseReceived(com.francisli.processing.http.HttpRequest request, com.francisli.processing.http.HttpResponse response) {
println(response.getContentAsString());
//// we get the server response as a JSON object
com.francisli.processing.http.JSONObject content = response.getContentAsJSONObject();
//// get the "data" value, which is an array
com.francisli.processing.http.JSONObject data = content.get("data");
//// get the first element in the array
com.francisli.processing.http.JSONObject first = data.get(0);
//// the "user" value is another dictionary, from which we can get the "full_name" string value
println(first.get("user").get("full_name").stringValue());
//// the "caption" value is another dictionary, from which we can get the "text" string value
//println(first.get("caption").get("text").stringValue());
//// get profile picture
println(first.get("user").get("profile_picture").stringValue());
//// the "images" value is another dictionary, from which we can get different image URL data
println(first.get("images").get("standard_resolution").get("url").stringValue());
com.francisli.processing.http.JSONObject tags = first.get("tags");
tagStrings = new String[tags.size()];
for (int i = 0; i < tags.size(); i++) {
tagStrings[i] = tags.get(i).stringValue();
}
username = first.get("user").get("full_name").stringValue();
String profilepicture_url = first.get("user").get("profile_picture").stringValue();
profilepicture = loadImage(profilepicture_url, "png");
String userphoto_url = first.get("images").get("standard_resolution").get("url").stringValue();
userphoto = loadImage(userphoto_url, "png");
//noLoop();
}
void draw() {
background(255);
imageMode(CENTER);
image(brand, 100, height/1.05);
if (profilepicture != null) {
image(profilepicture, 60, 70, 90, 90);
}
imageMode(CENTER);
if (userphoto != null) {
image(userphoto, width/2, height/2.25, 550, 550);
}
textFont(InstagramFont, 20);
if (username != null) {
text(username, 110, 115);
fill(0);
}
textFont(InstagramFont, 15);
if ((tagStrings != null) && (tagStrings.length > 0)) {
String line = tagStrings[0];
for (int i = 1; i < tagStrings.length; i++) {
line += ", " + tagStrings[i];
}
text(line, 25, 720, 550, 50);
fill(0);
}
}
AFAIK it should be the
client.GET("/v1/tags/coffee/media/recent.json", params);
line that actually polls Instagram. Try wrapping that in a function like this:
void getGrams() {
client.GET("/v1/tags/coffee/media/recent.json", params);
}
then call that once in setup() and then again when you want to ... (note that the params HashMap in the sketch above is local to setup(), so it would need to become a field, or be rebuilt inside getGrams(), for this to compile).
I'd start by trying to do it on mousePressed() or keyPressed(), so that it only fires once, when you really want it to.
Don't try to do it in draw() without a timer (something like if (frameCount % 300 == 0), which at the default 60 fps fires roughly every five seconds; that may still be too often, but you get the idea).

How to create a single-color Bitmap to display a given hue?

I have a requirement to create an image based on a certain color. The color will vary and so will the size of the output image. I want to create the Bitmap and save it to the app's temporary folder. How do I do this?
My initial requirement came from a list of colors, and providing a sample of the color in the UI. If the size of the image is variable then I can create them for certain scenarios like result suggestions in the search pane.
This isn't easy, but it's all wrapped in a single method for you to use. I hope it helps. Anyway, here's the code to create a Bitmap based on a given color and size:
private async System.Threading.Tasks.Task<Windows.Storage.StorageFile> CreateThumb(Windows.UI.Color color, Windows.Foundation.Size size)
{
    // create colored bitmap
    var _Bitmap = new Windows.UI.Xaml.Media.Imaging.WriteableBitmap((int)size.Width, (int)size.Height);
    byte[] _Pixels = new byte[4 * _Bitmap.PixelWidth * _Bitmap.PixelHeight];
    for (int i = 0; i < _Pixels.Length; i += 4)
    {
        _Pixels[i + 0] = color.B;
        _Pixels[i + 1] = color.G;
        _Pixels[i + 2] = color.R;
        _Pixels[i + 3] = color.A;
    }

    // update bitmap data
    // using System.Runtime.InteropServices.WindowsRuntime;
    using (var _Stream = _Bitmap.PixelBuffer.AsStream())
    {
        _Stream.Seek(0, SeekOrigin.Begin);
        _Stream.Write(_Pixels, 0, _Pixels.Length);
        _Bitmap.Invalidate();
    }

    // determine destination
    var _Folder = Windows.Storage.ApplicationData.Current.TemporaryFolder;
    var _Name = color.ToString().TrimStart('#') + ".png";

    // use existing if already there
    Windows.Storage.StorageFile _File;
    try { return await _Folder.GetFileAsync(_Name); }
    catch { /* do nothing; not found */ }
    _File = await _Folder.CreateFileAsync(_Name, Windows.Storage.CreationCollisionOption.ReplaceExisting);

    // extract stream to write
    // using System.Runtime.InteropServices.WindowsRuntime;
    using (var _Stream = _Bitmap.PixelBuffer.AsStream())
    {
        _Pixels = new byte[(uint)_Stream.Length];
        await _Stream.ReadAsync(_Pixels, 0, _Pixels.Length);
    }

    // write file
    using (var _WriteStream = await _File.OpenAsync(Windows.Storage.FileAccessMode.ReadWrite))
    {
        var _Encoder = await Windows.Graphics.Imaging.BitmapEncoder
            .CreateAsync(Windows.Graphics.Imaging.BitmapEncoder.PngEncoderId, _WriteStream);
        _Encoder.SetPixelData(Windows.Graphics.Imaging.BitmapPixelFormat.Bgra8,
            Windows.Graphics.Imaging.BitmapAlphaMode.Premultiplied,
            (uint)_Bitmap.PixelWidth, (uint)_Bitmap.PixelHeight, 96, 96, _Pixels);
        await _Encoder.FlushAsync();
        using (var outputStream = _WriteStream.GetOutputStreamAt(0))
            await outputStream.FlushAsync();
    }
    return _File;
}
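For example, calling it from an async method might look like this (MyImage is an assumed Image element in your XAML; the color and size are arbitrary):

// Hypothetical usage: create a 200x100 red swatch and display the resulting PNG.
var file = await CreateThumb(Windows.UI.Colors.Red, new Windows.Foundation.Size(200, 100));
var bitmap = new Windows.UI.Xaml.Media.Imaging.BitmapImage();
using (var stream = await file.OpenReadAsync())
{
    await bitmap.SetSourceAsync(stream);
}
MyImage.Source = bitmap;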
Best of luck!