I have a huge source (an IEnumerable that goes through an IDbReader row by row) that I need to serialize to YAML and deserialize back. How can I avoid collecting all the items in memory?
You should be able to use the Serializer directly to serialize the IEnumerable. Be sure to disable aliases on the serializer and it should serialize in a streaming fashion, without loading the entire source first:
var serializer = new SerializerBuilder()
.DisableAliases()
.Build();
You can see this in action in the following code. It will serialize items until it hits the exception at the 100th item, but you can see that the earlier items were already written to the output:
using System;
using YamlDotNet.Serialization;
using System.Collections.Generic;
using System.Linq;
public class Program
{
public static void Main()
{
var source = Enumerable.Range(1, 10000).Select(i => {
if(i == 100) throw new Exception("I'm done");
return new {
Index = i,
Title = "Item " + i
};
});
var serializer = new SerializerBuilder()
.DisableAliases()
.Build();
serializer.Serialize(Console.Out, source);
}
}
See it running here: https://dotnetfiddle.net/Rk1nrx
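For the deserialization half, you should be able to stream as well by driving YamlDotNet's low-level Parser yourself and pulling the items of the top-level sequence one at a time. Here is a minimal sketch (the Item type and the ReadItems name are placeholders, and Consume/TryConsume assume a reasonably recent YamlDotNet version):
using System.Collections.Generic;
using System.IO;
using YamlDotNet.Core;
using YamlDotNet.Core.Events;
using YamlDotNet.Serialization;
static IEnumerable<Item> ReadItems(TextReader input)
{
    // Item is a placeholder for whatever type your rows map to.
    var deserializer = new DeserializerBuilder().Build();
    var parser = new Parser(input);
    // Skip the stream/document/sequence start events.
    parser.Consume<StreamStart>();
    parser.Consume<DocumentStart>();
    parser.Consume<SequenceStart>();
    while (!parser.TryConsume<SequenceEnd>(out _))
    {
        // Each call reads exactly one sequence element from the stream.
        yield return deserializer.Deserialize<Item>(parser);
    }
    parser.Consume<DocumentEnd>();
}
Because the method is a lazy iterator, only one item is materialized at a time.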
I want to get a list of all the table names from the database through a controller in an ASP.NET API project. I tried to do it with a raw SQL query, without an entity, and it looked something like this:
public async Task<ActionResult<IList>> GetAllTableNames()
{
using (var context = new DbContext())
{
List<string> results = context.Database.SqlQuery<string>("SELECT name FROM sys.tables").ToListAsync();
}
}
But when I try to use the SqlQuery method I get the error
"'DatabaseFacade' does not contain a definition for 'SqlQuery' and no accessible extension method 'SqlQuery'". Does anybody have an idea how to solve this?
First, create a helper method in your controller as shown below:
using System.Data.SqlClient;
public async IAsyncEnumerable<string> GetTableNamesAsync()
{
    using var connection = new SqlConnection(_dbContext.Database.GetConnectionString());
    using var command = new SqlCommand("SELECT name FROM sys.tables", connection);
    await connection.OpenAsync();
    using var reader = await command.ExecuteReaderAsync();
    while (await reader.ReadAsync())
    {
        // Stream each table name as soon as it is read from the reader.
        yield return reader.GetString(0);
    }
}
Then call it in your action like this:
public async Task<IActionResult> Index()
{
var list=new List<string>();
await foreach (var item in GetTableNamesAsync())
{
list.Add(item);
}
return Ok(list);
}
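If you are on ASP.NET Core 3.0 or later, you can also skip the manual list and return the async stream directly; the framework will enumerate it for you. A minimal sketch, assuming the same GetTableNamesAsync helper:
public IActionResult Index()
{
    // MVC enumerates the IAsyncEnumerable and writes the result as JSON.
    return Ok(GetTableNamesAsync());
}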
I am trying to build a service client to simplify calling my microservices in .NET Core.
Here is a service client sample:
public ProductServiceClient(SystemEnvironment.MachineEnvironment? environment = null)
{
this.url = ServiceEnvironment.Urls.GetUrl(ServiceEnvironment.Service.Product, environment);
}
private RestClient GetClient(string method)
{
return new RestClient(url + "/api/" + method);
}
private RestRequest GetRestRequest(Method method)
{
var restRequest = new RestRequest(method);
restRequest.RequestFormat = DataFormat.Json;
restRequest.AddHeader("Content-Type", "application/json");
return restRequest;
}
public FindProductsResponse FindProducts(FindProductsRequest request)
{
var restRequest = GetRestRequest(Method.GET);
restRequest.AddJsonBody(request);
var client = this.GetClient("Products");
var restResponse = client.Get(restRequest);
return new JsonDeserializer().Deserialize<FindProductsResponse>(restResponse);
}
public void Dispose()
{
}
And here is how I am trying to read it in my .NET Core API:
[HttpGet]
public ActionResult<FindProductsResponse> Get()
{
var request = "";
using (StreamReader reader = new StreamReader(Request.Body, Encoding.UTF8))
{
request = reader.ReadToEnd();
}
var buildRequest = JsonConvert.DeserializeObject<FindProductsRequest>(request);
var products = _service.FindProducts(buildRequest);
if (products != null && products.Any())
{
return new FindProductsResponse()
{
Products = products
};
}
return BadRequest("Not found");
}
However, the request variable is always empty after Request.Body has been read by the StreamReader.
If I make the same request from Postman (also using GET), I get the body just fine.
What am I doing wrong here?
EDIT: This is the unit test calling the API:
[Test]
public void Test1()
{
using (var productServiceClient = new ProductServiceClient())
{
var products = productServiceClient.FindProducts(new FindProductsRequest()
{
Id = 50
}).Products;
}
}
It may be that your Request.Body has already been consumed.
Try calling Request.EnableRewind() before opening the StreamReader.
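A minimal sketch of that change, assuming ASP.NET Core 2.x (EnableRewind lives in Microsoft.AspNetCore.Http.Internal; on 3.0+ the equivalent is EnableBuffering):
Request.EnableRewind();      // allow the body to be read more than once
Request.Body.Position = 0;   // rewind in case something already read it
string request;
using (var reader = new StreamReader(Request.Body, Encoding.UTF8))
{
    request = reader.ReadToEnd();
}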
I'm not sure why you are doing it manually. It looks like you are reinventing the wheel; ASP.NET Core already does that for you.
This is what your service should look like:
[HttpGet] // oops, GET requests will not allow Bodies, this won't work
public ActionResult<FindProductsResponse> Get([FromBody]FindProductsRequest buildRequest)
{
// skip all the serialization stuff, the framework does that for you
var products = _service.FindProducts(buildRequest);
if (products != null && products.Any())
{
return new FindProductsResponse()
{
Products = products
};
}
return BadRequest("Not found");
}
And if you don't want to redo all the busywork of retyping the code on the client side, I suggest you read up on Swagger (probably in the form of Swashbuckle). Client code can be generated, even from within Visual Studio: right-click the project and pick "Add REST API Client..." from the context menu. Please don't hand-code what a machine can generate flawlessly. I don't know what went wrong in your specific case, but chasing bugs that could be avoided altogether is busywork; that time is better spent on other parts of the program.
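For reference, wiring Swashbuckle into an ASP.NET Core project only takes a few lines; this is a rough sketch assuming a recent Swashbuckle.AspNetCore package:
// In Startup.ConfigureServices
services.AddSwaggerGen();
// In Startup.Configure
app.UseSwagger();    // serves /swagger/v1/swagger.json
app.UseSwaggerUI();  // serves the interactive UI at /swagger
The generated swagger.json is what the client generators consume.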
I just realized this is a GET request. ASP.NET will not recognize bodies for GET requests. You will need to make it a PUT or POST request, or put your parameters in the query string.
If you happen to make that mistake as often as I did, you might want to write some unit tests that cover this, because .NET is not helping you there. Been there, done that.
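A minimal sketch of the query-string variant, assuming FindProductsRequest only carries simple values such as the Id:
[HttpGet]
public ActionResult<FindProductsResponse> Get([FromQuery] FindProductsRequest request)
{
    // MVC binds simple properties such as Id from ?Id=50 in the query string.
    var products = _service.FindProducts(request);
    if (products != null && products.Any())
    {
        return new FindProductsResponse { Products = products };
    }
    return BadRequest("Not found");
}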
I'm trying to access a request's raw input body/stream in ASP.NET 5. In the past, I was able to reset the position of the input stream to 0 and read it into a memory stream, but when I attempt to do this from the context, the input stream is either null or throws an error (System.NotSupportedException => "Specified method is not supported.").
In the first example below I can access the raw request in a controller if I declare the controller method's parameter object type as dynamic. For various reasons this is not a solution, and I need to access the raw request body in an authentication filter anyway.
This Example Works, But Is Not a Reasonable Solution:
[HttpPost("requestme")]
public string GetRequestBody([FromBody] dynamic body)
{
return body.ToString();
}
Throws Error:
[HttpPost("requestme")]
public string GetRequestBody()
{
var m = new MemoryStream();
Request.Body.CopyTo(m);
var contentLength = m.Length;
var b = System.Text.Encoding.UTF8.GetString(m.ToArray());
return b;
}
Throws Error:
[HttpPost("requestme")]
public string GetRequestBody()
{
Request.Body.Position = 0;
var input = new StreamReader(Request.Body).ReadToEnd();
return input;
}
Throws Error:
[HttpPost("requestme")]
public string GetRequestBody()
{
Request.Body.Position = 0;
var input = new MemoryStream();
Request.Body.CopyTo(input);
var inputString = System.Text.Encoding.UTF8.GetString(input.ToArray());
return inputString;
}
I need to access the raw request body of every request that comes in for an API that I am building.
Any help or direction would be greatly appreciated!
EDIT:
Here is the code that I would like to read the request body in.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.Mvc;
using Microsoft.AspNet.Http;
namespace API.Filters
{
public class CustomAuthorizationAttribute : Attribute, IAuthorizationFilter
{
public CustomAuthorizationAttribute()
{ }
public void OnAuthorization(AuthorizationContext context)
{
if (context == null)
throw new ArgumentNullException("OnAuthorization AuthorizationContext context can not be null.");
else
{
if (this.AuthorizeCore(context.HttpContext) == false)
{
// Do Other Stuff To Check Auth
}
else
{
context.Result = new HttpUnauthorizedResult();
}
}
}
protected virtual bool AuthorizeCore(HttpContext httpContext)
{
var result = false;
using (System.IO.MemoryStream m = new System.IO.MemoryStream())
{
try
{
if (httpContext.Request.Body.CanSeek == true)
httpContext.Request.Body.Position = 0;
httpContext.Request.Body.CopyTo(m);
var bodyString = System.Text.Encoding.UTF8.GetString(m.ToArray());
return CheckBody(bodyString); // Initial Auth Check returns true/false <-- Not Shown In Code Here on Stack Overflow
}
catch (Exception ex)
{
Logger.WriteLine(ex.Message);
}
}
return false;
}
}
}
This code would be accessed when a call is made to a controller method marked with the CustomAuthorization attribute like so.
[Filters.CustomAuthorization]
[HttpPost]
public ActionResult Post([FromBody]UserModel Profile)
{
// Process Profile
}
Update
The information below is pretty outdated by now. For performance reasons this is not possible by default, but fortunately it can be changed. The latest solution is to enable request buffering with EnableBuffering:
Request.EnableBuffering();
See also this blog post for more information: https://devblogs.microsoft.com/aspnet/re-reading-asp-net-core-request-bodies-with-enablebuffering/.
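Since the goal here is to read the body of every request, one minimal sketch (my own, not from the linked post, assuming ASP.NET Core 3.0 or later) is to enable buffering globally in Startup.Configure, after which later readers such as filters can rewind the body:
app.Use(async (context, next) =>
{
    // Buffer the body so it can be read multiple times downstream.
    context.Request.EnableBuffering();
    await next();
});
// In a filter or controller, rewind before reading:
// httpContext.Request.Body.Position = 0;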
Old, outdated answer for reference
The implementation of Request.Body depends on the controller action.
If the action contains parameters it's implemented by Microsoft.AspNet.WebUtilities.FileBufferingReadStream, which supports seeking (Request.Body.CanSeek == true). This type also supports setting the Request.Body.Position.
However, if your action contains no parameters it's implemented by Microsoft.AspNet.Loader.IIS.FeatureModel.RequestBody, which does not support seeking (Request.Body.CanSeek == false). This means you cannot adjust the Position property; you can only start reading the stream.
This difference probably has to do with the fact that MVC needs to extract the parameters values from the request body, therefore it needs to read the request.
In your case, your action does not have any parameters. So the Microsoft.AspNet.Loader.IIS.FeatureModel.RequestBody is used, which throws an exception if you try to set the Position property.
**Solution**: either do not set the position or check if you actually _can_ set the position first:
if (Request.Body.CanSeek)
{
// Reset the position to zero to read from the beginning.
Request.Body.Position = 0;
}
var input = new StreamReader(Request.Body).ReadToEnd();
The exceptions you see in your last three snippets are the direct consequence of trying to read the request body multiple times - once by MVC 6 and once in your custom code - when using a streamed host like IIS or WebListener. You can see this SO question for more information: Read body twice in Asp.Net 5.
That said, I'd only expect this to happen when using application/x-www-form-urlencoded, since it wouldn't be safe for MVC to start reading the request stream with lengthy requests like file uploads. If that's not the case, then it's probably an MVC bug you should report on https://github.com/aspnet/Mvc.
For workarounds, you should take a look at this SO answer, that explains how you can use context.Request.ReadFormAsync or add manual buffering: Read body twice in Asp.Net 5
app.Use(next => async context => {
// Keep the original stream in a separate
// variable to restore it later if necessary.
var stream = context.Request.Body;
// Optimization: don't buffer the request if
// there was no stream or if it is rewindable.
if (stream == Stream.Null || stream.CanSeek) {
await next(context);
return;
}
try {
using (var buffer = new MemoryStream()) {
// Copy the request stream to the memory stream.
await stream.CopyToAsync(buffer);
// Rewind the memory stream.
buffer.Position = 0L;
// Replace the request stream by the memory stream.
context.Request.Body = buffer;
// Invoke the rest of the pipeline.
await next(context);
}
}
finally {
// Restore the original stream.
context.Request.Body = stream;
}
});
I just had this same issue. Remove the parameters from the method signature, and then read the Request.Body stream however you want to.
You need to call Request.EnableRewind() to allow the stream to be rewound so you can read it.
string bodyAsString;
Request.EnableRewind();
using (var streamReader = new StreamReader(Request.Body, Encoding.UTF8))
{
bodyAsString = streamReader.ReadToEnd();
}
I know this may be late, but in my case it was just a routing problem, as below.
In my Startup.cs file I was beginning the routing with /api:
app.MapWhen(context => context.Request.Path.StartsWithSegments(new PathString("/api")),
a =>
{
//if (environment.IsDevelopment())
//{
// a.UseDeveloperExceptionPage();
//}
a.Use(async (context, next) =>
{
// API Call
context.Request.EnableBuffering();
await next();
});
});
// but in the controller I had this route, which does not start with /api:
[HttpPost]
[Route("/Register", Name = "Register")]
// I just changed the route to start with /api, like in my Startup.cs file:
[HttpPost]
[Route("/api/Register", Name = "Register")]
// and now the params are not null and I can read the request body multiple times
iTextSharp throws an error when you attempt to create a PdfTable with 0 columns.
I have a requirement to take XHTML that is generated by an XSLT transformation and produce a PDF from it. Currently I am using iTextSharp to do so. The problem is that the generated XHTML sometimes contains tables with 0 rows, so when iTextSharp attempts to parse them into a table it throws an error saying there are 0 columns in the table.
The reason it says 0 columns is because ITextSharp sets the number of columns in the table to the maximum of the number of columns in each row, and since there are no rows the max number of columns in any given row is 0.
How do I go about catching these HTML table declarations with 0 rows and stop them from being parsed into PDF elements?
I've found the piece of code that is causing the error is within the HtmlPipeline, so I could copy and paste the implementation into a class extending HtmlPipeline and overriding its methods and then do my logic to check for empty tables there, but that seems sloppy and inefficient.
Is there a way to catch the empty table before it is parsed?
Solution
The Tag Processor
public class EmptyTableTagProcessor : Table
{
public override IList<IElement> End(IWorkerContext ctx, Tag tag, IList<IElement> currentContent)
{
if (currentContent.Count > 0)
{
return base.End(ctx, tag, currentContent);
}
return new List<IElement>();
}
}
And using the Tag Processor...
//CSS
var cssResolver = XMLWorkerHelper.GetInstance().GetDefaultCssResolver(true);
//HTML
var fontProvider = new XMLWorkerFontProvider();
var cssAppliers = new CssAppliersImpl(fontProvider);
var tagProcessorFactory = Tags.GetHtmlTagProcessorFactory();
tagProcessorFactory.AddProcessor(new EmptyTableTagProcessor(), new string[] { "table" });
var htmlContext = new HtmlPipelineContext(cssAppliers);
htmlContext.SetTagFactory(tagProcessorFactory);
//PIPELINE
var pipeline =
new CssResolverPipeline(cssResolver,
new HtmlPipeline(htmlContext,
new PdfWriterPipeline(document, pdfWriter)));
//XML WORKER
var xmlWorker = new XMLWorker(pipeline, true);
var xmlParser = new XMLParser();
xmlParser.AddListener(xmlWorker);
using (var stringReader = new StringReader(html))
{
    xmlParser.Parse(stringReader);
}
This solution removes the empty table tags and still writes the PDF as a part of the pipeline.
You should be able to write your own tag processor that accounts for that scenario by subclassing iTextSharp.tool.xml.html.AbstractTagProcessor. In fact, to make your life even easier you can subclass the already existing more specific iTextSharp.tool.xml.html.table.Table:
public class TableTagProcessor : iTextSharp.tool.xml.html.table.Table {
public override IList<IElement> End(IWorkerContext ctx, Tag tag, IList<IElement> currentContent) {
//See if we've got anything to work with
if (currentContent.Count > 0) {
//If so, let our parent class worry about it
return base.End(ctx, tag, currentContent);
}
//Otherwise return an empty list which should make everyone happy
return new List<IElement>();
}
}
Unfortunately, if you want to use a custom tag processor you can't use the shortcut XMLWorkerHelper class and instead you'll need to parse the HTML into elements and add them to your document. To do that you'll need an instance of iTextSharp.tool.xml.IElementHandler which you can create like:
public class SampleHandler : iTextSharp.tool.xml.IElementHandler {
//Generic list of elements
public List<IElement> elements = new List<IElement>();
//Add the supplied item to the list
public void Add(IWritable w) {
if (w is WritableElement) {
elements.AddRange(((WritableElement)w).Elements());
}
}
}
You can use the above with the following code which includes some sample invalid HTML.
//Hold everything in memory
using (var ms = new MemoryStream()) {
//Create new PDF document
using (var doc = new Document()) {
using (var writer = PdfWriter.GetInstance(doc, ms)) {
doc.Open();
//Sample HTML
string html = "<table><tr><td>Hello</td></tr></table><table></table>";
//Create an instance of our element helper
var XhtmlHelper = new SampleHandler();
//Begin pipeline
var htmlContext = new HtmlPipelineContext(null);
//Get the default tag processor
var tagFactory = iTextSharp.tool.xml.html.Tags.GetHtmlTagProcessorFactory();
//Add an instance of our new processor
tagFactory.AddProcessor(new TableTagProcessor(), new string[] { "table" });
//Bind the above to the HTML context part of the pipeline
htmlContext.SetTagFactory(tagFactory);
//Get the default CSS handler and create some boilerplate pipeline stuff
var cssResolver = XMLWorkerHelper.GetInstance().GetDefaultCssResolver(false);
var pipeline = new CssResolverPipeline(cssResolver, new HtmlPipeline(htmlContext, new ElementHandlerPipeline(XhtmlHelper, null)));//Here's where we add our IElementHandler
//The worker dispatches commands to the pipeline stuff above
var worker = new XMLWorker(pipeline, true);
//Create a parser with the worker listed as the dispatcher
var parser = new XMLParser();
parser.AddListener(worker);
//Finally, parse our HTML directly.
using (TextReader sr = new StringReader(html)) {
parser.Parse(sr);
}
//The above did not touch our document. Instead, all "proper" elements are stored in our helper class XhtmlHelper
foreach (var element in XhtmlHelper.elements) {
//Add these to the main document
doc.Add(element);
}
doc.Close();
}
}
}
Does the IsolatedStorageSettings.Save method in a Windows Phone application save the whole dictionary regardless of the changes we made to it? That is, if we have, say, 50 items in it and change just one, does the Save method save (serialize, etc.) the whole dictionary again each time? Is there any detailed documentation on this class, and does anybody know what data storage format is used "under the hood"?
I've managed to find the implementation of the IsolatedStorageSettings.Save method in the entrails of the Windows Phone emulator VHD images supplied with the Windows Phone SDK (the answer to this question on SO helped me to do that). Here is the source code of the method:
public void Save()
{
    lock (this.m_lock)
    {
        using (IsolatedStorageFileStream isolatedStorageFileStream = this._appStore.OpenFile(this.LocalSettingsPath, FileMode.OpenOrCreate))
        {
            using (MemoryStream memoryStream = new MemoryStream())
            {
                Dictionary<Type, bool> dictionary = new Dictionary<Type, bool>();
                StringBuilder stringBuilder = new StringBuilder();
                using (Dictionary<string, object>.ValueCollection.Enumerator enumerator = this._settings.Values.GetEnumerator())
                {
                    while (enumerator.MoveNext())
                    {
                        object current = enumerator.Current;
                        if (current != null)
                        {
                            Type type = current.GetType();
                            if (!type.IsPrimitive && type != typeof(string))
                            {
                                dictionary[type] = true;
                                if (stringBuilder.Length > 0)
                                {
                                    stringBuilder.Append('\0');
                                }
                                stringBuilder.Append(type.AssemblyQualifiedName);
                            }
                        }
                    }
                }
                stringBuilder.Append(Environment.NewLine);
                byte[] bytes = Encoding.UTF8.GetBytes(stringBuilder.ToString());
                memoryStream.Write(bytes, 0, bytes.Length);
                DataContractSerializer dataContractSerializer = new DataContractSerializer(typeof(Dictionary<string, object>), dictionary.Keys);
                dataContractSerializer.WriteObject(memoryStream, this._settings);
                if (memoryStream.Length > this._appStore.AvailableFreeSpace + isolatedStorageFileStream.Length)
                {
                    throw new IsolatedStorageException(Resx.GetString("IsolatedStorageSettings_NotEnoughSpace"));
                }
                isolatedStorageFileStream.SetLength(0L);
                byte[] array = memoryStream.ToArray();
                isolatedStorageFileStream.Write(array, 0, array.Length);
            }
        }
    }
}
So, as we can see, the whole dictionary is serialized every time we call Save, and the code also shows that a DataContractSerializer is used to serialize the collection values.
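A practical consequence (my own illustrative sketch, not part of the decompiled code): since every Save rewrites the entire settings file, it is cheaper to batch your changes and call Save once at the end rather than after each item.
// Sketch: batch multiple updates, then persist once.
var settings = IsolatedStorageSettings.ApplicationSettings;
settings["LastSync"] = DateTime.UtcNow;
settings["PageSize"] = 50;
settings["Theme"] = "Dark";
settings.Save(); // one serialization of the whole dictionary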