Storing and Retrieving System.Data.Dataset to Redis with ServiceStack - redis

I am just a few hours into Redis and ServiceStack.Redis and trying to learn them.
Previously I used the ASP.NET cache, where I stored a DataSet and retrieved it when required.
I am trying to accomplish the same with ServiceStack.Redis, but it raises an exception:
An unhandled exception of type 'System.StackOverflowException' occurred in ServiceStack.Text.dll
Here is the code:
static void Main(string[] args)
{
var redisClient = new RedisClient("localhost");
DataSet ds = new DataSet();
ds.Tables.Add("table1");
ds.Tables[0].Columns.Add("col1", typeof(string));
DataRow rw = ds.Tables[0].NewRow();
rw[0] = "samtech";
ds.Tables[0].Rows.Add(rw);
//following line raises exception
redisClient.Set<System.Data.DataSet>("my_ds", ds, DateTime.Now.AddSeconds(60));
}
Can someone tell me what I am doing wrong?
Can I store only custom classes in Redis, and not a DataSet?

DataSets are extremely poor candidates for serialization and, as a result, are not supported by any ServiceStack library; use clean POCO models only.
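For illustration, a minimal sketch of the POCO approach: the Person class and the key name are made up for this example, and RedisClient.Set is assumed to accept a TimeSpan expiry as in the question's DateTime overload.

```csharp
using System;
using ServiceStack.Redis; // NuGet: ServiceStack.Redis

// A clean POCO that ServiceStack.Text serializes without issue
public class Person
{
    public string Col1 { get; set; }
}

class Program
{
    static void Main()
    {
        var redisClient = new RedisClient("localhost");

        // Store the POCO with a 60-second expiry instead of a DataSet
        redisClient.Set("my_person", new Person { Col1 = "samtech" },
            TimeSpan.FromSeconds(60));

        // Retrieve it back; null once the key has expired
        var person = redisClient.Get<Person>("my_person");
        Console.WriteLine(person.Col1);
    }
}
```

If you need tabular data, map each row of the table to one POCO instance and store a list of them.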

Related

Getting ClassNotFoundException while storing a list of custom objects in ignite cache and works well if I don't use a list

I have a non-persistent Ignite cache that stores the following elements:
Key --- java.lang.String
Value --- Custom class
public class Transaction {
private int counter;
public Transaction(int counter) {
this.counter = counter;
}
public String toString() {
return String.valueOf(counter);
}
}
The below code works fine even though I am trying to put a custom object into Ignite.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
Ignite ignite = Ignition.start(cfg);
IgniteCache<String, Transaction> cache = ignite.getOrCreateCache("blocked");
cache.put("c1234_t2345_p3456", new Transaction(100));
The below code fails with a ClassNotFoundException when I try to put a list of objects instead. This code is exactly the same as the above, except for the list of objects. Why does a list of objects fail when custom objects stored directly work fine?
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
Ignite ignite = Ignition.start(cfg);
IgniteCache<String, List<Transaction>> cache = ignite.getOrCreateCache("blocked");
cache.put("c1234_t2345_p3456", Arrays.asList(new Transaction(100)));
Storing custom objects in-memory in Ignite worked, but trying to store List objects instead caused a ClassNotFoundException on the server. I was able to solve this by copying the custom class definition to "/ignite_home/bin/libs", but I am curious why the first case worked and the second did not. Can anyone help me understand what is happening here? Is there any other way to resolve this issue?
OK, after many trials, I have an observation that evens out the differences between the two scenarios above. When I declare the cache dynamically in code, as I did earlier, Ignite somehow insists that the custom classes be kept in the bin/libs folder. But if I define the cache in ignite-config.xml, Ignite handles both use cases evenly and does not throw the ClassNotFoundException. My takeaway is that pre-declared caches are safer, since I see different behaviour when creating them dynamically from code. So I changed the cache declaration to the declarative model, and now both use cases work fine.
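For reference, a declarative cache of that kind is defined as a bean in the Spring XML file the node is started with. This is a minimal sketch assuming the cache name "blocked" from the code above; property names mirror the IgniteConfiguration and CacheConfiguration setters.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="peerClassLoadingEnabled" value="true"/>
    <property name="cacheConfiguration">
        <list>
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <property name="name" value="blocked"/>
            </bean>
        </list>
    </property>
</bean>
```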

Using ASP.NET Core Web API WITHOUT Entity Framework

I need to build a Web API in ASP.NET Core without Entity Framework. It's an existing database that has some custom stored procedures, and we do not want to use EF.
I searched this topic and can't find anything about it. Is this even possible?
This is possible.
The first problem you will run into is getting the database connection string. You will want to import the configuration to do so. In a controller, it might look like this:
private readonly IConfiguration _configuration;
public WeatherForecastController(ILogger<WeatherForecastController> logger, IConfiguration configuration)
{
_logger = logger;
_configuration = configuration;
}
Add using System.Data and using System.Data.SqlClient (you'll need the SqlClient NuGet package), as well as using Microsoft.Extensions.Configuration. With access to the database, you write code "old style", for example:
[HttpGet]
[Route("[controller]/movies")]
public IEnumerable<Movie> GetMovies()
{
List<Movie> movies = new List<Movie>();
string connString = ConfigurationExtensions.GetConnectionString(_configuration, "RazorPagesMovieContext");
SqlConnection conn = new SqlConnection(connString);
conn.Open();
SqlDataAdapter sda = new SqlDataAdapter("SELECT * FROM Movie", conn);
DataSet ds = new DataSet();
sda.Fill(ds);
DataTable dt = ds.Tables[0];
sda.Dispose();
foreach (DataRow dr in dt.Rows)
{
Movie m = new Movie
{
ID = (int)dr["ID"],
Title = dr["Title"].ToString(),
ReleaseDate = (DateTime)dr["ReleaseDate"],
Genre = dr["Genre"].ToString(),
Price = (decimal)dr["Price"],
Rating = dr["Rating"].ToString()
};
movies.Add(m);
}
conn.Close();
return movies.ToArray();
}
The connection string name is in appsettings.json.
"ConnectionStrings": {
"RazorPagesMovieContext": "Server=localhost;Database=Movies;Trusted_Connection=True;MultipleActiveResultSets=true"
}
Yes, it is possible: just implement the API yourself. Here is also a sample for the identity scaffold, without EF:
https://markjohnson.io/articles/asp-net-core-identity-without-entity-framework/
We just used Dapper as our ORM in a project rather than EF.
https://dapper-tutorial.net/
It is similar to ADO.NET, but it has some additional features that we leveraged, and it was really clean to implement.
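As a quick sketch of what that looks like in practice (the Movie class, table, and connection string are assumptions for illustration, not from the project above):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper; // NuGet: Dapper

// Target class for the query results
public class Movie
{
    public int ID { get; set; }
    public string Title { get; set; }
}

public static class MovieRepository
{
    public static IEnumerable<Movie> GetMovies(string connString)
    {
        using (var conn = new SqlConnection(connString))
        {
            // Dapper's Query<T> opens the connection if needed and maps
            // result columns to Movie properties by name; results are
            // buffered by default, so the connection can be disposed here.
            return conn.Query<Movie>("SELECT ID, Title FROM Movie");
        }
    }
}
```

Compared with the SqlDataAdapter version earlier in this thread, the DataSet, DataTable, and row-by-row mapping loop disappear entirely.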
I realize this is an old question, but it came up in a search I ran so I figured I'd add to the answers given.
First, if the custom stored procedures are your concern, you can still run them using Entity Framework's .FromSql method (see here for reference: https://www.entityframeworktutorial.net/efcore/working-with-stored-procedure-in-ef-core.aspx)
The relevant info is found at the top of the page:
EF Core provides the following methods to execute a stored procedure:
1. DbSet<TEntity>.FromSql(<sqlcommand>)
2. DbContext.Database.ExecuteSqlCommand(<sqlcommand>)
If you are avoiding Entity Framework for other reasons, it's definitely possible to use any database connection method you want in ASP.NET Core. Just implement your database connection methods using whatever library is relevant to your database and set up your controller to return the data in whatever format you want. Most, if not all, of Microsoft's examples return Entity Framework entities, but you can easily return any data format you want.
As an example, this controller method returns a MemoryStream object after running a query against an MS SQL server. (Note: in most cases where you want data returned, it should be a "GET" method, not "POST" as is done here, but I needed to send and use information in the HttpPost body.)
[HttpPost]
[Route("Query")]
public ActionResult<Stream> Query([FromBody]SqlDto content)
{
return Ok(_msSqlGenericService.Query(content.SqlCommand, content.SqlParameters));
}
Instead of a MemoryStream, you could return a generic DataTable or a List of any custom class you want. Note that you'll also need to determine how you are going to serialize/deserialize your data.

WCF streaming SQLFileStream by Message Contract

In my WCF service, I try to load a file from an MS SQL table which has a FileStream column, and pass it back as a stream:
responseMsg.DocSqlFileStream = new MemoryStream();
try
{
using (FileStreamDBEntities dbEntity = new FileStreamDBEntities())
{
...
using (TransactionScope x = new TransactionScope())
{
string sqlCmdStr = "SELECT dcraDocFile.PathName() AS InternalPath, GET_FILESTREAM_TRANSACTION_CONTEXT() AS TransactionContext FROM dcraDocument WHERE dcraDocFileID={0}";
var docFileStreamInfo = dbEntity.Database.SqlQuery<DocFileStreamPath>(sqlCmdStr, new object[] { docEntity.dcraDocFileID.ToString() }).First();
SqlFileStream sqlFS = new SqlFileStream(docFileStreamInfo.InternalPath, docFileStreamInfo.TransactionContext, FileAccess.Read);
sqlFS.CopyTo(responseMsg.DocSqlFileStream);
if( responseMsg.DocSqlFileStream.Length > 0 )
responseMsg.DocSqlFileStream.Position = 0;
x.Complete();
}
}
...
I'm wondering what the best way is to pass the SqlFileStream back through a message contract, to take advantage of streaming. Currently I copy the SqlFileStream to a MemoryStream, because I got an error message in the WCF trace which says: Type 'System.Data.SqlTypes.SqlFileStream' cannot be serialized.
In Web API there is such a thing as PushStreamContent; it allows delegating all the transaction work to an async lambda. I don't know whether there is something similar in WCF, but the following approach may be helpful:
http://weblogs.asp.net/andresv/archive/2012/12/12/asynchronous-streaming-in-asp-net-webapi.aspx
You can't stream a SqlFileStream back to the client, because it can only be read within the SQL transaction. I think your solution with the MemoryStream is a good way of dealing with the problem.
I had a similar problem and was worried about the large object heap when using a new MemoryStream every time. I came up with the idea of using a temporary file on disk instead of a memory stream. We are using this solution in several projects now, and it works really well.
See here for the example code:
https://stackoverflow.com/a/11307324/173711
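A minimal sketch of that temp-file idea (the helper name is made up; the key detail is FileOptions.DeleteOnClose, which makes the OS remove the file when WCF disposes the returned stream):

```csharp
using System.IO;

public static class StreamHelper
{
    // Copy a source stream (e.g. a SqlFileStream read inside the SQL
    // transaction) to a temp file that is deleted automatically when the
    // returned stream is closed, avoiding large MemoryStream allocations.
    public static Stream CopyToTempFileStream(Stream source)
    {
        var temp = new FileStream(Path.GetTempFileName(), FileMode.Open,
            FileAccess.ReadWrite, FileShare.None, 4096,
            FileOptions.DeleteOnClose | FileOptions.SequentialScan);
        source.CopyTo(temp);
        temp.Position = 0; // rewind so WCF streams from the beginning
        return temp;
    }
}
```

In the question's code, sqlFS.CopyTo(responseMsg.DocSqlFileStream) would become responseMsg.DocSqlFileStream = StreamHelper.CopyToTempFileStream(sqlFS), still inside the TransactionScope.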

Redis on Appharbor - Booksleeve GetString exception

I am trying to set up Redis on AppHarbor. I have followed their instructions, and again I have an issue with the Booksleeve API. Here is the code I am using to make it work initially:
var connectionUri = new Uri(url);
using (var redis = new RedisConnection(connectionUri.Host, connectionUri.Port, password: connectionUri.UserInfo.Split(new[] { ':' }, 2)[1]))
{
redis.Strings.Set(1, "greeting", "welcome to remember your stuff!");
try
{
var task = redis.Strings.GetString(1, "greeting");
redis.Wait(task);
ViewBag.Message = task.Result;
}
catch (Exception)
{
// It throws an exception trying to wait for the task?
}
}
However, the issue is that it sets the string correctly, but when trying to retrieve the same string from the key-value store, it throws a timeout exception while waiting for the task to execute. This same code works against my local Redis server.
Am I using the API in a wrong way, or is this something related to AppHarbor?
Thanks
Like a SqlConnection, you need to call Open() (otherwise your messages are queued for delivery).
Unlike SqlConnection, you should not fire up a RedisConnection each time you need it - it is intended to be used as a shared, thread-safe, multiplexer - i.e. a single connection is held somewhere and used by lots and lots of unrelated callers. Unless of course you only need to do one thing!
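A sketch of the corrected usage under that advice; it assumes Booksleeve's Open() returns a Task that completes once the handshake is done, and reuses the question's connectionUri:

```csharp
// Create ONE RedisConnection, open it, and share it across the app,
// rather than constructing a connection per request.
var redis = new RedisConnection(connectionUri.Host, connectionUri.Port,
    password: connectionUri.UserInfo.Split(new[] { ':' }, 2)[1]);
redis.Wait(redis.Open()); // complete the handshake before issuing commands

redis.Strings.Set(1, "greeting", "welcome to remember your stuff!");
var task = redis.Strings.GetString(1, "greeting");
var greeting = redis.Wait(task); // now completes instead of timing out
```

In a web app the opened connection would typically live in a static field or be registered as a singleton, not inside a using block per action.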

Example of using ADO.NET with Oracle in WCF

Without any LINQ or Entity Framework, I would like to see an example of using ADO.NET with Oracle in WCF. I have seen the ABCs and the different contracts, but for, say, consuming a RESTful WCF service that takes one to several parameters, I would like to see an example of using this type of code:
connection = new OracleConnection(EnvironmentSettings.connectionString);
connection.Open();
command = new OracleCommand("H16.WEB_FACILITY.get_facility_info", connection);
command.CommandType = CommandType.StoredProcedure;
// Input Parameters
command.Parameters.Add("pfcode", OracleDbType.Varchar2, facilityCode, ParameterDirection.Input);
// Output Parameters
command.Parameters.Add("pfacility", OracleDbType.RefCursor).Direction = ParameterDirection.Output;
adapter = new OracleDataAdapter(command);
DataSet ds = new DataSet();
adapter.Fill(ds);
So that I can do CRUD operations with a good-practice WCF design: fault contracts, data contracts, etc. I see many examples, but nothing specific to something that seems so simple. I guess that is why so many people are still doing asmx. I want to convert a project I am on, and I see tons of asmx web services everywhere. I wish an expert, or someone who has done this, would point me in the right direction, or even better show me how to write that ADO code in WCF. Thanks in advance.
Slightly confused as to exactly what you mean, but LINQ and Entity Framework have nothing to do with WCF, and the paradigm doesn't change one bit in using them. You could do something as simple as:
[ServiceContract]
public class MyService
{
[OperationContract]
public DataSet LoadData(string facilityCode)
{
command = new OracleCommand("H16.WEB_FACILITY.get_facility_info", connection);
command.CommandType = CommandType.StoredProcedure;
// Input Parameters
command.Parameters.Add("pfcode", OracleDbType.Varchar2, facilityCode, ParameterDirection.Input);
// Output Parameters
command.Parameters.Add("pfacility", OracleDbType.RefCursor).Direction = ParameterDirection.Output;
adapter = new OracleDataAdapter(command);
DataSet ds = new DataSet();
adapter.Fill(ds);
return ds;
}
}
In practice you would probably want to use a [DataContract] class, and return that instead of a DataSet, but the only real change would be reading your results into a real class instead of a DataSet, something like:
[DataContract]
public class MyData
{
[DataMember]
public string Facility { get; set; }
}
Then your service method returns that instead of DataSet:
[OperationContract]
public MyData LoadData(string facilityCode)
{
MyData data;
// read from Oracle into data object...
return data;
}
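As a hedged sketch of that "read from Oracle into data object" step, using a data reader over the ref cursor instead of filling a DataSet (the column name FACILITY_NAME and the connection handling are assumptions for illustration; the real MyData would carry the [DataContract] attributes shown above):

```csharp
using System.Collections.Generic;
using System.Data;
using Oracle.ManagedDataAccess.Client; // NuGet: Oracle.ManagedDataAccess

// Simplified stand-in for the [DataContract] class above
public class MyData
{
    public string Facility { get; set; }
}

public static class FacilityReader
{
    public static List<MyData> LoadData(string connectionString, string facilityCode)
    {
        var results = new List<MyData>();
        using (var connection = new OracleConnection(connectionString))
        using (var command = new OracleCommand("H16.WEB_FACILITY.get_facility_info", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("pfcode", OracleDbType.Varchar2,
                facilityCode, ParameterDirection.Input);
            command.Parameters.Add("pfacility", OracleDbType.RefCursor)
                .Direction = ParameterDirection.Output;
            connection.Open();
            // ExecuteReader exposes the rows of the output ref cursor
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Column name is a placeholder for whatever the cursor returns
                    results.Add(new MyData { Facility = reader["FACILITY_NAME"].ToString() });
                }
            }
        }
        return results;
    }
}
```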
You can also look at WCF Transaction Flow to control your database transaction scope. It is a nice way to have every WCF service operation be trapped in its own transaction scope, or even control the transaction from the WCF client if needed.
[FaultContract]s are a subject of their own, but you can find some good examples if you search for them. Basically you would set up your own exception type, and then add that to the service like:
[ServiceContract]
[FaultContract(typeof(MyException))]
public class MyService
and that tells WCF to add the serialization info for MyException to the WSDL. Your operations can then throw new MyException(); and it will serialize back to the clients, so they will get your exception.