Entity Framework, self-tracking entities and SQL Server FILESTREAM - WCF

I have just started a project where I need a WCF service that reads and writes files.
The architecture is based on DDD, using Entity Framework self-tracking entities.
The simple GUI should show a grid with a list of files; clicking a row downloads the corresponding file.
Can I use the SQL Server 2008 FILESTREAM feature with this architecture? Which strategy is the best one to manage this kind of entity?
Thanks.

FILESTREAM will not help you when using EF. EF doesn't use the streaming feature; it loads the column as varbinary(max). If you want to take advantage of FILESTREAM you must load the data from the database with ADO.NET directly, and you need a streaming service to pass it back to the client in an efficient way.
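
For reference, here is a minimal sketch of that approach, assuming a Documents table with a FILESTREAM column named FileData and a WCF binding configured with transferMode="Streamed"; the table, column and connection string are illustrative assumptions, not part of the original question:

    // Illustrative only: reads a FILESTREAM column with SqlFileStream (ADO.NET)
    // and returns it as a Stream from a WCF operation. Assumes streamed transfer mode.
    using System;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.IO;
    using System.ServiceModel;

    [ServiceContract]
    public interface IFileService
    {
        [OperationContract]
        Stream DownloadFile(Guid documentId);
    }

    public class FileService : IFileService
    {
        private const string ConnectionString =
            "Data Source=.;Initial Catalog=Docs;Integrated Security=True";

        public Stream DownloadFile(Guid documentId)
        {
            var connection = new SqlConnection(ConnectionString);
            connection.Open();
            // FILESTREAM access requires an explicit transaction context.
            var transaction = connection.BeginTransaction();

            var command = new SqlCommand(
                "SELECT FileData.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                "FROM Documents WHERE Id = @id", connection, transaction);
            command.Parameters.AddWithValue("@id", documentId);

            string path;
            byte[] txContext;
            using (var reader = command.ExecuteReader())
            {
                reader.Read();
                path = reader.GetString(0);
                txContext = (byte[])reader[1];
            }

            // SqlFileStream streams the blob from the NTFS store instead of
            // materialising the whole varbinary(max) value in memory.
            var stream = new SqlFileStream(path, txContext, FileAccess.Read);

            // Keep the connection/transaction alive until WCF has finished streaming.
            OperationContext.Current.OperationCompleted += (s, e) =>
            {
                stream.Dispose();
                transaction.Commit();
                connection.Dispose();
            };

            return stream;
        }
    }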

Interview task, stuck on local database connection, need alternative

I'm required to create a small piece of software for a company to demonstrate my coding. I'm using a .NET Core MVC web app and I believe it requires a database, but I would need to upload my code to GitHub for them to inspect and run, and obviously it wouldn't be able to read the database from my machine. What are the alternatives? Can a fake DB be created within the project, for instance? Or is there something else I could do that doesn't involve Azure?
I tried scaffolding a DbContext from a controller, but it requires a connection to a database.
Have you considered mocking your data connection? It is the same thing you would do if you were unit testing your application. You would not want to connect directly to your database; instead, you would create a mock connection and return the data yourself.
You have multiple choices here. You can use a mocking framework such as Moq, FakeItEasy, JustMock, or NSubstitute. Otherwise, you can roll your own.
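
As an illustration, here is a minimal Moq sketch; the IProductRepository, Product and ProductsController names are made up for the example and are not from the question:

    // Illustrative only: the repository interface and types are hypothetical.
    using System;
    using System.Collections.Generic;
    using Moq;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface IProductRepository
    {
        IEnumerable<Product> GetAll();
    }

    // The controller depends on the interface, not on a concrete DbContext.
    public class ProductsController
    {
        private readonly IProductRepository _repository;

        public ProductsController(IProductRepository repository)
        {
            _repository = repository;
        }

        public IEnumerable<Product> Index() => _repository.GetAll();
    }

    public static class Demo
    {
        public static void Main()
        {
            // Return canned data instead of hitting a real database.
            var mock = new Mock<IProductRepository>();
            mock.Setup(r => r.GetAll()).Returns(new[]
            {
                new Product { Id = 1, Name = "Sample A" },
                new Product { Id = 2, Name = "Sample B" }
            });

            var controller = new ProductsController(mock.Object);
            foreach (var p in controller.Index())
                Console.WriteLine($"{p.Id}: {p.Name}");
        }
    }

If you would rather run the scaffolded DbContext itself, EF Core's in-memory or SQLite providers are another way to avoid a real server, so reviewers can run the project straight from GitHub.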

Apache Calcite - Access RESTful Service with SQL

I have gone through the documentation and it's a bit hard for me to grasp how one should go about writing an adapter for anything. I want to ease access to RESTful web services with a SQL-like interface for business folks.
Coarse requirements look something like:
Register data source, in this case endpoint
Add mapping for endpoint to table
Execute simple select queries
Allow joins to be performed on the basis of some join key but in client application memory
Represent the output in the tabular format
Try using Calcite's file adapter, which was just added in release 1.12.
The simplest use case is reading and parsing a CSV file from the file system and presenting it as a table that can be used in a SQL statement. But in addition to files, the file adapter can read documents via HTTP, and it can parse the contents of HTML tables. So you should be able to use it to read data from a REST service.

How to migrate data from the Magnolia CMS Apache Jackrabbit content repository to a normal SQL Server database

I am new to Magnolia CMS and the Apache Jackrabbit content repository concepts.
There is a web application that uses Magnolia CMS. Magnolia uses a SQL Server 2012 database as its persistence manager.
The Apache Jackrabbit content repository implementation is used here. There are two separate configurations of Magnolia CMS used for the application, referred to as the public and author instances.
Now here we are trying to replace the existing Magnolia CMS with a custom ASP.NET MVC 5 application with all the functionalities.
I analysed the tables in the SQL Server database and found that the data is stored in the form of Node_ID and Bundle_Data columns, which is very difficult to analyse.
In short, it is not easy to interpret.
Based on the custom CMS, a new database model for the author instance (SQL Server 2012) has been developed.
Hence, as part of the migration task, I am trying to migrate the old data stored in SQL Server under the Apache Jackrabbit content repository implementation to a normal SQL Server 2012 database (as per the new database model).
Can anyone tell me whether there are any proven methods or tools available to accomplish this task?
The question is more on the Jackrabbit side, not so much on the Magnolia side, especially since you want to replace Magnolia entirely, not just the persistence layer:
Now here we are trying to replace the existing Magnolia CMS with a
custom ASP.NET MVC 5 application with all the functionalities.
My question, though, is whether you really want to replace Jackrabbit entirely, or keep using Jackrabbit with your ASP.NET application but with an MS SQL Server datastore (which would be my personal suggestion). Otherwise you will lose all the benefits that Jackrabbit offers.
Jackrabbit does support SQL Server and I would suggest using it.
https://wiki.apache.org/jackrabbit/DataStore#Configuration-1:
Currently supported are: db2, derby, h2, mssql, mysql, oracle,
sqlserver.
Developing a WebCMS with just ASP.NET and SQL Server and without a content repository layer in between sounds like developing everything that a WebCMS usually comes with from scratch, especially if you want to have all the functionality that Magnolia offers (versioning, history, search, etc.).
You can check the details of the Jackrabbit data store here: http://wiki.apache.org/jackrabbit/DataStore. I am wondering, though, why you or your customer would want to change the data store of the content repository to SQL Server. I guess you are not speaking of using SQL Server only for the persistence of the metadata, but really to store the binary content as well (a mistake that, by the way, OpenCms, another Java-based open-source WebCMS, made in its architecture design, imho).
Note that usually large files are not stored in the database itself (with Magnolia), but on the file system.
https://wiki.magnolia-cms.com/display/WIKI/Setting+up+a+Jackrabbit+persistence+manager#SettingupaJackrabbitpersistencemanager-Datastorageandbackup:
BLOBs are not by default stored in the database when they exceed a
certain threshold defined in your Jackrabbit configuration - instead
they are saved on the file system. The default threshold used by a
Magnolia installation is 1024 bytes. All files above the defined
threshold are put onto the filesystem and not in the database.
In case you really want to get rid of Jackrabbit entirely, use only SQL Server as the persistence layer, and store all binary content in it regardless of size (not recommended), I would write a custom export/import script for it, which queries the Jackrabbit repo (standard CMIS protocol), takes the content from the file system, reads it as a FileInputStream and writes it to the database as a BLOB (the linked example shows the Oracle equivalent: http://www.java2s.com/Code/Java/Database-SQL-JDBC/StoreBLOBsdataintodatabase.htm). This would be my suggested method.
I don't think there are any out-of-the-box tools for that.
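
A minimal C# sketch of the import half of such a script is shown below; the Content table, its columns and the export folder layout are assumptions for the example, and how you enumerate the exported binaries depends on how you pull them out of Jackrabbit:

    // Illustrative only: target table and folder layout are assumed, not given.
    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    public static class JackrabbitImport
    {
        private const string ConnectionString =
            "Data Source=.;Initial Catalog=NewCms;Integrated Security=True";

        public static void ImportFolder(string exportFolder)
        {
            using (var connection = new SqlConnection(ConnectionString))
            {
                connection.Open();

                foreach (var file in Directory.EnumerateFiles(exportFolder))
                {
                    // The binary payload exported from the Jackrabbit data store.
                    byte[] data = File.ReadAllBytes(file);

                    using (var command = new SqlCommand(
                        "INSERT INTO Content (Id, FileName, Data) VALUES (@id, @name, @data)",
                        connection))
                    {
                        command.Parameters.AddWithValue("@id", Guid.NewGuid());
                        command.Parameters.AddWithValue("@name", Path.GetFileName(file));
                        // -1 length = varbinary(max)
                        command.Parameters.Add("@data", SqlDbType.VarBinary, -1).Value = data;
                        command.ExecuteNonQuery();
                    }
                }
            }
        }
    }

The node metadata (paths, property values) would still have to be exported separately, for example via a CMIS query or Jackrabbit's XML export, and mapped onto the new relational model by hand.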

How to pass data using Entity Framework and WCF

I'm trying to develop a .NET 4 application using C#, WCF and Entity Framework. My first idea was to pass the EF-generated objects through WCF (first with the default entity objects, then with the POCO entities), but I soon got several connection problems ("connection is closed") due to non-serializable objects in the generated entities. I ended up writing several data-only classes to host the data queried with EF, but now I fail to see the role of EF with WCF. I guess I'm doing something wrong, so how do you send data through WCF using EF? What is the point of EF? Wouldn't it be easier to write stored procs and standard ADO.NET...?
Entity Framework is just a data access technology. You can create a data access layer that talks to your database and returns the required data using Entity Framework, and then plug that into your WCF service so that the service gets its data from it. You can use the same data access layer with any other consumer (a Silverlight application, a Windows Forms project or an MVC application). The advantage of using Entity Framework is that it loads the data into your domain objects (your POCO classes) so that you do not need to do it manually yourself. In the case of a stored proc, you need to execute the stored proc and iterate through the DataReader/DataTable to fill your objects, and you have to write that code. If you use Entity Framework, EF does this for you, so you save some development time.
You should clearly separate your project logically, so that there is a data access layer and a consumer that consumes it (your WCF service).
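
For illustration, a minimal sketch of that layering; ShopContext, Customer and CustomerDto are example names rather than anything from the question, and the DTO is what crosses the WCF boundary instead of the EF entity:

    // Illustrative only: EF entity -> DTO -> WCF contract, with invented names.
    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // EF entity - stays inside the data access layer.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }

    // Data-only contract that actually travels over WCF.
    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    // Data access layer: EF fills the entities, we project them into DTOs.
    public class CustomerRepository
    {
        public List<CustomerDto> GetCustomers()
        {
            using (var context = new ShopContext())
            {
                return context.Customers
                    .Select(c => new CustomerDto { Id = c.Id, Name = c.Name })
                    .ToList();
            }
        }
    }

    // The WCF service is just a thin consumer of the data access layer.
    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        List<CustomerDto> GetCustomers();
    }

    public class CustomerService : ICustomerService
    {
        public List<CustomerDto> GetCustomers()
        {
            return new CustomerRepository().GetCustomers();
        }
    }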

Best way of migrating customised metadata associated with source components into a Tridion environment

If we are migrating content from a source content management system to Tridion, what is the best way of migrating the customised metadata associated with the components (content) of the source system into Tridion? Should we migrate it directly into SQL Server, or is there an option to migrate it in the form of some XML file, etc.?
Migrating directly into SQL Server is unsupported, and the entire system would be unsupported at that point, due to possible data consistency issues.
The most straightforward way is to read the data from the source system, and use the Tridion API to recreate the item.
If migrating metadata, some of the data would likely fit best into a taxonomy, which would mean you'd want to migrate the keywords / structure first, then tag the content as it came into Tridion.
You have a few options when migrating content into Tridion.
I can't tell from the above whether you are talking about using SQL Server as an intermediate format, or importing directly into the Tridion database. Importing directly into the Tridion database is definitely not a supported solution, and could lead to unpredictable results.
You need to use the API: either the Core Service or the TOM.NET API (if you have Tridion 2011), or the old TOM API if not.
A popular approach is to export all content into an XML format that you can then process with a .NET application.
There are some good articles on migrating content into Tridion by Ryan Durkin here, and by Nuno Linhares here.
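
As a concrete illustration of the export-then-process approach, here is a minimal sketch; the XML layout, the element names and the ITridionImporter interface are all assumptions for the example, and a real implementation would replace the console stub with calls to the Core Service or TOM.NET client (creating keywords first, then components, as suggested above):

    // Illustrative only: export format and importer interface are hypothetical.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    public interface ITridionImporter
    {
        // A real migration would implement these on top of the Core Service client.
        void CreateKeyword(string categoryTitle, string keywordTitle);
        void CreateComponent(string title, IDictionary<string, string> metadata);
    }

    // Stand-in implementation so the sketch runs without a Tridion instance.
    public class ConsoleImporter : ITridionImporter
    {
        public void CreateKeyword(string categoryTitle, string keywordTitle)
            => Console.WriteLine($"keyword: {categoryTitle}/{keywordTitle}");

        public void CreateComponent(string title, IDictionary<string, string> metadata)
            => Console.WriteLine($"component: {title} ({metadata.Count} metadata fields)");
    }

    public static class Migration
    {
        public static void Run(string exportFile, ITridionImporter importer)
        {
            var export = XDocument.Load(exportFile);

            // Pass 1: create the taxonomy (keywords) first.
            foreach (var keyword in export.Descendants("keyword"))
                importer.CreateKeyword((string)keyword.Attribute("category"),
                                       (string)keyword.Attribute("title"));

            // Pass 2: create the components and attach their metadata.
            foreach (var item in export.Descendants("item"))
            {
                var metadata = item.Elements("meta")
                                   .ToDictionary(m => (string)m.Attribute("name"),
                                                 m => m.Value);
                importer.CreateComponent((string)item.Attribute("title"), metadata);
            }
        }
    }
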
As mentioned before, migrating directly into the database is not an option if you are planning to use SDL Tridion as the final CMS.
Apart from the supported mechanism you choose for the migration, pay attention to how you are going to structure the metadata in the new CMS; depending on the volume, structure, hierarchy and relations across metadata items, the process can become complex.
Also pay special attention to the BluePrint concept, as you can probably merge duplicated values from the old system into a single value that is inherited.
Don't think only about how to put the metadata into the system, but also about how that metadata will be used and maintained in the new CMS, in this case SDL Tridion.
You can also check a recent post about migration, and planning a migration in general, in case it adds some more information:
Can we automate migrating to SDL Tridion?