Best practices for prepopulated tables via OrmLite in ServiceStack

I'm generating tables via OrmLite and I was wondering about best practices for prepopulating tables. Example tables - countries, states, cities, etc.
I can think of a few ways to pre-populate tables:
Seed DB
API (when possible)
Static file
In code
Separate project
However, in some cases the data could get large, as with cities around the world, so keeping it in code is not viable.
I could also consider generating the tables that need to be pre-populated via a separate project, where I can fetch data from a source and get it into the DB.
However, I was wondering about the scenario where you do generate them via an ORM (especially in production). How would you approach the problem?
This must be a common problem across all ORMs.

If it's only code tables like countries, states, etc., they're small enough to still have them as part of the project. Normally I'd create a separate static class called SeedData with all the data in POCOs:
1. Maintaining Code Tables in Host Project
public static class SeedData
{
    public static List<Country> Countries
    {
        get { return new List<Country> { new Country(...), ... }; }
    }
}
Then in your AppHost's Configure() add a flag on whether to re-create them on startup, e.g.:
public void Configure(Container container)
{
    var appSettings = new AppSettings(); //Read from Web.config <appSettings/>
    if (appSettings.Get("RecreateTables", false))
    {
        using (var db = container.Resolve<IDbConnectionFactory>().Open())
        {
            db.DropAndCreateTable<Country>();
            db.InsertAll(SeedData.Countries);
            ...
        }
    }
}
Change AppSetting to recreate tables
This will then let you re-create the tables and re-populate the data when you change the RecreateTables appSetting to True, e.g:
<appSettings>
    <add key="RecreateTables" value="True" />
</appSettings>
Since ASP.NET's default behavior is to automatically restart the AppDomain when Web.config changes, just saving the file is enough to restart your ASP.NET application the next time any page gets requested.
2. Add to Test Project in an ad hoc Explicit Test
If the data gets too big to fit in the working project, I would first move it to a separate test project inside an [Explicit] test fixture (so it's never automatically run) that you can easily run manually, e.g.:
[Explicit]
[TestFixture]
public class AdminTasks
{
    [Test]
    public void Recreate_and_populate_tables()
    {
        var dbFactory = new OrmLiteConnectionFactory(...);
        using (var db = dbFactory.Open())
        {
            db.DropAndCreateTable<Country>();
            db.InsertAll(SeedData.Countries);
            ...
        }
    }
}
3. Save Data in External Static Text Files
Finally, if the data is even too big to fit in C# classes, I would save it out to a static file in the test project that you can easily re-hydrate into POCOs and populate with OrmLite, e.g.:
[Test]
public void Recreate_and_populate_tables()
{
    var dbFactory = new OrmLiteConnectionFactory(...);
    using (var db = dbFactory.Open())
    {
        db.DropAndCreateTable<Country>();
        var countries = File.ReadAllText("~/countries.txt".MapAbsolutePath())
            .FromJson<List<Country>>();
        db.InsertAll(countries);
        ...
    }
}
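A companion test can produce that file in the first place; a minimal sketch, assuming the SeedData class from option 1 and ServiceStack.Text's ToJson() extension:
[Test]
public void Export_seed_data_to_file()
{
    // Serialize the in-memory seed data to JSON and write it to the
    // same path the test above re-hydrates from
    var json = SeedData.Countries.ToJson();
    File.WriteAllText("~/countries.txt".MapAbsolutePath(), json);
}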

Related

ABP IO: code sample for running multiple databases for multi-tenancy

Please notice that I am talking about ABP.io, not the Boilerplate framework.
The built-in free Tenant Management module is designed to work with multiple tenants and a single shared database. However, the documentation says the framework has built-in, friendly support for the multiple-database approach, including:
new dbContext
database migration and seeding
Connection String service
I am new to ABP.io, and I want a sample that employs the framework elements to implement a single database for every tenant.
I got started by overriding the CreateAsync method of the Tenant Management module as follows.
[Dependency(ReplaceServices = true)]
[ExposeServices(typeof(ITenantAppService), typeof(TenantAppService), typeof(ExtendedTenantManagementAppService))]
public class ExtendedTenantManagementAppService : TenantAppService
{
    public ExtendedTenantManagementAppService(ITenantRepository tenantRepository,
        ITenantManager tenantManager,
        IDataSeeder dataSeeder) : base(tenantRepository, tenantManager, dataSeeder)
    {
        LocalizationResource = typeof(WorkspacesManagerResource);
        ObjectMapperContext = typeof(WorkspacesManagerApplicationModule);
    }

    public override async Task<TenantDto> CreateAsync(TenantCreateDto input)
    {
        var tenant = await TenantManager.CreateAsync(input.Name);
        input.MapExtraPropertiesTo(tenant);

        await TenantRepository.InsertAsync(tenant);
        await CurrentUnitOfWork.SaveChangesAsync();

        using (CurrentTenant.Change(tenant.Id, tenant.Name))
        {
            //TODO: Handle database creation?
            // create database
            // migrate
            // seed with essential data
            await DataSeeder.SeedAsync(
                new DataSeedContext(tenant.Id)
                    .WithProperty("AdminEmail", input.AdminEmailAddress)
                    .WithProperty("AdminPassword", input.AdminPassword)
            );
        }

        return ObjectMapper.Map<Tenant, TenantDto>(tenant);
    }
}
Any code sample?
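One possible shape for that TODO, offered as a hedged sketch rather than the framework's official recipe: it assumes EF Core, a hypothetical "ConnectionString" extra property on the DTO, and an injected IDbContextProvider<MyProjectDbContext> (here _dbContextProvider); ABP's Tenant entity does expose SetDefaultConnectionString, and EF Core's MigrateAsync creates the database if it doesn't exist.
// Hedged sketch: persist the tenant's own connection string first
// ("ConnectionString" is a hypothetical extra property on the DTO)
tenant.SetDefaultConnectionString(input.GetProperty<string>("ConnectionString"));
await TenantRepository.UpdateAsync(tenant);

using (CurrentTenant.Change(tenant.Id, tenant.Name))
{
    // create database + migrate: MigrateAsync creates the tenant database
    // if it does not exist and applies all pending EF Core migrations
    var dbContext = await _dbContextProvider.GetDbContextAsync();
    await dbContext.Database.MigrateAsync();

    // seed with essential data, as in the original method
    await DataSeeder.SeedAsync(
        new DataSeedContext(tenant.Id)
            .WithProperty("AdminEmail", input.AdminEmailAddress)
            .WithProperty("AdminPassword", input.AdminPassword)
    );
}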

Asp.net Boilerplate - Implement setting manager with database

I've been building an ASP.NET Core website using the ASP.NET Boilerplate template. So far, I've been storing all of the settings in the appsettings.json file. As the application gets bigger, I'm thinking I should start storing some settings via ABP's SettingProvider and ISettingStore.
My question is, does anyone have, or know of, a sample application that shows how to implement ISettingStore and store the settings in the database?
The only post I could find so far is this, but the link hikalkan supplies is broken.
Thanks for any help,
Joe
ABP stores settings in memory with default values. When you insert a new setting value into the database, it reads from the database and overrides the default value. So when the database has no settings, all the settings are at their default values. Setting values are stored in the AbpSettings table.
To start using the settings mechanism, create your own setting provider inherited from SettingProvider and initialize it in your module (e.g.
ModuleZeroSampleProjectApplicationModule).
As SettingProvider is automatically registered with dependency injection, you can inject ISettingManager wherever you want.
public class MySettingProvider : SettingProvider
{
    public override IEnumerable<SettingDefinition> GetSettingDefinitions(SettingDefinitionProviderContext context)
    {
        return new[]
        {
            new SettingDefinition(
                "SmtpServerAddress",
                "127.0.0.1"
                ),

            new SettingDefinition(
                "PassiveUsersCanNotLogin",
                "true",
                scopes: SettingScopes.Application | SettingScopes.Tenant
                ),

            new SettingDefinition(
                "SiteColorPreference",
                "red",
                scopes: SettingScopes.User,
                isVisibleToClients: true
                )
        };
    }
}
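The module initialization mentioned above would look roughly like this, following ASP.NET Boilerplate's documented setting configuration:
public class ModuleZeroSampleProjectApplicationModule : AbpModule
{
    public override void PreInitialize()
    {
        // Register the custom provider so its setting definitions are loaded
        Configuration.Settings.Providers.Add<MySettingProvider>();
    }
}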
In application services and controllers you don't need to inject ISettingManager
(because there's already a property-injected one) and you can directly use the SettingManager property. For example:
//Getting a boolean value (async call)
var value1 = await SettingManager.GetSettingValueAsync<bool>("PassiveUsersCanNotLogin");
Other classes (like domain services) can inject ISettingManager:
public class UserEmailer : ITransientDependency
{
    private readonly ISettingManager _settingManager;

    public UserEmailer(ISettingManager settingManager)
    {
        _settingManager = settingManager;
    }

    [UnitOfWork]
    public virtual async Task TestMethod()
    {
        // tenantAdmin is assumed to be a User entity obtained elsewhere
        var settingValue = _settingManager.GetSettingValueForUser("SmtpServerAddress", tenantAdmin.TenantId, tenantAdmin.Id);
    }
}
Note: to modify a setting you can use these SettingManager methods: ChangeSettingForApplicationAsync, ChangeSettingForTenantAsync and ChangeSettingForUserAsync.
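For example, persisting a new application-wide value (it ends up in the AbpSettings table; the address is just a placeholder):
// Writes the value to the database, overriding the provider's default
await SettingManager.ChangeSettingForApplicationAsync("SmtpServerAddress", "smtp.mycompany.com");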

How to link to a child site-map file from a parent site map in ASP.NET MVC4 using MvcSiteMapProvider?

I am using MvcSiteMapProvider by Maarten Balliauw with Ninject DI in MVC4. Being a large-scale web app, enumerating over the records to generate the sitemap XML accounts for 70% of the page load time. For that reason, I went with new sitemap files for each level-n dynamic node provider.
<mvcSiteMap xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="http://mvcsitemap.codeplex.com/schemas/MvcSiteMap-File-4.0"
            xsi:schemaLocation="http://mvcsitemap.codeplex.com/schemas/MvcSiteMap-File-4.0 MvcSiteMapSchema.xsd">
  <mvcSiteMapNode title="$resources:SiteMapLocalizations,HomeTitle" description="$resources:SiteMapLocalizations,HomeDescription" controller="Controller1" action="Home" changeFrequency="Always" updatePriority="Normal" metaRobotsValues="index follow noodp noydir">
    <mvcSiteMapNode title="$resources:SiteMapLocalizations,AboutTitle" controller="ConsumerWeb" action="Aboutus"/>
    <mvcSiteMapNode title="Sitemap" controller="Consumer1" action="SiteMap"/>
    <mvcSiteMapNode title=" " action="Action3" controller="Consumer2" dynamicNodeProvider="Comp.Controller.Utility.NinjectModules.PeopleBySpecDynamicNodeProvider, Comp.Controller.Utility" />
    <mvcSiteMapNode title="" siteMapFile="~/Mvc2.sitemap"/>
  </mvcSiteMapNode>
</mvcSiteMap>
But, it doesn't seem to work. For localhost:XXXX/sitemap.xml, the child nodes from Mvc2.sitemap don't seem to load.
siteMapFile is not a valid XML attribute in MvcSiteMapProvider (although you could use it as a custom attribute), so I am not sure what guide you are following to do this. But, the bottom line is there is no feature that loads "child sitemap files", and even if there was, it wouldn't help with your issue because all of the nodes are loaded into memory at once. Realistically on an average server there is an upper limit of around 10,000 - 15,000 nodes.
The problem that you describe is a known issue. There are some tips available in issue #258 that may or may not help.
We are working on a new XML sitemap implementation that will allow you to connect the XML sitemap directly to your data source, which can be used to circumvent this problem (at least as far as the XML sitemap is concerned). This implementation is stream-based and has paging that can be tied directly to the data source, and it will seamlessly page over multiple tables, so it is very efficient. However, although there is a working prototype, it is still some time off from being made into a release.
If you need it sooner rather than later, you are welcome to grab the prototype from this branch.
You will need some code to wire it into your application (this is subject to change for the official release). I have created a demo project here.
Application_Start
var registrar = new MvcSiteMapProvider.Web.Routing.XmlSitemapFeedRouteRegistrar();
registrar.RegisterRoutes(RouteTable.Routes, "XmlSitemap2");
XmlSitemap2Controller
using MvcSiteMapProvider.IO;
using MvcSiteMapProvider.Web.Mvc;
using MvcSiteMapProvider.Xml.Sitemap.Configuration;
using System.Web.Mvc;

public class XmlSitemap2Controller : Controller
{
    private readonly IXmlSitemapFeedResultFactory xmlSitemapFeedResultFactory;

    public XmlSitemap2Controller()
    {
        var builder = new XmlSitemapFeedStrategyBuilder();
        var xmlSitemapFeedStrategy = builder
            .SetupXmlSitemapProviderScan(scan => scan.IncludeAssembly(this.GetType().Assembly))
            .AddNamedFeed("default", feed => feed.WithMaximumPageSize(5000).WithContent(content => content.Image().Video()))
            .Create();
        var outputCompressor = new HttpResponseStreamCompressor();
        this.xmlSitemapFeedResultFactory = new XmlSitemapFeedResultFactory(xmlSitemapFeedStrategy, outputCompressor);
    }

    public ActionResult Index(int page = 0, string feedName = "")
    {
        var name = string.IsNullOrEmpty(feedName) ? "default" : feedName;
        return this.xmlSitemapFeedResultFactory.Create(page, name);
    }
}
IXmlSitemapProvider
And you will need one or more IXmlSitemapProvider implementations. For convenience, there is a base class XmlSitemapProviderBase. These are similar to creating controllers in MVC.
using MvcSiteMapProvider.Xml.Sitemap;
using MvcSiteMapProvider.Xml.Sitemap.Specialized;
using System;
using System.Linq;

public class CategoriesXmlSitemapProvider : XmlSitemapProviderBase, IDisposable
{
    private EntityFramework.MyEntityContext db = new EntityFramework.MyEntityContext();

    // This is optional. Don't override it if you don't want to use last modified date.
    public override DateTime GetLastModifiedDate(string feedName, int skip, int take)
    {
        // Get the latest date in the specified page
        return db.Category.OrderBy(x => x.Id).Skip(skip).Take(take).Max(c => c.LastUpdated);
    }

    public override int GetTotalRecordCount(string feedName)
    {
        // Get the total record count for all pages
        return db.Category.Count();
    }

    public override void GetUrlEntries(IUrlEntryHelper helper)
    {
        // Do not call ToList() on the query. The idea is that we want to force
        // EntityFramework to use a DataReader rather than loading all of the data
        // at once into RAM.
        var categories = db.Category
            .OrderBy(x => x.Id)
            .Skip(helper.Skip)
            .Take(helper.Take);

        foreach (var category in categories)
        {
            var entry = helper.BuildUrlEntry(string.Format("~/Category/{0}", category.Id))
                .WithLastModifiedDate(category.LastUpdated)
                .WithChangeFrequency(MvcSiteMapProvider.ChangeFrequency.Daily)
                .AddContent(content => content.Image(string.Format("~/images/category-image-{0}.jpg", category.Id)).WithCaption(category.Name));

            helper.SendUrlEntry(entry);
        }
    }

    public void Dispose()
    {
        db.Dispose();
    }
}
Note that there is currently no IXmlSitemapProvider implementation that reads the nodes from the default (or any) SiteMap, but creating one is similar to what is shown above, except you would query the SiteMap for nodes instead of a database for records.
Alternatively, you could use a 3rd party XML sitemap generator. Although, nearly all of them are set up in a non-scalable way for large sites, and most leave it up to you to handle the paging. If they aren't streaming the nodes, it will not realistically scale to more than a few thousand URLs.
The other detail you might need to take care of is to use the "forcing a match" technique to reduce the total number of nodes in the SiteMap. If you are using the Menu and/or SiteMap HTML helpers, you will need to leave all of your high-level nodes alone, but any node that does not appear in either is a good candidate for this. Realistically, nearly any data-driven site can be reduced to a few dozen nodes using this technique, but keep in mind that every node forced to match multiple routes in the SiteMap means that individual URL entries will need to be added in the XML sitemap.
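For reference, forcing a match is typically configured with preservedRouteParameters, so that a single node matches every value of a route parameter; a small sketch with illustrative node names:
<!-- One node matches /Category/Details/{id} for every id,
     instead of one node (and one URL entry) per record -->
<mvcSiteMapNode title="Category Details" controller="Category" action="Details" preservedRouteParameters="id" />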

MVC DbContext, how to connect to a table?

I'm just creating my first MVC application and am having trouble connecting to my database located on my SQL Server.
I have added the connection string to the web.config as normal and created a model with all the fields in it.
I created a model and a new DbContext as there wasn't one listed. This created the file below.
I'm not sure how it connects to the right table in my SQL DB; how do I do this?
Also, how do I make it run stored procedures?
Thanks
public class EquipmentDBContext : DbContext
{
    public EquipmentDBContext()
        : base("name=ITAPPConnectionString")
    {
    }

    public DbSet<Equipment> Equipments { get; set; }
}
public class EquipmentDBContext : DbContext
{
    public EquipmentDBContext()
        : base("name=ITAPPConnectionString") // this name should match the connection string name in Web.config
    {
    }

    public DbSet<Equipment> Equipments { get; set; }
}
Here you say you have a data model called Equipment. Your context also defines a single property, Equipments, which is of type DbSet<Equipment>. This property acts as a collection that allows you to query the data in your table in the database as though it were an in-memory collection of objects.
So, if you create an object of class EquipmentDBContext in a controller, named let's say db, then you can access the data in the table with something like
db.Equipments
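As for how it finds the right table: by convention, Code First maps the Equipment entity to a pluralized table name (Equipments). If your existing table is named differently you can map it explicitly with a data annotation; a short sketch (the table name is hypothetical):
using System.ComponentModel.DataAnnotations.Schema;

[Table("tblEquipment")] // hypothetical name of the existing table in SQL Server
public class Equipment
{
    public int Id { get; set; }
    public string Color { get; set; }
}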
To expand further on Cybercop's answer you would do something like this
using (var context = new EquipmentDBContext())
{
    var equipments = context.Equipments.ToList();
    var equipment = context.Equipments.FirstOrDefault(c => c.Id == 1);
    var blueThings = context.Equipments.Where(c => c.Color == "blue").ToList();
}
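The stored-procedure part of the question wasn't covered above; with EF6, one option is Database.SqlQuery, sketched here against a hypothetical procedure name:
using System.Data.SqlClient;

using (var context = new EquipmentDBContext())
{
    // Maps each row returned by the procedure onto the Equipment model
    var blueThings = context.Database.SqlQuery<Equipment>(
        "EXEC dbo.GetEquipmentByColor @Color",
        new SqlParameter("@Color", "blue")).ToList();
}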

Use MEF to compose parts but postpone the creation of the parts

As explained in these questions I'm trying to build an application that consists of a host and multiple task processing clients. With some help I have figured out how to discover and serialize part definitions so that I could store those definitions without having to have the actual runtime type loaded.
The next step I want to achieve (or next two steps really) is to split the composition of parts from the actual creation and connection of the objects (represented by those parts). So if I have a set of parts, I would like to be able to do the following (in pseudo-code):
public sealed class Host
{
    public CreationScript Compose()
    {
        CreationScript result;
        var container = new DelayLoadCompositionContainer(
            s => result = s);
        container.Compose();
        return result;
    }

    public static void Main()
    {
        var script = Compose();

        // Send the script to the client application
        SendToClient(script);
    }
}

// Lives inside other application
public sealed class Client
{
    public void Load(CreationScript script)
    {
        var container = new ScriptLoader(script);
        container.Load();
    }

    public static void Main(string scriptText)
    {
        var script = new CreationScript(scriptText);
        Load(script);
    }
}
That way I can compose the parts in the host application, but actually load the code and execute it in the client application. The goal is to put all the smarts of deciding what to load in one location (the host) while the actual work can be done anywhere (by the clients).
Essentially what I'm looking for is some way of getting the ComposablePart graph that MEF implicitly creates.
Now my question is whether there are any bits in MEF that would allow me to implement this kind of behaviour. I suspect the provider model may help me with this, but that is a rather large and complex part of MEF, so any guidelines would be helpful.
From lots of investigation it seems it is not possible to separate the composition process from the instantiation process in MEF, so I have had to create my own approach for this problem. The solution assumes that the scanning of plugins results in the type, import and export data being stored somehow.
In order to compose parts you need to keep track of each part instance and how it is connected to other part instances. The simplest way to do this is to make use of a graph data structure that keeps track of which import is connected to which export.
public sealed class CompositionCollection
{
    private readonly Dictionary<PartId, PartDefinition> m_Parts;
    private readonly Graph<PartId, PartEdge> m_PartConnections;

    public PartId Add(PartDefinition definition)
    {
        var id = new PartId();
        m_Parts.Add(id, definition);
        m_PartConnections.AddVertex(id);
        return id;
    }

    public void Connect(
        PartId importingPart,
        MyImportDefinition import,
        PartId exportingPart,
        MyExportDefinition export)
    {
        // Assume that edges point from the export to the import
        m_PartConnections.AddEdge(
            new PartEdge(
                exportingPart,
                export,
                importingPart,
                import));
    }
}
Note that before connecting two parts it is necessary to check if the import can be connected to the export. Normally MEF does that for us, but in this case we'll need to do it ourselves. An example of how to approach that is:
public bool Accepts(
    MyImportDefinition importDefinition,
    MyExportDefinition exportDefinition)
{
    if (!string.Equals(
        importDefinition.ContractName,
        exportDefinition.ContractName,
        StringComparison.OrdinalIgnoreCase))
    {
        return false;
    }

    // Determine what the actual type is we're importing. MEF provides us with
    // that information through the RequiredTypeIdentity property. We'll
    // get the type identity first (e.g. System.String)
    var importRequiredType = importDefinition.RequiredTypeIdentity;

    // Once we have the type identity we need to get the type information
    // (still in serialized format of course)
    var importRequiredTypeDef =
        m_Repository.TypeByIdentity(importRequiredType);

    // Now find the type we're exporting
    var exportType = ExportedType(exportDefinition);
    if (AvailableTypeMatchesRequiredType(importRequiredType, exportType))
    {
        return true;
    }

    // The import and export can't directly be mapped so maybe the import is a
    // special case. Try those
    Func<TypeIdentity, TypeDefinition> toDefinition =
        t => m_Repository.TypeByIdentity(t);

    if (ImportIsCollection(importRequiredTypeDef, toDefinition)
        && ExportMatchesCollectionImport(
            importRequiredType,
            exportType,
            toDefinition))
    {
        return true;
    }

    if (ImportIsLazy(importRequiredTypeDef, toDefinition)
        && ExportMatchesLazyImport(importRequiredType, exportType))
    {
        return true;
    }

    if (ImportIsFunc(importRequiredTypeDef, toDefinition)
        && ExportMatchesFuncImport(
            importRequiredType,
            exportType,
            exportDefinition))
    {
        return true;
    }

    if (ImportIsAction(importRequiredTypeDef, toDefinition)
        && ExportMatchesActionImport(importRequiredType, exportDefinition))
    {
        return true;
    }

    return false;
}
Note that the special cases (like IEnumerable<T>, Lazy<T>, etc.) require determining whether the importing type is based on a generic type, which can be a bit tricky.
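As an illustration of that trickiness, a minimal sketch of what a Lazy<T> check might look like, assuming the serialized type data has first been resolved back to a System.Type:
// Hedged sketch: true when the import's resolved type is Lazy<T>, in which
// case the export should be matched against T rather than against Lazy<T>
private static bool ImportIsLazy(Type importType)
{
    return importType != null
        && importType.IsGenericType
        && importType.GetGenericTypeDefinition() == typeof(Lazy<>);
}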
Once all the composition information is stored, it is possible to do the instantiation of the parts at any point in time because all the required information is available. Instantiation requires a generous helping of reflection combined with the trusty Activator class, and will be left as an exercise for the reader.
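To make the exercise slightly more concrete, a hedged starting point; the TypeName and MemberName members used here are hypothetical stand-ins for however the serialized definitions store that information:
// Hedged sketch: instantiate a part and satisfy one property import via
// reflection, assuming exportInstance was created the same way earlier
var partType = Type.GetType(definition.TypeName, throwOnError: true);
var instance = Activator.CreateInstance(partType);

// Assign the exporting instance to the importing property
var importProperty = partType.GetProperty(importDefinition.MemberName);
importProperty.SetValue(instance, exportInstance, null);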