How to control DB schema (varchar length) of generated NServiceBus data tables - nhibernate

By default NServiceBus generates the DB schema from code (using NHibernate's ConventionModelMapper, I think). As a result, string properties are mapped to NVARCHAR(255) columns. Is there any way to change the NVARCHAR length?
I tried a manual DB schema change after creation, but that feels wrong.

It's possible to define a custom mapping for a single saga (without having to create custom mappings for all sagas). See https://docs.particular.net/samples/nhibernate/custom-mappings
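For example, a minimal sketch of such an override using NHibernate's mapping-by-code (MySagaData and its Comment property are made-up names; the linked sample shows the persistence picking up conformist mappings like this):

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

// Overrides only what differs from the conventions; everything else
// is still generated automatically.
public class MySagaDataMapping : ClassMapping<MySagaData>
{
    public MySagaDataMapping()
    {
        // Widen the generated column from the default NVARCHAR(255).
        Property(x => x.Comment, pm => pm.Length(4000));
    }
}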

Related

How to create a BigQuery table with required fields from Dataflow with a string schema definition?

I am using Dataflow's WriteToBigQuery with CREATE_IF_NEEDED, and thus have to specify the schema.
I define the schema at the beginning of my code (outside the actual pipeline), but since I need the --save_main_session flag, I get the same error as here: a BigQuery schema definition is not pickleable, so it cannot be passed along with the pipeline.
The solution mentioned on that page (disabling the --save_main_session flag) is not an option for me, so the remaining way to specify the schema is through a string.
However, I need to set some fields to REQUIRED. Is there a way to do this with the string schema definition?
As you can see in bigquery.py, the conversion from a string schema to a TableSchema is quite straightforward, and it does indeed set the mode of every field to NULLABLE. You could build the TableSchema yourself, based on that code, and mark the fields you need as REQUIRED.
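For instance, a rough sketch (untested) that accepts the same comma-separated 'name:type' string format but lets you flag selected fields as REQUIRED; the helper name and the simplified parsing are my own:

from apache_beam.io.gcp.internal.clients import bigquery

def schema_from_string(schema_string, required=()):
    # Build a TableSchema from 'name:type,name:type' text, mirroring the
    # conversion in bigquery.py, but set mode to REQUIRED for the fields
    # listed in `required` instead of defaulting everything to NULLABLE.
    table_schema = bigquery.TableSchema()
    for field_spec in schema_string.split(','):
        name, field_type = field_spec.split(':')
        field = bigquery.TableFieldSchema()
        field.name = name.strip()
        field.type = field_type.strip().upper()
        field.mode = 'REQUIRED' if field.name in required else 'NULLABLE'
        table_schema.fields.append(field)
    return table_schema

# Usage: schema_from_string('event_id:STRING,amount:FLOAT', required={'event_id'})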

Change field datatype in Cosmos DB

I have a field in Cosmos DB which is mapped as a number, but it should be a string. I'd like to alter the schema in place without reloading the data. Is this possible with a query, in the same way it can be achieved in SQL?
ALTER TABLE EVENTS
MODIFY COLUMN eventAmount varchar;
I have consulted the docs, but they only reference simple SQL commands.
Cosmos DB (formerly DocumentDB) is schemaless. There is no structure defined outside the documents themselves, so each document has its own schema. If you want certain documents to follow a certain structure, you must enforce that yourself in your application logic.
This means you cannot "alter the schema" of a collection to change data types.
What you can and should do is fix the documents you consider to have the wrong schema by updating them: query the documents where eventAmount is stored as a JSON number, and save each document back with the value stored as the corresponding string instead.
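For example, a rough sketch with the .NET SDK v3 (untested; the container variable is assumed to exist, the loop belongs inside an async method, and you may need to adjust the partition key handling to your container):

// Requires: using Microsoft.Azure.Cosmos; using Newtonsoft.Json.Linq;
// Find documents where eventAmount is still stored as a number
// (IS_NUMBER is a built-in Cosmos DB SQL function) and rewrite it as a string.
var query = new QueryDefinition("SELECT * FROM c WHERE IS_NUMBER(c.eventAmount)");
var feed = container.GetItemQueryIterator<JObject>(query);
while (feed.HasMoreResults)
{
    foreach (JObject doc in await feed.ReadNextAsync())
    {
        doc["eventAmount"] = doc["eventAmount"].ToString();
        await container.ReplaceItemAsync(doc, (string)doc["id"]);
    }
}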

Restrict SqlEntityConnection Type Provider to only certain tables

I am writing a utility for myself that needs to be able to access a pair of tables in a SQL database. I used the SqlEntityConnection type provider and was rewarded with the data I needed from the tables as easy-to-use entities.
One thing I noticed, though, was that startup and compile times for the project increased by quite a lot. I suspect this is because the database has over a hundred tables, and the provider is compiling and getting data from all of them as opposed to just the two I need. Is there a way to restrict the type provider to referencing only the needed tables in the schema?
type private EntityConnection = SqlEntityConnection<ConnectionString="Server=Server;Initial Catalog=Database;Integrated Security=SSPI;MultipleActiveResultSets=true", Pluralize = true>
let private context = EntityConnection.GetDataContext()
I have not tried this myself, but I think you could add a new "ADO.NET Entity Data Model" (.edmx) file to your project, let it generate from your existing database, and then delete from the model every table you don't want accessible to your code.
The EDMX designer will generate a *.csdl file that you can then reference from the LocalSchemaFile parameter of SqlEntityConnection. You'd use this parameter instead of ConnectionString.
The end result is that the entity provider would not automatically pick up changes to your database, but compilation times will go down, and only the tables you care about would be visible to your code.
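So the declaration would change to something like this (untested; the path to the trimmed schema file is made up, and you should check which GetDataContext overloads are available for supplying the connection string at run time):

// Compile against the trimmed local schema file instead of the live database.
type private EntityConnection =
    SqlEntityConnection<LocalSchemaFile = @"Model\DatabaseTrimmed.csdl", Pluralize = true>

// The connection string is now supplied at run time rather than compile time.
let private context =
    EntityConnection.GetDataContext("Server=Server;Initial Catalog=Database;Integrated Security=SSPI;MultipleActiveResultSets=true")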

NHibernate (and Fluent): Possible to prevent a specific table from being created via SchemaExport.Create?

I'm using Fluent NHibernate (and I'm a newbie). I have mapped a read-only table that already exists in the database (it's actually a view in the db). In addition, I have mapped new classes for which I want to create tables using SchemaExport.Create().
In my fluent mapping, I have specified "ReadOnly()" to mark the view as immutable. However, when I execute SchemaExport.Create(), it still tries to create the table so I get the error "There is already an object named 'vw_Existing'".
Is there a way to prevent NHibernate from trying to create that specific table?
I suppose I could export and modify the SQL (SetOutputFile), but it would be nice to use SchemaExport.Create().
Thanks.
You're looking for
SchemaAction.None();
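That is, in the ClassMap for the view, something like this (class and property names assumed):

public class ExistingViewMap : ClassMap<ExistingView>
{
    public ExistingViewMap()
    {
        Table("vw_Existing");
        ReadOnly();
        // Excludes this mapping from schema generation, so
        // SchemaExport.Create() no longer tries to create the view as a table.
        SchemaAction.None();
        Id(x => x.Id);
    }
}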

Doctrine schema changes while keeping data?

We're developing a Doctrine-backed website using YAML to define our schema. Our schema changes regularly (including FK relations), so we need to do a lot of the following:
Doctrine::generateModelsFromYaml(APPPATH . 'models/yaml', APPPATH . 'models', array('generateTableClasses' => true));
Doctrine::dropDatabases();
Doctrine::createDatabases();
Doctrine::createTablesFromModels();
We would like to keep existing data and store it back in the re-created database. So I copy the data into a temporary database before the main db is dropped.
How do I get the data from the "old-schema DB copy" into the "new-schema DB"? (The new schema only contains NEW columns; NO COLUMNS ARE REMOVED.)
NOTE:
This obviously doesn't work, because the column counts don't match:
INSERT INTO newscheme.Table SELECT * FROM copy.Table;
This does work, but it takes too much time to write out for every table:
INSERT INTO newscheme.Table SELECT old.col, old.col2, old.col3, 'somenewdefaultvalue' FROM copy.Table AS old;
Have you looked into Migrations? They allow you to alter your database schema in a programmatic way, without losing data (unless you remove columns, of course).
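A minimal sketch of a Doctrine 1.x migration class (table and column names are made up):

class AddEventComment extends Doctrine_Migration_Base
{
    public function up()
    {
        // Add the new column with a default value, leaving existing rows intact.
        $this->addColumn('events', 'comment', 'string', 255,
            array('default' => 'somenewdefaultvalue'));
    }

    public function down()
    {
        $this->removeColumn('events', 'comment');
    }
}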
How about writing a script (using the Doctrine classes, for example) that parses the YAML schema files (both the previous version and the "next" version) and generates the SQL scripts to run? It would be a one-time job and shouldn't require that much work. The benefit of generating manual migration scripts is that you can easily store them in the version control system and replay the version steps later on. If that's not something you need, you can just gather up the changes in code and apply them directly through the database driver.
Of course, the fancier your schema changes become, the harder the maintenance will get, e.g. column name changes, null to not null, etc.
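Doctrine 1.x can also do part of that diffing for you; a sketch (the paths are hypothetical, and I haven't verified how well it copes with the fancier changes):

// Generate migration classes from the difference between two schema versions,
// then bring the database up to date without dropping it.
Doctrine::generateMigrationsFromDiff(
    APPPATH . 'migrations',
    APPPATH . 'models/yaml-old',   // previous schema version
    APPPATH . 'models/yaml'        // new schema version
);
Doctrine::migrate(APPPATH . 'migrations');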