ASP.NET Core 2.0: Detect Startup class invoked from migration or other EF operation - asp.net-core

At the moment, the entire default Startup.cs flow is executed on every database-related operation: dropping the database, adding a migration, updating the database to a migration, and so on.
I have heavy app-specific code in Startup that should be invoked only when the application runs for real. So how can I detect that the Startup class is being run from a migration or another database-related dotnet command?

Well, as already noted in a comment on the question, there is an IDesignTimeDbContextFactory interface that needs to be implemented to resolve the DbContext at design time.
It could look somewhat like this:
public static class Program
{
    // ...
    public static IWebHost BuildWebHostDuringGen(string[] args)
    {
        return WebHost.CreateDefaultBuilder(args)
            .UseStartup<StartupGen>() // a different, less complex Startup subclass used only at design time
            .UseDefaultServiceProvider(options => options.ValidateScopes = false)
            .Build();
    }
}

public class DbContextFactory : IDesignTimeDbContextFactory<MyDbContext>
{
    public MyDbContext CreateDbContext(string[] args)
    {
        return Program.BuildWebHostDuringGen(args).Services.GetRequiredService<MyDbContext>();
    }
}
However, for reasons that remain unclear (I asked folks at Microsoft, but they didn't explain it to me), dotnet currently calls Program.BuildWebHost implicitly on every operation, even if it's private. That's why the standard flow was executed each time for the question's author. The workaround is to rename Program.BuildWebHost to something else, such as InitWebHost.
There is an issue filed for that, so it may be resolved in the 2.1 release or later.
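A minimal sketch of that workaround, assuming nothing beyond the rename is needed:

public static class Program
{
    public static void Main(string[] args)
    {
        InitWebHost(args).Run();
    }

    // Renamed from BuildWebHost so the tooling no longer discovers and invokes it implicitly
    public static IWebHost InitWebHost(string[] args)
    {
        return WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
    }
}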

The documentation is still a bit unclear as to why this occurs. I've yet to find a concrete answer as to why it runs Startup.Configure. In 2.0 it's recommended to move any migration/seeding code to Program.Main. Here's an example by bricelam on GitHub.
public static IWebHost MigrateDatabase(this IWebHost webHost)
{
    using (var scope = webHost.Services.CreateScope())
    {
        var services = scope.ServiceProvider;
        try
        {
            var db = services.GetRequiredService<ApplicationDbContext>();
            db.Database.Migrate();
        }
        catch (Exception ex)
        {
            var logger = services.GetRequiredService<ILogger<Program>>();
            logger.LogError(ex, "An error occurred while migrating the database.");
        }
    }
    return webHost;
}

public static void Main(string[] args)
{
    BuildWebHost(args)
        .MigrateDatabase()
        .Run();
}

Related

Register Hibernate 5 Event Listeners

I am working on a legacy non-Spring application that is being migrated from Hibernate 3 to Hibernate 5.6.0.Final (the latest at this time). I have never used Hibernate event listeners in my work, so this is quite new to me, and I am studying how they work in Hibernate 5.
Currently, in a test class, we have the code defined this way for Hibernate 3:
protected static Configuration createSecuredDatabaseConfig() {
    Configuration config = createUnrestrictedDatabaseConfig();
    config.setListener("pre-insert", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-update", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-delete", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-load", "com.app.server.services.db.eventlisteners.EkoSecurityHibernateEventListener");
    return config;
}
This is obviously no longer valid, and I believe I need to create a Hibernate Integrator, which I have done.
public class MyEventListenerIntegrator implements Integrator {

    @Override
    public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory,
            SessionFactoryServiceRegistry serviceRegistry) {
        EventListenerRegistry eventListenerRegistry = serviceRegistry.getService(EventListenerRegistry.class);
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_INSERT).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_UPDATE).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_DELETE).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_LOAD).appendListener(new MySecurityHibernateEventListener());
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory, SessionFactoryServiceRegistry serviceRegistry) {
        // nothing to clean up
    }
}
So, now I believe the next step is to add this to the session via the registry builder. I am using this website to help me:
https://www.boraji.com/hibernate-5-event-listener-example
Because we were using older Hibernate 3, we had code to create our session factory as follows:
protected static SessionFactory buildSessionFactory(Database db)
{
    if (db == null) {
        throw new NullPointerException("Database specifier cannot be null");
    }
    try {
        Configuration config = createSessionFactoryConfiguration(db);
        String url = config.getProperty("connection.url");
        String user = config.getProperty("connection.username");
        String password = config.getProperty("connection.password");
        try {
            String dbDriver = config.getProperty("hibernate.connection.driver_class");
            Class.forName(dbDriver);
            Connection conn = DriverManager.getConnection(url, user, password);
            conn.close(); // connectivity check only; don't leak the connection
        }
        catch (SQLException error) {
            logger.info("Didn't find driver, on QA or production, so it's okay to assume we have DB connection");
            error.printStackTrace();
        }
        SessionFactory sessionFactory = config.buildSessionFactory();
        sessionFactoryConfigs.put(sessionFactory, config); // Cannot recover config from factory instance, must be stored.
        return sessionFactory;
    }
    catch (Throwable ex) {
        // Make sure you log the exception, as it might be swallowed
        logger.error("Initial SessionFactory creation failed.", ex);
        throw new ExceptionInInitializerError(ex);
    }
}
The link that I referred to above creates the session factory in a much different way, so I'll be testing that out to see if it works in our app.
Without Spring handling our sessions and transactions, this app codes them by hand, the way it was done before Spring; I haven't seen that kind of code in years.
I solved this issue with help from the link above, though I didn't copy exactly what they did; some of it helped. My solution is as follows:
protected static SessionFactory createSecuredDatabaseConfig() {
    Configuration config = createUnrestrictedDatabaseConfig();
    BootstrapServiceRegistry bootstrapRegistry =
        new BootstrapServiceRegistryBuilder()
            .applyIntegrator(new MyEventListenerIntegrator())
            .build();
    ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder(bootstrapRegistry)
        .applySettings(config.getProperties())
        .build();
    SessionFactory sessionFactory = config.buildSessionFactory(serviceRegistry);
    return sessionFactory;
}
This was it. I tried multiple different ways to register the events without the BootstrapServiceRegistry, but none of those worked. I did have to create the integrator. What I did NOT include was the following:
MetadataSources sources = new MetadataSources(serviceRegistry)
    .addPackage("com.myproject.server.model");
Metadata metadata = sources.getMetadataBuilder().build();
// did not create the sessionFactory this way
sessionFactory = metadata.getSessionFactoryBuilder().build();
If I had gone further and used this method to create the sessionFactory, all of my queries would have started complaining about not being able to find the parameter name, which is a separate issue.
The Hibernate Integrator and this way of creating the sessionFactory are all for the unit tests. Without registering these events, one unit test would fail, and now it doesn't. So this solves my problem for now.
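For completeness (not something used in the solution above): Hibernate 5 can also discover an Integrator through the standard Java ServiceLoader mechanism, by listing the implementing class in a service file on the classpath. The package below is assumed to match the listeners above:

In META-INF/services/org.hibernate.integrator.spi.Integrator:
com.app.server.services.db.eventlisteners.MyEventListenerIntegrator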

How to share the Camel context between 2 different applications or WARs

I have created 2 different applications and started the Camel context in one of them. How do I use this already-started context in the second application?
I tried getting the context by using lookUpByname() and binding the Camel context to a JNDI context, but I could not load the existing context.
I also tried setting the NameStrategy on the context in application 1 and fetching it by the same name in application 2, but it looks like Camel auto-generates the name and prefix in DefaultCamelContextNameStrategy.
Code snippet:
Application 1:
public static void main(String[] args) throws Exception
{
    CamelContext ctx = new DefaultCamelContext();
    String camelContextId = "sample";
    ctx.setNameStrategy(new DefaultCamelContextNameStrategy(camelContextId));
    ctx.start();
}
Application 2:
public static void main(String[] args)
{
    sampleRouter testobj = new sampleRouter();
    testobj.test();
}

public class sampleRouter extends RouteBuilder
{
    public static CamelContext camelContext;

    public void test()
    {
        camelContext = getContext();
        try {
            camelContext.stop();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void configure() throws Exception {
        // routes would go here
    }
}
Please guide me on how to get the already-started context from a different application, as I want to avoid creating a new context every time.
Why do you want to avoid having multiple CamelContexts? What goal are you trying to accomplish?
Without a clear requirement it's not easy to help you; however, I'll try to suggest a couple of ideas.
Looking at your code, you are using two different JVMs, since you have two main methods.
If your applications run in different JVMs, use a JMS message broker like ActiveMQ as the communication layer.
If you deploy the 2 wars / applications in the same JVM, you can use two CamelContexts and have them communicate through VM endpoints, such as the vm (VM-scoped seda) and direct-vm components, as in the sketch below.
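A minimal sketch of that same-JVM option, assuming plain Camel with no container wiring (the endpoint name and timer route are illustrative):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class VmEndpointDemo {
    public static void main(String[] args) throws Exception {
        // First context (e.g. war #1) produces to a direct-vm endpoint
        CamelContext producer = new DefaultCamelContext();
        producer.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("timer:tick?period=5000")
                    .setBody(constant("ping"))
                    .to("direct-vm:shared"); // reaches consumers in other CamelContexts in this JVM
            }
        });

        // Second context (e.g. war #2) consumes from the same endpoint
        CamelContext consumer = new DefaultCamelContext();
        consumer.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct-vm:shared").log("Received: ${body}");
            }
        });

        consumer.start();
        producer.start();
        Thread.sleep(20000); // let a few exchanges flow, then shut down
        producer.stop();
        consumer.stop();
    }
}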

CRM 2011 - ITracingService getting access to the traceInfo at runtime

I have some custom logging in my plugin and want to include the contents of my tracingService in my custom logging (which is called within a catch block, before the plugin finishes).
I can't seem to access the content of tracingService. I wonder if it is accessible at all?
I tried tracingService.ToString() just in case the devs had provided a useful override; alas, as expected, I just get the name of the class, "Microsoft.Crm.Sandbox.SandboxTracingService".
Obviously Dynamics CRM makes use of the tracingService content towards the end of the pipeline if it needs to.
Anybody have any ideas on this?
Kind regards,
Gary
The tracing service does not provide access to the trace text during execution, but that can be overcome by creating your own implementation of ITracingService. Note: you cannot get any text that was written to the trace log before your plugin's Execute method was called, meaning that if you have multiple plugins firing, you won't get their trace output in the plugin that throws the exception.
public class CrmTracing : ITracingService
{
    ITracingService _tracingService;
    StringBuilder _internalTrace;

    public CrmTracing(ITracingService tracingService)
    {
        _tracingService = tracingService;
        _internalTrace = new StringBuilder();
    }

    public void Trace(string format, params object[] args)
    {
        if (_tracingService != null) _tracingService.Trace(format, args);
        _internalTrace.AppendFormat(format, args).AppendLine();
    }

    public string GetTraceBuffer()
    {
        return _internalTrace.ToString();
    }
}
Just instantiate it in your plugin, passing in the CRM-provided ITracingService. Since it's the same interface, it works the same if you pass it to other classes and methods.
public class MyPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var tracingService = new CrmTracing((ITracingService)serviceProvider.GetService(typeof(ITracingService)));
        tracingService.Trace("Works same as always.");
        var trace = tracingService.GetTraceBuffer();
    }
}
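Tying that back to the question's scenario, here is a hypothetical sketch of flushing the buffered trace from a catch block before the plugin finishes (WriteCustomLog stands in for whatever custom logging the plugin uses):

public void Execute(IServiceProvider serviceProvider)
{
    var tracingService = new CrmTracing((ITracingService)serviceProvider.GetService(typeof(ITracingService)));
    try
    {
        tracingService.Trace("Starting plugin work.");
        // ... plugin logic ...
    }
    catch (Exception ex)
    {
        // Include everything traced so far in the custom log entry
        WriteCustomLog(ex, tracingService.GetTraceBuffer());
        throw;
    }
}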
To get the traceInfo string from the tracing service at runtime, I used the debugger to interrogate the tracingService contents.
So the trace string is accessible from these expressions...
For plugins:
((Microsoft.Crm.Extensibility.PipelineTracingService)(tracingService)).TraceInfo
For custom workflow activities (CWA):
((Microsoft.Crm.Workflow.WorkflowTracingService)(tracingService)).TraceInfo
You can drill into the tracing service while debugging and extract the expression.
However, at design time neither of these expressions seems to be accessible from any of the standard CRM 2011 SDK DLLs... so I'm not sure whether it's possible as yet.

Ninject Moqing Kernel (what does reset do?)

I have an interface that I'm using with a couple of different concrete classes. What I wish existed is something like this...
_kernel.GetMock<ISerializeToFile>().Named("MyRegisteredName")
    .Setup(x => x.Read<ObservableCollection<PointCtTestDataInput>>(
        It.IsAny<string>()));
The project I'm working on uses the service locator pattern, an anti-pattern I'm growing less fond of all the time...
Originally I tried..
[ClassInitialize]
public static void ClassInitialize(TestContext testContext)
{
    _kernel = new MoqMockingKernel();
}

[TestInitialize]
public void TestInitialize()
{
    _kernel.Reset();
    ServiceLocator.SetLocatorProvider(
        () => new NinjectServiceLocator(_kernel));
    _kernel.Bind<ISerializeToFile>().ToMock()
        .InSingletonScope().Named("ObjectToFile");
    _kernel.GetMock<ISerializeToFile>()
        .Setup(x => x.Read<ObservableCollection<PointCtTestDataInput>>(
            It.IsAny<string>()));
    _kernel.GetMock<ISerializeToFile>()
        .Setup(x => x.Save<ObservableCollection<PointCtTestDataInput>>(
            It.IsAny<ObservableCollection<PointCtTestDataInput>>(),
            It.IsAny<string>()));
}
I got the standard Ninject error stating that more than one matching binding is available. So I moved _kernel = new MoqMockingKernel(); into TestInitialize, and then the error went away... Perhaps I'm guessing incorrectly at what _kernel.Reset() does?
Reset removes any cached instances. It does not delete existing bindings, so the second test ends up with the ISerializeToFile binding registered twice.
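In other words, recreating the kernel per test is the right fix. A minimal sketch, assuming the same MoqMockingKernel setup as above:

[TestInitialize]
public void TestInitialize()
{
    // Fresh kernel per test: Reset() only clears cached instances,
    // so re-binding on a reused kernel would register ISerializeToFile twice.
    _kernel = new MoqMockingKernel();
    ServiceLocator.SetLocatorProvider(() => new NinjectServiceLocator(_kernel));
    _kernel.Bind<ISerializeToFile>().ToMock()
        .InSingletonScope().Named("ObjectToFile");
}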

How to list JBoss AS 7 datasource properties in Java code?

I'm running JBoss AS 7.1.0.CR1b. I've got several datasources defined in my standalone.xml e.g.
<subsystem xmlns="urn:jboss:domain:datasources:1.0">
    <datasources>
        <datasource jndi-name="java:/MyDS" pool-name="MyDS_Pool" enabled="true" use-java-context="true" use-ccm="true">
            <connection-url>some-url</connection-url>
            <driver>the-driver</driver>
            [etc]
Everything works fine.
I'm trying to access the information contained here within my code - specifically the connection-url and driver properties.
I've tried getting the Datasource from JNDI, as normal, but it doesn't appear to provide access to these properties:
// catches removed
InitialContext context;
DataSource dataSource = null;
context = new InitialContext();
dataSource = (DataSource) context.lookup(jndi);
The ClientInfo and DatabaseMetadata obtained from a Connection from this DataSource don't contain these granular JBoss properties either.
My code will be running inside the container with the datasource specified, so everything should be available. I've looked at the IronJacamar interface org.jboss.jca.common.api.metadata.ds.DataSource and its implementing class, and these seem to have accessible hooks to the information I require, but I can't find any information on how to create such objects from the resources already deployed in the container (the only constructor on the impl involves inputting all properties manually).
JBoss AS 7's command-line interface allows you to navigate and list the datasources like a directory tree. http://www.paykin.info/java/add-datasource-programaticaly-cli-jboss-7/ provides an excellent post on how to use what I believe is the Java management API to interact with the subsystem, but this appears to involve connecting to the target JBoss server. My code is already running within that server, so surely there must be an easier way to do this?
Hope somebody can help. Many thanks.
What you're really trying to do is a management action. The best way is to use the management APIs that are available.
Here is a simple standalone example:
import java.io.Closeable;
import java.io.IOException;
import java.net.InetAddress;
import java.util.List;

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.as.controller.client.OperationBuilder;
import org.jboss.as.controller.client.helpers.ClientConstants;
import org.jboss.dmr.ModelNode;

public class Main {

    public static void main(final String[] args) throws Exception {
        final List<ModelNode> dataSources = getDataSources();
        for (ModelNode dataSource : dataSources) {
            System.out.printf("Datasource: %s%n", dataSource.asString());
        }
    }

    public static List<ModelNode> getDataSources() throws IOException {
        // Build a read-resource operation against the datasources subsystem
        final ModelNode request = new ModelNode();
        request.get(ClientConstants.OP).set("read-resource");
        request.get("recursive").set(true);
        request.get(ClientConstants.OP_ADDR).add("subsystem", "datasources");
        ModelControllerClient client = null;
        try {
            client = ModelControllerClient.Factory.create(InetAddress.getByName("127.0.0.1"), 9999);
            final ModelNode response = client.execute(new OperationBuilder(request).build());
            reportFailure(response);
            return response.get(ClientConstants.RESULT).get("data-source").asList();
        } finally {
            safeClose(client);
        }
    }

    public static void safeClose(final Closeable closeable) {
        if (closeable != null) try {
            closeable.close();
        } catch (Exception e) {
            // no-op
        }
    }

    private static void reportFailure(final ModelNode node) {
        if (!node.get(ClientConstants.OUTCOME).asString().equals(ClientConstants.SUCCESS)) {
            final String msg;
            if (node.hasDefined(ClientConstants.FAILURE_DESCRIPTION)) {
                if (node.hasDefined(ClientConstants.OP)) {
                    msg = String.format("Operation '%s' at address '%s' failed: %s", node.get(ClientConstants.OP), node.get(ClientConstants.OP_ADDR), node.get(ClientConstants.FAILURE_DESCRIPTION));
                } else {
                    msg = String.format("Operation failed: %s", node.get(ClientConstants.FAILURE_DESCRIPTION));
                }
            } else {
                msg = String.format("Operation failed: %s", node);
            }
            throw new RuntimeException(msg);
        }
    }
}
The only other way I can think of is to add a module that relies on the server's internals. It could be done, but I would probably try the management API first.