Spring Boot: switching from an in-memory database to a persistent database (IntelliJ IDEA)

I have developed my web application using spring-boot and spring-data-jpa and an in-memory database, and I have a couple of questions:
How can I now switch to a persistent database, let's say MySQL? What do I have to change in my configuration?
Can spring-boot set up a database for me on a specific port, and where does it get stored in my file system?
Does IntelliJ provide a datasource browser for the created database?
I am sure this must be covered somewhere in the endless jungle of spring-boot documentation.

You can change the application properties for the datasource according to the link Gabor Bakos already provided.
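As a minimal sketch, the datasource entries in application.properties for MySQL might look like this (host, schema name, and credentials are placeholders for your own setup):

    # assumes a MySQL server you have installed yourself, listening on localhost:3306
    spring.datasource.url=jdbc:mysql://localhost:3306/mydb
    spring.datasource.username=dbuser
    spring.datasource.password=dbpass
    spring.datasource.driver-class-name=com.mysql.jdbc.Driver
    # let Hibernate create/update the schema from your JPA entities
    spring.jpa.hibernate.ddl-auto=update

You also need the MySQL JDBC driver (mysql-connector-java) on your classpath for this to work.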
That depends on the type of database you want to use. HSQLDB and H2 allow you to specify a file path for the database file; however, the database instance itself still runs within your application process. With a full RDBMS like MySQL, you have to install and configure the MySQL server yourself and provide the connection data to your Spring Boot application.
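For example, with H2 the switch from in-memory to file-based storage is just a different JDBC URL (the path below is illustrative):

    # H2 keeps its database files under ./data/ relative to the working directory
    spring.datasource.url=jdbc:h2:file:./data/mydb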
Yes, IntelliJ has a datasource browser for all major databases (you may have to download the database driver first).

Related

How to connect a SQL database to an Ignite cluster to sync data?

I am new to Apache Ignite. I created an Ignite cluster and connected my Node.js thin client to it. It works fine, but it only creates the caches specified in my JS file. Now I want to sync my SQL Server data with Ignite. Any idea how I can do that?
I also tried GridGain, but it does not allow me to create a free cluster.
Please refer to the 3rd Party Persistence documentation regarding RDBMS integration.
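The documented approach is to configure a CacheStore on the server nodes so that reads and writes go through to the database. A rough Java sketch of a JDBC POJO store, assuming a PERSON table with ID and NAME columns (table, cache, and connection details are illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
    import org.apache.ignite.cache.store.jdbc.JdbcType;
    import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
    import org.apache.ignite.configuration.CacheConfiguration;
    import com.microsoft.sqlserver.jdbc.SQLServerDataSource;

    public class IgnitePersonCache {
        public static class Person {
            private Integer id;
            private String name;
            public Integer getId() { return id; }
            public void setId(Integer id) { this.id = id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        public static void main(String[] args) {
            // Map the PERSON table onto cache keys and values
            JdbcType personType = new JdbcType();
            personType.setCacheName("personCache");
            personType.setDatabaseTable("PERSON");
            personType.setKeyType(Integer.class);
            personType.setValueType(Person.class);
            personType.setKeyFields(new JdbcTypeField(java.sql.Types.INTEGER, "ID", Integer.class, "id"));
            personType.setValueFields(new JdbcTypeField(java.sql.Types.VARCHAR, "NAME", String.class, "name"));

            CacheJdbcPojoStoreFactory<Integer, Person> storeFactory = new CacheJdbcPojoStoreFactory<>();
            storeFactory.setTypes(personType);
            storeFactory.setDataSourceFactory(() -> {
                // Hypothetical SQL Server connection details -- replace with your own
                SQLServerDataSource ds = new SQLServerDataSource();
                ds.setURL("jdbc:sqlserver://localhost:1433;databaseName=mydb");
                ds.setUser("user");
                ds.setPassword("password");
                return ds;
            });

            CacheConfiguration<Integer, Person> cacheCfg = new CacheConfiguration<>("personCache");
            cacheCfg.setCacheStoreFactory(storeFactory);
            cacheCfg.setReadThrough(true);   // cache misses are loaded from the database
            cacheCfg.setWriteThrough(true);  // cache updates are written back to the database

            Ignite ignite = Ignition.start();
            ignite.getOrCreateCache(cacheCfg);
        }
    }

With read-through and write-through enabled, the cache and the database stay in sync; for an initial bulk load there is also IgniteCache#loadCache.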
GridGain Web Console can help you set up database integration by generating Maven project corresponding to your RDBMS data model.
GridGain Community Edition is free to use as long as you deploy it on your own. The same integration is also supported by stock Apache Ignite.

Pentaho Data Integration: how to move transformations from one server to another

What's the best practice for migrating Pentaho jobs/transformations from one server to another?
We've set up DEV, QA, UAT, and Production PDI servers with Carte running on AWS, and the developers on our team use the Community Edition to develop and test locally against a local Carte service.
The servers use a database repository and the local PCs use file-based repositories.
Typically, when we migrate a transformation we have to export the XML, find the pieces belonging to that transformation/job, and import them into the target servers.
I don't think this is a good practice, considering we are moving toward CI/CD along with our other Java/JS code.
Please advise a better way to do the migration.
Thanks,
Martin
I think your issue is less about migrating from one server to another, and more about migrating from one repository type to another. Do you have a compelling reason to use different repository types?
We use file-based repositories for all environments, and a directory synchronization tool for migrations. We went with file-based repositories so our source control system could be used with them.

ZF2 DB-based session management vs Redis

I am not sure if this is the right place to post this.
I am writing a PHP and ZF2 based website that needs to be scalable, so I am looking into database-based sessions. I understand ZF2 supports DB session management, so I can create a MySQL DB and use it. But DB session management could be slow, so I have looked into Redis as a cache management solution.
My question is: will using Redis as a standalone server work for both server-side session management and as a cache solution (as it seems to have its own in-memory DB), or do I need to combine it with ZF2 DB session management?

Change DB server at runtime

I have a large VB.NET project running on SubSonic 2.2.
I am looking for a way to change the connection string for SubSonic at runtime, based on a separate configuration file (basically, connect using the server information in my own configuration file).
How can this be accomplished?

Pentaho Report Designer - Dynamic Data Sources

I have a local instance of Pentaho Report Designer running on my box, and it has local development databases configured as its data sources (two datasource configs, both pointing at the same local database server: source and target databases).
Obviously, when I publish this report to the production BI server the reports fail because my local datasources are no longer reachable.
Clearly, configuring the report to rely on the production databases would resolve any identity crises, but I live in the sticks, so the network is slow and I don't want to impact the production DB for development purposes.
In Kettle, I have updated the kettle.properties file to provide localized datasource variables (Great for unit testing my transformations!) and was wondering if there is a similar method for localizing variables in PRD?
In PRD, you use JNDI connections to get the same sort of abstraction. You can find the JNDI configuration in $HOME/.pentaho/simple-jndi. Create a datasource there and a datasource with the same name in the BI server's admin console. Then define your connection with the "JNDI" connection type and reference the name you gave your datasources.
Then, depending on whether you run locally or on the server, the engine will look up the connection info from the runtime context.
But one warning: given that SQL is not a real standard, make sure that your local and remote environments use the same database type. Otherwise, if you, for instance, use MySQL on the client and Oracle on the server, the SQL created for MySQL will not be accepted by the Oracle driver, and vice versa.
On Windows you find the JNDI config file here:
C:\Users\(username)\.pentaho\simple-jndi
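A default.properties entry in that directory looks roughly like this (the datasource name and connection details are placeholders; use the same name in the admin console on the server):

    mydatasource/type=javax.sql.DataSource
    mydatasource/driver=com.mysql.jdbc.Driver
    mydatasource/url=jdbc:mysql://localhost:3306/sourcedb
    mydatasource/user=devuser
    mydatasource/password=devpass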