Adding new metadata to multiple legacy services - SQL

I have many legacy services, each with its own schema.
Currently, I want to add one field to those services to help organize things better.
Example: Field(account_id) -- get all items in service A for a given account_id
The most straightforward approach is to add the field to every legacy service's schema. The downside of this solution is that I would have to change all the old schemas and redeploy all of those legacy services.
Another solution is to create a service that maps account_id to legacy service IDs. The downside is that it increases complexity when I try to do search and pagination.
Example: I search on fields in a legacy service and also want to filter by account_id in the new service. During the search, the legacy service would have to return all results to the new service so it can filter by account_id and paginate.
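To make the pagination problem concrete, here is a minimal Python sketch of the mapping-service approach; `legacy_search` and `account_ids_for` are hypothetical stand-ins for calls to the legacy service and the mapping service:

```python
# Sketch of the mapping-service approach (hypothetical helpers).
# The legacy service knows nothing about account_id, so filtering
# happens after the fact, which defeats server-side pagination.

def search_with_account_filter(legacy_search, account_ids_for,
                               query, account_id, page, page_size):
    """legacy_search(query) -> list of dicts with a 'service_item_id' key.
    account_ids_for(item_ids) -> dict mapping item id -> account_id."""
    all_items = legacy_search(query)              # must fetch EVERY match
    mapping = account_ids_for([i["service_item_id"] for i in all_items])
    filtered = [i for i in all_items
                if mapping.get(i["service_item_id"]) == account_id]
    start = page * page_size
    return filtered[start:start + page_size]      # paginate only after filtering
```

Note that the full result set crosses the wire on every page request, which is exactly the cost the question is asking how to avoid.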
So my question is: if I want to avoid changing the legacy services, is there a way to solve the searching and filtering by account_id problem?
Thank you for reading.

Related

Integration questions when migrating a monolith to microservices using Quarkus

Currently I have a monolithic application with some modules, like financial and accounting. This application uses a single database and the modules are divided into schemas, so when I need to display the data in the user interface or in a report I just do a simple query with a couple of joins.
My question is: in a microservices structure where each module has its own database, how do I retrieve this data and get the same result as if I were on a single database?
When talking about splitting the database in the process of migrating a monolith to Microservices, there are some known patterns like:
The shared database
Database view
Database wrapping service
Database as a service
It seems the database view or the database-as-a-service pattern could be a candidate in this case, but of course no one is better placed than you to decide which one is viable in your project.
I highly recommend you to have a look at chapter 4 of "Monolith to Microservices" by Sam Newman.
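As a rough illustration of the database-view pattern (using an in-memory SQLite database, with table and column names invented for the example), the consuming module queries only the view, so the underlying schema can evolve behind it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Invented stand-ins for the monolith's financial and accounting schemas
    CREATE TABLE financial_tx (id INTEGER PRIMARY KEY, account_id INTEGER, amount REAL);
    CREATE TABLE accounting_account (id INTEGER PRIMARY KEY, name TEXT);

    -- The view is the published contract; the report module queries only this
    CREATE VIEW report_tx AS
    SELECT t.id, a.name AS account_name, t.amount
    FROM financial_tx t JOIN accounting_account a ON t.account_id = a.id;
""")
conn.execute("INSERT INTO accounting_account VALUES (1, 'Sales')")
conn.execute("INSERT INTO financial_tx VALUES (10, 1, 99.5)")
rows = conn.execute("SELECT account_name, amount FROM report_tx").fetchall()
print(rows)  # [('Sales', 99.5)]
```

The join still happens inside one database; the view only hides the table layout from consumers, which is why it works as a transitional step rather than a final microservice split.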

Transitioning from SQL-based data retrieval to Web Services

My customer is moving from providing data directly through SQL access to exposing the same data through a web service over the same DB; for political reasons they're cutting off direct DB access. They're using SOAP for the web service, but that's irrelevant to the issue. They come to me with requests that are vague in the sense that they don't know where the answer to their question lives, so I'm left with no option but to go poking around their data to find the correct data for their needs.
With SQL it's been more or less painless, write a simple query with a join or three and Bob's your uncle. In the worst case I have needed to select one row from the tables to see what they contain but that's not part of the actual data retrieval.
Now, with web services, I'm struggling to achieve the same. I feel like I have no tools to efficiently explore the data in a flexible way and each join from SQL requires a new query that includes manually merging results from the previous ones. A good example is
Show me all the users that are part of a service where the name begins with "FOO"
With SQL I would do a simple
SELECT
users.first_name,
users.last_name,
services.name
FROM users
LEFT JOIN services ON (users.service_id=services.id)
WHERE services.name LIKE 'FOO%'
With the SOAP API I'm forced to do a search for the services, write down the IDs, and then do a search for the users. The keyword here is efficiency: I can get the same results, but it takes so much more time that for anything more complex we might be talking about hours instead of minutes.
The question is two-fold:
Is there a more efficient way of achieving the same?
Are there tools that would ease the pain, even if it means the join equivalents would need manual configuration to make work but would then work transparently once configured?
So far I've been using SOAP-UI (and Postman when applicable) for exploring the data. With Postman the scripting helps a bit, as I can save intermediate results into variables and use them in subsequent calls. I feel like I have no option but to write a simple(ish) program that accesses the API and scripts the same searches. Java is the weapon of choice at the customer for multiple reasons, but it isn't the simplest way, so I'm also looking for recommendations regardless of the language.
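For reference, the manual merge can be scripted in a few lines; here is a sketch in Python (the customer prefers Java, but the shape is identical), with `find_services` and `find_users` as hypothetical wrappers around the two real SOAP searches:

```python
# Client-side equivalent of the SQL join: filter services by name prefix,
# then hash-join users onto them. find_services / find_users are
# hypothetical wrappers around the two SOAP calls.

def users_in_services_named(find_services, find_users, name_prefix):
    services = [s for s in find_services() if s["name"].startswith(name_prefix)]
    names_by_id = {s["id"]: s["name"] for s in services}   # join key -> name
    return [
        (u["first_name"], u["last_name"], names_by_id[u["service_id"]])
        for u in find_users(service_ids=list(names_by_id))
    ]
```

Once the wrappers are written, each "join" becomes one reusable function, which addresses the configure-once-then-transparent part of the question.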

How to isolate SQL Data from different customers?

I'm currently developing a service for an app with WCF. I want to host the data on Windows Azure, and it should hold data from different customers. I'm searching for the right design for my database. In my opinion there are only two possibilities:
Create a new database for every customer
Store a customer ID in every table (or in the main table when every table is connected via entities)
The first approach gives very good speed and isolation, but it's very expensive on Windows Azure (or am I misunderstanding the Azure pricing?). Also, I don't know how to configure a WCF service so that it always uses a different database.
The second approach is slower and the isolation is poor, but it's easy to implement and cheaper.
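To show what the second approach looks like in practice, here is a small sketch (SQLite in memory, invented table names): the key discipline is that every query is parameterized and scoped by the customer ID.

```python
import sqlite3

# Option 2: shared tables with a customer_id column. Isolation then depends
# entirely on every query being filtered by tenant.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
             "customer_id INTEGER NOT NULL, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 100, 9.0), (2, 100, 5.0), (3, 200, 7.0)])

def orders_for(customer_id):
    # Always parameterized, never string-built, always filtered by tenant
    return conn.execute(
        "SELECT id, total FROM orders WHERE customer_id = ?",
        (customer_id,)).fetchall()

print(orders_for(100))  # [(1, 9.0), (2, 5.0)]
```

A forgotten `WHERE customer_id = ?` silently leaks other tenants' rows, which is the "poor isolation" cost the question mentions.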
Now to my question:
Is there any other way to get strong isolation of data plus easy integration into a WCF service on Azure?
What design should I use and why?
You have two additional options: build multiple schema containers within a database (see my blog post about this technique), or, even better, use SQL Database Federations (you can use my open-source project called Enzo SQL Shard to access federations). The links I'm providing give you access to other options as well.
In the end it's a rather complex decision involving a tradeoff between performance, security, and manageability. I usually recommend Federations, even with its own set of limitations, because it is a flexible multitenant option for the cloud with the option to filter data automatically. Check out the open-source project; you'll see how to implement good separation of customer data independently of the physical storage.

custom table in Ektron database

I am adding a custom table to an Ektron database. What is the best practice for connecting to the database: standard ADO.NET code, or is there a way to reuse the CMS's connection to the database?
Ektron 8.0.1 SP1
Adding custom tables to the Ektron database will not cause any issues; there is no need for another database if you only have a few custom tables to add.
Altering the Ektron tables will create issues, so it is better not to do that.
For accessing data from the Custom Tables make use of LINQ (refer:here).
I know this question is a little old and answered, but I wanted to add my two cents. While altering Ektron's tables isn't advised (that is, without the API or scripts they've provided), adding your own table does no harm. If Ektron didn't support it they wouldn't provide the "Sync Custom Tables" option in eSync.
I came across this and thought that I could add a little to the discussion in case anyone is considering adding a custom table to the Ektron database. I think this topic is still relevant to the current version of Ektron and could be helpful.
Here are some good points:
Do not alter tables created by Ektron. (Point made by Bisileesh, extended in the comment below)
Adding custom tables to the Ektron database is recommended in certain circumstances.
Using a smart form for content may be recommended but there are times when it is not optimal.
Here are some reasons why I say these things:
You should not alter tables created by Ektron for several reasons. Basically you don't want to change these because the Ektron software relies on these tables and modifications could cause errors. Besides the possibility of breaking things, if you ever upgrade Ektron, the Ektron Update may alter table definitions and erase your changes.
Adding tables to the existing Ektron database is a good idea when compared to adding a new database for several reasons.
First, you don't incur the additional cost of a full database structure on your server when you add a table.
Second if you are working in a multiple server environment (development, staging, live) by adding your tables to the Ektron database you will be able to use eSync to manage transferring the data between servers. If you use your own database, you will need to manage synchronization elsewhere.
I started with the idea that it was better to use my own database, but over the years I have discovered the advantages of using the Ektron database. Just as if you were using your own database, you should save the scripts to create the custom tables and perform database backups on a regular basis to ensure that you are protected.
After doing Ektron upgrades you should ensure that your customized tables are still present in the Ektron database.
When setting up eSync for custom tables I had to first run the sync on an empty table. After running the sync to establish a relationship, I was able to add data. There is also a requirement that there be a primary key on the custom tables and I don't think it can be an auto-incremented field. Consult Ektron for the latest requirements.
When considering whether to put data in a smart form or a custom table, here are some things to consider. If you use a smart form, you are committing to the Ektron-provided controls to access your data. That may be good or bad depending on your requirements and the current state of Ektron.
In my case, search was a big deal. In versions 7.6 and 8.0 there were problems with Ektron search, and it was not easy to do boolean searches across multiple fields. To overcome this I used custom tables that I could query directly. The search in version 8.6 has changed, but I still use my custom solution, so I don't know whether things work better now.
There are other data management issues with smart forms and the Ektron Workarea that make it a good idea to avoid smart forms in some other cases too. The best place to store your data is not one place or the other; it depends on your requirements.
Best practice is to not use custom tables. If you can store your data as smart forms, users can use the workarea to edit their data. If you have to use a custom table, there are several ways:
One way is to pull the connection string from the web.config in an ASPX page
<asp:SqlDataSource ID="EktronSqlDataSource" runat="server" ConnectionString="<%$ConnectionStrings:Ektron.DBConnection %>" ></asp:SqlDataSource>
I'd look at using a different database. As mentioned by maddoxej, Ektron doesn't really like you messing with SQL and tables and what-not.
Granted, you may have admin reasons for using one database, but for the sake of maintainability I think it's worth having a second database which you fully control.
You can add custom tables without affecting existing ones, but to use them you need custom controls each time: custom layouts, custom forms, custom widgets.

New to sharepoint development, do lists replace your database?

We're just starting SharePoint development, and one of my first tasks is to build a data collection tool. It will be used across multiple sites, so there will be an admin area, and each site will pull in its related questions and record the data. I've gone through a bunch of tutorials on development and have a fairly good idea of how to start. I just want to make sure I understand one thing: do lists basically take the place of your database? If this were a regular app, I would create a question table, a link table that records which questions are connected to which site, and a table that stores the answers, linking to the site and question tables.
Is this the basic pattern you follow, or should I be doing things differently for Sharepoint applications?
If the thought is to use an external database, can anyone point me to some info on this?
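For comparison, the "regular app" schema described above looks like this in a quick SQLite sketch (table and column names invented for the example):

```python
import sqlite3

# Questions, a site-question link table, and answers keyed to both.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE question (id INTEGER PRIMARY KEY, text TEXT);
    CREATE TABLE site_question (site_id INTEGER,
                                question_id INTEGER REFERENCES question(id));
    CREATE TABLE answer (id INTEGER PRIMARY KEY, site_id INTEGER,
                         question_id INTEGER REFERENCES question(id),
                         value TEXT);
""")
conn.execute("INSERT INTO question VALUES (1, 'Head count?')")
conn.execute("INSERT INTO site_question VALUES (7, 1)")
# Questions to show on site 7:
rows = conn.execute("""
    SELECT q.text FROM question q
    JOIN site_question sq ON sq.question_id = q.id
    WHERE sq.site_id = 7
""").fetchall()
print(rows)  # [('Head count?',)]
```

The link-table join is exactly the kind of relational query that, as the answer below notes, lists start to struggle with as relations multiply.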
In our SharePoint project we started with lists. It was fine up to a point, while the DB had only a few relations between data. After adding tables and relations, performance fell a lot and we had to switch to a standard DB in MS SQL Server. So I recommend using a DB.
Disadvantages: you cannot use SharePoint controls to edit/view the data, and you cannot restrict access to the data at the SharePoint level.
Advantages: much faster access to data.