I recently learned about consumer-driven contract testing as a way to supplement complex integration/E2E testing. I would like to verify that my database and service are in sync through a contract. Is anyone aware of how to do this?
If you are talking about an SQL relational database (where you would write a contract using a "mock" database, and then verify against a real one) there is no existing Pact solution for this (though it has been considered in a very abstract way before).
If you're talking about a document-oriented database, where the data is basically just a JSON document, then you can use the underlying Pact matching code to ensure that the document structure in the database and what your code thinks the document structure is are in sync. The specifics will depend on which language you are using, however.
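To illustrate the idea (this is just a sketch, not Pact's actual API; the document shape and helper names are made up): you keep an example document as the "contract" and check that what is actually stored still has the same structure, same keys and same value types, without caring about the exact values.

# Illustrative only: a hand-rolled, type-based structure check in the spirit of
# Pact's matching rules. Names (expected_contract, actual_document) are hypothetical.
from typing import Any

def structure_matches(expected: Any, actual: Any, path: str = "$") -> list[str]:
    """Return a list of mismatches between an expected example and an actual document."""
    errors = []
    if isinstance(expected, dict):
        if not isinstance(actual, dict):
            return [f"{path}: expected object, got {type(actual).__name__}"]
        for key, value in expected.items():
            if key not in actual:
                errors.append(f"{path}.{key}: missing key")
            else:
                errors.extend(structure_matches(value, actual[key], f"{path}.{key}"))
    elif isinstance(expected, list):
        if not isinstance(actual, list):
            return [f"{path}: expected array, got {type(actual).__name__}"]
        if expected and actual:
            # Type-based, not value-based: match every element against the first expected element
            for i, item in enumerate(actual):
                errors.extend(structure_matches(expected[0], item, f"{path}[{i}]"))
    else:
        if not isinstance(actual, type(expected)):
            errors.append(f"{path}: expected {type(expected).__name__}, got {type(actual).__name__}")
    return errors

# The "contract": an example document your code expects to find in the store.
expected_contract = {"id": "abc-123", "email": "user@example.com", "roles": ["admin"]}

# In a real verification step you would load a document from the actual database here.
actual_document = {"id": "def-456", "email": "someone@example.com", "roles": ["viewer", "editor"]}

assert structure_matches(expected_contract, actual_document) == []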
Hop on to https://slack.pact.io if you would like to discuss it more with the Pact maintainers and users.
We have a layered application architecture. It is written in C# and uses the SQL objects available in .NET for data access. Some of it is a home-built ORM, some of it is stored procedures. We have a number of Windows services that use this architecture to process data. Scaling and performance have always been issues. A new person on our team is pushing to convert our data access to use REST-based data services. This would replace our current data access layer.
I don't think REST is meant for our architecture. I also have concerns about performance. I have to think it will be significantly slower. I don't see how going out of process to what is effectively a web service, and then to the database, for CRUD operations is not going to make our performance issues worse. I know REST can lead to performance improvements through caching and further scaling-out abilities, but that is not being addressed now. It's just a data access replacement with no bells and whistles for now. On top of this, the initial implementation will not allow us to use stored procedures. All processing will be table-based CRUD operations and any data massaging will be done in the C# code, with no set-based operations.
I could easily be wrong, but I can see a disaster coming and I don't know if I'm right or if I'm a Chicken Little. I'm looking for any guidance, advice, or case study references on this. Anything that can either help my case or resolve my dread. Thanks.
If all the clients, such as Windows services etc are using stored procedures or direct SQL access, that's tight coupling. Any change in the database schema will mean all clients need to be updated to handle the change. If the schema never changes, it's not an issue. If the schema does change and the people who developed one of the services have left, it's a big issue.
If the database is abstracted away from the clients behind a REST layer, that's loose coupling. Any change in the database schema is irrelevant to the clients. The only thing that needs to change is the data access layer of the REST layer. The client facing endpoints won't change. How those endpoints interact with the database will change.
Essentially, moving from direct SQL to REST is taking a step back from your system design. To paraphrase:
SELECT * FROM ORDERS WHERE NOT PAID
becomes
https://api.com/orders/unpaid
and the returned object is a domain object representing a list of unpaid order objects.
So the clients move away from the implementation (select *) and move towards a domain solution. There is no more tight coupling between clients and database but a loose coupling between the clients and the domain.
Rather than speaking to a database in its own language, the clients now talk the domain language, "get all unpaid orders". They don't care how those orders are stored, or where. They just ask the REST endpoint.
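For instance, a client call might look like this (just a sketch, in Python for brevity; your stack is C# and the same shape applies to an HttpClient caller; the endpoint URL and the Order fields are made up):

# A minimal sketch of a client that talks the domain language instead of SQL.
from dataclasses import dataclass
import requests

@dataclass
class Order:
    order_id: int
    customer: str
    total: float

def get_unpaid_orders(base_url: str) -> list[Order]:
    # The client asks the domain question; it neither knows nor cares what SQL runs behind it.
    response = requests.get(f"{base_url}/orders/unpaid", timeout=10)
    response.raise_for_status()
    return [Order(**item) for item in response.json()]

for order in get_unpaid_orders("https://api.example.com"):
    print(order.order_id, order.customer, order.total)

The service behind that endpoint is free to use a stored procedure, a set-based query, or anything else; the client never knows.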
This is all just implementation so far, but your understanding of the business will increase because you'll be talking in Domain-Driven Design (DDD) terms: the REST endpoints will accept and return domain objects rather than raw SQL.
Is this good for the business? Is it good for developers to understand the business at a domain level? If these answers are "yes", is the cost/benefit ratio of rewriting tons of client code to talk REST positive enough to make the change? Will having a REST/domain interface with the data open up new ways of looking at the data? Will that touch on profits?
The real question is along the lines of: will changing the architecture to a loosely coupled REST integration, one that improves understanding of business objects and opens the door to a wider talent pool (potentially more REST coders than SQL gurus?), be worth it in terms of future-proofing the business without hitting profits in the short term?
Will thinking of the business in DDD terms be worth the initial hit of moving from SQL to REST? Will that new DDD experience open up new doors in future design?
These are the real questions to ask. Caching, scaling etc are REST implementation issues, only relevant once you've answered the philosophical questions posed above.
Good luck, sounds like an exciting time for you!
Hi everyone, I'm new to server-side technologies, so maybe this is a bit of a dumb question, but after reading dozens of articles and viewing dozens of videos I'm still very confused. This has to do with the architecture principles of modern apps.
Relational model:
I know that a few years ago the model was to have a database (mostly relational) and a DBMS that enabled the connection between an app and the database.
Question 1: Since we are talking about a relational model, some examples of DBMSs are MySQL or PostgreSQL?
Question 2: What is the process of information exchange? The client side uses a language like PHP to make a request to the server, and then the DBMS transforms the request into SQL and accesses the database? Is the conversion of the PHP into SQL part of the DBMS's function, or is another piece of server-side software needed?
(If someone could provide a reasonably detailed summary explanation I would be very thankful.)
Non-Relational Models:
Question 2: Nowadays, with the rise of NoSQL models, does the same concept of a DBMS apply? Since these systems allow querying languages other than SQL, there should be some piece of software that has this function?
Service-Oriented Architecture:
Almost every app uses this type of architecture. I understand the concept of avoiding too tight a coupling between the client and server sides, allowing for reuse across several platforms. What I don't understand is which parts constitute a system that is built this way.
Question 3: Does the DBMS provide the APIs that constitute the web services made available?
Web Frameworks:
Last but not least, where do frameworks like Django or Ruby on Rails fit in?
Question 4: These are supposed to provide tools to develop everything between the front end and the database of a SOA system, right?
Question 5: I've seen a lot of buzz about REST architecture. Can you explain how the querying process happens and what software entities are involved?
Thank you in advance for any explanation that helps me understand these questions. Please provide some links or any diagrams that you find useful.
I'll tackle your questions individually:
Question 1: Since we are talking about a relational model, some examples of DBMSs are MySQL or PostgreSQL?
Correct. The Database Management System is the suite of software that lets you interact with a particular database technology. The examples you give are correct.
Question 2: What is the process of information exchange? The client side uses a language like PHP to make a request to the server, and then the DBMS transforms the request into SQL and accesses the database? Is the conversion of the PHP into SQL part of the DBMS's function, or is another piece of server-side software needed?
There are many different avenues for this. Typically, the API for accessing a database is ODBC (Open Database Connectivity). ODBC drivers are available for most (if not all) relational DB vendors and are all very similar.
A language like PHP can connect to the database via an ODBC connection library (e.g. http://php.net/manual/en/intro.uodbc.php), which allows you to send CRUD operations to the DBMS to be executed on the database.
Since most DBMSs use a subset or superset of a SQL standard for querying the database, you can either pass this SQL directly via ODBC or use another level of abstraction. A common method is an ORM (Object Relational Mapper). ORMs (e.g. SQLAlchemy for Python: http://www.sqlalchemy.org/) provide an abstraction layer so you're not reliant on writing SQL, but instead write queries and database commands in a form more natural to your language of choice.
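As a rough sketch of those two routes (the connection strings and the users table below are made up):

# Route 1: send SQL directly over an ODBC connection (pyodbc).
import pyodbc

conn = pyodbc.connect("DSN=mydb;UID=app;PWD=secret")   # hypothetical DSN
cursor = conn.cursor()
cursor.execute("SELECT id, name FROM users WHERE active = ?", 1)
rows = cursor.fetchall()

# Route 2: let an ORM (SQLAlchemy) build the SQL for you.
from sqlalchemy import create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
    active: Mapped[bool]

engine = create_engine("postgresql+psycopg2://app:secret@localhost/mydb")  # hypothetical URL
with Session(engine) as session:
    users = session.scalars(select(User).where(User.active.is_(True))).all()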
Question 2: Nowadays, with the rise of NoSQL models, does the same concept of a DBMS apply? Since these systems allow querying languages other than SQL, there should be some piece of software that has this function?
Same general concept (in that there is a DB driver that exposes an API that languages can hook into), but there are generally more, and more varied, ways of interacting with DBs now since they have many differing structures. Most NoSQL DBs still have ODBC connectors (e.g. MongoDB and Hadoop), so the general programming practices still apply when connecting to them, but the things you expect the database to do (and their natural query languages) will differ.
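For example, with MongoDB the driver-plus-API idea looks like this (just a sketch; the connection string and the orders collection are made up):

# Same driver-plus-API idea, but the query language is document-oriented, not SQL.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # hypothetical server
db = client["shop"]

# Equivalent in spirit to: SELECT * FROM orders WHERE status = 'unpaid'
unpaid = db.orders.find({"status": "unpaid"})
for doc in unpaid:
    print(doc["_id"], doc.get("total"))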
This continues to be an evolving space as these technologies evolve.
Question 3: Does the DBMS provide the APIs that constitute the web services made available?
Not sure I'm understanding this question. ODBC and web services are different. Often web services sit on top of ODBC if you want to query a database via a web API, but that's one more layer of abstraction than connecting to the DB directly via ODBC.
Last but not least, where do frameworks like Django or Ruby on Rails fit in?
Web frameworks are a way of speeding up the development of web applications by avoiding some of the "reinvent the wheel" work that you commonly do with every web application. They give you the basics, and have a lot of extensions that let you implement other common elements of web apps (like a subscription/login system, session management, an admin system, etc.).
These are supposed to provide tools to develop everything between the front end and the database of a SOA system, right?
Both Django and RoR aim to be end-to-end frameworks. They include all the common elements you'll need including an Object Relational Mapper. They don't prescribe which DBMS you have to use, their ORMs can interface with many so that choice is still up to you.
Yes, they are aimed to cover everything from the front end to the DB, including the interaction with and initialization of the structure of the database.
Question 5: I've seen a lot of buzz about REST architecture. Can you explain how the querying process happens and what software entities are involved?
REST stands for Representational State Transfer (quick Wikipedia article: https://en.wikipedia.org/wiki/Representational_state_transfer). In a nutshell, creating a "RESTful" (REST-compliant) web API means you use GET, PUT, POST and DELETE methods to accomplish all your services. REST aligns closely with the HTTP protocol, which is why it's been very appropriate for the ever-growing concept of web apps; it's helped transform the thinking from a web page (or set of web pages) into web apps.
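As a rough sketch of that verb-to-operation mapping (Flask here, purely for illustration; the /orders resource and its fields are made up):

# Minimal sketch of mapping HTTP verbs to operations on a resource.
from flask import Flask, jsonify, request

app = Flask(__name__)
orders = {}          # in-memory stand-in for a real data store
next_id = 1

@app.get("/orders")                       # GET: read the collection
def list_orders():
    return jsonify(list(orders.values()))

@app.post("/orders")                      # POST: create a new resource
def create_order():
    global next_id
    order = {"id": next_id, **request.get_json()}
    orders[next_id] = order
    next_id += 1
    return jsonify(order), 201

@app.put("/orders/<int:order_id>")        # PUT: replace an existing resource
def replace_order(order_id):
    orders[order_id] = {"id": order_id, **request.get_json()}
    return jsonify(orders[order_id])

@app.delete("/orders/<int:order_id>")     # DELETE: remove a resource
def delete_order(order_id):
    orders.pop(order_id, None)
    return "", 204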
It's hard to sum up better than the Wikipedia article does, I'd suggest you dive into it.
Hope that's cleared up a few of the questions!
Most questions revolving around the title of this post ask about making Hibernate, or some other access layer, run in an OSGi container. Or they ask about making the data source run in an OSGi container.
My questions concern the effect of OSGi modularity on the structure of the database itself. Specifically:
1. How do we make the structure of a database itself modular, so that when we load a module--say, Contact Management--the schema is updated to include tables specifically associated with that module?
2. What is the effect of the foregoing approach on relationships?
I think the second question is the more interesting. Let's say that Contact Management and Project Management are two distinct OSGi modules. Each would have its own set of tables in the schema. But what if, at the database level, we need to form cross-module relationships between two or more tables? Maybe we wish to see a list of projects that a certain contact is, or has been, working on.
Any solution seems to lead down the path of the various modules' having to know too much about each other. We could write into the Project Management specification that that module expects a source of contacts, and then abstract such an expectation through services, interfaces, pub-sub etc. Seems like a lot of work, though, to avoid a hard-wired relationship between the two modules' underlying tables.
What's the point of being modular up top and in the middle if we may necessarily need to break that modularity with relationships between tables down below? Are denormalization and a service bus really a healthy solution?
Any thoughts?
Thank you.
Regarding the first question, Liquibase can be used. You can apply and roll back changesets on bundle activation and deactivation.
Regarding the second question, I think it is something that should be considered while designing your architecture; there is no tool that will help with that.
If the PM module depends on the CM module, it is safe for the PM module to assume the CM tables exist and to make foreign-key relations to them, but not in the opposite direction. You should make it clear in your architecture which modules depend on which modules, and prevent dependency cycles.
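As a sketch of that dependency direction (SQLAlchemy models standing in for the two bundles' schemas; table and class names are made up), the PM bundle may declare a foreign key into a CM table because it already depends on CM, but never the other way around:

from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

# Owned by the Contact Management (CM) bundle.
class Contact(Base):
    __tablename__ = "cm_contact"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

# Owned by the Project Management (PM) bundle, which declares a dependency on CM.
class ProjectAssignment(Base):
    __tablename__ = "pm_project_assignment"
    id: Mapped[int] = mapped_column(primary_key=True)
    project_name: Mapped[str]
    contact_id: Mapped[int] = mapped_column(ForeignKey("cm_contact.id"))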
After 5 years of JPA, I decided to leave it, and after months of investigation I found the Querydsl + Liquibase combo to be the best.
I worked a lot on developing helper OSGi components and a Maven plugin. The functionality of the Maven plugin (code generation) can easily be integrated into other build tools, as the plugin is only a wrapper around a standalone library.
You can find a detailed article about the solution here: http://bzsoldos.wordpress.com/2014/06/18/modularized-persistence/
In this kind of situation it is important to evaluate how independent these modules/contexts are. In DDD terms these two seem to be independent bounded contexts, so a contact in the PM module is a distinct entity (and also another class) from a contact in the CM module. If you keep this distinction you end up with some denormalization with respect to the contact entity (e.g. you copy the id and name of the contact when adding it to a project; later changes to the contact in the CM module will require some pub-sub to keep things consistent), but each module will be very independent. I would keep the UI as a separate module, depending on both and providing the necessary glue (i.e. passing the ids and required info between them).
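A sketch of what that separation can look like in code (the class names and the event handler are made up): the PM context keeps its own denormalized copy of just the contact data it needs and refreshes it from CM events, rather than holding a foreign key into the CM module's tables.

from dataclasses import dataclass, field

# --- Contact Management (CM) context ---
@dataclass
class Contact:
    contact_id: int
    name: str
    email: str
    phone: str

# --- Project Management (PM) context ---
@dataclass
class ProjectMember:
    # Denormalized copy: only the id and name the PM context actually needs.
    contact_id: int
    name: str

@dataclass
class Project:
    project_id: int
    title: str
    members: list[ProjectMember] = field(default_factory=list)

    def handle_contact_renamed(self, contact_id: int, new_name: str) -> None:
        # Invoked from a pub-sub event published by the CM module, to keep the copy consistent.
        for member in self.members:
            if member.contact_id == contact_id:
                member.name = new_name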
Maybe I misread the question, but in my opinion OSGi modularity has absolutely no impact on database structure. The database is the data storage level; it can be modular, of course, but for its very own reasons - performance, data volumes, load, etc. - and with its very own solutions - clusters, OLAP, partitioning and replication.
If you need data integrity between CM and PM, it should be provided by the means that were designed for exactly that sort of task - the RDBMS. If you need software modularity, you select an OSGi solution and your modules communicate at a much higher logical/business level. They can be completely unaware of how persistence is provided - a plain text file or a 100-node Oracle RAC cluster.
I've seen recommendations (Juval Lowy, et al) that a service contract should have "no more than 20 members...twelve is probably the practical limit". Why? It seems that if you wish to provide a service as the interface to a relatively large db (50-100 tables) you're going to go way past that in just CRUD alone. I've worked with plenty of other services that provided hundreds of 'OperationContracts'...is there something peculiar about WCF? Is there something I'm missing here?
Probably the fact that you should not expose CRUD in the SOA world... the idea is to expose business processes. Individual CRUD operations lead to a terribly slow and granular interface. Look at how RIA Services / Astoria do the CRUD thing.
I don't think this is a technical limit. The idea is that a service defines all the contracts necessary for a business operation (order management, account management) and should not be too complicated.
I think it's related to the principles of SOA. Many people would regard a service with hundreds of OperationContracts as a poorly designed service, even if technically correct.
You should not simply expose a web interface for a bunch of tables. Rather, the services should expose abstract operations (probably mapped to business processes) that interact with those tables under the hood.
I've seen similar recommendations in the past and I think it depends on who's going to use your service.
If, like me, you're writing it to link an app to a remote data source then the most abstract interface you can write will still include a "get" and a "save" method for each logical object in your database.
My latest project has a service contract with 246 operation contracts in it. It's a monstrosity of a file and a pig to edit, but the client-side code is neat and tidy and it's just easier to work with. It's not like anyone but me is ever going to see it.
In short, there are no technical or performance implications to either approach. Go with whatever seems most appropriate to your project.
Our application is interfacing with a lot of web services these days. We have our own package that someone wrote a few years back using UTL_HTTP and it generally works, but needs some hard-coding of the SOAP envelope to work with certain systems. I would like to make it more generic, but lack experience to know how many scenarios I would have to deal with. The variations are in what namespaces need to be declared and the format of the elements. We have to handle both simple calls with a few parameters and those that pass a large amount of data in an encoded string.
I know that 10g has UTL_DBWS, but there are not a huge number of use-cases on-line. Is it stable and flexible enough for general use? Documentation
I have used UTL_HTTP which is simple and works. If you face a challenge with your own package, you can probably find a solution in one of the many wrapper packages around UTL_HTTP on the net (Google "consuming web services from pl/sql", leading you to e.g.
http://www.oracle-base.com/articles/9i/ConsumingWebServices9i.php)
The reason nobody is using UTL_DBWS is that it is not functional in a default installed database. You need to load a ton of Java classes into the database, but the standard instructions seem to be defective - the process spews Java errors right and left and ultimately fails. It seems very few people have been willing to take the time to track down the package dependencies in order to make this approach work.
I had this challenge and found and installed the 'SOAP API' package that Sten suggests on Oracle-Base. It provides some good envelope-creation functionality on top of UTL_HTTP.
However, there were some limitations that pertain to your question. SOAP_API assumes all requests are simple XML - i.e. only a one-level tag hierarchy.
I extended the SOAP_API package to allow the client code to arbitrarily insert an extra tag. So you can insert a sub-level element, continue to build the request, and remember to insert the closing tag.
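To illustrate the nesting idea only (Python's ElementTree here, not the PL/SQL SOAP_API package; the service namespace and element names are made up):

import xml.etree.ElementTree as ET

ENV = "http://schemas.xmlsoap.org/soap/envelope/"
SVC = "http://example.com/project-service"        # hypothetical service namespace

envelope = ET.Element(f"{{{ENV}}}Envelope")
body = ET.SubElement(envelope, f"{{{ENV}}}Body")
request = ET.SubElement(body, f"{{{SVC}}}createProject")

# Flat, one-level parameters (what a simple envelope builder supports)...
ET.SubElement(request, f"{{{SVC}}}name").text = "Apollo"

# ...plus an extra sub-level: a nested element with its own children.
owner = ET.SubElement(request, f"{{{SVC}}}owner")
ET.SubElement(owner, f"{{{SVC}}}contactId").text = "42"
ET.SubElement(owner, f"{{{SVC}}}contactName").text = "Ada"

print(ET.tostring(envelope, encoding="unicode"))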
The namespace issue was a bear for the project- different levels of XML had different namespaces.
A nice debugging tool that I used is TCP Trace from Pocket Soap.
www.pocketsoap.com/tcptrace/
You set it up like a proxy and watch the HTTP request and response objects between client and server code.
Having said all that, we really like having a SOAP client in the database - we have full access to all data and existing PL/SQL code, can easily loop through cursors, and can call the external app via SOAP when needed. It was a lot quicker and easier than deploying a middle tier with lots of custom Java or .NET code. Good luck, and let me know if you'd like to see my enhanced SOAP_API code.
We have also used UTL_HTTP in a manner similar to what you have described. I don't have any direct experience with UTL_DBWS, so I hope you can follow up with any information/experience you can gather.
@kogus, no, it's quite a good design for many applications. PL/SQL is a full-fledged programming language that has been used for many big applications.
Check out this older post. I have to agree with that post's #1 answer; it's hard to imagine a scenario where this could be a good design.
Can't you write a service, or standalone application, which would talk to a table in your database? Then you could implement whatever you want as a trigger on that table.