I'm looking for some resources to get me started on how to design and implement an API for a SQL database.
Is this done by writing a series of functions and/or stored procedures to process your transactions on the SQL Server (T-SQL)?
I have read a bit about Transaction APIs vs. Table APIs. While you don't have to choose one or the other, I would prefer to avoid Table APIs and focus more on Transaction APIs, to keep performance high and avoid using cursors.
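For example, is the idea that the application only ever calls one procedure per business transaction, something like the sketch below? (All object names here are invented, and I'm using Python with pyodbc purely for illustration.)

import pyodbc  # assumes a SQL Server ODBC driver is installed

# Hypothetical transaction-API procedure: callers never touch the tables,
# they invoke one procedure per business transaction, set-based, no cursors.
CREATE_PROC = """
CREATE OR ALTER PROCEDURE dbo.usp_TransferFunds  -- CREATE OR ALTER needs SQL Server 2016+
    @FromAccount INT,
    @ToAccount   INT,
    @Amount      DECIMAL(12, 2)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - @Amount WHERE AccountId = @FromAccount;
        UPDATE dbo.Accounts SET Balance = Balance + @Amount WHERE AccountId = @ToAccount;
    COMMIT TRANSACTION;
END
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=Shop;Trusted_Connection=yes"
)
cursor = conn.cursor()
cursor.execute(CREATE_PROC)
conn.commit()

# The application's "API call" is then a single procedure invocation.
cursor.execute("EXEC dbo.usp_TransferFunds ?, ?, ?", 1, 2, 100.00)
conn.commit()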
Also, from what I understand, RESTful APIs just make the requests over HTTP (usually carrying JSON) rather than connecting to the DB directly.
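That is, instead of opening a database connection, the client would send something like this (the endpoint and payload here are entirely made up):

import requests

# Hypothetical REST endpoint wrapping the database; JSON in, JSON out.
resp = requests.post(
    "https://api.example.com/orders",
    json={"customerId": 42, "items": [{"sku": "ABC", "qty": 1}]},
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"orderId": 1234}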
If my understanding is completely wrong on this subject, please correct me, as I am trying to learn.
Thanks
I have been hitting a wall for quite a while now. I am making an application linked to a piece of software that we are using; it should allow the user both to access data from the software through my application and to update the software's data from my application.
So here is the whole idea:
So my app will be linked to the software's database (Software Patient) with the help of a foreign key (patientId on "App Patient").
And I need to be able to search for email, password, firstName, lastName, secretStuff directly from my app, and be able to update data on both databases as well.
The biggest issue here is that I can't make a third table that merges all the data into one, because the data in the software's database (Software Patient) will be updated quite a lot directly from the software by other people.
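What I picture is reading from both databases and merging in the application layer instead of in a third table, something like this sketch (all table and column names are made up; shown in Python for brevity, though in my stack it would be a Sequelize/GraphQL resolver):

import psycopg2  # my app's PostgreSQL database
import pyodbc    # the software's SQL Server Express database

app_db = psycopg2.connect("dbname=myapp user=app")
sp_db = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sp-host;DATABASE=SoftwarePatient;UID=reader;PWD=secret"
)

def get_patient(app_patient_id):
    # 1. Read my side, including the foreign key into the software's DB.
    with app_db.cursor() as cur:
        cur.execute(
            'SELECT "patientId", email, password FROM "AppPatient" WHERE id = %s',
            (app_patient_id,),
        )
        patient_id, email, password = cur.fetchone()
    # 2. Read the software's side by that key. No merged third table is needed,
    #    so updates made directly in the software are always visible.
    cur = sp_db.cursor()
    cur.execute(
        "SELECT firstName, lastName, secretStuff FROM dbo.Patient WHERE patientId = ?",
        patient_id,
    )
    first_name, last_name, secret_stuff = cur.fetchone()
    return {
        "email": email,
        "password": password,
        "firstName": first_name,
        "lastName": last_name,
        "secretStuff": secret_stuff,
    }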
The current stack is composed of:
My application: Node.js with Sequelize, GraphQL & PostgreSQL
Software that we use: SQL Server Express
Thank you in advance!
The app you are developing must get data from your commercial Software Patient (we'll call it SP) system. That presents several questions. You really really need clear answers to these questions to finish designing the data flow in your app. Some of the questions:
How will your app get data from SP? Will you issue SQL queries to SP's database? Does SP publish an Application Programmer Interface (API) for this purpose? Or a data export function you'll use in your app's workflow?
Must your app's view of SP data be up-to-the-minute? Will an hourly update be enough? Daily?
Will your app change SP data, insert new data, or delete data in the SP system? If so, see the first question.
Must you reverse-engineer SP, that is, guess how its data is structured, to make your app work? Or can you get specs / documentation from SP's developers?
If you update a reverse-engineered database, dude, be careful!
If your app will use SQL to get data from SP, it will send that SQL to SP's SQL Server Express database. Node.js has tooling for that, but both the tooling and the SQL dialect differ from what you're using with PostgreSQL. Maybe it would be wise to use SQL Server throughout: doing so reduces the cognitive load on the people who will maintain and enhance your app in the future. Neither they nor you will have to keep straight the differences between the two DBMSs.
If you'll use an API, great! That's a clean interface between two systems. (It will probably have some irritating and confusing bugs, so allow some time for that. I've had to send pull requests to several API maintainers.)
If you figure out the answers to these sorts of questions, you'll make a good decision about your question of the third table. It's impossible to address your specific third-table question without answers to at least some of them.
And. Please. Don't forget infosec. You have a duty to keep personal data of the patients you serve away from cybercreeps.
My customer is moving from providing data directly through SQL access to exposing the same data through a web service; due to political reasons they're cutting the direct DB access. They're using SOAP for the web service, but that's irrelevant to the issue. They come to me with requests that are vague in the sense that they don't know where the answer to their question lives, so I'm left with no option other than to go poking around their data to see where I could possibly find the correct data for their needs.
With SQL it's been more or less painless, write a simple query with a join or three and Bob's your uncle. In the worst case I have needed to select one row from the tables to see what they contain but that's not part of the actual data retrieval.
Now, with web services, I'm struggling to achieve the same. I feel like I have no tools to explore the data efficiently and flexibly, and each join I'd do in SQL requires a new query plus manually merging results from the previous ones. A good example is:
Show me all the users that are part of a service where the name begins with "FOO"
With SQL I would do a simple
SELECT
    users.first_name,
    users.last_name,
    services.name
FROM users
LEFT JOIN services ON (users.service_id = services.id)
WHERE services.name LIKE 'FOO%' -- the WHERE filters out unmatched rows, so this behaves as an inner join
With the SOAP API I'm forced to do a search for the services, write down the IDs, and then do a search for the users. The keyword here is efficiency: I can get the same results, but it takes so much more time that for anything more complex we might be talking about hours instead of minutes.
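In script form, that manual merge is essentially a client-side nested-loop join, something like the sketch below (the endpoints and field names are invented, and I'm using plain HTTP calls for brevity; with SOAP the calls would go through a client such as zeep, but the shape is the same):

import requests

BASE = "https://example.com/api"  # hypothetical service

# 1. One call to find the matching services...
services = requests.get(f"{BASE}/services", params={"namePrefix": "FOO"}, timeout=30).json()

# 2. ...then one call per service to fetch its users: a nested-loop join by hand.
rows = []
for svc in services:
    users = requests.get(f"{BASE}/users", params={"serviceId": svc["id"]}, timeout=30).json()
    rows.extend((u["first_name"], u["last_name"], svc["name"]) for u in users)

for first_name, last_name, service_name in rows:
    print(first_name, last_name, service_name)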
The question is two-fold:
Is there a more efficient way of achieving the same?
Are there tools that would ease the pain, even if it means the join equivalents would need manual configuration to make work but would then work transparently once configured?
So far I've been using SOAP-UI (and Postman when applicable) for exploring the data. With Postman the scripting helps a bit, since I can save intermediate results into variables and use them in subsequent calls. I feel like I'm left with no option other than to take a simple(ish) way of programmatically accessing the API and scripting the same searches. Java is the weapon of choice at the customer for multiple reasons, but it isn't the simplest way, so I'm also looking for recommendations regardless of the language.
My question is not about specific code. I am trying to automate a business data-governance flow using a SQL backend. I have put a lot of time into searching the internet and reaching out to people for the right direction, but unfortunately I have not yet found anything promising, so I have high hopes that I'll find some people here to save me from a big headache.
Assume that we have a flow (semi-static/dynamic) for our business process. We have different departments owning portions of the data. We need to take different actions during the flow, such as data entry, data validation, data export, approvals, rejections, notes, etc., and also automatically define deadlines, create reports of overdue tasks and the people accountable for them, and so on.
I guess the data management part would not be extremely difficult, but how to write the application (the workflow engine) that runs the flow is where I struggle. Should I use triggers, or should I write code that frequently runs queries to push completed steps to the next step? How can I use SQL tables to keep track of the flow?
If someone could give me some hints on this matter, I would greatly appreciate it.
I would suggest using SQL Server Integration Services (SSIS). You can easily manage the scripts and workflow based on some lookup selections, and you can also schedule an SSIS package to run on a timed basis to trigger and do the job.
It's a hard task to implement an application server on SQL Server, and it would be a very vendor-dependent solution. The best way, I think, is to use SQL Server as data storage and some application server for the business logic on top of that storage.
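To make that concrete, here is a minimal sketch of the polling approach mentioned in the question (all table and column names are invented; shown with pyodbc against SQL Server to match the answers, but any client library works):

import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=Governance;Trusted_Connection=yes"
)

def advance_completed_steps():
    cur = conn.cursor()
    # Find tasks that are done but whose successor step has not been created yet.
    cur.execute("""
        SELECT t.TaskId, t.FlowId, s.NextStepId
        FROM dbo.Task t
        JOIN dbo.FlowStep s ON s.StepId = t.StepId
        WHERE t.Status = 'completed' AND t.Advanced = 0
          AND s.NextStepId IS NOT NULL
    """)
    for task_id, flow_id, next_step_id in cur.fetchall():
        # Create the next step with an automatically derived deadline...
        cur.execute(
            "INSERT INTO dbo.Task (FlowId, StepId, Status, DueAt) "
            "VALUES (?, ?, 'pending', DATEADD(day, 3, GETDATE()))",
            flow_id, next_step_id,
        )
        # ...and mark the finished task so it is not advanced twice.
        cur.execute("UPDATE dbo.Task SET Advanced = 1 WHERE TaskId = ?", task_id)
    conn.commit()

while True:  # or run it from a scheduler / SQL Agent job instead of looping
    advance_completed_steps()
    time.sleep(60)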
My client wants to run arbitrary SQL SELECT queries on the backend database of our web app, for ad-hoc reporting purposes. The requests will be read-only. Suppose the choice of analysis tool is flexible, but might include Access or psql. Rather than exposing the database on the public Internet, I want to transmit SQL queries over HTTP.
Can I implement a web service that would allow database analysis tools to communicate with the database using a user's web app credentials? E.g. instead of the database connection string starting with postgres://, it would start with https://. Ideally I'm looking for a [de facto] standard way of doing this.
Related but different/unanswered:
Communicate Sql Server over http
Standards for queries over SOAP
I'm not aware of a standard for this. MK has a point: this sounds like a huge opportunity for a SQL injection attack. Services expose the results of database queries all the time; they typically request a handful of parameters and expose a well-defined response. Giving a public user of the service carte blanche to run any query they want means you have to ensure they don't sneak in a DROP DATABASE or DELETE FROM statement somehow. That can be tricky to defend against. All of that said, I've seen this pattern used for a private service to pool the connections that the database server is aware of. Database connections tend to be pretty expensive.
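For illustration, a minimal sketch of that private-service pattern might look like this (Flask and every name here are my own choices, not any standard; the SELECT-only check is crude and is not a sufficient defense by itself):

from flask import Flask, abort, jsonify, request
import psycopg2

app = Flask(__name__)

@app.route("/query", methods=["POST"])
def run_query():
    # Authentication against the web app's user store is omitted for brevity.
    sql = request.get_data(as_text=True)
    if not sql.lstrip().lower().startswith("select"):
        abort(400, "read-only service: SELECT statements only")
    # Least-privilege read-only role, plus a read-only session as a second belt.
    conn = psycopg2.connect("dbname=webapp user=report_readonly")
    conn.set_session(readonly=True)
    with conn, conn.cursor() as cur:
        cur.execute(sql)
        columns = [d[0] for d in cur.description]
        return jsonify([dict(zip(columns, row)) for row in cur.fetchall()])

if __name__ == "__main__":
    app.run()  # serve behind HTTPS/TLS in any real deployment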
I'm wondering what the community suggests for extracting data from an OData API to SQL 2008 R2. I need to create a nightly job that imports the data to SQL. Should I create a simple console app that iterates through the OData API and imports to SQL? Or can I create some type of SQL Server BI app? Or is there a better way to do this?
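For context, the console-app option is essentially a paged crawl followed by inserts, something like this sketch (the service URL, entity names, and staging table are hypothetical; @odata.nextLink is the OData v4 spelling, and older services use __next instead):

import requests
import pyodbc

stage = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=Staging;Trusted_Connection=yes"
)
cur = stage.cursor()

url = "https://example.com/odata/Customers?$top=500"  # hypothetical feed
while url:
    page = requests.get(url, headers={"Accept": "application/json"}, timeout=60).json()
    for entity in page["value"]:
        cur.execute(
            "INSERT INTO dbo.Customers (Id, Name) VALUES (?, ?)",
            entity["Id"], entity["Name"],
        )
    stage.commit()
    url = page.get("@odata.nextLink")  # follow server-driven paging until exhausted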
This is going to be sooo slow. OData is not an API for bulk operations. It is designed for clients to access individual entities and navigate relations between them, at most paginating across some filtered lists.
Extracting an entire dump via OData is not going to make anybody happy. The OData API owner will have to investigate who is doing all these nightly crawls over his API, discover it is you, and likely cut you off. You, on the other hand, will discover that OData is not an efficient bulk transport format, and that marshalling HTTP-encoded entities back and forth is not exactly the best way to spend your bandwidth. And crawling the entire database every time, as opposed to just picking up the deltas since the last crawl, is only going to work until the database reaches that critical size S at which the update takes longer than the interval you're polling at!
Besides, if it is not your data, it is extremely likely that the terms of use for the OData API explicitly prevent such bulk crawls.
Get a dump of the data, archive it, and copy it using FTP.