Reading data via API for an app from a table without an entity/definition - Shopware 6

I need to read data from the cart table for my app. An api/search/ call won't work, I think, because cart has no entity/definition, and direct SQL is not possible via the API or in app scripts as far as I know.
So, is there a way to get this data?
I don't know what else to try, and I'm afraid it simply isn't possible.

It is intended that you not access the cart table via the database abstraction, which is why there is no entity definition and repository for it. In fact, this is meant to discourage altering data sets in the cart table entirely. You'd run the risk of violating the business logic by altering the table entries, as there are multiple collectors and processors relying on the integrity of the table's contents.
If you want to alter the cart you should use the cart service inside app scripts instead.

Related

Database design & 3rd party integrations

We're building an application where eCommerce owners can connect their stores from different platforms (e.g. Shopify, Magento, WooCommerce). We do this in order to import data from these various platforms.
So we have a Stores table. In it we have data that is common to all platforms and some data that is specific to each platform.
I'm not sure what to do here. Should we create dedicated tables that contain the platform-specific information, or should we create columns to store that information, which will then be empty for stores from the other platforms?
What would be the pros and cons, knowing that with the table-per-platform option we would need to create new tables for every new platform we integrate with?
You haven't said which specific RDBMS you're using, but with PostgreSQL you have the option of foreign data wrappers. These let you federate data from other sources and APIs into your application database and read and write foreign tables just like you do the internal tables (assuming the external APIs allow you to modify data). With this approach, you just need to make sure that your stores are properly associated with their respective entries in the foreign tables. Developing FDWs is relatively easy with Multicorn.
If that's not an option: using columns is efficient to query since the information is right there in your store record. However, it could get unwieldy depending on how much of it there is, and if you could have a tenant with multiple presences on one of those external platforms -- weirder things have happened -- you're in for some trouble. And the relational form makes adding and changing support for the external platforms easier since you don't have to lock the entire tenants table to add or remove columns.
The simpler approach may be all you need to start out with, but it'd probably be smart to plan for tables in the end.
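As a very rough sketch of that relational form (all table and column names here are hypothetical), the common data could live in a stores table and each platform could get its own child table keyed back to it:

CREATE TABLE stores (
    store_id      serial  PRIMARY KEY,
    platform      text    NOT NULL,   -- 'shopify', 'magento', 'woocommerce', ...
    display_name  text    NOT NULL
);

-- One child table per platform holds only that platform's specific fields.
CREATE TABLE shopify_stores (
    store_id      integer PRIMARY KEY REFERENCES stores (store_id),
    shop_domain   text    NOT NULL,
    access_token  text    NOT NULL
);

Adding support for a new platform then means adding one new child table rather than widening the stores table with more nullable columns.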

Store UI column settings in DataBase

I am developing a website which uses tables in the UI, and our customer wants users to be able to customize which columns they see in the UI for each table, and to store these settings in the database.
My question is: what is the cleanest way to support this feature in the database while avoiding coupling the stored data to the UI?
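Purely as a sketch of one possible shape (the table and column names are made up, and the exact types will depend on your RDBMS), the settings could be stored per user, per logical table, per logical column, without referencing any UI widget directly:

CREATE TABLE user_column_settings (
    user_id     integer      NOT NULL,
    table_key   varchar(100) NOT NULL,   -- logical name of the data set, not the UI grid
    column_key  varchar(100) NOT NULL,   -- logical name of the column
    is_visible  boolean      NOT NULL,
    sort_order  integer      NOT NULL,
    PRIMARY KEY (user_id, table_key, column_key)
);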

custom table in Ektron database

I am adding a custom table to an Ektron database. What is the best practice for connecting to the database: standard ADO.NET code, or is there a way to reuse the CMS's connection to the database?
Ektron 8.0.1 SP1
Adding custom tables to the Ektron database will not cause any issues; there is no need for another database if you only have a few custom tables to add.
Altering the Ektron tables, however, will create issues, so it is better not to do that.
For accessing data from the custom tables, make use of LINQ.
I know this question is a little old and answered, but I wanted to add my two cents. While altering Ektron's tables isn't advised (that is, without the API or scripts they've provided), adding your own table does no harm. If Ektron didn't support it they wouldn't provide the "Sync Custom Tables" option in eSync.
I came across this and thought that I could add a little to the discussion in case anyone is considering adding a custom table to the Ektron database. I think this topic is still relevant to the current version of Ektron and could be helpful.
Here are some good points:
Do not alter tables created by Ektron. (Point made by Bisileesh; extended in the comments below.)
Adding custom tables to the Ektron database is recommended in certain circumstances.
Using a smart form for content may be recommended but there are times when it is not optimal.
Here are some reasons why I say these things:
You should not alter tables created by Ektron for several reasons. Basically you don't want to change these because the Ektron software relies on these tables and modifications could cause errors. Besides the possibility of breaking things, if you ever upgrade Ektron, the Ektron Update may alter table definitions and erase your changes.
Adding tables to the existing Ektron database is a good idea when compared to adding a new database for several reasons.
First, you don't incur the additional cost of a full database structure on your server when you add a table.
Second if you are working in a multiple server environment (development, staging, live) by adding your tables to the Ektron database you will be able to use eSync to manage transferring the data between servers. If you use your own database, you will need to manage synchronization elsewhere.
I started with the idea that it was better to use my own database, but over the years I have discovered the advantages of using the Ektron database. Just as if you were using your own database, you should save the scripts to create the custom tables and perform database backups on a regular basis to ensure that you are protected.
After doing Ektron upgrades you should ensure that your customized tables are still present in the Ektron database.
When setting up eSync for custom tables I had to first run the sync on an empty table. After running the sync to establish a relationship, I was able to add data. There is also a requirement that there be a primary key on the custom tables and I don't think it can be an auto-incremented field. Consult Ektron for the latest requirements.
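As a hedged illustration only (the table and columns are invented; confirm the actual eSync requirements with Ektron), a custom table set up with an explicit, non-identity primary key might look like this in T-SQL:

CREATE TABLE custom_product_search (
    product_key   uniqueidentifier NOT NULL,   -- assigned by the application, not an IDENTITY column
    title         nvarchar(255)    NOT NULL,
    category      nvarchar(100)    NULL,
    last_updated  datetime         NOT NULL,
    CONSTRAINT PK_custom_product_search PRIMARY KEY (product_key)
);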
When considering whether to put data in a smart form or a custom table, here are some things to consider. If you use the Smart Form you are committing to using the Ektron-provided controls to access your data. This may be a good thing or a bad thing depending on your requirements and the current state of Ektron.
In my case, search was a big deal. In versions 7.6 and 8.0 there were problems with the Ektron search and it was not easy to do boolean searches across multiple fields. To overcome this I used custom tables that I could query directly. The search in version 8.6 has been changed, but I still use my custom solution, so I don't know if things are working better now. There are other data management issues that come up with smart forms and the Ektron Workarea that make it a good idea to avoid smart forms in some other cases too. The best place to store your data is not one place or the other; it depends on your requirements.
Best practice is to not use custom tables. If you can store your data as smart forms, users can use the workarea to edit their data. If you have to use a custom table, there are several ways:
One way is to pull the connection string from the web.config in an ASPX page
<asp:SqlDataSource ID="EktronSqlDataSource" runat="server" ConnectionString="<%$ConnectionStrings:Ektron.DBConnection %>" ></asp:SqlDataSource>
I'd look at using a different database. As mentioned by maddoxej, Ektron doesn't really like you messing with SQL and tables and what-not.
Granted, you may have admin reasons for using one database, but for the sake of maintainability I think it's worth having a second database which you fully control.
You can add custom tables without affecting existing ones, but to use them you need custom controls each time: custom layouts, custom forms, custom widgets.

In Oracle: how can I tell if an SQL query will cause changes without executing it?

I've got a string containing an SQL statement. I want to find out whether the query will modify data or database structure, or if it will only read data. Is there some way to do this?
More info: in our application we need to let users enter SQL queries, mainly as part of the application's report system. These SQL queries should be allowed to read whatever they like from the database, but they shouldn't be allowed to modify anything: no updates, deletes, inserts, table drops, constraint removals, etc.
As of now I only test whether the first word in the string is "select", but this is both too restrictive and too insecure.
To be sure, you should grant only SELECT privileges on your tables to the login used by the application.
Create a new user for that part of the application that only has select privileges. Bear in mind that you'll also need to create synonyms for all the tables/views that that "read-only" user will be able to view.
The "regular" part of your application will still be able to do other operations (insert, update, delete). Just the reporting will use the read-only user.
As Horacio suggests, it is also a good idea/practice to add "wrapper" views that only expose what you want to expose. Some sort of "public API". This can give you flexibility if you need to change the underlying tables and don't want to/can't change the reports to the new definitions of said tables. This might, however, be seen as a lot of "extra work".
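A minimal sketch of that setup in Oracle might look like the following (user, password, schema, and table names are placeholders):

-- Read-only user for the report system
CREATE USER report_reader IDENTIFIED BY change_me;
GRANT CREATE SESSION TO report_reader;

-- Grant SELECT only, object by object (or bundle the grants into a role)
GRANT SELECT ON app_owner.orders TO report_reader;
GRANT SELECT ON app_owner.customers TO report_reader;

-- Connected as report_reader: synonyms so reports can use unqualified names
CREATE SYNONYM orders FOR app_owner.orders;
CREATE SYNONYM customers FOR app_owner.customers;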
I agree with others that the right thing to do is use a separate schema with limited access & privileges for those queries that should be read-only.
Another option, however, is to set the transaction read-only before executing the statement entered by the user (SET TRANSACTION READ ONLY).
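For example (table names are placeholders), the session can be put into a read-only transaction just before the user's statement runs; SELECTs succeed, while any DML fails with ORA-01456:

SET TRANSACTION READ ONLY;

-- user-supplied query runs here; reads are fine
SELECT customer_id, total FROM orders WHERE order_date > SYSDATE - 30;

-- DML would raise ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
-- UPDATE orders SET total = 0;

COMMIT;   -- ends the read-only transaction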
Create VIEWs to expose the data to end users; this is worthwhile for three reasons:
The end user doesn't know what your database really looks like.
You can provide a simpler way to extract some pieces of data.
You can create the view with a read-only constraint:
CREATE VIEW items (name, price, tax)
AS SELECT name, price, tax_rate
FROM item
WITH READ ONLY;
Something that has worked well for me in the past, but may not fit your situation:
Use stored procedures to implement an API for the application. All modifications are done via that API. The procedures exposed to the front end are all complete units of work, and those procedures are responsible for rights enforcement.
The users running the front end application are only allowed to call the API stored procedures and read data.
Since the exposed API does complete units of work that correspond to actions the user could take via the GUI, letting them run the procedures directly doesn't get them any additional ability, nor allow them to corrupt the database accidently.
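A hedged sketch of that arrangement in Oracle (procedure, table, and role names are invented) might look like this; the front-end login gets EXECUTE on the API plus SELECT, and no direct DML rights:

-- One complete unit of work, with any rights checks done inside the procedure
CREATE OR REPLACE PROCEDURE place_order (
    p_customer_id IN NUMBER,
    p_item_id     IN NUMBER,
    p_quantity    IN NUMBER
) AS
BEGIN
    INSERT INTO orders (customer_id, item_id, quantity, created_at)
    VALUES (p_customer_id, p_item_id, p_quantity, SYSDATE);
END place_order;
/

GRANT EXECUTE ON place_order TO front_end_role;
GRANT SELECT  ON orders      TO front_end_role;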
SELECT * FROM table FOR UPDATE works even with only SELECT privilege, and can still cause a lot of damage. If you want to be safe, the read only transactions are better.
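For example, the following locks every matching row until the session commits or rolls back, blocking other writers, even though the user only holds SELECT privilege:

SELECT * FROM orders FOR UPDATE;   -- orders is a placeholder table name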

Ideas for Combining Thousand Databases into One Database

We have a SQL server that has a database for each client, and we have hundreds of clients. So imagine the following: database001, database002, database003, ..., database999. We want to combine all of these databases into one database.
Our thought is to add a siteId column with values 001, 002, 003, ..., 999.
We are exploring options to make this transition as smoothly as possible. And we would LOVE to hear any ideas you have. It's proving to be a VERY challenging problem.
I've heard of a technique that would create a view that would match and then filter.
Any ideas guys?
Create a client database id for each of the client databases. You will use this id to keep the data logically separated. This is the "site id" concept, but you can use a derived key (identity field) instead of manually creating these numbers. Create a table that has database name and id, with any other metadata you need.
The next step would be to create an SSIS package that gets the ID for the database in question and adds it to the tables that have to have their data separated out logically. You then can run that same package over each database with the lookup for ID for the database in question.
After you have a unique id for the data and have imported it, you will have to alter your apps to fit the new schema (actually before, or you are pretty much screwed).
If you want to do this in steps, you can create views or functions in the different "databases" so the old client can still hit the client's data, even though it has been moved. This step may not be necessary if you deploy with some downtime.
The method I propose is fairly flexible and can be applied to one client at a time, depending on your client application deployment methodology.
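As a rough sketch of those first steps (all names are placeholders), the lookup table and the per-table identifier could look like this in T-SQL:

-- One row per original client database; DatabaseID becomes the logical separator.
CREATE TABLE dbo.ClientDatabase (
    DatabaseID   int IDENTITY(1,1) PRIMARY KEY,
    DatabaseName nvarchar(128) NOT NULL UNIQUE,
    ImportedOn   datetime NULL
);

-- Each table whose data has to be separated gets the identifier (and an index on it).
ALTER TABLE dbo.Orders ADD DatabaseID int NULL;   -- populated by the SSIS package, then made NOT NULL
CREATE INDEX IX_Orders_DatabaseID ON dbo.Orders (DatabaseID);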
Why do you want to do that?
You can read about Multi-Tenant Data Architecture and also listen to SO #19 (around 40-50 min) about this design.
The "site-id" solution is what's done.
Another possibility that may not work out as well (but is still appealing) is multiple schemas within a single database. You can pull common tables into a "common" schema, and leave the customer-specific stuff in customer-specific schemas. In some database products, however, each schema is -- effectively -- a separate database. In other products (Oracle and DB2, for example) you can easily write queries that work across multiple schemas.
Also note that -- as an optimization -- you may not need to add a siteId column to EVERY table.
Sometimes you have a "contains" relationship. It's a master-detail FK, often defined with a cascade delete so that detail cannot exist without the parent. In this case, the children don't need siteId because they don't have an independent existence.
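For example (illustrative names), the detail rows inherit their logical separation from the parent, so only the parent needs the column:

CREATE TABLE dbo.OrderHeader (
    OrderID int NOT NULL PRIMARY KEY,
    SiteID  int NOT NULL
);

-- The child never exists without its parent, so it carries no SiteID of its own.
CREATE TABLE dbo.OrderLine (
    OrderID  int NOT NULL,
    LineNo   int NOT NULL,
    Quantity int NOT NULL,
    PRIMARY KEY (OrderID, LineNo),
    FOREIGN KEY (OrderID) REFERENCES dbo.OrderHeader (OrderID) ON DELETE CASCADE
);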
Your first step will be to determine if these databases even have the same structure. Even if you think they do, you need to compare them to make sure they do. Chances are there will be some that are customized or missed an upgrade cycle or two.
Now, depending on the number of clients and the number of records per client, your tables may get huge. Are you sure this will not create a performance problem? At any rate you may need to take a fresh look at indexing. You may need a much more powerful set of servers, and may also need to partition by client anyway for performance.
Next, yes, each table will need a site id of some sort. Further, depending on your design, you may have primary keys that are no longer unique. You may need to redefine all primary keys to include the siteid. Always index this field when you add it.
Now all your queries, stored procs, views, and UDFs will need to be rewritten to ensure that the siteid is part of them. Pay particular attention to any dynamic SQL. Otherwise you could be showing client A's information to client B, and clients don't tend to like that. We brought a client from a separate database into the main application one time (when they decided they didn't want to keep paying for a separate server). The developer missed just one place where client_id had to be added. Unfortunately, that sent emails to every client concerning this client's proprietary information, and to make matters worse, it was a nightly process that ran in the middle of the night, so it wasn't known about until the next day. (The developer was very lucky not to get fired.) The point is: be very, very careful when you do this, and test, test, test, and then test some more. Make sure to test all the automated behind-the-scenes stuff as well as the UI.
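In concrete terms (hypothetical table and key names), redefining a key and filtering might look like this:

-- The old per-database key is no longer unique on its own, so the site id joins the primary key.
ALTER TABLE dbo.Invoices DROP CONSTRAINT PK_Invoices;
ALTER TABLE dbo.Invoices ADD CONSTRAINT PK_Invoices PRIMARY KEY (SiteID, InvoiceID);

-- Every query, proc, view, and UDF must carry the filter.
DECLARE @SiteID int = 42;
SELECT InvoiceID, Amount
FROM dbo.Invoices
WHERE SiteID = @SiteID;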
What I was explaining in Florence towards the end of last year applies if you have to keep the database names and the logical layer of the database the same for the application. In that case you'd do the following:
Collapse all the data into consolidated tables in one master, consolidated database (hereafter referred to as the consolidated DB).
Those tables would have to have an identifier like SiteID.
Create the new databases with the existing names.
Create views with the old table names that query the tables in the consolidated DB, using the SiteID to filter (row-level security); a sketch of this follows below.
Set up the databases for cross-database ownership chaining so that the service accounts can't "accidentally" query the base tables in the consolidated DB. Access must happen through the views or through stored procedures and other constructs that will enforce row-level security. Now, if it's the same service account for all sites, you can avoid the cross DB ownership chaining and assign the rights on the objects in the consolidated DB.
Rewrite the stored procedures to either handle the change (since they now refer to views and don't know to hit the base tables and include the SiteID) or use INSTEAD OF triggers on the views to intercept update requests and put the appropriate site-specific information into the base tables.
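A rough sketch of the view step (database, table, and SiteID values are invented) could look like this:

-- In the per-client database that kept its old name, the old table name becomes a
-- view over the consolidated DB, filtered to that client's SiteID.
USE database001;
GO
CREATE VIEW dbo.Orders
AS
SELECT o.OrderID, o.CustomerID, o.Total
FROM ConsolidatedDB.dbo.Orders AS o
WHERE o.SiteID = 1;   -- the SiteID assigned to database001
GO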
If the data is large you could look at using a partitioned view. This would simplify your access code, as all you'd have to maintain is the view; however, if the data is not large, just add a column to identify the customer.
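For reference, a partitioned view in SQL Server is just a UNION ALL over member tables whose CHECK constraints carve up the partitioning column (names here are illustrative):

CREATE TABLE dbo.Orders_Site1 (
    SiteID  int   NOT NULL CHECK (SiteID = 1),
    OrderID int   NOT NULL,
    Total   money NOT NULL,
    PRIMARY KEY (SiteID, OrderID)
);
CREATE TABLE dbo.Orders_Site2 (
    SiteID  int   NOT NULL CHECK (SiteID = 2),
    OrderID int   NOT NULL,
    Total   money NOT NULL,
    PRIMARY KEY (SiteID, OrderID)
);
GO
CREATE VIEW dbo.Orders
AS
SELECT SiteID, OrderID, Total FROM dbo.Orders_Site1
UNION ALL
SELECT SiteID, OrderID, Total FROM dbo.Orders_Site2;
GO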
Depending on what the data is and your security requirements the threat of cross contamination may be a show stopper.
Assuming you have considered this and deem it "safe enough", you may need or want to create VIEWs or impose some other access control to prevent customers from seeing each other's data.
IIRC a product called "Trusted Oracle" had the ability to partition data based on such a key (about the time Oracle 7 or 8 was out). The idea was that any given query would automagically have "and sourceKey = #userSecurityKey" (or some such) appended. The feature may have been rolled into later versions of the popular commercial product.
To expand on Gregory's answer, you can also make a parent SSIS package that calls the package doing the actual moving inside a Foreach Loop container.
The parent package queries a config table and puts the result in an object variable. The Foreach Loop then uses this recordset to pass variables to the child package, such as the database name and any other details the package might need.
Your table could list all of your client databases and have a flag to mark when you are ready to move them. This way you are not sitting around running the SSIS package on 32,767 databases. I'm hooked on the Foreach Loop in SSIS.
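The configuration table driving that Foreach Loop could be as small as this (names invented):

-- One row per client database; the parent package only picks up rows flagged as ready.
CREATE TABLE dbo.MigrationConfig (
    DatabaseName nvarchar(128) NOT NULL PRIMARY KEY,
    SiteID       int           NOT NULL,
    ReadyToMove  bit           NOT NULL DEFAULT 0,
    MovedOn      datetime      NULL
);

-- Query used by the parent package to fill its object variable.
SELECT DatabaseName, SiteID
FROM dbo.MigrationConfig
WHERE ReadyToMove = 1 AND MovedOn IS NULL;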