I need to find a way to search, compare, and generate a communication-relation table, which seems to involve some complex logic.
I have a table with two columns: one lists a server and the other lists the different servers it communicates with (how it communicates is out of scope for this requirement).
I need to find the servers that communicate with one another in one way or another. The sample image added here shows the source and expected result tables.
Please let me know if and/or how this could be achieved in SQL Server or Splunk.
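In case it helps frame an answer, here is a rough sketch of the SQL Server shape I'm imagining; ServerComms, Server, and TalksTo are placeholder names, and it assumes the data has already been split into one row per communicating pair:

```sql
-- Each row says "Server communicates with TalksTo" in some direction.
-- Collapse A->B and B->A into one undirected pair, so that every pair
-- of servers that communicate in one way or the other appears once.
SELECT DISTINCT
    CASE WHEN Server < TalksTo THEN Server  ELSE TalksTo END AS ServerA,
    CASE WHEN Server < TalksTo THEN TalksTo ELSE Server  END AS ServerB
FROM ServerComms;
```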
I am looking for an efficient/fast way of finding all tables/columns updated by a specific process. Basically, we want to know all SQL columns that are updated by a frontend ERP process.
I know of two ways: either enable Change Tracking on every single table, which is not very efficient, or spin up a blank test environment, perform the process, do row counts on all tables, and then go and view the data.
Does anyone else have a better method than the two described above?
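For what it's worth, the row-count half of the second approach can be scripted rather than eyeballed; a minimal sketch against SQL Server's DMVs (run it before and after the process and diff the results):

```sql
-- Snapshot the row count of every user table; requires VIEW DATABASE STATE.
SELECT s.name           AS schema_name,
       t.name           AS table_name,
       SUM(p.row_count) AS row_count
FROM sys.dm_db_partition_stats AS p
JOIN sys.tables  AS t ON t.object_id = p.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE p.index_id IN (0, 1)  -- heap or clustered index only, to avoid double counting
GROUP BY s.name, t.name
ORDER BY s.name, t.name;
```

Note that row counts will miss pure UPDATEs; sys.dm_db_index_usage_stats exposes a last_user_update timestamp per table/index that catches those as well.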
In short: I have a client who wishes to be able to add domain tables without adding SQL tables.
I am working with an application in which data is organized and made available through a PostgreSQL catalogue. What I mean by catalogue is that the database holds the path to the actual data file(s) as well as some metadata.
Adding a new table means that the (Java class of the) client application has to be updated. This is a costly process for the client, who wants us to find a way to let him add new kinds of data to the catalogue without having to change the schema.
I don't have many more specifics about the database itself and its configuration, as I'm usually mostly a client of the said database.
My idea to solve this was to have a generic table with the most often used columns (like date, comment, etc.) and a column containing a domain key. The domain key would be used by the client application to request the kind of generic data it needs (and would have no meaning whatsoever to the database provider). Metadata could be added with a companion file within the catalogue, and further filtering would have to be done on the client side.
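To make the idea concrete, here is a rough sketch of the generic table I have in mind, in PostgreSQL syntax; all names are placeholders:

```sql
-- One generic table for all domain data types; the domain key tells the
-- client application what kind of entry each row is.
CREATE TABLE catalogue_entry (
    id          bigserial    PRIMARY KEY,
    domain_key  text         NOT NULL,              -- meaningful only to the client app
    entry_date  timestamptz  NOT NULL DEFAULT now(),
    comment     text,
    file_path   text         NOT NULL               -- path to the actual data file
);

-- Every client request filters on the domain key, so index it.
CREATE INDEX idx_catalogue_entry_domain ON catalogue_entry (domain_key);

-- The client application would then ask for one kind of generic data:
-- SELECT * FROM catalogue_entry WHERE domain_key = 'my-new-domain';
```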
Question: as I am by no means an SQL expert, I would like to know whether this is an acceptable solution and what limitations I could be facing. I'm thinking of performance, data volume, etc. Or maybe a different approach is advisable?
Regarding expected volume: for a single domain data type, it could be around 30 new entries per day.
I'm pretty new here and usually don't resort to forum posts unless I really can't figure things out by searching for a solution on my own, but I'm stuck. I work in an IT department and am developing a tool that will be used to compare across three data pulls to see where we are missing data. All data is supposed to be accounted for in all three databases, so we need to find discrepancies where it does not match. This is data used across all of our car dealerships, and it is pulled from three providers who give it to us. (Example: our website listings, cars actually on sale in our inventory, and a third that deals with web listings on other sites.)
Unfortunately, whenever we do an export from each site, the dealership locations do not come through with the exact same syntax. I have all three tables in a SQL database that is re-uploaded by the user each month. I have CASE statements written so I can run a query that changes each matching dealership name to one syntax across all three tables. For example, 'Ford Denham' and 'Denham Ford' are both changed to 'ASFD', which is an acronym we use for that dealership.
Now we have reports which I have created with SQL Report Builder. My problem is that all of my queries are written as if the Location is always 'ASFD', so I can match records based on location. When the user uploads data, how can I automatically have my CASE statement run on the newly uploaded data without having to trigger the query myself? If I don't run the CASE statement rename, none of the reports will run correctly, because the Locations do not match.
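To illustrate the kind of automation I'm after, I imagine something like an AFTER INSERT trigger; this sketch uses made-up table and key names (Listings, ListingId):

```sql
-- Normalize Location on every insert, so the reports never see the
-- raw provider spellings.
CREATE TRIGGER trg_NormalizeLocation
ON dbo.Listings
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE l
    SET l.Location = CASE l.Location
                         WHEN 'Ford Denham' THEN 'ASFD'
                         WHEN 'Denham Ford' THEN 'ASFD'
                         -- ...one branch per dealership spelling...
                         ELSE l.Location
                     END
    FROM dbo.Listings AS l
    JOIN inserted AS i ON i.ListingId = l.ListingId;
END;
```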
Thank you for any help. Let me know if I should have gone about this a different way since I have never really posted here before. I tried to be as descriptive as possible.
I am using Report Builder 3.0 (very similar to SQL Server Reporting Services) to create reports for users of an application that uses a SQL Server 2012 database.
To set the scene, we have a database with over 1200 tables. We actually only need about 100 of these for reporting purposes. But it is very common that we need to combine fields from multiple tables together to get a common resource of data that my colleagues and I need for our reports.
E.g. if I want a view of a customer, I would want to bring in information about the customer from the customer_table, information about his phone details from the Phone table, information about his account(s) from the accounts table, and so on. Then I might need another view of the accounts: account type, various balance amounts, opening date, status, etc.
What I would love to do is create a "customer view" where we combine all these fields into a single combined virtual table, and then an "Accounts view", and so on. It would be easier to use, easier to manage, etc. We would then use these for all our reports going forward, and when we need to, we could combine the customer view, the accounts view, and actual tables into one combined dataset for a report.
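To sketch what I mean with the example above (the table names come from our schema, but the column names here are just illustrative):

```sql
-- A "customer view": one virtual table combining the fields our
-- reports keep asking for.
CREATE VIEW dbo.CustomerView
AS
SELECT c.CustomerId,
       c.CustomerName,
       p.PhoneNumber,
       a.AccountNumber,
       a.Balance
FROM dbo.customer_table AS c
LEFT JOIN dbo.Phone    AS p ON p.CustomerId = c.CustomerId
LEFT JOIN dbo.accounts AS a ON a.CustomerId = c.CustomerId;
```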
I am unsure about the right way to do this.
I see I can create a data source. This doesn't seem right, as it appears to be what one might do when working off two or more databases. We are using just one database.
Then there are report models. It seems these are being deprecated and phased out, so this doesn't seem a good option.
Finally, I see we can create shared datasets. However, this option (as far as I can tell) won't let me combine one dataset with another. So, using the example above, I wouldn't be able to combine the customer view and the account view to use on a report displaying details about a customer and his/her accounts.
Would appreciate guidance on the best way to achieve what I am trying to do...
Thanks
I can only speak from personal experience, but using the data source approach has been good for our purposes. We have a single database with 50+ tables in it. It is linked as a shared data source in the project, so it is available to all 50+ reports.
We then use Stored Procedures to make the information in the database available to the reports; each report has its own Stored Procedure that joins as many tables as required to provide the data for that report. Using Stored Procedures also lets you return only the rows you are interested in, rather than entire tables.
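As a rough illustration, a report's procedure looks something like the following; the object and column names are invented for the example:

```sql
-- One Stored Procedure per report: join whatever tables the report
-- needs and return only the relevant rows.
CREATE PROCEDURE dbo.rpt_CustomerAccounts
    @CustomerId int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT c.CustomerId,
           c.CustomerName,
           a.AccountNumber,
           a.Balance,
           a.OpenedDate
    FROM dbo.Customer AS c
    JOIN dbo.Account  AS a ON a.CustomerId = c.CustomerId
    WHERE c.CustomerId = @CustomerId;  -- only the rows this report needs
END;
```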
I'm not certain if this is the kind of answer you were after, but it describes how we solved a similar (smaller) issue.
I'm planning a web project containing 4 websites built in MVC3. As the database server I'm going to use MS SQL Server.
Each of these websites will have around 40 tables, but some of the tables are shared between the websites:
Contact, Cities, Postalcodes, Countries...
How should I handle this? Should I put all the tables of every website into one common database (so that the databases of websites 1, 2, 3, and 4 are together in a single database)? Or should I create a separate database containing just the shared tables?
But then I think I would get problems with data consistency, because I think there is no way to point from one database to another (linking, for example, the city table in database one to the building table in database two).
Any ideas?
Thanks a lot!
What I like about splitting it out into separate databases is that if each web site has its own database, and one of those web sites gets extremely popular, it is very easy to just move its database to a different, more powerful database server, and not much has to change except that (a) you need to reference the central "control" data remotely (or replicate/mirror it, etc.), and (b) you point that web site at a different database server. Another benefit is that if two web sites have the same types of tables (e.g. Patients), you don't have to have tables like Patients_WebSite1 and Patients_WebSite2 with different stored procedures that are identical except for table names (or ugly dynamic SQL procedures that paste the table name in). Separated out, you can have the exact same schema and the exact same codebase without having to combine everyone's data into a single table.
If you mix the data within a single database, data consistency is easier and the whole setup is slightly simpler, but splitting it out when you grow is a lot tougher. If you split it out into different databases, no, you won't be able to enforce referential integrity using standard DRI (foreign keys). You can accomplish this in other ways if it is important (triggers, validation before insert/update, etc.).
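For example, a trigger of roughly this shape can reject child rows whose parent key doesn't exist in the other database; all names here are placeholders:

```sql
-- Cross-database foreign keys aren't supported, so validate by hand,
-- reaching into the other database with a three-part name.
CREATE TRIGGER trg_Building_CheckCity
ON dbo.Building
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1
               FROM inserted AS i
               WHERE NOT EXISTS (SELECT 1
                                 FROM SharedDb.dbo.City AS c
                                 WHERE c.CityId = i.CityId))
    BEGIN
        RAISERROR('CityId does not exist in SharedDb.dbo.City.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
```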