R code production in SQL Server - sql

Above, if I generated a SQL query that produced the defined table and an R summary dataframe that produced the defined summary table, how would I be able to link them both together in production?
*Note: I'm not asking how to create a SQL query or an R dataframe for the defined tables (I have the code for that), but rather how they can work together. For example, could that R dataframe code be used in SQL Server (I have the latest version) so that as soon as the SQL query created the tables (BY DATE), the summary table would automatically update itself (BY DATE) in SQL Server?
So as soon as it had aggregated the summary for the first date, it would move on to the next, essentially generating the summaries one after another (stacking them on top of each other).
Thank you.
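One way to wire the two together is SQL Server Machine Learning Services (R Services), which can run R inside the database via sp_execute_external_script. A minimal sketch, assuming Machine Learning Services is installed and 'external scripts enabled' is on; dbo.DetailTable, dbo.SummaryTable, and the column names are placeholders, and your real R aggregation code would go in @script:

DECLARE @d date = '2014-01-06';  -- the date the SQL query just loaded
DECLARE @q nvarchar(max) =
    N'SELECT ValueDate, Total FROM dbo.DetailTable WHERE ValueDate = '''
    + CONVERT(nchar(10), @d, 23) + N'''';

-- Run the R summary over that date's rows and stack the result
-- into the summary table.
INSERT INTO dbo.SummaryTable (ValueDate, Total)
EXEC sp_execute_external_script
    @language = N'R',
    @script   = N'OutputDataSet <- aggregate(Total ~ ValueDate,
                                             data = InputDataSet, FUN = sum)',
    @input_data_1 = @q;

Wrapped in a stored procedure and called at the end of the load (by the loading script itself, a loop over dates, or a SQL Agent job), each date's summary is appended as soon as that date's detail rows exist.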

Related

How to update database table with table in SQL

I have a SQL Server table whose columns include a ticket ID, with each row corresponding to one ticket.
Weekly, I will pull data with the same columns, with rows corresponding to all tickets that have been created or changed since the last pull.
My question is, what is the correct SQL command to update my existing table so that the new tickets are added and the changed tickets are updated?
Apologies, I am new to SQL. I will be using pyodbc in python with a db in SQL Server.
Edit: OK, for anyone still looking at this, my question becomes: how can one upsert/MERGE a large JSON payload into a SQL Server table using pyodbc?
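A minimal sketch of the SQL side, assuming SQL Server 2016+ so OPENJSON is available; dbo.Tickets and the column names/JSON paths are placeholders. From pyodbc you would pass the JSON string as the single query parameter and commit afterwards:

-- The ? is pyodbc's parameter marker; the JSON string is bound to it.
DECLARE @json nvarchar(max) = ?;

MERGE dbo.Tickets AS tgt
USING (
    SELECT TicketId, Status, UpdatedAt
    FROM OPENJSON(@json)
         WITH (TicketId  int          '$.ticket_id',
               Status    nvarchar(50) '$.status',
               UpdatedAt datetime2    '$.updated_at')
) AS src
ON tgt.TicketId = src.TicketId
WHEN MATCHED THEN
    UPDATE SET tgt.Status = src.Status, tgt.UpdatedAt = src.UpdatedAt
WHEN NOT MATCHED BY TARGET THEN
    INSERT (TicketId, Status, UpdatedAt)
    VALUES (src.TicketId, src.Status, src.UpdatedAt);

An alternative that avoids MERGE's quirks is to bulk-insert the weekly pull into a staging table (cursor.executemany with fast_executemany = True) and run a separate UPDATE plus INSERT against it.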

How to pass a local access table as parameter to SQL server?

I am modifying an Access 2010 application so that it uses SQL Server to run its queries. The data was transferred to the server some time ago and is used as linked tables, but that proved a bit slow and suboptimal, so I'm trying to put the queries on the server.
I have no problem for simple queries, views,... and I'm using stored functions when there is a need for simple parameters (dates, ids,...).
But now I have a process in that application that selects a bunch of ids in the database, stores them in a local table, does a bunch of actions on them (report with sub report, print preview, print, update of the original records with the date of print when the user confirms that everything printed OK), and empties the local table if all actions succeed.
I can't simply use a SQL Server table to store the IDs since many people use the application at the same time, and the same table is used in several processes; I can't use temporary tables since they disappear as soon as Access moves to the next action; and I can't find a way to use a local table as a parameter to server stored procedures. Basically I'm stuck.
Can anyone help? Is there a way to do that (pass a bunch of values as a table to a server stored function)? Or another process that would achieve the same result (have a table on the server specific to the current user, or a general table and somehow identify the lines belonging to current user, or anything else)?
There are 2 methods that I use. Both work very well for multi-user apps. Here are the basics. You'll need to work out the details.
Create a table (tblSessions) in SQL Server with an identity column SessID (INT NOT NULL).
Create a function in Access to generate and retrieve a new SessID.
Create a second SS table tblWork with 2 columns: SessID, YourID. Add appropriate indexes and keys. Link this table to your Access app. Then, instead of inserting the IDs for your query into an Access temp table, insert them into tblWork along with a new SessID. You can now join tblWork to other SS tables/views to use as the datasource for your reports; be sure to add the SessID that you generated to your WHERE clause. A sketch of the server-side objects follows.
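A minimal sketch of what those objects might look like; dbo.YourTable and its ID column are placeholders for your real data:

-- Session registry: each Access session reserves a new SessID.
CREATE TABLE dbo.tblSessions (
    SessID    INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CreatedAt DATETIME NOT NULL DEFAULT GETDATE()
);

-- Per-session work table holding the selected IDs.
CREATE TABLE dbo.tblWork (
    SessID INT NOT NULL,
    YourID INT NOT NULL,
    CONSTRAINT PK_tblWork PRIMARY KEY (SessID, YourID)
);

-- Report datasource: only the rows belonging to this session.
SELECT t.*
FROM dbo.tblWork AS w
JOIN dbo.YourTable AS t ON t.ID = w.YourID
WHERE w.SessID = 42;   -- the SessID generated for the current user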
Create a stored procedure to generate the data for your reports. Include a parameter @YourIDList VARCHAR(MAX). Call the proc via a passthrough query and pass the list of your IDs as a comma (or whatever you prefer) separated string to @YourIDList. In the proc, split @YourIDList into a temp table. SS2016+ has a STRING_SPLIT function; for older versions, roll your own (there are plenty of examples available). Then join the temp table to the other tables you need to generate your output. Use the PT query as your report datasource, or dump it into an Access temp table and use that as your report datasource. A sketch of such a proc follows.
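A minimal sketch of that proc, assuming SQL Server 2016+ for STRING_SPLIT; dbo.uspReportData and dbo.YourTable are placeholder names:

CREATE PROCEDURE dbo.uspReportData
    @YourIDList VARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    -- Split the comma-separated string into a temp table of IDs.
    SELECT CAST(value AS INT) AS YourID
    INTO #IDs
    FROM STRING_SPLIT(@YourIDList, ',');

    -- Join the IDs to the real tables to build the report rows.
    SELECT t.*
    FROM #IDs AS i
    JOIN dbo.YourTable AS t ON t.ID = i.YourID;
END;

-- From Access, via a passthrough query:
-- EXEC dbo.uspReportData @YourIDList = '101,102,103';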

How to automate SQL query to get row count of all tables

We need to automate the SQL row count. As of now we are using an Excel formula to build a COUNT(*) query for all tables, copying it, pasting it into Oracle SQL Developer, and running it all at once.
So, what I am looking for: is there any way to automate this using Python or any other programming language that connects directly to the database, asks for the necessary inputs, runs the whole process by itself, and returns the row count for each table, without involving me in any of the manual steps described above?
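Before reaching for a scripting language, note that on Oracle this can be done in a single query; a minimal sketch using the DBMS_XMLGEN trick to get exact live counts for every table in the current schema (an assumption about your setup, since USER_TABLES only covers your own schema):

SELECT table_name,
       TO_NUMBER(EXTRACTVALUE(
           XMLTYPE(DBMS_XMLGEN.GETXML(
               'SELECT COUNT(*) c FROM "' || table_name || '"')),
           '/ROWSET/ROW/C')) AS row_count
FROM user_tables
ORDER BY table_name;

If approximate counts are acceptable, SELECT table_name, num_rows FROM user_tables is far cheaper, but num_rows is only as fresh as the last statistics gathering.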

Create database view from SQL stored as string in Netezza

I got a task to create a view based on 10 SQL queries, which are stored in one of the tables. In this table there are two columns: the first of them is Site, and the second is the aforementioned SQL query.
Now, the whole deal is that I would like to create a view with two columns: the first would be the name of the Site, and the second would be whatever ID the mentioned query returns.
Can anyone please tell me how to execute this string as a query?
I have done something like this a couple of years ago on SQL Server (using a procedure); nonetheless, on Netezza I can't figure out how this should work. As a further complication, we are forbidden to use procedures on PROD.
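For concreteness, a minimal sketch of the setup being described; the table and column names here are placeholders, not the asker's real ones:

-- Driver table: one row per site, each holding a complete query string.
CREATE TABLE QUERY_STORE (
    SITE      VARCHAR(100),
    QUERY_SQL VARCHAR(4000)   -- e.g. 'SELECT MAX(id) FROM site_a_events'
);

-- Desired view shape:
--   SITE    | RETURNED_ID
--   site_a  | 12345
--   site_b  | 67890

The sticking point is that executing QUERY_SQL requires dynamic SQL (EXECUTE IMMEDIATE in Netezza), which normally lives in a stored procedure, and procedures are off limits here.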

Why does my SSIS package take so long to execute?

I am fairly new to creating SSIS packages. I have the following SQL Server 2008 table called BanqueDetailHistoryRef containing 10,922,583 rows.
I want to extract the rows that were inserted on a specific date (or dates) and insert them into a table on another server. I am trying to achieve this through an SSIS package whose diagram looks like this:
OLEDB Source (the table with the 10Million+ records) --> Lookup --> OLEDB Destination
Now, the query specified on the Lookup transformation:
SELECT * FROM BanqueDetailHistoryRef WHERE ValueDate = '2014-01-06';
takes around 1 second to run in SQL Server Management Studio, but the described SSIS package takes a really long time to run (around an hour).
What is causing this? Is this the right way to achieve my desired results?
You didn't show how your OLEDB Source component was set up, but looking at the table names I'd guess you are loading all 10 million+ rows in the OLEDB Source and then using the Lookup to filter out only the ones you need. This is needlessly slow.
You can remove the Lookup completely and filter the rows in the OLEDB Source using the same query you had in the Lookup, as sketched below.
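A minimal sketch of the fix; User::ValueDate is a hypothetical SSIS variable name:

-- In the OLEDB Source, set "Data access mode" to "SQL command":
SELECT * FROM BanqueDetailHistoryRef WHERE ValueDate = ?;

-- Map the ? marker to a package variable (e.g. User::ValueDate) in the
-- source's Parameters dialog, then connect the source directly to the
-- OLEDB Destination; the Lookup goes away entirely.

This way only the target date's rows ever leave the source server, instead of streaming all 10 million+ rows through the data flow.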