SQL Server 2000 pivot without temporary tables? - sql-server-2000

I'm trying to pivot some data in SQL Server 2000, but the user that runs the application only has read/write permissions. I've looked at the solutions posted here, but they all involve creating/destroying temporary tables.

I think you may be a bit confused as to what temporary tables are.
You should be able to create a local temporary table (a #-prefixed table, which lives in tempdb) within your SQL statement. No extra permissions are necessary in your own database.
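For what it's worth, here is a rough sketch of both options, using made-up table and column names (Sales, AccountId, SaleMonth, Amount) since the original post doesn't show the schema. The first query pivots with CASE expressions and needs no temporary table at all; the second uses a local #temp table, which only touches tempdb:

-- Pivot with CASE expressions: no temp table, works on SQL Server 2000
SELECT AccountId,
       SUM(CASE WHEN SaleMonth = 1 THEN Amount ELSE 0 END) AS Jan,
       SUM(CASE WHEN SaleMonth = 2 THEN Amount ELSE 0 END) AS Feb,
       SUM(CASE WHEN SaleMonth = 3 THEN Amount ELSE 0 END) AS Mar
FROM Sales
GROUP BY AccountId

-- A local temp table (#) is created in tempdb, so no CREATE TABLE
-- permission is needed in the user database
SELECT AccountId, SaleMonth, SUM(Amount) AS Amount
INTO #MonthTotals
FROM Sales
GROUP BY AccountId, SaleMonth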

Related

Slow Access query when joining SQL table with Access table

I am using a SQL database and MS Access 2019 as the front end. The SQL database tables are linked to the Access db using an ODBC connection.
All my queries (they have multiple joined linked tables) run just fine, but as soon as I add a join to a table stored in the Access app (for example, a small table just for mapping values) the query will slow to a crawl. Doesn't matter if the joined fields are indexed or what type of join I'm using.
If anyone has seen this behaviour and found a solution I would much appreciate hearing it.
Joining tables from two separate databases requires the client app to retrieve both tables in their entirety in order to determine the rows needed. That's why it's slow.
If your Access table is small, try using a stored procedure on the SQL side with the data from Access moved to a temporary table. (Or better yet, move the Access table to SQL).
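As a rough illustration of the temp-table idea (done here as a single pass-through batch rather than a stored procedure, and with table and column names invented, not taken from the question), you can build the small mapping table on the server and do the join there, so nothing large has to cross the ODBC link:

-- Run as an Access pass-through query so the whole batch executes on SQL Server
CREATE TABLE #Mapping (Code int PRIMARY KEY, Label varchar(50))

INSERT INTO #Mapping (Code, Label) VALUES (1, 'Open')
INSERT INTO #Mapping (Code, Label) VALUES (2, 'Closed')

SELECT b.*, m.Label
FROM dbo.BigTable AS b
INNER JOIN #Mapping AS m ON m.Code = b.StatusCode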

How to query PostgreSQL database table from Access?

I am very new to SQL, MS Access & PostgreSQL. So this might be a very silly question but somehow I can't figure it out. I'm trying to run SQL queries in access and my data is in a PostgreSQL database table which was linked to access by my colleague earlier. When I make this simple query why do I get an error that the table doesn't exist? Is the syntax different for linked database tables? Or is the link not yet established?
You have created a Pass-Through query. This query is executed on the server, not in Access, so you need to use the original table names from the PostgreSQL database.
So it's not FROM public_tb_change but FROM tb_change.
Or maybe FROM public.tb_change, if public isn't the default schema.
I'd advise renaming your linked tables to the original names (remove the public_ prefix); that makes things much less confusing. The schema name is automatically added by Access when linking the tables.
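For example, assuming tb_change really is in the public schema, the pass-through query text would look something like this (the LIMIT is just a placeholder to keep the example small):

-- Pass-through SQL is sent to PostgreSQL as-is, so it must use the
-- server-side name, not the Access link name public_tb_change
SELECT *
FROM public.tb_change
LIMIT 10;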

Does Modifying Data in Access Table Modify Data in an ODBC-connected Oracle SQL table?

I am new to access. I am using a tool/access database someone built, and it has an ODBC connection to an Oracle SQL database in it.
There are different queries on the side panel, and some of them are delete queries.
If I run these delete queries will they just modify data in my local access database without modifying the data in the Oracle Database?
Yes. If a delete query runs against a linked table, it changes the data at the source, whether that source is another Access database or an Oracle table; the same applies to local tables within the database. To review a query, open it in design view and run it as a normal select query first so you can see exactly which rows it would delete. You can tell linked tables by their icon: a plain table icon means a local table, while an icon with an arrow (or a globe) in front pointing at the table means it is linked. When I'm testing, I usually run select queries first and make a copy of what I'm about to delete, just in case anything goes wrong.
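As a sketch of that habit (the table and criteria below are hypothetical, not taken from the tool in question):

-- This delete runs against the linked Oracle table and removes rows at the source
DELETE FROM ORDERS_LINKED WHERE STATUS = 'X';

-- Preview the same rows first, and optionally copy them to a local backup table
SELECT * FROM ORDERS_LINKED WHERE STATUS = 'X';
SELECT * INTO ORDERS_BACKUP FROM ORDERS_LINKED WHERE STATUS = 'X';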

SAS Enterprise Guide / SQL Performance

I'm looking for a little guidance on a SAS/SQL performance issue I'm having. In SAS Enterprise Guide, I've created a program that creates a table. This table has about 90k rows:
proc sql;
CREATE TABLE test AS
SELECT id, SUM(myField) AS myField_sum
FROM table1
GROUP BY id;
quit;
I have a much larger table with millions of rows. Each row has an id. I want to sum values on this table, using only id's present in the 'test' table. I tried this:
proc sql;
CREATE TABLE test2 AS
SELECT big.id, SUM(big.myOtherField) AS myOtherField_sum
FROM big
INNER JOIN test
ON test.id = big.id
GROUP BY big.id;
quit;
The problem I'm having is that it takes forever to run the second query against the big table with millions of records. I thought the inner join on the subset of id's would help (and maybe it is) but I wanted to make sure I was doing everything I could to speed it up.
I don't have any way to get information on the indexing of the underlying database. I'm more interested in getting the opinion of someone who has more SQL and SAS experience than me.
From what you show in your question, you are joining two SAS data sets, not two database objects. In any case, you can speed up the processing by defining indexes on the JOIN columns used in each table. Assuming you have permission to do so, here are examples:
proc sql;
create index id on big(id);
create index id on test(id);
quit;
Of course, you probably should first check the table definition before doing that. You can use the "describe" statement to see the structure:
proc sql;
describe table big;
quit;
Indexes improve access performance at the cost of disk space and update maintenance. Once created, the indexes will be a permanent part of the SAS data set and will be automatically updated if you use SQL INSERT or DELETE statements. But be aware that the indexes will be deleted if you recreate the data set with a simple data step.
On the other hand, if these tables really are in an external database (like Oracle, for example), you have a different challenge. If that's the case, I'd ask a new question and provide a complete example of the SAS code you are using (including any libname statements).
If you are working with non-SAS data, i.e., data that resides in a SQL database (or a NoSQL database, for that matter), you will see significant improvements in performance using pass-through SQL or, if it is supported and you have the licenses for it, in-database processing.
One important point about PROC SQL vs pass-through SQL: PROC SQL, by default, pulls a copy of the original source data into SAS data sets before doing the work, whereas pass-through just requests the result set from the source data provider. In short, a table with 5 million rows will take a lot longer to work with via PROC SQL (even if you are only interested in about 1% of the data) than if you only pull that 1% of the data across the network using the pass-through mechanism.
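A minimal pass-through sketch, assuming both big and test actually live in an ODBC-accessible database; the dsn name and connection options are placeholders, not something from the original post:

proc sql;
  /* connect directly to the source database; dsn is a placeholder */
  connect to odbc (dsn="mydb");
  create table test2 as
  select * from connection to odbc (
    /* this inner query runs on the database server, so only the
       aggregated result set comes back across the network */
    select big.id, sum(big.myOtherField) as myOtherField_sum
    from big
    inner join test on test.id = big.id
    group by big.id
  );
  disconnect from odbc;
quit;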

Periodically store data from a PostgreSQL table to SQL Server 2005 table (with the same schema)

I have a PostgreSQL database that stores real-time data from sensors in a specific table (every 30sec).
What I want to do, is to get periodically the data from the remote PostgreSQL database (for instance every 30sec) and store them in SQL Server 2005 to manipulate them locally. I don't care about having the two databases with duplicate tables. Actually this is what I want to achieve!
So far, I have as Linked Server the PostgreSQL to SQL Server and I can query and retrieve the sensor data. However, I prefer to store them in my SQL Server for performance reasons.
Solution so far:
Make SELECT OPENQUERY statements against the linked PostgreSQL server and insert the results into my table in SQL Server. Repeat this periodically and store fresh data only (e.g. rows with a newer timestamp).
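Something along these lines (the linked server name PGLINK and the table/column names are just placeholders):

-- Find the newest reading already stored locally
DECLARE @last datetime
SELECT @last = ISNULL(MAX(reading_time), '19000101') FROM dbo.sensor_data_local

-- Pull the remote rows and keep only the new ones.
-- Note: OPENQUERY cannot take a variable, so the filter below is applied
-- after the rows arrive; to filter on the PostgreSQL side the query string
-- would have to be built dynamically and run with EXEC.
INSERT INTO dbo.sensor_data_local (sensor_id, reading_time, value)
SELECT sensor_id, reading_time, value
FROM OPENQUERY(PGLINK, 'SELECT sensor_id, reading_time, value FROM sensor_data')
WHERE reading_time > @last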
I assume that my proposed solution is not ideal. I want to know what are the best practices to achieve this synchronization between the two databases.
Thank you in advance!
If you don't want to write your own code to do that, you can use SymmetricDS to sync the table from PostgreSQL to MSSQL.