I use "Lexware Warenwirtschaft Premium 2014" (a well-known merchandise management software in Germany). It uses Sybase as a database. I connect to the database by using a ODBC connection(SQL Anywhere driver). The database has 800+ tables. For example when Lexware creates a new Article, it writes data into different tables.
Is there a way to track into which tables Lexware wrote data?
As an ad-hoc measure you could switch on ODBC tracing, and then review the contents.
http://support.microsoft.com/kb/274551 tells you how to do this from a Windows client, and you can find similar information for Linux/Unix and other clients.
You'd then have to parse the trace file to see which tables were written to. The first step would probably be to isolate all the SQLPrepare and SQLExecDirect calls, and check them for INSERT, UPDATE and other relevant Sybase statements.
Note that this is not something you'd want as an ongoing solution, just a way to find out what an ODBC client does if you do not have access to e.g. logging information on the database itself. The trace slows down execution and would generate a very large trace file if you left it running for any significant period.
I don't think so. Whatever this program does behind the interface is hidden in its binaries and unreadable to humans, so you can't read the code to see which tables are altered.
You might be able to figure out which table was edited last, depending on the SQL server and its version.
This question is quite general; however, I cannot find a good answer for it.
What are the possibilities for using an external database with MS Access?
I see that MySQL can be used, but I would have to set up an ODBC connection and install drivers on every machine. The issue is that I have software developed in MS Access that uses a lot of data, and it gets very slow when processing large amounts of data.
The software analyzes data from wind turbines, so it is used by different customers and it may contain a lot of different turbines with 50,000+ rows in each data set.
I would like the turbine data to be stored in a separate file that MS Access points to, so that I can ship the software plus whatever turbine data is wanted.
As it is now, I have a lot of Access database files where the data is bundled into the software. It becomes impossible to keep track of, especially when I edit the software's source code, which I do a lot these days.
Another issue is that the users may only have Access Runtime.
What are my options here? Is the best method to use the Access Link function?
Best regards, Emil.
Edit:
SQL queries - can they be combined?
SELECT q_DataLimited.YAW001, q_DataLimited.YAW002
FROM q_DataLimited
WHERE (((q_DataLimited.YAW002)>Degree_dsp() And (q_DataLimited.YAW002)<Degree_dsp_high()));
And
SELECT Count(q_WindRose_PCU.YAW001) AS CountOfYAW0011
FROM q_WindRose_PCU;
Edit 2:
Public Degree As Long
Public Function Degree_dsp() As Long
Degree_dsp = Degree * 20
End Function
I have Degree as a counter outside the function, in a form:
For Degree = 0 To 17
DoCmd.OpenQuery "q_WindRose_PCU"
DoCmd.Close
Next Degree
Edit 3:
How do I combine a query with appending its results to a table?
SELECT q_PowerBinned.Bin, Avg(q_PowerBinned.POW001) AS AvgOfPOW001, StDev(q_PowerBinned.POW001) AS StDevOfPOW001, Avg(q_PowerBinned.WSP001) AS AvgOfWSP001, StDev(q_PowerBinned.WSP001) AS StDevOfWSP001, Avg(q_PowerBinned.POW002) AS AvgOfPOW002, StDev(q_PowerBinned.POW002) AS StDevOfPOW002, Avg(q_PowerBinned.WSP002) AS AvgOfWSP002, StDev(q_PowerBinned.WSP002) AS StDevOfWSP002, Count(q_PowerBinned.Bin) AS CountOfBin
FROM q_PowerBinned
GROUP BY q_PowerBinned.Bin;
And then the append of the above to a table:
INSERT INTO t_Average_Stored ( Bin, PowAvg001, WindAvg001, PowAvg002, WindAvg002, n_samples, PowDev001, WindDev001, PowDev002, WindDev002 )
SELECT q_Average_Temp.Bin, q_Average_Temp.AvgOfPOW001, q_Average_Temp.AvgOfWSP001, q_Average_Temp.AvgOfPOW002, q_Average_Temp.AvgOfWSP002, q_Average_Temp.CountOfBin, q_Average_Temp.StDevOfPOW001, q_Average_Temp.StDevOfWSP001, q_Average_Temp.StDevOfPOW002, q_Average_Temp.StDevOfWSP002
FROM q_Average_Temp;
I see already a few suggestions in the comments, but I am going to answer the general question you posted. In short, the possibilities are endless.
MS Access, and Excel for that matter, have excellent external data tools that allow you to connect to almost any external data source and leverage regular SQL-based databases, or even use OLAP cubes for your analysis. Access itself should be powerful enough to handle the data sets you mention; even Access 2010 should be able to handle millions of records with relative ease.
MS Access does have one significant limitation: the 2 GB file size. Once your database reaches 2 GB, everything goes out the window and you are very likely to get data corruption. This is a well-known issue, but I don't think you are anywhere near that limit.
Before considering an upgrade, though, there are a few things to suggest:
Analyze the structure of your data and your database. Perhaps your tables are too big (lots of columns) and unnecessarily redundant. It may make sense to process the raw data you receive to split it into different tables that reduce the redundancy and improve performance.
Look into indexing key fields in your tables. This is heavily dependent on the type of analysis you do and which queries are most common. Read up on indexes and how to use them, and explore some options with actual datasets. You may be surprised how queries that used to take minutes become almost instantaneous when the right indexes are created and maintained (see the sketch after this list).
Analyze your queries for performance. If I remember correctly, MS Access 2010 has a Performance Analyzer, which can suggest changes that make your queries run more efficiently.
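As an illustration of the indexing point above, a minimal sketch (the table and column names are hypothetical, standing in for your turbine data):
-- Index the columns your WHERE clauses and joins hit most often:
CREATE INDEX idx_Turbine_Time ON t_TurbineData (TurbineID, ReadingTime);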
If you have already looked into the items above and you decide you really need to take a step up, one fairly easy (and inexpensive) path is to install SQL Server Express, which you can download for free from Microsoft. Access was made to talk to SQL Server and the performance is many times better. You can run SQL Server Express on your personal PC and use it as a back end for Access, or you could install it on a networked PC and use it as a server (behind a firewall, of course, and NEVER connected to the Internet). In this setup you can access your data from several PCs.
One key thing to keep in mind once you start using Access as a front end is that you want to push the processing to the back end, not keep it in Access. The best way to do this is to create what Access calls pass-through queries. These queries are written in the back end's native SQL dialect and are sent to the back-end server for processing; only the processed results come back. If you don't do this, for example by building the queries in Access's visual editor instead, the raw data will be pulled into Access and Access will try to produce your results itself. This, as you can imagine, can actually be a lot slower than your initial situation, so don't do it.
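To make that concrete, the SQL of a pass-through query could look like the following T-SQL sketch, reusing column names from your question (the table name is hypothetical). The aggregation runs on SQL Server, and only the per-bin summary rows travel back to Access:
-- Runs entirely on the server when used as a pass-through query:
SELECT Bin,
       AVG(POW001)   AS AvgOfPOW001,
       STDEV(POW001) AS StDevOfPOW001,
       COUNT(*)      AS n_samples
FROM dbo.t_TurbineData
GROUP BY Bin;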
If you are not a SQL expert and need a visual editor, there is a tool you can download from Microsoft: SQL Server Management Studio Express. Its query editor is not that different from Access's and will allow you to create queries visually, but in Transact-SQL (the language of SQL Server). You can also manage your SQL Server Express instance with this tool and maintain your data (import, export, etc.). You can create the SQL statements you need in this editor and then copy and paste them into pass-through queries in Access. The data will be available to you in the program you are familiar with, but with the power of a much bigger database engine behind the scenes.
Since I do not want to sound like a Microsoft shill, I definitely want to mention other options for external data that can be equally or even more powerful than SQL Server Express. The only reason I mentioned those first is that you are already familiar with Microsoft products, so the learning curve is a bit less steep and most things should work together out of the box.
The first option that comes to mind is SQLite, a high-performing database that is file-based. It is very small yet very powerful and fast, and it is ideal for a locally based application like the one you describe. There are also lots of graphical interfaces for SQLite, and you can connect to it via ODBC from Access. Again, you want to run everything using pass-through queries and let SQLite pick up the load. SQLite is Open Source and free.
If you are keen on having "a real database server", then MySQL is probably the next step up. Also Open Source and free, it is very popular, which means lots of places to get support and different graphical interfaces to choose from.
Any search for Open Source Database will give you even more options to try and choose from.
One key thing to keep in mind: if you install any database server on your PC, it becomes a server and will start advertising its services on your local network, or on the Internet if you take it to a local Starbucks. Be careful with that; learn how to start and stop the services on your PC, and make sure you turn them off when you are not behind a firewall. There are many exploits for different database servers, and you will be detected quickly once your PC starts advertising its newly acquired abilities.
Just to close: there is no difference in performance between full Access and the runtime, just the ability to edit the queries and so on. Whatever front end you create in Access, your users will be able to use in the same manner.
I need to do some data migration between two Oracle databases that are on different servers. I've thought of some ways to do it, like writing a JDBC program, but I think the best way is to do it in SQL itself. I could also copy the entire table over to the database I am migrating to, but these tables are big and that doesn't seem like an "elegant" solution.
Is it possible to open a connection to one DB in SQL Developer, then connect to the other one using SQL and write update/insert statements against tables as if they were both in the same connection?
I have read some examples on creating linked tables, but none seem to be Oracle-specific or tell me how to open the external connection by supplying the server hostname/port/SID/user credentials.
Thanks for the help!
If you create a database link, you can just select from the other database by querying TABLENAME@dblink.
You can create such a link using the CREATE DATABASE LINK statement.
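A minimal sketch (the host, port, service name, and credentials below are placeholders, assuming easy-connect naming is allowed; otherwise put a TNS alias in the USING clause):
-- Create a link to the remote database ('//host:port/service_name'):
CREATE DATABASE LINK remote_db
  CONNECT TO scott IDENTIFIED BY tiger
  USING '//remotehost:1521/ORCL';

-- Then query a remote table as if it were local:
SELECT * FROM employees@remote_db;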
It depends on whether it's a one-time thing or a regular process, and whether you need to do ETL (Extract, Transform and Load) or not, but I'll help you out based on what you explained.
From what I can gather from your explanation, what you are attempting to accomplish is to copy a couple of tables from one DB to another. If the databases can reach one another, it's really simple: you could just create a DBLINK (http://www.dba-oracle.com/t_how_create_database_link.htm) and then do an INSERT ... SELECT from either side, using the DBLINK for one of the tables and the local table as the receiver or sender. It's pretty straightforward.
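For instance, a minimal sketch (the table and link names are placeholders):
-- Copy all rows from the remote table into the local one over the link:
INSERT INTO local_table
SELECT * FROM remote_table@remote_db;
COMMIT;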
But if it's a one-time thing, I would just move the table with expdp and impdp, since that will be a lot faster and put a lot less strain on the DB.
If it's something you need to maintain and keep updated, why not just add the DBLINK and use that on both sides? This will be dependent on network performance, though.
If this is a bit out of your depth, or you can't create DB links due to restrictions, SQL Developer has had a database copy option for a while and you can go as far as copying individual tables, but it's very heavy on the system where it's being run (http://deepak-sharma.net/2014/01/12/copy-database-objects-between-two-databases-in-oracle-using-sql-developer/).
I have a delicate situation wherein some records in my database are inexplicably missing. Each record has a sequential number, and the number sequence skips over entire blocks. My server program also keeps a log file of all the transactions received and posted to the database, and those missing records do appear in the log, but not in the database. The gaps of missing records coincide precisely with the dates and times of the records that show in the log.
The project, still currently under development, consists of a server program (written by me in Visual Basic 2010) running on a development computer in my office. The system retrieves data from our field personnel via their iPhones (running a specialized app also developed by me). The database is located on another server in our server room.
No one but me has access to my development server, which holds the log files, but there is one other person who has full access to the server that hosts the database: our head IT guy, who has complained that he believes he should have been the developer on this project.
It's very difficult for me to believe he would sabotage my data, but so far there is no other explanation that I can see.
Anyway, enough of my whining. What I need to know is, is there a way to determine who has done what to my database?
If you are using an identity column for your "sequential number" and your insert statement errors out, the identity value will still be incremented even though no record has been inserted. That's just another possible cause for this issue besides "tampering".
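A quick way to see this (hypothetical table; the CHECK constraint just forces a failure):
-- Failed inserts still consume identity values, leaving gaps:
CREATE TABLE t_Demo (ID INT IDENTITY(1,1), Val INT CHECK (Val > 0));

INSERT INTO t_Demo (Val) VALUES (1);   -- gets ID 1
INSERT INTO t_Demo (Val) VALUES (-5);  -- violates the CHECK and fails, but ID 2 is burned
INSERT INTO t_Demo (Val) VALUES (3);   -- gets ID 3, so the sequence skips 2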
Look at the transaction log if it hasn't been truncated yet:
How to view transaction logs in SQL Server 2008
How do I view the transaction log in SQL Server 2008?
If you want to catch the changes in real time, I suggest you consider using SqlDependency. This way, when data changes, you will be alerted immediately and can check which user is using the database at the very moment (this could also be done using code).
You can use this code sample.
Come to think of it, you can achieve the same effect using a trigger that writes to a table of active users. Of course, if you suspect someone is tampering with the data, using SqlDependency might be the better way to go, as the data will be stored outside of the tampered database.
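For instance, a rough sketch of such a trigger (all table and column names here are hypothetical):
-- Log who deleted which rows, and when; ideally the log table lives
-- in a separate database the suspect cannot modify.
CREATE TRIGGER trg_Orders_DeleteAudit ON dbo.Orders
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO AuditDB.dbo.DeleteLog (RecordID, DeletedBy, HostName, DeletedAt)
    SELECT d.OrderID, SUSER_SNAME(), HOST_NAME(), GETDATE()
    FROM deleted d;
END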
You can run a trace, for example a remote Profiler trace, that captures all SQL queries containing the DELETE keyword. This way, nobody will be aware that queries are being traced. You can also query the default trace regularly to get the last DELETE commands: Maintaining SQL Server default trace historical events for analysis and reporting
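Something along these lines can read the default trace (note that the default trace only captures a limited event set, so a custom server-side trace is usually needed for DML; the LIKE filter is just an illustration):
-- Read the default trace files and look for statements mentioning DELETE:
SELECT t.StartTime, t.LoginName, t.HostName, t.TextData
FROM sys.traces tr
CROSS APPLY sys.fn_trace_gettable(tr.path, DEFAULT) t
WHERE tr.is_default = 1
  AND t.TextData LIKE '%DELETE%';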
I've been searching the internet for this question:
What are ways to transfer data and tables on a daily basis from Oracle's Hyperion to SQL Server 2000?
I am an intern at a company trying to figure out possible ways to do this. Any help or pointer in the right direction is greatly appreciated.
This is going to depend a lot on specifics. Here are just a few possible solutions:
DTS
DTS is packaged with SQL 2000 and is made for this kind of a task. If written correctly, your DTS package can have good error-handling and be rerunnable/reusable.
SSIS
SSIS is actually packaged with SQL 2005 and above, but you can connect it to other databases. It's basically a better version of DTS (technically it's radically different from DTS, but it has a lot of the same functionality).
Linked Servers
From SQL 2000 you should be able to connect directly to your Oracle database as a linked server. In the pros column this kind of direct access can be easy to work with if you don't have any other technical skills such as DTS or SSIS, but it can be complex to get the initial set-up right and there may be security concerns/issues.
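A minimal sketch of the set-up (the linked server name, TNS alias, and credentials are placeholders; MSDAORA is the Microsoft OLE DB Provider for Oracle from that era):
-- Register the Oracle database as a linked server, map a login, then query it:
EXEC sp_addlinkedserver
     @server = 'ORACLE_LNK',
     @srvproduct = 'Oracle',
     @provider = 'MSDAORA',
     @datasrc = 'OracleTnsAlias';

EXEC sp_addlinkedsrvlogin
     @rmtsrvname = 'ORACLE_LNK',
     @useself = 'false',
     @rmtuser = 'scott',
     @rmtpassword = 'tiger';

SELECT * FROM OPENQUERY(ORACLE_LNK, 'SELECT * FROM scott.emp');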
Build Your Own
Depending on what other technologies you use you can build your own application to do the ETL (Extract/Transform/Load, which is what you're doing). This could be in .NET, Java, etc. In the pros column you can use something with which you're familiar but there's a big downside here in that most of the low level type of work is already out there in tools like DTS/SSIS, so why reinvent the wheel?
BCP
You can simply extract the data from Oracle as .csv files (or some other format) and then import them back in using SQL Server's Bulk Copy Process. This can be fast, but there aren't many bells and whistles to go with this. If this is a one-time thing with just a few tables though then this is probably the easiest and fastest way to do it.
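If you go this route, the import side might look something like this (the file path and table are hypothetical; bcp.exe on the command line is the other half of the Bulk Copy Process):
-- Load a delimited extract from Oracle into a SQL Server staging table:
BULK INSERT dbo.StagingTable
FROM 'C:\extracts\oracle_dump.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');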
Third Party Applications
There are a slew of ETL applications already written out there (Data Import, Data Slave, etc.). They will usually provide wizards and one-click solutions (maybe a few more than one click), but they are also going to cost a bit of extra money.
EDIT:
Given your latest comment, I would probably go with a DTS package scheduled in SQL Agent to run daily. You can add in error-handling and have the system email/text/call someone if there's ever an issue (or do positive-case reporting, i.e. send a message on every success, so that someone knows there's a problem if they don't get a message each day).
In our company we use ADO.NET for the same task.
We created a connection to Oracle as the source, read all the data, and then wrote it into SQL Server.
You could write DTS packages to copy the data, and schedule them to run within Sql Server Agent.
See DTS Overview for information on DTS packages.
Here's a tutorial on creating a DTS package: Creating DTS Packages With SQL Server 2000
Oracle Hyperion is a suite of products, largely unrelated to Oracle's database product. I expect you are referring to a product such as Hyperion Financial Management or Hyperion Strategic Finance. These products have APIs that can be consumed using COM Interop or web services. The data can be extracted from the internal multidimensional database by analyzing the database metadata, creating dimension trees, and then using the information to create selections, that represent subcubes within the database; allowing you to get or set cell data.
I don't know what your level of knowledge of multidimensional databases is, but unless it is substantial you may find the task pretty hard. You also need to get a handle on the particular product API.
My company specializes in these kinds of activities, and we have components for this kind of thing. Drop me a line on my blog if you need further advice.
danielvaughan.org
Cheers,
Daniel
I don't know anything about Hyperion, but SQL Server 2000 is very old and may not have a driver to pull data from Hyperion if that product is newer than the year 2000. You may need to look for a way to push the data from Hyperion rather than pull it into SQL Server 2000. One way I have done this in the past is to create a pipe-delimited text file from the database that originally has the data and place it in a processing directory. I do know that DTS will process a pipe-delimited text file. So if you can't find a driver to process this data directly, consider whether you can push it out to a file and then process that. You will have to schedule a time gap between the job on Hyperion that creates the file and the DTS package job. But if you are only doing it once a day, that's probably not a problem.
I would like to log changes made to all fields in a table to another table. This will be used to keep a history of all the changes made to that table (Your basic change log table).
What is the best way to do it in SQL Server 2005?
I am going to assume the logic will be placed in some Triggers.
What is a good way to loop through all the fields checking for a change without hard coding all the fields?
As you can see from my questions, example code would be veeery much appreciated.
I noticed SQL Server 2008 has a new feature called Change Data Capture (CDC). (Here is a nice Channel9 video on CDC). This is similar to what we are looking for, except we are using SQL Server 2005, already have a log table layout in place, and are also logging the user that made the changes. I also find it hard to justify writing out the before and after image of the whole record when only one field might change.
Our current log file structure in place has a column for the Field Name, Old Data, New Data.
Thanks in advance and have a nice day.
Updated 12/22/08: I did some more research and found these two answers on Live Search QnA
You can create a trigger to do this. See
How do I audit changes to sql server data.
You can use triggers to log the data changes into the log tables. You can also purchase Log Explorer from www.lumigent.com and use that to read the transaction log to see what user made the change. The database needs to be in full recovery for this option however.
Updated 12/23/08: I also wanted a clean way to compare what changed, and this looked like the reverse of a PIVOT, which I found out is called UNPIVOT in SQL. I am now leaning towards a trigger using UNPIVOT on the INSERTED and DELETED tables. I was curious whether this had already been done, so I am searching for "unpivot deleted inserted".
The posting Using update function from an after trigger had some different ideas, but I still believe UNPIVOT is the route to go; see the sketch below.
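For the record, here is a rough sketch of what such a trigger could look like (the table, key, and log columns are hypothetical; UNPIVOT requires all unpivoted columns to share one type, hence the CASTs, and it silently drops NULL values):
-- Log per-field changes from an UPDATE into a FieldName/OldData/NewData table:
CREATE TRIGGER trg_Sample_Audit ON t_Sample
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO t_ChangeLog (RecordID, FieldName, OldData, NewData, ChangedBy, ChangedAt)
    SELECT d.ID, d.FieldName, d.FieldValue, i.FieldValue, SUSER_SNAME(), GETDATE()
    FROM (SELECT ID, FieldName, FieldValue
          FROM (SELECT ID,
                       CAST(FieldA AS VARCHAR(100)) AS FieldA,
                       CAST(FieldB AS VARCHAR(100)) AS FieldB
                FROM deleted) s
          UNPIVOT (FieldValue FOR FieldName IN (FieldA, FieldB)) u) d
    INNER JOIN
         (SELECT ID, FieldName, FieldValue
          FROM (SELECT ID,
                       CAST(FieldA AS VARCHAR(100)) AS FieldA,
                       CAST(FieldB AS VARCHAR(100)) AS FieldB
                FROM inserted) s
          UNPIVOT (FieldValue FOR FieldName IN (FieldA, FieldB)) u) i
      ON d.ID = i.ID AND d.FieldName = i.FieldName
    WHERE d.FieldValue <> i.FieldValue;
END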
Quite late but hopefully it will be useful for other readers…
Below is a modification of my answer I posted last week on a similar topic.
Short answer is that there is no “right” solution that would fit all. It depends on the requirements and the system being audited.
Triggers
Advantages: relatively easy to implement, and a lot of flexibility over what is audited and how the audit data is stored, because you have full control
Disadvantages: it gets messy when you have a lot of tables and even more triggers. Maintenance can get heavy unless there is some third-party tool to help. Also, depending on the database, it can cause a performance impact.
Creating audit triggers in SQL Server
Log changes to database table with trigger
CDC
Advantages: Very easy to implement, natively supported
Disadvantages: Only available in Enterprise edition, and not very robust – if you change the schema, your data will be lost. I wouldn't recommend this for keeping a long-term audit trail
Reading transaction log
Advantages: all you need to do is put the database in full recovery mode, and all the info will be stored in the transaction log
Disadvantages: You need a third party log reader in order to read this effectively
Read the log file (*.LDF) in sql server 2008
SQL Server Transaction Log Explorer/Analyzer
Third party tools
I’ve worked with several auditing tools from ApexSQL but there are also good tools from Idera (compliance manager) and Krell software (omni audit)
ApexSQL Audit – Trigger-based auditing tool. Generates and manages auditing triggers
ApexSQL Log – Allows auditing by reading transaction log
Under SQL '05 you actually don't need to use triggers. Just take a look at the OUTPUT clause. OUTPUT works with inserts, updates, and deletes.
For example:
-- #TempTable must exist before OUTPUT ... INTO can write to it:
CREATE TABLE #TempTable (description VARCHAR(50), phone VARCHAR(20));

INSERT INTO mytable (description, phone)
OUTPUT INSERTED.description, INSERTED.phone INTO #TempTable
VALUES ('blah', '1231231234')
Then you can do whatever you want with the #TempTable, such as inserting those records into a logging table.
As a side note, this is an extremely easy way of capturing the value of an identity field.
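For instance, assuming mytable has an identity column named id (hypothetical), a quick sketch:
-- OUTPUT hands back the identity value generated by the insert:
DECLARE @NewIds TABLE (id INT);

INSERT INTO mytable (description, phone)
OUTPUT INSERTED.id INTO @NewIds
VALUES ('blah', '1231231234');

SELECT id FROM @NewIds;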
You can use Log Rescue. It is quite similar to Log Explorer, but it is free.
It can view the history of each row in any table, with logging info for user, action, and time.
And you can undo to any version of a row without setting the database to full recovery mode.