Accessing tables from different .mdb files - vb.net

I need to show a grid of saved projects (comparable to "orders") in a datagrid, where the projects are saved in an Access 2000 database with a schema similar to the following:
ID  Name      Country_ID  Plant_Type
1   'Test'    1           1
2   'Second'  2           2
Let's call the file "Projects.mdb". This is then shown in the datagrid as:
ID  Name      Country    Plant Type
1   'Test'    'Germany'  'Free Range'
2   'Second'  'France'   'Inclined Roof'
where the countries and "Plant Types" are fetched from a different table in a different .mdb file (also Access 2000; call it "Language.mdb", although there is a lot of other background data in it), depending on the current user's language preference. Unfortunately, merging these .mdb files into one is not an option.
To be able to show the datagrid I have so far linked the tables from "Language.mdb" into "Projects.mdb", but this breaks when the project is installed on another computer with the .msi file I created (we'd like to have this easily packaged and installed), because "Language.mdb" doesn't exist at the linked path on the target computer (that is basically the problem here).
I can come up with the following solutions:
Force all users to install on the same path, so that the links will work (undesirable)
Use connection strings in the query as shown here on MSDN (still trying this out, but I need to work on the details; a sketch appears after this list)
Make a post-install script that relinks the tables according to the correct path.
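For option 2, a minimal sketch of what such a query could look like in Access/Jet SQL follows. The path C:\Data\Language.mdb and the lookup table and column names (Countries, Plant_Types, etc.) are assumptions for illustration only; in practice the path would be built at runtime from wherever the user installed the application.

```sql
-- Hedged sketch of option 2: referencing tables in a second .mdb directly in the query.
-- The path and the lookup table/column names below are placeholders, not the real schema.
SELECT p.ID,
       p.Name,
       c.Country_Name AS Country,
       t.Type_Name    AS [Plant Type]
FROM (Projects AS p
INNER JOIN [;DATABASE=C:\Data\Language.mdb].Countries AS c
        ON p.Country_ID = c.ID)
INNER JOIN [;DATABASE=C:\Data\Language.mdb].Plant_Types AS t
        ON p.Plant_Type = t.ID;
```

Issued over the existing OleDb connection to Projects.mdb, this avoids permanent linked tables; the trade-off is that the Language.mdb path ends up embedded in every query, which is exactly what the answer below tries to avoid.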
But I think I'm doing something wrong here. As stated above, it is not an option to merge the .mdb files, but other suggestions, whether changing the database schema or anything else (I'm not very experienced with databases), would be very much appreciated.

To get around the 'different install paths' problem I use code (on every database load) which first looks for any back end databases in the current db folder; if not found, it asks the user to locate the missing .mdb file. Then the code relinks the database(s). Once the dbs have been successfully linked, the database saves the path and checks this path first on subsequent loads.

Well, based on the constraints that you have put on the solution, I would go with either option 2 or 3. There is no elegant solution to this at all.
I would, however, lean towards your third option as a "one time" fix to get the files linked, so that the path between them is known and you are not dynamically adding path information to every query.
Note:
I'll just mention (though I'm sure you already know this) that if you are looking at doing something like this, it feels wrong to be doing it with Access, let alone Access 2000, for client deployments at this point. I would also strongly recommend truly evaluating the solution to see whether you can either merge the databases into one or move to SQL Server Express or something similar that you could ship to the user in an installer.

Is the project split, as it should be, with a front end on each user's computer? If so, can you not store the path in the front end and only re-link if it changes? Code to re-link tables is quite simple, for the most part. The user can be allowed to browse for the location, and the Connect property can be updated accordingly.

Related

Automatically add database entry after ftp upload

Sorry if this seems stupid but I wonder if it's possible to add a database entry after an ftp upload.
To be clearer: thanks to winSCP, I have several folders that automatically send everything I put in them to my server.
However, I would like to create a MySQL entry for each uploaded file, and once again automatically. Is it possible to do that? How?
To give the full details of what I need to do, you can read the following.
I have several folders with pictures, and each folder is uploaded automatically.
Each of those folders belongs to one user, and the goal is to give them an account and allow them to see and download those files through a web interface. Since one account = one folder, that's kinda easy.
And I think a simple .htaccess can secure things so one user can only see and download the files in his own directory, no?
However, if I want them to be able to see what's new (i.e. files they haven't downloaded yet or simply marked as read), I think I need a table to manage those files.
Something like id | file (string) | read (bool).
If you think this way of proceeding is bad, then I'm open to changing how I do things, but to be clear, uploading the files needs to work this way, not through any kind of form.
Thanks for reading this, and sorry for my English.
Your problem contains three steps:
Folders/files are automatically uploaded to your server directory; as you say, this is already handled efficiently by winSCP.
You need to update your database with all the files and folders present in your server directory.
You need to track whether or not each file has been read/downloaded by the user.
Since your first step is in place, we don't need anything there. For the second step, you should write a script and schedule it to run at a fixed time interval, using cron on Linux/Unix (or Task Scheduler on Windows). The script would be responsible for building a list of the files present in the directory and inserting the information for any files that are not yet present in your database.
EDIT:
This edit describes how your script file should work. As I explained, the cron job simply helps you run your script file at a fixed interval (which can be every minute, every hour, every day, and so on). Let's say your database table has the following columns:
fileid (varchar[20])
filepath (varchar[20])
status (boolean)
Your script file should do the following things (a SQL-side sketch follows the note below):
Create a list of existing filepaths in your server directory
Run a select query to build a list of existing filepaths from the database table.
Compare list 1 with list 2, and find the ones that don't exist in list 2 (this gives you the list of filepaths that need to be inserted into the table).
Insert the list of file paths you got above, and set their status to false (which means the file has not been read/downloaded yet).
NOTE: Please keep in mind that I am not prescribing how your database table should look. It can be what you have proposed, or it can differ depending on your requirements.
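As a sketch of the database side of those steps, assuming MySQL and a hypothetical files table (the names and column sizes here are placeholders, not a recommendation, per the note above):

```sql
-- Hypothetical tracking table; adjust names, types and sizes to your needs.
CREATE TABLE IF NOT EXISTS files (
    fileid   INT AUTO_INCREMENT PRIMARY KEY,
    filepath VARCHAR(255) NOT NULL UNIQUE,   -- UNIQUE lets duplicate inserts be skipped
    status   BOOLEAN NOT NULL DEFAULT FALSE  -- FALSE = not yet read/downloaded
);

-- For each path the script finds on disk, insert it only if it is not already tracked.
-- INSERT IGNORE relies on the UNIQUE constraint on filepath to skip existing rows.
INSERT IGNORE INTO files (filepath, status)
VALUES ('/uploads/user1/photo_001.jpg', FALSE);
```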
For the third step, simply keep the status of each file set to unread when creating entries in your table during the second step. Then, when the user clicks the file link in your application, whether to view or download it, send a POST request to your server updating the file status to mark it as read.
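The handler behind that POST request then only needs something along these lines (again using the hypothetical files table from the sketch above):

```sql
-- Mark a single file as read/downloaded; the ? is a bound parameter
-- supplied by the web application from the POST request.
UPDATE files
SET status = TRUE
WHERE fileid = ?;
```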
Let me know if this helps!

SSDT Schema Compare keeps finding differences for users?

The title says it all, but for further details: when I use the schema compare tool in VS2015 between my SSDT project and my database on a server, the compare results always come back with the users as being different. I check the differences it reports, but there is not a single difference between the environments.
I even went as far as updating my project from the compare results to try and correct these "differences". I then ran another compare and the same users came back with differences again... WHAT! haha.
Anyone have a clue what would cause this sync issue, or am I doing something wrong? The users were added manually to the database on the server rather than through SSDT deployment, so maybe that is the reason?
EDIT:
Expand the arrow to the left of each user difference found. You will see a Properties folder and the missing login, which is the real difference. If you go to Schema Compare Options -> Object Types (tab) -> Non-Application-scoped -> Logins to add the Login object type to your comparison, your issue will be resolved.
Although I consider this a bit of a workaround (I haven't found a real solution to get rid of the users from the comparison even when they're identical), it's the best I've found. Simply exclude Users from the comparison.
You can access this menu by clicking the gear icon in the Schema Compare window, expanding the Application-scoped object types, and unchecking Users (or indeed anything else you want to exclude).
When I drill into the schema compare under Change > User > Properties > Spanner icon, I can see this difference:
Source (SQL Azure)    Target (Project)
==================    ================
UserType=2            UserType=0
What does this mean? I googled to no avail.
Updating doesn't fix it. Deleting the user script from the project and Updating doesn't either.
I can't find UserType in the source code so this must be generated internally by the compare.
In Visual Studio 2017 at least (and it looks like the option is in 2015 as well), try going to the Schema Compare options and, under the "General" tab, unticking the option "Ignore login SIDs".
I had this same problem with a user (where there wasn't a login defined for the user), and unticking this option resolved the problem for me. I still picked up actual correct user changes, but for existing users that are unchanged, the comparison no longer shows a bogus item.
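If you want to see for yourself which database users lack a matching server login (the situation that seems to trigger the bogus difference), a diagnostic query along these lines can help on an on-premises SQL Server. It is only an inspection sketch, not part of SSDT itself:

```sql
-- List database users and the server login (if any) they map to.
-- A NULL mapped_login is the "user without a login" case discussed above.
SELECT dp.name      AS database_user,
       dp.type_desc AS user_type,
       sp.name      AS mapped_login
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
       ON dp.sid = sp.sid
WHERE dp.type IN ('S', 'U', 'G');  -- SQL users, Windows users, Windows groups
```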

Logging When Files Are Saved, Modified or Deleted Using VBA

I work with VBA in MS Access databases. I'd like to be able to log when files are saved, modified or deleted without having to update the existing code to do the logging when the pertinent events take place. I want the time, location and the name of the file.
I found a good example here: when file modified
However, it only allows for monitoring a particular location (path). I want to be able to log regardless of where the save, modify or delete takes place. I'm only allowed to program in the MS Office environment in this situation. It seems as though using the Windows API is going to be how this task will be achieved. However, I don't have much experience working with the API. Is there an easier way to achieve what I want that doesn't involve using the API?
Have you worked with After Update or After Insert data macros? Also, is your application split, meaning there's a front end and a back end of the database? You can create a separate table that mirrors the table you need to track changes for. Every time that table is updated, run a macro that inserts a row into the log table.
I'm assuming you're saving files to the database. If that's the case, add an After Update or After Insert macro that keeps track of when the files are modified or added to the table.
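For reference, here is a hedged sketch of the kind of log table such a macro (or the VBA behind it) could write to; the table and column names are made up for illustration:

```sql
-- Hypothetical Access/Jet DDL for a change-log table.
CREATE TABLE FileChangeLog (
    LogID      AUTOINCREMENT PRIMARY KEY,
    FileName   TEXT(255),
    FilePath   TEXT(255),
    ChangeType TEXT(50),   -- e.g. 'Saved', 'Modified', 'Deleted'
    LoggedAt   DATETIME
);

-- The After Update / After Insert logic would effectively perform an insert like:
INSERT INTO FileChangeLog (FileName, FilePath, ChangeType, LoggedAt)
VALUES ('report.docx', 'C:\Docs\report.docx', 'Modified', Now());
```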

Lotus 8.51, 1 application, 1 server, 1 pc but different location and user different data?

I designed an application for testing workflow in Lotus 8.51. I ran the program and it worked fine, but I found a single oddity in the application: when I switch location to switch user ID on my PC, the data I entered in one location differs from, or does not exist in, the other one.
Could you suggest something to correct this?
I would suggest verifying that the database you are accessing is indeed the same from both IDs. (Do this by looking at the Application > Properties dialog; verify that both the server name and the path to the NSF file are the same.)
If the server and path are the same, but you do not see the data entered from the other ID, then it could be simply that the view you are accessing is filtering the documents to only show documents created (or "owned", etc) by the current user id. See if you can switch to a different view, or create a custom private view with no selection formula, and see if you now see the documents.
If you still do not see the document, then the database was probably designed to use a reader-names field computed in some way from the current user. That would result in user ID "X" not seeing documents created by user ID "Y". The only way to "fix" this would be to modify the form design to not use reader-names.

Do you put your database static data into source control? How?

I'm using SQL-Server 2008 with Visual Studio Database Edition.
With this setup, keeping your schema in sync is very easy. Basically, there's a 'compare schema' tool that allows me to sync the schema of two databases and/or a database schema with a source-controlled creation script folder.
However, the situation is less clear when it comes to data, which can be of three different kinds:
Static data referenced in the code. Typical example: my users can change their settings, and their configuration is stored on the server. However, there's a system-wide default value for each setting that is used in case the user didn't override it. The table containing those default settings grows as more options are added to the program. This means that when a new feature/option is checked in, the system-wide default setting is usually created in the database as well.
Static data, e.g. a product list populating a dropdown list. The program doesn't rely on the existence of a specific product in the list to work. This could be, for example, a list of unicode-encoded products that should be deployed in production when the new "unicode version" of the program is deployed.
Other data, i.e. everything else (logs, user accounts, user data, etc.).
It seems obvious to me that my third item shouldn't be source-controlled (of course, it should be backed up on a regular basis).
But regarding the static data, I'm wondering what to do.
Should I append the insert scripts to the creation scripts? or maybe use separate scripts?
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
Should I differentiate my two kinds of data? (The first one is usually created by a dev, while the second one is usually created by a non-dev.)
How do you manage your DB static data ?
I have explained the technique I used in my blog Version Control and Your Database. I use database metadata (in this case SQL Server extended properties) to store the deployed application version. I only have scripts that upgrade from version to version. At startup the application reads the deployed version from the database metadata (lack of metadata is interpreted as version 0, ie. nothing is yet deployed). For each version there is an application function that upgrades to the next version. Usually this function runs an internal resource T-SQL script that does the upgrade, but it can be something else, like deploying a CLR assembly in the database.
There is no script to deploy the 'current' database schema. New installations iterate through all intermediate versions, from version 1 to the current version.
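For concreteness, here is a hedged sketch of what a single upgrade step might look like with this approach. The property name SchemaVersion, the table dbo.DefaultSettings and the values are made up for illustration; the very first deployment would use sys.sp_addextendedproperty instead of the update.

```sql
-- Read the deployed version; a missing property is treated as version 0.
DECLARE @version INT = COALESCE(
    (SELECT CONVERT(INT, CONVERT(NVARCHAR(10), value))
     FROM sys.extended_properties
     WHERE class = 0 AND name = 'SchemaVersion'), 0);

-- Upgrade step 5 -> 6: a pure data change, e.g. changing a default setting.
IF @version = 5
BEGIN
    UPDATE dbo.DefaultSettings          -- hypothetical settings table
    SET SettingValue = 'bar'
    WHERE SettingName = 'SomeOption';

    EXEC sys.sp_updateextendedproperty  -- record that version 6 is now deployed
         @name  = N'SchemaVersion',
         @value = N'6';
END
```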
There are several advantages I enjoy with this technique:
It is easy for me to test a new version. I have a backup of the previous version; I apply the upgrade script, then I can revert to the previous version, change the script, and try again until I'm happy with the result.
My application can be deployed on top of any previous version. Various clients have various deployed versions. When they upgrade, my application supports upgrading from any previous version.
There is no difference between a fresh install and an upgrade, it runs the same code, so I have fewer code paths to maintain and test.
There is no difference between DML and DDL changes (your original question). They are all treated the same way, as scripts run to change from one version to the next. When I need to make a change like you describe (change a default), I actually increase the schema version even if no other DDL change occurs. So at version 5.1 the default was 'foo', in 5.2 the default is 'bar', and that is the only difference between the two versions; the 'upgrade' step is simply an UPDATE statement (followed of course by the version metadata change, i.e. sp_updateextendedproperty).
All changes are in source control, part of the application sources (T-SQL scripts mostly).
I can easily get to any previous schema version, e.g. to repro a customer complaint, simply by running the upgrade sequence and stopping at the version I'm interested in.
This approach saved my skin a number of times and I'm a true believer now. There is only one disadvantage: there is no obvious place to look in source to find 'what is the current form of procedure foo?'. Because the latest version of foo might have been upgraded 2 or 3 versions ago and it wasn't changed since, I need to look at the upgrade script for that version. I usually resort to just looking into the database and see what's in there, rather than searching through the upgrade scripts.
One final note: this is actually not my invention. This is modeled exactly after how SQL Server itself upgrades the database metadata (mssqlsystemresource).
If you are changing the static data (adding a new item to the table that is used to generate a drop-down list) then the insert should be in source control and deployed with the rest of the code. This is especially true if the insert is needed for the rest of the code to work. Otherwise, this step may be forgotten when the code is deployed and not so nice things happen.
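A hedged sketch of what such a source-controlled insert often looks like, written so it is safe to re-run on every deployment (the table and column names here are hypothetical):

```sql
-- Idempotent static-data insert: add the lookup row only if it is not there yet.
IF NOT EXISTS (SELECT 1 FROM dbo.ProductType WHERE ProductTypeCode = 'WIDGET')
BEGIN
    INSERT INTO dbo.ProductType (ProductTypeCode, ProductTypeName)
    VALUES ('WIDGET', 'Widget');
END;
```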
If static data comes from another source (such as an import of the current airport codes in the US), then you may simply need to run an already documented import process. The import process itself should be in source control (we do this with all our SSIS packages), but the data need not be.
Here at Red Gate we recently added a feature to SQL Data Compare allowing static data to be stored as DML (one .sql file for each table) alongside the schema DDL that is currently supported by SQL Compare.
The idea is that when you want to push changes to your target server, you do a comparison using the scripts as the source data source, which generates the necessary DML synchronization script to update the target. This means you don't have to assume that the target is being recreated from scratch each time. In time we hope to support static data in our upcoming SQL Source Control tool.
David Atkinson, Product Manager, Red Gate Software
I have come across this when developing CMS systems.
I went with appending the static data (the stuff referenced in the code) to the database creation scripts, then a separate script to add in any 'initialisation data' (like countries, initial product population etc).
For the first two steps, you could consider using an intermediate format (e.g. XML) for the data, then using a home-grown tool, or something like CodeSmith, to generate the SQL, and possibly source files as well if (for example) you have lookup tables which relate to enumerations used in the code; this helps enforce consistency.
This has another benefit that if the schema changes, in many cases you don't have to regenerate all your INSERT statements - you just change the tool.
I really like your distinction of the three types of data.
I agree for the third.
In our application, we try to avoid putting the first kind in the database, because it is duplicated (as it has to be in the code, the database copy would be a duplicate). A secondary benefit is that we need no join or query to get access to that value from the code, so this speeds things up.
If there is additional information that we would like to have in the database, for example if it can be changed per customer site, we separate the two. Other tables can still reference that data (either by index ex: 0, 1, 2, 3 or by code ex: EMPTY, SIMPLE, DOUBLE, ALL).
For the second, the scripts should be in source control. We separate them from the structure (I think they typically get replaced as time goes by, while the structure keeps accumulating deltas).
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
We have a complete procedure for that, and a readme coming with each release, with scripts and so on...
First off, I have never used Visual Studio Database Edition. You are blessed (or cursed) with whatever tools this utility gives you. Hopefully that includes a lot of flexibility.
I don't know that I'd make that big a difference between your type 1 and type 2 static data. Both are sets of data that are defined once and then never updated, barring subsequent releases and updates, right? In which case the main difference is in how or why the data is as it is, and not so much in how it is stored or initialized. (Unless the data is environment-specific, as in "A" for development, "B" for Production. This would be "type 4" data, and I shall cheerfully ignore it in this post, because I've solved it using SQLCMD variables and they give me a headache.)
First, I would make a script to create all the tables in the database, preferably only one script, otherwise you can have a LOT of scripts lying about (and find-and-replace when renaming columns becomes very awkward). Then I would make a script to populate the static data in these tables. This script could be appended to the end of the table script, made its own script, or even made one script per table, which is a good idea if you have hundreds or thousands of rows to load. (Some folks make a CSV file and then issue a BULK INSERT on it, but I'd avoid that, as it just gives you two files and a complex process [configuring drive mappings on deployment] to manage.)
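As an illustration of the one-populate-script-per-table variant, a small sketch with made-up names; each additional row is just another VALUES line in the checked-in script:

```sql
-- Hypothetical per-table populate script (e.g. Populate_Country.sql).
INSERT INTO dbo.Country (CountryID, CountryName)
VALUES (1, N'Germany'),
       (2, N'France'),
       (3, N'Spain');
```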
The key thing to remember is that data (as stored in databases) can and will change over time. Rarely (if ever!) will you have the luxury of deleting your Production database and replacing it with a fresh, shiny, new one devoid of all that crufty data from the past umpteen years. Databases are all about changes over time, and that's where scripts come into their own. You start with the scripts to create the database, and then over time you add scripts that modify the database as changes come along -- and this applies to your static data (of any type) as well.
(Ultimately, my methodology is analogous to accounting: you have accounts, and as changes come in you adjust the accounts with journal entries. If you find you made a mistake, you never go back and modify your entries; you just make subsequent entries to reverse and fix them. It's only an analogy, but the logic is sound.)
The solution I use is to have create and change scripts in source control, coupled with version information stored in the database.
Then, I have an install wizard that can detect whether it needs to create or update the db - the update process is managed by picking appropriate scripts based on the stored version information in the database.
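A minimal sketch of the version bookkeeping this relies on, with made-up object names; the wizard reads the highest applied version, runs every newer change script, and records each one as it goes:

```sql
-- Hypothetical version history table consulted by the install wizard.
CREATE TABLE dbo.SchemaVersionHistory (
    VersionNumber INT      NOT NULL PRIMARY KEY,
    AppliedAt     DATETIME NOT NULL DEFAULT GETDATE()
);

-- Read the currently deployed version (NULL means a fresh install).
SELECT MAX(VersionNumber) AS CurrentVersion FROM dbo.SchemaVersionHistory;

-- After each change script succeeds, record it.
INSERT INTO dbo.SchemaVersionHistory (VersionNumber) VALUES (42);
```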
See this thread's answer. Static data from your first two points should be in source control, IMHO.
Edit:
All-in-one or a separate script? It does not really matter, as long as you (the dev team) agree with your deployment team. I prefer separate files, but I can always create an all-in-one.sql from them in the proper order [Logins, Roles, Users; Tables; Views; Stored Procedures; UDFs; Static Data; (Audit Tables, Audit Triggers)].
How do you make sure they execute it? Well, make it another step in your application/database deployment documentation. If you roll out an application which really needs specific (new) static data in the database, then you might want to perform a DB version check in your application: you update the DB_VERSION to your new release number as part of that script, and then your application should check it on start-up and report an error if a newer DB version is required.
Dev and non-dev static data: I have never actually seen this case. More often there is real static data, which you might call "dev" data: major configuration, ISO static data, etc. The other type is default lookup data, which is there for users to start with, but they might add more. The mechanism to INSERT these data might be different, because you need to ensure you do not destroy (power-)user-created data.