Error adding vfp table to existing - variables

I copied a new version of a table into an existing VFP installation, and when the application then tries to access that table it comes up with a "Variable not found" error. The old version and new version of the table appear to have the same structure. Why could this happen? Does the DBC need to be updated in some way if I copy a new version of the table in? The structure is the same, but the data in it is different.
I copied the table in using Windows Explorer.

Is the DBC in the same folder as the table? If not, are they in the same relative position on the two different drives? If not, you'll get errors, though I wouldn't expect "Variable not found."
Did you bring along the FPT and CDX for the new file? Again, that's not the error I would expect, but failure to do so would cause problems.
Assuming all that is right, what's the actual line of code that's failing?

Was the table that you've copied in "freed" from its previous DBC before copying? If not, as soon as you attempt to USE it in the new location, I believe VFP will try to locate the DBC that it belongs to.
If you believe the table structure to be identical, then you might be better off leaving the existing one in place, ZAPping it to clean it out, and then appending the records from the other copy, as sketched below. Of course, you might need to temporarily switch off any INSERT triggers or row-level validation if you've got anything clever happening therein, such as updating a "last modified" field. AutoInc fields will also need to be handled with care, but it doesn't sound like this is something you're expecting to do on a regular basis, so it shouldn't be too onerous as a one-off exercise.
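A minimal sketch of that approach in VFP, assuming the table is called MyTable and the new copy sits in C:\NewData (both names are hypothetical); back up first, since ZAP irrevocably removes every row:

SET SAFETY OFF
USE MyTable EXCLUSIVE
ZAP
APPEND FROM "C:\NewData\MyTable.dbf"
USE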

Related

Update access backend schema with vba code

I have created an application in MS Access with VBA code. The app is split into a front end and a back end; the back end holds the user data. Now I want to add some new features, but this needs some extensive back-end changes to the schema structure.
My question is: what is the best way or practice to deliver to the end user the changes I want every time I upgrade my application? Of course, it has to be something that keeps the current data intact.
I have seen some suggestions about a free program called CompareEm that compares the old and new databases and produces the appropriate VBA code to do the job.
I was also considering whether it would be more convenient to ship an empty database that has the wanted schema alongside the upgraded front end, and have a module that compares the user's old database with the empty one and tries to change the old schema to match the new one (first by removing all relations, then converting the tables, and then reapplying the new relations).
In any case, I would like something that could be done automatically or by some custom code, in order to avoid messing up the users' data.
Any help is appreciated.
This is one weak spot of Access. We don't have a really great scripting language and system built in that allows scripting out of the database "schema".
So while the split design is a must-have, and makes updating the code and application part very easy, an update of the back end (BE) is a challenge.
I have done this in the past. And the way you go about this?
Well, you have to make up a rule, and the rule is this:
Anytime you add a new field/column, or a new table?
You MUST write the change(s) into a code module. In other words, you NEVER use the GUI anymore to add a new column (or, say, change a length). You write code in a sub that will do this for you. In fact, I did exactly this in one application I had (many customers, all 100% Access runtime).
So, to add new features, changes, fixes? I often had to add a few new columns or tables. So what I would do is WRITE the code into that "change/update" routine (and I also passed it a version number). On startup, that code would be called.
It started out with say about 5 lines of code. A few years later? I think it had well over 100 lines of code. And it worked very well. If users opened an older data file (the BE) - say it really was an older version because they had not paid for upgrades - then their current version of the software could be several upgrades behind. But, as noted, since all of the "updates" to the database were ALWAYS written as code run on startup, then even if they were say 5 versions back, all of the version-numbered code would run, make the changes to the BE, and they would be up to date.
So, this means you have to code this out. It's not all that hard, but you DO HAVE to adopt this way of working.
So, the code in that module looked like this:
There were two common cases. The first case was a WHOLE new table. I did not want to write out the code to create that table, so what I would do is INCLUDE the whole new table in the FE for this purpose (and I would append the letter "C" to the table name).
So, the code on startup would check if the table exists, and if it does not, then I would execute a transfer command to copy the table. The code stub looked like this:
' check for new table tblRemindDefaults
Dim rst As DAO.Recordset
On Error GoTo reminddefaultsadd
Set rst = CurrentDb.OpenRecordset("tblRemindDefaults")
So, in the above, I set the error handler and try to open the table. If the table does not exist, the code above jumps to the routine reminddefaultsadd, whose job of course is to add this new table to the BE.
That routine looked like this:
reminddefaultsadd:
Dim strToDB As String
strToDB = strBackEnd   ' path to the back-end file
' copy the template table (with the "C" suffix) from the FE to the BE under its real name
DoCmd.TransferDatabase acExport, "Microsoft Access", strToDB, acTable, "tblRemindDefaultsC", "tblRemindDefaults", True
Note how I then simply copy the table from the FE to the BE.
The next type of upgrade was adding a new column to an existing table.
And again, similar to the above, the code would check for the column and, if it was not present, add it. In this case we are not creating a new table, so there is no copy of the table.
Typical code looked like this:
' add our new default user field
Dim db As DAO.Database
Dim nT As DAO.TableDef
Dim strToDB As String
strToDB = strBackEnd                ' path to the back-end file
Set db = OpenDatabase(strToDB)      ' open the BE directly, not through linked tables
Set nT = db.TableDefs("tblEmployee")
nT.Fields.Append nT.CreateField("DefaultUser", dbText, 25)
nT.Fields.Refresh
Set nT = Nothing
db.Close
So in the above we add a new field called DefaultUser to the table.
If you OPEN the BE DIRECTLY, and do NOT use linked tables, then you are free to modify the table in question with code. You can ALSO use SQL DDL statements. So while I noted that scripting support in Access is not all that great, you can do this:
Set db = OpenDatabase(strToDB)
strSql = "ALTER TABLE tblEmployee ADD COLUMN DefaultUser VARCHAR(25);"
db.Execute strSql, dbFailOnError
So, you can write code using TableDefs, create the field, and append it. Or you can use SQL DDL statements and execute those. Of course, each upgrade part I wrote had a version-number test.
So, the simple concept is that every time you need a new column, new table, etc.?
You write out this code to make the change. So, you might work for a month or two and add a whole bunch of new features - and some of these features will require new columns. You NEVER use the GUI to do this; you write little code routines in your upgrade module and then run them. Then keep on developing. So, when you're done, all your updates to the table(s) are coded, and you can deploy the new FE to those users. On startup, the update code will run based on the version number. (I have a small one-row table in the FE with a version number, and also a small one-row table in the BE that has a version number.) After an upgrade, the new columns and new tables are now in the BE, and then of course I update that version number.
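Pulling those pieces together, a minimal sketch of that startup driver, assuming one-row version tables named tblVersionFE (in the FE) and tblVersionBE (in the BE) and per-version subs UpgradeTo2, UpgradeTo3 - all of these names are hypothetical, not from the original application:

Sub UpgradeBE()
Dim db As DAO.Database
Dim lngBE As Long, lngFE As Long
Set db = OpenDatabase(strBackEnd)   ' open the BE directly, as above
lngBE = db.OpenRecordset("SELECT Version FROM tblVersionBE")!Version
lngFE = CurrentDb.OpenRecordset("SELECT Version FROM tblVersionFE")!Version
' each UpgradeToN sub holds the DDL/TableDef changes for that version,
' so even a BE several versions behind catches up in one pass
If lngBE < 2 Then UpgradeTo2 db
If lngBE < 3 Then UpgradeTo3 db
If lngBE < lngFE Then db.Execute "UPDATE tblVersionBE SET Version = " & lngFE, dbFailOnError
db.Close
End Sub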
So, you can use a combination of SQL DDL commands, or use code to create the field def as I did above.
The only really important issue is that table changes canNOT be made against linked tables. So you have to create and open a SEPARATE instance of the database object, as I did above. And on startup, you have to ensure that no main form bound to a linked table is open or running yet (since that will open the BE, and you can't make table changes against an open table).
So, it's not too hard to make the updates. But the HARD part is your self-discipline. You have to ALWAYS go to your upgrade routines and add the new column in code. That way, after working for a few weeks, you have coded out the table changes and thus are able to re-run that code against the older existing BEs out in the field.
Welcome to Stack Overflow! When you send out a new version of your front-end db, you would need either (1) to include a script to update the back-end db, or (2) to ship a fresh copy of the back-end db together with a script to transfer the data from the previous version. In either case, you need to consider customers who may have skipped an update, so you would need some kind of version stamp on the back-end db.
You can accomplish either strategy with Access DDL (see here: https://learn.microsoft.com/en-us/office/client-developer/access/desktop-database-reference/data-definition-language) and/or Access DAO (see here: https://learn.microsoft.com/en-us/office/client-developer/access/desktop-database-reference/microsoft-data-access-objects-reference).
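For strategy (2), here is a minimal sketch of the data transfer, assuming hypothetical file paths and a table named tblCustomers, and using the Jet/ACE IN clause to read the user's old back-end file:

Sub MigrateData()
Dim db As DAO.Database
Set db = OpenDatabase("C:\App\Backend_new.accdb")   ' the freshly shipped BE
' the IN clause lets a query read tables from a second database file
db.Execute "INSERT INTO tblCustomers SELECT * FROM tblCustomers " & _
           "IN 'C:\App\Backend_old.accdb'", dbFailOnError
' ...repeat per table, parents before children so foreign keys resolve
db.Close
End Sub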
Please let us know if you have any specific questions but keep in mind SO is not intended for offering general advice or product reviews.

Copied an existing report in a current solution file and getting the following error

I copied an existing report and changed a tiny bit. The new report keeps showing an error message.
Nothing seems to have changed, but it is still complaining about some group expressions. The dataset is an embedded query. Any pointers would be appreciated.
The most likely causes are:
Your dataset no longer contains a field called PROJECTID (this is case-sensitive).
You have not refreshed the dataset fields (which may cause another error message that will lead you to the actual issue).
Your dataset is pointing to a data source that no longer exists (I think these are also case-sensitive), which in turn means the query fails, which means PROJECTID does not exist, hence your error.
Start from the end and work backwards.
Open the dataset query and execute it, or hit Refresh Fields.
If no errors are returned, check that the column PROJECTID exists and has the correct case.
This should point you in the right direction.

MS Access Error updating memo field with long text

Searching this problem returns quite a few search hits, but many off-track answers, so I'm posting a concise description here, and answer below.
The problem afflicts Microsoft Access 2010 and some versions before it. Access 2013 renames the Memo type to Long Text; I don't know whether it has the same problem.
The root problem is associated with running an UPDATE query on a table with a memo field, in certain particular circumstances. This might be an UPDATE query composed in the visual query window, or some VBA running SQL via DAO or ADO or similar. Or it could arise while updating via a form.
(The current post is concerned with this occurrence just within an Access database, though elsewhere you will find discussion of similar-sounding issues when Access is connected to an external database server.)
Instead of generating an immediate and obvious error alert, Access (or perhaps Jet) places the value #Error (which is not just the string "#Error"!) into the Memo field. This might easily go unnoticed until some later time, resulting in visible errors such as:
-- You use Compact and Repair. That seems to complete, but Access quietly adds an MSysCompactError table with a couple of rows. One error, -1611, complains that Access was stopped and couldn't complete the operation. A second, more-specific-seeming error complains that it can't find the field "Description". That appears to be an internal error with no relevance.
-- You try to copy the table to another database. Access gives an error complaining that another user is using the table or has updated the table, and won't complete the operation.
-- Other operations on the rows that, unnoticed by you, happen to contain the #Error values fail.
Regardless, the root problem is whatever causes the #Error values to get placed into the Memo fields in the first place.
Many posters have noted that it occurs if the UPDATE attempts to put strings longer than about 2000 characters into the Memo field. That's a surprise, as Memo fields should be able to hold a gigabyte of characters or more depending on the version, even if the UI only allows 65K.
So why does the error occur when updating with more than 2000 characters?
The key factor that provokes this error is the Memo field having an index. Apparently, although the Memo type field can hold a bazillion characters, the index can't deal with more than about 2000.
Knowing that this is the precipitating factor, a number of workarounds probably come to mind. First, you can obviously just drop the index. This solution is easy to verify in a dummy database: create two tables containing Memo fields, one with an index and the other without, then run update queries that put more than 2000 characters into each Memo and note the results.
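Here is a minimal sketch of that experiment in VBA/DAO; the table and field names are hypothetical, and it assumes your Access version allows an index on a Memo field to be created via DDL in the first place:

Sub TestMemoIndex()
Dim db As DAO.Database
Dim big As String
Set db = CurrentDb
db.Execute "CREATE TABLE tMemoPlain (ID COUNTER CONSTRAINT pk1 PRIMARY KEY, Notes MEMO)", dbFailOnError
db.Execute "CREATE TABLE tMemoIndexed (ID COUNTER CONSTRAINT pk2 PRIMARY KEY, Notes MEMO)", dbFailOnError
db.Execute "CREATE INDEX idxNotes ON tMemoIndexed (Notes)", dbFailOnError
db.Execute "INSERT INTO tMemoPlain (Notes) VALUES ('x')", dbFailOnError
db.Execute "INSERT INTO tMemoIndexed (Notes) VALUES ('x')", dbFailOnError
big = String(3000, "A")   ' longer than the ~2000-character threshold
db.Execute "UPDATE tMemoPlain SET Notes = '" & big & "'", dbFailOnError     ' expected to succeed
db.Execute "UPDATE tMemoIndexed SET Notes = '" & big & "'", dbFailOnError   ' per the above, leaves #Error behind
End Sub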
But perhaps you think you need the index? Your use case might be satisfied if you create a second field that contains an initial substring of the main Memo (shorter than 2000 characters) and index that instead. This could be used for sorting purposes, for example. In most cases, where a memo contains narrative information, it's unlikely that the memo values differ only after the first 2000 characters. Or perhaps you can devise a hash function and make a separate column of that.
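A sketch of that surrogate-index idea, again with hypothetical names, keeping a short indexed prefix of the Memo in a separate Text field:

Sub AddNotesPrefix()
Dim db As DAO.Database
Set db = CurrentDb
db.Execute "ALTER TABLE tblNotes ADD COLUMN NotesPrefix TEXT(255)", dbFailOnError
db.Execute "CREATE INDEX idxNotesPrefix ON tblNotes (NotesPrefix)", dbFailOnError
' refresh the prefix whenever the Memo changes
db.Execute "UPDATE tblNotes SET NotesPrefix = LEFT(Notes, 255)", dbFailOnError
End Sub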
What if you have a database that already contains these #Error values? Some advice floating around on the web, especially in relation to downstream problems like failure of Compact and Repair, suggests that your database is corrupt and should be abandoned. I'm not so sure. If you can delete the #Error-afflicted rows, then delete the index, and then recreate the deleted rows, you may be back in business. Compact and Repair should run properly at that point, giving some confidence that you fixed the offending part. (Make backups along the way, obviously.)
Workaround solution
Create two macros (Macro1 and Macro2).
Macro 1
Gather all the necessary information from the open form that contains the long text, then close the form.
Macro 2
Run all the needed actions, starting with the update query that raises the error.
Create a form (Form_on_error) with only a button that runs Macro2.
Finally, add this at the end of Macro 1:
On Error
Go to: Macro Name
Macro Name: On_Error_2590
RunMacro Macro2
Submacro On_Error_2590
OpenForm Form_on_error
End Submacro
.......and it works!!!
So, only when the update query raises the error does the user have to click the button on the form Form_on_error.

Table '' could not be loaded

I was hoping someone out there may have experienced this before.
I have a database that (as far as I'm aware) is in perfect working order. I have no problems with it whatsoever. I'm trying to add a column to some of the tables, but when I save the changes I get the following message.
This error message then gets stuck in a loop, and the only thing I can do is kill the SQL Server Management Studio process.
The database exists, the table exists, I can run any query I want against it, I just can't make any changes to it.
The steps I'm taking are:
Right click table
Select "design"
Right click "add new column" in designer
Fill in the details as normal
Click Save
Anyone know how I can resolve this?
Thanks.
It's telling you that you haven't specified the name of the table. The name of the table should appear between the two single quotes.
Without knowing how you're doing this it's hard to tell more, but the first two possibilities off the top of my head are:
If you're looping through tables in code to do something, you may be hitting a record with no table name.
If it's pure SQL, perhaps there is an error in your syntax.

Get list of columns of source flat file in SSIS

We get weekly data files (flat files) from our vendor to import into SQL, and at times the column names change or new columns are added.
What we currently have is an SSIS package that imports the columns that have been defined. Since we've assigned the mapping, SSIS only throws an error when a column is absent. However, when a new column is added (beyond the existing ones), it doesn't get imported at all, as it is not in the mapping. This is a concern for us.
What we'd like is to get the list of all the columns present in the flat file so that we can check whether any new columns are present before we import the file.
I am relatively new to SSIS, so detailed help would be much appreciated.
Thanks!
Exactly how to code this will depend on the rules for the flat file layout, but I would approach this by writing a Script Task that reads the flat file with a StreamReader object and looks at the columns, which are hopefully named in the first line of the file.
However, about all you can do if the columns have changed is send an alert. I know of no way to dynamically change your data transformation task to accommodate new columns; it will have to be edited to handle them. And frankly, if all you're going to do is send an alert, you might as well just use the error handler to do it and save yourself the trouble of pre-reading the column list.
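For what it's worth, the pre-check logic is tiny; here it is sketched in VBA purely for illustration (inside an SSIS Script Task you would write the equivalent in C# or VB.NET with a StreamReader), with a hypothetical file path and expected header:

Sub CheckHeader()
Dim intFile As Integer
Dim strHeader As String
Const EXPECTED As String = "Col1,Col2,Col3"   ' hypothetical expected layout
intFile = FreeFile
Open "C:\Imports\weekly.csv" For Input As #intFile
Line Input #intFile, strHeader                ' first line should name the columns
Close #intFile
If strHeader <> EXPECTED Then
    MsgBox "Column layout changed: " & strHeader   ' or send an alert instead
End If
End Sub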
I agree with the answer provided by @TabAlleman. SSIS can't natively handle dynamic columns (and neither can your SQL destination).
May I propose an alternative? You can detect a change in headers without using a C# Script Task. One way to do this would be to create a flat file connection that reads the entire row as a single column. Use a Conditional Split to discard anything other than the header row, and save that row to a Recordset object. Any change? Send email.
The "Get Header Row" DataFlow would look like this. Row Number if needed.
The Control Flow level would look like this. Use a ForEach ADO RecordSet object to assign the header row value to an SSIS variable CurrentHeader..
Above, the precedence constraints (the fx icons) with the expressions
@[User::ExpectedHeader] == @[User::CurrentHeader]
@[User::ExpectedHeader] != @[User::CurrentHeader]
determine whether you load data or send email.
Hope this helps!
I have worked for banking clients, and for banks, randomly adding columns to a db is not possible due to Fed requirements and rules. That said, I take it yours is not a Fed-regulated business, so here are some steps.
This is not a code issue but more one of soft skills and working with other teams (yours and your vendor's).
Steps you can take are:
(1) Reach a solid column structure that you always require, because for newer columns the older data rows will carry NULL.
(2) If a new column is going to be sent by the vendor, you or your team needs to make the DDL/DML changes to the table where the data will be inserted, with the correct data type of course.
(3) Document this change in the data dictionary, as over time you or another member will do analysis on this data and will want to know the use of each attribute or column.
(4) Long-term, you do not wish to keep changing the table structure monthly because one of your many vendors decided to change the style in which they send you data. Some clients push back very aggressively, others not so much.
If a third-party tool is an option for you, check out CozyRoc's Data Flow Task Plus. It handles variable columns in sources.
SSIS cannot make the columns dynamic.
One thing I always do is use a script task to read the first and last lines of a file. If it is not the expected list of CSV columns, I mark the file as errored and continue/fail as required.
Headers are obviously important, but so are footers: a file can, through any unknown issue, be partially built. Requesting that the header also be placed at the rear of the file gives you a double check.
I also do not know if SSIS can do this dynamically, but it never ceases to amaze me how people add/change the order of columns and assume things will still work.
1- SSIS does not provide dynamic source and destination mapping, but some third-party components, such as Data Flow Task Plus, support this feature.
2- We can achieve this using an SSIS script task.
3- If the header is correct, process further with the migration; otherwise fail the package before the DFT executes.
4- Read the header line using a script task and store it in an array or list object.
5- Then compare those array values to user-defined variables, declared earlier, containing the default column names.
6- If the values match exactly, progress further; otherwise fail it.