VFP. Re-creating indexes - indexing

I would like to be able to re-create the index to a table which is part of a database.
The index to the table is corrupted, so the table cannot be opened (certainly not without an error message). I would like to use the information in the database container to reconstruct the index (with its several tags).
Perhaps I could offer the user a facility to browse a list of tables in the database and then choose one table (or maybe all) to be re-indexed or packed. I have looked at the DBGETPROP() and CURSORGETPROP() function calls, but have not found options which give me the information I need.
As always, grateful for guidance.
Thanks Tamar. Yes, I had been opening the .dbc, but then wanted the tag names and index expressions.
What I would like to do is build up a cursor with details of the tag names and index expressions for each table in the database.
As far as I can see I need to get the names of the tables from the .dbc and then open each table to find its indexes. I think these are only in the tables themselves, not in the .dbc. I can see that the SYS(14) function will give me the index expressions and that TAG() will give me the tag names. Is that the way to go, or is there some function already available which will do these things for me?

The database container is a table, so you can open it with USE to read the data there.

You shouldn't rely on the DBC to find the index information or to rebuild from it. It doesn't hold that information anyway (except for a few things, such as whether a field is a PK or is indexed, which is not of much value). Sometimes opening the database exclusively (and having exclusive access to its tables) and then doing a:
validate database recover
helps, but you can't rely on that either. To correctly recreate the indexes, you have to delete them all (that is, delete the CDX files) and then recreate them from scratch.
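For a single table, a minimal sketch of that approach might look like this (the table name, tag names, and key expressions below are hypothetical; it assumes the DBF itself is still readable once the damaged CDX is out of the way):
* Rebuild the structural index of customer.dbf from known tag definitions
CLOSE DATABASES ALL
ERASE customer.cdx                       && remove the corrupted structural index
USE customer EXCLUSIVE                   && choose "Ignore" if VFP reports the missing CDX
INDEX ON cust_id TAG cust_id CANDIDATE   && recreate each tag from its saved expression
INDEX ON UPPER(lastname) TAG lastname
USE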
If it helps, below is the utility code I have written and use for "repair" work, in case it is needed. It creates a kind of data dictionary for both DBC and free tables:
* CreateDictionary.prg
* Author: Cetin Basoz
* CreateDictionary('c:\mypath\v210\data','','zipcodes,states')
* Creates DataDictionary files in 'DataDic' directory (Default if not specified)
* using 'c:\mypath\v210\data' dir as source data dir
* adds zipcodes and states.dbf as static files (with data as is)
* tcUserStatic - tables that should be kept on user as is
Lparameters tcDataDir, tcDicDir, tcStaticList, tcUserStaticList
tcDataDir = iif(type('tcDataDir')='C' and directory(addbs(m.tcDataDir)), addbs(m.tcDataDir), sys(5)+curdir())
tcDicDir = iif(type('tcDicDir')='C' and !empty(m.tcDicDir), addbs(m.tcDicDir), 'DataDic\')
tcStaticList = iif(Type('tcStaticList')='C',trim(m.tcStaticList),'')
If !directory(justpath(m.tcDicDir))
Md (justpath(m.tcDicDir))
Endif
Close data all
lnDatabases = adir(arrDBC,m.tcDataDir+'*.dbc')
Create table (m.tcDicDir+'DBCreator') (DBCName M nocptrans, FileBin M nocptrans, Filename c(128) nocptrans)
for ix = 1 to m.lnDatabases
open data (m.tcDataDir+arrDBC[m.ix,1])
do home()+'Tools\Gendbc\gendbc' with forceext(m.tcDicDir+arrDBC[m.ix,1],'PRG')
compile (forceext(m.tcDicDir+arrDBC[m.ix,1],'PRG'))
insert into (m.tcDicDir+'DBCreator') ;
values (arrDBC[m.ix,1], ;
FileToStr(forceext(m.tcDicDir+arrDBC[m.ix,1],'FXP')), ;
forceext(arrDBC[m.ix,1],'FXP'))
erase (forceext(m.tcDicDir+arrDBC[m.ix,1],'PRG'))
erase (forceext(m.tcDicDir+arrDBC[m.ix,1],'FXP'))
if file(forceext(m.tcDicDir+arrDBC[m.ix,1],'KRT'))
insert into (m.tcDicDir+'DBCreator') ;
values (arrDBC[m.ix,1], ;
FileToStr(forceext(m.tcDicDir+arrDBC[m.ix,1],'KRT')),;
forceext(arrDBC[m.ix,1],'KRT'))
erase (forceext(m.tcDicDir+arrDBC[m.ix,1],'KRT'))
endif
endfor
Close data all
Create cursor crsSTRUCTS ;
(FIELD_NAME C(128) nocptrans, ;
FIELD_TYPE C(1), ;
FIELD_LEN N(3, 0), ;
FIELD_DEC N(3, 0), ;
FIELD_NULL L, ;
FIELD_NOCP L, ;
_TABLENAME M nocptrans)
Create cursor crsINDEXES ;
(TAG_NAME C(10) nocptrans, ;
KEY_EXPR M, ;
NDXTYPE C(1), ;
IS_DESC L, ;
FILTEREXPR M nocptrans, ;
_TABLENAME M nocptrans)
Select 0
lnTables = adir(arrTables,m.tcDataDir+'*.dbf')
For ix=1 to m.lnTables
Use (m.tcDataDir+arrTables[m.ix,1])
if empty(cursorgetprop('Database'))
lnFields=afields(arrStruc)
For jx=1 to m.lnFields
arrStruc[m.jx,7]=arrTables[m.ix,1]
Endfor
Insert into crsSTRUCTS from array arrStruc
Release arrStruc
If tagcount()>0
Dimension arrIndexes[tagcount(),6]
For jx=1 to tagcount()
arrIndexes[m.jx,1] = tag(m.jx)
arrIndexes[m.jx,2] = key(m.jx)
arrIndexes[m.jx,3] = iif(Primary(m.jx),'P',iif(Candidate(m.jx),'C',iif(unique(m.jx),'U','R')))
arrIndexes[m.jx,4] = descending(m.jx)
arrIndexes[m.jx,5] = sys(2021,m.jx)
arrIndexes[m.jx,6] = arrTables[m.ix,1]
Endfor
Insert into crsINDEXES from array arrIndexes
Endif
endif
Use
Endfor
Select crsSTRUCTS
Copy to (m.tcDicDir+'NewStruc')
Select crsINDEXES
Copy to (m.tcDicDir+'NewIndexes')
Create table (m.tcDicDir+'static') (FileName M nocptrans, FileBin M nocptrans)
If !empty(m.tcStaticList)
lnStatic = alines(arrStatic,chrtran(m.tcStaticList,',',chr(13)))
For ix = 1 to m.lnStatic
lnFiles = adir(arrFiles,m.tcDataDir+trim(arrStatic[m.ix])+'.*')
For jx=1 to m.lnFiles
If inlist(justext(arrFiles[m.jx,1]),'DBF','CDX','FPT')
Insert into (m.tcDicDir+'static') values ;
(arrFiles[m.jx,1], FileToStr(m.tcDataDir+arrFiles[m.jx,1]))
Endif
Endfor
Release arrFiles
Endfor
Endif
CREATE TABLE (m.tcDicDir+'userstatic') (FileName c(50))
If !empty(m.tcUserStaticList)
lnUserStatic = alines(arrUserStatic,chrtran(m.tcUserStaticList,',',chr(13)))
For ix = 1 to m.lnUserStatic
lnFiles = adir(arrFiles,m.tcDataDir+trim(arrUserStatic[m.ix])+'.*')
For jx=1 to m.lnFiles
If inlist(justext(arrFiles[m.jx,1]),'DBF','CDX','FPT')
Insert into (m.tcDicDir+'userstatic') values (arrFiles[m.jx,1])
Endif
Endfor
Release arrFiles
Endfor
Endif
close data all
close tables all

DBT - Use dynamic array for Pivot

I would like to use dbt to pivot a column in my BigQuery table.
Since I have more than 100 values, I want my pivoted columns to be dynamic. I would like something like this:
select *
from ( select ceiu.value, ceiu.user_id, ce.name as name
from company_entity_item_user ceiu
left join company_entity ce on ce.id = ceiu.company_entity_id)
PIVOT(STRING_AGG(value) FOR name IN (select distinct name from company_entity))
The problem here is I can't use a SELECT statement inside IN.
I know I can use Jinja templates with dbt; it could look like this:
...
PIVOT(STRING_AGG(value) FOR name IN ('{{unique_company_entities}}'))
...
But I have no idea how to use a SELECT statement to create such a variable.
Also, since I am using BigQuery, I tried using DECLARE and SET, but I don't know how to use them in dbt, if it is even possible.
Thanks for your help.
Elevating the comment by @aleix-cc to an answer, because that is the best way to do this in dbt.
To get data from your database into the jinja context, you can use dbt's built-in run_query macro. Or if you don't mind using the dbt-utils package, you can use the get_column_values macro from that package, which will return a list of the distinct values in that column (it also guards the call to run_query with an {% if execute %} statement, which is critical to preventing dbt compilation errors).
Assuming company_entity is already a dbt model, your model becomes:
{% set company_list = dbt_utils.get_column_values(table=ref('company_entity'), column='name') %}
# company_list is a jinja list of strings. We need a comma-separated
# list of single-quoted string literals
{% set company_csv = "'" ~ company_list | join("', '") ~ "'" %}
select *
from ( select ceiu.value, ceiu.user_id, ce.name as name
from company_entity_item_user ceiu
left join company_entity ce on ce.id = ceiu.company_entity_id)
PIVOT(STRING_AGG(value) FOR name IN ({{ company_csv }}))
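For reference, here is a minimal sketch of the same lookup done with dbt's built-in run_query macro instead of dbt-utils (model and column names as above; the execute guard is what avoids compile-time errors):
{% set company_csv = '' %}
{% if execute %}
{% set results = run_query("select distinct name from " ~ ref('company_entity')) %}
{% set company_csv = "'" ~ results.columns[0].values() | join("', '") ~ "'" %}
{% endif %}
select *
from ( select ceiu.value, ceiu.user_id, ce.name as name
from company_entity_item_user ceiu
left join company_entity ce on ce.id = ceiu.company_entity_id)
PIVOT(STRING_AGG(value) FOR name IN ({{ company_csv }}))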

How can I execute a custom function in Microsoft Visual FoxPro 9?

Using Microsoft Visual FoxPro 9, I have a custom function, "newid()", inside of the stored procedures for Main:
function newId
parameter thisdbf
regional keynm, newkey, cOldSelect, lDone
keynm=padr(upper(thisdbf),50)
cOldSelect=alias()
lDone=.f.
do while not lDone
select keyvalue from main!idkeys where keyname=keynm into array akey
if _tally=0
insert into main!idkeys (keyname) values (keynm)
loop
endif
newkey=akey+1
update main!idkeys set keyvalue=newkey where keyname=keynm and keyvalue=akey
if _tally=1
lDone=.t.
endif
enddo
if not empty(cOldSelect)
select &cOldSelect
else
select 0
endif
return newkey
This function is used to generate a new ID for records added to the database.
It is called as the default value.
I would like to call this newid() function and retrieve its returned value. When executing SELECT newid("TABLENAME"), the following error is thrown:
Invalid subscript reference
How can I call the newid() function and return the newkey in Visual FoxPro 9?
As an addition to what Stefan Wuebbe said,
You actually had your answer in your previous question here that you forgot to update.
From your previous question, as I understand you are coming from a T-SQL background. While in T-SQL (and in SQL generally) there is:
Select < anyVariableOrFunction >
that returns a single-column, single-row result, in VFP a 'select' like that has another meaning:
Select < aliasName >
aliasName is the alias of a work area (or it could be the number of a work area) and is used to change the 'current work area'. When it first appeared in xBase languages like FoxPro (and dBase), those languages had not yet met ANSI SQL, if I am not wrong. Anyway, in VFP there are two Selects: this one and SELECT-SQL, which definitely requires a FROM clause.
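A quick illustration of the two, using a hypothetical orders table/alias:
SELECT orders                          && work-area Select: makes 'orders' the current work area
SELECT * FROM orders WHERE qty > 10    && SELECT-SQL: a query, requires a FROM clause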
VFP has direct access to variables and function calls though, through the use of the = operator.
SELECT newid("TABLENAME")
in T-SQL, would be (you are just displaying the result):
? newid("TABLENAME")
To store it in a variable, you would do something like:
local lnId
lnId = newid("TABLENAME")
* do something with m.lnId
* Note the m. prefix, it is a built-in alias for memory variables
Having said all that, on to your code.
It looks like it was written by a very old-school FoxPro programmer, and I must admit this is the first time in my life I have seen someone use the "REGIONAL" keyword in VFP. I know it is from the FoxPro 2.x days, but I hadn't seen anyone use it until now :) Anyway, that code doesn't seem robust enough in a multiuser environment; you might want to change it. VFP ships with a NewId sample, and below is the slightly modified version that I have been using in many places and that has proved to be reliable:
Function NewID
Lparameters tcAlias,tnCount
Local lcAlias, lnOldArea, lnOldReprocess, lcTable, lnTagNo, lnNewValue, lnLastValue, lcOldSetDeleted, lnStart
lnOldArea = Select()
lnOldReprocess = Set('REPROCESS')
* Uppercase Alias name
lcAlias = Upper(Iif(Parameters() = 0, Alias(), tcAlias))
* Lock reprocess - try once
Set Reprocess To 1
If !Used("IDS")
Use ids In 0
Endif
* If no entry yet create
If !Seek(lcAlias, "Ids", "tablename")
Insert Into ids (tablename, NextID) Values (lcAlias,0)
Endif
* Lock, increment id, unlock, return nextid value
Do While !Rlock('ids')
* Delay before next lock trial
lnStart = Seconds()
Do While Seconds()-lnStart < 0.01
Enddo
Enddo
lnLastValue = ids.NextID
lnNewValue = m.lnLastValue + Evl(m.tnCount,1)
*Try to query primary key tag for lcAlias
lcTable = Iif( Used(lcAlias),Dbf(lcAlias), Iif(File(lcAlias+'.dbf'),lcAlias,''))
lcTable = Evl(m.lcTable,m.lcAlias)
If !Empty(lcTable)
Use (lcTable) In 0 Again Alias '_GetPKKey_'
For m.lnTagNo=1 To Tagcount('','_GetPKKey_')
If Primary(m.lnTagNo,'_GetPKKey_')
m.lcOldSetDeleted = Set("Deleted")
Set Deleted Off
Select '_GetPKKey_'
Set Order To Tag (Tag(m.lnTagNo,'_GetPKKey_')) ;
In '_GetPKKey_' Descending
Locate
lnLastValue = Max(m.lnLastValue, Evaluate(Key(m.lnTagNo,'_GetPKKey_')))
lnNewValue = m.lnLastValue + Evl(m.tnCount,1)
If Upper(m.lcOldSetDeleted) == 'ON'
Set Deleted On
Endif
Exit
Endif
Endfor
Use In '_GetPKKey_'
Select ids
Endif
* Increment
Replace ids.NextID With m.lnNewValue In 'ids'
Unlock In 'ids'
Select (lnOldArea)
Set Reprocess To lnOldReprocess
Return ids.NextID
Endfunc
Note: If you use this, as I see from your code, you would need to change the "id table" name to idkeys, field names to keyname, keyvalue:
ids => idKeys
tablename => keyName
nextId => keyValue
Or in your database just create a new table with this code:
CREATE TABLE ids (TableName c(50), NextId i)
INDEX on TableName TAG TableName
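For completeness, a hypothetical call site for the function above would look like:
local lnNewId
lnNewId = NewID('orders')    && 'orders' is a hypothetical table/alias name
INSERT INTO orders (id, descrip) VALUES (m.lnNewId, 'test row')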
When executing SELECT newid("TABLENAME")
The error: Invalid subscript reference is thrown
The SQL Select command in VFP requires a From clause.
Running a procedure or a function can, or rather usually needs to, be done differently:
For example, in the IDE's Command Window you can do a
? newid("xy") && the function must be "in scope",
&& i.e in your case the database that contains the "Stored
&& Procedure" must have been opened in advance
&& or you store the function result in a variable
Local lnNextID
lnNextID = newid("xy")
Or you can use it in an SQL SELECT when you have a From alias:
CREATE CURSOR placebo (col1 Int)
INSERT INTO placebo VALUES (8)
Select newid("xy") FROM placebo

Assign value of a column to a variable in SQL using the Jinja template language

I have a SQL file like this to transform a table that has a column containing a JSON string:
{{ config(materialized='table') }}
with customer_orders as (
select
time,
data as jsonData,
{% set my_dict = fromjson( jsonData ) %}
{% do log("Printout: " ~ my_dict, info=true) %}
from `warehouses.raw_data.customer_orders`
limit 5
)
select *
from customer_orders
When I run dbt run, it returns this:
Running with dbt=0.21.0
Encountered an error:
the JSON object must be str, bytes or bytearray, not Undefined
I can't even print out the value of the column I want:
{{ config(materialized='table') }}
with customer_orders as (
select
time,
tag,
data as jsonData,
{% do log("Printout: " ~ data, info=true) %}
from `warehouses.raw_data.customer_orders`
limit 5
)
select *
from customer_orders
22:42:58 | Concurrency: 1 threads (target='dev')
22:42:58 |
Printout:
22:42:58 | Done.
But if I create another model to print out the values of jsonData:
{%- set payment_methods = dbt_utils.get_column_values(
table=ref('customer_orders_model'),
column='jsonData'
) -%}
{% do log(payment_methods, info=true) %}
{% for json in payment_methods %}
{% set my_dict = fromjson(json) %}
{% do log(my_dict, info=true) %}
{% endfor %}
It prints out the JSON value I want:
Running with dbt=0.21.0
This is log
Found 2 models, 0 tests, 0 snapshots, 0 analyses, 372 macros, 0 operations, 0 seed files, 0 sources, 0 exposures
21:41:15 | Concurrency: 1 threads (target='dev')
21:41:15 |
['{"log": "ok", "path": "/var/log/containers/...log", "time": "2021-10-26T08:50:52.412932061Z", "offset": 527, "stream": "stdout", "#timestamp": 1635238252.412932}']
{'log': 'ok', 'path': '/var/log/containers/...log', 'time': '2021-10-26T08:50:52.412932061Z', 'offset': 527, 'stream': 'stdout', '#timestamp': 1635238252.412932}
21:41:21 | Done.
But I want to process this jsonData within a model file like customer_orders_model above.
How can I get the value of a column, assign it to a variable, and then process it however I want (for example, check whether the JSON has a key I want and set its value into a new column)?
Note: my purpose is this: my table has a JSON string column, and I want to extract that JSON string into many columns so I can easily write the SQL queries I want.
For a BigQuery database, Google has JSON functions in Standard SQL.
If your column is a JSON string, I think you can use JSON_EXTRACT to get the value of the key you want.
Example:
with customer_orders as (
select
time,
tag,
data as jsonData,
json_extract(data, '$.log') AS log
from `dc-warehouses.raw_data.logs_trackfoe_prod`
limit 5
)
select *
from customer_orders
You are very close! The thing to remember is that dbt and Jinja are primarily for rendering text. Anything that isn't in curly brackets is just a text string.
So in your first example, data and jsonData are just substrings of the larger query (which is also a string). They aren't variables that Jinja knows about, which explains the error message that they are Undefined:
with customer_orders as (
select
time,
data as jsonData,
from `warehouses.raw_data.customer_orders`
limit 5
)
select *
from customer_orders
This is why dbt_utils.get_column_values() works for you: that macro actually runs a query to get the data, and you're assigning the result to a variable. The run_query macro can be helpful for situations like this (and I'm fairly certain get_column_values uses run_query in the background).
As for your original question, where you want to turn a JSON dict into multiple columns, I'd first recommend having your database do this directly. Many databases have functions that let you do this. Jinja is primarily for generating SQL queries dynamically, not for manipulating data. Even if you could load all the JSON into Jinja, I don't know how you'd write it back into a table without using something like an INSERT INTO ... VALUES statement, which, IMHO, goes against the design principles of dbt.
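For instance, a minimal sketch of a dbt model that lets BigQuery do the extraction (the key names come from the sample log record above; adjust them to your real keys):
{{ config(materialized='table') }}
select
time,
tag,
json_extract_scalar(data, '$.log') AS log,
json_extract_scalar(data, '$.path') AS path,
json_extract_scalar(data, '$.stream') AS stream
from `warehouses.raw_data.customer_orders`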

Foxpro String Variable combination in Forloop

As in the title, there is an error in my first code block in the FOR loop: "Command contains unrecognized phrase". I am wondering if the string+variable approach is wrong.
ALTER TABLE table1 ADD COLUMN prod_n c(10)
ALTER TABLE table1 ADD COLUMN prm1 n(19,2)
ALTER TABLE table1 ADD COLUMN rbon1 n(19,2)
ALTER TABLE table1 ADD COLUMN total1 n(19,2)
There are prm2... until total5, in which the numbers represent the month.
FOR i=1 TO 5
REPLACE ALL prm+i WITH amount FOR LEFT(ALLTRIM(a),1)="P" AND
batch_mth = i
REPLACE ALL rbon+i WITH amount FOR LEFT(ALLTRIM(a),1)="R"
AND batch_mth = i
REPLACE ALL total+i WITH sum((prm+i)+(rbon+i)) FOR batch_mth = i
NEXT
ENDFOR
Thanks for the help.
There are a number of things wrong with the code you posted above. Cetin has mentioned a number of them, so I apologize if I duplicate some of them.
PROBLEM 1 - in your ALTER TABLE commands I do not see where you create fields prm2, prm3, prm4, prm5, rbon2, rbon3, etc.
And yet your FOR LOOP would be trying to write to those fields as the FOR LOOP expression i increases from 1 to 5 - if the other parts of your code was correct.
PROBLEM 2 - You cannot concatenate a String to an Integer so as to create a Field Name like you attempt to do with prm+i or rbon+i.
Cetin's code suggestions would work (again, as long as you had the #2, #3, etc. fields defined). However, in FoxPro and Visual FoxPro you can generally do a task in a variety of ways.
Personally, for readability I'd approach your FOR LOOP like this:
FOR i=1 TO 5
* --- Keep in mind that unless fields #2, #3, #4, & #5 are defined ---
* --- The following will Fail ---
cFld1 = "prm" + STR(i,1) && define the 1st field
cFld2 = "rbon" + STR(i,1) && define the 2nd field
cFld3 = "total" + STR(i,1) && define the 3rd field
REPLACE ALL &cFld1 WITH amount ;
FOR LEFT(ALLTRIM(a),1)="P" AND batch_mth = i
REPLACE ALL &cFld2 WITH amount ;
FOR LEFT(ALLTRIM(a),1)="R" AND batch_mth = i
REPLACE ALL &cFld3 WITH sum((prm+i)+(rbon+i)) ;
FOR batch_mth = i
NEXT
NOTE - it might be good if you would learn to use VFP's Debug tools so that you can examine your code execution line-by-line in the VFP Development mode. And you can also use it to examine the variable values.
Breakpoints are good, but you have to already have the TRACE WINDOW open for the Break to work.
SET STEP ON is the Debug command that I generally use so that program execution will stop and AUTOMATICALLY open the TRACE WINDOW for looking at code execution and/or variable values.
Do you mean you have fields named prm1, prm2, prm3 ... prm12 that represent the months and you want to update them in a loop? If so, you need to understand that a "fieldName" is a "name" and thus you need to use a "name expression" to use it as a variable. That is:
prm+i
would NOT work but:
( 'prm'+ ltrim(str(m.i)) )
would.
For example here is your code revised:
For i=1 To 5
Replace All ('prm'+Ltrim(Str(m.i))) With amount For Left(Alltrim(a),1)="P" And batch_mth = m.i
Replace All ('rbon'+Ltrim(Str(m.i))) With amount For Left(Alltrim(a),1)="R" And batch_mth = m.i
* ????????? REPLACE ALL ('total'+Ltrim(Str(m.i))) WITH sum((prm+i)+(rbon+i)) FOR batch_mth = i
Endfor
However, I must admit, your code doesn't make sense to me. Maybe it would be better if you explained what you are trying to do and give some simple data with the result you expect (as code - you can use FAQ 50 on foxite to create code for data).

VFP insert, index updating

So the main program is in C#. Inserting new records into a VFP database table. It was taking too long to generate the next ID for the record via
select max(id)+1 from table
so I put that code into a compiled DLL in VFP and am calling that COM object from C#.
The COM object returns the new ID in about 250ms. I then just do an update through OLEDB. The problem I am having is that after the COM object returns the newly inserted ID, I cannot immediately find it from C# via OLEDB:
select id from table where id = *newlyReturnedID*
returns 0 rows back. If I wait an unknown time period the query will return 1 row. I can only assume it returns 0 rows immediately because it has yet to add the newly minted ID into the index and therefore the select cannot find it.
Has anyone else ever run into something similar? If so, how did you handle it?
DD
Warning: your code is flawed in a multi-user environment. Two people could run the query at the same time and get the same ID. One of them will fail on the INSERT if the column has a primary or candidate key, which is a best practice for key fields.
My recommendation is to either have the ID be an auto-incrementing integer field (I'm not a fan of them), or even better, create a table of keys. Each record in the table is for a table that gets keys assigned. I use a structure similar to this:
Structure for: countergenerator.dbf
Database Name: conferencereg.dbc
Long table name: countergenerator
Number of records: 0
Last updated: 11/08/2008
Memo file block size: 64
Code Page: 1252
Table Type: Visual FoxPro Table
Field Name Type Size Nulls Next Step Default
----------------------------------------------------------------------------------------------------------------
1 ccountergenerator_pk Character 36 N guid(36)
2 ckey Character (Binary) 50 Y
3 ivalue Integer 4 Y
4 mnote Memo 4 Y "Automatically created"
5 cuserid Character 30 Y
6 tupdated DateTime 8 Y DATETIME()
Index Tags:
1. Tag Name: PRIMARY
- Type: primary
- Key Expression: ccountergenerator_pk
- Filter: (nothing)
- Order: ascending
- Collate Sequence: machine
2. Tag Name: CKEY
- Type: regular
- Key Expression: lower(ckey)
- Filter: (nothing)
- Order: ascending
- Collate Sequence: machine
Now the code for the stored procedure in the DBC (or in another program) is this:
FUNCTION NextCounter(tcAlias)
LOCAL lcAlias, ;
lnNextValue, ;
lnOldReprocess, ;
lnOldArea
lnOldArea = SELECT()
IF PARAMETERS() < 1
lcAlias = ALIAS()
IF CURSORGETPROP("SOURCETYPE") = DB_SRCLOCALVIEW
*-- Attempt to get base table
lcAlias = LOWER(CURSORGETPROP("TABLES"))
lcAlias = SUBSTR(lcAlias, AT("!", lcAlias) + 1)
ENDIF
ELSE
lcAlias = LOWER(tcAlias)
ENDIF
lnOrderNumber = 0
lnOldReprocess = SET('REPROCESS')
*-- Lock until user presses Esc
SET REPROCESS TO AUTOMATIC
IF !USED("countergenerator")
USE EventManagement!countergenerator IN 0 SHARED ALIAS countergenerator
ENDIF
SELECT countergenerator
IF SEEK(LOWER(lcAlias), "countergenerator", "ckey")
IF RLOCK()
lnNextValue = countergenerator.iValue
REPLACE countergenerator.iValue WITH countergenerator.iValue + 1
UNLOCK
ENDIF
ELSE
* Create the new record with the starting value.
APPEND BLANK IN countergenerator
SCATTER MEMVAR MEMO
m.cKey = LOWER(lcAlias)
m.iValue = 1
m.mNote = "Automatically created by stored procedure."
m.tUpdated = DATETIME()
GATHER MEMVAR MEMO
IF RLOCK()
lnNextValue = countergenerator.iValue
REPLACE countergenerator.iValue WITH countergenerator.iValue + 1
UNLOCK
ENDIF
ENDIF
SELECT (lnOldArea)
SET REPROCESS TO lnOldReprocess
RETURN lnNextValue
ENDFUNC
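For illustration, a hypothetical call site for this stored procedure might look like:
* "registration" is a hypothetical table whose integer key column takes the counter value
lnNewPk = NextCounter("registration")
INSERT INTO registration (iregistration_pk, cname) VALUES (m.lnNewPk, "New attendee")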
The RLOCK() ensures there is no contention for the records and is fast enough not to bottleneck the process. This is far safer than the approach you are currently taking.
Rick Schummer
VFP MVP
VFP needs to FLUSH its workareas.
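If that is what is happening, a minimal sketch of the fix inside the VFP COM object (hypothetical table and column names) is to flush right after the write, so the new record and its index entry are on disk before C# queries for it:
INSERT INTO mytable (id) VALUES (m.lnNewId)   && the insert the COM object already performs
FLUSH                                         && write buffered data and index updates to disk
* In VFP 9, FLUSH also accepts a FORCE keyword for a harder flush.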