NAT3176 Error in call to Adabas subroutine or in inverted list

I am having trouble figuring out how to fix this error in Natural/Adabas. I am just starting out with Natural on a very old version, and when I try to run most of the existing programs written by previous developers I keep getting this error:
NAT3176 Error in call to Adabas subroutine or in inverted list.
So I have attempted a very basic program of my own with the same result (see below). Does anybody know how to resolve this problem or what steps can be taken to debug?
My first thought is that the STUD file does not exist even though there is a DDM for it. Is there a way I could verify that it exists?
My test program is as follows:
DEFINE DATA
LOCAL
1 STUD-VIEW VIEW OF STUD
2 STUDNO
END-DEFINE
READ STUD-VIEW BY STUDNO
DISPLAY STUDNO
END-READ
END

NAT3176: Inconsistency detected in an inverted list.
That's what the docs say, so very likely the Index ("inverted list") is corrupt.
You'll certainly need your Admin to fix that.
Try omitting the BY STUDNO from the READ statement.
It will then perform a READ PHYSICAL (the equivalent of a full table scan in a relational database), which accesses the data without using the index.
That would look something like this:
DEFINE DATA
LOCAL 1 STUD-VIEW VIEW OF STUD
2 STUDNO
END-DEFINE
*
READ STUD-VIEW
DISPLAY STUDNO
END-READ
END

Related

SQL Selection Query getting corrupted

I've come across a very unusual problem (for me at least) and I have no idea how to solve it.
Essentially I made a really simple selection query to search our clients in a table (dbo_t_Person) and return their records. I needed the records to be searchable even if we only have an email address or phone number on hand for some clients. Therefore I wrote the criteria to either ignore a field if no data was entered, or to search for similar matches (via 'Like') if only partial details were entered into any given field. See the SQL below; apologies for how repetitive it is.
This is all well and good, it works perfectly and is fast enough for our uses.
However.
I can run the query as many times as I wish with new data entered and it works fine, but if I close the query and reopen it, the SQL goes haywire: Access runs out of memory and crashes, sometimes just from opening the SQL view, let alone running the query. By haywire I mean that if I manage to get the SQL open again, lines of it are suddenly copied endlessly on the page.
This happens every time I rewrite the SQL from scratch. How the hell do I stop this happening?
Here is the working clean code:
SELECT dbo_t_Person.PersonID
,dbo_t_Person.FullName
,dbo_t_Person.Address1
,dbo_t_Person.Address2
,dbo_t_Person.City
,dbo_t_Person.Zip
,dbo_t_Person.STATE
,dbo_t_Person.Country
,dbo_t_Person.Mobile
,dbo_t_Person.Phone
,dbo_t_Person.Email
FROM dbo_t_Person
WHERE (
(
(dbo_t_Person.PersonID) = [Forms]![from MICHAEL TEST WORKING]![OwnerIDEntry]
OR [Forms]![from MICHAEL TEST WORKING]![OwnerIDEntry] IS NULL
)
AND (
(dbo_t_Person.FullName) LIKE "*" & [Forms]![from MICHAEL TEST WORKING]![NameEntry] & "*"
OR [Forms]![from MICHAEL TEST WORKING]![NameEntry] IS NULL
)
)
And so on for the remaining entry fields
However, if I do manage to get the SQL open again, the line
Or [Forms]![from MICHAEL TEST WORKING]![NameEntry] Is Null
(and its equivalent for every other entry field) appears repeated endlessly, thousands of times over.
Something is making the code duplicate itself over and over; how do I stop it?
Consider an adjusted WHERE clause using NZ() to handle whether or not each control is empty.
WHERE dbo_t_Person.PersonID = NZ([Forms]![from MICHAEL TEST WORKING]![OwnerIDEntry],
dbo_t_Person.PersonID)
AND dbo_t_Person.FullName LIKE "*" & NZ([Forms]![from MICHAEL TEST WORKING]![NameEntry],
dbo_t_Person.FullName) & "*"
Try changing your criteria to be more efficient and clean, like this:
IIF(ISNULL([Forms]![from MICHAEL TEST WORKING]![OwnerIDEntry]),TRUE,PersonID=[Forms]![from MICHAEL TEST WORKING]![OwnerIDEntry])
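Applied to the first two entry fields, the full WHERE clause would then look something like this (just a sketch following the same pattern; the remaining entry fields would be handled the same way):
WHERE IIF(ISNULL([Forms]![from MICHAEL TEST WORKING]![OwnerIDEntry]),TRUE,PersonID=[Forms]![from MICHAEL TEST WORKING]![OwnerIDEntry])
AND IIF(ISNULL([Forms]![from MICHAEL TEST WORKING]![NameEntry]),TRUE,FullName LIKE "*" & [Forms]![from MICHAEL TEST WORKING]![NameEntry] & "*")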
Since you are only dealing with a single table, you can also drop the dbo_t_Person. prefix everywhere, like this:
SELECT PersonID,FullName,Address1,Address2,City,Zip,STATE,Country,Mobile,Phone,Email
FROM dbo_t_Person
Maybe the simplified version of the SQL will stop Access from corrupting it.

KDB: try-catch over a list and return list of fails

I want to run the function
{`Security$x}
over a list
order`KDB_SEC_ID
and return the list of values that failed. I have the below, which works, but I'm wondering if there is a neater way to write this without the use of a do loop.
Example Code:
idx:0;
fails:();
do[count (order`KDB_SEC_ID);
error:#[{`Security$x};(order`KDB_SEC_ID)[idx];0Nj];
if[error=0Nj;fails:fails,(order`KDB_SEC_ID)[idx]];
idx:idx+1;
];
missingData:select from order where KDB_SEC_ID in distinct fails;
I agree that Terry's answer is the simplest method, but here is a cleaner way to do what you were attempting, to help you see how to achieve it without do loops:
q)SECURITY
`AAPL`GOOG`MSFT
q)order
KDB_SEC_ID val
--------------
AAPL 1
GOOG 2
AAPL 3
MSFT 4
IBM 5
q)order where #[{`SECURITY$x;0b};;1b] each order`KDB_SEC_ID
KDB_SEC_ID val
--------------
IBM 5
It outputs 0b if it passes and 1b if it fails, resulting in a boolean list. Using where on a boolean list returns the indices where the 1b's occur, which you can use to index into order to return the failing rows.
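For instance, where on its own turns a boolean list into the positions of the 1b's:
q)where 01001b
1 4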
If your test is to check which of the KDB_SEC_ID's can be enumerated against the Security list, couldn't you do
q)select from order where not KDB_SEC_ID in Security
Or am I missing something?
To answer your question in a more general case, you could achieve a try-catch over a list to return the list of fails using something like
q){x where #[{upper x;0b};;1b] each x}(2;`ab;"Er";1)
2 1

MS Access convert rows to columns

I'm working on a database in MS Access 2007 for a flight simulator, and I need to pivot the data - that is, convert the rows to columns.
It's difficult to explain, so let me show what my problem is.
The data I have to start with looks like this:
Waypoint Lat Lon previous/next minimum-alt airwayName
00MKK 22.528056 -156.170961 BITTA 12 R464
00MKK 22.528056 -156.170961 CKH99 12 R464
03SML 25.61 30.635278 57SML 195 L321
03SML 25.61 30.635278 AST 85 W8
03SML 25.61 30.635278 KHG 85 W8
03SML 25.61 30.635278 KUNAK 195 L321
I need the data to look like this:
Waypoint Lat Lon AirwayName Previous Next AirwayName Previous Next
03SML 25.61 30.635278 L321 57SML KUNAK W8 AST KHG
00MKK 22.52805 -156.1709 R464 BITTA CKH99 blank blank blank
For every airway the same waypoint has, I need a new column with the previous and next fields next to it. Each waypoint may have several airways associated with it (usually not more than 10). The order in which the previous and next entries appear is not especially important.
From what I've gathered, if this is even possible, this kind of operation can be done using multiple crosstab queries.
Any help is appreciated. Thank you.
I reckon you need VBA. You can create a recordset ordered by Waypoint and just keep adding to a delimited string until the waypoint changes; see the sketch below. This way, you will end up with something that can be saved as a CSV. Alternatively, if it is a one-off and there are not too many rows, you might consider importing the whole lot into Excel and doing the work there.
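As a rough illustration of that approach, here is a minimal VBA sketch. The table name (tblAirways) is an assumption and the column names are taken from the sample data above, so adjust both to your actual schema:
Public Sub FlattenWaypoints()
    ' Walk the rows ordered by Waypoint and build one delimited line per waypoint,
    ' appending each row's airway and previous/next value to that line.
    ' tblAirways is a placeholder table name.
    Dim rs As DAO.Recordset
    Dim outLine As String
    Dim lastWp As String

    Set rs = CurrentDb.OpenRecordset( _
        "SELECT Waypoint, Lat, Lon, airwayName, [previous/next] " & _
        "FROM tblAirways ORDER BY Waypoint, airwayName")

    Do While Not rs.EOF
        If rs!Waypoint <> lastWp Then
            If Len(outLine) > 0 Then Debug.Print outLine   ' or write the line to a CSV file
            outLine = rs!Waypoint & "," & rs!Lat & "," & rs!Lon
            lastWp = rs!Waypoint
        End If
        outLine = outLine & "," & rs!airwayName & "," & rs![previous/next]
        rs.MoveNext
    Loop
    If Len(outLine) > 0 Then Debug.Print outLine   ' flush the last waypoint
    rs.Close
End Sub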

RavenDB Query Documents with Property Removed

In the RavenDB Studio, I can see 69 CustomVariableGroup documents. My query only returns 66 of them. After some digging, I see that the three docs that are not returned have the new class structure: a property was removed. Since I saved these three CustomVariableGroup documents, their structure is different from the other 66. Why though, when I query for all of them, do I only get the other 66 documents with the old structure?
Both my C# code, and my query in LinqPad, only return the 66. Here's the LinqPad query:
Session.Query<CustomVariableGroup>().Dump(); // returns 66 docs
But, if I do this, I can get one of the three documents that is missing from the above query:
Session.Query<CustomVariableGroup>().Where(x => x.Name == "Derating").Dump();
How can I get all 69 documents returned in one query?
** Edit: Index Info **
In the SQL tab of the LinqPad query (and in the Raven server output), the index looks like this:
Url: /indexes/dynamic/CustomVariableGroups?query=&start=0&pageSize=128&aggregation=None
I don't see that index in Raven Studio, presumably because it's dynamic.
** Edit 2: This HACK works **
If I do this, I get all 69 documents:
Session.Query<CustomVariableGroup>().Where(x => x.Name != string.Empty).Dump();
My guess is that Raven must be using an old index that only includes documents that still contain the deleted property. I somehow need to use a new/different index...
Interestingly, this does not work; it only returns 66:
Session.Query<CustomVariableGroup>().Where(x => x.Id != string.Empty).Dump();
** Edit 3: This HACK works as well **
Session.Advanced.LuceneQuery<CustomVariableGroup>("Raven/DocumentsByEntityName").Where("Tag:CustomVariableGroups").Dump();
An index that referenced the old property had to be removed.
** Before ** This didn't work (only returned 66 of the 69 documents):
Session.Query<CustomVariableGroup>().Dump();
** Fix ** Delete index that used the old property that was deleted from my C# class:
In Raven Studio, I deleted this index: Auto/CustomVariableGroups/ByApplicationId
** After ** This same query now returns all 69 documents:
Session.Query<CustomVariableGroup>().Dump();
Now, I'm not sure why these queries would use that index. I'm querying for all CustomVariableGroup documents, and not ByApplicationId. However, removing that index fixed it. I'm sure someone else can explain why.
What do the indexes look like? Did you manually create the indexes or were they dynamically created? Just wondering if that is the cause of the issue based on your comments above that there was a structure change to the object.
--S
Could it be a stale index, if it's not returning all the results you expect?
You could use
.Customize(x=>x.WaitForNonStaleResultsAsOfLastWrite())
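For example, something like this (a sketch, using the same kind of query as above):
// Ask Raven to wait until indexing has caught up with the last write,
// so a stale index cannot hide documents.
Session.Query<CustomVariableGroup>()
    .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
    .Dump();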

NHibernate HiLo algorithm re-using hi as lo?

I'm using NHibernate 3.1 with FluentNHibernate 1.2.0.712.
We're using the HiLo generator to generate Ids - with standard settings except max_lo is set to 100 (default 1000).
Our mappings all have this line in the ctor:
Id(m => m.Id)
.GeneratedBy.HiLo("100");
However, when we start fresh with a new SessionFactory and the first item is saved - let's say the next hi is 12 - it gets Id 1212 (I would have expected 1200 or 1201). Is this intended behaviour or am I missing some vital part of the configuration?
I've tried using default values ("1000") as the max_lo, but then the above would result in 12012 - still not exactly what I would expect.
I read through the NHibernate code-base. This is apparently intended behaviour - on the initial fetch it 'clocks over' (for a reason beyond me, but probably to keep parity with Hibernate, since that has the exact same comment :-)).
For all subsequent increments - all is performing as expected.
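For reference, the arithmetic works out as follows. This is only a sketch of how I read the generator (not NHibernate's actual class), but it reproduces the numbers above:
// Sketch of the hi/lo arithmetic; names are mine, not NHibernate's.
// With maxLo = 100 and a table value of 12, the first id is 12 * (100 + 1) = 1212;
// with maxLo = 1000 it is 12 * 1001 = 12012 - exactly the values in the question.
class HiLoSketch
{
    private readonly long maxLo;
    private long tableHi;   // stands in for the value read from the hi table
    private long hi;
    private long lo;

    public HiLoSketch(long maxLo, long initialTableHi)
    {
        this.maxLo = maxLo;
        tableHi = initialTableHi;
        lo = maxLo + 1;      // forces a "clock over" on the first Generate() call
    }

    public long Generate()
    {
        if (lo > maxLo)
        {
            long hiVal = tableHi++;   // the real generator reads and bumps a DB table here
            lo = hiVal == 0 ? 1 : 0;
            hi = hiVal * (maxLo + 1);
        }
        return hi + lo++;
    }
}
So the first id is hi * (maxLo + 1) with lo starting at 0, which is why 1212 appears instead of 1200.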
So, closing down this question.