Maximo: What is the purpose of MAXVARS?

A PDF about Maximo formulas mentions MAXVARS:
Maximo Formulas are the logical next step in Maximo customization
after Maximo Scripting. Maximo formulas follow Excel-like grammar to
define expressions that use input from variables to calculate a value.
Unlike scripting, where most of the variables need to get predefined
and bound to some Maximo attributes/properties/MAXVARS, the formula
expression can use any of those Maximo attributes/properties/MAXVARS
inside the expression without ever needing to predefine or bind them.
I assume that MAXVARS are some sort of global variable.
But when I search the docs, I don't see anything that explains them in detail.
What are MAXVARS and how are they used?

Generally, they are system-level configuration elements, used in special cases in the code to determine how the system should behave. The table contains things like whether Admin Mode is on for the system, whether to automatically close completed POs when an invoice comes in, or what status to put a work order in when assignments are completed. Its nature is really just a generic key-value pairing table at the ORG level, so it can be used for any kind of system variable one might want to store, though generally there isn't much of a use case for it in customizations.

MaxVars Variables
As others have mentioned previously, MAXVARS has its origins in early versions of Maximo (e.g. 3.x, 4.x), prior to it becoming a Java application and prior to multitenancy. Initially all MAXVARS values applied at the System level, as there were no Organisations and Sites in the system. I don't recall which version introduced MaxVars entries with ORG and SITE scope in addition to SYSTEM, but those are available in Maximo 7.6.x.
The following links explain how to query the maxvartype database table for a description of what each entry in the maxvars table does:
https://www.ibm.com/support/pages/checking-purpose-maxvars-variables
https://developer.ibm.com/static/site-id/155/maximodev/7609/maximocore/businessobjects/psdi/app/system/MaxVars.html
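A sketch of the kind of query those pages describe, joining each variable to its description (the column names are from memory, so verify them against your schema):
-- column names are assumptions; check maxvars/maxvartype in your environment
SELECT v.varname, v.varvalue, v.orgid, v.siteid, t.description
FROM maxvars v
JOIN maxvartype t ON t.varname = v.varname
ORDER BY v.varname;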
Usage
One example is the MaxVars values that are used for Inventory. Each Organisation in the system has 6 MaxVars entries:
A_BREAKPOINT : 0.8
B_BREAKPOINT : 0.15
C_BREAKPOINT : 0.05
A_CCF : 30
B_CCF : 60
C_CCF : 90
The first 3 values dictate what percentage of type A, B and C inventory items make up the items being cycle counted for the organisation. The latter 3 values dictate the cycle count frequency for type A, B and C inventory items in days. In short, the MaxVars entries allow some flexibility in the Cycle Count functionality rather than hard-coding these values. More details of these specific MaxVars entries are provided here:
https://developer.ibm.com/static/site-id/155/maximodev/7609/maximocore/businessobjects/psdi/app/inventory/Inventory.html
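To see these values for each organisation, a query along these lines should work (a sketch; verify the column names):
-- breakpoints and cycle count frequencies per organisation (column names assumed)
SELECT orgid, varname, varvalue
FROM maxvars
WHERE varname IN ('A_BREAKPOINT', 'B_BREAKPOINT', 'C_BREAKPOINT', 'A_CCF', 'B_CCF', 'C_CCF')
ORDER BY orgid, varname;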
System Properties
System properties were introduced later but perform a similar role, with a list of property names and values. System properties apply either to the instance or globally to all instances using the same database server. An added benefit of System Properties over MaxVars variables is that some system properties can be live-refreshed, and the new property value is used immediately rather than, for example, having to restart the application server.
Usage
One common example is the property name mxe.adminmode.logoutmin, which records the number of minutes users have to log out before Admin Mode is enabled. This is usually modified in Database Configuration from More Actions -> Manage Admin Mode. Before enabling Admin Mode you can edit the "Number of Minutes for User Logout" and click the "Update Properties" button to update the mxe.adminmode.logoutmin property value in System Properties.
https://www.ibm.com/support/knowledgecenter/en/SSLKT6_7.6.0/com.ibm.mbs.doc/propmaint/r_ctr_sysprops_overview.html
https://developer.ibm.com/static/site-id/155/maximodev/7609/maximocore/businessobjects/index.html?index-all.html
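Behind the scenes, properties live in the MAXPROP and MAXPROPVALUE tables, so you can inspect them with something like this sketch (column names are from memory; verify them):
-- column names assumed; check maxprop/maxpropvalue in your environment
SELECT p.propname, p.description, v.propvalue, v.servername
FROM maxprop p
JOIN maxpropvalue v ON v.propname = p.propname
WHERE p.propname = 'mxe.adminmode.logoutmin';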
Why Are There Two Methods For Largely The Same Functionality?
I would guess that there is probably a lot of legacy code that still references the MaxVars variables rather than the newer System Properties and refactoring the code to use System Properties instead may not be a high priority but it's possible MaxVars may be phased out over time.
Maximo Customisation
When creating Maximo customisations, either MaxVars variables or System Properties can be useful (with a preference for the latter) to avoid hard-coding values, providing reusability and flexibility. For example, say you have a workflow that routes purchase orders over a particular value to the CEO for approval. Rather than hard-code the currency value in the workflow, you could create a custom system property to store the threshold value and use a custom condition automation script to compare the PO total cost to the system property value and return true or false accordingly. Therefore, if the value threshold for CEO approval changes in future, you only need to modify the system property, not the workflow.
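The script would effectively read the threshold from the stored property value, i.e. something equivalent to this lookup (the property name here is purely hypothetical):
-- 'custom.po.ceoapprovallimit' is a made-up example property name
SELECT propvalue
FROM maxpropvalue
WHERE propname = 'custom.po.ceoapprovallimit';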
Formulas
The link to the PDF in your question no longer works so I haven't viewed that document, but from the excerpt you provided I would expect System Properties and MaxVars variables to be used in formulas in a similar fashion: to avoid hard-coding a value, which would require us to modify the formula if it changes in future, when a property can be used instead.

MAXVARS is a database table which holds a number of system settings within the Maximo environment. It's a hangover from older versions and was around in version 3 as I remember (the current version is 7.6).
Some MAXVARS entries can be amended through the Maximo UI (e.g. Organisations application - PM Options); others have to be amended through appropriate SQL (e.g. if Admin Mode becomes stuck you need to update the relevant MAXVARS entry via a SQL update).
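For example, the commonly cited fix for stuck Admin Mode is along these lines (verify the variable name for your Maximo version before running it):
-- ADMINRESTART is the variable name usually cited in IBM technotes; confirm first
UPDATE maxvars SET varvalue = '0' WHERE varname = 'ADMINRESTART';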
New System values are now defined as 'system properties' (defined within MAXPROP and MAXPROPVALUE tables), and are visible and can be amended within the System Properties application.
:)

Related

Unbound checkbox in continuous form

I'm beating a dead horse here, but I still haven't found the answer I am looking for. I am throwing together an Access Database that deals with lockout procedures for our various machines at work. I have a continuous form setup so that it dynamically populates based on various complex/machine criteria. Since only portions of the machines need to be locked out at a given time, it is necessary to select the various devices from the list that was populated dynamically. When users select the various devices that they wish to lockout, they will then be able to automatically print tags for the selected devices. Which is where the unbound checkbox conundrum comes in... Yay!!!
Since it is possible for multiple users to be using the database at a given time, I don't believe that binding the checkbox to a yes/no selection within my table is the correct path to take. This is due to the fact that having multiple users picking various devices would result in additional/unnecessary tags being printed out to each user. I know that it's possible to have an unbound checkbox within a continuous form, but I have not come across any sample code that has this functionality.
If there is another path that I can take, please offer any suggestions as I am an Access novice, and am open to new ideas.
EDIT
I should mention that the database will reside within Citrix. I am not sure if this affects anything, but it's worth mentioning at least.
I am assuming that you are using a client-server setup, where the application file resides on a local machine (or on a local instance in the case of RDP / Citrix).
In that case, you can have a local table to save the checkbox information without causing any conflicts between users.
You will be using a bound checkbox, so problem solved.
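A minimal sketch of such a local table in Access DDL (the names are hypothetical; create it in the local front-end file, not the shared back-end):
CREATE TABLE tblTagSelection (
    DeviceID LONG CONSTRAINT pkTagSelection PRIMARY KEY,
    PrintTag YESNO
);
Base the continuous form on a query joining this local table to the shared device list and bind the checkbox to PrintTag; since each user works against their own copy of the front-end file, selections never collide.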

User settings in SQL Server

I am trying to design an efficient database schema for user settings in SQL Server 2008 R2. The wrinkle here is that we need multiple levels of granularity, and I'm not sure how to efficiently represent that.
We have a handful of settings that can be applied to a full Account, a single Module, or a specific Feature. Currently the way the table has been set up is something to the effect of:
AccountId int
ModuleId int
FeatureId int
SettingData string
(please don't get hung up on what SettingData is or isn't, I just made it a string here in the example to distinguish it from the other Ids).
Problem: Many customers have access to many modules, and these modules have access to many features. A single Account making a change to SettingData can modify 4000 records. This is absolutely not tenable for obvious reasons, and I'm determined to fix it.
The solution is obviously to have a few different tables that, by their usage, override each other and allow some account-wide settings and granular preferences. However, I've never done this before and my attempts at designing it end up looking disturbingly similar to the inefficient table structure we currently have.
Thanks in advance, any help is appreciated.
It sounds as though settings can currently be specified at the following levels:
Account
Module
Feature
Given that there are probably already tables set up for each of Account, Module and Feature, it would appear to make sense to:
Remove the existing table.
Set up a new field for setting data on each of the existing Account, Module and Feature tables.
Since the general principle is that the specific should override the general, a Module-level setting should override an Account-level setting, and a Feature-level setting should override a Module-level setting.
The advantage of this approach is that any time a specific setting was updated, only a single record would need to be updated.
The disadvantage is that to determine which setting should apply to a specific feature (for a specific account) in a specific module, 3 tables would have to be queried instead of one.
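A sketch of that resolution query (the SettingData columns and the key relationships between the tables are assumptions):
-- most specific setting wins: Feature, then Module, then Account
SELECT COALESCE(f.SettingData, m.SettingData, a.SettingData) AS EffectiveSetting
FROM Account a
JOIN Module m ON m.AccountId = a.AccountId
JOIN Feature f ON f.ModuleId = m.ModuleId
WHERE a.AccountId = @AccountId
  AND m.ModuleId = @ModuleId
  AND f.FeatureId = @FeatureId;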

sql server global data version

I wonder what is the best way to implement a global data version for a database. I want any modification that is done to the database to increase the version in a "global version table" by one. I need this so that when I talk to application users I know what version of data we are talking about.
Should I store this information in table?
Should I use triggers for this?
This version number can be stored in a configuration table or in a dedicated table (with one field).
This parameter should not be automatically updated because you are the owner of the schema and you are responsible for knowing when you need to update it. Basically, you need to update this number every time you deploy a new application package (regardless of the reason for the package: code or database change).
Each and every deployment package should take care of updating the schema version number and the database schema (if necessary)
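A minimal sketch of that arrangement (table and column names are hypothetical):
-- one-row table holding the current version
CREATE TABLE SchemaVersion (VersionNumber INT NOT NULL);
INSERT INTO SchemaVersion (VersionNumber) VALUES (1);
-- final statement of every deployment package:
UPDATE SchemaVersion SET VersionNumber = VersionNumber + 1;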
I tend to have a globals or settings table with various pseudo-static values stored.
- Just one row
- Many fields
This can include version numbers.
In terms of maintaining the version number you refer to, would this change when the data content changes? If so, a trigger would be useful. If you mean for the version number to relate to table structures, etc., I'd be more inclined to manage this by hand. (Some changes may be irrelevant as far as the applications are concerned, or there may be several changes wrapped up into a single version upgrade.)
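If you do go the trigger route, a sketch would look like this (table names are hypothetical, and you'd need one such trigger per tracked table):
-- bump the one-row globals table whenever rows in a tracked table change
CREATE TRIGGER trg_Orders_BumpVersion
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.Globals SET DataVersion = DataVersion + 1;
END;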
The best way to implement a "global data version for database" is via your source control system and build process. When all the changes have been submitted and passed testing your build process will increment your versioning number schema.
The version number could be implemented in a stored procedure. The result of the call to the stored proc could be added to a screen in your app so you can avoid users directly accessing a table.
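For example, a sketch of such a procedure (assuming a hypothetical one-row globals table):
CREATE PROCEDURE dbo.GetDataVersion
AS
BEGIN
    SET NOCOUNT ON;
    SELECT DataVersion FROM dbo.Globals;
END;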
To complete the previous answers, I came across the concept of "Migrations" (from the Ruby on Rails world apparently) today, and there was already a question on SO that covered existing frameworks in .Net.
The concept is still to store DB versioning information as data in a table somewhere, but for that versioning information to be managed automatically by a framework, rather than manually by your custom deployment processes:
previous SO question with overview of options: https://stackoverflow.com/questions/313/net-migrations-engine

Is there any RFC or BAPI implementing the transaction rsscd001 for displaying change documents in SAP?

I would like to know whether there is any RFC or BAPI functions to display change documents (transaction RSSCD001) based on input query in SAP. The customer requirement is to implement a java monitor system on SAP without adding any ABAP functions on the SAP server.
I tried to make use of the 'RFC_READ_TABLE' function, which is deprecated according to the official documents, to read the CDPOS and CDHDR tables and join them. But as vwegert said, traversing the CDPOS table is really time-consuming, as it contains billions of entries.
My intention of this query is to find changes to all bank details of vendors.
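Expressed as plain SQL, the join I am trying to reproduce looks roughly like this (I am assuming object class 'KRED' and table 'LFBK' cover vendor bank details; both need verifying):
-- OBJECTCLAS = 'KRED' (vendor master) and TABNAME = 'LFBK' (bank details) are assumptions
SELECT h.OBJECTID, h.USERNAME, h.UDATE, p.FNAME, p.VALUE_OLD, p.VALUE_NEW
FROM CDHDR h
JOIN CDPOS p
  ON p.OBJECTCLAS = h.OBJECTCLAS
 AND p.OBJECTID = h.OBJECTID
 AND p.CHANGENR = h.CHANGENR
WHERE h.OBJECTCLAS = 'KRED'
  AND p.TABNAME = 'LFBK';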
Any other thoughts?
Many thanks in advance!
The least resource-consuming way to do this would be to use the workflow runtime system to actively notify the Java application whenever a change document is written. You don't have to write any ABAP functions to do this; just set up the workflow engine (using the automatic customizing) and customize the event generation (documentation). Then, you write a Java service that connects to the SAP system using JCo and registers as an RFC server, using a destination of type TCP/IP and a registered program ID. This Java server program has to provide a function module handler that can be called using tRFC from the SAP system. Finally, add a linkage entry that will tell the workflow runtime system to call your Java program each time a change document is written.
Of course, this will only record the changes that happen after installation, not the historical changes.
Warning: I'm not very familiar with this field.
The RFC function BAPI_VENDOR_FIND (BAPI Vendor) seems to be usable to find vendors based on values in a table. You could use it to check against the modification date. This is not perfect, as there is no relational operator, only equals, so you'll have to check against several dates...
Hope this helps
Guillaume

Do you put your database static data into source-control ? How?

I'm using SQL-Server 2008 with Visual Studio Database Edition.
With this setup, keeping your schema in sync is very easy. Basically, there's a 'compare schema' tool that allows me to sync the schema of two databases and/or a database schema with a source-controlled creation script folder.
However, the situation is less clear when it comes to data, which can be of three different kinds:
Static data referenced in the code. Typical example: my users can change their settings, and their configuration is stored on the server. However, there's a system-wide default value for each setting that is used in case the user didn't override it. The table containing those default settings grows as more options are added to the program. This means that when a new feature/option is checked in, the system-wide default setting is usually created in the database as well.
Static data, e.g. a product list populating a dropdown list. The program doesn't rely on the existence of a specific product in the list to work. This can be for example a list of unicode-encoded products that should be deployed in production when the new "unicode version" of the program is deployed.
Other data, i.e. everything else (logs, user accounts, user data, etc.).
It seems obvious to me that my third item shouldn't be source-controlled (of course, it should be backed up on a regular basis).
But regarding the static data, I'm wondering what to do.
Should I append the insert scripts to the creation scripts? or maybe use separate scripts?
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
Should I differentiate my two kind of data? (the first one being usually created by a dev, while the second one is usually created by a non-dev)
How do you manage your DB static data ?
I have explained the technique I used in my blog Version Control and Your Database. I use database metadata (in this case SQL Server extended properties) to store the deployed application version. I only have scripts that upgrade from version to version. At startup the application reads the deployed version from the database metadata (lack of metadata is interpreted as version 0, ie. nothing is yet deployed). For each version there is an application function that upgrades to the next version. Usually this function runs an internal resource T-SQL script that does the upgrade, but it can be something else, like deploying a CLR assembly in the database.
There is no script to deploy the 'current' database schema. New installments iterate through all intermediate versions, from version 1 to the current version.
There are several advantages I enjoy with this technique:
It is easy for me to test a new version. I have a backup of the previous version, I apply the upgrade script, then I can revert to the previous version, change the script, try again, until I'm happy with the result.
My application can be deployed on top of any previous version. Various clients have various deployed versions. When they upgrade, my application supports upgrading from any previous version.
There is no difference between a fresh install and an upgrade, it runs the same code, so I have fewer code paths to maintain and test.
There is no difference between DML and DDL changes (your original question): they are all treated the same way, as scripts run to change from one version to the next. When I need to make a change like you describe (change a default), I actually increase the schema version even if no other DDL change occurs. So at version 5.1 the default was 'foo', in 5.2 the default is 'bar', and that is the only difference between the two versions; the 'upgrade' step is simply an UPDATE statement (followed of course by the version metadata change, i.e. sp_updateextendedproperty).
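Such a minimal upgrade step might look like this sketch (the table, column, and property names are hypothetical; it assumes the extended property was created at install time with sp_addextendedproperty):
-- 5.1 -> 5.2: change a default, then record the new version in database metadata
UPDATE dbo.Settings SET DefaultValue = 'bar' WHERE SettingName = 'foo';
EXEC sys.sp_updateextendedproperty @name = N'SchemaVersion', @value = N'5.2';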
All changes are in source control, part of the application sources (T-SQL scripts mostly).
I can easily get to any previous schema version, eg. to repro a customer complaint, simply by running the upgrade sequence and stopping at the version I'm interested in.
This approach saved my skin a number of times and I'm a true believer now. There is only one disadvantage: there is no obvious place to look in source to find 'what is the current form of procedure foo?'. Because the latest version of foo might have been upgraded 2 or 3 versions ago and it wasn't changed since, I need to look at the upgrade script for that version. I usually resort to just looking into the database and see what's in there, rather than searching through the upgrade scripts.
One final note: this is actually not my invention. This is modeled exactly after how SQL Server itself upgrades the database metadata (mssqlsystemresource).
If you are changing the static data (adding a new item to the table that is used to generate a drop-down list) then the insert should be in source control and deployed with the rest of the code. This is especially true if the insert is needed for the rest of the code to work. Otherwise, this step may be forgotten when the code is deployed and not so nice things happen.
If static data comes from another source (such as an import of the current airport codes in the US), then you may simply need to run an already documented import process. The import process itself should be in source control (we do this with all our SSIS packages), but the data need not be.
Here at Red Gate we recently added a feature to SQL Data Compare allowing static data to be stored as DML (one .sql file for each table) alongside the schema DDL that is currently supported by SQL Compare.
The idea is that when you want to push changes to your target server, you do a comparison using the scripts as the source data source, which generates the necessary DML synchronization script to update the target. This means you don't have to assume that the target is being recreated from scratch each time. In time we hope to support static data in our upcoming SQL Source Control tool.
David Atkinson, Product Manager, Red Gate Software
I have come across this when developing CMS systems.
I went with appending the static data (the stuff referenced in the code) to the database creation scripts, then a separate script to add in any 'initialisation data' (like countries, initial product population etc).
For the first two steps, you could consider using an intermediate format (i.e. XML) for the data, then using a home-grown tool, or something like CodeSmith, to generate the SQL, and possibly source files as well, if (for example) you have lookup tables which relate to enumerations used in the code - this helps enforce consistency.
This has another benefit that if the schema changes, in many cases you don't have to regenerate all your INSERT statements - you just change the tool.
I really like your distinction of the three types of data.
I agree for the third.
In our application, we try to avoid putting the first kind in the database, because it is duplicated (as it has to be in the code, the database copy is a duplicate). A secondary benefit is that we need no join or query to get access to that value from the code, so this speeds things up.
If there is additional information that we would like to have in the database, for example if it can be changed per customer site, we separate the two. Other tables can still reference that data (either by index, e.g. 0, 1, 2, 3, or by code, e.g. EMPTY, SIMPLE, DOUBLE, ALL).
For the second, the scripts should be in source control. We separate them from the structure (I think they are typically replaced over time, while the structure keeps adding deltas).
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
We have a complete procedure for that, and a readme coming with each release, with scripts and so on...
First off, I have never used Visual Studio Database Edition. You are blessed (or cursed) with whatever tools this utility gives you. Hopefully that includes a lot of flexibility.
I don't know that I'd make that big a difference between your type 1 and type 2 static data. Both are sets of data that are defined once and then never updated, barring subsequent releases and updates, right? In which case the main difference is in how or why the data is as it is, and not so much in how it is stored or initialized. (Unless the data is environment-specific, as in "A" for development, "B" for Production. This would be "type 4" data, and I shall cheerfully ignore it in this post, because I've solved it using SQLCMD variables and they give me a headache.)
First, I would make a script to create all the tables in the database--preferably only one script, otherwise you can have a LOT of scripts lying about (and find-and-replace when renaming columns becomes very awkward). Then, I would make a script to populate the static data in these tables. This script could be appended to the end of the table script, or made its own script, or even made one script per table, a good idea if you have hundreds or thousands of rows to load. (Some folks make a csv file and then issue a BULK INSERT on it, but I'd avoid that as it just gives you two files and a complex process [configuring drive mappings on deployment] to manage.)
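A sketch of such a population script, written so it can be re-run safely (all names are hypothetical):
-- static data load; IF NOT EXISTS makes the script safe to run more than once
IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'OPEN')
    INSERT INTO dbo.OrderStatus (StatusCode, Description) VALUES ('OPEN', 'Order is open');
IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'CLOSED')
    INSERT INTO dbo.OrderStatus (StatusCode, Description) VALUES ('CLOSED', 'Order is closed');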
The key thing to remember is that data (as stored in databases) can and will change over time. Rarely (if ever!) will you have the luxury of deleting your Production database and replacing it with a fresh, shiny, new one devoid of all that crufty data from the past umpteen years. Databases are all about changes over time, and that's where scripts come into their own. You start with the scripts to create the database, and then over time you add scripts that modify the database as changes come along -- and this applies to your static data (of any type) as well.
(Ultimately, my methodology is analogous to accounting: you have accounts, and as changes come in you adjust the accounts with journal entries. If you find you made a mistake, you never go back and modify your entries, you just make a subsequent entries to reverse and fix them. It's only an analogy, but the logic is sound.)
The solution I use is to have create and change scripts in source control, coupled with version information stored in the database.
Then, I have an install wizard that can detect whether it needs to create or update the db - the update process is managed by picking appropriate scripts based on the stored version information in the database.
See this thread's answer. Static data from your first two points should be in source control, IMHO.
Edit:
All-in-one or a separate script? It does not really matter, as long as you (the dev team) agree with your deployment team. I prefer separate files, but I can still always create an all-in-one.sql from those in the proper order [Logins, Roles, Users; Tables; Views; Stored Procedures; UDFs; Static Data; (Audit Tables, Audit Triggers)].
How do you make sure they execute it? Well, make it another step in your application/database deployment documentation. If you roll out an application which really needs specific (new) static data in the database, then you might want to perform a DB version check in your application: update the DB_VERSION to your new release number as part of that script, and your application on start-up should check it and report an error if a newer DB version is required.
Dev and non-dev static data: I have never actually seen this case. More often there is real static data, which you might call "dev", which is major configuration, ISO static data etc. The other type is default lookup data, which is there for users to start with, but they might add more. The mechanism to INSERT these data might be different, because you need to ensure you do not destroy (power-)user-created data (see the sketch below).
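A sketch of a lookup-data load that leaves user-added rows alone (names are hypothetical):
-- add default lookup rows only where missing; user-created rows are untouched
INSERT INTO dbo.Country (CountryCode, CountryName)
SELECT s.CountryCode, s.CountryName
FROM (VALUES ('US', 'United States'), ('GB', 'United Kingdom')) AS s (CountryCode, CountryName)
WHERE NOT EXISTS (SELECT 1 FROM dbo.Country c WHERE c.CountryCode = s.CountryCode);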