How to add my own PRAGMA statements in SQLite to store custom meta data - sql

I want to store meta data like author name, copyright, and source in my SQLite DB without creating a new table. I found out we can use PRAGMA statements to set some values. I would like to store my own custom name and value. How do I create a custom PRAGMA statement? http://www.sqlite.org/pragma.html
Has anyone done this before? The documentation says:
"The C-language API for SQLite provides the SQLITE_FCNTL_PRAGMA file control which gives VFS implementations the opportunity to add new PRAGMA statements or to override the meaning of built-in PRAGMA statements."
Please let me know how I can achieve this.

The built-in PRAGMAs that store their value in the database do so in the database header.
There is not enough space to add custom values.
You have no choice but to create a new table.
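For what it's worth, the new table doesn't need to be anything elaborate. A minimal sketch using Python's built-in sqlite3 module (the table name, keys and values here are just examples):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # use your database file here

# A simple key/value table for custom metadata.
con.execute("CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)")
con.execute("INSERT OR REPLACE INTO meta VALUES ('author', 'Jane Doe')")
con.execute("INSERT OR REPLACE INTO meta VALUES ('copyright', '2024 Example Corp')")
con.commit()

author = con.execute("SELECT value FROM meta WHERE key = 'author'").fetchone()[0]
print(author)  # -> Jane Doe
```

Tools that don't know about the table will simply ignore it, which is arguably better than hiding metadata in the header anyway.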

Not sure if this is enough for you, but I use user_version to keep track of the last schema check.
What I do is save the timestamp it was last checked against the code schema. If it's been 24h, or 5m, or whatever, I do a re-check and update stuff. I only need one 32-bit int for that, so it's perfect. If you need more, this is not it.
More info on user_version here (PRAGMA schema_version) and here (db reserved header space)
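A rough sketch of that timestamp-in-user_version idea, again with Python's built-in sqlite3 module (the 24h threshold is just the example from above):

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")  # your database file in practice

# Store a Unix timestamp of the last schema check in the 32-bit user_version.
now = int(time.time()) & 0x7FFFFFFF        # keep it in signed 32-bit range
con.execute(f"PRAGMA user_version = {now}")  # PRAGMA takes no bound parameters

last_checked = con.execute("PRAGMA user_version").fetchone()[0]
if time.time() - last_checked > 24 * 3600:
    pass  # re-run the schema check here, then update user_version again
print(last_checked == now)  # -> True
```

Note that user_version is a single integer with no semantics attached by SQLite, so anything that fits in 32 bits is fair game.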

Take a look at the new Checksum VFS Shim for an example of a VFS adding a custom pragma.
The source code is linked on the page above, but here it is again, just in case.
But of course, you cannot store additional information, except via tables.
So while the above does show how to implement a custom pragma, your actual end goal is not possible, since the header is fixed and reserved for SQLite itself. And if you have a table anyway, why have a pragma at all?

The answer from ddevienne about looking at the VFS Shim sources is a step in the right direction, but the comment about the impossibility of storing your own data is only partially correct (I could not add a comment due to insufficient reputation, so I am posting this as an answer). It's your own VFS, so you're free to organize the data as you wish. The xRead and xWrite interfaces just assume SQLite's own positional logic, but you can reserve an area at the start of the file for your own purposes and translate the positions accordingly. I can confirm this works, since I implemented a VFS that reserves a small area at the start using this technique. But there are consequences: the file won't be compatible with conventional SQLite tools and libraries.
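The offset translation described above can be illustrated in plain Python. This is a toy stand-in for the C-level VFS shim (the real thing would do the translation inside xRead/xWrite); the 64-byte header size and its contents are arbitrary examples:

```python
import io

RESERVED = 64  # bytes reserved at the start of the file for our own header

class OffsetFile:
    """Toy wrapper that hides a reserved header region from its user,
    the way a custom VFS shim can hide one from SQLite."""

    def __init__(self, f):
        self.f = f

    def write_header(self, data: bytes):
        assert len(data) <= RESERVED
        self.f.seek(0)
        self.f.write(data.ljust(RESERVED, b"\0"))

    def read(self, offset, n):      # like xRead: translate the position
        self.f.seek(offset + RESERVED)
        return self.f.read(n)

    def write(self, offset, data):  # like xWrite
        self.f.seek(offset + RESERVED)
        self.f.write(data)

raw = io.BytesIO()
db = OffsetFile(raw)
db.write_header(b"author=Jane Doe")
db.write(0, b"SQLite format 3\0")  # "offset 0" as seen by the database engine
print(db.read(0, 15))              # -> b'SQLite format 3'
```

As the answer warns, a file produced this way is no longer a valid SQLite database to anything that reads it without the same translation layer.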

Related

Best practice instead of hard-coded RFC destinations?

Is there a good way to avoid hardcoded RFC destinations?
Right now our solution is to check which system is used and then assign the destination to a variable
IF cl_role EQ 'P'.
p_dest = 'ESW300'.
ELSE.
p_dest = 'EAW300'.
ENDIF.
which we use when calling our destination function.
CALL FUNCTION 'XYZ' DESTINATION p_dest
Is there a good way to not use the hardcoded destinations?
Thank you for the help!
RFC destinations are already an abstraction of the endpoint, so I would not recommend abstracting them yet again. For that reason I would suggest using the same name across systems as a leading practice, and not changing them to be instance-specific.
Otherwise, if you really want to use different RFC destination names across systems (I would not), I would suggest determining the RFC destination dynamically instead of hard-coding it. Some SAP standard programs use a specific format to derive the expected RFC destination name; for example, <hostname>_<systemname>_<system number> is used by SolMan, and you can find many other examples if you look through the standard RFC destinations.
I would also recommend as a best practice that hard-coded values never be populated inline as your example shows, but defined as constants in the header of the program.
I realize you were probably only trying to keep the question focused, but I mention it for others reading this.
I have seen every company create its own custom table containing the RFC destinations (maintained differently in every SAP system by administrators; alternatively, these can be custom entries in the standard table TVARVC), but nobody has published anything about it. There is this blog post, but it describes a very complex solution, only documented, with no code provided.
An easier (and adequate?) way is to create a Logical System (transaction code BD54) for every system, and assign an RFC destination to it (transaction code BD97).
In your program, do this kind of thing:
SELECT SINGLE rfcdest
FROM tblsysdest
WHERE logsys = 'CRM'
INTO #DATA(crm_system).
CALL FUNCTION 'RFC_PING' DESTINATION crm_system
EXCEPTIONS OTHERS = 1.
PS: prefer to abstract things, like creating a global class with one method GET_CRM_SYSTEM instead of hardcoding the SELECT in every program.

Liquibase load data in a format other than CSV

With the load data option that Liquibase provides, one can specify seed data in a CSV format. Is there a way I can provide say, a JSON or XML file with data that Liquibase would understand?
The use case: we are trying to put in some sample data which is hierarchical, e.g. a Category - Subcategory relation, which would require putting in the parent id for all related categories. It would help if there were a way to avoid including the ids in the seed data via, say, JSON:
{
"MainCat1": ["SubCat11", "SubCat12"],
"MainCat2": ["SubCat21", "SubCat22"]
}
Very likely this is not supported (I couldn't make Google help me), but is there a way to write a plugin or something that does this? A pointer to a guide (if any) would help.
NOTE: This is not about specifying the change log in that format.
This is not currently supported, and supporting it robustly would be pretty difficult. The main difficulty lies in the fact that Liquibase is designed to be database-platform agnostic, combined with the design goal of being able to generate the SQL required to do an operation without actually doing the operation live.
Inserting data like you want without knowing the keys and just generating SQL that could be run later is going to be very difficult, perhaps even impossible. I would suggest approaching Nathan, who is the main developer for Liquibase, more directly. The best way to do that might be through the JIRA bug database for Liquibase.
If you want to have a crack at implementing it, you could start by looking at the code for the LoadDataChange class (source on GitHub), which is where the CSV support currently lives.
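Short of writing an extension, one pragmatic workaround is to generate the CSV (with deterministic ids) from the JSON before running Liquibase. A rough sketch; the column names here are examples and would have to match the columns declared in your loadData changeset:

```python
import csv
import io
import json

seed = json.loads("""
{
  "MainCat1": ["SubCat11", "SubCat12"],
  "MainCat2": ["SubCat21", "SubCat22"]
}
""")

out = io.StringIO()  # in practice: open("category.csv", "w", newline="")
writer = csv.writer(out)
writer.writerow(["id", "name", "parent_id"])  # example columns

next_id = 1
for main, subs in seed.items():
    parent_id = next_id
    writer.writerow([parent_id, main, ""])  # empty parent_id for top level
    next_id += 1
    for sub in subs:
        writer.writerow([next_id, sub, parent_id])
        next_id += 1

print(out.getvalue())
```

Because the ids are assigned deterministically from the JSON, the generated CSV stays stable across runs, which matters for Liquibase checksums.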

Biztalk- flat file schema definitions

I have defined a flat file schema which works fine. However, I now have a new requirement for this schema: it has to support potential additional fields at the end of the records in the future.
The solution I used is quite "ugly": I added an additional filler field at the end of the record, configured it as "minOccurs = 0", and set "Allow early termination of optional fields" to true.
This works but I don't like it.
It seems to me that there must be a property for ignoring any additional fields after the last one, so I won't need this filler field.
Does anyone familiar with such option/ property?
Thanks all.
Nope, what you've done is the correct way to handle this situation. Beauty is in the eye of the beholder.
The Flat File Parser requires all possible content be defined in the schema so it doesn't ever have to 'guess' what's next.
When a flat file changes, the schema must change as well. That is part of the job for a BizTalk developer.
You can't anticipate changes to the flat file inside your schema. With the filler field you have now, what are you going to do if 2 extra fields appear and have to be used? How will you get the data in, say, a mapping?
This is the way the flat file parser works: everything has to be well defined, and if the specs change you must update your schemas. There is no magic here to make it all completely dynamic. Unless you were to write a custom flat file disassembler from scratch that supports it, but good luck with that.

Is there an editor for inserting/editing rows into a Core Data DB?

I've created a Core Data schema in Xcode (3.2.5, if it matters), so I have the .xcdatamodel file with the proper entities and relations.
Now - how can I insert, edit and/or delete data in it, NOT from within the code?
Like what phpMyAdmin is for MySql.
Thanks.
Core Data is meant to be used programmatically. Once you run the app, it should create a file somewhere on disk (exactly where is probably specified in the AppDelegate class). It is likely that this file will be a SQLite database, but it doesn't have to be (the point of Core Data is to abstract your data away from the file format used to store it). It could also be an XML file or a binary file.
If it's a SQLite file, then you can open it in your favorite SQLite editor.
HOWEVER
The schema used in the SQLite format is not documented. If you go mucking around in it, you might get stuff to work, but it's also very likely that you could irreparably screw it up. (If it's an XML file or a binary file, you're probably totally out of luck)
In the end, Core Data is supposed to be used programmatically. To use it in a different way (such as what you're asking for) would be to use it in a way for which it was not intended and therefore not designed.
I don't know if you already solved your problem, but there's this SQLite Manager plug-in for Firefox: http://code.google.com/p/sqlite-manager/
I haven't tried importing data or using the INSERT command to insert individual rows, but you could give it a try. It's free and works very well for me as is.
There are quite a few database management tools available for SQLite that allow you to do this. I've tried a few, but to be honest none of them have impressed me much as yet.
Would be great to have something like Toad available.
Anyway, find wherever your database file is, then drop it onto whichever application.
You can then add, delete, and edit rows and columns.
Of course, you will need to maintain any foreign keys and such like.
I find the generated Core Data models to be pretty easy to understand.
Example tools are SQLite Database Browser (free), SQLiteManager (not free), and Base. A quick Google search should reveal those and a few more.
I normally use SQLite Database Browser although it does crash occasionally.
See Christian Kienle's Core data editor. It's not free, but is designed to work directly with core data models and stores via Apple's API, supports binary data, builds relationships and even triggers validation, etc. I've found it's worth the $20.

Alternatives to using RFile in Symbian

This question is in continuation to my previous question related to File I/O.
I am using RFile to open a file and read/write data to it. Now, my requirement is such that I would have to modify certain fields within the file. I separate each field within a record with a colon and each record with a newline. Sample is below:
abc#def.com:Albert:1:2
def#ghi.com:Alice:3:1
Suppose I want to replace the '3' in the second record with '2'. I am finding it difficult to overwrite a specific field in the file using RFile, because RFile does not provide such a facility.
Because of this, to modify a record I have to delete the contents of the file and re-serialize (that is, loop through the in-memory representation of the records and write them all to the file). Doing this every time a record's value changes is quite expensive, as there are hundreds of records and the changes could be frequent.
I searched around for alternatives and found CPermanentFileStore. But I feel the API is hard to use as I am not able to find any source on the Internet that demonstrates its use.
Is there a way around this? Please help.
Depending on which version(s) of Symbian OS you are targeting, you could store the information in a relational database. Since v9.4, Symbian OS includes an SQL implementation (based on the open source SQLite engine).
Using normal files for this type of record takes a lot of effort no matter the operating system. To do this efficiently you need to reserve space in the file for expansion of each record - otherwise you need to rewrite the entire file when a record value grows from, say, 9 to 10. Also, storing a lookup table in the file will make it possible to jump directly to a record using RFile::Seek.
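The reserved-space idea can be sketched with fixed-width records. Plain Python stands in for RFile here (io.BytesIO for the file, seek/read/write for RFile::Seek and friends); the 48-byte slot size is an arbitrary example:

```python
import io

RECORD = 48  # fixed record slot size in bytes, padded with spaces

def write_record(f, index, email, name, a, b):
    rec = f"{email}:{name}:{a}:{b}".ljust(RECORD)
    assert len(rec) == RECORD        # would overflow its slot otherwise
    f.seek(index * RECORD)           # RFile::Seek equivalent
    f.write(rec.encode("ascii"))

def read_record(f, index):
    f.seek(index * RECORD)
    return f.read(RECORD).decode("ascii").rstrip().split(":")

f = io.BytesIO()  # stands in for the real file
write_record(f, 0, "abc#def.com", "Albert", 1, 2)
write_record(f, 1, "def#ghi.com", "Alice", 3, 1)

# Change Alice's '3' to '2' by rewriting only her slot, not the whole file.
rec = read_record(f, 1)
rec[2] = "2"
write_record(f, 1, *rec)
print(read_record(f, 1))  # -> ['def#ghi.com', 'Alice', '2', '1']
```

The trade-off is wasted padding per record, which is exactly the price the answer above alludes to for avoiding full-file rewrites.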
CPermanentFileStore simplifies the actual reading and writing of the file, but basically does what you would otherwise have to do yourself. A database may be a better choice in this instance; if you don't want to use a database, I think using stores would be a better solution.