Adding support for REGEXP() in SQLite through Haskell HDBC? - sql

I've unfortunately just realized that, having committed to HDBC as the database access framework for connecting to my SQLite3 database, the ability to add a function to handle REGEXP() in SQLite SQL seems to exist only in Database.SQLite.
Regular expression support is something I need at the SQL level in this application, but before I start converting everything to Database.SQLite I thought I'd ask whether there are other options...?

Ok,
I see that HDBC may not be able to provide this ability, but I found this extension for SQLite:
https://github.com/eatnumber1/sqlite-regexp
which will do just that once loaded via SQLite's load_extension() mechanism. Of course, this adds an external dependency, but it fixes the problem (without having to rewrite lots and lots of code).
EDIT:
Actually, it seems that I cannot call load_extension(X) from inside HDBC, which means that I cannot load the extension. So this is still an open issue.
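For reference, this is roughly what using the extension looks like from plain SQL, assuming it has been compiled to a loadable module (the file name regexp.so below is illustrative):

-- Load the compiled extension; this is the step that HDBC appears to block,
-- since the client library must first enable extension loading.
SELECT load_extension('./regexp.so');
-- Once loaded, REGEXP works as an operator backed by the extension's function:
SELECT name FROM people WHERE name REGEXP '^Ha.*ell$';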

Related

Trying to figure out why the NewID function stopped working on Oracle

I have used SELECT NEWID() FROM DUAL to generate new (random) GUIDs in the past.
However, today when I tried it I got the error below:
ORA-00904: "NEWID": invalid identifier
I am not able to find anything about this particular error by googling, so I guessed it must be one of two cases:
Either this has somehow been blocked by my system admin, or the Oracle instance is unable to find the function due to some installation/version update issue.
-or-
Oracle has stopped supporting NEWID() and wants us to use only SYS_GUID() for GUID generation.
(If so, I'll have to implement a REGEXP_REPLACE, as GUIDs in my system are '-'-separated.
I'll also have to update all existing code that uses NEWID().)
Any suggestion will be helpful. Thanks.
Oracle has never had a built-in newID function. That is a function that exists in SQL Server so it is entirely possible that someone had previously created a custom newID function that you were accustomed to calling. Whether that function was just calling sys_guid under the covers or whether it was replicating the format of the GUIDs in SQL Server like this implementation is something you'd have to determine. Frequently, tools that help you migrate code from one database engine to another will install a library of functions that emulate the built-in functions of the source database engine in the target database in order to make migrations easier. So it is possible that the function you're accustomed to calling was installed by some migration tool.
Since you talk about "version/ installation issues" my guess is that you are connected to a new/ different database that doesn't have the function you are accustomed to. If so, you can probably just go to the previous database where the code worked and copy the code for the custom function to the new database. If you are connected to the same database with the same user where this previously worked, that would imply that someone has revoked your user's access to the function or dropped the function entirely in which case you'd need to talk to your DBA/ DevOps team to see what changed and why.
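If it turns out the function simply needs to be recreated, a minimal sketch of such a wrapper could look like this (the name, the lower-casing and the dash layout are assumptions, not anything Oracle ships):

-- Hypothetical stand-in: wraps SYS_GUID() and reformats its raw hex output
-- into the dash-separated 8-4-4-4-12 layout that SQL Server GUIDs use.
CREATE OR REPLACE FUNCTION newid RETURN VARCHAR2 IS
BEGIN
  RETURN LOWER(REGEXP_REPLACE(
           RAWTOHEX(SYS_GUID()),
           '(.{8})(.{4})(.{4})(.{4})(.{12})',
           '\1-\2-\3-\4-\5'));
END;
/

-- Same call as before:
SELECT newid() FROM dual;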

release postgresql extension

I'm developing an application that holds its data in Postgres, so I must prepare the database before working with the application: a few tables must be created. I'm creating these tables by running SQL code, but I think that's not convenient anymore after finding this doc:
A useful extension to PostgreSQL typically includes multiple SQL objects; for example, a new data type will require new functions, new operators, and probably new index operator classes. It is helpful to collect all these objects into a single package to simplify database management.
The main advantage of using an extension, rather than just running the SQL script to load a bunch of "loose" objects into your database, is that PostgreSQL will then understand that the objects of the extension go together.
I believe I should use this approach.
What I don't understand is how I can share my extension. I thought it would work like Maven: you create your extension with custom types, functions and tables, then you pack it, name it (e.g. my-ext-0.1), give it a version and release it into some kind of repository. After that you can connect to a database, run 'create extension my-ext-0.1' and have everything done :)
I thought that the 'create extension' command would download the extension and install it, without me downloading anything by hand. I use Maven and Ivy, and I expected similar behaviour from PostgreSQL.
The documentation says that you need to place your extension files under some directory and only then run 'create extension' in some database.
How do you create your extensions and share them between different servers?
Postgres extensions do not work like this. They can have access to database internals and can run any code as the database OS user. Therefore installing them is typically limited to superusers, from a specific directory, and only some of them are available on managed hosting servers.
I thought that you could achieve something similar by installing your supplemental functions, types and tables in a special schema added to the search path. An upgrade would then be as simple as:
drop schema mylib cascade; -- don't do this!!!
create schema mylib;
\i mylib.sql
But unfortunately this would also remove all dependent objects from other schemas - columns using a custom type, triggers using a custom function etc. So it's not a solution for your problem.
I'd rather create my functions, types and all in my schema, using available extensions and "standard" languages.
Postgres will not download your extension (unless you create an extension that adds this functionality to Postgres), but your extension should still be created the "usual" way.
To check your "directory for extensions", run:
t=# create extension "where should I put control file";
ERROR: could not open extension control file "/usr/local/share/postgresql/extension/where should I put control file.control": No such file or directory
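As a minimal sketch of what would go into that directory (all names here are illustrative): a control file mylib.control declaring default_version = '0.1', plus a script mylib--0.1.sql with the objects themselves:

-- mylib--0.1.sql: everything CREATE EXTENSION mylib will install
CREATE FUNCTION mylib_add_one(integer) RETURNS integer
  AS 'SELECT $1 + 1;'
  LANGUAGE SQL IMMUTABLE;

-- then, connected to the target database:
CREATE EXTENSION mylib;  -- installs version 0.1 from the files above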
And to repeat a comment: before extending SQL, please check out plpgsql and the existing commands.
When you get bored and have made sure the existing Postgres functionality is too limited, install the postgresql-contrib package and check other extensions as examples of best practice. And of course check out https://pgxn.org/

SQL Server 2012 keyword override

A question for which I already know there is no pretty answer.
I have a third party application that I cannot change. The application's database has been converted from MS Access to SQL Server 2012. The program connects with ODBC and does not care about the backend. It sends pretty straight-forward SQL that seems to work on SQL Server nicely as well.
There is however a problem with one table that has the name "PLAN" which I already know is a SQL Server keyword.
I know that you would normally access such a table with square brackets, but since I'm not able to change the SQL, I was wondering if there is any "ugly" hack that can either override a keyword or transform the SQL on the fly.
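To make the problem concrete, this is the kind of statement that breaks (a minimal illustration; the bracketed form is exactly the rewrite I cannot inject into the application's SQL):

-- Sent by the application; fails because PLAN is reserved in T-SQL:
SELECT * FROM PLAN;
-- Works, but only if the SQL could be changed, which it can't:
SELECT * FROM [PLAN];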
You could try to edit the third party application with a hex editor. If you find the string PLAN, edit it to something like PPAN and then rename the table, views etc. to match. If you catch every occurrence, it could work. But of course it is an ugly thing.
I think you are screwed, I am afraid. The only other approaches I could suggest are:
Intercepting the network packets before they hit the SQL Server, which is clearly quite complicated. See https://reverseengineering.stackexchange.com/questions/1617/server-side-query-interception-with-ms-sql-server and in particular the answer https://reverseengineering.stackexchange.com/a/1816
Decompiling the program in order to change it if it's a Java or .Net app for instance.
I suspect you're hosed. You could:
Wire up the 3rd party app to a shim MS Access database that uses linked tables, where the Access table is nothing but a pass-through to the underlying SQL Server table. What you want to do is:
Change the offending table name in the SQL Server schema.
Create the linked tables in Access.
Create a set of views/queries in Access with the same schema that the 3rd party app expects.
Having done that, the 3rd party app should be able to speak "Access SQL" like it always has. Access takes care of the "translation" to T-SQL. Life is good. I suspect you'll take something of a performance hit, since you're proxying everything through Access, but I don't think it'll be huge.
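On the SQL Server side, the first step is just a rename (a sketch; the replacement name PLAN_TBL is illustrative):

-- Move the reserved name out of the way; the Access pass-through layer
-- then exposes it to the app under the original name PLAN.
EXEC sp_rename 'dbo.[PLAN]', 'PLAN_TBL';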
That would be my preferred solution.
The other option would be to write a "shim" DLL that implements the ODBC API and simply wraps the calls to the true ODBC driver. Then capture the requests and fix them up as necessary prior to invoking the wrapped DLL method. The tricky part is that your 3rd party app might be going after columns by ordinal position or might be going after them by column name, or a mix. That means that you might need to transform the column names on the way back, which might be more difficult than it seems.

Rails: copy data from production MySQL into development SQLite3

I'm having trouble copying data from a production MySQL server to a development SQLite3 file (so that I have real data to play with on my development machine). I've found tons of resources around the 'net on how to convert from MySQL to SQLite3, most of which were bash scripts with elaborate sed filters, but none worked (the most common problem was syntax issues upon import).
Anyway, then I stumbled upon YamlDB, and I thought "Why, of course! Let Rails do the conversion for me!" Well, this doesn't work either because all of the NULL fields (which are represented in the YAML file as !!null) end up being imported into the SQLite3 database exactly as "--- !!null" instead of actual NULLs. I seem to be the only person with this problem, as there were no mentions of it in the GitHub issues queue.
I even tried the workaround of using syck instead of psych (found in this SO question), but it made no difference.
So my question is this: does ANYONE know of a SIMPLE way to export data from one Rails database for importing into another, regardless of database kind? And by "simple", I mean a few commands at the console, or whatever.
Look into taps: http://github.com/ricardochimal/taps
It will dump your MySQL db into a local SQLite db and is relatively simple to use.
From the comments: if you get an error stating "schema parsing returned no columns, table probably doesn't exist", you need to specify an absolute path to the sqlite3 db instead of a relative one.
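In outline, usage looks roughly like this (a sketch from memory of the taps README; host names, credentials and the port are placeholders, not verified values):

# On the machine that can reach the production MySQL database:
taps server mysql://dbuser:dbpass@localhost/myapp_production httpuser httppass
# On the development machine, pulling into the local SQLite3 file:
taps pull sqlite://db/development.sqlite3 http://httpuser:httppass@production-host:5000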

Can Linq to SQL create a database from DDL files?

Is there anything built into the Linq to SQL libraries that allows me to create an entire database from a collection of DDL files?
Let's say you have a collection of DDL files, one for each table in the database. Conceptually it would be pretty straightforward to call a create-table function for each one and, for each one that succeeds (does not throw a SQL exception, for example due to a relationship or foreign key error), pop the file name off the stack. For any that failed you could try to run the DDL again until it finally succeeded and all of your tables existed in the database (see the sketch below). However, if there is something like this that already exists in, say, Linq to SQL or the Migrations project, that would be great. Does anyone know if this exists already, without having to combine all of the DDLs into a single script? Thanks in advance.
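Purely to illustrate the retry idea described above, here is a sketch in T-SQL, assuming each file's text has already been loaded into a #scripts table (the table and column names are hypothetical):

-- Assumes: CREATE TABLE #scripts(name sysname PRIMARY KEY,
--                                body nvarchar(max), applied bit, tried bit)
DECLARE @made_progress bit = 1, @name sysname, @body nvarchar(max);

WHILE @made_progress = 1
BEGIN
    SET @made_progress = 0;
    UPDATE #scripts SET tried = 0 WHERE applied = 0;  -- reset per-pass marker

    WHILE EXISTS (SELECT 1 FROM #scripts WHERE applied = 0 AND tried = 0)
    BEGIN
        SELECT TOP (1) @name = name, @body = body
        FROM #scripts WHERE applied = 0 AND tried = 0;

        BEGIN TRY
            EXEC sp_executesql @body;                 -- run this file's DDL
            UPDATE #scripts SET applied = 1 WHERE name = @name;
            SET @made_progress = 1;                   -- a later pass may now succeed
        END TRY
        BEGIN CATCH
            -- e.g. a foreign key target doesn't exist yet; retry next pass
        END CATCH;

        UPDATE #scripts SET tried = 1 WHERE name = @name;
    END;
END;
-- Anything still unapplied after a pass with no progress is a genuine error.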
If you have Visual Studio 2008 or 2010 Professional or above, it includes the new version of database projects, which can handle that precisely for you (it will even validate the scripts before execution so you can see what errors exist).
I don't believe so. Linq-to-Sql is not really made for manipulating database schemas. You might have more luck with something like the Microsoft SMO libraries.
Use ADO.NET commands instead for that. They should be able to handle it, depending on how complex each file is. As long as each file has one executable statement, ADO.NET commands may work fine for what you want to do.