HSQLDB - how to set .lobs file encoding?

We run a small Java web application on an HSQLDB 2.4 database, and apparently the encoding of the .lobs file where blobs are persisted depends on the OS hosting it. For instance, it is ANSI-encoded when the application runs on Windows, which is problematic when we want to send the files represented by its blobs to a Linux system (via HTTP).
Does anyone know how to specify the encoding of this .lobs file?

HSQLDB does not encode the data in blobs. It stores the data exactly as inserted. If you need to encode the files represented by the blobs in a different character set, you have to do it in your application.
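For example, if the blobs hold text that the application originally wrote using the Windows default ("ANSI") charset, the re-encoding can be done while reading the blob back. A minimal sketch, assuming windows-1252 as the original charset and a made-up documents(id, content) table:

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.sql.*;

public class BlobRecode {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:hsqldb:file:data/mydb", "SA", "");
             PreparedStatement ps = con.prepareStatement(
                "SELECT content FROM documents WHERE id = ?")) {
            ps.setLong(1, 42L);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    Blob blob = rs.getBlob(1);
                    // HSQLDB returns the bytes exactly as they were inserted.
                    byte[] raw = blob.getBytes(1, (int) blob.length());
                    // Re-encode only if the bytes are text in a known charset.
                    String text = new String(raw, Charset.forName("windows-1252"));
                    byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);
                    // utf8 is what you would write to the HTTP response for the Linux side.
                }
            }
        }
    }
}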

Related

Use RStudio to connect to, and run queries on, a locally stored, compressed SQL database

I'm trying to connect to and run queries on two large, locally-stored SQL databases with file extensions like so:
filename.sql.zstd.part
filename2.sql.zstd
My preference is to use the RMySQL package; however, I am finding it hard to find documentation on a) how to access locally stored SQL files, and b) how to deal with the zstd extension.
This may be very basic but help is appreciated!
It seems like you are having trouble with what the file extensions mean.
filename.sql.zstd.part
.part usually means the file was being downloaded from the internet but the download isn't complete yet (i.e. downloads that are in progress or have been stopped).
So to get from filename.sql.zstd.part to filename.sql.zstd you need to complete the download.
.zstd means it is a compressed file (to save disk space). You need a decompression program to get from filename.sql.zstd to filename.sql.
The compression algorithm used is called Zstandard, so you need a decompressor for this specific format. Look at https://facebook.github.io/zstd/ for such a program.
There was also once an R package for this (zstdr), but it has been archived; you could still download an older version from CRAN
(https://cran.r-project.org/web/packages/zstdr/index.html).
Finally, filename.sql is not actually a database. An .sql file usually contains SQL statements for creating and modifying database structures. You would have to install a database, e.g. MariaDB, and then import this .sql file to actually have the data in a database on your computer. Then you would access this database from R.
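If you would rather script the decompression step than use a standalone tool, here is a minimal Java sketch using the third-party zstd-jni library (com.github.luben:zstd-jni); the file names are taken from the question, and the library choice is only an assumption for illustration:

import com.github.luben.zstd.ZstdInputStream;
import java.io.*;
import java.nio.file.*;

public class Unzstd {
    public static void main(String[] args) throws IOException {
        // Decompress filename.sql.zstd into filename.sql.
        try (InputStream in = new ZstdInputStream(new BufferedInputStream(
                 Files.newInputStream(Paths.get("filename.sql.zstd"))));
             OutputStream out = new BufferedOutputStream(
                 Files.newOutputStream(Paths.get("filename.sql")))) {
            in.transferTo(out);  // copy the decompressed stream to disk (Java 9+)
        }
    }
}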

Database model to manage documents

I need to build tables to manage documents such as jpg, doc, msg, and pdf using SQL Server 2008.
As far as I know, SQL Server supports .jpg images, so my question is whether it's possible to upload other kinds of files into a db.
This is an example of the table (it could be redefined if needed):
Document: document_id int(10)
name varchar(10)
type image (I don't know exactly how this type works)
Those are the initial columns for the table, but I don't know how to make it work for any file type.
PS: do I need to assign a directory on the server to save these documents?
You can store almost any file type in a SQL Server table... but if you do, you will almost certainly regret it.
Store metadata and a pointer to the file in your database instead, and store the files themselves on disk, where they belong.
Your database size, and thus the hardware required to run it, will grow very rapidly, so you will be incurring large costs that you do not need to incur.
Use FILESTREAM.
https://learn.microsoft.com/en-us/sql/relational-databases/blob/filestream-sql-server
I know that a link-only answer is not really an answer, but I can't believe no one has mentioned it yet.
The proper database design pattern is not to save files in the DBMS. You should develop a kind of File Manager subsystem to manage files for all of your projects.
File Manager Subsystem
This subsystem should be reusable, extensible, secure, and so on. Every project that needs to save files can use it.
Files can be saved anywhere: local disk, network drives, external drives, the cloud, and so on, so the subsystem should be designed to support all of these kinds of storage.
(You can improve the subsystem by adding more features to it, for example checking for duplicate files.)
The subsystem should generate a unique key for each file. After uploading and saving a file, the subsystem returns that key.
Now you can store this unique key in the database (instead of the file). Whenever you want the file, you read the unique key from the database and request the file from the subsystem by that key, as in the sketch below.
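A minimal sketch of that idea in Java, assuming local-disk storage and a simple document(file_key, name) table; the directory, table and column names are all made up for illustration:

import java.nio.file.*;
import java.sql.*;
import java.util.UUID;

public class FileStore {
    private final Path root;        // where the files actually live
    private final Connection con;   // connection to the metadata database

    public FileStore(Path root, Connection con) {
        this.root = root;
        this.con = con;
    }

    // Saves the file under a generated unique key and records the key in the DB.
    public String save(Path source, String originalName) throws Exception {
        String key = UUID.randomUUID().toString();
        Files.copy(source, root.resolve(key));
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO document (file_key, name) VALUES (?, ?)")) {
            ps.setString(1, key);
            ps.setString(2, originalName);
            ps.executeUpdate();
        }
        return key;
    }

    // Returns the path of the stored file for a key read back from the DB.
    public Path load(String key) {
        return root.resolve(key);
    }
}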

mysql workbench not exporting data with utf8

I have a database encoded in utf8 and I want to export it to a dump file. The problem is that when I do, the data in the dump file is not encoded in utf8. Is there a way to define the encoding when creating the dump file?
Your database may have been created with an encoding other than UTF-8. You may want to refer to this article about how to change the encoding settings; once that has been changed you should be able to export correctly.
https://dev.mysql.com/doc/refman/5.0/en/charset-applications.html
This doc shows how to set the encoding per table, as well as how to change the encoding via the CLI.
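If you export from the command line rather than from Workbench, mysqldump also lets you force the dump's character set explicitly, for example (user name and database name are placeholders):
mysqldump --default-character-set=utf8 -u user -p mydatabase > dump.sql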
Cheers.

what data type to use for storing different files in database

I'm trying to make a Postgres database table accept different files. The files I want to support have different MIME types: PDF, Word, plain text, and PowerPoint. The problem is that I don't know what data type to choose. The documentation for pgAdmin (the tool I'm using) is, let's say, unsatisfactory. Thanks
While you can store the file contents in the database, consider storing the file path instead and using the file system to store the file.
In the IT world "you can do anything with anything", but that doesn't mean you should.
In this case, you're trying to use a database as a file system, which it can do, but databases are not as efficient or practical as file systems for storing file contents (typically "large" data). It will:
make your backups longer and larger
slow your insert queries down (more I/O)
make your log files larger (slower and fill more often)
make accessing the files slower (query vs simple disk I/O)
require you to go via the database to access the files (hassle, can't use browser etc)
etc
You can use the bytea type in PostgreSQL.
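A minimal sketch of inserting a file into a bytea column via JDBC, assuming a table like document(name text, content bytea); the connection URL, credentials and names are placeholders:

import java.io.InputStream;
import java.nio.file.*;
import java.sql.*;

public class ByteaInsert {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("report.pdf");   // any MIME type: pdf, doc, ppt, txt...
        try (Connection con = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
             InputStream in = Files.newInputStream(file);
             PreparedStatement ps = con.prepareStatement(
                 "INSERT INTO document (name, content) VALUES (?, ?)")) {
            ps.setString(1, file.getFileName().toString());
            ps.setBinaryStream(2, in, Files.size(file));  // streams the bytes into the bytea column
            ps.executeUpdate();
        }
    }
}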

How to create a fixed blocked (FB) file for IBM mainframe/FTP in VBA

I've got VBA code that generates a text file with some pretty basic information included. I then upload that file via FTP.
I got a message from the admin of the IBM mainframe server today saying that my file was in variable blocking (VB) format and that their job process uses fixed blocking (FB) up to a maximum size of 256.
How is this done? During file creation? With a third-party tool?
You can simply convert the VB file into FB on the mainframe before running the actual process. VB-to-FB conversion is just a small JCL step.
You can use locsite to set the record format on the host dataset (file).
You can find the full list of FTP subcommands in the user guide below:
IP User’s Guide and Commands SC31-8780-05
Sorry all, I have a feeling I didn't explain this correctly, because I now have an answer which is rather simple. These two commands seem to have set up the environment correctly for the file to be FB rather than VB:
ftp> quote site lr=94
ftp> quote site rec=fb
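This is outside the VBA approach in the question, but for what it's worth, if the transfer is ever scripted on the JVM the same SITE commands can be sent with the third-party Apache Commons Net library; a minimal sketch, with host, credentials and dataset name made up:

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;
import java.io.FileInputStream;
import java.io.InputStream;

public class MainframeUpload {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("mainframe.example.com");
        ftp.login("user", "password");
        // Same effect as typing "quote site lr=94" and "quote site rec=fb" at the ftp prompt.
        ftp.sendSiteCommand("lr=94");
        ftp.sendSiteCommand("rec=fb");
        ftp.setFileType(FTP.ASCII_FILE_TYPE);  // ASCII mode; the mainframe side typically converts to EBCDIC
        try (InputStream in = new FileInputStream("report.txt")) {
            ftp.storeFile("'MY.HOST.DATASET'", in);
        }
        ftp.logout();
        ftp.disconnect();
    }
}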
If I remember rightly, FB means the records must fit into multiples of the block size; that is just how DASD stores files on disk, and it increases speed and throughput on the mainframe. If the data file does not fall on a multiple of the block size (this has nothing to do with the actual size of the data), the DASD system just accesses the file in blocks of 256 bytes. A host of special fields describing the blocking will be inserted into the data file when it is transferred to the mainframe, and that data also gets transferred to the magnetic tape backups.
There should be a script available on the mainframe to convert it using JCL (Job Control Language); ask the mainframe administrator to do it for you.
By the way, be aware of the character set you used in your data file: the mainframe uses the EBCDIC character set. There are plenty of tools out there that can convert ASCII data into a format readable by the mainframe, and if the data gets converted, that could impact the file size. Something worth mentioning and bearing in mind!
There is also the dd utility on Unix/Linux, which can convert the data to a fixed record size, although I do not think it would be the right way to do it.
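For reference, that dd invocation would look something like dd if=filename.txt of=filename.fb cbs=94 conv=block, where conv=block pads each newline-terminated line with spaces to a fixed cbs-sized record (the 94 simply echoes the lr=94 used earlier).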
Here's a useful link that will help you understand this, and there was also a similar question here on SO about MVS/TSO data...