Start situation:
a folder with lots of files (mostly PNG images).
two tables in a database (MariaDB) which contain the supposed image filenames. I query the filenames like this:
select filename from table1
UNION
select filename from table2;
I want to know if I have files not registered in the database tables.
My first approach was to put the list of filenames into a text file (using the Linux command line, one filename per line), but I don't know how to continue.
I can't write to the database. UPDATE: I have since been granted the permissions I need for my job tasks, so I solved this with the suggestion below.
You need to do a couple of things:
Import the file with the list of filenames into the database.
Use a cursor to go through the imported list of file names and match it against the file names already registered in your two tables (a join does the same matching in a single statement; see the sketch below).
To import the file names you can use your client's import feature, or read the file directly as a virtual table.
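A minimal sketch of that approach in MariaDB, assuming the list was written one filename per line to /tmp/files.txt (the path and the staging table name are placeholders), and using a LEFT JOIN instead of a cursor to do the matching:

-- staging table for the on-disk file list
CREATE TABLE file_list (filename VARCHAR(255) PRIMARY KEY);

-- load the list produced on the command line (requires local_infile to be enabled)
LOAD DATA LOCAL INFILE '/tmp/files.txt'
INTO TABLE file_list
LINES TERMINATED BY '\n' (filename);

-- files present on disk but registered in neither table
SELECT f.filename
FROM file_list f
LEFT JOIN (SELECT filename FROM table1
           UNION
           SELECT filename FROM table2) r
       ON r.filename = f.filename
WHERE r.filename IS NULL;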
Thanks
I'm trying to find a specific string in my database. I'm currently using FlameRobin to open the FDB file, but this software doesn't seem to have a proper feature for this task.
I tried the following SQL query, but it didn't work:
SELECT
*
FROM
*
WHERE
* LIKE '126278'
So, what is the best way to do this? Thanks in advance.
You can't do such a thing directly. But you can convert your FDB file to text files such as CSV, so you can search for your string in all the tables/files at the same time.
1. Download a database converter
First you need software to convert your database file. I recommend Full Convert: just get the free trial and download it. It is really easy to use, and it exports each table to a separate CSV file.
2. Find your string in multiple files at the same time
For that task you can use the Find in Files feature of Notepad++ to search for the string in all the CSV files located in the same folder.
3. Open the desired table in FlameRobin
When Notepad++ highlights the string, it shows which file it is in and the line number. Full Convert saves each CSV with the same name as the original table, so you can find the table easily in whatever database manager you are using.
Here is the Firebird documentation: https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25.html
You need to read about
stored procedures of the "selectable" kind,
the execute statement command, including its for execute statement variant,
the system tables having "relation" in their names.
Then in your SP you enumerate all the tables, then you enumerate all the columns in those tables, and for every one of them you run a usual
select 'tablename', 'columnname', columnname
from tablename
where columnname containing '12345'
over every field of every table.
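For illustration, here is a rough sketch of such a selectable procedure, assuming Firebird 2.5+ (the procedure name and sizes are mine, not from the original answer; columns that cannot be searched as text are skipped by the error handler):

SET TERM ^ ;
CREATE PROCEDURE SEARCH_ALL_TABLES (SEARCH_TEXT VARCHAR(100))
RETURNS (TABLE_NAME VARCHAR(63), COLUMN_NAME VARCHAR(63), HIT_COUNT INTEGER)
AS
BEGIN
  /* enumerate every column of every user table (skip views and system tables) */
  FOR SELECT TRIM(rf.RDB$RELATION_NAME), TRIM(rf.RDB$FIELD_NAME)
      FROM RDB$RELATION_FIELDS rf
      JOIN RDB$RELATIONS r ON r.RDB$RELATION_NAME = rf.RDB$RELATION_NAME
      WHERE COALESCE(r.RDB$SYSTEM_FLAG, 0) = 0 AND r.RDB$VIEW_BLR IS NULL
      INTO :TABLE_NAME, :COLUMN_NAME
  DO
  BEGIN
    /* count case-insensitive substring matches in this column */
    EXECUTE STATEMENT
      'SELECT COUNT(*) FROM "' || TABLE_NAME || '" WHERE "' || COLUMN_NAME ||
      '" CONTAINING ''' || SEARCH_TEXT || ''''
      INTO :HIT_COUNT;
    IF (HIT_COUNT > 0) THEN
      SUSPEND;  /* emit one result row per matching column */
    WHEN ANY DO
      HIT_COUNT = 0;  /* column not searchable as text; skip it */
  END
END^
SET TERM ; ^

Then SELECT * FROM SEARCH_ALL_TABLES('126278') lists every table and column containing the string.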
But practically speaking, it would most probably be better to avoid SQL commands altogether: just extract ALL the database into a long SQL script, open that script in Notepad (or any other text editor), and search for the string you need there.
I am currently trying to move several data tables from my current AWS instance's Redshift database to a new database in a different AWS instance (for background, my company has acquired a new one and we need to consolidate to one instance of AWS).
I am using the UNLOAD command below on a table. I plan on making that table a CSV, then uploading that file to the destination AWS instance's S3 and using the COPY command to finish moving the table.
unload ('select * from table1')
to 's3://destination_folder'
CREDENTIALS 'aws_access_key_id=XXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXX'
ADDQUOTES
DELIMITER AS ','
PARALLEL OFF;
My issue is that when I change the file type to .csv and open the file, I get inconsistencies in the data. There are areas where many rows are skipped, and on some rows, after the expected columns end, I get additional columns with the value "f" for unknown reasons. Any help on how I could achieve this transfer would be greatly appreciated.
EDIT 1: It looks like fields with quotes are having the quotes removed. Additionally, fields containing commas are being split at those commas. I've identified some fields with both quotes and commas, and they are throwing everything off. Does the ADDQUOTES clause I have apply to the entire field regardless of whether there are quotes and commas within the field?
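For reference, going by the UNLOAD/COPY documentation, the round trip for data containing quotes and commas might look like the sketch below (paths and keys are placeholders): ESCAPE on UNLOAD backslash-escapes embedded delimiters, quotes, and newlines, and a file unloaded with ADDQUOTES has to be reloaded with REMOVEQUOTES:

unload ('select * from table1')
to 's3://destination_folder/table1_'
CREDENTIALS 'aws_access_key_id=XXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXX'
ADDQUOTES
ESCAPE
DELIMITER AS ','
PARALLEL OFF;

-- on the destination cluster, mirror the options used by UNLOAD
copy table1
from 's3://destination_folder/table1_'
CREDENTIALS 'aws_access_key_id=XXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXX'
REMOVEQUOTES
ESCAPE
DELIMITER AS ',';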
By default the unloaded file will have a .txt extension and quoted fields. Try opening it with Excel and then saving it as a CSV file.
Refer to https://help.xero.com/Q_ConvertTXT
I'm looking at the AM data, just for a data analysis project, and I'm having trouble importing the data into my DBMS (PostgreSQL).
My SQL code is this:
DROP TABLE IF EXISTS member_details;
CREATE TABLE member_details(
pnum varchar(255),
.....
updatedon timestamp);
COPY member_details
FROM '/Users/etc/data/sample_dump.csv'
WITH DELIMITER ','
CSV;
The problem is that the CSV file has no line breaks to separate the records; instead, each record is wrapped in parentheses, which my code above does not recognise. As a result it imports all the data as one line, and no records are created.
This is how the data is structured:
(dataA1, ....,dataAx),(dataB1,...,dataBx)
How can I alter my code so that PostgreSQL imports the data record by record, recognising the parentheses?
Based on the PostgreSQL COPY documentation, I don't believe it allows row delimiters other than carriage returns and/or line feeds. I believe you'll need to process your file before importing. You can simply replace every ,( with \n(, then strip all the parentheses to make it a standard CSV format that COPY will happily consume.
Perhaps there's another method for PostgreSQL that would work too, but I haven't come across anything yet.
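If you'd rather do the reshaping inside PostgreSQL instead of preprocessing the file, here is a sketch of an alternative (assuming no field itself contains a comma, parenthesis, tab, or backslash; the staging table name and the field positions are illustrative): COPY the whole dump into a one-column staging table, split it on the "),(" boundaries, then cut each record into fields:

-- staging table holding the raw dump as a single text column
CREATE TEMP TABLE raw_dump (line text);
COPY raw_dump FROM '/Users/etc/data/sample_dump.csv';

-- one row per parenthesised record, then one split_part() call per field
INSERT INTO member_details (pnum, updatedon)          -- list every target column here
SELECT split_part(rec, ',', 1),                       -- pnum
       -- ... a split_part(rec, ',', n) for each column in between ...
       split_part(rec, ',', 20)::timestamp            -- updatedon (position 20 is illustrative)
FROM (SELECT regexp_split_to_table(trim(both '()' from line), '\),\(') AS rec
      FROM raw_dump) AS records;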
Is it possible to programmatically have Access open a certain CSV file and read it into a datasheet, setting parameters like the type of delimiter, text qualifier, etc., including all the steps that one takes to manually import a CSV file into a table?
You can create a Scripting.FileSystemObject, then stream in the file line by line, splitting fields on your delimiter and adding each parsed record to the underlying recordset or another table.
You can also experiment with DoCmd.TransferText, as that seems the most promising built-in method.
Edit: for the more complete (and complex, and arguably less efficient) solution, which would give you more control over the schema and treat any CSV file like any other data source, look at ODBC: Microsoft Text Driver - see http://support.microsoft.com/kb/187670 for script examples. I believe you can already link to any CSV file via the standard MS Access front-end, thereby allowing you to do just about any table read/copy operation on it. Otherwise, just set up an ODBC file DSN entry for your CSV file (via Start->Program Files->Administrative Tools->Data Sources), and then you can select/link to it via your Access file.
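As an illustration of treating a CSV like any other data source, Access SQL can query a text file directly through the Text IISAM (assuming I recall the connect-string syntax correctly; the folder and file name here are placeholders), which turns the whole import into a single statement:

-- read C:\Data\import.csv (note the # replacing the dot in the file name)
-- into a new local table; HDR=Yes treats the first line as column headings
SELECT *
INTO MyImportedTable
FROM [Text;DATABASE=C:\Data;HDR=Yes].[import#csv];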
It greatly depends on whether the Access table already exists or whether you want to create it on the fly. Let's assume (for the sake of simplicity) that the table already exists (you can always go back and create it). As suggested before, you need some scripting tools:
' early-bound declarations require a reference to Microsoft Scripting Runtime
Dim FSO As FileSystemObject, Fo As TextStream, line As String
Set FSO = CreateObject("Scripting.FileSystemObject")
Set Fo = FSO.OpenTextFile("csvfile.csv")
This will allow you to read your CSV file, which is a text file. You control here which delimiter is used and which date format will be employed, etc. You also need to bring the database engine into play:
Dim db As Database
Set db = DBEngine.OpenDatabase("mydatabase.accdb")
And this is basically all you need. You read your CSV file line by line:
While Not Fo.AtEndOfStream
line = Fo.ReadLine
Now you need to format each field appropriately for the table, meaning: text fields must be surrounded by quote marks ("), date fields must be surrounded by #, etc. Otherwise the database engine will noisily complain. But again, you are in charge here, and you can do whatever cosmetic surgery you need on your input line. Another assumption I am making (for the sake of simplicity) is that you are comfortable programming in VBA.
So let's get to the crux of the problem:
db.Execute ("INSERT INTO myTable VALUES (" & line & ")")
Wend
Fo.Close  ' release the file when done
At this moment, line has been changed into something edible for the database engine. For example, if the original line read was
33,04/27/2019,1,1,0,9,23.1,10,72.3,77,85,96,95,98,10,5.4,5.5,5.1,Cashew,0
you changed it into
33,#04/27/2019#,1,1,0,9,23.1,10,72.3,77,85,96,95,98,10,5.4,5.5,5.1,"Cashew",0
One last note: it is important that each field of the CSV file matches the corresponding field of the table. This includes, at least but not only, being in the right order. You must ensure that in your preprocessing stage. Hope this puts you on the right track.