I want to generate the INSERT statements for a table using 'sqlplus' and I do not know how to do it. Does anyone know? I have also tried the 'exp' command, with the following statement:
exp user/password#server file=data_table.dmp buffer=10485867 statistics=none tables='name_table'
but that does not work either: it generates a .dmp file that I cannot use. I want it to generate INSERTs.
Thanks and regards.
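For reference, the usual trick is to make the database build the INSERT text itself (for example by concatenating column values in a SQL*Plus query and spooling the output to a file). Below is a minimal sketch of that row-to-INSERT formatting logic, written in Python for illustration; the column names "id" and "name" are assumptions, while "name_table" comes from the question.

```python
# Sketch: format query rows as INSERT statements (illustrative only).
# Column names "id" and "name" are assumptions about the table's schema.

def quote(value):
    """Render a Python value as a SQL literal; single quotes are doubled."""
    if value is None:
        return "NULL"
    if isinstance(value, str):
        return "'" + value.replace("'", "''") + "'"
    return str(value)

def row_to_insert(table, columns, row):
    """Build one INSERT statement for a single row."""
    cols = ", ".join(columns)
    vals = ", ".join(quote(v) for v in row)
    return f"INSERT INTO {table} ({cols}) VALUES ({vals});"

for row in [(1, "O'Brien"), (2, None)]:
    print(row_to_insert("name_table", ["id", "name"], row))
```

The same concatenation can be done directly in a SELECT and spooled, which avoids any external tooling.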
I'm a total newbie to SQL, and I'd like to know whether anyone knows an easy way to "copy and paste" hundreds of entries into a SQLite database. Again, I'm not a professional programmer, so software that could automate the process would be great. (I primarily code in JavaScript, but SQL can be used as well if you could kindly explain the code.)
Essentially, the text I'd be adding would be delimited by a character (the '|' character in my case) for the columns, and line breaks for the rows. It would be added onto a table that's already being used in the database, with columns already set up.
Thanks a lot!! Any suggestions are most appreciated!
You can use DB Browser for SQLite: create a New Database, then use File > Import > Table from CSV file… (the import dialog lets you pick the field separator, so '|'-delimited data works too).
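If a GUI isn't available, the same import is easy to script; here is a minimal sketch using Python's standard sqlite3 module (the table name "people", its columns, and the sample data are assumptions; point the connection at your database file instead of ":memory:"):

```python
import sqlite3

# Sketch: load '|'-delimited text into an existing table.
# The table ("people"), its columns, and the sample data are assumptions.
raw = "Alice|30\nBob|25"

conn = sqlite3.connect(":memory:")  # use your real database file in practice
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
rows = [line.split("|") for line in raw.splitlines() if line.strip()]
conn.executemany("INSERT INTO people (name, age) VALUES (?, ?)", rows)
conn.commit()
```

The `?` placeholders also sidestep any quoting issues in the pasted text.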
I have to save a PDF file into a BLOB column of an Oracle DB.
I can't use Java and have to use an INSERT statement.
The only solutions I've found while searching were very complex.
Is there an easy solution like INSERT INTO (BLOB_COLUMN) VALUES (BLOBPDF("myPDF.pdf")), or something like that?
I would suggest that you use a stored procedure in Oracle where you pass the path to your PDF file and calling the stored procedure does the insert.
Look at the last two examples here.
If the load is a one-shot you can use SQL Developer.
Otherwise you can use SQL*Loader (http://docs.oracle.com/cd/B19306_01/server.102/b14215/ldr_params.htm), which is designed for this type of operation.
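Whatever the driver, the core pattern is the same: read the file as bytes and bind them to a parameter rather than splicing them into the SQL text. A small sketch, demonstrated with the stdlib sqlite3 driver (an Oracle driver such as cx_Oracle uses the same bind-parameter pattern; the table name and data here are assumptions):

```python
import sqlite3

# Sketch: insert a file's bytes into a BLOB column via a bind parameter.
# Demonstrated with sqlite3; the table name and dummy bytes are assumptions.
pdf_bytes = b"%PDF-1.4 dummy"  # in practice: open("myPDF.pdf", "rb").read()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, pdf BLOB)")
conn.execute("INSERT INTO docs (pdf) VALUES (?)", (pdf_bytes,))
conn.commit()
```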
I've tried to execute the delete below through a SQL script in a Pentaho Job, and I get the error
"Unknown table 'a' in MULTI DELETE". Can somebody throw light on this? Is there another way
around it?
DELETE a.* FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST a
WHERE EXISTS
(SELECT 1 FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST_3 b WHERE b.TM_EVENT_ID=a.TM_EVENT_ID
GROUP BY b.TM_EVENT_ID)
This is MySQL, right?
See the similar solutions here, which recommend removing the table alias.
Worth noting this is nothing to do with Pentaho: if you ran it in a SQL client you'd get the same error. If you don't, the difference is probably in the JDBC driver version; it may be worth checking that.
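For illustration, the alias-free form of this kind of delete runs fine; a small sketch against SQLite's stdlib driver (the table names are shortened stand-ins for the report tables in the question):

```python
import sqlite3

# Sketch: DELETE ... WHERE EXISTS without an alias on the deleted table.
# "report"/"report_3" are stand-ins for the PM_CONCERTS_* tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE report   (tm_event_id INTEGER);
    CREATE TABLE report_3 (tm_event_id INTEGER);
    INSERT INTO report   VALUES (1), (2), (3);
    INSERT INTO report_3 VALUES (2);
""")
conn.execute("""
    DELETE FROM report
    WHERE EXISTS (SELECT 1 FROM report_3 b
                  WHERE b.tm_event_id = report.tm_event_id)
""")
conn.commit()
```

The correlated subquery refers to the target table by its full name instead of an alias, which is the portable form.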
I can suggest these options:
- Don't use aliases; try the statement directly in MySQL and check whether it works for you.
- Don't use Pentaho like this: build a transformation and break the query apart into steps, with a table input and a lookup, then delete the rows by row id. It's a little longer, but a lot more understandable and easier to maintain.
"Don't over-optimize."
I have no experience with SQL queries or SQL databases, so please excuse me if my terminology is wrong.
I have a file containing around 17,000 SQL INSERT statements that enter data for 5 columns/attributes in a database. Of those 17,000 statements, only around 1,200 have data for all 5 columns, while the rest have data for only 4. I need to delete all the unwanted statements (the ones that don't have data for all 5 columns).
Is there a simple way to do this other than going through them one by one? If so, it would be great if someone could help me out with it.
A different approach from my fine colleagues here would be to run the file into a staging/disposable database. Use the delete that @Rob called out in his response to pare the table down to the desired dataset. Then use an excellent, free tool like SSMS Tools Pack to reverse-engineer those INSERT statements.
I can think of two approaches:
1: Using SQL: insert all the data, then run a query that removes any record that does not have all of the necessary data. If the table is not currently empty, keep track of the ID where your current data "ends" so that the query can use it in a WHERE clause.
DELETE FROM myTable WHERE a IS NULL OR b IS NULL /* etc. */
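As a quick illustration of approach 1, here is a sketch with the stdlib sqlite3 driver; the columns a through e are placeholders for the five real attributes:

```python
import sqlite3

# Sketch of approach 1: delete rows missing any required column.
# Columns a..e are placeholders for the five real attributes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (a, b, c, d, e)")
conn.execute("INSERT INTO myTable VALUES (1, 2, 3, 4, 5)")     # complete row
conn.execute("INSERT INTO myTable VALUES (1, 2, 3, 4, NULL)")  # incomplete row
conn.execute("""
    DELETE FROM myTable
    WHERE a IS NULL OR b IS NULL OR c IS NULL OR d IS NULL OR e IS NULL
""")
conn.commit()
```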
2: Process the SQL file with a regular expression: use a text editor or the command line to match either the "bad" records or the "good" records. Most text editors have a find-and-replace that accepts regular expressions, and on the command line you can use grep or other tools. Or even a script that parses the file in your language of choice, for that matter.
Open the file in Notepad++ and replace all the "bad" lines using regular expressions.
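A scripted version of the regex approach might look like this sketch (the exact statement shape in your file is an assumption, and the naive comma split would miscount values that themselves contain commas):

```python
import re

# Sketch: keep only INSERT lines whose VALUES list has all 5 entries.
# The statement shape is an assumption; the naive comma split would
# miscount values that themselves contain commas.
lines = [
    "INSERT INTO t VALUES ('a', 1, 2, 3, 4);",
    "INSERT INTO t VALUES ('b', 1, 2, 3);",   # only 4 values: drop it
]

def value_count(line):
    """Count the comma-separated entries inside VALUES (...)."""
    match = re.search(r"VALUES\s*\((.*)\)\s*;", line)
    return len(match.group(1).split(",")) if match else 0

good = [line for line in lines if value_count(line) == 5]
```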
I'm working in Ubuntu with MySQL, and I also have Query Browser and Administrator installed; I'm not afraid of the command line either, if it helps.
I simply want to be able to run a query, see a result set, and then convert that result set into a series of commands that could be used to create the same rows in a table with an identical schema.
I hope the question makes sense, it's quite a simple problem and one that must have been solved but I can't for the life of me work out where this kind of conversion is made available.
Thanks in advance,
Gav
I think you need the command-line utility mysqldump (http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html) if you want to dump one or more tables.
If you need to dump the result of an arbitrary query and restore it later, take a look at SELECT ... INTO OUTFILE and LOAD DATA INFILE (http://dev.mysql.com/doc/refman/5.0/en/load-data.html).
I do not know if I understood you correctly, but you can use a SELECT INTO statement:
SELECT *
INTO new_table_name
FROM old_tablename
WHERE ...
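One caveat: MySQL (which the asker is using) does not support SELECT ... INTO new_table; its equivalent is CREATE TABLE ... AS SELECT. A minimal sketch of that form, demonstrated here with Python's stdlib sqlite3 driver (the table names follow the answer above; the columns and data are assumptions):

```python
import sqlite3

# Sketch: CREATE TABLE ... AS SELECT, the MySQL/SQLite counterpart of
# SQL Server's SELECT ... INTO. Columns and sample data are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE old_tablename (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO old_tablename VALUES (?, ?)",
                 [(1, "a"), (2, "b")])
conn.execute("""
    CREATE TABLE new_table_name AS
    SELECT * FROM old_tablename WHERE id > 1
""")
conn.commit()
```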