I exported tables and queries from MySQL as CSV files.
The Ruby (1.9+) way to read CSV appears to be:
require 'csv'
CSV.foreach("exported_mysql_table.csv", :headers => true) do |row|
  puts row
end
This works great if your data is like this:
"name","email","potato"
"Bob","bob#bob.bob","omnomnom"
"Charlie","char#char.com","andcheese"
"Doug","diggyd#diglet.com","usemeltattack"
That works fine (the first line is the header row, naming the attributes). However, if the data is like this:
"id","name","email","potato"
1,"Bob","bob#bob.bob","omnomnom"
2,"Charlie","char#char.com","andcheese"
4,"Doug","diggyd#diglet.com","usemeltattack"
Then we get the error:
.rbenv/versions/1.9.3-p194/lib/ruby/1.9.1/csv.rb:1894:in `block (2 levels) in shift': Missing or stray quote in line 2 (CSV::MalformedCSVError)
I think this is because the id is stored as a number, not a string, so it has no quotes, and the CSV parser expects all entries to be quoted. Ideally I'd like to read "Bob" as a string and 1 as a number (and stuff everything into a hash of hashes).
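Something like this is what I'm after (a sketch, not tested against the failing file — I believe the stdlib's :converters => :numeric option handles the string-vs-number part once the file parses at all):

require 'csv'

# Build a hash of hashes keyed by id, coercing unquoted numerics to numbers.
people = {}
CSV.foreach("exported_mysql_table.csv", :headers => true, :converters => :numeric) do |row|
  people[row["id"]] = row.to_hash   # e.g. 1 => {"id"=>1, "name"=>"Bob", ...}
end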
(I have tried FasterCSV; that gem became the standard csv library as of Ruby 1.9.)
EDIT:
It was pointed out that the example above actually works fine; I was looking in the wrong place. The error came from multi-line fields, so the question has moved to Ruby CSV read multiline fields.
Using the input you provided, I am unable to reproduce this.
1.9.3p194 :001 > require 'csv'
=> true
1.9.3p194 :002 > CSV.foreach("test.txt", {:headers => true}) { |row| puts row }
1,Bob,bob#bob.bob,omnomnom
2,Charlie,char#char.com,andcheese
4,Doug,diggyd#diglet.com,usemeltattack
=> nil
The only difference I see between our environments is that you are using rbenv, and I am using RVM. I also verified this on another machine I have with ruby 1.9.3-p194. Does the input you provided exactly match what is in your csv?
I have a Robot Framework script which inserts some SQL statements from a SQL file; some of these statements contain UTF-8 characters. If I insert the file manually into the database using the Navicat tool, everything's fine. But when I try to execute the file using the Database Library of Robot Framework, the UTF-8 characters get garbled!
This is my SQL statement containing UTF-8 characters:
INSERT INTO "MY_TABLE" VALUES (2, 'تست1');
This is how I use the Database Library:
Connect To Database Using Custom Params cx_Oracle ${dbConnection}
Execute Sql Script ${sqlFile}
Disconnect From Database
This is what I get in the database:
������������ 1
I have tried executing the SQL file with cx_Oracle directly and it still fails, so the problem seems to be in the underlying library. This is what I used to import the SQL file:
import cx_Oracle

if __name__ == "__main__":
    # Connection details (ip, port, sid, username, password, sql_file_addr)
    # are defined elsewhere in the test setup.
    dsn_tns = cx_Oracle.makedsn(ip, port, sid)
    db = cx_Oracle.connect(username, password, dsn_tns)
    sql_commands = open(sql_file_addr, 'r').read().split(";")
    cr = db.cursor()
    for command in sql_commands:
        if command.strip():  # skip empty/whitespace-only fragments
            print "Executing SQL command:", command
            cr.execute(command)
    db.commit()
I have found that I can define the character set in the connection string. I did this for a MySQL database and the framework successfully inserted UTF-8 characters; this is my connection string for MySQL:
database='db_name', user='db_username', password='db_password', host='db_ip', port=3306, charset='utf8'
But I don't know how to define the character set in an Oracle connection string. I have tried this:
'db_username','db_password','db_ip:1521/db_sid','utf8'
And I've got this error:
TypeError: an integer is required
As @Yu Zhang suggested, I read the discussion in this link and found out that I should set the environment variable NLS_LANG in order to get a UTF-8 connection to the database. So I added the line below to my test setup:
os.environ["NLS_LANG"] = "AMERICAN_AMERICA.AL32UTF8"
Would any of the links below help?
http://docs.oracle.com/cd/B19306_01/server.102/b14225/ch6unicode.htm#i1006779
http://www.theserverside.com/news/thread.tss?thread_id=39575
https://community.oracle.com/thread/502949
There can be several problems here...
The first problem might be that you are not saving the test files with UTF-8 encoding.
Robot Framework expects plain-text test files to be saved as UTF-8, yet most text editors do not default to it.
Verify that your editor saves the file that way - for example, open it in Notepad++ and check Encoding -> UTF-8.
Another problem might be the connection to the Oracle database: it doesn't seem like you can configure custom connection properties to explicitly request UTF-8.
This means you probably need to make sure the database schema itself uses UTF-8.
OK, I should feel ashamed of this, but I'm unable to understand how awk works...
A few days ago I posted this question, which asks how to replace fields in file A using file B as a reference (both files have matching IDs).
But after accepting the answer as correct (thanks, Ed!) I'm struggling with how to do it using the following pattern:
File A
{"test_ref":32132112321,"test_id":12345,"test_name":"","test_comm":"test", "null_test": "true"}
{"test_ref":32133321321,"test_id":12346,"test_name":"","test_comm":"test", "test_type": "alfa"}
{"test_ref":32132331321,"test_id":12347,"test_name":"","test_comm":"test", "test_val": 1923}
File B
{"test_id": 12345, "test_name": "Test values for null"}
{"test_id": 12346, "test_name": "alfa tests initiated"}
{"test_id": 12347, "test_name": "discard values"}
Expected result:
{"test_ref":32132112321,"test_id":12345,"test_name":"Test values for null","test_comm":"test", "null_test": "true"}
{"test_ref":32133321321,"test_id":12346,"test_name":"alfa tests initiated","test_comm":"test", "test_type": "alfa"}
{"test_ref":32132331321,"test_id":12347,"test_name":"discard values","test_comm":"test", "test_val": 1923}
I tried some variations on the original solution but without success. So, based on the question posted before, how could I achieve the same result with this new pattern?
PS: One important note: the lines in file A don't always have the same length.
Big thanks in advance.
EDIT:
After trying the solution posted by Wintermute, it seems it doesn't work with lines like:
{"test_ref":32132112321,"test_id":12345,"test_name":"","test_comm":"test", "null_test": "true","modifiers":[{"type":3,"value":31}{"type":4,"value":33}]}
The error received:
error: parse error: Expected separator between values at line xxx, column xxx
Parsing JSON with awk or sed is not a good idea for the same reasons that it's not a good idea to parse XML with them: sed works based on lines, and JSON is not line-based. awk works on vaguely tabular data, and JSON is not vaguely tabular. People don't expect their JSON tools to break when they insert newlines in benign places.
Instead, consider using a tool geared towards JSON processing, such as jq. In this particular case, you could use
jq -c -s 'group_by(.test_id) | map(.[0] + .[1]) | .[]' a.json b.json > c.json
Here jq slurps (-s) the input files into an array of JSON objects, groups these by test_id, merges them and unpacks the array. -c means compact output format, so each JSON object in the result ends up on a single line in the output.
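Regarding the edit: that modifiers line fails because [{"type":3,"value":31}{"type":4,"value":33}] is not valid JSON — the two array elements are missing a comma between them, and that is exactly what jq's Expected separator between values error is pointing at. Fix whatever produces file A and the command above should cope.

If jq is not an option, the same merge can be sketched in a few lines of Ruby (assuming the file names a.json and b.json and one JSON object per line, as above):

require 'json'

# Index file B by test_id.
b_by_id = {}
File.foreach('b.json') do |line|
  obj = JSON.parse(line)
  b_by_id[obj['test_id']] = obj
end

# Merge each record of file A with its matching B record; values from B win,
# so the empty test_name is replaced and key order is preserved.
File.foreach('a.json') do |line|
  row = JSON.parse(line)
  puts row.merge(b_by_id[row['test_id']] || {}).to_json
end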
I would like to import a CSV file into my database. I am using Ruby 1.8.7 and Rails 3.2.13 and the gem csv_importer.
I want to fetch productname and release_date from the CSV file.
In my controller:

def import
  csv_text = File.read('test.csv')
  csv = CSV.parse(csv_text, :headers => true)
  csv.each do |row|
    puts row                 # prints the entire contents of the CSV
    puts row['productname']  # raises the error
  end
end
If I print row (or row[0]), I get the entire contents of the CSV file in my log:
productname,release_date
xxx,yyy
If I print row['productname'], I get the error can't convert String into Integer.
How can I rectify this error?
It looks like you are actually expecting the FasterCSV API, which does support a signature like parse(csv_text, :headers => true).
In Ruby version 1.9 the CSV library in stdlib was replaced with the FasterCSV library. Your code looks like it might work straight up in Ruby 1.9+.
If you want to use the FasterCSV style API without upgrading to a newer Ruby, you can grab the gem and use it instead of CSV:
require 'fastercsv'

csv_text = File.read('test.csv')
csv = FasterCSV.parse(csv_text, :headers => true)
csv.each do |row|
  puts row['productname']
end
From http://ruby-doc.org/stdlib-1.8.7/libdoc/csv/rdoc/CSV.html#method-c-parse:
CSV.parse(str_or_readable, fs = nil, rs = nil, &block)
Parse lines from given string or stream. Return rows as an Array of Arrays.
... so row in your case is an Array. Array#[] takes an Integer as its argument, not a String, which is why you're getting the error.
In other words: row['anything'] cannot work, but row[0] and row[1] will give you the values from columns 1 and 2 of the row.
Now, in your case, you are actually calling CSV.parse like so:
CSV.parse(csv_text, :headers => true)
Looking at the docs, we see that the second argument to CSV.parse is the field separator. You're passing :headers => true as a field separator. That tells CSV to split each row whenever it encounters the string "headerstrue" - it doesn't, so it doesn't split each row.
If you remove the second argument to CSV.parse you should be closer to what you expect.
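If you'd rather stay on the CSV library that ships with 1.8.7, you can also handle the header row by hand. A minimal sketch, assuming the test.csv from the question:

require 'csv'

rows = CSV.parse(File.read('test.csv'))  # an Array of Arrays on 1.8.7
headers = rows.shift                     # ["productname", "release_date"]

rows.each do |row|
  record = Hash[*headers.zip(row).flatten]  # {"productname"=>"xxx", ...}
  puts record['productname']
end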
I've searched all over this forum for a solution to the above problem, but everything I tried didn't work. Basically, I have a model Library, with a corresponding libraries table in my SQLite3 database. I have a CSV file named libraries.csv which contains all the data I want to import into the database.
I tried the method in the second answer on this page, but it's still not working. I made sure to create my rake file import_libraries.rake in the lib/tasks folder, and I also saved the libraries.csv file in that folder, but I keep getting this error message:
rake aborted!
Don't know how to build task 'import_libraries'
(See full trace by running task with --trace)
This is the current code I'm using:
require 'csv'

desc "Imports a CSV file into an ActiveRecord table"
task :import, [:filename] => :environment do
  CSV.foreach('libraries.csv', :headers => true) do |row|
    Library.create!(row.to_hash)
  end
end
But when I run bundle exec rake import_libraries, I get the error message above.
Is there anything I am doing wrong? I would appreciate your help. Thanks!
EDIT
I renamed the rake file from import_libraries.rake to just import.rake. On running bundle exec rake import, the error message I now get is:
rake aborted!
invalid byte sequence in UTF-8
C:/Users/username/rails_app_name/lib/tasks/import.rake:4:in `block in '
Tasks: TOP => import
(See full trace by running task with --trace)
Based on the error you're getting and the task you have defined, you should be calling:
bundle exec rake import #=> you're currently calling import_libraries which indeed doesn't exist
With rake you call a task based on the name you give to the task, not on the name of the file (remember you can have many tasks inside each of those rake files).
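As an aside, a quick way to check which task names rake actually knows about is rake's standard -T flag, which lists every task that has a desc:

bundle exec rake -T

Your :import task should show up in that list alongside its "Imports a CSV file into an ActiveRecord table" description.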
I finally solved the problem by using this code:
namespace :csv do
  desc "Import CSV Data"
  task :import => :environment do
    require 'csv'

    csv_file_path = 'lib/tasks/libraries.csv'
    CSV.foreach(csv_file_path, headers: true) do |row|
      row = Library.create!({
        :column_name1 => row[0],
        :column_name2 => row[1],
        :column_name3 => row[2],
        .
        .
        :last_column => row[6]
      })
      puts "Success!"
    end
  end
end
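One note on the earlier invalid byte sequence in UTF-8 error: that usually means the CSV file itself is not valid UTF-8 (common with exports produced on Windows). If it ever comes back, CSV.foreach accepts an :encoding option that transcodes while reading. A sketch, assuming the file is really Latin-1 (adjust the source encoding to whatever your export actually uses):

CSV.foreach(csv_file_path, headers: true, encoding: 'ISO-8859-1:UTF-8') do |row|
  # row's fields arrive transcoded from Latin-1 to UTF-8
end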
I have a problem with JSON encoding in my Rails application:
h = {:status=>200, :promotions=>[{:id=>719788, :title=>"test"}]}
and result of
puts h.to_json
is
{"status":200,"promotions":{"{\"id\"=>719788, \"title\"=>\"test\"}":null}}
Which is not the expected result!
This is the correct result:
{"promotions":[{"title":"test","id":719788}],"status":200}
What could generate such error in JSON generation?
ruby -v
ruby 1.9.3p194 (2012-04-20) [x86_64-linux]
rails -v
Rails 3.1.4
gem list ==> json (1.6.6, 1.5.4)
OK, this has nothing to do with the configuration of Rails or Ruby...
One of the engineers had added this to our core extensions for Array:
def to_hash # Recursively convert array to hash
  inject({}) do |hash, (key, value)|
    value = value.to_hash if value.kind_of?(Array)
    hash.merge!({key => value})
  end
end
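The mangled output follows directly from that block signature: when inject iterates over an array of hashes, the destructuring (key, value) assigns the whole element hash to key and nil to value, so to_hash turns the promotions array into { {...} => nil }. The JSON encoder then has to stringify that hash-as-a-key and pairs it with null. A quick console check of the same inject:

[{:id => 719788, :title => "test"}].inject({}) do |hash, (key, value)|
  hash.merge!(key => value)  # key is the whole element hash, value is nil
end
#=> {{:id=>719788, :title=>"test"}=>nil}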
I guess I can delete this question tomorrow