I have a table that I am trying to update after making changes to the url column. Currently the data is seeded into the database when I run rake db:seed, but if I change the CSV, in this case the url, I want the table to reflect that change. At the moment the table does not update that value.
require 'csv'

datafile = Rails.root + 'db/data.csv'

CSV.foreach(datafile, headers: true) do |row|
  Data.find_or_create_by(address: row[0]) do |hr|
    hr.address = row[0]
    hr.city = row[1]
    hr.state = row[2]
    hr.zip = row[3]
    hr.name = row[4]
    hr.url = row[5]
  end
end
CSV.foreach(datafile, headers: true) do |row|
  Data.find_or_create_by(url: row[5]) do |hr|
    hr.url = row[5]
  end
end
I tried doing the find_or_create_by on just the url (row[5]), but those changes are not being reflected. How can I make this seed file update any changed or new entries from the CSV in the PostgreSQL database?
CSV.foreach(datafile, headers: true) do |row|
  # find_or_create_by only runs its block for records it creates, so it
  # never touches existing rows. Find-or-initialize instead, then always
  # assign and save, so changed values are picked up as well as new rows.
  hr = Data.find_or_initialize_by(address: row[0])
  hr.assign_attributes(city: row[1], state: row[2], zip: row[3],
                       name: row[4], url: row[5])
  hr.save
end
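The reason the original loops never pick up a changed url is that find_or_create_by only yields its block for records it creates. Find-or-initialize, then always assign, is the upsert shape you want. Here is that logic simulated without Rails, using an in-memory hash as a stand-in for the Data table (the sample CSV values are made up):

```ruby
require 'csv'

# In-memory stand-in for the Data table, keyed by address.
store = {}

csv_text = <<~CSV
  address,city,state,zip,name,url
  1 Main St,Springfield,IL,62701,HQ,http://old.example.com
CSV

upsert = lambda do |text|
  CSV.parse(text, headers: true) do |row|
    record = store[row['address']] ||= {}  # find_or_initialize_by(address: ...)
    record.merge!(row.to_h)                # assign_attributes + save
  end
end

upsert.call(csv_text)
# Re-seed after the url changed in the CSV; the existing row is updated.
upsert.call(csv_text.sub('http://old.example.com', 'http://new.example.com'))

puts store['1 Main St']['url']  # => http://new.example.com
```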
I made a Web app that takes in a text file, reads each line, takes the 11th character and saves it to SQLite3 db. How do I lock the database or have two or more separate tables while multiple requests are running?
I have tried adding 'ATOMIC_REQUESTS': True to the database settings in settings.py in Django, and I tried creating temporary tables for each request, but I can't figure it out. I am pretty new to Django 2.2.
My views.py:
def home(request):
    if request.method == 'GET':
        return render(request, 'home.html')
    if request.method == 'POST':
        form = DocumentForm(data=request.POST, files=request.FILES)
        print(form.errors)
        if form.is_valid():
            try:
                f = request.FILES['fileToUpload']
            except KeyError:
                print('\033[0;93m' + "No File uploaded, Redirecting" + '\033[0m')
                return HttpResponseRedirect('/tryagain')
            print('\033[32m' + "Success" + '\033[0m')
            print('Working...')
            line = f.readline()
            while line:
                # print(line)
                mst = message_typer.messages.create_message(str(line)[11])
                line = f.readline()
        else:
            print('\033[0;91m' + "Failed to validate Form" + '\033[0m')
        return HttpResponseRedirect('/output')
    return HttpResponse('Failure')
def output(request):
    s = message_typer.messages.filter(message='s').count()
    A = message_typer.messages.filter(message='A').count()
    d = message_typer.messages.filter(message='d').count()
    E = message_typer.messages.filter(message='E').count()
    X = message_typer.messages.filter(message='X').count()
    P = message_typer.messages.filter(message='P').count()
    r = message_typer.messages.filter(message='r').count()
    B = message_typer.messages.filter(message='B').count()
    H = message_typer.messages.filter(message='H').count()
    I = message_typer.messages.filter(message='I').count()
    J = message_typer.messages.filter(message='J').count()
    R = message_typer.messages.filter(message='R').count()
    message_types = {'s': s, 'A': A, 'd': d, 'E': E, 'X': X, 'P': P,
                     'r': r, 'B': B, 'H': H, 'I': I, 'J': J, 'R': R}
    output = {'output': message_types}
    # return HttpResponse('Received')
    message_typer.messages.all().delete()
    return render(request, 'output.html', output)
When the web page loads, it should display a simple breakdown of each character in the 11th position of the uploaded text file.
However, if two requests run concurrently, the first page that makes the request gets an OperationalError: database is locked.
Traceback to here:
message_typer.messages.all().delete()
The second page will sum the total of the two files that were uploaded.
I do want to wipe the table after so that the next user will have an empty table to populate and perform a count on.
Is there a better way?
I am having a hard time updating information in a database. Originally, when I try to save the changes, it gives me the error that the database is locked.
SQLite3::BusyException: cannot rollback transaction - SQL statements in progress: rollback transaction
Here is the daemon in question:
require 'daemons'
require File.expand_path(
  File.join(File.dirname(__FILE__), '..', 'config', 'environment'))

Daemons.run_proc('clock.rb') do
  daemon_log = ActiveSupport::BufferedLogger.new(
    File.join(Rails.root, "log", "clock.log"))
  Rails.logger = ActiveRecord::Base.logger = daemon_log
  Rails.logger.info("running clock.rb")
  Rails.logger.info("Information Retrieved")
  user_stats = UgloungeSkills.find(:all)
  loop do
    Rails.logger.info("running main loop")
    user_stats.each do |row|
      power_level = row['taming'] + row['mining'] + row['woodcutting'] +
                    row['repair'] + row['unarmed'] + row['herbalism'] +
                    row['excavation'] + row['archery'] + row['swords'] +
                    row['axes'] + row['acrobatics'] + row['fishing']
      user_name = UgloungeUser.find(row['user_id'])['user']
      mcmmo_id = row['user_id']
      if User.exists?(name: user_name)
        # Update the existing user
        single_user = User.find(mcmmo_id)
        Rails.logger.info("Just updating information")
        Rails.logger.info("User: " + single_user['name'])
        # Rails.logger.info("Values of power_level and mcmmo_id: " + power_level.to_s)
        single_user.name = "Poopyhead"
        single_user.save
        # single_user.update_attributes(power_level: power_level, mcmmo_id: mcmmo_id)
        Rails.logger.info("Finished updating")
      else
        # Create a new user
        Rails.logger.info("Creating new user")
        User.create(name: user_name, power_level: power_level, mcmmo_id: mcmmo_id)
      end
    end
    sleep(120)
    Rails.logger.info("Sleeping")
  end
end
I am only using the sqlite to store some information, the other database is a MySQL database which is only being read from. Any help is much appreciated.
This sounds like an issue I had with a project recently. SQLite was working great in development and beta testing, but under the strain of a production environment the whole thing broke down. I'm converting the project to a different architecture (Resque), performance is better so far, and I plan to deploy the new architecture next week. At a minimum, you should convert to MySQL to avoid these errors.
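Until you can migrate off SQLite, one common stopgap is to wrap each write in a short retry loop so a transient "database is locked" error doesn't kill the whole run. A generic sketch, with no real database involved; in real code you would rescue SQLite3::BusyException specifically, and the helper name and delays here are illustrative:

```ruby
# Retry a block a few times with a growing delay between attempts.
# In a Rails daemon you would rescue SQLite3::BusyException instead
# of StandardError; everything here is a stand-in for demonstration.
def with_retry(tries: 5, base_delay: 0.05)
  attempt = 0
  begin
    yield
  rescue StandardError
    attempt += 1
    raise if attempt >= tries
    sleep(base_delay * attempt)  # linear backoff before retrying the write
    retry
  end
end

# Simulate a write that fails twice with a lock error, then succeeds.
calls = 0
result = with_retry do
  calls += 1
  raise "database is locked" if calls < 3
  :saved
end

puts result  # => saved
puts calls   # => 3
```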
I have a Ruby script that extracts information from a file (GenBank) and I would like to load this data into the database. I have created the model and the schema, and a connection script:
require 'active_record'

def establish_connection(db_location = "protein.db.sqlite3")
  ActiveRecord::Base.establish_connection(
    :adapter  => "sqlite3",
    :database => db_location,
    :pool     => 5,
    :timeout  => 5000
  )
end
This is my script that outputs the data:
require 'rubygems'
require 'bio'
require 'snp_db_models'
establish_connection
snp_positions_file = File.open("snp_position.txt")
outfile = File.open("output.txt", "w")
genome_sequence = Bio::FlatFile.open(Bio::EMBL, "ref.embl").next_entry
snp_positions = Array.new
snp_positions_file.gets # header line
while line = snp_positions_file.gets
  snp_details = line.chomp.split("\t")
  snp_seq = snp_details[1]
  snp_positions << snp_details[1].to_i
end
mean_snp_per_base = snp_positions.size/genome_sequence.sequence_length.to_f
puts "Mean snps per base: #{mean_snp_per_base}"
#outfile = File.open("/Volumes/DataRAID/Projects/GAS/fastq_files/bowtie_results/snp_annotation/genes_with_higher_snps.tsv", "w")
outfile.puts("CDS start\tCDS end\tStrand\tGene\tLocus_tag\tnote\tsnp_ID\ttranslation_seq\tProduct\tNo_of_snps_per_gene\tsnp_rate_vs_mean")
genome_sequence.features do |feature|
  if feature.feature !~ /gene/i && feature.feature !~ /source/i
    start_pos = feature.locations.locations.first.from
    end_pos = feature.locations.locations.first.to
    # intersection counts how many snps fall within the CDS location
    number_of_snps_in_gene = (snp_positions & (start_pos..end_pos).to_a).size
    mean_snp_per_base_in_gene = number_of_snps_in_gene.to_f / (end_pos - start_pos)
    outfile.print "#{start_pos}\t"
    outfile.print "#{end_pos}\t"
    if feature.locations.locations.first.strand == 1
      outfile.print "forward\t"
    else
      outfile.print "reverse\t"
    end
    qualifiers = feature.to_hash
    ["gene", "locus_tag", "note", "snp_id", "translation", "product"].each do |qualifier|
      if qualifiers.has_key?(qualifier) # if there is gene and product in the file
        # puts "#{qualifier}: #{qualifiers[qualifier]}"
        outfile.print "#{qualifiers[qualifier].join(",")}\t"
      else
        outfile.print " \t"
      end
    end
    outfile.print "#{number_of_snps_in_gene}\t"
    outfile.print "%.2f" % (mean_snp_per_base_in_gene / mean_snp_per_base)
    outfile.puts
  end
end
outfile.close
How can I load the data in output.txt into the database? Do I have to do something like a Marshal dump?
Thanks in advance
Mark
You can write a rake task to do this. Save it in lib/tasks and give it a .rake extension.
desc "rake task to load data into db"
task :load_data_db => :environment do
  ...
end
Since the Rails environment is loaded, you can access your models directly, as you would in any Rails model or controller. It will connect to the database for whichever environment (RAILS_ENV) is in effect when you execute the rake task.
In a mere script, your models are unknown. You have to define a minimum to use them as if in a Rails app. Simply declare them:
class Foo < ActiveRecord::Base
end
Otherwise, in a Rails context, use rake tasks which are aware of the Rails app details.
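Inside such a rake task body, the tab-separated file your script writes can be parsed straight back into attribute hashes with the stdlib CSV class and a tab col_sep. A sketch with made-up sample data; the model call is left as a comment because it depends on your schema:

```ruby
require 'csv'

# Sample TSV in the same shape as the script's output file
# (header line, then tab-separated fields). Values are illustrative.
tsv = "CDS start\tCDS end\tStrand\tGene\n190\t255\tforward\tdnaA\n"

rows = CSV.parse(tsv, headers: true, col_sep: "\t").map(&:to_h)

rows.each do |attrs|
  # Inside the rake task you would create records here, e.g.:
  # Gene.create!(start_pos: attrs['CDS start'].to_i,
  #              end_pos:   attrs['CDS end'].to_i,
  #              name:      attrs['Gene'])
end

puts rows.first['Gene']  # => dnaA
```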
I am building a multi-step form wizard, following the steps provided by Ryan Bates. Creating a new record works, so I tried to use the same logic for editing a record. However, the values I change are not saved: when I edit something on the first form, go forward and then back, my edits are gone. Here is the code in my controller:
def edit
  session[:edit] = "Only change the fields you wish to edit"
  @demographic = Demographic.find(params[:id])
  session[:demographic_params] ||= {}
end

def update
  session[:demographic_params].deep_merge!(params[:demographic]) if params[:demographic]
  @demographic = Demographic.find(params[:id])
  @demographic.current_step = session[:demographic_step]
  if params[:back_button]
    @demographic.previous_step
  elsif @demographic.last_step?
    @demographic.update_attributes(params[:demographic])
    updated = true
  else
    @demographic.next_step
  end
  session[:demographic_step] = @demographic.current_step
  if not updated
    render "edit"
  else
    session[:demographic_params] = session[:demographic_step] = nil
    flash[:notice] = "Entry entered successfully"
    redirect_to demographic_path
  end
end
What should I change that allows for saving the edits?
I don't know if this will work, but I think it should be something like this, so the record is saved on every step change:
def update
  session[:demographic_params].deep_merge!(params[:demographic]) if params[:demographic]
  @demographic = Demographic.find(params[:id])
  @demographic.current_step = session[:demographic_step]
  @demographic.update_attributes(params[:demographic])
  if params[:back_button]
    @demographic.previous_step
  elsif @demographic.last_step?
    updated = true
  else
    @demographic.next_step
  end
  session[:demographic_step] = @demographic.current_step
  if not updated
    render "edit"
  else
    session[:demographic_params] = session[:demographic_step] = nil
    flash[:notice] = "Entry entered successfully"
    redirect_to demographic_path
  end
end
That is, move the @demographic.update_attributes call outside the step-by-step logic, so edits persist at each step rather than only on the last one.
I think going this way should solve your problem.
Hope this helps.
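For reference, the step helpers the controller relies on (next_step, previous_step, last_step?) from Ryan Bates' pattern boil down to walking an ordered list of step names. A minimal stand-in, with illustrative step names, to make the back/forward flow concrete:

```ruby
# Tiny model of the wizard's step machinery. The step names are
# illustrative; in the real app they come from the model's steps list.
class StepTracker
  STEPS = %w[personal contact confirm].freeze

  attr_accessor :current_step

  def initialize(step = nil)
    @current_step = step || STEPS.first
  end

  def next_step
    self.current_step = STEPS[STEPS.index(current_step) + 1] unless last_step?
  end

  def previous_step
    i = STEPS.index(current_step)
    self.current_step = STEPS[i - 1] if i > 0
  end

  def last_step?
    current_step == STEPS.last
  end
end

t = StepTracker.new
t.next_step
puts t.current_step  # => contact
t.previous_step
puts t.current_step  # => personal
```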
How can the header line of a CSV file be ignored in Ruby on Rails while parsing the CSV? Any ideas?
If you're using Ruby 1.8.x and FasterCSV, it has a :headers option:
csv = FasterCSV.parse(your_csv_file, { :headers => true }) # or false if you do want to read the header line as data
If you're using Ruby 1.9.x, the default CSV library is basically FasterCSV, so you can just do the following:
csv = CSV.parse(your_csv_file, { headers: true })
csv = CSV.read("file")
csv.shift # <-- kick out the first line
csv # <-- the results that you want
I have found the solution to the above question. Here is the way I have done it in Ruby 1.9.x:
csv_contents = CSV.parse(File.read(file))
csv_contents.slice!(0)
csv = ""
csv_contents.each do |content|
  csv << CSV.generate_line(content)
end
An easier way I have found is this:
file = CSV.open('./tmp/sample_file.csv', { :headers => true })
# => <#CSV io_type:File io_path:"./tmp/sample_file.csv" encoding:UTF-8 lineno:0 col_sep:"," row_sep:"\n" quote_char:"\"" headers:true>
file.each do |row|
  puts row
end
Here is the simplest approach that worked for me. You can read a CSV file and ignore its first line (the header / field names) by passing headers: true:
CSV.foreach(File.join(File.dirname(__FILE__), filepath), headers: true) do |row|
  puts row.inspect
end
You can do whatever you want with row. Don't forget headers: true.
To skip the header without the headers option (since that has the side-effect of returning CSV::Row rather than Array) while still processing a line at a time:
File.open(path, 'r') do |io|
  io.readline
  csv = CSV.new(io, headers: false)
  while row = csv.shift do
    # process row
  end
end
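For comparison, both approaches on the same in-memory data: headers: true yields CSV::Row objects addressable by header name, while dropping the first line by hand keeps plain arrays. The sample data is made up:

```ruby
require 'csv'

data = "name,age\nalice,30\nbob,25\n"

# With headers: true, each row is a CSV::Row indexed by header name.
with_headers = CSV.parse(data, headers: true).map { |row| row['name'] }

# Without headers, skip the first array by hand and index by position.
as_arrays = CSV.parse(data).drop(1).map { |cols| cols[0] }

puts with_headers.inspect  # => ["alice", "bob"]
puts as_arrays.inspect     # => ["alice", "bob"]
```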