How to INSERT a reference to UUID from another table in PostgreSQL?

I'm learning to use Sequelize with a PostgreSQL database. All of the following is happening in a dev environment, while manually inserting data into my tables to check whether things are set up correctly through Sequelize, investigate failing unit tests, etc.
I've made two tables with Sequelize models: User and Publication. Both tables generate UUIDv4 ids. I've associated User hasMany Publications, and Publication belongsTo User (see the extra info below).
In my psql shell, I've inserted the following record into my User table (rest of the data cut out for brevity):
| id                                   | firstName | lastName | ... |
|--------------------------------------|-----------|----------|-----|
| 8c878e6f-ee13-4a37-a208-7510c2638944 | Aiz       | ...      | ... |
Now I'm trying to insert a record into my Publication table while referencing my newly created user above. Here's what I entered into the shell:
INSERT INTO "Publications"("title", "fileLocation", ..., "userId")VALUES('How to Pasta', 'www.pasta.com', ..., 8c878e6f-ee13-4a37-a208-7510c2638944);
It fails and I receive the following error:
ERROR: syntax error at or near "c878e6f"
LINE 1: ...8c878e6f-ee...
(The caret points at the second character of the UUID in the LINE 1 reference, the 'c'.)
What's wrong here? Are we supposed to enter UUIDs differently when inserting them manually in psql? Do we paste the referenced UUID as a string? Is there a correct way I'm missing?
Some extra info if it helps:
From my models:
Publication.associate = function(models) {
  // associations can be defined here
  Publication.belongsTo(models.User, {
    foreignKey: "userId"
  });
};
and
User.associate = function(models) {
  // associations can be defined here
  User.hasMany(models.Publication, {
    foreignKey: "userId",
    as: "publications"
  });
};
Here's how I've defined userId in Publication:
userId: {
  type: DataTypes.UUID,
  references: {
    model: "User",
    key: "id",
    as: "userId"
  }
}
If it's worth anything, the (primaryKey) id on both models is type: DataTypes.UUID, defaultValue: DataTypes.UUIDV4 (I don't know if this is an issue).

Surround your UUID with single quotes (write it as a string) and Postgres will convert it to a uuid.
Starting and ending the string with {} is optional.
E.g.
INSERT INTO "Publications"("title", "fileLocation", ..., "userId")VALUES('How to Pasta', 'www.pasta.com', ..., '8c878e6f-ee13-4a37-a208-7510c2638944');
Or
INSERT INTO "Publications"("title", "fileLocation", ..., "userId")VALUES('How to Pasta', 'www.pasta.com', ..., '{8c878e6f-ee13-4a37-a208-7510c2638944}');
Source (I don't do pgsql much, so I cast around for another person who wrote some working pgsql. If this doesn't work out for you, let me know and I'll remove the answer): PostgreSQL 9.3: How to insert upper case UUID into table

Related

Prisma PostgreSQL queryRaw error code 42P01 table does not exist

I am trying to run a query that searches items in the Item table by how similar their title and description are to a value. The query is the following:
let items = await prisma.$queryRaw`SELECT * FROM item WHERE SIMILARITY(name, ${search}) > 0.4 OR SIMILARITY(description, ${search}) > 0.4;`
However when the code is run I receive the following error:
error - PrismaClientKnownRequestError:
Invalid `prisma.$queryRaw()` invocation:
Raw query failed. Code: `42P01`. Message: `table "item" does not exist`
code: 'P2010',
clientVersion: '4.3.1',
meta: { code: '42P01', message: 'table "item" does not exist' },
page: '/api/marketplace/search'
}
I have also run the following query:
let tables = await prisma.$queryRaw`SELECT * FROM pg_catalog.pg_tables;`
Which correctly shows that the Item table exists! Where is the error?
After doing some light research, it looks like you possibly need double quotes. Try
let items = await prisma.$queryRaw`SELECT * FROM "Item" ... blah blah
I say this because PostgreSQL folds table and column names to lowercase when they are not double-quoted. If you haven't built much of your db, it may be worthwhile to make all the tables and columns lowercase so that you won't have to keep adding double quotes and escaping characters.
References:
PostgreSQL support
Are PostgreSQL columns case sensitive?
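To see the folding behaviour in isolation, here's a minimal sketch you can run in any psql session (the table is hypothetical):
-- Unquoted identifiers are folded to lowercase, so item and "Item" name different tables:
CREATE TABLE "Item" (id serial PRIMARY KEY, name text);

SELECT * FROM item;    -- ERROR: relation "item" does not exist
SELECT * FROM "Item";  -- works: the quoted name matches exactly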

How To Query an array of JSONB

I have a table (orders) with a jsonb column named steps in a Postgres db.
I need to create a SQL query to select records where Step1, Step2, and Step3 all have success status:
[
{
"step_name"=>"Step1",
"status"=>"success",
"timestamp"=>1636120240
},
{
"step_name"=>"Step2",
"status"=>"success",
"timestamp"=>1636120275
},
{
"step_name"=>"Step3",
"status"=>"success",
"timestamp"=>1636120279
},
{
"step_name"=>"Step4",
"timestamp"=>1636120236
"status"=>"success"
}
]
table structure
id | name | steps (jsonb)
'Normalize' steps into a list of JSON items and check whether every one of them has "status":"success". BTW, your example is not valid JSON: all => need to be replaced with :, and a comma is missing.
select id, name
from orders
where (
  select bool_and(j->>'status' = 'success')
  from jsonb_array_elements(steps) j
  where j->>'step_name' in ('Step1','Step2','Step3') -- if not all steps but only these are needed
);
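One subtlety worth noting: bool_and over zero rows returns NULL, so an order whose steps array contains none of the three step names is simply filtered out (NULL in WHERE behaves like false) rather than matched. A standalone check against a hypothetical array:
-- No step matches the IN list, so bool_and aggregates zero rows and returns NULL:
SELECT bool_and(j->>'status' = 'success')
FROM jsonb_array_elements('[{"step_name":"Step9","status":"success"}]'::jsonb) j
WHERE j->>'step_name' IN ('Step1','Step2','Step3');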
You can use the JSON containment operator (@>) to check whether the condition holds:
Demo
select *
from test
where steps @> '[{"step_name":"Step1","status":"success"},{"step_name":"Step2","status":"success"},{"step_name":"Step3","status":"success"}]';
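For context, a self-contained demo (the table name and data are hypothetical): containment ignores extra keys such as timestamp, so the rows still match.
CREATE TABLE test (id int, name text, steps jsonb);

INSERT INTO test VALUES (1, 'order-1',
  '[{"step_name":"Step1","status":"success","timestamp":1636120240},
    {"step_name":"Step2","status":"success","timestamp":1636120275},
    {"step_name":"Step3","status":"success","timestamp":1636120279}]');

-- @> is true when the left jsonb contains the right jsonb:
SELECT * FROM test
WHERE steps @> '[{"step_name":"Step1","status":"success"},
                 {"step_name":"Step2","status":"success"},
                 {"step_name":"Step3","status":"success"}]';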

PostgreSQL import from CSV NULL values are text - Need null

I had exported a bunch of tables (>30) as CSV files from a MySQL database using phpMyAdmin. These CSV files contain NULL values like:
"id","sourceType","name","website","location"
"1","non-commercial","John Doe",NULL,"California"
I imported many such CSVs into a PostgreSQL database with TablePlus. However, the NULL values in the columns appear as text rather than null.
When my application fetches the data from these columns, it actually retrieves the text 'NULL' rather than a null value.
Also, SQL commands with IS NULL do not retrieve these rows, probably because they are identified as text rather than null values.
Is there a SQL command I can run to convert all text 'NULL' values in all the tables to actual NULL values? This would be the easiest way to avoid re-importing all the tables.
PostgreSQL's COPY command has a NULL 'some_string' option that lets you specify any string as the NULL value: https://www.postgresql.org/docs/current/sql-copy.html
This would of course require re-importing all your tables.
Example with your data:
The CSV:
"id","sourceType","name","website","location"
"1","non-commercial","John Doe",NULL,"California"
"2","non-commercial","John Doe",NULL,"California"
The table:
CREATE TABLE import_with_null (id integer, source_type varchar(50), name varchar(50), website varchar(50), location varchar(50));
The COPY statement:
COPY import_with_null (id, source_type, name, website, location) from '/tmp/import_with_NULL.csv' WITH (FORMAT CSV, NULL 'NULL', HEADER);
Test of the correct import of NULL strings as SQL NULL:
SELECT * FROM import_with_null WHERE website IS NULL;
id | source_type | name | website | location
----+----------------+----------+---------+------------
1 | non-commercial | John Doe | | California
2 | non-commercial | John Doe | | California
(2 rows)
The important part that transforms NULL strings into SQL NULL values is NULL 'NULL'; it could be any other value, e.g. NULL 'whatever string'.
UPDATE: For whoever comes here looking for a solution
See the answers below for two potential solutions.
One of them uses the SQL COPY method and must be performed before the import itself. That solution, provided by Michal T and marked as the accepted answer, is the better way to prevent this from happening in the first place.
My solution below uses a script in my application (built in Laravel/PHP), which can be run after the import is already done.
Note: see the comments in the code; you could build a similar solution in other languages/frameworks.
Thanks to @BjarniRagnarsson's suggestion in the comments above, I came up with a short PHP Laravel script that performs update queries on all columns of type 'string' or 'text' to replace the 'NULL' text with NULL values.
public function convertNULLStringToNULL()
{
    $tables = DB::connection()->getDoctrineSchemaManager()->listTableNames(); // Get list of all tables
    $results = []; // an array to store the output results
    foreach ($tables as $table) { // Loop through each table
        $columnNames = DB::getSchemaBuilder()->getColumnListing($table); // Get list of all columns
        $columnResults = []; // array to store the results per column
        foreach ($columnNames as $column) { // Loop through each column
            $columnType = DB::getSchemaBuilder()->getColumnType($table, $column); // Get the column type
            if (
                $columnType == 'string' || // check if column type is string or text
                $columnType == 'text'
            ) {
                $query = "update " . $table . " set \"" . $column . "\"=NULL where \"" . $column . "\"='NULL'"; // Build the update query as mentioned in comments above
                $r = DB::update($query); // perform the update query
                array_push($columnResults, [
                    $column => $r
                ]); // Push the column results
            }
        }
        array_push($results, [
            $table => $columnResults
        ]); // push the table results
    }
    dd($results); // Output the results
}
Note: I was using Laravel 8 for this.
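If you'd rather not go through application code, here is a pure-SQL sketch of the same idea using a PL/pgSQL DO block (it assumes all tables live in the public schema; run it on a copy of the data first):
DO $$
DECLARE
  rec record;
BEGIN
  -- Loop over every text-like column in the public schema...
  FOR rec IN
    SELECT table_name, column_name
    FROM information_schema.columns
    WHERE table_schema = 'public'
      AND data_type IN ('character varying', 'text')
  LOOP
    -- ...and turn the literal string 'NULL' into a real NULL.
    EXECUTE format(
      'UPDATE %I SET %I = NULL WHERE %I = %L',
      rec.table_name, rec.column_name, rec.column_name, 'NULL'
    );
  END LOOP;
END $$;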

ActiveRecord: List columns in table from console

I know that you can ask ActiveRecord to list tables in console using:
ActiveRecord::Base.connection.tables
Is there a command that would list the columns in a given table?
This will list the column_names from a table
Model.column_names
e.g. User.column_names
This gets the columns, not just the column names, and uses ActiveRecord::Base.connection, so no models are necessary. Handy for quickly outputting the structure of a db.
ActiveRecord::Base.connection.tables.each do |table_name|
  puts table_name
  ActiveRecord::Base.connection.columns(table_name).each do |c|
    puts "- #{c.name}: #{c.type} #{c.limit}"
  end
end
Sample output: http://screencast.com/t/EsNlvJEqM
Using Rails 3 you can just type the model name:
> User
gives:
User(id: integer, name: string, email: string, etc...)
In Rails 4, you need to establish a connection first:
irb(main):001:0> User
=> User (call 'User.connection' to establish a connection)
irb(main):002:0> User.connection; nil #call nil to stop repl spitting out the connection object (long)
=> nil
irb(main):003:0> User
User(id: integer, name: string, email: string, etc...)
If you are comfortable with SQL commands, you can enter your app's folder and run rails db, which is short for rails dbconsole. It will open your database's shell, whether it is sqlite or mysql.
Then you can query the table columns with a SQL command like:
pragma table_info(your_table);
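Note that pragma table_info is SQLite-specific. If the app runs on PostgreSQL instead, a rough equivalent inside the psql shell that rails db opens would be:
\d your_table
-- or, in standard SQL:
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'your_table';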
Complementing this useful information, for example using rails console or rails dbconsole:
Student is my Model, using rails console:
$ rails console
> Student.column_names
=> ["id", "name", "surname", "created_at", "updated_at"]
> Student
=> Student(id: integer, name: string, surname: string, created_at: datetime, updated_at: datetime)
Another option, using SQLite through Rails:
$ rails dbconsole
sqlite> .help
sqlite> .table
ar_internal_metadata relatives schools
relationships schema_migrations students
sqlite> .schema students
CREATE TABLE "students" ("id" integer PRIMARY KEY AUTOINCREMENT NOT NULL, "name" varchar, "surname" varchar, "created_at" datetime NOT NULL, "updated_at" datetime NOT NULL);
Finally, for more information:
sqlite> .help
Hope this helps!
You can run rails dbconsole in your command line tool to open the sqlite console. Then type in .tables to list all the tables and .fullschema to get a list of all tables with column names and types.
To list the columns in a table I usually go with this:
Model.column_names.sort.
i.e. Orders.column_names.sort
Sorting the column names makes it easy to find what you are looking for.
For more information on each of the columns use this:
Model.columns.map{|column| [column.name, column.sql_type]}.to_h.
This will provide a nice hash.
for example:
{
  "id" => "int(4)",
  "created_at" => "datetime"
}
For a more compact format and less typing, just:
Portfolio.column_types
I am using rails 6.1 and have built a simple rake task for this.
You can invoke this from the cli using rails db:list[users] if you want a simple output with field names. If you want all the details then do rails db:list[users,1].
I based this on this question about passing command-line arguments to rake tasks: How to pass command line arguments to a rake task. I also built on @aaron-henderson's answer above.
# run like `rails db:list[users]`, `rails db:list[users,1]`, `RAILS_ENV=development rails db:list[users]` etc
namespace :db do
  desc "list fields/details on a model"
  task :list, [:model, :details] => [:environment] do |task, args|
    model = args[:model]
    if !args[:details].present?
      model.camelize.constantize.column_names.each do |column_name|
        puts column_name
      end
    else
      ActiveRecord::Base.connection.tables.each do |table_name|
        next if table_name != model.underscore.pluralize
        ActiveRecord::Base.connection.columns(table_name).each do |c|
          puts "Name: #{c.name} | Type: #{c.type} | Default: #{c.default} | Limit: #{c.limit} | Precision: #{c.precision} | Scale: #{c.scale} | Nullable: #{c.null} "
        end
      end
    end
  end
end

Effect of MongoDB _id generation on indexing

I am using MongoDB as a database.
I am going to generate an _id for each document. For that I use the userId and the folderId for that user:
userId is different for each user, and each user has different folderIds.
I generate _id as:
userId = "user1"
folderId = "Folder1"
_id = userId + folderId
Is there any effect of this _id generation on MongoDB indexing?
Will it work as fast as the _id generated by MongoDB?
A much better solution would be to leave the _id field as it is and have separate userId and folderId fields in your document, or create a separate field with them both combined.
As for whether it will be "as fast": that depends on your query, but for ordering by a document's creation date, for example, you'd lose the ability to simply order by _id; you'd also lose the benefits for sharding and distribution.
However if you want to use both those ID's for your _id there is one other option ...
You can actually use both but leave them separate ... for example this is a valid _id:
> var doc = { "_id" : { "userID" : 12345, "folderID" : 5152 },
              "field1" : "test", "field2" : "foo" };
> db.crazy.save(doc);
> db.crazy.findOne();
{
  "_id" : {
    "userID" : 12345,
    "folderID" : 5152
  },
  "field1" : "test",
  "field2" : "foo"
}
>
It should be fine; the one foreseeable issue is that you'll lose the ability to reverse out the date/timestamp from the Mongo ObjectId. Why not just add another ID object within the document? You're only losing a few bytes, and you're not screwing with the built-in indexing system.